With all the exciting things happening lately, I almost forgot to post this.
If you are into high performance computing or cloud gaming and GPUs, have a quick look at our Immersion Cooling Concept Design for 64 Intel Xeon Phi coprocessors and 8 HPC mainboards in the space of a suitcase. After the design was shown at the 3M booth at the HPCC-USA Supercomputer Conference in Rhode Island, we’ve been getting a couple of questions. Especially after AMD announced its Radeon Sky Series GPU for the cloud at the Game Developers Conference (GDC) last month, and NVIDIA showed off their GRID systems at the GPU Technology Conference (GTC), people started to connect the dots.
We have also received word from Intel that it is actually possible to boot the Intel Xeon Phi coprocessor without a host system, although this is currently reserved for future products or for folks building their own HPC systems and supercomputers. We believe GPU makers will soon follow suit, as putting 8 bulky GPUs in a server and cooling them with fans and air conditioning is just not the most elegant or economical thing to do. Now imagine filling a tank with hundreds of power-dense GPUs or Xeon Phi coprocessors, directly connected to a multidimensional high-speed network for extreme bandwidth. Welcome to immersion cooling land.
AMD and NVIDIA would really do us all a big favor if they’d start selling their products without heatsinks (as some gamers pointed out after last month’s GDC, GPUs have grown larger than the rest of the computer). Intel is already shipping the Xeon Phi coprocessor without heatsinks to OEMs upon request (guys like us who cook up their own thermal solution), so we are going in the right direction!
As we’ve pointed out earlier, this whole Intel Xeon Phi coprocessor concept design was built with real off-the-shelf hardware (Intel HPC mainboards, server power supplies, etc.) and works just as well with NVIDIA Tesla Kepler GPUs as with Phi or other PCIe cards. And if you decide to move on to the next generation, you can easily swap them without needing new infrastructure or enclosures. Out with the old, in with the new.
We are just beginning to see the real advantages as all these power-hungry GPUs and CPUs are installed in clusters.
Here are a couple of quick facts:
- Immersion cooling enclosures can be built “around” any high-density electronics
- Including Intel Xeon Phi coprocessors, NVIDIA Tesla Kepler, NVIDIA GRID, or other GPUs
- Two-phase immersion cooling is way ahead of air, water, or oil cooling
(or any non-phase-change cooling, for that matter)
- It can save you over 90% on your cooling electricity bill and cut out unnecessary infrastructure investments
- The design is very elegant and future-proof
(new-generation hardware doesn’t require a redesign of the cooling system)
- 3M Novec Engineered Fluids are odorless and not messy or oily; boards come out dry
(we get these questions all the time)
- The fluids are environmentally friendly, non-toxic, and non-flammable
- Available with various boiling points such as 34°C, 49°C, and 61°C
(select for optimal hardware performance and optimized energy savings)
- Our designs range from a 3U vertical enclosure (aka the 3U suitcase) to oversize racks to complete custom solutions
- A lot more space could be saved if these cards weren’t designed for air cooling (e.g. bulky connectors)
- Remove all unnecessary parts from your hardware during the design stage and save extra time and money
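To see where a "90% savings" figure like the one above can come from, here is a minimal back-of-the-envelope sketch. The PUE (power usage effectiveness) numbers are illustrative assumptions, not measurements from our system: PUE is total facility energy divided by IT energy, so everything above 1.0 is cooling and power-delivery overhead.

```python
# Back-of-the-envelope cooling-overhead comparison.
# Both PUE values below are assumptions for illustration only.

AIR_COOLED_PUE = 1.8   # assumed typical air-cooled data center
IMMERSION_PUE = 1.05   # assumed two-phase immersion system

def cooling_overhead_savings(pue_old: float, pue_new: float) -> float:
    """Fraction of overhead energy saved when moving between systems.

    (PUE - 1) is the cooling/infrastructure overhead spent per unit
    of IT load, so we compare those overheads directly.
    """
    return 1.0 - (pue_new - 1.0) / (pue_old - 1.0)

savings = cooling_overhead_savings(AIR_COOLED_PUE, IMMERSION_PUE)
print(f"Overhead energy saved: {savings:.0%}")  # ~94% with these assumed PUEs
```

With these assumed numbers, the overhead drops from 0.8 W per watt of IT load to 0.05 W, roughly a 94% reduction; your actual savings depend on your baseline facility.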
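The boiling-point selection mentioned in the list can be sketched as a simple rule of thumb: pick the highest listed boiling point that still leaves thermal headroom below the hardware's case temperature limit, so the fluid boils (and thus cools hardest) right where the chips run. The case limit and margin values here are hypothetical examples, not specs for any particular card.

```python
# Hypothetical fluid-selection sketch. The boiling points are the ones
# mentioned in the article; the 75°C case limit and 10°C margin are
# made-up example numbers, not real hardware specifications.

BOILING_POINTS_C = [34, 49, 61]  # available Novec fluid boiling points

def pick_boiling_point(max_case_temp_c: float, margin_c: float = 10.0) -> int:
    """Highest boiling point leaving `margin_c` of headroom below the
    hardware's maximum case temperature."""
    candidates = [bp for bp in BOILING_POINTS_C
                  if bp <= max_case_temp_c - margin_c]
    if not candidates:
        raise ValueError("no listed fluid runs cool enough for this hardware")
    return max(candidates)

# Example: a coprocessor with an assumed 75°C case limit
print(pick_boiling_point(75.0))  # -> 61
```

A higher boiling point also means the condenser can run warmer, which is where the energy savings come from: warmer condenser water is cheaper (or free) to produce.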