With all the exciting things happening lately, I almost forgot to post this.

If you are into high-performance computing, cloud gaming, or GPUs, have a quick look at our Immersion Cooling Concept Design for 64 Intel Xeon Phi coprocessors and 8 HPC mainboards in the space of a suitcase. After the design was shown at the 3M booth at the HPCC-USA Supercomputer Conference in Rhode Island, we've been getting a couple of questions. Especially after AMD announced its Radeon Sky Series GPUs for the cloud at last month's Game Developers Conference (GDC) and NVIDIA showed off its GRID systems at the GPU Technology Conference (GTC), people started to connect the dots.

We have also received word from Intel that it is actually possible to boot the Intel Xeon Phi coprocessor without a host system, although this is currently reserved for future products or for folks building their own HPC systems and supercomputers. We believe GPU makers will soon follow suit, as putting 8 bulky GPUs in a server and cooling them with fans and air conditioning is just not the most elegant or economical thing to do. Now imagine filling a tank with hundreds of power-dense GPUs or Xeon Phi coprocessors, directly connected to a multidimensional high-speed network for extreme bandwidth. Welcome to immersion cooling land.

AMD and NVIDIA would really do us all a big favor if they started selling their products without heatsinks (as some gamers pointed out after last month's GDC, GPUs have grown larger than the rest of the computer). Intel is already shipping the Xeon Phi coprocessor without a heatsink upon request to OEMs (folks like us who cook up their own thermal solution), so we are moving in the right direction!

As we've pointed out earlier, this whole Intel Xeon Phi coprocessor concept design was built with real off-the-shelf hardware (Intel HPC mainboards, server power supplies, etc.) and works just as well with NVIDIA Tesla Kepler GPUs as with Phi or other PCIe cards. And if you decide to move on to the next generation, you can easily swap them without needing new infrastructure or enclosures. Out with the old, in with the new.

We are just beginning to see the real advantages of immersion cooling as all these power-hungry GPUs and CPUs get installed in clusters.

Here are a couple of quick facts: