I can build a way faster computer than that in a desktop machine - apparently these guys have never heard of either Tesla GPU processors or server boards - and AMD? Come on!
Good desktop superPC:
- Thermaltake Mozart TX (has secondary system slot)
Primary system:
- 2x Intel Xeon x86-64 (Penryn, or hold out for Nehalem)
- A server board in ATX form factor with 2 PCIe x16 slots
- 16GB of fast memory
- 1x Internal Tesla C870
- 1x External Tesla D870 (modified to fit internally if required)
- One or two (depending on available space) 16GB RAM drives on the PCI bus, one of which holds the OS.
Secondary System:
- JetWay J9F2-EXTREME-LF Socket M Intel 945GM HDMI Mini-ITX motherboard
- 4GB of fast memory
- A 2.6 GHz mobile Core 2 Duo
- A server-class RAID controller on the PCI bus
- A solid-state drive for the OS
- A few 15,000 RPM server drives
Still fitting in a desktop case, this has seriously more beef than the mentioned system:
1) It features Tesla GPU processors. Teslas are GPUs that are DESIGNED to be used in HPC, and they are more powerful by a lot. The main system features three of them, as does the previously mentioned system, though two are admittedly chained on one PCIe x16 slot. That massive power only pays off with software designed to keep data on the cards and avoid bus transfers (the bus is a problem anyway, at least for current chips still on a front-side bus); see the first sketch after this list.
2) It features two main CPUs. While GPUs are indeed cool, CPUs are quite cool as well, and many problems are better handled by them. Being able to handle both types of problems well means that this machine will generally have significant performance gains over the mentioned system.
3) The use of RAM drives. RAM drives are a serious pain, but there is really no alternative to RAM as a storage medium when it comes to speed, and a RAM drive will run an OS and swap file way faster than anything else; the second sketch after this list is a quick way to measure the difference.
4) A second system powering I/O - While the second system cannot be particularly powerful (nobody makes good Mini-ITX boards), it can easily handle I/O duties, and maybe a bit of extra computing. Chained to it are some serious 15,000 RPM server drives (SSDs are just not that compelling if the software is written right), running on a server-class RAID controller. The second system's own OS would live on an SSD, since that particular application would actually gain something from it.
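To make point 1 concrete, here is a minimal sketch of the "keep the data on the card" idea: cross the PCIe bus once on the way in, do all the work in device memory, and cross it once on the way out. It assumes a current CUDA toolkit with cuBLAS is installed (the C870/D870 are CUDA devices, though their era's API differed in detail); the matrix size, iteration count, and build line are made up for illustration.

```cpp
// Sketch only: keep the working set resident on the Tesla card so the PCIe
// bus is crossed once going in and once coming out, not per operation.
// Assumed build line: g++ resident.cpp -I/usr/local/cuda/include \
//                         -L/usr/local/cuda/lib64 -lcublas -lcudart
#include <cuda_runtime.h>
#include <cublas_v2.h>
#include <vector>
#include <cstdio>

int main() {
    const int n = 2048;                              // matrix dimension (arbitrary)
    const size_t bytes = size_t(n) * n * sizeof(float);

    std::vector<float> hostA(size_t(n) * n, 1.0f), hostC(size_t(n) * n, 0.0f);

    float *dA = 0, *dC = 0;
    cudaMalloc((void **)&dA, bytes);
    cudaMalloc((void **)&dC, bytes);

    // One upload across the bus...
    cudaMemcpy(dA, hostA.data(), bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dC, hostC.data(), bytes, cudaMemcpyHostToDevice);

    cublasHandle_t handle;
    cublasCreate(&handle);

    // ...then many device-side operations with no host round trips.
    const float alpha = 1.0f, beta = 0.0f;
    for (int iter = 0; iter < 100; ++iter) {
        // C = A * A; inputs and output stay in device memory between iterations.
        cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                    n, n, n, &alpha, dA, n, dA, n, &beta, dC, n);
    }

    // ...and one download at the end.
    cudaMemcpy(hostC.data(), dC, bytes, cudaMemcpyDeviceToHost);
    std::printf("C[0] = %f\n", hostC[0]);

    cublasDestroy(handle);
    cudaFree(dA);
    cudaFree(dC);
    return 0;
}
```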
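And for point 3, a crude sequential-read timer is enough to see the gap between the RAM drives and the spinning disks. This is only a sketch: the 1 MiB chunk size is arbitrary, the test file is whatever you drop on each device, and you would want to clear the page cache between runs so you measure the device rather than the kernel's cache.

```cpp
// Sketch only: time a sequential read of <file> and report MB/s.
// Run it against a file on the ramdrive and one on a 15,000 RPM drive.
#include <chrono>
#include <cstdio>
#include <fstream>
#include <vector>

int main(int argc, char **argv) {
    if (argc < 2) { std::fprintf(stderr, "usage: %s <file>\n", argv[0]); return 1; }

    std::ifstream in(argv[1], std::ios::binary);
    if (!in) { std::fprintf(stderr, "cannot open %s\n", argv[1]); return 1; }

    std::vector<char> buf(1 << 20);   // read in 1 MiB chunks
    size_t total = 0;

    auto start = std::chrono::steady_clock::now();
    while (in.read(buf.data(), buf.size()) || in.gcount() > 0) {
        total += size_t(in.gcount());
        if (in.eof()) break;          // last, partial chunk
    }
    auto stop = std::chrono::steady_clock::now();

    double secs = std::chrono::duration<double>(stop - start).count();
    std::printf("%zu bytes in %.3f s = %.1f MB/s\n",
                total, secs, total / secs / 1e6);
    return 0;
}
```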
The two systems would communicate using the fastest interconnect I can find, preferably InfiniBand. If needed, the RAID controller could be moved to the PCIe x1 slot to free up the PCI slot, but a PCIe x1 InfiniBand card would be better.
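For a first look at that link, a plain TCP ping-pong is enough: over ordinary Ethernet or IP-over-InfiniBand the same sockets code runs unchanged, so it gives a baseline round-trip number before any native InfiniBand (verbs) code gets written. This is a sketch only; the port, round count, and addresses are made up, and error handling is mostly omitted.

```cpp
// Sketch only: bare-bones TCP round-trip timer between the two boards.
// Build: g++ pingpong.cpp -o pingpong
// On the primary system:   ./pingpong server
// On the secondary system: ./pingpong client <primary-ip>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <chrono>
#include <cstdio>
#include <cstring>

static const int kPort = 5555;     // arbitrary test port
static const int kRounds = 1000;

int main(int argc, char **argv) {
    if (argc < 2) { std::fprintf(stderr, "usage: %s server | client <ip>\n", argv[0]); return 1; }
    char byte = 0;

    if (std::strcmp(argv[1], "server") == 0) {
        int ls = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in addr = {};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = INADDR_ANY;
        addr.sin_port = htons(kPort);
        bind(ls, (sockaddr *)&addr, sizeof(addr));
        listen(ls, 1);
        int s = accept(ls, nullptr, nullptr);
        // Echo every byte straight back to the client.
        while (recv(s, &byte, 1, 0) == 1)
            send(s, &byte, 1, 0);
        close(s);
        close(ls);
    } else {
        if (argc < 3) { std::fprintf(stderr, "client mode needs the server ip\n"); return 1; }
        int s = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in addr = {};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(kPort);
        inet_pton(AF_INET, argv[2], &addr.sin_addr);
        if (connect(s, (sockaddr *)&addr, sizeof(addr)) != 0) { std::perror("connect"); return 1; }

        auto start = std::chrono::steady_clock::now();
        for (int i = 0; i < kRounds; ++i) {
            send(s, &byte, 1, 0);          // one byte out...
            recv(s, &byte, 1, 0);          // ...and wait for it to come back
        }
        auto stop = std::chrono::steady_clock::now();
        double us = std::chrono::duration<double, std::micro>(stop - start).count();
        std::printf("avg round trip: %.1f us over %d rounds\n", us / kRounds, kRounds);
        close(s);
    }
    return 0;
}
```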
The onboard video from the second system would be used as the main interface. The primary system would never see human I/O except during install and maintenance.
The downside to this ridiculous mess is cost, which I can't imagine running under $10,000 and would estimate at closer to $15,000. Fair enough - I doubt anyone would build this kind of system, since for that price they could just buy a rack and a bunch of S870s or S1060s from Nvidia to put in it. But if you are talking about the absolute most power you can jam into a full-tower case using COTS parts, this system has to be basically it.