There’s an asymmetry in the data center, and it might be an opportunity for somebody to build a new product line (hint, hint: HP, Dell).
There are plenty of products that consist of a box filled with storage devices – the storage arrays that make up a SAN (storage area network). They’re essentially what allows big data to exist, by packing large amounts of storage into a relatively small space.
So why not do something with CPUs (central processing units) that replicates the idea behind the SAN?
The processing power in a data center still generally comes from rack-mounted servers. A standard rack server or blade server box contains a fully fledged computer: an OS (or at minimum some kind of virtualization layer), CPUs on a motherboard, memory, network cards, power supplies and so on.
Right now the highest-density servers pack around 900 CPU cores into a standard-size rack (according to this post, from about a year ago). The boxes (either standard rack-mount servers or blade servers) are fully fledged servers, so they carry all the components required to support an operating system.
If we did away with the notion of an actual server, and instead had a box that just packed in as many CPUs as possible, a bit of memory for caching, and an optical networking port on the back (plus a minimal BIOS that allows tasks to be assigned to the CPUs over the network), it would have many advantages.
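To make that idea concrete, here’s a minimal sketch (in Python, purely for illustration) of what “assigning tasks to the CPUs over the network” could look like: a controller serializes a small task descriptor and ships it to a box over a socket, and the box’s firmware only has to understand this one framed message format. The wire format, the kernel names and the addresses are all assumptions of mine, not an existing protocol.

```python
import json
import socket
import struct

# Hypothetical wire format (an assumption, not an existing protocol):
# a 4-byte big-endian length prefix followed by a JSON task descriptor.
# The CPU box's minimal firmware only has to parse this, run the named
# kernel on free cores, and stream the result back the same way.

def make_task(kernel_id: str, cores: int, payload: bytes) -> bytes:
    """Serialize a task descriptor for dispatch to a bare CPU box."""
    descriptor = {
        "kernel": kernel_id,       # name of pre-loaded code to execute
        "cores": cores,            # how many cores to reserve for it
        "payload": payload.hex(),  # input data for the kernel
    }
    body = json.dumps(descriptor).encode("utf-8")
    return struct.pack("!I", len(body)) + body

def _recv_exact(conn: socket.socket, n: int) -> bytes:
    """Read exactly n bytes from the connection."""
    data = b""
    while len(data) < n:
        chunk = conn.recv(n - len(data))
        if not chunk:
            raise ConnectionError("compute box closed the connection")
        data += chunk
    return data

def dispatch(box_address: tuple[str, int], task: bytes) -> bytes:
    """Send a task to a compute box and wait for the framed result."""
    with socket.create_connection(box_address) as conn:
        conn.sendall(task)
        (length,) = struct.unpack("!I", _recv_exact(conn, 4))
        return _recv_exact(conn, length)

# Usage (assuming a box is listening at 10.0.0.42, port 9000):
# result = dispatch(("10.0.0.42", 9000),
#                   make_task("matrix_multiply", cores=64, payload=b"..."))
```

The point is how little the box itself would have to implement: no OS, no filesystem, just a receive loop and a scheduler that parks incoming tasks on idle cores.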
Because the boxes wouldn’t be fully fledged servers that run an OS, many extraneous items could be removed – things like graphics cards, storage and so on. The result would be very compact. How many CPUs could you pack into a 9U rack box if nothing else went inside it? Hundreds? A thousand? Obviously cooling is a limiting factor, but we could still be talking about 10k CPU cores in a single rack. Extend that out to a row of racks, or a floor of a data center, and we could be talking about millions of cores in a relatively small space.
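To sanity-check those numbers, here’s a rough back-of-envelope calculation. Every figure in it is an assumption picked for illustration, not a measured spec:

```python
# Back-of-envelope density estimate. Every figure here is an assumption
# chosen for illustration, not a measured spec.

cores_per_cpu   = 16    # a plausible many-core part
cpus_per_9u_box = 150   # what might fit with no disks, GPUs or fans
boxes_per_rack  = 4     # four 9U boxes per 42U rack, leaving room for switching
racks_per_row   = 20
rows_per_floor  = 25

cores_per_box   = cores_per_cpu * cpus_per_9u_box    # 2,400
cores_per_rack  = cores_per_box * boxes_per_rack     # 9,600, roughly the 10k above
cores_per_floor = cores_per_rack * racks_per_row * rows_per_floor

print(f"cores per box:   {cores_per_box:,}")     # 2,400
print(f"cores per rack:  {cores_per_rack:,}")    # 9,600
print(f"cores per floor: {cores_per_floor:,}")   # 4,800,000
```

Even with fairly conservative guesses, a single floor lands in the millions of cores.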
If you further assumed that cooling was handled externally (i.e. built in a connector for an external cooling system), the box wouldn’t even need fans. This means that both power and cooling could be allocated around the data center in a very efficient way.
The resulting data center would then have separate sections for a SAN, the new computing network, and some stock servers in the middle to tie everything together. Might make a nice new product line for somebody.
Update (9 April 2013): Usually it takes more than two months before my predictions show up in the marketplace. The HP Moonshot server series, released yesterday, allows up to 1,800 tiny servers in a 47U rack. A big step in the direction outlined above.