Man creates webservers from stuff in his drawer

Hardware researcher Dave Andersen has worked out how to create a new Google from bits he found in his bottom drawer.

Andersen, who is a computer science professor at Carnegie Mellon, has more interesting stuff in his bottom drawer than we do. If we had to do the same thing we would have to assemble a computer from a couple of run-out biros, a coffee-stained paperback and a pencil which gives Winnie the Pooh an anal probe.

But Andersen's bottom drawer was full of tiny computers with 600 MHz chips. Built by Soekris Engineering, they were meant to be wireless access points or network firewalls, and were left over from an earlier project.

Andersen thought the gear had to be useful for something else. Normally that would involve paint, nylon thread and double-sided tape to make an art feature mobile, but he thought they would make a rather good super-low-power DNS (domain name system) server.

The DNS servers run on only about five watts of power, rather than the 500 or so a conventional server chews through. When he showed them to his students, they told him he was thinking too small.

The class worked out that, by tinkering with the machines and linking them together, you could run a massive application that no single machine could ever execute on its own.

It needed some natty coding to split the application's duties into tiny pieces and spread them evenly across the network, but the result could power the sort of databases you would run for Google, Facebook or Twitter.
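Very roughly, the trick is the one behind any distributed key-value store: hash each key and let the hash decide which small machine owns it. Here is a minimal Python sketch of that idea, with made-up node names and a crude modulo scheme rather than anything from the project's actual code:

    import hashlib

    # Hypothetical pool of wimpy nodes; the names are purely illustrative.
    NODES = ["wimpy-0", "wimpy-1", "wimpy-2", "wimpy-3"]

    def node_for_key(key: str) -> str:
        """Hash the key and map it to one of the nodes, so reads and
        writes spread evenly across the cluster."""
        digest = hashlib.sha1(key.encode()).hexdigest()
        return NODES[int(digest, 16) % len(NODES)]

    # Each lookup goes to whichever small machine owns that slice of keys.
    print(node_for_key("user:42"))

Real systems use cleverer schemes such as consistent hashing, so that adding or removing a node doesn't reshuffle every key, but the principle is the same.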

After a few years of development, Andersen and his students are starting to sell "Wimpy Nodes" to the likes of Amazon and Facebook. The project has also been given cash by Intel.

At the moment the problem is that the Fast Array of Wimpy Nodes (FAWN) isn't always fast, and only some applications respond to the treatment.

Strangely, one of the main objections to the FAWN approach comes from Google researcher Urs Hölzle. In a paper Wired dug up in the chip design magazine IEEE Micro, Google's parallel computing guru said that brawny cores still beat wimpy cores, most of the time. This is because of Amdahl's law, which limits how much you can gain from parallelisation. Moving data between so many cores can bog down the entire system, and besides, you have to rewrite all your software.
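For the mathematically inclined, Amdahl's law says that if only a fraction p of a job can be spread across n cores, the best speedup you can hope for is

    S(n) = \frac{1}{(1 - p) + \frac{p}{n}}

so it is the stubbornly serial leftovers, not the pile of wimpy cores, that end up setting the limit.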

Andersen said that while the article was "reasonably balanced," it was written from the perspective of a company that doesn't want to change too much of its software.