Nvidia gloats over Intel GPU speed concession

Plus ça change, plus c'est la même chose (the more things change, the more they stay the same), if you’ll pardon our French, is about the only way to describe the latest cat fight between industry giants Intel and Nvidia over graphics and acceleration.

In this particular case, Nvidia is smugly patting itself on the back after an Intel rep told the audience at the International Symposium on Computer Architecture (ISCA) in Saint-Malo, France, that GPUs were “only” up to 14 times faster than CPUs.

Nvidia, as un-humble as ever, refused to take that as a compliment, however, and is protesting, in true French fashion, that GPUs are actually much, much faster. 100 times faster, even.

Intel’s paper, entitled “Debunking the 100x GPU vs CPU Myth”, debunked nothing of the sort, according to Nvidia, which reckons the code Intel ran on the GTX 280 it pitted against its own Core i7 960 was running “right out-of-the-box, without any optimisation.” Merde alors!

Even without any of that extra fiddling about and tweaking, however, Intel was still forced to admit that the application kernels it was testing ran up to 14 times faster on an Nvidia GPU, the first time the chip giant has ever made such a concession.

Nvidia was quick to point out that while it was indeed true that not all applications could see a 100x speedup, some apps have seen 100x speedups and then some. To illustrate the point, the firm has just sent out a rather long, tedious and self-inflating list of developers that have achieved speedups of more than 100x in their applications. If you’re interested, and don’t have anything better to do (like watch the footy), that list can be found here on Nvidia’s cringeworthily named CUDA Zone.

“The real myth here is that multi-core CPUs are easy for any developer to use and see performance improvements,” says Nvidia spinner Mark Priscaro, adding that even undergraduates (snort!) studying parallel programming at M.I.T. have disputed this “when they looked at the performance increase they could get from different processor types and compared this with the amount of time they needed to spend in re-writing their code.”

Apparently the college boffins felt that they needed to invest just as much lab time coding for a CPU as for a GPU, but could at least get 35x more performance out of the latter… and play a better game of World of Warcraft when the professor wasn’t looking.

“At the end of the day, the key thing that matters is what the industry experts and the development community are saying and, overwhelmingly, these developers are voting by porting their applications to GPUs,” noted Priscaro.

Another win for Nvidia at ISCA today was that the firm’s chief scientist, Bill Dally, snagged the 2010 Eckert-Mauchly Award for his contributions to parallel computing architecture. Bravo!

* Update: This is how Intel reacted. It sent TechEye a statement:

"While understanding kernel performance can be useful, kernels typically represent only a fraction of the overall work a real application does. As you can see from the data in the paper – claims around the GPU’s kernel performance are often exaggerated.

"General purpose processors such as the Intel Core i7 or the Intel Xeon are the best choice for the vast majority of applications, be they for the client, general server or HPC market segments. This is because of the well-known Intel Architecture programming model, mature tools for software development, and more robust performance across a wide range of workloads and not just certain application kernels.

"While it is possible to program a graphics processor to compute on non-graphics workloads, optimal performance is typically achieved only with a high amount of hand optimization, require graphics languages similar to DirectX or OpenGL shader programs or non-industry standard languages. For those HPC application that do benefit from an extremely high level of parallelism, the Intel MIC architecture will be a good choice as it supports standard tools and libraries in standard high level languages like C/C++, FORTRAN, OpenMP, MPI among many other standards."