The Geforce GTX 680 has finally broken cover. With the NDAs going up in smoke today, the now unshackled reviewers have posted their take on Nvidia’s newest and brightest Geforce graphics chippery, and it’s looking rather good for the green one.
If you were expecting a pure revolution in graphics performance, Nvidia might disappoint. When it comes to reinventing the wheel, however, it hasn’t disappointed at all. It seems Nvidia had the right idea this time around.
According to the official communiqué, Nvidia has gifted the new 3.54 billion transistor GTX 680 with 1536 CUDA cores and dialled up the core clock all the way to 1006MHz. Memory is configured as four 64-bit controllers (256-bit in total), each managing 512MB of 6GHz-effective GDDR5 for 2GB overall. The die is comfortably small at 294mm2, much, much smaller than the GTX 580’s huge 520mm2. It's good to see Nvidia didn't reach for the moon here.
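A quick back-of-the-envelope check of what that memory setup adds up to (a sketch based on the GTX 680’s 256-bit-wide interface, built from four 64-bit controllers, with 6GHz as the effective GDDR5 data rate):

```python
# Back-of-the-envelope GTX 680 memory figures from the quoted specs.
controllers = 4                      # four 64-bit memory controllers
bus_width_bits = controllers * 64    # 256-bit interface in total
data_rate_hz = 6e9                   # 6GHz effective GDDR5 data rate

total_memory_mb = controllers * 512              # 512MB per controller
bandwidth_gb_s = bus_width_bits / 8 * data_rate_hz / 1e9

print(total_memory_mb)   # 2048 MB, i.e. 2GB
print(bandwidth_gb_s)    # 192.0 GB/s
```

That 192GB/s figure is what the 256-bit bus and 6GHz memory work out to on paper.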
Performance-wise, it delivers the claimed 10 to 15 percent lead over AMD’s HD 7970 chip, though you can always argue that AMD’s counterpart is clocked lower at reference. Both are formidable overclockers, so there will be another duel once post-launch custom-cooled cards arrive from Nvidia’s add-in board partners. We have to confess that we were expecting a bigger lead from Nvidia, but this takes us to the next point: features.
First off, Kepler introduces new hardware multimonitor support that lets you – for example – game on three screens and keep a fourth screen outputting something completely different, like your desktop or an HD movie. This is Nvidia’s response to Eyefinity, which raises the question: why haven’t we seen multimonitor testing in the reviews?
Secondly, Nvidia seems to have hit the nail on the head in all matters related to power consumption. Ever since AMD’s 5000-series cards, Nvidia has been trailing AMD on idle power, but it decided to up the ante with the Kepler architecture.
Kepler introduces, among other things, GPU Boost, a sort of Turbo Core for GPUs, which boosts the core clock when needed and saves power when it isn’t. Previously, GPUs relied on software profiles for specific games: as soon as the EXE file was identified, the GPU would rev up its engines. GPU Boost instead adjusts the clock on the fly, depending on the task’s requirements. It also seriously downclocks the card mid-game when the chip’s full muscle is not required, which brings substantial power savings.
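As a loose illustration of the idea only – this is not Nvidia’s actual algorithm, which runs from power and thermal telemetry in hardware; the clock steps, thresholds and utilisation figures below are all invented:

```python
# Toy sketch of a GPU-Boost-style control loop (illustrative only;
# step size, limits and thresholds are made-up numbers, not Nvidia's).
BASE_CLOCK_MHZ = 1006   # GTX 680 reference base clock
MAX_BOOST_MHZ = 1110    # hypothetical boost ceiling
MIN_CLOCK_MHZ = 324     # hypothetical idle/low-load floor
POWER_BUDGET_W = 195    # the card's rated board power
STEP_MHZ = 13           # hypothetical clock step

def next_clock(current_mhz, power_draw_w, gpu_utilization):
    """Raise the clock when the GPU is busy and there is power headroom;
    drop it when the chip's full muscle is not needed."""
    if gpu_utilization > 0.9 and power_draw_w < POWER_BUDGET_W:
        return min(current_mhz + STEP_MHZ, MAX_BOOST_MHZ)
    if gpu_utilization < 0.5:
        return max(current_mhz - STEP_MHZ, MIN_CLOCK_MHZ)
    return current_mhz

print(next_clock(1006, 150, 0.99))  # busy with headroom -> boosts to 1019
print(next_clock(1006, 150, 0.30))  # light load -> drops to 993
```

The gist is the same as the text above: clocks follow demand and the power budget in real time, rather than being keyed off a game’s EXE profile.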
Third, Nvidia has introduced new AA modes, FXAA and TXAA. TXAA stands for Temporal Anti-Aliasing and comes in two modes: one promises 16x MSAA quality with a 2x MSAA performance hit, while the other offers better-than-16x MSAA quality at a 4x MSAA hit.
The Geforce GTX 680 is also rated at 195W, and the launch price is a rather ‘affordable’ $499, something that Nvidia wants to drive into consumers’ skulls: this time you can have your cake and eat it too. Pricing seems to be spot on for the reference card, not to mention you needn’t upgrade your power supply. Expect insanely priced Super Overclocked editions soon, considering the card’s overclocking potential.
AMD and Nvidia have been locked in fierce competition for a while now, and every new chip picks up where the rival left off. When AMD was questioned over its 6000-series’ performance, it answered with power efficiency and display features. When Nvidia was questioned over its display features and power efficiency, it answered with Kepler.
We’ve collected a few reviews so you can take a look for yourself.