In 2014/2015, it took NVIDIA 6 months from the launch of the Maxwell 2 architecture to get the GTX Titan X out the door. All things considered, that was a fast turnaround for a new architecture. However, now that we're in the Pascal generation, it turns out NVIDIA is in the mood to set a speed record, and in more ways than one.
Announced this evening by Jen-Hsun Huang at an engagement at Stanford University is the NVIDIA Titan X, NVIDIA’s new flagship video card. Based on the company’s new GP102 GPU, it’s launching in less than two weeks, on August 2nd….(continued)
Why This Matters
The enthusiast gaming market is headed in a bad direction. Just weeks ago, Intel launched the new Broadwell-E 6950X, a 10-core monstrosity with an even more monstrous price tag attached, one that works better as a showpiece to demonstrate your wealth than your computing prowess. But that's a topic for another day. Today, we're dealing with the new, oddly named Nvidia Titan X, which, again, I could write an entire article on. But putting the name aside, this is a card that I've both been anticipating and dreading.
I started my Titan journey with the original GTX Titan. 3 of them. Watercooled. Voltage unlocked. Clocked at over 1.3GHz. While that number may seem small by today's standards, it allowed them to remain top-end cards for almost a full 2 years. Because while the Titan Black that launched later had an extra SMX enabled, it wasn't a significant difference, and the unlockable voltage on the original GTX Titan, which was not an option on the Titan Black, allowed it to overcome its lower core count with a higher core clock. Even when the Maxwell GTX 980 came out, which was a significant architectural improvement over the older Kepler cards, it still only performed about 10% faster than my OG Titans. But then came the Titan X.
At roughly 2x the performance, and 2x the memory, I couldn't resist. So I upgraded. Initially to just 2 cards. But I ended up greatly missing the power of the 3rd card for pushing higher frame rates, so I gave in and went back to a Tri-SLI setup. And I've been incredibly happy with them since I bought them. Except…well…there is no except. Overclocked to 1500MHz on the GPU, and 8000MHz on the memory, they take all kinds of punishment with ease. The only real limiting factor is…DX11. And…DX12. The thing is, under DX11 you get relatively easy SLI implementation. Even if it's not perfect, there are workarounds people share online for setting the correct SLI bits through Nvidia Inspector.
But with DX11, you have significant CPU limitations. Workload distribution across cores is atrocious in many games, and DX11 draw call limitations absolutely kill open-world games like Assassin's Creed, Watch Dogs, Rise of the Tomb Raider, and Grand Theft Auto V. DX12 fixes this. DX12 is incredible. DX12 is everything I've wanted in a graphics API for the past 20 years. It's the answer to the question of why my consoles performed so much better than my PC, even with inferior hardware. But the problem with DX12 is that proper implementation of multi-GPU (SLI/CrossFire) is in the hands of the developer. And even the developers of games published by Microsoft, like Quantum Break, have said that adding multi-GPU support is outside the scope of their project. That it's too much work for them. And Quantum Break is a AAA game, published by what is, I think, the biggest, or at least one of the biggest, software companies in the world.
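To make the "hands of the developer" point concrete: under DX12's explicit multi-adapter model, nothing about multiple GPUs is automatic. Here's a minimal C++ sketch of just the very first step, enumerating adapters and creating a device per GPU (standard D3D12/DXGI calls; the function name is mine). Everything after this point, splitting the frame, copying results between GPUs, fence synchronization, is also the developer's problem, and that's exactly the work these studios are declining to do.

```cpp
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>

#pragma comment(lib, "d3d12.lib")
#pragma comment(lib, "dxgi.lib")

using Microsoft::WRL::ComPtr;

// Step one of explicit multi-adapter: find every hardware GPU and create a
// separate D3D12 device for each. The driver will NOT pair them up for you
// the way DX11 SLI did; distributing work, copying between GPUs, and
// synchronizing with fences are all on the application from here.
std::vector<ComPtr<ID3D12Device>> CreateDevicesForAllGpus()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // skip the WARP software rasterizer

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            devices.push_back(device);
    }
    return devices;
}
```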
And this is why…despite dreading having to upgrade to the new "Nvidia Titan X" so soon, I'm reluctantly embracing it. And excited for it. Because at the end of the day, more important than anything else, is single-GPU performance. Never having to worry about SLI scaling issues, stuttering, flickering shadows, or a total lack of SLI support in general. See? I wasn't just rambling pointlessly for 5 minutes. But I'm not done ranting yet. Because I'm quite pissed at Nvidia over the Nvidia Titan X. And I'll tell you why.
Unlike the original Titan or the Titan X, this is the first Titan card that is not a full-fat card. They're doing a modified repeat of the original Titan launch strategy. Part of the reason the Titan X performed so much better than the original Titan, despite being roughly the same die size on the same 28nm process, is that a large part of the original Titan die was dedicated to FP64 compute, which means zilch for gaming. Nvidia called it "optimization." But really, they just cut out the extra compute hardware for Maxwell and dedicated the space to more of our great CUDA core buddies. So in the same die space, we ended up with more cores for gaming. That, along with a higher clock, meant about a 2x performance increase.
Now here's the problem with this new Nvidia Titan X card. The original Titan had a 561mm2 die. The Titan X was 600mm2. The full-fat Pascal die is 610mm2. But this "Nvidia Titan X" card uses a 471mm2 die. So they're selling the smallest Titan die yet, and yet raising the price 20% over the previous Titan X. And what's worse…they even cut HBM2 from this card, and went with 10Gbps GDDR5X, which delivers substantially less bandwidth than HBM2 would have, along with giving up HBM2's lower power consumption, lower heat, and ultimately higher clocks. Now of course, as a business, their strategy works. There are enough of us who will buy these cards at this premium price. But it still leaves a bad taste in my mouth. Because I know that with this launch, they're already preparing for a 2017 launch of the next Titan card, whatever it ends up being called, with the full 610mm2 die and HBM2.
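If you want to sanity-check that bandwidth claim, the back-of-the-envelope math is simple. The HBM2 figure below assumes a Tesla P100-style 4096-bit configuration, since no HBM2 Titan actually exists to compare against:

```cpp
#include <cstdio>

int main()
{
    // Peak bandwidth (GB/s) = per-pin data rate (Gbps) * bus width (bits) / 8
    double gddr5x = 10.0 * 384 / 8;  // Titan X (Pascal): 10Gbps GDDR5X on a 384-bit bus = 480 GB/s
    double hbm2   = 1.4 * 4096 / 8;  // assumed P100-style HBM2: ~1.4Gbps on a 4096-bit bus = ~717 GB/s

    std::printf("GDDR5X: %.0f GB/s vs HBM2: %.0f GB/s (%.0f%% less)\n",
                gddr5x, hbm2, (1.0 - gddr5x / hbm2) * 100.0);
    return 0;
}
```

Under those assumptions, GDDR5X lands roughly a third short of HBM2, and that's before counting the power and thermal headroom HBM2 would have freed up.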
That was a long history lesson, and some ranting, so here's a quick recap. The Nvidia Titan X will be an absolutely amazing card, and if you have the money for it, there's no reason not to buy it. However, note that Nvidia is intentionally holding back because AMD hasn't done enough to pressure them. The best hope for consumers right now is for AMD to finally come out with a card that can give Nvidia a challenge, and our pocketbooks a break. Until then, Nvidia Titan X pre-order, here I come.
As a side note…for this generation, I will finally be switching from Tri-SLI down to standard 2-card SLI, to take advantage of the new HB SLI bridge and its doubled bandwidth, to get away from over-reliance on SLI, and also to prepare for one-GPU-per-eye VR rendering. Let me know your thoughts below.