Microsoft just announced that the new Xbox Series X will indeed pump out roughly 12 teraflops of graphics processing power.
That sounds like a lot. But let’s look at the evolution of game consoles.
Historically, with older consoles like Sony's PlayStation 1 and 2, there was a large generational jump in computational power, but the jump in render resolution wasn't as large. This means the additional GPU power wasn't used only to render higher resolutions, but also to increase the fidelity of graphical effects.
The number of floating-point operations per second is only a rough indicator of actual performance, which also depends on many factors like memory bandwidth and dedicated hardware for specific effects, such as the Xbox Series X's ray-tracing hardware.
But when comparing the same render pipeline on different hardware, GFLOPS are a pretty good indicator.
Higher resolutions and faster GPUs are great, but to answer the question "can we achieve better graphical fidelity?", we need to look not only at the processing power, but at the processing power per pixel.
These charts, comparing PlayStation and Xbox game consoles over the generations, use a logarithmic scale due to the huge jump in processing power from earlier generations to the current one.
Don't get me wrong: I still think next-generation games on the Xbox Series X and PlayStation 5 are going to look great. But because processing power only grew in proportion to the number of pixels we have to push, finally hitting the expected 60 frames per second eats up most of the performance gains.
Note: the PlayStation 5's computing power hasn't been officially confirmed by Sony yet, but I'm expecting it to be on par with the Xbox Series X.
| Console | Year | GFLOPS | Width (px) | Height (px) | FPS | Pixels/s ÷ 40,000 |
| --- | --- | --- | --- | --- | --- | --- |
| PlayStation 4 Pro | 2016 | 4200 | 2560 | 1440 | 30 | 2764.8 |
| Xbox One X | 2017 | 6000 | 3840 | 2160 | 30 | 6220.8 |
| Xbox Series X | 2020 | 12000 | 3840 | 2160 | 60 | 12441.6 |
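To make the "processing power per pixel" comparison concrete, here is a quick back-of-the-envelope sketch in Python, using the GFLOPS, resolution, and frame-rate numbers from the table above:

```python
# FLOPs available per rendered pixel per frame, derived from the table:
# GFLOPS * 1e9 gives FLOPs per second; width * height * fps gives the
# number of pixels pushed per second.
consoles = [
    # (name, gflops, width, height, fps)
    ("PlayStation 4 Pro",  4200, 2560, 1440, 30),
    ("Xbox One X",         6000, 3840, 2160, 30),
    ("Xbox Series X",     12000, 3840, 2160, 60),
]

for name, gflops, width, height, fps in consoles:
    pixels_per_second = width * height * fps
    flops_per_pixel = gflops * 1e9 / pixels_per_second
    print(f"{name}: ~{flops_per_pixel:,.0f} FLOPs per pixel")
```

Running this shows the Xbox Series X ends up with the same per-pixel budget as the Xbox One X, and noticeably less than the PlayStation 4 Pro enjoyed at 1440p/30.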
While there was a significant increase in processing power per pixel in game consoles from the mid-90s to the mid-2000s, the GPU improvements from the mid-2000s to the next generation coming out in 2020 were mostly offset by ever-increasing render resolutions required to drive HD and Ultra HD TVs of increasing sizes.
Games on the PS2 looked amazingly smooth and lifelike compared to the first PlayStation, and the PS3 enabled real-time lighting and made deferred render pipelines feasible. But from the PS3 to the PS4 Pro, we've mostly been talking about small improvements beyond higher resolutions and frame rates. Consider the category of "remastered" games like Skyrim or The Last of Us, which have minimal graphical improvements beyond resolution increases.
To truly achieve better graphics, beyond "just" higher resolutions, we will need to create better, more efficient render and asset pipelines and make use of hardware advancements that go beyond raw processing power. We'll need to use the additional memory to bake more assets in smart ways, and the faster storage to stream those assets more effectively. The new Variable Rate Shading hardware could also contribute to an actual jump in graphical fidelity.
But looking at raw processing power alone, without any further optimizations, the doubling of GPU power in the Xbox Series X compared to the Xbox One X will only help render things at 60 instead of 30 frames per second.
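The arithmetic behind that conclusion is simple enough to verify in two lines:

```python
# The Series X doubles the One X's GFLOPS (6000 -> 12000), but going
# from 30 to 60 fps at 4K also doubles the pixels pushed per second --
# so the FLOPs available per pixel stay exactly the same.
one_x_flops_per_pixel    =  6000e9 / (3840 * 2160 * 30)
series_x_flops_per_pixel = 12000e9 / (3840 * 2160 * 60)
print(series_x_flops_per_pixel / one_x_flops_per_pixel)  # -> 1.0
```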