Infographics: 3D Accelerators Evolution
The history of GPU development and the graphics technologies they brought to video games
Descent is considered the first truly 3D video game. It was released back in 1995, a year before 3D accelerators appeared. The first successful 3D accelerator was Voodoo Graphics from 3dfx, released in 1996. It could process up to 1 million polygons per second. The major advantage of Voodoo, however, was its complete autonomy in 3D graphics processing: you could enjoy 3D games like the aforementioned Descent or the first Quake even with a weak CPU at the Intel 486 level.
Voodoo Graphics had a drawback, though: it couldn't handle 2D, so a separate 2D video card was still required for full-fledged use. Nvidia solved this problem a year later, in 1997, with the Riva 128. In addition to the 3D chip, it also contained a 2D one, which made it possible to run three-dimensional games in windowed mode inside a two-dimensional operating system. Support for high resolutions, up to 1600x1200, also appeared. At the same time, it needed a good processor for its day, such as a Pentium II.
This established the initial concept of 3D accelerators: they work in two modes, 2D and 3D, and take most of the graphics processing load off the CPU. As the technology developed, however, at least a mid-range processor became necessary to keep pace with powerful graphics cards. Today, that means solutions at the level of an Intel Core i5 or AMD Ryzen 5.
We decided to find out how 3D accelerators developed from 1996 to 2019. To do this, we collected information on the growth of clock speeds, the amount of video memory, theoretical performance, and so on. For clarity, we presented everything as infographics.
Graphics processing unit (GPU)
A graphics processing unit (GPU) is the main component of a video card. It does all the hard work: accelerating graphics processing, drawing frames, and outputting the result to the screen. Thanks to its narrow specialization, the GPU copes with this work much better than the central processing unit (CPU).
The term GPU itself was coined by Nvidia in 1999 with the release of the GeForce 256 graphics card, presented as “the world's first GPU.” The company described the new concept as “a single-chip processor with integrated transform, lighting, triangle setup/clipping, and rendering engines.” The “triangles” here are, of course, polygons: the minimal parts of a three-dimensional image from which objects are built.
The graph below traces the growth of GPU frequencies and the number of main cores. Note that cores became universal (shading units) only by the end of the 2000s, starting with the GeForce 8 series for Nvidia and the Radeon HD 2000 series for AMD. Until about 2006, therefore, “cores” refers to various fixed-function blocks with long-forgotten names like pixel pipelines or pixel/vertex shaders.
You may notice that the “red” team (AMD) usually had more cores, while the “green” team (Nvidia) often followed the principle of “less is more.” The same applies to another parameter in the graph above, thermal design power (TDP). For very old 3dfx models and the first products from ATI (later acquired by AMD), there is practically no data. After that point, you can see that flagship Radeon cards were usually more power-hungry than GeForce flagships.
Video memory (VRAM)
In terms of functionality, video memory (VRAM) is similar to classic random access memory (RAM). This is one of the reasons a video card can borrow missing megabytes from regular RAM. Video memory temporarily stores the graphics data needed at a particular moment.
VRAM also has other characteristics, such as frequency and bandwidth, measured in megahertz (MHz) and gigabytes per second (GB/s) respectively. Their development can be traced below:
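As a quick aside, bandwidth follows directly from frequency: it is the effective memory clock multiplied by the bus width. A minimal sketch in Python (the function name and the example figures are our own illustration, not taken from the charts; it assumes the quoted clock already includes the DDR/GDDR multiplier and that vendors count 1 GB as 10^9 bytes, as is common in spec sheets):

```python
def vram_bandwidth_gbps(effective_clock_mhz: float, bus_width_bits: int) -> float:
    """Bandwidth (GB/s) = effective clock (MHz) * bus width (bits) / 8 / 1000.

    Dividing by 8 converts bits to bytes; dividing by 1000 converts
    millions of bytes per second to gigabytes per second.
    """
    return effective_clock_mhz * bus_width_bits / 8 / 1000

# Example: GDDR5-like memory at 7000 MHz effective on a 256-bit bus.
print(vram_bandwidth_gbps(7000, 256))  # -> 224.0
```

This is why a card can gain bandwidth either from faster memory chips or from a wider bus, two routes that vendors have alternated between over the years.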
The technology for manufacturing video memory has changed several times over the years. We cover it in a separate section below.
Until the 2000s, 3D accelerators were mainly equipped with memory of the ancient EDO standard and sometimes with the slightly more modern SDRAM (SDR), which was also used for a time as a computer's main RAM. In the early 2000s, all memory (both RAM and VRAM) switched to the DDR standard. By the mid-2000s, however, the paths diverged: system RAM moved to DDR2, while video cards got separate, significantly faster GDDR memory.
In the illustration below, this is clearly visible:
Application programming interface (API)
An API is an application programming interface. It lets a game communicate with the operating system, making the game's graphics subsystem work within the software environment of the OS.
Voodoo 3D accelerators used a proprietary API called Glide. It was the first easy-to-use interface: essentially a reworking of early versions of OpenGL adapted to the needs of game developers. The first fully three-dimensional projects, such as Descent and Quake, were built on Glide.
Soon, however, the original OpenGL was greatly improved with updates. In addition, it worked on any graphics card, not just 3dfx products as Glide did. OpenGL replaced Glide and became the standard for a while, until 2001. We therefore omitted Glide from the chart below: it simply did not exist long enough.
Then, in 2001, DirectX 8.0 was released. Microsoft's API finally began to support all the necessary technologies, including the then-novel shaders. Since then, Direct3D (the graphics part of DirectX) has been considered the main application programming interface in PC gaming, although OpenGL is still used in some projects.
In general, the main measure of a video card's performance is contemporary games. However, it would be strange to compare FPS numbers from the first Doom of 1993 with the smoothness of the picture in the 2016 one. Therefore, we took theoretical performance data: pixel fillrate, texture fillrate, and the now-fashionable teraflops (TFLOPS). In a nutshell, the higher the number, the better.
Unfortunately, there is no teraflops data for very old models like the 3dfx Voodoo or ATI 3D Rage. Moreover, until the late 2000s the theoretical performance of GPUs was below 1 trillion FLOPS (1 teraflop), so their FLOPS figures can look insignificant.
At the end of the 2000s, the theoretical performance of Radeon cards was on average twice as high as that of GeForce. In games, however, competitors like the GeForce GTX 480 and Radeon HD 5870 were usually more or less close to each other. From the 2010s onward, teraflops figures have reflected the true difference more closely.
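These theoretical figures follow from simple formulas based on a card's specs. A hedged sketch in Python (function names and sample numbers are our own illustrations; the factor of 2 in the TFLOPS formula assumes one fused multiply-add, i.e. two floating-point operations, per shading unit per clock, which is the convention behind most quoted figures):

```python
def pixel_fillrate_gpixels(rops: int, core_clock_mhz: float) -> float:
    """Pixel fillrate (GPixel/s) = ROPs * core clock (MHz) / 1000."""
    return rops * core_clock_mhz / 1000

def texture_fillrate_gtexels(tmus: int, core_clock_mhz: float) -> float:
    """Texture fillrate (GTexel/s) = TMUs * core clock (MHz) / 1000."""
    return tmus * core_clock_mhz / 1000

def tflops(shading_units: int, clock_mhz: float) -> float:
    """TFLOPS = shading units * clock (MHz) * 2 ops per clock / 10^6."""
    return shading_units * clock_mhz * 2 / 1e6

# Example with GeForce GTX 480-like specs:
# 480 shading units running at a 1401 MHz shader clock.
print(tflops(480, 1401))  # roughly 1.345 TFLOPS
```

Because these formulas count only peak arithmetic throughput and ignore memory bandwidth, drivers, and architecture differences, two cards with similar TFLOPS can still perform very differently in real games, which is exactly the Radeon vs. GeForce discrepancy noted above.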
Now let's look at the new technologies that appeared with each new series of video cards. Most of these innovations gradually brought graphics closer to what we see in modern games. To show how these technologies came into video games, we made a timeline featuring every step. See it below.
You may notice that in the ten years since 2009, most new technologies have had a smaller impact. It was not until 2018 that Nvidia impressed us with RTX. Other than that, little turned out to be truly innovative. The first Crysis was released 12 years ago and still looks really fresh in terms of graphics. Crytek even teased a remake or remastered version of the game recently. That said, we are looking forward to seeing ray-traced effects in that jungle paradise.