
The End of Fixed-Function Rendering Pipelines (And How to Move On)


Fixed-function pipelines have no business on modern video cards. Here's what you need to know about them, and how to make the switch away from them if you haven't already.

In The Beginning: The Rise of Graphics Hardware

Once upon a time, when game development (or coding anything to do with real-time graphics, really) was limited to writing to a relatively small matrix of color intensities (the frame buffer) and sending it to the hardware, you could do anything you wanted with it. You could draw to the image once, ten times, or a hundred times. You could pass over the frame buffer again to apply some neat effects. In short, whatever your processor was capable of doing, you could do to the image that got sent to the monitor. This allowed for some really great and creative stuff, but people seldom used it to that extent. Why not?

The answer: because it was slow. A 640×480px image (a common resolution at the time) contains 307,200 pixels, and CPUs were so much slower then that you couldn't do all that much in the short time given to you to draw each frame. After all, if you want to keep drawing at 30FPS, you've got only around 33ms to update your game logic and render the screen, and that has to include the overhead of communicating with the hardware. It wasn't much.
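To put numbers on that budget, here's a quick back-of-the-envelope sketch in C++ (the 100MHz CPU figure is just an illustrative assumption about hardware of that era):

    #include <cstdio>

    int main() {
        // Frame budget for software rendering at 640x480, 30 FPS.
        const long pixels = 640L * 480L;                  // 307,200 pixels
        const double frameMs = 1000.0 / 30.0;             // ~33.3 ms per frame
        const double nsPerPixel = frameMs * 1e6 / pixels; // ~108 ns per pixel
        // On a ~100 MHz CPU, that's roughly ten clock cycles per pixel,
        // and the game logic still has to fit into the same budget.
        std::printf("%ld px, %.1f ms/frame, %.0f ns/px\n",
                    pixels, frameMs, nsPerPixel);
        return 0;
    }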

Then along came the cool GPUs with rendering pipelines. You, the developer, would take care of updating the game logic and sending your textures and triangles to the GPU, and it would do the heavy lifting and number crunching on them. These were fixed-function rendering pipelines (FFPs), meaning you couldn't reprogram the functions they performed. You could tell them "make the fog dark gray" or "don't do the lighting for me!", and you could configure many other parameters, but the functions themselves remained fixed.

The hardware was hard-wired and narrowly specialized to perform a set of standard operations on your data, and because it was wired that way, it was far faster than doing the same work on your processor. The downside was that you paid for that speed in flexibility. Still, if you ever wanted to draw anything complex, the CPU simply wasn't fast enough, and submitting your vertices to the GPU was the only choice.
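To make that concrete, here's roughly what driving the fixed-function pipeline looked like in legacy OpenGL 1.x; the parameter values are made up for illustration:

    #include <GL/gl.h>

    // Legacy OpenGL 1.x: you flip switches and tune parameters, but the
    // fog and lighting functions themselves are baked into the pipeline.
    void configureFixedFunction() {
        const GLfloat fogColor[] = { 0.3f, 0.3f, 0.3f, 1.0f };

        glEnable(GL_FOG);                // "do the fog for me"
        glFogi(GL_FOG_MODE, GL_EXP2);    // pick one of a few canned modes
        glFogfv(GL_FOG_COLOR, fogColor); // "make the fog dark gray"
        glFogf(GL_FOG_DENSITY, 0.05f);

        glDisable(GL_LIGHTING);          // "don't do the lighting for me!"
    }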

This is roughly how the fixed-function pipeline worked. The image is not intended as an accurate representation, but to give you an idea of how it all fit together.

The Demands for Better Realism

The graphics world was still changing rapidly. Like all creative and talented people, game developers love challenges, and one of their challenges was (and will remain so, for a long time) to render ever better-looking, more realistic images.

The fixed-function pipeline provided some nice features, such as multiple blending modes, per-vertex Gouraud shading, fog effects, and stencil buffers (for shadow volumes), so developers used what they could. Soon, there were some really impressive effects going on, all achieved by simulating real-life phenomena with a few cheap tricks (well, cheap by today's standards).

This was all going very well, but it was still limited by the number of functions the fixed pipeline could perform. After all, the real world has so many different materials, and to simulate them, the only variation developers were allowed was to change some blend modes, add some more textures, or tweak the light reflection colors.

Then it happened: the first programmable GPUs came along. They did not arrive overnight, of course, and they were bound to arrive some day, but this still created excitement. These GPUs had what was called a programmable rendering pipeline: you could now write small programs, called shaders, in a limited assembly language, and have them execute on the video card for each vertex or fragment. This was a big leap forward, and it only got better.

Soon the shader assembly languages grew in complexity and expressiveness, and high-level languages for GPU programming emerged, such as HLSL, Cg, and GLSL. Today we have geometry shaders, which can even stream out new vertices, and tessellation shaders, which dynamically subdivide geometry; inside them we can sample an awful lot of textures, branch dynamically, and do all sorts of crazy math on the input values.
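As a taste of what this looks like in practice, here's a minimal sketch of building a GLSL shader program in desktop OpenGL; error checking is omitted, and the uniform names (u_mvp, u_color) are my own:

    #include <GL/gl.h> // plus an extension loader (e.g. GLEW or glad) in practice

    // A tiny program runs once per vertex and once per fragment,
    // replacing the fixed transform and shading stages.
    static const char* kVertexSrc =
        "#version 110\n"
        "uniform mat4 u_mvp;\n"
        "void main() {\n"
        "    gl_Position = u_mvp * gl_Vertex;\n"
        "}\n";

    static const char* kFragmentSrc =
        "#version 110\n"
        "uniform vec4 u_color;\n"
        "void main() {\n"
        "    gl_FragColor = u_color;\n"
        "}\n";

    static GLuint compileShader(GLenum type, const char* src) {
        GLuint shader = glCreateShader(type);
        glShaderSource(shader, 1, &src, NULL);
        glCompileShader(shader); // query GL_COMPILE_STATUS in real code
        return shader;
    }

    GLuint createProgram() {
        GLuint program = glCreateProgram();
        glAttachShader(program, compileShader(GL_VERTEX_SHADER, kVertexSrc));
        glAttachShader(program, compileShader(GL_FRAGMENT_SHADER, kFragmentSrc));
        glLinkProgram(program); // query GL_LINK_STATUS in real code
        return program;
    }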

When you give developers these advantages, they go wild; soon they were writing shaders for all sorts of stuff: parallax mapping, custom lighting models, refraction, you name it. Later, completely custom lighting systems emerged, such as deferred shading and light pre-pass, along with complex post-processing effects such as screen-space ambient occlusion and horizon-based ambient occlusion. Some developers even "abused" shaders to do repetitive, math-heavy tasks, like statistical processing or breaking string hashes. (This was before general-purpose computing on GPUs gained mainstream support.)

In short, computer graphics exploded with the introduction of shaders, and with good reason: the ability to program what exactly happened to vertices, fragments, textures, and so on, and to do it fast, provided nearly endless possibilities.

A simplified representation of the programmable pipeline. Note how the specific transformation, shading, and texture stages were replaced by specialized shaders. (Tessellation omitted for clarity.)

The Complete Switch

Soon, fixed-function pipelines were obsolete, at least for game developers. After all, why bother with such walled gardens when you can program precisely what happens to your data? They stayed in use for much longer in applications where realism was not an issue, such as CAD, but by and large they were being ignored.

OpenGL ES 2.0, released in 2007, dropped its fixed-function pipeline entirely in favor of a programmable one. OpenGL 3.2, back in 2009, finally removed all notion of fixed-function vertex and fragment processing from its core profile (though it remains available for legacy use via the compatibility profile). It makes very little sense today to work with the limited pipeline when you've got powerful GPUs capable of doing awesome things at your disposal.

Because these APIs force you to use shaders (and this includes DirectX, which, although older versions still offer the functionality, provides tools to help you migrate away from the old approach and has virtually no new documentation regarding the FFP), they are harder for a beginner to get right. If you're only starting out as a 3D programming rookie, it's a lot easier just to hand the API your matrices, lighting parameters, and whatnot, and have it do everything for you.

But in the long run, it will benefit you much more to learn to write programs that describe the process precisely. You will understand intimately what's going on under the hood, grasp some very important concepts the FFP never required you to learn, and be able to tweak your materials easily to do complex things the fixed-function pipeline could never do for you (and it's useful for debugging, too!).
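For example, where the FFP had you load matrices into API-owned state, with shaders you hand the same data to a program you wrote yourself. A minimal sketch, where u_mvp is a hypothetical uniform from your own shader:

    #include <GL/gl.h> // plus an extension loader, as before

    // Fixed-function style (the API owns the matrix and decides its meaning):
    //     glMatrixMode(GL_MODELVIEW);
    //     glLoadMatrixf(modelViewMatrix);

    // Programmable style: you name the data, and *your* shader decides
    // what to do with it.
    void applyTransform(GLuint program, const GLfloat* mvp) {
        GLint loc = glGetUniformLocation(program, "u_mvp");
        glUniformMatrix4fv(loc, 1, GL_FALSE, mvp); // column-major, as GL expects
    }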

I've mentioned OpenGL ES, so let me build on that in more detail. As mobile gaming becomes more and more popular, it makes sense to create virtual worlds of ever-increasing complexity. The fixed-function calls were removed in ES 2.0 (which, naturally, means they're absent from subsequent versions as well). This essentially means that, to use any of the features beyond ES 1.1, you need to use shaders.

ES 2.0 is supported by the iPhone 3GS and later, every iPad, and third-generation and later iPod touch devices. Qualcomm's Snapdragon, a chip widely used in Android phones, also supports OpenGL ES 2.0. That is very wide support, because ES 2.0 is not exactly "new": it is over seven years old now. To get the most out of these architectures, you need to let go of the fixed-function pipeline.

I assume most of you made the switch long, long ago, but it's not hard to imagine some 2D graphics engines or legacy games that still use the fixed-function pipeline (because there's no need for more). That's fine, but using it for new projects, or training programmers in it, seems like a waste of time. This is amplified by the fact that many tutorials you can find on the Internet are grossly outdated and will teach you the FFP from the beginning, and before you even realize what's going on, you'll be in deep.

My first brush with 3D graphics was an ancient DirectX 7 tutorial written in Visual Basic. At the time, using the fixed-function pipeline generally made sense, because hardware was not advanced enough to achieve the same functionality with shaders at the same speed. But today we see graphics APIs dropping or severely deprecating support for it, and it has become an artifact of the past: a good and useful artifact that makes us nostalgic, but one we should stay away from all the same.

These sweet Fresnel reflections and refractions, generated by a demo from OGRE (the Object-Oriented Graphics Rendering Engine), could never have been achieved by sticking to the methods that old DirectX 8 or OpenGL 1.1 tutorials advise.

Conclusion

If you are into serious game development, the fixed-function pipeline is a relic of bygone days. If you're thinking of getting into 3D graphics, my advice (and the advice of many, many developers out there) is simply to avoid it.

If you see light positions being passed to the graphics API (rather than as shader parameters), or API function calls such as glFogfv, run like the wind and don't look back. There's a brave new world of programmable GPUs out there, and it's been around for a long time. Anything else is probably just a waste of your time.
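To show what replaces a call like glFogfv, here's the same exponential-squared fog written out in a fragment shader; the uniform and varying names are my own, and the flat surface color is a stand-in for your real shading:

    // GL_EXP2-style fog, spelled out where you can see it and change it.
    // v_eyeDepth is assumed to be written by a matching vertex shader.
    static const char* kFogFragmentSrc =
        "#version 110\n"
        "uniform vec4 u_fogColor;\n"
        "uniform float u_fogDensity;\n"
        "varying float v_eyeDepth;\n"
        "void main() {\n"
        "    vec4 surface = vec4(1.0); // your lighting/texturing goes here\n"
        "    float d = u_fogDensity * v_eyeDepth;\n"
        "    float f = clamp(exp(-d * d), 0.0, 1.0);\n"
        "    gl_FragColor = mix(u_fogColor, surface, f);\n"
        "}\n";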

Even if you're only into 2D graphics, it's still wise not to rely on the FFP anymore. (Of course, that's as long as you're okay with not supporting some ancient hardware.) Shaders can provide you with some great, lightning-fast effects: image blurring, sharpening, warp grids, vector rendering, and large-scale particle or physics simulation can all be done on the GPU, and they benefit both 2D and 3D games.
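As one small 2D example, here's a 3×3 box blur as a fragment shader, a minimal sketch in which u_texelSize (one over the texture dimensions) and the other names are assumptions of mine:

    // A 3x3 box blur: nine texture samples per fragment, averaged.
    // The GPU runs this in parallel for every pixel drawn.
    static const char* kBlurFragmentSrc =
        "#version 110\n"
        "uniform sampler2D u_texture;\n"
        "uniform vec2 u_texelSize; // 1.0 / texture width and height\n"
        "varying vec2 v_uv;\n"
        "void main() {\n"
        "    vec4 sum = vec4(0.0);\n"
        "    for (int x = -1; x <= 1; x++)\n"
        "        for (int y = -1; y <= 1; y++)\n"
        "            sum += texture2D(u_texture,\n"
        "                v_uv + vec2(float(x), float(y)) * u_texelSize);\n"
        "    gl_FragColor = sum / 9.0;\n"
        "}\n";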

So, again, my advice, even if you're not explicitly going to learn 3D game development, is to learn to write shaders. They are fun to work with, and I guarantee you'll spend many amusing hours perfecting a cool shader effect: a dynamic skybox, car paint, parallel-split shadow mapping, or whatever your heart desires. At least, that's what happened to me: after being stuck with the fixed pipeline because of some limitation (as I was, back in the day, in order to get acceptable performance on my GeForce FX 5200), programming the GPU is a blast and heaps of fun.

I hope I have explained, to those for whom it was unclear, how 3D graphics used to work long ago and how it works now, and I hope I have convinced the few of you on the verge of following NeHe or Swiftless tutorials to do otherwise and go look at something more modern. As always, I might have made some mistakes, so feel free to point them out in the comments. Until next time!
