Modern OpenGL Guide
Alexander Overvoorde
January 2019

Contents

Introduction
    E-book
    Credits
    Prerequisites
Window and OpenGL context
    Setup
    Libraries
        SFML
        SDL
        GLFW
        Others
    SFML
        Building
        Code
    SDL
        Building
        Code
    GLFW
        Building
        Code
    One more thing
The graphics pipeline
    Vertex input
    Shaders
        Vertex shader
        Fragment shader
        Compiling shaders
        Combining shaders into a program
        Making the link between vertex data and attributes
    Vertex Array Objects
    Drawing
    Uniforms
    Adding some more colors
    Element buffers
    Exercises
Textures objects and parameters
    Wrapping
    Filtering
    Loading texture images
        SOIL
        Alternative options
    Using a texture
    Texture units
    Exercises
Matrices
    Basic operations
        Addition and subtraction
        Scalar product
    Matrix-Vector product
        Translation
        Scaling
        Rotation
    Matrix-Matrix product
        Combining transformations
    Transformations in OpenGL
        Model matrix
        View matrix
        Projection matrix
        Putting it all together
    Using transformations for 3D
        A simple transformation
        Going 3D
    Exercises
Extra buffers
    Preparations
    Depth buffer
    Stencil buffer
        Setting values
        Using values in drawing operations
    Planar reflections
    Exercises
Framebuffers
    Creating a new framebuffer
    Attachments
        Texture images
        Renderbuffer Object images
    Using a framebuffer
    Post-processing
    Changing the code
    Post-processing effects
        Color manipulation
        Blur
        Sobel
    Conclusion
    Exercises
Geometry shaders
    Setup
    Basic geometry shader
        Input types
        Output types
        Vertex input
        Vertex output
    Creating a geometry shader
    Geometry shaders and vertex attributes
    Dynamically generating geometry
    Conclusion
    Exercises
Transform feedback
    Basic feedback
    Feedback transform and geometry shaders
    Variable feedback
    Conclusion
    Exercises
Introduction
This guide will teach you the basics of using OpenGL to develop modern graphics
applications. There are a lot of other guides on this topic, but there are some
major points where this guide differs from those. We will not be discussing
any of the old parts of the OpenGL specification. That means you’ll be taught
how to implement things yourself, instead of using deprecated functions like
glBegin and glLight. Anything that is not directly related to OpenGL itself,
like creating a window and loading textures from files, will be done using a few
small libraries.


To show you how much it pays off to do things yourself, this guide also contains a
lot of interactive examples to make it both fun and easy to learn all the different
aspects of using a low-level graphics library like OpenGL!
As an added bonus, you always have the opportunity to ask questions at the end
of each chapter in the comments section. I’ll try to answer as many questions as
possible, but always remember that there are plenty of people out there who are
willing to help you with your issues. Make sure to help us help you by specifying
your platform, compiler, the relevant code section, the result you expect and
what is actually happening.

E-book
This guide is now available in e-book formats as well:
• EPUB
• PDF

Credits
Thanks to all of the contributors for their help with improving the quality of this
tutorial! Special thanks to the following people for their essential contributions
to the site:
• Toby Rufinus (code fixes, improved images, sample solutions for last
chapters)
• Eric Engeström (making the site mobile friendly)
• Elliott Sales de Andrade (improving article text)
• Aaron Hamilton (improving article text)

Prerequisites
Before we can take off, you need to make sure you have all the things you need.
• A reasonable amount of experience with C++
• Graphics card compatible with OpenGL 3.2
• SFML, GLFW or SDL for creating the context and handling input
• GLEW to use newer OpenGL functions
• SOIL for textures
• GLM for vectors and matrices

Context creation will be explained for SFML, GLFW and SDL, so use whatever
library suits you best. See the next chapter for the differences between the
three if you’re not sure which one to use.


You also have the option of creating the context yourself using Win32,
Xlib or Cocoa, but your code will not be portable anymore. That
means you can not use the same code for all platforms.
If you’ve got everything you need, let’s begin.

Window and OpenGL context
Before you can start drawing things, you need to initialize OpenGL. This is done
by creating an OpenGL context, which is essentially a state machine that stores
all data related to the rendering of your application. When your application
closes, the OpenGL context is destroyed and everything is cleaned up.
The problem is that creating a window and an OpenGL context is not part of the
OpenGL specification. That means it is done differently on every platform out
there! Developing applications using OpenGL is all about being portable, so this
is the last thing we need. Luckily there are libraries out there that abstract this
process, so that you can maintain the same codebase for all supported platforms.
While the available libraries out there all have advantages and disadvantages,
they do all have a certain program flow in common. You start by specifying the
properties of the game window, such as the title and the size and the properties
of the OpenGL context, like the anti-aliasing level. Your application will then
initiate the event loop, which contains an important set of tasks that need to
be completed over and over again until the window closes. These tasks usually
handle window events like mouse clicks, updating the rendering state and then
drawing.
This program flow would look something like this in pseudocode:
#include <libraryheaders>

int main()
{
    createWindow(title, width, height);
    createOpenGLContext(settings);

    while (windowOpen)
    {
        while (event = newEvent())
            handleEvent(event);

        updateScene();

        drawGraphics();
        presentGraphics();
    }

    return 0;
}

When rendering a frame, the results will be stored in an offscreen buffer known
as the back buffer to make sure the user only sees the final result. The
presentGraphics() call will copy the result from the back buffer to the visible
window buffer, the front buffer. Every application that makes use of real-time
graphics will have a program flow that comes down to this, whether it uses a
library or native code.
Supporting resizable windows with OpenGL introduces some complexities as
resources need to be reloaded and buffers need to be recreated to fit the new
window size. It’s more convenient for the learning process to not bother with
such details yet, so we’ll only deal with fixed size (fullscreen) windows for now.

Setup
Instead of reading this chapter, you can make use of the OpenGL
quickstart boilerplate, which makes setting up an OpenGL project
with all of the required libraries very easy. You’ll just have to install
SOIL separately.
The first thing to do when starting a new OpenGL project is to dynamically link
with OpenGL.
• Windows: Add opengl32.lib to your linker input
• Linux: Include -lGL in your compiler options
• OS X: Add -framework OpenGL to your compiler options
Make sure that you do not include opengl32.dll with your application. This file
is already included with Windows and may differ per version, which will cause
problems on other computers.
The rest of the steps depend on which library you choose to use for creating the
window and context.

Libraries
There are many libraries around that can create a window and an accompanying
OpenGL context for you. There is no best library out there, because everyone has
different needs and ideals. I’ve chosen to discuss the process for the three most
popular libraries here for completeness, but you can find more detailed guides
on their respective websites. All code after this chapter will be independent of
your choice of library here.


SFML
SFML is a cross-platform C++ multimedia library that provides access to
graphics, input, audio, networking and the system. The downside of using this
library is that it tries hard to be an all-in-one solution. You have little to no
control over the creation of the OpenGL context, as it was designed to be used
with its own set of drawing functions.
SDL
SDL is also a cross-platform multimedia library, but targeted at C. That makes
it a bit rougher to use for C++ programmers, but it’s an excellent alternative
to SFML. It supports more exotic platforms and most importantly, offers more
control over the creation of the OpenGL context than SFML.
GLFW
GLFW, as the name implies, is a C library specifically designed for use with
OpenGL. Unlike SDL and SFML it only comes with the absolute necessities:
window and context creation and input management. It offers the most control
over the OpenGL context creation out of these three libraries.
Others
There are a few other options, like freeglut and OpenGLUT, but I personally
think the aforementioned libraries are vastly superior in control and ease of
use, and on top of that they are more up-to-date.

SFML
The OpenGL context is created implicitly when opening a new window in SFML,
so that’s all you have to do. SFML also comes with a graphics package, but
since we’re going to use OpenGL directly, we don’t need it.
Building
After you’ve downloaded the SFML binaries package or compiled it yourself,
you’ll find the needed files in the lib and include folders.
• Add the lib folder to your library path and link with sfml-system
and sfml-window. With Visual Studio on Windows, link with the
sfml-system-s and sfml-window-s files in lib/vc2008 instead.


• Add the include folder to your include path.
The SFML libraries have a simple naming convention for different
configurations. If you want to dynamically link, simply remove the -s
from the name, define SFML_DYNAMIC and copy the shared libraries.
If you want to use the binaries with debug symbols, additionally
append -d to the name.
To verify that you’ve done this correctly, try compiling and running the following
code:
#include <SFML/System.hpp>

int main()
{
    sf::sleep(sf::seconds(1.f));
    return 0;
}
It should show a console application and exit after a second. If you run into any
trouble, you can find more detailed information for Visual Studio, Code::Blocks
and gcc in the tutorials on the SFML website.
Code
Start by including the window package and defining the entry point of your
application.
#include <SFML/Window.hpp>

int main()
{
    return 0;
}
A window can be opened by creating a new instance of sf::Window. The basic
constructor takes an sf::VideoMode structure, a title for the window and a
window style. The sf::VideoMode structure specifies the width, height and
optionally the pixel depth of the window. Finally, the requirement for a fixed
size window is specified by overriding the default style of
Style::Resize|Style::Close. It is also possible to create a fullscreen window
by passing Style::Fullscreen as window style.
sf::ContextSettings settings;
settings.depthBits = 24;
settings.stencilBits = 8;
settings.antialiasingLevel = 2; // Optional
// Request OpenGL version 3.2
settings.majorVersion = 3;
settings.minorVersion = 2;
settings.attributeFlags = sf::ContextSettings::Core;

sf::Window window(sf::VideoMode(800, 600), "OpenGL", sf::Style::Close, settings);
The constructor can also take an sf::ContextSettings structure that allows
you to request an OpenGL context and specify the anti-aliasing level and the
accuracy of the depth and stencil buffers. The latter two will be discussed later,
so you don’t have to worry about these yet. In the latest version of SFML, you
do need to request these manually with the code above. We request an OpenGL
context of version 3.2 in the core profile as opposed to the compatibility mode
which is default. Using the default compatibility mode may cause problems
while using modern OpenGL on some systems, thus we use the core profile.
Note that these settings are only a hint: SFML will try to find the closest valid
match. It will, for example, likely create a context with a newer OpenGL version
than we specified.
When running this, you’ll notice that the application instantly closes after
creating the window. Let’s add the event loop to deal with that.
bool running = true;
while (running)
{
    sf::Event windowEvent;
    while (window.pollEvent(windowEvent))
    {
    }
}

When something happens to your window, an event is posted to the event queue.
There is a wide variety of events, including window size changes, mouse movement
and key presses. It’s up to you to decide which events require additional action,
but there is at least one that needs to be handled to make your application run
well.
switch (windowEvent.type)
{
case sf::Event::Closed:
running = false;
break;
}
When the user attempts to close the window, the Closed event is fired and we
act on that by exiting the application. Try removing that line and you’ll see that
it’s impossible to close the window by normal means. If you prefer a fullscreen

9

window, you should add the escape key as a means to close the window:
case sf::Event::KeyPressed:
if (windowEvent.key.code == sf::Keyboard::Escape)
running = false;
break;
You have your window and the important events are acted upon, so you’re now
ready to put something on the screen. After drawing something, you can swap
the back buffer and the front buffer with window.display().
When you run your application, you should see something like this:

Figure 1:
Note that SFML allows you to have multiple windows. If you want to make
use of this feature, make sure to call window.setActive() to activate a certain
window for drawing operations.
Now that you have a window and a context, there’s one more thing that needs
to be done.


SDL
SDL comes with many different modules, but for creating a window with an
accompanying OpenGL context we’re only interested in the video module. It
will take care of everything we need, so let’s see how to use it.
Building
After you’ve downloaded the SDL binaries or compiled them yourself, you’ll find
the needed files in the lib and include folders.
• Add the lib folder to your library path and link with SDL2 and SDL2main.
• SDL uses dynamic linking, so make sure that the shared library (SDL2.dll,
SDL2.so) is with your executable.
• Add the include folder to your include path.
To verify that you’re ready, try compiling and running the following snippet of
code:
#include <SDL.h>

int main(int argc, char *argv[])
{
    SDL_Init(SDL_INIT_EVERYTHING);
    SDL_Delay(1000);

    SDL_Quit();
    return 0;
}

It should show a console application and exit after a second. If you run into any
trouble, you can find more detailed information for all kinds of platforms and
compilers in the tutorials on the web.
Code
Start by defining the entry point of your application and include the headers for
SDL.
#include <SDL.h>
#include <SDL_opengl.h>

int main(int argc, char *argv[])
{
    return 0;
}

To use SDL in an application, you need to tell SDL which modules you need
and when to unload them. You can do this with two lines of code.
SDL_Init(SDL_INIT_VIDEO);
...
SDL_Quit();
return 0;
The SDL_Init function takes a bitfield with the modules to load. The video
module includes everything you need to create a window and an OpenGL context.
Before doing anything else, first tell SDL that you want a forward compatible
OpenGL 3.2 context:
SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 2);
SDL_GL_SetAttribute(SDL_GL_STENCIL_SIZE, 8);
You also need to tell SDL to create a stencil buffer, which will be relevant
for a later chapter. After that, create a window using the SDL_CreateWindow
function.
SDL_Window* window = SDL_CreateWindow("OpenGL", 100, 100, 800, 600, SDL_WINDOW_OPENGL);
The first argument specifies the title of the window, the next two are the
X and Y position and the two after those are the width and height. If
the position doesn’t matter, you can specify SDL_WINDOWPOS_UNDEFINED or
SDL_WINDOWPOS_CENTERED for the second and third argument. The final parameter specifies window properties like:
• SDL_WINDOW_OPENGL - Create a window ready for OpenGL.
• SDL_WINDOW_RESIZABLE - Create a resizable window.
• SDL_WINDOW_FULLSCREEN - Create a fullscreen window (optional).
After you’ve created the window, you can create the OpenGL context:
SDL_GLContext context = SDL_GL_CreateContext(window);
...
SDL_GL_DeleteContext(context);
The context should be destroyed right before calling SDL_Quit() to clean up
the resources.
Then comes the most important part of the program, the event loop:
SDL_Event windowEvent;
while (true)
{
    if (SDL_PollEvent(&windowEvent))
    {
        if (windowEvent.type == SDL_QUIT) break;
    }

    SDL_GL_SwapWindow(window);
}

The SDL_PollEvent function will check if there are any new events that have to
be handled. An event can be anything from a mouse click to the user moving the
window. Right now, the only event you need to respond to is the user pressing
the little X button in the corner of the window. By breaking from the main
loop, SDL_Quit is called and the window and graphics surface are destroyed.
SDL_GL_SwapWindow here takes care of swapping the front and back buffer after
new things have been drawn by your application.
If you have a fullscreen window, it would be preferable to use the escape key as
a means to close the window.
if (windowEvent.type == SDL_KEYUP &&
windowEvent.key.keysym.sym == SDLK_ESCAPE) break;
When you run your application now, you should see something like this:

Figure 2:


Now that you have a window and a context, there’s one more thing that needs
to be done.

GLFW
GLFW is tailored specifically for using OpenGL, so it is by far the easiest to use
for our purpose.
Building
After you’ve downloaded the GLFW binaries package from the website or
compiled the library yourself, you’ll find the headers in the include folder and
the libraries for your compiler in one of the lib folders.
• Add the appropriate lib folder to your library path and link with GLFW.
• Add the include folder to your include path.
You can also dynamically link with GLFW if you want to. Simply link
with GLFWDLL and include the shared library with your executable.
Here is a simple snippet of code to check your build configuration:
#include <GLFW/glfw3.h>
#include <chrono>
#include <thread>

int main()
{
    glfwInit();
    std::this_thread::sleep_for(std::chrono::seconds(1));
    glfwTerminate();
}
It should show a console application and exit after a second. If you run into any
trouble, just ask in the comments and you’ll receive help.
Code
Start by simply including the GLFW header and define the entry point of the
application.
#include <GLFW/glfw3.h>

int main()
{
    return 0;
}


To use GLFW, it needs to be initialised when the program starts and you need
to give it a chance to clean up when your program closes. The glfwInit and
glfwTerminate functions are geared towards that purpose.
glfwInit();
...
glfwTerminate();
The next thing to do is creating and configuring the window. Before calling
glfwCreateWindow, we first set some options.
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
glfwWindowHint(GLFW_RESIZABLE, GL_FALSE);
GLFWwindow* window = glfwCreateWindow(800, 600, "OpenGL", nullptr, nullptr); // Windowed
GLFWwindow* window =
glfwCreateWindow(800, 600, "OpenGL", glfwGetPrimaryMonitor(), nullptr); // Fullscreen
You’ll immediately notice the first three lines of code that are only relevant
for this library. It is specified that we require the OpenGL context to support
OpenGL 3.2 at the least. The GLFW_OPENGL_PROFILE option specifies that we
want a context that only supports the new core functionality.
The first two parameters of glfwCreateWindow specify the width and
height of the drawing surface and the third parameter specifies the window
title. The fourth parameter should be set to NULL for windowed mode and
glfwGetPrimaryMonitor() for fullscreen mode. The last parameter allows you
to specify an existing OpenGL context to share resources like textures with.
The glfwWindowHint function is used to specify additional requirements for a
window.
After creating the window, the OpenGL context has to be made active:
glfwMakeContextCurrent(window);
Next comes the event loop, which in the case of GLFW works a little differently
than the other libraries. GLFW uses a so-called closed event loop, which means
you only have to handle events when you need to. That means your event loop
will look really simple:
while(!glfwWindowShouldClose(window))
{
glfwSwapBuffers(window);
glfwPollEvents();
}


The only required functions in the loop are glfwSwapBuffers to swap the back
buffer and front buffer after you’ve finished drawing and glfwPollEvents to
retrieve window events. If you are making a fullscreen application, you should
handle the escape key to easily return to the desktop.
if (glfwGetKey(window, GLFW_KEY_ESCAPE) == GLFW_PRESS)
glfwSetWindowShouldClose(window, GL_TRUE);
If you want to learn more about handling input, you can refer to the documentation.

Figure 3:
You should now have a window or a full screen surface with an OpenGL context.
Before you can start drawing stuff however, there’s one more thing that needs
to be done.

One more thing
Unfortunately, we can’t just call the functions we need yet. This is because it’s
the duty of the graphics card vendor to implement OpenGL functionality in
their drivers based on what the graphics card supports. You wouldn’t want your
program to only be compatible with a single driver version and graphics card,
so we’ll have to do something clever.
Your program needs to check which functions are available at runtime and link
with them dynamically. This is done by finding the addresses of the functions,
assigning them to function pointers and calling them. That looks something like
this:
Don’t try to run this code, it’s just for demonstration purposes.
// Specify prototype of function
typedef void (*GENBUFFERS) (GLsizei, GLuint*);
// Load address of function and assign it to a function pointer
GENBUFFERS glGenBuffers = (GENBUFFERS)wglGetProcAddress("glGenBuffers");
// or Linux:
GENBUFFERS glGenBuffers = (GENBUFFERS)glXGetProcAddress((const GLubyte *) "glGenBuffers");
// or OSX:
GENBUFFERS glGenBuffers = (GENBUFFERS)NSGLGetProcAddress("glGenBuffers");
// Call function as normal
GLuint buffer;
glGenBuffers(1, &buffer);
Let me begin by asserting that it is perfectly normal to be scared by this snippet
of code. You may not be familiar with the concept of function pointers yet, but
at least try to roughly understand what is happening here. You can imagine
that going through this process of defining prototypes and finding addresses of
functions is very tedious and in the end nothing more than a complete waste of
time.
The good news is that there are libraries that have solved this problem for us.
The most popular and best maintained library right now is GLEW and there’s
no reason for that to change anytime soon. Nevertheless, the alternative library
GLEE works almost completely the same save for the initialization and cleanup
code.
If you haven’t built GLEW yet, do so now. We’ll now add GLEW to your
project.
• Start by linking your project with the static GLEW library in the lib
folder. This is either glew32s.lib or GLEW depending on your platform.
• Add the include folder to your include path.
Now just include the header in your program, but make sure that it is included
before the OpenGL headers or the library you used to create your window.
#define GLEW_STATIC
#include <GL/glew.h>


Don’t forget to define GLEW_STATIC either using this preprocessor directive or by
adding the -DGLEW_STATIC directive to your compiler command-line parameters
or project settings.
If you prefer to dynamically link with GLEW, leave out the define and
link with glew32.lib instead of glew32s.lib on Windows. Don’t
forget to include glew32.dll or libGLEW.so with your executable!
Now all that’s left is calling glewInit() after the creation of your window and
OpenGL context. The glewExperimental line is necessary to force GLEW to
use a modern OpenGL method for checking if a function is available.
glewExperimental = GL_TRUE;
glewInit();
Make sure that you’ve set up your project correctly by calling the glGenBuffers
function, which was loaded by GLEW for you!
GLuint vertexBuffer;
glGenBuffers(1, &vertexBuffer);
printf("%u\n", vertexBuffer);
Your program should compile and run without issues and display the number 1
in your console. If you need more help with using GLEW, you can refer to the
website or ask in the comments.
Now that we’re past all of the configuration and initialization work, I’d advise
you to make a copy of your current project so that you won’t have to write all
of the boilerplate code again when starting a new project.
Now, let’s get to drawing things!

The graphics pipeline
By learning OpenGL, you’ve decided that you want to do all of the hard work
yourself. That inevitably means that you’ll be thrown in the deep, but once
you understand the essentials, you’ll see that doing things the hard way doesn’t
have to be so difficult after all. To top that all, the exercises at the end of this
chapter will show you the sheer amount of control you have over the rendering
process by doing things the modern way!
The graphics pipeline covers all of the steps that follow one another in processing
the input data to get to the final output image. I'll explain these steps with the
help of the following illustration.
Figure 4:

It all begins with the vertices: these are the points from which shapes like
triangles will later be constructed. Each of these points is stored with certain
attributes, and it's up to you to decide what kind of attributes you want to store.
Commonly used attributes are 3D position in the world and texture coordinates.
The vertex shader is a small program running on your graphics card that processes
every one of these input vertices individually. This is where the perspective
transformation takes place, which projects vertices with a 3D world position
onto your 2D screen! It also passes important attributes like color and texture
coordinates further down the pipeline.
After the input vertices have been transformed, the graphics card will form
triangles, lines or points out of them. These shapes are called primitives because
they form the basis of more complex shapes. There are some additional drawing
modes to choose from, like triangle strips and line strips. These reduce the
number of vertices you need to pass if you want to create objects where each
next primitive is connected to the last one, like a continuous line consisting of
several segments.
The following step, the geometry shader, is completely optional and was only
recently introduced. Unlike the vertex shader, the geometry shader can output
more data than comes in. It takes the primitives from the shape assembly
stage as input and can either pass a primitive through down to the rest of the
pipeline, modify it first, completely discard it or even replace it with other
primitive(s). Since the communication between the GPU and the rest of the
PC is relatively slow, this stage can help you reduce the amount of data that
needs to be transferred. In a voxel game, for example, you could pass each cube
as a single point vertex, along with attributes for its world position, color
and material, and have the geometry shader produce the actual cube geometry
from that point!


After the final list of shapes is composed and converted to screen coordinates,
the rasterizer turns the visible parts of the shapes into pixel-sized fragments.
The vertex attributes coming from the vertex shader or geometry shader are
interpolated and passed as input to the fragment shader for each fragment. As
you can see in the image, the colors are smoothly interpolated over the fragments
that make up the triangle, even though only 3 points were specified.
The fragment shader processes each individual fragment along with its interpolated attributes and should output the final color. This is usually done by
sampling from a texture using the interpolated texture coordinate vertex attributes or simply outputting a color. In more advanced scenarios, there could
also be calculations related to lighting and shadowing and special effects in this
program. The shader also has the ability to discard certain fragments, which
means that a shape will be see-through there.
Finally, the end result is composed from all these shape fragments by blending
them together and performing depth and stencil testing. All you need to know
about these last two right now, is that they allow you to use additional rules to
throw away certain fragments and let others pass. For example, if one triangle
is obscured by another triangle, the fragment of the closer triangle should end
up on the screen.
Now that you know how your graphics card turns an array of vertices into an
image on the screen, let’s get to work!

Vertex input
The first thing you have to decide on is what data the graphics card is going to
need to draw your scene correctly. As mentioned above, this data comes in the
form of vertex attributes. You’re free to come up with any kind of attribute you
want, but it all inevitably begins with the world position. Whether you’re doing
2D graphics or 3D graphics, this is the attribute that will determine where the
objects and shapes end up on your screen in the end.
Device coordinates
When your vertices have been processed by the pipeline outlined
above, their coordinates will have been transformed into device coordinates. Device X and Y coordinates are mapped to the screen
between -1 and 1.


Just like a graph, the center has coordinates (0,0) and the y
axis is positive above the center. This seems unnatural because
graphics applications usually have (0,0) in the top-left corner and
(width,height) in the bottom-right corner, but it’s an excellent
way to simplify 3D calculations and to stay resolution independent.
The triangle above consists of 3 vertices positioned at (0,0.5), (0.5,-0.5) and
(-0.5,-0.5) in clockwise order. It is clear that the only variation between the
vertices here is the position, so that’s the only attribute we need. Since we’re
passing the device coordinates directly, an X and Y coordinate suffices for the
position.
OpenGL expects you to send all of your vertices in a single array, which may
be confusing at first. To understand the format of this array, let’s see what it
would look like for our triangle.


float vertices[] = {
     0.0f,  0.5f, // Vertex 1 (X, Y)
     0.5f, -0.5f, // Vertex 2 (X, Y)
    -0.5f, -0.5f  // Vertex 3 (X, Y)
};
As you can see, this array should simply be a list of all vertices with their
attributes packed together. The order in which the attributes appear doesn’t
matter, as long as it’s the same for each vertex. The order of the vertices doesn’t
have to be sequential (i.e. the order in which shapes are formed), but this requires
us to provide extra data in the form of an element buffer. This will be discussed
at the end of this chapter as it would just complicate things for now.
The next step is to upload this vertex data to the graphics card. This is important
because the memory on your graphics card is much faster and you won’t have
to send the data again every time your scene needs to be rendered (about 60
times per second).
This is done by creating a Vertex Buffer Object (VBO):
GLuint vbo;
glGenBuffers(1, &vbo); // Generate 1 buffer
The memory is managed by OpenGL, so instead of a pointer you get a positive
number as a reference to it. GLuint is simply a cross-platform substitute for
unsigned int, just like GLint is one for int. You will need this number to
make the VBO active and to destroy it when you’re done with it.
To upload the actual data to it you first have to make it the active object by
calling glBindBuffer:
glBindBuffer(GL_ARRAY_BUFFER, vbo);
As hinted by the GL_ARRAY_BUFFER enum value there are other types of buffers,
but they are not important right now. This statement makes the VBO we just
created the active array buffer. Now that it’s active we can copy the vertex
data to it.
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
Notice that this function doesn’t refer to the id of our VBO, but instead to the
active array buffer. The second parameter specifies the size in bytes. The final
parameter is very important and its value depends on the usage of the vertex
data. I’ll outline the ones related to drawing here:
• GL_STATIC_DRAW: The vertex data will be uploaded once and drawn many
times (e.g. the world).
• GL_DYNAMIC_DRAW: The vertex data will be created once, changed from
time to time, but drawn many times more than that.
• GL_STREAM_DRAW: The vertex data will be uploaded once and drawn once.

This usage value will determine in what kind of memory the data is stored
on your graphics card for the highest efficiency. For example, VBOs with
GL_STREAM_DRAW as type may store their data in memory that allows faster
writing in favour of slightly slower drawing.
The vertices with their attributes have been copied to the graphics card now,
but they’re not quite ready to be used yet. Remember that we can make up any
kind of attribute we want and in any order, so now comes the moment where
you have to explain to the graphics card how to handle these attributes. This is
where you’ll see how flexible modern OpenGL really is.

Shaders
As discussed earlier, there are three shader stages your vertex data will pass
through. Each shader stage has a strictly defined purpose and in older versions
of OpenGL, you could only slightly tweak what happened and how it happened.
With modern OpenGL, it’s up to us to instruct the graphics card what to do
with the data. This is why it’s possible to decide per application what attributes
each vertex should have. You’ll have to implement both the vertex and fragment
shader to get something on the screen, the geometry shader is optional and is
discussed later.
Shaders are written in a C-style language called GLSL (OpenGL Shading Language). OpenGL will compile your program from source at runtime and copy it
to the graphics card. Each version of OpenGL has its own version of the shader
language with availability of a certain feature set and we will be using GLSL
1.50. This version number may seem a bit off when we’re using OpenGL 3.2,
but that’s because shaders were only introduced in OpenGL 2.0 as GLSL 1.10.
Starting from OpenGL 3.3, this problem was solved and the GLSL version is
the same as the OpenGL version.
Vertex shader
The vertex shader is a program on the graphics card that processes each vertex
and its attributes as they appear in the vertex array. Its duty is to output the
final vertex position in device coordinates and to output any data the fragment
shader requires. That’s why the 3D transformation should take place here. The
fragment shader depends on attributes like the color and texture coordinates,
which will usually be passed from input to output without any calculations.
Remember that our vertex position is already specified as device coordinates
and no other attributes exist, so the vertex shader will be fairly bare bones.
#version 150 core
in vec2 position;


void main()
{
    gl_Position = vec4(position, 0.0, 1.0);
}
The #version preprocessor directive is used to indicate that the code that follows
is GLSL 1.50 code using OpenGL’s core profile. Next, we specify that there is
only one attribute, the position. Apart from the regular C types, GLSL has
built-in vector and matrix types identified by vec* and mat* identifiers. The
type of the values within these constructs is always a float. The number after
vec specifies the number of components (x, y, z, w) and the number after mat
specifies the number of rows/columns. Since the position attribute consists of
only an X and Y coordinate, vec2 is perfect.
You can be quite creative when working with these vertex types. In
the example above a shortcut was used to set the first two components
of the vec4 to those of vec2. These two lines are equal:
gl_Position = vec4(position, 0.0, 1.0);
gl_Position = vec4(position.x, position.y, 0.0, 1.0);
When you’re working with colors, you can also access the individual
components with r, g, b and a instead of x, y, z and w. This makes
no difference and can help with clarity.
The final position of the vertex is assigned to the special gl_Position variable,
because the position is needed for primitive assembly and many other built-in
processes. For these to function correctly, the last value w needs to have a value of
1.0f. Other than that, you’re free to do anything you want with the attributes
and we’ll see how to output those when we add color to the triangle later in this
chapter.
Fragment shader
The output from the vertex shader is interpolated over all the pixels on the
screen covered by a primitive. These pixels are called fragments and this is
what the fragment shader operates on. Just like the vertex shader it has one
mandatory output, the final color of a fragment. It’s up to you to write the code
for computing this color from vertex colors, texture coordinates and any other
data coming from the vertex shader.
Our triangle only consists of white pixels, so the fragment shader simply outputs
that color every time:
#version 150 core
out vec4 outColor;


void main()
{
    outColor = vec4(1.0, 1.0, 1.0, 1.0);
}
You’ll immediately notice that we’re not using some built-in variable for outputting the color, say gl_FragColor. This is because a fragment shader can
in fact output multiple colors and we’ll see how to handle this when actually
loading these shaders. The outColor variable uses the type vec4, because each
color consists of a red, green, blue and alpha component. Colors in OpenGL are
generally represented as floating point numbers between 0.0 and 1.0 instead of
the common 0 and 255.
Compiling shaders
Compiling shaders is easy once you have loaded the source code (either from
file or as a hard-coded string). You can easily include your shader source in the
C++ code through C++11 raw string literals:
const char* vertexSource = R"glsl(
    #version 150 core

    in vec2 position;

    void main()
    {
        gl_Position = vec4(position, 0.0, 1.0);
    }
)glsl";
Just like vertex buffers, creating a shader itself starts with creating a shader
object and loading data into it.
GLuint vertexShader = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vertexShader, 1, &vertexSource, NULL);
Unlike VBOs, you can simply pass a reference to shader functions instead of
making it active or anything like that. The glShaderSource function can take
multiple source strings in an array, but you’ll usually have your source code in
one char array. The last parameter can contain an array of source code string
lengths, passing NULL simply makes it stop at the null terminator.
All that’s left is compiling the shader into code that can be executed by the
graphics card now:
glCompileShader(vertexShader);


Be aware that if the shader fails to compile, e.g. because of a syntax error,
glGetError will not report an error! See the block below for info on how to
debug shaders.
Checking if a shader compiled successfully
GLint status;
glGetShaderiv(vertexShader, GL_COMPILE_STATUS, &status);
If status is equal to GL_TRUE, then your shader was compiled successfully.

Retrieving the compile log
char buffer[512];
glGetShaderInfoLog(vertexShader, 512, NULL, buffer);
This will store the first 511 bytes + null terminator of the compile
log in the specified buffer. The log may also report useful warnings
even when compiling was successful, so it’s useful to check it out
from time to time when you develop your shaders.
The fragment shader is compiled in exactly the same way:
GLuint fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fragmentShader, 1, &fragmentSource, NULL);
glCompileShader(fragmentShader);
Again, be sure to check if your shader was compiled successfully, because it will
save you from a headache later on.
Combining shaders into a program
Up until now the vertex and fragment shaders have been two separate objects.
While they’ve been programmed to work together, they aren’t actually connected
yet. This connection is made by creating a program out of these two shaders.
GLuint shaderProgram = glCreateProgram();
glAttachShader(shaderProgram, vertexShader);
glAttachShader(shaderProgram, fragmentShader);
Since a fragment shader is allowed to write to multiple buffers, you need to
explicitly specify which output is written to which buffer. This needs to happen
before linking the program. However, since this is 0 by default and there’s only
one output right now, the following line of code is not necessary:
glBindFragDataLocation(shaderProgram, 0, "outColor");
Use glDrawBuffers when rendering to multiple buffers, because only
the first output will be enabled by default.


After attaching both the fragment and vertex shaders, the connection is made by
linking the program. It is allowed to make changes to the shaders after they’ve
been added to a program (or multiple programs!), but the actual result will
not change until a program has been linked again. It is also possible to attach
multiple shaders for the same stage (e.g. fragment) if they’re parts forming the
whole shader together. A shader object can be deleted with glDeleteShader,
but it will not actually be removed before it has been detached from all programs
with glDetachShader.
glLinkProgram(shaderProgram);
To actually start using the shaders in the program, you just have to call:
glUseProgram(shaderProgram);
Just like a vertex buffer, only one program can be active at a time.
Making the link between vertex data and attributes
Although we have our vertex data and shaders now, OpenGL still doesn’t know
how the attributes are formatted and ordered. You first need to retrieve a
reference to the position input in the vertex shader:
GLint posAttrib = glGetAttribLocation(shaderProgram, "position");
The location is a number depending on the order of the input definitions. The
first and only input position in this example will always have location 0.
With the reference to the input, you can specify how the data for that input is
retrieved from the array:
glVertexAttribPointer(posAttrib, 2, GL_FLOAT, GL_FALSE, 0, 0);
The first parameter references the input. The second parameter specifies the
number of values for that input, which is the same as the number of components
of the vec. The third parameter specifies the type of each component and
the fourth parameter specifies whether the input values should be normalized
between -1.0 and 1.0 (or 0.0 and 1.0 depending on the format) if they aren’t
floating point numbers.
The last two parameters are arguably the most important here as they specify
how the attribute is laid out in the vertex array. The first number specifies the
stride, or how many bytes are between each position attribute in the array. The
value 0 means that there is no data in between. This is currently the case as the
position of each vertex is immediately followed by the position of the next vertex.
The last parameter specifies the offset, or how many bytes from the start of the
array the attribute occurs. Since there are no other attributes, this is 0 as well.
It is important to know that this function will store not only the stride and the


offset, but also the VBO that is currently bound to GL_ARRAY_BUFFER. That
means that you don’t have to explicitly bind the correct VBO when the actual
drawing functions are called. This also implies that you can use a different VBO
for each attribute.
Don’t worry if you don’t fully understand this yet, as we’ll see how to alter this
to add more attributes soon enough.
Last, but not least, the vertex attribute array needs to be enabled.

glEnableVertexAttribArray(posAttrib);

Vertex Array Objects
You can imagine that real graphics programs use many different shaders and
vertex layouts to take care of a wide variety of needs and special effects. Changing
the active shader program is easy enough with a call to glUseProgram, but it
would be quite inconvenient if you had to set up all of the attributes again every
time.
Luckily, OpenGL solves that problem with Vertex Array Objects (VAO). VAOs
store all of the links between the attributes and your VBOs with raw vertex
data.
A VAO is created in the same way as a VBO:
GLuint vao;
glGenVertexArrays(1, &vao);
To start using it, simply bind it:
glBindVertexArray(vao);
As soon as you’ve bound a certain VAO, every time you call glVertexAttribPointer,
that information will be stored in that VAO. This makes switching between
different vertex data and vertex formats as easy as binding a different VAO! Just
remember that a VAO doesn’t store any vertex data by itself, it just references
the VBOs you’ve created and how to retrieve the attribute values from them.
Since only calls after binding a VAO stick to it, make sure that you’ve created
and bound the VAO at the start of your program. Any vertex buffers and
element buffers bound before it will be ignored.

Drawing
Now that you’ve loaded the vertex data, created the shader programs and linked
the data to the attributes, you’re ready to draw the triangle. The VAO that was
used to store the attribute information is already bound, so you don’t have to


worry about that. All that’s left is to simply call glDrawArrays in your main
loop:
glDrawArrays(GL_TRIANGLES, 0, 3);
The first parameter specifies the kind of primitive (commonly point, line or triangle), the second parameter specifies how many vertices to skip at the beginning
and the last parameter specifies the number of vertices (not primitives!) to
process.
When you run your program now, you should see the following:

Figure 5:
If you don’t see anything, make sure that the shaders have compiled correctly,
that the program has linked correctly, that the attribute array has been enabled,
that the VAO has been bound before specifying the attributes, that your vertex
data is correct and that glGetError returns 0. If you can’t find the problem,
try comparing your code to this sample.


Uniforms
Right now the white color of the triangle has been hard-coded into the shader
code, but what if you wanted to change it after compiling the shader? As
it turns out, vertex attributes are not the only way to pass data to shader
programs. There is another way to pass data to the shaders called uniforms.
These are essentially global variables, having the same value for all vertices
and/or fragments. To demonstrate how to use these, let’s make it possible to
change the color of the triangle from the program itself.
By making the color in the fragment shader a uniform, it will end up looking
like this:
#version 150 core
uniform vec3 triangleColor;
out vec4 outColor;
void main()
{
    outColor = vec4(triangleColor, 1.0);
}
The last component of the output color is transparency, which is not very
interesting right now. If you run your program now you’ll see that the triangle
is black, because the value of triangleColor hasn’t been set yet.
Changing the value of a uniform is just like setting vertex attributes, you first
have to grab the location:
GLint uniColor = glGetUniformLocation(shaderProgram, "triangleColor");
The values of uniforms are changed with any of the glUniformXY functions,
where X is the number of components and Y is the type. Common types are f
(float), d (double) and i (integer).
glUniform3f(uniColor, 1.0f, 0.0f, 0.0f);
If you run your program now, you’ll see that the triangle is red. To make things
a little more exciting, try varying the color with the time by doing something
like this in your main loop:
auto t_start = std::chrono::high_resolution_clock::now();

...

auto t_now = std::chrono::high_resolution_clock::now();
float time = std::chrono::duration_cast<std::chrono::duration<float>>(t_now - t_start).count();


glUniform3f(uniColor, (sin(time * 4.0f) + 1.0f) / 2.0f, 0.0f, 0.0f);
Although this example may not be very exciting, it does demonstrate that
uniforms are essential for controlling the behaviour of shaders at runtime. Vertex
attributes on the other hand are ideal for describing a single vertex.

Success! You can find the full code here if you get stuck.

Exercises

• Make the rectangle with the blended image grow bigger and smaller with sin.
  (Solution)
• Make the rectangle flip around the X axis after pressing the space bar and
  slowly stop again. (Solution)

Extra buffers

Up until now there is only one type of output buffer you’ve made use of, the
color buffer. This chapter will discuss two additional types, the depth buffer
and the stencil buffer. For each of these a problem will be presented and
subsequently solved with that specific buffer.

Preparations

To best demonstrate the use of these buffers, let’s draw a cube instead of a
flat shape. The vertex shader needs to be modified to accept a third coordinate:

in vec3 position;
...
gl_Position = proj * view * model * vec4(position, 1.0);

We’re also going to need to alter the color again later in this chapter, so
make sure the fragment shader multiplies the texture color by the color
attribute:

vec4 texColor = mix(texture(texKitten, Texcoord), texture(texPuppy, Texcoord), 0.5);
outColor = vec4(Color, 1.0) * texColor;

Vertices are now 8 floats in size, so you’ll have to update the vertex attribute
offsets and strides as well. Finally, add the extra coordinate to the vertex
array:

float vertices[] = {
//   X      Y     Z     R     G     B     U     V
    -0.5f,  0.5f, 0.0f, 1.0f, 0.0f, 0.0f, 0.0f, 0.0f,
     0.5f,  0.5f, 0.0f, 0.0f, 1.0f, 0.0f, 1.0f, 0.0f,
     0.5f, -0.5f, 0.0f, 0.0f, 0.0f, 1.0f, 1.0f, 1.0f,
    -0.5f, -0.5f, 0.0f, 1.0f, 1.0f, 1.0f, 0.0f, 1.0f
};

Confirm that you’ve made all the required changes by running your program and
checking if it still draws a flat spinning image of a kitten blended with a
puppy. A single cube consists of 36 vertices (6 sides * 2 triangles * 3
vertices), so I will ease your life by providing the array here.

We will not make use of element buffers for drawing this cube, so you can use
glDrawArrays to draw it:

glDrawArrays(GL_TRIANGLES, 0, 36);
If you were confused by this explanation, you can compare your program to this reference code.
It immediately becomes clear that the cube is not rendered as expected when
seeing the output. The sides of the cube are being drawn, but they overlap each
other in strange ways! The problem here is that when OpenGL draws your cube
triangle-by-triangle, it will simply write over pixels even though something
else may have been drawn there before. In this case OpenGL will happily draw
triangles in the back over triangles at the front.

Luckily OpenGL offers ways of telling it when to draw over a pixel and when not
to. I’ll go over the two most important ways of doing that, depth testing and
stencilling, in this chapter.

Depth buffer

Z-buffering is a way of keeping track of the depth of every pixel on the screen.
The depth is an increasing function of the distance between the screen plane and
a fragment that has been drawn. That means that the fragments on the sides of
the cube further away from the viewer have a higher depth value, whereas
fragments closer have a lower depth value.

If this depth is stored along with the color when a fragment is written,
fragments drawn later can compare their depth to the existing depth to determine
if the new fragment is closer to the viewer than the old fragment. If that is
the case, it should be drawn over and otherwise it can simply be discarded. This
is known as depth testing.

OpenGL offers a way to store these depth values in an extra buffer, called the
depth buffer, and perform the required check for fragments automatically. The
fragment shader will not run for fragments that are invisible, which can have a
significant impact on performance. This functionality can be enabled by calling
glEnable.

glEnable(GL_DEPTH_TEST);

If you enable this functionality now and run your application, you’ll notice
that you get a black screen. That happens because the depth buffer is filled
with 0 depth for each pixel by default. Since no fragments will ever be closer
than that they are all discarded.
The depth buffer can be cleared along with the color buffer by extending the
glClear call:

glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

The default clear value for the depth is 1.0f, which is equal to the depth of
your far clipping plane and thus the furthest depth that can be represented.
All fragments will be closer than that, so they will no longer be discarded.
With the depth test capability enabled, the cube is now rendered correctly.
Just like the color buffer, the depth buffer has a certain amount of bits of
precision which can be specified by you. Less bits of precision reduce the
extra memory use, but can introduce rendering errors in more complex scenes.

Stencil buffer

The stencil buffer is an optional extension of the depth buffer that gives you
more control over the question of which fragments should be drawn and which
shouldn’t. Like the depth buffer, a value is stored for every pixel, but this
time you get to control when and how this value changes and when a fragment
should be drawn depending on this value. Note that if the depth test fails, the
stencil test no longer determines whether a fragment is drawn or not, but these
fragments can still affect values in the stencil buffer!

To get a bit more acquainted with the stencil buffer before using it, let’s
start by analyzing a simple example.

Figure 18:

In this case the stencil buffer was first cleared with zeroes and then a
rectangle of ones was drawn to it. The drawing operation of the cube uses the
values from the stencil buffer to only draw fragments with a stencil value of 1.

Now that you have an understanding of what the stencil buffer does, we’ll look
at the relevant OpenGL calls.

glEnable(GL_STENCIL_TEST);

Stencil testing is enabled with a call to glEnable, just like depth testing.
You don’t have to add this call to your code just yet. I’ll first go over the
API details in the next two sections and then we’ll make a cool demo.

Setting values

Regular drawing operations are used to determine which values in the stencil
buffer are affected by any stencil operation. If you want to affect a rectangle
of values like in the sample above, simply draw a 2D quad in that area. What
happens to those values can be controlled by you using the glStencilFunc,
glStencilOp and glStencilMask functions.
The glStencilFunc call is used to specify the conditions under which a fragment
passes the stencil test. Its parameters are discussed below.

• func: The test function, can be GL_NEVER, GL_LESS, GL_LEQUAL, GL_GREATER,
  GL_GEQUAL, GL_EQUAL, GL_NOTEQUAL, and GL_ALWAYS.
• ref: A value to compare the stencil value to using the test function.
• mask: A bitwise AND operation is performed on the stencil value and reference
  value with this mask value before comparing them.

If you don’t want stencils with a value lower than 2 to be affected, you would
use:

glStencilFunc(GL_GEQUAL, 2, 0xFF);

The mask value is set to all ones (in case of an 8 bit stencil buffer), so it
will not affect the test.

The glStencilOp call specifies what should happen to stencil values depending
on the outcome of the stencil and depth tests. The parameters are:

• sfail: Action to take if the stencil test fails.
• dpfail: Action to take if the stencil test is successful, but the depth test
  failed.
• dppass: Action to take if both the stencil test and depth tests pass.

Stencil values can be modified in the following ways:

• GL_KEEP: The current value is kept.
• GL_ZERO: The stencil value is set to 0.
• GL_REPLACE: The stencil value is set to the reference value in the
  glStencilFunc call.
• GL_INCR: The stencil value is increased by 1 if it is lower than the maximum
  value.
• GL_INCR_WRAP: Same as GL_INCR, with the exception that the value is set to 0
  if the maximum value is exceeded.
• GL_DECR: The stencil value is decreased by 1 if it is higher than 0.
• GL_DECR_WRAP: Same as GL_DECR, with the exception that the value is set to
  the maximum value if the current value is 0 (the stencil buffer stores
  unsigned integers).
• GL_INVERT: A bitwise invert is applied to the value.

Finally, glStencilMask can be used to control the bits that are written to the
stencil buffer when an operation is run. The default value is all ones, which
means that the outcome of any operation is unaffected.
If, like in the example, you want to set all stencil values in a rectangular
area to 1, you would use the following calls:

glStencilFunc(GL_ALWAYS, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
glStencilMask(0xFF);

In this case the rectangle shouldn’t actually be drawn to the color buffer,
since it is only used to determine which stencil values should be affected.

glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glDepthMask(GL_FALSE);

The glColorMask function allows you to specify which data is written to the
color buffer during a drawing operation. In this case you would want to disable
all color channels (red, green, blue, alpha). Writing to the depth buffer needs
to be disabled separately as well with glDepthMask, so that the cube drawing
operation won’t be affected by leftover depth values of the rectangle. This is
cleaner than simply clearing the depth buffer again later.

Using values in drawing operations

With the knowledge about setting values, using them for testing fragments in
drawing operations becomes very simple. All you need to do now is re-enable
color and depth writing if you had disabled those earlier and set the test
function to determine which fragments are drawn based on the values in the
stencil buffer.

glStencilFunc(GL_EQUAL, 1, 0xFF);

If you use this call to set the test function, the stencil test will only pass
for pixels with a stencil value equal to 1. A fragment will only be drawn if it
passes both the stencil and depth test, so setting the glStencilOp is not
necessary. In the case of the example above only the stencil values in the
rectangular area were set to 1, so only the cube fragments in that area will be
drawn.

glStencilMask(0x00);

One small detail that is easy to overlook is that the cube draw call could
still affect values in the stencil buffer. This problem can be solved by
setting the stencil bit mask to all zeroes, which effectively disables stencil
writing.
Planar reflections

Let's spice up the demo we have right now a bit by adding a floor with a reflection under the cube. I'll add the vertices for the floor to the same vertex buffer the cube is currently using to keep things simple:

    float vertices[] = {
        ...
        -1.0f, -1.0f, -0.5f, 0.0f, 0.0f, 0.0f, 0.0f, 0.0f,
         1.0f, -1.0f, -0.5f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f,
         1.0f,  1.0f, -0.5f, 0.0f, 0.0f, 0.0f, 1.0f, 1.0f,
         1.0f,  1.0f, -0.5f, 0.0f, 0.0f, 0.0f, 1.0f, 1.0f,
        -1.0f,  1.0f, -0.5f, 0.0f, 0.0f, 0.0f, 0.0f, 1.0f,
        -1.0f, -1.0f, -0.5f, 0.0f, 0.0f, 0.0f, 0.0f, 0.0f
    };

Now add the extra draw call to your main loop:

    glDrawArrays(GL_TRIANGLES, 36, 6);

To create the reflection of the cube itself, it is sufficient to draw it again, but inverted on the Z-axis:

    model = glm::scale(
        glm::translate(model, glm::vec3(0, 0, -1)),
        glm::vec3(1, 1, -1)
    );
    glUniformMatrix4fv(uniModel, 1, GL_FALSE, glm::value_ptr(model));
    glDrawArrays(GL_TRIANGLES, 0, 36);

I've set the color of the floor vertices to black so that the floor does not display the texture image, so you'll want to change the clear color to white to be able to see it. I've also changed the camera parameters a bit to get a good view of the scene.
Two issues are noticeable in the rendered image:

• The floor occludes the reflection because of depth testing.
• The reflection is visible outside of the floor.

The first problem is easy to solve by temporarily disabling writing to the depth buffer when drawing the floor:

    glDepthMask(GL_FALSE);
    glDrawArrays(GL_TRIANGLES, 36, 6);
    glDepthMask(GL_TRUE);

To fix the second problem, it is necessary to discard the fragments of the reflection that fall outside of the floor. Sounds like it's time to see what stencil testing is really worth!

It can be greatly beneficial at times like these to make a little list of the rendering stages of the scene to get a proper idea of what is going on:

• Draw the regular cube.
• Enable stencil testing and set the test function and operations to write ones to all selected stencils.
• Draw the floor.
• Set the stencil function to pass where the stencil value equals 1.
• Draw the inverted cube.
• Disable stencil testing.

The new drawing code looks like this:

    glEnable(GL_STENCIL_TEST);

        // Draw floor
        glStencilFunc(GL_ALWAYS, 1, 0xFF); // Set any stencil to 1
        glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
        glStencilMask(0xFF); // Write to stencil buffer
        glDepthMask(GL_FALSE); // Don't write to depth buffer
        glClear(GL_STENCIL_BUFFER_BIT); // Clear stencil buffer (0 by default)

        glDrawArrays(GL_TRIANGLES, 36, 6);

        // Draw cube reflection
        glStencilFunc(GL_EQUAL, 1, 0xFF); // Pass test if stencil value is 1
        glStencilMask(0x00); // Don't write anything to stencil buffer
        glDepthMask(GL_TRUE); // Write to depth buffer

        model = glm::scale(
            glm::translate(model, glm::vec3(0, 0, -1)),
            glm::vec3(1, 1, -1)
        );
        glUniformMatrix4fv(uniModel, 1, GL_FALSE, glm::value_ptr(model));
        glDrawArrays(GL_TRIANGLES, 0, 36);

    glDisable(GL_STENCIL_TEST);

I've annotated the code above with comments, but the steps should be mostly clear from the stencil buffer section. Now just one final touch is required: darkening the reflected cube a little, to make the floor look a little less like a perfect mirror.
I've chosen to create a uniform for this called overrideColor in the vertex shader:

    uniform vec3 overrideColor;
    ...
    Color = overrideColor * color;

And in the drawing code for the reflected cube:

    glUniform3f(uniColor, 0.3f, 0.3f, 0.3f);
    glDrawArrays(GL_TRIANGLES, 0, 36);
    glUniform3f(uniColor, 1.0f, 1.0f, 1.0f);

where uniColor is the return value of a glGetUniformLocation call.
