Texture setup in modern OpenGL is nearly equivalent to what you are used to in the fixed-function API. You still have to enable your texture units, create and configure your texture objects (including mipmap chains), upload texture data to them, bind them to a texture target, and configure texture coordinates on your vertices as attribute data. However, glTexEnv, glTexGen and the texture matrix stack have been removed. glTexEnv must be replaced by code in the fragment shader. Recall that a fragment shader typically receives as input an interpolated color (usually calculated from a set of lighting equations) from the vertex shader. What you do is take that input color, "sample" a color from a texture using texture coordinates (also usually input from the vertex shader), and then perform whatever kind of blending math you want. If you liked glTexEnv(GL_REPLACE), just ignore the color shader input; if you liked glTexEnv(GL_MODULATE), implement the modulation logic in shader code before writing the result to gl_FragColor.

So how do we "sample" a color from a texture in a fragment shader? The answer is that we use a new kind of uniform variable called a "texture sampler" to refer to the texture. We pass that variable as a parameter to a built-in texture sampling function. For 2-D textures, we use uniform sampler2D myTexture to declare the sampler and texture2D(myTexture, v_TextureCoord.st) to read a "texel" (texture pixel) from it. This takes care of the fragment shader code, but what about CPU setup? myTexture is a uniform variable, and as we learned earlier, these variables must be assigned before glDraw calls. What value should we assign? The answer is the texture unit number. Recall that OpenGL supports multiple texture units, activated with glActiveTexture (GL_TEXTURE0 through GL_TEXTUREn), and that subsequent glTex* calls apply to the last activated unit. So if you have activated GL_TEXTURE0, defined and configured a texture object, bound it to GL_TEXTURE_2D, and assigned the value zero to your uniform variable, then texture unit 0 is configured for 2-D sampling of that texture object in your fragment shader. This all remains unchanged, but it is the unit number (0 for GL_TEXTURE0, 1 for GL_TEXTURE1, and so on) that you assign to your texture sampler uniforms before making glDraw calls. What this means is that you can simply change which texture object is bound to GL_TEXTURE_2D between draw calls to sample from different textures using the same texture unit, or you can switch to an entirely different texture unit by assigning a new value to your texture sampling uniform, or you can do both.
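To make the shader side concrete, here is a minimal sketch of such a fragment shader in older-style GLSL. The varying names `v_Color` and `v_TextureCoord` are illustrative choices (they would have to match whatever your vertex shader outputs), not names mandated by the API:

```glsl
// Fragment shader sketch: replaces fixed-function glTexEnv behavior.
// v_Color and v_TextureCoord are assumed varyings from the vertex shader.
uniform sampler2D myTexture;   // assigned the texture UNIT number from the CPU side

varying vec4 v_Color;          // interpolated lighting color
varying vec2 v_TextureCoord;   // interpolated texture coordinates

void main()
{
    // Read a texel from the bound 2-D texture on the sampler's unit.
    vec4 texel = texture2D(myTexture, v_TextureCoord.st);

    // GL_REPLACE equivalent: ignore the input color entirely.
    // gl_FragColor = texel;

    // GL_MODULATE equivalent: multiply the texel by the lighting color.
    gl_FragColor = v_Color * texel;
}
```

Any other blend (decal, add, or something glTexEnv never offered) is just different arithmetic on `v_Color` and `texel`.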
The code referenced in this blog is available here.
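For reference, the CPU-side setup discussed above might look roughly like the following C sketch. It assumes a valid GL context, a linked program object `prog` whose fragment shader declares `uniform sampler2D myTexture`, and image data in `pixels` with dimensions `w` and `h`; all of those names are illustrative:

```c
/* Illustrative sketch only: requires a current GL context to run. */
GLuint tex;
glGenTextures(1, &tex);

glActiveTexture(GL_TEXTURE0);        /* later glTex* calls apply to unit 0 */
glBindTexture(GL_TEXTURE_2D, tex);   /* bind our texture object to the 2-D target */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glGenerateMipmap(GL_TEXTURE_2D);     /* build the mipmap chain */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
                GL_LINEAR_MIPMAP_LINEAR);

/* Assign the sampler uniform the texture UNIT number (0), not the
 * texture object name (tex). */
GLint loc = glGetUniformLocation(prog, "myTexture");
glUseProgram(prog);
glUniform1i(loc, 0);                 /* 0 corresponds to GL_TEXTURE0 */

/* ... glDrawArrays / glDrawElements ... */
```

Between draw calls you could bind a different texture object to GL_TEXTURE_2D on the same unit, or call glUniform1i with a different unit number, or both, exactly as described above.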