Getting the true z value from the depth buffer

From http://web.archive.org/web/20130416194336/http://olivers.posterous.com/linear-depth-in-glsl-for-real

// == Post-process frag shader ===========================================
uniform sampler2D depthBuffTex;
uniform float zNear;
uniform float zFar;
varying vec2 vTexCoord;
void main(void)
{
    float z_b = texture2D(depthBuffTex, vTexCoord).x; // raw depth-buffer value, in [0,1]
    float z_n = 2.0 * z_b - 1.0;                      // back to NDC, in [-1,1]
    float z_e = 2.0 * zNear * zFar / (zFar + zNear - z_n * (zFar - zNear)); // eye-space depth
    gl_FragColor = vec4(vec3(z_e / zFar), 1.0);       // use z_e somehow, e.g. visualize it
}

[edit] So here’s the explanation (with the two mistakes Christian pointed out now corrected):

An OpenGL perspective matrix looks like this (from songho.ca):

    [ 2n/(r-l)      0        (r+l)/(r-l)        0      ]
    [    0       2n/(t-b)    (t+b)/(t-b)        0      ]
    [    0          0       -(f+n)/(f-n)   -2fn/(f-n)  ]
    [    0          0            -1             0      ]

with n = zNear and f = zFar. Only the third and fourth rows matter here.

When you multiply this matrix by a homogeneous point [x, y, z, 1], it gives you [don’t care, don’t care, Az+B, -z], with A = -(f+n)/(f-n) and B = -2fn/(f-n), the two non-trivial entries of the third row.

OpenGL next does the perspective division: it divides this vector by its w component. This operation is not done in shaders (except in special cases like shadow mapping) but in hardware; you can’t control it. Since w = -z, the Z value becomes (Az+B)/(-z) = -A - B/z.

We are now in Normalized Device Coordinates. The Z value is between -1 and 1, just like x and y. Before being written to the depth buffer, it is moved to the [0,1] range: a scaling and an offset are applied.

This final value is then stored in the buffer.
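
A small C sketch of those three steps (the helper names proj_A, proj_B and eye_to_buffer are mine, not from the original post):

/* == CPU-side sanity check: eye z -> depth-buffer value ================= */
#include <stdio.h>

/* The two non-trivial entries of the third row, with n = zNear, f = zFar. */
static double proj_A(double n, double f) { return -(f + n) / (f - n); }
static double proj_B(double n, double f) { return -2.0 * f * n / (f - n); }

/* Eye-space z (negative in front of the camera) -> depth-buffer value. */
static double eye_to_buffer(double z_eye, double n, double f)
{
    double clip_z = proj_A(n, f) * z_eye + proj_B(n, f); /* Az + B           */
    double ndc_z  = clip_z / -z_eye;                     /* divide by w = -z */
    return 0.5 * ndc_z + 0.5;                            /* [-1,1] -> [0,1]  */
}

int main(void)
{
    double n = 0.1, f = 100.0;
    printf("near plane: %f\n", eye_to_buffer(-n, n, f)); /* prints 0 */
    printf("far plane:  %f\n", eye_to_buffer(-f, n, f)); /* prints 1 */
    return 0;
}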

The above code does the exact opposite:

  • z_b is the raw value stored in the buffer
  • z_n linearly transforms z_b from [0,1] back to [-1,1]
  • z_e is the same formula as z_n = -A + B/z_e (with z_e the positive eye-space distance, so eye z = -z_e), but solved for z_e instead. It’s equivalent to z_e = B / (z_n + A). A and B should be computed on the CPU and sent as uniforms, btw; see the sketch after this list.
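
A minimal C sketch of that CPU side, assuming the standard perspective matrix above (helper names are mine); the assert checks that the compact form B / (z_n + A) matches the expanded zNear/zFar formula used in the shader:

/* == CPU-side sanity check: depth-buffer value -> eye-space depth ======= */
#include <assert.h>
#include <math.h>
#include <stdio.h>

/* Computed once on the CPU; in a real program these would be uploaded
   as uniforms with glUniform1f. */
static double proj_A(double n, double f) { return -(f + n) / (f - n); }
static double proj_B(double n, double f) { return -2.0 * f * n / (f - n); }

int main(void)
{
    double n = 0.1, f = 100.0;
    double A = proj_A(n, f), B = proj_B(n, f);

    double z_b = 0.5;             /* raw value read from the depth buffer */
    double z_n = 2.0 * z_b - 1.0; /* [0,1] -> NDC [-1,1]                  */

    double z_e = B / (z_n + A);                          /* compact form   */
    double ref = 2.0 * n * f / (f + n - z_n * (f - n));  /* shader formula */
    assert(fabs(z_e - ref) < 1e-9);
    printf("z_e = %f\n", z_e);
    return 0;
}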

The opposite function is:

varying float depth; // linear depth: positive eye-space distance, in world units
void main(void)
{
    // A and B, read back from the (column-major) projection matrix
    float A = gl_ProjectionMatrix[2].z;
    float B = gl_ProjectionMatrix[3].z;
    gl_FragDepth = 0.5 * (-A * depth + B) / depth + 0.5;
}
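
To check that the two shaders really are inverses, here is a quick C round trip of my own (encode with the formula just above, decode with the post-process shader’s formula):

/* == CPU-side sanity check: round trip ================================== */
#include <assert.h>
#include <math.h>

static double proj_A(double n, double f) { return -(f + n) / (f - n); }
static double proj_B(double n, double f) { return -2.0 * f * n / (f - n); }

int main(void)
{
    double n = 0.1, f = 100.0;
    double A = proj_A(n, f), B = proj_B(n, f);

    for (double depth = n; depth < f; depth *= 2.0) {
        /* Encode: linear depth -> buffer value, as in gl_FragDepth above. */
        double z_b = 0.5 * (-A * depth + B) / depth + 0.5;
        /* Decode: buffer value -> linear depth, as in the first shader.   */
        double z_n = 2.0 * z_b - 1.0;
        double z_e = 2.0 * n * f / (f + n - z_n * (f - n));
        assert(fabs(z_e - depth) < 1e-6 * depth);
    }
    return 0;
}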
