Why does my canvas go blank after converting to image?

Kevin Reid’s preserveDrawingBuffer suggestion is the correct one, but there is (usually) a better option. The tl;dr is the code at the end.

It can be expensive to put together the final pixels of a rendered webpage, and coordinating that with rendering WebGL content even more so. The usual flow is:

  1. JavaScript issues drawing commands to WebGL context
  2. JavaScript returns, returning control to the main browser event loop
  3. WebGL context turns drawing buffer (or its contents) over to the compositor for integration into web page currently being rendered on screen
  4. Page, with WebGL content, displayed on screen
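
To make the timing concrete, here is a minimal sketch of a typical requestAnimationFrame loop (drawScene is assumed to issue your WebGL draw calls, as in the question’s example). The drawing buffer is only guaranteed to hold your pixels while this callback is still running:

function render() {
  drawScene();                   // Step 1: issue WebGL draw calls
  // The drawing buffer still holds the rendered pixels here.
  requestAnimationFrame(render); // schedule the next frame
  // Step 2 happens when this function returns; Steps 3 and 4 follow,
  // after which the buffer may be treated as empty.
}
requestAnimationFrame(render);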

Note that this is different from most OpenGL applications. In those, rendered content is usually displayed directly, rather than being composited with a bunch of other stuff on a page, some of which may actually be on top of and blended with the WebGL content.

The WebGL spec was changed to treat the drawing buffer as essentially empty after Step 3. The code you’re running in devtools comes after Step 4, which is why you get an empty buffer. This change to the spec allowed big performance improvements on platforms where blanking after Step 3 is basically what actually happens in hardware (as in many mobile GPUs). If you want to work around this and sometimes make copies of the WebGL content after Step 3, the browser would have to always make a copy of the drawing buffer before Step 3, which is going to make your framerate drop precipitously on some platforms.
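
For example, something along these lines (a hedged sketch; it assumes a single canvas on the page), run from the devtools console or from a later timeout, will typically hand you a blank image because the drawing buffer has already gone to the compositor:

// Run after rendering has finished, e.g. from the devtools console:
var canvas = document.querySelector('canvas');
var blankDataURL = canvas.toDataURL('image/png'); // usually blank by this point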

You can do exactly that and force the browser to make the copy and keep the image content accessible by setting preserveDrawingBuffer to true. From the spec:

This default behavior can be changed by setting the preserveDrawingBuffer attribute of the WebGLContextAttributes object. If this flag is true, the contents of the drawing buffer shall be preserved until the author either clears or overwrites them. If this flag is false, attempting to perform operations using this context as a source image after the rendering function has returned can lead to undefined behavior. This includes readPixels or toDataURL calls, or using this context as the source image of another context’s texImage2D or drawImage call.

In the example you provided, the fix is just changing the context creation line:

gl = canvas.getContext("experimental-webgl", {preserveDrawingBuffer: true});

Just keep in mind that it will force that slower path in some browsers and performance will suffer, depending on what and how you are rendering. You should be fine in most desktop browsers, where the copy doesn’t actually have to be made, and those do make up the vast majority of WebGL capable browsers…but only for now.

However, there is another option (as somewhat confusingly mentioned in the next paragraph in the spec).

Essentially, you make the copy yourself before Step 2: after all your draw calls have finished, but before you return control to the browser from your code. This is when the WebGL drawing buffer is still intact and accessible, and you should have no trouble reading the pixels then. You use the same toDataURL or readPixels calls you would use otherwise; it’s just the timing that’s important.

Here you get the best of both worlds. You get a copy of the drawing buffer, but you don’t pay for it on every frame (including all the frames in which you didn’t need a copy, which may be most of them) the way you do with preserveDrawingBuffer set to true.
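
If you want raw pixel data rather than a data URL, the same timing rule applies. Here is a minimal sketch, assuming gl is your WebGL context; everything else uses standard WebGL calls:

function drawScene() {
  // ... issue all your draw calls ...

  // Read the pixels while the drawing buffer is still intact,
  // before returning control to the browser.
  var pixels = new Uint8Array(gl.drawingBufferWidth * gl.drawingBufferHeight * 4);
  gl.readPixels(0, 0, gl.drawingBufferWidth, gl.drawingBufferHeight,
                gl.RGBA, gl.UNSIGNED_BYTE, pixels);
  // pixels now holds the RGBA contents of the canvas, bottom row first.
}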

In the example you provided, just add your code to the bottom of drawScene and you should see the copy of the canvas right below:

function drawScene() {
  ...

  // Grab the canvas contents while the drawing buffer is still intact,
  // i.e. before this function returns control to the browser.
  var webglImage = (function convertCanvasToImage(canvas) {
    var image = new Image();
    image.src = canvas.toDataURL('image/png');
    return image;
  })(document.querySelectorAll('canvas')[0]);

  window.document.body.appendChild(webglImage);
}
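
One practical note: if drawScene runs on every animation frame, the snippet above will append a new image each frame. In a real application you would typically guard the capture with a flag; the captureNextFrame name below is just an assumption for illustration:

var captureNextFrame = false; // set to true (e.g. from a button click) when you want a snapshot

function drawScene() {
  // ... draw calls ...

  if (captureNextFrame) {
    captureNextFrame = false;
    var image = new Image();
    image.src = document.querySelector('canvas').toDataURL('image/png');
    document.body.appendChild(image);
  }
}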
