How do I release a CGImageRef in iOS

Your memory issue results from the copied data, as others have stated. But here’s another idea: use Core Graphics’ optimized pixel interpolation to compute the average.

  1. Create a 1×1 bitmap context.
  2. Set the interpolation quality to medium (see below).
  3. Draw your image scaled down to exactly this one pixel.
  4. Read the RGB value from the context’s buffer.
  5. (Release the context, of course.)
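The steps above might be sketched like this (a minimal, untested sketch; the method name `averageColorOfImage:` and the RGBA byte layout are assumptions, not part of the original answer):

```objc
// Average an image's color by drawing it into a 1×1 bitmap context.
- (UIColor *)averageColorOfImage:(CGImageRef)image {
    // 1. A 1×1 RGBA bitmap context backed by a 4-byte buffer we own.
    unsigned char pixel[4] = {0};
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pixel, 1, 1, 8, 4, colorSpace,
        kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);

    // 2. Medium quality appears to average the source pixels (see below).
    CGContextSetInterpolationQuality(context, kCGInterpolationMedium);

    // 3. Draw the whole image scaled down into that single pixel.
    CGContextDrawImage(context, CGRectMake(0, 0, 1, 1), image);

    // 4. The buffer now holds the (premultiplied) average RGBA value.
    UIColor *average = [UIColor colorWithRed:pixel[0] / 255.0
                                       green:pixel[1] / 255.0
                                        blue:pixel[2] / 255.0
                                       alpha:pixel[3] / 255.0];

    // 5. Release the context; the pixel buffer itself lives on the stack.
    CGContextRelease(context);
    return average;
}
```

Note that the alpha channel is premultiplied here; if your images can be transparent, you may want to divide the color channels by the alpha value before building the `UIColor`.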

This might result in better performance because Core Graphics is highly optimized and might even use the GPU for the downscaling.

Testing showed that medium quality seems to interpolate pixels by averaging their color values, which is exactly what we want here.

Worth a try, at least.

Edit: OK, this idea seemed too interesting not to try, so here’s an example project showing the difference. The measurements below were taken with the included 512×512 test image, but you can swap in another image if you want.

It takes about 12.2 ms to calculate the average by iterating over all pixels in the image data. The draw-to-one-pixel approach takes 3 ms, so it’s roughly four times faster. It seems to produce the same results when using kCGInterpolationMedium.

I assume that the huge performance gain comes from Quartz noticing that it does not have to decompress the JPEG fully, but can use only the lower-frequency parts of the DCT. That’s an interesting optimization strategy when compositing JPEG-compressed pixels at a scale below 0.5. But I’m only guessing here.

Interestingly, when using your method, 70% of the time is spent in CGDataProviderCopyData and only 30% in the pixel data traversal. This hints at a lot of time being spent in JPEG decompression.
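For reference, the pixel-iterating approach being profiled here might look roughly like this (a hedged sketch; the 32-bit RGBA layout is an assumption, so real code should check `CGImageGetBitmapInfo` and `CGImageGetBitsPerPixel` first):

```objc
// CGDataProviderCopyData forces the full JPEG decompression
// that dominates the profile above.
CFDataRef data = CGDataProviderCopyData(CGImageGetDataProvider(image));
const UInt8 *bytes = CFDataGetBytePtr(data);
size_t bytesPerRow = CGImageGetBytesPerRow(image);
size_t width = CGImageGetWidth(image);
size_t height = CGImageGetHeight(image);

// Sum each channel over every pixel, assuming 4 bytes per pixel (RGBA).
uint64_t r = 0, g = 0, b = 0;
for (size_t y = 0; y < height; y++) {
    const UInt8 *row = bytes + y * bytesPerRow;
    for (size_t x = 0; x < width; x++) {
        r += row[x * 4];
        g += row[x * 4 + 1];
        b += row[x * 4 + 2];
    }
}
size_t count = width * height;
UInt8 avgR = r / count, avgG = g / count, avgB = b / count;
CFRelease(data);
```

This is the traversal whose 12.2 ms total splits roughly 70/30 between the `CGDataProviderCopyData` call and the loop itself.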

[Screenshot: pixel-iterating approach] [Screenshot: draw-to-one-pixel approach]

Note: Here’s a late follow-up on the example image above.
