Detection of Blur in Images/Video sequences

Motion blur and camera shake have the same underlying cause: relative motion between the camera and the subject. You mention slow shutter speed, and it is indeed a culprit in both cases.

Focus misses are subjective, as they depend on the intent of the photographer. Without knowing what the photographer wanted to focus on, detecting a missed focus automatically is impossible. And even knowing the intended subject, it still wouldn't be trivial.

With that dose of realism aside, let me reassure you that blur detection is actually a very active research field, and there are already a few metrics that you can try out on your images. Here are some that I’ve used recently:

  • Edge width. Perform edge detection on your image (using Canny or otherwise), then measure the width of the detected edges. Blurry images have wider, more spread-out edges; sharp images have thinner ones. Search for “A no-reference perceptual blur metric” by Marziliano et al., a well-known paper that describes this approach in enough detail for a full implementation. If you're dealing with motion blur, the edges will be blurred (wide) in the direction of the motion.
  • Presence of fine detail. Have a look at my answer to this question (the edited part).
  • Frequency-domain approaches. Taking the histogram of the DCT coefficients of the image (assuming you're working with JPEG) gives you an idea of how much fine detail the image contains. This is how you grab the DCT coefficients from a JPEG file directly. If the counts for the non-DC terms are low, the image is likely blurry. This is the simplest frequency-domain method; there are more sophisticated ones.
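The edge-width idea can be sketched in a few lines. Below is a simplified, single-row version of the Marziliano-style metric, assuming a grayscale image stored as a NumPy array; the function name and the gradient threshold of 10 are my own choices for illustration, not from the paper.

```python
import numpy as np

def edge_width_row(row, grad_thresh=10.0):
    """Average edge width along one image row (simplified Marziliano-style).
    For each pixel with a strong gradient, the edge width is taken as the
    distance between the points where the slope changes sign (the local
    intensity extrema on either side of the edge)."""
    grad = np.gradient(row.astype(float))
    widths = []
    for x in np.where(np.abs(grad) > grad_thresh)[0]:
        sign = np.sign(grad[x])
        # walk left while the slope keeps the same sign
        left = x
        while left > 0 and np.sign(grad[left - 1]) == sign:
            left -= 1
        # walk right while the slope keeps the same sign
        right = x
        while right < len(row) - 1 and np.sign(grad[right + 1]) == sign:
            right += 1
        widths.append(right - left)
    return float(np.mean(widths)) if widths else 0.0

# Synthetic check: a sharp step edge vs the same step blurred by a box filter.
sharp = np.concatenate([np.zeros(50), np.full(50, 200.0)])
blurred = np.convolve(sharp, np.ones(9) / 9, mode="same")
print(edge_width_row(sharp), edge_width_row(blurred))  # blurred edge is wider
```

A full implementation would run this along rows (for horizontal blur) and columns (for vertical blur) and average over all detected edges, but the synthetic step above already shows the key property: blurring spreads the edge, and the measured width grows accordingly.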
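And a minimal sketch of the DCT-count idea. For simplicity this recomputes an 8×8 orthonormal DCT-II (the transform JPEG uses) in pure NumPy instead of reading coefficients out of a JPEG file; the function names and the `eps` near-zero threshold are arbitrary choices of mine.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (the 8x8 transform used in JPEG)."""
    k = np.arange(n)
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0] /= np.sqrt(2.0)
    return m

def blur_score(img, block=8, eps=1.0):
    """Fraction of near-zero AC (non-DC) DCT coefficients over 8x8 blocks.
    A higher fraction means less fine detail, i.e. more likely blurry."""
    M = dct_matrix(block)
    h = img.shape[0] - img.shape[0] % block
    w = img.shape[1] - img.shape[1] % block
    zero = total = 0
    for y in range(0, h, block):
        for x in range(0, w, block):
            c = M @ img[y:y + block, x:x + block].astype(float) @ M.T
            ac = np.abs(c).ravel()[1:]  # drop the DC term at index 0
            zero += np.count_nonzero(ac < eps)
            total += ac.size
    return zero / total

rng = np.random.default_rng(0)
noisy = rng.uniform(0, 255, (64, 64))   # full of fine detail
flat = np.full((64, 64), 128.0)         # no detail at all
print(blur_score(noisy), blur_score(flat))  # low fraction vs ~1.0
```

The flat image scores near 1.0 (almost all AC terms vanish), while the detail-rich image scores much lower. On real photos you would compare scores between frames rather than against an absolute threshold.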

There are more, but those should be enough to get you started. If you need further information on any of these points, fire up Google Scholar and look around. In particular, check the references of Marziliano's paper to get an idea of what has been tried in the past.