How to verify the correctness of calibration of a webcam?

Hmm, are you looking for “handsome” or “accurate”?

Camera calibration is one of the very few subjects in computer vision where accuracy can be directly quantified in physical terms and verified by a physical experiment. The usual lesson is that (a) your numbers are only as good as the effort (and money) you put into them, and (b) real accuracy (as opposed to imagined) is expensive, so you should figure out in advance what your application actually requires in the way of precision.

If you look up the geometrical specs of even very cheap lens/sensor combinations (in the megapixel range and above), it becomes readily apparent that sub-sub-mm calibration accuracy is theoretically achievable within a table-top volume of space. Just work out (from the spec sheet of your camera’s sensor) the solid angle spanned by one pixel – you’ll be dazzled by the spatial resolution within reach of your wallet. However, repeatably achieving something near that theoretical accuracy takes work.
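To make that back-of-the-envelope concrete, here is a sketch with made-up but plausible numbers – the 3 µm pixel pitch, 4 mm focal length, and 0.5 m working distance are illustrative, not from any particular camera:

```python
# Hypothetical specs -- substitute values from your sensor's datasheet.
pixel_pitch_m = 3.0e-6    # 3 um pixel pitch
focal_length_m = 4.0e-3   # 4 mm lens

# Angle subtended by one pixel (small-angle approximation).
angle_per_pixel = pixel_pitch_m / focal_length_m   # radians

# Spatial footprint of one pixel at a table-top working distance.
distance_m = 0.5
footprint_m = distance_m * angle_per_pixel

# Sub-pixel corner detectors routinely localize to ~0.1 px.
localization_m = 0.1 * footprint_m

print(f"angle per pixel : {angle_per_pixel * 1e3:.3f} mrad")   # 0.750 mrad
print(f"footprint @ {distance_m} m : {footprint_m * 1e3:.3f} mm")  # 0.375 mm
print(f"0.1-px accuracy : {localization_m * 1e3:.4f} mm")      # 0.0375 mm
```

Even with those modest specs, a 0.1-pixel corner localization corresponds to a few hundredths of a millimeter at half a meter – sub-sub-mm indeed.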

Here are some recommendations (from personal experience) for getting good calibration results with home-grown equipment.

  1. If your method uses a flat target (“checkerboard” or similar), manufacture a good one. Choose a very flat backing (for the size you mention, window glass 5 mm thick or more is excellent, though obviously fragile). Verify its flatness against another edge (or, better, a laser beam). Print the pattern on thick-stock paper that won’t stretch too easily. After printing, lay the pattern on the backing before gluing and verify that the square sides are indeed very nearly orthogonal. Cheap ink-jet and laser printers are not designed for rigorous geometrical accuracy; do not trust them blindly. Best practice is to use a professional print shop (even a Kinko’s will do a much better job than most home printers). Then attach the pattern very carefully to the backing, using spray-on glue and slowly wiping with a soft cloth to avoid bubbles and stretching. Wait a day or longer for the glue to cure and the glue–paper stress to reach its long-term steady state. Finally, measure the corner positions with a good caliper and a magnifier. You may get away with a single number for the “average” square size, but it must be an average of actual measurements, not of hopes-n-prayers. Best practice is to use a table of measured positions.
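On that last point: OpenCV’s `calibrateCamera` accepts an arbitrary table of per-corner 3D object points for each image, so caliper measurements can go straight in instead of an assumed uniform grid. A minimal sketch (the 3×3 grid, 30 mm nominal square, and 0.2 mm stretch below are made up for illustration):

```python
import numpy as np

# Build the per-corner object-point table for cv2.calibrateCamera from
# measured positions instead of assuming a perfectly uniform grid.
rows, cols = 3, 3
nominal_square_mm = 30.0

# Ideal positions (what you'd use if you trusted the printer).
ideal = np.zeros((rows * cols, 3), np.float32)
ideal[:, :2] = np.mgrid[0:cols, 0:rows].T.reshape(-1, 2) * nominal_square_mm

# Measured positions: start from the ideal grid, then patch in the
# deviations you actually measured (here a fake 0.2 mm stretch in x).
measured = ideal.copy()
measured[:, 0] *= 1.0 + 0.2 / (nominal_square_mm * (cols - 1))

# Pass `measured` (one copy per image) as the objectPoints argument of
# cv2.calibrateCamera; any per-corner 3D table is accepted.
max_dev_mm = float(np.abs(measured - ideal).max())
print(f"max deviation from ideal grid: {max_dev_mm:.3f} mm")
```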

  2. Watch your temperature and humidity changes: paper absorbs water from the air, and the backing expands and contracts. It is amazing how many articles report sub-millimeter calibration accuracies without quoting the environmental conditions (or the target’s response to them). Needless to say, they are mostly crap. The lower thermal expansion coefficient of glass compared to common sheet metal is another reason to prefer the former as a backing.

  3. Needless to say, you must disable the auto-focus feature of your camera, if it has one: focusing physically moves one or more pieces of glass inside your lens, thus changing (slightly) the field of view and (usually by a lot) the lens distortion and the principal point.

  4. Place the camera on a stable mount that won’t vibrate easily. Focus (and f-stop the lens, if it has an iris) as is needed for the application (not the calibration – the calibration procedure and target must be designed for the app’s needs, not the other way around). Do not even think of touching camera or lens afterwards. If at all possible, avoid “complex” lenses – e.g. zoom lenses or very wide angle ones. For example, anamorphic lenses require models much more complex than stock OpenCV makes available.

  5. Take lots of measurements and pictures. You want hundreds of measurements (corners) per image, and tens of images. Where data is concerned, the more the merrier. A 10×10 checkerboard is the absolute minimum I would consider. I normally worked at 20×20.

  6. Span the calibration volume when taking pictures. Ideally you want your measurements to be uniformly distributed in the volume of space you will be working with. Most importantly, make sure to angle the target significantly with respect to the focal axis in some of the pictures – to calibrate the focal length you need to “see” some real perspective foreshortening. For best results use a repeatable mechanical jig to move the target. A good one is a one-axis turntable, which will give you an excellent prior model for the motion of the target.

  7. Minimize vibrations and associated motion blur when taking photos.

  8. Use good lighting. Really. It’s amazing how often I see people realize late in the game that you need a generous supply of photons to calibrate a camera 🙂 Use diffuse ambient lighting, and bounce it off white cards on both sides of the field of view.

  9. Watch what your corner extraction code is doing. Draw the detected corner positions on top of the images (in Matlab or Octave, for example), and judge their quality. Removing outliers early using tight thresholds is better than trusting the robustifier in your bundle adjustment code.

  10. Constrain your model if you can. For example, don’t try to estimate the principal point unless you have a good reason to believe your lens is significantly off-center w.r.t. the image; just fix it at the image center on your first attempt. The principal point location is usually poorly observed, because it is inherently confounded with the center of the nonlinear distortion and with the component of the target-to-camera translation parallel to the image plane. Getting it right requires a carefully designed procedure that yields three or more independent vanishing points of the scene and a very good bracketing of the nonlinear distortion. Similarly, unless you have reason to suspect that the lens focal axis is really tilted w.r.t. the sensor plane, fix the (1,2) component of the camera matrix at zero. Generally speaking, use the simplest model that satisfies your measurements and your application needs (that’s Occam’s razor for you).

  11. When you have a calibration solution from your optimizer with low enough RMS error (a few tenths of a pixel, typically, see also Josh’s answer below), plot the XY pattern of the residual errors (predicted_xy – measured_xy for each corner in all images) and see if it’s a round-ish cloud centered at (0, 0). “Clumps” of outliers or non-roundness of the cloud of residuals are screaming alarm bells that something is very wrong – likely outliers due to bad corner detection or matching, or an inappropriate lens distortion model.
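A quick numerical version of that round-cloud check (synthetic Gaussian residuals stand in for your optimizer’s output; the 0.05 px mean and 0.8 roundness thresholds are illustrative):

```python
import numpy as np

# residuals = predicted_xy - measured_xy, stacked over all corners in all
# images. A synthetic round cloud stands in for real optimizer output.
rng = np.random.default_rng(0)
residuals = rng.normal(scale=0.2, size=(2000, 2))  # pixels

mean = residuals.mean(axis=0)
cov = np.cov(residuals.T)
eigvals = np.linalg.eigvalsh(cov)
roundness = eigvals.min() / eigvals.max()  # ~1 for a round cloud

print(f"mean residual: {mean}")
print(f"roundness (min/max eigenvalue): {roundness:.2f}")

# Alarm bells: a mean far from (0, 0), roundness well below 1, or visible
# clumps when you scatter-plot the residuals.
assert np.linalg.norm(mean) < 0.05 and roundness > 0.8
```

The eigenvalue ratio is only a crude summary – still scatter-plot the cloud, since clumps of outliers can hide inside perfectly round second-order statistics.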

  12. Take extra images to verify the accuracy of the solution – use them to verify that the lens distortion is actually removed, and that the planar homography predicted by the calibrated model actually matches the one recovered from the measured corners.
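A sketch of the homography check on a held-out image: for the Z=0 target plane the calibrated model predicts H = K·[r1 r2 t], and the same H can be recovered independently from the measured corners. Here synthetic, noise-free correspondences and made-up K, R, t stand in for real data, and a plain DLT fit stands in for `cv2.findHomography`:

```python
import numpy as np

# Made-up intrinsics and target pose for illustration.
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
th = np.deg2rad(20.0)  # target tilted 20 deg about the vertical axis
R = np.array([[np.cos(th), 0.0, np.sin(th)],
              [0.0, 1.0, 0.0],
              [-np.sin(th), 0.0, np.cos(th)]])
t = np.array([[-2.0], [-2.0], [25.0]])  # in units of one target square

# Homography predicted by the calibrated model for the Z=0 plane.
H_pred = K @ np.hstack([R[:, :2], t])

# "Measured" corners: a 6x5 grid of target points pushed through H_pred.
gx, gy = np.meshgrid(np.arange(6.0), np.arange(5.0))
plane = np.stack([gx.ravel(), gy.ravel(), np.ones(gx.size)])
img = H_pred @ plane
img /= img[2]

# Recover H from the correspondences with a direct linear transform.
rows = []
for (X, Y), (u, v) in zip(plane[:2].T, img[:2].T):
    rows.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y, -u])
    rows.append([0, 0, 0, X, Y, 1, -v * X, -v * Y, -v])
H_fit = np.linalg.svd(np.asarray(rows))[2][-1].reshape(3, 3)

# Compare up to scale: normalize both so H[2, 2] == 1.
err = np.abs(H_pred / H_pred[2, 2] - H_fit / H_fit[2, 2]).max()
print(f"max discrepancy (normalized): {err:.2e}")
```

On real data you would fit the measured homography with `cv2.findHomography` instead of the bare DLT above; a discrepancy well above your corner-detection noise is exactly the kind of verification failure this step is meant to catch.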
