Selecting and configuring the optimum illumination set-up is one of the most crucial factors in designing a vision system. Information lost through incorrect illumination makes subsequent analysis far more difficult, or impossible: detail missing from the image can never be recovered by the analysis algorithm.
Virtually all cameras need a lens of some kind to collect the light scattered from the surface of an object. The lens focuses this scattered light into an image on a light-sensitive area behind the lens, normally a CCD or CMOS sensor.
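The geometry of this image formation is commonly approximated by the thin-lens equation, 1/f = 1/u + 1/v, where f is the focal length, u the object distance, and v the image distance. As a minimal sketch (the function names here are illustrative, not from any particular library):

```python
def thin_lens_image_distance(focal_length_mm: float, object_distance_mm: float) -> float:
    """Image distance v from the thin-lens equation 1/f = 1/u + 1/v."""
    return 1.0 / (1.0 / focal_length_mm - 1.0 / object_distance_mm)

def magnification(focal_length_mm: float, object_distance_mm: float) -> float:
    """Optical magnification m = v / u (image size relative to object size)."""
    v = thin_lens_image_distance(focal_length_mm, object_distance_mm)
    return v / object_distance_mm
```

For example, a 25 mm lens imaging an object 500 mm away forms the image roughly 26.3 mm behind the lens at a magnification of about 0.053, which is why the sensor sits close to the lens's focal plane for distant objects.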
Acquisition technology has seen major changes in recent years, particularly with the adoption of new bus technologies. Traditional frame-grabber technology has been joined by a range of generic interfaces such as FireWire and USB (2.0 and 3.0), along with Gigabit Ethernet.
The basis of all imaging software is the ability to acquire, transfer,
manipulate and interpret the pixel data output by a camera. What happens to
these images can vary from the relatively simple task of saving them to disk,
through to using them in a complex pattern recognition application.
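The basic acquire-manipulate-save cycle can be sketched in a few lines of plain Python. The example below is an assumption-laden simplification: a synthetic 8-bit grayscale frame stands in for camera output, a threshold stands in for "manipulation", and the result is written to disk as a binary PGM file (a deliberately simple image format that needs no libraries):

```python
def make_gradient_frame(width: int, height: int) -> list:
    """Synthetic 8-bit grayscale frame: a horizontal intensity ramp,
    standing in for pixel data acquired from a camera."""
    return [int(255 * x / (width - 1)) for _ in range(height) for x in range(width)]

def threshold(pixels: list, level: int) -> list:
    """Simple manipulation step: binarise the image.
    Pixels at or above `level` become 255, the rest 0."""
    return [255 if p >= level else 0 for p in pixels]

def save_pgm(path: str, pixels: list, width: int, height: int) -> None:
    """Save the frame to disk as a binary PGM (portable greymap)."""
    with open(path, "wb") as f:
        f.write(f"P5\n{width} {height}\n255\n".encode("ascii"))
        f.write(bytes(pixels))
```

A real application would of course acquire frames through a camera SDK and apply far richer processing, but the shape of the pipeline — acquire, transform, store or interpret — stays the same.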
When selecting a PC for an imaging application, many factors need to be considered to ensure that the chosen solution delivers the required performance, combined with long-term reliability and stability of supply.
Although not essential for all applications, calibrating a vision system is important if you are looking to extract data and make decisions based on measurements using real world units, such as for robot guidance.
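In its simplest form, calibration reduces to establishing a scale factor between pixels and real-world units by imaging a target of known size. The sketch below assumes an ideal case (no lens distortion, camera perpendicular to a flat scene); real calibration routines also model distortion and camera pose:

```python
def mm_per_pixel(known_length_mm: float, measured_length_px: float) -> float:
    """Scale factor derived from a calibration target of known physical size,
    e.g. a ruler or dot grid measured in the image."""
    return known_length_mm / measured_length_px

def pixels_to_mm(length_px: float, scale_mm_per_px: float) -> float:
    """Convert an image-space measurement into real-world units."""
    return length_px * scale_mm_per_px
```

For instance, if a 50 mm calibration target spans 400 pixels, the scale is 0.125 mm per pixel, and a feature measuring 120 pixels corresponds to 15 mm — the kind of real-world figure a robot guidance system needs.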
Today, 3D machine vision is most commonly used for the precise three-dimensional inspection and measurement of complex free-form surfaces, but new fields of application are constantly being explored.