Computer Vision Science Research Projects
Dr Libor Spacek
Omnidirectional Vision
Goals
  1. Finding the distance of objects using stereopsis. Finding the distance of objects by purely passive visual means is useful for robot guidance, as the range information has greater precision and covers a larger area than is normally the case with active ranging systems.
  2. Finding the position and motion (egomotion) of the camera. This is particularly important for autonomous navigation.
  3. Finding the positions and motions of objects as well as of the robot itself.
This research involves theoretical comparisons between general-purpose omnidirectional vision methods, implementation of new methods, and experimental procedures for comparing them against published methods with similar goals.
Tasks
Some tasks to be implemented and tested (not necessarily mapping directly to individual projects). Most of these tasks are harder than they may appear at first sight. Background reading and comparison of experimental results will be necessary in all cases:
  1. Registration: it is necessary to find the centre of each image, preferably with sub-pixel accuracy, and to align the axis of the mirror with that of the camera. More generally, to determine all of the sensor (camera+mirror) intrinsic and extrinsic parameters.
  2. Unwarping transformation (to warp = to distort): transform the original circular image of the mirror, naturally expressed in polar coordinates (r, theta), into the rectangular Cartesian coordinates (x, y) of a conventional rectangular image. This requires pixel interpolation. Rectangular images are easier for humans to interpret. The unwarped image is also sometimes called the panoramic image.
  3. Image matching: we need to be able to match pairs of images to find the rotation of the observer (robot) between the two positions. When done well, this should be more accurate than a compass or odometry. There are a variety of ways in which the matching can be done.
  4. Stereopsis: we can match individual image features along the radial epipolar lines to estimate the distance of objects. Again, a number of methods are possible.
  5. Motion: we can find the instantaneous velocities of edges from pairs of images in a motion sequence. This gives us more information about the motion of the observer (robot). New methods taking advantage of the special nature of omnidirectional vision are being investigated.
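For the registration task (1), one crude way to estimate the image centre with sub-pixel accuracy is to take the intensity centroid of the bright mirror region. This is only an illustrative sketch, not the calibration procedure used in this research; the function name and threshold parameter are assumptions, and a real calibration would fit a circle to the mirror boundary and recover the full intrinsic and extrinsic parameters.

```python
import numpy as np

def find_centre(omni, thresh):
    """Crude sub-pixel centre estimate for a circular omnidirectional
    image: the centroid of all pixels brighter than `thresh`, assumed
    to be the mirror region. Returns (cx, cy) as floats.
    Hypothetical helper for illustration only; proper registration
    fits a circle to the mirror boundary instead."""
    ys, xs = np.nonzero(omni > thresh)
    return xs.mean(), ys.mean()
```

Because the centroid averages many pixel coordinates, it can resolve the centre to a fraction of a pixel, which is exactly the accuracy requirement stated above.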
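The unwarping transformation of task (2) can be sketched as a resampling from polar to Cartesian coordinates. The sketch below, assuming a known centre and an annular mirror region between `r_min` and `r_max`, uses nearest-neighbour interpolation for brevity; the pixel interpolation mentioned above would normally be bilinear or better. All names and parameters here are illustrative assumptions.

```python
import numpy as np

def unwarp(omni, centre, r_min, r_max, out_w=720, out_h=200):
    """Unwarp a circular omnidirectional image into a rectangular
    panoramic image. Each output column corresponds to one angle theta,
    each output row to one radius r; source pixels are looked up by
    nearest-neighbour interpolation (a sketch, not production code)."""
    cx, cy = centre
    thetas = np.linspace(0.0, 2.0 * np.pi, out_w, endpoint=False)
    radii = np.linspace(r_min, r_max, out_h)
    # Polar grid (r, theta) -> Cartesian source coordinates (x, y).
    xs = cx + np.outer(radii, np.cos(thetas))
    ys = cy + np.outer(radii, np.sin(thetas))
    xi = np.clip(np.rint(xs).astype(int), 0, omni.shape[1] - 1)
    yi = np.clip(np.rint(ys).astype(int), 0, omni.shape[0] - 1)
    # Row 0 is the inner radius; flip vertically if the mirror
    # geometry inverts the scene.
    return omni[yi, xi]
```

Note that a radial line in the omnidirectional image becomes a vertical column in the panoramic image, which is what makes the radial epipolar geometry of task (4) convenient after unwarping.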
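For the image-matching task (3), one of the simplest of the many possible approaches is to exploit the fact that a pure rotation of the observer becomes a circular horizontal shift of the panoramic image. The sketch below estimates that shift by brute-force comparison over all column offsets; it is an assumed baseline method for illustration, not the method developed in this research, and real images would also need to cope with translation and lighting change.

```python
import numpy as np

def estimate_rotation(pano_a, pano_b):
    """Estimate observer rotation between two panoramic images of equal
    size as the circular column shift of pano_b that minimises the sum
    of squared differences against pano_a. Returns the shift in
    columns; multiply by 360/width for degrees. Brute-force sketch;
    FFT-based correlation would be faster."""
    w = pano_a.shape[1]
    errs = [np.sum((np.roll(pano_b, s, axis=1) - pano_a) ** 2)
            for s in range(w)]
    return int(np.argmin(errs))
```

With a 720-column panorama this gives 0.5 degrees of angular resolution per column, which indicates how such matching could rival a compass in accuracy, as claimed above.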

Designed and maintained by Dr Libor Spacek. Updated Friday, 16-Feb-2007 17:12:12 GMT
publications,  biography, email: spacl (@essex.ac.uk).
Some of my courses: Computer Vision,  Sensor Signal Processing,  Vision for Robotics,  Robot Programming.