Research Areas: Image Correspondences | Cel Animation | Computer Vision | Augmented Reality
 


Computer Vision

Model-Based 3D Pose Recovery using Simulated Motion
A novel method using simulated motion to recover the pose of planar objects from a single perspective image is proposed. The method first assumes a candidate shape to be a perspective projection of a model, from which a template is obtained. Based on this assumption, a camera motion is computed that brings the projection of the template into coincidence with the candidate. If the matching hypothesis is correct, the recovered surface normal of the model agrees with the expected value. Matching is therefore determined by computing an invariant: the proximity of the recovered surface normal to the expected normal of the model, which is easily measured by the dot product between the two normals. When a match is inferred, the pose of the object can be recovered from the hypothesized camera motion. Furthermore, if the length of any side of the object is known, the absolute location of the object can be computed.
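
The dot-product test described above can be sketched in a few lines (a minimal sketch: the function name and the tolerance `tol` are illustrative assumptions, not part of the original method):

```python
import numpy as np

def normals_match(recovered_n, expected_n, tol=0.99):
    """Sketch of the invariant test: the dot product of the two unit
    surface normals is close to 1 when the matching hypothesis holds."""
    a = np.asarray(recovered_n, dtype=float)
    b = np.asarray(expected_n, dtype=float)
    a /= np.linalg.norm(a)
    b /= np.linalg.norm(b)
    return float(np.dot(a, b)) >= tol

# a correct hypothesis recovers a normal close to the model's
print(normals_match([0.01, 0.02, 1.0], [0.0, 0.0, 1.0]))  # True
```

Normalizing both vectors first means the test depends only on the angle between the normals, not on their magnitudes.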

It has been recognized that the accuracy of 3D recovery using structure from motion depends on the relative positions of the two corresponding shapes. In this respect, the proposed template matching scheme has a powerful built-in feature: an active template. Reacting to the candidate presented, the template is dynamically sized, orientated and positioned to ensure accurate computation of the geometry of the recovered 3D object. In addition, because the template is generated by the system, it is not subject to imaging or measurement error, which contributes significantly to the overall accuracy of the recovery. The accuracy and stability of the proposed active template scheme are confirmed by a series of tests. The current technique handles only planar objects; further work aims to extend it to general objects.

It has been demonstrated that the technique can recognize planar objects subjected to any 3D rotation and translation, provided, of course, that their shapes are still distinguishable in the image. Triangular surfaces cannot be recognized, as the technique requires a minimum of four image points in each of the two corresponding point sets in the 2D projective space. Although the method does not handle occlusion per se, a subset of points describing a complicated object shape can be used for recognition if it is sufficient to uniquely identify the shape among the other objects; it then does not matter if the rest of the points are occluded. The limitation of the above technique is that it handles only planar objects. This project aims to extend it to general polyhedral and curved objects.
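
The four-point minimum reflects a standard fact of projective geometry: a plane-to-plane projective map (a homography) has eight degrees of freedom, and each point correspondence supplies two constraints, so three points (a triangle) underdetermine it. A minimal direct-linear-transform sketch (the function name is illustrative, not from the project) makes this explicit:

```python
import numpy as np

def fit_homography(src, dst):
    """Direct linear transform for a 3x3 homography H mapping src -> dst.
    Each correspondence contributes two rows, so at least 4 are needed."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    assert len(src) >= 4, "a plane-to-plane projective map needs >= 4 points"
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows)
    # the homography is the null vector of A (last right singular vector)
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the arbitrary projective scale
```

With exactly four points in general position the system has an eight-dimensional row space and the solution is unique up to scale; with three points it is not.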

Invariant Shape Recognition (Seah Hock Soon and John Sng Poh Wei)
A technique based on simulated motion (SM) has been developed for recognizing planar shapes from a single perspective image. It is assumed that models of the shapes to be recognized are available in the system database. The method is invariant to 3D rotation, translation and scale. In this project, the above technique will be used to recognize the 26 letters of the alphabet in any arbitrary position. This involves segmentation of the image acquired from a CCD camera, and identification and extraction of salient points. From the extracted points, the system is required to search the database and apply the SM recognition technique to identify the character in the image.
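
The database search step can be sketched as a simple loop (a sketch only: `match_fn` stands in for the SM invariant test, and the names are illustrative assumptions):

```python
def recognise(candidate, database, match_fn):
    """Hypothetical database search: return the name of the first model
    whose matching test accepts the candidate shape, else None."""
    for name, model in database.items():
        if match_fn(candidate, model):
            return name
    return None

# toy usage with a stand-in matcher (equality instead of the SM test)
models = {"A": "shape-A", "B": "shape-B"}
print(recognise("shape-B", models, lambda c, m: c == m))  # prints B
```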

Constructing 3D Shapes From Sketches (Seah Hock Soon and Lee Yong Tsui)
Sketching is the natural and traditional means for designers to visualise and convey their ideas. CAD systems are today's tools for defining a design completely. However, CAD systems, though very powerful and equipped with very good user interfaces, are never quite the right tool for a designer during the formative stages of a design, because they usually require inputs, such as dimensions and orientations, that are tedious to supply and hinder the designer's natural flow.

This project looks at the possibility of bridging this gap by converting designer sketches to 3D shapes. More often than not, we humans can look at a sketch and immediately form a good impression of what the shape looks like in 3D; the project essentially aims to give a computer that capability. However, a sketch never contains enough information to support directly the creation of a 3D shape, and therein lies the challenge. How does the human visual and perceptual system form that image of the shape in the mind? This problem has been studied and there are credible solutions. The job here is to build on this pioneering work and create a computer system that turns a sketch into a 3D shape, and beyond that, converts the 3D shape into a solid model that can be read by a CAD system.

View Morphing
Image morphing techniques can generate compelling 2D transitions between images. However, differences in object pose or viewpoint often cause unnatural distortions in image morphs that are difficult to correct manually. Using basic principles of projective geometry, this project investigates an image morphing technique that correctly handles 3D projective camera and scene transformations. The technique works by prewarping the two images prior to computing a morph and then postwarping the interpolated images. Because no knowledge of 3D shape is required, the technique may be applied to photographs and drawings, as well as rendered scenes. The ability to synthesize changes in both viewpoint and image structure affords a wide variety of interesting 3D effects via simple image transformations.
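
The three-step structure (prewarp, interpolate, postwarp) can be sketched on corresponding point sets; the homographies `H0`, `H1` and `Hs` are assumed given here (in the actual technique they come from the recovered projective camera geometry), and the names are illustrative:

```python
import numpy as np

def apply_h(H, pts):
    """Apply a 3x3 homography to an Nx2 array of points."""
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]

def view_morph_points(p0, p1, H0, H1, Hs, s):
    """Sketch of a three-step view morph on corresponding points:
    prewarp both sets, linearly interpolate, then postwarp."""
    q0 = apply_h(np.linalg.inv(H0), p0)   # prewarp image-0 points
    q1 = apply_h(np.linalg.inv(H1), p1)   # prewarp image-1 points
    qs = (1 - s) * q0 + s * q1            # the morph itself (interpolation)
    return apply_h(Hs, qs)                # postwarp to the in-between view
```

With identity prewarps this degenerates to an ordinary linear morph; the prewarp/postwarp pair is what makes the interpolation consistent with a real change of viewpoint.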

