This paper presents a novel approach to creating full view panoramic mosaics from image sequences. Unlike current panoramic stitching methods, which usually require pure horizontal camera panning, our system does not require any controlled motions or constraints on how the images are taken (as long as there is no strong motion parallax).
For example, images taken from a hand-held digital camera can be stitched seamlessly into panoramic mosaics. Because we represent our image mosaics using a set of transforms, there are no singularity problems such as those existing at the top and bottom of cylindrical or spherical maps. Our algorithm is fast and robust because it directly recovers 3D rotations instead of general 8-parameter planar perspective transforms. Methods to recover camera focal length are also presented. We also present an algorithm for efficiently extracting environment maps from our image mosaics. By mapping the mosaic onto an arbitrary texture-mapped polyhedron surrounding the origin, we can explore the virtual environment using standard 3D graphics viewers and hardware without requiring special-purpose players.
CR Categories and Subject Descriptors: I.3.3 [Computer Graphics]: Picture/Image Generation - Viewing Algorithms; I.4.3 [Image Processing]: Enhancement - Registration.

Additional Keywords: full-view panoramic image mosaics, environment mapping, virtual environments, image-based rendering.

A number of techniques have been developed for capturing panoramic images of real-world scenes (for references on computer-generated environment maps, see [?]).
One way is to record an image onto a long film strip using a panoramic camera to directly capture a cylindrical panoramic image. Another way is to use a lens with a very large field of view such as a fisheye lens. Mirrored pyramids and parabolic mirrors can also be used to directly capture panoramic images. A less hardware-intensive method for constructing full view panoramas is to take many regular photographic or video images in order to cover the whole viewing space.
These images must then be aligned and composited into complete panoramic images using an image mosaic or “stitching” algorithm. Most stitching systems require a carefully controlled camera motion (pure pan), and only produce cylindrical images. In this paper, we show how uncontrolled 3D camera rotation can be used. The case of general camera rotation has been studied previously, using an 8-parameter planar perspective motion model.
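As a concrete illustration of this 8-parameter model (a 3×3 homography with its overall scale fixed), the following sketch warps a grayscale image by a given matrix H using inverse mapping. The function name and the nearest-neighbour sampling are our own simplifications for exposition, not the registration algorithm itself.

```python
import numpy as np

def warp_perspective(src, H, out_shape):
    """Warp a grayscale image by a 3x3 planar perspective matrix H
    (8 free parameters once the overall scale is fixed, e.g. H[2,2] = 1).
    Each output pixel is inverse-mapped through H^-1 and sampled with
    nearest-neighbour interpolation."""
    h_out, w_out = out_shape
    H_inv = np.linalg.inv(H)
    ys, xs = np.mgrid[0:h_out, 0:w_out]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])  # homogeneous
    mapped = H_inv @ pts
    u = mapped[0] / mapped[2]                 # perspective divide
    v = mapped[1] / mapped[2]
    ui, vi = np.round(u).astype(int), np.round(v).astype(int)
    ok = (ui >= 0) & (ui < src.shape[1]) & (vi >= 0) & (vi < src.shape[0])
    out = np.zeros(out_shape, dtype=src.dtype)
    out.ravel()[ok] = src[vi[ok], ui[ok]]
    return out
```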
By contrast, our algorithm uses a 3-parameter rotational motion model, which is more robust since it has fewer unknowns. Since this algorithm requires knowing the camera’s focal length, we develop a method for computing an initial focal length estimate from a set of 8-parameter perspective registrations (a simple variant of the idea is sketched below). We also investigate how to close the “gap” (or “overlap”) due to accumulated registration errors after a complete panoramic sequence has been assembled. To demonstrate the advantages of our algorithm, we apply it to a sequence of images taken with a hand-held digital camera.
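To convey the intuition behind such an estimate: for a camera rotating about its optical center with K = diag(f, f, 1), an inter-frame homography satisfies H ≈ K R K⁻¹ up to scale, so the f for which K⁻¹ H K is closest to a scaled rotation is a plausible initial guess. The coarse search below is a minimal sketch of this idea, assuming square pixels, a principal point at the image center, and pixel coordinates measured from that center; it is not the closed-form estimate developed in Section 5.

```python
import numpy as np

def rotation_residual(H, f):
    """For a camera rotating about its optical center with K = diag(f, f, 1)
    (f in pixels, H expressed in center-relative pixel coordinates), the
    inter-frame homography satisfies H ~ K R K^-1 up to scale.  Returns how
    far K^-1 H K is from a scaled rotation for a candidate f."""
    K = np.diag([f, f, 1.0])
    K_inv = np.diag([1.0 / f, 1.0 / f, 1.0])
    M = K_inv @ H @ K
    M = M / np.cbrt(np.linalg.det(M))   # remove the unknown scale (det R = 1)
    return np.linalg.norm(M @ M.T - np.eye(3))

def estimate_focal(H, f_min=100.0, f_max=3000.0, steps=500):
    """Coarse 1-D search for the f that makes K^-1 H K most rotation-like.
    With several registrations, the residuals can simply be summed."""
    fs = np.linspace(f_min, f_max, steps)
    return fs[int(np.argmin([rotation_residual(H, f) for f in fs]))]
```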
In our work, we represent our mosaic by a set of transformations. Each transformation corresponds to one image frame in the input image sequence and represents the mapping between image pixels and viewing directions in the world, i.e., it represents the camera matrix. During the stitching process, our approach makes no commitment to the final output representation (e.g., spherical or cylindrical), which allows us to avoid the singularities associated with such representations.
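A minimal sketch of this per-frame mapping, under the illustrative conventions that K_k = diag(f_k, f_k, 1) with the principal point at the image center and that R_k rotates world coordinates into the camera frame:

```python
import numpy as np

def pixel_to_direction(x, y, f, R):
    """Map a pixel (x, y), measured from the image center, to a unit viewing
    direction in world coordinates.  The pair (f, R) stored with frame k is
    the per-frame transformation: K_k = diag(f, f, 1) and a rotation R_k
    taking world coordinates into the camera frame."""
    K_inv = np.diag([1.0 / f, 1.0 / f, 1.0])
    ray_cam = K_inv @ np.array([x, y, 1.0])   # back-project into the camera
    ray_world = R.T @ ray_cam                 # rotate into world coordinates
    return ray_world / np.linalg.norm(ray_world)
```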
Once a mosaic has been constructed, it can, of course, be mapped into cylindrical or spherical coordinates, and displayed using a special-purpose viewer. In this paper, we argue that such specialized representations are not necessary, and represent just a particular choice of geometry and texture coordinate embedding. Instead, we show how to convert our mosaic to an environment map, i.e., how to map our mosaic onto any texture-mapped polyhedron surrounding the origin.
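The sketch below conveys the idea for a single face of a cube environment map: each texel’s viewing direction is projected into the input frames through their recovered camera matrices, and the texel is painted from a frame that sees it. The first-frame-wins compositing and the conventions (square pixels, centered principal point) are our simplifications; a full implementation would blend overlapping frames.

```python
import numpy as np

def fill_cube_face(face_dirs, frames):
    """Paint one face of a cube environment map from the mosaic.
    `face_dirs` is an (H, W, 3) array of unit viewing directions, one per
    texel; `frames` is a list of (image, f, R) triples, the per-frame
    transformations recovered during stitching (conventions as above)."""
    h, w, _ = face_dirs.shape
    face = np.zeros((h, w), dtype=np.float32)
    flat = face.ravel()                           # view into `face`
    for img, f, R in frames:
        K = np.diag([f, f, 1.0])
        proj = face_dirs.reshape(-1, 3) @ (K @ R).T   # project directions
        z = proj[:, 2]
        with np.errstate(divide="ignore", invalid="ignore"):
            u = proj[:, 0] / z + img.shape[1] / 2.0   # back to pixel coords
            v = proj[:, 1] / z + img.shape[0] / 2.0   # (centered principal pt)
        ok = (z > 0) & (u >= 0) & (u < img.shape[1]) \
                     & (v >= 0) & (v < img.shape[0])
        ok &= flat == 0                               # only fill empty texels
        flat[ok] = img[v[ok].astype(int), u[ok].astype(int)]
    return face
```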
This allows us to use standard 3D graphics APIs and 3D model formats, and to use 3D graphics accelerators for texture mapping. The remainder of our paper is structured as follows. Sections 2 and 3 review our algorithms for panoramic mosaic construction using cylindrical coordinates and general perspective transforms. Section 4 describes our novel direct rotation recovery algorithm.
Section 5 presents our technique for estimating the focal length from perspective registrations. Section 6 discusses how to eliminate the “gap” in a panorama due to accumulated registration errors. Section 7 presents our algorithm for projecting our panoramas onto texture-mapped 3D models (environment maps). We close with a discussion and a description of ongoing and future work.