Panoramas come full circle
Daniel K L Oi looks at spherical panoramas and ‘Virtual Reality’ shooting
Digital photography has opened up possibilities either difficult or impossible to achieve traditionally. One example is panoramic photography: the creation of an image which encompasses the entire view surrounding a point in space, built up from separate images. These images can then be displayed on a computer as an interactive virtual reality movie, or else presented as a still image using a variety of cartographic projections.

The process of creating a spherical panorama starts with the acquisition of the source images. These should look out in all directions from a single fixed point in space, the point of perspective. This is achieved by rotating the camera and lens around the no-parallax point (NPP) of the lens, usually by means of a special panoramic head.

The number of shots required to cover the whole sphere depends on the angle of view of the lens and camera combination. For convenience, fisheye lenses are often used, though any wide-angle lens may be suitable. For instance, an 8mm fisheye lens on an APS-C camera requires four horizontal shots, one zenith shot (taken straight up) and one nadir shot (taken straight down). In comparison, a 17mm rectilinear lens on the same camera requires three rows of ten shots each, plus one zenith shot and one nadir shot – see the montage below.

Special care is needed when taking the nadir shot, as the tripod would otherwise be in the way. The nadir shot can be taken handheld, with the camera held as close to the original position as possible and the photographer stepping back. The resulting shot can then be used to patch the hole at the bottom of the sphere.

[Montage: a sequence of 32 images covering the sphere. These were taken using a 17mm lens on the Alpha A700 mounted on a Nodal Ninja 3 panoramic head.]
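As a rough guide to the shot counts quoted above for a rectilinear lens, the geometry can be sketched in Python. The 25% overlap fraction, portrait orientation and APS-C sensor dimensions used here are illustrative assumptions, not figures from the article:

```python
import math

def shots_needed(focal_mm, sensor_w_mm=23.5, sensor_h_mm=15.6, overlap=0.25):
    """Rough shot count for a spherical panorama with a rectilinear lens.

    Assumes portrait orientation (the sensor's long side runs vertically)
    and a fixed overlap fraction between neighbouring frames.
    """
    # Rectilinear angle of view: 2 * atan(sensor_dimension / (2 * focal_length)).
    h_fov = math.degrees(2 * math.atan(sensor_h_mm / (2 * focal_mm)))  # side to side
    v_fov = math.degrees(2 * math.atan(sensor_w_mm / (2 * focal_mm)))  # up and down

    # Shots per row must cover 360 degrees of yaw, less the overlap.
    per_row = math.ceil(360 / (h_fov * (1 - overlap)))
    # Rows must span 180 degrees of pitch, minus what one frame already covers
    # towards the poles; the zenith and nadir shots close the remaining caps.
    rows = math.ceil((180 - v_fov) / (v_fov * (1 - overlap)))
    return rows, per_row, rows * per_row + 2  # +2 for zenith and nadir

print(shots_needed(17))  # the 17mm case: 3 rows of 10, 32 shots in all
```

With these assumptions the 17mm lens comes out at three rows of ten plus the two pole shots, matching the 32-image montage.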
The position of the camera and lens is adjusted until the horizontal and vertical axes of the panoramic head coincide with the NPP.

After the full set of images (above) has been taken, they are imported into the computer and loaded into the panoramic stitching software. From the overlapping images, the software works out their relative positions and the distortion characteristics of the taking lens, in order to assemble the images on a virtual sphere surrounding the point of perspective. It does this by identifying common features, called control points, in overlapping regions of pairs of images. Control points can either be picked out manually, by the photographer marking corresponding pairs of points, or found automatically by the software (screen shot, right). After the images have been aligned and assembled (see
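The idea behind control-point alignment can be shown with a toy calculation: if two overlapping images share matched points, the difference in the horizontal angles of those points gives the relative yaw between the two shots. This is a minimal sketch assuming a simple pinhole (rectilinear) model; the function and its conventions are illustrative, not taken from any real stitching package:

```python
import math

def estimate_yaw_offset(pairs, h_fov_deg, image_width):
    """Estimate relative yaw (degrees) between two overlapping images.

    `pairs` holds (x_in_image_a, x_in_image_b) pixel columns of matched
    control points; a toy sketch, not a real stitcher.
    """
    # Focal length in pixels for a rectilinear lens of the given angle of view.
    f = image_width / (2 * math.tan(math.radians(h_fov_deg) / 2))
    cx = image_width / 2  # optical axis assumed at the image centre

    def angle(x):
        # Horizontal angle (degrees) of pixel column x from the optical axis.
        return math.degrees(math.atan((x - cx) / f))

    # Each matched pair votes for a yaw offset; average the votes.
    offsets = [angle(xa) - angle(xb) for xa, xb in pairs]
    return sum(offsets) / len(offsets)

# Two images with 90-degree fields of view: a feature at column 750 in
# image A appears at column 250 in image B, so B is yawed relative to A.
print(estimate_yaw_offset([(750, 250)], 90, 1000))
```

A real stitcher solves for yaw, pitch, roll and lens distortion over all images at once, but the principle is the same: minimise the disagreement between matched control points on the virtual sphere.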
photoworld 18