IRS-1C imagery is acquired in a continuous strip mode, and any subscene segment represents an artificial window extracted from a long digital image and defined in a uniform coordinate system. The fact that the IRS-1C PAN camera has three CCD arrays offers the possibility of adjusting the images acquired by them simultaneously, by exploiting knowledge of the inter-CCD alignment angles.
Currently, many remote sensing sensors exist that can be practically used for space cartography, including IRS-1C/1D PAN. Successful exploitation of the high accuracy potential of these systems depends on good mathematical models for sensor modelling. A number of papers have been published on different approaches for the position and attitude determination of these sensors (Westin, 1990; Dowman, 1991). The approaches differ in the use of constraints and in the methods of determining the initial values of the unknowns. These methods demand multiple control points for sensor modelling and orientation. The requirement of highly accurate control is a major problem in remote areas, where IRS-1C PAN images are most useful for topographic mapping. Therefore, with the aim of reducing the control requirements to a minimum, a model for the orientation of full IRS-1C PAN scenes was developed.
The approach was initially developed for a single image and strip, but has been extended in this study
to include a block of subscenes. The satellite position is derived using the collinearity condition equations.
The initial orbit is obtained from the given ephemeris data and refinement is carried out using an iterative
least squares solution. This can be done directly, as long as the digital image data is given in a fully
continuous stream of scan lines in a uniform time frame. The method described represents a rigorous
geometric reconstruction of ground coordinates from IRS-1C PAN images. This method of processing is
ideal when the following conditions exist:
• multiple, monoscopic coverage from different detectors during the same pass is available;
• minimal control points are available, or cost considerations require a reduction of the control used (in this model, a minimum of one GCP is required for processing); and
• the desired geometric accuracy is to be comparable with the input GCP accuracy or the resolution of the sensor.
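As a hedged illustration of the iterative least-squares refinement mentioned above (not the authors' implementation; the `project` callable is a placeholder standing in for the collinearity-based sensor model), a Gauss-Newton loop over the control-point residuals might look as follows:

```python
import numpy as np

def refine_parameters(params0, gcps, project, max_iter=10, tol=1e-8):
    """Gauss-Newton refinement of orbit/attitude parameters from GCP residuals.

    params0 : initial parameter vector derived from the ephemeris
    gcps    : list of (observed_image_xy, ground_xyz) control points
    project : callable(params, ground_xyz) -> predicted (line, pixel);
              a stand-in for the collinearity-based sensor model
    """
    params = np.asarray(params0, dtype=float)
    observed = np.concatenate([np.asarray(xy, dtype=float) for xy, _ in gcps])

    def predict(p):
        return np.concatenate([np.asarray(project(p, xyz), dtype=float)
                               for _, xyz in gcps])

    for _ in range(max_iter):
        residuals = observed - predict(params)      # image-space misclosures
        # numerical Jacobian of the predicted image coordinates
        jac = np.zeros((residuals.size, params.size))
        eps = 1e-6
        for j in range(params.size):
            step = np.zeros_like(params)
            step[j] = eps
            jac[:, j] = (predict(params + step) - predict(params)) / eps
        # solve jac * delta ~= residuals in the least-squares sense
        delta, *_ = np.linalg.lstsq(jac, residuals, rcond=None)
        params = params + delta
        if np.linalg.norm(delta) < tol:
            break
    return params

# Tiny demonstration with a linear "sensor model" (purely illustrative)
demo_gcps = [((2 * x - y, x + y), (x, y, 0.0)) for x, y in [(1, 2), (3, 1), (2, 5)]]
demo_project = lambda p, g: (p[0] * g[0] + p[1] * g[1], g[0] + g[1])
print(refine_parameters([1.0, 1.0], demo_gcps, demo_project))  # approximately [2., -1.]
```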
2. Adjustment model
The model is based on a previous model developed for SPOT satellite image data (Radhadevi et al., 1994). It was further extended for twin-strips (i.e., parallel strips) of SPOT data simultaneously acquired with a twin-instrument configuration (Krishnan et al., 1998). The ephemeris data-stream from both instruments in a twin-pair configuration can be appended and the pair can be used as images in the same strip.
Orientation of IRS-1C PAN imagery poses other problems than those of SPOT because of the differences in the internal geometry of the sensors, payload steering mechanism, projection geometry of the optical instrument, coordinate system in which the ephemeris is given and the file configurations. Taking into account all these factors, the rectification procedure for IRS-1C PAN subscenes was developed. A detailed description of this model can be found in Radhadevi et al. (1998). A multiscene adjustment model for scenes in the same pass has been developed as an extension of this subscene adjustment method. The experience gained through twin-strip adjustment of SPOT satellite images helped us model possible alignment errors of IRS-1C PAN sensors and correct for them. The geometry of this extended image can be modelled with a single GCP, as for a subscene. The requirement of only one GCP for orbit attitude modelling is made possible by the following approximations and information:
• the short path of the orbit is approximated as a circular arc;
• the slowly varying orbital parameters are modelled as a linear function of time;
• the attitude parameters and the radius of the orbit are modelled as third-order polynomials in time;
• knowledge of the internal geometry and the angular relationships between the CCD arrays;
• ephemeris data to derive the initial values of the parameters; and
• appropriate weight matrices for measurements and parameters.

With these approximations and a priori information, the set of orbital parameters needed in the adjustment can be reduced to four: i, inclination; Ω, longitude of the ascending node; t, time at the ascending node; and r0, orbital radius.
The attitude parameters to be modelled are: ω, roll; φ, pitch; and κ, yaw.
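By way of illustration only (the coefficient values below are placeholders, not values from the paper), the reduced parameter set can be evaluated at an arbitrary scan-line time as a linear orbit model plus third-order attitude and radius polynomials:

```python
import numpy as np

def orbit_attitude_state(t, orbit_linear, attitude_coeffs, radius_coeffs):
    """Evaluate the simplified orbit/attitude parameterisation at time t.

    orbit_linear    : dict name -> (value_at_t0, rate); i, Omega and the
                      ascending-node time are modelled as linear in time
    attitude_coeffs : 3 x 4 coefficients (highest order first) for the
                      third-order roll, pitch and yaw polynomials
    radius_coeffs   : 4 coefficients for the third-order orbital radius model
    """
    orbit = {name: a + b * t for name, (a, b) in orbit_linear.items()}
    roll, pitch, yaw = (float(np.polyval(c, t)) for c in attitude_coeffs)
    radius = float(np.polyval(radius_coeffs, t))
    return orbit, (roll, pitch, yaw), radius

# Placeholder numbers for illustration (degrees, seconds, kilometres)
orbit_linear = {"i": (98.69, 0.0), "Omega": (120.0, -1.0e-5), "t_node": (0.0, 0.0)}
attitude_coeffs = np.zeros((3, 4))          # nominal attitude: roll = pitch = yaw = 0
radius_coeffs = [0.0, 0.0, 0.0, 7195.0]     # constant radius as a degenerate cubic
print(orbit_attitude_state(10.0, orbit_linear, attitude_coeffs, radius_coeffs))
```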
For the full PAN scene adjustment, it is necessary to regard the four orbital parameters as corrections to
the set of estimated values. Start values of the sim- plified orbital model parameters are estimated from
the appended ephemeris of all scenes. By using the same corrections to all scenes, the extended image is
kept rigid and the orbital parameters for the whole pass are kept to four. Similarly, start values for
attitudes are calculated from the ephemeris data. This is possible because the overlap between successive
scenes allows an uninterrupted sequence of attitude measurements to be constructed. With these start
values, the same corrections can be applied to all scenes.
Jacobsen (1988) mentions four types of possible geometric problems when adjusting the scenes from the three arrays of the IRS-1C PAN camera. The first one is the different focal length of the three CCDs, which is reflected as a scale change. In his model, Jacobsen accommodates scale changes through the use of additional parameters. A second possible problem is that the sensors may be in the same plane and parallel to each other but not forming one line, i.e., shifted in relation to each other. In our model, the shift of the CCD-line sensors in the orbit direction is corrected by time-tagging and computing the number of overlap lines in the along-track direction. The third possible error, a rotation in the image plane defined by each CCD line and the orbit direction, is corrected through the sensor-body transformation matrix (containing the alignment offsets of the CCD arrays), which is included in the rotation matrix. The errors due to the difference of these alignment angles from their pre-launch values are attributed to the attitude angles and will be corrected while updating the attitude in the adjustment. The fourth problem is a rotation about an axis in the image plane (out-of-plane rotation). This will also cause a scale change in the CCD direction. Using the effective focal length values for each CCD, this error is accounted for in our model. Thus, the main error when handling a full PAN scene is due to the discrepancy in the focal length.
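To get a feel for why the focal-length discrepancy dominates, consider the following back-of-the-envelope calculation (the nominal focal length anticipates the roughly 982 mm value reported in Section 2.1.1; the 4096-pixel array length and the 1 mm discrepancy are assumed example values, not results from the paper):

```python
# Back-of-the-envelope estimate of the scale error caused by a focal-length
# discrepancy between two CCDs.  The numbers below are illustrative only.
f_nominal_mm = 982.0        # approximate in-flight focal length (Section 2.1.1)
delta_f_mm = 1.0            # hypothetical discrepancy between two CCDs
pixels_per_array = 4096     # assumed length of one PAN CCD array

relative_scale_error = delta_f_mm / f_nominal_mm
edge_misregistration_px = relative_scale_error * pixels_per_array
print(f"relative scale error:         {relative_scale_error:.5f}")
print(f"misregistration at array end: {edge_misregistration_px:.1f} pixels")
```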
Srivastava and Alurkar (1997) explained a method for in-flight calibration of IRS-1C image geometry. The parameters they considered include the focal length, the focal plane placement of the PAN CCDs and the alignment between the PAN and LISS-III cameras. The first two parameters are the same as those we explained in the previous paragraph. The third parameter is irrelevant when we handle only PAN images. In the swath model described by Srivastava, LISS-III is considered the primary sensor and orientation parameters for the other IRS cameras are determined by using inter-camera alignment angles. Srivastava also demonstrates that errors in the across-track direction are greater than those in the along-track direction, clearly showing a scale error that can be accounted for by determining the effective focal lengths for each CCD compatible with the image measurements. Even though the roll angles calculated from the first and last pixel positions are slightly different for the three CCDs, Srivastava gives only one corrected focal length value (possibly for the middle CCD).

In this study, the modelling approach for full scenes is modified from the single-scene approach to take care of the following factors:
(1) The CCD arrays cannot be placed edge-to-edge without doing an in-flight calibration for each array. The focal length of each array will vary slightly.
(2) The weight matrices for the attitude parameters in the adjustment model should be chosen properly to take care of errors due to attitude variations within the full scene.
(3) The areas of image overlap are to be computed automatically.
(4) The scene centre time, scene centre pixel, total number of lines, total pixels, look direction of each pixel, etc. should be computed from the overlaps, considering all scenes from the three detectors as a single, very large image.
The geometric modelling of full scenes is then done by absolute and relative-to-each-other determination of the focal lengths of the three CCDs (preprocessing), automatic determination of overlap and sidelap areas, and block adjustment. These steps are described in more detail below.
2.1. Preprocessing

Preprocessing requires computation of the absolute focal length for the middle array and the orientation and focal length of detectors 1 and 3 relative to detector 2. This preprocessing is not necessary for all image blocks being handled. Once a relationship between the focal lengths of the detectors is established, we can go directly to the determination of overlap areas and the block adjustment.
2.1.1. Absolute focal length computation of the middle array
Absolute focal length computation was performed using two different methods. In the first method, the exterior orientation parameters for the scene acquired by the middle detector were determined with different focal lengths. A detailed description of this model can be found in Radhadevi et al. (1998). A single-ray ground point intersection was accomplished with the corresponding focal lengths and the errors in the longitude direction were computed at checkpoints. It was noticed from the first data set of IRS-1C that there was a change in the focal length from the pre-launch value (973.9 mm). From Fig. 1, the value is approximately 982 mm. The absolute size of the focal length can hardly be calculated from in-flight measurements because of the unavoidable errors in the point transfer from ground to image. By repeating the above exercise with many data sets, we determined the focal length value as 982.30 mm for the IRS-1C PAN middle array.

In the second method, the focal length was also used as a parameter, along with the other orbit and attitude parameters in the model. Single-ray ground point intersection was performed with the computed focal length and the errors in the longitude direction were obtained at checkpoints. With this method, the focal length for the middle array was computed as 982.5 mm. Finally, an average focal length value after processing 40 scenes using both methods was computed as 982.48 mm.
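The first method above is essentially a one-dimensional search: orient the scene with a trial focal length, intersect the checkpoints, record the longitude error, and keep the focal length that minimises it. A minimal sketch, in which the orientation and single-ray intersection steps are abstracted into a user-supplied error function (an assumption, not the authors' code):

```python
import numpy as np

def estimate_focal_length(candidates_mm, longitude_rmse):
    """Return the trial focal length giving the smallest checkpoint error.

    candidates_mm  : iterable of trial focal lengths in millimetres
    longitude_rmse : callable f -> RMS longitude error at the checkpoints
                     after orienting the scene with focal length f
                     (stands in for the orientation and intersection steps)
    """
    candidates = np.asarray(list(candidates_mm), dtype=float)
    errors = np.array([longitude_rmse(f) for f in candidates])
    return candidates[int(np.argmin(errors))]

# Demonstration with a synthetic, V-shaped error curve (illustrative only)
trial_values = np.arange(975.0, 990.0, 0.5)
best = estimate_focal_length(trial_values, lambda f: abs(f - 982.3) * 12.0)
print(best)   # 982.5 mm for this synthetic curve
```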
2.1.2. Relative orientation and relative focal length computation of detectors 1 and 3
Fig. 1. Focal length vs. longitudinal error for detector 2 of the IRS-1C PAN camera (image acquired on 20 January 1996, path/row 100/60, subscene D7, look angle = -0.0002044025); GCPs and check points from a 1:50,000-scale map.

A small error in the focal length is of little importance when modelling a single scene because it will be absorbed by the orbital parameters. However, when adjusting the images acquired by the three detectors together, errors in focal length will cause different scales along a line in the three images. If we determine the variations in focal lengths of detectors 1 and 3 in relation to detector 2, these scale distortions can be eliminated. If a small error exists in the absolute focal length of the middle array, it will be absorbed and corrected through the orbital parameters for the whole block. The steps used for relative focal length computation are described below:
Step 1. Considering the computed absolute focal length of detector 2 as error-free, orbit attitude modelling of the middle image is accomplished.

Step 2. Conjugate points are identified between the images acquired by detectors 1 and 2, and by detectors 2 and 3. These points are assigned ground coordinates derived from the middle scene.

Step 3. Attitude angles and focal lengths are treated as parameters in modelling the detector 1 and 3 scenes. Orbital parameters are constrained to be equal to those in the middle scene. The attitude parameters are then allowed to vary to account for deviations from the nominal pointing directions of the detectors. Additional equations are included in the attitude model, treating the focal length also as a parameter. Solving for this scale distortion, the images from the three CCD arrays can be placed edge to edge.
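In the same hedged spirit, Step 3 can be viewed as a constrained adjustment in which the orbital parameters are copied from the middle scene and frozen, while the attitude polynomial coefficients and the focal length of the side detector remain free. A minimal illustration of that bookkeeping (all names and numbers are placeholders):

```python
import numpy as np

def side_detector_parameters(middle_orbit, attitude_coeffs, focal_mm):
    """Assemble the parameter vector for a side detector (Step 3 sketch).

    middle_orbit    : the four orbital parameters of the middle scene
                      (i, Omega, t_node, r0), copied and held fixed
    attitude_coeffs : initial 3 x 4 attitude polynomial coefficients (free)
    focal_mm        : initial focal length of the side detector (free)
    Returns the stacked parameter vector and a boolean mask of free entries.
    """
    params = np.concatenate([np.asarray(middle_orbit, dtype=float),
                             np.ravel(attitude_coeffs),
                             [focal_mm]])
    free = np.concatenate([np.zeros(4, dtype=bool),          # orbit fixed
                           np.ones(np.size(attitude_coeffs), dtype=bool),
                           np.ones(1, dtype=bool)])          # focal length free
    return params, free

params, free = side_detector_parameters([98.69, 120.0, 0.0, 7195.0],
                                        np.zeros((3, 4)), 982.48)
print(params.size, int(free.sum()))   # 17 parameters, 13 of them adjustable
```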
2.2. Automatic computation of overlap and sidelap areas
Computation of overlap and sidelap areas is an important step before block adjustment. Since the sidelap between detectors 2 and 3 of the IRS-1C PAN camera is minimal at some latitudes, manual conjugate point identification from displayed images is very difficult, if not impossible. We have automated this process with correlation procedures.

Two images acquired from different detectors on the same pass are selected. The scene start-times of the images indicate which image was acquired first. From the left image, a column of pixels from the right top corner is considered, as is a column of the same length from the left top corner of the right image. The correlation coefficient between these two columns is then calculated. The column in the image which was acquired later (say the 'L' image) is kept fixed throughout the process. From the other image (the 'R' image), the column to be correlated is moved horizontally in steps of one pixel. For each new column from the 'R' image, a correlation coefficient is calculated with the fixed column of the 'L' image. This is repeated 500 or 700 times, depending upon a rough approximation of the sidelap in pixels. From these stepwise correlation coefficients, the maximum coefficient is determined and stored, along with the column position at which it is obtained. Next, the column in the 'R' image is moved down vertically in steps of one scan line and the previous steps are repeated. From the correlation coefficients obtained in the vertical matching, a new maximum is determined. The sidelap between the two images can then be computed from the position of the maximum correlation coefficient.

With this method, the overlap and sidelap computation between the images in a full scene can be performed automatically.
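A simplified sketch of this correlation search is given below. The window length, the search ranges and the synthetic test images are assumptions for illustration, and the horizontal and vertical searches are combined into one exhaustive loop rather than the two sequential passes described above:

```python
import numpy as np

def column_correlation(a, b):
    """Normalised correlation coefficient between two equal-length columns."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def find_sidelap(l_image, r_image, col_len=512, max_col_shift=700, max_line_shift=100):
    """Locate the sidelap between two detector images (sketch).

    A column near the right top corner of the later-acquired ('L') image is
    kept fixed; columns of the other ('R') image are tested pixel by pixel
    horizontally and line by line vertically, and the position of the maximum
    correlation coefficient gives the sidelap in pixels and lines.
    """
    fixed = l_image[:col_len, -1].astype(float)
    best = (-1.0, 0, 0)                      # (coefficient, column, line)
    for line in range(max_line_shift):
        if line + col_len > r_image.shape[0]:
            break
        for col in range(min(max_col_shift, r_image.shape[1])):
            moving = r_image[line:line + col_len, col].astype(float)
            coeff = column_correlation(fixed, moving)
            if coeff > best[0]:
                best = (coeff, col, line)
    return best

# Synthetic demonstration: two noise images sharing a 40-pixel-wide strip
rng = np.random.default_rng(0)
left = rng.integers(0, 255, size=(800, 300)).astype(float)
right = rng.integers(0, 255, size=(800, 300)).astype(float)
right[:, :40] = left[:, -40:]                # simulate the sidelap
print(find_sidelap(left, right))             # maximum near column 39, line 0
```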
2.3. Block adjustment

Table 1
Test details

Test  Path/row  Subscenes adjusted                  Date of pass     Look angle
1     100/60    C1; C2; C3; C4; C5; C6; C7; C8; C9  11 April 1996    -17.98°
2     100/60    D1; D4; D5; D6; D7; D8; D9          20 January 1996  +0.01°

Table 2
RMS errors obtained for two tests with different sources of control and check points

Test  No. of GCPs  No. of check points  Source of control/check points  Latitude RMS error (m)  Longitude RMS error (m)
1     1            10                   Surveyed                        12.6                    10.4
1     1            40                   1:25,000-scale map              30.8                    36.4
1     1            26                   1:50,000-scale map              45.6                    45.4
2     1            7                    Surveyed                        8.4                     12.4
2     1            41                   1:25,000-scale map              37.8                    42.0
2     1            31                   1:50,000-scale map              55.2                    41.2

Fig. 2. Planimetric error vectors of test 1 with one surveyed GCP and three check point sources.

Block adjustment is performed using the new focal length values. All three detectors are treated as a single array and time tagging is accomplished using the computed overlap and sidelap. The ephemeris data from the different detectors can then be easily appended. We can adjust the block, even when one or two scenes in the beginning or at the end of the block are missing. However, the middle scene must be available. The centre pixel and the centre line can be computed from the overlap and sidelap. The following parameters are computed:
• Roll angle at the centre pixel = roll at the first pixel of detector 1 + (centre pixel − 1) × (roll at the last pixel of detector 3 − roll at the first pixel of detector 1) / total number of pixels.
• Look angle = mirror step number × mirror step size + roll at the centre pixel.

The adjustment using the orbit attitude modelling approach described by Radhadevi et al. (1998) can now be performed. The effect of small errors in the look angle will be corrected by updating the attitude angles. To account for the scale variations between the detectors, the relative focal length values computed for different image points are employed.
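The two bulleted quantities above amount to a linear interpolation of roll across the combined array followed by a mirror-step offset. An illustrative computation with placeholder values (the angles, the mirror step size and the pixel counts below are not values from the paper):

```python
def roll_at_centre_pixel(roll_first_d1, roll_last_d3, centre_pixel, total_pixels):
    """Linearly interpolate roll from the first pixel of detector 1 to the
    last pixel of detector 3, evaluated at the centre pixel."""
    return roll_first_d1 + (centre_pixel - 1) * (roll_last_d3 - roll_first_d1) / total_pixels

def look_angle(mirror_step_number, mirror_step_size, roll_centre):
    """Look angle = mirror step number x mirror step size + roll at centre pixel."""
    return mirror_step_number * mirror_step_size + roll_centre

# Placeholder values in degrees
roll_centre = roll_at_centre_pixel(-0.020, 0.030, centre_pixel=6000, total_pixels=12000)
print(roll_centre)                                   # about 0.005 degrees
print(look_angle(-37, 0.5, roll_centre))             # about -18.5 degrees
```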
3. Test details