Abstract: Calibration of a multi-camera network has been studied widely since the early days of photogrammetry, and many authors have presented calibration algorithms with their relative advantages and disadvantages. In a stereovision system, multiple-view reconstruction is a challenging task; however, the complete computational procedure has not previously been presented in detail. In this work we address the problem that, although a world coordinate point is fixed in space, the image coordinates of that 3D point vary with the position and orientation of each camera. From a computer vision perspective this is undesirable: the system should be designed so that the image coordinates of a fixed world point remain the same irrespective of the position and orientation of the cameras. We achieve this as follows. First, the parameters of each camera are estimated in its own local coordinate system. Then, using global coordinate data, all local coordinate data of the stereo cameras are transformed into a single global coordinate system, so that everything is registered in that common frame. After these transformations, the image coordinates computed for the fixed world point take the same value for all camera positions and orientations; that is, the whole system is calibrated.
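The registration step described above can be summarized as composing each camera's locally estimated extrinsics with a rigid transform from the global frame to that camera's local frame, and then projecting the fixed world point through the resulting global extrinsics. The following is a minimal NumPy sketch of this idea, not the paper's exact procedure; the function names, frame conventions, and numeric values are illustrative assumptions.

```python
# Minimal sketch of local-to-global registration and pinhole projection.
# All frame names and numbers below are assumptions for illustration.
import numpy as np

def to_global_extrinsics(R_lc, t_lc, R_gl, t_gl):
    """Compose local->camera extrinsics (R_lc, t_lc) with a global->local
    rigid transform (R_gl, t_gl): X_c = R_lc (R_gl X_g + t_gl) + t_lc."""
    R_gc = R_lc @ R_gl
    t_gc = R_lc @ t_gl + t_lc
    return R_gc, t_gc

def project(K, R_gc, t_gc, X_g):
    """Pinhole projection of a 3D point expressed in the global frame."""
    X_c = R_gc @ X_g + t_gc            # global -> camera coordinates
    u = K @ X_c                        # homogeneous pixel coordinates
    return u[:2] / u[2]                # normalize to pixel coordinates

# Example: one fixed world point, one camera whose pose was estimated locally.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])                  # assumed intrinsics
R_lc, t_lc = np.eye(3), np.array([0.0, 0.0, 5.0])      # local calibration result
R_gl, t_gl = np.eye(3), np.array([1.0, 0.0, 0.0])      # registration to global frame
X_g = np.array([0.5, -0.2, 10.0])                       # fixed world point

R_gc, t_gc = to_global_extrinsics(R_lc, t_lc, R_gl, t_gl)
print(project(K, R_gc, t_gc, X_g))
```

Once every camera in the network carries extrinsics expressed in the same global frame, the projection of the fixed world point depends only on that frame, which is the consistency property the abstract refers to.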