In Malta news today, I have been working on my depth recognition software. On the flight over I spent a good stretch of time reading about a variety of depth recognition algorithms. The simplest of these is the Fixed Window algorithm, which sums the differences in pixel intensities over a box, or window, of pixels of fixed size. The window is kept stationary on the left image while it is slid across the right image. The horizontal distance between the two best-matching windows, that is, the two windows with the smallest total difference in pixel intensity, gives the disparity at that point in the image. So the disparity is simply the horizontal offset of the right window relative to the left one. I have been implementing this algorithm to detect depth using images from our GoPro cameras. Currently, the code works for very simple test images that I constructed in Microsoft Paint, but not yet for more complex images.
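The fixed-window matching described above can be sketched roughly like this. This is a minimal illustration using the sum of absolute differences (SAD) as the window cost; the window size and disparity search range are arbitrary placeholders, not the values from my actual GoPro pipeline, and the images are plain grayscale grids (lists of lists) rather than real camera frames.

```python
def sad(left, right, y, xl, xr, half):
    """Sum of absolute intensity differences over a (2*half+1)^2 window,
    comparing the left window centered at (y, xl) with the right window
    centered at (y, xr)."""
    total = 0
    for dy in range(-half, half + 1):
        for dx in range(-half, half + 1):
            total += abs(left[y + dy][xl + dx] - right[y + dy][xr + dx])
    return total

def disparity_at(left, right, y, x, half=1, max_disp=4):
    """Disparity at (y, x): the horizontal shift of the best-matching
    right-image window while the left-image window stays fixed."""
    best_d, best_cost = 0, float("inf")
    for d in range(max_disp + 1):
        if x - d - half < 0:  # shifted window must stay inside the image
            break
        cost = sad(left, right, y, x, x - d, half)
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d
```

A full depth map would just call `disparity_at` for every pixel, which is exactly why the naive version is so slow on large images.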
One downside of this algorithm is that it computes the disparity at every single pixel in the image. The images from the GoPros are 8 megapixels before my correction software converts and updates them, so the depth map for a full image takes more than an hour to compute. Nonetheless, once everything is working, these high-resolution images will give us a precise measurement of distance from our underwater images. Time permitting, I hope to implement more efficient depth computation algorithms later, such as Fast Bilateral Stereo and Fast Segment-Driven Detection. Both produce fast but noisy results that can then be refined with a Locally Consistent Stereo algorithm.
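For turning a disparity map into the actual distance measurements mentioned above, the standard pinhole-stereo relation Z = f * B / d applies. As a hedged sketch only: the focal length and baseline below are made-up placeholder values, not our real calibration numbers.

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Distance to the scene point (same units as the baseline),
    via the pinhole-stereo relation Z = f * B / d."""
    if disparity_px <= 0:
        return float("inf")  # zero disparity means the point is at infinity
    return focal_px * baseline_m / disparity_px

# e.g. with an assumed 1000 px focal length and a 10 cm baseline,
# a 10 px disparity corresponds to a point 10 m away
```

Larger disparities mean closer points, which is why the precision of the window matching matters most for distant surfaces.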
The other members of the team were working on their own coding projects today. Eric made a great deal of progress with Unity3D and Google Earth. Tyler was similarly productive with his image correction software. He used several thresholding techniques to mask off the non-textured areas of the image. Ultimately, this will allow us to texture images from the ROV onto the sonar- and image-generated cistern meshes, because it removes the dark water regions from the textures and leaves only the wall imagery. Both of us will be working on the projective texturing code over the next few days.
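The kind of thresholding mask described above can be sketched very simply: pixels darker than some cutoff (open water) are marked for removal, leaving only the brighter, textured wall regions. This is my own illustrative guess at the approach, not Tyler's actual code, and the threshold value is an arbitrary placeholder.

```python
def mask_dark_regions(image, threshold=30):
    """Return a binary mask over a grayscale image (list of lists):
    True where the pixel is bright enough to keep as wall texture,
    False where it is dark enough to be treated as open water."""
    return [[pixel >= threshold for pixel in row] for row in image]
```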