Robots and autonomous vehicles “see” with stereo vision, that is, by combining images captured from different vantage points. Controlling these platforms and performing scientific analysis with in-situ imagery requires accurate reconstruction of the viewed terrain, as well as accurate location and orientation (pose) estimates for the camera sensors.

To achieve this, Simultaneous Localization and Mapping (SLAM) and related engineering techniques are used to create 3D meshes and mosaics from multiple images. An important component of these techniques is the global optimization of camera information and the 3D locations of features observed in the imagery. Aligning and integrating images is particularly challenging when they are blurry or otherwise noisy. Although this process is understood at a general level, the large number of variables involved in SLAM optimization inhibits engineers’ ability to identify causal relationships between those variables and the outputs.
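The global optimization described above can be sketched as a nonlinear least-squares problem that jointly refines camera pose and 3D feature locations to minimize reprojection error. The following is a minimal, hypothetical illustration only; the pinhole model, focal length, and all variable names are assumptions for exposition, not part of VECTOR or any particular SLAM system:

```python
import numpy as np
from scipy.optimize import least_squares

F = 100.0  # assumed focal length of a toy pinhole camera (pixels)

def project(points3d, cam_t):
    """Project Nx3 points through a camera at translation cam_t
    (rotation omitted for brevity)."""
    p = points3d - cam_t             # move points into the camera frame
    return F * p[:, :2] / p[:, 2:3]  # perspective divide

def residuals(params, n_points, observed2d):
    """Flattened reprojection errors, the quantity the optimizer drives
    toward zero."""
    cam_t = params[:3]
    pts = params[3:].reshape(n_points, 3)
    return (project(pts, cam_t) - observed2d).ravel()

# Synthetic ground truth: four 3D features seen by a camera at the origin.
true_pts = np.array([[0.5, 0.2, 5.0], [-0.4, 0.1, 6.0],
                     [0.1, -0.3, 4.0], [-0.2, -0.1, 7.0]])
true_t = np.zeros(3)
obs = project(true_pts, true_t)

# Start from a perturbed guess and jointly optimize pose and structure.
x0 = np.concatenate([true_t + 0.05, (true_pts + 0.1).ravel()])
sol = least_squares(residuals, x0, args=(len(true_pts), obs))
```

Even in this tiny example the solver adjusts 15 coupled variables at once; a real SLAM problem couples thousands of poses and features, which is why the causal effect of any one variable on the output is hard to see without dedicated tooling.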

We have designed and developed VECTOR, a visualization tool that supports data exploration of, and insight into, these global optimization variables. Our tool allows scientists to improve the accuracy of the 3D mesh that is algorithmically derived from 2D imagery.