Image Acquisition of Critical Bridge Components Using a Vision-guided Autonomous Vehicle
Abstract
This research proposes a vision-guided autonomous navigation framework for unmanned vehicles performing image acquisition for bridge inspection. The proposed framework integrates RGB-D visual SLAM with semantic segmentation to detect and localize critical structural components such as columns. The detected components are converted into a parametric map, from which navigation goals for image collection are generated. The proposed approach is first validated in a synthetic bridge inspection environment using an unmanned ground vehicle. The feasibility of the framework is further studied through laboratory-scale prototyping and validation using a TurtleBot3 equipped with a Jetson TX2 onboard computer. In the simulation environment, the proposed framework achieves autonomous navigation to up to 6 columns and acquires image data with a 90% success rate for 3 columns. Furthermore, the performance evaluation in the real-world environment shows that the developed hardware-software prototype can navigate to and collect image data of up to 2 columns, with a success rate of more than 60% for navigating to the first column. The results indicate the significant potential of achieving autonomous navigation and image acquisition with limited onboard computational resources, contributing to more efficient and reliable bridge management.
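As a rough illustration of the goal-generation step summarized above (a sketch, not the authors' implementation), the snippet below assumes each detected column has already been reduced to a 2D centroid in the map frame and places camera viewpoints at a fixed standoff distance around it. The function name column_viewpoints and the parameters standoff and n_views are illustrative assumptions, not quantities reported in the paper.

```python
import math
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class NavGoal:
    """A 2D navigation goal: position (x, y) and heading (yaw, radians)."""
    x: float
    y: float
    yaw: float


def column_viewpoints(
    column_xy: Tuple[float, float],
    robot_xy: Tuple[float, float],
    standoff: float = 1.5,
    n_views: int = 3,
) -> List[NavGoal]:
    """Place n_views goals on a circle of radius standoff around a detected
    column, each oriented to face the column; the first goal lies on the side
    nearest the robot so the vehicle approaches before circling."""
    cx, cy = column_xy
    rx, ry = robot_xy
    # Angle from the column toward the robot: start the viewpoint arc there.
    start = math.atan2(ry - cy, rx - cx)
    goals = []
    for k in range(n_views):
        ang = start + 2.0 * math.pi * k / n_views
        gx = cx + standoff * math.cos(ang)
        gy = cy + standoff * math.sin(ang)
        # Heading points back at the column so the onboard camera images it.
        goals.append(NavGoal(gx, gy, math.atan2(cy - gy, cx - gx)))
    return goals


if __name__ == "__main__":
    # Hypothetical column centroid and robot position in the map frame.
    for g in column_viewpoints(column_xy=(4.0, 2.0), robot_xy=(0.0, 0.0)):
        print(f"goal: x={g.x:.2f}, y={g.y:.2f}, yaw={math.degrees(g.yaw):.1f} deg")
```

In practice, goals of this form would be dispatched to the vehicle's navigation stack one at a time, with an image captured at each pose before proceeding to the next column.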