Article Title
Simultaneous Localization and Mapping Using Stereoscopic Computer Vision for Autonomous Vehicles
Faculty Mentor
Daniel Ford
Abstract
A robotic system capable of autonomously making decisions based on its environment must first be capable of visualizing and recording its surroundings. This paper focuses on creating a robotic system that uses stereoscopic imaging from two offset camera feeds to create a 3D image. The robot generates 3D images from the cameras and converts them into a digital map. Furthermore, the robot is able to use this model to detect its own location within the digital map for navigational and mapping purposes. This process of simultaneous localization and mapping (SLAM) is a revolutionary procedure used to generate maps in real time. By combining these three aspects (stereoscopic imaging, map generation, and localization), the robotic system can generate an accurate 3D map of its region of operation. Stereoscopic imaging provides many benefits over conventional mapping methods, allowing for cheap, rapid, and detailed autonomous mapping. This paper demonstrates how a low-cost robotic system can independently generate a 3D map in real time.
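The core of stereoscopic imaging described above is triangulation: a point seen by both offset cameras appears at slightly different horizontal positions, and that disparity determines its depth. A minimal sketch of the depth-from-disparity relation Z = f * B / d is below; the focal length and baseline values are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

# Depth from stereo disparity: Z = f * B / d, where
#   f = focal length in pixels, B = baseline (offset between cameras),
#   d = per-pixel disparity between the left and right images.
# These calibration values are hypothetical, chosen only for illustration.
FOCAL_PX = 700.0     # assumed focal length in pixels
BASELINE_M = 0.12    # assumed horizontal offset between the cameras (m)

def depth_from_disparity(disparity: np.ndarray) -> np.ndarray:
    """Convert a disparity map (pixels) into a depth map (meters)."""
    depth = np.full(disparity.shape, np.inf)   # no match -> depth unknown
    valid = disparity > 0                      # zero disparity is invalid
    depth[valid] = FOCAL_PX * BASELINE_M / disparity[valid]
    return depth

# Toy 2x2 disparity map: larger disparity means the point is closer.
disp = np.array([[42.0, 21.0],
                 [0.0, 84.0]])
print(depth_from_disparity(disp))
```

Each depth image produced this way yields a set of 3D points that can be merged into the digital map while the robot tracks its own pose, which is the SLAM loop the abstract describes.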
Recommended Citation
Ferrese, Robert (2012) "Simultaneous Localization and Mapping Using Stereoscopic Computer Vision for Autonomous Vehicles," Quercus: Linfield Journal of Undergraduate Research: Vol. 1, Article 1.
Available at: https://digitalcommons.linfield.edu/quercus/vol1/iss1/1