Wednesday, December 4, 2024 10am
About this Event
Baskin Engineering, 1156 High Street, Santa Cruz, California 95064
Ground vehicle autonomy has myriad applications and has become an area of ongoing research and development for society as a whole: from search and rescue to autonomous driving and “robo-taxis” to advanced farming techniques, there are countless ways in which removing the human from the vehicle presents advantages in economics, safety, and efficiency. Underlying this entire field of research is the need for robust localization, perception, mapping, and control within the context of the immediate vehicle environment. More generally, Guidance (where to go), Navigation (where am I), and Control (how do I get where I want to go) together remain a difficult problem for autonomous ground vehicles.
To date, GNC solutions are largely created individually for the specific robot at hand, often with specialized considerations and constraints embedded directly in the solution. This work presents a unified GNC framework that can be easily adapted to heterogeneous autonomous ground vehicles and that incorporates diverse sensors and sensing modalities in a coherent and mathematically correct way.
In support of this research, we have developed an advanced ROS-enabled ground vehicle robot based on the Husky A200 chassis, along with a high-fidelity ROS-based simulation (RViz and Gazebo). The physical robot, which includes significant hardware and firmware upgrades, incorporates GPS, LiDAR, stereo cameras, RGB-D cameras, fiber-optic and MEMS gyros and accelerometers, and a minicomputer for on-board processing. We will develop a combined feedback control system and estimator that pairs an EKF with PID control: the EKF, implemented in Python, will take raw sensor measurements (GPS, inertial, and image data) and estimate both position and attitude. We will test the ground vehicle outdoors in structured and unstructured environments to validate this estimation and control architecture.
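As a rough illustration of that estimation-and-control pairing, the sketch below implements a planar EKF (IMU-driven prediction, GPS position updates) alongside a PID loop in Python. It is a minimal, hypothetical example, not the system described in the talk: the three-state model, gains, and noise values are all assumptions.

```python
# Hypothetical planar EKF + PID sketch (not the talk's implementation).
# State x = [px, py, yaw]; IMU/odometry drives the predict step, GPS fixes
# correct position. All noise values and gains are illustrative assumptions.
import numpy as np

class PlanarEKF:
    def __init__(self):
        self.x = np.zeros(3)                    # [px (m), py (m), yaw (rad)]
        self.P = np.eye(3)                      # state covariance
        self.Q = np.diag([0.05, 0.05, 0.01])    # process noise (assumed)
        self.R_gps = np.diag([2.0, 2.0])        # GPS noise (assumed)

    def predict(self, v, omega, dt):
        """Propagate with body speed v and yaw rate omega from the IMU."""
        px, py, yaw = self.x
        self.x = np.array([px + v * np.cos(yaw) * dt,
                           py + v * np.sin(yaw) * dt,
                           yaw + omega * dt])
        # Jacobian of the unicycle motion model with respect to the state
        F = np.array([[1.0, 0.0, -v * np.sin(yaw) * dt],
                      [0.0, 1.0,  v * np.cos(yaw) * dt],
                      [0.0, 0.0,  1.0]])
        self.P = F @ self.P @ F.T + self.Q * dt

    def update_gps(self, z):
        """Correct position with a GPS fix z = [px, py] in the local frame."""
        H = np.array([[1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0]])
        y = z - H @ self.x                      # innovation
        S = H @ self.P @ H.T + self.R_gps       # innovation covariance
        K = self.P @ H.T @ np.linalg.inv(S)     # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(3) - K @ H) @ self.P

class PID:
    """Textbook PID; the output would drive steering or yaw-rate commands."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral, self.prev_err = 0.0, None

    def step(self, err, dt):
        self.integral += err * dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv
```

In a full system the predict step would run at IMU rate, the GPS update at fix rate, and the estimated yaw would feed the controller's error term (e.g., pid.step(yaw_desired - ekf.x[2], dt)).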
The next steps include converting the GPS-aided inertial navigation software from C to Python, adding two RGB-D cameras as side-facing depth sensors, and combining the Kalman filter with graph-SLAM (grid-SLAM) algorithms, as sketched below. Finally, we will refine the vehicle's self-identification and parameter estimation, along with its state and pose estimation (localization and mapping), using the results of our testing and design iteration.
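To make the filter-plus-graph idea concrete, here is a deliberately tiny, hypothetical pose graph reduced to one dimension: odometry edges (as the filter would supply between keyframes) and a single loop-closure edge are fused by weighted least squares. Real graph-SLAM optimizes SE(2)/SE(3) poses with nonlinear solvers; none of the numbers below come from the project.

```python
# Toy 1-D pose graph: filter odometry edges plus one loop-closure edge,
# solved by weighted linear least squares. Purely illustrative.
import numpy as np

# Edges: (i, j, measured displacement p_j - p_i, information weight)
edges = [
    (0, 1, 1.0, 1.0),    # odometry
    (1, 2, 1.1, 1.0),    # odometry (slightly biased)
    (2, 3, 1.0, 1.0),    # odometry
    (0, 3, 2.9, 10.0),   # loop closure: ~2.9 m from the start, trusted more
]

n = 4                                     # poses p0..p3
rows, rhs = [], []
anchor = np.zeros(n); anchor[0] = 100.0   # strong prior pinning p0 = 0
rows.append(anchor); rhs.append(0.0)
for i, j, d, w in edges:
    row = np.zeros(n)
    row[i], row[j] = -np.sqrt(w), np.sqrt(w)  # weighted residual p_j - p_i - d
    rows.append(row); rhs.append(np.sqrt(w) * d)

A, b = np.vstack(rows), np.array(rhs)
p, *_ = np.linalg.lstsq(A, b, rcond=None)
print(p)  # loop closure pulls the biased odometry chain back into agreement
```

The same structure scales to 2-D and 3-D poses, where each edge's residual becomes nonlinear and the problem is solved iteratively (e.g., Gauss-Newton), which is what graph-SLAM back ends do.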
Our final goal is an autonomous off-road vehicle that can navigate an unstructured environment using whatever sensors are available, with the capability to self-identify misleading information and remove it from the navigation and environmental solution.
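One standard mechanism for that kind of self-identification is innovation gating: before fusing a measurement, test its Mahalanobis distance against a chi-square threshold and drop it if it is implausible. The sketch below is a hedged illustration assuming a 2-D measurement; the gate value and the drop-on-failure policy are illustrative choices, not the talk's stated method.

```python
# Hypothetical innovation-gated Kalman update: measurements whose Mahalanobis
# distance exceeds a chi-square gate are treated as misleading and skipped.
import numpy as np

def gated_update(x, P, z, H, R, gate=5.99):
    """Kalman update with outlier rejection.

    gate=5.99 is the 95% chi-square threshold for a 2-D measurement.
    Returns (x, P, accepted); state is unchanged when the gate rejects z.
    """
    y = z - H @ x                              # innovation
    S = H @ P @ H.T + R                        # innovation covariance
    d2 = float(y @ np.linalg.inv(S) @ y)       # squared Mahalanobis distance
    if d2 > gate:
        return x, P, False                     # measurement looks misleading
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new, True
```

Repeated rejections from the same sensor can then be escalated to remove that sensor from the solution entirely, which is closer to the stated goal above.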
Event Host: Haitham Hasan Fadhil Alsaade, Ph.D. Student, Electrical & Computer Engineering
Advisor: Gabriel Elkaim