This is the repository for the autonomous driving group project of Team 13 in the lecture Introduction to ROS, summer semester 2023.
📺 Video: Route 1
📺 Video: Route 2
| | global planner | local planner | controller |
|---|---|---|---|
| Route 1 | waypoint-global-planner | teb_local_planner | Ackermann controller |
| Route 2 | move_base built-in global_planner | move_base built-in base_local_planner | PID controller |
- Successfully working perception pipeline
- Successfully working path planning
- Successfully working trajectory planning
- Successfully avoiding other cars
- Successfully stopping and resuming at traffic lights
- Time to complete the mission: 212s
- Solving the problem without the semantic camera by using YOLOv5
- New messages: perception_msgs/Boundingboxes.msg, perception_msgs/Boundingbox.msg, perception_msgs/Trafficstate.msg
- New service: planning/srv/PlanGoal.srv
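The stop/resume behavior at traffic lights can be sketched as a minimal decision function, assuming the perception side publishes the light state as a string (as Trafficstate.msg suggests); the function name and speed values here are illustrative, not the actual controller code:

```python
def traffic_light_speed(light_state: str, cruise_speed: float = 5.0) -> float:
    """Map the detected traffic-light state to a target speed:
    stop on red, keep driving on green or when no light is visible."""
    if light_state == "red":
        return 0.0          # hold the car until the light turns green
    return cruise_speed     # "green" or "none": continue along the path
```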
Please clone this repository as described in the first step of Getting Started.
- Run the shell script to install all prerequisite packages:

```sh
chmod +x requirements.sh
./requirements.sh
```

- Or run the following commands to install the required packages.
Perception:

```sh
sudo apt install ros-noetic-octomap-rviz-plugins ros-noetic-octomap-server
pip install -r Autonomous_Driving_ws/src/perception/yolov5/src/yolov5/requirements.txt
```

Control:

```sh
sudo apt install ros-noetic-pid ros-noetic-robot-localization ros-noetic-smach-ros
```

Planning:

```sh
sudo apt-get install ros-noetic-navigation
sudo apt-get install ros-noetic-teb-local-planner
sudo apt-get install ros-noetic-ackermann-msgs
```
- Use the following command to clone the repository:

```sh
git clone [email protected]:i2ros_g13/i2ros_g13_autonomous_driving.git --depth 1
```

- Build it with:

```sh
catkin build
```
- Download the Unity Environment: https://syncandshare.lrz.de/getlink/fiEg9ocZ6Pc5iuEa4QqN1b/
- Unzip the Unity file and copy the files to .../devel/lib/simulation/
- Run the following commands to launch the simulation:

```sh
source devel/setup.bash
roslaunch simulation yolov5_simulation.launch
```
The car will start driving along the generated global and local paths while correctly following the traffic rules.
- perception_pkg: includes the node `trafficlights_detect_node`, which extracts the traffic-light area from the semantic image and then recognizes the color of the traffic light in the RGB image. It gives the controller the state of the traffic light to stop the car or let it move again. Additionally, a red or green bounding box is drawn and output by this node.
- depth_image_proc: a package that converts depth image data to 3D point cloud data.
- OctoMap: a package that builds occupancy grids and a map from 3D point cloud data.
- yolov5: a package that performs traffic light detection using YOLOv5. It includes the ROS node `trafficlights_detect_node_yolov5`, which runs inference with the trained YOLOv5 model and publishes the detection results.
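The color-recognition step that `trafficlights_detect_node` performs can be sketched as follows; this is a minimal illustration, assuming the semantic camera marks traffic lights with a known label color (the label value and function name are assumptions, not the actual node code):

```python
import numpy as np

# Hypothetical label color assigned to traffic lights in the semantic image.
TRAFFIC_LIGHT_LABEL = np.array([10, 10, 10], dtype=np.uint8)

def classify_light(semantic_img: np.ndarray, rgb_img: np.ndarray) -> str:
    """Locate the traffic-light pixels via the semantic mask, then
    compare red vs. green intensity of those pixels in the RGB image."""
    mask = np.all(semantic_img == TRAFFIC_LIGHT_LABEL, axis=-1)
    if not mask.any():
        return "none"                       # no traffic light in view
    pixels = rgb_img[mask]                  # N x 3 array of (R, G, B) values
    red, green = pixels[:, 0].mean(), pixels[:, 1].mean()
    return "red" if red > green else "green"
```

The real node additionally draws the bounding box on the output image; this sketch only covers the state decision.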
- planning (Route 1): the node `waypoint_sending_server` sends waypoints to the waypoint global planner; the node `global_path_planning_client` considers the traffic lights and the car's position and decides when and which waypoints are sent.
- auto2dnav: configuration of the move_base package
- move_base: the primary package used for planning and navigation tasks
- auto2dnav: the node `cmd_vel_to_ackermann_drive` converts the original desired linear and angular velocity into a desired linear velocity and steering angle
- controller_pkg (Route 1): the node `controller_node` calculates the final command values
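The cmd_vel-to-Ackermann conversion above follows the standard kinematic bicycle-model relation tan(δ) = L·ω / v; a minimal sketch, where the wheelbase value and function name are assumptions for illustration:

```python
import math

def convert_cmd_vel(v: float, omega: float, wheelbase: float = 1.9) -> tuple:
    """Convert a Twist-style command (linear velocity v, angular velocity omega)
    into an Ackermann-style command (linear velocity, steering angle),
    using the bicycle model: tan(delta) = wheelbase * omega / v."""
    if v == 0.0:
        return 0.0, 0.0          # no forward motion: keep the wheels straight
    steering_angle = math.atan(wheelbase * omega / v)
    return v, steering_angle
```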