
Stoplight, Streetsign and Person Detection



MAE-ECE 148 Final Project

Team 1 Winter 2023

Table of Contents
  1. Team Members
  2. Final Project
  3. Early Quarter
  4. Acknowledgments
  5. Contact

Team Members


Arturo Amaya (Left), Arjun Naageshwaran (Middle), Hariz Megat Zariman (Right)

Team Member Majors and Class Years

  • Arturo Amaya - Computer Engineering (EC26) - Class of 2023
  • Arjun Naageshwaran - MAE Ctrls & Robotics (MC34) - Class of 2024
  • Hariz Megat Zariman - Computer Engineering (EC26) - Class of 2024

Final Project

The goal of this project was a threefold application of computer vision. Using the OAK-D Lite camera, we aimed to recognize stoplights, traffic signs, and people, and to use those visual inputs to change the state of our car.

Primary Goals:

  1. Red, yellow, and green stoplights would make the car stop, slow down, and go.
  2. (Stretch goal) Stop signs, speed limit signs, and right turn signs would be recognized and make the car perform their designated functions.
  3. Individuals could be recognized and followed by the car with dynamic throttle and steering, while the car simultaneously recognized and performed the stoplight and street-sign functions.

Goal: Red, Yellow and Green Stoplights

The car was able to detect red and green lights successfully. When shown red, the car stopped completely and would only move forward when a green light was shown. Due to time constraints and inconsistent lighting conditions, the car could not detect yellow lights reliably, but when it did, it reduced throttle.
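As an illustration of the approach, here is a minimal sketch of color-threshold detection in HSV space, assuming BGR frames from the camera pipeline; the HSV ranges, pixel threshold, and function names are illustrative rather than our exact tuned values.

```python
import cv2
import numpy as np

# Illustrative HSV ranges; real thresholds depend heavily on lighting.
# Red wraps around the hue axis, so it needs two ranges.
COLOR_RANGES = {
    "red":    [(np.array([0, 120, 120]),   np.array([10, 255, 255])),
               (np.array([170, 120, 120]), np.array([180, 255, 255]))],
    "yellow": [(np.array([20, 120, 120]),  np.array([35, 255, 255]))],
    "green":  [(np.array([45, 100, 100]),  np.array([90, 255, 255]))],
}

def detect_light(frame_bgr, min_pixels=500):
    """Return 'red', 'yellow', 'green', or None for a BGR frame."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    for color, ranges in COLOR_RANGES.items():
        mask = np.zeros(hsv.shape[:2], dtype=np.uint8)
        for lo, hi in ranges:
            mask |= cv2.inRange(hsv, lo, hi)
        if cv2.countNonZero(mask) > min_pixels:
            return color
    return None

def throttle_for(color, cruise=0.3):
    """Map a detected light color to a throttle command."""
    if color == "red":
        return 0.0           # full stop
    if color == "yellow":
        return cruise * 0.5  # slow down
    return cruise            # green or no light: keep going
```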

Goal: Person Following

The car was able to identify people using the OAK-D camera. It could then follow a person by adjusting its steering based on how far the person strayed from the center of the camera's FOV. It also adjusted its throttle with distance: the farther away the person, the faster the car drove, stopping entirely when the person came too close.
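The follow behavior reduces to two proportional rules: steer toward the person's horizontal offset from frame center, and scale throttle down as the bounding box grows (i.e., as the person gets closer). Below is a minimal sketch assuming normalized bounding-box corners like those in a DepthAI detection; the Detection class, gains, and thresholds are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    # Normalized corners in [0, 1], mirroring DepthAI-style detections.
    xmin: float
    ymin: float
    xmax: float
    ymax: float

def follow_person(det, stop_fill=0.5, cruise=0.35):
    """Compute (steering, throttle) from one person detection."""
    # Steering: proportional to horizontal offset from frame center.
    cx = (det.xmin + det.xmax) / 2.0
    steering = (cx - 0.5) * 2.0  # -1 (far left) .. +1 (far right)

    # Throttle: a bigger box means a closer person, so drive slower,
    # and stop entirely once the box fills too much of the frame.
    fill = (det.xmax - det.xmin) * (det.ymax - det.ymin)
    throttle = 0.0 if fill >= stop_fill else cruise * (1.0 - fill / stop_fill)
    return steering, throttle

# Example: a person slightly right of center, ~9% of the frame area.
print(follow_person(Detection(0.45, 0.30, 0.75, 0.60)))
```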

Stretch Goal: Street Sign Detection

Our team was able to obtain the Mapillary Dataset (containing many real-world street signs) and extract the files and labels relevant to our project (such as the U-turn and stop signs). Unfortunately, due to time constraints, unlabeled images, and issues with training on the dataset (slow locally, incorrect format for the GPU cluster, among others), we were unable to reach this goal in time. However, we did implement the movement methods the car object would perform upon identifying these signs.

See car.py for these methods.
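As a rough idea of what those methods might look like, here is a hypothetical sketch; the vesc.run(steering, throttle) interface, method names, and timings below are illustrative assumptions, not the exact API in car.py or the credited VESC drive code.

```python
import time

class Car:
    """Sketch of sign reactions; see car.py for the real implementations."""

    CRUISE = 0.2  # nominal forward throttle (illustrative value)

    def __init__(self, vesc):
        # Assumed VESC-style object with run(steering, throttle),
        # where steering is in [0, 1] and 0.5 means straight ahead.
        self.vesc = vesc

    def on_stop_sign(self, pause=3.0):
        # Brake to a full stop, hold, then resume cruising.
        self.vesc.run(0.5, 0.0)
        time.sleep(pause)
        self.vesc.run(0.5, self.CRUISE)

    def on_u_turn_sign(self, turn_time=2.5):
        # Hold full steering lock at low throttle to reverse direction.
        self.vesc.run(1.0, self.CRUISE)
        time.sleep(turn_time)
        self.vesc.run(0.5, self.CRUISE)
```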

Final Project Documentation

Early Quarter

Mechanical Design

Prefabricated parts of the car were given to us, but parts like the base plate, dual camera mount, LIDAR stand, and cases for sensitive electronics (VESC, GNSS, Servo PWM, etc.) were either 3D printed or laser cut. Below are a few of the models compared to their real-life, 3D-printed counterparts.

CAD models: Top Camera Mount | Bottom Camera Mount | LIDAR Mount
Physical prints: Camera Mount | LIDAR Mount

Electronic Hardware

Our team used only the electronic components given to us. In particular, we focused primarily on the OAK-D camera, the Jetson Nano, and the GNSS board (used only for the 3 GPS autonomous laps). When assembling the circuit, we followed the circuit diagram provided by the TAs of the class.

Autonomous Laps

Below is a YouTube playlist of the car completing 3 autonomous laps under different conditions using the DonkeyCar framework.

Acknowledgments

Credited Code Examples:

  • Traffic light example Git
  • DepthAI Git
  • VESC Object Drive Folder
  • DonkeyCar Framework

Special thanks to Professor Jack Silberman, TAs Kishore Nukala and Moises Lopez (WI23), and all of our amazing classmates of Winter 2023.

Contact

  • Hariz Megat Zariman - hzariman@gmail.com | mqmegatz@ucsd.edu
  • Arjun Naageshwaran - arjnaagesh@gmail.com | anaagesh@ucsd.edu
  • Arturo Amaya - a1amaya@ucsd.edu | aramaya@ucsd.edu