
Yogesh Gajjar

Perception | Localization & Mapping | Computer Vision | Robotics

Resume

About Me

A hooman with dreams!

Coding? Pretty exciting, eh? Tell me about it!

Hello, I'm Yogesh, a software engineer based in Ann Arbor, MI. I'm an ambitious, self-motivated graduate with a strong inclination toward localization & mapping and computer vision/perception for autonomous driving. I'm an avid learner with a practical mindset. My team and I won the Ford sponsor award at the CalHacks 6.0 hackathon at UC Berkeley for developing IDEAS (Intelligent Driver Enhanced Assistance System), a driver-assistance platform built in 36 hours.

I work at Qualcomm, Inc. as a Localization and Mapping Algorithm Engineer, where we develop advanced localization algorithms that position the vehicle within the map with centimeter-level precision in real time.
I previously worked with Prof. Jyotirmoy Deshmukh at USC CPS-VIDA as a Graduate Researcher, where I developed the first open-source ROS wrapper for the DeepSORT multi-object tracking algorithm, publishing unique object IDs on the Jetson Xavier platform.

I like to create things that involve a camera, a LiDAR, and a car. The fusion makes it undeniably beautiful, especially the world it creates around it. The future of this fusion is near, and I'm excited to be a part of it.

I like programming, reading, travelling, and cooking. I've played national-level tennis and I'm a drummer. I follow tennis, cricket, and formula-1 racing.

P.S. Don't forget to check out my work behind the supercool background gif you saw above.

Experience

Qualcomm Automated Driving

Localization & Mapping Systems Engineer

I am part of a dynamic team collaborating with a prominent Original Equipment Manufacturer (OEM) to develop an autonomous driving solution that aims to bring higher-end automated driving features to consumer vehicles.
The work encompasses the design and implementation of complex algorithms, sensor fusion systems, and intelligent control modules, all aimed at delivering a safe, reliable, and user-friendly autonomous driving experience.


Languages : AUTOSAR C++, Python, Matlab
Sensors : LiDAR, Camera, Radar, GPS, IMU

Arriver Software, Inc

Localization & Mapping Algorithm Engineer

I was an integral member of the Localization and Mapping team, where I developed critical software components for the High-Precision Localization (HPL) and MapInterface modules. Beyond software development, I was actively engaged in rigorous in-vehicle software testing, ensuring the reliability and performance of our solutions.

I was involved across the software development lifecycle, from inception through the start of production, which gave me a comprehensive understanding of the entire development process and ensured the seamless integration of my work into the final product.

My tenure with the team was marked by a commitment to excellence and a drive to deliver innovative, high-quality software that directly impacted the success of the projects I worked on.


Languages : AUTOSAR C++, Python, Matlab
Sensors : LiDAR, Camera, Radar, GPS, IMU

USC Cyber-Physical Systems - VIDA Group

Graduate Researcher

I worked with Prof. Jyotirmoy Deshmukh on monitoring algorithms for the data streams generated by perception algorithms, with the goal of ensuring safe autonomy.
In parallel with our research, we built USC's first autonomous delivery robot prototype from start to finish: a 1/10th-scale outdoor robot equipped with sensors and capable of real-time perception (object detection, multi-object tracking) and mapping.


Responsibilities -

  • Led the development, build, and bring-up of USC's first autonomous delivery vehicle prototype from start to finish. The software stack includes object detection, visual odometry, localization, planning, and controls.
  • Accelerated deployment and testing of real-time perception, SLAM, and tracking networks on the autonomous delivery robot prototype.
  • Spearheaded research on developing Signal Temporal Logic (STL) monitors and vision-based Timed Quality Temporal Logic (TQTL) monitors for ROS to track and quantify perception robustness.
  • Integrated ROS multi-object tracking, lane-line detection, and semantic segmentation architectures with the AV software stack.
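
As a rough, hypothetical illustration of the monitoring idea (not the actual research code): the quantitative robustness of an STL "always" specification over a discrete perception stream, e.g. G(confidence >= threshold), reduces to the running minimum of the signed margin:

```python
def always_robustness(signal, threshold):
    """Robustness of G(x >= threshold) over a finite trace.

    Positive result => the spec holds over the whole trace, and the
    magnitude says how robustly; negative => it is violated somewhere.
    """
    return min(x - threshold for x in signal)


# e.g. a stream of detection-confidence samples from a perception node
trace = [0.92, 0.88, 0.75, 0.81]
margin = always_robustness(trace, threshold=0.5)
```

The same min/max recursion generalizes to nested temporal operators; an online monitor would maintain the minimum incrementally as samples arrive.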

Languages : C, C++, Python, Matlab
Sensors : ZED 3D Camera, Hokuyo LiDAR, VectorNav IMU
System on Chip : Jetson Xavier, Jetson TX2
Other : PCL, ROS, TensorFlow, Keras

Algorithms Include

  • Object detection using YOLOv3, capable of detecting road objects.
  • DeepSORT + YOLOv3 deep-learning-based multi-object tracking in ROS.
  • Google's Cartographer SLAM
  • ROS Navigation for autonomous navigation
  • Semantic segmentation using DeepLabV3+ (in progress)

Frenzy Labs, Inc

Computer Vision Intern

Frenzy Labs, Inc is an LA-based startup that develops self-labeling image technology to train computer vision systems to detect exact products in complex visual scenes. It scales high-quality image datasets in a fraction of the time enterprises incur today and reduces manual labeling workforces by 99%.


Responsibilities -

  • Proposed and developed a network architecture integrating a state-of-the-art R-CNN (RetinaNet) object detector with an H-CNN (EfficientNet) classifier, improving apparel classification/detection performance by 5%.
  • Devised an end-to-end testing pipeline with RESTful request dispatching using the Flask framework to accelerate model evaluation and deployment with reproducibility and traceability.
  • Optimized state-of-the-art H-CNN and R-CNN backbone networks for apparel localization and classification from images, smart labeling, and product search from photographs and content. Trained on a ~3M-image dataset, improving accuracy by 2%.


Languages : Python, HTML5
Libraries : Flask, Redis, CSS, TensorFlow, Keras, OpenCV

Reliance Industries Limited

Technical Manager

Responsibilities -

  • Designed plant control logic on Rockwell Automation’s Programmable Logic Controllers (PLC), Schneider Electric Distributed Control System (DCS), and the Emergency Shutdown System.
  • Implemented process control schemes with PID control for efficient and reliable operation of the reactor, cooler, and heater.
  • Implemented design changes in field devices during plant turnaround and integrated control loops on the plant Distributed Control System (DCS).

Education

Udacity

December 2020 - March 2021

Sensor Fusion Nanodegree

Coursework & Projects :

University of Southern California

January 2019 - December 2020

Master of Science in Electrical & Computer Engineering

GPA: 3.64/4.0

Coursework :

Institute of Technology, Nirma University

July 2011 - May 2015

Bachelor of Technology in Instrumentation and Control Engineering

GPA: 8.02/10.0

Coursework :

Projects

2D Road Object Detection using Yolov3

  • Applied a deep-learning approach to detect road objects using the popular YOLOv3 object detection network.
  • The network was trained on the Berkeley DeepDrive 100K dataset from weights pre-trained on ImageNet, using transfer learning: the convolutional layers were frozen and the fully connected layers were trained on the dataset.
  • Deployed the trained weights on an f1-tenth vehicle for real-time road object detection.
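
The freezing trick above can be sketched in plain Python (a conceptual stand-in, not the actual Darknet training code): frozen layers keep their pre-trained weights, and only the trainable layers receive gradient updates.

```python
def sgd_step(params, grads, frozen, lr=0.01):
    """One SGD update that skips frozen parameters.

    params/grads map layer names to (scalar) weights/gradients;
    `frozen` is the set of layer names to leave untouched.
    """
    return {name: (w if name in frozen else w - lr * grads[name])
            for name, w in params.items()}


# toy model: pre-trained conv layers stay fixed, fc head gets trained
params = {"conv1": 0.8, "conv2": -0.3, "fc": 0.5}
grads = {"conv1": 2.0, "conv2": 1.5, "fc": 4.0}
params = sgd_step(params, grads, frozen={"conv1", "conv2"}, lr=0.1)
```

In a real framework this corresponds to marking layers non-trainable (e.g. excluding their parameters from the optimizer) before fine-tuning the head.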

Stack - C++, ROS, Darknet, Python, Jetson Xavier

View Project

3D LiDAR Obstacle Detection

  • Built a road obstacle detection pipeline for LiDAR point cloud data using C++ and PCL.
  • The pipeline uses my own RANSAC plane segmentation and KD-tree clustering implementations, followed by fitting a 3D bounding box around each obstacle.
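
The RANSAC stage can be sketched in pure Python (an illustrative simplification of the C++/PCL version; `iters` and `tol` are made-up parameters): repeatedly fit a plane through three random points and keep the plane with the most inliers, which for road scenes is typically the ground.

```python
import random


def fit_plane(p1, p2, p3):
    """Plane (a, b, c, d) with ax + by + cz + d = 0 through three points."""
    ux, uy, uz = (p2[i] - p1[i] for i in range(3))
    vx, vy, vz = (p3[i] - p1[i] for i in range(3))
    a, b, c = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
    return a, b, c, -(a * p1[0] + b * p1[1] + c * p1[2])


def ransac_plane(cloud, iters=100, tol=0.2, seed=0):
    """Return indices of the dominant plane's inliers (e.g. the road)."""
    rng = random.Random(seed)
    best = []
    for _ in range(iters):
        a, b, c, d = fit_plane(*rng.sample(cloud, 3))
        norm = (a * a + b * b + c * c) ** 0.5
        if norm == 0:          # degenerate (collinear) sample, try again
            continue
        inliers = [i for i, (x, y, z) in enumerate(cloud)
                   if abs(a * x + b * y + c * z + d) / norm <= tol]
        if len(inliers) > len(best):
            best = inliers
    return best
```

Removing the inliers leaves the obstacle points, which the KD-tree clustering stage then groups into individual objects.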

Stack - C++, PCL, Point Cloud Data

View Project

Traffic Light Detection using Yolov3

  • Applied a deep-learning approach to detect traffic lights using the popular YOLOv3 object detection network.
  • The network was trained on the Bosch Small Traffic Lights dataset from weights pre-trained on ImageNet, using transfer learning: the convolutional layers were frozen and the fully connected layers were trained on the dataset.
  • Deployed the trained weights on an f1-tenth vehicle for real-time traffic light detection.

Stack - C++, ROS, Darknet, Python, Jetson TX2

View Project

Multi-Object 2D Tracking in ROS using Yolov3 and DeepSORT

  • Developed the first open-source ROS wrapper for a multi-object tracker that utilizes YOLOv3 object detection.
  • The DeepSORT tracker is pre-trained on the MARS dataset and uses deep appearance features to associate detections across frames.
  • Deployed the multi-object tracker on an f1-tenth vehicle for real-time object tracking at 25 FPS.

Stack - ROS, Python, Jetson Xavier, TensorFlow, Yolov3

View Project

MATLAB Parrot Mini-Drone Competition

  • Developed a line-following algorithm for the mini-drone using image processing techniques such as edge detection and the Hough transform.
  • Applied a feedback control system to hover the drone over the line and reach a specific destination.
  • Designed STL (Signal Temporal Logic) specifications to improve line-tracking accuracy and drone navigation efficiency.
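
The feedback loop can be sketched as a textbook PID controller (illustrative only, not the Simulink model; the gains, timestep, and the use of lateral line offset as the error signal are assumptions):

```python
class PID:
    """Discrete PID controller with a fixed timestep dt."""

    def __init__(self, kp, ki, kd, dt=0.05):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, error):
        """One control update; error could be the line's lateral offset."""
        self.integral += error * self.dt
        deriv = (error - self.prev_err) / self.dt
        self.prev_err = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv


# each frame: error = detected line offset from image center -> steer command
pid = PID(kp=1.0, ki=0.0, kd=0.1)
steer = pid.step(0.3)
```

In the drone setting the same structure is applied per axis, with the Hough-transform line fit supplying the error term each frame.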

Stack - MATLAB, Simulink

View Project

IDEAS - Intelligent Driver Enhanced Assistant System

  • Estimated driver emotion and drowsiness (GCP Vision API) and the closest rest spots (Google Maps API), exchanging JSON requests with the Ford SDL API.
  • Voice and text alerts are issued if the driver is drowsy, and the 3 closest rest spots are displayed on the infotainment system.
  • Based on driver emotion, a playlist of similar-mood songs is generated (Spotify API) and the top 10 results are suggested on the infotainment system.

Stack - Java, Python, OpenCV, JSON requests, APIs - Google Cloud Vision, Google Maps, Spotify, Ford SDL

View Project

Image Processing Algorithms

  • Edge Detection - Sobel, Canny edge detectors
  • Image Half-Toning - Dithering, Thresholding, Separable Error Diffusion
  • Geometric Image Modification - Warping, Image Panorama Stitching
  • Texture Classification and Texture Segmentation
  • Bag Of Words, SURF/SIFT Feature extraction

Stack - C++

View Project

CNN based Distracted Driver Detection

  • Predicted the driver's state from 45,000 images across 10 classes using a vanilla CNN architecture and a pre-trained ResNet-50 architecture, achieving a robust 98% accuracy.

Stack - Python, Keras, OpenCV

View Project

CNN based CIFAR-10 Image Classification

Trained a CNN image classifier on 50k CIFAR-10 images using two different architectures.

  • LeNet-5, the baseline architecture, achieved 74% accuracy.
  • YGNet, an architecture I designed, achieved 90% test accuracy. It is motivated by the paper "Striving for Simplicity: The All Convolutional Net", with a few hyper-parameter and layer tweaks.

Stack - Python, Keras, PyTorch

View Project

Sentiment Analysis of User Reviews

Performed sentiment analysis on 53k reviews from Amazon, IMDb, and Yelp.

  • Trained 4 different classification models (Random Forest, Logistic Regression, Linear & RBF SVM).
  • Analyzed the models' performance across 6 different dataset configurations.

Stack - Python, Sci-kit Learn, Numpy, Pandas, NLTK

View Project

RRT Robot Motion Planner

Simulated RRT motion planning, which finds a route between a start node and a goal node.
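
A minimal 2D, obstacle-free RRT sketch (illustrative only; the workspace bounds, goal bias, and step size are assumptions): each iteration samples a point, extends the nearest tree node a fixed step toward it, and backtracks through parents once the goal is reached.

```python
import math
import random


def rrt(start, goal, step=0.5, iters=2000, goal_tol=0.5, seed=1):
    """Grow a tree from start toward random samples; return a path or None."""
    rng = random.Random(seed)
    nodes, parent = [start], {0: None}
    for _ in range(iters):
        # goal-biased sampling: occasionally aim straight at the goal
        sample = goal if rng.random() < 0.2 else (rng.uniform(0, 10),
                                                  rng.uniform(0, 10))
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        nx, ny = nodes[i]
        d = math.dist((nx, ny), sample)
        if d == 0:
            continue
        new = (nx + step * (sample[0] - nx) / d,
               ny + step * (sample[1] - ny) / d)
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) <= goal_tol:
            path, j = [], len(nodes) - 1
            while j is not None:       # walk parents back to the root
                path.append(nodes[j])
                j = parent[j]
            return path[::-1]
    return None
```

A full planner would additionally reject extensions whose segment collides with an obstacle before adding the new node.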


Stack - Python

View Project

State Estimation using Kalman Filter

Demonstrated Kalman filter state estimation, using a constant-jerk motion model to simulate the system.
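
The predict/update cycle can be sketched with a reduced constant-velocity model (the project used a constant-jerk model; the noise parameters `q` and `r` here are hypothetical):

```python
def kalman_1d(zs, dt=1.0, q=1e-3, r=0.5):
    """1D constant-velocity Kalman filter over position measurements zs.

    State is [pos, vel] with F = [[1, dt], [0, 1]] and H = [1, 0];
    returns the filtered position estimates.
    """
    x = [zs[0], 0.0]                      # state estimate
    P = [[1.0, 0.0], [0.0, 1.0]]          # state covariance
    out = []
    for z in zs:
        # predict: x = F x,  P = F P F^T + Q
        x = [x[0] + dt * x[1], x[1]]
        P = [[P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + q]]
        # update with scalar position measurement z
        s = P[0][0] + r                   # innovation covariance
        k = [P[0][0] / s, P[1][0] / s]    # Kalman gain
        y = z - x[0]                      # innovation
        x = [x[0] + k[0] * y, x[1] + k[1] * y]
        P = [[(1 - k[0]) * P[0][0], (1 - k[0]) * P[0][1]],
             [P[1][0] - k[1] * P[0][0], P[1][1] - k[1] * P[0][1]]]
        out.append(x[0])
    return out
```

The constant-jerk version is the same recursion with a 4-state vector [pos, vel, acc, jerk] and a correspondingly larger F and Q.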


Stack - Python

View Project

Tars - USC Self Driving

  • Lane detection and processing for a self-driving car prototype.
  • Employed image processing and PID control for smooth steering and speed control.
  • Demonstrated the system on a 1/10-scale RC car controlled by a Raspberry Pi 3.


Stack - Python, OpenCV, PID Controls

View Project

Semantic Segmentation of road objects using DeepLabv3+ network architecture (In Progress)

Semantic segmentation trained on 10,000 diverse images with pixel-level and rich instance-level annotations.

Stack - Python, DeepLabv3+, Jetson Xavier

View Project

Skills

Languages :

  • C++
  • C
  • Python
  • Matlab
  • HTML
  • CSS

Packages/Libraries :

  • Scikit-Learn
  • PyTorch
  • TensorFlow/Keras
  • OpenCV
  • Flask
  • Redis
  • PCL
  • ROS

CNN Architectures :

  • LeNet-5
  • ResNet-50
  • VGG-16
  • EfficientNet

R-CNN/Segmentation Algorithms :

  • Faster-RCNN
  • YOLOv3/v4
  • RetinaNet
  • Mask-RCNN
  • DeepLab

Image Processing Algorithms :

  • Edge Detection
  • Demosaicing
  • Denoising
  • Digital Half-Toning
  • Geometrical Image Modification
  • Morphological Processing

Machine Learning Algorithms :

  • Ensemble
  • Dimension-Reduction
  • Instance-based
  • Clustering
  • Regression
  • Bayesian

Robotics :

  • Kalman Filter
  • Extended Kalman Filter
  • Unscented Kalman Filter
  • Particle Filter
  • SLAM
  • RRT
  • A*
  • Robot Kinematics & Dynamics

Hardware :

  • Arduino
  • Raspberry Pi
  • Jetson AGX Xavier
  • Jetson TX2
  • Turtlebot3
  • Intel Aero Drone

Miscellaneous

  • Git/GitHub
  • LaTeX
  • SAP

Get in Touch