Coding? Pretty exciting, eh? Tell me about it!
Hello, I'm Yogesh, a software engineer based in Ann Arbor, MI.
I'm an ambitious and self-motivated graduate with a strong inclination toward Localization & Mapping and Computer Vision/Perception in autonomous driving.
I'm an avid learner with a practical mindset. My team and I won the Ford sponsor award at the CalHacks 6.0 hackathon at UC Berkeley for developing IDEAS (Intelligent Driver Enhanced Assistance System), a platform built in 36 hours.
I work at Qualcomm, Inc. as a Localization and Mapping Algorithm Engineer, where we develop advanced localization algorithms that place the vehicle within the map with centimeter-level precision in real time.
I have worked with Prof. Jyotirmoy Deshmukh at USC CPS-VIDA as a Graduate Researcher, where I developed the first open-source ROS wrapper for the DeepSORT multi-object tracking algorithm, publishing unique object IDs on the Jetson Xavier platform.
I like to create things that involve a camera, a LiDAR, and a car. The fusion of these sensors is undeniably beautiful, especially the world it builds around the vehicle.
The future of this fusion is near, and I'm excited to be a part of it.
I like programming, reading, travelling, and cooking. I've played national-level tennis and I'm a drummer. I follow tennis, cricket, and Formula 1 racing.
P.S. Don't forget to check out my work behind the supercool background GIF you saw above.
I am part of a dynamic team collaborating with a prominent Original Equipment Manufacturer (OEM) to pioneer an autonomous driving solution.
The project aims to bring higher-end automated driving features to consumer vehicles.
The work spans the design and implementation of localization algorithms, sensor fusion systems, and intelligent control modules, all aimed at delivering a safe, reliable, and user-friendly autonomous driving experience.
Languages : AUTOSAR C++, Python, MATLAB
Sensors : LiDAR, Camera, Radar, GPS, IMU
I was a member of the Localization and Mapping team, where I developed critical software components for the High-Precision Localization (HPL) and MapInterface modules.
Beyond software development, I carried out rigorous in-vehicle software testing to ensure the reliability and performance of our solutions.
I was involved across the software development lifecycle, from inception to start of production, which gave me a comprehensive view of the development process and ensured the seamless integration of my work into the final product.
My tenure with the team was marked by a commitment to delivering innovative, high-quality software that directly contributed to the success of the projects I worked on.
Languages : AUTOSAR C++, Python, MATLAB
Sensors : LiDAR, Camera, Radar, GPS, IMU
I worked with Prof. Jyotirmoy Deshmukh, where we developed monitoring algorithms for data streams generated by perception algorithms. The goal of the research is to enable safe autonomy.
In parallel with our research, we built USC's first autonomous delivery robot prototype from start to finish. It's a 1/10th-scale outdoor robot equipped with sensors and capable of real-time perception (object detection, multi-object tracking) and mapping.
Responsibility -
Languages : C, C++, Python, MATLAB
Sensors : ZED 3D Camera, Hokuyo LiDAR, VectorNav IMU
System on Chip : Jetson Xavier, Jetson TX2
Other : PCL, ROS, TensorFlow, Keras
Algorithms Include
Frenzy Labs, Inc. is an LA-based startup that develops self-labeling image technology to train computer vision systems to detect exact products in complex visual scenes. They scale high-quality image datasets in a fraction of the time incurred by enterprises today and reduce manual labeling workforces by 99%.
Responsibility -
Languages : Python, HTML5
Libraries : Flask, Redis, CSS, TensorFlow, Keras, OpenCV
Responsibility -
Coursework & Projects :
Coursework :
Coursework :
Stack - C++, ROS, Darknet, Python, Jetson Xavier
View Project
Stack - C++, PCL, Point Cloud Data
View Project
Stack - C++, ROS, Darknet, Python, Jetson TX2
View Project
Stack - ROS, Python, Jetson Xavier, TensorFlow, Yolov3
View Project
Stack - MATLAB, Simulink
View Project
Stack - Java, Python, OpenCV, JSON requests, APIs - Google Cloud Vision, Google Maps, Spotify, Ford SDL
View Project
Stack - C++
View Project
Stack - Python, Keras, OpenCV
View Project
Trained a CNN image classifier on 50k CIFAR-10 images using two different architectures.
Stack - Python, Keras, PyTorch
View Project
Performed sentiment analysis on 53k reviews from Amazon, IMDB, and Yelp.
Stack - Python, Sci-kit Learn, Numpy, Pandas, NLTK
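A baseline for this kind of task can be sketched with scikit-learn: TF-IDF features feeding a logistic-regression classifier. The exact model and features used in the project aren't shown here, so this is purely illustrative.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline


def train_sentiment(texts, labels):
    """Train a simple TF-IDF + logistic regression sentiment classifier.

    `texts` is a list of review strings; `labels` are 0 (negative) or
    1 (positive). Returns a fitted scikit-learn pipeline.
    """
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),  # unigrams and bigrams
        LogisticRegression(max_iter=1000),
    )
    model.fit(texts, labels)
    return model
```

The pipeline keeps vectorization and classification together, so `model.predict(["some review"])` handles feature extraction automatically.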
View Project
Simulated RRT motion planning, which finds a route between a start node and a goal node.
Stack - Python
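As a sketch of the idea (an obstacle-free 2-D workspace with hypothetical step size, goal bias, and bounds, not the project's actual setup), RRT grows a tree by sampling random points and extending the nearest tree node toward each sample until it reaches the goal:

```python
import math
import random


def rrt(start, goal, step=0.5, goal_tol=0.5, bounds=(0.0, 10.0),
        max_iters=5000, seed=0):
    """Minimal 2-D RRT in an obstacle-free square workspace.

    Returns the list of waypoints from `start` to near `goal`,
    or None if no path is found within `max_iters` iterations.
    """
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}
    for _ in range(max_iters):
        # Sample a random point, occasionally biased toward the goal.
        if rng.random() < 0.1:
            sample = goal
        else:
            sample = (rng.uniform(*bounds), rng.uniform(*bounds))
        # Find the tree node nearest to the sample.
        i_near = min(range(len(nodes)),
                     key=lambda i: math.dist(nodes[i], sample))
        near = nodes[i_near]
        d = math.dist(near, sample)
        if d == 0:
            continue
        # Step from the nearest node toward the sample.
        t = min(step / d, 1.0)
        new = (near[0] + t * (sample[0] - near[0]),
               near[1] + t * (sample[1] - near[1]))
        nodes.append(new)
        parent[len(nodes) - 1] = i_near
        # If the new node is near the goal, walk parents back to the start.
        if math.dist(new, goal) < goal_tol:
            path, i = [], len(nodes) - 1
            while i is not None:
                path.append(nodes[i])
                i = parent[i]
            return path[::-1]
    return None
```

A real planner would additionally reject samples and edges that collide with obstacles; the tree-growing loop stays the same.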
Demonstrated the Kalman filter for estimating the state of a system, simulating it with a constant-jerk motion model.
Stack - Python
Stack - Python, OpenCV, PID Controls
10,000 diverse images with pixel-level and rich instance-level annotations.
Stack - Python, DeepLabv3+, Jetson Xavier
View Project