Responsibilities – what you will be doing:
Conduct applied research in multi-modal domains to enhance our autonomous capabilities. Specifically, you will apply SOTA Machine Learning and Reinforcement Learning techniques to perception, navigation, and planning tasks. The multi-modality covers the full sensor stack of our vehicles (lidar, cameras, localization sensors, etc.).
Implement pipelines that yield robust, production-level models: preparing simulated and real-world data, implementing tailor-made networks or adapting and modifying existing ones, tuning learning parameters, and evaluating performance.
Deploy trained models to accelerated inference environments using various optimization and quantization techniques (a minimal sketch follows this list).
Take a meaningful part in designing and implementing our Machine Learning infrastructure, frameworks, and evaluation tools.
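For illustration only, not part of the role description: a minimal sketch of the kind of optimization and quantization step mentioned above, assuming post-training dynamic quantization in PyTorch. The TinyPlannerHead module and its dimensions are hypothetical stand-ins for a trained network, not our actual stack.

    # Minimal sketch: post-training dynamic quantization in PyTorch.
    # TinyPlannerHead is a hypothetical stand-in for a trained model.
    import torch
    import torch.nn as nn

    class TinyPlannerHead(nn.Module):
        """Toy planning/perception head used only for illustration."""
        def __init__(self, in_dim: int = 128, out_dim: int = 8):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(in_dim, 256),
                nn.ReLU(),
                nn.Linear(256, out_dim),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.net(x)

    model = TinyPlannerHead().eval()

    # Dynamic quantization: weights stored as int8, activations quantized
    # on the fly; often a quick win for Linear-heavy models on CPU targets.
    quantized = torch.ao.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8
    )

    with torch.no_grad():
        out = quantized(torch.randn(1, 128))
        print(out.shape)  # torch.Size([1, 8])

In a real deployment, static quantization or a hardware-specific toolchain (such as TensorRT, listed under Advantages below) would typically replace this step.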
Requirements – does this describe you?
MSc/PhD in Computer Science or Electrical Engineering with 5+ years of experience.
Solid background in Machine Learning algorithms for autonomous systems, with proven experience in RL-based development.
Proficiency in Python programming and experience working with relevant deep learning and Computer Vision libraries (PyTorch/TensorFlow, SciPy, OpenCV).
Proven experience with production-level design and implementation of SOTA algorithms.
Advantages – these will bring you to our front row:
C++ programming.
Experience converting and optimizing trained models for accelerated inference environments on embedded HW (TensorRT, Jetson, etc.); a sketch of a common conversion path follows this list.
Experience working in simulation environments and dealing with Sim2Real challenges.
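For illustration only: a minimal sketch of one common conversion path implied by the TensorRT/Jetson item above, assuming a trained PyTorch model is exported to ONNX and the engine is then built with the trtexec tool on the target device. The model, shapes, and file names are hypothetical.

    # Minimal sketch: export a PyTorch model to ONNX as the first step toward
    # a TensorRT engine. The model and file names are illustrative only.
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU()).eval()
    dummy = torch.randn(1, 3, 224, 224)  # example camera-frame-shaped input

    torch.onnx.export(
        model,
        dummy,
        "model.onnx",
        input_names=["image"],
        output_names=["features"],
        dynamic_axes={"image": {0: "batch"}},  # allow variable batch size
        opset_version=17,
    )

    # On a device with TensorRT installed (e.g. a Jetson), an engine can then
    # be built and benchmarked with the trtexec CLI, for example:
    #   trtexec --onnx=model.onnx --saveEngine=model.plan --fp16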