We are looking for a deep learning algorithm developer to join our growing team within Mobileye's Algorithms Group. The team is building an innovative multimodal learning framework aimed at improving autonomous driving performance by understanding long-tail cases and providing actionable navigation insights. We're leveraging state-of-the-art vision-language models to revolutionize how we train, validate, and scale our autonomous systems.
If you’re passionate about deep learning and engineering high-impact autonomous solutions - this is the place for you.
What will your job look like?
- Contribute to dataset curation activities - collecting, cleaning, labeling, and preparing multimodal data for training and validation.
- Train and fine-tune LLMs, VLMs, and VLA models to interpret visual scenes and produce actionable navigation insights that support autonomous vehicle decision-making.
- Support validation of multimodal models - evaluating vision-language-action behavior and helping identify performance gaps across driving scenarios.
- Collaborate closely with AV planners, perception teams, and infrastructure engineers to ensure seamless deployment in a real-time ecosystem.
- Influence the strategic direction of language-driven autonomy - proposing new ideas, shaping model capabilities, and driving innovation from research to real-world deployment.
All you need is:
- M.Sc. in Deep Learning, Computer Vision, NLP, or a related field (Ph.D. is an advantage).
- Hands-on experience in developing deep learning models.
- Strong programming skills in Python (C++ is an advantage).
- Experience with modern DL frameworks (e.g., PyTorch, TensorFlow).
- Experience with large multimodal or language models (LLMs/VLMs/VLA models) and their real-world integration - an advantage.
Mobileye is changing the way we drive, from preventing accidents to semi- and fully autonomous vehicles. If you are an excellent, bright, hands-on person with a passion to make a difference, come lead the revolution!