Verobotics, a fast-growing deep-tech startup, is looking to hire a Computer Vision Engineer with a strong focus on deep learning.
The company develops advanced autonomous robotic systems for large-scale infrastructure environments, combining robotics, AI, and computer vision into production-grade, scalable platforms that transform exterior maintenance and inspection.
About the Role
This role is first and foremost a deep-learning-driven computer vision position.
You will design, train, and deploy state-of-the-art deep learning models that enable robotic systems to perceive, interpret, and reason about complex 3D environments in real time.
The position sits at the intersection of deep learning research and real-world production, requiring both strong theoretical understanding and hands-on experience bringing neural models from experimentation to embedded robotic deployment.
What You Will Do
- Develop and implement deep learning models for visual understanding, including detection, segmentation, and 3D perception.
- Design neural-based solutions for depth estimation, pose estimation, 3D reconstruction, and scene understanding.
- Work with learning-based approaches for multi-view geometry and 3D data (point clouds, voxel grids, meshes).
- Translate research-grade deep learning ideas into robust, scalable, production-ready systems.
- Build training, evaluation, and experimentation pipelines for large-scale datasets and continuous model improvement.
- Collaborate closely with navigation, control, hardware, and operations teams to integrate learned perception models into complete autonomous robotic systems.
Key Responsibilities
- Design and train deep neural networks for 3D perception, object detection, tracking, segmentation, and scene understanding.
- Apply deep learning to multi-sensor data, including LiDAR, stereo/RGB-D cameras, IMU, and GNSS.
- Optimize neural models for real-time inference on embedded and robotic platforms.
- Run simulations, offline experiments, and real-world field tests to validate model robustness and performance.
- Contribute to end-to-end learning-based perception pipelines, from data collection and labeling to deployment and monitoring.
Requirements
- B.Sc./M.Sc. in Computer Science, Electrical/Computer Engineering, Applied Mathematics, or a related field (Ph.D. preferred).
- Strong background in deep learning for computer vision, with hands-on experience designing and training neural networks.
- Proven experience with learning-based 3D perception, such as depth estimation, reconstruction, pose estimation, or SLAM.
- Experience working with 3D data representations such as point clouds, meshes, or Neural Radiance Fields (NeRF).
- Proficiency in Python and deep learning frameworks such as PyTorch or TensorFlow, along with OpenCV and PCL.
- Hands-on experience with advanced sensing systems, including LiDAR and RGB-D cameras.
- Ability to bridge research, software engineering, and hardware constraints in real-world robotic systems.