
About the AI Division
The AI Division is a unique and dedicated group within Ceva, driving innovation in Machine Learning and Generative AI architectures for edge devices and cloud inference.
Our R&D domains span Neural Network Processors (NPUs), Vision DSPs, and advanced AI algorithms for applications across smartphones, tablets, automotive, surveillance cameras, and other edge AI systems.
We combine cutting-edge hardware IP design with embedded software and system-level solutions, enabling the next generation of intelligent and energy-efficient devices.
About the Role:
In this role, you will be a key contributor to the design and implementation of Ceva’s AI Graph Compiler software stack for Neural Processing Units (NPUs). You will take part in defining software architecture, implementing performance-critical components, and enabling efficient execution of advanced neural networks under tight power, memory, and latency constraints.
You will work closely with hardware and system architects and with software and hardware engineers, influencing both software and hardware decisions. You will design and implement major parts of Ceva's NPU embedded solutions, actively promoting Ceva's AI capabilities to customers.
What you will do:
Own and design key components of the AI Graph Compiler software stack for NPU-based systems.
Optimize inference performance (latency, throughput, memory footprint, power) for edge deployments.
Collaborate on HW–SW co-design, influencing NPU architecture.
Support IP evaluations and silicon bring-up, root-cause complex HW/SW issues, and influence development methodologies.
Mentor junior engineers and contribute to technical best practices.
Requirements:
Advantages: