Real Time Group, LTD is looking for a skilled Real-Time Embedded Developer with expertise in Artificial Intelligence (AI) to join our dynamic engineering team. The ideal candidate will be responsible for designing, developing, and optimizing high-performance, low-latency firmware and software for resource-constrained embedded systems. This role requires a unique blend of deep real-time operating system (RTOS) knowledge, low-level programming proficiency, and practical experience in deploying and optimizing AI/Machine Learning (ML) models on edge devices.
Job Responsibilities:
- Develop and maintain efficient, reliable, and testable real-time firmware in C/C++ for various microcontrollers (MCUs) and microprocessors (MPUs).
- Design, implement, and optimize RTOS applications for determinism, low latency, and efficient resource utilization.
- Perform hardware-software integration and debugging using tools like oscilloscopes, logic analyzers, and in-circuit emulators/debuggers.
- Develop and implement communication protocols (e.g., SPI, I2C, UART, Ethernet, Wi-Fi, Bluetooth LE).
- Contribute to the entire software development lifecycle, including requirements definition, design, coding, testing, and deployment.
- Collaborate with hardware engineers to define specifications, select components, and bring up new hardware platforms.
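As a flavor of the protocol-level work above, the sketch below shows a minimal receive-side state machine for a hypothetical UART frame format (start byte, length, payload, XOR checksum). The frame layout and all names here are illustrative, not an actual Real Time Group protocol.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

/* Hypothetical frame layout: 0xAA | len | payload[len] | XOR checksum */
#define FRAME_SOF 0xAA
#define FRAME_MAX_PAYLOAD 32

typedef enum { WAIT_SOF, WAIT_LEN, WAIT_PAYLOAD, WAIT_CSUM } parse_state_t;

typedef struct {
    parse_state_t state;
    uint8_t len;
    uint8_t idx;
    uint8_t csum;                       /* running XOR of len + payload */
    uint8_t payload[FRAME_MAX_PAYLOAD];
} frame_parser_t;

void parser_init(frame_parser_t *p) {
    p->state = WAIT_SOF;
}

/* Feed one received byte (e.g., from a UART RX interrupt); returns true
 * when a complete, checksum-valid frame sits in p->payload (p->len bytes). */
bool parser_feed(frame_parser_t *p, uint8_t byte) {
    switch (p->state) {
    case WAIT_SOF:
        if (byte == FRAME_SOF) p->state = WAIT_LEN;
        break;
    case WAIT_LEN:
        if (byte == 0 || byte > FRAME_MAX_PAYLOAD) { p->state = WAIT_SOF; break; }
        p->len = byte; p->idx = 0; p->csum = byte;
        p->state = WAIT_PAYLOAD;
        break;
    case WAIT_PAYLOAD:
        p->payload[p->idx++] = byte;
        p->csum ^= byte;
        if (p->idx == p->len) p->state = WAIT_CSUM;
        break;
    case WAIT_CSUM:
        p->state = WAIT_SOF;            /* resynchronize either way */
        return byte == p->csum;
    }
    return false;
}
```

Driving the parser one byte at a time keeps it usable from an interrupt handler with no buffering assumptions about the underlying peripheral.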
Job Requirements:
Embedded Systems Expertise:
- Bachelor's or Master's degree in Electrical Engineering, Computer Science, or a related technical field.
- 5+ years of professional experience in embedded software development.
- Expert-level proficiency in C and C++ for embedded systems.
- Deep practical experience with RTOS concepts (task scheduling, memory management, synchronization primitives, interrupt handling).
- Proven experience with bare-metal programming and understanding of hardware-software interfaces, including register-level programming.
- Strong debugging skills for challenging real-time issues, including race conditions and priority inversion.
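One concrete example of the synchronization and race-condition territory described above is ISR-to-task data transfer. The following is an illustrative sketch (not production code) of a single-producer/single-consumer ring buffer using C11 atomics, which lets one interrupt handler hand bytes to one task without disabling interrupts or taking a lock.

```c
#include <stdatomic.h>
#include <stdint.h>
#include <stdbool.h>

/* SPSC ring buffer: safe for exactly one ISR writer and one task reader.
 * RB_SIZE must be a power of two so free-running indices wrap with a mask. */
#define RB_SIZE 16

typedef struct {
    uint8_t buf[RB_SIZE];
    atomic_uint head;   /* written only by the producer (ISR) */
    atomic_uint tail;   /* written only by the consumer (task) */
} ringbuf_t;

bool rb_push(ringbuf_t *rb, uint8_t v) {
    unsigned head = atomic_load_explicit(&rb->head, memory_order_relaxed);
    unsigned tail = atomic_load_explicit(&rb->tail, memory_order_acquire);
    if (head - tail == RB_SIZE) return false;          /* full */
    rb->buf[head & (RB_SIZE - 1)] = v;
    /* release: the payload store is visible before the new head value */
    atomic_store_explicit(&rb->head, head + 1, memory_order_release);
    return true;
}

bool rb_pop(ringbuf_t *rb, uint8_t *out) {
    unsigned tail = atomic_load_explicit(&rb->tail, memory_order_relaxed);
    unsigned head = atomic_load_explicit(&rb->head, memory_order_acquire);
    if (head == tail) return false;                    /* empty */
    *out = rb->buf[tail & (RB_SIZE - 1)];
    atomic_store_explicit(&rb->tail, tail + 1, memory_order_release);
    return true;
}
```

The acquire/release pairing is exactly the kind of detail that separates code that "usually works" from code that is free of the data races this role is expected to hunt down.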
AI and Edge Computing Expertise:
- Solid understanding of Machine Learning (ML) fundamentals and common model architectures (e.g., CNNs, RNNs/LSTMs, transformers).
- Practical experience in quantization, pruning, and model compression techniques to reduce model size and improve inference speed on edge devices.
- Experience with ML deployment frameworks such as TensorFlow Lite/TFLite-Micro or similar tools for on-device inference.
- Familiarity with Embedded Linux or other operating systems for higher-end edge devices is a plus.
- Ability to profile and benchmark AI model performance (latency, memory, power) on target hardware.