About Regulus
Regulus is an agile defense-tech startup tackling the most complex challenges in counter-drone and uncrewed defense. With hundreds of combat-proven systems deployed by the IDF and global partners, we are now engineering the next generation of C-UxS (Counter-Uncrewed Systems), leveraging advanced electronic warfare and kinetic measures to detect and neutralize hostile uncrewed threats. If you want to push technological boundaries while safeguarding Israel and our allies, join us at this pivotal stage of our growth.
This is a unique opportunity to help shape our products, culture, and future.
About The Role
As an LLM / Applied AI Engineer, you will design and deploy LLM-powered agents for mission reasoning, operator decision support, automated analysis tools, workflow automation, and next-generation AI-driven C2 systems. You will build RAG pipelines, enforce strong safety layers, integrate LLM reasoning into operational workflows, and shape how autonomy and humans collaborate in critical defense missions.
Key Responsibilities
- LLM-Based Agents & Mission Reasoning
  - Build agents capable of reasoning over multi-sensor tracks, mission state, and operational context.
  - Implement decision-support tools: threat interpretation, contextual alerts, explanations, and recommendations.
  - Enable natural-language interaction with operators (e.g., “Why is this track high-threat?”).
- RAG, Tool Use & Workflow Automation
  - Create Retrieval-Augmented Generation pipelines over internal data, threat libraries, SOPs/ROE, sensor logs, and system telemetry.
  - Implement tool calling and API access for safe interaction with system state and operational modules.
  - Build AI-driven workflows, copilots, and automated analysis utilities.
- LLM Safety, Guardrails & Deployment
  - Design guardrail layers: constrained decoding, validation logic, fallback behavior, and deterministic checks.
  - Deploy optimized LLMs using quantization and distillation, with runtimes such as vLLM or llama.cpp and on-prem acceleration.
  - Develop LLM evaluation frameworks focused on hallucination reduction and reliability.
- Backend Integration
  - Integrate LLM services with Python microservices, REST/gRPC backends, and mission systems.
  - Collaborate with autonomy, perception, and C2 teams to support full-stack integration.
- Continuous Improvement
  - Fine-tune LLMs and embeddings on internal datasets.
  - Evaluate model behavior in real operational scenarios and refine accordingly.
  - Incorporate operator feedback and mission outcomes into continuous improvement cycles.
Requirements
- Strong understanding of transformer architecture, embeddings, attention, and modern LLM internals.
- Experience with quantization, deployment, and runtime optimization (vLLM, llama.cpp, TensorRT-LLM).
- Proficiency in Python and backend integration.
- Hands-on experience with LangChain, LlamaIndex, and agent frameworks.
- Experience in prompt engineering, fine-tuning, and building RAG pipelines.
Preferred Qualifications
- Experience in defense, robotics, autonomous systems, or mission-critical environments.
- Background in LLM safety/guardrails and high-stakes evaluation.
- Familiarity with geospatial reasoning, mission planning, and multi-sensor contexts.
- Experience with CI/CD, Docker, and on-prem deployment setups.