Applied AI Security Engineer

Skills
  • Python
  • agentic AI systems
  • backend services
  • LLM inference
  • middleware services
  • validation
  • access control
  • retrieval filtering
  • response validation
  • resource management
  • rate limiting
  • RAG pipelines
  • policy enforcement
  • plugin systems
  • output inspection
  • MCP protocols
  • input inspection
  • cost management

The office is located at 121 Menachem Begin Road, Tel Aviv (61st floor, POINT office complex). Per our office policy, you will be required to work on-site at least three days a week.

Lenovo Digital Trust Lab is seeking an Applied AI Security Engineer to design, build, and deploy runtime security controls for AI and agentic systems. This role focuses on protecting AI systems during inference and execution, including LLM guardrails, agent tool control, MCP gateway protections, abuse prevention, and cost/resource safeguards.

You will translate AI-security research and threat models into practical controls that operate in real time—bridging the gap between adversarial research and deployed systems.


Key Responsibilities

  • Design and implement runtime AI security controls (guardrails, filters, policy engines, gateways).
  • Build protections for LLM inference, agent tool execution, MCP / plugin frameworks, and RAG pipelines.
  • Implement prompt, input, and output inspection for abuse, jailbreaks, data leakage, and policy violations (a minimal sketch follows this list).
  • Develop resource and abuse controls (rate limiting, cost protection, Denial-of-Wallet mitigations).
  • Turn abstract threats into concrete, testable controls.
  • Integrate controls into existing AI platforms and SDKs with minimal performance impact.
  • Collaborate with AI red-teaming, model evaluation, monitoring, and product teams.
  • Contribute to threat modeling and validation of controls against real attack scenarios.
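
To illustrate the kind of inline control this role builds, here is a minimal sketch in Python. It is not a production design: the pattern lists, the function names (inspect_input, inspect_output, guarded_completion), and the model-call interface are all hypothetical, and a real guardrail would pair such checks with trained classifiers and a policy engine rather than a handful of regexes.

    import re
    from typing import Callable

    # Hypothetical deny-list of injection phrasings; illustrative only.
    INJECTION_PATTERNS = [
        re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
        re.compile(r"disregard your system prompt", re.IGNORECASE),
    ]

    # Crude detector for credential-shaped strings in model output.
    SECRET_PATTERN = re.compile(r"(api[_-]?key|password)\s*[:=]\s*\S+", re.IGNORECASE)

    def inspect_input(prompt: str) -> None:
        """Reject prompts that match known injection phrasings."""
        for pattern in INJECTION_PATTERNS:
            if pattern.search(prompt):
                raise PermissionError("blocked: suspected prompt injection")

    def inspect_output(completion: str) -> str:
        """Redact apparent credential leakage before the response leaves the service."""
        return SECRET_PATTERN.sub("[REDACTED]", completion)

    def guarded_completion(prompt: str, call_model: Callable[[str], str]) -> str:
        """Wrap an arbitrary model call with pre- and post-inspection."""
        inspect_input(prompt)
        return inspect_output(call_model(prompt))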

Minimum Requirements:

  • 3+ years of experience as an Applied AI Engineer, Software Engineer, or ML Engineer working on production AI systems.
  • Strong experience with Python and building backend or middleware services.
  • Hands-on experience working with LLM inference and agentic AI systems, including tool calling, orchestration layers, or multi-step reasoning workflows.
  • Understanding of AI threat vectors (prompt injection, jailbreaks, data leakage, tool abuse).
  • Familiarity with runtime control concepts such as policy enforcement, validation, rate limiting, or access control (see the token-bucket sketch after this list).
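
As a concrete example of the rate-limiting concept above, here is a small token-bucket sketch; the class name and interface are hypothetical, and a production control would typically enforce limits at a gateway with shared state rather than in-process.

    import time

    class TokenBucket:
        """Allow `rate` requests per second on average, with bursts up to `capacity`."""

        def __init__(self, rate: float, capacity: float):
            self.rate = rate
            self.capacity = capacity
            self.tokens = capacity          # start with a full bucket
            self.updated = time.monotonic()

        def allow(self, cost: float = 1.0) -> bool:
            # Refill in proportion to elapsed time, then try to spend `cost`.
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
            self.updated = now
            if self.tokens >= cost:
                self.tokens -= cost
                return True
            return False

Keyed per caller (one bucket per API key or agent identity), the same mechanism doubles as a basic access-control and abuse throttle.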

Preferred Requirements:

  • Experience building or securing agentic AI frameworks, including tool execution, plugin systems, or MCP-like protocols.
  • Hands-on experience implementing LLM guardrails, input/output inspection, or policy-based enforcement at inference time.
  • Familiarity with RAG pipelines, including retrieval filtering and response validation.
  • Experience designing protections against agent misuse and abuse, including prompt injection, tool abuse, and excessive compute usage.
  • Knowledge of cost and resource management in AI systems (token budgets, rate limiting, Denial-of-Wallet prevention); a budget-guard sketch follows this list.
  • Background in AI security, application security, or abuse prevention is a strong plus.
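
To make the Denial-of-Wallet idea concrete, here is a minimal per-tenant token-budget sketch; TokenBudgetGuard and its interface are hypothetical, and a real deployment would persist spend counters and reconcile actual usage after each call.

    from collections import defaultdict

    class TokenBudgetGuard:
        """Refuse requests whose worst-case token cost would exceed a tenant's daily budget."""

        def __init__(self, daily_limit: int):
            self.daily_limit = daily_limit
            self.spent = defaultdict(int)   # tenant -> tokens reserved today

        def reserve(self, tenant: str, prompt_tokens: int, max_output_tokens: int) -> None:
            # Charge the worst case up front so a runaway agent loop cannot overdraw.
            projected = self.spent[tenant] + prompt_tokens + max_output_tokens
            if projected > self.daily_limit:
                raise RuntimeError(f"token budget exhausted for tenant {tenant!r}")
            self.spent[tenant] = projected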
