Jeen.AI empowers enterprises with generative AI through advanced AI agents, automations, voice analytics, and knowledge-based insights -- deployed across any cloud or on-premises environment. We are trusted by government, defense, and enterprise organizations shaping tomorrow's technological landscape.
Why Join Us?
You won't be managing dashboards from a desk. As a DevOps Engineer at Jeen.AI, you'll deploy a production AI platform directly into customer environments -- from Azure AKS clusters to air-gapped OpenShift installations in high-security settings. You'll own the full deployment lifecycle across government, defense, and enterprise organizations in Israel and global markets.
This is a rare opportunity to build and scale real AI infrastructure from the ground up: architecting GitOps pipelines, automating multi-cloud provisioning, and solving the hard problems that come with deploying complex microservices platforms into diverse, constrained environments.
Responsibilities
- Deploy and operate our AI platform across customer environments, including cloud (Azure, AWS, GCP) and on-premises/air-gapped Kubernetes and OpenShift clusters
- Design and maintain Helm charts, ArgoCD ApplicationSets, and GitOps workflows for multi-environment delivery (dev, staging, production) -- see the ApplicationSet sketch after this list
- Build and improve CI/CD pipelines using GitHub Actions -- including multi-platform Docker image builds, automated security scanning (Trivy), and ArgoCD sync triggers (see the workflow sketch after this list)
- Manage the secrets lifecycle using External Secrets Operator and Azure Key Vault across cloud and on-premises deployments (see the ExternalSecret sketch after this list)
- Own infrastructure-as-code with Terraform for Azure, AWS, and GCP resource provisioning
- Operate and tune stateful workloads (PostgreSQL, RabbitMQ, Redis, MinIO) across environments with varying storage backends (NFS, local disk, managed storage)
- Maintain and extend the observability stack -- Prometheus, Grafana, Loki, Tempo, and OpenTelemetry -- to ensure platform reliability and performance
- Drive automation for air-gapped deployment workflows, including offline archive creation and local registry management
- Identify and resolve performance bottlenecks, harden security posture, and improve platform resilience
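To give a concrete flavor of the GitOps work above, a multi-environment rollout might be driven by an ArgoCD ApplicationSet roughly like the sketch below. The repository URL, cluster endpoints, and chart path are illustrative placeholders, not our actual configuration.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: platform-environments
  namespace: argocd
spec:
  generators:
    # One Application per environment; cluster URLs are hypothetical
    - list:
        elements:
          - env: dev
            cluster: https://kubernetes.default.svc
          - env: staging
            cluster: https://staging-cluster.example.internal
          - env: production
            cluster: https://prod-cluster.example.internal
  template:
    metadata:
      name: 'platform-{{env}}'
    spec:
      project: default
      source:
        repoURL: https://github.com/example-org/platform-charts.git  # placeholder repo
        targetRevision: main
        path: charts/platform
        helm:
          valueFiles:
            - 'values-{{env}}.yaml'  # per-environment overrides
      destination:
        server: '{{cluster}}'
        namespace: platform
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
```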
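On the CI/CD side, a pipeline of roughly this shape covers the multi-platform image build and Trivy gate mentioned above. The image name, registry, and pinned action versions are assumptions for illustration, and the ArgoCD sync trigger that would follow a successful scan is omitted for brevity.

```yaml
name: build-and-scan
on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - uses: actions/checkout@v4

      # Enable multi-architecture builds (amd64 + arm64)
      - uses: docker/setup-qemu-action@v3
      - uses: docker/setup-buildx-action@v3

      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - uses: docker/build-push-action@v6
        with:
          platforms: linux/amd64,linux/arm64
          push: true
          tags: ghcr.io/example-org/platform-api:${{ github.sha }}  # placeholder image

      # Fail the pipeline on unresolved high/critical findings
      - uses: aquasecurity/trivy-action@0.28.0
        with:
          image-ref: ghcr.io/example-org/platform-api:${{ github.sha }}
          severity: HIGH,CRITICAL
          exit-code: '1'
```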
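And for secrets, External Secrets Operator can project Azure Key Vault entries into Kubernetes Secrets along these lines. The vault URL, namespace, and secret names are hypothetical, and the authentication method (workload identity here) varies between cloud and on-premises installs.

```yaml
# SecretStore pointing at Azure Key Vault (vault URL is a placeholder)
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: azure-kv
  namespace: platform
spec:
  provider:
    azurekv:
      authType: WorkloadIdentity
      vaultUrl: https://example-vault.vault.azure.net
      serviceAccountRef:
        name: eso-workload-identity
---
# ExternalSecret that materializes a Kubernetes Secret from a Key Vault entry
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: postgres-credentials
  namespace: platform
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: azure-kv
    kind: SecretStore
  target:
    name: postgres-credentials  # resulting Secret name
  data:
    - secretKey: password
      remoteRef:
        key: postgres-password  # Key Vault secret name (placeholder)
```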
Requirements
- 3+ years of experience as a DevOps Engineer in a production environment
- Strong hands-on experience with Kubernetes and Docker, including Helm chart development and management
- Experience deploying and operating workloads on at least one major cloud provider (Azure preferred; AWS or GCP also valued)
- Proficiency with CI/CD pipelines -- GitHub Actions preferred, Jenkins also relevant
- Solid scripting skills in Python and/or Bash
- Experience operating PostgreSQL, including backup, scaling, and troubleshooting in Kubernetes
- Understanding of GitOps principles and tools such as ArgoCD
- Familiarity with monitoring and observability tooling (Prometheus, Grafana, Loki, or equivalent)
Preferred Qualifications
- Security clearance (or eligibility to obtain one)
- Experience with OpenShift, including Security Context Constraints (SCCs) and enterprise deployment patterns
- Hands-on experience with air-gapped or disconnected environment deployments
- Familiarity with secrets management tools (External Secrets Operator, Azure Key Vault, HashiCorp Vault)
- Experience with Terraform for multi-cloud infrastructure provisioning
- Background in AI/ML infrastructure -- LLM serving, vector databases (pgvector), or embedding pipelines
- Experience with message brokers (RabbitMQ) and caching layers (Redis) in production