Senior Data Engineer

Skills

  • Python
  • Flink
  • Kafka
  • CI/CD
  • Airflow
  • Apache Iceberg
  • batch architectures
  • data modeling
  • infrastructure-as-code
  • stream processing systems
  • Athena
  • AWS EMR
  • Azure Data Explorer
  • Dagster
  • Kubeflow

About Us

Zenity is the first and only holistic platform built to secure and govern AI Agents from buildtime to runtime. We help organizations defend against security threats, meet compliance requirements, and drive business productivity. Trusted by many of the world’s Fortune 500 companies, Zenity provides centralized visibility, vulnerability assessments, and governance by continuously scanning business-led development environments. We recently raised $38 million in Series B funding, solidifying our position as a leader in the industry and enabling us to accelerate our mission of securing AI Agents everywhere.

About The Role

You will architect and build a scalable data platform from scratch, engineering high-throughput, low-latency pipelines that drive real-time security analytics and AI-powered systems. As a key member of the Data & AI Algorithms group, you will collaborate with AI/ML engineers, data scientists, and security researchers to design production-grade infrastructure at scale. This role requires strong ownership, systems thinking, and the agility to operate in a fast-moving environment.

What You’ll Do

  • Build ML infrastructure to support scalable, low-latency production deployment of data & AI models
  • Ensure availability, reliability, and performance of mission-critical data infrastructure
  • Define and promote best practices for data modeling, orchestration, CI/CD, and infrastructure-as-code
  • Collaborate cross-functionally to enable data-driven product capabilities

Requirements

  • 6+ years of hands-on experience building and operating data systems at scale
  • Production experience with big data frameworks such as Apache Flink, Kafka Streams, or similar distributed data processing systems
  • Hands-on experience with modern data lakes and open table formats such as Apache Iceberg
  • Strong Python programming skills
  • Strong CI/CD and infrastructure-as-code capabilities
  • Experience with cloud-native data services such as AWS EMR, Athena, or Azure Data Explorer
  • Familiarity with orchestration tools such as Airflow, Kubeflow, Dagster, or similar
  • Excellent communication skills with a strong sense of ownership and a problem-solving mindset
  • Experience in data modeling
  • Experience with stream processing systems (Kafka, Flink) and large-scale batch architectures