ML Data Engineer

Skills
  • Python · 3y
  • SQL
  • Scala
  • Java
  • Spark
  • Kafka
  • Flink
  • CI/CD
  • AWS
  • Azure
  • GCP
  • Airflow
  • stream processing
  • testing
  • ETL
  • ELT
  • documentation
  • data warehouses
  • data lakes
  • code review
  • version control
  • batch processing
  • vector stores
  • vector databases
  • Weights & Biases
  • object stores
  • MLflow
  • feature stores
  • Beam
  • Argo
Join Check Point’s AI research group, a cross-functional team of ML engineers, researchers and security experts building the next generation of AI-powered security capabilities. Our mission is to leverage large language models to understand code, configuration, and human language at scale, and to turn this understanding into security AI capabilities that will drive Check Point’s future security solutions.

We foster a hands-on, research-driven culture where you’ll work with large-scale data, modern ML infrastructure, and a global product footprint that impacts over 100,000 organizations worldwide.

Your Impact & Responsibilities

As a Data Engineer – AI Technologies, you will be responsible for building and operating the data foundation that enables our LLM and ML research: from ingestion and augmentation, through labeling and quality control, to efficient data delivery for training and evaluation.

You Will

  • Own data pipelines for LLM training and evaluation: Design, build and maintain scalable pipelines to ingest, transform and serve large-scale text, log, code and semi-structured data from multiple products and internal systems.
  • Drive data augmentation and synthetic data generation: Implement and operate pipelines for data augmentation (e.g., prompt-based generation, paraphrasing, negative sampling, multi-positive pairs) in close collaboration with ML Research Engineers.
  • Build tagging, labeling and annotation workflows: Support human-in-the-loop labeling, active learning loops and semi-automated tagging. Work with domain experts to implement tools, schemas and processes for consistent, high-quality annotations.
  • Ensure data quality, observability and governance: Define and monitor data quality checks (coverage, drift, anomalies, duplicates, PII), manage dataset versions, and maintain clear documentation and lineage for training and evaluation datasets.
  • Optimize training data flows for efficiency and cost: Design storage layouts and access patterns that reduce training time and cost (e.g., sharding, caching, streaming). Work with ML engineers to make sure the right data arrives at the right place, in the right format.
  • Build and maintain data infrastructure for LLM workloads: Work with cloud and platform teams to develop robust, production-grade infrastructure: data lakes/warehouses, feature stores, vector stores, and high-throughput data services used by training jobs and offline evaluation.
  • Collaborate closely with ML Research Engineers and security experts: Translate modeling and security requirements into concrete data tasks: dataset design, splits, sampling strategies, and evaluation data construction for specific security use cases.
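To make the data-quality responsibility above concrete, here is a minimal sketch of the kind of checks it describes (duplicate detection and a basic PII scan). All names are hypothetical and the email regex is deliberately simplistic; a production pipeline would use near-duplicate hashing and a proper PII detector.

```python
import hashlib
import re

# Illustrative only: a crude email pattern standing in for real PII detection.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def fingerprint(text: str) -> str:
    """Stable hash of normalized text, used to detect exact duplicates."""
    return hashlib.sha256(text.strip().lower().encode("utf-8")).hexdigest()

def quality_report(records):
    """Count exact duplicates and records containing email-like PII."""
    seen, duplicates, pii_hits = set(), 0, 0
    for text in records:
        fp = fingerprint(text)
        if fp in seen:
            duplicates += 1
        seen.add(fp)
        if EMAIL_RE.search(text):
            pii_hits += 1
    return {"total": len(records), "duplicates": duplicates, "pii_hits": pii_hits}
```

A report like this would typically run per dataset version, with thresholds that gate whether a snapshot is released for training.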

What You Bring

  • 3+ years of hands-on experience as a Data Engineer or ML/Data Engineer, ideally in a product or platform team.
  • Strong programming skills in Python and experience with at least one additional language commonly used for data / backend (e.g., SQL, Scala, or Java).
  • Solid experience building ETL / ELT pipelines and batch/stream processing using tools such as Spark, Beam, Flink, Kafka, Airflow, Argo, or similar.
  • Experience working with cloud data platforms (e.g., AWS, GCP, Azure) and modern data storage technologies (object stores, data warehouses, data lakes).
  • Good understanding of data modeling, schema design, partitioning strategies and performance optimization for large datasets.
  • Familiarity with ML / LLM workflows: train/validation/test splits, dataset versioning, and the basics of model training and evaluation (you don’t need to be the primary model researcher, but you understand what the models need from the data).
  • Strong software engineering practices: version control, code review, testing, CI/CD, and documentation.
  • Ability to work independently and in collaboration with ML engineers, researchers and security experts, and to translate high-level requirements into concrete data engineering tasks.
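As a small illustration of the split and versioning familiarity mentioned above, one common pattern is deterministic split assignment by hashing a stable example ID, so splits stay reproducible as a dataset grows. This is a sketch under that assumption, not a description of any particular team's setup.

```python
import hashlib

def assign_split(example_id: str, val_frac: float = 0.1, test_frac: float = 0.1) -> str:
    """Deterministically map an example ID to train/validation/test.

    Hashing the ID (rather than random sampling) keeps an example in the
    same split across dataset versions, avoiding train/test leakage.
    """
    digest = hashlib.md5(example_id.encode("utf-8")).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
    if bucket < test_frac:
        return "test"
    if bucket < test_frac + val_frac:
        return "validation"
    return "train"
```

With the default fractions, roughly 80% of IDs land in train, and re-running the assignment never moves an example between splits.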

Nice to Have

  • Experience supporting LLM or NLP workloads, including dataset construction for pre-training / fine-tuning, or retrieval-augmented generation (RAG) pipelines.
  • Familiarity with ML tooling such as experiment tracking (e.g., Weights & Biases, MLflow) and ML-focused data tooling (feature stores, vector databases).
  • Background in security / cyber domains (logs, alerts, incidents, SOC workflows) or other high-volume, high-variance data environments.
  • Experience with data quality and observability platforms, or building in-house monitoring for data freshness, drift and anomalies.
  • Experience in environments where data infrastructure directly affects training efficiency and GPU utilization.

Why Join Us

  • Work at the intersection of modern data engineering, LLMs and real-world cyber security, with immediate impact on global customers.
  • Own the data layer that makes advanced ML and LLM research possible, directly influencing training efficiency, quality and speed of iteration.
  • Collaborate with experienced ML engineers, researchers and security experts in a fast-moving, supportive environment.
  • Access modern cloud and GPU infrastructure and large, unique datasets from one of the world’s leading cyber security vendors.
Check Point Software Technologies