Company Overview:
Cellebrite’s (Nasdaq: CLBT) mission is to enable its global customers to protect and save lives by enhancing digital investigations and intelligence gathering to accelerate justice in communities around the world.
Cellebrite’s AI-powered Digital Investigation Platform enables customers to lawfully access, collect, analyze, and share digital evidence in legally sanctioned investigations while preserving data privacy. Thousands of public safety organizations, intelligence agencies, and businesses rely on Cellebrite’s digital forensic and investigative solutions—available via cloud, on-premises, and hybrid deployments—to close cases faster and safeguard communities.
To learn more, visit us at www.cellebrite.com and https://investors.cellebrite.com/investors, and find us on social media @Cellebrite.
Position Overview:
Cellebrite is seeking a Data Tech Lead to join its SaaS Data Platform team. This role offers a unique opportunity to define and lead the architecture of large-scale, cloud-native data platforms, ensuring high performance, scalability, and security across the organization.
You will own the data platform end to end, from architecture and design through development and production deployment, while collaborating closely with the Data Science, Machine Learning, DevOps, Backend, and Product teams.
Key Responsibilities:
- Lead the architecture and evolution of Cellebrite’s Data Platform, ensuring scalability, reliability, security, and long-term maintainability
- Design, develop, and maintain batch and streaming ETL/ELT pipelines using Spark, Glue, Athena, Iceberg, Lambda, Kinesis, Step Functions, and EKS
- Define and implement Data Lake / Lakehouse architecture, including storage layouts, table design, schema management, partitioning, and lifecycle policies
- Optimize performance, reliability, and observability of data pipelines and backend services
- Introduce and standardize modern data patterns (lakehouse, event-driven pipelines, schema-aware processing)
- Ensure security, compliance, and auditability using AWS best practices (IAM, encryption, auditing)
- Mentor and guide engineers, lead design reviews and architecture discussions, and promote best practices
- Collaborate closely with Data Science, ML, Backend, DevOps, and Product teams to deliver end-to-end data-driven solutions
Requirements:
- 8+ years of experience in Data Engineering and/or Backend Development in AWS-based, cloud-native environments
- Strong hands-on experience writing Spark jobs (PySpark) and running workloads on EMR and/or Glue
- Proven ability to design and implement scalable backend services and data pipelines
- Deep understanding of data modeling, data quality, pipeline optimization, and distributed systems
- Experience with Infrastructure as Code and automated deployment of data infrastructure (Terraform / CDK / SAM)
- Strong debugging, testing, and performance-tuning skills in agile environments
- High level of ownership, curiosity, and a problem-solving mindset
Nice to Have:
- AWS certifications (Solutions Architect, Data Engineer)
- Experience with ML pipelines or AI-driven analytics
- Familiarity with data governance, self-service data platforms, or data mesh architectures
- Experience with PostgreSQL, DynamoDB, MongoDB
- Experience building or consuming high-scale APIs
- Background in multi-threaded or distributed system development
- Domain experience in cybersecurity, law enforcement, or other regulated industries
Location: Petah Tikva, Israel