Why Join Us?
We are looking for a talented Data Engineer to join our analytics team in the Big Data Platform group.
You will support our product and business data initiatives, expand our data warehouse, and optimize our data pipeline architecture with an AI-first mindset.
The ideal candidate has experience leveraging AI tools in modern data pipeline development to enable scalable solutions, accelerate delivery, and continuously explore new approaches and technologies, and is excited by the prospect of building the data architecture for the next generation of products and data initiatives.
This is a unique opportunity to join a team of outstanding people making a significant impact at Check Point.
We work on multiple products across many domains to deliver truly innovative solutions in the Cyber Security and Big Data realms.
This role requires close collaboration with both R&D teams and business stakeholders to understand their needs and translate them into robust, scalable data solutions.
Key Responsibilities
- Maintain and develop enterprise-grade Data Warehouse and Data Lake environments
- Create data infrastructure for various R&D groups across the organization to support product development and optimization
- Work with data experts to assist with technical data-related issues and support infrastructure needs
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, and redesigning infrastructure for scalability
- Build and maintain robust ETL/ELT pipelines for data ingestion, transformation, and delivery across various systems
- Incorporate AI-assisted tools into data pipeline design, development, and optimization to improve efficiency, scalability, and innovation
Qualifications
- B.Sc. in Engineering or a related field
- 3+ years of experience as a Data Engineer working on production systems
- Advanced SQL knowledge and experience with relational databases
- Proven experience using Python
- Hands-on experience building, optimizing, and automating data pipelines, architectures, and data sets
- Experience in creating and maintaining ETL/ELT processes
- Strong project management and organizational skills
- Strong collaboration skills with both technical (R&D) and non-technical (business) teams
- Experience using AI tools as part of the data engineering workflow, with a mindset of experimentation, operating at scale, and exploring new technologies
- Advantage: Azure data services, Databricks, EventHub, and Spark