We are seeking a skilled and motivated Data Engineer with expertise in Elasticsearch, cloud technologies, and Kafka. As a Data Engineer, you will design, build, and maintain scalable, efficient data pipelines that support our organization's data processing needs.
Responsibilities:
- Design and develop data platforms based on Elasticsearch, Databricks, and Kafka
- Build and maintain data pipelines that are efficient, reliable and scalable
- Collaborate with cross-functional teams to identify data requirements and design solutions that meet them
- Write efficient and optimized code that can handle large volumes of data
- Implement data quality checks to ensure accuracy and completeness of the data
- Troubleshoot and resolve data pipeline issues in a timely manner
Requirements:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- 3+ years of experience in data engineering
- Expertise in Elasticsearch, cloud technologies (such as AWS, Azure, or GCP), Kafka, and Databricks
- Proficiency in programming languages such as Python, Java, or Scala
- Experience with distributed systems, data warehousing and ETL processes
- Experience with container environments such as AKS, EKS, or OpenShift is a plus
- A high security clearance is a plus