We are looking for a skilled DevOps Engineer to join our dynamic Data Engineering team. You will be responsible for managing and optimizing our data infrastructure while ensuring seamless integration, deployment, and monitoring of our systems.
Responsibilities:
- Manage, maintain, and optimize Kubernetes clusters.
- Manage Spark clusters within the EKS environment.
- Design and implement CI/CD pipelines in GitLab to automate and streamline deployment processes.
- Manage and maintain Kafka clusters, ensuring high availability and reliability of our data streaming platform.
- Develop and implement FinOps platforms for efficient cost management in AWS and Snowflake.
- Establish centralized monitoring to track system performance, reliability, and cost optimization.
- Infrastructure as Code (IaC): use Terraform for infrastructure provisioning, ensuring consistency and scalability, applying a solid understanding of Linux and networking.
Requirements:
- 3+ years of experience as a DevOps engineer, building CI/CD pipelines with GitLab.
- 3+ years of experience with AWS: proficiency in managing foundational AWS services (IAM, VPC, EC2, S3, Route53, EKS, ALB) with Terraform, and an understanding of the AWS pricing model.
- 3+ years of experience with Kubernetes, Kafka, and Spark (EMR on EKS an advantage).
- 2+ years of experience with AWS data services and MLOps.
- 2+ years of experience in Python programming and in designing and implementing monitoring solutions with Datadog and Grafana.
- Understanding of FinOps principles and best practices for cost optimization, especially for AWS data services (an advantage).