Experts in deep learning, e-commerce, and digitization, Fetcherr disrupts traditional systems with its cutting-edge AI technology. At its core is the Large Market Model (LMM), an adaptable AI engine that forecasts demand and market trends with precision, empowering real-time decision-making. Specializing initially in the airline industry, Fetcherr aims to revolutionize industries with dynamic AI-driven solutions.
We are looking for an experienced Data Engineer Team Leader to drive our data engineering efforts and lead a team of skilled data engineers. This role is a blend of hands-on technical leadership and team management, responsible for building and scaling the data infrastructure that powers real-time pricing, large-scale data pipelines, and machine learning products.
You will lead the team through architecture decisions, development, and deployment of mission-critical systems, while growing and mentoring a high-performing team.
Responsibilities
- Lead a team of data engineers building robust, scalable, and high-performance data pipelines and infrastructure.
- Design, build, and maintain distributed data processing workflows (batch & streaming).
- Drive best practices for data quality, validation, testing, and observability.
- Own and evolve Fetcherr’s data architecture in alignment with business and product goals.
- Manage sprint planning, task breakdown, code reviews, and performance feedback for your team.
- Contribute hands-on to key development tasks and architecture decisions.
- Recruit, mentor, and grow the data engineering team.
Requirements
You'll be a great fit if you have...
- BSc or MSc in Computer Science, Engineering, or related field.
- 6+ years of experience in data engineering, including:
  - Strong Python development background.
  - Advanced SQL skills and data modeling experience.
  - Experience with cloud platforms (GCP preferred; AWS/Azure acceptable).
- 2+ years in a technical leadership or team lead role.
- Strong experience with orchestration tools like Apache Airflow, Prefect, or Dagster.
- Hands-on experience with distributed data frameworks: Apache Beam, Spark, or similar.
- Familiarity with Docker and CI/CD tooling (e.g., GitHub Actions, GitLab CI).
- Proven ability to design and maintain data platforms processing hundreds of terabytes of data.
- Strong communication and collaboration skills.
- Fluent English (spoken and written).
Nice to Have
- Experience with Google Cloud Platform (BigQuery, Dataflow, Pub/Sub).
- Experience with ML pipelines or collaboration with data science teams.
- Exposure to airline domain, pricing systems, or GDS feeds.
- Familiarity with data contracts, versioning, and lineage tools.
- Strong foundation in data structures and algorithms.
- Experience optimizing pipelines and databases for cost.