Who We Are
Our Data team consists of highly skilled senior software and data professionals who collaborate to solve complex data challenges. We process billions of records daily from multiple sources using multi-stage pipelines with intricate data structures and advanced queries.
We are responsible for building data pipelines end to end—from raw data ingestion to the creation of actionable datasets—following the bronze, silver, and gold paradigm. This includes business logic, infrastructure, ETLs, optimization, and ongoing maintenance.
The data we deliver drives insights and decision-making across the organization and enhances our product offerings. We leverage technologies such as AWS, Snowflake, Iceberg, Airflow, Spark, and more.
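To illustrate the bronze, silver, and gold paradigm mentioned above, here is a minimal sketch in plain Python. It is purely illustrative, not our actual pipeline: the record fields, function names, and cleaning rules are all hypothetical, and in practice stages like these would run on Spark or Snowflake rather than in-memory Python.

```python
# Illustrative sketch of the bronze/silver/gold (medallion) paradigm.
# All names and rules here are hypothetical, not the team's real pipeline.

# Bronze: raw records exactly as ingested, possibly duplicated or malformed.
bronze_records = [
    {"user_id": "1", "amount": "10.5"},
    {"user_id": "1", "amount": "10.5"},   # duplicate row
    {"user_id": "2", "amount": "bad"},    # malformed amount
    {"user_id": "3", "amount": "7.0"},
]

def to_silver(records):
    """Silver: validated, typed, deduplicated records."""
    seen, silver = set(), []
    for rec in records:
        try:
            row = (int(rec["user_id"]), float(rec["amount"]))
        except (KeyError, ValueError):
            continue  # drop rows that fail validation
        if row not in seen:
            seen.add(row)
            silver.append({"user_id": row[0], "amount": row[1]})
    return silver

def to_gold(records):
    """Gold: an aggregated, business-ready dataset (total amount per user)."""
    totals = {}
    for rec in records:
        totals[rec["user_id"]] = totals.get(rec["user_id"], 0.0) + rec["amount"]
    return totals

silver = to_silver(bronze_records)
gold = to_gold(silver)
# gold is now a clean, actionable dataset derived from raw input
```

The same shape scales up: each stage reads the previous layer, applies business logic, and writes a more refined dataset for downstream consumers.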
What You’ll Do
- Develop and maintain ETL pipelines, including complex SQL queries
- Bridge the gap between business requirements and technical solutions
- Design, develop, and optimize scalable data infrastructure
- Write high-quality, maintainable code and SQL
- Utilize technologies such as Snowflake, Iceberg, Airflow, Spark, and other data infrastructure tools
- Integrate external data sources into our data ecosystem
- Contribute to the evolution of our AWS-based infrastructure
Who You Are
- 5+ years of experience as a Data Engineer, Data Scientist, or Backend Developer
- Passionate about transforming raw data into clean, actionable datasets that drive insights for stakeholders
- Proficient in writing and optimizing complex SQL queries (hundreds of lines long)
- Strong development experience in Python, including regular expressions
- Experience working with complex data structures
- Hands-on experience with cloud data warehouses such as Snowflake, BigQuery, or Databricks
- Ability to understand business or product needs and translate them into a development plan
- Experience with Big Data and cloud-based environments, preferably AWS
- Knowledge of Scala and Spark is a plus
- Excellent teamwork and collaboration skills, working closely with data analysts, BI developers, and other business stakeholders