Data Analytics Engineer

Overview
Skills
  • Python
  • SQL
  • Kafka
  • AWS
  • Azure
  • GCP
  • Snowflake
  • Airflow
  • BigQuery
  • Data Lake
  • dbt
  • Redshift
  • S3
  • Apache Spark
  • Databricks

We are seeking a skilled Data Analytics Engineer to join our dynamic Analytics team. In this role, you will play a critical part in building, maintaining, and scaling our data infrastructure to support business intelligence, product analytics, and business operations. You will collaborate closely with analysts, data scientists, and cross-functional stakeholders to ensure clean, reliable, and efficient data pipelines. This is an opportunity to contribute to a data-driven culture and help drive actionable insights across the company.

Responsibilities

  • Design, build, and optimize robust data pipelines to ingest, process, and store data from various sources.
  • Maintain and optimize our data warehouse (e.g., Snowflake, BigQuery, Redshift) and build data models to ensure scalability, reliability, and performance.
  • Develop and maintain ETL/ELT workflows to enable data accessibility for reporting and analysis.
  • Collaborate with analytics teams (BI, product, and business) and cross-functional teams (GTM, product) to understand their data requirements, and provide the infrastructure and data models needed to support their objectives.
  • Monitor and troubleshoot data quality, pipeline failures, and performance issues, implementing fixes and improvements as needed.
  • Contribute to automating manual processes and improving data reliability and efficiency.
  • Stay up-to-date with emerging trends, tools, and technologies in data engineering to drive innovation and continuous improvement.

Requirements

  • 3+ years of experience in a data engineering role, with a proven track record of building and managing data pipelines and infrastructure.
  • Solid understanding of data warehousing concepts such as dimensional modeling, database design, and data modeling.
  • Strong business understanding and ability to analyze data.
  • Strong proficiency with SQL for data manipulation and querying.
  • Hands-on experience with ETL/ELT tools (e.g., dbt, Apache Airflow).
  • Experience with cloud platforms such as AWS, GCP, or Azure, including data-related services (e.g., S3, Redshift, BigQuery, Data Lake, Snowflake).
  • Familiarity with programming languages such as Python.
  • Knowledge of tools and frameworks for big data processing (e.g., Apache Spark, Kafka, Databricks) is a plus.
  • Strong problem-solving skills, attention to detail, and the ability to work both independently and collaboratively in a fast-paced environment.
  • Excellent communication and interpersonal skills.

G-STAT