DevJobs

Data Engineer

Skills
  • SQL
  • Python
  • Spark
  • Kafka
  • RDBMS
  • Tableau
  • Power BI
  • AWS
  • Snowflake
  • Airflow
  • data modeling · 5 yrs
  • data pipelines · 5 yrs
  • ETL · 5 yrs
  • non-relational databases
  • Kinesis
  • Redshift
  • Looker
  • Hadoop
  • Fivetran
  • BigQuery
  • Amazon Athena

We are looking for an experienced Data Engineer who thrives in a fast-paced environment and can work independently while collaborating with multiple stakeholders. The ideal candidate will have a strong background in building ETL processes, data pipelines, and data models, along with the ability to understand business needs and align data solutions with them.

Key Responsibilities
  • Design, develop, and maintain ETL processes and data pipelines to collect, transform, and store structured and unstructured data.
  • Build and optimize data models to support analytics and business intelligence needs.
  • Work closely with cross-functional stakeholders (analysts, engineers, product managers, business teams) to translate business requirements into scalable data solutions.
  • Ensure data integrity, quality, and consistency across various sources and pipelines.
  • Monitor, troubleshoot, and improve performance and reliability of data workflows.
  • Implement best practices for data governance, security, and compliance.
  • Take ownership of your work, operating independently while collaborating effectively with stakeholders across teams.
Key Requirements
  • 5+ years of experience in data engineering, including hands-on work on ETL processes, data pipelines, and data modeling.
  • Strong proficiency in SQL and experience working with relational and non-relational databases.
  • Experience with cloud platforms, preferably AWS.
  • Familiarity with Amazon Athena is an advantage.
  • Experience with workflow orchestration tools, particularly Apache Airflow, is an advantage.
  • Experience with Fivetran or other data integration tools is an advantage.
  • Proficiency in programming languages such as Python for data processing.
  • Strong communication skills to engage effectively with multiple stakeholders.
  • Self-driven with the ability to work independently and manage multiple priorities in a fast-paced environment.
Nice to Have
  • Experience with big data technologies (e.g., Spark, Hadoop).
  • Knowledge of streaming data processing (Kafka, Kinesis, etc.).
  • Exposure to data warehousing solutions such as Redshift, Snowflake, or BigQuery.
  • Experience working with business intelligence tools (Looker, Tableau, Power BI).
Why Join Us?
  • Work on impactful, large-scale data engineering projects.
  • Be part of a collaborative, data-driven team.
  • Opportunities for career growth and professional development.
  • Competitive salary and benefits package.
  • Contribute to cutting-edge healthcare technology that enhances patient outcomes.



If you’re passionate about building scalable data solutions, enjoy working with multiple stakeholders, and thrive working independently in a fast-paced environment, we’d love to hear from you!

Apply now and join our team! 🚀

Sweetch