MedOrion is an early-stage startup striving to help healthy people stay healthy. Our cutting-edge platform empowers health organizations to overcome their members' psychological barriers to important medical procedures such as cancer screening and medication management.
We have already established partnerships with three Top-10 health insurance companies in the U.S. and currently manage 1.5 million lives. Our growth from zero to 1.2M ARR within the last 12 months sets the stage for our next big leap: scaling to become a dominant player in the U.S. health insurance market.
As we prepare to scale, we are seeking a senior data engineer to lead our SaaS product's data infrastructure. Our cloud-based system ingests medical data daily for millions of members and serves both reporting/analytical features and machine learning pipelines.
In this role, you will take our product to the next level using modern development methodologies and tools. Day to day, you will lead the design, development, and maintenance of all of the product's data infrastructure and collaborate with teams across the company, including Product, Data Science, and Customer Success.
Responsibilities:
- Design, build, and optimize data pipelines using BigQuery and Python for a SaaS product.
- Create and manage data models, schemas, and databases for efficient data storage and retrieval.
- Ensure data quality and reliability through validation, cleansing, and error handling processes.
- Collaborate with cross-functional teams to identify data needs and implement solutions that align with business objectives.
- Define and implement a comprehensive BI strategy, including reporting, dashboarding, and data visualization.
- Identify key performance indicators (KPIs) and develop metrics to measure business performance and track progress.
- Evaluate and recommend tools, frameworks, and methodologies that enhance our data capabilities.
Qualifications:
- Proficiency with SQL and database management systems.
- Familiarity with Python and experience with relevant data processing libraries/frameworks (e.g., Pandas, NumPy, PySpark).
- Strong expertise in designing and implementing data pipelines, ETL processes, and data warehousing solutions.
- Proficiency in working with BigQuery and/or similar data warehousing technologies.
- Experience in building scalable and efficient data models for analytics and reporting purposes.
- Experience with modern development tools such as Git, Dataform, and dbt.
- Solid understanding of data governance, data security, and data privacy best practices (HIPAA experience is an advantage).
- Experience with BI tools (e.g., Looker, Tableau, Metabase) and proficiency in designing interactive dashboards and reports.