4+ years of direct experience with SQL (e.g., Redshift/Postgres/MySQL, Snowflake), data modeling, data warehousing, and building ELT/ETL pipelines - MUST
2+ years of experience with Python
3+ years of experience in scalable data architecture, fault-tolerant ETL, and monitoring of data quality in the cloud
Experience working with cloud environments (AWS preferred) and big data technologies (EMR, EC2, S3, Snowflake, Spark Streaming, dbt, Airflow)
Exceptional troubleshooting and problem-solving abilities, including debugging and root-causing defects in large-scale systems
Deep understanding of distributed data processing architecture and tools such as Kafka, Spark, and Airflow
Experience with design patterns and coding best practices; understanding of data modeling concepts, techniques, and best practices
Proficiency with modern source control systems, especially Git
Basic Linux/Unix system administration skills
Nice to have
BS or MS degree in Computer Science or a related technical field
Experience with data warehouses
Experience with NoSQL and large-scale databases
Understanding of fintech business processes
DataOps experience on AWS
Microservices
Experience with dbt
What Else
Energetic and enthusiastic about data
Analytical
Self-motivated; works well both independently and as part of a team
Excellent verbal and written communication and data presentation skills, including experience communicating with both business and technical teams
Eager to explore new technologies; a fast self-learner who can quickly master new concepts, disciplines, and methods