
Who are we?
Cypago transforms the way organizations build, maintain, and monitor their security and compliance with its first-of-its-kind Cyber GRC (Governance, Risk, Compliance) Automation (CGA) platform.
Running at a high pace and growing fast, we provide our customers with a deep-tech product that solves a truly painful problem.
With Cypago, organizations all over the world can stay continuously secure and compliant with the most demanding security frameworks, and automate their security and risk management programs seamlessly.
What will you do?
As a Senior Data Software Engineer, you will lead the design and implementation of complex, production-grade, high-scale data pipelines and code infrastructure.
TL;DR: This role is unique and intended only for experienced data folks and software engineers who are also blessed with creativity, passion, and problem-solving skills.
What will you bring?
• 7+ years of experience as a software engineer
• 1+ years of experience in Go
• 3+ years of experience in Python
• Proven track record of designing and building robust, complex, production-grade, high-scale data pipelines
• 2+ years working with Airflow/Argo
• 2+ years of designing and building production-grade ETL pipelines
• 1+ years of designing and building production-grade ELT pipelines
• 2+ years of experience working with data warehouses (Snowflake/Redshift/BigQuery) as both data lake and data warehouse, including ELT pipelines
• Proven experience in designing complex and compound DAG-based workflows (see the sketch after this list)
• 2+ years of designing and building production-grade, high-scale analytics systems
• 5+ years of working with SQL and NoSQL databases - PostgreSQL, MongoDB/Cassandra/other
• 2+ years of experience with AWS
• Experience with cloud-native data services such as EMR, Glue, Athena, Dataflow, and others
• 3+ years of experience working with big data tools such as Spark, Hadoop, and HBase
• Experience working with queueing, pub/sub, and caching systems - Kafka, Redis
• Experience with both OLTP RDBMSes and OLAP columnar databases
• Strong architectural and design experience plus design patterns knowledge and practice
• Knowledge of DB internals - how things work under the hood
• Data savvy - well versed in existing solutions, tools, and trends, and able to combine them into robust, sustainable, long-lasting solutions
• In love with data and tech - constantly eager to research new solutions, products, and trends and to stay up to date with the latest news
• Graph DB experience - Neo4j/Neptune - big advantage
• Algorithmic and theoretical background - graph algorithms, optimization, and compiler theory - advantage
• Practical cloud-native ML background and experience - advantage
• Dev community involvement / OSS contributor - advantage
• Academic computer science and algorithmic background - advantage
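To give a concrete flavor of the DAG-based pipeline work mentioned above, here is a minimal sketch of a three-step ETL workflow as an Airflow DAG (assuming Airflow 2.4+). The DAG id, task names, and the extract/transform/load functions are hypothetical and purely illustrative; they are not part of Cypago's actual stack.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Hypothetical step: pull raw records from a source system.
    pass

def transform():
    # Hypothetical step: clean, normalize, and enrich the extracted records.
    pass

def load():
    # Hypothetical step: load the transformed records into the warehouse.
    pass

with DAG(
    dag_id="example_etl",  # hypothetical name, for illustration only
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # The >> operator declares the DAG edges: extract -> transform -> load.
    extract_task >> transform_task >> load_task

In a production pipeline, each step would read from and write to durable storage (for example, object storage or a warehouse staging area) so that tasks stay idempotent and safely retryable.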
Why should you join us?
Join our A-team and be part of a fun, dynamic work environment.
Take on real professional challenges, do something meaningful, and also - have lots of fun!