
Data Engineer

Overview
Skills
  • SQL ꞏ 3y
  • Python ꞏ 3y
  • Spark
  • Kafka
  • PostgreSQL ꞏ 3y
  • Design Patterns
  • Microservices
  • Snowflake ꞏ 3y
  • AWS
  • Azure
  • GCP
  • Kubernetes
  • Docker
  • Airflow
  • RabbitMQ
  • Terraform
  • Impala ꞏ 3y
  • OOP Languages ꞏ 3y
  • ETL development ꞏ 3y
  • Data warehousing ꞏ 3y
  • Data modeling ꞏ 3y
  • PySpark
  • RDBMS
  • Databricks
BioCatch is the leader in Behavioral Biometrics, a technology that leverages machine learning to analyze an online user’s physical and cognitive digital behavior to protect individuals online. BioCatch’s mission is to unlock the power of behavior and deliver actionable insights to create a digital world where identity, trust and ease seamlessly co-exist. Today, BioCatch counts over 25 of the top 100 global banks as customers who use BioCatch solutions to fight fraud, drive digital transformation, and accelerate business growth. BioCatch’s Client Innovation Board, an industry-led initiative including American Express, Barclays, Citi Ventures, and National Australia Bank, helps BioCatch to identify creative and cutting-edge ways to leverage the unique attributes of behavior for fraud prevention. With over a decade of analyzing data, more than 80 registered patents, and unparalleled experience, BioCatch continues to innovate to solve tomorrow’s problems. For more information, please visit www.biocatch.com.

Main responsibilities:

  • Set the direction of our data architecture and determine the right tools for each job. We collaborate on the requirements, and then you call the shots on what gets built.
  • Manage end-to-end execution of high-performance, large-scale data-driven projects, including design, implementation, and ongoing maintenance.
  • Monitor and optimize the team’s cloud costs.
  • Design and construct monitoring tools to ensure the efficiency and reliability of data processes.

Requirements:

  • 3+ years of experience in data engineering and big data – Must
  • Experience working with different databases (SQL, Snowflake, Impala, PostgreSQL) – Must
  • Experience in programming languages (Python, OOP languages) – Must
  • Experience with data modeling, ETL development, and data warehousing – Must
  • Experience building both batch and streaming data pipelines using PySpark – Big Advantage
  • Experience with messaging systems (Kafka, RabbitMQ, etc.) – Big Advantage
  • Experience working with any of the major cloud providers (Azure, Google Cloud, AWS) – Big Advantage
  • Experience creating and maintaining microservices data processes – Big Advantage
  • Basic knowledge of DevOps concepts (Docker, Kubernetes, Terraform) – Advantage
  • Experience with design-pattern concepts – Advantage

Our stack: Azure, GCP, Databricks, Snowflake, Airflow, RDBMS, Spark, Kafka, Kubernetes, Microservices, Python, SQL

Your stack: proven back-end software engineering skills, the ability to think for yourself and challenge common assumptions, a commitment to high-quality execution, and an embrace of collaboration.