Senior Data Engineer

Skills
  • Python
  • SQL
  • Kafka
  • ML
  • Linux
  • AWS
  • Terraform
  • Databricks
  • ELT
  • ETL
  • PySpark
  • data visualization tools
  • Debezium
  • delta table format
  • parquet
  • algorithmic applications
  • AWS-native data solutions
  • dashboard creation
  • data analysis
  • data modeling

Location: Tel Aviv, Israel

Versatile is an innovative AI-driven construction intelligence startup, committed to transforming the construction industry with cutting-edge technology. Our mission is to enhance the efficiency, safety, and productivity of construction projects through intelligent solutions.
We’re hiring a hands-on Senior Data Engineer who wants to build data products that move the needle in the physical world. Your work will help construction professionals make better, data-backed decisions every day. You’ll be part of a high-performing engineering team based in Tel Aviv.
What you will be doing:

  • Lead the design, development, and ownership of scalable data pipelines (ETL/ELT) that power analytics, product features, and downstream consumption.
  • Collaborate closely with Product, Data Science, Data Analytics, and full-stack/platform teams to deliver data solutions that serve product and business needs.
  • Build and optimize data workflows using Databricks, Spark (PySpark, SQL), Kafka, and AWS-based tooling.
  • Implement and manage data architectures that support both real-time and batch processing, including streaming, storage, and processing layers.
  • Develop, integrate, and maintain data connectors and ingestion pipelines from multiple sources.
  • Manage the deployment, scaling, and performance of data infrastructure and clusters, including Databricks, Kafka, and AWS services.
  • Use Terraform (and similar tools) to manage infrastructure-as-code for data platforms.
  • Model and prepare data for analytics, BI, and product-facing use cases, ensuring high performance and reliability.
Requirements:

  • 5+ years of hands-on experience working with large-scale data systems in production environments.
  • Proven experience designing, deploying, and integrating big data frameworks such as PySpark, Kafka, Databricks, or equivalent cloud-based data platforms.
  • Strong expertise in Python and SQL, with experience building and optimizing data processing workflows (batch and/or streaming).
  • Experience with AWS cloud services and Linux-based environments.
  • Background in building ETL/ELT pipelines and orchestrating workflows end-to-end.
  • Understanding of event-driven and domain-driven design principles in modern data architectures.
  • Experience preparing data for BI, analytics, and consumption by product teams or platforms.
  • Familiarity with infrastructure-as-code tools (e.g., Terraform) — advantage.
  • Experience with parquet, delta table format, Debezium, or AWS-native data solutions — advantage.
  • Experience in data modeling, data analysis, or supporting visualization/analytics teams — advantage.
  • Familiarity with data visualization tools or creating dashboards — advantage.
  • Experience supporting machine learning or algorithmic applications — nice to have.
  • BSc or higher in Computer Science, Engineering, Mathematics, or another quantitative field.
