
Data Infra Tech Lead

Overview
Skills
  • SQL
  • Python
  • Elasticsearch
  • Snowflake
  • Networking
  • BigQuery
  • Databricks
  • OLAP
  • identity and access management
  • vector databases
  • storage
  • relational database management systems
  • MCP
  • LLM-related technologies
  • A2A
  • FinOps
  • embeddings
  • cloud infrastructure
  • cloud cost management frameworks
  • Amazon Web Services
  • AI agent protocols
At Lusha, we're building for builders, and we build fast and AI-first, so we look for builders!

By a builder, we mean someone who turns “maybe” into “done”.

The Data Group builds and maintains the core data assets that power our products, from our Company and Contact databases to our Connect and Recommendation systems. The Data Tech Lead sits at the forefront of that mission, leading engineering innovation that directly shapes the experience of our growing user base and drives the evolution of Lusha’s intelligent data services.

This role is based in Tel Aviv. We work in a hybrid model, with 3 days a week in the office.

Your impact and responsibilities:

  • Leading the design and development of scalable, high-performance data workflows, including both batch pipelines and real-time data products.
  • Defining, implementing, and enforcing engineering best practices related to code quality, testing, CI/CD pipelines, observability, and documentation.
  • Mentoring, supporting, and growing a team of data engineers, fostering a collaborative and high-performance engineering culture.
  • Identifying opportunities to create new data assets and features that expand Lusha’s product capabilities and value proposition.
  • Driving architectural decision-making in areas of data modeling, storage solutions, and compute resources within cloud environments such as Databricks and Snowflake.
  • Collaborating closely with cross-functional stakeholders, including Product, DevOps, and R&D, to ensure effective delivery and platform stability.
  • Promoting and championing a data-driven mindset across the organization, balancing technical rigor with business context and strategic goals.

Requirements:

  • Minimum 5 years of hands-on experience designing, building, and maintaining large-scale data pipelines for both batch processing and streaming use cases.
  • Deep expertise in Python and SQL, with a focus on writing clean, performant, and maintainable code.
  • Strong analytical and problem-solving skills, with the ability to break down complex technical challenges and align solutions to business objectives.
  • Solid background in data modeling, analytics, and designing architectures for scalability, performance, and cost efficiency.
  • Ability to thrive in a fast-paced, high-growth environment.
  • Practical experience working with modern OLAP systems and cloud data platforms, including Databricks, Snowflake, or BigQuery.
  • Familiarity with AI agent protocols (such as A2A, MCP) and LLM-related technologies (e.g., vector databases, embeddings) is a plus.
  • AI savviness, with comfort adopting AI tools and staying current with emerging AI trends and technologies.
  • Experience leading task forces and cross-domain projects.

Bonus Points:

  • Experience with Elasticsearch, Databricks, and relational database management systems.
  • Strong understanding of cloud infrastructure (preferably Amazon Web Services), including networking, storage, identity and access management (IAM), and cost optimization.
  • Background in FinOps or cloud cost management frameworks.
  • Management experience.