At Lusha, we're building for Builders. We build fast and AI-first, so we look for Builders! By a builder, we mean someone who turns “maybe” into “done”.
We’re looking for a Software Engineer (Intelligence & Data) to join the Users & Integrations team within Lusha’s Intelligence Group. This role is built for an engineer who thrives on solving complex backend and data challenges.
In this role, you will take ownership of crucial user data integrations and architect the sophisticated matching logic that powers our platform. You will work extensively with large-scale data pipelines, translate complex algorithms into high-performance production code, and tackle massive scalability challenges to enhance the data experience for Lusha’s customers.
Where does this role fit in our vision?
Every role at our company is designed with a clear purpose. At Lusha, data is everything: it’s at the heart of all we do. The Intelligence Group is responsible for shaping the experience of hundreds of thousands of users who rely on our data daily.
The Users Team is the engine behind Lusha’s data connectivity, handling massive-scale user data integrations and engineering complex entity-matching logic. By translating millions of data signals and advanced algorithms into high-performance pipelines, we ensure users receive highly accurate, tailored data - optimizing their overall experience while driving the core KPIs of our Intelligence Group.
What will you be responsible for?
- Developing and implementing robust, scalable data pipelines and integration solutions within Lusha’s Databricks-based environment.
- Developing models and implementing algorithms, with a strong emphasis on delivering high-quality results.
- Leveraging technologies like Spark, Kafka, and Airflow to tackle complex data challenges and enhance business operations.
- Designing innovative data solutions that support millions of data points, ensuring high performance and reliability.
Requirements: what we look for
- 3+ years of software engineering experience building scalable backend systems, complex data integrations, and robust data infrastructure.
- A strong builder mindset, with experience turning ideas into working solutions.
- Algorithmic experience, including developing and optimizing machine learning models and implementing advanced data algorithms.
- Experience working with cloud ecosystems, preferably AWS (S3, Glue, EMR, Redshift, Athena) or comparable cloud environments (Azure/GCP).
- Expertise in extracting, ingesting, and transforming large datasets efficiently.
- A passion for sharing knowledge, fostering a supportive engineering culture, and engaging in collaborative problem-solving with your peers.
Nice-to-have:
- Knowledge of big data platforms, such as Spark, Databricks, Elasticsearch, and Kafka for real-time data streaming.
- Hands-on experience working with Vector Databases and embedding techniques, with a focus on search, recommendations, and personalization.