Overview
Would you like to be part of an exciting and impactful product? Would you like to help protect millions of users and make the world a safer place, using cutting-edge technologies at scale?
Oligo is a rapidly growing startup headquartered in Tel Aviv, leading the way in reshaping Application Security. Backed by $28 million in funding from top-tier VCs including Lightspeed, Ballistic Ventures, and TLV Partners, we are developing a unique solution to open source security challenges. Our real-time monitoring and breakthrough detection capabilities allow us to pinpoint and mitigate serious threats, making the digital world safer. As our team expands, we are looking for a Senior Data Infrastructure Engineer to join us in our mission to redefine application security.
As a Senior Data Infrastructure Engineer, you will own our data processing pipeline and other data services, playing a crucial role in optimizing and scaling them. You will be responsible for designing, developing, and maintaining efficient ETL processes and data storage solutions that handle large volumes of streaming data. This role requires strong technical skills, experience with big data technologies in cloud environments, and the ability to thrive in a fast-paced, dynamic setting. As the company grows, there may be opportunities for leadership and mentorship within the R&D team.
Responsibilities
- Design, develop, and maintain scalable ETL processes for efficient processing of streaming data into our platform.
- Optimize data ingestion and processing pipelines to handle large volumes of data in real-time, ensuring high performance and reliability.
- Continuously monitor, analyze, and improve the performance and efficiency of data processing and storage systems.
- Collaborate with the R&D team to define data requirements and implement efficient data models.
- Work closely with the R&D team to identify, investigate, and resolve data-related issues and anomalies.
- Conduct thorough testing, performance tuning, and debugging of data pipelines to ensure data integrity and accuracy.
- Stay up-to-date with emerging technologies, tools, and trends in big data processing and backend systems to propose and implement innovative solutions.
Qualifications
- 5+ years of proven experience as a data/backend engineer in large-scale data processing environments.
- Solid understanding of ETL concepts, data modeling, and schema design.
- Strong proficiency in SQL and experience working with relational databases, such as PostgreSQL.
- Experience with big data processing frameworks and technologies.
- Familiarity with containerization and experience deploying applications in Kubernetes clusters.
- Familiarity with distributed computing and cloud platforms (e.g., AWS, Azure, or GCP).
- Experience with architectural patterns such as event-driven design, CQRS, and microservices.
- Demonstrated leadership skills with the ability to provide guidance, mentorship, and technical expertise to the team.
- Strong problem-solving skills and the ability to work independently as well as collaboratively in a team-oriented environment.
- Excellent communication skills and the ability to effectively articulate technical concepts to both technical and non-technical stakeholders.
- Knowledge of data warehousing concepts and technologies (e.g., Redshift, Snowflake) is a plus.
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field is a plus.