About Tarci
Tarci is a VC-backed Continuous Intelligence Platform. Leveraging our dynamic data, Tarci helps enterprises that serve SMBs cut through the noise of competitive markets and make informed, actionable business decisions.
The Role
As the Data Team Leader, you will play a pivotal role in managing and guiding a team of Data Engineers and Data Analysts.
In this role, you will own the data pipeline end to end: massive-scale data collection, extraction of insights, and generation of the data repositories used by the Tarci platform and APIs.
Vast experience in developing highly scalable, data-oriented systems is essential for this role.
Key Responsibilities
- Lead and mentor a team of Data Engineers and Data Analysts to ensure high performance, collaboration, and professional development.
- Set clear objectives, provide regular feedback, and facilitate effective communication within the team.
- Lead the design, development, and optimization of the infrastructure to meet growing data demands while ensuring reliability and efficiency.
- Drive data analysis initiatives, leveraging rule-based, SQL-based, and machine learning techniques to extract insights and patterns from data.
- Collaborate with stakeholders to understand requirements and develop data classification models to enhance data organization and accessibility.
Requirements
- Proven experience (7+ years) in managing and working in teams of Data Engineers and Data Analysts.
- Strong proficiency in Python, Scala, or Java for data manipulation, automation, and scripting.
- Deep understanding of data engineering concepts, including data pipelines, ETL processes, and data warehousing.
- Experience with AWS/Azure/GCP infrastructure and services for data management and processing.
- Familiarity with web crawling techniques, data scraping, and APIs for data extraction.
- Expertise in SQL-based databases for data querying and manipulation.
- Knowledge of machine learning algorithms and techniques for data analysis and classification.
- Excellent communication skills and ability to collaborate effectively with cross-functional teams.
- Strong problem-solving skills and a proactive approach to identifying and resolving technical challenges.
- Bachelor's degree in Computer Science, Engineering, a related field, or equivalent experience.
Preferred Qualifications
- Experience with big data technologies such as Apache Spark, Hadoop, or similar frameworks.
- Familiarity with Presto SQL.
- Familiarity with containerization technologies such as Docker and orchestration tools like Kubernetes.