Description
Tango is a successful market leader: a live-streaming platform with 450+ million registered users, in an industry projected to reach $240 billion within the next few years.
The B2C platform, built on best-in-class global video technology, allows millions of talented people around the world to create their own live content, engage with their fans, and monetize their talents.
Tango's live-streaming service was founded in 2018 and is powered by 500+ global employees operating in a culture of growth, learning, and success!
The Tango team is a vigorous cocktail of hard workers, creative brains, energizers, geeks, overachievers, athletes, and more. We push the limits to take our app from “one of the top” to “the leader”.
The best way to describe Tango's work style is that the word “impossible” isn't in our vocabulary. We believe that success is a thorny path that runs through sleepless nights, corporate parties, tough releases, and, of course, our users' smiles (and since we are a LIVE app, we truly get to see our users all around the world smiling right in front of us in real time!).
Do you want to join the party?
Responsibilities
- Architect and maintain scalable cloud-based data infrastructure (compute, storage, orchestration, messaging, workflow management).
- Collaborate closely with Data Engineering to operationalize new pipelines, frameworks, and data models.
- Implement infrastructure-as-code (e.g., Terraform) to ensure consistent, automated environment provisioning.
- Develop internal tooling to support deployment automation, testing frameworks, and pipeline lifecycle management.
- Own reliability, uptime, and performance across all production data workflows.
- Implement monitoring, alerting, logging, and traceability using modern observability platforms.
- Champion data quality, lineage tracking, and automated validation frameworks.
- Lead incident response, root-cause analysis, and postmortems for pipeline or platform issues.
- Work daily with data engineers, analysts, platform engineers, and stakeholders to improve reliability and developer experience.
- Lead architectural reviews and guide teams in adopting DataOps best practices.
- Mentor junior engineers and contribute to long-term data platform strategy.
- Maintain clear, consistent documentation of operational processes, infrastructure components, and standards.
Requirements
- 5–8+ years in DataOps, DevOps, Platform Engineering, or similar roles.
- Strong hands-on experience with modern cloud data ecosystems (GCP, AWS, Azure).
- Deep understanding of:
  - Distributed systems and ETL/ELT patterns
  - Orchestration frameworks (e.g., Airflow, Cloud Composer)
  - Streaming and messaging systems (Kafka, Pub/Sub, etc.)
  - Batch and streaming processing frameworks (e.g., Apache Beam, Spark, Flink)
  - Infrastructure-as-code (Terraform), containers (Docker), CI/CD tooling
  - Python and SQL for automation and data workflow integration
- Experience operating production-grade data platforms with a strong focus on SLAs, reliability, and cost optimization.
Nice to Have
- Google Cloud Platform experience (especially BigQuery, Dataflow, Pub/Sub, Dataplex, or Cloud Composer) is a significant plus.
- Experience with BI platforms such as Looker.
- Familiarity with ML Ops/model lifecycle management.
- Real-time data processing experience with Kafka, Flink, or similar.
- Expertise in cost optimization and performance tuning for cloud-based data warehouses.