Rise is revolutionizing programmatic digital advertising with AI-powered auction optimization, driving greater efficiency in real-time bidding (RTB). Our algorithms ensure partners maximize revenue by reaching the right audience at the right moment.
At Rise, we’re on a mission to redefine how digital advertising works, and we’re doing it at scale with innovative solutions trusted by some of the largest industry leaders and top publishers across CTV, in-app, and web.
What makes us different? It’s our people. We believe happy employees make for exceptional work, so we prioritize creating a workplace where you’ll feel inspired, valued, and excited to start your day.
We are looking for a Senior Big Data Developer to join our dynamic Data Infra Engineering team at Rise, a data science-driven ad-tech company. You will play a key role in designing and optimizing robust, scalable, and efficient big data infrastructure to support AI-driven decision-making and analytics. In this position, you’ll collaborate with cross-functional teams in a highly technical and fast-paced environment, working with modern cloud technologies to solve challenging problems and drive impactful results.
Responsibilities:
- Design, build, and maintain scalable, reliable data pipelines on AWS-based infrastructure (EMR, S3, Redshift, Glue, RDS), using Spark for processing, for Rise’s ad-serving platform.
- Develop and optimize real-time streaming data workflows using tools like Kafka, Kinesis, or MSK.
- Optimize data pipelines for performance, low latency, high throughput, and cost efficiency.
- Work closely with data engineers, data scientists, MLOps engineers, analysts, and other stakeholders to understand data needs and improve data processes.
- Automate workflows to ensure high performance in production and development environments.
- Monitor and maintain data systems, ensuring uptime, reliability, and stability.
Requirements:
- At least 3 years of experience as a Big Data Developer or in a similar data infrastructure role.
- Strong hands-on experience with AWS data services (EMR, S3, Glue) and Spark, as well as Google Cloud services (BigQuery, GCS, Dataflow, Pub/Sub).
- Experience with data processing frameworks such as Apache Spark and streaming tools like Kafka, Kinesis, Spark Streaming, Flink, or MSK for real-time data ingestion and transformation.
- Proficiency with Airflow for workflow orchestration and Python for building and optimizing data pipelines.
- Expertise in optimizing data pipelines for performance, cost efficiency, and scalability in cloud environments (AWS, GCP, etc.), including networking and VPC setups.
- Strong communication skills and the ability to collaborate effectively with cross-functional teams.
Advantages:
- Knowledge of the Go programming language
- Hands-on experience with Infrastructure-as-Code (IaC) tools such as Terraform, CloudFormation, or Pulumi
- Experience with MLOps methodologies and tools