Rise offers fully programmatic media solutions for publishers, empowering them to make informed business decisions through advanced data-driven solutions and AI models. Our platform integrates seamlessly with their content and enhances the user experience. With advertising capabilities tailored to publishers' needs, we enable them to maximize both revenue and profit.
Responsibilities:
- Design and build efficient, high-quality cloud architectures to meet organizational needs.
- Manage, monitor, and scale our distributed, highly available SaaS platform to ensure reliability and performance.
- Work closely with engineering teams to ensure their applications are scalable and performant.
- Develop innovative solutions to complex problems, implementing them independently or in collaboration with other teams.
- Debug production issues across all tiers of a massive distributed system, spanning dozens of services.
- Continuously work to enhance our core platform architecture.
- Embed with various teams to promote best practices related to reliability, scalability, and observability.
- Actively pursue knowledge and skills in engineering best practices to stay current in the field.
Requirements:
- Minimum of 2 years of experience as a DevOps Engineer.
- At least 1 year of production experience with Kubernetes.
- Proficiency in a programming language such as Golang or Python.
- Experience with monitoring tools such as CloudWatch, Datadog, or Grafana.
- Hands-on experience with cloud infrastructure platforms such as AWS, GCP, etc.
- Proficient in using IaC tools like Terraform, Pulumi, CloudFormation, etc.
- Strong knowledge of CI/CD systems such as GitHub Actions, Jenkins, CircleCI, etc.
- A strong desire to learn and grow while working on a highly performant ad-serving platform.
- Ability to clearly communicate in both technical and non-technical settings.
- Hands-on experience with Linux operating systems, Bash scripting, and computer networking.
Advantages:
- Advanced knowledge and experience with Pulumi.
- Experience in building and managing data pipelines.
- Proficiency in using Apache Airflow for workflow automation.
- Hands-on experience with AWS EMR and Apache Spark for big data processing.
- Knowledge of Google BigQuery for data warehousing and analytics.