About The Job
The Red Hat Engineering team is looking for a Senior Integration Engineer to join our Ecosystem engineering team. In this role, you will be part of a team responsible for designing and implementing the platform for enterprise companies and contributing to industry-leading technologies in the Kubernetes ecosystem. As a part of a geographically distributed team, you will collaborate with multiple Red Hat engineering teams and open source communities around the globe. To be successful in this role, you'll need to have motivation, curiosity, passion for problem-solving, and experience with Linux technologies and open source.
What You Will Do
- Play an active role applying Kubernetes and Red Hat OpenShift to customer use cases
- Work closely with partners and key customers to integrate their workloads on Red Hat’s platforms
- Develop and integrate changes in multiple projects using open source methodologies to provide end-to-end solutions
- Troubleshoot, analyze bug root causes, and provide resolutions
- Review designs, enhancement proposals, and patches from other team members and collaborators
- Work with various engineering teams across the organization to ensure products are developed and tested correctly
- Publish results, conclusions, recommendations, and best practices via web postings, documents, or conference talks to the support team, partners, and customers
What You Will Bring
- 5+ years of relevant technical experience
- Experience working with the Linux operating system (RHEL, Fedora, CentOS, or other)
- Advanced level of experience with Kubernetes
- Advanced level knowledge and experience in development in Go and Python
- Recent hands-on experience with distributed computation, either at the end-user or infrastructure provider level
- Experience with AI/ML technologies and frameworks (e.g., classifiers, PyTorch, TensorFlow)
- Technical leadership acumen in a global team environment
- Excellent written and verbal communication skills; fluent English language skills
The Following Are Considered a Plus
- Bachelor's degree in statistics, mathematics, computer science, operations research, or a related quantitative field, or equivalent expertise; Master’s or PhD is a big plus
- Experience in engineering, consulting or another field related to distributed model training or data processing in a customer environment or supporting a data science team
- Experience with container technologies (OpenShift, Kubernetes, Podman, Docker)
- Familiarity with popular Python machine learning tools such as PyTorch, TensorFlow, and Hugging Face