Job ID: 207262
Required Travel: Minimal
Managerial: No
Location: Israel - Raanana (Amdocs Site)
Who are we?
Amdocs helps those who build the future to make it amazing. With our market-leading portfolio of software products and services, we unlock our customers’ innovative potential, empowering them to provide next-generation communication and media experiences for both the individual end user and enterprise customers. Our employees around the globe are here to accelerate service providers’ migration to the cloud, enable them to differentiate in the 5G era, and digitalize and automate their operations. Listed on the NASDAQ Global Select Market, Amdocs had revenue of $5.00 billion in fiscal 2024. For more information, visit www.amdocs.com
At Amdocs, our mission is to empower our employees to 'Live Amazing, Do Amazing' every day. We believe in creating a workplace where you not only excel professionally but also thrive personally. Through our culture of making a real impact, fostering growth, embracing flexibility, and building connections, we enable you to live a meaningful life while making a difference in the world.
In one sentence
The Amdocs Data and AI platform team is looking for a Software/ML Engineer specialist to join us in Raanana, Israel. In this role, you will join the Data Science & MLOps team, which develops features for our product on the Databricks platform, where Spark is at the heart of our implementation.
You will work end-to-end: from data preparation and feature engineering, through ML/LLM modeling, to production-grade deployment, CI/CD, and monitoring.
You’ll need excellent technical skills, strong communication, and a strong sense of ownership.
We are a team that values open discussion, where every voice counts, and we are open to adopting new technologies when they make sense.
What will your job look like?
- Develop production-grade ML/LLM services and pipelines on Databricks, using Spark.
- Design, implement, and maintain reusable preprocessing and feature engineering components.
- Industrialize the ML lifecycle (experiment tracking, model packaging, deployment, monitoring) and improve reliability, performance, and cost.
- Build ML pipelines and model deployments using Jenkins and Databricks Asset Bundles.
- Work closely with data scientists and engineers to translate research into scalable, maintainable production code.
- Own features from design through production, including writing tests, documenting, and providing operational support.
- Come to the office 3 times a week.
All you need is...
- Mandatory - Python development specialist with at least 5 years of experience.
- Mandatory - Spark development specialist with at least 3 years of experience.
- Mandatory - Solid software engineering practices: clean code, testing, packaging, debugging, and performance profiling.
- Mandatory - Experience implementing and operating ML pipelines in production (CI/CD, automation, monitoring, rollbacks).
- Mandatory - Experience with LLM solutions: prompt engineering, RAG pipelines, fine-tuning, evaluation, and/or agent workflows.
- Mandatory - At least 2 years of experience working with Linux.
Considered a Plus
- Hands-on experience with Databricks (jobs/workflows), Spark optimization, and operating data/ML pipelines at scale.
- Experience with Databricks Asset Bundles and production deployment patterns.
- Experience with Jenkins pipelines and infrastructure-as-code mindset for repeatable deployments.
- Experience with MLflow (tracking/model registry) and/or model serving patterns (batch, real-time).
Why You Will Love This Job
- You will be challenged to design and develop new software applications.
- You will have the opportunity to work in a growing organization, with ever-expanding opportunities for personal growth.
Amdocs is an equal opportunity employer. We welcome applicants from all backgrounds and are committed to fostering a diverse and inclusive workforce.