Honeycomb
At Honeycomb, we’re not just building technology; we’re reshaping the future of insurance.
In 2025, Honeycomb was recognized by Dun & Bradstreet as one of the “Top 10 Best Startup Companies to Work For” in Israel and named to LinkedIn’s “Top 10 Startups in Chicago” list.
How did we earn these honors?
Honeycomb is a rapidly growing global startup, backed by top-tier investors and powered by an exceptional team of thinkers, builders, and problem-solvers. Dual-headquartered in Chicago and Tel Aviv (our R&D center), with five offices across the U.S., we are reinventing commercial real estate insurance, an industry long overdue for disruption. Just as importantly, we make sure every employee feels deeply connected to our mission and to one another.
With over $55B in insured assets, Honeycomb operates in 20+ major states, covering 60% of the U.S. population, and is continuing to expand its coverage.
If you’re looking for a place where innovation is celebrated, culture actually means something, and smart people challenge you to be better every day - Honeycomb might be exactly what you’ve been looking for.
About The Role
We’re hiring a Senior Machine Learning Platform Engineer to build and own the foundations that enable our data science teams to ship ML and GenAI systems to production. You’ll develop core capabilities such as data/feature pipelines, training and evaluation infrastructure, and model/agent serving.
What You’ll Do
- Build and own backend systems and APIs that support end-to-end ML workflows (data → training → deploy).
- Design for performance, scalability, reliability, and cost in production ML environments.
- Develop infrastructure for LLM/agentic workflows (tool execution, retrieval, evaluation, observability).
- Partner closely with data scientists to iterate on platform capabilities and developer experience.
- Drive innovation by continuously evaluating and integrating emerging infrastructure technologies that can give Honeycomb’s AI platform a competitive edge.
Basic Requirements
- 6+ years in ML infrastructure, data/backend, or platform engineering.
- Strong system design and API development skills.
- Experience with containerized environments such as Docker and Kubernetes.
- Proven experience architecting and deploying production ML/AI systems at scale.
- Strong background in Python and experience with large-scale systems and automation.
- Strong communication skills and ability to collaborate across engineering, data science, and product teams.
Nice to Have
- Production experience with vision models.
- GCP experience (especially BigQuery).
- Experience with distributed computing systems, preferably Dask.
- Experience with pipeline orchestration frameworks (Airflow, Prefect, or Dagster).