
Deep Learning Compiler Engineer (MLIR/LLVM)

Skills
  • C++ ꞏ 5 years
  • Linux
  • High-performance computing
  • Parallel programming
  • Code generation
  • Compiler design
  • Dataflow graphs
  • Hardware accelerators
  • IR transformation
  • LLVM
  • Middle-end optimizations
  • MLIR
  • Optimization techniques
  • Performance analysis
Job Details

Job Description:

Join our compiler team and contribute to the development of an MLIR-based compiler that drives performance improvements on Intel deep learning accelerators.

This compiler delivers significant performance gains across Intel products and directly impacts cutting-edge deep learning workloads.

In This Role, You Will

  • Design and implement new optimizations within the MLIR and LLVM frameworks to enhance model-level performance for deep learning applications.
  • Collaborate with architecture and performance teams to identify and address bottlenecks in the compiler pipeline.
  • Engage with internal customers and developers to understand requirements and support model-level performance tuning.
  • Explore and prototype novel compilation techniques to improve hardware utilization and efficiency.

This position is expected to relocate to the Petah Tikva campus in the near future.

Qualifications

  • 5+ years of experience in C++ and familiarity with modern software development practices.
  • Background in high-performance computing or parallel programming.
  • Strong analytical and problem-solving skills.
  • Experience with development on Linux.

Advantage

  • Strong experience with compiler design and middle-end optimizations, preferably using MLIR and LLVM.
  • Experience with code generation, performance analysis, and tuning for hardware accelerators.
  • Knowledge of dataflow graphs, IR transformation, and optimization techniques.

Job Type

Experienced Hire

Shift

Shift 1 (Israel)

Primary Location:

Israel, Tel Aviv

Business Group

The Software Team drives customer value by enabling differentiated experiences through leadership AI technologies and foundational software stacks, products, and services. The group is responsible for developing the holistic strategy for client and data center software in collaboration with OSVs, ISVs, developers, partners and OEMs. The group delivers specialized NPU IP to enable the AI PC and GPU IP to support all of Intel's market segments. The group also has HW and SW engineering experts responsible for delivering IP, SOCs, runtimes, and platforms to support the CPU and GPU/accelerator roadmap, inclusive of integrated and discrete graphics.

Posting Statement

All qualified applicants will receive consideration for employment without regard to race, color, religion, religious creed, sex, national origin, ancestry, age, physical or mental disability, medical condition, genetic information, military and veteran status, marital status, pregnancy, gender, gender expression, gender identity, sexual orientation, or any other characteristic protected by local law, regulation, or ordinance.

Position of Trust

N/A

Work Model for this Role

This role will be eligible for our hybrid work model, which allows employees to split their time between working on-site at their assigned Intel site and off-site.

* Job posting details (such as work model, location, or time type) are subject to change.