GPU ML Engineer (apple)
apple Waltham, United States
2024-10-27
Job posting number: #153508 (Ref:apl-200572765)
Job Description
Summary
Apple’s Compute Frameworks team in the GPU, Graphics and Displays organization provides a suite of high-performance data-parallel algorithms for developers inside and outside Apple across iOS, macOS and Apple TV. Our efforts are currently focused on the key areas of linear algebra, image processing, and machine learning, along with other projects of key interest to Apple. We are always looking for exceptionally dedicated individuals to grow our outstanding team.
Description
Our team is seeking extraordinary machine learning and GPU programming engineers who are passionate about providing robust compute solutions for accelerating machine learning networks on Apple Silicon.
This role offers the opportunity to influence the design of compute and programming models in next-generation GPU architectures.
Responsibilities:
Add optimizations to the machine learning computation graph.
Define and implement APIs in Metal Performance Shaders Graph, and investigate new algorithms.
Develop, maintain, and optimize ML training acceleration technologies.
Support adoption of GPU-accelerated training across first- and third-party clients.
Tune GPU-accelerated training across products.
Perform in-depth analysis and compiler- and kernel-level optimizations to ensure the best possible performance across hardware families.
Intended deliverables:
ML training GPU acceleration technology.
Optimized ML training across products.
If this sounds interesting, we would love to hear from you!
Minimum Qualifications
- Proven programming and problem-solving skills.
- Good understanding of machine learning fundamentals.
- Experience with GPU compute programming models and optimization techniques.
- Experience with GPU compute framework development, maintenance, and optimization.
- Machine learning development using one or more ML frameworks (TensorFlow, PyTorch or JAX).
Preferred Qualifications
- Experience adding computational graph support, runtimes, or device backends to machine learning libraries (TensorFlow, PyTorch, or JAX).
- Experience with high performance parallel programming, GPU programming or LLVM/MLIR compiler infrastructure.
- Experience with system level programming and computer architecture.
- Background in mathematics, including linear algebra and numerical methods.