Staff Software Engineer, Machine Learning Performance (Google)
Job posting number: #154578 (Ref:112427511035372230)
Job Description
Qualifications
Minimum qualifications:
- Bachelor's degree or equivalent practical experience.
- 8 years of experience in software development and with data structures/algorithms.
- 5 years of experience testing and launching software products, and 3 years of experience with software design and architecture.
- 5 years of experience with machine learning algorithms and tools (e.g., TensorFlow), artificial intelligence, deep learning, or natural language processing.
Preferred qualifications:
- Experience in performance analysis and optimization, including system architecture, performance modeling, or similar.
- Experience working in a complex, matrixed organization involving cross-functional or cross-business projects.
- Experience in a technical leadership role leading project teams and setting technical direction.
- Experience in distributed development and large-scale data processing.
- Experience in compiler optimizations or related fields.
Description
Google's software engineers develop the next-generation technologies that change how billions of users connect, explore, and interact with information and one another. Our products need to handle information at massive scale, and extend well beyond web search. We're looking for engineers who bring fresh ideas from all areas, including information retrieval, distributed computing, large-scale system design, networking and data storage, security, artificial intelligence, natural language processing, UI design and mobile; the list goes on and is growing every day. As a software engineer, you will work on a specific project critical to Google’s needs with opportunities to switch teams and projects as you and our fast-paced business grow and evolve. We need our engineers to be versatile, display leadership qualities and be enthusiastic to take on new problems across the full-stack as we continue to push technology forward.
The TPU Performance team is responsible for extracting maximum efficiency from AI/ML training workloads. We drive Google Machine Learning performance through deep fleet-scale benchmark analysis and out-of-the-box auto-optimizations.
We focus on performance analysis to identify performance opportunities in Google production and research Machine Learning (ML) workloads, and we land optimizations across the entire fleet. Our work demonstrates Machine Learning performance at large scale on the latest accelerators, and we push efficiency on multi-pod Machine Learning models.
Google Cloud accelerates every organization’s ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google’s cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems.
Responsibilities
- Focus on Large Language Model (Google DeepMind Gemini, Bard, Search Magi, Cloud LLM APIs) performance analysis and optimizations.
- Identify and maintain Large Language Model (LLM) training and serving benchmarks that are representative of Google production, industry, and the Machine Learning community; use them to identify performance opportunities, drive TensorFlow/JAX TPU out-of-the-box performance, and gate TF/JAX releases.
- Engage with Google Product teams to solve their LLM performance problems, such as onboarding new LLM models and products on Google's new TPU hardware and enabling LLMs to train efficiently at very large scale (i.e., thousands of TPUs).
- Explore model/data efficiency techniques, such as new ML model architectures, optimizers, or training techniques that solve an ML task more efficiently, and new techniques that reduce the labeled/unlabeled ML data needed to train a model to a target accuracy.