New job, posted less than a week ago!
Job Details
Posted date: May 13, 2026
Location: Kirkland, WA
Level: Director
Estimated salary: $253,500
Range: $207,000 - $300,000
Description
Identify and maintain LLM training and serving benchmarks, and use them to find performance opportunities and drive XLA:GPU/Triton performance improvements into XLA releases. Engage with teams such as DeepMind to solve challenging ML model performance problems. Run architecture-level simulations of GPU designs and perform roofline analysis to guide partner teams. Analyze performance and efficiency metrics to identify bottlenecks, then design and implement solutions at Google fleet-wide scale. Run performance benchmarks on GPU hardware using internal and external tools such as TRT-LLM, vLLM, and SGLang.
Google Cloud's mission is to make every business successful through AI by combining cutting-edge technology, infrastructure, and talent. AI/ML software engineers in Cloud bridge the gap between pioneering models and a massive product vehicle reaching billions. Our talent density and AI-powered tools drive rapid development, rooted in a culture of empowerment and a bias to action. In this role, you aren't just building technology; you're shaping the frontier of enterprise AI and driving the evolution of advanced models.
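Roofline analysis, mentioned above, bounds a kernel's attainable throughput by the lesser of peak compute and memory bandwidth times arithmetic intensity. A minimal sketch of the idea (the hardware figures below are illustrative placeholders, not tied to any particular GPU):

```python
# Minimal roofline model: attainable FLOP/s is capped either by peak
# compute or by memory bandwidth times arithmetic intensity (FLOPs per
# byte moved). The hardware figures here are hypothetical examples.

PEAK_FLOPS = 100e12   # 100 TFLOP/s peak compute (illustrative)
PEAK_BW = 2e12        # 2 TB/s peak memory bandwidth (illustrative)

def attainable_flops(arithmetic_intensity: float) -> float:
    """Upper bound on throughput for a kernel with the given
    arithmetic intensity (FLOPs per byte of memory traffic)."""
    return min(PEAK_FLOPS, PEAK_BW * arithmetic_intensity)

# The "ridge point" is the intensity at which a kernel stops being
# memory-bound and becomes compute-bound.
ridge = PEAK_FLOPS / PEAK_BW  # 50 FLOPs/byte with these numbers

# A low-intensity kernel (e.g. elementwise add, ~0.1 FLOPs/byte)
# is memory-bound, far below peak compute:
memory_bound = attainable_flops(0.1)    # 0.2 TFLOP/s

# A high-intensity kernel (e.g. a large matmul) hits the compute roof:
compute_bound = attainable_flops(200.0)  # capped at 100 TFLOP/s
```

Plotting attainable FLOP/s against intensity on log-log axes gives the familiar roofline: a bandwidth-limited slope meeting a flat compute ceiling at the ridge point, which shows at a glance whether a kernel needs more data reuse or more compute efficiency.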
Google is known for pioneering work with TPUs, but GPUs are an equally vital and rapidly expanding frontier within Google's machine learning infrastructure. GPUs are indispensable across Google's diverse and ever-evolving landscape for strategic, pragmatic, and performance-driven reasons: they ensure top performance for our machine learning (ML) models, adapt to evolving ML workloads, and let us influence next-generation GPU architectures through partnerships.
In recognition of hardware as a strength, Google's Core ML organization is heavily invested in growing a powerhouse team of GPU experts, and we invite you to be at its vanguard.
In this role, you will have the opportunity to move beyond incremental improvements and architect transformative solutions, shaping the future of AI and accelerated computing for Google and the world.
The AI and Infrastructure team is redefining what's possible. We empower Google customers with breakthrough capabilities and insights by delivering AI and Infrastructure at unparalleled scale, efficiency, reliability, and velocity. Our customers include Googlers, Google Cloud customers, and billions of Google users worldwide.
We're the driving force behind Google's groundbreaking innovations, empowering the development of our cutting-edge AI models, delivering unparalleled computing power to global services, and providing the essential platforms that enable developers to build the future. From software to hardware, our teams are shaping the future of world-leading hyperscale computing, with key teams working on the development of our TPUs, Vertex AI for Google Cloud, Google Global Networking, Data Center operations, systems research, and much more.
The US base salary range for this full-time position is $207,000-$300,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.
Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.
Qualifications
Minimum qualifications:
Bachelor's degree or equivalent practical experience.
8 years of experience in software development.
5 years of experience testing and launching software products, and 3 years of experience with software design and architecture.
Experience with modern GPU architectures (NVIDIA, AMD, or other AI accelerators), memory hierarchies, and performance bottlenecks.
Experience with modern LLMs and their deployment on AI accelerators.
Experience with low-level GPU programming (CUDA, Triton, CUTLASS, etc.) and performance engineering techniques.
Preferred qualifications:
Master's degree or PhD in Engineering, Computer Science, or a related technical field.
8 years of experience with data structures and algorithms.
3 years of experience in a technical leadership role, leading project teams and setting technical direction.
3 years of experience working in a structured organization on cross-functional or cross-business projects.
Experience with compiler optimization, code generation, and runtime systems for GPU architectures (OpenXLA, MLIR, Triton, etc.).