Google Director, Product Management, Safety, Abuse and AI Risks

Job Details

Posted date: Feb 12, 2026

Location: Kirkland, WA

Level: Executive

Estimated salary: $382,500
Range: $320,000 - $445,000


Description

Define the product outlook and strategy for a cross-Workspace AI safety and risk platform that balances short-term needs to ship quickly with long-term goals to scale globally. Analyze threats and emerging AI capabilities to drive innovation, and deliver forward-thinking approaches to long-term opportunities, such as modernizing our threat prevention classifiers. Develop a comprehensive understanding of the company's product strategy and ensure the GenAI platform aligns with and enables that strategy. Own the cross-functional product development process from ideation through launch. Drive the team's rhythm of business by managing leadership review forums and tracking key company-level deliverables. Advocate for a culture of experimentation and data-driven decision-making to rapidly bring excellent GenAI experiences to market.

Google Workspace from Google Cloud is a smart, simple and secure family of productivity apps like Gmail, Docs, Drive, and Calendar. Designed for real-time collaboration, they simplify work and increase team productivity. With Google Workspace, information flows freely, so great ideas are never lost.

To safeguard the next generation of productivity, we established a unified Workspace Safety, Abuse, and AI Risk organization. While keeping users safe from spam and abuse, we are extending our role to Generative AI. As we integrate Generative AI, we focus on user trust, policy-aligned model behavior, and proactive threat mitigation. Bringing these critical functions together ensures unified prioritization of safety guardrails, robust resource management, and deeper alignment with our AI Foundations and Engineering partners.

As the Director of Product for Safety, Abuse, and AI Risk, Workspace, you will drive product strategy and inspire a talented team to build secure and trustworthy AI-powered productivity platforms. You will build a product team spanning multiple products and foundational capabilities, working with Engineering, Research, Legal, and Trust and Safety leads to define the safety architecture of our AI features.

The mission is to build cross-Workspace safety frameworks, develop advanced abuse-detection capabilities, and ensure product experiences are resilient against adversarial threats. You will work with global customers, regulatory bodies, and internal Go-to-Market (GTM) colleagues to understand the evolving digital safety landscape, translating complex risks into technical designs and robust product implementations.

Google Cloud accelerates every organization’s ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google’s cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems.

The US base salary range for this full-time position is $320,000-$445,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.

Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.



Qualifications

Minimum qualifications:

Bachelor's degree in Engineering, Computer Science, or a related technical field, or equivalent practical experience.
15 years of experience in product management.
5 years of experience defending high-usage internet platforms from abuse and safety risks.
3 years of experience using ML or AI to build user-facing experiences.

Preferred qualifications:

Master's degree in Engineering or a related field, or MBA.
Expertise leading teams toward business goals.
Ability to balance technical and business issues.
Effective leadership capabilities and the ability to motivate a team.
Excellent communication skills, with the ability to communicate with both technical and business experts.
