Job Details
Posted date: Dec 24, 2025
Location: Seattle, WA
Level: Director
Estimated salary: $198,500
Range: $160,000 - $237,000
Description
Own the safety strategy for new GenAI features on Search, including defining youth-specific risks for new capabilities (e.g., image, video, agentic), analyzing and prioritizing emerging risks, driving testing, and prioritizing mitigations. Lead and effectively influence cross-functional teams (e.g., Product, Engineering, Responsible AI Testing, Research, Policy) to implement safety initiatives. Act as a key advisor to executive stakeholders (including T&S, Legal, and Product teams) on safety issues. Use technical judgment to develop testing requirements, analyze results, design mitigations, and drive post-launch monitoring. Operate with a 30,000-foot view while maintaining superb executional, organizational, and operational focus. Mentor analysts, fostering a culture of excellence and acting as a subject matter expert on agentic features, models, and industry standards. Perform on-call responsibilities on a rotating basis, including weekend coverage.
Trust and Safety team members are tasked with identifying and taking on the biggest problems that challenge the safety and integrity of our products. They use technical know-how, excellent problem-solving skills, user insights, and proactive communication to protect users and our partners from abuse across Google products like Search, Maps, Gmail, and Google Ads. On this team, you're a big-picture thinker and strategic team player with a passion for doing what's right. You work globally and cross-functionally with Google engineers and product managers to identify and fight abuse and fraud cases at Google speed - with urgency. And you take pride in knowing that every day you are working hard to promote trust in Google and ensure the highest levels of user safety.
The Kids and Learning Trust and Safety (T&S) team works alongside Product, Engineering, and Policy teams to proactively identify risks associated with evolving GenAI experiences on Search. The team detects harm patterns, develops Applied AI solutions to address novel trust problems, and defines industry best practices.
This is an exciting opportunity to be part of enabling safe access to GenAI experiences for global youth, which is a company-level priority.
In this role, you will leverage your critical thinking and leadership skills to analyze risks and opportunities presented by emerging features and models. You will synthesize perspectives to drive resolution and action on trust issues. You will be an influential leader, superb communicator, security expert, and exceptional executor who fosters a culture of excellence and teamwork.
At Google, we work hard to earn our users' trust every day. Trust and Safety is Google's team of abuse-fighting and user trust experts working daily to make the internet a safer place. We partner with teams across Google to deliver bold solutions in abuse areas such as malware, spam, and account hijacking. A team of Analysts, Policy Specialists, Engineers, and Program Managers, we work to reduce risk and fight abuse across all of Google's products, protecting our users, advertisers, and publishers across the globe in over 40 languages.
The US base salary range for this full-time position is $160,000-$237,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.
Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.
Qualifications
Minimum qualifications:
Bachelor's degree or equivalent practical experience.
10 years of experience in data analytics, trust and safety, policy, cybersecurity, business strategy, or related fields.
1 year of experience with AI safety and security, adversarial testing, or red teaming.
Experience with common LLM security vulnerabilities (e.g., prompt injection, jailbreaking, data exfiltration) and designing mitigation strategies.
Preferred qualifications:
Master's degree or PhD in a relevant field.
Experience in SQL, building dashboards, data collection/transformation, and visualization, or experience in a scripting/programming language (e.g., Python).
Experience working with engineering and product teams to create tools, solutions, or automation to improve user safety.
Experience with machine learning.
Experience working with data analytics, interpreting ML model performance metrics, and understanding technical architecture to identify safety gaps.
Excellent problem-solving and critical thinking skills with meticulous attention to detail in an ever-changing environment.