Job Details
Posted date: Mar 24, 2026
Location: Seattle, WA
Level: Director
Estimated salary: $253,500
Range: $207,000 - $300,000
Description
Lead the team's strategy, research, and direction to identify risks and threats in an evolving AI threat landscape. Translate insights into proactive mitigation goals that address novel abuse and attack vectors across product surfaces and capabilities. Provide technical leadership to scope and drive comprehensive, transparent, and scalable anti-abuse defenses for interconnected AI ecosystems. Serve as the team's thought leader, engaging with Engineering, Product, Policy, and Legal to deploy scalable, defensible mitigation processes for risks with Large Language Models (LLMs). Lead adversarial simulations, proactive assessments, and discovery programs to surface unknown attack vectors and risks in AI systems and next-generation capabilities. Architect novel testing frameworks to expose multi-stage vulnerabilities. Direct rapid investigation, mitigation, and response for high-severity AI abuse incidents, collaborating across product, research, and policy teams.
Our Security team works to create and maintain the safest operating environment for Google's users and developers. Security Engineers work with network equipment and actively monitor our systems for attacks and intrusions. In this role, you will also work with software engineers to proactively identify and fix security flaws and vulnerabilities.
Google's Trust and Safety (T&S) team is entrusted with an immense responsibility: making the internet safer for everyone. Within this group, our T&S Gemini and Labs team operates at the absolute frontier of technology. We are the stewards tasked with safety for Google's generative AI products and features, ensuring they are developed and deployed with the highest standards of safety and integrity.
As the Security Engineering Manager for T&S Gemini and Labs, you will be a thought leader and domain expert who informs the team's strategy and drives execution to combat novel Generative AI (GenAI) threats, providing technical leadership in cybersecurity, intelligence, and threat analysis. You will mentor and lead the team to grow at the critical intersection of AI research and real-world security harms, building the foundational defenses that prevent the misuse of generative models and agents. By pioneering threat detection and mitigation strategies, you will empower products to push the boundaries of AI innovation safely, ethically, and securely. You will not just participate in this mission; you will help lead it, setting the standard for the industry in addressing unprecedented safety issues at a global scale.
The US base salary range for this full-time position is $207,000-$300,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.
Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.
Qualifications
Minimum qualifications:
Bachelor's degree or equivalent practical experience.
8 years of experience with security engineering, computer and network security, and security protocols.
8 years of experience with security analysis, abuse detection, or threat modeling.
3 years of experience leading teams in a technical capacity or leading technical risk analysis in an enterprise environment.
Experience in people management.
Preferred qualifications:
Experience in applied vulnerability research, or advanced pen testing/red teaming/bug bounties.
Experience in analyzing systems and identifying security and abuse problems, threat modeling, and remediation.
Understanding of generative AI technologies, large language models (LLMs), and AI agents.
Ability to review or be exposed to sensitive or violative content as part of the core role.
Excellent problem-solving and critical thinking skills with attention to detail in an ever-changing environment.