Job Details
Posted date: Jan 20, 2026
Title: Senior Analyst, Content Adversarial Red Team
Location: Seattle, WA
Level: Director
Estimated salary: $198,500
Range: $160,000 - $237,000
Description
Lead and guide the team's efforts in identifying and analyzing high-complexity content risks, with a special focus on the safety of users under 18, and influence cross-functional teams, including Product, Engineering, Research, and Policy, to drive the implementation of safety initiatives.
Develop and deploy tailored red teaming exercises that identify emerging, unanticipated, or unknown threats.
Drive the creation and refinement of net-new red teaming methodologies, strategies, and tactics to help build the U18 red teaming program and ensure coherence and consistency across all testing modalities.
Design, develop, and oversee the execution of innovative red teaming strategies to uncover content abuse risks.
Act as a key advisor to executive leadership on content safety issues, providing actionable insights and recommendations.
This role will be exposed to graphic, controversial, or upsetting content.
Trust and Safety team members are tasked with identifying and taking on the biggest problems that challenge the safety and integrity of our products. They use technical know-how, excellent problem-solving skills, user insights, and proactive communication to protect users and our partners from abuse across Google products like Search, Maps, Gmail, and Google Ads. On this team, you're a big-picture thinker and strategic team player with a passion for doing what's right. You work globally and cross-functionally with Google engineers and product managers to identify and fight abuse and fraud cases at Google speed, with urgency. And you take pride in knowing that every day you are working hard to promote trust in Google and ensure the highest levels of user safety.
The Content Adversarial Red Team (CART) within Trust and Safety conducts unstructured adversarial testing of Google’s premier generative AI products to uncover emerging content risks not identified in structured evaluations. CART works alongside product, policy, and enforcement teams to build the safest possible experiences for Google users.
In this role, you will develop and drive the team’s strategic plans while acting as a key advisor to executive leadership, leveraging cross-functional influence to advance safety initiatives. As a member of the team, you will mentor analysts and foster a culture of continuous learning by sharing your deep expertise in adversarial techniques. Additionally, you will represent Google’s AI safety efforts in external forums, collaborating with industry partners to develop best practices for responsible AI and solidifying our position as a thought leader in the field.
At Google we work hard to earn our users’ trust every day. Trust and Safety is Google’s team of abuse fighting and user trust experts working daily to make the internet a safer place. We partner with teams across Google to deliver bold solutions in abuse areas such as malware, spam and account hijacking. A team of Analysts, Policy Specialists, Engineers, and Program Managers, we work to reduce risk and fight abuse across all of Google’s products, protecting our users, advertisers, and publishers across the globe in over 40 languages.
The US base salary range for this full-time position is $160,000-$237,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.
Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.
Qualifications
Minimum qualifications:
Bachelor's degree or equivalent practical experience.
10 years of experience in data analytics, trust and safety, policy, cybersecurity, business strategy, or a related field.
Experience in Artificial Intelligence or Machine Learning.
Preferred qualifications:
Master's degree or PhD in a relevant field.
3 years of experience in red teaming, vulnerability research, or penetration testing.
Experience working with engineering and product teams to create tools, solutions, or automation to improve user safety.
Experience with machine learning.
Experience in SQL, data collection/transformation, building dashboards or visualizations, or a scripting/programming language (e.g., Python).
Excellent problem-solving and critical thinking skills, with attention to detail in an ever-changing environment.