Google Novel AI Testing Analyst


Job Details

Posted date: Sep 04, 2025

Location: Seattle, WA

Level: Senior

Estimated salary: $133,500
Range: $110,000 - $157,000


Description

Drive the evolution of AI testing by inventing and scaling novel evaluation methodologies and reusable frameworks, and collaborate with engineering to enhance testing infrastructure. Take ownership of end-to-end evaluations for pioneering GenAI products, translating safety policies into concrete test protocols. Design sophisticated prompt strategies and conduct quantitative and qualitative analysis to identify systemic risks and inform launch decisions. Serve as a subject matter expert, offering consultation, driving alignment across Trust and Safety teams, and leading strategic programs to improve the overall testing ecosystem and advance AI safety practices across the organization.

Trust & Safety team members are tasked with identifying and taking on the biggest problems that challenge the safety and integrity of our products. They use technical know-how, excellent problem-solving skills, user insights, and proactive communication to protect users and our partners from abuse across Google products like Search, Maps, Gmail, and Google Ads. On this team, you're a big-picture thinker and strategic team-player with a passion for doing what’s right. You work globally and cross-functionally with Google engineers and product managers to identify and fight abuse and fraud cases at Google speed - with urgency. And you take pride in knowing that every day you are working hard to promote trust in Google and ensuring the highest levels of user safety.

In this role, you will ensure safety for Google’s newest and most advanced AI products, designing insightful evaluation strategies for technologies where standard testing protocols do not yet exist or apply. You will own structured pre-launch safety and neutrality testing end-to-end for Google's GenAI products, partnering with Trust and Safety experts to align on standards, develop sophisticated prompt strategies, and leverage data analysis to surface potential risks. You will manage multiple stakeholders through effective communication and streamline processes. You will help invent the future of testing itself, developing and scaling testing methodologies and partnering with engineering teams to design the novel infrastructure and tools required to automate and scale. You will be instrumental in shaping the future of responsible AI development, ensuring Google's products are safe and trustworthy for everyone.

At Google we work hard to earn our users’ trust every day. Trust & Safety is Google’s team of abuse fighting and user trust experts working daily to make the internet a safer place. We partner with teams across Google to deliver bold solutions in abuse areas such as malware, spam and account hijacking. A team of Analysts, Policy Specialists, Engineers, and Program Managers, we work to reduce risk and fight abuse across all of Google’s products, protecting our users, advertisers, and publishers across the globe in over 40 languages.

The US base salary range for this full-time position is $110,000-$157,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.

Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.



Qualifications

Minimum qualifications: Bachelor's degree or equivalent practical experience.

4 years of experience in trust and safety, product policy, privacy and security, legal, compliance, risk management, intel, content moderation, red teaming, AI testing, adversarial testing, or similar.

1 year of experience in data analytics or research, business process analysis, global program management, or leading cross-functional process improvements.

Preferred qualifications: Experience using data to provide solutions and recommendations, and working with multiple large datasets. Experience working with Google's products and services, particularly Generative AI products. Understanding of AI systems, machine learning, and their potential risks. Ability to anticipate and identify emerging threats and vulnerabilities. Excellent problem-solving and critical thinking skills with attention to detail in an ever-changing environment. Excellent communication and presentation skills and the ability to influence cross-functionally at various levels.

