New job, posted less than a week ago!
Job Details
Posted date: Dec 04, 2025
There have been 2 jobs posted with the title of Senior Product Manager - AI Safety and Security all time at Microsoft.There have been 2 Senior Product Manager - AI Safety and Security jobs posted in the last month.
Category: Product Management
Location: Redmond, WA
Estimated salary: $188,900
Range: $119,800 - $258,000
Employment type: Full-Time
Work location type: 0 days / week in-office – remote
Role: Individual Contributor
Description
OverviewSecurity represents the most critical priorities for our customers in a world awash in digital threats, regulatory scrutiny, and estate complexity. Microsoft Security aspires to make the world a safer place for all. We want to reshape security and empower every user, customer, and developer with a security cloud that protects them with end to end, simplified solutions. The Microsoft Security organization accelerates Microsoft’s mission and bold ambitions to ensure that our company and industry is securing digital technology platforms, devices, and clouds in our customers’ heterogeneous environments, as well as ensuring the security of our own internal estate. Our culture is centered on embracing a growth mindset, a theme of inspiring excellence, and encouraging teams and leaders to bring their best each day. In doing so, we create life-changing innovations that impact billions of lives around the world.
Artificial Intelligence (AI) has the potential to change the world around us. At Microsoft, we are committed to the advancement of AI driven by ethical principles. We are looking for an experienced Senior Product Manager - AI Safety and Security to join a high-impact team that sits at the intersection of Cybersecurity and Generative AI. As a Senior Product Manager, you will lead the development of internal platform capabilities that help to secure Microsoft’s flagship Generative AI and Agentic AI products through detecting threat activity and producing threat intelligence. You will partner closely with our core development team including engineers, data science, and applied science, and you will seek to deeply understand customer needs—our customers range from AI Incident Response and Threat Hunters to Security Researchers and engineers. You will also collaborate with stakeholders in Responsible AI, policy, legal, privacy and compliance. As Senior Product Manager on the team, you will operate independently within a defined product area and collaborate with other PMs and software engineers (SWEs) to deliver capabilities. For this role, you will need a technical background, a deep understanding of security use cases in generative AI, and the ability to collaboratively innovate to protect and secure Microsoft and our customers. Are you passionate about the safety and security of AI and how that intersects with our lives? Do you think critically about how adversaries exploit AI and do you obsess about things like cross-prompt injection attacks? Do you dream about making developers’ lives easier? This may be a great opportunity for you!
More about our team:
We are the Artificial Generative Intelligence Security (AeGIS) team, and we are charged with ensuring justified confidence in the safety and security of Microsoft’s generative AI products. This encompasses providing an infrastructure for AI safety and security; serving as a coordination point for all things AI incident response; researching the quickly evolving threat landscape; red teaming AI systems for failures; and empowering Microsoft with this knowledge. We partner closely with product engineering teams to mitigate and address the full range of threats that face AI services – from traditional security risks to novel security threats like indirect prompt injection and entirely AI-native threats like the manufacture of sexual exploitation and abuse material (SEAM) or deep fake production or the use of AI to run automated scams. We are a mission-driven team intent on delivering trustworthy AI and response processes when it does not live up to those standards. We are always learning. Insatiably curious. We lean into uncertainty, take risks, and learn quickly from our mistakes. We build on each other’s ideas, because we are better together. We are motivated every day to empower others to do and achieve more through our technology and innovation. Together we make a difference for all of our customers, from end users to Fortune 50 enterprises. Our team has people from a wide variety of backgrounds, previous work histories, and life experiences, and we are eager to maintain and grow that diversity. Our diversity of backgrounds and experiences enables us to create innovative solutions for our customers. Our culture is collaborative and customer-focused.
Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.
Responsibilities
Logging and Telemetry: Contribute to roadmap definition and own execution for logging and detection features, including helping to define what must be logged to reconstruct attacks and build reliable detections; understand architecture and pipelines for existing logging, data storage, and observability systems; and identify what additional infrastructure should be built. Help to develop policy and engineering standards.Detection: Contribute to roadmap planning and own feature execution for advanced detection capabilities, including building infrastructure for anomaly detection, scaling attack pattern identification, and enabling signature-based threat hunting. Continuously update detection signals as attackers evolve new techniques.Threat Intelligence: Help shape feature requirements and deliver capabilities that meet diverse stakeholder needs. Partner with data science to design pipelines for aggregating and correlating multi-source signals. Deliver actionable insights, trend analyses, and automated reporting on malicious activity, integrated into detection and response workflows.Mitigations: Partner with the AI red team and security research to turn new attack techniques into prioritized product capabilities for mitigation and detection.User Discovery: Conduct user discovery to deeply understand end-to-end workflows, pain points, and priorities. Translate customer needs into clear use cases and Product Requirements Documents (PRDs). Work closely with engineering to assess technical tradeoffs and feasibility.Customer Awareness/Top of Funnel: Support adoption and onboarding for teams using the platform.Define and track success metrics, and commensurately make product improvements and changes based on metrics. Stay at the forefront of the AI threat landscape by following research, model evolution, real-world AI cyberattacks, and changes to frameworks and standards (NIST, MITRE, OWASP, etc.).Collaborate with adjacent teams to identify integration opportunities.Embody our Culture and Values
Qualifications
Required Qualifications:
Bachelor's Degree AND 5+ years experience in product/service/project/program management or software development OR equivalent experience.Preferred Qualifications:
7+ years driving complex platform, data, or security products end-to-end, including discovery, prioritization, and launch.3+ years of experience leading ambiguous product areas, defining requirements, developing roadmaps, and working with multi-disciplinary teams to execute them.Demonstrated experience in cybersecurity (SIEM, SOAR, XDR/EDR, cloud security, log/observability platforms, threat detection, security research, or similar).Demonstrated understanding of vulnerabilities and mitigations in AI systems.Ability to drive results in ambiguous environments, maintain strong attention to detail, and collaborate effectively across a large organization.Deep understanding of LLM-based systems—prompts, system instructions, agents/tools, RAG, embeddings—and experience in leading execution to build and/or secure AI copilots or agent-based products.Familiarity with advanced concepts in AI safety, such as metacognition and mechanistic interpretability. Familiarity with large-scale telemetry systems (data lakes, streaming pipelines, etc.).Experience with cloud-native environments (Azure preferred), Kubernetes, and modern data/LLM/ML stacks.Exceptional written and verbal skills; adept at articulating business needs and driving alignment across engineering, research, and security teams. #MSFT Security #MSECAI #AI #RAI #Safety #Security #MSECAI #AEGIS #AIIR #AISP
Product Management IC4 - The typical base pay range for this role across the U.S. is USD $119,800 - $234,700 per year. There is a different range applicable to specific work locations, within the San Francisco Bay area and New York City metropolitan area, and the base pay range for this role in those locations is USD $158,400 - $258,000 per year.
Certain roles may be eligible for benefits and other compensation. Find additional benefits and pay information here:
https://careers.microsoft.com/us/en/us-corporate-pay
This position will be open for a minimum of 5 days, with applications accepted on an ongoing basis until the position is filled.
Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance with religious accommodations and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
Check out other jobs at Microsoft.