Microsoft Principal AI Security Researcher

Job Details

Posted date: Jan 30, 2026

Microsoft has posted 3 jobs with the title Principal AI Security Researcher all time.

Category: Security Research

Location: Multiple Locations

Estimated salary: $222,050
Range: $139,900 - $304,200

Employment type: Full-Time

Work location type: 0 days / week in-office – remote

Role: Individual Contributor


Description

Overview

Microsoft Sentinel Platform NEXT R&D labs is the strategic incubation engine behind the next generation of AI-native security products. We are looking to hire a Principal AI Security Researcher who thrives in a bottom-up, fast-paced, highly technical environment. The Sentinel Platform team will build cloud solutions at a scale few companies in the industry are required to support, leveraging state-of-the-art technologies to deliver holistic protection to a planet-scale user base.

Our team delivers life-changing innovations that protect millions of users and organizations by building the next generation of Artificial Intelligence (AI)-native security products. We pursue long-horizon bets while landing near-term impact, taking ideas from zero-to-one (0→1) prototypes to Minimum Viable Products (MVPs) and then to one-to-many (1→N) platform integration across Microsoft Defender, Sentinel, Entra, Intune, and Purview. Our culture blends ambition and scientific rigor with curiosity, humility, and customer obsession; we invest in new knowledge, collaborate with world-class scientists and engineers, and tackle the immense challenge of protecting millions of customers.

As a Principal AI Security Researcher, you will be the cybersecurity expert on our product-focused applied research and development (R&D) team, which focuses on artificial intelligence and machine learning and drives innovation from concept to production. You will work on a wide range of AI/ML challenges for cybersecurity, including, but not limited to, system design and evaluating our AI models' and systems' outputs, collaborating with world-class scientists and engineers to deliver robust, scalable, and responsible AI systems for security applications.

Responsibilities

- Security AI Research: Be the security expert on our AI-focused team: evaluate our systems on real data, improve system inputs, triage and investigate AI-based findings, leverage AI and security experience to incubate and transform our products, and educate applied scientists in cybersecurity.
- Collaboration: Partner with engineering, product, and research teams to translate scientific advances into robust, scalable, and production-ready solutions.
- AI/ML Research: Design, develop, and analyze novel AI and machine learning models and algorithms for security and enterprise-scale applications.
- Experimentation & Evaluation: Design and execute AI experiments, simulations, and evaluations to validate models and system performance, ensuring measurable improvements.
- Customer Impact: Engage with enterprise customers and field teams to co-design solutions, gather feedback, and iterate quickly based on real-world telemetry and outcomes.

Qualifications

Required/minimum qualifications

- Doctorate in Statistics, Mathematics, Computer Science, Computer Security, or related field AND 3+ years of experience in software development lifecycle, large-scale computing, threat analysis or modeling, cybersecurity, vulnerability research, and/or anomaly detection,
- OR Master's Degree in Statistics, Mathematics, Computer Science, Computer Security, or related field AND 4+ years of experience in software development lifecycle, large-scale computing, threat analysis or modeling, cybersecurity, vulnerability research, and/or anomaly detection,
- OR Bachelor's Degree in Statistics, Mathematics, Computer Science, Computer Security, or related field AND 6+ years of experience in software development lifecycle, large-scale computing, threat analysis or modeling, cybersecurity, vulnerability research, and/or anomaly detection,
- OR equivalent experience.

Other Requirements: Ability to meet Microsoft, customer, and/or government security screening requirements is required for this role. These requirements include, but are not limited to, the following specialized security screenings:

Microsoft Cloud Background Check: This position will be required to pass the Microsoft background and Microsoft Cloud background check upon hire/transfer and every two years thereafter.

Additional or preferred qualifications

- Doctorate in Statistics, Mathematics, Computer Science, Computer Security, or related field AND 5+ years of experience in software development lifecycle, large-scale computing, threat analysis or modeling, cybersecurity, vulnerability research, and/or anomaly detection, OR Master's Degree in Statistics, Mathematics, Computer Science, Computer Security, or related field AND 8+ years of experience in software development lifecycle, large-scale computing, threat analysis or modeling, cybersecurity, vulnerability research, and/or anomaly detection, OR Bachelor's Degree in Statistics, Mathematics, Computer Science, Computer Security, or related field AND 12+ years of experience in software development lifecycle, large-scale computing, threat analysis or modeling, cybersecurity, vulnerability research, and/or anomaly detection, OR equivalent experience.
- 5+ years of experience in cybersecurity, AI, software development lifecycle, large-scale computing, modeling, and/or anomaly detection.
- 5+ years of professional experience in security operations, pen-testing, researching cyber threats, and understanding attacker methodology, tools, and infrastructure.
- Demonstrated autonomy and success driving zero-to-one (0→1) initiatives.
- ML background and hands-on experience.
- Experience with the ML lifecycle: model training, fine-tuning, evaluation, continuous monitoring, and more.
- Coding ability in one or more languages (e.g., Python, C#, C++, Rust, JavaScript/TypeScript).
- Familiarity and previous work in the field of cybersecurity (e.g., threat detection/response, SIEM/SOAR, identity, endpoint, cloud security) and familiarity with analyst workflows.

#MSFT Security #SentinelPlatform

Security Research IC5 - The typical base pay range for this role across the U.S. is USD $139,900 - $274,800 per year. There is a different range applicable to specific work locations, within the San Francisco Bay area and New York City metropolitan area, and the base pay range for this role in those locations is USD $188,000 - $304,200 per year.

Certain roles may be eligible for benefits and other compensation. Find additional benefits and pay information here:

https://careers.microsoft.com/us/en/us-corporate-pay

This position will be open for a minimum of 5 days, with applications accepted on an ongoing basis until the position is filled.

Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance with religious accommodations and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
