Microsoft Senior Security Researcher


Job Details

Posted date: Apr 02, 2025

There have been 14 jobs posted with the title of Senior Security Researcher all time at Microsoft.
There have been 14 Senior Security Researcher jobs posted in the last month.

Category: Research, Applied, & Data Sciences

Location: Redmond, WA

Estimated salary: $183,700
Range: $117,200 - $250,200

Employment type: Full-Time

Travel amount: 25.0%

Work location type: Up to 100% work from home

Role: Individual Contributor


Description

Security represents one of the most critical priorities for our customers in a world awash in digital threats, regulatory scrutiny, and estate complexity. Microsoft Security aspires to make the world a safer place for all. We want to reshape security and empower every user, customer, and developer with a security cloud that protects them with end-to-end, simplified solutions. The Microsoft Security organization accelerates Microsoft's mission and bold ambitions to ensure that our company and industry are securing digital technology platforms, devices, and clouds in our customers' heterogeneous environments, as well as ensuring the security of our own internal estate. Our culture is centered on embracing a growth mindset, a theme of inspiring excellence, and encouraging teams and leaders to bring their best each day. In doing so, we create life-changing innovations that impact billions of lives around the world.

Do you want to find responsible AI failures in Microsoft's largest AI systems impacting millions of users? Join Microsoft's AI Red Team, where you'll work alongside security experts to induce trust and safety failures in Microsoft's big bet AI systems. We are looking for a Senior AI Safety Researcher to work alongside experts and push the boundaries of AI Red Teaming. We are a fast-paced, interdisciplinary group of red teamers, adversarial Machine Learning (ML) researchers, and Responsible AI experts with the mission of proactively finding failures in Microsoft's big bet AI systems. Your work will impact Microsoft's AI portfolio, including the Phi series, Bing Copilot, Security Copilot, GitHub Copilot, Office Copilot, and Windows Copilot, and help keep Microsoft's customers safe and secure.

More about our approach to AI Red Teaming: https://www.microsoft.com/en-us/security/blog/2023/08/07/microsoft-ai-red-team-building-future-of-safer-ai/

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

The AI Red Team is looking for security researchers who can combine the development of cutting-edge attack techniques with the ability to deliver complex, time-limited operations as part of a diverse team. This includes the ability to manage several priorities at once, manage stakeholders, and communicate clearly with a range of audiences. In this role you will:

- Understand the products & services that the AI Red Team is testing, including the technology involved and the intended users, to develop plans to test them.
- Understand the risk landscape of AI Safety & Security, including cybersecurity threats, Responsible AI policies, and the evolving regulatory landscape, to develop new attack methodologies for these areas.
- Conduct operations against systems as part of a multi-disciplinary team, delivering against multiple priority areas within a set timeline.

As a Senior Security Researcher, you will:

- Discover and exploit GenAI vulnerabilities end-to-end in order to assess the safety of systems.
- Manage product group stakeholders as priority recipients and collaborators for operational sprints.
- Drive clarity on communication and reporting for red teaming peers when working with product groups, other AI Safety & Security ecosystem leads, and business decision makers.
- Develop methodologies and techniques to scale and accelerate AI Red Teaming.
- Collaborate with teams to influence measurement and mitigations of these vulnerabilities in AI systems.
- Research new and emerging threats to inform the organization.
- Work alongside traditional offensive security engineers, adversarial ML experts, and developers to land responsible AI operations.



Qualifications

Required/Minimum Qualifications:

- Bachelor's Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 4+ years related experience (e.g., statistics, predictive analytics, research), OR
- Master's Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 3+ years related experience (e.g., statistics, predictive analytics, research), OR
- Doctorate in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 1+ year(s) related experience (e.g., statistics, predictive analytics, research), OR
- Equivalent experience.

Other Requirements:

Ability to meet Microsoft, customer, and/or government security screening requirements is required for this role. These requirements include, but are not limited to, the following specialized security screenings: Microsoft Cloud Background Check. This position will be required to pass the Microsoft background check and Microsoft Cloud background check upon hire/transfer and every two years thereafter.

Additional or Preferred Qualifications:

- Master's Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 6+ years related experience (e.g., statistics, predictive analytics, research), OR
- Doctorate in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 3+ years related experience (e.g., statistics, predictive analytics, research), OR
- Equivalent experience.
- 3+ years experience creating publications (e.g., patents, libraries, peer-reviewed academic papers).
- Experience presenting at conferences or other events in the outside research/industry community as an invited speaker.
- 3+ years experience conducting research as part of a research program (in academic or industry settings).
- 1+ year(s) experience developing and deploying live production systems, as part of a product team.
- 1+ year(s) experience developing and deploying products or systems at multiple points in the product cycle, from ideation to shipping.
- Bio/Chem education and CBRN weapons background.
- Penetration testing qualifications (e.g., GPEN/GXPN, GWAPT, OSCP/OSCE, CRT/CCT/CCSAS); participation in bug bounties/CTFs.
- Experience using common penetration testing tools (e.g., Kali Linux, Burp Suite, Nmap, Nessus).
- Familiarity with one or more programming languages from: Python, C#, C/C++, PowerShell.
- Prior knowledge of Generative AI not required, but basic familiarity with AI or willingness to learn is desired.

Applied Sciences IC4 - The typical base pay range for this role across the U.S. is USD $117,200 - $229,200 per year. There is a different range applicable to specific work locations, within the San Francisco Bay area and New York City metropolitan area, and the base pay range for this role in those locations is USD $153,600 - $250,200 per year.

Certain roles may be eligible for benefits and other compensation. Find additional benefits and pay information here: https://careers.microsoft.com/us/en/us-corporate-pay

Microsoft will accept applications for the role until April 7, 2025.

#MSFTSecurity #MSFTNSBE25
