Principal Applied Scientist
Microsoft
Multiple Locations, United States
Overview
We are seeking a Principal Applied Scientist to join our Autonomous Defense and Protection Team (ADAPT), focusing on advancing AI-driven capabilities to enable autonomous defense through collaboration between blue and red teams. In this role, you will leverage LLMs and agentic systems to design workflows that empower autonomous agents to tackle complex security scenarios.
Security is the most critical priority for our customers in a world awash in digital threats, regulatory scrutiny, and estate complexity. Microsoft Security aspires to make the world a safer place for all. We want to reshape security and empower every user, customer, and developer with a security cloud that protects them with end-to-end, simplified solutions. The Microsoft Security organization accelerates Microsoft’s mission and bold ambitions to ensure that our company and industry are securing digital technology platforms, devices, and clouds in our customers’ heterogeneous environments, as well as ensuring the security of our own internal estate. Our culture is centered on embracing a growth mindset, a theme of inspiring excellence, and encouraging teams and leaders to bring their best each day. In doing so, we create life-changing innovations that impact billions of lives around the world.
The Microsoft Security AI (Artificial Intelligence) Research team is responsible for defending Microsoft and our customers through applied AI innovation. Defending Microsoft’s complex environment provides a unique opportunity to build and evaluate autonomous defense using emerging generative AI capabilities. Microsoft understands and learns from its own defensive expertise, including via teams like the Microsoft Threat Intelligence Center (MSTIC), and has the opportunity to build a unique knowledge graph describing the relationships between risk, investigation, and response. This data, built over Microsoft’s complex digital estate, along with Microsoft AI, forms the foundation for innovative solutions to defend Microsoft.
Your responsibilities include processing structured and unstructured data, fine-tuning models for security tasks, creating systems for synthetic data generation, and partnering with applied research scientists to build a foundation for training and evaluating agentic capabilities. You will also collaborate with security researchers to integrate AI and security expertise, driving innovation and advancing autonomy in security operations.
Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.
Qualifications
Required/Minimum Qualifications
- Bachelor's Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 6+ years related experience (e.g., statistics, predictive analytics, research)
- OR Master's Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 4+ years related experience (e.g., statistics, predictive analytics, research)
- OR Doctorate in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 3+ years related experience (e.g., statistics, predictive analytics, research)
- OR equivalent experience.
- 6+ years of experience as a machine learning engineer, including designing and managing ML pipelines, building end-to-end systems from research ideas to functional MVPs, and prototyping solutions for real-world applications
- 4+ years of relevant industry experience driving cutting-edge research into real-world impact.
- 4+ years of experience in research areas such as generative AI, reinforcement learning, or similar machine learning techniques.
Other Requirements
- Ability to meet Microsoft, customer and/or government security screening requirements is required for this role. These requirements include, but are not limited to, the following specialized security screenings: Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter.
Additional or Preferred Qualifications
- Master's Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 9+ years related experience (e.g., statistics, predictive analytics, research)
- OR Doctorate in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 6+ years related experience (e.g., statistics, predictive analytics, research)
- OR equivalent experience.
- 5+ years experience creating publications (e.g., patents, libraries, peer-reviewed academic papers).
- 2+ years experience presenting at conferences or other events in the outside research/industry community as an invited speaker.
- 5+ years experience conducting research as part of a research program (in academic or industry settings).
- 3+ years experience developing and deploying live production systems, as part of a product team.
- 3+ years experience developing and deploying products or systems at multiple points in the product cycle from ideation to shipping
- 4+ years of experience with LLM-based agentic systems, unstructured data analysis using LLMs, and/or graph algorithms.
- 4+ years of experience in Python, PyTorch, TensorFlow, or other machine learning frameworks.
- Experience in Knowledge Graphs applied to security.
- Experience generating synthetic data and environments to train LLM-based AI agents.
- Experience in safety and ethical aspects of AI.
- Experience in technology transfer of applied research.
- Experience conducting high-quality research and publishing.
- Experience in working with large-scale datasets.
- Experience in applying machine learning to security and safety domains, such as malware detection, fraud prevention, or cyber-physical systems.
- Background in cyber security including knowledge of adversary tradecraft, emerging threats, or SOC operations.
Applied Sciences IC5 - The typical base pay range for this role across the U.S. is USD $137,600 - $267,000 per year. A different range applies in specific work locations; within the San Francisco Bay Area and New York City metropolitan area, the base pay range for this role is USD $180,400 - $294,000 per year.
Certain roles may be eligible for benefits and other compensation. Find additional benefits and pay information here: https://careers.microsoft.com/us/en/us-corporate-pay
Microsoft will accept applications for the role until January 12, 2025.
#MSFTSecurity #MSECAIR #Cybersecurity #SecurityResearch #LLM #AI #AIAgents #GenAI #MSecADAPT #RedTeam
Responsibilities
- Collaborate with security research teams and applied scientists to implement AI-driven techniques that address capability gaps and enable autonomy in security operations.
- Design and refine workflows leveraging LLMs and agentic systems to empower autonomous agents in analyzing and operationalizing insights from complex security scenarios.
- Build and optimize data pipelines to process structured and unstructured data, enabling context extraction and integration with other efforts to operationalize insights.
- Support the generation of synthetic data and simulation environments to train and evaluate agentic capabilities in real-world security contexts.
- Fine-tune and optimize machine learning models for security-specific applications, ensuring seamless integration into security workflows.
- Help define metrics and frameworks for evaluating autonomous agent capabilities, driving continuous improvement and alignment with organizational goals.