
Principal Security AI Software Engineer

Overview
Skills
  • ML ꞏ 10y
  • AI ꞏ 10y
  • Data Science ꞏ 10y
  • Software Engineering ꞏ 10y
  • Generative AI
  • ML Ops
  • Agentic Systems
  • LLM-based tools
Security is among the most critical priorities for our customers in a world awash in digital threats, regulatory scrutiny, and estate complexity. Microsoft Security aspires to make the world a safer place for all. We aim to reshape security and empower every user, customer, and developer with a security cloud that protects them with end-to-end, simplified solutions. The Microsoft Security organization accelerates Microsoft’s mission and bold ambitions to ensure that our company and industry are securing digital technology platforms, devices, and clouds in our customers’ heterogeneous environments, as well as ensuring the security of our own internal estate. Our culture is centered on embracing a growth mindset, a theme of inspiring excellence, and encouraging teams and leaders to bring their best every day. In doing so, we create life-changing innovations that impact billions of lives around the world.

As a Principal Applied AI Engineer on the AI Benchmarking team within Microsoft Security AI, you will play a critical role in shaping the performance of AI models and agentic systems across the entire range of Microsoft Security products and programs. Working at the intersection of cutting-edge AI research and real-world threat protection, you will design, build, and optimize benchmarks that drive the evaluation and improvement of tomorrow’s AI systems. Your contributions will ensure that Microsoft Security remains at the forefront of AI-driven defense, providing robust, adaptive, and trustworthy protection across complex digital environments. In this role, precision, efficiency, and strategic thinking are paramount, as you help drive innovation in security technology while maintaining the highest standards of reliability and integrity.

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

Responsibilities

  • Design and optimize Generative AI benchmarks and evaluations to support efficient model training, agentic system evaluation, and solution deployment within Microsoft Security AI.
  • Collaborate with product, research, and engineering teams to ensure seamless integration of evaluation datasets and test harnesses into AI-driven security solutions.
  • Curate scalable, reliable, and secure datasets that enable benchmarking and evaluation of core security tasks such as threat intelligence, anomaly detection, and penetration testing.
  • Ensure data integrity and reliability by implementing rigorous validation, monitoring, and governance practices.
  • Drive innovation in data engineering by exploring new technologies, architectures, and methodologies to improve AI security performance.

Qualifications

  • Bachelor’s or Master’s degree in Computer Science, Data Science, Engineering, or a related technical field
  • 10+ years of industry experience in applied AI/ML, data science, or software engineering roles
  • Strong hands-on programming skills
  • Proven experience designing or evaluating AI/ML systems and/or benchmarking pipelines
  • At least 2 years of experience in the security domain (e.g., threat detection, anomaly detection, SOC environments)
  • Familiarity with ML Ops practices – taking models from experimentation to production
  • Demonstrated ability to collaborate across research, product, and engineering teams
  • Experience with Generative AI, agentic systems, or LLM-based tools
  • Strong understanding of data quality, validation, and governance practices
  • Background in both AI and Security contexts, particularly where the two intersect
  • Growth mindset, strong sense of ownership, and ability to mentor junior team members

Preferred

  • Experience working with or building evaluation datasets, test harnesses, or performance metrics for AI systems
  • Familiarity with modern Generative AI benchmarks
  • Familiarity with AI security vulnerabilities
  • Academic publishing or prior contribution to research communities

Other Requirements

Ability to meet Microsoft, customer, and/or government security screening requirements is required for this role. These requirements include, but are not limited to, the following specialized security screenings:

Microsoft Cloud Background Check

This position will be required to pass the Microsoft background and Microsoft Cloud background check upon hire/transfer and every two years thereafter.

An understanding of secure execution environments for safe workload handling is also required, as the role may involve potential exposure to malware analysis.

#MSFTSecurity #MSECAI #SoftwareEngineering #AIEngineering #SecurityAI #CloudComputing #AzureML #DataScience #TestEval #ThreatProtection #TechCareers

Microsoft is an equal opportunity employer. Consistent with applicable law, all qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.