Senior Software Engineer - GPU
Microsoft
Multiple Locations, United States
Overview
The AI Frameworks team at Microsoft develops AI software that enables running AI models everywhere: from the world's fastest AI supercomputers to servers, desktops, mobile phones, Internet of Things (IoT) devices, and web browsers. We collaborate with our hardware teams and hardware partners to build software stacks for novel AI accelerators, and we work closely with machine learning researchers and developers to optimize and scale out model training and inference.
The team operates at the intersection of AI algorithmic innovation, purpose-built AI hardware, systems, and software. We own inference performance for OpenAI and other state-of-the-art Large Language Models (LLMs) and work directly with OpenAI on the models hosted on the Azure OpenAI service. These models serve some of the largest workloads on the planet, handling trillions of inferences per day across major Microsoft products, including Office, Windows, Bing, SQL Server, and Dynamics.
As a Senior Software Engineer - GPU on the team, you will work across multiple levels of the AI software stack, including the fundamental abstractions, programming models, compilers, runtimes, libraries, and APIs that enable large-scale training and inference of models. You will benchmark OpenAI and other LLMs on GPUs and Microsoft hardware, debug and optimize their performance, monitor performance in production, and enable these models to be deployed in the shortest possible time and on the least hardware possible, helping achieve Microsoft Azure's capex goals.
This is a technical role: it requires hands-on software design and development skills. We’re looking for someone who has a demonstrated history of solving hard technical problems and is motivated to tackle the hardest problems in building a full end-to-end AI stack.
By applying to this position, you are being considered for multiple similar positions within our organization for an invitation-only virtual Interview Day. Position specifics, including hiring team, location, and other details, will be determined following the interview process.
Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.
In alignment with our Microsoft values, we are committed to cultivating an inclusive work environment for all employees to positively impact our culture every day.
Qualifications
Required Qualifications:
- Bachelor's Degree in Computer Science or related technical field AND 4+ years of technical engineering experience coding in languages including, but not limited to, C, C++, or Python
- OR equivalent experience.
- 2+ years of practical experience working on high-performance applications and on performance debugging and optimization on CPUs/GPUs.
Other Requirements:
Ability to meet Microsoft, customer, and/or government security screening requirements is required for this role. These requirements include, but are not limited to, the following specialized security screenings:
- Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter.
Preferred Qualifications:
- Master's Degree in Computer Science or related technical field AND 6+ years of technical engineering experience coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python; OR Bachelor's Degree in Computer Science or related technical field AND 8+ years of technical engineering experience coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python
- OR equivalent experience.
- Technical background and solid foundation in software engineering principles, computer architecture, GPU architecture, and hardware neural network acceleration.
- Experience in end-to-end performance analysis and optimization of state-of-the-art LLMs and HPC applications, including proficiency with GPU profiling tools.
- Experience in DNN/LLM inference and with one or more deep learning frameworks such as PyTorch, TensorFlow, or ONNX Runtime, as well as familiarity with CUDA, ROCm, or Triton.
- Cross-team collaboration skills and the desire to collaborate in a team of researchers and developers.
- Experience working with orchestration platforms such as Kubernetes and Service Fabric.
- 2+ years of experience with deep learning and AI infrastructure, including diagnostic, profiling, and performance analysis tools.
Software Engineering IC4 - The typical base pay range for this role across the U.S. is USD $117,200 - $229,200 per year. A different range applies to specific work locations within the San Francisco Bay Area and New York City metropolitan area; the base pay range for this role in those locations is USD $153,600 - $250,200 per year.
Certain roles may be eligible for benefits and other compensation. Find additional benefits and pay information here: https://careers.microsoft.com/us/en/us-corporate-pay
Microsoft will accept applications and process offers for these roles on an ongoing basis.
#AIFX#
#AIPLATFORM#
#AIPLATREF#
#SHPE24MSFT#
Responsibilities
- Identify and drive improvements to end-to-end inference performance of OpenAI and other state-of-the-art LLMs
- Measure and benchmark performance on Nvidia/AMD GPUs and first-party Microsoft silicon
- Optimize and monitor the performance of LLMs, and build software tooling that surfaces performance opportunities from the model level down to the systems and silicon level, helping reduce the footprint of the computing fleet and achieve Azure AI capex goals
- Enable fast time to market for LLMs and their deployments at scale by building software tools that speed up porting models to new Nvidia and AMD GPUs and Maia silicon
- Design, implement, and test functions or components for our AI/DNN/LLM frameworks and tools
- Speed up and reduce the complexity of key components and pipelines to improve the performance and/or efficiency of our systems
- Communicate and collaborate with our partners, both internal and external
- Embody our Culture and Values