
AMD AI Software Development Engineer

Experienced hire · Full-time · Engineering · Location: Beijing · Status: Hiring

Requirements


If you are passionate about AI/ML frameworks, software architecture, and/or compilers, this is your opportunity. You will work in one of the core areas, such as AI/ML frameworks (e.g. PyTorch, TensorFlow, ONNX/OnnxRuntime), AI runtime components, and/or optimization tooling to accelerate AI/ML workloads on AMD hardware platforms. You will collaborate closely with AI researchers to drive the development of framework components that efficiently map AI models onto a variety of hardware AI accelerators. You will have a demonstrated focus in at least one of these areas: developing and deploying model optimization features such as graph fusion, quantization, and/or sparsity; profiling and accelerating workloads on accelerators such as GPUs or NPUs; and AI execution runtimes. You should be able to work in a dynamic, fast-paced development environment, with excellent leadership and collaboration skills. You will work with multiple engineering teams that are geographically dispersed, and you will work on next-generation framework software, guiding other senior developers and domain experts.

REQUIRED EXPERIENCE:
- Development experience in one of the focus areas: AI frameworks, AI runtime stacks, and/or performance tuning and optimization for workloads running on ML accelerator hardware.
- Experience with ML frameworks such as PyTorch, OnnxRuntime, JAX, or TensorFlow.
- Proficiency in C++ programming.
- Experience developing and debugging in Python.
- Excellent skills in designing Python tools and libraries used by a large number of users.
- Experience with AI model architectures, e.g. Transformers, CNNs.
- Team player, ready to work with a geographically distributed team.
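To give a flavor of one of the model-optimization features named above, here is a minimal sketch of symmetric per-tensor int8 quantization in plain Python. The function names and scheme are illustrative assumptions, not any specific framework's API:

```python
# Symmetric per-tensor int8 quantization: map floats to [-128, 127]
# codes using a single scale derived from the largest magnitude.
# Names (quantize_int8, dequantize) are hypothetical, for illustration only.

def quantize_int8(values):
    """Quantize a list of floats to int8 codes plus a shared scale."""
    max_abs = max(abs(v) for v in values)
    scale = max_abs / 127.0 if max_abs else 1.0
    codes = [max(-128, min(127, round(v / scale))) for v in values]
    return codes, scale

def dequantize(codes, scale):
    """Recover approximate float values from int8 codes."""
    return [c * scale for c in codes]

weights = [0.5, -1.27, 0.0, 1.0]
codes, scale = quantize_int8(weights)        # codes == [50, -127, 0, 100]
approx = dequantize(codes, scale)            # each within one scale step
```

Production quantizers add per-channel scales, zero points for asymmetric ranges, and calibration, but the core idea is this float-to-integer mapping.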

Responsibilities


THE ROLE: AMD is looking for a world-class AI frameworks engineer who can provide technical leadership in the development of various AI frameworks in the AMD ecosystem. You will drive technical direction for next-generation frameworks for AI model training and inference across a wide variety of current and future AMD devices, such as Instinct MI-series and Radeon GPUs; XDNA devices, including the recently released Ryzen AI, Alveo V70, and Versal ACAP; and datacenter CPUs such as EPYC. You will enhance AI framework capabilities to enable cutting-edge models on AMD's latest hardware.
English materials included

Tags: Development frameworks, Kernels, FineTuning, PyTorch, TensorFlow, ONNX, JAX, C++, Python
Related positions

AMD · Experienced hire · Engineering

THE ROLE: Triton is a language and compiler for writing highly efficient custom deep learning primitives. It is widely adopted in open AI software stack projects such as PyTorch, vLLM, SGLang, and many others. AMD GPU is an official backend in Triton, and we are fully committed to it. If you are interested in making GPUs run fast by developing the Triton compiler and kernels, please come join us!

Updated 2025-10-06
AMD · Experienced hire · Engineering

Key Responsibilities: Develop and integrate hardware kernels and contribute to NPU runtime development. Collaborate with the hardware team to identify and resolve functional and performance issues. Lead the end-to-end deployment and optimization of LLM and Stable Diffusion models. Work closely with customers to support the development of new features and performance improvements, ensuring timely delivery.

Updated 2025-10-06
AMD · Experienced hire · Engineering

THE ROLE: MTS Software Development Engineer on teams building and optimizing deep learning applications and AI frameworks for AMD GPU compute platforms. Work as part of an AMD development team and the open-source community to analyze, develop, test, and deploy improvements that make AMD the best platform for machine learning applications.

THE PERSON: Strong technical and analytical skills in C++ development in a Linux environment. Able to work as part of a team while also working independently: defining goals and scope and leading your own development effort.

KEY RESPONSIBILITIES:
- Optimize deep learning frameworks: enhance and optimize frameworks like TensorFlow and PyTorch for AMD GPUs in open-source repositories.
- Develop GPU kernels: create and optimize GPU kernels to maximize performance for specific AI operations.
- Develop and optimize models: design and optimize deep learning models specifically for AMD GPU performance.
- Collaborate with GPU library teams: work closely with internal teams to analyze and improve training and inference performance on AMD GPUs.
- Collaborate with open-source maintainers: engage with framework maintainers to ensure code changes are aligned with requirements and integrated upstream.
- Work in distributed computing environments: optimize deep learning performance on both scale-up (multi-GPU) and scale-out (multi-node) systems.
- Use cutting-edge compiler technology: leverage advanced compiler technologies to improve deep learning performance.
- Optimize the deep learning pipeline: enhance the full pipeline, including integrating graph compilers.
- Apply software engineering best practices to ensure robust, maintainable solutions.
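As a toy illustration of the kind of graph-level optimization a graph compiler performs (operator fusion, mentioned in the posting above), here is a minimal sketch. The list-of-ops IR and function names are invented for illustration and do not correspond to any real framework:

```python
# Toy "graph" = ordered list of (op_name, constant) elementwise stages.
# Collapsing consecutive multiply stages into one is a much-simplified
# analogue of the operator fusion done by real graph compilers.

def fuse_muls(graph):
    """Fuse adjacent ('mul', k) stages into a single stage."""
    fused = []
    for op, k in graph:
        if op == "mul" and fused and fused[-1][0] == "mul":
            fused[-1] = ("mul", fused[-1][1] * k)  # combine constants
        else:
            fused.append((op, k))
    return fused

def run(graph, x):
    """Interpret the toy graph on a scalar input."""
    for op, k in graph:
        if op == "mul":
            x = x * k
        elif op == "add":
            x = x + k
    return x

g = [("mul", 2), ("mul", 3), ("add", 1)]
assert fuse_muls(g) == [("mul", 6), ("add", 1)]
assert run(fuse_muls(g), 5) == run(g, 5)  # same result, fewer stages
```

Real fusion operates on tensor op graphs and fuses across kernel launches to cut memory traffic, but the correctness contract is the same: the fused graph must compute what the original did.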

Updated 2025-09-17
NVIDIA · Experienced hire

• Craft and develop robust inference software that scales across multiple platforms for functionality and performance
• Performance analysis, optimization, and tuning
• Closely follow academic developments in the field of artificial intelligence and update TensorRT-LLM features accordingly
• Provide feedback into the architecture and hardware design and development
• Collaborate across the company to guide the direction of machine learning inference, working with software, research, and product teams
• Publish key results at scientific conferences

Updated 2025-05-19