AMDGPU Kernel Development Engineer
Qualifications
Highly skilled engineer with strong technical and analytical expertise in C++ development in Linux environments. The ideal candidate thrives in both collaborative team settings and independent work, with the ability to define goals, manage development efforts, and deliver high-quality solutions. Strong problem-solving skills, a proactive approach, and a keen understanding of software engineering best practices are essential.
Responsibilities
THE ROLE: As a core member of the team, you will play a pivotal role in optimizing and developing deep learning frameworks for AMD GPUs. Your strong experience will be critical in enhancing GPU kernels, deep learning models, and training/inference performance across multi-GPU and multi-node systems. You will engage with both internal GPU library teams and open-source maintainers to ensure seamless integration of optimizations, utilizing cutting-edge compiler technologies and advanced engineering principles to drive continuous improvement.
THE ROLE: We are seeking a talented Machine Learning Kernel Developer to design, develop, and optimize low-level machine learning kernels for AMD GPUs using the ROCm software stack. In this role, you will work on high-impact projects to accelerate AI frameworks and libraries, with a focus on emerging technologies like Large Language Models (LLMs) and other generative AI workloads.

THE PERSON: The ideal candidate will have hands-on experience with GPU programming (ROCm or CUDA) and a passion for pushing the boundaries of AI performance.

KEY RESPONSIBILITIES:
- Design and implement highly optimized ML kernels (e.g., matrix operations, attention mechanisms) for AMD GPUs using ROCm.
- Profile, debug, and tune kernel performance to maximize hardware utilization for AI workloads.
- Collaborate with ML researchers and framework developers to integrate kernels into AI frameworks (e.g., PyTorch, TensorFlow) and inference engines (e.g., vLLM, SGLang).
- Contribute to the ROCm software stack by identifying and resolving bottlenecks in libraries like MIOpen, HIP, or Composable Kernel.
- Stay current on the latest AI/ML trends (LLMs, quantization, distributed inference) and apply them to kernel development.
- Document and communicate technical designs, benchmarks, and best practices.
- Troubleshoot and resolve issues related to GPU compatibility, performance, and scalability.

REQUIRED EXPERIENCE:
- 2+ years of experience in GPU kernel development for machine learning (ROCm or CUDA).
- Proficiency in C/C++ and Python, with experience in performance-critical programming.
- Strong understanding of ML frameworks (PyTorch, TensorFlow) and GPU-accelerated libraries.
- Basic knowledge of modern AI technologies (LLMs, transformers, inference optimization).
- Familiarity with parallel computing, memory optimization, and hardware architectures.
- Problem-solving skills and the ability to work in a fast-paced environment.
THE ROLE: MTS Software Development Engineer on teams building and optimizing deep learning applications and AI frameworks for AMD GPU compute platforms. Work as part of an AMD development team and the open-source community to analyze, develop, test, and deploy improvements that make AMD the best platform for machine learning applications.

THE PERSON: Strong technical and analytical skills in C++ development in a Linux environment. Able to work as part of a team while also working independently, defining goals and scope and leading your own development effort.

KEY RESPONSIBILITIES:
- Optimize Deep Learning Frameworks: Enhance and optimize frameworks like TensorFlow and PyTorch for AMD GPUs in open-source repositories.
- Develop GPU Kernels: Create and optimize GPU kernels to maximize performance for specific AI operations.
- Develop & Optimize Models: Design and optimize deep learning models specifically for AMD GPU performance.
- Collaborate with GPU Library Teams: Work closely with internal teams to analyze and improve training and inference performance on AMD GPUs.
- Collaborate with Open-Source Maintainers: Engage with framework maintainers to ensure code changes are aligned with requirements and integrated upstream.
- Work in Distributed Computing Environments: Optimize deep learning performance on both scale-up (multi-GPU) and scale-out (multi-node) systems.
- Utilize Cutting-Edge Compiler Tech: Leverage advanced compiler technologies to improve deep learning performance.
- Optimize the Deep Learning Pipeline: Enhance the full pipeline, including integrating graph compilers.
- Apply Software Engineering Best Practices: Apply sound engineering principles to ensure robust, maintainable solutions.