NVIDIA Compute Architecture Software Engineer
Qualifications
• 5+ years of experience in software engineering, particularly in GPU programming and LLM inference.
• Strong proficiency in programming languages such as Python, C++, and CUDA.
• A solid understanding of deep learning frameworks and techniques.
• Outstanding problem-solving skills and the ability to work collaboratively in a team setting.
• An ambitious approach with a proven track record of taking initiative and delivering results.
• BS or above degree in Computer Science, Engineering, or a related field, or equivalent experience.

Widely considered to be one of the technology world's most desirable employers, NVIDIA offers highly competitive salaries and a comprehensive benefits package. As you plan your future, see what we can offer to you and your family: www.nvidiabenefits.com

NVIDIA is committed to fostering a diverse work environment and is proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status, or any other characteristic protected by law.
Responsibilities
• You will develop and optimize software solutions to accelerate LLM inference using GPU technology (see the sketch after this list).
• Collaborate closely with a world-class team of engineers to implement and refine GPU-based algorithms.
• Analyze and determine the most effective methods to improve performance, ensuring seamless execution across diverse computing environments.
• Engage in both individual and team projects, contributing to NVIDIA's mission of leading the AI revolution.
• Work in an empowering and inclusive environment to successfully implement groundbreaking AI solutions.
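To illustrate the kind of GPU work this role involves, below is a minimal, hypothetical CUDA sketch of a numerically stable row-wise softmax, a common building block in attention-based LLM inference. The kernel name, launch configuration, and layout assumptions are illustrative only and are not drawn from NVIDIA's codebase.

```cuda
// Hypothetical sketch: row-wise softmax, one block per row.
#include <cuda_runtime.h>
#include <cmath>

__global__ void row_softmax(const float* __restrict__ in,
                            float* __restrict__ out,
                            int cols) {
    extern __shared__ float shm[];
    const float* row_in  = in  + blockIdx.x * cols;
    float*       row_out = out + blockIdx.x * cols;

    // 1) Block-wide max reduction for numerical stability.
    float local_max = -INFINITY;
    for (int c = threadIdx.x; c < cols; c += blockDim.x)
        local_max = fmaxf(local_max, row_in[c]);
    shm[threadIdx.x] = local_max;
    __syncthreads();
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (threadIdx.x < s)
            shm[threadIdx.x] = fmaxf(shm[threadIdx.x], shm[threadIdx.x + s]);
        __syncthreads();
    }
    float row_max = shm[0];
    __syncthreads();

    // 2) Exponentiate and block-wide sum reduction.
    float local_sum = 0.f;
    for (int c = threadIdx.x; c < cols; c += blockDim.x) {
        float e = expf(row_in[c] - row_max);
        row_out[c] = e;              // stash exp() so input is read only once
        local_sum += e;
    }
    shm[threadIdx.x] = local_sum;
    __syncthreads();
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (threadIdx.x < s)
            shm[threadIdx.x] += shm[threadIdx.x + s];
        __syncthreads();
    }
    float inv_sum = 1.f / shm[0];

    // 3) Normalize.
    for (int c = threadIdx.x; c < cols; c += blockDim.x)
        row_out[c] *= inv_sum;
}

// Assumed launch: one block per row, 256 threads, shared memory for reductions.
// row_softmax<<<rows, 256, 256 * sizeof(float)>>>(d_in, d_out, cols);
```

In practice such kernels are typically fused with surrounding attention operations and use warp-level primitives rather than a plain shared-memory reduction; the sketch only shows the basic pattern.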
THE ROLE: The mission of the Principal Technical Lead is to orchestrate and elevate the quality, consistency, and competitiveness of AMD's GPU software ecosystem on Linux. This leader will bridge strategic objectives with technical execution across the ROCm stack and Linux driver portfolios (both packaged and inbox), ensuring a seamless, powerful, and reliable experience for developers, researchers, and enterprises choosing AMD for their accelerated computing needs.
KEY RESPONSIBILITIES:
Strategic Technical Leadership & SOW Definition
• Act as the central technical nexus between Product Management, Software Architecture, and engineering teams (kernel, ROCm, QA, support).
• Translate high-level product goals and market requirements into detailed, actionable, and prioritized Technical Statements of Work (SOWs) for the RSL AI validation team, ensuring validation plans are coherent, dependencies are managed, and resources are aligned to deliver on strategic commitments for both Radeon and Ryzen AI solutions.
Quality, Test & Process Optimization
• Own the definition and evolution of the product quality bar for AMD's Linux GPU software.
• Champion and drive the implementation of a robust, scalable, and automated CI/CD and test infrastructure across Native Linux, WSL, and various hardware platforms.
• Establish key performance indicators (KPIs) for software quality, release velocity, and regression rates; use data to drive continuous improvement in development and testing efficiency.
Unified User Experience & Competitive Analysis
• Define and monitor a holistic user experience (UX) scorecard encompassing installation, performance predictability, documentation, and debugging.
• Institute a formal, ongoing competitive analysis framework to benchmark the AMD software stack (ROCm + drivers) against key competitors across performance, feature parity, stability, and usability.
• Serve as the ultimate internal advocate for the end user, ensuring customer and community feedback is systematically integrated into the development lifecycle.
Linux Ecosystem & Driver Consistency
• Provide technical guidance and oversight to ensure flawless synchronization between the AMD packaged driver and the upstream Linux kernel (inbox) driver.
• Strengthen AMD's partnership with the Linux kernel community and major distributions (e.g., Canonical, Red Hat, SUSE).
• Drive a consistent and high-quality user experience regardless of the driver delivery channel (OS vendor vs. AMD.com).
• Writing highly tuned compute kernels to perform core deep learning operations (e.g., matrix multiplies, convolutions, normalizations); see the sketch after this list.
• Following general software engineering best practices, including support for regression testing and CI/CD flows.
• Collaborating with teams across NVIDIA:
  • the CUDA compiler team on generating optimal assembly code
  • the deep learning training and inference performance teams on which layers require optimization
  • the hardware and architecture teams on the programming model for new deep learning hardware features
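As a point of reference for the "highly tuned compute kernels" item above, here is a minimal, hypothetical CUDA sketch of a shared-memory tiled matrix multiply. It is a teaching-style example under simplifying assumptions (row-major layout, dimensions that are multiples of the tile size); production kernels would additionally use tensor cores, vectorized loads, and software pipelining.

```cuda
// Hypothetical sketch: C = A * B with shared-memory tiling.
// Assumes M, N, K are multiples of TILE; names and sizes are illustrative.
#include <cuda_runtime.h>

constexpr int TILE = 32;

__global__ void sgemm_tiled(const float* __restrict__ A,
                            const float* __restrict__ B,
                            float* __restrict__ C,
                            int M, int N, int K) {
    __shared__ float As[TILE][TILE];
    __shared__ float Bs[TILE][TILE];

    int row = blockIdx.y * TILE + threadIdx.y;   // row of C owned by this thread
    int col = blockIdx.x * TILE + threadIdx.x;   // column of C owned by this thread
    float acc = 0.f;

    // March over K one tile at a time, staging tiles of A and B in shared
    // memory so each global value is read only once per thread block.
    for (int k0 = 0; k0 < K; k0 += TILE) {
        As[threadIdx.y][threadIdx.x] = A[row * K + (k0 + threadIdx.x)];
        Bs[threadIdx.y][threadIdx.x] = B[(k0 + threadIdx.y) * N + col];
        __syncthreads();

        #pragma unroll
        for (int k = 0; k < TILE; ++k)
            acc += As[threadIdx.y][k] * Bs[k][threadIdx.x];
        __syncthreads();
    }
    C[row * N + col] = acc;
}

// Assumed launch for an M x K by K x N product (all multiples of TILE):
// dim3 block(TILE, TILE);
// dim3 grid(N / TILE, M / TILE);
// sgemm_tiled<<<grid, block>>>(d_A, d_B, d_C, M, N, K);
```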