NVIDIA Developer Technology Engineer – AI
Job Requirements
NVIDIA has been redefining computer graphics, PC gaming, and accelerated computing for more than 25 years. It’s a unique legacy of innovation that’s fueled by great technology—and amazing people. Today, we’re tapping into the unlimited potential of AI to define the next era of computing. An era in which our GPU acts as the brains of computers, robots, and self-driving cars that can understand the world. Doing what’s never been done before takes vision, innovation, and the world’s best talent. As an NVIDIAN, you’ll be immersed in a diverse, supportive environment where everyone is inspired to do their best work. Come join the team and see how you can make a lasting impact on the world.
What You'll Be Doing:
• Working directly with key application developers to understand the current and future problems they are solving, crafting and optimizing core parallel algorithms and data structures to provide the best solutions using GPUs, through both reference code development and direct contribution to the applications.
• Collaborating closely with diverse groups at NVIDIA such as the architecture, research, libraries, tools, and system software teams to influence the design of next-generation architectures, software platforms, and programming models, by investigating the impact on application performance and developer efficiency.
• Traveling from time to time for conferences and for on-site visits with developers.
What We Need To See:
• A BS…
Job Responsibilities
N/A
• Study and develop cutting-edge techniques in deep learning, graphs, machine learning, and data analytics, and perform in-depth analysis and optimization to ensure the best possible performance on current- and next-generation GPU architectures.
• Work directly with key customers to understand the current and future problems they are solving and provide the best AI solutions using GPUs.
• Collaborate closely with the architecture, research, libraries, tools, and system software teams at NVIDIA to influence the design of next-generation architectures, software platforms, and programming models.
• Working directly with key application developers (especially LLM developers) to understand the current and future problems they are solving, creating and optimizing core parallel algorithms and data structures to provide the best solutions using GPUs, through both library development and direct contribution to the applications. This includes training and inference optimization for large language models, with direct contributions to frameworks such as Megatron, TRTLLM, SGLang, and vLLM.
• Collaborating closely with the architecture, research, libraries, tools, and system software teams at NVIDIA to influence the design of next-generation architectures, software platforms, and programming models, including by investigating the impact on application performance and developer productivity.
• Engaging in deep optimization of high-performance operators, including but not limited to CUDA, instruction-level, and compiler optimization. These optimizations will directly support customers or be integrated into products such as cuDNN, cuBLAS, and CUTLASS.
• Traveling occasionally for conferences and for on-site visits with developers.
• Research and develop cutting-edge techniques in deep learning, machine learning, HPC (High-Performance Computing), graphs, and data analytics, and perform in-depth analysis and optimization to ensure the best performance on NVIDIA's current- and next-generation accelerated computing platforms, including GPU, CPU, and DPU.
• Work directly with key customers to understand the current and future problems they are solving and optimize their workloads to maximize performance on our platform.
• Collaborate closely with the architecture, research, libraries, tools, and system software teams at NVIDIA to design and develop next-generation architectures, software platforms, and programming models.