NVIDIA Developer Technology Engineer – AI
Qualifications
• MS or PhD in engineering, computer science, or a related discipline.
• 2+ years of working experience.
• Strong knowledge of C/C++, software design, programming techniques, or AI algorith…
Responsibilities
• Research and develop cutting-edge techniques in deep learning, machine learning, HPC (High Performance Computing), graphs, and data analytics, and perform in-depth analysis and optimization to ensure the best performance on NVIDIA's current- and next-generation accelerated computing platforms, including GPUs, CPUs, and DPUs.
• Work directly with key customers to understand the current and future problems they are solving, and optimize their workloads to maximize performance on our platform.
• Collaborate closely with the architecture, research, libraries, tools, and system software teams at NVIDIA to design and develop next-generation architectures, software platforms, and programming models.
• Study and develop cutting-edge techniques in deep learning, graphs, machine learning, and data analytics, and perform in-depth analysis and optimization to ensure the best possible performance on current- and next-generation GPU architectures.
• Work directly with key customers to understand the current and future problems they are solving, and provide the best AI solutions using GPUs.
• Collaborate closely with the architecture, research, libraries, tools, and system software teams at NVIDIA to influence the design of next-generation architectures, software platforms, and programming models.
• Work directly with key application developers (especially in LLM) to understand the current and future problems they are solving, creating and optimizing core parallel algorithms and data structures to provide the best solutions using GPUs, through both library development and direct contribution to the applications. This includes training and inference optimization for large language models, with direct contributions to frameworks such as Megatron, TRTLLM, SGLang, vLLM...
• Collaborate closely with the architecture, research, libraries, tools, and system software teams at NVIDIA to influence the design of next-generation architectures, software platforms, and programming models, including by investigating their impact on application performance and developer productivity.
• Engage in deep optimization of high-performance operators, including but not limited to CUDA deep optimization and instruction and compiler optimization. These optimizations will directly support customers or be integrated into products like cuDNN, cuBLAS, and CUTLASS...
• Some travel is required for conferences and for on-site visits with developers.