NVIDIA AI Computing Performance Architect Intern, Perf Analysis and Kernel Dev - 2026
Requirements
• Pursuing a BS, MS, or PhD in a relevant discipline (CS, EE, CE).
• Strong software skills with C/C++, Python, MPI, OpenMP, etc.
• Solid computer science (CS) software and hardware architecture background.
• Experience with DL workload and operator performance is a plus.
• Familiarity with GPU computing and parallel programming models is a plus.
• Excellent oral and written communication skills.
• Good organizational, time management, and task prioritization skills.
N…
Responsibilities
• Design, develop, and optimize major LLM layers (e.g., attention, GEMM, inter-GPU communication) for NVIDIA's new architectures.
• Implement and fine-tune kernels to achieve optimal performance on NVIDIA GPUs.
• Conduct in-depth performance analysis of GPU kernels, including attention and other critical operations.
• Identify bottlenecks, optimize resource utilization, and improve throughput and power efficiency.
• Create and maintain workloads and micro-benchmark suites to evaluate kernel performance across various hardware and software configurations.
• Generate performance projections, comparisons, and detailed analysis reports for internal and external stakeholders.
• Collaborate with architecture, software, and product teams to guide the development of next-generation deep learning hardware and software.
NVIDIA is developing processor and system architectures that accelerate deep learning and high-performance computing applications. We are looking for a deep learning system performance architect intern to join our AI performance modeling, analysis, and optimization efforts. In this position, you will have the chance to work on DL performance modeling, analysis, and optimization on state-of-the-art hardware architectures for various LLM workloads, and to make your own contributions to our dynamic, technology-focused company.
What you'll be doing:
• Analyze state-of-the-art DL networks (LLMs, etc.), then identify and prototype performance opportunities to influence the software and architecture teams for NVIDIA's current and next-generation inference products.
• Develop analytical models of state-of-the-art deep learning networks and algorithms to drive innovation in processor and system architecture design for performance and efficiency.
• Specify hardware/software configurations and metrics to analyze performance, power, and accuracy in existing and future uniprocessor and multiprocessor configurations.
• Collaborate across the company to guide the direction of next-generation deep learning HW/SW by working with architecture, software, and product teams.
NVIDIA networking designs and manufactures high-performance networking equipment that enables the most powerful supercomputers in the largest data centers in the world. With distributed collections of NVIDIA GPUs interconnected by networking solutions such as InfiniBand, Ethernet, or RoCE (RDMA over Converged Ethernet), we make powerful ML/AI platforms possible. We are seeking motivated, personable, and independent individuals to join our team! We seek experienced embedded software engineers to help support our groundbreaking, innovative technologies that make AI workloads in large clusters even more performant. As a networking Sr. Solutions Architect at NVIDIA, you will have agency and a palpable effect on the business, working closely with customers and R&D teams.
What you'll be doing:
• Support networking technologies such as Spectrum-X, working with customers on their technical challenges and requirements for those technologies during pre-sales activities
• Develop proof-of-concept materials for innovative technologies for use by early adopters
• Gain customers' trust and understand their needs to help design and deploy groundbreaking NVIDIA networking platforms for AI and HPC workloads
• Address sophisticated and highly visible customer issues
• Work closely with R&D teams to develop new features for customers
• Help define product requirements alongside engineering and product marketing
NVIDIA is a leading AI computing company. At NVIDIA, our employees are passionate about AI, HPC, visual computing, and gaming. Our Solution Architect team focuses on bringing new NVIDIA technologies into different industries: we help design the architecture of AI computing platforms and analyze AI and HPC applications to deliver value to customers. This role will be instrumental in leveraging NVIDIA's cutting-edge technologies to optimize open-source and proprietary large models, create AI workflows, and support our customers in implementing advanced AI solutions.
What you'll be doing:
• Drive the implementation and deployment of NVIDIA Inference Microservice (NIM) solutions
• Use the NVIDIA NIM Factory Pipeline to package optimized models (including LLM, VLM, Retriever, CV, OCR, etc.) into containers that provide standardized API access
• Refine NIM tools for the community and help the community build their own performant NIMs
• Design and implement agentic AI tailored to customer business scenarios using NIMs
• Deliver technical projects, demos, and customer support tasks
• Provide technical support and guidance to customers, facilitating the adoption and implementation of NVIDIA technologies and products
• Collaborate with cross-functional teams to enhance and expand our AI solutions