NVIDIA Deep Learning Senior Engineer, End-To-End Autonomous Driving
Job Requirements
At NVIDIA, we are seeking exceptional engineers to join our autonomous driving team to design, implement, and deploy cutting-edge end-to-end autonomous driving systems running on NVIDIA chips in mass-production vehicles. Our strategy has evolved from AI 1.0 (building a driver from scratch) to AI 2.0 (teaching an intelligent agent to drive). This next phase leverages LLMs, VLMs, and VLAs to bring unprecedented reasoning and planning capabilities, and interactivity with the driving system, to autonomous vehicles and general robotics. Let's build the future of autonomy together.

What You'll Be Doing:
• Design and train innovative large-scale models, including generative, imitation, and reinforcement learning, to enhance the planning and reasoning capabilities of our driving systems.
• Build, pre-train, and fine-tune LLM/VLM/VLA systems for deployment in real-world autonomous driving and robotics applications.
• Explore novel data generation and collection strategies to improve the diversity and quality of training datasets.
• Collaborate with cross-functional teams to deploy AI models in production environments, ensuring performance, safety, and reliability standards are met.
• Integrate machine learning models directly with vehicle firmware to deliver production-quality, safety-critical software.
…
Job Responsibilities
N/A
Intelligent machines powered by Artificial Intelligence, computers that can learn, reason, and interact with people, are no longer science fiction. Today, a self-driving car powered by AI can meander through a country road at night and find its way. An AI-powered robot can learn motor skills through trial and error. This is truly an extraordinary time: the era of AI has begun.

From image recognition to speech recognition, GPU Deep Learning has provided the foundation for machines to learn, perceive, reason, and solve problems. The GPU started out as the engine for simulating human creativity, conjuring up the amazing virtual worlds of video games and Hollywood films. Now, NVIDIA's GPU runs Deep Learning algorithms, simulating human intelligence, and acts as the brain of computers, robots, and self-driving cars that can perceive and understand the world. Just as human imagination and intelligence are linked, computer graphics and AI come together in our architecture: two modes of the human brain, two modes of the GPU. This may explain why NVIDIA GPUs are used broadly for Deep Learning, and why NVIDIA is increasingly known as "the AI computing company." Make the choice to join us today.

Our team builds NVIDIA's end-to-end autonomous driving application. We are seeking senior software engineers who are passionate about performance, with an interest in optimizing self-driving solutions that run on NVIDIA's multi-computer, heterogeneous hardware architectures.
What you'll be doing:
• Develop, maintain, and optimize the performance KPIs necessary to deliver NVIDIA's L2/L3/L4 autonomous driving solutions.
• Devise acceleration strategies and patterns to improve the software architecture and its efficiency on our computers with multiple heterogeneous hardware engines, while meeting or exceeding product goals.
• Develop highly efficient product code in C++, making use of the algorithmic parallelism offered by GPGPU programming (CUDA) and ARM NEON, while following quality and safety standards such as those defined by MISRA.
• Diagnose and fix performance issues reported on our target platform, including on-road and simulation.
We are now looking for an extraordinary Senior Perception Engineer to develop and productize NVIDIA's autonomous driving solutions. As a member of our perception team, you will build world-class 3D obstacle perception solutions based on multi-sensor fusion, including cameras, ultrasonic sensors, and radar, to estimate high-resolution reconstructions of the world, such as occupancy networks. The primary approach will be deep learning. You will be challenged to improve the robustness, accuracy, and efficiency of these solutions to fully enable autonomous driving anywhere and anytime.

What you'll be doing:
• Develop multi-sensor-fusion-based deep learning models for obstacle perception and fusion in complex driving environments.
• Conduct applied research and development of innovative deep learning and multi-sensor fusion algorithms to improve the output accuracy of 3D obstacle perception solutions under challenging and diverse scenarios, with a focus on high-resolution world reconstruction (e.g., occupancy networks).
• Identify and analyze the strengths and weaknesses of the developed 3D obstacle perception solutions using large-scale benchmark data (both real and synthetic), and improve them iteratively through KPI building and optimization. This includes careful data verification, model architecture design, understanding the details of loss function engineering, and finding detailed ML bugs while iterating toward perfection.
• Productize the developed 3D obstacle perception solutions by meeting product requirements for safety, latency, and software robustness, with a strong emphasis on production deep learning model development.
• Drive and prioritize data-driven development by working with large data collection and labeling teams to bring in high-value data that improves perception system accuracy. Efforts include prioritizing and planning data collection and labeling so that the value of the data is maximized.
• Investigate and resolve sensor calibration and egomotion algorithm/toolchain issues across multiple OEM vehicle platforms.
• Develop core autonomous driving functionality for global markets by fusing state-of-the-art perception DNNs with map signals.
• Build real-time 3D world models for planning, integrating diverse inputs from sensors and external sources.
• Develop and optimize LLM, VLM, and VLA systems for autonomous driving applications, including pre-training and fine-tuning.
• Design innovative data generation and collection strategies to improve dataset diversity and quality.
• Collaborate with cross-functional teams to deploy end-to-end AI models in production, ensuring performance, safety, and reliability standards are met.