NVIDIA Perception Engineer - Autonomous Driving
Requirements
• Willingness to be assigned to different work at any time, taking on varied tasks and challenges
• Self-motivated attitude, drive to make things succeed, and eagerness to learn
• BS/MS degree in Computer Science/EE or a related field
• Proven fundamentals in C++/PyTorch programming, plus software design and debugging skills
• Strong knowledge of ML/DL techniques for computer vision and autonomous driving
• Knowledge of and experience with perception stacks for…
Responsibilities
• Work on the research, design, and implementation of software features that benefit our customers, helping them meet their performance targets and build unique value.
• Improve the autonomous driving perception SW stack and fix issues.
• Improve, evaluate, and track KPIs for autonomous driving perception DNNs.
• Triage and root-cause autonomous driving perception issues.
• Develop solutions for DNN model acceleration, optimization, and deployment.
Intelligent machines powered by artificial intelligence, computers that can learn, reason, and interact with people, are no longer science fiction. GPU deep learning has provided the foundation for machines to learn, perceive, reason, and solve problems. Today, NVIDIA's GPUs run deep learning algorithms, simulating human intelligence and acting as the brain of computers, robots, and self-driving cars that can perceive and understand the world.

We are now looking for an extraordinary Senior Perception Engineer to develop and productize NVIDIA's autonomous driving solutions. As a member of our perception team, you will build world-class 3D obstacle perception solutions based on multi-sensor fusion, including cameras, ultrasonic sensors, and radar, to estimate high-resolution reconstructions of the world, such as occupancy networks. The primary approach will be deep learning. You will be challenged to improve the robustness, accuracy, and efficiency of these solutions to fully enable autonomous driving anywhere, anytime.

What you'll be doing:
• Develop multi-sensor-fusion-based deep learning models for obstacle perception and fusion in complex driving environments.
• Conduct applied research and development of innovative deep learning and multi-sensor fusion algorithms to improve the output accuracy of 3D obstacle perception solutions under challenging and diverse scenarios, with a focus on high-resolution world reconstruction (e.g., occupancy networks).
• Identify and analyze the strengths and weaknesses of the developed 3D obstacle perception solutions using large-scale benchmark data (both real and synthetic), and improve them iteratively through KPI building and optimization. This includes careful data verification, model architecture design, detailed loss function engineering, and the ability to find subtle ML bugs and iterate toward perfection.
• Productize the developed 3D obstacle perception solutions by meeting product requirements for safety, latency, and SW robustness, with a strong emphasis on production deep learning model development.
• Drive and prioritize data-driven development by working with large data collection and labeling teams to bring in high-value data that improves perception system accuracy. This includes prioritizing and planning data collection and labeling so that the value of the data is maximized.
• Investigate and resolve sensor calibration and egomotion algorithm/toolchain issues across multiple OEM vehicle platforms.
• Develop core autonomous driving functionality for global markets by fusing state-of-the-art perception DNNs with map signals.
• Build real-time 3D world models for planning, integrating diverse inputs from sensors and external sources.
• Develop and optimize LLM, VLM, and VLA systems for autonomous driving applications, including pre-training and fine-tuning.
• Design innovative data generation and collection strategies to improve dataset diversity and quality.
• Collaborate with cross-functional teams to deploy end-to-end AI models in production, ensuring performance, safety, and reliability standards are met.
