ASML Data Engineer
Job Requirements
Introduction to the job
ASML Customer Support (CS) Diagnostics is at the core of ASML's ambition to significantly reduce diagnostic labor hours, improve system availability, and enable predictive and self-healing service capabilities towards 2030. The Data Engineer will play a key role in building, scaling, and operationalizing AI-driven diagnostics, observability, and predictive maintenance solutions. This role goes beyond tooling or automation: you will own the full lifecycle of data and AI solutions that directly impact diagnostic accuracy, MTTR, MTBF, and service efficiency. You will work at the intersection of machine data, diagnostics domain knowledge, and advanced analytics, collaborating closely with CS Diagnostics, Field, D&E, and central platform teams.

Role and responsibilities

AI, Analytics & Model Ownership
- Design, develop, deploy, and maintain machine learning and deep learning models for Predictive Maintenance (PdM), Fault Detection & Classification, root-cause identification, and observability improvement.
- Own the end-to-end model lifecycle: problem definition and data exploration, feature engineering and model development, validation, deployment, monitoring, and retraining.
- Continuously improve model performance based on field feedback, diagnostic outcomes, and new data availability.

Data Engineering & Platform Development
- Design and implement scalable, cloud-native data pipelines to ingest, transform, and provision large volumes of structured and unstructured machine data (a minimal sketch follows this section).
- Work with platforms such as Azure, Databricks, Spark, and Kusto to ensure reliable, performant, and secure data access.
- Ensure data quality, traceability, and reproducibility for downstream analytics and AI applications.
- Enable early access to data through proof-of-concept pipelines, while ensuring a smooth transition to production-grade solutions.

Diagnostics Domain Enablement
- Improve observability through machine data by identifying gaps, defining required signals, and translating diagnostic needs into data and model …
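To give a flavor of the pipeline work this posting describes, here is a minimal PySpark sketch of a batch ingest-and-provision step. It is illustrative only: the storage paths, column names, and table layout are hypothetical placeholders, not ASML's actual data model, and it assumes a Spark cluster with access to Azure storage.

```python
# Minimal sketch of a batch ingestion/transform step on Spark, in the spirit
# of the cloud-native pipelines described above. All paths and column names
# are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("machine-data-ingest").getOrCreate()

# Ingest raw machine events (assumed to land as Parquet in cloud storage).
raw = spark.read.parquet("abfss://landing@account.dfs.core.windows.net/machine_events/")

# Basic quality gates: drop records missing a machine id or timestamp, and
# deduplicate on the event key for reproducible downstream features.
clean = (
    raw.dropna(subset=["machine_id", "event_ts"])
       .dropDuplicates(["machine_id", "event_ts", "signal_name"])
)

# Provision a curated table partitioned by date for analytics and model training.
(clean.withColumn("event_date", F.to_date("event_ts"))
      .write.mode("overwrite")
      .partitionBy("event_date")
      .parquet("abfss://curated@account.dfs.core.windows.net/machine_events/"))
```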
Job Responsibilities
N/A
Design and build cloud-based data warehouses to deliver efficient analytical and reporting capabilities for Apple's global and regional sales and finance teams. Develop highly scalable data pipelines to ingest and process data from multiple source systems, leveraging Apache Airflow for workflow orchestration, scheduling, and monitoring (see the sketch below). Architect generic, reusable solutions that adhere to data warehousing best practices while addressing complex business requirements. Analyze and optimize existing systems, providing improvements and ongoing support as needed. Uphold the highest standards of data integrity and software quality, ensuring reliable and accurate outputs. We are looking for a proactive self-starter who takes initiative, learns fast, and works well across teams. Join our growing team where no two days are the same - solving tough technical challenges and business problems in a fast-paced environment.
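As an illustration of the Airflow orchestration mentioned above, here is a minimal DAG sketch. It assumes Airflow 2.x; the DAG id, schedule, and task callables are hypothetical examples, not Apple's actual pipelines.

```python
# Minimal Airflow 2.x DAG sketch: two dependent tasks on a daily schedule.
# The DAG id, schedule, and callables are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_sales():
    """Placeholder: pull data from a source system."""


def load_warehouse():
    """Placeholder: load curated data into the warehouse."""


with DAG(
    dag_id="sales_daily_load",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_sales", python_callable=extract_sales)
    load = PythonOperator(task_id="load_warehouse", python_callable=load_warehouse)
    extract >> load  # load runs only after extraction succeeds
```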
The Role: We are looking for a Data Engineer to join our Data Analytics team. This person will design, develop, maintain, and support our Enterprise Data Warehouse and our Manufacturing and Supply Chain intelligence solutions within Tesla using various data and AI/BI tools. The position offers a unique opportunity to make a significant impact across the organization by developing data tools and applying AI to the manufacturing and supply chain lifecycle.
Responsibilities:
- Work in a time-constrained environment to analyze, design, develop, and deliver Enterprise Data Warehouse solutions for Supply Chain and Enterprise teams.
- Initiate and generalize BI solutions across systems used by factories in different regions globally.
- Break business pain points down into executable IT solutions.
- Set up, maintain, and optimize the big data platform for production use in reporting and analysis applications.
- Establish scalable, efficient, automated processes for data analysis, model development, validation, and implementation.
- Create ETL pipelines using Spark/Flink.
- Create real-time data streaming and processing using technologies such as Kafka and Spark (see the sketch after this list).
- Develop collaborative relationships with key business sponsors and IT resources for the efficient resolution of work requests.
- Provide timely and accurate estimates for newly proposed functionality enhancements, especially in critical situations.
- Develop, enforce, and recommend enhancements to applications in the areas of standards, methodologies, compliance, and quality assurance practices; participate in design and code walkthroughs.
Minimum
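A minimal sketch of the Kafka-plus-Spark streaming pattern named in the list above, using Spark Structured Streaming. The broker address, topic, and output paths are hypothetical placeholders, and the job assumes the Spark Kafka connector package is available on the cluster.

```python
# Minimal sketch: read a Kafka topic with Spark Structured Streaming and
# persist it. Broker, topic, and paths are hypothetical placeholders; the
# spark-sql-kafka connector is assumed to be on the classpath.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("factory-telemetry-stream").getOrCreate()

# Subscribe to a Kafka topic of factory telemetry events.
stream = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "factory-telemetry")
         .load()
)

# Kafka delivers key/value as binary; decode the value payload for processing.
events = stream.select(F.col("value").cast("string").alias("payload"))

# Persist the decoded stream; checkpointing makes the job restartable.
query = (
    events.writeStream.format("parquet")
          .option("path", "/data/streams/factory_telemetry")
          .option("checkpointLocation", "/data/checkpoints/factory_telemetry")
          .start()
)
query.awaitTermination()
```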
- Work with global teams to enable reliable data for GCR business operations while following strict security compliance requirements, and build a data foundation that lets GCR users self-serve their use cases
- Build and enhance data platforms that let end users easily and securely access data and insights by leveraging AI, AWS services, and open-source services
- Collaborate with product managers and SDE team members to design and implement data products that meet business requirements and deliver measurable value
- Implement robust data quality monitoring, validation frameworks, and governance practices while optimizing compute solutions for performance and cost efficiency (a minimal sketch follows this list)
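A framework-agnostic sketch of the data quality validation named in the last item above: declarative rules run over rows, with violation counts reported. The rules and field names are hypothetical examples, not any team's actual checks.

```python
# Minimal, framework-agnostic sketch of rule-based data quality validation.
# The checks and field names are hypothetical examples.
from dataclasses import dataclass
from typing import Callable, Iterable


@dataclass
class Check:
    name: str
    predicate: Callable[[dict], bool]


CHECKS = [
    Check("order_id present", lambda row: bool(row.get("order_id"))),
    Check("amount non-negative", lambda row: row.get("amount", 0) >= 0),
]


def validate(rows: Iterable[dict]) -> dict:
    """Count rule violations per check; a real framework would also alert."""
    failures = {check.name: 0 for check in CHECKS}
    total = 0
    for row in rows:
        total += 1
        for check in CHECKS:
            if not check.predicate(row):
                failures[check.name] += 1
    return {"rows": total, "failures": failures}


if __name__ == "__main__":
    sample = [{"order_id": "A1", "amount": 10.0}, {"order_id": "", "amount": -5}]
    print(validate(sample))  # reports one violation per check
```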
1. Design, develop, and maintain scalable data pipelines to support ML model development and production deployment.
2. Implement and maintain CI/CD pipelines for data and ML solutions (a minimal test sketch follows this list).
3. Collaborate with data scientists and other team members to understand data requirements and implement efficient data processing solutions.
4. Create and manage data warehouses and data lakes, ensuring proper data governance and security measures are in place.
5. Collaborate with product managers and business stakeholders to understand data needs and translate them into technical requirements.
6. Stay current with emerging technologies and best practices in data engineering, and propose innovative solutions to improve data infrastructure and processes for ML models and analytics applications.
7. Participate in code reviews and contribute to the development of best practices for data engineering within the team.
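One common building block of the CI/CD practice in item 2 is unit-testing pipeline transforms so a regression fails the build. A minimal pytest-style sketch, with a hypothetical feature-engineering transform and schema:

```python
# Minimal sketch of a CI-friendly unit test for a pipeline transform.
# The transform and its columns are hypothetical examples.
import pandas as pd


def add_features(df: pd.DataFrame) -> pd.DataFrame:
    """Hypothetical feature step: derive a ratio used by a downstream model."""
    out = df.copy()
    # Clip views to at least 1 so zero-view rows cannot divide by zero.
    out["clicks_per_view"] = out["clicks"] / out["views"].clip(lower=1)
    return out


def test_add_features_handles_zero_views():
    df = pd.DataFrame({"clicks": [3, 0], "views": [0, 10]})
    result = add_features(df)
    # Zero views must not produce division errors or infinities.
    assert result["clicks_per_view"].tolist() == [3.0, 0.0]


if __name__ == "__main__":
    test_add_features_handles_zero_views()
    print("ok")
```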