Supercell
Data Engineer, Analytics
Requirements
• 5+ years in Data Engineering or a related field.
• Expertise in Python and SQL, with the ability to guide others in querying and best practices (see the sketch after this list).
• Proven track record of designing and maintaining large-scale ETL processes.
• Familiarity with modern data stacks (e.g., Databricks, Spark) and build/orchestration tools.
• Proactive, independent, and passionate about delivering high-quality data in a fast-paced …
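As a concrete illustration of the Python-and-SQL expectation above, here is a minimal PySpark sketch of the kind of query guidance this role involves. The `events` table, its columns, and the 30-day window are hypothetical placeholders, not details from the posting.

```python
# Minimal sketch: a daily-active-users rollup via Spark SQL.
# Assumes a Spark session and a registered `events` table (hypothetical).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("daily-active-users").getOrCreate()

daily_active = spark.sql("""
    SELECT
        DATE(event_time)         AS event_date,
        COUNT(DISTINCT user_id)  AS daily_active_users
    FROM events
    WHERE event_time >= DATE_SUB(CURRENT_DATE(), 30)
    GROUP BY DATE(event_time)
    ORDER BY event_date
""")

daily_active.show()
```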
Responsibilities
• Own team-specific data pipelines and products end-to-end.
• Plan, execute, and maintain data engineering roadmaps, aligning with wider company initiatives.
• Define what data is collected to serve our evolving business needs.
• Develop pipelines to deliver new datasets, uncover insights, and improve decision-making.
• Continuously improve the scalability, reliability, and performance of our data systems.
• Support data analysts and other stakeholders with timely, accurate data.
• Participate in on-call rotations to maintain pipeline stability (a freshness-check sketch follows this list).
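To make the on-call bullet above concrete, here is a minimal freshness-check sketch of the kind an on-call data engineer might run. The function name, SLA threshold, and alert path are hypothetical illustrations, not part of the posting.

```python
# Minimal sketch: flag a dataset whose latest partition breaches a freshness SLA.
# `is_fresh`, `sla_hours`, and the alert path are hypothetical placeholders.
from datetime import datetime, timedelta, timezone

def is_fresh(latest_partition: datetime, sla_hours: int = 6) -> bool:
    """Return True if the data is within the SLA, False if it is stale."""
    age = datetime.now(timezone.utc) - latest_partition
    return age <= timedelta(hours=sla_hours)

# Example: a partition last written 8 hours ago breaches a 6-hour SLA.
last_write = datetime.now(timezone.utc) - timedelta(hours=8)
if not is_fresh(last_write):
    print("ALERT: dataset is stale; paging the on-call engineer.")
```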
Tesla
Data Engineer, Data Analytics

The Role:
We are looking for a Data Engineer to join our Data Analytics team. This person will design, develop, maintain, and support our Enterprise Data Warehouse & Manufacturing and Supply Chain Intelligent Solution within Tesla using various data and AI/BI tools. The position offers a unique opportunity to make a significant impact across the entire organization by developing data tools and applying AI to the manufacturing and supply chain lifecycle.

Responsibilities:
- Work in a time-constrained environment to analyze, design, develop, and deliver Enterprise Data Warehouse solutions for Supply Chain and Enterprise teams.
- Initiate or generalize BI solutions across the systems used by factories in different regions globally.
- Break business pain points down and translate them into executable IT solutions.
- Set up, maintain, and optimize the big data platform for production use in reporting and analysis applications.
- Establish scalable, efficient, automated processes for data analysis, model development, validation, and implementation.
- Create ETL pipelines using Spark/Flink.
- Create real-time data streaming and processing using technologies like Kafka and Spark (a streaming sketch follows this posting).
- Develop collaborative relationships with key business sponsors and IT resources for the efficient resolution of work requests.
- Provide timely and accurate estimates for newly proposed functionality enhancements, especially in critical situations.
- Develop, enforce, and recommend enhancements to applications in the areas of standards, methodologies, compliance, and quality assurance practices; participate in design and code walkthroughs.

Minimum
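As an illustration of the real-time bullet above, here is a minimal Spark Structured Streaming sketch that reads events from Kafka and lands them for downstream reporting. The broker address, topic, schema, and output paths are hypothetical placeholders, and running it requires the spark-sql-kafka connector package.

```python
# Minimal sketch: Kafka -> Spark Structured Streaming -> Parquet.
# Broker, topic, schema, and paths are hypothetical; the Kafka source
# needs the spark-sql-kafka-0-10 package on the classpath.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import (DoubleType, StringType, StructField,
                               StructType, TimestampType)

spark = SparkSession.builder.appName("factory-telemetry-stream").getOrCreate()

# Hypothetical schema for factory telemetry events.
schema = StructType([
    StructField("station_id", StringType()),
    StructField("metric", StringType()),
    StructField("value", DoubleType()),
    StructField("event_time", TimestampType()),
])

# Read raw events from a Kafka topic and parse the JSON payload.
raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")
       .option("subscribe", "factory-telemetry")
       .load())

events = (raw
          .select(from_json(col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

# Land parsed events as Parquet for downstream BI and reporting.
query = (events.writeStream
         .format("parquet")
         .option("checkpointLocation", "/tmp/checkpoints/factory-telemetry")
         .start("/tmp/tables/factory_telemetry"))

query.awaitTermination()
```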
• Design and implement end-to-end data pipelines (ETL) to ensure efficient data collection, cleansing, transformation, and storage, supporting both real-time and offline analytics needs.
• Develop automated data monitoring tools and interactive dashboards to enhance business teams’ insights into core metrics (e.g., user behavior, AI model performance).
• Collaborate with cross-functional teams (e.g., Product, Operations, Tech) to align data logic, integrate multi-source data (e.g., user behavior, transaction logs, AI outputs), and build a unified data layer.
• Establish data standardization and governance policies to ensure consistency, accuracy, and compliance.
• Provide structured data inputs for AI model training and inference (e.g., LLM applications, recommendation systems), optimizing feature engineering workflows (a feature-table sketch follows this list).
• Explore innovative AI-data integration use cases (e.g., embedding AI-generated insights into BI tools).
• Provide technical guidance and best practices on data architecture and BI solutions that meet both traditional reporting needs and modern AI Agent requirements.
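To ground the feature-engineering bullet above, here is a minimal PySpark sketch that aggregates raw behavior logs into a per-user feature table of the kind fed to model training or inference. The column names, sample rows, and aggregations are hypothetical illustrations, not details from the posting.

```python
# Minimal sketch: raw behavior logs -> per-user feature table.
# Columns, sample rows, and aggregations are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("user-features").getOrCreate()

# Hypothetical raw event log: one row per user action.
logs = spark.createDataFrame(
    [("u1", "click", 1.0), ("u1", "purchase", 20.0), ("u2", "click", 1.0)],
    ["user_id", "action", "amount"],
)

# Aggregate per-user features suitable for a training or inference table.
features = logs.groupBy("user_id").agg(
    F.count("*").alias("event_count"),
    F.countDistinct("action").alias("distinct_actions"),
    F.sum(
        F.when(F.col("action") == "purchase", F.col("amount")).otherwise(0.0)
    ).alias("purchase_amount"),
)

features.show()
```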