Microsoft Senior Software Engineer
Qualifications
Required
• Master's degree in Computer Science, Math, Software Engineering, Computer Engineering, or a related field AND 4+ years of experience in business analytics, data science, software development, data modeling, or data engineering; OR a Bachelor's degree in one of those fields AND 6+ years of such experience; OR equivalent experience.
• Experience building and maintaining production data pipelines at scale using technologies such as Apache Spark, Kafka, or similar distributed processing frameworks (see the sketch at the end of this section).
• Experience writing production-quality Python, Scala, or Java code for data processing applications.
• Experience with cloud data platforms (Azure, AWS, or GCP) and their data services.
• Experience with schema management and data governance practices.

Preferred
• Experience with real-time data processing and streaming architectures.
• Experience with orchestration frameworks such as ADF, Airflow, Prefect, or Dagster.
• Experience with containerization (Docker, Kubernetes) for data pipeline deployment.
• Experience implementing data quality frameworks and monitoring solutions.

Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable laws, regulations, and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please send a request via the Accommodation request form.
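For illustration of the pipeline experience listed under Required, below is a minimal PySpark batch ETL sketch. It is a sketch only: the storage paths, column names, and aggregation are hypothetical placeholders, not an actual Microsoft pipeline.

```python
# Minimal PySpark batch ETL sketch. Paths, columns, and the daily grain
# are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-daily-etl").getOrCreate()

# Read one day of raw events (hypothetical location and layout).
raw = spark.read.json("abfss://raw@example.dfs.core.windows.net/events/2024-01-01/")

# Normalize timestamps and deduplicate before aggregating.
events = (
    raw.withColumn("event_time", F.to_timestamp("event_time"))
       .dropDuplicates(["event_id"])
       .filter(F.col("event_type").isNotNull())
)

# Roll up to a reporting-friendly daily grain.
daily = (
    events.groupBy("event_type", F.to_date("event_time").alias("event_date"))
          .agg(F.count("*").alias("event_count"))
)

# Write partitioned output for downstream consumers.
(daily.write.mode("overwrite")
      .partitionBy("event_date")
      .parquet("abfss://curated@example.dfs.core.windows.net/daily_event_counts/"))

spark.stop()
```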
Responsibilities
• Build, maintain, and enhance data ETL pipelines for processing large-scale data with low latency and high throughput to support Copilot operations.
• Own data quality initiatives, including monitoring, validation, and remediation, to ensure integrity across attribution datasets and downstream dashboards.
• Implement schema management solutions that enable quick and seamless evolution of attribution data without disrupting consumers.
• Develop and maintain infrastructure that supports both batch and real-time attribution requirements (see the streaming sketch after this list).
• Collaborate with product managers, marketing analysts, and data scientists to deliver insights for campaign optimization and growth strategies.
• Design scalable attribution data architectures that can handle growing data volumes and evolving business needs.
• Implement comprehensive monitoring and observability solutions for attribution pipelines, including SLA tracking and automated alerting.
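The batch/real-time bullet above can be made concrete with a small Spark Structured Streaming sketch that reads click events from Kafka and maintains a windowed per-campaign aggregate. The broker address, topic, schema, and output paths are hypothetical, and the job assumes the Spark-Kafka connector is on the classpath.

```python
# Minimal Structured Streaming sketch for a real-time attribution aggregate.
# Broker, topic, schema, and paths are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("attribution-stream").getOrCreate()

schema = StructType([
    StructField("campaign_id", StringType()),
    StructField("user_id", StringType()),
    StructField("event_time", TimestampType()),
])

clicks = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
         .option("subscribe", "attribution-clicks")         # hypothetical topic
         .load()
         .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
         .select("e.*")
)

# Tolerate up to 10 minutes of late data; count unique users per campaign
# in 5-minute event-time windows.
per_campaign = (
    clicks.withWatermark("event_time", "10 minutes")
          .groupBy(F.window("event_time", "5 minutes"), "campaign_id")
          .agg(F.approx_count_distinct("user_id").alias("unique_users"))
)

query = (
    per_campaign.writeStream.outputMode("append")
                .format("parquet")
                .option("path", "/data/attribution/per_campaign/")          # hypothetical
                .option("checkpointLocation", "/checkpoints/attribution/")  # hypothetical
                .start()
)
query.awaitTermination()
```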
As a pivotal member of the Copilot team, you will bring unique perspectives and expertise to the organization, driving innovative features and delivering transformative AI-powered experiences.
• This is an individual contributor (IC) role; coding and engineering design account for more than 70% of the time.
• Manage complex projects from conception to implementation, with a focus on delivering AI-driven user interfaces and performance-optimized web applications.
• Coordinate technical delivery through sprints, fostering collaboration throughout the project lifecycle.
• Collaborate across geographies and time zones to establish best practices and develop automated processes that mitigate development risks.
• Investigate and debug complex performance issues in applications, ensuring optimal user experience and system efficiency.
• Design and implement performance testing strategies to proactively address bottlenecks (a minimal latency-harness sketch follows this list).
• Work closely with Product Designers, Product Managers, and Engineers to deliver AI-enhanced products that delight users.
• Drive team-wide investments in infrastructure and foundational systems to support long-term technical roadmaps.
• Solve technical challenges to deliver outstanding outcomes for customers and the business.
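As a sketch of what the performance-testing bullet might look like in practice, here is a tiny latency harness that measures a callable and asserts a p95 budget. The target workload, sample count, and budget are hypothetical.

```python
# Minimal latency-percentile harness. The workload, sample count, and
# p95 budget are hypothetical placeholders.
import statistics
import time

def measure_latencies(fn, n=200):
    """Call fn() n times and return per-call latencies in milliseconds."""
    samples = []
    for _ in range(n):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1000.0)
    return samples

def check_slo(samples, p95_budget_ms=250.0):
    """Fail if the 95th-percentile latency exceeds the budget."""
    p95 = statistics.quantiles(samples, n=20)[18]  # 19th of 20 cut points ~ p95
    print(f"p50={statistics.median(samples):.1f}ms p95={p95:.1f}ms")
    assert p95 <= p95_budget_ms, f"p95 {p95:.1f}ms exceeds {p95_budget_ms}ms budget"

if __name__ == "__main__":
    check_slo(measure_latencies(lambda: sum(range(10_000))))
```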
• Design, develop, and manage streaming and batch pipelines, supporting key functionalities such as large-scale index construction, web page crawling and feature extraction, image processing, and context rewriting.
• Continuously optimize a platform for managing, scheduling, and monitoring hundreds of pipelines (see the orchestration sketch after this list).
• Continuously optimize a platform for viewing, tracking, debugging, and operating massive-scale Ads data.
• Evaluate and optimize code and designs to maximize performance and minimize complexity.
• Mentor junior SDEs and independently drive feature development from the ground up.
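Managing, scheduling, and monitoring many pipelines is typically done through an orchestrator such as Airflow (listed under Preferred above). A minimal Airflow 2.x DAG sketch follows; the DAG id, schedule, retry policy, and task bodies are hypothetical placeholders.

```python
# Minimal Airflow 2.x DAG sketch for a scheduled, retried pipeline.
# DAG id, schedule, and task bodies are hypothetical placeholders.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw pages / extract features")  # placeholder work

def build_index():
    print("build index shards")                 # placeholder work

with DAG(
    dag_id="index_build_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow >= 2.4; older versions use schedule_interval
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    index_task = PythonOperator(task_id="build_index", python_callable=build_index)
    extract_task >> index_task
```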
• Build and improve experiment platforms for new scenarios.
• Build data pipelines on multiple computation platforms for reporting, analysis, and metrics pre-computation, with stable SLAs and high quality (a minimal freshness-check sketch follows this list).
• Build agents for productivity improvement.
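One way to keep an SLA stable is an automated freshness check that alerts when pipeline output lags. The sketch below assumes a hypothetical metadata lookup and alert hook; both functions are stand-ins, not real APIs.

```python
# Minimal data-freshness SLA check. latest_partition_time() and alert()
# are hypothetical stand-ins for a metadata store and a paging system.
from datetime import datetime, timedelta, timezone

SLA = timedelta(hours=6)  # hypothetical: newest output must be < 6h old

def latest_partition_time() -> datetime:
    """Hypothetical: query the metadata store for the newest partition."""
    return datetime(2024, 1, 1, 3, 0, tzinfo=timezone.utc)

def alert(message: str) -> None:
    """Hypothetical stand-in for paging / incident creation."""
    print(f"ALERT: {message}")

def check_freshness(now: datetime) -> bool:
    """Return True if output meets the SLA; alert and return False otherwise."""
    lag = now - latest_partition_time()
    if lag > SLA:
        alert(f"pipeline output is {lag} behind; SLA is {SLA}")
        return False
    return True

if __name__ == "__main__":
    check_freshness(datetime.now(timezone.utc))
```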
• Work with a team of passionate engineers to deliver success for customers.
• Design, implement, test, and operate data services.
• Release features on time and with high quality, meeting functional, performance, scalability, and compliance requirements.
• Drive quality from the design phase onward, incorporating best practices and engineering for testability (a minimal testability sketch follows this list).
• Solve problems relating to mission-critical services and create solutions to prevent problem recurrence.
• Participate in product live-site and operations work.
• Mentor and grow engineers to better deliver on the goals above.
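A small sketch of engineering for testability: keep the core transformation pure so it can be unit-tested with no service or storage dependency. The record shape and the redaction rule here are hypothetical.

```python
# Testability sketch: a pure transformation plus its unit test.
# The record shape and redaction rule are hypothetical placeholders.
def redact_emails(records: list[dict]) -> list[dict]:
    """Return copies of records with any 'email' field masked."""
    return [{**r, "email": "***"} if "email" in r else dict(r) for r in records]

def test_redact_emails():
    out = redact_emails([{"id": 1, "email": "a@b.com"}, {"id": 2}])
    assert out[0]["email"] == "***"
    assert "email" not in out[1]
    # The original input is not mutated.
    src = [{"id": 3, "email": "x@y.com"}]
    redact_emails(src)
    assert src[0]["email"] == "x@y.com"

if __name__ == "__main__":
    test_redact_emails()
    print("ok")
```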