ByteDance AI Data Development Engineer - ByteDance Research
Requirements
1. Bachelor's degree or above in computer science or a related field, with strong communication and teamwork skills;
2. 3+ years of experience in data processing or model training, familiar with image or video data processing methods;
3. Proficient in at least one of Python or Golang, and able to…
Responsibilities
ByteDance Research is dedicated to cutting-edge research in artificial intelligence, covering natural language processing, computer vision, machine learning, robotics, AI for Science, Responsible AI, and other areas, while also putting research results into practice to provide technical support and services for the company's existing products and businesses.
1. Produce high-quality training data for video generation model training;
2. Lead or participate in building data production workflows, and continuously improve the efficiency, stability, and usability of data production;
3. Work closely with algorithm engineers to understand the video generation model R&D process; lead or participate in the design, development, and maintenance of data solutions for model development; and explore state-of-the-art multimodal data processing techniques from the industry and apply them to data production.
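To make the "data production workflow" duty concrete, here is a minimal sketch of one quality-gating stage such a pipeline might contain. All field names (`motion_score`, `min_side`, etc.) and thresholds are illustrative assumptions, not the team's actual schema:

```python
from dataclasses import dataclass

@dataclass
class ClipMeta:
    """Metadata for one candidate training clip (hypothetical fields)."""
    path: str
    duration_s: float
    width: int
    height: int
    motion_score: float  # assumed upstream motion estimate in [0, 1]

def passes_quality_gate(clip: ClipMeta,
                        min_duration: float = 2.0,
                        min_side: int = 480,
                        min_motion: float = 0.1) -> bool:
    """Keep clips that are long enough, high-resolution, and non-static."""
    return (clip.duration_s >= min_duration
            and min(clip.width, clip.height) >= min_side
            and clip.motion_score >= min_motion)

def build_batch(clips: list[ClipMeta]) -> list[str]:
    """One pipeline stage: filter, then emit paths for the next stage."""
    return [c.path for c in clips if passes_quality_gate(c)]

clips = [
    ClipMeta("a.mp4", 5.0, 1920, 1080, 0.6),
    ClipMeta("b.mp4", 1.0, 1920, 1080, 0.6),   # too short
    ClipMeta("c.mp4", 8.0, 320, 240, 0.6),     # resolution too low
    ClipMeta("d.mp4", 8.0, 1280, 720, 0.02),   # near-static
]
print(build_batch(clips))  # only "a.mp4" survives the gate
```

In a production workflow each stage would read from and write to storage and be orchestrated for throughput and retries; the filtering logic itself stays this simple and testable.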
Team Introduction: We are the Doubao Video Generation Model (PixelDance) team. We focus on developing video generation models and solving the key problems of video generation, including but not limited to highly dynamic video generation and content consistency, building industry-leading video foundation models and leading the future of the technology. The video generation engineering team's work spans the full model production life cycle: here you will have the opportunity to take part in every stage, from data production and training acceleration to inference acceleration and service deployment. You will also work with state-of-the-art video generation technology, massive data, and large-scale clusters, and we look forward to seeing you scale up together with our models.
1. Optimize the performance of LLMs and diffusion models;
2. Push GPU performance to its limits through optimization techniques such as TensorRT, quantization, pruning, operator fusion, and hand-written CUDA kernels, in line with business needs;
3. Investigate and introduce inference optimization techniques for ByteDance Research;
4. Collaborate closely with algorithm teams on joint algorithm-system optimization.
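Of the optimization techniques listed above, quantization is the easiest to illustrate without GPU code. The sketch below shows symmetric per-tensor int8 quantization in plain Python, assuming nothing about the team's actual toolchain (TensorRT and framework quantizers implement far more sophisticated calibrated variants):

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: w ≈ scale * q, q in [-127, 127]."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 codes."""
    return [scale * v for v in q]

weights = [0.5, -1.2, 0.03, 0.88]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Reconstruction error is bounded by half a quantization step (scale / 2).
assert all(abs(w - r) <= scale / 2 + 1e-9 for w, r in zip(weights, restored))
```

The payoff in practice is that int8 weights take 4x less memory bandwidth than fp32 and map onto fast integer tensor-core paths, which is where most of the inference speedup comes from.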
Team Introduction: Data AML is ByteDance's machine learning middle platform, providing training and inference systems for recommendation, advertising, CV (computer vision), speech, and NLP (natural language processing) across businesses such as Douyin, Toutiao, and Xigua Video. AML provides powerful machine learning computing capabilities to internal business units and conducts research on general and innovative algorithms to solve key business challenges. Additionally, through Volcano Engine, it delivers core machine learning and recommendation system capabilities to external enterprise clients. Beyond business applications, AML is also engaged in cutting-edge research in areas such as AI for Science and scientific computing.

Research Project Introduction: Large-scale recommendation systems are increasingly applied to short video, text community, image, and other products, and the role of modal information in recommendation systems has become more prominent. ByteDance's practice has found that modal information can serve as a generalization feature to support business scenarios such as recommendation, and research on end-to-end ultra-large-scale multimodal recommendation systems has enormous potential. We expect to further explore directions such as multimodal cotraining, 7B/13B large-parameter models, and longer-sequence end-to-end modeling based on algorithm-engineering CoDesign.

Engineering research directions include:
1. Representation of multimodal samples;
2. Construction of high-performance multimodal inference engines based on the PyTorch framework;
3. Development of high-performance multimodal training frameworks;
4. Application of heterogeneous hardware in multimodal recommendation systems.

Algorithmic research directions include:
1. Design of suitable recommendation-advertising and multimodal cotraining architectures;
2. Sparse Mixture of Experts (Sparse MoE);
3. Memory Network;
4. Mixed-precision techniques.

Responsibilities:
1. Design and develop machine learning system architectures, and tune system performance;
2. Solve technical challenges around high concurrency, high reliability, and high scalability;
3. Cover multiple sub-areas of machine learning systems, including resource scheduling, task orchestration, model training, model inference, model management, dataset management, workflow orchestration, and ML for System;
4. Investigate and introduce forward-looking machine learning systems technologies, such as the latest hardware architectures, heterogeneous computing systems, and GPU optimization techniques;
5. Research machine-learning-based methods for analyzing and optimizing cluster and service resource usage.
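Among the algorithmic directions above, Sparse MoE is the one whose core mechanism fits in a few lines. The sketch below shows top-k gating in plain Python (real systems implement this in PyTorch with batched routing and load-balancing losses; the scalar experts and gate weights here are toy assumptions):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def sparse_moe(x, gate_w, experts, k=2):
    """Route input x to the top-k experts by gate score; combine their
    outputs weighted by the renormalized gate probabilities."""
    logits = [sum(wi * xi for wi, xi in zip(row, x)) for row in gate_w]
    probs = softmax(logits)
    top = sorted(range(len(experts)), key=lambda i: probs[i], reverse=True)[:k]
    norm = sum(probs[i] for i in top)
    out = 0.0
    for i in top:
        out += (probs[i] / norm) * experts[i](x)  # only k experts execute
    return out

# Four toy scalar experts; only the top-2 by gate score are evaluated.
experts = [lambda x: sum(x), lambda x: max(x), lambda x: min(x), lambda x: 0.0]
gate_w = [[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]]
y = sparse_moe([2.0, 1.0], gate_w, experts, k=2)
```

The point of the sparsity is that compute per token scales with k, not with the total expert count, which is how MoE layers reach very large parameter counts at manageable FLOPs.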
Team Introduction: TikTok Content Security Algorithm Research Team
The International Content Safety Algorithm Research Team is dedicated to maintaining a safe and trustworthy environment for users of ByteDance's international products. We develop and iterate on machine learning models and information systems to identify risks earlier, respond to incidents faster, and monitor potential threats more effectively. The team also leads the development of foundational large models for products. In the R&D process, we tackle key challenges such as data compliance, model reasoning capability, and multilingual performance optimization. Our goal is to build secure, compliant, and high-performance models that empower various business scenarios across the platform, including content moderation, search, and recommendation.

Research Project Background: In recent years, Large Language Models (LLMs) have achieved remarkable progress across various domains of natural language processing (NLP) and artificial intelligence. These models have demonstrated impressive capabilities in tasks such as language generation, question answering, and text translation. However, reasoning remains a key area for further improvement. Current approaches to enhancing reasoning often rely on large amounts of Supervised Fine-Tuning (SFT) data, but acquiring such high-quality SFT data is expensive and poses a significant barrier to scalable model development and deployment. To address this, OpenAI's o1 series of models has made progress by increasing the length of the Chain-of-Thought (CoT) reasoning process. While this technique has proven effective, how to scale it efficiently at test time remains an open question. Recent research has explored alternative methods such as Process-based Reward Models (PRM), Reinforcement Learning (RL), and Monte Carlo Tree Search (MCTS) to improve reasoning.
However, these approaches still fall short of the general reasoning performance achieved by OpenAI's o1 series of models. Notably, the recent DeepSeek R1 paper suggests that pure RL methods can enable LLMs to autonomously develop reasoning skills without relying on expensive SFT data, revealing the substantial potential of RL in advancing LLM capabilities.

Research Challenges:
1. Reward model design: Designing a suitable reward model is key to the reinforcement learning process. The reward model must accurately reflect the quality of the reasoning process and guide the model to gradually improve its reasoning ability. This requires not only precise evaluation criteria for different tasks, but also a reward model that can adjust dynamically during training to track the model's changing and improving performance.
2. Stable training: In the absence of high-quality SFT data, ensuring stable training during reinforcement learning is a major challenge. RL typically involves extensive exploration and trial and error, which can lead to unstable training or even degraded model performance. Robust training methods are needed to guarantee stability and effectiveness throughout training.
3. Extending from math and code tasks to natural language tasks: Existing reasoning-enhancement methods are mainly applied to math and code, where CoT data is relatively abundant. Natural language tasks, however, are more open-ended and complex. Transferring successful RL strategies from these comparatively well-structured tasks to natural language processing requires in-depth research and innovation in data processing and RL methods to achieve general cross-task reasoning ability.
4. Improving reasoning efficiency: Raising inference efficiency while maintaining reasoning performance is another important challenge. Inference efficiency directly affects a model's usability and cost in real applications. Knowledge distillation can transfer the knowledge of a complex model to a smaller one, reducing compute consumption. Another potential approach is to use Long Chain-of-Thought (Long-CoT) techniques to improve Short-CoT models, increasing inference speed while preserving reasoning quality.
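The first challenge, reward design, is easiest to see with a rule-based outcome reward of the kind used in RL-with-verifiable-rewards setups. The scheme below (the `Answer:` format bonus, the last-number extraction, and the weights) is an illustrative assumption, not ByteDance's or DeepSeek's actual reward function:

```python
import re

def outcome_reward(response: str, gold_answer: str) -> float:
    """Rule-based outcome reward sketch: +1 if the final answer in the
    response matches the reference, plus a small format bonus for
    separating the reasoning trace from the final answer."""
    reward = 0.0
    # Format bonus: response marks its final answer explicitly.
    if "Answer:" in response:
        reward += 0.1
    # Correctness: compare the last number in the response to the gold answer.
    numbers = re.findall(r"-?\d+(?:\.\d+)?", response)
    if numbers and numbers[-1] == gold_answer:
        reward += 1.0
    return reward

good = "3 * 4 = 12, then 12 + 5 = 17. Answer: 17"
bad = "I think the result is 20."
assert outcome_reward(good, "17") > outcome_reward(bad, "17")
```

Because the reward is computed mechanically from the final answer rather than learned from preference data, it sidesteps reward-model drift for math- and code-style tasks; extending it to open-ended natural language outputs (challenge 3 above) is precisely where such simple rules stop working.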