
ByteDance | Large Model Algorithm Researcher (Multimodal & Code AI) - TikTok AI Innovation Center - 筋斗云人才计划 (Talent Program)

Campus Recruitment | Full-time | Job ID: A118205 | Location: Singapore | Status: Open

Requirements


1. Hold a doctoral degree; priority given to candidates who have published papers in fields such as machine learning (ML), computer vision (CV), or natural language processing (NLP).
2. Possess excellent programming, data structure, and algorithm skills, and be proficient in C/C++ or Python; priority given to winners of competitions such as ACM/ICPC, NOI/IOI, TopCoder, or Kaggle.
3. Have research experience in machine learning, particularly in large language models (LLMs) and generative AI.
4. Be passionate about technology, with strong problem analysis and problem-solving skills, enthusiasm for tackling challenging problems, good communication skills, and team spirit.

Responsibilities


Team Introduction:
The TikTok AI Innovation Center is a department focused on building AI infrastructure and driving cutting-edge research in AI. We explore industry-leading AI technologies, including large language models (LLMs) and multimodal large models, with the goal of developing models that can understand multilingual content and vast amounts of video data, ultimately delivering a better content consumption experience for users. In the Code AI domain, we leverage the powerful code understanding and reasoning capabilities of LLMs to enhance program performance and R&D efficiency.

Project Introduction:
Multimodal foundation models (VLMs) are a research hotspot in the industry and a key technology for TikTok's business scenarios. In 2024, the TikTok AI Innovation Center developed VFM V1, a multimodal large model tailored to TikTok's business scenarios. It matches the performance of the best open-source model, Qwen VL, on public test sets, while significantly outperforming all other foundation models on TikTok's business test sets. Going forward, we aim to keep developing foundation models with efficient perception and reasoning capabilities that can handle multilingual content and massive volumes of video understanding, delivering a better content consumption experience for users.

Project Challenges:
1. Enhance the multimodal perception encoder: the current encoder samples video at a fixed frame rate; we need to explore more efficient adaptive frame rates while also considering the integration of additional modalities such as audio and user behavior (a minimal adaptive-sampling sketch follows this list).
2. Fuse multimodal perception with reasoning so that the model develops stronger overall perceptual and cognitive abilities.
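To make the adaptive frame-rate idea in challenge 1 concrete, here is a minimal, illustrative sketch of content-adaptive frame sampling: frames are scored by how much they differ from their predecessor, and a fixed frame budget is spent mostly on high-motion segments. The scoring heuristic and the `select_frames` helper are assumptions for illustration, not the team's actual encoder design.

```python
# Minimal sketch of content-adaptive frame sampling (illustrative only).
# Assumption: frames arrive as a (T, H, W, C) uint8 array; a real encoder
# would operate on decoded video and feed the selected frames to a vision tower.
import numpy as np

def select_frames(frames: np.ndarray, budget: int) -> list[int]:
    """Pick `budget` frame indices, biased toward segments with more motion."""
    t = frames.shape[0]
    if budget >= t:
        return list(range(t))
    # Per-frame "novelty": mean absolute difference from the previous frame.
    diffs = np.abs(frames[1:].astype(np.float32) - frames[:-1].astype(np.float32))
    novelty = np.concatenate([[1.0], diffs.mean(axis=(1, 2, 3))])
    novelty = novelty / (novelty.sum() + 1e-8)
    # Deterministically place the budget along the cumulative novelty curve,
    # so busy segments receive proportionally more samples.
    cdf = np.cumsum(novelty)
    targets = (np.arange(budget) + 0.5) / budget
    indices = np.searchsorted(cdf, targets)
    return sorted(set(int(i) for i in indices))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    static = np.repeat(rng.integers(0, 255, (1, 32, 32, 3), dtype=np.uint8), 60, axis=0)
    moving = rng.integers(0, 255, (60, 32, 32, 3), dtype=np.uint8)
    video = np.concatenate([static, moving])  # 120 frames, second half "busy"
    print(select_frames(video, budget=16))    # most kept indices fall in the high-motion half
```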

Related Positions

Campus Recruitment | Job ID: A40464A

Team Introduction:
The Risk Control R&D team is dedicated to countering malicious activity across ByteDance products, including Douyin and Toutiao. Its work spans multiple domains of risk governance, such as content, transactions, traffic, and accounts. By leveraging machine learning, multimodal models, and large models, the team works to understand user behavior and content in order to identify potential risks. By continuously deepening its understanding of the business and of user behavior, the team drives innovation in models and algorithms, aiming to build an industry-leading risk control algorithm system.

Project Objectives:
Using risk control data, optimize and enhance large models' ability to understand and reason over structured data (sequential data and graph data).

Project Necessity:
Data in risk control scenarios is primarily structured, while recent gains in large models have mostly been in understanding text and images. Combining the non-text, non-image structured data of risk control scenarios with large models so that they can genuinely comprehend structured data remains an industry-wide challenge. It involves three key difficulties:
1. How to effectively align structured information with the NLP semantic space, so that the model understands both the data structure and the semantic information (see the sketch after this description).
2. How to design appropriate instructions that let a large model interpret the structural information within structured data.
3. How to endow large language models with step-by-step reasoning for downstream graph learning tasks, so that they can infer more complex relationships and attributes.

Project Content:
Current industry explorations of structured data include:
1. Graph data understanding (e.g., GraphGPT: enabling large models to read graph data, SIGIR'2024).
2. Graph data RAG (e.g., Microsoft GraphRAG: Unlocking LLM discovery on narrative private data).
3. Sequential data understanding (e.g., StructGPT: a large-model reasoning framework for structured data, EMNLP 2023).
However, current work mainly targets understanding a single type of structured data, and several challenges remain in risk control scenarios:
1. How to effectively fuse and understand different types of structured data, especially combining graph and sequential data.
2. The difficulties listed under "Project Necessity" above, particularly step-by-step reasoning for downstream tasks, which is currently underexplored; reasoning over sequential data has received especially little study.

Research Directions:
1. Large-model structured data understanding
2. Large-model structured data RAG
3. Large-model chains of thought
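To make the structured-data alignment difficulty concrete, here is a minimal, illustrative sketch of serializing a small risk-control graph into a text prompt, in the spirit of StructGPT/GraphGPT-style structured-data interfaces. The account/device schema, the `Edge` class, and the prompt wording are assumptions for illustration only.

```python
# Minimal sketch: linearize a small graph into (head, relation, tail) triples
# and wrap it in a prompt so a chat LLM can reason over the structure.
from dataclasses import dataclass

@dataclass
class Edge:
    src: str
    rel: str
    dst: str

def graph_to_prompt(edges: list[Edge], question: str) -> str:
    """Serialize edges as triples and append the question with a reasoning instruction."""
    lines = [f"({e.src}, {e.rel}, {e.dst})" for e in edges]
    return (
        "You are given a graph as a list of (head, relation, tail) triples:\n"
        + "\n".join(lines)
        + f"\n\nQuestion: {question}\nAnswer step by step, citing the triples you used."
    )

if __name__ == "__main__":
    edges = [
        Edge("account_17", "logged_in_from", "device_3"),
        Edge("account_42", "logged_in_from", "device_3"),
        Edge("account_42", "paid_to", "merchant_9"),
    ]
    prompt = graph_to_prompt(edges, "Which accounts may be controlled by the same actor, and why?")
    print(prompt)  # this string can be sent to any chat LLM endpoint
```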

Updated 2025-05-26
Campus Recruitment | Job ID: A221696

Team Introduction:
The Recommendation and Marketing team is responsible for recommendation in the International E-commerce Mall, covering feed recommendation in core scenarios such as the mall homepage, the transaction funnel, product detail pages, and stores & showcases. The team provides hundreds of millions of users every day with precise, personalized recommendations for products, live streams, and short videos, and is dedicated to solving the hard problems of modern recommendation systems. Through algorithmic innovation we continuously improve user experience and efficiency, creating greater user and social value.

Project Background/Objectives:
This project aims to explore new paradigms for large models in recommendation: breaking through the model architectures and Infra solutions that have been in place for a long time, achieving significantly better performance than the current baseline models, and applying the results across business scenarios such as Douyin short video, LIVE, E-commerce, and Toutiao. Building large models for recommendation is particularly challenging because recommendation places high demands on engineering efficiency and because the recommendation experience is personalized per user. The project will conduct in-depth research across the directions below to explore and establish large-model solutions for recommendation scenarios.

Project Challenges/Necessity:
In natural language processing, LLMs have outperformed SOTA models on many vertical tasks; in contrast, industrial recommendation systems have seen few major changes in recent years. This project seeks to overhaul the long-standing paradigms of recommendation model architecture and Infra, deliver models with significantly better performance, and apply them to scenarios such as Douyin short video and LIVE. Key challenges include the high engineering-efficiency requirements of recommendation systems, the personalized nature of the recommendation experience, and effective content representation for formats such as short videos and live streams. The project will address these through research on model parameter scaling, content and user representation learning, multimodal content understanding, ultra-long sequence modeling, and generative recommendation models, driving a systematic upgrade of recommendation models.

Project Content:
1. Representation learning based on content understanding and user behavior
2. Scaling of recommendation model parameters and compute
3. Ultra-long sequence modeling
4. Generative recommendation models (a minimal sketch follows this description)

Research Directions: recommendation algorithms, large recommendation models.
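As a concrete illustration of the generative-recommendation direction, here is a minimal sketch that treats item IDs as tokens and trains a small causal Transformer to predict the next item in a user's interaction sequence. The vocabulary size, dimensions, and toy data are assumptions for illustration; a production system would differ substantially.

```python
# Minimal sketch of a generative (next-item) recommendation model: items are
# tokens and the model autoregressively predicts the next item ID.
import torch
import torch.nn as nn

class NextItemModel(nn.Module):
    def __init__(self, num_items: int, dim: int = 64, max_len: int = 50):
        super().__init__()
        self.item_emb = nn.Embedding(num_items, dim)
        self.pos_emb = nn.Embedding(max_len, dim)
        layer = nn.TransformerEncoderLayer(dim, nhead=4, dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, num_items)

    def forward(self, seq: torch.Tensor) -> torch.Tensor:
        # seq: (batch, time) item IDs; returns (batch, time, num_items) logits.
        t = seq.size(1)
        pos = torch.arange(t, device=seq.device)
        x = self.item_emb(seq) + self.pos_emb(pos)
        mask = nn.Transformer.generate_square_subsequent_mask(t).to(seq.device)  # causal mask
        return self.head(self.encoder(x, mask=mask))

if __name__ == "__main__":
    torch.manual_seed(0)
    model = NextItemModel(num_items=1000)
    seq = torch.randint(0, 1000, (8, 20))        # 8 users, 20 interactions each
    logits = model(seq[:, :-1])                  # predict the next item at every position
    loss = nn.functional.cross_entropy(logits.reshape(-1, 1000), seq[:, 1:].reshape(-1))
    print(loss.item())
```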

Updated 2025-05-26
Campus Recruitment | Job ID: A07472

Team Introduction:
The ByteDance Search team is responsible for search algorithm innovation and architecture R&D for products such as Douyin, Toutiao, and Xigua Video, as well as for businesses such as E-commerce and Local Services. We use cutting-edge machine learning for end-to-end modeling and continuously push for breakthroughs, and we also focus on building and optimizing distributed and machine learning systems, from memory and disk optimization to index compression and the exploration of recall and ranking algorithms, giving students ample room to grow.

Main areas of work:
1. Exploring cutting-edge NLP technologies: from basic tasks such as word segmentation and Named Entity Recognition (NER), through text and multimodal pre-training, to business functions such as query analysis and fundamental relevance modeling, deep learning models are applied across the whole pipeline, and every detail presents a challenge.
2. Cross-modal matching: applying deep learning that combines Computer Vision (CV) and Natural Language Processing (NLP) in search, to achieve strong semantic understanding and retrieval for multimodal video search.
3. Large-scale streaming machine learning: using large-scale machine learning to solve recommendation problems within search, making search more personalized and better at understanding user needs.
4. Architecture for data at the scale of hundreds of billions: in-depth research and innovation across large-scale offline computing, performance and scheduling optimization of distributed systems, and high-availability, high-throughput, low-latency online services.
5. Recommendation technologies: building an industry-leading search recommendation system on ultra-large-scale machine learning, and continuously exploring and innovating in search recommendation.

Project Background/Objectives:
With the rapid development of large model technology, intelligent search faces new opportunities and challenges. When confronted with massive data, multimodal information, and complex user needs, traditional search technology increasingly suffers from insufficient model capacity, limited semantic understanding, and low resource utilization. Building intelligent search on large models aims to raise the intelligence of the search system, improve user experience, and solve core problems such as ultra-large-scale retrieval, complex semantic understanding, and efficient resource usage. Specific goals:
1. Explore the combination of large models and ranking algorithms to improve the precision of personalized ranking and the user experience.
2. Research generative retrieval algorithms to solve ultra-large-scale retrieval over candidate pools of tens to hundreds of billions of items (a minimal constrained-decoding sketch follows this description).
3. Use large language models (LLMs) to improve search satisfaction for complex and ambiguous queries.
4. Build a high-performance, low-resource-consumption, large-scale unified batch/stream retrieval and computation system to improve resource utilization.

Project Challenges/Necessity:
1. Personalized ranking: traditional ranking algorithms struggle to make full use of multimodal information (text, images, video, etc.), and their limited model capacity cannot meet users' needs for precise, personalized search.
2. Ultra-large-scale retrieval: traditional discriminative models face insufficient model capacity and inefficient indexing when retrieving from candidate pools of hundreds of billions of items, so a new generation of retrieval algorithms is needed.
3. Complex query understanding: user search needs are increasingly complex, and traditional search engines struggle to understand long, difficult, or ambiguous queries, leading to low satisfaction with results.
4. Resource utilization: the separation of storage and compute in search systems leads to low resource utilization; optimizing resource usage while maintaining performance is a key problem.
5. Building intelligent search on large models is the necessary route to solving the challenges above: large model technology can significantly improve the search system's semantic understanding, retrieval efficiency, and resource utilization, providing users with a more precise and efficient search experience.

Project Content:
1. Personalized ranking with large models
2. Ultra-large-scale generative retrieval algorithms
3. Using LLMs to improve search satisfaction for complex and ambiguous queries
4. High-performance, large-scale unified batch/stream retrieval and computation systems

Research Directions: large ranking models, generative retrieval and cross-modal fusion, large language models (LLMs) and complex query understanding, high-performance computing and storage architecture.
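As a concrete illustration of the generative retrieval direction, here is a minimal sketch of the constrained decoding step such systems typically rely on: the model emits a document identifier token by token, and at each step only tokens that extend a valid identifier in the corpus are allowed. The tiny identifier vocabulary, the trie structure, and the greedy loop are assumptions for illustration.

```python
# Minimal sketch of constrained decoding for generative retrieval:
# identifiers from the corpus are stored in a prefix trie, and per-step logits
# are masked so that only valid continuations can be chosen.
import numpy as np

def build_trie(doc_ids: list[list[int]]) -> dict:
    """Prefix trie over identifier token sequences; the key -1 marks a complete ID."""
    root: dict = {}
    for tokens in doc_ids:
        node = root
        for tok in tokens:
            node = node.setdefault(tok, {})
        node[-1] = True
    return root

def constrained_greedy_decode(step_logits: np.ndarray, trie: dict) -> list[int]:
    """Greedy decode that never leaves the set of valid identifiers."""
    node, out = trie, []
    for logits in step_logits:
        allowed = [tok for tok in node if tok != -1]
        if not allowed:
            break
        masked = np.full_like(logits, -np.inf)
        masked[allowed] = logits[allowed]
        tok = int(masked.argmax())
        out.append(tok)
        node = node[tok]
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    corpus = [[3, 1, 4], [3, 1, 5], [2, 7, 1]]      # 3 documents, 3-token IDs each
    trie = build_trie(corpus)
    fake_logits = rng.normal(size=(3, 10))          # stand-in for model outputs over a 10-token vocab
    print(constrained_greedy_decode(fake_logits, trie))  # always a valid identifier prefix
```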

Updated 2025-05-26
Campus Recruitment | Job ID: A238623

Team Introduction:
The International Content Safety Algorithm Research team (TikTok content security algorithms) is dedicated to maintaining a safe and trustworthy environment for users of ByteDance's international products. We develop and iterate on machine learning models and information systems to identify risks earlier, respond to incidents faster, and monitor potential threats more effectively. The team also leads development of the products' foundational large models. In the R&D process we tackle key challenges such as data compliance, model reasoning capability, and multilingual performance optimization, with the goal of building secure, compliant, high-performance base models that empower business scenarios across the platform, including content moderation, search, and recommendation.

Research Project Background:
In recent years, Large Language Models (LLMs) have made remarkable progress across natural language processing (NLP) and artificial intelligence, demonstrating impressive capabilities in tasks such as language generation, question answering, and translation. Reasoning, however, remains a key area for improvement. Current approaches to strengthening reasoning often rely on large amounts of Supervised Fine-Tuning (SFT) data, but acquiring high-quality SFT data is expensive and is a significant barrier to scalable model development and deployment. OpenAI's o1 series of models made progress by lengthening the Chain-of-Thought (CoT) reasoning process; while effective, how to scale this approach efficiently at test time remains an open question. Recent research has explored alternatives such as Process-based Reward Models (PRM), Reinforcement Learning (RL), and Monte Carlo Tree Search (MCTS), but these have not yet matched the general reasoning performance of the o1 series. Notably, the DeepSeek R1 paper suggests that pure RL can enable an LLM to develop reasoning skills autonomously, without relying on expensive SFT data, revealing the substantial potential of RL for advancing LLM capabilities.
Research Challenges:
1. Reward model design: in reinforcement learning, designing a suitable reward model is key. The reward must accurately reflect the quality of the reasoning process and guide the model to improve step by step. This requires setting precise evaluation criteria for different tasks and ensuring the reward model can adjust dynamically during training as model performance changes and improves (a minimal rule-based reward sketch follows this description).
2. Stable training: without high-quality SFT data, keeping the RL training process stable is a major challenge. RL involves extensive exploration and trial and error, which can destabilize training or even degrade model performance, so robust training methods are needed to guarantee stability and effectiveness.
3. Extending from math and code to natural language tasks: existing reasoning-oriented RL methods are mostly applied to math and code, where CoT data is relatively abundant. Natural language tasks are more open-ended and complex, so transferring successful RL strategies from these comparatively simple tasks requires deep research and innovation in data processing and RL methods to achieve general cross-task reasoning.
4. Improving reasoning efficiency: raising reasoning efficiency while preserving reasoning quality is also important, since efficiency directly affects a model's usability and cost in practice. Knowledge distillation can transfer the knowledge of a complex model to a smaller one to reduce compute; using Long Chain-of-Thought (Long-CoT) techniques to improve Short-CoT models is another potential way to speed up reasoning while maintaining quality.
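As a concrete illustration of the reward-design challenge, here is a minimal sketch of a rule-based, verifiable reward of the kind the DeepSeek R1 line of work popularized: one term checks output format, another checks the extracted answer against a ground truth. The tags, weights, and regular expressions are assumptions for illustration, not the team's actual design.

```python
# Minimal sketch of a rule-based reward for RL fine-tuning of an LLM (illustrative only).
# An RL algorithm such as PPO or GRPO would maximize this scalar signal.
import re

def format_reward(response: str) -> float:
    """1.0 if the response contains a <think> block followed by an <answer> block."""
    pattern = r"<think>.*?</think>\s*<answer>.*?</answer>"
    return 1.0 if re.search(pattern, response, flags=re.DOTALL) else 0.0

def accuracy_reward(response: str, ground_truth: str) -> float:
    """1.0 if the content of <answer>...</answer> matches the verifiable reference."""
    match = re.search(r"<answer>(.*?)</answer>", response, flags=re.DOTALL)
    if match is None:
        return 0.0
    return 1.0 if match.group(1).strip() == ground_truth.strip() else 0.0

def total_reward(response: str, ground_truth: str) -> float:
    # Weighted sum: a small format term plus a larger correctness term.
    return 0.2 * format_reward(response) + 0.8 * accuracy_reward(response, ground_truth)

if __name__ == "__main__":
    good = "<think>2 squared is 4, plus 3 is 7.</think><answer>7</answer>"
    bad = "The answer is 7."
    print(total_reward(good, "7"), total_reward(bad, "7"))  # 1.0 0.0
```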

Updated 2025-05-26