
SenseTime Research Engineer
Requirements
•Knowledge of Machine Learning or Artificial Intelligence, or a Master's degree or PhD in a relevant discipline.
•Familiarity with RAG and Agent development paradigms, and an understanding of advanced technologies and application scenarios in these fields.
•Proficiency in retrieval technologies, including vector search, hybrid retrieval, and re-ranking.
•Strong programming skills, familiarity with mainstream deep learning frameworks such as PyTorch, and proficiency in at least one of Python or C++.
•Curiosity about large language models and emerging technologies, with strong learning and innovation capabilities.
•A keen ability to identify business and data-related problems, along with excellent analytical and problem-solving skills.
•Familiarity with machine learning and large-model algorithms, training methods, and data mining techniques.
Responsibilities
•Participate in the development of knowledge-based Q&A or conversational AI products, enhancing the capabilities of large language models in areas such as RAG (Retrieval-Augmented Generation) and Agents, covering data systems, algorithm optimization, prompt engineering, and evaluation iteration.
•Build a systematic, specialized knowledge base and optimize the end-to-end retrieval pipeline, continuously improving retrieval precision and recall.
•Improve the performance of large language models in Q&A and conversational scenarios through techniques such as RAG, SFT (Supervised Fine-Tuning), and RLHF (Reinforcement Learning from Human Feedback).
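The retrieval work described above (hybrid retrieval, precision/recall tuning, prompt assembly for RAG) can be sketched minimally. Everything below is a toy illustration: the term-overlap lexical scorer, the hand-written two-dimensional embeddings, and the fusion weight `alpha` are assumptions for demonstration, not a description of any production pipeline.

```python
import math

def cosine(u, v):
    # Cosine similarity between two dense embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def keyword_score(query, doc):
    # Lexical score: fraction of query terms that appear in the document.
    q = set(query.lower().split())
    return len(q & set(doc.lower().split())) / len(q) if q else 0.0

def hybrid_retrieve(query, query_vec, corpus, alpha=0.5, top_k=2):
    # corpus: list of (text, embedding) pairs. Fuse lexical and vector
    # scores, then re-rank candidates by the fused score.
    scored = sorted(
        corpus,
        key=lambda d: alpha * keyword_score(query, d[0])
        + (1 - alpha) * cosine(query_vec, d[1]),
        reverse=True,
    )
    return [text for text, _ in scored[:top_k]]

def build_rag_prompt(query, passages):
    # Assemble retrieved passages into a grounded prompt for the LLM.
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\nQuestion: {query}"
    )
```

In practice the lexical side would be BM25 over an inverted index and the vector side an ANN search; a cross-encoder re-ranker would typically replace the simple score fusion for the final ordering.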
1. Targeting real-world robotics scenarios, own the productionization and large-scale deployment of embodied multimodal large models, including large-scale training performance and quality optimization, model engineering and quality optimization, on-device and heterogeneous deployment acceleration, and robot system integration.
Team Introduction: TikTok Content Security Algorithm Research Team
The International Content Safety Algorithm Research Team is dedicated to maintaining a safe and trustworthy environment for users of ByteDance's international products. We develop and iterate on machine learning models and information systems to identify risks earlier, respond to incidents faster, and monitor potential threats more effectively. The team also leads the development of foundational large models for products. In the R&D process, we tackle key challenges such as data compliance, model reasoning capability, and multilingual performance optimization. Our goal is to build secure, compliant, and high-performance models that empower various business scenarios across the platform, including content moderation, search, and recommendation.

Research Project Background:
In recent years, Large Language Models (LLMs) have achieved remarkable progress across various domains of natural language processing (NLP) and artificial intelligence. These models have demonstrated impressive capabilities in tasks such as language generation, question answering, and text translation. However, reasoning remains a key area for further improvement. Current approaches to enhancing reasoning abilities often rely on large amounts of Supervised Fine-Tuning (SFT) data, yet acquiring such high-quality SFT data is expensive and poses a significant barrier to scalable model development and deployment. To address this, OpenAI's o1 series of models has made progress by increasing the length of the Chain-of-Thought (CoT) reasoning process. While this technique has proven effective, how to scale it efficiently at test time remains an open question. Recent research has explored alternative methods such as Process-based Reward Models (PRM), Reinforcement Learning (RL), and Monte Carlo Tree Search (MCTS) to improve reasoning.
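One concrete way to spend more compute at test time, in the spirit of the CoT-scaling work described above, is best-of-N sampling against a scorer. The sketch below is a generic illustration: `generate` and `score` are placeholder callables (in a real system, stochastic LLM decoding and a reward model or verifier), not any specific lab's implementation.

```python
def best_of_n(generate, score, prompt, n=8):
    # Test-time scaling: sample n candidate reasoning chains for the
    # same prompt and keep the one the scorer rates highest.
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=score)
```

PRM-guided search and MCTS-style methods can be seen as refinements of this idea: instead of scoring only complete chains, they score partial reasoning steps and expand the most promising ones.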
However, these approaches still fall short of the general reasoning performance achieved by OpenAI's o1 series of models. Notably, the recent DeepSeek R1 paper suggests that pure RL methods can enable LLMs to autonomously develop reasoning skills without relying on expensive SFT data, revealing the substantial potential of RL in advancing LLM capabilities.

Research Challenges:
1. Reward model design: In reinforcement learning, designing a suitable reward model is key. The reward model must accurately reflect the quality of the reasoning process and guide the model to gradually improve its reasoning ability. This requires not only precise evaluation criteria for different tasks, but also a reward model that can adjust dynamically during training to track changes and improvements in model performance.
2. Stable training: In the absence of high-quality SFT data, ensuring stable training during reinforcement learning is a major challenge. RL typically involves extensive exploration and trial-and-error, which can lead to unstable training or even degraded model performance. Robust training methods are needed to guarantee stability and effectiveness throughout the process.
3. Extending from math and code to natural language tasks: Existing reasoning-enhancement methods are mainly applied to math and code, where CoT data is relatively abundant. Natural language tasks are more open-ended and complex, so transferring successful RL strategies from these comparatively simple tasks to NLP tasks requires in-depth research and innovation in data processing and RL methods to achieve general cross-task reasoning ability.
4. Improving inference efficiency: Raising inference efficiency while preserving reasoning performance is another important challenge, since efficiency directly affects a model's usability and cost in real applications. Knowledge distillation can transfer the capabilities of a complex model to a smaller one to reduce compute consumption. Another promising direction is using Long Chain-of-Thought (Long-CoT) techniques to improve Short-CoT models, boosting inference speed while maintaining reasoning quality.
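Challenge 1 above concerns reward design. A common starting point in recent RL-for-reasoning work, including the recipe described in the DeepSeek R1 paper, is a rule-based reward that combines a format check with answer correctness. The tag convention (`<think>…</think>`, `Answer:`) and the 0.2/1.0 weights below are illustrative assumptions, not the published values.

```python
import re

def rule_based_reward(response, reference_answer):
    # Format reward: the response must wrap its reasoning in <think> tags
    # and state a final answer after "Answer:".
    fmt_ok = (
        bool(re.search(r"<think>.*</think>", response, re.DOTALL))
        and "Answer:" in response
    )
    format_reward = 0.2 if fmt_ok else 0.0

    # Correctness reward: exact match on the extracted final answer.
    m = re.search(r"Answer:\s*(.+)", response)
    answer = m.group(1).strip() if m else ""
    correctness_reward = 1.0 if answer == reference_answer else 0.0

    return format_reward + correctness_reward
```

Because the reward is computed by rules rather than a learned model, it sidesteps reward hacking of a neural reward model, which is one reason this style of signal is attractive for the stability problem named in challenge 2.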
Team Introduction: Research & Development (R&D) Team
The R&D team is dedicated to building and maintaining industry-leading products that drive the success of global business. By joining us, you'll work on core scenarios such as user growth, social features, live streaming, e-commerce consumer side, content creation, and content consumption, helping our products scale rapidly across global markets. You'll also face deep technical challenges in areas like service architecture and infrastructure engineering, ensuring our systems operate with high quality, efficiency, and security. Meanwhile, the team provides comprehensive technical solutions across diverse business needs, continuously optimizing product metrics and improving user experience.

Research Project Introduction:
As the world's leading short-video platform, TikTok faces multiple challenges in its recommendation systems, including data sparsity for new users leading to insufficient personalisation, high timeliness requirements for live-streaming recommendations, difficulty in maintaining user interest diversity, and complex e-commerce recommendation chains. Traditional recommendation methods rely heavily on historical behaviour modeling, which struggles with the cold-start problem for new users. Live-streaming recommendations demand real-time responsiveness to rapidly changing content dynamics (e.g., host interactions, traffic fluctuations) within extremely short time windows (typically under 30 minutes), posing higher demands on the system's real-time perception and decision-making capabilities. Additionally, the immersive single-feed format amplifies the challenge of maintaining content diversity, requiring a careful balance between multi-interest learning and the risk of content drift caused by exploratory recommendations.
The current e-commerce recommendation system follows a multi-stage funnel architecture (recall, ranking, re-ranking), which often leads to inconsistent chains, high maintenance costs, and an overreliance on short-term value prediction, leaving users prone to content-homogenization fatigue. To address these pain points, this project proposes leveraging large language models (LLMs) and large-model technologies to achieve significant breakthroughs. On one hand, LLMs, with their vast knowledge base and few-shot reasoning capabilities, can infer new users' potential intentions from registration data and external knowledge, thereby alleviating cold-start issues. On the other hand, by integrating graph neural networks (GNNs) and full-lifecycle user behavior sequences to model social preferences, we aim to improve the accuracy of interest prediction. Additionally, the project explores the generalization capabilities, long-context awareness, and end-to-end modeling strengths of large models to simplify the e-commerce recommendation chain, enhance adaptability to real-time changes, and improve exploratory recommendation effectiveness. The ultimate goal is to build a more streamlined system with more accurate recommendations, enhancing user experience and retention while driving sustainable business growth.
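The cold-start idea above, leaning on LLM-inferred intent when a user has no behavioral signal and on behavior modeling once interactions accumulate, can be expressed as a simple confidence-weighted blend. This is a sketch under stated assumptions: the half-life constant `k` and the score scales are arbitrary placeholders, and real systems would learn this weighting rather than hard-code it.

```python
def blend_interest_score(llm_prior, behavior_score, n_interactions, k=20.0):
    # Cold-start blending: with zero interactions the LLM-inferred prior
    # dominates entirely; as the user accumulates behavior, weight shifts
    # smoothly toward the behavioral (e.g. GNN / sequence-model) score.
    w = n_interactions / (n_interactions + k)
    return (1.0 - w) * llm_prior + w * behavior_score
```

For a brand-new user (`n_interactions == 0`) the function returns the LLM prior unchanged; after `k` interactions the two sources are weighted equally, and the behavioral score dominates asymptotically.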
Team Introduction:
TikTok is an international short-video platform covering 150 countries and regions; through TikTok, we hope people can discover authentic, interesting moments that make life better. TikTok has offices around the world, with global headquarters in Los Angeles and Singapore and further offices in New York, London, Dublin, Paris, Berlin, Dubai, Jakarta, Seoul, Tokyo, and other cities. The TikTok R&D team is responsible for R&D across TikTok's business, building and maintaining industry-leading products. Here, experienced technical leads explore the frontier together with the team and push the limits of imagination; every line of code you write serves hundreds of millions of users; and the team is professional and focused, with an equal and relaxed collaborative atmosphere. Multiple positions are currently open in Beijing, Shanghai, Hangzhou, Guangzhou, and Shenzhen.