Tencent
Bellevue, Washington, US

Research Internship- Multimodal LLM (Speech/Music/Audio/Vision/Language)

Onsite · $80,169 – $120,000/yr · Posted Jan 21, 2026 · LinkedIn


About this role

About Tencent AI Lab in the Seattle Area

Tencent is a leading internet company in China. Tencent AI Lab's Seattle-area site was established in May 2017. The lab strives to continuously improve AI's capabilities in perception, cognition, and creativity. Its researchers aim to solve challenging real-world problems with advanced technologies and publish extensively at top conferences and in leading journals.

Research Internship: Multimodal LLM (Speech/Music/Audio/Vision/Language)

Tencent AI Lab is dedicated to advancing cutting-edge AI technologies, with a particular focus on innovative breakthroughs in large foundation models. The lab's long-term ambition is to drive the development of Artificial General Intelligence (AGI) and, ultimately, Artificial Superintelligence (ASI). We are seeking research interns who are interested in developing novel speech, music, audio, vision, and language processing techniques and large multimodal models at our Seattle-area office in Bellevue, WA, for the year 2026.

Each research intern will work with researchers on a project aimed at tackling one of our core problems by inventing cutting-edge techniques. We encourage discussion and collaboration between researchers and interns, and interns are also encouraged to publish the results of their internship. Our projects span a wide range of areas, including developing more effective multimodal pretraining and post-training strategies for audio, speech, music, image, and video understanding and generation. We aim to enable fully duplex conversations, design more efficient large-model architectures, enhance multimodal memory and reasoning capabilities, and advance novel audio, speech, music, image, and video processing techniques—such as encoding, tokenization, and representation learning—with a focus on multimodal applications and end-to-end large models.