Research Intern – AI Incubation
About this role
Zoom’s AI Incubation team is looking for a PhD Research Intern to dive into the next wave of LLM innovation. You will work alongside a world-class group of PhDs and applied scientists, contributing to high-impact projects in post-training, reinforcement learning (RL), and federated AI.
As an intern, you won't just be watching from the sidelines. You will own a specific research project, conduct experiments at scale, and help develop the breakthroughs that power the next generation of the Zoom AI Companion. This is a unique opportunity to see how frontier AI research is translated into a product used by millions.
About the team
The AI Incubation team is a high-impact applied research group known for building one of the industry's best-performing federated AI systems. We operate at the frontier of:
- Agentic Intelligence: Moving beyond chat to models that "do."
- Federated AI: Privacy-preserving, edge-to-cloud learning across diverse model ecosystems (Anthropic, OpenAI, Google).
- Advanced Alignment: Pushing RLHF, DPO, and RLAIF to new heights of reasoning and reliability.
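For candidates new to the federated side of this work, the core idea behind classic federated learning is weighted aggregation of client model updates (FedAvg). The sketch below is a minimal illustration of that aggregation step only, not a description of Zoom's actual system; the function name and arguments are hypothetical.

```python
import torch

def fedavg(client_state_dicts, client_weights):
    """Weighted federated averaging (FedAvg-style aggregation).

    client_state_dicts: list of model state_dicts returned by clients
    client_weights: relative weights per client, e.g. local dataset sizes
    Returns a new state_dict whose parameters are the weighted mean
    of the client parameters.
    """
    total = sum(client_weights)
    averaged = {}
    for key in client_state_dicts[0]:
        averaged[key] = sum(
            w * sd[key] for sd, w in zip(client_state_dicts, client_weights)
        ) / total
    return averaged
```

In a real deployment this step sits inside a round loop with client selection, secure aggregation, and privacy accounting; the point here is only the weighted-mean aggregation itself.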
Responsibilities
- Execute Research Projects: Design and implement experiments in areas like LLM fine-tuning, preference optimization (DPO/PPO), or distributed federated learning.
- Prototype & Evaluate: Build and benchmark new model architectures or training recipes to improve reasoning, personalization, and safety.
- Collaborate: Work closely with senior scientists to refine research hypotheses and troubleshoot large-scale training runs.
- Document & Present: Synthesize your findings into internal reports or potential publications, presenting your work to the broader AI organization.
- Stay Curious: Keep pace with the latest arXiv preprints and open-source developments to ensure our methods remain state-of-the-art.
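To make the preference-optimization bullet concrete: Direct Preference Optimization (DPO) trains a policy directly on chosen/rejected response pairs, using a frozen reference model for regularization. The snippet below is a minimal sketch of the published DPO loss in PyTorch, written for illustration; it assumes you have already computed per-sequence log-probabilities and is not the team's internal training code.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """DPO loss over per-sequence log-probabilities.

    Each argument is a 1-D tensor (one entry per preference pair).
    beta scales the implicit rewards, controlling how far the policy
    may drift from the frozen reference model.
    """
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Push the implicit reward of the chosen response above the rejected one.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```

Widening the margin between chosen and rejected log-probabilities drives the loss toward zero, which is exactly the preference signal an intern would be experimenting with in this area.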
What we’re looking for
- Academic Background: Currently enrolled in a PhD program in Computer Science, ML, AI, or a related quantitative field.
- Technical Proficiency: Strong coding skills in PyTorch. Experience with libraries like Hugging Face Transformers, DeepSpeed, or FlashAttention is a major plus.
- Research Focus: Familiarity with at least one of the following: LLM post-training (SFT/RLHF), Federated Learning, Multimodal models, or Agentic workflows.
- Problem Solver: A track record of tackling open-ended research problems, evidenced by publications (NeurIPS, ICML, ICLR, etc.), high-quality open-source contributions, or advanced course projects.
- Communication: Ability to explain complex technical concepts clearly and collaborate in a fast-paced, iterative environment.