Handshake
San Francisco, CA, USA

Handshake AI Research Intern, Summer 2026

Onsite · $12,000 – $15,000/mo · Posted Oct 3, 2025


About the Role

Handshake AI builds the data engines that power the next generation of large language models. Our research team works at the intersection of cutting-edge model post-training, rigorous evaluation, and data efficiency. Join us for a focused Summer 2026 internship where your work can ship directly into our production stack and become a publishable research contribution. The internship starts between May and June 2026.

Projects You Could Tackle

  • LLM Post-Training: Novel RLHF / GRPO pipelines, instruction-following refinements, reasoning-trace supervision.
  • LLM Evaluation: New multilingual, long-horizon, or domain-specific benchmarks; automatic vs. human preference studies; robustness diagnostics.
  • Data Efficiency: Active-learning loops, data value estimation, synthetic data generation, and low-resource fine-tuning strategies.

Each intern owns a scoped research project, mentored by a senior scientist, with the explicit goal of an arXiv-ready manuscript or top-tier conference submission.

Desired Capabilities

  • Current PhD student in CS, ML, NLP, or related field.
  • Publication track record at top venues (NeurIPS, ICML, ACL, EMNLP, ICLR, etc.).
  • Hands-on experience training and experimenting with LLMs (e.g., PyTorch, JAX, DeepSpeed, distributed training stacks).
  • Strong empirical rigor and a passion for open-ended AI questions.

Extra Credit

  • Prior work on RLHF, evaluation tooling, or data selection methods.
  • Contributions to open-source LLM frameworks.
  • Public speaking or teaching experience (we often host internal reading groups).