General Motors
Sunnyvale, California, United States of America; San Francisco, California, United States of America; Mountain View, California, United States of America

2026 Summer Intern – AI/ML Intern – Vision Language Model/Action (PhD)

Hybrid · $13,100/mo · Posted Dec 19, 2025

About this role

Work Arrangement:

Hybrid: This internship is hybrid; the selected intern is expected to report to the office up to three times per week, or as determined by the team.

About the Team:

The AI Research organization is dedicated to advancing the state-of-the-art in AI for autonomous vehicles. We are a collaborative, forward-thinking group of researchers and engineers tackling some of the most complex challenges in autonomy and machine learning.

About the Role:

As a VLM/VLA Research Intern on the AI Research team, you will work at the frontier of embodied AI, developing foundation models that bridge high-level reasoning and physical execution. Your work will focus on advancing vision-language-action (VLA) architectures to solve critical challenges in data mining and end-to-end autonomous driving. This role offers a unique opportunity to work on real-world AI/ML systems at scale, collaborating with and receiving mentorship from world-class researchers to shape the future of grounded foundation models in the autonomous vehicle industry.

What You’ll Do:

  • Drive the development of embodied foundation models and vision-language-action architectures that unify multimodal perception with robotic control.
  • Prototype and refine ML models that leverage VLA architectures to improve decision-making and reasoning for autonomous vehicles through imitation and reinforcement learning.
  • Utilize vision-language models and generative techniques (such as world models) to deepen model understanding of complex driving scenarios.
  • Partner with perception, robotics, and systems engineering teams to integrate VLA research into the broader autonomous stack and validate models in closed-loop environments.
  • Engage in high-level technical brainstorming, share insights across the AI Research org, and contribute to the academic community through top-tier conference publications.
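For candidates unfamiliar with the term, the "vision-language-action" pattern the bullets above describe can be illustrated with a toy forward pass: a vision encoder and a language encoder each produce an embedding, the embeddings are fused, and an action head maps the fused state to a continuous control output. Everything below (the embedding width, the random projections, the two-dimensional steer/accelerate action) is an illustrative stand-in, not GM's architecture.

```python
# Toy vision-language-action (VLA) sketch: perception + instruction -> action.
# All weights are fixed random projections standing in for learned encoders.
import numpy as np

rng = np.random.default_rng(0)
EMB = 16  # shared embedding width (assumed for illustration)

W_vis = rng.standard_normal((3, EMB)) * 0.1      # pooled RGB stats -> embedding
W_lang = rng.standard_normal((64, EMB)) * 0.1    # token hash bucket -> embedding
W_act = rng.standard_normal((2 * EMB, 2)) * 0.1  # fused state -> [steer, accel]

def encode_image(img: np.ndarray) -> np.ndarray:
    """Mean-pool pixels, then project the RGB statistics to the embedding."""
    return img.reshape(-1, 3).mean(axis=0) @ W_vis

def encode_text(tokens: list) -> np.ndarray:
    """Hash each token into a bucket embedding and average over the tokens."""
    ids = [hash(t) % 64 for t in tokens]
    return W_lang[ids].mean(axis=0)

def vla_policy(img: np.ndarray, instruction: str) -> np.ndarray:
    """Fuse the two modalities by concatenation and emit a bounded action."""
    fused = np.concatenate([encode_image(img), encode_text(instruction.split())])
    return np.tanh(fused @ W_act)  # each action component lies in (-1, 1)

frame = rng.random((32, 32, 3))  # dummy camera frame
action = vla_policy(frame, "yield to the pedestrian then merge left")
print(action.shape)  # (2,)
```

Real VLA systems replace each stand-in with a pretrained transformer backbone and train the whole stack end-to-end via imitation or reinforcement learning, as the responsibilities above describe; the data flow, however, is the same.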