Bosch Group
Sunnyvale, CA, USA
Vision-Language-Action Models - Intern
About this role
Job Description
- Conduct advanced research on LLMs/VLMs for autonomous driving.
- Design and implement supervised and reinforcement fine-tuning algorithms to optimize LLMs/VLMs for autonomous driving tasks.
- Collaborate with mentors and team members to refine research goals, discuss technical challenges, and explore extensions such as closed-loop fine-tuning and RL integration.
- Regularly report research progress through meetings, written updates, and technical presentations.
- Analyze experimental results, document methodologies, and summarize findings in clear and reproducible formats.
- Contribute to the preparation of research papers, technical reports, or potential submissions to top conferences.
Qualifications
Basic Qualifications
- Ph.D. student in Computer Science, Robotics, or a related field.
- Hands-on experience developing algorithms in at least two of the following areas: multimodal foundation models, 3D scene understanding, autonomous driving, reinforcement learning, and robotic navigation or planning.
- Solid Python skills and proficiency with libraries such as PyTorch.
- Minimum GPA of 3.0.
Preferred Qualifications
- Publication record in top venues such as CVPR, ICCV, ECCV, and ICLR.
- Familiarity with CARLA or NavSim.
- Ability to work independently, with strong research and problem-solving skills.
- Good communication and teamwork skills.