ByteDance
San Jose, CA, USA

Software Engineer Intern (ML System) - 2026 Start (PhD)

Onsite · $60/hr · Posted Aug 15, 2025


About this role

About the Team (AML-MLsys)

AML-MLsys combines systems engineering with machine learning to develop and maintain massively distributed ML training and inference systems and services around the world, providing high-performance, highly reliable, and scalable infrastructure for LLM/AIGC/AGI workloads.

On our team, you'll have the opportunity to build large-scale heterogeneous systems integrating GPU/NPU/RDMA/storage and keep them running stably and reliably, deepen your expertise in coding, performance analysis, and distributed systems, and take part in the decision-making process. You'll also be part of a global team with members in the United States, China, and Singapore working collaboratively toward a unified project direction.

We are looking for talented individuals to join us for an internship in 2026. Internships at ByteDance aim to offer students industry exposure and hands-on experience. Watch your ambitions become reality as your inspiration brings infinite opportunities at ByteDance.

PhD internships at ByteDance provide students with the opportunity to actively contribute to our products and research, and to the organization's future plans and emerging technologies. Our dynamic internship experience blends hands-on learning, enriching community-building and development events, and collaboration with industry experts.

Applications will be reviewed on a rolling basis, so we encourage you to apply early. Please state your availability clearly in your resume (start date and end date).

Candidates who pass resume screening will be invited to participate in ByteDance's technical online assessment.

Responsibilities

  • Participating in online architecture design and optimization centered around LLM inference tasks, achieving high concurrency and throughput in large-scale online systems.
  • Participating in the establishment of a comprehensive system covering stability, disaster recovery, R&D efficiency, and cost, enhancing overall system stability.
  • Participating in the design and implementation of end-to-end online pipeline systems with multiple models, plugins, and storage-computation components, enabling agile, flexible, and observable continuous delivery.
  • Collaborating closely with ML engineers to optimize algorithms and systems.
  • Being proactive, optimistic, and highly responsible, demonstrating a meticulous work ethic, and possessing strong team communication and collaboration skills.

Qualifications

Minimum Qualifications:

  • Excellent coding skills, strong understanding of data structures, and fundamental knowledge of algorithms. Proficiency in programming languages such as C/C++, Java, Go, Python, etc.
  • Solid experience with online architecture, with the ability to troubleshoot issues independently.
  • Strong sense of responsibility, quick learning ability, good communication skills, and self-motivation.

Preferred Qualifications:

  • Understanding of GPU hardware architecture, familiarity with GPU software stack (CUDA, cuDNN), and experience in GPU performance analysis.
  • Knowledge of LLMs; experience in accelerating and optimizing LLM inference is preferred.