
Software Engineer Graduate (AI Infra Compute) - 2026 Start (PhD)
About this role
About the team:
The Compute division builds large-scale, highly available cloud infrastructure that supports both public cloud products (such as the Volc Engine ECS service) and internal products. The Compute-US team focuses on research and development of the compute infrastructure platform.
We are looking for talented individuals to join our team in 2026. As a graduate, you will have opportunities to pursue bold ideas, tackle complex challenges, and unlock limitless growth. Launch your career where inspiration is infinite at ByteDance.
Successful candidates must be able to commit to an onboarding date by the end of 2026. Please state your availability and expected graduation date clearly in your resume.
Responsibilities
- Develop key technologies to optimize our AI Infra stack, including training infra, inference infra, and AI agents.
- Work with academia and open source communities on joint development.
- Follow the latest technologies from academia or industry and conduct deep-dive analysis.
- Present our research and products in academic papers.
Qualifications
Minimum Qualifications
- PhD in Computer Science, Computer Engineering, or a related technical discipline.
- Experience in at least one of the following areas:
  - LLM training infra, including optimizations for various post-training workloads such as RL training, knowledge distillation, etc.
  - LLM inference infra, including inference engine performance improvements, more efficient execution parallelism, GPU kernel optimizations, etc.
  - AI agent infra, including computer-use agents, coding agents, agent memory, agent sandboxes, etc.
- Commitment to proactive, continuous learning, enthusiasm for AI technologies, and a strong ability to quickly grasp and apply new technologies.
- Good communication and teamwork skills.
Preferred Qualifications
- Frequent contributor to, or maintainer of, AI infra-related open source projects such as vLLM or SGLang.
- Publications at top-tier CS conferences such as OSDI or MLSys.