
Software Engineer Graduate (Data Arch - Data Ecosystem) - 2026 (PhD)
About this role
About the team:
The TikTok Data Ecosystem Team designs and implements the offline data storage solution for TikTok's recommendation system, which serves more than a billion users. The team's primary objectives are system reliability, uninterrupted service, and seamless performance. It builds storage and computing infrastructure that adapts to the diverse data sources and storage needs of the recommendation system, with the ultimate goal of delivering efficient, affordable data storage and easy-to-use data management tools for recommendation, search, and advertising.
We are looking for talented individuals to join our team in 2026. As a graduate, you will have unparalleled opportunities to kickstart your career, pursue bold ideas, and explore limitless growth. Co-create a future driven by your inspiration with TikTok.
Successful candidates must be able to commit to an onboarding date by the end of 2026.
Responsibilities:
- Design and implement real-time and offline data architecture for large-scale recommendation systems.
- Build scalable and high-performance streaming Lakehouse systems that power feature pipelines, model training, and real-time inference.
- Collaborate with ML platform teams to support PyTorch-based model training workflows and design efficient data formats and access patterns for large-scale samples and features.
- Own core components of our distributed storage and processing stack, from file format to stream compaction to metadata management.
Minimum Qualifications:
- PhD or Master’s degree in Computer Science or related technical field.
- Experience building large-scale distributed systems, preferably in storage, stream processing, or ML infrastructure.
- Solid understanding of Apache Flink internals, with hands-on experience in state management, connectors, or UDFs.
- Familiarity with modern Lakehouse technologies such as Apache Paimon, Iceberg, Delta Lake, or Hudi, especially around incremental ingestion, schema evolution, and snapshot isolation.
Preferred Qualifications:
- Experience in designing and optimizing Flink + Paimon architectures for unified batch/stream processing.
- Familiarity with feature storage and training data pipelines, and their integration with PyTorch, especially for large-scale model training.
- Knowledge of columnar file formats (Parquet, ORC, Lance) and how they are used in feature engineering or ML data loading.
- Proficiency in Java/Scala/C++, and strong debugging/performance tuning ability.
- Previous experience in Lakehouse metadata management, compaction scheduling, or data versioning is a plus.
- Knowledge of legacy data stores such as HBase or Kudu is a bonus but not required.