Mountain View, CA, USA

2026 PhD Residency, Machine Learning Explainability (Tapestry)

Onsite · $109,000 – $150,000/yr · Posted 1 week ago

About the role

As a PhD Resident in Machine Learning Explainability, you will join Tapestry’s six-month PhD Residency Program to research how modern AI techniques—particularly large language models (LLMs) and graph-based models—can explain complex decision-making systems used in electric grid operations.

This role focuses on improving trust, transparency, and usability of highly constrained optimization engines such as economic dispatch, unit commitment, and long-term planning tools. Your work will explore how complex model outputs, constraints, and system behaviors can be translated into clear, human-understandable explanations for both expert and non-expert users.

How you will make 10X Impact

  • Research and prototype explainability approaches for complex optimization and decision-making systems in the electric grid.
  • Apply LLMs to generate natural-language explanations of model outputs, constraints, and tradeoffs.
  • Explore reasoning over graph-structured data (e.g., power grids) to produce grounded, faithful explanations.
  • Investigate methods to detect and explain incorrect or anomalous network model inputs in natural language.
  • Translate academic research into practical feasibility demonstrations using real or simulated grid data.
  • Collaborate with machine learning researchers and power systems experts to refine approaches and evaluation methods.
  • Clearly document findings and communicate insights to inform future research and product directions.

What you should have:

  • Currently enrolled in a PhD program in Machine Learning, Computer Science, Electrical Engineering, or a related field.
  • Strong research experience in machine learning or deep learning.
  • Hands-on experience with LLMs, transformer-based models, or graph neural networks.
  • Strong programming skills in Python and experience with frameworks such as PyTorch or JAX.
  • Ability to reason about complex systems and communicate technical concepts clearly.
  • Interest in applying AI to high-impact, real-world infrastructure challenges.

It’d be great if you also had one or more of these:

  • Experience with optimization, convex optimization, or decision-making systems.
  • Familiarity with power systems, energy modeling, or networked physical systems.
  • Prior work in explainability, interpretability, or human-centered AI.
  • Publications in ML or AI venues (e.g., NeurIPS, ICML, ICLR).
  • Experience working on applied research projects in industry or startup environments.