
Internship Details
Google DeepMind
At DeepMind, I joined the People + AI Research (PAIR) team to develop and implement tools that help product teams build more trustworthy and reliable AI systems. I focused on improving the behavior of large language models in conversational settings—making sure AI communicates clearly, handles errors gracefully, and maintains user trust—while collaborating closely with research scientists and engineers.
Student Researcher
5 months
Cambridge, MA
Framework for Responsible Conversational AI: Six Core Principles

User Needs + Defining Success

Data + Model Evolution

Mental Models + Expectations

Explainability + Trust

Feedback + Controls

Errors + Graceful Failures

I contributed a suite of technical patterns and evaluation tooling to the PAIR Guidebook—aligned with its six core principles—including configurable prompt scaffolds, feedback loops for RAG pipelines, and fallback mechanisms for error handling.
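One of the patterns mentioned above, a fallback mechanism for graceful error handling, might be sketched as follows. This is an illustrative assumption, not the actual PAIR tooling: the function names (`answer_with_fallback`, `retrieve`, `generate`) and structure are hypothetical.

```python
# Hypothetical sketch of a fallback mechanism for a RAG-style
# conversational pipeline. If retrieval or generation fails, the user
# sees a clear, trust-preserving message instead of a raw error.

def answer_with_fallback(query, retrieve, generate, fallback_message):
    """Try retrieval-augmented generation; degrade gracefully on failure."""
    try:
        context = retrieve(query)       # may raise, e.g. if the index is down
        return generate(query, context)
    except Exception:
        # Graceful failure: acknowledge the problem rather than crash.
        return fallback_message
```

For example, calling `answer_with_fallback` with a retriever that raises an exception returns the fallback message, while a healthy pipeline returns the generated answer unchanged.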
rh692@cornell.edu
©Gloria Hu 2025