The 🧋PEARLS🧋 (Pragmatically Exploring Agents with Reinforced LanguageS) Lab at UCSD CSE is motivated by a central question: how can we imbue learning agents with the ability to understand and generate contextually relevant natural language, in service of more effective Human-AI collaboration? We are also united by a shared passion for high-quality 🧋boba (or pearl) tea🧋.
Our work lies primarily at the intersection of Machine Learning, especially Reinforcement Learning, and Natural Language Processing, while drawing inspiration from Cognitive Science. In humans, communication is grounded in experience and real-world context: “what” we say or do depends on the context around us, “why” we say or do something draws on worldly knowledge (e.g., commonsense) gained through experience, and “how” we execute an action is shaped by a tight interactive feedback loop with other humans and our environment.
The core theme of our research is creating trustworthy and responsible language-based AI agents that can: (1) interactively gather and align with human preferences via feedback to shape “how” a task is executed, and (2) build and use neurosymbolic world models that tell an agent “what” action to take in the current context and “why” to take it, given the dynamics of a grounded environment.