“Rational Thoughts in Neural Codes”
Hosted by the Department of Physics
Complex behaviors are often driven by an internal model, which integrates sensory information over time and facilitates long-term planning to reach subjective goals. We interpret behavioral data by assuming an agent behaves rationally: that is, it takes actions that optimize its subjective reward according to its understanding of the task and its relevant causal variables. We apply a new method, Inverse Rational Control (IRC), to learn an agent's internal model and reward function by maximizing the likelihood of its measured sensory observations and actions. Technically, we model an animal's strategy as the solution to a Partially Observable Markov Decision Process (POMDP), and we invert this model to find the task and subjective costs with maximum likelihood. This generalizes both Inverse Reinforcement Learning and Inverse Optimal Control. Our mathematical formulation thereby extracts rational and interpretable thoughts of the agent from its behavior.
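To give a flavor of the likelihood-maximization idea at the heart of IRC, the toy sketch below recovers a single subjective-reward parameter from observed binary actions under a softmax policy. This is a drastic simplification, not the speakers' implementation: the POMDP belief dynamics and sensory observations are omitted, and every name in the code is hypothetical.

```python
import math

def policy(theta, action):
    """Softmax probability of choosing `action` in {0, 1}, where the
    agent's subjective reward for action 1 is theta (action 0 has
    reward 0). A toy stand-in for a full POMDP policy."""
    p1 = 1.0 / (1.0 + math.exp(-theta))
    return p1 if action == 1 else 1.0 - p1

def log_likelihood(theta, actions):
    """Log-likelihood of a sequence of observed actions given theta."""
    return sum(math.log(policy(theta, a)) for a in actions)

def fit_theta(actions, grid=None):
    """Maximum-likelihood estimate of theta by simple grid search --
    the 'inversion' step: find the subjective reward that best
    explains the behavior."""
    if grid is None:
        grid = [i / 100.0 for i in range(-300, 301)]
    return max(grid, key=lambda t: log_likelihood(t, actions))

# Observed actions from an agent that prefers action 1:
observed = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]
theta_hat = fit_theta(observed)
print(f"estimated subjective reward: {theta_hat:.2f}")
```

With 8 of 10 choices favoring action 1, the estimate lands near logit(0.8) ≈ 1.39, the value whose softmax policy best explains the data. The actual IRC method replaces this scalar with the full parameters of the agent's assumed task model and cost function.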
For inquiries contact firstname.lastname@example.org.
Please note that for all in-person events, attendees must adhere to Washington University’s public health requirements, including the latest events and meetings protocol. Guests will be required to show a successful self-screening result and wear a mask at all times.