Yang Chen: Mean-Field Games as a Framework for Many-Agent Inverse Reinforcement Learning

Speaker:

Yang Chen (University of Auckland)

Time:

  • 16:20-17:20 (Beijing time)
  • 21:20-22:20 (Auckland time)
  • June 17, 2022 (Friday)

Venue:

B1-518B, Research Building 4

Abstract:

Inverse reinforcement learning (IRL) automates reward design from demonstrated behaviours in settings where a reward function is not available to forward reinforcement learning agents. However, IRL becomes intractable with a large number of agents due to the curse of dimensionality. The recent formalism of mean-field games provides a mathematically tractable framework for modelling large-scale multi-agent systems by using a mean-field approximation to simplify the interactions among agents. In this talk, I will show how to tackle the problem of many-agent IRL using mean-field games as the framework. Specifically, I will introduce two IRL methods for mean-field games: the first is geometrically interpretable and lays the theoretical foundation for IRL in mean-field games; the second is based on probabilistic inference and can further reason about uncertainty in agent behaviours. I will close the talk by introducing applications of IRL for mean-field games.
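For readers unfamiliar with the setting, one common formalization (the notation below is assumed for illustration, not taken from the talk) models a finite mean-field game via a representative agent coupled to the population state distribution \mu_t:

\[
\pi^* \in \arg\max_{\pi} \; \mathbb{E}_{\pi,\mu}\!\left[\sum_{t=0}^{T} r(s_t, a_t, \mu_t)\right],
\qquad
\mu_{t+1}(s') = \sum_{s,a} \mu_t(s)\,\pi^*(a \mid s)\,P(s' \mid s, a, \mu_t).
\]

The first condition says each agent best-responds to the mean field; the second says the mean field is the population flow induced by that best response, and a mean-field equilibrium is a fixed point of the two. IRL in this setting then seeks a reward r(s, a, \mu) under which the demonstrated equilibrium behaviour is optimal.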
