Shanyin Tong, Columbia University
Mean-field games (MFGs) model non-cooperative games among large populations of agents and are widely applied in areas such as traffic flow, finance, and epidemic control. Inverse mean-field games address the challenge of inferring environmental factors from observed agent behavior. The coupled forward-backward structure of the MFG equations makes the forward problem difficult to solve and renders the corresponding inverse problems even more challenging. In this talk, I will introduce a policy iteration method for solving inverse MFGs. This method simplifies the problem by decoupling it into a sequence of linear PDE solves and linear inverse problems, leading to significant computational efficiency. The approach is flexible, accommodating a variety of numerical methods and machine learning tools. I will also present theoretical results that guarantee the convergence of the proposed method, along with numerical examples demonstrating its accuracy and efficiency.