Stochastic Controls, Games, and Learning I
3.0 credits
This is a year-long PhD-level course that explores fundamental and advanced topics in stochastic control, reinforcement learning (RL), and game theory, bridging classical control theory with modern data-driven approaches. Part I covers classical stochastic control methods, including the dynamic programming principle, Hamilton-Jacobi-Bellman (HJB) equations, and the maximum principle via backward stochastic differential equations (BSDEs). It introduces reinforcement learning through Markov decision processes (MDPs), online and offline learning, bandit problems, and RL applications in continuous-time stochastic control. Part II extends these ideas to stochastic differential games, covering both non-cooperative (Nash equilibrium) and cooperative (Pareto optimal) settings, as well as mean-field games and mean-field control. It also discusses advanced topics in reinforcement learning and generative models, including score-based diffusion models, RL-based fine-tuning, multi-agent systems, and transfer learning. Part I is not a prerequisite for Part II, but it is strongly recommended.