
Lecture 3: Model-free Prediction and Control

Model-Free RL:

Model-free: solve an unknown MDP

In many real-world problems, the MDP model is either unknown, or known but too big or too complex to use

  • Atari games, the game of Go, helicopter control, portfolio management, etc.

Model-free RL can solve these problems through direct interaction with the environment, without access to the transition model.

How to do policy evaluation:

  • Monte Carlo (MC) policy evaluation (sampling-based)
  • Temporal Difference (TD) learning

Monte Carlo policy evaluation

To evaluate V^π(s), MC uses the empirical mean return. Generate episodes by following π; every time step t at which state s is visited:

  • increment the counter: N(s) ← N(s) + 1
  • accumulate the return: S(s) ← S(s) + G_t
  • estimate the value: V(s) = S(s) / N(s)

where G_t = R_{t+1} + γ R_{t+2} + γ^2 R_{t+3} + … is the discounted return from time t. By the law of large numbers, V(s) → V^π(s) as N(s) → ∞.

A more convenient alternative is the incremental MC update: after each episode, for every visited state s_t with return G_t,

V(s_t) ← V(s_t) + α (G_t − V(s_t))

where α is a step size; choosing α = 1 / N(s_t) recovers the running mean above.
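
Below is a minimal Python sketch of incremental every-visit MC evaluation, assuming a hypothetical episodic environment with env.reset() -> state and env.step(action) -> (next_state, reward, done), and a callable policy(state) -> action; the lecture does not fix this interface.

```python
from collections import defaultdict

def mc_policy_evaluation(env, policy, num_episodes=1000, gamma=0.99, alpha=0.1):
    """Every-visit incremental Monte Carlo evaluation of V^pi.

    The env/policy interface is a simplifying assumption, not from the lecture.
    """
    V = defaultdict(float)  # V(s), defaults to 0 for unseen states
    for _ in range(num_episodes):
        # Roll out one complete episode under the policy.
        trajectory = []
        state, done = env.reset(), False
        while not done:
            action = policy(state)
            next_state, reward, done = env.step(action)
            trajectory.append((state, reward))
            state = next_state
        # Walk backwards so each return G_t is computed in O(1).
        G = 0.0
        for state, reward in reversed(trajectory):
            G = reward + gamma * G
            V[state] += alpha * (G - V[state])  # V <- V + alpha (G - V)
    return V
```

Note that MC only updates after an episode terminates, which is why it requires episodic (terminating) environments.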

MC & DP

MC is model-free: it learns from sampled episodes alone. DP requires full knowledge of the MDP (transition probabilities and rewards).

MC can also be cheaper than DP when only some states matter: estimating V(s) for one state from samples has a cost independent of the size of the state space, whereas DP sweeps over all states.

Temporal Difference (TD) learning

TD learns from incomplete episodes by bootstrapping: it updates a value estimate toward another estimate rather than waiting for the final return. The simplest version, TD(0), updates after every step:

V(s_t) ← V(s_t) + α (R_{t+1} + γ V(s_{t+1}) − V(s_t))

where R_{t+1} + γ V(s_{t+1}) is the TD target and δ_t = R_{t+1} + γ V(s_{t+1}) − V(s_t) is the TD error.
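
A matching Python sketch of TD(0) evaluation, under the same hypothetical env/policy interface as the MC sketch above; the only structural change is that the update happens inside the step loop:

```python
from collections import defaultdict

def td0_policy_evaluation(env, policy, num_episodes=1000, gamma=0.99, alpha=0.1):
    """TD(0) evaluation of V^pi: bootstrap from V(s') at every step.

    The env/policy interface is a simplifying assumption, not from the lecture.
    """
    V = defaultdict(float)
    for _ in range(num_episodes):
        state, done = env.reset(), False
        while not done:
            action = policy(state)
            next_state, reward, done = env.step(action)
            # Terminal states contribute no bootstrapped value.
            td_target = reward + gamma * V[next_state] * (not done)
            V[state] += alpha * (td_target - V[state])
            state = next_state
    return V
```

Unlike MC, TD can learn online from incomplete episodes and from continuing (non-terminating) environments.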

Differences between DP, MC, and TD

DP

DP backup: full one-step lookahead over all possible successors (requires the model):

V(s_t) ← E_π[R_{t+1} + γ V(s_{t+1})]

MC

MC backup: sample one complete episode and update toward the actual return:

V(s_t) ← V(s_t) + α (G_t − V(s_t))

TD

TD backup: sample a single step and update toward the bootstrapped target:

V(s_t) ← V(s_t) + α (R_{t+1} + γ V(s_{t+1}) − V(s_t))

Overall comparison

  • DP: bootstraps but does not sample (full-width backup over all successors)
  • MC: samples but does not bootstrap (uses complete returns)
  • TD: both samples and bootstraps (one sampled step, bootstrapped target)

Model-free Control for MDPs

ε-Greedy: ensures exploration. With probability 1 − ε act greedily, with probability ε pick an action uniformly at random. For a finite action set A:

π(a|s) = 1 − ε + ε/|A|   if a = argmax_{a'} Q(s, a')
π(a|s) = ε/|A|           otherwise
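
A minimal sketch of ε-greedy action selection; storing Q as a dict keyed by (state, action) pairs is an assumption for illustration:

```python
import random

def epsilon_greedy(Q, state, actions, epsilon=0.1):
    """Pick an action epsilon-greedily with respect to Q.

    Q is assumed to be a dict keyed by (state, action); `actions` is the
    finite action list. Both are illustrative choices, not from the lecture.
    """
    if random.random() < epsilon:
        return random.choice(actions)                 # explore
    return max(actions, key=lambda a: Q[(state, a)])  # exploit
```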

Monte Carlo Control with ε-Greedy Exploration

Sample an episode using the current ε-greedy policy π. For each state-action pair (s_t, a_t) in the episode:

N(s_t, a_t) ← N(s_t, a_t) + 1
Q(s_t, a_t) ← Q(s_t, a_t) + (G_t − Q(s_t, a_t)) / N(s_t, a_t)

Then improve the policy: π ← ε-greedy(Q).
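
Putting evaluation and improvement together, a self-contained sketch of on-policy MC control (same hypothetical env interface as earlier; `actions` is the finite action list):

```python
import random
from collections import defaultdict

def mc_control(env, actions, num_episodes=5000, gamma=0.99, epsilon=0.1):
    """On-policy Monte Carlo control with epsilon-greedy exploration.

    The env.reset()/env.step() interface is a simplifying assumption.
    """
    Q = defaultdict(float)   # Q(s, a)
    N = defaultdict(int)     # visit counts for running-mean updates
    for _ in range(num_episodes):
        # Generate an episode with the current epsilon-greedy policy.
        episode = []
        state, done = env.reset(), False
        while not done:
            if random.random() < epsilon:
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: Q[(state, a)])
            next_state, reward, done = env.step(action)
            episode.append((state, action, reward))
            state = next_state
        # Policy evaluation: running-mean update of Q toward sampled returns.
        G = 0.0
        for state, action, reward in reversed(episode):
            G = reward + gamma * G
            N[(state, action)] += 1
            Q[(state, action)] += (G - Q[(state, action)]) / N[(state, action)]
        # Policy improvement is implicit: the next episode is generated
        # epsilon-greedily with respect to the updated Q.
    return Q
```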

On-policy learning & Off-policy learning

On-policy: explore and learn with one and the same policy; the policy that generates the data is the policy being evaluated and improved. Off-policy: learn about a target policy (e.g., the greedy policy) while following a different behavior policy to collect data.

Sarsa is the canonical on-policy TD control algorithm; its update uses the action a_{t+1} actually chosen by the current (ε-greedy) policy:

Q(s_t, a_t) ← Q(s_t, a_t) + α (R_{t+1} + γ Q(s_{t+1}, a_{t+1}) − Q(s_t, a_t))
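
A sketch of Sarsa under the same hypothetical interface; note that the bootstrap uses the next action sampled from the behavior policy itself:

```python
import random
from collections import defaultdict

def sarsa(env, actions, num_episodes=5000, gamma=0.99, alpha=0.1, epsilon=0.1):
    """Sarsa: on-policy TD control (hypothetical env/actions interface)."""
    Q = defaultdict(float)

    def pick(state):
        # epsilon-greedy behavior policy
        if random.random() < epsilon:
            return random.choice(actions)
        return max(actions, key=lambda a: Q[(state, a)])

    for _ in range(num_episodes):
        state, done = env.reset(), False
        action = pick(state)
        while not done:
            next_state, reward, done = env.step(action)
            next_action = pick(next_state)  # sampled from the SAME policy
            target = reward + gamma * Q[(next_state, next_action)] * (not done)
            Q[(state, action)] += alpha * (target - Q[(state, action)])
            state, action = next_state, next_action
    return Q
```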

Q-learning

Q-learning is the canonical off-policy TD control algorithm: the behavior policy can be ε-greedy (or any sufficiently exploratory policy), but the TD target bootstraps from the greedy action:

Q(s_t, a_t) ← Q(s_t, a_t) + α (R_{t+1} + γ max_a Q(s_{t+1}, a) − Q(s_t, a_t))
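
A sketch of Q-learning under the same hypothetical interface; compared with the Sarsa sketch above, only the target line changes, taking the max over next actions instead of the action the behavior policy will actually take:

```python
import random
from collections import defaultdict

def q_learning(env, actions, num_episodes=5000, gamma=0.99, alpha=0.1, epsilon=0.1):
    """Q-learning: off-policy TD control (hypothetical env/actions interface)."""
    Q = defaultdict(float)
    for _ in range(num_episodes):
        state, done = env.reset(), False
        while not done:
            # Behavior policy: epsilon-greedy exploration.
            if random.random() < epsilon:
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: Q[(state, a)])
            next_state, reward, done = env.step(action)
            # Target policy: greedy -- bootstrap from the best next action.
            best_next = max(Q[(next_state, a)] for a in actions)
            target = reward + gamma * best_next * (not done)
            Q[(state, action)] += alpha * (target - Q[(state, action)])
            state = next_state
    return Q
```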