Reinforcement Learning Reading Notes

Reinforcement Learning Reading Notes - 10 - On-policy Control with Approximation

Study notes on:
Reinforcement Learning: An Introduction, Richard S. Sutton and Andrew G. Barto © 2014, 2015, 2016

References

If you need the mathematical notation used in reinforcement learning, see here first:

On-policy Control with Approximation

Approximate control methods compute an approximate value \(\hat{q}(s, a, \theta)\) of a policy's action value \(q_{\pi}(s, a)\).
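As a concrete illustration (a minimal sketch, not from the book): with linear function approximation, \(\hat{q}(s, a, \theta) = \theta^\top x(s, a)\) and the gradient \(\nabla \hat{q}(s, a, \theta)\) is simply the feature vector \(x(s, a)\). The feature function below is a hypothetical placeholder:

```python
import numpy as np

def features(state, action, n_features=8, n_actions=2):
    """Hypothetical one-hot feature vector x(s, a), for illustration only."""
    x = np.zeros(n_features * n_actions)
    x[action * n_features + int(state) % n_features] = 1.0
    return x

def q_hat(state, action, theta):
    """Linear approximation: q_hat(s, a, theta) = theta^T x(s, a)."""
    return theta @ features(state, action)

def grad_q_hat(state, action, theta):
    """For a linear approximator, the gradient w.r.t. theta is just x(s, a)."""
    return features(state, action)
```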

Episodic Semi-gradient Sarsa for Control

Input: a differentiable function \(\hat{q} : \mathcal{S} \times \mathcal{A} \times \mathbb{R}^n \to \mathbb{R}\)

Initialize value-function weights \(\theta \in \mathbb{R}^n\) arbitrarily (e.g., \(\theta = 0\))
Repeat (for each episode):
  \(S, A \gets\) initial state and action of episode (e.g., \(\epsilon\)-greedy)
  Repeat (for each step of episode):
   Take action \(A\), observe \(R, S'\)
   If \(S'\) is terminal:
    \(\theta \gets \theta + \alpha [R - \hat{q}(S, A, \theta)] \nabla \hat{q}(S, A, \theta)\)
    Go to next episode
   Choose \(A'\) as a function of \(\hat{q}(S', \cdot, \theta)\) (e.g., \(\epsilon\)-greedy)
   \(\theta \gets \theta + \alpha [R + \gamma \hat{q}(S', A', \theta) - \hat{q}(S, A, \theta)] \nabla \hat{q}(S, A, \theta)\)
   \(S \gets S'\)
   \(A \gets A'\)
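The loop above can be written directly in Python. This is a minimal sketch under some assumptions: a hypothetical environment whose `env.reset()` returns a state and whose `env.step(a)` returns `(next_state, reward, done)`, plus the linear `q_hat`/`grad_q_hat` helpers sketched earlier; none of these names come from the book.

```python
import numpy as np

def epsilon_greedy(theta, state, n_actions, epsilon, rng):
    """Pick a random action with probability epsilon, otherwise the greedy one."""
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    return int(np.argmax([q_hat(state, a, theta) for a in range(n_actions)]))

def episodic_semi_gradient_sarsa(env, n_actions=2, alpha=0.1, gamma=1.0,
                                 epsilon=0.1, n_episodes=500, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.zeros_like(features(0, 0))        # theta = 0, same shape as x(s, a)
    for _ in range(n_episodes):
        state = env.reset()                      # S, A <- initial state and action
        action = epsilon_greedy(theta, state, n_actions, epsilon, rng)
        while True:
            next_state, reward, done = env.step(action)   # take A, observe R, S'
            if done:
                # terminal update: the target is just R
                delta = reward - q_hat(state, action, theta)
                theta += alpha * delta * grad_q_hat(state, action, theta)
                break                            # go to next episode
            next_action = epsilon_greedy(theta, next_state, n_actions, epsilon, rng)
            # semi-gradient Sarsa update with the bootstrapped target
            delta = (reward + gamma * q_hat(next_state, next_action, theta)
                     - q_hat(state, action, theta))
            theta += alpha * delta * grad_q_hat(state, action, theta)
            state, action = next_state, next_action
    return theta
```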

n-step Semi-gradient Sarsa for Control

Please refer to the original book; it is not repeated here.

Average Reward (for continuing tasks)

The discount rate (\(\gamma\), the discounting rate) causes some problems in the approximate setting (the book says the next chapter explains exactly what they are).
For continuing tasks, the average reward \(\eta(\pi)\) is therefore introduced:

\[\begin{align} \eta(\pi) & \doteq \lim_{T \to \infty} \frac{1}{T} \sum_{t=1}^{T} \mathbb{E}[R_t \mid A_{0:t-1} \sim \pi] \\ & = \lim_{t \to \infty} \mathbb{E}[R_t \mid A_{0:t-1} \sim \pi] \\ & = \sum_s d_{\pi}(s) \sum_a \pi(a|s) \sum_{s',r} p(s',r|s,a) \, r \end{align}\]
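In practice, \(\eta(\pi)\) can be approximated by running a fixed policy for a long time and keeping an incremental sample average of the rewards. A minimal sketch, again assuming the hypothetical `env`/`policy` interface used in the other sketches:

```python
def estimate_average_reward(env, policy, n_steps=100_000):
    """Monte Carlo estimate of eta(pi): the long-run average reward per step."""
    avg = 0.0
    state = env.reset()
    for t in range(1, n_steps + 1):
        state, reward, _ = env.step(policy(state))
        avg += (reward - avg) / t          # incremental sample mean
    return avg
```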

  • Differential return (= original reward minus the average reward)

\[G_t \doteq R_{t+1} - \eta(\pi) + R_{t+2} - \eta(\pi) + \cdots \]

  • Policy values

\[v_{\pi}(s) = \sum_{a} \pi(a|s) \sum_{r,s'} p(s',r|s,a)[r - \eta(\pi) + v_{\pi}(s')] \\ q_{\pi}(s,a) = \sum_{r,s'} p(s',r|s,a)[r - \eta(\pi) + \sum_{a'} \pi(a'|s') q_{\pi}(s',a')] \]

  • Optimal policy values

\[v_{*}(s) = \max_{a} \sum_{r,s'} p(s',r|s,a)[r - \eta(\pi) + v_{*}(s')] \\ q_{*}(s,a) = \sum_{r,s'} p(s',r|s,a)[r - \eta(\pi) + \max_{a'} q_{*}(s',a')] \]

  • TD errors (temporal-difference errors)

\[\delta_t \doteq R_{t+1} - \bar{R} + \hat{v}(S_{t+1}, \theta) - \hat{v}(S_{t}, \theta) \\ \delta_t \doteq R_{t+1} - \bar{R} + \hat{q}(S_{t+1}, A_{t+1}, \theta) - \hat{q}(S_{t}, A_t, \theta) \\ \text{where } \bar{R} \text{ is an estimate of the average reward } \eta(\pi) \]

  • The average-reward version of the semi-gradient Sarsa update (a code sketch follows this list)

\[\theta_{t+1} \doteq \theta_t + \alpha \delta_t \nabla \hat{q}(S_{t}, A_t, \theta) \]
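A small code sketch of this update (the differential TD error plus the two step-size updates), reusing the hypothetical `q_hat`/`grad_q_hat` helpers from above:

```python
def differential_sarsa_update(theta, avg_reward, state, action, reward,
                              next_state, next_action, alpha=0.1, beta=0.01):
    """One average-reward semi-gradient Sarsa update; returns (theta, R_bar)."""
    delta = (reward - avg_reward
             + q_hat(next_state, next_action, theta)
             - q_hat(state, action, theta))        # differential TD error
    avg_reward = avg_reward + beta * delta          # R_bar <- R_bar + beta * delta
    theta = theta + alpha * delta * grad_q_hat(state, action, theta)
    return theta, avg_reward
```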

Semi-gradient Sarsa, Average-Reward Version (for continuing tasks)

Input: a differentiable function \(\hat{q} : \mathcal{S} \times \mathcal{A} \times \mathbb{R}^n \to \mathbb{R}\)
Parameters: step sizes \(\alpha, \beta > 0\)

Initialize value-function weights \(\theta \in \mathbb{R}^n\) arbitrarily (e.g., \(\theta = 0\))
Initialize average reward estimate \(\bar{R}\) arbitrarily (e.g., \(\bar{R} = 0\))
Initialize state \(S\), and action \(A\)

Repeat (for each step):
  Take action \(A\), observe \(R, S'\)
  Choose \(A'\) as a function of \(\hat{q}(S', \cdot, \theta)\) (e.g., \(\epsilon\)-greedy)
  \(\delta \gets R - \bar{R} + \hat{q}(S', A', \theta) - \hat{q}(S, A, \theta)\)
  \(\bar{R} \gets \bar{R} + \beta \delta\)
  \(\theta \gets \theta + \alpha \delta \nabla \hat{q}(S, A, \theta)\)
  \(S \gets S'\)
  \(A \gets A'\)
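Putting the pieces together, a minimal continuing-task loop (one long stream of experience, no episodes), reusing the hypothetical environment interface and the helpers from the earlier sketches:

```python
def differential_semi_gradient_sarsa(env, n_actions=2, alpha=0.1, beta=0.01,
                                     epsilon=0.1, n_steps=100_000, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.zeros_like(features(0, 0))   # theta = 0
    avg_reward = 0.0                        # R_bar = 0
    state = env.reset()                     # initialize S and A
    action = epsilon_greedy(theta, state, n_actions, epsilon, rng)
    for _ in range(n_steps):
        next_state, reward, _ = env.step(action)            # take A, observe R, S'
        next_action = epsilon_greedy(theta, next_state, n_actions, epsilon, rng)
        theta, avg_reward = differential_sarsa_update(
            theta, avg_reward, state, action, reward, next_state, next_action,
            alpha=alpha, beta=beta)
        state, action = next_state, next_action
    return theta, avg_reward
```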

n-step Semi-gradient Sarsa, Average-Reward Version (for continuing tasks)

Please refer to the original book; it is not repeated here.

Original post: https://www.cnblogs.com/steven-yang/p/6536471.html