Control Systems and Reinforcement Learning
Cambridge University Press, 9 Jun 2022 - 450 pages

A high school student can create deep Q-learning code to control her robot, without any understanding of the meaning of 'deep' or 'Q', or why the code sometimes fails. This book is designed to explain the science behind reinforcement learning and optimal control in a way that is accessible to students with a background in calculus and matrix algebra. A unique focus is algorithm design to obtain the fastest possible speed of convergence for learning algorithms, along with insight into why reinforcement learning sometimes fails. Advanced stochastic process theory is avoided at the start by substituting random exploration with more intuitive deterministic probing for learning. Once these ideas are understood, it is not difficult to master techniques rooted in stochastic control. These topics are covered in the second part of the book, starting with Markov chain theory and ending with a fresh look at actor-critic methods for reinforcement learning.
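For readers curious what the Q-learning code mentioned above actually looks like, the following is a minimal tabular sketch, not code from the book: the 5-state chain environment, the step() and greedy() helpers, and the ALPHA/GAMMA/EPSILON parameters are all illustrative assumptions chosen for a tiny self-contained demo.

# Minimal tabular Q-learning sketch (illustrative; not code from the book).
# The environment and every parameter below are assumptions for a tiny demo:
# a 5-state chain in which action 1 moves right, action 0 moves left, and
# reaching the rightmost state pays reward 1 and ends the episode.
import random

N_STATES, ACTIONS = 5, (0, 1)
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1  # step size, discount, exploration rate

Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[s][a], initialized to zero

def step(s, a):
    """One transition of the toy chain MDP: returns (next_state, reward, done)."""
    s2 = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
    done = s2 == N_STATES - 1
    return s2, (1.0 if done else 0.0), done

def greedy(s):
    """Greedy action with random tie-breaking (ties are common early on)."""
    if Q[s][0] == Q[s][1]:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[s][a])

for episode in range(500):
    s, done = 0, False
    while not done:
        # epsilon-greedy: the 'random exploration' the blurb refers to
        a = random.choice(ACTIONS) if random.random() < EPSILON else greedy(s)
        s2, r, done = step(s, a)
        # Q-learning update: bootstrap from the best value at the next state
        target = r + (0.0 if done else GAMMA * max(Q[s2]))
        Q[s][a] += ALPHA * (target - Q[s][a])
        s = s2

print("greedy policy by state:", [greedy(s) for s in range(N_STATES)])

The epsilon-greedy randomness in this sketch is exactly the "random exploration" the description refers to; the first part of the book develops the same ideas with deterministic probing in its place.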
Contents
Introduction | 1
Control Crash Course | 9
From Control Theory to … | 29
Optimal Control | 51
Value Function Approximations | 159
Markov Chains | 205
Stochastic Control | 244
Stochastic Approximation | 280
Temporal Difference Methods | 318
Setting the Stage, Return of the Actors | 362
A Mathematical Background | 395
B Markov Decision Processes | 401
Common terms and phrases
actor-critic algorithm design An+1 apply arg min assumption asymptotic covariance average cost Bellman equation bound chapter compute conditional expectation Consider convergence convex cost function defined definition denote deterministic eigenvalues ergodic Euler approximation example Figure fluid model function approximation function class function h Gaussian implies inequality initial condition input introduced iteration Lemma linear function approximation Lipschitz continuous Lyapunov equation Lyapunov function Markov chain minimizer Newton-Raphson nonlinear nonnegative notation observations obtain ODE approximation optimal control optimal policy parameter estimates PJR averaging Poisson's equation proof Proposition Q-function Q-learning quadratic queue random variable recursion representation satisfying scalar Section setting solution solves space model stability stochastic approximation stochastic gradient descent Suppose TD-learning Theorem theory transition matrix value function variance vector zero θn θ0
