[Korean subtitles] CS 285: Lecture 9, Part 2
귓속말의 자막나라・17 minutes read
Using the state marginal p_θ(s_t) in place of p_θ'(s_t) gives an accurate approximation of the new policy's return, provided the distribution mismatch can be ignored. Maximizing the expected value, under p_θ and π_θ, of the importance-weighted advantage is the central objective in this part of the lecture.
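For reference, the identity this summary paraphrases, reconstructed in LaTeX from the standard form of the derivation (the notation is an assumption, not quoted from the video):

```latex
J(\theta') - J(\theta)
  = \mathbb{E}_{\tau \sim p_{\theta'}(\tau)}\!\Big[\sum_t \gamma^t A^{\pi_\theta}(s_t, a_t)\Big]
  = \sum_t \mathbb{E}_{s_t \sim p_{\theta'}(s_t)}\!\left[
      \mathbb{E}_{a_t \sim \pi_\theta(a_t \mid s_t)}\!\left[
        \frac{\pi_{\theta'}(a_t \mid s_t)}{\pi_\theta(a_t \mid s_t)}\,
        \gamma^t A^{\pi_\theta}(s_t, a_t)\right]\right]
```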
Insights
- Using p_θ(s_t) instead of p_θ'(s_t) gives an accurate approximation of the new policy's return, provided the distribution mismatch can be ignored.
- Maximizing the expected value, under p_θ and π_θ, of the importance-weighted advantage is central to the analysis; the new policy π_θ' must stay close to the original policy π_θ in total variation divergence for the resulting update to improve performance (the closeness condition is sketched after this list).
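The closeness condition the second insight refers to can be written as follows (a sketch using the standard per-state total variation bound, not quoted verbatim from the video):

```latex
\big|\pi_{\theta'}(a_t \mid s_t) - \pi_\theta(a_t \mid s_t)\big| \le \epsilon
\quad \text{for all } s_t
```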
Recent questions
How can p_θ(s_t) approximate the new policy's return?
By using p_θ(s_t) instead of p_θ'(s_t), the return of the new policy can be accurately approximated as long as the distribution mismatch is ignored. This lets us improve the new policy by maximizing the surrogate Ā(θ'), which approximates J(θ') - J(θ).
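Substituting p_θ(s_t) for p_θ'(s_t) in the identity above defines the surrogate; a sketch in assumed standard notation:

```latex
\bar{A}(\theta')
  = \sum_t \mathbb{E}_{s_t \sim p_\theta(s_t)}\!\left[
      \mathbb{E}_{a_t \sim \pi_\theta(a_t \mid s_t)}\!\left[
        \frac{\pi_{\theta'}(a_t \mid s_t)}{\pi_\theta(a_t \mid s_t)}\,
        \gamma^t A^{\pi_\theta}(s_t, a_t)\right]\right],
\qquad
J(\theta') - J(\theta) \approx \bar{A}(\theta')
```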
What is the relationship between π_θ and π_θ'?
For a deterministic policy π_θ, the state marginals under θ and θ' are close whenever π_θ' is close to π_θ: if π_θ' deviates from π_θ with probability at most ε at each step, the difference between p_θ(s_t) and p_θ'(s_t) is bounded.
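The argument mirrors the imitation-learning analysis: with probability (1 - ε)^t no deviation has occurred by time t. A sketch of the resulting bound (p_mistake, a name introduced here for illustration, denotes the state distribution after at least one deviation):

```latex
p_{\theta'}(s_t) = (1-\epsilon)^t\, p_\theta(s_t)
  + \big(1 - (1-\epsilon)^t\big)\, p_{\text{mistake}}(s_t)
\;\;\Longrightarrow\;\;
\big|p_{\theta'}(s_t) - p_\theta(s_t)\big| \le 2\big(1 - (1-\epsilon)^t\big) \le 2\epsilon t
```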
How is the total variation divergence bounded for stochastic policies?
For stochastic policies whose action distributions differ by at most ε in total variation at every state, the total variation divergence between the state marginals at time t is bounded by 2εt. This bound in turn ensures that the expected value under p_θ'(s_t) of any function f(s_t) is at least the expected value under p_θ(s_t) minus 2εt times the maximum value of f.
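A sketch of the two bounds, in assumed notation where the absolute difference is summed over states (i.e., total variation):

```latex
\sum_{s_t} \big|p_{\theta'}(s_t) - p_\theta(s_t)\big| \le 2\epsilon t
\quad\Longrightarrow\quad
\mathbb{E}_{p_{\theta'}(s_t)}\!\big[f(s_t)\big]
  \ \ge\ \mathbb{E}_{p_\theta(s_t)}\!\big[f(s_t)\big] - 2\epsilon t \,\max_{s_t} f(s_t)
```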
What is the significance of the constant C in the bound?
The constant C is on the order of the number of time steps T multiplied by the maximum reward r_max. With infinitely many time steps and a discount factor γ, the sum of the discounts equals 1/(1 - γ), so C is on the order of r_max/(1 - γ). Because the error term is controlled in this way, maximizing the expected value, under p_θ and π_θ, of the importance-weighted advantage still improves performance.
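A sketch of how the constant is typically written (notation assumed rather than quoted from the video):

```latex
C \in O(T\, r_{\max}) \quad\text{(finite horizon)},
\qquad
C \in O\!\left(\frac{r_{\max}}{1-\gamma}\right)
\quad\text{(discounted, using } \textstyle\sum_{t=0}^{\infty} \gamma^{t} = \tfrac{1}{1-\gamma}\text{)}
```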
How does the policy gradient improve the reinforcement learning objective?
Maximizing the expected value, under p_θ and π_θ, of the importance-weighted advantage is the core objective. Taking the derivative of this objective with respect to θ', which enters only through the importance weight, yields the policy gradient. This improves the RL objective as long as θ' stays near θ, that is, as long as the new policy remains close to the original policy in total variation divergence.
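A minimal PyTorch sketch of this idea, assuming a surrogate estimated from trajectories sampled with π_θ (the function and variable names are illustrative, not from the lecture): differentiating the importance-weighted surrogate with respect to θ' and evaluating at θ' = θ recovers the usual policy gradient.

```python
import torch

def surrogate_objective(logp_new: torch.Tensor,
                        logp_old: torch.Tensor,
                        advantages: torch.Tensor) -> torch.Tensor:
    """Estimate of A-bar(theta') from samples collected with pi_theta.

    logp_new:   log pi_theta'(a_t | s_t), differentiable w.r.t. theta'
    logp_old:   log pi_theta(a_t | s_t), treated as constants
    advantages: advantage estimates A^{pi_theta}(s_t, a_t)
    """
    # Importance weight pi_theta'(a_t | s_t) / pi_theta(a_t | s_t).
    ratio = torch.exp(logp_new - logp_old.detach())
    # Expected importance-weighted advantage over the old policy's samples.
    return (ratio * advantages.detach()).mean()

# At theta' = theta the ratio equals 1, so the gradient of this objective
# reduces to E[ grad_theta log pi_theta(a_t | s_t) * A(s_t, a_t) ],
# i.e., the standard policy gradient.
```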