Stable-Baselines 3 Partial Source Code Walkthrough: ./ppo/ppo.py
Preface
I read the PPO-related source code to see how the standard library builds the PPO algorithm and which tricks it uses, as a reference for my own re-implementation.
Jumping through definitions in PyCharm shows that the PPO class ultimately inherits from the base class; what follows is the content of this .py file.
So the source reading starts here. :)
Imports
import warnings
from typing import Any, Dict, Optional, Type, TypeVar, Union

import numpy as np
import torch as th
from gym import spaces
from torch.nn import functional as F

from stable_baselines3.common.on_policy_algorithm import OnPolicyAlgorithm
from stable_baselines3.common.policies import ActorCriticCnnPolicy, ActorCriticPolicy, BasePolicy, MultiInputActorCriticPolicy
from stable_baselines3.common.type_aliases import GymEnv, MaybeCallback, Schedule
from stable_baselines3.common.utils import explained_variance, get_schedule_fn

# type variable used by the learn() signature below
SelfPPO = TypeVar("SelfPPO", bound="PPO")

The PPO Class
This is the user-facing, top-level PPO, i.e. the class you can call directly.
In the source, the authors cite the PPO paper, the codebases this implementation borrows from, and an introduction to PPO.
policy, env and learning_rate are the same as in the base class (base-class.py).
n_steps is the number of timesteps to collect per environment before each update. The docstring points out that the rollout buffer size is n_steps * n_envs: with several environment copies trained in parallel, the buffer holds the per-environment steps multiplied by the number of environments.
batch_size is the mini-batch size used when sampling from the rollout buffer.
gamma, gae_lambda, clip_range and clip_range_vf all come with default values; they are, respectively, the discount factor, the bias/variance trade-off factor of Generalized Advantage Estimation, the clipping range applied to the policy probability ratio, and the clipping range applied to the value-function predictions.
normalize_advantage is a flag for whether the advantages should be normalized.
ent_coef and vf_coef are the entropy coefficient and the value-function coefficient in the loss computation.
max_grad_norm is the maximum gradient norm, i.e. the clipping threshold applied during gradient descent.
use_sde and sde_sample_freq configure generalized State-Dependent Exploration (gSDE), which only applies to continuous action spaces; they are the same as in the base class (base-class.py).
target_kl limits how far the KL divergence may drift in a single update, because clipping alone cannot prevent excessively large updates.
The remaining parameters are the same as in the base class (base-class.py); a minimal usage sketch of these arguments follows below.
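Here is a minimal, hedged usage sketch (not taken from the library's documentation). It assumes the Gym environment ID "CartPole-v1" and the MlpPolicy alias; the hyperparameter values simply echo the defaults discussed above, the lambdas illustrate that learning_rate and clip_range may also be callables of the remaining progress (from 1 down to 0), and the target_kl value is an arbitrary illustrative choice, not a recommendation.

from stable_baselines3 import PPO

# Hypothetical example: train PPO on CartPole-v1 with mostly default hyperparameters.
model = PPO(
    "MlpPolicy",
    "CartPole-v1",
    n_steps=2048,
    batch_size=64,
    n_epochs=10,
    gamma=0.99,
    gae_lambda=0.95,
    # schedules can also be callables of the remaining progress (goes from 1 down to 0)
    learning_rate=lambda progress_remaining: 3e-4 * progress_remaining,
    clip_range=lambda progress_remaining: 0.2 * progress_remaining,
    target_kl=0.03,  # arbitrary illustrative threshold for KL-based early stopping
    verbose=1,
)
model.learn(total_timesteps=100_000)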
class PPO(OnPolicyAlgorithm):
    """
    Proximal Policy Optimization algorithm (PPO) (clip version)

    Paper: https://arxiv.org/abs/1707.06347
    Code: This implementation borrows code from OpenAI Spinning Up (https://github.com/openai/spinningup/)
    https://github.com/ikostrikov/pytorch-a2c-ppo-acktr-gail and
    Stable Baselines (PPO2 from https://github.com/hill-a/stable-baselines)

    Introduction to PPO: https://spinningup.openai.com/en/latest/algorithms/ppo.html

    :param policy: The policy model to use (MlpPolicy, CnnPolicy, ...)
    :param env: The environment to learn from (if registered in Gym, can be str)
    :param learning_rate: The learning rate, it can be a function
        of the current progress remaining (from 1 to 0)
    :param n_steps: The number of steps to run for each environment per update
        (i.e. rollout buffer size is n_steps * n_envs where n_envs is number of environment copies running in parallel)
        NOTE: n_steps * n_envs must be greater than 1 (because of the advantage normalization)
        See https://github.com/pytorch/pytorch/issues/29372
    :param batch_size: Minibatch size
    :param n_epochs: Number of epoch when optimizing the surrogate loss
    :param gamma: Discount factor
    :param gae_lambda: Factor for trade-off of bias vs variance for Generalized Advantage Estimator
    :param clip_range: Clipping parameter, it can be a function of the current progress
        remaining (from 1 to 0).
    :param clip_range_vf: Clipping parameter for the value function,
        it can be a function of the current progress remaining (from 1 to 0).
        This is a parameter specific to the OpenAI implementation. If None is passed (default),
        no clipping will be done on the value function.
        IMPORTANT: this clipping depends on the reward scaling.
    :param normalize_advantage: Whether to normalize or not the advantage
    :param ent_coef: Entropy coefficient for the loss calculation
    :param vf_coef: Value function coefficient for the loss calculation
    :param max_grad_norm: The maximum value for the gradient clipping
    :param use_sde: Whether to use generalized State Dependent Exploration (gSDE)
        instead of action noise exploration (default: False)
    :param sde_sample_freq: Sample a new noise matrix every n steps when using gSDE
        Default: -1 (only sample at the beginning of the rollout)
    :param target_kl: Limit the KL divergence between updates,
        because the clipping is not enough to prevent large update
        see issue #213 (cf https://github.com/hill-a/stable-baselines/issues/213)
        By default, there is no limit on the kl div.
    :param tensorboard_log: the log location for tensorboard (if None, no logging)
    :param policy_kwargs: additional arguments to be passed to the policy on creation
    :param verbose: Verbosity level: 0 for no output, 1 for info messages (such as device or wrappers used), 2 for
        debug messages
    :param seed: Seed for the pseudo random generators
    :param device: Device (cpu, cuda, ...) on which the code should be run.
        Setting it to auto, the code will be run on the GPU if possible.
    :param _init_setup_model: Whether or not to build the network at the creation of the instance
    """

    # The policies that can be selected by a plain string are the three below;
    # environments with dict observation spaces will usually point you to MultiInputPolicy
    policy_aliases: Dict[str, Type[BasePolicy]] = {
        "MlpPolicy": ActorCriticPolicy,
        "CnnPolicy": ActorCriticCnnPolicy,
        "MultiInputPolicy": MultiInputActorCriticPolicy,
    }

    # Default values of the constructor arguments; a handy reference when tuning
    def __init__(
        self,
        policy: Union[str, Type[ActorCriticPolicy]],
        env: Union[GymEnv, str],
        learning_rate: Union[float, Schedule] = 3e-4,
        n_steps: int = 2048,
        batch_size: int = 64,
        n_epochs: int = 10,
        gamma: float = 0.99,
        gae_lambda: float = 0.95,
        clip_range: Union[float, Schedule] = 0.2,
        clip_range_vf: Union[None, float, Schedule] = None,
        normalize_advantage: bool = True,
        ent_coef: float = 0.0,
        vf_coef: float = 0.5,
        max_grad_norm: float = 0.5,
        use_sde: bool = False,
        sde_sample_freq: int = -1,
        target_kl: Optional[float] = None,
        tensorboard_log: Optional[str] = None,
        policy_kwargs: Optional[Dict[str, Any]] = None,
        verbose: int = 0,
        seed: Optional[int] = None,
        device: Union[th.device, str] = "auto",
        _init_setup_model: bool = True,
    ):
        super().__init__(
            policy,
            env,
            learning_rate=learning_rate,
            n_steps=n_steps,
            gamma=gamma,
            gae_lambda=gae_lambda,
            ent_coef=ent_coef,
            vf_coef=vf_coef,
            max_grad_norm=max_grad_norm,
            use_sde=use_sde,
            sde_sample_freq=sde_sample_freq,
            tensorboard_log=tensorboard_log,
            policy_kwargs=policy_kwargs,
            verbose=verbose,
            device=device,
            seed=seed,
            _init_setup_model=False,
            supported_action_spaces=(
                spaces.Box,
                spaces.Discrete,
                spaces.MultiDiscrete,
                spaces.MultiBinary,
            ),
        )

        # Sanity/consistency check: if advantage normalization is requested,
        # batch_size must be greater than 1
        # Sanity check, otherwise it will lead to noisy gradient and NaN
        # because of the advantage normalization
        if normalize_advantage:
            assert (
                batch_size > 1
            ), "`batch_size` must be greater than 1. See https://github.com/DLR-RM/stable-baselines3/issues/440"

        if self.env is not None:
            # Check that `n_steps * n_envs > 1` to avoid NaN
            # when doing advantage normalization
            buffer_size = self.env.num_envs * self.n_steps
            # If buffer_size equals 1 while normalize_advantage is set,
            # raise an error reporting the current n_steps and the number of environments
            assert buffer_size > 1 or (
                not normalize_advantage
            ), f"`n_steps * n_envs` must be greater than 1. Currently n_steps={self.n_steps} and n_envs={self.env.num_envs}"
            # The rollout buffer size should be a multiple of the mini-batch size,
            # so the buffer can be consumed one equal chunk at a time (my understanding)
            # Check that the rollout buffer size is a multiple of the mini-batch size
            untruncated_batches = buffer_size // batch_size
            # Warn if it does not divide evenly
            if buffer_size % batch_size > 0:
                warnings.warn(
                    f"You have specified a mini-batch size of {batch_size},"
                    f" but because the `RolloutBuffer` is of size `n_steps * n_envs = {buffer_size}`,"
                    f" after every {untruncated_batches} untruncated mini-batches,"
                    f" there will be a truncated mini-batch of size {buffer_size % batch_size}\n"
                    f"We recommend using a `batch_size` that is a factor of `n_steps * n_envs`.\n"
                    f"Info: (n_steps={self.n_steps} and n_envs={self.env.num_envs})"
                )
        self.batch_size = batch_size
        self.n_epochs = n_epochs
        self.clip_range = clip_range
        self.clip_range_vf = clip_range_vf
        self.normalize_advantage = normalize_advantage
        self.target_kl = target_kl

        if _init_setup_model:
            self._setup_model()

    def _setup_model(self) -> None:
        super()._setup_model()

        # Transform (if needed) learning rate and clip range (for PPO) to callable.
        # Turn the clip-range inputs into callables of the remaining progress
        # Initialize schedules for policy/value clipping
        self.clip_range = get_schedule_fn(self.clip_range)
        # Type-check self.clip_range_vf and make sure it is positive
        if self.clip_range_vf is not None:
            if isinstance(self.clip_range_vf, (float, int)):
                assert self.clip_range_vf > 0, "`clip_range_vf` must be positive, pass `None` to deactivate vf clipping"

            self.clip_range_vf = get_schedule_fn(self.clip_range_vf)

    def train(self) -> None:
        """
        Update policy using the currently gathered rollout buffer.
        """
        # Switch to train mode (this affects batch norm / dropout)
        self.policy.set_training_mode(True)
        # Update optimizer learning rate (it may itself be a schedule of the current progress)
        self._update_learning_rate(self.policy.optimizer)
        # Compute the current clip ranges from the remaining progress,
        # for both clip_range (policy network) and clip_range_vf (value network)
        # Compute current clip range
        clip_range = self.clip_range(self._current_progress_remaining)
        # Optional: clip range for the value function
        if self.clip_range_vf is not None:
            clip_range_vf = self.clip_range_vf(self._current_progress_remaining)

        # Initialize the logs for the different losses:
        # entropy loss, policy-gradient loss, value loss and clip fraction
        entropy_losses = []
        pg_losses, value_losses = [], []
        clip_fractions = []

        # continue_training stays True while training should keep going
        continue_training = True
        # train for n_epochs epochs
        # self.n_epochs is the number of passes over the rollout buffer
        for epoch in range(self.n_epochs):
            # Record the approximate KL divergences
            approx_kl_divs = []
            # Do a complete pass on the rollout buffer
            # Iterate over the rollout buffer in mini-batches of size batch_size
            for rollout_data in self.rollout_buffer.get(self.batch_size):
                # Fetch the actions of this mini-batch
                actions = rollout_data.actions
                # For discrete action spaces, convert the actions to long and flatten them
                if isinstance(self.action_space, spaces.Discrete):
                    # Convert discrete action from float to long
                    actions = rollout_data.actions.long().flatten()

                # If state-dependent exploration (gSDE) is used, re-sample the noise
                # Re-sample the noise matrix because the log_std has changed
                if self.use_sde:
                    self.policy.reset_noise(self.batch_size)

                # From the policy, the observations and the actions,
                # get the values, log-probabilities and entropy
                values, log_prob, entropy = self.policy.evaluate_actions(rollout_data.observations, actions)
                # Flatten the values
                values = values.flatten()
                # Normalize the advantages from the buffer: subtract their mean and divide
                # by their standard deviation (plus a small epsilon); when re-implementing
                # you could also call a helper from another library
                # Normalize advantage
                advantages = rollout_data.advantages
                # Normalization does not make sense if mini batchsize == 1, see GH issue #325
                if self.normalize_advantage and len(advantages) > 1:
                    advantages = (advantages - advantages.mean()) / (advantages.std() + 1e-8)

                # ratio between old and new policy, should be one at the first iteration
                ratio = th.exp(log_prob - rollout_data.old_log_prob)

                # The policy loss is minus the minimum of (advantage * ratio) and
                # (advantage * clipped ratio), i.e. exactly the formula from the paper
                # clipped surrogate loss
                policy_loss_1 = advantages * ratio
                policy_loss_2 = advantages * th.clamp(ratio, 1 - clip_range, 1 + clip_range)
                policy_loss = -th.min(policy_loss_1, policy_loss_2).mean()

                # Record into the logs initialized above
                # Logging
                pg_losses.append(policy_loss.item())
                clip_fraction = th.mean((th.abs(ratio - 1) > clip_range).float()).item()
                clip_fractions.append(clip_fraction)

                # Without value clipping, use the predicted values directly; with clipping,
                # it is the old values plus the clamped increment th.clamp(...)
                if self.clip_range_vf is None:
                    # No clipping
                    values_pred = values
                else:
                    # Clip the difference between old and new value
                    # NOTE: this depends on the reward scaling
                    values_pred = rollout_data.old_values + th.clamp(
                        values - rollout_data.old_values, -clip_range_vf, clip_range_vf
                    )
                # Build the value loss and record it
                # Value loss using the TD(gae_lambda) target
                value_loss = F.mse_loss(rollout_data.returns, values_pred)
                value_losses.append(value_loss.item())

                # If no analytical entropy is available, approximate it with the
                # negative mean log-probability; otherwise use the mean entropy
                # Entropy loss favor exploration
                if entropy is None:
                    # Approximate entropy when no analytical form
                    entropy_loss = -th.mean(-log_prob)
                else:
                    entropy_loss = -th.mean(entropy)

                entropy_losses.append(entropy_loss.item())

                # The final loss is the policy loss plus the weighted entropy loss
                # and the weighted value-function loss
                loss = policy_loss + self.ent_coef * entropy_loss + self.vf_coef * value_loss

                # Calculate approximate form of reverse KL Divergence for early stopping
                # see issue #417: https://github.com/DLR-RM/stable-baselines3/issues/417
                # and discussion in PR #419: https://github.com/DLR-RM/stable-baselines3/pull/419
                # and Schulman blog: http://joschu.net/blog/kl-approx.html
                # Compute the approximate KL divergence and append it to the list
                with th.no_grad():
                    log_ratio = log_prob - rollout_data.old_log_prob
                    approx_kl_div = th.mean((th.exp(log_ratio) - 1) - log_ratio).cpu().numpy()
                    approx_kl_divs.append(approx_kl_div)

                # If target_kl is set and the approximate KL is too large (the update is
                # too big), stop early and print a message
                if self.target_kl is not None and approx_kl_div > 1.5 * self.target_kl:
                    continue_training = False
                    if self.verbose >= 1:
                        print(f"Early stopping at step {epoch} due to reaching max kl: {approx_kl_div:.2f}")
                    break

                # Take an optimization step on the loss
                # Optimization step
                self.policy.optimizer.zero_grad()
                loss.backward()
                # Clip the gradient norm to avoid overly large updates
                # Clip grad norm
                th.nn.utils.clip_grad_norm_(self.policy.parameters(), self.max_grad_norm)
                self.policy.optimizer.step()

            self._n_updates += 1
            if not continue_training:
                break

        explained_var = explained_variance(self.rollout_buffer.values.flatten(), self.rollout_buffer.returns.flatten())

        # Logs
        self.logger.record("train/entropy_loss", np.mean(entropy_losses))
        self.logger.record("train/policy_gradient_loss", np.mean(pg_losses))
        self.logger.record("train/value_loss", np.mean(value_losses))
        self.logger.record("train/approx_kl", np.mean(approx_kl_divs))
        self.logger.record("train/clip_fraction", np.mean(clip_fractions))
        self.logger.record("train/loss", loss.item())
        self.logger.record("train/explained_variance", explained_var)
        if hasattr(self.policy, "log_std"):
            self.logger.record("train/std", th.exp(self.policy.log_std).mean().item())

        self.logger.record("train/n_updates", self._n_updates, exclude="tensorboard")
        self.logger.record("train/clip_range", clip_range)
        if self.clip_range_vf is not None:
            self.logger.record("train/clip_range_vf", clip_range_vf)

    def learn(
        self: SelfPPO,
        total_timesteps: int,
        callback: MaybeCallback = None,
        log_interval: int = 1,
        tb_log_name: str = "PPO",
        reset_num_timesteps: bool = True,
        progress_bar: bool = False,
    ) -> SelfPPO:
        # This method is what users actually call: after constructing the PPO class,
        # just pass these arguments in
        # total_timesteps: the total number of timesteps to train for
        return super().learn(
            total_timesteps=total_timesteps,
            callback=callback,
            log_interval=log_interval,
            tb_log_name=tb_log_name,
            reset_num_timesteps=reset_num_timesteps,
            progress_bar=progress_bar,
        )
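To make the buffer-size warning in __init__ concrete, here is a toy calculation (the numbers are made up, not from the library) showing how many untruncated mini-batches a rollout yields and how large the truncated remainder is when batch_size does not divide n_steps * n_envs.

# Toy numbers only: n_steps=2048 with n_envs=4 gives a rollout buffer of 8192 transitions.
n_steps, n_envs, batch_size = 2048, 4, 96
buffer_size = n_steps * n_envs
untruncated_batches = buffer_size // batch_size  # 85 full mini-batches of 96
truncated_size = buffer_size % batch_size        # followed by one truncated mini-batch of 32
print(buffer_size, untruncated_batches, truncated_size)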
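The heart of train() is the clipped surrogate objective. The following self-contained sketch (toy tensors, not SB3 code) reproduces the advantage normalization, the probability ratio, the clipped policy loss and the clip fraction exactly as the loop above computes them.

import torch as th

# Toy stand-ins for one mini-batch from the rollout buffer (values are arbitrary)
advantages = th.tensor([0.5, -1.2, 2.0, 0.3])
log_prob = th.tensor([-0.9, -1.1, -0.7, -1.3])      # log-probs under the current policy
old_log_prob = th.tensor([-1.0, -1.0, -1.0, -1.0])  # log-probs recorded at rollout time
clip_range = 0.2

# Advantage normalization, as done in train()
advantages = (advantages - advantages.mean()) / (advantages.std() + 1e-8)

# Probability ratio and clipped surrogate loss (negated because we minimize)
ratio = th.exp(log_prob - old_log_prob)
policy_loss_1 = advantages * ratio
policy_loss_2 = advantages * th.clamp(ratio, 1 - clip_range, 1 + clip_range)
policy_loss = -th.min(policy_loss_1, policy_loss_2).mean()

# Fraction of samples whose ratio fell outside the clipping interval
clip_fraction = th.mean((th.abs(ratio - 1) > clip_range).float()).item()
print(policy_loss.item(), clip_fraction)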
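Similarly, this sketch (again with made-up tensors and an arbitrary target_kl) walks through the optional value-function clipping and the approximate KL divergence (the k3 estimator from Schulman's blog post) together with the 1.5 * target_kl early-stopping test used above.

import torch as th
from torch.nn import functional as F

# Made-up value predictions and returns for three transitions
values = th.tensor([1.4, 0.2, -0.6])      # new value estimates
old_values = th.tensor([1.0, 0.5, -0.5])  # value estimates stored at rollout time
returns = th.tensor([1.2, 0.4, -0.3])     # TD(gae_lambda) targets
clip_range_vf = 0.3

# Only the *change* of the value prediction is clipped, then an MSE loss is taken
values_pred = old_values + th.clamp(values - old_values, -clip_range_vf, clip_range_vf)
value_loss = F.mse_loss(returns, values_pred)

# Approximate KL divergence (the k3 estimator) and the early-stopping test
log_prob = th.tensor([-0.9, -1.2, -0.7])
old_log_prob = th.tensor([-1.0, -1.0, -1.0])
with th.no_grad():
    log_ratio = log_prob - old_log_prob
    approx_kl_div = th.mean((th.exp(log_ratio) - 1) - log_ratio).item()

target_kl = 0.01  # made-up threshold
if approx_kl_div > 1.5 * target_kl:
    print(f"this update would stop early: approx_kl={approx_kl_div:.4f}")
else:
    print(f"approx_kl={approx_kl_div:.4f}, within budget")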