Contents

Autograd mechanics
    Excluding subgraphs from backward
    How autograd encodes the history
    In-place operations on Variables
    In-place correctness checks
CUDA semantics
    Best practices
        Use pinned memory buffers
        Use nn.DataParallel instead of multiprocessing
Extending PyTorch
    Extending torch.autograd
    Extending torch.nn
Multiprocessing best practices
    Sharing CUDA tensors
    Best practices and tips
        Avoiding and fighting deadlocks
        Reuse buffers passed through queues
        Asynchronous multiprocess training (e.g. Hogwild)
Serialization semantics
PACKAGE reference
References

Autograd mechanics
Understanding all of this is not strictly necessary, but we recommend getting familiar with it, as it will help you write more efficient, cleaner programs and can aid you in debugging.

Excluding subgraphs from backward
Every Variable has two flags: requires_grad and volatile. They both allow fine-grained exclusion of subgraphs from gradient computation and can increase efficiency.

>>> x = Variable(torch.randn(5, 5))
>>> y = Variable(torch.randn(5, 5))
>>> z = Variable(torch.randn(5, 5), requires_grad=True)
>>> a = x + y
>>> a.requires_grad
False
>>> b = a + z
>>> b.requires_grad
True

This is especially useful when you want to freeze part of your model, or when you know in advance that you will not need gradients w.r.t. some parameters, as in the following fine-tuning example:

model = torchvision.models.resnet18(pretrained=True)
for param in model.parameters():
    param.requires_grad = False

# Replace the last fully-connected layer
# Parameters of newly constructed modules have requires_grad=True by default
model.fc = nn.Linear(512, 100)

# Optimize only the classifier
optimizer = optim.SGD(model.fc.parameters(), lr=1e-2, momentum=0.9)

volatile is recommended for pure inference mode, when you are sure you will not call .backward(). A single volatile input is enough to make the whole output volatile, and no graph is recorded:

>>> regular_input = Variable(torch.randn(1, 3, 224, 224))
>>> volatile_input = Variable(torch.randn(1, 3, 224, 224), volatile=True)
>>> model = torchvision.models.resnet18(pretrained=True)
>>> model(regular_input).requires_grad
True
>>> model(volatile_input).requires_grad
False
>>> model(volatile_input).volatile
True
>>> model(volatile_input).creator is None
True

How autograd encodes the history
In-place operations on Variables

In-place correctness checks

CUDA semantics

torch.cuda keeps track of the currently selected GPU, and all the CUDA tensors you allocate will be created on that device. The selected device can be changed with a torch.cuda.device context manager.
x = torch.cuda.FloatTensor(1)
# x.get_device() == 0
y = torch.FloatTensor(1).cuda()
# y.get_device() == 0

with torch.cuda.device(1):
    # allocates a tensor on GPU 1
    a = torch.cuda.FloatTensor(1)
    # transfers a tensor from CPU to GPU 1
    b = torch.FloatTensor(1).cuda()
    # a.get_device() == b.get_device() == 1

    c = a + b
    # c.get_device() == 1

    z = x + y
    # z.get_device() == 0

    # even within a context, you can give a GPU id to the .cuda call
    d = torch.randn(2).cuda(2)
    # d.get_device() == 2

Best practices
Use pinned memory buffers

Host-to-GPU copies are much faster when they originate from pinned (page-locked) memory. CPU tensors and storages expose a pin_memory() method that returns a copy of the object with its data placed in a pinned region.

Also, once you pin a tensor or storage, you can use asynchronous GPU copies: just pass an additional async=True argument to the cuda() call. This can be used to overlap data transfers with computation.

You can make the DataLoader return batches placed in pinned memory by passing pin_memory=True to its constructor.
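A minimal sketch of how these pieces fit together; the dataset below is a stand-in, and note that newer PyTorch releases spell the asynchronous-copy flag non_blocking=True rather than async=True:

import torch
from torch.utils.data import DataLoader, TensorDataset

# Explicitly pin a CPU tensor, then copy it to the GPU asynchronously.
# (The release this document targets spells the flag async=True.)
x = torch.randn(5, 5).pin_memory()
y = x.cuda(non_blocking=True)

# Placeholder dataset; pin_memory=True makes the DataLoader return
# batches that already live in page-locked memory.
dataset = TensorDataset(torch.randn(1000, 5), torch.randn(1000, 1))
loader = DataLoader(dataset, batch_size=64, pin_memory=True)
for data, target in loader:
    data = data.cuda(non_blocking=True)      # copy can overlap with compute
    target = target.cuda(non_blocking=True)
    # ... forward/backward pass on the GPU batch goes here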
Use nn.DataParallel instead of multiprocessing

Most use cases involving batched inputs and multiple GPUs should default to using DataParallel to utilize more than one GPU. Even with the GIL, a single Python process can saturate multiple GPUs.

As of version 0.1.9, large numbers of GPUs (8+) might not be fully utilized. However, this is a known issue that is under active development. As always, test your use case.

There are significant caveats to using CUDA models with multiprocessing; unless care is taken to meet the data handling requirements exactly, it is likely that your program will have incorrect or undefined behavior.
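For reference, a minimal sketch of the single-process DataParallel approach recommended above; the model and batch shapes here are placeholders, not part of the original text:

import torch
import torch.nn as nn
from torch.autograd import Variable

# Placeholder model; any nn.Module can be wrapped the same way.
model = nn.Linear(10, 5)

# DataParallel replicates the module across the available GPUs,
# scatters the batch along dimension 0, and gathers the outputs,
# all inside a single Python process.
model = nn.DataParallel(model).cuda()

input = Variable(torch.randn(32, 10)).cuda()
output = model(input)   # the batch of 32 is split across the GPUs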
Extending PyTorch

This note covers how to extend torch.nn and torch.autograd, and how to write custom C extensions using our C libraries.

Extending torch.autograd
from torch.autograd import Function

# Inherit from Function
class Linear(Function):

    # bias is an optional argument
    def forward(self, input, weight, bias=None):
        self.save_for_backward(input, weight, bias)
        output = input.mm(weight.t())
        if bias is not None:
            output += bias.unsqueeze(0).expand_as(output)
        return output

    # This function has only a single output, so it gets only one gradient
    def backward(self, grad_output):
        # This is a pattern that is very convenient - at the top of backward
        # unpack saved_tensors and initialize all gradients w.r.t. inputs to
        # None. Thanks to the fact that additional trailing Nones are
        # ignored, the return statement is simple even when the function has
        # optional inputs.
        input, weight, bias = self.saved_tensors
        grad_input = grad_weight = grad_bias = None

        # These needs_input_grad checks are optional and there only to
        # improve efficiency. If you want to make your code simpler, you can
        # skip them. Returning gradients for inputs that don't require it is
        # not an error.
        if self.needs_input_grad[0]:
            grad_input = grad_output.mm(weight)
        if self.needs_input_grad[1]:
            grad_weight = grad_output.t().mm(input)
        if bias is not None and self.needs_input_grad[2]:
            grad_bias = grad_output.sum(0).squeeze(0)

        return grad_input, grad_weight, grad_bias

Now, to make it easier to use these custom operations, we recommend wrapping them in small helper functions:
def linear(input, weight, bias=None):
    # First braces create a Function object. Any arguments given here
    # will be passed to __init__. Second braces will invoke the __call__
    # operator, that will then use forward() to compute the result and
    # return it.
    return Linear()(input, weight, bias)

You probably want to check whether the backward method you just implemented actually computes the derivatives of your function. You can do so by comparing it against numerical approximations obtained with small finite differences:
from torch.autograd import gradcheck

# gradcheck takes a tuple of tensors as input, checks whether your gradient
# evaluated with these tensors is close enough to the numerical
# approximations, and returns True if they all verify this condition.
input = (Variable(torch.randn(20, 20).double(), requires_grad=True),
         Variable(torch.randn(30, 20).double(), requires_grad=True))
test = gradcheck.gradcheck(Linear(), input, eps=1e-6, atol=1e-4)
print(test)

Extending torch.nn
class Linear(nn.Module):
    def __init__(self, input_features, output_features, bias=True):
        super(Linear, self).__init__()
        self.input_features = input_features
        self.output_features = output_features

        # nn.Parameter is a special kind of Variable, that will get
        # automatically registered as Module's parameter once it's assigned
        # as an attribute. Parameters and buffers need to be registered, or
        # they won't appear in .parameters() (doesn't apply to buffers), and
        # won't be converted when e.g. .cuda() is called. You can use
        # .register_buffer() to register buffers.
        # nn.Parameters can never be volatile and, different than Variables,
        # they require gradients by default.
        self.weight = nn.Parameter(torch.Tensor(output_features, input_features))
        if bias:
            self.bias = nn.Parameter(torch.Tensor(output_features))
        else:
            # You should always register all possible parameters, but the
            # optional ones can be None if you want.
            self.register_parameter('bias', None)

        # Not a very smart way to initialize weights
        self.weight.data.uniform_(-0.1, 0.1)
        if self.bias is not None:
            self.bias.data.uniform_(-0.1, 0.1)

    def forward(self, input):
        # See the autograd section for explanation of what happens here.
        # Note: this calls the Linear Function implemented earlier. If both
        # classes live in the same module, rename one of them (e.g. the
        # Function to LinearFunction) to avoid the name clash.
        return Linear()(input, self.weight, self.bias)
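A quick sanity check of the module above might look like the sketch below; the shapes are illustrative, and it assumes the legacy Variable API used throughout this document as well as a resolved Function/Module name clash (see the comment in forward):

module = Linear(5, 3)
input = Variable(torch.randn(4, 5), requires_grad=True)
output = module(input)              # shape (4, 3)
output.sum().backward()
print(module.weight.grad.size())    # (3, 5), same shape as the weight
print(input.grad.size())            # (4, 5), same shape as the input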
Multiprocessing best practices

Sharing CUDA tensors

Best practices and tips

Avoiding and fighting deadlocks

Reuse buffers passed through queues

Asynchronous multiprocess training (e.g. Hogwild)
import torch.multiprocessing as mp
from model import MyModel

def train(model):
    # Construct data_loader, optimizer, etc.
    for data, labels in data_loader:
        optimizer.zero_grad()
        loss_fn(model(data), labels).backward()
        optimizer.step()  # This will update the shared parameters

if __name__ == '__main__':
    num_processes = 4
    model = MyModel()
    # NOTE: this is required for the fork method to work
    model.share_memory()
    processes = []
    for rank in range(num_processes):
        p = mp.Process(target=train, args=(model,))
        p.start()
        processes.append(p)
    for p in processes:
        p.join()

Serialization semantics
PACKAGE reference

References

Reference link: https://pytorch-cn.readthedocs.io/zh/latest/