Category: 《深入浅出Pytorch函数》General Table of Contents

A context manager that disables gradient calculation. Disabling gradient calculation is useful for inference, when you are sure that you will not call Tensor.backward(). It reduces the memory consumption of computations that would otherwise have requires_grad=True. In this mode, the result of every computation will have requires_grad=False, even when the inputs have requires_grad=True. This context manager is thread-local; it does not affect computation in other threads. This class can also be used as a decorator.
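Because grad mode is thread-local, a no_grad() block entered in one thread does not turn off gradient tracking for work done in other threads. Below is a minimal sketch of this behavior (the worker function and results dict are illustrative, not part of the original article):

import threading
import torch

x = torch.tensor([1.], requires_grad=True)
results = {}

def worker():
    # A freshly spawned thread has its own grad mode, which defaults to
    # enabled, regardless of the no_grad() block active in the main thread.
    results["other_thread"] = (x * 2).requires_grad

with torch.no_grad():
    t = threading.Thread(target=worker)
    t.start()
    t.join()
    results["main_thread"] = (x * 2).requires_grad

print(results)  # expected: {'other_thread': True, 'main_thread': False}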
Syntax
torch.no_grad()

Example
x = torch.tensor([1.], requires_grad=True)
with torch.no_grad():
    y = x * 2
y.requires_grad  # False

@torch.no_grad()
def doubler(x):
    return x * 2
z = doubler(x)
z.requires_grad  # False
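In practice, no_grad() is usually paired with model.eval() in an inference loop. A minimal sketch follows; the model and input batch are placeholders assumed for illustration:

import torch
import torch.nn as nn

model = nn.Linear(4, 2)    # placeholder model
model.eval()               # switch layers like dropout/batchnorm to eval mode

batch = torch.randn(8, 4)  # placeholder input batch

with torch.no_grad():      # no autograd graph is built, saving memory
    logits = model(batch)

print(logits.requires_grad)  # False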
Function Implementation

class no_grad(_DecoratorContextManager):
    r"""Context-manager that disabled gradient calculation.

    Disabling gradient calculation is useful for inference, when you are sure
    that you will not call :meth:`Tensor.backward()`. It will reduce memory
    consumption for computations that would otherwise have `requires_grad=True`.

    In this mode, the result of every computation will have
    `requires_grad=False`, even when the inputs have `requires_grad=True`.

    This context manager is thread local; it will not affect computation
    in other threads.

    Also functions as a decorator. (Make sure to instantiate with parenthesis.)

    .. note::
        No-grad is one of several mechanisms that can enable or
        disable gradients locally see :ref:`locally-disable-grad-doc` for
        more information on how they compare.

    .. note::
        This API does not apply to :ref:`forward-mode AD <forward-mode-ad>`.
        If you want to disable forward AD for a computation, you can unpack
        your dual tensors.

    Example::
        >>> # xdoctest: +SKIP
        >>> x = torch.tensor([1.], requires_grad=True)
        >>> with torch.no_grad():
        ...     y = x * 2
        >>> y.requires_grad
        False
        >>> @torch.no_grad()
        ... def doubler(x):
        ...     return x * 2
        >>> z = doubler(x)
        >>> z.requires_grad
        False
    """

    def __init__(self) -> None:
        if not torch._jit_internal.is_scripting():
            super().__init__()
        self.prev = False

    def __enter__(self) -> None:
        self.prev = torch.is_grad_enabled()
        torch.set_grad_enabled(False)

    def __exit__(self, exc_type: Any, exc_value: Any, traceback: Any) -> None:
        torch.set_grad_enabled(self.prev)
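The implementation above simply saves the current thread-local grad mode in __enter__ and restores it in __exit__. The hand-written equivalent below (a sketch, not part of the PyTorch source) makes that mechanism explicit:

import torch

x = torch.tensor([1.], requires_grad=True)

prev = torch.is_grad_enabled()    # __enter__: save the current grad mode
torch.set_grad_enabled(False)     # __enter__: disable gradient tracking
try:
    y = x * 2                     # computed without building a graph
finally:
    torch.set_grad_enabled(prev)  # __exit__: restore the saved mode, even on error

print(y.requires_grad)        # False
print((x * 2).requires_grad)  # True, because grad mode was restored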