{"_id":"doc-en-pytorch-0765e1f807790457a0c3360c08023ee1cafbdc4e0a83eeafeb47a39cc98ed1e4","title":"","text":"return type(self), (self.tolist(),) def __repr__(self): return repr(str(self)) return str(self) def __str__(self): # All strings are unicode in Python 3, while we have to encode unicode"}
{"_id":"doc-en-pytorch-19f6e3c49e2f982b4b2c74a0b82f1cc7984c7aea051e47c00252a959cb536395","title":"","text":"from numbers import Integral import warnings import math from operator import mul from functools import reduce import torch from torch._C import _infer_size"}
{"_id":"doc-en-pytorch-56fb18ffabc8cfdce2f8006735710b90051e1b57b19b53998e8aa3ac7511d892","title":"","text":"def batch_norm(input, running_mean, running_var, weight=None, bias=None, training=False, momentum=0.1, eps=1e-5): size = list(input.size()) if reduce(mul, size[2:], size[0]) == 1: raise ValueError('Expected more than 1 value per channel, got input size {}'.format(size)) f = torch._C._functions.BatchNorm(running_mean, running_var, training, momentum, eps, torch.backends.cudnn.enabled) return f(input, weight, bias)"}
{"_id":"doc-en-pytorch-070db6c4fc6f6f907a67f5eff63dda2759f93a2cf19670bccc99c9ab7375a073","title":"","text":"| :attr:`stride` controls the stride for the cross-correlation. | If :attr:`padding` is non-zero, then the input is implicitly zero-padded on both sides for :attr:`padding` number of points | :attr:`dilation` controls the spacing between the kernel points. It is harder to describe, but this `link`_ has a nice visualization of what :attr:`dilation` does. | :attr:`groups` controls the connections between inputs and outputs. for :attr:`padding` number of points. | :attr:`dilation` controls the spacing between the kernel points; also known as the à trous algorithm. It is harder to describe, but this `link`_ has a nice visualization of what :attr:`dilation` does. | :attr:`groups` controls the connections between inputs and outputs. `in_channels` and `out_channels` must both be divisible by `groups`. | At groups=1, all inputs are convolved to all outputs. | At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels, and producing half the output channels, and both subsequently concatenated. At groups=`in_channels`, each input channel is convolved with its own set of filters (of size `out_channels // in_channels`). .. note::"}
{"_id":"doc-en-pytorch-f7031e660f7387bb371e278ab1f5e0deba5828cc04c655b5e42a5cb6a283bf7f","title":"","text":"| :attr:`stride` controls the stride for the cross-correlation. | If :attr:`padding` is non-zero, then the input is implicitly zero-padded on both sides for :attr:`padding` number of points | :attr:`dilation` controls the spacing between the kernel points. It is harder to describe, but this `link`_ has a nice visualization of what :attr:`dilation` does. | :attr:`groups` controls the connections between inputs and outputs. for :attr:`padding` number of points. | :attr:`dilation` controls the spacing between the kernel points; also known as the à trous algorithm. It is harder to describe, but this `link`_ has a nice visualization of what :attr:`dilation` does. | :attr:`groups` controls the connections between inputs and outputs. `in_channels` and `out_channels` must both be divisible by `groups`. | At groups=1, all inputs are convolved to all outputs. | At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels, and producing half the output channels, and both subsequently concatenated. At groups=`in_channels`, each input channel is convolved with its own set of filters (of size `out_channels // in_channels`). The parameters :attr:`kernel_size`, :attr:`stride`, :attr:`padding`, :attr:`dilation` can either be:"}
{"_id":"doc-en-pytorch-ca8a9e7cde8b3e302e9883c007c0b71c952ec4d2b1b23b639294d25fe54d87c8","title":"","text":"composed of several input planes. This module can be seen as the gradient of Conv1d with respect to its input. It is sometimes (but incorrectly) refered to as a deconvolutional operation. It is also known as a fractionally-strided convolution or a deconvolution (although it is not an actual deconvolution operation). | :attr:`stride` controls the stride for the cross-correlation. | If :attr:`padding` is non-zero, then the input is implicitly zero-padded on both sides for :attr:`padding` number of points. | If :attr:`output_padding` is non-zero, then the output is implicitly zero-padded on one side for :attr:`output_padding` number of points. | :attr:`dilation` controls the spacing between the kernel points; also known as the à trous algorithm. It is harder to describe, but this `link`_ has a nice visualization of what :attr:`dilation` does. | :attr:`groups` controls the connections between inputs and outputs. `in_channels` and `out_channels` must both be divisible by `groups`. | At groups=1, all inputs are convolved to all outputs. | At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels, and producing half the output channels, and both subsequently concatenated. At groups=`in_channels`, each input channel is convolved with its own set of filters (of size `out_channels // in_channels`). .. note::"}
{"_id":"doc-en-pytorch-2a4353f25e6b66254df4b7bf1a55ef91eb3c6280d482b7114d6ee0c1c2abfae1","title":"","text":"output_padding (int or tuple, optional): Zero-padding added to one side of the output groups (int, optional): Number of blocked connections from input channels to output channels bias (bool, optional): If True, adds a learnable bias to the output dilation (int or tuple, optional): Spacing between kernel elements Shape: - Input: :math:`(N, C_{in}, L_{in})`"}
{"_id":"doc-en-pytorch-29428830a26059a849496a55d799332d3518e5869a3f91cc1c2fde212359f495","title":"","text":"composed of several input planes. This module can be seen as the gradient of Conv2d with respect to its input. It is sometimes (but incorrectly) refered to as a deconvolutional operation. It is also known as a fractionally-strided convolution or a deconvolution (although it is not an actual deconvolution operation). | :attr:`stride` controls the stride for the cross-correlation. | If :attr:`padding` is non-zero, then the input is implicitly zero-padded on both sides for :attr:`padding` number of points for :attr:`padding` number of points. | If :attr:`output_padding` is non-zero, then the output is implicitly zero-padded on one side for :attr:`output_padding` number of points | :attr:`dilation` controls the spacing between the kernel points. It is harder to describe, but this `link`_ has a nice visualization of what :attr:`dilation` does. | :attr:`groups` controls the connections between inputs and outputs. for :attr:`output_padding` number of points. | :attr:`dilation` controls the spacing between the kernel points; also known as the à trous algorithm. It is harder to describe, but this `link`_ has a nice visualization of what :attr:`dilation` does. | :attr:`groups` controls the connections between inputs and outputs. `in_channels` and `out_channels` must both be divisible by `groups`. | At groups=1, all inputs are convolved to all outputs. | At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels, and producing half the output channels, and both subsequently concatenated. At groups=`in_channels`, each input channel is convolved with its own set of filters (of size `out_channels // in_channels`). The parameters :attr:`kernel_size`, :attr:`stride`, :attr:`padding`, :attr:`output_padding` can either be: - a single ``int`` -- in which case the same value is used for the height and width dimension - a single ``int`` -- in which case the same value is used for the height and width dimensions - a ``tuple`` of two ints -- in which case, the first `int` is used for the height dimension, and the second `int` for the width dimension"}
{"_id":"doc-en-pytorch-89807e395c83aa297144688018ba65eda0cbd3a26f5906aad71ea90d8b1422e8","title":"","text":"The transposed convolution operator multiplies each input value element-wise by a learnable kernel, and sums over the outputs from all input feature planes. **This module can be seen as the exact reverse of Conv3d**. It is sometimes (but incorrectly) refered to as a deconvolutional operation. This module can be seen as the gradient of Conv3d with respect to its input. It is also known as a fractionally-strided convolution or a deconvolution (although it is not an actual deconvolution operation). | :attr:`stride` controls the stride for the cross-correlation. | If :attr:`padding` is non-zero, then the input is implicitly zero-padded on both sides for :attr:`padding` number of points for :attr:`padding` number of points. | If :attr:`output_padding` is non-zero, then the output is implicitly zero-padded on one side for :attr:`output_padding` number of points | :attr:`groups` controls the connections between inputs and outputs. for :attr:`output_padding` number of points. | :attr:`dilation` controls the spacing between the kernel points; also known as the à trous algorithm. It is harder to describe, but this `link`_ has a nice visualization of what :attr:`dilation` does. | :attr:`groups` controls the connections between inputs and outputs. `in_channels` and `out_channels` must both be divisible by `groups`. | At groups=1, all inputs are convolved to all outputs. | At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels, and producing half the output channels, and both subsequently concatenated. At groups=`in_channels`, each input channel is convolved with its own set of filters (of size `out_channels // in_channels`). The parameters :attr:`kernel_size`, :attr:`stride`, :attr:`padding`, :attr:`output_padding` can either be: - a single ``int`` -- in which case the same value is used for the height and width dimension - a single ``int`` -- in which case the same value is used for the depth, height and width dimensions - a ``tuple`` of three ints -- in which case, the first `int` is used for the depth dimension, the second `int` for the width dimension and the third `int` for the width dimension"}
{"_id":"doc-en-pytorch-28b67a4b9138c2387f5881d8944477dc35c28a9bb9bd1c5f9753dae90fc8f2d2","title":"","text":"template static PyObject* wrap_tuple_fn(Args ... args) { PyObject *result = (*fn)(std::forward(args)...); THPObjectPtr result((*fn)(std::forward(args)...)); if (!result) return NULL; if (PyTuple_Check(result)) { return PyObject_CallFunctionObjArgs((PyObject*)&THPSizeType, result, NULL); if (PyTuple_Check(result.get())) { return PyObject_CallFunctionObjArgs((PyObject*)&THPSizeType, result.get(), NULL); } Py_INCREF(result); return result; return result.release(); } static auto sq_concat = PyTuple_Type.tp_as_sequence->sq_concat;"}
{"_id":"doc-en-pytorch-6cf5108df98855d196ce5921666304c199405c8558e9226ccfb264a5fa21b4d5","title":"","text":" FROM nvidia/cuda:8.0-devel-ubuntu16.04 FROM nvidia/cuda:8.0-cudnn6-devel-ubuntu16.04 RUN echo \"deb http://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1604/x86_64 /\" > /etc/apt/sources.list.d/nvidia-ml.list ENV CUDNN_VERSION 6.0.20 RUN apt-get update && apt-get install -y --no-install-recommends build-essential cmake git curl vim ca-certificates libjpeg-dev libpng-dev libcudnn6=$CUDNN_VERSION-1+cuda8.0 libcudnn6-dev=$CUDNN_VERSION-1+cuda8.0 && libpng-dev && rm -rf /var/lib/apt/lists/* RUN curl -o ~/miniconda.sh -O https://repo.continuum.io/miniconda/Miniconda3-4.2.12-Linux-x86_64.sh && "}
{"_id":"doc-en-pytorch-42b42fcaec1107f5b1e1e9203eb6d65a0f52c9b39f6376f110e60f106ca5d808","title":"","text":"CMAKE_PREFIX_PATH=\"$(dirname $(which conda))/../\" pip install -v . RUN git clone https://github.com/pytorch/vision.git && cd vision && pip install -v . WORKDIR /workspace RUN chmod -R a+w /workspace"}
{"_id":"doc-en-pytorch-06405422fd745b4c40f12f712d4a5d5c8b0538cc2e3c793902e24fdf9b7b87b3","title":"","text":"# (3) initialize mean square values and square gradient storage if not 'm' in state: state['m'] = x.new().resize_as_(dfdx).fill_(1) state['m'] = x.new().resize_as_(dfdx).zero_() state['tmp'] = x.new().resize_as_(dfdx)"}
{"_id":"doc-en-pytorch-d135710ef973222ecdc593d43b4acb154189b4178c2fd4ce3644b483113544c1","title":"","text":"# State initialization if len(state) == 0: state['step'] = 0 state['square_avg'] = grad.new().resize_as_(grad).fill_(1) state['square_avg'] = grad.new().resize_as_(grad).zero_() square_avg = state['square_avg'] alpha = group['alpha']"}
{"_id":"doc-en-pytorch-5d92750acac5053bde0067d760f33443260ed7937c34b3643b554d44f571ae4b","title":"","text":"# This will segfault if things have been erroneously released out.backward(torch.randn(out.size())) def test_norm_subgradient(self): def run_test(input_size, norm_deg): input = Variable(torch.zeros(*input_size), requires_grad=True) out = input.norm(norm_deg) out.backward() self.assertEqual(input.grad.data.abs().sum(), 0) run_test((10,), 2) run_test((10, 10), 2) run_test((10,), 3) def index_variable(shape, max_indices): if not isinstance(shape, tuple):"}
{"_id":"doc-en-pytorch-bb88e9a82610e455b2dfa75780521f23fcc10eb355dc449531dd48f740dfbaac","title":"","text":"ctx.keepdim = False if keepdim is None else keepdim if dim is None: ctx.norm = input.norm(p) ctx.save_for_backward(input) return input.new((ctx.norm,)) norm = input.norm(p) output = input.new((norm,)) else: if keepdim is not None: output = input.norm(p, dim, keepdim=keepdim) else: output = input.norm(p, dim) ctx.save_for_backward(input, output) return output ctx.save_for_backward(input, output) return output @staticmethod def backward(ctx, grad_output): if ctx.dim is None: input, = ctx.saved_variables if ctx.p == 2: scale_v = (grad_output / ctx.norm).expand_as(input) return input.mul(scale_v), None, None, None else: pow = input.abs().pow(ctx.p - 2) scale_v = (grad_output / ctx.norm ** (ctx.p - 1)).expand_as(input) return input.mul(pow).mul(scale_v), None, None, None input, output = ctx.saved_variables if ctx.dim is not None and ctx.keepdim is False and input.dim() != 1: grad_output = grad_output.unsqueeze(ctx.dim) output = output.unsqueeze(ctx.dim) if ctx.p == 2: grad_input = input.mul(grad_output).div(output) else: input, output = ctx.saved_variables input_pow = input.abs().pow(ctx.p - 2) output_pow = output.pow(ctx.p - 1) grad_input = input.mul(input_pow).mul(grad_output).div(output_pow) if ctx.keepdim is False and input.dim() != 1: grad_output = grad_output.unsqueeze(ctx.dim) output = output.unsqueeze(ctx.dim) # Special case at 0 where we return a subgradient containing 0 grad_input.masked_fill_(output == 0, 0) big_grad_output = grad_output.expand_as(input) if ctx.p == 2: big_output = output.expand_as(input) return input.mul(big_grad_output).div(big_output), None, None, None else: pow = input.abs().pow(ctx.p - 2) big_output = output.pow(ctx.p - 1).expand_as(input) return input.mul(pow).mul(big_grad_output).div(big_output), None, None, None return grad_input, None, None, None # TODO: renorm"}
{"_id":"doc-en-pytorch-2c28eb7e15016b4b59150140e984b4b28d906b8cabf3829f0b24c75313ac0a6b","title":"","text":"self.assertEqual(torch.mm(flattened_tensor, flattened_tensor.t()), torch.eye(rows) * gain ** 2, prec=1e-6) # Generates rand tensor with non-equal values. This ensures that duplicate # values won't be causing test failure for modules like MaxPooling. # size should be small, otherwise randperm fails / long overflows. def _rand_tensor_non_equal(*size): total = reduce(mul, size, 1) return torch.randperm(total).view(*size).double() def add_test(test): test_name = test.get_name()"}
{"_id":"doc-en-pytorch-41bb727092e60ecb2d77543a553d9973d50e33f29f86b2efd751b45004ffab9a","title":"","text":"dict( module_name='AdaptiveMaxPool1d', constructor_args=(3,), input=torch.rand(1, 3, 5), input=_rand_tensor_non_equal(1, 3, 5), ), dict( module_name='AdaptiveMaxPool2d', constructor_args=(3,), input=torch.rand(1, 3, 5, 6), input=_rand_tensor_non_equal(1, 3, 5, 6), desc='single', ), dict( module_name='AdaptiveMaxPool2d', constructor_args=((3, 4),), input=torch.rand(1, 3, 5, 6), input=_rand_tensor_non_equal(1, 3, 5, 6), desc='tuple', ), dict( module_name='AdaptiveMaxPool3d', constructor_args=(3,), input=torch.rand(2, 3, 5, 6, 7), input=_rand_tensor_non_equal(2, 3, 5, 6, 7), desc='single', ), dict( module_name='AdaptiveMaxPool3d', constructor_args=((3, 4, 5),), input=torch.rand(2, 3, 5, 6, 7), input=_rand_tensor_non_equal(2, 3, 5, 6, 7), desc='tuple', ), dict( module_name='AdaptiveMaxPool3d', constructor_args=(3,), input=torch.rand(2, 3, 12, 9, 3), input=_rand_tensor_non_equal(2, 3, 12, 9, 3), desc='single_nonatomic', ), dict( module_name='AdaptiveMaxPool3d', constructor_args=((3, 4, 5),), input=torch.rand(2, 3, 6, 4, 10), input=_rand_tensor_non_equal(2, 3, 6, 4, 10), desc='tuple_nonatomic', ), dict("}
{"_id":"doc-en-pytorch-28eb48af51e6a342b66680c0195c7b6db80fb09f911f090631a17f1d96f7c63f","title":"","text":"if momentum != 0: param_state = self.state[p] if 'momentum_buffer' not in param_state: buf = param_state['momentum_buffer'] = d_p.clone() buf = param_state['momentum_buffer'] = p.data.new().resize_as_(p.data).zero_() buf.mul_(momentum).add_(d_p) else: buf = param_state['momentum_buffer'] buf.mul_(momentum).add_(1 - dampening, d_p)"}
{"_id":"doc-en-pytorch-b45df8f12b7e3a66648d614e89960dbcc5572080953cfb9f3d1f0df99052f025","title":"","text":"THArgCheckWithCleanup(n_sample > 0, THCleanup(if (start_dim == 1) THTensor_(resize1d)(prob_dist, n_categories);), 2, \"cannot sample n_sample < 0 samples\"); \"cannot sample n_sample <= 0 samples\"); if (!with_replacement) {"}
{"_id":"doc-en-pytorch-41b3d62528c6fc247ca44188b09c63bd1502ada4282a0c3d265da4e409ab0026","title":"","text":"{ /* Get normalized cumulative distribution from prob distribution */ double sum = 0; double val; for (j=0; j sum += THStorage_(get)( val = THStorage_(get)( prob_dist->storage, prob_dist->storageOffset+i*prob_dist->stride[0]+j*prob_dist->stride[1] ); THArgCheckWithCleanup((val >= 0), THCleanup(THDoubleTensor_free(cum_dist); if (start_dim == 1) THTensor_(resize1d)(prob_dist, n_categories);), 2, \"invalid multinomial distribution (encountering probability entry < 0)\"); sum += val; THDoubleStorage_set( cum_dist->storage, cum_dist->storageOffset+j*cum_dist->stride[0], "}
{"_id":"doc-en-pytorch-d4d9807cf1a0f886d9cc6f63a72e2547d29c1b94170dabaab7b057692976be91","title":"","text":"T bern_uniform = bernoulli[idx]; int _mask = (int) THCNumerics::lt(bern_uniform, q[rand_ind]); output[idx] = J[rand_ind]*(1 -_mask) + (rand_ind+1L) * _mask; } } } template "}
{"_id":"doc-en-pytorch-5a75ac4cec00f52f5abb88353120ed574a4ba743dba026200b80f979d2f991dd","title":"","text":"__global__ void renormRowsL1(T* dist, long rows, long cols) { extern __shared__ unsigned char my_smem[]; T *smem = reinterpret_cast(my_smem); T zero = ScalarConvert::to(0); T val; for (int64_t row = blockIdx.x; row < rows; row += gridDim.x) { T sum = ScalarConvert::to(0); for (int64_t col = threadIdx.x; col < cols; col += blockDim.x) { sum = THCNumerics::add(sum, dist[row * cols + col]); val = dist[row * cols + col]; assert(THCNumerics::ge(val, zero)); sum = THCNumerics::add(sum, val); } sum = reduceBlock(smem, blockDim.x, sum, ReduceAdd(), ScalarConvert::to(0)); sum = reduceBlock(smem, blockDim.x, sum, ReduceAdd(), zero); if (threadIdx.x == 0) { assert(THCNumerics::gt(sum, zero)); smem[0] = sum; } __syncthreads();"}
{"_id":"doc-en-pytorch-6ba24c171ed755dd1fed4f2364f954ead8b1b2fd15e6604fb1a3fc05e82b3856","title":"","text":"// Each block handles one distribution // First pass, find the total sum of the distribution AccT sum = accZero; T val; for (int cat = threadIdx.x; cat < categories; cat += blockDim.x) { sum = THCNumerics::add( sum, ScalarConvert::to(dist[curDist * categories + cat])); val = dist[curDist * categories + cat]; assert(THCNumerics::ge(val, zero)); sum = THCNumerics::add(sum, ScalarConvert::to(val)); } // threadIdx.x == 0 has the sum value from this"}
{"_id":"doc-en-pytorch-0bddcecf8b0b6f915b14c981d43afbef7e8f1d7d873f11bcccfacdca8053e763","title":"","text":"if (threadIdx.x == 0) { // Make sure the sum of our distribution didn't overflow assert(!isinf(sum)); assert(THCNumerics::gt(sum, accZero)); asmem[0] = sum; smem[0] = sampled[curDist];"}
{"_id":"doc-en-pytorch-6df228497008e8ec4cd2ec8393bca92b7924400e4a996e7624da61de75ba792f","title":"","text":" #define __STDC_FORMAT_MACROS #include #ifdef _MSC_VER #include "}
{"_id":"doc-en-pytorch-88ca958657dda50101b09cbdd7703b45fbeec7c19f02de8e9ef082e239c181d4","title":"","text":" #define __STDC_FORMAT_MACROS #include #include "}
{"_id":"doc-en-pytorch-01ae53c02fda7e865dd84e9c0536a7a714381f7a5e1d5b4afcf69b9acf441040","title":"","text":"begin{array}{ll} i_t = sigma(W_{ii} x_t + b_{ii} + W_{hi} h_{(t-1)} + b_{hi}) f_t = sigma(W_{if} x_t + b_{if} + W_{hf} h_{(t-1)} + b_{hf}) g_t = tanh(W_{ig} x_t + b_{ig} + W_{hc} h_{(t-1)} + b_{hg}) g_t = tanh(W_{ig} x_t + b_{ig} + W_{hg} h_{(t-1)} + b_{hg}) o_t = sigma(W_{io} x_t + b_{io} + W_{ho} h_{(t-1)} + b_{ho}) c_t = f_t c_{(t-1)} + i_t g_t h_t = o_t tanh(c_t)"}
{"_id":"doc-en-pytorch-509df4cbc50856c183362b8314756709c583bc16a35e915c2862b0c0d81e0c39","title":"","text":" From PyTorch: Copyright (c) 2016- Facebook, Inc (Adam Paszke) Copyright (c) 2014- Facebook, Inc (Soumith Chintala) Copyright (c) 2011-2014 Idiap Research Institute (Ronan Collobert) Copyright (c) 2012-2014 Deepmind Technologies (Koray Kavukcuoglu) Copyright (c) 2011-2012 NEC Laboratories America (Koray Kavukcuoglu) Copyright (c) 2011-2013 NYU (Clement Farabet) Copyright (c) 2006-2010 NEC Laboratories America (Ronan Collobert, Leon Bottou, Iain Melvin, Jason Weston) Copyright (c) 2006 Idiap Research Institute (Samy Bengio) Copyright (c) 2001-2004 Idiap Research Institute (Ronan Collobert, Samy Bengio, Johnny Mariethoz) From Caffe2: Copyright (c) 2016-present, Facebook Inc. All rights reserved. All contributions by Facebook: Copyright (c) 2016 Facebook Inc. All contributions by Google: Copyright (c) 2015 Google Inc. All rights reserved. All contributions by Yangqing Jia: Copyright (c) 2015 Yangqing Jia All rights reserved. All contributions from Caffe: Copyright(c) 2013, 2014, 2015, the respective contributors All rights reserved. All other contributions: Copyright(c) 2015, 2016 the respective contributors All rights reserved. Caffe2 uses a copyright model similar to Caffe: each contributor holds copyright over their contributions to Caffe2. The project versioning records all such contribution and copyright details. If a contributor wants to further mark their specific copyright on a particular contribution, they should indicate their copyright solely in the commit message of the change when it is committed. All rights reserved. Redistribution and use in source and binary forms, with or without"}
{"_id":"doc-en-pytorch-753c1b39b878f15783a80cc227fb295abc1bf3334786b6346278d7ffb7c5b9ae","title":"","text":" From PyTorch: Copyright (c) 2016- Facebook, Inc (Adam Paszke) Copyright (c) 2014- Facebook, Inc (Soumith Chintala) Copyright (c) 2011-2014 Idiap Research Institute (Ronan Collobert) Copyright (c) 2012-2014 Deepmind Technologies (Koray Kavukcuoglu) Copyright (c) 2011-2012 NEC Laboratories America (Koray Kavukcuoglu) Copyright (c) 2011-2013 NYU (Clement Farabet) Copyright (c) 2006-2010 NEC Laboratories America (Ronan Collobert, Leon Bottou, Iain Melvin, Jason Weston) Copyright (c) 2006 Idiap Research Institute (Samy Bengio) Copyright (c) 2001-2004 Idiap Research Institute (Ronan Collobert, Samy Bengio, Johnny Mariethoz) From Caffe2: Copyright (c) 2016-present, Facebook Inc. All rights reserved. All contributions by Facebook: Copyright (c) 2016 Facebook Inc. All contributions by Google: Copyright (c) 2015 Google Inc. All rights reserved. All contributions by Yangqing Jia: Copyright (c) 2015 Yangqing Jia All rights reserved. All contributions from Caffe: Copyright(c) 2013, 2014, 2015, the respective contributors All rights reserved. All other contributions: Copyright(c) 2015, 2016 the respective contributors All rights reserved. Caffe2 uses a copyright model similar to Caffe: each contributor holds copyright over their contributions to Caffe2. The project versioning records all such contribution and copyright details. If a contributor wants to further mark their specific copyright on a particular contribution, they should indicate their copyright solely in the commit message of the change when it is committed. ======================================================================= Software under third_party ======================================================================="}
{"_id":"doc-en-pytorch-58c32ee618cd2026945484922d87b50ea8b6848cbb19868ad539b2435cf6f934","title":"","text":"def grid_sampler(input, grid, padding_mode): if cudnn.is_acceptable(input.data) and padding_mode == 'zeros' and input.dim() == 4: if (cudnn.is_acceptable(input.data) and padding_mode == 'zeros' and input.dim() == 4 and input.size(1) <= 1024): # as of cudnn 7102, will not work for larger than 1024 return torch.cudnn_grid_sampler(input, grid) else: return GridSampler.apply(input, grid, padding_mode)"}
{"_id":"doc-en-pytorch-1a87b46a559e99d0adda9f0cd7d53c46acddc4e5afca33491e82f08de291dae5","title":"","text":"upstream=\"$1\" pr=\"$2\" git diff --name-only \"$upstream\" \"$pr\" git diff --name-only \"$upstream\" \"$pr\" | grep -Eq '^(CMakeLists.txt|Makefile|.gitmodules|.jenkins/caffe2|binaries|caffe|caffe2|cmake|conda|docker|docs/caffe2|modules|scripts|third_party)' # For safety, unconditionally trigger for any changes. #git diff --name-only \"$upstream\" \"$pr\" | grep -Eq '^(CMakeLists.txt|Makefile|.gitmodules|.jenkins/caffe2|binaries|caffe|caffe2|cmake|conda|docker|docs/caffe2|modules|scripts|third_party)' "}
{"_id":"doc-en-pytorch-2bb6c5e2d9ed1208cf491f2bcfc8d310d0469970a1b9b02f702bac3f95fbfdd3","title":"","text":"upstream=\"$1\" pr=\"$2\" git diff --name-only \"$upstream\" \"$pr\" git diff --name-only \"$upstream\" \"$pr\" | grep -Eq '^(aten/|caffe2/|.jenkins/pytorch|docs/(make.bat|Makefile|requirements.txt|source)|mypy|requirements.txt|setup.py|test/|third_party/|tools/|.gitmodules|torch/)' # Now that PyTorch build depends on Caffe2, unconditionally trigger # for any changes. # TODO: Replace this with a NEGATIVE regex that allows us to blacklist # files (letting us skip builds when they are unnecessary) #git diff --name-only \"$upstream\" \"$pr\" | grep -Eq '^(aten/|caffe2/|.jenkins/pytorch|docs/(make.bat|Makefile|requirements.txt|source)|mypy|requirements.txt|setup.py|test/|third_party/|tools/|.gitmodules|torch/)' "}
{"_id":"doc-en-pytorch-298eee62557fb9197824322f107803eb18b39542355131523daefa15d436b22d","title":"","text":"unset(CUDA_ARCH_PTX CACHE) endif() if(DEFINED ENV{TORCH_CUDA_ARCH_LIST}) if($ENV{TORCH_CUDA_ARCH_LIST}) # Pass CUDA architecture directly set(__cuda_arch_bin $ENV{TORCH_CUDA_ARCH_LIST}) message(STATUS \"Set CUDA arch from TORCH_CUDA_ARCH_LIST: ${__cuda_arch_bin}\")"}
{"_id":"doc-en-pytorch-56f27ed053f767d884128d55d64bcd07034c9cc0e16c8f806aece875ffd41077","title":"","text":"BASE_DIR=$(pwd) cd torch/lib INSTALL_DIR=\"$(pwd)/tmp_install\" BASIC_C_FLAGS=\" -DTH_INDEX_BASE=0 -I$INSTALL_DIR/include -I$INSTALL_DIR/include/TH -I$INSTALL_DIR/include/THC \" BASIC_C_FLAGS=\" -DTH_INDEX_BASE=0 -DTH_GENERIC_USE_HALF=1 -DCUDA_HAS_FP16=1 -I$INSTALL_DIR/include -I$INSTALL_DIR/include/TH -I$INSTALL_DIR/include/THC \" LDFLAGS=\"-L$INSTALL_DIR/lib \" if [[ $(uname) == 'Darwin' ]]; then LDFLAGS=\"$LDFLAGS -Wl,-rpath,@loader_path\""}
{"_id":"doc-en-pytorch-5527c49435634724c39aa78664696d2e3b09f4d9ce230b657c6313f53b779db9","title":"","text":"- [NVIDIA CUDA](https://developer.nvidia.com/cuda-downloads) 7.5 or above - [NVIDIA CuDNN](https://developer.nvidia.com/cudnn) v5.x If you want to disable CUDA support, export environment variable `NO_CUDA=1`. #### Install optional dependencies On Linux"}
{"_id":"doc-en-pytorch-9b245cbb4cdf0e52d936ec300bd71a1d3bfaada73bbc3e344e9f7734199c4b32","title":"","text":"} std::string FunctionSignature::toString() const { // TODO: consider printing more proper schema strings with defaults, optionals, etc. std::ostringstream ss; bool keyword_already = false; ss << \"(\"; int i = 0; for (auto& param : params) { if (i != 0) { ss << \", \"; } if (param.keyword_only && !keyword_already) { ss << \"*, \"; keyword_already = true; } ss << param.type_name() << \" \" << param.name; i++; }"}
{"_id":"doc-en-pytorch-ad3603528f82c43ee91890682fa72cad7a5396b60f29cde4768f24d248474a19","title":"","text":"return ret; } Variable VariableType::as_variable(Tensor tensor) const { static Variable as_variable(Tensor tensor) { return make_variable(std::move(tensor)); } std::tuple VariableType::as_variable(std::tuple tensors) const { static std::tuple as_variable(std::tuple tensors) { return std::make_tuple<>( make_variable(std::move(std::get<0>(tensors))), make_variable(std::move(std::get<1>(tensors)))); } std::tuple VariableType::as_variable(std::tuple tensors) const { static std::tuple as_variable(std::tuple tensors) { return std::make_tuple<>( make_variable(std::move(std::get<0>(tensors))), make_variable(std::move(std::get<1>(tensors))), make_variable(std::move(std::get<2>(tensors)))); } std::tuple VariableType::as_variable(std::tuple tensors) const { static std::tuple as_variable(std::tuple tensors) { return std::make_tuple<>( make_variable(std::move(std::get<0>(tensors))), make_variable(std::move(std::get<1>(tensors))),"}
{"_id":"doc-en-pytorch-7d0c77fd4c8089d46335d6de8e596e3d54016e125c67efd3a3172007d59d80b7","title":"","text":"make_variable(std::move(std::get<3>(tensors)))); } std::vector VariableType::as_variable(TensorList tl) const { static std::vector as_variable(TensorList tl) { std::vector variables; for (auto& t : tl) { variables.emplace_back(make_variable(std::move(t)));"}
{"_id":"doc-en-pytorch-51b2149dc54231e077acd735c5a589581935e5f104a0e4f54368387fdd9b3558","title":"","text":"} } variable_list flatten(const TensorList& tensors) { static variable_list flatten(const TensorList& tensors) { return cast_tensor_list(tensors); } variable_list flatten(const Tensor& x, const TensorList& y) { static variable_list flatten(const Tensor& x, const TensorList& y) { std::vector r; r.reserve(1 + y.size()); r.emplace_back(x);"}
{"_id":"doc-en-pytorch-b014bf93d687c3ccf5f199ad845b288b12dfdbaebbd939c86af9d171ac3b3b2d","title":"","text":"return r; } variable_list flatten(const Tensor& x, const TensorList& y, const Tensor& z) { static variable_list flatten(const Tensor& x, const TensorList& y, const Tensor& z) { std::vector r; r.reserve(2 + y.size()); r.emplace_back(x);"}
{"_id":"doc-en-pytorch-2eb8e144c61d9fc354240acf4150cfcbb2f2030be31cd71d45e142b4006e1181","title":"","text":"return r; } std::vector as_tensor_list(std::vector &vars) { static std::vector as_tensor_list(std::vector &vars) { std::vector tensors; for (auto& v : vars) { tensors.emplace_back(std::move(v));"}
{"_id":"doc-en-pytorch-45ab2894423a4c059b0693d4024d38ac2f47fc04cbe18c56c1f2b68c277708c0","title":"","text":"return self.clone(); } std::vector to_arg_sizes(TensorList tensors, int64_t dim) { static std::vector to_arg_sizes(TensorList tensors, int64_t dim) { std::vector arg_sizes(tensors.size()); for (size_t i = 0; i < tensors.size(); ++i) { arg_sizes[i] = tensors[i].size(dim);"}
{"_id":"doc-en-pytorch-bd6eb8f661d6c0e6d0c08a4b156cbb4fbcefc38db1a4c87ae1eb9142ac37a1db","title":"","text":"std::vector unpack(at::TensorList tl, const char *name, int pos) const; std::vector unpack_idxs(at::TensorList tl, const char *name, int pos) const; Variable as_variable(Tensor tensor) const; std::tuple as_variable(std::tuple tensor) const; std::tuple as_variable(std::tuple tensor) const; std::tuple as_variable(std::tuple tensor) const; std::vector as_variable(TensorList tensor) const; Variable maybe_wrap(Tensor data, const Variable & self, bool inplace) const; private: at::Type* baseType; std::string str;"}
{"_id":"doc-en-pytorch-af6382f3e1651d38af354b5d3b54703e77febcdc137b83436bd2a4decd13e125","title":"","text":":members: Padding Layers -------------- :hidden:`ReflectionPad2d` ~~~~~~~~~~~~~~~~~~~~~~~~~ .. autoclass:: ReflectionPad2d :members: :hidden:`ReplicationPad2d` ~~~~~~~~~~~~~~~~~~~~~~~~~~ .. autoclass:: ReplicationPad2d :members: :hidden:`ReplicationPad3d` ~~~~~~~~~~~~~~~~~~~~~~~~~~ .. autoclass:: ReplicationPad3d :members: :hidden:`ZeroPad2d` ~~~~~~~~~~~~~~~~~~~ .. autoclass:: ZeroPad2d :members: :hidden:`ConstantPad2d` ~~~~~~~~~~~~~~~~~~~~~~~ .. autoclass:: ConstantPad2d :members: Non-linear Activations ----------------------------------"}
{"_id":"doc-en-pytorch-2522f531f7405c6f46bf5539d95bb508a5dae6085838eb03c5d1bf94c9abc900","title":"","text":"from .module import Module from .utils import _quadruple, _ntuple from .._functions.padding import ConstantPad2d as F_ConstantPad2d from .. import functional as F # TODO: grad_output size asserts in THNN class ReflectionPad2d(Module): r\"\"\"Pads the input tensor using the reflection of the input boundary. Args: padding (int, tuple): the size of the padding. If is int, uses the same padding in all boundaries. If a 4-tuple, uses (paddingLeft, paddingRight, paddingTop, paddingBottom) Shape: - Input: :math:`(N, C, H_{in}, W_{in})` - Output: :math:`(N, C, H_{out}, W_{out})` where :math:`H_{out} = H_{in} + paddingTop + paddingBottom` :math:`W_{out} = W_{in} + paddingLeft + paddingRight` Examples:: >>> m = nn.ReflectionPad2d(3) >>> input = autograd.Variable(torch.randn(16, 3, 320, 480)) >>> output = m(input) >>> # using different paddings >>> m = nn.ReflectionPad2d((3, 3, 6, 6)) >>> output = m(input) \"\"\" def __init__(self, padding): super(ReflectionPad2d, self).__init__() self.padding = _quadruple(padding) def forward(self, input): return self._backend.ReflectionPad2d(*self.padding)(input) return F.pad(input, self.padding, 'reflect') def __repr__(self): return self.__class__.__name__ + ' ' + str(self.padding) class ReplicationPad2d(Module): r\"\"\"Pads the input tensor using replication of the input boundary. Args: padding (int, tuple): the size of the padding. If is int, uses the same padding in all boundaries. If a 4-tuple, uses (paddingLeft, paddingRight, paddingTop, paddingBottom) Shape: - Input: :math:`(N, C, H_{in}, W_{in})` - Output: :math:`(N, C, H_{out}, W_{out})` where :math:`H_{out} = H_{in} + paddingTop + paddingBottom` :math:`W_{out} = W_{in} + paddingLeft + paddingRight` Examples:: >>> m = nn.ReplicationPad2d(3) >>> input = autograd.Variable(torch.randn(16, 3, 320, 480)) >>> output = m(input) >>> # using different paddings >>> m = nn.ReplicationPad2d((3, 3, 6, 6)) >>> output = m(input) \"\"\" def __init__(self, padding): super(ReplicationPad2d, self).__init__() self.padding = _quadruple(padding) def forward(self, input): return self._backend.ReplicationPad2d(*self.padding)(input) return F.pad(input, self.padding, 'replicate') def __repr__(self): return self.__class__.__name__ + ' ' + str(self.padding) class ReplicationPad3d(Module): r\"\"\"Pads the input tensor using replication of the input boundary. Args: padding (int, tuple): the size of the padding. If is int, uses the same padding in all boundaries. 
If a 6-tuple, uses (paddingLeft, paddingRight, paddingTop, paddingBottom, paddingFront, paddingBack) Shape: - Input: :math:`(N, C, D_{in}, H_{in}, W_{in})` - Output: :math:`(N, C, D_{out}, H_{out}, W_{out})` where :math:`D_{out} = D_{in} + paddingFront + paddingBack` :math:`H_{out} = H_{in} + paddingTop + paddingBottom` :math:`W_{out} = W_{in} + paddingLeft + paddingRight` Examples:: >>> m = nn.ReplicationPad3d(3) >>> input = autograd.Variable(torch.randn(16, 3, 8, 320, 480)) >>> output = m(input) >>> # using different paddings >>> m = nn.ReplicationPad3d((3, 3, 6, 6, 1, 1)) >>> output = m(input) \"\"\" def __init__(self, padding): super(ReplicationPad3d, self).__init__() self.padding = _ntuple(6)(padding) def forward(self, input): return self._backend.ReplicationPad3d(*self.padding)(input) return F.pad(input, self.padding, 'replicate') def __repr__(self): return self.__class__.__name__ + ' ' + str(self.padding) class ZeroPad2d(Module): r\"\"\"Pads the input tensor boundaries with zero. Args: padding (int, tuple): the size of the padding. If is int, uses the same padding in all boundaries. If a 4-tuple, uses (paddingLeft, paddingRight, paddingTop, paddingBottom) Shape: - Input: :math:`(N, C, H_{in}, W_{in})` - Output: :math:`(N, C, H_{out}, W_{out})` where :math:`H_{out} = H_{in} + paddingTop + paddingBottom` :math:`W_{out} = W_{in} + paddingLeft + paddingRight` Examples:: >>> m = nn.ZeroPad2d(3) >>> input = autograd.Variable(torch.randn(16, 3, 320, 480)) >>> output = m(input) >>> # using different paddings >>> m = nn.ZeroPad2d((3, 3, 6, 6)) >>> output = m(input) \"\"\" def __init__(self, padding): super(ZeroPad2d, self).__init__() self.padding = _quadruple(padding) def forward(self, input): return F_ConstantPad2d(pad=self.padding, value=0)(input) return F.pad(input, self.padding, 'constant', 0) def __repr__(self): return self.__class__.__name__ + ' ' + str(self.padding) class ConstantPad2d(Module): r\"\"\"Pads the input tensor boundaries with a constant value. Args: padding (int, tuple): the size of the padding. If is int, uses the same padding in all boundaries. If a 4-tuple, uses (paddingLeft, paddingRight, paddingTop, paddingBottom) Shape: - Input: :math:`(N, C, H_{in}, W_{in})` - Output: :math:`(N, C, H_{out}, W_{out})` where :math:`H_{out} = H_{in} + paddingTop + paddingBottom` :math:`W_{out} = W_{in} + paddingLeft + paddingRight` Examples:: >>> m = nn.ConstantPad2d(3, 3.5) >>> input = autograd.Variable(torch.randn(16, 3, 320, 480)) >>> output = m(input) >>> # using different paddings >>> m = nn.ConstantPad2d((3, 3, 6, 6), 3.5) >>> output = m(input) \"\"\" def __init__(self, padding, value): super(ConstantPad2d, self).__init__()"}
{"_id":"doc-en-pytorch-90fb48d90cd835d50d5300bc7c39898223e4cb9cb2d25621c59a1ef350be474e","title":"","text":"self.value = value def forward(self, input): return F_ConstantPad2d(pad=self.padding, value=self.value)(input) return F.pad(input, self.padding, 'constant', self.value) def __repr__(self): return self.__class__.__name__ + ' ' + str(self.padding)"}
{"_id":"doc-en-pytorch-a12353ff64e96f5a3b946258b0d15a90e1197c4d9549bf80e55efe9959db81ed","title":"","text":"(Cross, (), ((S, 3), (S, 3))), (Cross, (), ((S, 3, S), (S, 3, S), 1), 'dim'), (Inverse, (), ((S, S),), '', (), [skipIfNoLapack]), (Gesv, (), ((S, S), (S, S)), '', (), [skipIfNoLapack]), (Clone, (), ((S, M, S),)), (Squeeze, (), ((S, 1, M, 1),)), # TODO: enable neg dim checks"}
{"_id":"doc-en-pytorch-865c528397122b80d2fa3f7329a6cd9644460ce373c9f5220f3d7ab3c8894ffe","title":"","text":"('cross', (S, 3), ((S, 3),)), ('cross', (S, 3, S), ((S, 3, S), 1), 'dim'), ('inverse', (S, S), (), '', (), [skipIfNoLapack]), ('gesv', (S, S), ((S, S),), '', (), [skipIfNoLapack]), ('clone', (S, M, S), ()), ('eq', (S, S, S), ((S, S, S),)), ('ne', (S, S, S), ((S, S, S),)),"}
{"_id":"doc-en-pytorch-d02956b17130a7d99e7c22145b27f4c3851b955356ce3df7c7930eef9e41b25d","title":"","text":"def backward(ctx, grad_output): inverse, = ctx.saved_variables return -torch.mm(inverse.t(), torch.mm(grad_output, inverse.t())) class Gesv(Function): @staticmethod def forward(ctx, b, a): # TODO see if one can backprop through LU X, LU = torch.gesv(b, a) ctx.save_for_backward(X, a) ctx.mark_non_differentiable(LU) return X, LU @staticmethod def backward(ctx, grad_output, grad_LU=None): X, a = ctx.saved_variables grad_b, _ = torch.gesv(grad_output, a.t()) grad_a = -torch.mm(grad_b, X.t()) return grad_b, grad_a "}
{"_id":"doc-en-pytorch-3cacfff754899ea51c3a38ec7b5a2c596b7e9832c0a665740f5920de43cf301c","title":"","text":"def inverse(self): return Inverse.apply(self) def gesv(self, a): return Gesv.apply(self, a) def multinomial(self, num_samples=1, with_replacement=False): return Multinomial(num_samples, with_replacement)(self)"}
{"_id":"doc-en-pytorch-c0164bf057cb1858c7c153ede4ef6b2e0965cc23db23c15f3ebb2121bb0adf7f","title":"","text":">>> hx = Variable(torch.randn(3, 20)) >>> output = [] >>> for i in range(6): ... hx = rnn(input, hx) ... output[i] = hx ... hx = rnn(input[i], hx) ... output.append(hx) \"\"\" def __init__(self, input_size, hidden_size, bias=True, nonlinearity=\"tanh\"):"}