Columns: docstring (string, 52 to 499 chars), function (string, 67 to 35.2k chars), __index_level_0__ (int64, 52.6k to 1.16M).
dot "distance" between t1 and t2. Args: t1: A tensor. t2: A tensor that is the same size as t1. name: Optional name for this op. Returns: The dot distance between t1 and t2.
def dot_distance(t1, t2, name=None): with tf.name_scope(name, 'dot_distance', [t1, t2]) as scope: return -dot_product(t1, t2, name=scope)
339,353
Square of the l2 distance between t1 and t2.

Args:
  t1: A tensor.
  t2: A tensor that is the same size as t1.
  name: Optional name for this op.

Returns:
  The squared l2 distance between t1 and t2.

def l2_distance_sq(t1, t2, name=None):
  with tf.name_scope(name, 'l2_distance_sq', [t1, t2]) as scope:
    t1 = tf.convert_to_tensor(t1, name='t1')
    t2 = tf.convert_to_tensor(t2, name='t2')
    return length_squared(tf.subtract(t1, t2), name=scope)
339,354
L2 distance between t1 and t2, with the gradient of the square root capped.

Args:
  t1: A tensor.
  t2: A tensor that is the same size as t1.
  epsilon: A lower bound for the distance; useful to avoid taking the sqrt of
    very small values, which can blow up gradients.
  name: Optional name for this op.

Returns:
  The l2 distance between t1 and t2.

def l2_distance(t1, t2, epsilon=1e-12, name=None):
  with tf.name_scope(name, 'l2_distance', [t1, t2]) as scope:
    t1 = tf.convert_to_tensor(t1, name='t1')
    t2 = tf.convert_to_tensor(t2, name='t2')
    return tf.sqrt(tf.maximum(l2_distance_sq(t1, t2, scope), epsilon))
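A quick sanity check of the distance helpers above; a minimal sketch assuming a TF 1.x session, matching this codebase's API:

import tensorflow as tf  # TF 1.x assumed

t1 = tf.constant([[1.0, 2.0], [3.0, 4.0]])
t2 = tf.constant([[1.0, 0.0], [0.0, 4.0]])
with tf.Session() as sess:
  # Reduces over the last dimension: one euclidean distance per row.
  print(sess.run(l2_distance(t1, t2)))  # -> [2. 3.]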
339,355
L1 distance between t1 and t2.

Args:
  t1: A tensor.
  t2: A tensor that is the same size as t1.
  name: Optional name for this op.

Returns:
  The l1 distance between t1 and t2.

def l1_distance(t1, t2, name=None):
  with tf.name_scope(name, 'l1_distance', [t1, t2]) as scope:
    t1 = tf.convert_to_tensor(t1, name='t1')
    t2 = tf.convert_to_tensor(t2, name='t2')
    sub = tf.subtract(t1, t2)
    reduction_dim = _last_index(sub, 1)
    return tf.reduce_sum(tf.abs(sub), reduction_dim, name=scope)
339,356
Creates a leaky_relu. This is an alternate non-linearity to relu. The leaky
part of the relu may prevent dead neurons in a model, since the gradient
doesn't go completely to 0.

Args:
  x: The input tensor.
  name: Optional name for this op.

Returns:
  x if x > 0, otherwise 0.01 * x.

def leaky_relu(x, name=None):
  with tf.name_scope(name, 'leaky_relu', [x]) as scope:
    x = tf.convert_to_tensor(x, name='x')
    return tf.where(tf.less(x, 0.0), 0.01 * x, x, name=scope)
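For reference, a hedged usage sketch of leaky_relu (TF 1.x session assumed):

x = tf.constant([-2.0, -0.5, 0.0, 3.0])
with tf.Session() as sess:
  print(sess.run(leaky_relu(x)))  # -> [-0.02  -0.005  0.  3.]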
339,357
Computes softplus with a scale factor to sharpen the hinge. This is an
alternate non-linearity to relu. It has a similar shape, but it has a smooth
transition from the linear part to 0.

Args:
  x: A tensor.
  scale: A float that sharpens the curve.
  name: Optional name.

Returns:
  y = log(1 + exp(scale * x)) / scale

def softplus(x, scale=1.0, name=None):
  if scale == 1:
    return tf.nn.softplus(x)
  else:
    with tf.name_scope(name, 'softplus', [x]):
      scale = tf.convert_to_tensor(scale, dtype=x.dtype.base_dtype)
      return tf.nn.softplus(x * scale) / scale
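The scale factor controls how closely the curve hugs the relu hinge; a small sketch (TF 1.x session assumed):

x = tf.constant([-1.0, 0.0, 1.0])
with tf.Session() as sess:
  print(sess.run(softplus(x)))              # ~[0.313 0.693 1.313]
  print(sess.run(softplus(x, scale=10.0)))  # sharper: ~[0.000 0.069 1.000]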
339,358
L1 normalizes x.

Args:
  x: The tensor to normalize.
  dim: The dimension to normalize along.
  epsilon: Lower bound on the norm, used to avoid exploding gradients as the
    norm approaches 0.
  name: Optional name for this op.

Returns:
  x normalized along dim.

def l1_normalize(x, dim, epsilon=1e-12, name=None):
  with tf.name_scope(name, 'l1_normalize', [x]) as scope:
    x = tf.convert_to_tensor(x, name='x')
    x = tf.verify_tensor_all_finite(x, 'Error at input %s' % scope)
    x_norm = tf.maximum(tf.reduce_sum(tf.abs(x), [dim], keep_dims=True),
                        epsilon)
    return tf.div(x, x_norm, name=scope)
339,359
Drops every other value from the tensor and returns a 1D tensor. This is
useful if you are running multiple inputs through a model tower before
splitting them and you want to line it up with some other data.

Args:
  x: The target tensor.
  name: The name for this op, defaults to every_other.

Returns:
  A tensorflow op.

def every_other(x, name=None):
  with tf.name_scope(name, 'every_other', [x]) as scope:
    x = tf.convert_to_tensor(x, name='x')
    return tf.reshape(
        tf.slice(tf.reshape(x, [-1, 2]), [0, 0], [-1, 1]), [-1], name=scope)
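A minimal sketch of what every_other produces (TF 1.x session assumed):

x = tf.constant([10, 20, 30, 40, 50, 60])
with tf.Session() as sess:
  print(sess.run(every_other(x)))  # -> [10 30 50]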
339,360
Computes the dot product of t1 and t2.

Args:
  t1: A rank 2 tensor.
  t2: A tensor that is the same size as t1.
  keep_dims: If true, reduction does not change the rank of the input.
  name: Optional name for this op.
  reduction_dim: The dimension to reduce; by default choose the last one,
    and if no shape is specified guess 1.

Returns:
  The dot product.

def dot_product(t1, t2, keep_dims=False, name=None, reduction_dim=None):
  with tf.name_scope(name, 'dot', [t1, t2]) as scope:
    t1 = tf.convert_to_tensor(t1, name='t1')
    t2 = tf.convert_to_tensor(t2, name='t2')
    mul = tf.multiply(t1, t2)
    if not reduction_dim:
      reduction_dim = _last_index(mul, 1)
    return tf.reduce_sum(mul, reduction_dim, name=scope, keep_dims=keep_dims)
339,361
Computes the squared length of x.

Args:
  x: A tensor.
  keep_dims: If true, reduction does not change the rank of the input.
  name: Optional name for this op.
  reduction_dim: The dimension to reduce; by default choose the last one,
    and if no shape is specified guess 1.

Returns:
  The squared length of x.

def length_squared(x, keep_dims=False, name=None, reduction_dim=None):
  with tf.name_scope(name, 'length_squared', [x]) as scope:
    x = tf.convert_to_tensor(x, name='x')
    if not reduction_dim:
      reduction_dim = _last_index(x, 1)
    return tf.reduce_sum(
        tf.square(x), reduction_dim, keep_dims=keep_dims, name=scope)
339,362
Splits a tensor by unzipping along the split_dim.

For example, the following array split into 2 would be:
  [1, 2, 3, 4, 5, 6] -> [1, 3, 5], [2, 4, 6]
and by 3:
  [1, 2, 3, 4] -> [1, 4], [2], [3]

Args:
  x: The tensor to split.
  split_dim: The dimension to split along.
  current_length: Current length along the split_dim.
  num_splits: The number of splits.
  name: Optional name for this op.

Returns:
  A length num_splits sequence.

def unzip(x, split_dim, current_length, num_splits=2, name=None):
  with tf.name_scope(name, 'unzip', [x]) as scope:
    x = tf.convert_to_tensor(x, name='x')
    # There is probably a more efficient way to do this.
    all_splits = tf.split(
        value=x, num_or_size_splits=current_length, axis=split_dim, name=scope)
    splits = [[] for _ in xrange(num_splits)]
    for i in xrange(current_length):
      splits[i % num_splits].append(all_splits[i])
    return [tf.concat(s, split_dim) for s in splits]
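A hedged example of unzip on a rank 1 tensor (TF 1.x session assumed):

x = tf.constant([1, 2, 3, 4, 5, 6])
a, b = unzip(x, split_dim=0, current_length=6, num_splits=2)
with tf.Session() as sess:
  print(sess.run([a, b]))  # -> [1 3 5] and [2 4 6]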
339,363
Returns activation(x, *activation_args, **activation_kwargs). This applies
the given activation and adds useful summaries specific to the activation.

Args:
  books: The bookkeeper.
  x: The tensor to apply activation to.
  activation: An activation function.
  activation_args: Optional additional arguments for the activation.
  activation_kwargs: Optional keyword args for activation.

Returns:
  A tensor with activation applied to x.

def apply_activation(books,
                     x,
                     activation,
                     activation_args=(),
                     activation_kwargs=None):
  if activation is None:
    return x
  if activation_kwargs is None:
    activation_kwargs = {}
  y = activation(x, *activation_args, **activation_kwargs)
  if activation in (tf.nn.relu, functions.leaky_relu, functions.softplus):
    books.add_scalar_summary(
        tf.reduce_mean(tf.cast(tf.less(x, 0.0), tf.float32)),
        '%s/zeros' % y.op.name)
  elif activation is tf.nn.relu6:
    books.add_scalar_summary(
        tf.reduce_mean(tf.cast(tf.less(x, 0.0), tf.float32)),
        '%s/zeros' % y.op.name)
    books.add_scalar_summary(
        tf.reduce_mean(tf.cast(tf.greater(x, 6.0), tf.float32)),
        '%s/sixes' % y.op.name)
  elif activation in (functions.l2_normalize, tf.nn.l2_normalize,
                      functions.l1_normalize):
    books.add_scalar_summary(
        tf.reduce_mean(tf.sqrt(tf.reduce_sum(tf.square(x), 1))),
        '%s/length' % y.op.name)
  return y
339,366
Expands the kernel spec into a length 2 list.

Args:
  kernel_spec: An integer or a length 1 or 2 sequence that is expanded to a
    list.

Returns:
  A length 2 list.

def _kernel(kernel_spec):
  if isinstance(kernel_spec, tf.compat.integral_types):
    return [kernel_spec, kernel_spec]
  elif len(kernel_spec) == 1:
    return [kernel_spec[0], kernel_spec[0]]
  else:
    assert len(kernel_spec) == 2
    return kernel_spec
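The expansion is easiest to see by example (plain Python, no session needed):

_kernel(3)       # -> [3, 3]
_kernel([5])     # -> [5, 5]
_kernel((3, 1))  # -> (3, 1): length-2 specs pass through unchanged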
339,375
Expands the stride spec into a length 4 list.

Args:
  stride_spec: If length 0, 1 or 2 then assign the inner dimensions,
    otherwise return stride_spec if it is length 4.

Returns:
  A length 4 list.

def _stride(stride_spec):
  if stride_spec is None:
    return [1, 1, 1, 1]
  elif isinstance(stride_spec, tf.compat.integral_types):
    return [1, stride_spec, stride_spec, 1]
  elif len(stride_spec) == 1:
    return [1, stride_spec[0], stride_spec[0], 1]
  elif len(stride_spec) == 2:
    return [1, stride_spec[0], stride_spec[1], 1]
  else:
    assert len(stride_spec) == 4
    return stride_spec
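And the corresponding stride expansions into NHWC order:

_stride(None)    # -> [1, 1, 1, 1]
_stride(2)       # -> [1, 2, 2, 1]
_stride([2])     # -> [1, 2, 2, 1]
_stride((2, 3))  # -> [1, 2, 3, 1]: only the inner spatial dims are set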
339,376
Returns the underlying tensor if tensor is wrapped, otherwise tensor itself.

Args:
  tensor: The tensor to unwrap.

Returns:
  Tensor, or if it is a pretty tensor, the unwrapped version.

Raises:
  ValueError: if tensor holds a sequence.

def unwrap(tensor):
  while isinstance(tensor, (PrettyTensor, Loss)):
    tensor = tensor.tensor
  return tensor
339,379
Creates an input layer representing the given tensor.

Args:
  tensor: The tensor.
  books: The bookkeeper; this is usually not required unless you are
    building multiple `tf.Graph`s.
  tensor_shape: An optional shape that will be set on the Tensor or verified
    to match the tensor.

Returns:
  A layer.

def wrap(tensor, books=None, tensor_shape=None):
  if books is None:
    books = bookkeeper.for_default_graph()
  if isinstance(tensor, PrettyTensor):
    return tensor.as_layer()
  elif isinstance(tensor, UnboundVariable):

    def set_input_from_unbound_var(data):
      if data is not None:
        return wrap(data, books)
      else:
        return None

    return _DeferredLayer(books, set_input_from_unbound_var, [tensor], {})
  else:
    tensor = tf.convert_to_tensor(tensor, name='input')
    if tensor_shape:
      _set_shape_on_tensor(tensor, tensor_shape)
    return Layer(books, tensor=tensor, name=tensor.name)
339,380
Creates an input layer representing the given sequence of tensors.

Args:
  sequence: A sequence of tensors.
  books: The bookkeeper.
  tensor_shape: An optional shape that will be set on the Tensor or verified
    to match the tensor.

Returns:
  A layer.

def wrap_sequence(sequence, books=None, tensor_shape=None):
  if books is None:
    books = bookkeeper.for_default_graph()
  my_sequence = [
      wrap(t, books=books, tensor_shape=tensor_shape) for t in sequence]
  return Layer(books, sequence=my_sequence, name=my_sequence[0].name)
339,382
Joins the given tensors and sets the head of output to the result.

Args:
  tensors: A sequence of Layers or SequentialLayerBuilders to join.
  output: A pretty_tensor whose head is set to the result.
  join_function: A function to join the tensors; defaults to concat on the
    last dimension.
  name: A name that is used for the name_scope.

Returns:
  The result of calling with_tensor on output.

Raises:
  ValueError: if tensors is None or empty.

def join_pretty_tensors(tensors, output, join_function=None, name='join'):
  if not tensors:
    raise ValueError('pretty_tensors must be a non-empty sequence.')
  with output.g.name_scope(name):
    if join_function is None:
      # Use depth concat.
      last_dim = len(tensors[0].shape) - 1
      return output.with_tensor(tf.concat(tensors, last_dim))
    else:
      return output.with_tensor(join_function(tensors))
339,387
Remove the distracting lines from the stored tracebacks. This also reduces
memory overhead by removing the frame contents. This is very important when
doing long unrolls.

Args:
  result: The result to process.
  processed: A set of already processed nodes, used to stop early.

def _strip_unnecessary_contents_from_stack(result, processed):
  # pylint: disable=protected-access
  if isinstance(result, (PrettyTensor, Loss)):
    if result.is_sequence():
      for tensor in result.sequence:
        _strip_unnecessary_contents_from_stack(tensor, processed)
      return
    else:
      result = result.tensor
  if hasattr(result, 'op'):
    result = result.op
  if result in processed:
    return
  else:
    processed.add(result)
  trace = []
  found = False
  for f, line_no, method, _ in result._traceback:
    if (method in ('_replace_deferred', '_construct') and
        f.endswith('pretty_tensor_class.py')):
      found = True
      continue
    trace.append((f, line_no, method, {}))
  result._traceback = trace

  # Assume that if we didn't find any PT deferred lines, then this node is
  # not part of the deferred construction.
  if not found:
    return
  for inp in result.inputs:
    _strip_unnecessary_contents_from_stack(inp, processed)
339,391
Creates a function by binding the arguments in the given order.

Args:
  *binding_order: The unbound variables. This must include all values.

Returns:
  A function that takes the arguments of binding_order.

Raises:
  ValueError: If the bindings are missing values or include unknown values.

def as_fn(self, *binding_order):
  if len(binding_order) != len(self.unbound_vars):
    raise ValueError('All vars must be specified.')
  for arg in binding_order:
    if arg not in self.unbound_vars:
      raise ValueError('Unknown binding: %s' % arg)

  def func(*args, **kwargs):
    if len(binding_order) != len(args):
      raise ValueError('Missing values, expects: %s' % binding_order)
    values = dict(zip(binding_order, args))
    values.update(kwargs)
    return self.construct(**values)

  func.__doc__ = _gen_ipython_string(func, binding_order, [], func.__doc__)
  return func
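A hedged sketch of how as_fn is commonly used with a Pretty Tensor template (assumes the usual pt.template entry point; the variable name 'x' is illustrative):

import prettytensor as pt
import tensorflow as tf

images = tf.placeholder(tf.float32, [None, 784])
template = pt.template('x').fully_connected(100).fully_connected(10)
model_fn = template.as_fn('x')  # binds the single unbound var 'x'
logits = model_fn(images)       # equivalent to template.construct(x=images)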
339,400
Internal method to fill absent values in the kwargs with the defaults.

Args:
  _args: A list of arguments to replace if a subset is required. Name chosen
    to prevent conflicts with kwargs.
  **kwargs: The arguments to replace with defaults.

Returns:
  A map with the same fields as kwargs, but absent values are filled with
  defaults.

def _replace_args_with_defaults(self, _args=None, **kwargs):
  if _args is None:
    _args = six.iterkeys(kwargs)
  my_defaults = self.defaults
  for k in _args:
    if k not in kwargs:
      if k in my_defaults:
        kwargs[k] = my_defaults[k]
      elif k in _defaults:
        kwargs[k] = _defaults[k]
  return kwargs
339,405
Attaches the template to this such that _key=this layer.

Note: names were chosen to avoid conflicts with any likely unbound_var keys.

Args:
  _template: The template to construct.
  _key: The key that this layer should replace.
  **unbound_var_values: The values for the unbound_vars.

Returns:
  A new layer with operation applied.

Raises:
  ValueError: If _key is specified twice or there is a problem computing the
    template.

def attach_template(self, _template, _key, **unbound_var_values):
  if _key in unbound_var_values:
    raise ValueError('%s specified twice.' % _key)
  unbound_var_values[_key] = self
  return _template.as_layer().construct(**unbound_var_values)
339,406
This replaces all deferred nodes (UnboundVariables and _DeferredLayers). If
arg is a sequence or a dict, then its deferred values are also replaced.

Args:
  arg: The argument to replace. If a list or a dict, then all items are also
    replaced.
  context: The context for this replacement.

Returns:
  The replaced values, or arg if it is not a deferred node.

def _replace_deferred(self, arg, context):
  if isinstance(arg, UnboundVariable):
    return context[arg]
  elif isinstance(arg, _DeferredLayer):
    # pylint: disable=protected-access
    return arg._construct(context)
  elif isinstance(arg, tuple):
    return tuple(self._replace_deferred(x, context) for x in arg)
  elif (isinstance(arg, collections.Sequence) and
        not isinstance(arg, six.string_types)):
    return [self._replace_deferred(x, context) for x in arg]
  elif isinstance(arg, collections.Mapping):
    return {k: self._replace_deferred(v, context)
            for k, v in six.iteritems(arg)}
  else:
    return arg
339,429
Constructs this by calling the deferred method. This assumes that all
unbound_vars have been specified in context, and if this layer has already
been computed in this context, then the previously constructed value will be
returned.

Args:
  context: A dict of UnboundVariables/_DeferredLayers to their values.

Returns:
  The result of calling the given method on this layer.

def _construct(self, context):
  with self.g.as_default():
    if self._pass_through:
      # pylint: disable=protected-access
      return self._pass_through._construct(context)
    current_value = context.get(self, None)
    assert current_value is not _unspecified, 'Circular dependency'
    if current_value is not None:
      return current_value
    context[self] = _unspecified
    method_args = self._replace_deferred(self._method_args, context)
    method_kwargs = self._replace_deferred(self._method_kwargs, context)
    result = self._method(*method_args, **method_kwargs)
    _strip_unnecessary_contents_from_stack(result, set())
    context[self] = result
    return result
339,430
Creates a new template with the given unbound variables bound.

Args:
  **bindings: Arguments for every deferred parameter.

Returns:
  A new template with the given bindings.

Raises:
  ValueError: If any of the bindings do not correspond to unbound variables.

def bind(self, **bindings):
  new_context = dict(self._partial_context)
  unknown_keys = []
  for k, v in six.iteritems(bindings):
    if k not in self._unbound_vars:
      unknown_keys.append(k)
    else:
      # Only bind known keys; unknown ones are reported below.
      new_context[self._unbound_vars[k]] = v
  if unknown_keys:
    raise ValueError(
        'The following keys are not associated with any unbound vars: %s, '
        'legal values are %s' %
        (unknown_keys, list(self._unbound_vars.keys())))
  return _DeferredLayer(self.bookkeeper,
                        None,
                        (),
                        {},
                        scope=self._scope,
                        defaults=self._defaults,
                        pass_through=self,
                        partial_context=new_context)
339,431
Constructs the graph and returns either a tensor or a sequence.

Args:
  **bindings: Arguments for every deferred parameter.

Returns:
  The value that is placed into this.

def construct(self, **bindings):
  context = _assign_values_to_unbound_vars(self._unbound_vars, bindings)
  context.update(self._partial_context)
  return self._construct(context)
339,432
Attaches the template to this, with _key supplied by this layer.

Note: names were chosen to avoid conflicts.

Args:
  _template: The template to construct.
  _key: The key that this layer should replace.
  **unbound_var_values: The values for the unbound_vars.

Returns:
  A new layer with operation applied.

Raises:
  ValueError: If _key is specified twice or there is a problem computing the
    template.

def attach_template(self, _template, _key, **unbound_var_values):
  if _key in unbound_var_values:
    raise ValueError('%s specified twice.' % _key)
  unbound_var_values[_key] = self
  return _DeferredLayer(self.bookkeeper,
                        _template.as_layer().construct,
                        [],
                        unbound_var_values,
                        scope=self._scope,
                        defaults=self._defaults,
                        partial_context=self._partial_context)
339,435
Constructs the graph and returns either a tensor or a sequence.

Note: This method requires that this SequentialLayerBuilder holds a template.

Args:
  **bindings: Arguments for every deferred parameter.

Returns:
  The value that is placed into this.

Raises:
  ValueError: if this doesn't hold a template.

def construct(self, **bindings):
  if hasattr(self._head, 'construct'):
    return self._head.construct(**bindings)
  else:
    raise ValueError('Cannot call construct on a non-template: %s' %
                     type(self._head))
339,437
Assigns arguments to the decorator.

Args:
  assign_defaults: A sequence of strings for the default values that should
    be provided.
  method_name: If provided, use this as the method_name instead of the
    wrapped function's name.
  overwrite: If False, throw an exception if this method has already been
    registered. True should be used in interactive environments or with
    great care.

def __init__(self, assign_defaults=(), method_name=None, overwrite=False):
  if isinstance(assign_defaults, str):
    self._assign_defaults = [assign_defaults]
  else:
    self._assign_defaults = assign_defaults
  self._method_name = method_name
  self._overwrite = overwrite

  _valid_defaults.update(self._assign_defaults)
  default_args = sorted(_valid_defaults)
  default_values = [None] * len(_valid_defaults)
  if six.PY2:
    default_func = PrettyTensor.with_defaults.__func__
  else:
    default_func = PrettyTensor.with_defaults
  _set_ipython_string(default_func, default_args, default_values,
                      _original_set_defaults_doc)
  _set_ipython_string(defaults_scope, default_args, default_values,
                      _original_defaults_scope_doc)
339,445
Assigns arguments to the decorator.

Args:
  assign_defaults: A sequence of strings for the default values that should
    be provided. Defaults are shared across methods.
  method_name: If provided, use this as the method_name instead of the
    wrapped function's name.
  overwrite: If True, overwrites an existing definition.

def __init__(self, assign_defaults=(), method_name=None, overwrite=False):
  super(self.__class__, self).__init__(assign_defaults=assign_defaults,
                                       method_name=method_name,
                                       overwrite=overwrite)
339,449
Creates a deferred node with captured scope.

Args:
  func: The original function to call.
  input_layer: The input_layer.
  deferred_args: The arguments that will be used by the deferred function.
  deferred_kwargs: The keyword args for the deferred function.
  name: The name of this layer.

Returns:
  A _DeferredLayer that will execute func in the correct scopes.

def create_deferred(self, func, input_layer, deferred_args, deferred_kwargs,
                    name):
  my_defaults = _defaults

  def _with_method_complete(*args, **kwargs):
    input_layer = args[0]
    with input_layer.g.as_default(), defaults_scope(**my_defaults), \
        tf.name_scope(name):
      return input_layer._method_complete(func(*args, **kwargs))

  # The deferred layer passes on the scope of the source layer so that the
  # construction scope matches that of the immediate version.
  full_args = [input_layer]
  full_args.extend(deferred_args)
  partial_context = {}
  if isinstance(input_layer, _DeferredLayer):
    partial_context = input_layer._partial_context
  return _DeferredLayer(input_layer.bookkeeper,
                        scopes.Template(None, _with_method_complete),
                        full_args,
                        deferred_kwargs,
                        scope=input_layer._scope,
                        defaults=input_layer.defaults,
                        partial_context=partial_context)
339,450
Assigns arguments to the decorator.

Args:
  assign_defaults: A sequence of strings for the default values that should
    be provided. Defaults are shared across methods.
  method_name: If provided, use this as the method_name instead of the
    wrapped function's name.

def __init__(self, assign_defaults=(), method_name=None):
  super(self.__class__, self).__init__(assign_defaults=assign_defaults,
                                       method_name=method_name)
339,452
Builds a `ReplayableQueue` that draws from a regular `input_queue`.

Args:
  input_queue: The queue to draw from.
  replay_size: The size of the replay buffer.
  batch_size: The size of each batch.

Returns:
  A ReplayableQueue.

def build_from_queue(cls, input_queue, replay_size, batch_size):
  return cls(lambda: input_queue.dequeue_many(batch_size),
             replay_size,
             batch_size=batch_size)
339,456
Downloads Shakespeare, converts it into ASCII codes and chunks it.

Args:
  chunk_size: The dataset is broken down so that it is shaped into
    batches x chunk_size.

Returns:
  A numpy array of ASCII codes shaped into batches x chunk_size.

def shakespeare(chunk_size):
  file_name = maybe_download(
      'http://cs.stanford.edu/people/karpathy/char-rnn/', 'shakespear.txt')
  with open(file_name) as f:
    shakespeare_full = f.read()

  # Truncate the data so that it divides evenly into chunks.
  # (Integer division; the original used py2's `/` on ints.)
  length = (len(shakespeare_full) // chunk_size) * chunk_size
  if length < len(shakespeare_full):
    shakespeare_full = shakespeare_full[:length]
  arr = np.array([convert_to_int(c) for c in shakespeare_full])
  return arr.reshape((len(arr) // chunk_size, chunk_size))
339,464
Opens the baby_names csv file and produces a numpy array.

Args:
  max_length: The maximum length; 15 was the longest name when this was
    written. Short entries will be padded with the EOS marker.

Returns:
  A numpy array of the names converted to ascii codes, the labels and an
  array of lengths.

Raises:
  ValueError: if max_length is too small.

def baby_names(max_length=15):
  names = []
  lengths = []
  targets = []
  with open(os.path.join(os.path.dirname(sys.modules[__name__].__file__),
                         'baby_names.csv'), 'rb') as f:
    first = True
    for l in csv.reader(f, delimiter=','):
      if first:
        first = False
        continue
      assert len(l) == 4, l
      name = l[0]
      if max_length < len(name):
        raise ValueError('Max length is too small: %d > %d' %
                         (max_length, len(name)))
      chars = [convert_to_int(c) for c in name]
      names.append(chars + ([EOS] * (max_length - len(chars))))
      lengths.append([len(name)])
      values = [float(l[2]), float(l[3])]
      if abs(sum(values) - 1) > 0.001:
        raise ValueError('Each row must sum to 1: %s' % l)
      targets.append(values)
  return np.array(names), np.array(targets), np.array(lengths)
339,465
Applies batch normalization to x as specified in arguments.

Args:
  x: A Pretty Tensor.
  arguments: Either a boolean to batch_normalize or a
    BatchNormalizationArguments.

Returns:
  x with batch normalization applied.

def batch_normalize_with_arguments(x, arguments):
  x = prettytensor.wrap(x)
  # Backwards compatibility.
  if isinstance(arguments, bool):
    if arguments:
      return x.batch_normalize()
    else:
      return x

  # pylint: disable=protected-access
  kwargs = arguments._asdict()
  defaults = prettytensor._defaults
  # pylint: enable=protected-access
  for arg in ('learned_moments_update_rate', 'variance_epsilon',
              'scale_after_normalization'):
    if kwargs.get(arg, None) is None:
      if arg in defaults:
        kwargs[arg] = defaults[arg]
      else:
        del kwargs[arg]
  return x.batch_normalize(**kwargs)
339,467
Creates a multi layer network of fully_connected layers. Each layer is 100
neurons. Please change this to experiment with architectures.

Args:
  images: The input images.
  labels: The labels as dense one-hot vectors.

Returns:
  A softmax result.

def multilayer_fully_connected(images, labels):
  # Pretty Tensor is a thin wrapper on Tensors.
  # Change this method to experiment with other architectures.
  images = pt.wrap(images)
  with pt.defaults_scope(activation_fn=tf.nn.relu, l2loss=0.00001):
    return (images.flatten()
            .fully_connected(100)
            .fully_connected(100)
            .softmax_classifier(10, labels))
339,468
Creates a multi layer convolutional network. The architecture is similar to
that defined in LeNet 5. Please change this to experiment with architectures.

Args:
  images: The input images.
  labels: The labels as dense one-hot vectors.

Returns:
  A softmax result.

def lenet5(images, labels):
  images = pt.wrap(images)
  with pt.defaults_scope(activation_fn=tf.nn.relu, l2loss=0.00001):
    return (images.conv2d(5, 20)
            .max_pool(2, 2)
            .conv2d(5, 50)
            .max_pool(2, 2)
            .flatten()
            .fully_connected(500)
            .softmax_classifier(10, labels))
339,469
Creates a variable scope and a name scope. If a variable_scope is provided,
this will reenter that variable scope. However, if none is provided then the
variable scope will match the generated part of the name scope.

Args:
  names: A tuple of name_scope, variable_scope or None.

Yields:
  The result of name_scope and variable_scope as a tuple.

def var_and_name_scope(names):
  # pylint: disable=protected-access
  if not names:
    yield None, None
  else:
    name, var_scope = names
    with tf.name_scope(name) as scope:
      # TODO(eiderman): This is a workaround until the variable_scope updates
      # land in a TF release.
      old_vs = tf.get_variable_scope()
      if var_scope is None:
        count = len(name.split('/'))
        scoped_name = '/'.join(scope.split('/')[-count - 1:-1])
        full_name = (old_vs.name + '/' + scoped_name).lstrip('/')
      else:
        full_name = var_scope.name

      vs_key = tf.get_collection_ref(variable_scope._VARSCOPE_KEY)
      try:
        # TODO(eiderman): Remove this hack or fix the full file.
        try:
          vs_key[0] = tf.VariableScope(
              old_vs.reuse,
              name=full_name,
              initializer=old_vs.initializer,
              regularizer=old_vs.regularizer,
              caching_device=old_vs.caching_device)
        except AttributeError:
          vs_key[0] = variable_scope._VariableScope(
              old_vs.reuse, name=full_name, initializer=old_vs.initializer)

        vs_key[0].name_scope = scope
        yield scope, vs_key[0]
      finally:
        vs_key[0] = old_vs
339,474
Creates a template for the given function.

Args:
  name: The variable_scope to use; if None the current scope is captured.
  func: The function to apply each time.

def __init__(self, name, func):
  self._func = func
  if name:
    self._var_scope = None
    self._name = name
  else:
    self._var_scope = tf.get_variable_scope()
    self._name = None
  self._reuse = None
  self._stacktrace = traceback.format_stack()[:-3]
339,478
Creates a 2 layer LSTM model with dropout.

Args:
  text_in: The input text as ASCII ordinals in a Tensor.
  timesteps: The number of timesteps in the sequence.
  phase: Phase controls whether or not dropout is active. In training mode
    we want to perform dropout, but in test we want to disable it.

Returns:
  The logits.

def create_model(text_in, timesteps, phase):
  with pt.defaults_scope(activation_fn=tf.nn.relu, l2loss=0.00001):
    # The embedding lookup must be placed on a cpu.
    with tf.device('/cpu:0'):
      embedded = text_in.embedding_lookup(CHARS, [EMBEDDING_SIZE])
    # Because the sequence LSTM expects each timestep to be its own Tensor,
    # we need to cleave the sequence.
    # Below we can build a stacked 2 layer LSTM by just chaining them
    # together. You can stack as many layers as you want.
    lstm = (embedded
            .cleave_sequence(timesteps)
            .sequence_lstm(LOWER)
            .sequence_lstm(UPPER))

    # The classifier is much more efficient if it runs across the entire
    # dataset at once, so we want to squash (i.e. uncleave).
    # Note: if phase is test, dropout is a noop.
    return (lstm.squash_sequence()
            .dropout(keep_prob=0.8, phase=phase)
            .fully_connected(CHARS, activation_fn=None))
339,481
Flattens this. If preserve_batch is True, the result is rank 2 and the first
dim (batch) is unchanged. Otherwise the result is rank 1.

Args:
  input_layer: The Pretty Tensor object, supplied.
  preserve_batch: If True (the default), then preserve the first dimension.

Returns:
  A LayerWrapper with the flattened tensor.

def flatten(input_layer, preserve_batch=True):
  if preserve_batch:
    return reshape(input_layer, [DIM_SAME, -1])
  else:
    return reshape(input_layer, [-1])
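A shape-only sketch of both modes (the wrapped tensor here is illustrative):

x = pt.wrap(tf.zeros([32, 4, 4, 8]))
print(x.flatten().shape)                      # [32, 128]: batch preserved
print(x.flatten(preserve_batch=False).shape)  # [4096]: fully collapsed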
339,486
Cuts off the gradient at this point. This works on both sequence and regular
Pretty Tensors.

Args:
  input_layer: The input.

Returns:
  A new Pretty Tensor of the same type with stop_gradient applied.

def stop_gradient(input_layer):
  if input_layer.is_sequence():
    result = [tf.stop_gradient(t) for t in input_layer.sequence]
    return input_layer.with_sequence(result)
  else:
    return tf.stop_gradient(input_layer)
339,487
Applies the given operation to `input_layer` and creates a summary.

Args:
  input_layer: The input layer for this op.
  operation: An operation that takes a tensor and the supplied args.
  *op_args: Extra arguments for operation.
  **op_kwargs: Keyword arguments for the operation.

Returns:
  A new layer with operation applied.

def apply_with_summary(input_layer, operation, *op_args, **op_kwargs):
  return layers.apply_activation(input_layer.bookkeeper,
                                 input_layer.tensor,
                                 operation,
                                 activation_args=op_args,
                                 activation_kwargs=op_kwargs)
339,489
Applies the given operation to this after expanding op_args.

Args:
  input_layer: The input layer for this op.
  operation: An operation that takes a tensor and the supplied args.
  *op_args: Extra arguments for operation.
  **op_kwargs: Keyword arguments for the operation.

Returns:
  A new layer with operation applied.

def _rapply(input_layer, operation, *op_args, **op_kwargs):
  op_args = list(op_args)
  op_args.append(input_layer.tensor)
  return input_layer.with_tensor(operation(*op_args, **op_kwargs))
339,490
Applies the given operation to this without adding any summaries.

Args:
  input_layer: The input layer for this op.
  operation: An operation that takes a tensor and the supplied args.
  *op_args: Extra arguments for operation.
  **op_kwargs: Keyword arguments for the operation.

Returns:
  A new layer with operation applied.

def apply_op(input_layer, operation, *op_args, **op_kwargs):
  return input_layer.with_tensor(
      operation(input_layer.tensor, *op_args, **op_kwargs))
339,491
Joins the provided PrettyTensors with this using the join function.

Args:
  input_layer: The input layer for this op.
  others: Sequence of PrettyTensor objects.
  include_self: Whether or not this includes itself, or if the value is only
    derived from others.
  join_function: The function to use for joining; must accept a list of
    tensors. Use None for concat on the final dimension.

Returns:
  self.

def join(input_layer, others, include_self=True, join_function=None):
  if include_self:
    list_of_tensors = [input_layer]
    list_of_tensors.extend(others)
  else:
    list_of_tensors = others
  return prettytensor.join_pretty_tensors(list_of_tensors, input_layer,
                                          join_function)
339,493
Unzips this Tensor along the split_dim into num_splits equal chunks.

Examples:

* `[1, 2, 3, 4] -> [1, 3], [2, 4]`
* `[[1, 1], [2, 2], [3, 3], [4, 4]] -> [[1, 1], [3, 3]], [[2, 2], [4, 4]]`

Args:
  input_layer: The chainable object, supplied.
  split_dim: The dimension to split along. Defaults to batch.
  num_splits: The number of splits.

Returns:
  A list of PrettyTensors.

Raises:
  ValueError: If split_dim is out of range or isn't divided evenly by
    num_splits.

def unzip(input_layer, split_dim=0, num_splits=2):
  shape = input_layer.shape
  _check_split_dims(num_splits, split_dim, shape)
  splits = functions.unzip(input_layer, split_dim, shape[split_dim],
                           num_splits)
  return input_layer.with_sequence(splits)
339,495
Splits this Tensor along the split_dim into num_splits equal chunks.

Examples:

* `[1, 2, 3, 4] -> [1, 2], [3, 4]`
* `[[1, 1], [2, 2], [3, 3], [4, 4]] -> [[1, 1], [2, 2]], [[3, 3], [4, 4]]`

Args:
  input_layer: The chainable object, supplied.
  split_dim: The dimension to split along. Defaults to batch.
  num_splits: The number of splits.

Returns:
  A list of PrettyTensors.

Raises:
  ValueError: If split_dim is out of range or isn't divided evenly by
    num_splits.

def split(input_layer, split_dim=0, num_splits=2):
  shape = input_layer.shape
  _check_split_dims(num_splits, split_dim, shape)
  splits = tf.split(value=input_layer, num_or_size_splits=num_splits,
                    axis=split_dim)
  return input_layer.with_sequence(splits)
339,497
Maps the given function across this sequence. To map an entire template
across the sequence, use the `as_fn` method on the template.

Args:
  input_layer: The input tensor.
  fn: A function of 1 argument that is applied to each item in the sequence.

Returns:
  A new sequence Pretty Tensor.

Raises:
  ValueError: If the input_layer does not hold a sequence.

def map_(input_layer, fn):
  if not input_layer.is_sequence():
    raise ValueError('Can only map a sequence.')
  return [fn(x) for x in input_layer]
339,499
Given a set of numpy arrays, produce slices of batch_size.

Note: You can use itertools.cycle to have this repeat forever.

Args:
  batch_size: The batch_size for each array.
  *arrays: A list of arrays.

Yields:
  A list of slices from the arrays of length batch_size, except the last one
  which will contain the rest.

Raises:
  ValueError: If arrays aren't all the same length or no arrays are
    provided.

def feed_numpy(batch_size, *arrays):
  if not arrays:
    raise ValueError('Arrays cannot be empty.')
  size = len(arrays[0])
  for a in arrays:
    if size != len(a):
      raise ValueError('All arrays must be the same size.')
  count = int(size / batch_size)
  for i in xrange(count):
    start = i * batch_size
    end = start + batch_size
    yield [x[start:end] for x in arrays]
  if count * batch_size < size:
    # Use count * batch_size directly so this also works when count == 0
    # (the original referenced the loop variable `end`, which is undefined
    # when the loop never runs).
    yield [x[count * batch_size:] for x in arrays]
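Because feed_numpy is pure numpy, it is easy to sanity check; a minimal sketch:

import numpy as np

xs = np.arange(10)
ys = np.arange(10) * 2
for batch in feed_numpy(4, xs, ys):
  print([b.tolist() for b in batch])
# -> [[0, 1, 2, 3], [0, 2, 4, 6]]
#    [[4, 5, 6, 7], [8, 10, 12, 14]]
#    [[8, 9], [16, 18]]  (the short remainder batch)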
339,502
Provide a slice based on the global_step. This is useful when the entire
data array can be stored in memory because it allows you to feed the data
very efficiently.

Args:
  data: A numpy array or tensor.
  batch_size: The batch size for the produced data.
  name: An optional name for this data.
  global_step: A global step variable that is used to read the data. If None
    then the default prettytensor global_step is used.

Returns:
  A tensor that produces the given data.

def slice_constant(data, batch_size=32, name='constant_data',
                   global_step=None):
  with tf.name_scope(name):
    all_data = tf.convert_to_tensor(data)
    global_step = global_step or bookkeeper.global_step()
    count = len(data) // batch_size  # integer division (was py2 `/`)
    extra = len(data) - count * batch_size
    if extra:
      offset = tf.mod(global_step, count)
      return tf.slice(all_data, offset * batch_size, batch_size)
    else:
      offset = tf.mod(global_step, count + 1)
      return tf.slice(all_data, offset * batch_size,
                      tf.where(tf.equal(offset, count), extra, batch_size))
339,504
Takes care of starting any local servers and stopping queues on exit. In
general, the Runner is designed to work with any user provided session, but
this provides a convenience for properly stopping the queues.

Args:
  master: The master session to use.
  config: A tf.ConfigProto or None.

Yields:
  A session.

def session(self, master='', config=None):
  session_manager = SESSION_MANAGER_FACTORY()
  # Initialization is handled manually at a later point and session_manager
  # is just used for distributed compatibility.
  with session_manager.prepare_session(master, None, config=config,
                                       init_fn=lambda _: None) as sess:
    try:
      yield sess
    finally:
      self.stop_queues()
339,506
Loads the model from the most recent checkpoint. This gets the most current
list of checkpoints each time it is called.

Args:
  sess: The current session.
  latest_filename: The filename for the latest set of checkpoints; defaults
    to 'checkpoints'.

Returns:
  The loaded checkpoint, or None if it failed to load.

def load_from_checkpoint(self, sess, latest_filename=None):
  # Set list of not-yet-deleted checkpoints.
  self._create_initializers()
  if self._save_path:
    ckpt = tf.train.get_checkpoint_state(
        os.path.dirname(self._save_path), latest_filename)
    if ckpt and ckpt.all_model_checkpoint_paths:
      # Copy it because last_checkpoints is immutable.
      # Manually configure a new Saver so that we get the latest snapshots.
      self._saver = tf.train.Saver(saver_def=self._saver.as_saver_def())
      self._saver.set_last_checkpoints(list(ckpt.all_model_checkpoint_paths))
  if self._saver.last_checkpoints:
    self._saver.restore(sess, self._saver.last_checkpoints[-1])
    return self._saver.last_checkpoints[-1]
  else:
    return None
339,510
Trains the given model.

Args:
  train_op: The training operation.
  cost_to_log: A cost to log.
  num_steps: Number of batches to run.
  feed_vars: A list or tuple of the variables that will be fed.
  feed_data: A generator that produces tuples of the same length as
    feed_vars.
  print_every: Print and save every so many steps.

Returns:
  `cost_to_log` from the final step.

def train_model(self, train_op, cost_to_log, num_steps, feed_vars=(),
                feed_data=None, print_every=100):
  costs = [train_op]
  if (isinstance(cost_to_log, collections.Sequence) and
      not isinstance(cost_to_log, six.string_types)):
    costs.extend(cost_to_log)
  else:
    costs.append(cost_to_log)
  return self.run_model(costs, num_steps, feed_vars=feed_vars,
                        feed_data=feed_data, print_every=print_every)[2:]
339,516
Waits for a new checkpoint to be available and then loads it.

Args:
  sess: The current session.
  current_checkpoint: The current checkpoint, or None to just load the next
    one.
  sleep_seconds: How long to sleep between checks.

Returns:
  The next checkpoint to use.

def load_new_checkpoint_when_available(self, sess, current_checkpoint,
                                       sleep_seconds=10):
  while True:
    # Load the checkpoint.
    next_checkpoint = self.load_from_checkpoint(sess)
    if not next_checkpoint or next_checkpoint == current_checkpoint:
      print('Model not yet available, sleeping for %d seconds: '
            'path %s; found: %s' %
            (sleep_seconds, os.path.dirname(self._save_path),
             current_checkpoint))
      sys.stdout.flush()
      time.sleep(sleep_seconds)
    else:
      return next_checkpoint
339,521
Creates a bookkeeper for the default graph.

Args:
  *args: Arguments to pass into Bookkeeper's constructor.
  **kwargs: Arguments to pass into Bookkeeper's constructor.

Returns:
  A new Bookkeeper.

Raises:
  ValueError: If args or kwargs are provided and the Bookkeeper already
    exists.

def for_default_graph(*args, **kwargs):
  graph = tf.get_default_graph()
  collection = graph.get_collection(_BOOKKEEPER)
  if collection:
    if args or kwargs:
      raise ValueError('Requesting construction of a BookKeeper that already '
                       'exists: %s %s' % (args, kwargs))
    return collection[0]
  else:
    books = BOOKKEEPER_FACTORY(*args, g=graph, **kwargs)
    graph.add_to_collection(_BOOKKEEPER, books)
    return books
339,526
Creates a Bookkeeper for a new graph. You must use `m.g.as_default()` to put
the graph in scope:

  m = Bookkeeper.for_new_graph()
  with m.g.as_default():
    ...

Args:
  *args: Arguments to pass into Bookkeeper's constructor.
  **kwargs: Arguments to pass into Bookkeeper's constructor.

Returns:
  A new Bookkeeper.

def for_new_graph(*args, **kwargs):
  graph = tf.Graph()
  with graph.as_default():
    return for_default_graph(*args, **kwargs)
339,527
Creates a new group for op_list if it has changed.

Args:
  group: The current group. It is returned if op_list is unchanged.
  op_list: The list of operations to check.
  name: The name to use if a new group is created.

Returns:
  Either group or a new group (or, if op_list is empty, no_op).

def regroup_if_changed(group, op_list, name=None):
  has_deltas = isinstance(op_list, sequence_with_deltas.SequenceWithDeltas)
  if (group is None or
      len(group.control_inputs) != len(op_list) or
      (has_deltas and op_list.has_changed())):
    if has_deltas:
      op_list.mark()
    if op_list:
      return tf.group(*op_list, name=name)
    else:
      return tf.no_op(name=name)
  else:
    return group
339,529
Creates a loss that is the sum of all specified losses.

Args:
  losses: A sequence of losses to include.
  regularize: Whether or not to include regularization losses.
  include_marked: Whether or not to use the marked losses.
  name: The name for this variable.

Returns:
  A single tensor that is the sum of all losses.

def create_composite_loss(losses=None, regularize=True, include_marked=True,
                          name='cost'):
  books = for_default_graph()
  return books.create_composite_loss(losses, regularize,
                                     include_marked=include_marked, name=name)
339,530
Creates a Bookkeeper.

Args:
  g: A graph; if not specified then the default graph is used.
  default_device: A default device or function.
  global_step: A variable to use as a global step.

Raises:
  ValueError: If global_step is not an integer variable.

def __init__(self, g=None, default_device=None, global_step=None):
  # pylint: disable=redefined-outer-name
  if g is None:
    self._g = tf.get_default_graph()
  else:
    self._g = g
  self._train_op = None
  # List of summaries to collect.
  self._summary_tags = set()
  if global_step and global_step.dtype.base_dtype not in (tf.int32, tf.int64):
    raise ValueError('Global step must be an int32 or int64 variable: %s' %
                     global_step.dtype)
  self._global_step = global_step
  if default_device:
    # pylint: disable=protected-access
    self.g._device_function_stack.append(default_device)
  self._recurrent_state = None
  self.reset_summary_collections()
339,532
Appends a loss to the total loss for the network.

Args:
  loss: Append this loss operation.
  name: The name for this loss; defaults to loss.op.name.
  regularization: Set to True if this is a regularization loss.
  add_summaries: Set to True if you want to see scalar and average
    summaries.

def add_loss(self, loss, name=None, regularization=False, add_summaries=True):
  # TODO(eiderman): Strip name out and just rely on the name scope.
  _ = name  # Eliminates pylint warning.
  if regularization:
    self._g.add_to_collection(GraphKeys.REGULARIZATION_LOSSES, loss)
  tf.add_to_collection(GraphKeys.LOSSES, loss)
  if add_summaries:
    self.add_scalar_summary(loss, 'loss')
    self.add_average_summary(loss, 'loss_average')
339,540
Creates a loss that is the sum of all specified losses.

Args:
  losses: A sequence of losses to include.
  regularize: Whether or not to include regularization losses.
  include_marked: Whether or not to use the marked losses.
  name: The name for this variable.

Returns:
  A single tensor that is the sum of all losses.

Raises:
  ValueError: if there are no losses.

def create_composite_loss(self, losses, regularize=True, include_marked=True,
                          name='cost'):
  all_losses = []
  if losses:
    all_losses.extend(losses)
  if include_marked:
    all_losses.extend(self.marked_losses)
  if not all_losses:
    raise ValueError('No losses specified!')
  if regularize:
    all_losses.extend(self.regularization_losses)
  with self._g.as_default():
    result = tf.add_n(all_losses, name=name)
    self.add_scalar_summary(result)
    return result
339,541
Adds a state to the state saver.

Args:
  state_name: The name of this state.
  initial_state: The initial state vector. Only zeros are supported.
  batch_size: The batch_size, or None for unknown.

def add_state(self, state_name, initial_state, batch_size=None):
  state_shape = initial_state.get_shape().as_list()
  full_shape = [batch_size] + state_shape
  if not batch_size:
    # TODO(): -1 is now reserved for unknown, so this should be
    # updated, but that requires coordination with the binary and is
    # checkpoint incompatible.
    # TODO(eiderman): When we make the above breaking change, we should make
    # the C++ client use the initial state instead of passing in zeros.
    shape_proto = self._as_shape_proto([0] + state_shape)
    batch_size = 1
  else:
    shape_proto = self._as_shape_proto([batch_size] + state_shape)

  # Add a constant tensor of zeros. At training time, this will initialize
  # the state with the initial_state - at inference time, this node is
  # replaced by a feed.
  tiles = [batch_size] + ([1] * len(initial_state.get_shape()))
  feed_op = tf.placeholder_with_default(
      tf.tile(tf.expand_dims(initial_state, [0]), tiles),
      shape=full_shape,
      name='%s_feed' % state_name)
  s = {'feed_op': feed_op,
       'feed_type': initial_state.dtype,
       'feed_shape': shape_proto}
  self._states[state_name] = s
339,542
Converts a vector of class ordinals (one per batch element) into a dense
one-hot encoding.

Args:
  labels: The labels input.
  class_count: The number of classes as an int.

Returns:
  One dense one-hot vector for each item in the batch.

Raises:
  ValueError: If labels is not rank 1.
  TypeError: If class_count is not an integer or labels is not an integer
    Tensor.

def to_dense_one_hot(labels, class_count):
  if not isinstance(class_count, tf.compat.integral_types):
    raise TypeError('class_count must be an integer type.')
  if labels.dtype.base_dtype not in (tf.int32, tf.int64):
    raise TypeError('Labels must be an integer: %s' % labels.dtype)
  if labels.get_shape().ndims != 1:
    raise ValueError('Labels must be a rank 1 tensor: %s' % labels.get_shape())

  dtype = labels.dtype.base_dtype
  class_tensor = tf.convert_to_tensor(class_count, dtype=dtype,
                                      name='class_count')

  # Extract the batch from the shape so this is batch independent.
  batch = tf.gather(tf.shape(labels), 0)
  count = tf.expand_dims(tf.range(0, limit=batch), 1)
  labels = tf.expand_dims(labels, 1)
  batch = tf.gather(tf.shape(labels), 0)

  if dtype != tf.int32:
    count = tf.cast(count, dtype)
    batch = tf.cast(batch, dtype)

  result = tf.sparse_to_dense(
      tf.concat([count, labels], 1),
      tf.concat([tf.expand_dims(batch, 0),
                 tf.expand_dims(class_tensor, 0)], 0),
      1.0, 0.0)
  result.set_shape([labels.get_shape().dims[0], class_count])
  return result
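A minimal usage sketch (TF 1.x session assumed); newer TF versions also provide tf.one_hot for the same purpose:

labels = tf.constant([0, 2, 1])
with tf.Session() as sess:
  print(sess.run(to_dense_one_hot(labels, class_count=3)))
# -> [[1. 0. 0.]
#     [0. 0. 1.]
#     [0. 1. 0.]]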
339,548
Calculates the Cross Entropy of input_ vs labels.

Args:
  input_: A rank 2 `Tensor` or a Pretty Tensor holding the logits.
  labels: A rank 2 tf.float32 or tf.float64 tensor containing the labels.
  name: The optional name.
  loss_weight: A weight to scale the loss. Used when there are multiple
    losses.
  per_example_weights: A weighting for each example.

Returns:
  A loss.

Raises:
  ValueError: if labels is None or the type is not float or double.

def cross_entropy(input_, labels, name=PROVIDED, loss_weight=None,
                  per_example_weights=None):
  if labels is None:
    raise ValueError('Labels must be set')
  labels = _convert_and_assert_tensors_compatible(input_, labels)
  if per_example_weights is not None:
    per_example_weights = _convert_and_assert_per_example_weights_compatible(
        input_, per_example_weights, dtype=input_.dtype)

  correct_predictions, examples = _compute_average_correct(
      input_, labels, per_example_weights)
  correct_ratio = correct_predictions / examples
  if correct_ratio.get_shape().is_fully_defined():
    input_.bookkeeper.add_average_summary(
        correct_ratio, 'average_accuracy_%s' % name)
  return apply_regression(
      input_,
      tf.contrib.nn.deprecated_flipped_softmax_cross_entropy_with_logits,
      labels,
      [],
      name='%s_loss' % name,
      loss_weight=loss_weight,
      per_example_weights=per_example_weights)
339,554
Calculates the Cross Entropy of input_ vs labels.

Args:
  input_: A rank 2 `Tensor` or a Pretty Tensor holding the logits.
  labels: A rank 1 integer `Tensor` with class ordinals.
  name: The optional name.
  loss_weight: A weight to scale the loss. Used when there are multiple
    losses.
  per_example_weights: A weighting for each example.

Returns:
  A loss.

Raises:
  ValueError: if labels is None or the type is not float or double.

def sparse_cross_entropy(input_, labels, name=PROVIDED, loss_weight=None,
                         per_example_weights=None):
  if labels is None:
    raise ValueError('Labels must be set')
  if per_example_weights is not None:
    per_example_weights = _convert_and_assert_per_example_weights_compatible(
        input_, per_example_weights, dtype=input_.dtype)
  return apply_regression(
      input_,
      tf.contrib.nn.deprecated_flipped_sparse_softmax_cross_entropy_with_logits,
      labels,
      [],
      name='%s_loss' % name,
      loss_weight=loss_weight,
      per_example_weights=per_example_weights)
339,555
Squashes a sequence into a single Tensor with dim 1 being time*batch. A
sequence is an array of Tensors, which is not appropriate for most
operations; this squashes them together into a single Tensor. Defaults are
assigned such that cleave_sequence requires no args.

Args:
  input_layer: The input layer.

Returns:
  A PrettyTensor containing a single tensor with the first dim containing
  both time and batch.

Raises:
  ValueError: If the sequence is empty.

def squash_sequence(input_layer):
  timesteps = len(input_layer.sequence)
  if not timesteps:
    raise ValueError('Empty tensor sequence.')
  elif timesteps == 1:
    result = input_layer.sequence[0]
  else:
    result = tf.concat(input_layer.sequence, 0)
  return input_layer.with_tensor(result).with_defaults(unroll=timesteps)
339,573
Cleaves a tensor into a sequence; this is the inverse of squash. Recurrent
methods unroll across an array of Tensors, with each one being a timestep.
This cleaves the first dim so that the result is an array of Tensors. It is
the inverse of squash_sequence.

Args:
  input_layer: The input layer.
  unroll: The number of time steps.

Returns:
  A PrettyTensor containing an array of tensors.

Raises:
  ValueError: If unroll is not specified and it has no default, or it
    is <= 0.

def cleave_sequence(input_layer, unroll=None):
  if unroll is None:
    raise ValueError('You must set unroll either here or in the defaults.')
  # Check unroll before the modulo below to avoid dividing by zero.
  if unroll <= 0:
    raise ValueError('Unroll must be > 0: %s' % unroll)
  shape = input_layer.shape
  if shape[0] is not None and shape[0] % unroll != 0:
    raise ValueError('Must divide the split dimension evenly: %d mod %d != 0' %
                     (shape[0], unroll))
  if unroll == 1:
    splits = [input_layer.tensor]
  else:
    splits = tf.split(value=input_layer.tensor, num_or_size_splits=unroll,
                      axis=0)
  result = input_layer.with_sequence(splits)
  # This is an abuse of the defaults system, but it is safe because we are
  # only modifying result.
  defaults = result.defaults
  if 'unroll' in defaults:
    del defaults['unroll']
  return result
339,574
Creates a PrettyTensor object for the given sequence. The first dimension is
treated as a time-dimension * batch, and a default is set for `unroll` and
`state_saver`.

TODO(eiderman): Remove shape.

Args:
  sequence_input: A SequenceInput or StateSavingSequenceInput.
  shape: The shape of each item in the sequence (including batch).
  save_state: If true, use the sequence_input's state and save_state
    methods.

Returns:
  2 Layers: inputs, targets

def create_sequence_pretty_tensor(sequence_input, shape=None, save_state=True):
  inputs = prettytensor.wrap_sequence(sequence_input.inputs,
                                      tensor_shape=shape)
  targets = prettytensor.wrap_sequence(sequence_input.targets)
  if save_state:
    bookkeeper.set_recurrent_state_saver(sequence_input)
  return inputs, targets
339,577
Format a hex MAC string to ASCII.

Args:
    mac_hex:    Value from SNMP
    inc_dots:   1 to format as aabb.ccdd.eeff, 0 to format aabbccddeeff

Returns:
    String representation of the mac_hex

def mac_hex_to_ascii(mac_hex, inc_dots):
    v = mac_hex[2:]
    ret = ''
    for i in range(0, len(v), 4):
        ret += v[i:i+4]
        if ((inc_dots) & ((i+4) < len(v))):
            ret += '.'
    return ret
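Worked examples, assuming the SNMP value carries a 2-character prefix (e.g. '0x') that the [2:] slice strips:

mac_hex_to_ascii('0xaabbccddeeff', 1)  # -> 'aabb.ccdd.eeff'
mac_hex_to_ascii('0xaabbccddeeff', 0)  # -> 'aabbccddeeff'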
339,613
Get the ARP table from a switch.

Args:
    switch_ip   IP address of the device
    ip          Filter results by IP (regex)
    mac         Filter results by MAC (regex)
    interf      Filter results by INTERFACE (regex)
    arp_type    Filter results by ARP Type

Return:
    Array of natlas_arp objects

def get_arp_table(self, switch_ip, ip=None, mac=None, interf=None,
                  arp_type=None):
    node = natlas_node(switch_ip)
    if (node.try_snmp_creds(self.config.snmp_creds) == 0):
        return []
    arp = node.get_arp_table()
    if (arp == None):
        return []
    if ((ip == None) & (mac == None) & (interf == None) & (arp_type == None)):
        # no filtering
        return arp

    # fixed: the original tested the undefined name `vlan` here
    interf = str(interf) if interf else None

    # filter the result table
    ret = []
    for a in arp:
        if (ip != None):
            if (re.match(ip, a.ip) == None):
                continue
        if (mac != None):
            if (re.match(mac, a.mac) == None):
                continue
        if (interf != None):
            if (re.match(interf, str(a.interf)) == None):
                continue
        if (arp_type != None):
            if (re.match(arp_type, a.arp_type) == None):
                continue
        ret.append(a)
    return ret
339,630
Given a node, recursively enumerate its adjacencies until we reach the
specified depth (>0).

Args:
    node:   natlas_node object to enumerate.
    depth:  The depth left that we can go further away from the root.

def __discover_node(self, node, depth):
    if (node == None):
        return
    if (depth >= self.max_depth):
        return
    if (node.discovered > 0):
        return
    node.discovered = 1

    # vmware ESX can report IP as 0.0.0.0
    # If we are allowing 0.0.0.0/32 in the config,
    # then we added it as a leaf, but don't discover it
    if (node.ip[0] == '0.0.0.0'):
        return

    # may be a leaf we couldn't connect to previously
    if (node.snmpobj.success == 0):
        return

    # print some info to stdout
    dcodes = DCODE_STEP_INTO
    if (depth == 0):
        dcodes |= DCODE_ROOT
    self.__print_step(node.ip[0], node.name, depth, dcodes)

    # get the cached snmp credentials
    snmpobj = node.snmpobj

    # list of valid neighbors to discover next
    valid_neighbors = []

    # get list of neighbors
    cdp_neighbors = node.get_cdp_neighbors()
    lldp_neighbors = node.get_lldp_neighbors()
    neighbors = cdp_neighbors + lldp_neighbors
    if (len(neighbors) == 0):
        return

    for n in neighbors:
        # some neighbors may not advertise IP addresses
        # - default them to 0.0.0.0
        if (n.remote_ip == None):
            n.remote_ip = '0.0.0.0'

        # check the ACL
        acl_action = self.__match_node_acl(n.remote_ip, n.remote_name)
        if (acl_action == 'deny'):
            # deny inclusion of this node
            continue

        dcodes = DCODE_DISCOVERED
        child = None
        if (acl_action == 'include'):
            # include this node but do not discover it
            child = natlas_node()
            child.ip = [n.remote_ip]
            dcodes |= DCODE_INCLUDE
        else:
            # discover this node
            child, query_result = self.__query_node(n.remote_ip,
                                                    n.remote_name)

            # if we couldn't pull info from SNMP fill in what we know
            if (child.snmpobj.success == 0):
                child.name = util.shorten_host_name(n.remote_name,
                                                    self.config.host_domains)
                dcodes |= DCODE_ERR_SNMP

            # need to check the ACL again for extended ops
            # (we have more info)
            acl_action = self.__match_node_acl(n.remote_ip, n.remote_name,
                                               n.remote_plat, n.remote_ios,
                                               child.serial)
            if (acl_action == 'deny'):
                continue

            if (query_result == NODE_NEW):
                self.nodes.append(child)
                if (acl_action == 'leaf'):
                    dcodes |= DCODE_LEAF
                if (n.discovered_proto == 'cdp'):
                    dcodes |= DCODE_CDP
                if (n.discovered_proto == 'lldp'):
                    dcodes |= DCODE_LLDP
                self.__print_step(n.remote_ip, n.remote_name,
                                  depth + 1, dcodes)

            # CDP/LLDP advertises the platform
            child.plat = n.remote_plat
            child.ios = n.remote_ios

        # add the discovered node to the link object and link to the parent
        n.node = child
        self.__add_link(node, n)

        # if we need to discover this node then add it to the list
        if ((query_result == NODE_NEW) & (acl_action != 'leaf') &
                (acl_action != 'include')):
            valid_neighbors.append(child)

    # discover the valid neighbors
    for n in valid_neighbors:
        self.__discover_node(n, depth + 1)
339,688
Fetch Card for given Id

Args:
    card_id : Id for which card object has to be retrieved

Returns:
    Card dict for given card Id

def fetch(self, card_id, data={}, **kwargs):
    return super(Card, self).fetch(card_id, data, **kwargs)
340,111
Fetch Virtual Account for given Id

Args:
    virtual_account_id : Id for which Virtual Account object has to be
        retrieved

Returns:
    Virtual Account dict for given Virtual Account Id

def fetch(self, virtual_account_id, data={}, **kwargs):
    return super(VirtualAccount, self).fetch(
        virtual_account_id, data, **kwargs)
340,114
Create Virtual Account from given dict

Args:
    data : Param for creating Virtual Account

Returns:
    Virtual Account dict

def create(self, data={}, **kwargs):
    url = self.base_url
    return self.post_url(url, data, **kwargs)
340,115
Close Virtual Account for given Id

Args:
    virtual_account_id : Id for which Virtual Account object has to be
        closed

def close(self, virtual_account_id, data={}, **kwargs):
    url = "{}/{}".format(self.base_url, virtual_account_id)
    data['status'] = 'closed'
    return self.patch_url(url, data, **kwargs)
340,116
Fetch Payments for given Virtual Account Id

Args:
    virtual_account_id : Id of the Virtual Account for which payments have
        to be retrieved

Returns:
    Payment dict for given Virtual Account Id

def payments(self, virtual_account_id, data={}, **kwargs):
    url = "{}/{}/payments".format(self.base_url, virtual_account_id)
    return self.get_url(url, data, **kwargs)
340,117
Fetch Subscription for given Id

Args:
    subscription_id : Id for which subscription object is retrieved

Returns:
    Subscription dict for given subscription Id

def fetch(self, subscription_id, data={}, **kwargs):
    return super(Subscription, self).fetch(subscription_id, data, **kwargs)
340,120
Cancel subscription given by subscription_id

Args:
    subscription_id : Id for which subscription has to be cancelled

Returns:
    Subscription dict for given subscription Id

def cancel(self, subscription_id, data={}, **kwargs):
    url = "{}/{}/cancel".format(self.base_url, subscription_id)
    return self.post_url(url, data, **kwargs)
340,121
Fetch Order for given Id

Args:
    order_id : Id for which order object has to be retrieved

Returns:
    Order dict for given order Id

def fetch(self, order_id, data={}, **kwargs):
    return super(Order, self).fetch(order_id, data, **kwargs)
340,125
Fetch Customer for given Id

Args:
    customer_id : Id for which customer object has to be retrieved

Returns:
    Customer dict for given customer Id

def fetch(self, customer_id, data={}, **kwargs):
    return super(Customer, self).fetch(customer_id, data, **kwargs)
340,128
Fetch Addon for given Id

Args:
    addon_id : Id for which addon object has to be retrieved

Returns:
    Addon dict for given addon Id

def fetch(self, addon_id, data={}, **kwargs):
    return super(Addon, self).fetch(addon_id, data, **kwargs)
340,131
Delete Addon for given Id

Args:
    addon_id : Id for which addon object has to be deleted

def delete(self, addon_id, data={}, **kwargs):
    return super(Addon, self).delete(addon_id, data, **kwargs)
340,132
Fetch Refund for given Id

Args:
    refund_id : Refund Id for which refund has to be retrieved

Returns:
    Refund dict for given refund Id

def fetch(self, refund_id, data={}, **kwargs):
    return super(Refund, self).fetch(refund_id, data, **kwargs)
340,135
Fetch Payment for given Id

Args:
    payment_id : Id for which payment object has to be retrieved

Returns:
    Payment dict for given payment Id

def fetch(self, payment_id, data={}, **kwargs):
    return super(Payment, self).fetch(payment_id, data, **kwargs)
340,138
Capture Payment for given Id

Args:
    payment_id : Id for which payment object has to be captured
    amount : Amount for which the payment has to be captured

Returns:
    Payment dict after getting captured

def capture(self, payment_id, amount, data={}, **kwargs):
    url = "{}/{}/capture".format(self.base_url, payment_id)
    data['amount'] = amount
    return self.post_url(url, data, **kwargs)
340,139
Create Transfer for given Payment Id

Args:
    payment_id : Id for which payment object has to be transferred

Returns:
    Payment dict after getting transferred

def transfer(self, payment_id, data={}, **kwargs):
    url = "{}/{}/transfers".format(self.base_url, payment_id)
    return self.post_url(url, data, **kwargs)
340,140
Fetch all Transfers for given Payment Id

Args:
    payment_id : Id for which transfers have to be fetched

Returns:
    Transfer dicts for given payment Id

def transfers(self, payment_id, data={}, **kwargs):
    url = "{}/{}/transfers".format(self.base_url, payment_id)
    return self.get_url(url, data, **kwargs)
340,141
Fetch Plan for given Id

Args:
    plan_id : Id for which Plan object has to be retrieved

Returns:
    Plan dict for given plan Id

def fetch(self, plan_id, data={}, **kwargs):
    return super(Plan, self).fetch(plan_id, data, **kwargs)
340,143
Fetch Token for given Id and given customer Id

Args:
    customer_id : Customer Id for which tokens have to be fetched
    token_id : Id for which Token object has to be fetched

Returns:
    Token dict for given token Id

def fetch(self, customer_id, token_id, data={}, **kwargs):
    url = "{}/{}/tokens/{}".format(self.base_url, customer_id, token_id)
    return self.get_url(url, data, **kwargs)
340,146
Get all tokens for given customer Id

Args:
    customer_id : Customer Id for which tokens have to be fetched

Returns:
    Token dicts for given customer Id

def all(self, customer_id, data={}, **kwargs):
    url = "{}/{}/tokens".format(self.base_url, customer_id)
    return self.get_url(url, data, **kwargs)
340,147
Delete given Token for a Customer

Args:
    customer_id : Customer Id for which token has to be deleted
    token_id : Id for which Token object has to be deleted

Returns:
    Dict for deleted token

def delete(self, customer_id, token_id, data={}, **kwargs):
    url = "{}/{}/tokens/{}".format(self.base_url, customer_id, token_id)
    return self.delete_url(url, data, **kwargs)
340,148
Fetch Settlement data for given Id

Args:
    settlement_id : Id for which settlement object has to be retrieved

Returns:
    Settlement dict for given settlement Id

def fetch(self, settlement_id, data={}, **kwargs):
    return super(Settlement, self).fetch(settlement_id, data, **kwargs)
340,159
Fetch Transfer for given Id

Args:
    transfer_id : Id for which transfer object has to be retrieved

Returns:
    Transfer dict for given transfer Id

def fetch(self, transfer_id, data={}, **kwargs):
    return super(Transfer, self).fetch(transfer_id, data, **kwargs)
340,166
Reverse Transfer for given Id

Args:
    transfer_id : Id for which transfer object has to be reversed

Returns:
    Transfer dict which was reversed

def reverse(self, transfer_id, data={}, **kwargs):
    url = "{}/{}/reversals".format(self.base_url, transfer_id)
    return self.post_url(url, data, **kwargs)
340,167
Get all Reversal Transfers for given Id

Args:
    transfer_id : Id for which reversal transfer objects have to be fetched

Returns:
    Transfer dict

def reversals(self, transfer_id, data={}, **kwargs):
    url = "{}/{}/reversals".format(self.base_url, transfer_id)
    return self.get_url(url, data, **kwargs)
340,168
Fetch Invoice for given Id

Args:
    invoice_id : Id for which invoice object has to be retrieved

Returns:
    Invoice dict for given invoice Id

def fetch(self, invoice_id, data={}, **kwargs):
    return super(Invoice, self).fetch(invoice_id, data, **kwargs)
340,186
Send/Resend notifications to customer via email/sms

Args:
    invoice_id : Id for which notification has to be triggered
    medium : Medium for triggering notification, via email or sms

Returns:
    {"success": true}

def notify_by(self, invoice_id, medium, **kwargs):
    url = "{}/{}/notify_by/{}".format(self.base_url, invoice_id, medium)
    return self.post_url(url, {}, **kwargs)
340,187
Cancel an unpaid Invoice with given Id via API. It can only be called on an
invoice that is not in the paid state.

Args:
    invoice_id : Id of the invoice to cancel

Returns:
    The response for the API will be the invoice entity, similar to the
    create/update API response, with the status attribute's value as
    cancelled

def cancel(self, invoice_id, **kwargs):
    url = "{}/{}/cancel".format(self.base_url, invoice_id)
    return self.post_url(url, {}, **kwargs)
340,188