content stringlengths 27 928k | path stringlengths 4 230 | size int64 27 928k | nl_text stringlengths 21 396k | nl_size int64 21 396k | nl_language stringlengths 2 3 | nl_language_score float64 0.04 1 |
|---|---|---|---|---|---|---|
# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Executes Keras benchmarks and accuracy tests."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
from absl import flags
from absl.testing import flagsaver
import tensorflow as tf # pylint: disable=g-bad-import-order
FLAGS = flags.FLAGS
class KerasBenchmark(tf.test.Benchmark):
"""Base benchmark class with methods to simplify testing."""
local_flags = None
def __init__(self, output_dir=None, default_flags=None, flag_methods=None):
self.output_dir = output_dir
self.default_flags = default_flags or {}
self.flag_methods = flag_methods or {}
if not output_dir:
output_dir = '/tmp/'
def _get_model_dir(self, folder_name):
return os.path.join(self.output_dir, folder_name)
def _setup(self):
"""Sets up and resets flags before each test."""
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.DEBUG)
if KerasBenchmark.local_flags is None:
for flag_method in self.flag_methods:
flag_method()
# Loads flags to get defaults to then override. List cannot be empty.
flags.FLAGS(['foo'])
# Overrides flag values with defaults for the class of tests.
for k, v in self.default_flags.items():
setattr(FLAGS, k, v)
saved_flag_values = flagsaver.save_flag_values()
KerasBenchmark.local_flags = saved_flag_values
else:
flagsaver.restore_flag_values(KerasBenchmark.local_flags)
def _report_benchmark(self,
stats,
wall_time_sec,
top_1_max=None,
top_1_min=None,
log_steps=None,
total_batch_size=None,
warmup=1):
"""Report benchmark results by writing to local protobuf file.
Args:
stats: dict returned from keras models with known entries.
wall_time_sec: the during of the benchmark execution in seconds
top_1_max: highest passing level for top_1 accuracy.
top_1_min: lowest passing level for top_1 accuracy.
log_steps: How often the log was created for stats['step_timestamp_log'].
total_batch_size: Global batch-size.
warmup: number of entries in stats['step_timestamp_log'] to ignore.
"""
metrics = []
if 'accuracy_top_1' in stats:
metrics.append({'name': 'accuracy_top_1',
'value': stats['accuracy_top_1'],
'min_value': top_1_min,
'max_value': top_1_max})
metrics.append({'name': 'top_1_train_accuracy',
'value': stats['training_accuracy_top_1']})
if (warmup and 'step_timestamp_log' in stats and
len(stats['step_timestamp_log']) > warmup):
# first entry in the time_log is start of step 1. The rest of the
# entries are the end of each step recorded
time_log = stats['step_timestamp_log']
elapsed = time_log[-1].timestamp - time_log[warmup].timestamp
num_examples = (
total_batch_size * log_steps * (len(time_log) - warmup - 1))
examples_per_sec = num_examples / elapsed
metrics.append({'name': 'exp_per_second',
'value': examples_per_sec})
if 'avg_exp_per_second' in stats:
metrics.append({'name': 'avg_exp_per_second',
'value': stats['avg_exp_per_second']})
self.report_benchmark(iters=-1, wall_time=wall_time_sec, metrics=metrics)
| official/resnet/keras/keras_benchmark.py | 4,175 | Base benchmark class with methods to simplify testing.
Report benchmark results by writing to local protobuf file.
Args:
stats: dict returned from keras models with known entries.
wall_time_sec: the during of the benchmark execution in seconds
top_1_max: highest passing level for top_1 accuracy.
top_1_min: lowest passing level for top_1 accuracy.
log_steps: How often the log was created for stats['step_timestamp_log'].
total_batch_size: Global batch-size.
warmup: number of entries in stats['step_timestamp_log'] to ignore.
Sets up and resets flags before each test.
Executes Keras benchmarks and accuracy tests.
Copyright 2018 The TensorFlow Authors. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ============================================================================== pylint: disable=g-bad-import-order Loads flags to get defaults to then override. List cannot be empty. Overrides flag values with defaults for the class of tests. first entry in the time_log is start of step 1. The rest of the entries are the end of each step recorded | 1,563 | en | 0.857356 |
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Test Package set up."""
__author__ = 'afshar@google.com (Ali Afshar)'
import oauth2client.util
def setup_package():
"""Run on testing package."""
oauth2client.util.positional_parameters_enforcement = 'EXCEPTION'
| tests/__init__.py | 768 | Run on testing package.
Test Package set up.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. | 569 | en | 0.87509 |
# -*- coding: utf-8 -*-
#
# pytest-dasktest documentation build configuration file, created by
# sphinx-quickstart on Thu Oct 1 00:43:18 2015.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import sys
import os
import shlex
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#sys.path.insert(0, os.path.abspath('.'))
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'sphinx.ext.ifconfig',
]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
# source_suffix = ['.rst', '.md']
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'pytest-dasktest'
copyright = u'2015, Marius van Niekerk'
author = u'Marius van Niekerk'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '0.1.0'
# The full version, including alpha/beta/rc tags.
release = '0.1.0'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['_build']
# The reST default role (used for this markup: `text`) to use for all
# documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# If true, keep warnings as "system message" paragraphs in the built documents.
#keep_warnings = False
# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'alabaster'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
#html_extra_path = []
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_domain_indices = True
# If false, no index is generated.
#html_use_index = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# Language to be used for generating the HTML full-text search index.
# Sphinx supports the following languages:
# 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'
# 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr'
#html_search_language = 'en'
# A dictionary with options for the search language support, empty by default.
# Now only 'ja' uses this config value
#html_search_options = {'type': 'default'}
# The name of a javascript file (relative to the configuration directory) that
# implements a search results scorer. If empty, the default will be used.
#html_search_scorer = 'scorer.js'
# Output file base name for HTML help builder.
htmlhelp_basename = 'pytest-cookiecutterplugin_namedoc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#'preamble': '',
# Latex figure (float) alignment
#'figure_align': 'htbp',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
(master_doc, 'pytest-cookiecutterplugin_name.tex', u'pytest-\\{\\{cookiecutter.plugin\\_name\\}\\} Documentation',
u'\\{\\{cookiecutter.full\\_name\\}\\}', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# If true, show page references after internal links.
#latex_show_pagerefs = False
# If true, show URL addresses after external links.
#latex_show_urls = False
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
#latex_domain_indices = True
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
(master_doc, 'pytest-cookiecutterplugin_name', u'pytest-dasktest Documentation',
[author], 1)
]
# If true, show URL addresses after external links.
#man_show_urls = False
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
(master_doc, 'pytest-cookiecutterplugin_name', u'pytest-dasktest Documentation',
author, 'pytest-cookiecutterplugin_name', 'One line description of project.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
#texinfo_appendices = []
# If false, no module index is generated.
#texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
#texinfo_no_detailmenu = False
| docs/conf.py | 9,414 | -*- coding: utf-8 -*- pytest-dasktest documentation build configuration file, created by sphinx-quickstart on Thu Oct 1 00:43:18 2015. This file is execfile()d with the current directory set to its containing dir. Note that not all possible configuration values are present in this autogenerated file. All configuration values have a default; values that are commented out serve to show the default. If extensions (or modules to document with autodoc) are in another directory, add these directories to sys.path here. If the directory is relative to the documentation root, use os.path.abspath to make it absolute, like shown here.sys.path.insert(0, os.path.abspath('.')) -- General configuration ------------------------------------------------ If your documentation needs a minimal Sphinx version, state it here.needs_sphinx = '1.0' Add any Sphinx extension module names here, as strings. They can be extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones. Add any paths that contain templates here, relative to this directory. The suffix(es) of source filenames. You can specify multiple suffix as a list of string: source_suffix = ['.rst', '.md'] The encoding of source files.source_encoding = 'utf-8-sig' The master toctree document. General information about the project. The version info for the project you're documenting, acts as replacement for |version| and |release|, also used in various other places throughout the built documents. The short X.Y version. The full version, including alpha/beta/rc tags. The language for content autogenerated by Sphinx. Refer to documentation for a list of supported languages. This is also used if you do content translation via gettext catalogs. Usually you set "language" from the command line for these cases. There are two options for replacing |today|: either, you set today to some non-false value, then it is used:today = '' Else, today_fmt is used as the format for a strftime call.today_fmt = '%B %d, %Y' List of patterns, relative to source directory, that match files and directories to ignore when looking for source files. The reST default role (used for this markup: `text`) to use for all documents.default_role = None If true, '()' will be appended to :func: etc. cross-reference text.add_function_parentheses = True If true, the current module name will be prepended to all description unit titles (such as .. function::).add_module_names = True If true, sectionauthor and moduleauthor directives will be shown in the output. They are ignored by default.show_authors = False The name of the Pygments (syntax highlighting) style to use. A list of ignored prefixes for module index sorting.modindex_common_prefix = [] If true, keep warnings as "system message" paragraphs in the built documents.keep_warnings = False If true, `todo` and `todoList` produce output, else they produce nothing. -- Options for HTML output ---------------------------------------------- The theme to use for HTML and HTML Help pages. See the documentation for a list of builtin themes. Theme options are theme-specific and customize the look and feel of a theme further. For a list of options available for each theme, see the documentation.html_theme_options = {} Add any paths that contain custom themes here, relative to this directory.html_theme_path = [] The name for this set of Sphinx documents. If None, it defaults to "<project> v<release> documentation".html_title = None A shorter title for the navigation bar. Default is the same as html_title.html_short_title = None The name of an image file (relative to this directory) to place at the top of the sidebar.html_logo = None The name of an image file (within the static path) to use as favicon of the docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 pixels large.html_favicon = None Add any paths that contain custom static files (such as style sheets) here, relative to this directory. They are copied after the builtin static files, so a file named "default.css" will overwrite the builtin "default.css". Add any extra paths that contain custom files (such as robots.txt or .htaccess) here, relative to this directory. These files are copied directly to the root of the documentation.html_extra_path = [] If not '', a 'Last updated on:' timestamp is inserted at every page bottom, using the given strftime format.html_last_updated_fmt = '%b %d, %Y' If true, SmartyPants will be used to convert quotes and dashes to typographically correct entities.html_use_smartypants = True Custom sidebar templates, maps document names to template names.html_sidebars = {} Additional templates that should be rendered to pages, maps page names to template names.html_additional_pages = {} If false, no module index is generated.html_domain_indices = True If false, no index is generated.html_use_index = True If true, the index is split into individual pages for each letter.html_split_index = False If true, links to the reST sources are added to the pages.html_show_sourcelink = True If true, "Created using Sphinx" is shown in the HTML footer. Default is True.html_show_sphinx = True If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.html_show_copyright = True If true, an OpenSearch description file will be output, and all pages will contain a <link> tag referring to it. The value of this option must be the base URL from which the finished HTML is served.html_use_opensearch = '' This is the file name suffix for HTML files (e.g. ".xhtml").html_file_suffix = None Language to be used for generating the HTML full-text search index. Sphinx supports the following languages: 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja' 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr'html_search_language = 'en' A dictionary with options for the search language support, empty by default. Now only 'ja' uses this config valuehtml_search_options = {'type': 'default'} The name of a javascript file (relative to the configuration directory) that implements a search results scorer. If empty, the default will be used.html_search_scorer = 'scorer.js' Output file base name for HTML help builder. -- Options for LaTeX output --------------------------------------------- The paper size ('letterpaper' or 'a4paper').'papersize': 'letterpaper', The font size ('10pt', '11pt' or '12pt').'pointsize': '10pt', Additional stuff for the LaTeX preamble.'preamble': '', Latex figure (float) alignment'figure_align': 'htbp', Grouping the document tree into LaTeX files. List of tuples (source start file, target name, title, author, documentclass [howto, manual, or own class]). The name of an image file (relative to this directory) to place at the top of the title page.latex_logo = None For "manual" documents, if this is true, then toplevel headings are parts, not chapters.latex_use_parts = False If true, show page references after internal links.latex_show_pagerefs = False If true, show URL addresses after external links.latex_show_urls = False Documents to append as an appendix to all manuals.latex_appendices = [] If false, no module index is generated.latex_domain_indices = True -- Options for manual page output --------------------------------------- One entry per manual page. List of tuples (source start file, name, description, authors, manual section). If true, show URL addresses after external links.man_show_urls = False -- Options for Texinfo output ------------------------------------------- Grouping the document tree into Texinfo files. List of tuples (source start file, target name, title, author, dir menu entry, description, category) Documents to append as an appendix to all manuals.texinfo_appendices = [] If false, no module index is generated.texinfo_domain_indices = True How to display URL addresses: 'footnote', 'no', or 'inline'.texinfo_show_urls = 'footnote' If true, do not generate a @detailmenu in the "Top" node's menu.texinfo_no_detailmenu = False | 7,941 | en | 0.642776 |
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import sys
sys.path.append('./')
from update import BasicUpdateBlock, SmallUpdateBlock
from extractor import BasicEncoder, SmallEncoder
from corr import CorrBlock, AlternateCorrBlock
from util import bilinear_sampler, coords_grid, upflow8
try:
autocast = torch.cuda.amp.autocast
except:
# dummy autocast for PyTorch < 1.6
class autocast:
def __init__(self, enabled):
pass
def __enter__(self):
pass
def __exit__(self, *args):
pass
class RAFT(nn.Module):
def __init__(self, args):
super(RAFT, self).__init__()
self.args = args
if args.small:
self.hidden_dim = hdim = 96
self.context_dim = cdim = 64
args.corr_levels = 4
args.corr_radius = 3
else:
self.hidden_dim = hdim = 128
self.context_dim = cdim = 128
args.corr_levels = 4
args.corr_radius = 4
if 'dropout' not in self.args:
self.args.dropout = 0
if 'alternate_corr' not in self.args:
self.args.alternate_corr = False
# feature network, context network, and update block
if args.small:
self.fnet = SmallEncoder(output_dim=128, norm_fn='instance', dropout=args.dropout)
self.cnet = SmallEncoder(output_dim=hdim+cdim, norm_fn='none', dropout=args.dropout)
self.update_block = SmallUpdateBlock(self.args, hidden_dim=hdim)
else:
self.fnet = BasicEncoder(output_dim=256, norm_fn='instance', dropout=args.dropout)
self.cnet = BasicEncoder(output_dim=hdim+cdim, norm_fn='batch', dropout=args.dropout)
self.update_block = BasicUpdateBlock(self.args, hidden_dim=hdim)
def freeze_bn(self):
for m in self.modules():
if isinstance(m, nn.BatchNorm2d):
m.eval()
def initialize_flow(self, img):
""" Flow is represented as difference between two coordinate grids flow = coords1 - coords0"""
N, C, H, W = img.shape
coords0 = coords_grid(N, H//8, W//8).to(img.device)
coords1 = coords_grid(N, H//8, W//8).to(img.device)
# optical flow computed as difference: flow = coords1 - coords0
return coords0, coords1
def upsample_flow(self, flow, mask):
""" Upsample flow field [H/8, W/8, 2] -> [H, W, 2] using convex combination """
N, _, H, W = flow.shape
mask = mask.view(N, 1, 9, 8, 8, H, W)
mask = torch.softmax(mask, dim=2)
up_flow = F.unfold(8 * flow, [3,3], padding=1)
up_flow = up_flow.view(N, 2, 9, 1, 1, H, W)
up_flow = torch.sum(mask * up_flow, dim=2)
up_flow = up_flow.permute(0, 1, 4, 2, 5, 3)
return up_flow.reshape(N, 2, 8*H, 8*W)
def forward(self, image1):
""" get featmap for one frame """
image1 = 2 * (image1 / 255.0) - 1.0
image1 = image1.contiguous()
hdim = self.hidden_dim
cdim = self.context_dim
# run the feature network
with autocast(enabled=self.args.mixed_precision):
fmap1 = self.fnet(image1)
fmap1 = fmap1.float()
return fmap1
def old_forward(self, image1, image2, iters=12, flow_init=None, upsample=True, test_mode=False):
""" Estimate optical flow between pair of frames """
image1 = 2 * (image1 / 255.0) - 1.0
image2 = 2 * (image2 / 255.0) - 1.0
image1 = image1.contiguous()
image2 = image2.contiguous()
hdim = self.hidden_dim
cdim = self.context_dim
# run the feature network
with autocast(enabled=self.args.mixed_precision):
fmap1, fmap2 = self.fnet([image1, image2])
fmap1 = fmap1.float()
fmap2 = fmap2.float()
if self.args.alternate_corr:
corr_fn = AlternateCorrBlock(fmap1, fmap2, radius=self.args.corr_radius)
else:
corr_fn = CorrBlock(fmap1, fmap2, radius=self.args.corr_radius)
# run the context network
with autocast(enabled=self.args.mixed_precision):
cnet = self.cnet(image1)
net, inp = torch.split(cnet, [hdim, cdim], dim=1)
net = torch.tanh(net)
inp = torch.relu(inp)
coords0, coords1 = self.initialize_flow(image1)
if flow_init is not None:
coords1 = coords1 + flow_init
flow_predictions = []
for itr in range(iters):
coords1 = coords1.detach()
corr = corr_fn(coords1) # index correlation volume
flow = coords1 - coords0
with autocast(enabled=self.args.mixed_precision):
net, up_mask, delta_flow = self.update_block(net, inp, corr, flow)
# F(t+1) = F(t) + \Delta(t)
coords1 = coords1 + delta_flow
# upsample predictions
if up_mask is None:
flow_up = upflow8(coords1 - coords0)
else:
flow_up = self.upsample_flow(coords1 - coords0, up_mask)
flow_predictions.append(flow_up)
if test_mode:
corr = corr_fn(coords1) # index correlation volume
# feat = torch.cat([inp, corr], dim=1)
feat = inp
return coords1 - coords0, flow_up, (feat, fmap1, fmap2)
return flow_predictions
| nets/raft_core/backraft.py | 5,512 | get featmap for one frame
Flow is represented as difference between two coordinate grids flow = coords1 - coords0
Estimate optical flow between pair of frames
Upsample flow field [H/8, W/8, 2] -> [H, W, 2] using convex combination
dummy autocast for PyTorch < 1.6 feature network, context network, and update block optical flow computed as difference: flow = coords1 - coords0 run the feature network run the feature network run the context network index correlation volume F(t+1) = F(t) + \Delta(t) upsample predictions index correlation volume feat = torch.cat([inp, corr], dim=1) | 587 | en | 0.819941 |
#!/usr/bin/env python
import subprocess
from pathlib import Path
from distutils.cmd import Command
from setuptools import setup, find_packages
# pylint: disable=unused-import
import fastentrypoints # noqa: F401
# pylint: enable=unused-import
import howdoi
class Lint(Command):
"""A custom command to run Flake8 on all Python source files.
"""
description = 'run Flake8 on Python source files'
user_options = []
def initialize_options(self):
pass
def finalize_options(self):
pass
def run(self):
commands = {'Flake8': 'flake8 --config=.flake8rc .'.split(),
'Pylint': 'pylint --rcfile=.pylintrc howdoi'.split()}
for linter, command in commands.items():
try:
print(f'\nRunning {linter}...')
subprocess.check_call(command)
print(f'No lint errors found by {linter}')
except FileNotFoundError:
print(f'{linter} not installed')
except subprocess.CalledProcessError:
pass
def read(*names):
values = {}
for name in names:
value = ''
for extension in ('.txt', '.md'):
filename = name + extension
if Path(filename).is_file():
with open(filename) as in_file: # pylint: disable=unspecified-encoding
value = in_file.read()
break
values[name] = value
return values
# pylint: disable=consider-using-f-string
long_description = """
%(README)s
# News
%(CHANGES)s
""" % read('README', 'CHANGES')
# pylint: enable=consider-using-f-string
setup(
name='howdoi',
version=howdoi.__version__,
description='Instant coding answers via the command line',
long_description=long_description,
long_description_content_type='text/markdown',
classifiers=[
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Topic :: Documentation",
],
keywords='howdoi help console command line answer',
author='Benjamin Gleitzman',
author_email='gleitz@mit.edu',
maintainer='Benjamin Gleitzman',
maintainer_email='gleitz@mit.edu',
url='https://github.com/gleitz/howdoi',
license='MIT',
packages=find_packages(),
entry_points={
'console_scripts': [
'howdoi = howdoi.howdoi:command_line_runner',
]
},
install_requires=[
'Pygments',
'cssselect',
'lxml',
'pyquery',
'requests',
'cachelib',
'appdirs',
'keep',
],
cmdclass={
'lint': Lint
}
)
| setup.py | 2,929 | A custom command to run Flake8 on all Python source files.
!/usr/bin/env python pylint: disable=unused-import noqa: F401 pylint: enable=unused-import pylint: disable=unspecified-encoding pylint: disable=consider-using-f-string pylint: enable=consider-using-f-string | 271 | en | 0.535923 |
from __future__ import print_function
from __future__ import absolute_import
# Copyright (c) 2003-2016 CORE Security Technologies
#
# This software is provided under under a slightly modified version
# of the Apache Software License. See the accompanying LICENSE file
# for more information.
#
# -*- mode: python; tab-width: 4 -*-
#
# Copyright (C) 2001 Michael Teo <michaelteo@bigfoot.com>
# nmb.py - NetBIOS library
#
# This software is provided 'as-is', without any express or implied warranty.
# In no event will the author be held liable for any damages arising from the
# use of this software.
#
# Permission is granted to anyone to use this software for any purpose,
# including commercial applications, and to alter it and redistribute it
# freely, subject to the following restrictions:
#
# 1. The origin of this software must not be misrepresented; you must not
# claim that you wrote the original software. If you use this software
# in a product, an acknowledgment in the product documentation would be
# appreciated but is not required.
#
# 2. Altered source versions must be plainly marked as such, and must not be
# misrepresented as being the original software.
#
# 3. This notice cannot be removed or altered from any source distribution.
#
# Altered source done by Alberto Solino (@agsolino)
import socket
import string
import re
import select
import errno
from random import randint
from struct import pack, unpack
import time
from .structure import Structure
CVS_REVISION = '$Revision: 526 $'
# Taken from socket module reference
INADDR_ANY = '0.0.0.0'
BROADCAST_ADDR = '<broadcast>'
# Default port for NetBIOS name service
NETBIOS_NS_PORT = 137
# Default port for NetBIOS session service
NETBIOS_SESSION_PORT = 139
# Default port for SMB session service
SMB_SESSION_PORT = 445
# Owner Node Type Constants
NODE_B = 0x0000
NODE_P = 0x2000
NODE_M = 0x4000
NODE_RESERVED = 0x6000
NODE_GROUP = 0x8000
NODE_UNIQUE = 0x0
# Name Type Constants
TYPE_UNKNOWN = 0x01
TYPE_WORKSTATION = 0x00
TYPE_CLIENT = 0x03
TYPE_SERVER = 0x20
TYPE_DOMAIN_MASTER = 0x1B
TYPE_DOMAIN_CONTROLLER = 0x1C
TYPE_MASTER_BROWSER = 0x1D
TYPE_BROWSER = 0x1E
TYPE_NETDDE = 0x1F
TYPE_STATUS = 0x21
# Opcodes values
OPCODE_QUERY = 0
OPCODE_REGISTRATION = 0x5
OPCODE_RELEASE = 0x6
OPCODE_WACK = 0x7
OPCODE_REFRESH = 0x8
OPCODE_REQUEST = 0
OPCODE_RESPONSE = 0x10
# NM_FLAGS
NM_FLAGS_BROADCAST = 0x1
NM_FLAGS_UNICAST = 0
NM_FLAGS_RA = 0x8
NM_FLAGS_RD = 0x10
NM_FLAGS_TC = 0x20
NM_FLAGS_AA = 0x40
# QUESTION_TYPE
QUESTION_TYPE_NB = 0x20 # NetBIOS general Name Service Resource Record
QUESTION_TYPE_NBSTAT = 0x21 # NetBIOS NODE STATUS Resource Record
# QUESTION_CLASS
QUESTION_CLASS_IN = 0x1 # Internet class
# RR_TYPE Resource Record Type code
RR_TYPE_A = 0x1 # IP address Resource Record
RR_TYPE_NS = 0x2 # Name Server Resource Record
RR_TYPE_NULL = 0xA # NULL Resource Record
RR_TYPE_NB = 0x20 # NetBIOS general Name Service Resource Record
RR_TYPE_NBSTAT = 0x21 # NetBIOS NODE STATUS Resource Record
# Resource Record Class
RR_CLASS_IN = 1 # Internet class
# RCODE values
RCODE_FMT_ERR = 0x1 # Format Error. Request was invalidly formatted.
RCODE_SRV_ERR = 0x2 # Server failure. Problem with NBNS, cannot process name.
RCODE_IMP_ERR = 0x4 # Unsupported request error. Allowable only for challenging NBNS when gets an Update type
# registration request.
RCODE_RFS_ERR = 0x5 # Refused error. For policy reasons server will not register this name from this host.
RCODE_ACT_ERR = 0x6 # Active error. Name is owned by another node.
RCODE_CFT_ERR = 0x7 # Name in conflict error. A UNIQUE name is owned by more than one node.
# NAME_FLAGS
NAME_FLAGS_PRM = 0x0200 # Permanent Name Flag. If one (1) then entry is for the permanent node name. Flag is zero
# (0) for all other names.
NAME_FLAGS_ACT = 0x0400 # Active Name Flag. All entries have this flag set to one (1).
NAME_FLAG_CNF = 0x0800 # Conflict Flag. If one (1) then name on this node is in conflict.
NAME_FLAG_DRG = 0x1000 # Deregister Flag. If one (1) then this name is in the process of being deleted.
NAME_TYPES = { TYPE_UNKNOWN: 'Unknown', TYPE_WORKSTATION: 'Workstation', TYPE_CLIENT: 'Client',
TYPE_SERVER: 'Server', TYPE_MASTER_BROWSER: 'Master Browser', TYPE_BROWSER: 'Browser Server',
TYPE_DOMAIN_MASTER: 'Domain Master' , TYPE_NETDDE: 'NetDDE Server'}
# NetBIOS Session Types
NETBIOS_SESSION_MESSAGE = 0x0
NETBIOS_SESSION_REQUEST = 0x81
NETBIOS_SESSION_POSITIVE_RESPONSE = 0x82
NETBIOS_SESSION_NEGATIVE_RESPONSE = 0x83
NETBIOS_SESSION_RETARGET_RESPONSE = 0x84
NETBIOS_SESSION_KEEP_ALIVE = 0x85
def strerror(errclass, errcode):
if errclass == ERRCLASS_OS:
return 'OS Error', str(errcode)
elif errclass == ERRCLASS_QUERY:
return 'Query Error', QUERY_ERRORS.get(errcode, 'Unknown error')
elif errclass == ERRCLASS_SESSION:
return 'Session Error', SESSION_ERRORS.get(errcode, 'Unknown error')
else:
return 'Unknown Error Class', 'Unknown Error'
class NetBIOSError(Exception): pass
class NetBIOSTimeout(Exception):
def __init__(self, message = 'The NETBIOS connection with the remote host timed out.'):
Exception.__init__(self, message)
class NBResourceRecord:
def __init__(self, data = 0):
self._data = data
try:
if self._data:
self.rr_name = (re.split('\x00',data))[0]
offset = len(self.rr_name)+1
self.rr_type = unpack('>H', self._data[offset:offset+2])[0]
self.rr_class = unpack('>H', self._data[offset+2: offset+4])[0]
self.ttl = unpack('>L',self._data[offset+4:offset+8])[0]
self.rdlength = unpack('>H', self._data[offset+8:offset+10])[0]
self.rdata = self._data[offset+10:offset+10+self.rdlength]
offset = self.rdlength - 2
self.unit_id = data[offset:offset+6]
else:
self.rr_name = ''
self.rr_type = 0
self.rr_class = 0
self.ttl = 0
self.rdlength = 0
self.rdata = ''
self.unit_id = ''
except Exception:
raise NetBIOSError( 'Wrong packet format ' )
def set_rr_name(self, name):
self.rr_name = name
def set_rr_type(self, name):
self.rr_type = name
def set_rr_class(self,cl):
self.rr_class = cl
def set_ttl(self,ttl):
self.ttl = ttl
def set_rdata(self,rdata):
self.rdata = rdata
self.rdlength = len(rdata)
def get_unit_id(self):
return self.unit_id
def get_rr_name(self):
return self.rr_name
def get_rr_class(self):
return self.rr_class
def get_ttl(self):
return self.ttl
def get_rdlength(self):
return self.rdlength
def get_rdata(self):
return self.rdata
def rawData(self):
return self.rr_name + pack('!HHLH',self.rr_type, self.rr_class, self.ttl, self.rdlength) + self.rdata
class NBNodeStatusResponse(NBResourceRecord):
def __init__(self, data = 0):
NBResourceRecord.__init__(self,data)
self.num_names = 0
self.node_names = [ ]
self.statstics = ''
self.mac = '00-00-00-00-00-00'
try:
if data:
self._data = self.get_rdata()
self.num_names = unpack('>B',self._data[:1])[0]
offset = 1
for i in range(0, self.num_names):
name = self._data[offset:offset + 15]
type,flags = unpack('>BH', self._data[offset + 15: offset + 18])
offset += 18
self.node_names.append(NBNodeEntry(name, type ,flags))
self.set_mac_in_hexa(self.get_unit_id())
except Exception:
raise NetBIOSError( 'Wrong packet format ' )
def set_mac_in_hexa(self, data):
data_aux = ''
for d in data:
if data_aux == '':
data_aux = '%02x' % ord(d)
else:
data_aux += '-%02x' % ord(d)
self.mac = string.upper(data_aux)
def get_num_names(self):
return self.num_names
def get_mac(self):
return self.mac
def set_num_names(self, num):
self.num_names = num
def get_node_names(self):
return self.node_names
def add_node_name(self,node_names):
self.node_names.append(node_names)
self.num_names += 1
def rawData(self):
res = pack('!B', self.num_names )
for i in range(0, self.num_names):
res += self.node_names[i].rawData()
class NBPositiveNameQueryResponse(NBResourceRecord):
def __init__(self, data = 0):
NBResourceRecord.__init__(self, data)
self.addr_entries = [ ]
if data:
self._data = self.get_rdata()
_qn_length, qn_name, qn_scope = decode_name(data)
self._netbios_name = string.rstrip(qn_name[:-1]) + qn_scope
self._name_type = ord(qn_name[-1])
self._nb_flags = unpack('!H', self._data[:2])
offset = 2
while offset<len(self._data):
self.addr_entries.append('%d.%d.%d.%d' % unpack('4B', (self._data[offset:offset+4])))
offset += 4
def get_netbios_name(self):
return self._netbios_name
def get_name_type(self):
return self._name_type
def get_addr_entries(self):
return self.addr_entries
class NetBIOSPacket:
""" This is a packet as defined in RFC 1002 """
def __init__(self, data = 0):
self.name_trn_id = 0x0 # Transaction ID for Name Service Transaction.
# Requestor places a unique value for each active
# transaction. Responder puts NAME_TRN_ID value
# from request packet in response packet.
self.opcode = 0 # Packet type code
self.nm_flags = 0 # Flags for operation
self.rcode = 0 # Result codes of request.
self.qdcount = 0 # Unsigned 16 bit integer specifying the number of entries in the question section of a Name
self.ancount = 0 # Unsigned 16 bit integer specifying the number of
# resource records in the answer section of a Name
# Service packet.
self.nscount = 0 # Unsigned 16 bit integer specifying the number of
# resource records in the authority section of a
# Name Service packet.
self.arcount = 0 # Unsigned 16 bit integer specifying the number of
# resource records in the additional records
# section of a Name Service packeT.
self.questions = ''
self.answers = ''
if data == 0:
self._data = ''
else:
try:
self._data = data
self.opcode = ord(data[2]) >> 3
self.nm_flags = ((ord(data[2]) & 0x3) << 4) | ((ord(data[3]) & 0xf0) >> 4)
self.name_trn_id = unpack('>H', self._data[:2])[0]
self.rcode = ord(data[3]) & 0x0f
self.qdcount = unpack('>H', self._data[4:6])[0]
self.ancount = unpack('>H', self._data[6:8])[0]
self.nscount = unpack('>H', self._data[8:10])[0]
self.arcount = unpack('>H', self._data[10:12])[0]
self.answers = self._data[12:]
except Exception:
raise NetBIOSError( 'Wrong packet format ' )
def set_opcode(self, opcode):
self.opcode = opcode
def set_trn_id(self, trn):
self.name_trn_id = trn
def set_nm_flags(self, nm_flags):
self.nm_flags = nm_flags
def set_rcode(self, rcode):
self.rcode = rcode
def addQuestion(self, question, qtype, qclass):
self.qdcount += 1
self.questions += question + pack('!HH',qtype,qclass)
def get_trn_id(self):
return self.name_trn_id
def get_rcode(self):
return self.rcode
def get_nm_flags(self):
return self.nm_flags
def get_opcode(self):
return self.opcode
def get_qdcount(self):
return self.qdcount
def get_ancount(self):
return self.ancount
def get_nscount(self):
return self.nscount
def get_arcount(self):
return self.arcount
def rawData(self):
secondWord = self.opcode << 11
secondWord |= self.nm_flags << 4
secondWord |= self.rcode
data = pack('!HHHHHH', self.name_trn_id, secondWord , self.qdcount, self.ancount, self.nscount, self.arcount) + self.questions + self.answers
return data
def get_answers(self):
return self.answers
class NBHostEntry:
def __init__(self, nbname, nametype, ip):
self.__nbname = nbname
self.__nametype = nametype
self.__ip = ip
def get_nbname(self):
return self.__nbname
def get_nametype(self):
return self.__nametype
def get_ip(self):
return self.__ip
def __repr__(self):
return '<NBHostEntry instance: NBname="' + self.__nbname + '", IP="' + self.__ip + '">'
class NBNodeEntry:
def __init__(self, nbname, nametype, flags):
self.__nbname = string.ljust(nbname,17)
self.__nametype = nametype
self.__flags = flags
self.__isgroup = flags & 0x8000
self.__nodetype = flags & 0x6000
self.__deleting = flags & 0x1000
self.__isconflict = flags & 0x0800
self.__isactive = flags & 0x0400
self.__ispermanent = flags & 0x0200
def get_nbname(self):
return self.__nbname
def get_nametype(self):
return self.__nametype
def is_group(self):
return self.__isgroup
def get_nodetype(self):
return self.__nodetype
def is_deleting(self):
return self.__deleting
def is_conflict(self):
return self.__isconflict
def is_active(self):
return self.__isactive
def is_permanent(self):
return self.__ispermanent
def set_nbname(self, name):
self.__nbname = string.ljust(name,17)
def set_nametype(self, type):
self.__nametype = type
def set_flags(self,flags):
self.__flags = flags
def __repr__(self):
s = '<NBNodeEntry instance: NBname="' + self.__nbname + '" NameType="' + NAME_TYPES[self.__nametype] + '"'
if self.__isactive:
s += ' ACTIVE'
if self.__isgroup:
s += ' GROUP'
if self.__isconflict:
s += ' CONFLICT'
if self.__deleting:
s += ' DELETING'
return s
def rawData(self):
return self.__nbname + pack('!BH',self.__nametype, self.__flags)
class NetBIOS:
# Creates a NetBIOS instance without specifying any default NetBIOS domain nameserver.
# All queries will be sent through the servport.
def __init__(self, servport = NETBIOS_NS_PORT):
self.__servport = NETBIOS_NS_PORT
self.__nameserver = None
self.__broadcastaddr = BROADCAST_ADDR
self.mac = '00-00-00-00-00-00'
def _setup_connection(self, dstaddr):
port = randint(10000, 60000)
af, socktype, proto, _canonname, _sa = socket.getaddrinfo(dstaddr, port, socket.AF_INET, socket.SOCK_DGRAM)[0]
s = socket.socket(af, socktype, proto)
has_bind = 1
for _i in range(0, 10):
# We try to bind to a port for 10 tries
try:
s.bind(( INADDR_ANY, randint(10000, 60000) ))
s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
has_bind = 1
except socket.error:
pass
if not has_bind:
raise NetBIOSError( 'Cannot bind to a good UDP port', ERRCLASS_OS, errno.EAGAIN)
self.__sock = s
# Set the default NetBIOS domain nameserver.
def set_nameserver(self, nameserver):
self.__nameserver = nameserver
# Return the default NetBIOS domain nameserver, or None if none is specified.
def get_nameserver(self):
return self.__nameserver
# Set the broadcast address to be used for query.
def set_broadcastaddr(self, broadcastaddr):
self.__broadcastaddr = broadcastaddr
# Return the broadcast address to be used, or BROADCAST_ADDR if default broadcast address is used.
def get_broadcastaddr(self):
return self.__broadcastaddr
# Returns a NBPositiveNameQueryResponse instance containing the host information for nbname.
# If a NetBIOS domain nameserver has been specified, it will be used for the query.
# Otherwise, the query is broadcasted on the broadcast address.
def gethostbyname(self, nbname, qtype = TYPE_WORKSTATION, scope = None, timeout = 1):
return self.__queryname(nbname, self.__nameserver, qtype, scope, timeout)
# Returns a list of NBNodeEntry instances containing node status information for nbname.
# If destaddr contains an IP address, then this will become an unicast query on the destaddr.
# Raises NetBIOSTimeout if timeout (in secs) is reached.
# Raises NetBIOSError for other errors
def getnodestatus(self, nbname, destaddr = None, type = TYPE_WORKSTATION, scope = None, timeout = 1):
if destaddr:
return self.__querynodestatus(nbname, destaddr, type, scope, timeout)
else:
return self.__querynodestatus(nbname, self.__nameserver, type, scope, timeout)
def getnetbiosname(self, ip):
entries = self.getnodestatus('*',ip)
entries = filter(lambda x:x.get_nametype() == TYPE_SERVER, entries)
return entries[0].get_nbname().strip()
def getmacaddress(self):
return self.mac
def __queryname(self, nbname, destaddr, qtype, scope, timeout, retries = 0):
self._setup_connection(destaddr)
trn_id = randint(1, 32000)
p = NetBIOSPacket()
p.set_trn_id(trn_id)
netbios_name = nbname.upper()
qn_label = encode_name(netbios_name, qtype, scope)
p.addQuestion(qn_label, QUESTION_TYPE_NB, QUESTION_CLASS_IN)
p.set_nm_flags(NM_FLAGS_RD)
if not destaddr:
p.set_nm_flags(p.get_nm_flags() | NM_FLAGS_BROADCAST)
destaddr = self.__broadcastaddr
req = p.rawData()
tries = retries
while 1:
self.__sock.sendto(req, ( destaddr, self.__servport ))
try:
ready, _, _ = select.select([ self.__sock.fileno() ], [ ] , [ ], timeout)
if not ready:
if tries:
# Retry again until tries == 0
tries -= 1
else:
raise NetBIOSTimeout
else:
data, _ = self.__sock.recvfrom(65536, 0)
res = NetBIOSPacket(data)
if res.get_trn_id() == p.get_trn_id():
if res.get_rcode():
if res.get_rcode() == 0x03:
return None
else:
raise NetBIOSError( 'Negative name query response', ERRCLASS_QUERY, res.get_rcode())
if res.get_ancount() != 1:
raise NetBIOSError( 'Malformed response')
return NBPositiveNameQueryResponse(res.get_answers())
except select.error as ex:
if ex[0] != errno.EINTR and ex[0] != errno.EAGAIN:
raise NetBIOSError( 'Error occurs while waiting for response', ERRCLASS_OS, ex[0])
raise
def __querynodestatus(self, nbname, destaddr, type, scope, timeout):
self._setup_connection(destaddr)
trn_id = randint(1, 32000)
p = NetBIOSPacket()
p.set_trn_id(trn_id)
netbios_name = string.upper(nbname)
qn_label = encode_name(netbios_name, type, scope)
p.addQuestion(qn_label, QUESTION_TYPE_NBSTAT, QUESTION_CLASS_IN)
if not destaddr:
p.set_nm_flags(NM_FLAGS_BROADCAST)
destaddr = self.__broadcastaddr
req = p.rawData()
tries = 3
while 1:
try:
self.__sock.sendto(req, 0, ( destaddr, self.__servport ))
ready, _, _ = select.select([ self.__sock.fileno() ], [ ] , [ ], timeout)
if not ready:
if tries:
# Retry again until tries == 0
tries -= 1
else:
raise NetBIOSTimeout
else:
try:
data, _ = self.__sock.recvfrom(65536, 0)
except Exception as e:
raise NetBIOSError("recvfrom error: %s" % str(e))
self.__sock.close()
res = NetBIOSPacket(data)
if res.get_trn_id() == p.get_trn_id():
if res.get_rcode():
if res.get_rcode() == 0x03:
# I'm just guessing here
raise NetBIOSError("Cannot get data from server")
else:
raise NetBIOSError( 'Negative name query response', ERRCLASS_QUERY, res.get_rcode())
answ = NBNodeStatusResponse(res.get_answers())
self.mac = answ.get_mac()
return answ.get_node_names()
except select.error as ex:
if ex[0] != errno.EINTR and ex[0] != errno.EAGAIN:
raise NetBIOSError( 'Error occurs while waiting for response', ERRCLASS_OS, ex[0])
except socket.error as ex:
raise NetBIOSError('Connection error: %s' % str(ex))
# Perform first and second level encoding of name as specified in RFC 1001 (Section 4)
def encode_name(name, type, scope):
if name == '*':
name += '\0' * 15
elif len(name) > 15:
name = name[:15] + chr(type)
else:
name = string.ljust(name, 15) + chr(type)
encoded_name = chr(len(name) * 2) + re.sub('.', _do_first_level_encoding, name)
if scope:
encoded_scope = ''
for s in string.split(scope, '.'):
encoded_scope = encoded_scope + chr(len(s)) + s
return encoded_name + encoded_scope + '\0'
else:
return encoded_name + '\0'
# Internal method for use in encode_name()
def _do_first_level_encoding(m):
s = ord(m.group(0))
return string.uppercase[s >> 4] + string.uppercase[s & 0x0f]
def decode_name(name):
name_length = ord(name[0])
assert name_length == 32
decoded_name = re.sub('..', _do_first_level_decoding, name[1:33])
if name[33] == '\0':
return 34, decoded_name, ''
else:
decoded_domain = ''
offset = 34
while 1:
domain_length = ord(name[offset])
if domain_length == 0:
break
decoded_domain = '.' + name[offset:offset + domain_length]
offset += domain_length
return offset + 1, decoded_name, decoded_domain
def _do_first_level_decoding(m):
s = m.group(0)
return chr(((ord(s[0]) - ord('A')) << 4) | (ord(s[1]) - ord('A')))
class NetBIOSSessionPacket:
def __init__(self, data = 0):
self.type = 0x0
self.flags = 0x0
self.length = 0x0
if data == 0:
self._trailer = ''
else:
try:
self.type = ord(data[0])
if self.type == NETBIOS_SESSION_MESSAGE:
self.length = ord(data[1]) << 16 | (unpack('!H', data[2:4])[0])
else:
self.flags = ord(data[1])
self.length = unpack('!H', data[2:4])[0]
self._trailer = data[4:]
except:
raise NetBIOSError( 'Wrong packet format ' )
def set_type(self, type):
self.type = type
def get_type(self):
return self.type
def rawData(self):
if self.type == NETBIOS_SESSION_MESSAGE:
data = pack('!BBH',self.type,self.length >> 16,self.length & 0xFFFF) + self._trailer
else:
data = pack('!BBH',self.type,self.flags,self.length) + self._trailer
return data
def set_trailer(self,data):
self._trailer = data
self.length = len(data)
def get_length(self):
return self.length
def get_trailer(self):
return self._trailer
class NetBIOSSession:
def __init__(self, myname, remote_name, remote_host, remote_type = TYPE_SERVER, sess_port = NETBIOS_SESSION_PORT, timeout = None, local_type = TYPE_WORKSTATION, sock = None):
if len(myname) > 15:
self.__myname = string.upper(myname[:15])
else:
self.__myname = string.upper(myname)
self.__local_type = local_type
assert remote_name
# if destination port SMB_SESSION_PORT and remote name *SMBSERVER, we're changing it to its IP address
# helping solving the client mistake ;)
if remote_name == '*SMBSERVER' and sess_port == SMB_SESSION_PORT:
remote_name = remote_host
# If remote name is *SMBSERVER let's try to query its name.. if can't be guessed, continue and hope for the best
if remote_name == '*SMBSERVER':
nb = NetBIOS()
try:
res = nb.getnetbiosname(remote_host)
except:
res = None
pass
if res is not None:
remote_name = res
if len(remote_name) > 15:
self.__remote_name = string.upper(remote_name[:15])
else:
self.__remote_name = string.upper(remote_name)
self.__remote_type = remote_type
self.__remote_host = remote_host
if sock is not None:
# We are acting as a server
self._sock = sock
else:
self._sock = self._setup_connection((remote_host, sess_port))
if sess_port == NETBIOS_SESSION_PORT:
self._request_session(remote_type, local_type, timeout)
def get_myname(self):
return self.__myname
def get_mytype(self):
return self.__local_type
def get_remote_host(self):
return self.__remote_host
def get_remote_name(self):
return self.__remote_name
def get_remote_type(self):
return self.__remote_type
def close(self):
self._sock.close()
def get_socket(self):
return self._sock
class NetBIOSUDPSessionPacket(Structure):
TYPE_DIRECT_UNIQUE = 16
TYPE_DIRECT_GROUP = 17
FLAGS_MORE_FRAGMENTS = 1
FLAGS_FIRST_FRAGMENT = 2
FLAGS_B_NODE = 0
structure = (
('Type','B=16'), # Direct Unique Datagram
('Flags','B=2'), # FLAGS_FIRST_FRAGMENT
('ID','<H'),
('_SourceIP','>L'),
('SourceIP','"'),
('SourcePort','>H=138'),
('DataLegth','>H-Data'),
('Offset','>H=0'),
('SourceName','z'),
('DestinationName','z'),
('Data',':'),
)
def getData(self):
addr = self['SourceIP'].split('.')
addr = [int(x) for x in addr]
addr = (((addr[0] << 8) + addr[1] << 8) + addr[2] << 8) + addr[3]
self['_SourceIP'] = addr
return Structure.getData(self)
def get_trailer(self):
return self['Data']
class NetBIOSUDPSession(NetBIOSSession):
def _setup_connection(self, peer):
af, socktype, proto, canonname, sa = socket.getaddrinfo(peer[0], peer[1], 0, socket.SOCK_DGRAM)[0]
sock = socket.socket(af, socktype, proto)
sock.connect(sa)
sock = socket.socket(af, socktype, proto)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind((INADDR_ANY, 138))
self.peer = peer
return sock
def _request_session(self, remote_type, local_type, timeout = None):
pass
def next_id(self):
if hasattr(self, '__dgram_id'):
answer = self.__dgram_id
else:
self.__dgram_id = randint(1,65535)
answer = self.__dgram_id
self.__dgram_id += 1
return answer
def send_packet(self, data):
# Yes... I know...
self._sock.connect(self.peer)
p = NetBIOSUDPSessionPacket()
p['ID'] = self.next_id()
p['SourceIP'] = self._sock.getsockname()[0]
p['SourceName'] = encode_name(self.get_myname(), self.get_mytype(), '')[:-1]
p['DestinationName'] = encode_name(self.get_remote_name(), self.get_remote_type(), '')[:-1]
p['Data'] = data
self._sock.sendto(str(p), self.peer)
self._sock.close()
self._sock = self._setup_connection(self.peer)
def recv_packet(self, timeout = None):
# The next loop is a workaround for a bigger problem:
# When data reaches higher layers, the lower headers are lost,
# and with them, for example, the source IP. Hence, SMB users
# can't know where packets are comming from... we need a better
# solution, right now, we will filter everything except packets
# coming from the remote_host specified in __init__()
while 1:
data, peer = self._sock.recvfrom(8192)
# print "peer: %r self.peer: %r" % (peer, self.peer)
if peer == self.peer: break
return NetBIOSUDPSessionPacket(data)
class NetBIOSTCPSession(NetBIOSSession):
def __init__(self, myname, remote_name, remote_host, remote_type = TYPE_SERVER, sess_port = NETBIOS_SESSION_PORT, timeout = None, local_type = TYPE_WORKSTATION, sock = None, select_poll = False):
self.__select_poll = select_poll
if self.__select_poll:
self.read_function = self.polling_read
else:
self.read_function = self.non_polling_read
NetBIOSSession.__init__(self, myname, remote_name, remote_host, remote_type = remote_type, sess_port = sess_port, timeout = timeout, local_type = local_type, sock=sock)
def _setup_connection(self, peer):
try:
af, socktype, proto, canonname, sa = socket.getaddrinfo(peer[0], peer[1], 0, socket.SOCK_STREAM)[0]
sock = socket.socket(af, socktype, proto)
sock.connect(sa)
except socket.error as e:
raise socket.error("Connection error (%s:%s)" % (peer[0], peer[1]), e)
return sock
def send_packet(self, data):
p = NetBIOSSessionPacket()
p.set_type(NETBIOS_SESSION_MESSAGE)
p.set_trailer(data)
self._sock.send(p.rawData())
def recv_packet(self, timeout = None):
data = self.__read(timeout)
return NetBIOSSessionPacket(data)
def _request_session(self, remote_type, local_type, timeout = None):
p = NetBIOSSessionPacket()
remote_name = encode_name(self.get_remote_name(), remote_type, '')
myname = encode_name(self.get_myname(), local_type, '')
p.set_type(NETBIOS_SESSION_REQUEST)
p.set_trailer(remote_name + myname)
self._sock.send(p.rawData())
while 1:
p = self.recv_packet(timeout)
if p.get_type() == NETBIOS_SESSION_NEGATIVE_RESPONSE:
raise NetBIOSError( 'Cannot request session', ERRCLASS_SESSION, ord(p.get_trailer()[0]))
elif p.get_type() == NETBIOS_SESSION_POSITIVE_RESPONSE:
break
else:
# Ignore all other messages, most probably keepalive messages
pass
def polling_read(self, read_length, timeout):
data = ''
if timeout is None:
timeout = 3600
time_left = timeout
CHUNK_TIME = 0.025
bytes_left = read_length
while bytes_left > 0:
try:
ready, _, _ = select.select([self._sock.fileno() ], [ ], [ ], 0)
if not ready:
if time_left <= 0:
raise NetBIOSTimeout
else:
time.sleep(CHUNK_TIME)
time_left -= CHUNK_TIME
continue
received = self._sock.recv(bytes_left)
if len(received) == 0:
raise NetBIOSError( 'Error while reading from remote', ERRCLASS_OS, None)
data = data + received
bytes_left = read_length - len(data)
except select.error as ex:
if ex[0] != errno.EINTR and ex[0] != errno.EAGAIN:
raise NetBIOSError( 'Error occurs while reading from remote', ERRCLASS_OS, ex[0])
return data
def non_polling_read(self, read_length, timeout):
data = ''
bytes_left = read_length
while bytes_left > 0:
try:
ready, _, _ = select.select([self._sock.fileno() ], [ ], [ ], timeout)
if not ready:
raise NetBIOSTimeout
received = self._sock.recv(bytes_left)
if len(received) == 0:
raise NetBIOSError( 'Error while reading from remote', ERRCLASS_OS, None)
data = data + received
bytes_left = read_length - len(data)
except select.error as ex:
if ex[0] != errno.EINTR and ex[0] != errno.EAGAIN:
raise NetBIOSError( 'Error occurs while reading from remote', ERRCLASS_OS, ex[0])
return data
def __read(self, timeout = None):
data = self.read_function(4, timeout)
type, flags, length = unpack('>ccH', data)
if ord(type) == NETBIOS_SESSION_MESSAGE:
length |= ord(flags) << 16
else:
if ord(flags) & 0x01:
length |= 0x10000
data2 = self.read_function(length, timeout)
return data + data2
ERRCLASS_QUERY = 0x00
ERRCLASS_SESSION = 0xf0
ERRCLASS_OS = 0xff
QUERY_ERRORS = { 0x01: 'Request format error. Please file a bug report.',
0x02: 'Internal server error',
0x03: 'Name does not exist',
0x04: 'Unsupported request',
0x05: 'Request refused'
}
SESSION_ERRORS = { 0x80: 'Not listening on called name',
0x81: 'Not listening for calling name',
0x82: 'Called name not present',
0x83: 'Sufficient resources',
0x8f: 'Unspecified error'
}
def main():
def get_netbios_host_by_name(name):
n = NetBIOS()
n.set_broadcastaddr('255.255.255.255') # To avoid use "<broadcast>" in socket
for qtype in (TYPE_WORKSTATION, TYPE_CLIENT, TYPE_SERVER, TYPE_DOMAIN_MASTER, TYPE_DOMAIN_CONTROLLER):
try:
addrs = n.gethostbyname(name, qtype = qtype).get_addr_entries()
except NetBIOSTimeout:
continue
else:
return addrs
raise Exception("Host not found")
n = get_netbios_host_by_name("some-host")
print(n)
if __name__ == '__main__':
main()
| deps/src/curl-7.65.1/tests/python_dependencies/impacket/nmb.py | 35,940 | This is a packet as defined in RFC 1002
Copyright (c) 2003-2016 CORE Security Technologies This software is provided under under a slightly modified version of the Apache Software License. See the accompanying LICENSE file for more information. -*- mode: python; tab-width: 4 -*- Copyright (C) 2001 Michael Teo <michaelteo@bigfoot.com> nmb.py - NetBIOS library This software is provided 'as-is', without any express or implied warranty. In no event will the author be held liable for any damages arising from the use of this software. Permission is granted to anyone to use this software for any purpose, including commercial applications, and to alter it and redistribute it freely, subject to the following restrictions: 1. The origin of this software must not be misrepresented; you must not claim that you wrote the original software. If you use this software in a product, an acknowledgment in the product documentation would be appreciated but is not required. 2. Altered source versions must be plainly marked as such, and must not be misrepresented as being the original software. 3. This notice cannot be removed or altered from any source distribution. Altered source done by Alberto Solino (@agsolino) Taken from socket module reference Default port for NetBIOS name service Default port for NetBIOS session service Default port for SMB session service Owner Node Type Constants Name Type Constants Opcodes values NM_FLAGS QUESTION_TYPE NetBIOS general Name Service Resource Record NetBIOS NODE STATUS Resource Record QUESTION_CLASS Internet class RR_TYPE Resource Record Type code IP address Resource Record Name Server Resource Record NULL Resource Record NetBIOS general Name Service Resource Record NetBIOS NODE STATUS Resource Record Resource Record Class Internet class RCODE values Format Error. Request was invalidly formatted. Server failure. Problem with NBNS, cannot process name. Unsupported request error. Allowable only for challenging NBNS when gets an Update type registration request. Refused error. For policy reasons server will not register this name from this host. Active error. Name is owned by another node. Name in conflict error. A UNIQUE name is owned by more than one node. NAME_FLAGS Permanent Name Flag. If one (1) then entry is for the permanent node name. Flag is zero (0) for all other names. Active Name Flag. All entries have this flag set to one (1). Conflict Flag. If one (1) then name on this node is in conflict. Deregister Flag. If one (1) then this name is in the process of being deleted. NetBIOS Session Types Transaction ID for Name Service Transaction. Requestor places a unique value for each active transaction. Responder puts NAME_TRN_ID value from request packet in response packet. Packet type code Flags for operation Result codes of request. Unsigned 16 bit integer specifying the number of entries in the question section of a Name Unsigned 16 bit integer specifying the number of resource records in the answer section of a Name Service packet. Unsigned 16 bit integer specifying the number of resource records in the authority section of a Name Service packet. Unsigned 16 bit integer specifying the number of resource records in the additional records section of a Name Service packeT. Creates a NetBIOS instance without specifying any default NetBIOS domain nameserver. All queries will be sent through the servport. We try to bind to a port for 10 tries Set the default NetBIOS domain nameserver. Return the default NetBIOS domain nameserver, or None if none is specified. Set the broadcast address to be used for query. Return the broadcast address to be used, or BROADCAST_ADDR if default broadcast address is used. Returns a NBPositiveNameQueryResponse instance containing the host information for nbname. If a NetBIOS domain nameserver has been specified, it will be used for the query. Otherwise, the query is broadcasted on the broadcast address. Returns a list of NBNodeEntry instances containing node status information for nbname. If destaddr contains an IP address, then this will become an unicast query on the destaddr. Raises NetBIOSTimeout if timeout (in secs) is reached. Raises NetBIOSError for other errors Retry again until tries == 0 Retry again until tries == 0 I'm just guessing here Perform first and second level encoding of name as specified in RFC 1001 (Section 4) Internal method for use in encode_name() if destination port SMB_SESSION_PORT and remote name *SMBSERVER, we're changing it to its IP address helping solving the client mistake ;) If remote name is *SMBSERVER let's try to query its name.. if can't be guessed, continue and hope for the best We are acting as a server Direct Unique Datagram FLAGS_FIRST_FRAGMENT Yes... I know... The next loop is a workaround for a bigger problem: When data reaches higher layers, the lower headers are lost, and with them, for example, the source IP. Hence, SMB users can't know where packets are comming from... we need a better solution, right now, we will filter everything except packets coming from the remote_host specified in __init__() print "peer: %r self.peer: %r" % (peer, self.peer) Ignore all other messages, most probably keepalive messages To avoid use "<broadcast>" in socket | 5,288 | en | 0.820243 |
#! /usr/bin/env python
# coding:utf-8
import unittest
from kovot.response import Response
from kovot.response import ResponseTransformer
from kovot.response import ResponseSelector
class ResponseTest(unittest.TestCase):
def test_response(self):
text = "京都にいます"
score = 1.2
res = Response(text=text, score=score)
self.assertEqual(res.text, text)
self.assertEqual(res.score, score)
class TransformerTest(unittest.TestCase):
def test_transformer(self):
text = "京都にいます"
score = 1.2
res = Response(text=text, score=score)
transformer = ResponseTransformer()
self.assertEqual(transformer.transform(res), res)
class SelectorTest(unittest.TestCase):
def test_select(self):
x = Response(text="ひとつめ", score=1.2)
y = Response(text="ふたつめ", score=3.2)
z = Response(text="みっつめ", score=0.8)
selector = ResponseSelector()
self.assertEqual(selector.select([x, y, z]), [y, x, z])
def test_select_with_num(self):
x = Response(text="ひとつめ", score=1.2)
y = Response(text="ふたつめ", score=3.2)
z = Response(text="みっつめ", score=0.8)
selector = ResponseSelector()
self.assertEqual(selector.select([x, y, z], num=2),
[y, x]) | test/test_response.py | 1,385 | ! /usr/bin/env python coding:utf-8 | 34 | en | 0.408602 |
import random
# create the initial array
regionsEMEA = ["Central Eastern Europe", "France", "Germany", "Middle East / Africa", "United Kingdom", "Western Europe"]
# randomly pick region after region
num = len(regionsEMEA)
for x in range(num):
numRegions = len(regionsEMEA)
pos = random.randint(0,numRegions-1)
selected = regionsEMEA[pos]
print(selected)
regionsEMEA.pop(pos)
| randEMEA.py | 397 | create the initial array randomly pick region after region | 58 | en | 0.568883 |
import logging
from typing import Callable, TypeVar, List, Optional, Dict
import ray
from ray.exceptions import RayActorError
from ray.util.sgd.v2.worker_group import WorkerGroup
from ray.util.sgd.v2.session import init_session, get_session, shutdown_session
T = TypeVar("T")
logger = logging.getLogger(__name__)
class BackendConfig:
"""Parent class for configurations of training backend."""
@property
def backend_cls(self):
raise NotImplementedError
class SGDBackendError(Exception):
"""Errors with BackendExecutor that should not be exposed to user."""
class BackendExecutor:
"""Main execution class for training backends.
This class holds a worker group and is responsible for executing the
training function on the workers, and collecting intermediate results
from ``sgd.report()``.
Args:
backend_config (BackendConfig): The configurations for this
specific backend.
num_workers (int): Number of workers to use for training.
num_cpus_per_worker (float): Number of CPUs to use per worker.
num_gpus_per_worker (float): Number of GPUs to use per worker.
"""
def __init__(self,
backend_config: BackendConfig,
num_workers: int = 1,
num_cpus_per_worker: float = 1,
num_gpus_per_worker: float = 0):
self._backend_config = backend_config
self._backend = self._backend_config.backend_cls()
self._num_workers = num_workers
self._num_cpus_per_worker = num_cpus_per_worker
self._num_gpus_per_worker = num_gpus_per_worker
self.worker_group = InactiveWorkerGroup()
def start(self, initialization_hook: Optional[Callable[[], None]] = None):
"""Starts the worker group."""
self.worker_group = WorkerGroup(self._num_workers,
self._num_cpus_per_worker,
self._num_gpus_per_worker)
if initialization_hook:
self.worker_group.execute(initialization_hook)
self._backend.on_start(self.worker_group, self._backend_config)
def start_training(self, train_func: Callable[[], T]) -> None:
"""Executes a training function on all workers in a separate thread.
``finish_training`` should be called after this.
Args:
train_func (Callable): The training function to run on each worker.
"""
# First initialize the session.
def initialize_session(world_rank, train_func):
try:
init_session(training_func=train_func, world_rank=world_rank)
except ValueError:
raise SGDBackendError(
"Attempting to start training but a "
"previous training run is still ongoing. "
"You must call `finish_training` before "
"calling `start_training` again.")
futures = []
for world_rank in range(len(self.worker_group)):
futures.append(
self.worker_group.execute_single_async(
world_rank,
initialize_session,
world_rank=world_rank,
train_func=train_func))
ray.get(futures)
# Run the training function asynchronously in its own thread.
def train_async():
session = get_session()
session.start()
self.worker_group.execute_async(train_async)
def fetch_next_result(self) -> Optional[List[Dict]]:
"""Fetch next results produced by ``sgd.report()`` from each worker.
Assumes ``start_training`` has already been called.
Returns:
A list of dictionaries of values passed to ``sgd.report()`` from
each worker. Each item corresponds to an intermediate result
a single worker. If there are no more items to fetch,
returns None.
"""
def get_next():
# Get the session for this worker.
try:
session = get_session()
except ValueError:
# Session is not initialized yet.
raise SGDBackendError("`fetch_next_result` has been called "
"before `start_training`. Please call "
"`start_training` before "
"`fetch_next_result`.")
try:
result = session.get_next()
except RuntimeError:
# Training thread has not been started yet.
raise SGDBackendError("`fetch_next_result` has been called "
"before `start_training`. Please call "
"`start_training` before "
"`fetch_next_result`.")
return result
futures = self.worker_group.execute_async(get_next)
results = self.get_with_failure_handling(futures)
# Check if any worker returned None.
if any(r is None for r in results):
# Either all workers have results or none of them do.
if not all(r is None for r in results):
raise RuntimeError("Some workers returned results while "
"others didn't. Make sure that "
"`sgd.report()` is called the same number "
"of times on all workers.")
else:
results = None
return results
def finish_training(self) -> List[T]:
"""Finish training and return final results. Propagate any exceptions.
Blocks until training is finished on all workers.
Assumes `start_training` has already been called.
Returns:
A list of return values from calling ``train_func`` on each worker.
Each item corresponds to the return value from a single worker.
"""
def end_training():
# Get the session for this worker.
try:
session = get_session()
except ValueError:
# Session is not initialized yet.
raise SGDBackendError("`finish_training` has been called "
"before `start_training`. Please call "
"`start_training` before "
"`finish_training`.")
try:
# session.finish raises any Exceptions from training.
output = session.finish()
finally:
# Shutdown session even if session.finish() raises an
# Exception.
shutdown_session()
return output
futures = self.worker_group.execute_async(end_training)
return self.get_with_failure_handling(futures)
def get_with_failure_handling(self, remote_values):
"""Gets the remote values while handling for worker failures.
Args:
remote_values (list): List of object refs representing functions
that may fail in the middle of execution. For example, running
a SGD training loop in multiple parallel actor calls.
Returns:
The resolved objects represented by the passed in ObjectRefs.
"""
unfinished = remote_values
try:
while len(unfinished) > 0:
finished, unfinished = ray.wait(unfinished)
# If a failure occurs the ObjectRef will be marked as finished.
# Calling ray.get will expose the failure as a RayActorError.
ray.get(finished)
except RayActorError as exc:
logger.exception(str(exc))
self.handle_failure()
return
return ray.get(remote_values)
def handle_failure(self):
# TODO: Fault-tolerance/elastic training here.
self.shutdown()
raise RuntimeError("Worker crashed during training. "
"Training unsuccessful.")
def shutdown(self):
"""Shuts down the workers in the worker group."""
try:
self._backend.on_shutdown(self.worker_group, self._backend_config)
except RayActorError:
logger.warning("Graceful shutdown of backend failed. This is "
"expected if one of the workers has crashed.")
self.worker_group.shutdown()
self.worker_group = InactiveWorkerGroup()
class BackendInterface:
def on_start(self, worker_group: WorkerGroup,
backend_config: BackendConfig):
raise NotImplementedError
def on_shutdown(self, worker_group: WorkerGroup,
backend_config: BackendConfig):
raise NotImplementedError
class InactiveWorkerGroupError(Exception):
"""Raised when underlying worker group is inactive."""
class InactiveWorkerGroup():
# TODO: fix inheritence. perhaps create WorkerGroupInterface.
def __getattribute__(self, *args, **kwargs):
raise InactiveWorkerGroupError()
def __len__(self):
raise InactiveWorkerGroupError()
| python/ray/util/sgd/v2/backends/backend.py | 9,304 | Parent class for configurations of training backend.
Main execution class for training backends.
This class holds a worker group and is responsible for executing the
training function on the workers, and collecting intermediate results
from ``sgd.report()``.
Args:
backend_config (BackendConfig): The configurations for this
specific backend.
num_workers (int): Number of workers to use for training.
num_cpus_per_worker (float): Number of CPUs to use per worker.
num_gpus_per_worker (float): Number of GPUs to use per worker.
Raised when underlying worker group is inactive.
Errors with BackendExecutor that should not be exposed to user.
Fetch next results produced by ``sgd.report()`` from each worker.
Assumes ``start_training`` has already been called.
Returns:
A list of dictionaries of values passed to ``sgd.report()`` from
each worker. Each item corresponds to an intermediate result
a single worker. If there are no more items to fetch,
returns None.
Finish training and return final results. Propagate any exceptions.
Blocks until training is finished on all workers.
Assumes `start_training` has already been called.
Returns:
A list of return values from calling ``train_func`` on each worker.
Each item corresponds to the return value from a single worker.
Gets the remote values while handling for worker failures.
Args:
remote_values (list): List of object refs representing functions
that may fail in the middle of execution. For example, running
a SGD training loop in multiple parallel actor calls.
Returns:
The resolved objects represented by the passed in ObjectRefs.
Shuts down the workers in the worker group.
Starts the worker group.
Executes a training function on all workers in a separate thread.
``finish_training`` should be called after this.
Args:
train_func (Callable): The training function to run on each worker.
First initialize the session. Run the training function asynchronously in its own thread. Get the session for this worker. Session is not initialized yet. Training thread has not been started yet. Check if any worker returned None. Either all workers have results or none of them do. Get the session for this worker. Session is not initialized yet. session.finish raises any Exceptions from training. Shutdown session even if session.finish() raises an Exception. If a failure occurs the ObjectRef will be marked as finished. Calling ray.get will expose the failure as a RayActorError. TODO: Fault-tolerance/elastic training here. TODO: fix inheritence. perhaps create WorkerGroupInterface. | 2,641 | en | 0.895064 |
from django.test import TestCase
from authors.apps.authentication.models import User
class UserModelTest(TestCase):
"""
Test Suite for the User model class, User authentication.
"""
def test_create_user(self):
"""
Test User model can create a user successfully
"""
self.assertIsInstance(
User.objects.create_user(username="username",
email="username@mail.com",
password="password"), User)
| authors/apps/authentication/tests/test_create_user.py | 527 | Test Suite for the User model class, User authentication.
Test User model can create a user successfully | 104 | en | 0.851033 |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# MIT License
#
# Copyright 2018-2020 New York University Abu Dhabi
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
"""The CAMeL Tools transliteration utility.
Usage:
camel_transliterate (-s SCHEME | --scheme=SCHEME)
[-m MARKER | --marker=MARKER]
[-I | --ignore-markers]
[-S | --strip-markers]
[-o OUTPUT | --output=OUTPUT] [FILE]
camel_transliterate (-l | --list)
camel_transliterate (-v | --version)
camel_transliterate (-h | --help)
Options:
-s SCHEME --scheme
Scheme used for transliteration.
-o OUTPUT --output=OUTPUT
Output file. If not specified, output will be printed to stdout.
-m MARKER --marker=MARKER
Marker used to prefix tokens not to be transliterated.
[default: @@IGNORE@@]
-I --ignore-markers
Transliterate marked words as well.
-S --strip-markers
Remove markers in output.
-l --list
Show a list of available transliteration schemes.
-h --help
Show this screen.
-v --version
Show version.
"""
from __future__ import print_function, absolute_import
import sys
from docopt import docopt
import six
import camel_tools as camelt
from camel_tools.utils.stringutils import force_encoding, force_unicode
from camel_tools.utils.charmap import CharMapper
from camel_tools.utils.transliterate import Transliterator
__version__ = camelt.__version__
_BUILTIN_SCHEMES = [
('ar2bw', 'Arabic to Buckwalter'),
('ar2safebw', 'Arabic to Safe Buckwalter'),
('ar2xmlbw', 'Arabic to XML Buckwalter'),
('ar2hsb', 'Arabic to Habash-Soudi-Buckwalter'),
('bw2ar', 'Buckwalter to Arabic'),
('bw2safebw', 'Buckwalter to Safe Buckwalter'),
('bw2xmlbw', 'Buckwalter to XML Buckwalter'),
('bw2hsb', 'Buckwalter to Habash-Soudi-Buckwalter'),
('safebw2ar', 'Safe Buckwalter to Arabic'),
('safebw2bw', 'Safe Buckwalter to Buckwalter'),
('safebw2xmlbw', 'Safe Buckwalter to XML Buckwalter'),
('safebw2hsb', 'Safe Buckwalter to Habash-Soudi-Buckwalter'),
('xmlbw2ar', 'XML Buckwalter to Arabic'),
('xmlbw2bw', 'XML Buckwalter to Buckwalter'),
('xmlbw2safebw', 'XML Buckwalter to Safe Buckwalter'),
('xmlbw2hsb', 'XML Buckwalter to Habash-Soudi-Buckwalter'),
('hsb2ar', 'Habash-Soudi-Buckwalter to Arabic'),
('hsb2bw', 'Habash-Soudi-Buckwalter to Buckwalter'),
('hsb2safebw', 'Habash-Soudi-Buckwalter to Safe Buckwalter'),
('hsb2xmlbw', 'Habash-Soudi-Buckwalter to Habash-Soudi-Buckwalter'),
]
def _open_files(finpath, foutpath):
if finpath is None:
fin = sys.stdin
else:
try:
fin = open(finpath, 'r', encoding='utf-8')
except OSError:
sys.stderr.write('Error: Couldn\'t open input file {}.'
'\n'.format(repr(finpath)))
sys.exit(1)
if foutpath is None:
fout = sys.stdout
else:
try:
fout = open(foutpath, 'w', encoding='utf-8')
except OSError:
sys.stderr.write('Error: Couldn\'t open output file {}.'
'\n'.format(repr(foutpath)))
if finpath is not None:
fin.close()
sys.exit(1)
return fin, fout
def main(): # pragma: no cover
try:
version = ('CAMeL Tools v{}'.format(__version__))
arguments = docopt(__doc__, version=version)
if arguments['--list']:
for scheme in _BUILTIN_SCHEMES:
print("{} {}".format(scheme[0].ljust(20), scheme[1]))
sys.exit(0)
if arguments['--scheme'] is not None:
if arguments['--scheme'] not in [s[0] for s in _BUILTIN_SCHEMES]:
sys.stderr.write('Error: {} is not a valid scheme.\n'
'Run `camel_transliterate -l` to see the list'
' of available schemes.'
'\n'.format(repr(arguments['--scheme'])))
sys.exit(1)
if arguments['--marker'] is None:
marker = '@@IGNORE@@'
else:
marker = arguments['--marker']
ignore_markers = arguments['--ignore-markers']
strip_markers = arguments['--strip-markers']
# Open files (or just use stdin and stdout)
fin, fout = _open_files(arguments['FILE'], arguments['--output'])
# Load the CharMapper and initialize a Transliterator with it
try:
mapper = CharMapper.builtin_mapper(arguments['--scheme'])
trans = Transliterator(mapper, marker)
except Exception: # pylint: disable=W0703
sys.stderr.write('Error: Could not load builtin scheme'
' {}.\n'.format(repr(arguments['--scheme'])))
sys.exit(1)
# Transliterate lines
try:
for line in fin:
line = force_unicode(line)
if six.PY3:
fout.write(
trans.transliterate(line, strip_markers,
ignore_markers))
else:
fout.write(
force_encoding(
trans.transliterate(line, strip_markers,
ignore_markers)))
fout.flush()
# If everything worked so far, this shouldn't happen
except Exception: # pylint: disable=W0703
sys.stderr.write('Error: An unkown error occured during '
'transliteration.\n')
sys.exit(1)
# Cleanup
if arguments['FILE'] is not None:
fin.close()
if arguments['--output'] is not None:
fout.close()
sys.exit(0)
except KeyboardInterrupt:
sys.stderr.write('Exiting...\n')
sys.exit(1)
except Exception:
sys.stderr.write('Error: An unknown error occurred.\n')
sys.exit(1)
if __name__ == '__main__': # pragma: no cover
main()
| camel_tools/cli/camel_transliterate.py | 7,318 | The CAMeL Tools transliteration utility.
Usage:
camel_transliterate (-s SCHEME | --scheme=SCHEME)
[-m MARKER | --marker=MARKER]
[-I | --ignore-markers]
[-S | --strip-markers]
[-o OUTPUT | --output=OUTPUT] [FILE]
camel_transliterate (-l | --list)
camel_transliterate (-v | --version)
camel_transliterate (-h | --help)
Options:
-s SCHEME --scheme
Scheme used for transliteration.
-o OUTPUT --output=OUTPUT
Output file. If not specified, output will be printed to stdout.
-m MARKER --marker=MARKER
Marker used to prefix tokens not to be transliterated.
[default: @@IGNORE@@]
-I --ignore-markers
Transliterate marked words as well.
-S --strip-markers
Remove markers in output.
-l --list
Show a list of available transliteration schemes.
-h --help
Show this screen.
-v --version
Show version.
!/usr/bin/env python -*- coding: utf-8 -*- MIT License Copyright 2018-2020 New York University Abu Dhabi Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. pragma: no cover Open files (or just use stdin and stdout) Load the CharMapper and initialize a Transliterator with it pylint: disable=W0703 Transliterate lines If everything worked so far, this shouldn't happen pylint: disable=W0703 Cleanup pragma: no cover | 2,375 | en | 0.717559 |
# -*- coding: utf-8 -*-
"""
Created on Mon Mar 7 16:41:25 2011
@author: -
"""
import os;
import time;
import sys;
import plot_pca_functions;
import numpy as np
import matplotlib.pyplot as plt
import math
taylor_error_capitol= 0.608546356589;
pca_error_9_capitol = 0.614236131016; #at 10% sample-training
taylor_error_downtown= 0.248427497809; #this is for downtown12_12_4!
pca_error_9_downtown = 0.193806624247; #this is for downtown3_3_1!
fig = plt.figure();
| bvpl/bvpl_octree/taylor_vs_pca.py | 467 | Created on Mon Mar 7 16:41:25 2011
@author: -
-*- coding: utf-8 -*-at 10% sample-trainingthis is for downtown12_12_4!this is for downtown3_3_1! | 147 | en | 0.842077 |
# encoding: UTF-8
import os
print u'load {0}/*'.format(os.path.dirname(__file__))
# 默认设置
from chinese import text
# 是否要使用英文
from vnpy.trader.vtGlobal import globalSetting
if globalSetting['language'] == 'english':
from english import text | vnpy/trader/gateway/ctpGateway/language/__init__.py | 267 | encoding: UTF-8 默认设置 是否要使用英文 | 28 | zh | 0.969302 |
# Copyright 2020 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""
Testing RandomColor op in DE
"""
import numpy as np
import pytest
import mindspore.dataset as ds
import mindspore.dataset.transforms.py_transforms
import mindspore.dataset.vision.c_transforms as vision
import mindspore.dataset.vision.py_transforms as F
from mindspore import log as logger
from util import visualize_list, diff_mse, save_and_check_md5, \
config_get_set_seed, config_get_set_num_parallel_workers
DATA_DIR = "../data/dataset/testImageNetData/train/"
C_DATA_DIR = ["../data/dataset/test_tf_file_3_images/train-0000-of-0001.data"]
C_SCHEMA_DIR = "../data/dataset/test_tf_file_3_images/datasetSchema.json"
MNIST_DATA_DIR = "../data/dataset/testMnistData"
GENERATE_GOLDEN = False
def test_random_color_py(degrees=(0.1, 1.9), plot=False):
"""
Test Python RandomColor
"""
logger.info("Test RandomColor")
# Original Images
data = ds.ImageFolderDataset(dataset_dir=DATA_DIR, shuffle=False)
transforms_original = mindspore.dataset.transforms.py_transforms.Compose([F.Decode(),
F.Resize((224, 224)),
F.ToTensor()])
ds_original = data.map(operations=transforms_original, input_columns="image")
ds_original = ds_original.batch(512)
for idx, (image, _) in enumerate(ds_original):
if idx == 0:
images_original = np.transpose(image.asnumpy(), (0, 2, 3, 1))
else:
images_original = np.append(images_original,
np.transpose(image.asnumpy(), (0, 2, 3, 1)),
axis=0)
# Random Color Adjusted Images
data = ds.ImageFolderDataset(dataset_dir=DATA_DIR, shuffle=False)
transforms_random_color = mindspore.dataset.transforms.py_transforms.Compose([F.Decode(),
F.Resize((224, 224)),
F.RandomColor(degrees=degrees),
F.ToTensor()])
ds_random_color = data.map(operations=transforms_random_color, input_columns="image")
ds_random_color = ds_random_color.batch(512)
for idx, (image, _) in enumerate(ds_random_color):
if idx == 0:
images_random_color = np.transpose(image.asnumpy(), (0, 2, 3, 1))
else:
images_random_color = np.append(images_random_color,
np.transpose(image.asnumpy(), (0, 2, 3, 1)),
axis=0)
num_samples = images_original.shape[0]
mse = np.zeros(num_samples)
for i in range(num_samples):
mse[i] = diff_mse(images_random_color[i], images_original[i])
logger.info("MSE= {}".format(str(np.mean(mse))))
if plot:
visualize_list(images_original, images_random_color)
def test_random_color_c(degrees=(0.1, 1.9), plot=False, run_golden=True):
"""
Test Cpp RandomColor
"""
logger.info("test_random_color_op")
original_seed = config_get_set_seed(10)
original_num_parallel_workers = config_get_set_num_parallel_workers(1)
# Decode with rgb format set to True
data1 = ds.TFRecordDataset(C_DATA_DIR, C_SCHEMA_DIR, columns_list=["image"], shuffle=False)
data2 = ds.TFRecordDataset(C_DATA_DIR, C_SCHEMA_DIR, columns_list=["image"], shuffle=False)
# Serialize and Load dataset requires using vision.Decode instead of vision.Decode().
if degrees is None:
c_op = vision.RandomColor()
else:
c_op = vision.RandomColor(degrees)
data1 = data1.map(operations=[vision.Decode()], input_columns=["image"])
data2 = data2.map(operations=[vision.Decode(), c_op], input_columns=["image"])
image_random_color_op = []
image = []
for item1, item2 in zip(data1.create_dict_iterator(num_epochs=1, output_numpy=True),
data2.create_dict_iterator(num_epochs=1, output_numpy=True)):
actual = item1["image"]
expected = item2["image"]
image.append(actual)
image_random_color_op.append(expected)
if run_golden:
# Compare with expected md5 from images
filename = "random_color_op_02_result.npz"
save_and_check_md5(data2, filename, generate_golden=GENERATE_GOLDEN)
if plot:
visualize_list(image, image_random_color_op)
# Restore configuration
ds.config.set_seed(original_seed)
ds.config.set_num_parallel_workers((original_num_parallel_workers))
def test_random_color_py_md5():
"""
Test Python RandomColor with md5 check
"""
logger.info("Test RandomColor with md5 check")
original_seed = config_get_set_seed(10)
original_num_parallel_workers = config_get_set_num_parallel_workers(1)
# Generate dataset
data = ds.ImageFolderDataset(dataset_dir=DATA_DIR, shuffle=False)
transforms = mindspore.dataset.transforms.py_transforms.Compose([F.Decode(),
F.RandomColor((2.0, 2.5)),
F.ToTensor()])
data = data.map(operations=transforms, input_columns="image")
# Compare with expected md5 from images
filename = "random_color_01_result.npz"
save_and_check_md5(data, filename, generate_golden=GENERATE_GOLDEN)
# Restore configuration
ds.config.set_seed(original_seed)
ds.config.set_num_parallel_workers((original_num_parallel_workers))
def test_compare_random_color_op(degrees=None, plot=False):
"""
Compare Random Color op in Python and Cpp
"""
logger.info("test_random_color_op")
original_seed = config_get_set_seed(5)
original_num_parallel_workers = config_get_set_num_parallel_workers(1)
# Decode with rgb format set to True
data1 = ds.TFRecordDataset(C_DATA_DIR, C_SCHEMA_DIR, columns_list=["image"], shuffle=False)
data2 = ds.TFRecordDataset(C_DATA_DIR, C_SCHEMA_DIR, columns_list=["image"], shuffle=False)
if degrees is None:
c_op = vision.RandomColor()
p_op = F.RandomColor()
else:
c_op = vision.RandomColor(degrees)
p_op = F.RandomColor(degrees)
transforms_random_color_py = mindspore.dataset.transforms.py_transforms.Compose(
[lambda img: img.astype(np.uint8), F.ToPIL(),
p_op, np.array])
data1 = data1.map(operations=[vision.Decode(), c_op], input_columns=["image"])
data2 = data2.map(operations=[vision.Decode()], input_columns=["image"])
data2 = data2.map(operations=transforms_random_color_py, input_columns=["image"])
image_random_color_op = []
image = []
for item1, item2 in zip(data1.create_dict_iterator(num_epochs=1, output_numpy=True),
data2.create_dict_iterator(num_epochs=1, output_numpy=True)):
actual = item1["image"]
expected = item2["image"]
image_random_color_op.append(actual)
image.append(expected)
assert actual.shape == expected.shape
mse = diff_mse(actual, expected)
logger.info("MSE= {}".format(str(np.mean(mse))))
# Restore configuration
ds.config.set_seed(original_seed)
ds.config.set_num_parallel_workers(original_num_parallel_workers)
if plot:
visualize_list(image, image_random_color_op)
def test_random_color_c_errors():
"""
Test that Cpp RandomColor errors with bad input
"""
with pytest.raises(TypeError) as error_info:
vision.RandomColor((12))
assert "degrees must be either a tuple or a list." in str(error_info.value)
with pytest.raises(TypeError) as error_info:
vision.RandomColor(("col", 3))
assert "Argument degrees[0] with value col is not of type (<class 'int'>, <class 'float'>)." in str(
error_info.value)
with pytest.raises(ValueError) as error_info:
vision.RandomColor((0.9, 0.1))
assert "degrees should be in (min,max) format. Got (max,min)." in str(error_info.value)
with pytest.raises(ValueError) as error_info:
vision.RandomColor((0.9,))
assert "degrees must be a sequence with length 2." in str(error_info.value)
# RandomColor Cpp Op will fail with one channel input
mnist_ds = ds.MnistDataset(dataset_dir=MNIST_DATA_DIR, num_samples=2, shuffle=False)
mnist_ds = mnist_ds.map(operations=vision.RandomColor(), input_columns="image")
with pytest.raises(RuntimeError) as error_info:
for _ in enumerate(mnist_ds):
pass
assert "image shape is not <H,W,C> or channel is not 3" in str(error_info.value)
if __name__ == "__main__":
test_random_color_py()
test_random_color_py(plot=True)
test_random_color_py(degrees=(2.0, 2.5), plot=True) # Test with degree values that show more obvious transformation
test_random_color_py_md5()
test_random_color_c()
test_random_color_c(plot=True)
test_random_color_c(degrees=(2.0, 2.5), plot=True,
run_golden=False) # Test with degree values that show more obvious transformation
test_random_color_c(degrees=(0.1, 0.1), plot=True, run_golden=False)
test_compare_random_color_op(plot=True)
test_random_color_c_errors()
| tests/ut/python/dataset/test_random_color.py | 10,052 | Compare Random Color op in Python and Cpp
Test Cpp RandomColor
Test that Cpp RandomColor errors with bad input
Test Python RandomColor
Test Python RandomColor with md5 check
Testing RandomColor op in DE
Copyright 2020 Huawei Technologies Co., Ltd Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ============================================================================== Original Images Random Color Adjusted Images Decode with rgb format set to True Serialize and Load dataset requires using vision.Decode instead of vision.Decode(). Compare with expected md5 from images Restore configuration Generate dataset Compare with expected md5 from images Restore configuration Decode with rgb format set to True Restore configuration RandomColor Cpp Op will fail with one channel input Test with degree values that show more obvious transformation Test with degree values that show more obvious transformation | 1,379 | en | 0.739068 |
# -*- coding: utf-8 -*-
check_state = 0
d = {}
p = []
e = []
m = []
n = int(input())
for _ in range(n):
ln = input().split()
d[ln[0]] = (int(ln[1]), int(ln[2]), int(ln[3]))
p.append(int(ln[1]))
e.append(int(ln[2]))
m.append(int(ln[3]))
while True:
if check_state == 0:
if p.count(max(p)) == 1:
for k in d:
if d[k][0] == max(p):
print(k)
break
break
else:
del_list = []
for k in d:
if d[k][0] != max(p):
p.remove(d[k][0])
e.remove(d[k][1])
m.remove(d[k][2])
del_list.append(k)
for k in del_list:
del d[k]
if check_state == 1:
if e.count(max(e)) == 1:
for k in d:
if d[k][1] == max(e):
print(k)
break
break
else:
del_list = []
for k in d:
if d[k][1] != max(e):
p.remove(d[k][0])
e.remove(d[k][1])
m.remove(d[k][2])
del_list.append(k)
for k in del_list:
del d[k]
if check_state == 2:
if m.count(min(m)) == 1:
for k in d:
if d[k][2] == min(m):
print(k)
break
break
else:
del_list = []
for k in d:
if d[k][2] != min(m):
p.remove(d[k][0])
e.remove(d[k][1])
m.remove(d[k][2])
del_list.append(k)
for k in del_list:
del d[k]
# Ordem lexicográfica é a mesma coisa que afabética nesse caso
keys = sorted(d.keys())
print(keys[0])
break
check_state += 1 | 2654.py | 2,110 | -*- coding: utf-8 -*- Ordem lexicográfica é a mesma coisa que afabética nesse caso | 82 | pt | 0.925756 |
# encoding: utf-8
"""
parse_process.py
Created by Thomas Mangin on 2015-06-05.
Copyright (c) 2009-2017 Exa Networks. All rights reserved.
License: 3-clause BSD. (See the COPYRIGHT file)
"""
import time
import copy
from collections import defaultdict
from exabgp.configuration.core import Section
from exabgp.configuration.parser import boolean
from exabgp.configuration.neighbor.parser import processes
class _ParseDirection (Section):
action = {
'parsed': 'set-command',
'packets': 'set-command',
'consolidate': 'set-command',
'open': 'set-command',
'update': 'set-command',
'notification': 'set-command',
'keepalive': 'set-command',
'refresh': 'set-command',
'operational': 'set-command',
}
known = {
'parsed': boolean,
'packets': boolean,
'consolidate': boolean,
'open': boolean,
'update': boolean,
'notification': boolean,
'keepalive': boolean,
'refresh': boolean,
'operational': boolean,
}
default = {
'parsed': True,
'packets': True,
'consolidate': True,
'open': True,
'update': True,
'notification': True,
'keepalive': True,
'refresh': True,
'operational': True,
}
syntax = '{\n %s;\n}' % ';\n '.join(default.keys())
def __init__ (self, tokeniser, scope, error, logger):
Section.__init__(self,tokeniser,scope,error,logger)
def clear (self):
pass
def pre (self):
return True
def post (self):
return True
class ParseSend (_ParseDirection):
syntax = \
'send %s' % _ParseDirection.syntax
name = 'api/send'
class ParseReceive (_ParseDirection):
syntax = \
'receive %s' % _ParseDirection.syntax
name = 'api/receive'
class ParseAPI (Section):
syntax = \
'process {\n' \
' processes [ name-of-processes ];\n' \
' neighbor-changes;\n' \
' %s\n' \
' %s\n' \
'}' % (
'\n '.join(ParseSend.syntax.split('\n')),
'\n '.join(ParseReceive.syntax.split('\n'))
)
known = {
'processes': processes,
'neighbor-changes': boolean,
'negotiated': boolean,
'fsm': boolean,
'signal': boolean,
}
action = {
'processes': 'set-command',
'neighbor-changes': 'set-command',
'negotiated': 'set-command',
'fsm': 'set-command',
'signal': 'set-command',
}
default = {
'neighbor-changes': True,
'negotiated': True,
'fsm': True,
'signal': True,
}
DEFAULT_API = {
'neighbor-changes': [],
'negotiated': [],
'fsm': [],
'signal': [],
'processes': [],
}
name = 'api'
def __init__ (self, tokeniser, scope, error, logger):
Section.__init__(self,tokeniser,scope,error,logger)
self.api = {}
self.named = ''
@classmethod
def _empty (cls):
return copy.deepcopy(cls.DEFAULT_API)
def clear (self):
self.api = {}
self.named = ''
pass
def pre (self):
named = self.tokeniser.iterate()
self.named = named if named else 'auto-named-%d' % int(time.time()*1000000)
self.check_name(self.named)
self.scope.enter(self.named)
self.scope.to_context()
return True
def post (self):
self.scope.leave()
self.scope.to_context()
return True
@classmethod
def flatten (cls,apis):
built = cls._empty()
for api in apis.values():
procs = api.get('processes',[])
built.setdefault('processes',[]).extend(procs)
for command in ('neighbor-changes','negotiated','fsm','signal'):
built.setdefault(command,[]).extend(procs if api.get(command,False) else [])
for direction in ('send','receive'):
data = api.get(direction,{})
for action in ('parsed','packets','consolidate','open', 'update', 'notification', 'keepalive', 'refresh', 'operational'):
built.setdefault("%s-%s" % (direction,action),[]).extend(procs if data.get(action,False) else [])
return built
for way in ('send','receive'):
for name in ('parsed','packets','consolidate','open', 'update', 'notification', 'keepalive', 'refresh', 'operational'):
ParseAPI.DEFAULT_API["%s-%s" % (way,name)] = []
| lib/exabgp/configuration/neighbor/api.py | 4,096 | parse_process.py
Created by Thomas Mangin on 2015-06-05.
Copyright (c) 2009-2017 Exa Networks. All rights reserved.
License: 3-clause BSD. (See the COPYRIGHT file)
encoding: utf-8 | 182 | en | 0.81894 |
# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Ops for boosted_trees."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from tensorflow.python.framework import ops
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import gen_boosted_trees_ops
from tensorflow.python.ops import resources
# Re-exporting ops used by other modules.
# pylint: disable=unused-import
from tensorflow.python.ops.gen_boosted_trees_ops import boosted_trees_aggregate_stats
from tensorflow.python.ops.gen_boosted_trees_ops import boosted_trees_bucketize
from tensorflow.python.ops.gen_boosted_trees_ops import boosted_trees_calculate_best_feature_split as calculate_best_feature_split
from tensorflow.python.ops.gen_boosted_trees_ops import boosted_trees_calculate_best_feature_split_v2 as calculate_best_feature_split_v2
from tensorflow.python.ops.gen_boosted_trees_ops import boosted_trees_calculate_best_gains_per_feature as calculate_best_gains_per_feature
from tensorflow.python.ops.gen_boosted_trees_ops import boosted_trees_center_bias as center_bias
from tensorflow.python.ops.gen_boosted_trees_ops import boosted_trees_create_quantile_stream_resource as create_quantile_stream_resource
from tensorflow.python.ops.gen_boosted_trees_ops import boosted_trees_example_debug_outputs as example_debug_outputs
from tensorflow.python.ops.gen_boosted_trees_ops import boosted_trees_make_quantile_summaries as make_quantile_summaries
from tensorflow.python.ops.gen_boosted_trees_ops import boosted_trees_make_stats_summary as make_stats_summary
from tensorflow.python.ops.gen_boosted_trees_ops import boosted_trees_predict as predict
from tensorflow.python.ops.gen_boosted_trees_ops import boosted_trees_quantile_stream_resource_add_summaries as quantile_add_summaries
from tensorflow.python.ops.gen_boosted_trees_ops import boosted_trees_quantile_stream_resource_deserialize as quantile_resource_deserialize
from tensorflow.python.ops.gen_boosted_trees_ops import boosted_trees_quantile_stream_resource_flush as quantile_flush
from tensorflow.python.ops.gen_boosted_trees_ops import boosted_trees_quantile_stream_resource_get_bucket_boundaries as get_bucket_boundaries
from tensorflow.python.ops.gen_boosted_trees_ops import boosted_trees_quantile_stream_resource_handle_op as quantile_resource_handle_op
from tensorflow.python.ops.gen_boosted_trees_ops import boosted_trees_sparse_aggregate_stats
from tensorflow.python.ops.gen_boosted_trees_ops import boosted_trees_sparse_calculate_best_feature_split as sparse_calculate_best_feature_split
from tensorflow.python.ops.gen_boosted_trees_ops import boosted_trees_training_predict as training_predict
from tensorflow.python.ops.gen_boosted_trees_ops import boosted_trees_update_ensemble as update_ensemble
from tensorflow.python.ops.gen_boosted_trees_ops import boosted_trees_update_ensemble_v2 as update_ensemble_v2
from tensorflow.python.ops.gen_boosted_trees_ops import is_boosted_trees_quantile_stream_resource_initialized as is_quantile_resource_initialized
# pylint: enable=unused-import
from tensorflow.python.training import saver
from tensorflow.python.training.tracking import tracking
class PruningMode(object):
"""Class for working with Pruning modes."""
NO_PRUNING, PRE_PRUNING, POST_PRUNING = range(0, 3)
_map = {'none': NO_PRUNING, 'pre': PRE_PRUNING, 'post': POST_PRUNING}
@classmethod
def from_str(cls, mode):
if mode in cls._map:
return cls._map[mode]
else:
raise ValueError(
'pruning_mode mode must be one of: {}. Found: {}'.format(', '.join(
sorted(cls._map)), mode))
class QuantileAccumulatorSaveable(saver.BaseSaverBuilder.SaveableObject):
"""SaveableObject implementation for QuantileAccumulator."""
def __init__(self, resource_handle, create_op, num_streams, name):
self._resource_handle = resource_handle
self._num_streams = num_streams
self._create_op = create_op
bucket_boundaries = get_bucket_boundaries(self._resource_handle,
self._num_streams)
slice_spec = ''
specs = []
def make_save_spec(tensor, suffix):
return saver.BaseSaverBuilder.SaveSpec(tensor, slice_spec, name + suffix)
for i in range(self._num_streams):
specs += [
make_save_spec(bucket_boundaries[i], '_bucket_boundaries_' + str(i))
]
super(QuantileAccumulatorSaveable, self).__init__(self._resource_handle,
specs, name)
def restore(self, restored_tensors, unused_tensor_shapes):
bucket_boundaries = restored_tensors
with ops.control_dependencies([self._create_op]):
return quantile_resource_deserialize(
self._resource_handle, bucket_boundaries=bucket_boundaries)
class QuantileAccumulator(tracking.TrackableResource):
"""SaveableObject implementation for QuantileAccumulator.
The bucket boundaries are serialized and deserialized from checkpointing.
"""
def __init__(self,
epsilon,
num_streams,
num_quantiles,
name=None,
max_elements=None):
self._eps = epsilon
self._num_streams = num_streams
self._num_quantiles = num_quantiles
super(QuantileAccumulator, self).__init__()
with ops.name_scope(name, 'QuantileAccumulator') as name:
self._name = name
self._resource_handle = self._create_resource()
self._init_op = self._initialize()
is_initialized_op = self.is_initialized()
resources.register_resource(self.resource_handle, self._init_op,
is_initialized_op)
self._saveable = QuantileAccumulatorSaveable(
self.resource_handle, self._init_op, self._num_streams,
self.resource_handle.name)
ops.add_to_collection(ops.GraphKeys.SAVEABLE_OBJECTS, self._saveable)
def _create_resource(self):
return quantile_resource_handle_op(
container='', shared_name=self._name, name=self._name)
def _initialize(self):
return create_quantile_stream_resource(self.resource_handle, self._eps,
self._num_streams)
@property
def initializer(self):
if self._init_op is None:
self._init_op = self._initialize()
return self._init_op
def is_initialized(self):
return is_quantile_resource_initialized(self.resource_handle)
@property
def saveable(self):
return self._saveable
def _gather_saveables_for_checkpoint(self):
return {'quantile_accumulator', self._saveable}
def add_summaries(self, float_columns, example_weights):
summaries = make_quantile_summaries(float_columns, example_weights,
self._eps)
summary_op = quantile_add_summaries(self.resource_handle, summaries)
return summary_op
def flush(self):
return quantile_flush(self.resource_handle, self._num_quantiles)
def get_bucket_boundaries(self):
return get_bucket_boundaries(self.resource_handle, self._num_streams)
class _TreeEnsembleSavable(saver.BaseSaverBuilder.SaveableObject):
"""SaveableObject implementation for TreeEnsemble."""
def __init__(self, resource_handle, create_op, name):
"""Creates a _TreeEnsembleSavable object.
Args:
resource_handle: handle to the decision tree ensemble variable.
create_op: the op to initialize the variable.
name: the name to save the tree ensemble variable under.
"""
stamp_token, serialized = (
gen_boosted_trees_ops.boosted_trees_serialize_ensemble(resource_handle))
# slice_spec is useful for saving a slice from a variable.
# It's not meaningful the tree ensemble variable. So we just pass an empty
# value.
slice_spec = ''
specs = [
saver.BaseSaverBuilder.SaveSpec(stamp_token, slice_spec,
name + '_stamp'),
saver.BaseSaverBuilder.SaveSpec(serialized, slice_spec,
name + '_serialized'),
]
super(_TreeEnsembleSavable, self).__init__(resource_handle, specs, name)
self._resource_handle = resource_handle
self._create_op = create_op
def restore(self, restored_tensors, unused_restored_shapes):
"""Restores the associated tree ensemble from 'restored_tensors'.
Args:
restored_tensors: the tensors that were loaded from a checkpoint.
unused_restored_shapes: the shapes this object should conform to after
restore. Not meaningful for trees.
Returns:
The operation that restores the state of the tree ensemble variable.
"""
with ops.control_dependencies([self._create_op]):
return gen_boosted_trees_ops.boosted_trees_deserialize_ensemble(
self._resource_handle,
stamp_token=restored_tensors[0],
tree_ensemble_serialized=restored_tensors[1])
class TreeEnsemble(tracking.TrackableResource):
"""Creates TreeEnsemble resource."""
def __init__(self, name, stamp_token=0, is_local=False, serialized_proto=''):
self._stamp_token = stamp_token
self._serialized_proto = serialized_proto
self._is_local = is_local
with ops.name_scope(name, 'TreeEnsemble') as name:
self._name = name
self._resource_handle = self._create_resource()
self._init_op = self._initialize()
is_initialized_op = self.is_initialized()
# Adds the variable to the savable list.
if not is_local:
self._saveable = _TreeEnsembleSavable(
self.resource_handle, self.initializer, self.resource_handle.name)
ops.add_to_collection(ops.GraphKeys.SAVEABLE_OBJECTS, self._saveable)
resources.register_resource(
self.resource_handle,
self.initializer,
is_initialized_op,
is_shared=not is_local)
def _create_resource(self):
return gen_boosted_trees_ops.boosted_trees_ensemble_resource_handle_op(
container='', shared_name=self._name, name=self._name)
def _initialize(self):
return gen_boosted_trees_ops.boosted_trees_create_ensemble(
self.resource_handle,
self._stamp_token,
tree_ensemble_serialized=self._serialized_proto)
@property
def initializer(self):
if self._init_op is None:
self._init_op = self._initialize()
return self._init_op
def is_initialized(self):
return gen_boosted_trees_ops.is_boosted_trees_ensemble_initialized(
self.resource_handle)
def _gather_saveables_for_checkpoint(self):
if not self._is_local:
return {'tree_ensemble': self._saveable}
def get_stamp_token(self):
"""Returns the current stamp token of the resource."""
stamp_token, _, _, _, _ = (
gen_boosted_trees_ops.boosted_trees_get_ensemble_states(
self.resource_handle))
return stamp_token
def get_states(self):
"""Returns states of the tree ensemble.
Returns:
stamp_token, num_trees, num_finalized_trees, num_attempted_layers and
range of the nodes in the latest layer.
"""
(stamp_token, num_trees, num_finalized_trees, num_attempted_layers,
nodes_range) = (
gen_boosted_trees_ops.boosted_trees_get_ensemble_states(
self.resource_handle))
# Use identity to give names.
return (array_ops.identity(stamp_token, name='stamp_token'),
array_ops.identity(num_trees, name='num_trees'),
array_ops.identity(num_finalized_trees, name='num_finalized_trees'),
array_ops.identity(
num_attempted_layers, name='num_attempted_layers'),
array_ops.identity(nodes_range, name='last_layer_nodes_range'))
def serialize(self):
"""Serializes the ensemble into proto and returns the serialized proto.
Returns:
stamp_token: int64 scalar Tensor to denote the stamp of the resource.
serialized_proto: string scalar Tensor of the serialized proto.
"""
return gen_boosted_trees_ops.boosted_trees_serialize_ensemble(
self.resource_handle)
def deserialize(self, stamp_token, serialized_proto):
"""Deserialize the input proto and resets the ensemble from it.
Args:
stamp_token: int64 scalar Tensor to denote the stamp of the resource.
serialized_proto: string scalar Tensor of the serialized proto.
Returns:
Operation (for dependencies).
"""
return gen_boosted_trees_ops.boosted_trees_deserialize_ensemble(
self.resource_handle, stamp_token, serialized_proto)
| tensorflow/python/ops/boosted_trees_ops.py | 13,157 | Class for working with Pruning modes.
SaveableObject implementation for QuantileAccumulator.
The bucket boundaries are serialized and deserialized from checkpointing.
SaveableObject implementation for QuantileAccumulator.
Creates TreeEnsemble resource.
SaveableObject implementation for TreeEnsemble.
Creates a _TreeEnsembleSavable object.
Args:
resource_handle: handle to the decision tree ensemble variable.
create_op: the op to initialize the variable.
name: the name to save the tree ensemble variable under.
Deserialize the input proto and resets the ensemble from it.
Args:
stamp_token: int64 scalar Tensor to denote the stamp of the resource.
serialized_proto: string scalar Tensor of the serialized proto.
Returns:
Operation (for dependencies).
Returns the current stamp token of the resource.
Returns states of the tree ensemble.
Returns:
stamp_token, num_trees, num_finalized_trees, num_attempted_layers and
range of the nodes in the latest layer.
Restores the associated tree ensemble from 'restored_tensors'.
Args:
restored_tensors: the tensors that were loaded from a checkpoint.
unused_restored_shapes: the shapes this object should conform to after
restore. Not meaningful for trees.
Returns:
The operation that restores the state of the tree ensemble variable.
Serializes the ensemble into proto and returns the serialized proto.
Returns:
stamp_token: int64 scalar Tensor to denote the stamp of the resource.
serialized_proto: string scalar Tensor of the serialized proto.
Ops for boosted_trees.
Copyright 2018 The TensorFlow Authors. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ============================================================================== Re-exporting ops used by other modules. pylint: disable=unused-import pylint: enable=unused-import slice_spec is useful for saving a slice from a variable. It's not meaningful the tree ensemble variable. So we just pass an empty value. Adds the variable to the savable list. Use identity to give names. | 2,515 | en | 0.798041 |
# --------------
# Code starts here
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import Imputer
from sklearn.preprocessing import LabelEncoder
import numpy as np
from scipy.stats import skew
#### Data 1
# Load the data
df = pd.read_csv(path)
# Overview of the data
df.info()
df.describe()
# Histogram showing distribution of car prices
df['price'].plot.hist(bins=12,alpha =0.5)
# Countplot of the make column
df['make'].value_counts().plot(kind='bar')
# Jointplot showing relationship between 'horsepower' and 'price' of the car
df.plot.scatter(x='horsepower',y='price',c='blue')
# Correlation heat map
f = plt.figure(figsize=(19, 15))
plt.matshow(df.corr(), fignum=f.number)
plt.xticks(range(df.shape[1]), df.columns, fontsize=14, rotation=45)
plt.yticks(range(df.shape[1]), df.columns, fontsize=14)
cb = plt.colorbar()
cb.ax.tick_params(labelsize=14)
plt.title('Correlation Matrix', fontsize=16);
# boxplot that shows the variability of each 'body-style' with respect to the 'price'
df.boxplot(column=['price'],by=['body-style'])
#### Data 2
# Load the data
df2 = pd.read_csv(path2)
# Impute missing values with mean
df2 = df2.replace("?","NaN")
mean_imputer = Imputer(missing_values='NaN',strategy='mean',axis=0)
df2['normalized-losses'] = mean_imputer.fit_transform(df2[['normalized-losses']])
df2['horsepower'] = mean_imputer.fit_transform(df2[['horsepower']])
# Skewness of numeric features
num_cols = df2._get_numeric_data().columns
for num_col in num_cols:
if skew(df2[num_col].values)>1:
print(num_col)
df2[num_col]= np.sqrt(df2[num_col])
print(df2.head())
cat_cols = list(set(df2.columns)- set(num_cols))
# Label encode
label_encoder = LabelEncoder()
for cat_col in cat_cols:
df2[cat_col]= label_encoder.fit_transform(df2[cat_col])
df2['area']=df2['height']*df2['width']
print(df2.head())
# Code ends here
| code.py | 1,913 | -------------- Code starts here Data 1 Load the data Overview of the data Histogram showing distribution of car prices Countplot of the make column Jointplot showing relationship between 'horsepower' and 'price' of the car Correlation heat map boxplot that shows the variability of each 'body-style' with respect to the 'price' Data 2 Load the data Impute missing values with mean Skewness of numeric features Label encode Code ends here | 438 | en | 0.777727 |
class CookieContainer(object):
"""
Provides a container for a collection of System.Net.CookieCollection objects.
CookieContainer()
CookieContainer(capacity: int)
CookieContainer(capacity: int,perDomainCapacity: int,maxCookieSize: int)
"""
def ZZZ(self):
"""hardcoded/mock instance of the class"""
return CookieContainer()
instance=ZZZ()
"""hardcoded/returns an instance of the class"""
def Add(self,*__args):
"""
Add(self: CookieContainer,cookie: Cookie)
Adds a System.Net.Cookie to a System.Net.CookieContainer. This method uses the domain from the System.Net.Cookie to determine which domain collection to associate the
System.Net.Cookie with.
cookie: The System.Net.Cookie to be added to the System.Net.CookieContainer.
Add(self: CookieContainer,cookies: CookieCollection)
Adds the contents of a System.Net.CookieCollection to the System.Net.CookieContainer.
cookies: The System.Net.CookieCollection to be added to the System.Net.CookieContainer.
Add(self: CookieContainer,uri: Uri,cookie: Cookie)
Adds a System.Net.Cookie to the System.Net.CookieContainer for a particular URI.
uri: The URI of the System.Net.Cookie to be added to the System.Net.CookieContainer.
cookie: The System.Net.Cookie to be added to the System.Net.CookieContainer.
Add(self: CookieContainer,uri: Uri,cookies: CookieCollection)
Adds the contents of a System.Net.CookieCollection to the System.Net.CookieContainer for a particular URI.
uri: The URI of the System.Net.CookieCollection to be added to the System.Net.CookieContainer.
cookies: The System.Net.CookieCollection to be added to the System.Net.CookieContainer.
"""
pass
def GetCookieHeader(self,uri):
"""
GetCookieHeader(self: CookieContainer,uri: Uri) -> str
Gets the HTTP cookie header that contains the HTTP cookies that represent the System.Net.Cookie instances that are associated with a specific URI.
uri: The URI of the System.Net.Cookie instances desired.
Returns: An HTTP cookie header,with strings representing System.Net.Cookie instances delimited by semicolons.
"""
pass
def GetCookies(self,uri):
"""
GetCookies(self: CookieContainer,uri: Uri) -> CookieCollection
Gets a System.Net.CookieCollection that contains the System.Net.Cookie instances that are associated with a specific URI.
uri: The URI of the System.Net.Cookie instances desired.
Returns: A System.Net.CookieCollection that contains the System.Net.Cookie instances that are associated with a specific URI.
"""
pass
def SetCookies(self,uri,cookieHeader):
"""
SetCookies(self: CookieContainer,uri: Uri,cookieHeader: str)
Adds System.Net.Cookie instances for one or more cookies from an HTTP cookie header to the System.Net.CookieContainer for a specific URI.
uri: The URI of the System.Net.CookieCollection.
cookieHeader: The contents of an HTTP set-cookie header as returned by a HTTP server,with System.Net.Cookie instances delimited by commas.
"""
pass
def __add__(self,*args):
""" x.__add__(y) <==> x+yx.__add__(y) <==> x+yx.__add__(y) <==> x+yx.__add__(y) <==> x+y """
pass
@staticmethod
def __new__(self,capacity=None,perDomainCapacity=None,maxCookieSize=None):
"""
__new__(cls: type)
__new__(cls: type,capacity: int)
__new__(cls: type,capacity: int,perDomainCapacity: int,maxCookieSize: int)
"""
pass
Capacity=property(lambda self: object(),lambda self,v: None,lambda self: None)
"""Gets and sets the number of System.Net.Cookie instances that a System.Net.CookieContainer can hold.
Get: Capacity(self: CookieContainer) -> int
Set: Capacity(self: CookieContainer)=value
"""
Count=property(lambda self: object(),lambda self,v: None,lambda self: None)
"""Gets the number of System.Net.Cookie instances that a System.Net.CookieContainer currently holds.
Get: Count(self: CookieContainer) -> int
"""
MaxCookieSize=property(lambda self: object(),lambda self,v: None,lambda self: None)
"""Represents the maximum allowed length of a System.Net.Cookie.
Get: MaxCookieSize(self: CookieContainer) -> int
Set: MaxCookieSize(self: CookieContainer)=value
"""
PerDomainCapacity=property(lambda self: object(),lambda self,v: None,lambda self: None)
"""Gets and sets the number of System.Net.Cookie instances that a System.Net.CookieContainer can hold per domain.
Get: PerDomainCapacity(self: CookieContainer) -> int
Set: PerDomainCapacity(self: CookieContainer)=value
"""
DefaultCookieLengthLimit=4096
DefaultCookieLimit=300
DefaultPerDomainCookieLimit=20
| release/stubs.min/System/Net/__init___parts/CookieContainer.py | 4,720 | Provides a container for a collection of System.Net.CookieCollection objects.
CookieContainer()
CookieContainer(capacity: int)
CookieContainer(capacity: int,perDomainCapacity: int,maxCookieSize: int)
Add(self: CookieContainer,cookie: Cookie)
Adds a System.Net.Cookie to a System.Net.CookieContainer. This method uses the domain from the System.Net.Cookie to determine which domain collection to associate the
System.Net.Cookie with.
cookie: The System.Net.Cookie to be added to the System.Net.CookieContainer.
Add(self: CookieContainer,cookies: CookieCollection)
Adds the contents of a System.Net.CookieCollection to the System.Net.CookieContainer.
cookies: The System.Net.CookieCollection to be added to the System.Net.CookieContainer.
Add(self: CookieContainer,uri: Uri,cookie: Cookie)
Adds a System.Net.Cookie to the System.Net.CookieContainer for a particular URI.
uri: The URI of the System.Net.Cookie to be added to the System.Net.CookieContainer.
cookie: The System.Net.Cookie to be added to the System.Net.CookieContainer.
Add(self: CookieContainer,uri: Uri,cookies: CookieCollection)
Adds the contents of a System.Net.CookieCollection to the System.Net.CookieContainer for a particular URI.
uri: The URI of the System.Net.CookieCollection to be added to the System.Net.CookieContainer.
cookies: The System.Net.CookieCollection to be added to the System.Net.CookieContainer.
GetCookieHeader(self: CookieContainer,uri: Uri) -> str
Gets the HTTP cookie header that contains the HTTP cookies that represent the System.Net.Cookie instances that are associated with a specific URI.
uri: The URI of the System.Net.Cookie instances desired.
Returns: An HTTP cookie header,with strings representing System.Net.Cookie instances delimited by semicolons.
GetCookies(self: CookieContainer,uri: Uri) -> CookieCollection
Gets a System.Net.CookieCollection that contains the System.Net.Cookie instances that are associated with a specific URI.
uri: The URI of the System.Net.Cookie instances desired.
Returns: A System.Net.CookieCollection that contains the System.Net.Cookie instances that are associated with a specific URI.
SetCookies(self: CookieContainer,uri: Uri,cookieHeader: str)
Adds System.Net.Cookie instances for one or more cookies from an HTTP cookie header to the System.Net.CookieContainer for a specific URI.
uri: The URI of the System.Net.CookieCollection.
cookieHeader: The contents of an HTTP set-cookie header as returned by a HTTP server,with System.Net.Cookie instances delimited by commas.
hardcoded/mock instance of the class
x.__add__(y) <==> x+yx.__add__(y) <==> x+yx.__add__(y) <==> x+yx.__add__(y) <==> x+y
__new__(cls: type)
__new__(cls: type,capacity: int)
__new__(cls: type,capacity: int,perDomainCapacity: int,maxCookieSize: int) | 2,828 | en | 0.68817 |
"""Retrieve county->CBSA crosswalk file from the NBER"""
from collections import defaultdict
import unicodecsv as csv
import logging
import requests
from utils.fs import cache_json
URL = 'http://www.nber.org/cbsa-msa-fips-ssa-county-crosswalk/2016/cbsatocountycrosswalk2016.csv'
@cache_json('cbsa_lookup.json')
def cbsa_lookup():
"""
Construct a County->CBSA Lookup table from NBER data
Returns: dict
each key is a (State Code, County FIPS code) tuple
each value is a (CBSA FIPS code, CBSA Name) tuple
"""
logging.info("Beginning CBSA lookup")
cbsa_lookup = defaultdict(dict)
download = requests.get(URL)
decoded_content = download.content.decode('latin-1').encode('utf-8')
reader = csv.reader(decoded_content.splitlines(), delimiter=',')
# skip header line
next(reader)
for row in reader:
state_code = row[1]
fipscounty = row[3][-3:]
cbsa = row[4]
cbsaname = row[5]
cbsa_lookup[state_code][fipscounty] = (cbsa, cbsaname)
return cbsa_lookup
| datasets/nber_county_cbsa.py | 1,053 | Construct a County->CBSA Lookup table from NBER data
Returns: dict
each key is a (State Code, County FIPS code) tuple
each value is a (CBSA FIPS code, CBSA Name) tuple
Retrieve county->CBSA crosswalk file from the NBER
skip header line | 245 | en | 0.806278 |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
class Config(object):
DEBUG = True
RELOADER = True
PORT = 8080
class DevelopmentConfig(Config):
pass
class ProductionConfig(Config):
DEBUG = False
RELOADER = False
| {{ cookiecutter.app_name }}/{{ cookiecutter.app_name }}_backend/{{ cookiecutter.app_name }}/config.py | 237 | !/usr/bin/env python -*- coding: utf-8 -*- | 42 | en | 0.34282 |
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.nn.utils.spectral_norm as spectral_norm
import math
import numpy as np
import torchvision.models as models
from modules.networks import get_pad
from torch.distributions.multivariate_normal import MultivariateNormal
from util.utils import length_to_mask
def get_conv_layer(in_channel, out_channel, gan_type='sn_gan', **kwargs):
if gan_type == 'sn_gan':
return spectral_norm(nn.Conv2d(in_channel, out_channel, **kwargs))
else:
return nn.Conv2d(in_channel, out_channel, **kwargs)
def get_conv_block(in_channel, out_channel, gan_type='sn_gan', normalization='instance', activation='leakyrelu', **kwargs):
block = []
block.append(get_conv_layer(in_channel, out_channel, gan_type=gan_type, **kwargs))
if normalization == 'instance':
block.append(nn.InstanceNorm2d(out_channel))
if activation == 'leakyrelu':
block.append(nn.LeakyReLU())
return nn.Sequential(*block)
def gelu(x):
"""Implementation of the gelu activation function.
For information: OpenAI GPT's gelu is slightly different (and gives slightly different results):
0.5 * x * (1 + torch.tanh(math.sqrt(2 / math.pi) * (x + 0.044715 * torch.pow(x, 3))))
Also see https://arxiv.org/abs/1606.08415
"""
return x * 0.5 * (1.0 + torch.erf(x / math.sqrt(2.0)))
# try:
# from apex.normalization.fused_layer_norm import FusedLayerNorm as SketchLayerNorm
# except ImportError:
# logger.info("Better speed can be achieved with apex installed from https://www.github.com/nvidia/apex .")
class SketchLayerNorm(nn.Module):
def __init__(self, hidden_size, eps=1e-12):
"""
Construct a layernorm module in the TF style (epsilon inside the square root).
"""
super(SketchLayerNorm, self).__init__()
self.weight = nn.Parameter(torch.ones(hidden_size))
self.bias = nn.Parameter(torch.zeros(hidden_size))
self.variance_epsilon = eps
def forward(self, x):
u = x.mean(-1, keepdim=True)
s = (x - u).pow(2).mean(-1, keepdim=True)
x = (x - u) / torch.sqrt(s + self.variance_epsilon)
return self.weight * x + self.bias
ACT2FN = {"gelu": gelu, "relu": torch.nn.functional.relu}#, "swish": swish
NORM2FN = {'BN1d':nn.BatchNorm1d, 'BN2d':nn.BatchNorm2d, 'LN':nn.LayerNorm}
class SketchSelfAttention(nn.Module):
'''
Implementation for self attention in Sketch.
The input will be a K-Dim feature.
Input Parameters:
config[dict]:
hidden_dim[int]: The dimension of input hidden embeddings in the self attention, hidden diension is equal to the output dimension
num_heads[int]: The number of heads
attention_probs[float]: probability parameter for dropout
'''
def __init__(self, num_heads, hidden_dim, attention_dropout_prob):
super(SketchSelfAttention, self).__init__()
if hidden_dim % num_heads != 0:
raise ValueError(
"The hidden size (%d) is not a multiple of the number of attention "
"heads (%d)" % (hidden_dim, num_heads))
self.hidden_dim = hidden_dim
self.num_heads = num_heads
#self.attention_dropout_prob = config.attention_dropout_prob
# Calculation for intermeidate parameters
self.head_dim = int(self.hidden_dim / self.num_heads)
self.all_head_dim = self.head_dim * self.num_heads
self.scale_factor = math.sqrt(self.head_dim)
self.query = nn.Linear(self.hidden_dim, self.all_head_dim)
self.key = nn.Linear(self.hidden_dim, self.all_head_dim)
self.value = nn.Linear(self.hidden_dim, self.all_head_dim)
self.dropout = nn.Dropout(attention_dropout_prob)
self.multihead_output = None
def transpose_(self, x):
'''
Transpose Function for simplicity.
'''
new_x_shape = x.size()[:-1] + (self.num_heads , self.head_dim)
x = x.view(*new_x_shape)
return x.permute(0, 2, 1, 3)
def forward(self, hidden_states, attention_mask, head_mask=None, output_attentions=False, keep_multihead_output=False):
'''
Input:
hidden_states[batch, seq_len, hidden_dim]
attention_mask[batch, 1, 1, seq_len]
Output:
context_states[batch, seq_len, hidden_dim]
attention_probs[seq_len, hidden_dim]
'''
# Get query, key, value together
query = self.query(hidden_states) # [batch, seq_len, all_head_dim]
key = self.key(hidden_states) # [batch, seq_len, all_head_dim]
value = self.value(hidden_states) # [batch, seq_len, all_head_dim]
# tranpose the query, key, value into multi heads[batch, seq_len, ]
multi_query = self.transpose_(query) # [batch, num_heads, seq_len, head_dim]
multi_key = self.transpose_(key) # [batch, num_heads, seq_len, head_dim]
multi_value = self.transpose_(value) # [batch, num_heads, seq_len, head_dim]
# Calculate Attention maps
attention_scores = torch.matmul(multi_query, multi_key.transpose(-1, -2))
attention_scores = attention_scores / self.scale_factor
#print(attention_scores.size(), attention_mask.size())
attention_scores = attention_scores + attention_mask
attention_probs = F.softmax(attention_scores, dim=-1)
attention_probs = self.dropout(attention_probs)
if head_mask is not None:
attention_probs = attention_probs * head_mask
# Compute states values
context_states = torch.matmul(attention_probs, multi_value)
if keep_multihead_output:
self.multihead_output = context_states
self.multihead_output.retain_grad()
context_states = context_states.permute(0,2,1,3)
context_states = context_states.contiguous().view(context_states.size()[:-2]+(-1,)) #view(context_states.size()[:-2]+ (self.all_head_dim,))
if output_attentions:
return context_states, attention_probs
return context_states
class SketchOutput(nn.Module):
def __init__(self, input_dim, output_dim, attention_norm_type, output_dropout_prob):
super(SketchOutput, self).__init__()
self.fc = nn.Linear(input_dim, output_dim)
if attention_norm_type not in NORM2FN:
raise ValueError(
"The attention normalization is not in standard normalization types.")
self.norm = NORM2FN[attention_norm_type](output_dim)
self.dropout = nn.Dropout(output_dropout_prob)
'''
Input:
hidden_states[]:
Output:
hidden_states[]:
'''
def forward(self, hidden_states, input_states):
hidden_states = self.fc(hidden_states)
hidden_states = self.dropout(hidden_states)
#print(hidden_states.size())
hidden_states = self.norm(hidden_states+input_states)
return hidden_states
class SketchMultiHeadAttention(nn.Module):
def __init__(self, num_heads, hidden_dim,
attention_norm_type, attention_dropout_prob, hidden_dropout_prob,):
super(SketchMultiHeadAttention, self).__init__()
self.attention = SketchSelfAttention(num_heads, hidden_dim, attention_dropout_prob)
self.output = SketchOutput(hidden_dim, hidden_dim, attention_norm_type, hidden_dropout_prob)
def forward(self, hidden_states, attention_mask, head_mask=None, output_attentions=False):
input_states = hidden_states
#print(hidden_states)
hidden_states = self.attention(hidden_states, attention_mask, head_mask=head_mask)
#print(hidden_states)
if output_attentions:
hidden_states, attention_probs = hidden_states
output_states = self.output(hidden_states, input_states)
if output_attentions:
return output_states, attention_probs
return output_states
class SketchIntermediate(nn.Module):
def __init__(self, hidden_dim, inter_dim, inter_activation):
super(SketchIntermediate, self).__init__()
self.fc = nn.Linear(hidden_dim, inter_dim)
self.activation = ACT2FN[inter_activation]
def forward(self, hidden_states):
hidden_states = hidden_states.to(next(self.fc.parameters()).device)
inter_states = self.fc(hidden_states.contiguous())
inter_states = self.activation(inter_states)
return inter_states
class SketchLayer(nn.Module):
'''
A transformer layer for sketch bert
'''
def __init__(self, num_heads, hidden_dim, inter_dim,
attention_norm_type, inter_activation, attention_dropout_prob,
hidden_dropout_prob, output_dropout_prob,):
super(SketchLayer, self).__init__()
self.attention = SketchMultiHeadAttention(num_heads, hidden_dim,
attention_norm_type, attention_dropout_prob, hidden_dropout_prob,)
self.inter_layer = SketchIntermediate(hidden_dim, inter_dim, inter_activation)
self.output = SketchOutput(inter_dim, hidden_dim, attention_norm_type, output_dropout_prob)
'''
Input:
hidden_states[batch, seq_len, hidden_dim]:
attention_mask[batch, seq_len]
'''
def forward(self, hidden_states, attention_mask, head_mask=None, output_attentions=False):
hidden_states = self.attention(hidden_states, attention_mask, head_mask)
if output_attentions:
hidden_states, attention_probs = hidden_states
inter_states = self.inter_layer(hidden_states)
output_states = self.output(inter_states, hidden_states)
if output_attentions:
return output_states, attention_probs
return output_states
class SketchSegmentLayer(nn.Module):
'''
A transformer layer for sketch bert
'''
def __init__(self, num_heads, hidden_dim, inter_dim, max_segment,
segment_atten_type, attention_norm_type, inter_activation, attention_dropout_prob,
hidden_dropout_prob, output_dropout_prob,):
super(SketchSegmentLayer, self).__init__()
self.max_segment = max_segment
self.inter_dim = inter_dim
self.segment_atten_type = segment_atten_type
self.local_attention = SketchMultiHeadAttention(num_heads, hidden_dim,
attention_norm_type, attention_dropout_prob, hidden_dropout_prob,)
self.segment_attention = SketchMultiHeadAttention(num_heads, hidden_dim,
attention_norm_type, attention_dropout_prob, hidden_dropout_prob,)
self.local_inter_layer = SketchIntermediate(hidden_dim, inter_dim//2, inter_activation)
self.seg_inter_layer = SketchIntermediate(hidden_dim, inter_dim//2, inter_activation)
self.output = SketchOutput(inter_dim, hidden_dim, attention_norm_type, output_dropout_prob)
def get_seg_states(self, hidden_states, segment_index):
'''
Input:
hidden_states[batch, seq_len, hidden_dim]
segment_index[batch, seq_len]
'''
seg_states = torch.zeros(hidden_states.size(0), self.max_segment, hidden_states.size(2)).to(hidden_states.device)
length = (segment_index==0).sum(dim=1)
length_mask = length_to_mask(length, max_len=self.max_segment, dtype=torch.float)
seg_states[length_mask==1,:] = hidden_states[segment_index==0,:]
return seg_states, length_mask
def forward(self, hidden_states, attention_mask, segments, segment_index, head_mask=None, output_attentions=False):
'''
Input:
hidden_states[batch, seg_len, hidden_dim]:
attention_mask[batch, seg_len](segment-based)
segments[batch, seg_len]:
segment_index[batch, seq_len]
'''
# Local Attention
local_states = self.local_attention(hidden_states, attention_mask, head_mask)
if output_attentions:
local_states, attention_probs = local_states #[batch, seq_len, hidden_dim]
input_prefix = hidden_states.size(1) - segment_index.size(1)
# Segment Level Attention
seg_states, seg_atten_mask = self.get_seg_states(local_states[:,input_prefix:,:], segment_index)
if self.segment_atten_type == 'multi':
seg_states = self.segment_attention(seg_states, seg_atten_mask.unsqueeze(1).unsqueeze(2), head_mask)
if output_attentions:
seg_states, attention_probs = seg_states #[batch, seq_len, hidden_dim]
# Concatenate
local_inter_states = self.local_inter_layer(local_states)
seg_inter_states = self.seg_inter_layer(seg_states)
aug_seg_inter_states = torch.gather(seg_inter_states, 1, (segments[:,input_prefix:]-2).view(segments.size(0), -1, 1).repeat(1,1, seg_inter_states.size(2)))
inter_states = torch.zeros(local_inter_states.size(0), local_inter_states.size(1), self.inter_dim).to(local_inter_states.device)
#print(hidden_states.size(), local_states.size(), local_inter_states.size())
inter_states[:,:,:self.inter_dim//2] = local_inter_states
inter_states[:,input_prefix:, self.inter_dim//2:] = aug_seg_inter_states
inter_states[:,:input_prefix, self.inter_dim//2:] = seg_inter_states.sum(dim=1, keepdim=True)
output_states = self.output(inter_states, hidden_states)
if output_attentions:
return output_states, attention_probs
return output_states
def setting2dict(paras, setting):
paras['num_heads'] = setting[0]
paras['hidden_dim'] = setting[1]
paras['inter_dim'] = setting[2]
class SketchEncoder(nn.Module):
'''
layers_setting[list]: [[12, ], []]
'''
def __init__(self, layers_setting,
attention_norm_type, inter_activation, attention_dropout_prob,
hidden_dropout_prob, output_dropout_prob,):
super(SketchEncoder, self).__init__()
layer_paras = {
'attention_norm_type':attention_norm_type, 'inter_activation':inter_activation, 'attention_dropout_prob':attention_dropout_prob,
'hidden_dropout_prob':hidden_dropout_prob, 'output_dropout_prob':output_dropout_prob}
self.layers = []
for layer_setting in layers_setting:
setting2dict(layer_paras, layer_setting)
self.layers.append(SketchLayer(**layer_paras))
self.layers = nn.ModuleList(self.layers)
def forward(self, input_states, attention_mask, head_mask=None, output_all_states=False, output_attentions=False, keep_multihead_output=False):
all_states = []
all_attention_probs = []
hidden_states = input_states
for layer in self.layers:
hidden_states = layer(hidden_states, attention_mask, head_mask=head_mask, output_attentions=output_attentions)
if output_attentions:
hidden_states, attention_probs = hidden_states
all_attention_probs.append(attention_probs)
if output_all_states:
all_states.append(hidden_states)
if not output_all_states:
all_states.append(hidden_states)
if output_attentions:
return all_states, all_attention_probs
return all_states
class SketchALEncoder(nn.Module):
'''
A Lite BERT: Parameter Sharing
layers_setting[list]: [[12, ], []]
'''
def __init__(self, layers_setting,
attention_norm_type, inter_activation, attention_dropout_prob,
hidden_dropout_prob, output_dropout_prob,):
super(SketchALEncoder, self).__init__()
layer_paras = {
'attention_norm_type':attention_norm_type, 'inter_activation':inter_activation, 'attention_dropout_prob':attention_dropout_prob,
'hidden_dropout_prob':hidden_dropout_prob, 'output_dropout_prob':output_dropout_prob}
setting2dict(layer_paras, layers_setting[0])
self.sketch_layer = SketchLayer(**layer_paras)
self.layers = []
for layer_setting in layers_setting:
self.layers.append(self.sketch_layer)
#self.layers = nn.ModuleList(self.layers)
def forward(self, input_states, attention_mask, head_mask=None, output_all_states=False, output_attentions=False, keep_multihead_output=False):
all_states = []
all_attention_probs = []
hidden_states = input_states
for layer in self.layers:
hidden_states = layer(hidden_states, attention_mask, head_mask=head_mask, output_attentions=output_attentions)
if output_attentions:
hidden_states, attention_probs = hidden_states
all_attention_probs.append(attention_probs)
if output_all_states:
all_states.append(hidden_states)
if not output_all_states:
all_states.append(hidden_states)
if output_attentions:
return all_states, all_attention_probs
return all_states
class SketchSegmentEncoder(nn.Module):
'''
layers_setting[list]: [[12, ], []]
'''
def __init__(self, layers_setting, max_segment, segment_atten_type,
attention_norm_type, inter_activation, attention_dropout_prob,
hidden_dropout_prob, output_dropout_prob,):
super(SketchSegmentEncoder, self).__init__()
layer_paras = {
'max_segment':max_segment, 'segment_atten_type':segment_atten_type, 'attention_norm_type':attention_norm_type, 'inter_activation':inter_activation, 'attention_dropout_prob':attention_dropout_prob,
'hidden_dropout_prob':hidden_dropout_prob, 'output_dropout_prob':output_dropout_prob}
self.layers = []
self.max_segment = max_segment
for layer_setting in layers_setting:
setting2dict(layer_paras, layer_setting)
self.layers.append(SketchSegmentLayer(**layer_paras))
self.layers = nn.ModuleList(self.layers)
def forward(self, input_states, attention_mask, segments, segment_index, head_mask=None, output_all_states=False, output_attentions=False, keep_multihead_output=False):
all_states = []
all_attention_probs = []
hidden_states = input_states
for layer in self.layers:
hidden_states = layer(hidden_states, attention_mask, segments, segment_index, head_mask=head_mask, output_attentions=output_attentions)
if output_attentions:
hidden_states, attention_probs = hidden_states
all_attention_probs.append(attention_probs)
if output_all_states:
all_states.append(hidden_states)
if not output_all_states:
all_states.append(hidden_states)
if output_attentions:
return all_states, all_attention_probs
return all_states
class SketchEmbedding(nn.Module):
def __init__(self, input_dim, hidden_dim):
super(SketchEmbedding, self).__init__()
self.embedding = nn.Linear(input_dim, hidden_dim)
def forward(self, input_states):
return self.embedding(input_states)
class SketchDiscreteEmbedding(nn.Module):
'''
max_size[tuple](x_length, y_length)
'''
def __init__(self, max_size, type_size, hidden_dim, pool_type):
super(SketchDiscreteEmbedding, self).__init__()
self.x_embedding = nn.Embedding(2*max_size[0]+2, hidden_dim//2)
self.y_embedding = nn.Embedding(2*max_size[1]+2, hidden_dim//2)
self.type_embedding = nn.Embedding(type_size+1, hidden_dim)
assert pool_type in ['sum', 'con']
self.pool_type = pool_type
'''
input_states[batch, seq_len, 3(input_dim)](Inputs are encoded as discrete type)
'''
def forward(self, input_states):
input_states = input_states.to(dtype=torch.long)
input_states = input_states + 1
#print(input_states[0,0,:], torch.min(input_states), torch.max(input_states))
x_hidden = self.x_embedding(input_states[:,:,0])
y_hidden = self.y_embedding(input_states[:,:,1])
#print(x_hidden.size(), y_hidden.size())
axis_hidden = torch.cat([x_hidden, y_hidden], dim=2)
type_hidden = self.type_embedding(input_states[:,:,2])
if self.pool_type == 'sum':
return axis_hidden + type_hidden
elif self.pool_type == 'con':
return torch.cat([axis_hidden, type_hidden], dim=2)
class SketchSinPositionEmbedding(nn.Module):
def __init__(self, max_length, pos_hidden_dim):
super(SketchSinPositionEmbedding, self).__init__()
self.pos_embedding_matrix = torch.zeros(max_length, pos_hidden_dim)
pos_vector = torch.arange(max_length).view(max_length, 1).type(torch.float)
dim_vector = torch.arange(pos_hidden_dim).type(torch.float) + 1.0
#print((pos_vector / (dim_vector[::2] / 2).view(1, -1)).size(), self.pos_embedding_matrix[:,::2].size())
self.pos_embedding_matrix[:,::2] = torch.sin(pos_vector / (dim_vector[::2] / 2).view(1, -1))
self.pos_embedding_matrix[:,1::2] = torch.cos(pos_vector / ((dim_vector[1::2] - 1) / 2).view(1, -1))
#print(self.pos_embedding_matrix)
'''
Input:
position_labels[batch, seq_len]
Output:
position_states[batch, seq_len, pos_hidden_dim]
'''
def forward(self, position_labels):
return self.pos_embedding_matrix[position_labels.view(-1),:].view(position_labels.size(0), position_labels.size(1), -1)
class SketchLearnPositionEmbedding(nn.Module):
def __init__(self, max_length, pos_hidden_dim):
super(SketchLearnPositionEmbedding, self).__init__()
print(max_length, pos_hidden_dim)
self.pos_embedding = nn.Embedding(max_length, pos_hidden_dim)
'''
Input:
position_labels[batch, seq_len]
Output:
position_states[batch, seq_len, pos_hidden_dim]
'''
def forward(self, position_labels):
return self.pos_embedding(position_labels)
class SketchEmbeddingRefineNetwork(nn.Module):
'''
The module to upsample the embedding feature, idea from the ALBERT: Factorized Embedding
'''
def __init__(self, out_dim, layers_dim):
super(SketchEmbeddingRefineNetwork, self).__init__()
self.layers = []
layers_dim = layers_dim.copy()
layers_dim.append(out_dim)
for i in range(len(layers_dim)-1):
self.layers.append(nn.Linear(layers_dim[i], layers_dim[i+1]))
self.layers = nn.ModuleList(self.layers)
def forward(self, input_state):
x = input_state
for layer in self.layers:
x = layer(x)
return x
class SketchTransformerModel(nn.Module):
'''
Input:
layers_setting[list]
input_dim[int]
max_length[int]
position_type[str]
attention_norm_type[str]
inter_activation[str]
attention_dropout_prob[float]
hidden_dropout_prob[float]
output_dropout_prob[float]
'''
def __init__(self, model_type, layers_setting, embed_layers_setting, input_dim, max_length, max_size, type_size,
position_type, segment_type, sketch_embed_type, embed_pool_type, attention_norm_type, inter_activation, attention_dropout_prob,
hidden_dropout_prob, output_dropout_prob):
super(SketchTransformerModel, self).__init__()
self.layers_setting = layers_setting
self.num_hidden_layers = len(layers_setting)
self.embed_pool_type = embed_pool_type
assert sketch_embed_type in ['linear', 'discrete']
if sketch_embed_type == 'linear':
self.embedding = SketchEmbedding(input_dim, embed_layers_setting[0])
elif sketch_embed_type == 'discrete':
self.embedding = SketchDiscreteEmbedding(max_size, type_size, embed_layers_setting[0], embed_pool_type)
assert position_type in ['sin', 'learn', 'none']
if position_type == 'sin':
self.pos_embedding = SketchSinPositionEmbedding(max_length, embed_layers_setting[0])
elif position_type == 'learn':
self.pos_embedding = SketchLearnPositionEmbedding(max_length, embed_layers_setting[0])
else:
self.pos_embedding = None
if segment_type == 'learn':
self.segment_embedding = SketchLearnPositionEmbedding(max_length, embed_layers_setting[0])
else:
self.segment_embedding = None
self.embed_refine_net = SketchEmbeddingRefineNetwork(layers_setting[0][1], embed_layers_setting)
assert model_type in ['albert', 'bert']
if model_type == 'albert':
self.encoder = SketchALEncoder(layers_setting,
attention_norm_type, inter_activation, attention_dropout_prob,
hidden_dropout_prob, output_dropout_prob)
elif model_type == 'bert':
self.encoder = SketchEncoder(layers_setting,
attention_norm_type, inter_activation, attention_dropout_prob,
hidden_dropout_prob, output_dropout_prob)
def load_model(self, state_dict, own_rel_in_input, own_cls_in_input, pre_rel_in_input, pre_cls_in_input):
own_state = self.state_dict()
for k, v in own_state.items():
if k == 'pos_embedding.pos_embedding.weight':
own_pos_size = v.size(0)
seq_len = own_pos_size - own_rel_in_input - own_cls_in_input
pretrained_pos_size = state_dict[k].size(0)
own_start_ind = int(own_rel_in_input+own_cls_in_input)
pre_start_ind = int(pre_rel_in_input+pre_cls_in_input)
seq_len = min(seq_len, state_dict[k].size(0)-pre_start_ind)
own_state[k][own_start_ind:own_start_ind+seq_len] = state_dict[k][pre_start_ind:pre_start_ind+seq_len]
if own_rel_in_input and own_cls_in_input:
if pre_cls_in_input and pre_cls_in_input:
own_state[k][:2] = state_dict[k][:2]
elif pre_cls_in_input:
own_state[k][1] = state_dict[k][0]
elif pre_rel_in_input:
own_state[k][0] = state_dict[k][0]
elif own_rel_in_input:
if pre_rel_in_input:
own_state[k][0] = state_dict[k][0]
elif own_cls_in_input:
if pre_cls_in_input:
own_state[k][0] = state_dict[k][int(pre_rel_in_input)]
else:
own_state[k] = state_dict[k]
self.load_state_dict(own_state)
def get_pos_states(self, input_states):
return torch.arange(input_states.size(1)).view(1,-1).repeat(input_states.size(0),1).to(device=input_states.device)
'''
Input:
input_states[batch, seq_len, 5],
attention_mask[batch, seq_len]/[batch, seq_len, ],(length mask)
Output:
output_states[batch, seq_len, hidden_dim],
'''
def forward(self, input_states, attention_mask, segments=None, head_mask=None,
output_all_states=False, output_attentions=False, keep_multihead_output=False):
if attention_mask is None:
attention_mask = torch.ones(input_states.size(0), input_states.size(1))
# Extending attention mask
if len(attention_mask.size()) == 3:
extended_attention_mask = attention_mask.unsqueeze(1)
elif len(attention_mask.size()) == 2:
extended_attention_mask = attention_mask.unsqueeze(1).unsqueeze(2)
extended_attention_mask = extended_attention_mask.to(dtype=next(self.parameters()).dtype, device=input_states.device) # fp16 compatibility
extended_attention_mask = (1.0 - extended_attention_mask) * -10000.0
attention_mask = extended_attention_mask
# process head mask
if head_mask is not None:
if head_mask.dim() == 1:
head_mask = head_mask.unsqueeze(0).unsqueeze(0).unsqueeze(-1).unsqueeze(-1)
head_mask = head_mask.expand_as(self.num_hidden_layers, -1, -1, -1, -1)
elif head_mask.dim() == 2:
head_mask = head_mask.unsqueeze(1).unsqueeze(-1).unsqueeze(-1) # We can specify head_mask for each layer
head_mask = head_mask.to(dtype=next(self.parameters()).dtype, device=input_states.device) # switch to fload if need + fp16 compatibility
else:
head_mask = None
input_states = self.embedding(input_states)
if self.pos_embedding is not None:
pos_states = self.pos_embedding(self.get_pos_states(input_states))
input_states = input_states + pos_states.to(device=input_states.device)
if self.segment_embedding is not None and segments is not None:
segment_states = self.segment_embedding(segments)
input_states = input_states + segment_states
input_states = self.embed_refine_net(input_states)
output_states = self.encoder(input_states, attention_mask, head_mask, output_all_states, output_attentions, keep_multihead_output)
if output_attentions:
output_states, attention_probs = output_states
return output_states[-1], attention_probs
return output_states[-1]
class SketchCNN(nn.Module):
'''
Truely a CNN model
'''
def __init__(self, hidden_dim, net_type, pretrained):
super(SketchCNN, self).__init__()
if net_type == 'resnet18':
self.encoder = models.resnet18(pretrained=pretrained)
self.encoder.fc = nn.Linear(self.encoder.fc.in_features, hidden_dim)
elif net_type == 'resnet50':
self.encoder = models.resnet50(pretrained=pretrained)
self.encoder.fc = nn.Linear(self.encoder.fc.in_features, hidden_dim)
elif net_type == 'tcnet':
pass
elif net_type =='sketchanet':
pass
def forward(self, input):
return self.encoder(input)
'''
Sketch Transformer based GAN
'''
class SketchGANGenerator(nn.Module):
'''
Assume Label in the Input
'''
def __init__(self, layers_setting, input_dim, cls_dim, max_length,
position_type, attention_norm_type, inter_activation, attention_dropout_prob,
hidden_dropout_prob, output_dropout_prob):
super(SketchGANGenerator, self).__init__()
self.encoder = SketchTransformerModel(layers_setting, input_dim, cls_dim, max_length,
position_type, attention_norm_type, inter_activation, attention_dropout_prob,
hidden_dropout_prob, output_dropout_prob)
self.output = nn.Linear(layers_setting[0][1], 5)
'''
The same as Transformer Model
'''
def forward(self, input_states, attention_mask, head_mask=None,
output_all_states=False, output_attentions=False, keep_multihead_output=False):
hidden_states = self.encoder(input_states, attention_mask, head_mask=head_mask,
output_all_states=output_all_states, output_attentions=output_attentions, keep_multihead_output=keep_multihead_output)
fake_states = self.output(hidden_states)
return fake_states
class SketchGANDiscriminator(nn.Module):
'''
Assume Label in the Input
'''
def __init__(self, layers_setting, input_dim, cls_dim, max_length,
position_type, attention_norm_type, inter_activation, attention_dropout_prob,
hidden_dropout_prob, output_dropout_prob):
super(SketchGANDiscriminator, self).__init__()
self.encoder = SketchTransformerModel(layers_setting, input_dim, cls_dim, max_length,
position_type, attention_norm_type, inter_activation, attention_dropout_prob,
hidden_dropout_prob, output_dropout_prob)
self.output = nn.Linear(layers_setting[0][1], 2)
'''
The same as Transformer Model
'''
def forward(self, input_states, attention_mask, head_mask=None,
output_all_states=False, output_attentions=False, keep_multihead_output=False):
hidden_states = self.encoder(input_states, attention_mask, head_mask=head_mask,
output_all_states=output_all_states, output_attentions=output_attentions, keep_multihead_output=keep_multihead_output)
label = self.output(hidden_states[:,0,:])
return label
'''
Sketch Transformer based VAE
'''
class SketchVAEEncoder(SketchTransformerModel):
def __init__(self, model_type, layers_setting, embed_layers_setting, input_dim, cls_dim, max_length, max_size, type_size,
conditional, position_type, segment_type, sketch_embed_type, embed_pool_type, attention_norm_type, inter_activation, attention_dropout_prob,
hidden_dropout_prob, output_dropout_prob):
super(SketchVAEEncoder, self).__init__(model_type, layers_setting, embed_layers_setting, input_dim, max_length, max_size, type_size,
position_type, segment_type, sketch_embed_type, embed_pool_type, attention_norm_type, inter_activation, attention_dropout_prob,
hidden_dropout_prob, output_dropout_prob)
# self.rec_fc = nn.Linear(layers_setting[0][1], output_dim)
self.conditional = conditional
if self.conditional:
self.cls_embedding = nn.Embedding(cls_dim, embed_layers_setting[0])
else:
self.cls_embedding = None
def load_model(self, state_dict, only_encoder):
own_state = self.state_dict()
for k, v in own_state.items():
if only_encoder and ('encoder' in k or 'embed_refine_net' in k):
own_state[k] = state_dict[k]
else:
if k in state_dict and k in own_state:
own_state[k] = state_dict[k]
self.load_state_dict(own_state)
def forward(self, input_states, attention_mask, targets=None, segments=None, head_mask=None,
output_all_states=False, output_attentions=False, keep_multihead_output=False):
'''
Input:
input_states[batch, seq_len, 5],
zs[batch, latent_dim]
'''
if attention_mask is None:
attention_mask = torch.ones(input_states.size(0), input_states.size(1))
# Extending attention mask
if len(attention_mask.size()) == 3:
extended_attention_mask = attention_mask.unsqueeze(1)
elif len(attention_mask.size()) == 2:
extended_attention_mask = attention_mask.unsqueeze(1).unsqueeze(2)
extended_attention_mask = extended_attention_mask.to(dtype=next(self.parameters()).dtype, device=input_states.device) # fp16 compatibility
extended_attention_mask = (1.0 - extended_attention_mask) * -10000.0
attention_mask = extended_attention_mask
# process head mask
if head_mask is not None:
if head_mask.dim() == 1:
head_mask = head_mask.unsqueeze(0).unsqueeze(0).unsqueeze(-1).unsqueeze(-1)
head_mask = head_mask.expand_as(self.num_hidden_layers, -1, -1, -1, -1)
elif head_mask.dim() == 2:
head_mask = head_mask.unsqueeze(1).unsqueeze(-1).unsqueeze(-1) # We can specify head_mask for each layer
head_mask = head_mask.to(dtype=next(self.parameters()).dtype, device=input_states.device) # switch to fload if need + fp16 compatibility
else:
head_mask = None
input_states = self.embedding(input_states)
if self.pos_embedding is not None:
pos_states = self.pos_embedding(self.get_pos_states(input_states))
input_states = input_states + pos_states
if self.segment_embedding is not None and segments is not None:
segment_states = self.segment_embedding(segments)
input_states = input_states + segment_states
if self.cls_embedding is not None and targets is not None:
cls_states = self.cls_embedding(targets)
cls_states = cls_states.unsqueeze(1).repeat(1,input_states.size(1),1)
input_states = input_states + cls_states
input_states = self.embed_refine_net(input_states)
# Append the latent_states
output_states = self.encoder(input_states, attention_mask, head_mask, output_all_states, output_attentions, keep_multihead_output)
if output_attentions:
output_states, attention_probs = output_states
#return self.rec_fc(output_states[-1]), attention_probs
return output_states[-1], attention_probs
return output_states[-1]
'''
Sketch Transformer based VAE
'''
class SketchVAEDecoder(SketchTransformerModel):
def __init__(self, model_type, layers_setting, embed_layers_setting, rec_layers_setting, input_dim, output_dim, latent_dim, cls_dim, max_length, max_size, type_size,
conditional, position_type, segment_type, sketch_embed_type, embed_pool_type, attention_norm_type, inter_activation, attention_dropout_prob,
hidden_dropout_prob, output_dropout_prob):
print(embed_layers_setting)
super(SketchVAEDecoder, self).__init__(model_type, layers_setting, embed_layers_setting, input_dim, max_length, max_size, type_size,
position_type, segment_type, sketch_embed_type, embed_pool_type, attention_norm_type, inter_activation, attention_dropout_prob,
hidden_dropout_prob, output_dropout_prob)
self.conditional = conditional
if self.conditional:
self.cls_embedding = nn.Embedding(cls_dim, embed_layers_setting[0])
else:
self.cls_embedding = None
self.re_fcs = []
rec_layers_setting = rec_layers_setting.copy()
rec_layers_setting.append(output_dim), rec_layers_setting.insert(0, layers_setting[0][1])
for i in range(len(rec_layers_setting)-1):
self.re_fcs.append(nn.Linear(rec_layers_setting[i], rec_layers_setting[i+1]))
self.re_fcs = nn.ModuleList(self.re_fcs)
self.latent_fusion = nn.Linear(layers_setting[0][1]+latent_dim, layers_setting[0][1])
def load_model(self, state_dict, only_encoder):
own_state = self.state_dict()
for k, v in own_state.items():
if only_encoder and ('encoder' in k or 'embed_refine_net' in k):
#print(k in own_state, k in state_dict)
own_state[k] = state_dict[k]
else:
if k in state_dict and k in own_state:
own_state[k] = state_dict[k]
self.load_state_dict(own_state)
def forward(self, input_states, zs, attention_mask, targets=None, segments=None, head_mask=None,
output_all_states=False, output_attentions=False, keep_multihead_output=False):
'''
Input:
input_states[batch, seq_len, 5],
zs[batch, latent_dim]
'''
if attention_mask is None:
attention_mask = torch.ones(input_states.size(0), input_states.size(1))
# Extending attention mask
if len(attention_mask.size()) == 3:
extended_attention_mask = attention_mask.unsqueeze(1)
elif len(attention_mask.size()) == 2:
extended_attention_mask = attention_mask.unsqueeze(1).unsqueeze(2)
extended_attention_mask = extended_attention_mask.to(dtype=next(self.parameters()).dtype, device=input_states.device) # fp16 compatibility
extended_attention_mask = (1.0 - extended_attention_mask) * -10000.0
attention_mask = extended_attention_mask
# process head mask
if head_mask is not None:
if head_mask.dim() == 1:
head_mask = head_mask.unsqueeze(0).unsqueeze(0).unsqueeze(-1).unsqueeze(-1)
head_mask = head_mask.expand_as(self.num_hidden_layers, -1, -1, -1, -1)
elif head_mask.dim() == 2:
head_mask = head_mask.unsqueeze(1).unsqueeze(-1).unsqueeze(-1) # We can specify head_mask for each layer
head_mask = head_mask.to(dtype=next(self.parameters()).dtype, device=input_states.device) # switch to fload if need + fp16 compatibility
else:
head_mask = None
input_states = self.embedding(input_states)
if self.pos_embedding is not None:
pos_states = self.pos_embedding(self.get_pos_states(input_states))
input_states = input_states + pos_states
if self.segment_embedding is not None and segments is not None:
segment_states = self.segment_embedding(segments)
input_states = input_states + segment_states
if self.cls_embedding is not None and targets is not None:
cls_states = self.cls_embedding(targets)
cls_states = cls_states.unsqueeze(1).repeat(1,input_states.size(1),1)
input_states = input_states + cls_states
input_states = self.embed_refine_net(input_states)
# Append the latent_states
input_states = torch.cat([input_states, zs.unsqueeze(1).repeat(1,input_states.size(1),1)],dim=2)
input_states = self.latent_fusion(input_states)
output_states = self.encoder(input_states, attention_mask, head_mask, output_all_states, output_attentions, keep_multihead_output)
if output_attentions:
output_states, attention_probs = output_states
output_states = output_states[-1]
for re_fc in self.re_fcs:
output_states = re_fc(output_states)
return output_states, attention_probs
output_states = output_states[-1]
for re_fc in self.re_fcs:
output_states = re_fc(output_states)
return output_states
class SketchVAELatentEmbedding(nn.Module):
def __init__(self, hidden_dim, latent_dim, max_length):
super(SketchVAELatentEmbedding, self).__init__()
self.mu_embedding = nn.Linear(hidden_dim, latent_dim)
self.sigma_embedding = nn.Linear(hidden_dim, latent_dim)
self.gaussian_generator = MultivariateNormal(torch.zeros(latent_dim), torch.eye(latent_dim))
'''
Input:
hidden_states[batch, seq_len, hidden_dim]
Output:
mus[batch, latent_dim]
sigmas[batch, latent_dim]
z[batch, latent_dim]
'''
def forward(self, hidden_states, attention_mask):
# Mask the lengths beyond
latent_states = hidden_states[:,0,:]
mus = self.mu_embedding(latent_states)
sigmas = self.sigma_embedding(latent_states)
sigmas = torch.exp(sigmas/2)
random_normal = self.gaussian_generator.sample([sigmas.size(0)]).to(sigmas.device)
zs = mus + sigmas * random_normal
return mus, sigmas , zs
'''
Different Pooling Layers
'''
class SketchPooling(nn.Module):
def __init__(self, hidden_dim, input_dim, cls_dim, max_length=250):
super(SketchPooling, self).__init__()
self.fc1 = nn.Linear(hidden_dim, 4)
self.fc2 = nn.Linear(max_length*4, cls_dim)
self.re_fc = nn.Linear(hidden_dim, input_dim)
def forward(self, hidden_states):
re_sketch = self.re_fc(hidden_states)
pooled = self.fc1(hidden_states)
pooled = self.fc2(pooled.view(pooled.size(0), -1))
return re_sketch, pooled
class SketchGMMPooling(nn.Module):
def __init__(self, hidden_dim, M, cls_dim, max_length=250):
super(SketchPooling, self).__init__()
self.fc1 = nn.Linear(hidden_dim, 4)
self.fc2 = nn.Linear(max_length*4, cls_dim)
self.re_fc = nn.Linear(hidden_dim, 6*M + 3)
'''
Input:
hidden_states[batch, seq_len, hidden_dim]
Output:
re_sketch[batch, seq_len, 6M+3]
pooled[batch, cls_dim]
'''
def forward(self, hidden_states):
re_sketch = self.re_fc(hidden_states)
pooled = self.fc1(hidden_states)
pooled = self.fc2(pooled.view(pooled.size(0), -1))
return re_sketch, pooled
class SketchHiddenPooling(nn.Module):
def __init__(self, hidden_dim):
super(SketchPooler, self).__init__()
self.fc = nn.Linear(hidden_dim, hidden_dim)
self.activation = nn.Tanh()
def forward(self, hidden_states):
# We "pool" the model by simply taking the hidden state corresponding
# to the first token.
first_token_tensor = hidden_states[:, 0]
pooled_output = self.fc(first_token_tensor)
pooled_output = self.activation(pooled_output)
return pooled_output
'''
Multi models for transformer backbone
'''
#Mask Sketch Model
class MaskSketchRecModel(nn.Module):
def __init__(self, rec_layers_setting, hidden_dim, input_dim, cls_in_input, rel_in_input):
super(MaskSketchRecModel, self).__init__()
self.re_fcs = []
rec_layers_setting.append(input_dim), rec_layers_setting.insert(0, hidden_dim)
for i in range(len(rec_layers_setting)-1):
self.re_fcs.append(nn.Linear(rec_layers_setting[i], rec_layers_setting[i+1]))
self.re_fcs = nn.ModuleList(self.re_fcs)
self.cls_in_input = cls_in_input
self.rel_in_input = rel_in_input
'''
Input:
hidden_states[batch, seq_len+cls_input, hidden_dim]
'''
def forward(self, hidden_states):
hidden_states = hidden_states[:, self.cls_in_input+self.rel_in_input:, :]
for re_fc in self.re_fcs:
hidden_states = re_fc(hidden_states)
return hidden_states
class MaskSketchGMMModel(nn.Module):
def __init__(self, hidden_dim, M, cls_in_input, rel_in_input):
super(MaskSketchGMMModel, self).__init__()
self.re_fc = nn.Linear(hidden_dim, 6*M + 3)
self.cls_in_input = cls_in_input
self.rel_in_input = rel_in_input
'''
Input:
hidden_states[batch, seq_len+cls_input, hidden_dim]
attention_mask[batch, seq_len+cls_input]
'''
def forward(self, hidden_states):
hidden_states = hidden_states[:, self.cls_in_input+self.rel_in_input:, :]
return self.re_fc(hidden_states)
# Sketch classification model
class SketchClassificationModel(nn.Module):
def __init__(self, hidden_dim, cls_dim, max_length):
super(SketchClassificationModel, self).__init__()
self.fc1 = nn.Linear(hidden_dim, 4)
self.fc2 = nn.Linear(max_length*4, cls_dim)
'''
Input:
hidden_states[batch, seq_len, hidden_dim]
attention_mask[batch, seq_len]
Output:
cls_states[batch, cls_dim]
'''
def forward(self, hidden_states):
pooled = self.fc1(hidden_states)
pooled = self.fc2(pooled.view(pooled.size(0), -1))
return pooled
class SketchClsPoolingModel(nn.Module):
def __init__(self, cls_layers_setting, hidden_dim, cls_dim, pool_dim):
super(SketchClsPoolingModel, self).__init__()
self.pool_dim = int(pool_dim)
cls_layers_setting = cls_layers_setting.copy()
cls_layers_setting.insert(0, hidden_dim), cls_layers_setting.append(cls_dim)
self.cls_fcs = []
for i in range(len(cls_layers_setting)-1):
self.cls_fcs.append(nn.Linear(cls_layers_setting[i], cls_layers_setting[i+1]))
self.cls_fcs = nn.ModuleList(self.cls_fcs)
'''
Input:
hidden_states[batch, seq_len+cls_dim, hidden_dim](0 dim is cls)
Output:
cls_states[batch, cls_dim]
'''
def forward(self, hidden_states):
pooled = hidden_states[:,self.pool_dim,:]
for cls_fc in self.cls_fcs:
pooled = cls_fc(pooled)
return pooled
class SketchRetrievalPoolingModel(nn.Module):
def __init__(self, rel_layers_setting, hidden_dim, feat_dim, pool_dim):
super(SketchRetrievalPoolingModel, self).__init__()
self.pool_dim = int(pool_dim)
rel_layers_setting = rel_layers_setting.copy()
rel_layers_setting.insert(0, hidden_dim), rel_layers_setting.append(feat_dim)
self.rel_fcs = []
for i in range(len(rel_layers_setting)-1):
self.rel_fcs.append(nn.Linear(rel_layers_setting[i], rel_layers_setting[i+1]))
self.rel_fcs = nn.ModuleList(self.rel_fcs)
'''
Input:
hidden_states[batch, seq_len+cls_dim, hidden_dim](0 dim is cls)
Output:
cls_states[batch, cls_dim]
'''
def forward(self, hidden_states):
pooled = hidden_states[:,self.pool_dim,:]
for rel_fc in self.rel_fcs:
pooled = rel_fc(pooled)
return pooled
class SketchDiscretePoolingModel(nn.Module):
def __init__(self, hidden_dim, max_size, type_size, cls_in_input, rel_in_input):
super(SketchDiscretePoolingModel, self).__init__()
self.cls_in_input = cls_in_input
self.rel_in_input = rel_in_input
self.x_pooling = nn.Linear(hidden_dim, 2*max_size[0]+1)
self.y_pooling = nn.Linear(hidden_dim, 2*max_size[1]+1)
self.type_pooling = nn.Linear(hidden_dim, type_size)
def forward(self, hidden_states):
'''
Input:
hidden_states[batch, seq_len+cls_dim, hidden_dim](0 dim is cls)
Output:
x_pred[batch, seq_len+cls_dim, 2*max_size[0]+1]
y_pred[batch, seq_len+cls_dim, 2*max_size[1]+1]
type_pred[batch, seq_len+cls_dim, type_size]
'''
hidden_states = (hidden_states)[:,self.cls_in_input+self.rel_in_input:,:]
x_pred = self.x_pooling(hidden_states)
y_pred = self.y_pooling(hidden_states)
type_pred = self.type_pooling(hidden_states)
return x_pred, y_pred, type_pred
class SketchSegmentOrderPoolingModel(nn.Module):
def __init__(self, hidden_dim, max_segment, cls_in_input, rel_in_input):
super(SketchSegmentOrderPoolingModel, self).__init__()
self.sg_fc = nn.Linear(hidden_dim, max_segment)
self.cls_in_input = cls_in_input
self.rel_in_input = rel_in_input
def forward(self, hidden_states, segment_index):
'''
Input:
hidden_states[batch, seg_len, hidden_dim]
segment_index[batch, seq_len]
'''
seg_states = hidden_states[:,self.cls_in_input+self.rel_in_input:,:][segment_index==0,:]
return self.sg_fc(seg_states)
class GMMLoss(nn.Module):
def __init__(self, reduction='mean'):
super(GMMLoss, self).__init__()
# self.logsoftmax = nn.LogSoftmax(dim=-1)
self.reduction = reduction
'''
x[seq_len, batch, 2]
lengths[batch, seq_len]
pis[seq_len, batch, M]: no softmax The {pis} in the paper SketchRNN, https://arxiv.org/abs/1704.03477
mus[seq_len, batch, M, 2]: The {mus} in the paper SketchRNN, https://arxiv.org/abs/1704.03477
sigmas[seq_len, batch, M, 2]: exp The {sigmas} in the paper SketchRNN, https://arxiv.org/abs/1704.03477
rhos[seq_len, batch, M]: tanh The {rho} in the paper SketchRNN, https://arxiv.org/abs/1704.03477
masks[]
'''
def forward(self, x, lengths, pis, mus, sigmas, rhos, epsilon=1e-8):
batch_size, seq_len, M = pis.size()
#print(batch_size, seq_len)
#print(x.size(), pis.size())
x = x.view(batch_size, seq_len, 1, 2).repeat(1, 1, M, 1)
sigma_prods = torch.prod(sigmas, dim=3) # [seq_len, batch, M]
sigma_sq = torch.pow(sigmas, 2) # [seq_len, batch, M, 2]
#print(x.size(), mus.size(), sigmas.size())
x_center = (x - mus) / (sigmas)
Z = torch.sum(x_center*x_center, dim=3) - 2 * rhos * torch.prod(x_center, dim=3)
rho_sq = 1 - rhos*rhos # [seq_len, batch, M]
denom = 2 * np.pi * sigma_prods * torch.sqrt(rho_sq)
probs = torch.exp(-Z / (2*rho_sq)) / denom
# pis = F.softmax(pis, dim=-1)
probs = torch.sum(F.softmax(pis, dim=-1) * probs, dim=-1)
log_probs = torch.log(probs+epsilon) * lengths # [len]
loss = - torch.mean(log_probs)
return loss
class KLLoss(nn.Module):
def __init__(self, kl_tolerance):
super(KLLoss, self).__init__()
self.kl_tolerance = torch.tensor(kl_tolerance)
'''
Input:
mus[batch, latent_size]:
sigmas[batch, latent_size]:
'''
def forward(self, mus, sigmas):
loss = - (0.5) * torch.mean(1 + torch.log(sigmas)*2.0 - mus*mus - sigmas*sigmas)
return torch.max(loss, self.kl_tolerance.to(loss.device))
| models/SketchTransformer/models/networks.py | 52,613 | A Lite BERT: Parameter Sharing
layers_setting[list]: [[12, ], []]
Truely a CNN model
max_size[tuple](x_length, y_length)
The module to upsample the embedding feature, idea from the ALBERT: Factorized Embedding
layers_setting[list]: [[12, ], []]
Assume Label in the Input
Assume Label in the Input
A transformer layer for sketch bert
layers_setting[list]: [[12, ], []]
A transformer layer for sketch bert
Implementation for self attention in Sketch.
The input will be a K-Dim feature.
Input Parameters:
config[dict]:
hidden_dim[int]: The dimension of input hidden embeddings in the self attention, hidden diension is equal to the output dimension
num_heads[int]: The number of heads
attention_probs[float]: probability parameter for dropout
Input:
layers_setting[list]
input_dim[int]
max_length[int]
position_type[str]
attention_norm_type[str]
inter_activation[str]
attention_dropout_prob[float]
hidden_dropout_prob[float]
output_dropout_prob[float]
Construct a layernorm module in the TF style (epsilon inside the square root).
Input:
hidden_states[batch, seq_len, hidden_dim]
attention_mask[batch, 1, 1, seq_len]
Output:
context_states[batch, seq_len, hidden_dim]
attention_probs[seq_len, hidden_dim]
Input:
hidden_states[batch, seg_len, hidden_dim]:
attention_mask[batch, seg_len](segment-based)
segments[batch, seg_len]:
segment_index[batch, seq_len]
Input:
input_states[batch, seq_len, 5],
zs[batch, latent_dim]
Input:
input_states[batch, seq_len, 5],
zs[batch, latent_dim]
Input:
hidden_states[batch, seq_len+cls_dim, hidden_dim](0 dim is cls)
Output:
x_pred[batch, seq_len+cls_dim, 2*max_size[0]+1]
y_pred[batch, seq_len+cls_dim, 2*max_size[1]+1]
type_pred[batch, seq_len+cls_dim, type_size]
Input:
hidden_states[batch, seg_len, hidden_dim]
segment_index[batch, seq_len]
Implementation of the gelu activation function.
For information: OpenAI GPT's gelu is slightly different (and gives slightly different results):
0.5 * x * (1 + torch.tanh(math.sqrt(2 / math.pi) * (x + 0.044715 * torch.pow(x, 3))))
Also see https://arxiv.org/abs/1606.08415
Input:
hidden_states[batch, seq_len, hidden_dim]
segment_index[batch, seq_len]
Transpose Function for simplicity.
try: from apex.normalization.fused_layer_norm import FusedLayerNorm as SketchLayerNorm except ImportError: logger.info("Better speed can be achieved with apex installed from https://www.github.com/nvidia/apex ."), "swish": swishself.attention_dropout_prob = config.attention_dropout_prob Calculation for intermeidate parameters Get query, key, value together [batch, seq_len, all_head_dim] [batch, seq_len, all_head_dim] [batch, seq_len, all_head_dim] tranpose the query, key, value into multi heads[batch, seq_len, ] [batch, num_heads, seq_len, head_dim] [batch, num_heads, seq_len, head_dim] [batch, num_heads, seq_len, head_dim] Calculate Attention mapsprint(attention_scores.size(), attention_mask.size()) Compute states valuesview(context_states.size()[:-2]+ (self.all_head_dim,))print(hidden_states.size())print(hidden_states)print(hidden_states) Local Attention[batch, seq_len, hidden_dim] Segment Level Attention[batch, seq_len, hidden_dim] Concatenateprint(hidden_states.size(), local_states.size(), local_inter_states.size())self.layers = nn.ModuleList(self.layers)print(input_states[0,0,:], torch.min(input_states), torch.max(input_states))print(x_hidden.size(), y_hidden.size())print((pos_vector / (dim_vector[::2] / 2).view(1, -1)).size(), self.pos_embedding_matrix[:,::2].size())print(self.pos_embedding_matrix) Extending attention mask fp16 compatibility process head mask We can specify head_mask for each layer switch to fload if need + fp16 compatibility self.rec_fc = nn.Linear(layers_setting[0][1], output_dim) Extending attention mask fp16 compatibility process head mask We can specify head_mask for each layer switch to fload if need + fp16 compatibility Append the latent_statesreturn self.rec_fc(output_states[-1]), attention_probsprint(k in own_state, k in state_dict) Extending attention mask fp16 compatibility process head mask We can specify head_mask for each layer switch to fload if need + fp16 compatibility Append the latent_states Mask the lengths beyond We "pool" the model by simply taking the hidden state corresponding to the first token.Mask Sketch Model Sketch classification model self.logsoftmax = nn.LogSoftmax(dim=-1)print(batch_size, seq_len)print(x.size(), pis.size()) [seq_len, batch, M] [seq_len, batch, M, 2]print(x.size(), mus.size(), sigmas.size()) [seq_len, batch, M] pis = F.softmax(pis, dim=-1) [len] | 4,676 | en | 0.458285 |
import binascii
import re
try:
import json
except:
import simplejson as json
# Inserting certain referenced dicts in here means they can be declared in the same order as in the spec.
maps = {}
# SMPP PDU Definition - SMPP v3.4, section 4, page 45
mandatory_parameter_lists = {
'bind_transmitter': [ # SMPP v3.4, section 4.1.1, table 4-1, page 46
{'name': 'system_id', 'min': 1, 'max': 16, 'var': True, 'type': 'string', 'map': None},
{'name': 'password', 'min': 1, 'max': 9, 'var': True, 'type': 'string', 'map': None},
{'name': 'system_type', 'min': 1, 'max': 13, 'var': True, 'type': 'string', 'map': None},
{'name': 'interface_version', 'min': 1, 'max': 1, 'var': False, 'type': 'hex', 'map': None},
{'name': 'addr_ton', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': 'addr_ton'},
{'name': 'addr_npi', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': 'addr_npi'},
{'name': 'address_range', 'min': 1, 'max': 41, 'var': True, 'type': 'string', 'map': None}
],
'bind_transmitter_resp': [ # SMPP v3.4, section 4.1.2, table 4-2, page 47
{'name': 'system_id', 'min': 1, 'max': 16, 'var': True, 'type': 'string', 'map': None}
],
'bind_receiver': [ # SMPP v3.4, section 4.1.3, table 4-3, page 48
{'name': 'system_id', 'min': 1, 'max': 16, 'var': True, 'type': 'string', 'map': None},
{'name': 'password', 'min': 1, 'max': 9, 'var': True, 'type': 'string', 'map': None},
{'name': 'system_type', 'min': 1, 'max': 13, 'var': True, 'type': 'string', 'map': None},
{'name': 'interface_version', 'min': 1, 'max': 1, 'var': False, 'type': 'hex', 'map': None},
{'name': 'addr_ton', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': 'addr_ton'},
{'name': 'addr_npi', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': 'addr_npi'},
{'name': 'address_range', 'min': 1, 'max': 41, 'var': True, 'type': 'string', 'map': None}
],
'bind_receiver_resp': [ # SMPP v3.4, section 4.1.4, table 4-4, page 50
{'name': 'system_id', 'min': 1, 'max': 16, 'var': True, 'type': 'string', 'map': None}
],
'bind_transceiver': [ # SMPP v3.4, section 4.1.5, table 4-5, page 51
{'name': 'system_id', 'min': 1, 'max': 16, 'var': True, 'type': 'string', 'map': None},
{'name': 'password', 'min': 1, 'max': 9, 'var': True, 'type': 'string', 'map': None},
{'name': 'system_type', 'min': 1, 'max': 13, 'var': True, 'type': 'string', 'map': None},
{'name': 'interface_version', 'min': 1, 'max': 1, 'var': False, 'type': 'hex', 'map': None},
{'name': 'addr_ton', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': 'addr_ton'},
{'name': 'addr_npi', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': 'addr_npi'},
{'name': 'address_range', 'min': 1, 'max': 41, 'var': True, 'type': 'string', 'map': None}
],
'bind_transceiver_resp': [ # SMPP v3.4, section 4.1.6, table 4-6, page 52
{'name': 'system_id', 'min': 1, 'max': 16, 'var': True, 'type': 'string', 'map': None}
],
'outbind': [ # SMPP v3.4, section 4.1.7.1, page 54
{'name': 'system_id', 'min': 1, 'max': 16, 'var': True, 'type': 'string', 'map': None},
{'name': 'password', 'min': 1, 'max': 9, 'var': True, 'type': 'string', 'map': None}
],
'unbind': [ # SMPP v3.4, section 4.2.1, table 4-7, page 56
],
'unbind_resp': [ # SMPP v3.4, section 4.2.2, table 4-8, page 56
],
'generic_nack': [ # SMPP v3.4, section 4.3.1, table 4-9, page 57
],
'submit_sm': [ # SMPP v3.4, section 4.4.1, table 4-10, page 59-61
{'name': 'service_type', 'min': 1, 'max': 6, 'var': True, 'type': 'string', 'map': None},
{'name': 'source_addr_ton', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': 'addr_ton'},
{'name': 'source_addr_npi', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': 'addr_npi'},
{'name': 'source_addr', 'min': 1, 'max': 21, 'var': True, 'type': 'string', 'map': None},
{'name': 'dest_addr_ton', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': 'addr_ton'},
{'name': 'dest_addr_npi', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': 'addr_npi'},
{'name': 'destination_addr', 'min': 1, 'max': 21, 'var': True, 'type': 'string', 'map': None},
{'name': 'esm_class', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': None},
{'name': 'protocol_id', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': None},
{'name': 'priority_flag', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': None},
{'name': 'schedule_delivery_time', 'min': 1, 'max': 17, 'var': False, 'type': 'string', 'map': None},
{'name': 'validity_period', 'min': 1, 'max': 17, 'var': True, 'type': 'string', 'map': None},
{'name': 'registered_delivery', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': None},
{'name': 'replace_if_present_flag', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': None},
{'name': 'data_coding', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': None},
{'name': 'sm_default_msg_id', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': None},
{'name': 'sm_length', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': None},
{'name': 'short_message', 'min': 0, 'max': 254, 'var': 'sm_length', 'type': 'xstring', 'map': None}
],
'submit_sm_resp': [ # SMPP v3.4, section 4.4.2, table 4-11, page 67
{'name': 'message_id', 'min': 0, 'max': 65, 'var': True, 'type': 'string', 'map': None}
],
'submit_multi': [ # SMPP v3.4, section 4.5.1, table 4-12, page 69-71
{'name': 'service_type', 'min': 1, 'max': 6, 'var': True, 'type': 'string', 'map': None},
{'name': 'source_addr_ton', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': 'addr_ton'},
{'name': 'source_addr_npi', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': 'addr_npi'},
{'name': 'source_addr', 'min': 1, 'max': 21, 'var': True, 'type': 'string', 'map': None},
{'name': 'number_of_dests', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': None},
{'name': 'dest_address', 'min': 0, 'max': 0, 'var': 'number_of_dests', 'type': 'dest_address', 'map': None},
{'name': 'esm_class', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': None},
{'name': 'protocol_id', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': None},
{'name': 'priority_flag', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': None},
{'name': 'schedule_delivery_time', 'min': 1, 'max': 17, 'var': False, 'type': 'string', 'map': None},
{'name': 'validity_period', 'min': 1, 'max': 17, 'var': False, 'type': 'string', 'map': None},
{'name': 'registered_delivery', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': None},
{'name': 'replace_if_present_flag', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': None},
{'name': 'data_coding', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': None},
{'name': 'sm_default_msg_id', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': None},
{'name': 'sm_length', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': None},
{'name': 'short_message', 'min': 0, 'max': 254, 'var': 'sm_length', 'type': 'xstring', 'map': None}
],
'dest_address': [ # SMPP v3.4, section 4.5.1.1, table 4-13, page 75
{'name': 'dest_flag', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': None}
# 'sme_dest_address' or 'distribution_list' goes here
],
'sme_dest_address': [ # SMPP v3.4, section 4.5.1.1, table 4-14, page 75
{'name': 'dest_addr_ton', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': 'addr_ton'},
{'name': 'dest_addr_npi', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': 'addr_npi'},
{'name': 'destination_addr', 'min': 1, 'max': 21, 'var': True, 'type': 'string', 'map': None}
],
'distribution_list': [ # SMPP v3.4, section 4.5.1.2, table 4-15, page 75
{'name': 'dl_name', 'min': 1, 'max': 21, 'var': True, 'type': 'string', 'map': None}
],
'submit_multi_resp': [ # SMPP v3.4, section 4.5.2, table 4-16, page 76
{'name': 'message_id', 'min': 1, 'max': 65, 'var': True, 'type': 'string', 'map': None},
{'name': 'no_unsuccess', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': None},
{'name': 'unsuccess_sme', 'min': 0, 'max': 0, 'var': 'no_unsuccess', 'type': 'unsuccess_sme', 'map': None}
],
'unsuccess_sme': [ # SMPP v3.4, section 4.5.2.1, table 4-17, page 77
{'name': 'dest_addr_ton', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': 'addr_ton'},
{'name': 'dest_addr_npi', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': 'addr_npi'},
{'name': 'destination_addr', 'min': 1, 'max': 21, 'var': True, 'type': 'string', 'map': None},
{'name': 'error_status_code', 'min': 4, 'max': 4, 'var': False, 'type': 'integer', 'map': None}
],
'deliver_sm': [ # SMPP v3.4, section 4.6.1, table 4-18, page 79-81
{'name': 'service_type', 'min': 1, 'max': 6, 'var': True, 'type': 'string', 'map': None},
{'name': 'source_addr_ton', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': 'addr_ton'},
{'name': 'source_addr_npi', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': 'addr_npi'},
{'name': 'source_addr', 'min': 1, 'max': 21, 'var': True, 'type': 'string', 'map': None},
{'name': 'dest_addr_ton', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': 'addr_ton'},
{'name': 'dest_addr_npi', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': 'addr_npi'},
{'name': 'destination_addr', 'min': 1, 'max': 21, 'var': True, 'type': 'string', 'map': None},
{'name': 'esm_class', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': None},
{'name': 'protocol_id', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': None},
{'name': 'priority_flag', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': None},
{'name': 'schedule_delivery_time', 'min': 1, 'max': 1, 'var': False, 'type': 'string', 'map': None},
{'name': 'validity_period', 'min': 1, 'max': 1, 'var': False, 'type': 'string', 'map': None},
{'name': 'registered_delivery', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': None},
{'name': 'replace_if_present_flag', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': None},
{'name': 'data_coding', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': None},
{'name': 'sm_default_msg_id', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': None},
{'name': 'sm_length', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': None},
{'name': 'short_message', 'min': 0, 'max': 254, 'var': 'sm_length', 'type': 'xstring', 'map': None}
],
'deliver_sm_resp': [ # SMPP v3.4, section 4.6.2, table 4-19, page 85
{'name': 'message_id', 'min': 1, 'max': 1, 'var': False, 'type': 'string', 'map': None}
],
'data_sm': [ # SMPP v3.4, section 4.7.1, table 4-20, page 87-88
{'name': 'service_type', 'min': 1, 'max': 6, 'var': True, 'type': 'string', 'map': None},
{'name': 'source_addr_ton', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': 'addr_ton'},
{'name': 'source_addr_npi', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': 'addr_npi'},
{'name': 'source_addr', 'min': 1, 'max': 65, 'var': True, 'type': 'string', 'map': None},
{'name': 'dest_addr_ton', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': 'addr_ton'},
{'name': 'dest_addr_npi', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': 'addr_npi'},
{'name': 'destination_addr', 'min': 1, 'max': 65, 'var': True, 'type': 'string', 'map': None},
{'name': 'esm_class', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': None},
{'name': 'registered_delivery', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': None},
{'name': 'data_coding', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': None}
],
'data_sm_resp': [ # SMPP v3.4, section 4.7.2, table 4-21, page 93
{'name': 'message_id', 'min': 1, 'max': 65, 'var': True, 'type': 'string', 'map': None}
],
'query_sm': [ # SMPP v3.4, section 4.8.1, table 4-22, page 95
{'name': 'message_id', 'min': 1, 'max': 65, 'var': True, 'type': 'string', 'map': None},
{'name': 'source_addr_ton', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': 'addr_ton'},
{'name': 'source_addr_npi', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': 'addr_npi'},
{'name': 'source_addr', 'min': 1, 'max': 21, 'var': True, 'type': 'string', 'map': None}
],
'query_sm_resp': [ # SMPP v3.4, section 4.7.2, table 4-21, page 93
{'name': 'message_id', 'min': 1, 'max': 65, 'var': True, 'type': 'string', 'map': None},
{'name': 'final_date', 'min': 1, 'max': 17, 'var': False, 'type': 'string', 'map': None},
{'name': 'message_state', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': None},
{'name': 'error_code', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': None}
],
'cancel_sm': [ # SMPP v3.4, section 4.9.1, table 4-24, page 98-99
{'name': 'service_type', 'min': 1, 'max': 6, 'var': True, 'type': 'string', 'map': None},
{'name': 'message_id', 'min': 1, 'max': 65, 'var': True, 'type': 'string', 'map': None},
{'name': 'source_addr_ton', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': 'addr_ton'},
{'name': 'source_addr_npi', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': 'addr_npi'},
{'name': 'source_addr', 'min': 1, 'max': 21, 'var': True, 'type': 'string', 'map': None},
{'name': 'dest_addr_ton', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': 'addr_ton'},
{'name': 'dest_addr_npi', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': 'addr_npi'},
{'name': 'destination_addr', 'min': 1, 'max': 21, 'var': True, 'type': 'string', 'map': None}
],
'cancel_sm_resp': [ # SMPP v3.4, section 4.9.2, table 4-25, page 100
],
'replace_sm': [ # SMPP v3.4, section 4.10.1, table 4-26, page 102-103
{'name': 'message_id', 'min': 1, 'max': 65, 'var': True, 'type': 'string', 'map': None},
{'name': 'source_addr_ton', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': 'addr_ton'},
{'name': 'source_addr_npi', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': 'addr_npi'},
{'name': 'source_addr', 'min': 1, 'max': 21, 'var': True, 'type': 'string', 'map': None},
{'name': 'schedule_delivery_time', 'min': 1, 'max': 17, 'var': False, 'type': 'string', 'map': None},
{'name': 'validity_period', 'min': 1, 'max': 17, 'var': False, 'type': 'string', 'map': None},
{'name': 'registered_delivery', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': None},
{'name': 'replace_if_present_flag', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': None},
{'name': 'data_coding', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': None},
{'name': 'sm_default_msg_id', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': None},
{'name': 'sm_length', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': None},
{'name': 'short_message', 'min': 0, 'max': 254, 'var': 'sm_length', 'type': 'xstring', 'map': None}
],
'replace_sm_resp': [ # SMPP v3.4, section 4.10.2, table 4-27, page 104
],
'enquire_link': [ # SMPP v3.4, section 4.11.1, table 4-28, page 106
],
'enquire_link_resp': [ # SMPP v3.4, section 4.11.2, table 4-29, page 106
],
'alert_notification': [ # SMPP v3.4, section 4.12.1, table 4-30, page 108
{'name': 'source_addr_ton', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': 'addr_ton'},
{'name': 'source_addr_npi', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': 'addr_npi'},
{'name': 'source_addr', 'min': 1, 'max': 65, 'var': True, 'type': 'string', 'map': None},
{'name': 'esme_addr_ton', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': 'addr_ton'},
{'name': 'esme_addr_npi', 'min': 1, 'max': 1, 'var': False, 'type': 'integer', 'map': 'addr_npi'},
{'name': 'esme_addr', 'min': 1, 'max': 65, 'var': True, 'type': 'string', 'map': None},
]
}
def mandatory_parameter_list_by_command_name(command_name):
return mandatory_parameter_lists.get(command_name, [])
# Command IDs - SMPP v3.4, section 5.1.2.1, table 5-1, page 110-111
command_id_by_hex = {
'80000000': {'hex': '80000000', 'name': 'generic_nack'},
'00000001': {'hex': '00000001', 'name': 'bind_receiver'},
'80000001': {'hex': '80000001', 'name': 'bind_receiver_resp'},
'00000002': {'hex': '00000002', 'name': 'bind_transmitter'},
'80000002': {'hex': '80000002', 'name': 'bind_transmitter_resp'},
'00000003': {'hex': '00000003', 'name': 'query_sm'},
'80000003': {'hex': '80000003', 'name': 'query_sm_resp'},
'00000004': {'hex': '00000004', 'name': 'submit_sm'},
'80000004': {'hex': '80000004', 'name': 'submit_sm_resp'},
'00000005': {'hex': '00000005', 'name': 'deliver_sm'},
'80000005': {'hex': '80000005', 'name': 'deliver_sm_resp'},
'00000006': {'hex': '00000006', 'name': 'unbind'},
'80000006': {'hex': '80000006', 'name': 'unbind_resp'},
'00000007': {'hex': '00000007', 'name': 'replace_sm'},
'80000007': {'hex': '80000007', 'name': 'replace_sm_resp'},
'00000008': {'hex': '00000008', 'name': 'cancel_sm'},
'80000008': {'hex': '80000008', 'name': 'cancel_sm_resp'},
'00000009': {'hex': '00000009', 'name': 'bind_transceiver'},
'80000009': {'hex': '80000009', 'name': 'bind_transceiver_resp'},
'0000000b': {'hex': '0000000b', 'name': 'outbind'},
'00000015': {'hex': '00000015', 'name': 'enquire_link'},
'80000015': {'hex': '80000015', 'name': 'enquire_link_resp'},
'00000021': {'hex': '00000021', 'name': 'submit_multi'},
'80000021': {'hex': '80000021', 'name': 'submit_multi_resp'},
'00000102': {'hex': '00000102', 'name': 'alert_notification'},
'00000103': {'hex': '00000103', 'name': 'data_sm'},
'80000103': {'hex': '80000103', 'name': 'data_sm_resp'},
# v4 codes
'80010000': {'hex': '80010000', 'name': 'generic_nack_v4'},
'00010001': {'hex': '00010001', 'name': 'bind_receiver_v4'},
'80010001': {'hex': '80010001', 'name': 'bind_receiver_resp_v4'},
'00010002': {'hex': '00010002', 'name': 'bind_transmitter_v4'},
'80010002': {'hex': '80010002', 'name': 'bind_transmitter_resp_v4'},
'00010003': {'hex': '00010003', 'name': 'query_sm_v4'},
'80010003': {'hex': '80010003', 'name': 'query_sm_resp_v4'},
'00010004': {'hex': '00010004', 'name': 'submit_sm_v4'},
'80010004': {'hex': '80010004', 'name': 'submit_sm_resp_v4'},
'00010005': {'hex': '00010005', 'name': 'deliver_sm_v4'},
'80010005': {'hex': '80010005', 'name': 'deliver_sm_resp_v4'},
'00010006': {'hex': '00010006', 'name': 'unbind_v4'},
'80010006': {'hex': '80010006', 'name': 'unbind_resp_v4'},
'00010007': {'hex': '00010007', 'name': 'replace_sm_v4'},
'80010007': {'hex': '80010007', 'name': 'replace_sm_resp_v4'},
'00010008': {'hex': '00010008', 'name': 'cancel_sm_v4'},
'80010008': {'hex': '80010008', 'name': 'cancel_sm_resp_v4'},
'00010009': {'hex': '00010009', 'name': 'delivery_receipt_v4'},
'80010009': {'hex': '80010009', 'name': 'delivery_receipt_resp_v4'},
'0001000a': {'hex': '0001000a', 'name': 'enquire_link_v4'},
'8001000a': {'hex': '8001000a', 'name': 'enquire_link_resp_v4'},
'0001000b': {'hex': '0001000b', 'name': 'outbind_v4'},
}
def command_id_name_by_hex(x):
return command_id_by_hex.get(x, {}).get('name')
command_id_by_name = {
'generic_nack' :{ 'hex': '80000000', 'name': 'generic_nack'},
'bind_receiver' :{ 'hex': '00000001', 'name': 'bind_receiver'},
'bind_receiver_resp' :{ 'hex': '80000001', 'name': 'bind_receiver_resp'},
'bind_transmitter' :{ 'hex': '00000002', 'name': 'bind_transmitter'},
'bind_transmitter_resp' :{ 'hex': '80000002', 'name': 'bind_transmitter_resp'},
'query_sm' :{ 'hex': '00000003', 'name': 'query_sm'},
'query_sm_resp' :{ 'hex': '80000003', 'name': 'query_sm_resp'},
'submit_sm' :{ 'hex': '00000004', 'name': 'submit_sm'},
'submit_sm_resp' :{ 'hex': '80000004', 'name': 'submit_sm_resp'},
'deliver_sm' :{ 'hex': '00000005', 'name': 'deliver_sm'},
'deliver_sm_resp' :{ 'hex': '80000005', 'name': 'deliver_sm_resp'},
'unbind' :{ 'hex': '00000006', 'name': 'unbind'},
'unbind_resp' :{ 'hex': '80000006', 'name': 'unbind_resp'},
'replace_sm' :{ 'hex': '00000007', 'name': 'replace_sm'},
'replace_sm_resp' :{ 'hex': '80000007', 'name': 'replace_sm_resp'},
'cancel_sm' :{ 'hex': '00000008', 'name': 'cancel_sm'},
'cancel_sm_resp' :{ 'hex': '80000008', 'name': 'cancel_sm_resp'},
'bind_transceiver' :{ 'hex': '00000009', 'name': 'bind_transceiver'},
'bind_transceiver_resp' :{ 'hex': '80000009', 'name': 'bind_transceiver_resp'},
'outbind' :{ 'hex': '0000000b', 'name': 'outbind'},
'enquire_link' :{ 'hex': '00000015', 'name': 'enquire_link'},
'enquire_link_resp' :{ 'hex': '80000015', 'name': 'enquire_link_resp'},
'submit_multi' :{ 'hex': '00000021', 'name': 'submit_multi'},
'submit_multi_resp' :{ 'hex': '80000021', 'name': 'submit_multi_resp'},
'alert_notification' :{ 'hex': '00000102', 'name': 'alert_notification'},
'data_sm' :{ 'hex': '00000103', 'name': 'data_sm'},
'data_sm_resp' :{ 'hex': '80000103', 'name': 'data_sm_resp'},
# v4 codes
'generic_nack_v4' :{ 'hex': '80010000', 'name': 'generic_nack_v4'},
'bind_receiver_v4' :{ 'hex': '00010001', 'name': 'bind_receiver_v4'},
'bind_receiver_resp_v4' :{ 'hex': '80010001', 'name': 'bind_receiver_resp_v4'},
'bind_transmitter_v4' :{ 'hex': '00010002', 'name': 'bind_transmitter_v4'},
'bind_transmitter_resp_v4': {'hex': '80010002', 'name': 'bind_transmitter_resp_v4'},
'query_sm_v4' :{ 'hex': '00010003', 'name': 'query_sm_v4'},
'query_sm_resp_v4' :{ 'hex': '80010003', 'name': 'query_sm_resp_v4'},
'submit_sm_v4' :{ 'hex': '00010004', 'name': 'submit_sm_v4'},
'submit_sm_resp_v4' :{ 'hex': '80010004', 'name': 'submit_sm_resp_v4'},
'deliver_sm_v4' :{ 'hex': '00010005', 'name': 'deliver_sm_v4'},
'deliver_sm_resp_v4' :{ 'hex': '80010005', 'name': 'deliver_sm_resp_v4'},
'unbind_v4' :{ 'hex': '00010006', 'name': 'unbind_v4'},
'unbind_resp_v4' :{ 'hex': '80010006', 'name': 'unbind_resp_v4'},
'replace_sm_v4' :{ 'hex': '00010007', 'name': 'replace_sm_v4'},
'replace_sm_resp_v4' :{ 'hex': '80010007', 'name': 'replace_sm_resp_v4'},
'cancel_sm_v4' :{ 'hex': '00010008', 'name': 'cancel_sm_v4'},
'cancel_sm_resp_v4' :{ 'hex': '80010008', 'name': 'cancel_sm_resp_v4'},
'delivery_receipt_v4' :{ 'hex': '00010009', 'name': 'delivery_receipt_v4'},
'delivery_receipt_resp_v4':{ 'hex': '80010009', 'name': 'delivery_receipt_resp_v4'},
'enquire_link_v4' :{ 'hex': '0001000a', 'name': 'enquire_link_v4'},
'enquire_link_resp_v4' :{ 'hex': '8001000a', 'name': 'enquire_link_resp_v4'},
'outbind_v4' :{ 'hex': '0001000b', 'name': 'outbind_v4'}
}
def command_id_hex_by_name(n):
return command_id_by_name.get(n, {}).get('hex')
# SMPP Error Codes (ESME) - SMPP v3.4, section 5.1.3, table 5-2, page 112-114
command_status_by_hex = {
'00000000': {'hex': '00000000', 'name': 'ESME_ROK', 'description': 'No error'},
'00000001': {'hex': '00000001', 'name': 'ESME_RINVMSGLEN', 'description': 'Message Length is invalid'},
'00000002': {'hex': '00000002', 'name': 'ESME_RINVCMDLEN', 'description': 'Command Length is invalid'},
'00000003': {'hex': '00000003', 'name': 'ESME_RINVCMDID', 'description': 'Invalid Command ID'},
'00000004': {'hex': '00000004', 'name': 'ESME_RINVBNDSTS', 'description': 'Incorrect BIND Status for given command'},
'00000005': {'hex': '00000005', 'name': 'ESME_RALYBND', 'description': 'ESME Already in bound state'},
'00000006': {'hex': '00000006', 'name': 'ESME_RINVPRTFLG', 'description': 'Invalid priority flag'},
'00000007': {'hex': '00000007', 'name': 'ESME_RINVREGDLVFLG', 'description': 'Invalid registered delivery flag'},
'00000008': {'hex': '00000008', 'name': 'ESME_RSYSERR', 'description': 'System Error'},
'0000000a': {'hex': '0000000a', 'name': 'ESME_RINVSRCADR', 'description': 'Invalid source address'},
'0000000b': {'hex': '0000000b', 'name': 'ESME_RINVDSTADR', 'description': 'Invalid destination address'},
'0000000c': {'hex': '0000000c', 'name': 'ESME_RINVMSGID', 'description': 'Message ID is invalid'},
'0000000d': {'hex': '0000000d', 'name': 'ESME_RBINDFAIL', 'description': 'Bind failed'},
'0000000e': {'hex': '0000000e', 'name': 'ESME_RINVPASWD', 'description': 'Invalid password'},
'0000000f': {'hex': '0000000f', 'name': 'ESME_RINVSYSID', 'description': 'Invalid System ID'},
'00000011': {'hex': '00000011', 'name': 'ESME_RCANCELFAIL', 'description': 'Cancel SM Failed'},
'00000013': {'hex': '00000013', 'name': 'ESME_RREPLACEFAIL', 'description': 'Replace SM Failed'},
'00000014': {'hex': '00000014', 'name': 'ESME_RMSGQFUL', 'description': 'Message queue full'},
'00000015': {'hex': '00000015', 'name': 'ESME_RINVSERTYP', 'description': 'Invalid service type'},
'00000033': {'hex': '00000033', 'name': 'ESME_RINVNUMDESTS', 'description': 'Invalid number of destinations'},
'00000034': {'hex': '00000034', 'name': 'ESME_RINVDLNAME', 'description': 'Invalid distribution list name'},
'00000040': {'hex': '00000040', 'name': 'ESME_RINVDESTFLAG', 'description': 'Destination flag is invalid (submit_multi)'},
'00000042': {'hex': '00000042', 'name': 'ESME_RINVSUBREP', 'description': "Invalid `submit with replace' request (i.e. submit_sm with replace_if_present_flag set)"},
'00000043': {'hex': '00000043', 'name': 'ESME_RINVESMCLASS', 'description': 'Invalid esm_class field data'},
'00000044': {'hex': '00000044', 'name': 'ESME_RCNTSUBDL', 'description': 'Cannot submit to distribution list'},
'00000045': {'hex': '00000045', 'name': 'ESME_RSUBMITFAIL', 'description': 'submit_sm or submit_multi failed'},
'00000048': {'hex': '00000048', 'name': 'ESME_RINVSRCTON', 'description': 'Invalid source address TON'},
'00000049': {'hex': '00000049', 'name': 'ESME_RINVSRCNPI', 'description': 'Invalid source address NPI'},
'00000050': {'hex': '00000050', 'name': 'ESME_RINVDSTTON', 'description': 'Invalid destination address TON'},
'00000051': {'hex': '00000051', 'name': 'ESME_RINVDSTNPI', 'description': 'Invalid destination address NPI'},
'00000053': {'hex': '00000053', 'name': 'ESME_RINVSYSTYP', 'description': 'Invalid system_type field'},
'00000054': {'hex': '00000054', 'name': 'ESME_RINVREPFLAG', 'description': 'Invalid replace_if_present flag'},
'00000055': {'hex': '00000055', 'name': 'ESME_RINVNUMMSGS', 'description': 'Invalid number of messages'},
'00000058': {'hex': '00000058', 'name': 'ESME_RTHROTTLED', 'description': 'Throttling error (ESME has exceeded allowed message limits)'},
'00000061': {'hex': '00000061', 'name': 'ESME_RINVSCHED', 'description': 'Invalid scheduled delivery time'},
'00000062': {'hex': '00000062', 'name': 'ESME_RINVEXPIRY', 'description': 'Invalid message validity period (expiry time)'},
'00000063': {'hex': '00000063', 'name': 'ESME_RINVDFTMSGID', 'description': 'Predefined message invalid or not found'},
'00000064': {'hex': '00000064', 'name': 'ESME_RX_T_APPN', 'description': 'ESME Receiver Temporary App Error Code'},
'00000065': {'hex': '00000065', 'name': 'ESME_RX_P_APPN', 'description': 'ESME Receiver Permanent App Error Code'},
'00000066': {'hex': '00000066', 'name': 'ESME_RX_R_APPN', 'description': 'ESME Receiver Reject Message Error Code'},
'00000067': {'hex': '00000067', 'name': 'ESME_RQUERYFAIL', 'description': 'query_sm request failed'},
'000000c0': {'hex': '000000c0', 'name': 'ESME_RINVOPTPARSTREAM', 'description': 'Error in the optional part of the PDU Body'},
'000000c1': {'hex': '000000c1', 'name': 'ESME_ROPTPARNOTALLWD', 'description': 'Optional paramenter not allowed'},
'000000c2': {'hex': '000000c2', 'name': 'ESME_RINVPARLEN', 'description': 'Invalid parameter length'},
'000000c3': {'hex': '000000c3', 'name': 'ESME_RMISSINGOPTPARAM', 'description': 'Expected optional parameter missing'},
'000000c4': {'hex': '000000c4', 'name': 'ESME_RINVOPTPARAMVAL', 'description': 'Invalid optional parameter value'},
'000000fe': {'hex': '000000fe', 'name': 'ESME_RDELIVERYFAILURE', 'description': 'Delivery Failure (used for data_sm_resp)'},
'000000ff': {'hex': '000000ff', 'name': 'ESME_RUNKNOWNERR', 'description': 'Unknown error'}
}
def command_status_name_by_hex(x):
return command_status_by_hex.get(x, {}).get('name')
command_status_by_name = {
'ESME_ROK' :{ 'hex': '00000000', 'name': 'ESME_ROK', 'description': 'No error'},
'ESME_RINVMSGLEN' :{ 'hex': '00000001', 'name': 'ESME_RINVMSGLEN', 'description': 'Message Length is invalid'},
'ESME_RINVCMDLEN' :{ 'hex': '00000002', 'name': 'ESME_RINVCMDLEN', 'description': 'Command Length is invalid'},
'ESME_RINVCMDID' :{ 'hex': '00000003', 'name': 'ESME_RINVCMDID', 'description': 'Invalid Command ID'},
'ESME_RINVBNDSTS' :{ 'hex': '00000004', 'name': 'ESME_RINVBNDSTS', 'description': 'Incorrect BIND Status for given command'},
'ESME_RALYBND' :{ 'hex': '00000005', 'name': 'ESME_RALYBND', 'description': 'ESME Already in bound state'},
'ESME_RINVPRTFLG' :{ 'hex': '00000006', 'name': 'ESME_RINVPRTFLG', 'description': 'Invalid priority flag'},
'ESME_RINVREGDLVFLG' :{ 'hex': '00000007', 'name': 'ESME_RINVREGDLVFLG', 'description': 'Invalid registered delivery flag'},
'ESME_RSYSERR' :{ 'hex': '00000008', 'name': 'ESME_RSYSERR', 'description': 'System Error'},
'ESME_RINVSRCADR' :{ 'hex': '0000000a', 'name': 'ESME_RINVSRCADR', 'description': 'Invalid source address'},
'ESME_RINVDSTADR' :{ 'hex': '0000000b', 'name': 'ESME_RINVDSTADR', 'description': 'Invalid destination address'},
'ESME_RINVMSGID' :{ 'hex': '0000000c', 'name': 'ESME_RINVMSGID', 'description': 'Message ID is invalid'},
'ESME_RBINDFAIL' :{ 'hex': '0000000d', 'name': 'ESME_RBINDFAIL', 'description': 'Bind failed'},
'ESME_RINVPASWD' :{ 'hex': '0000000e', 'name': 'ESME_RINVPASWD', 'description': 'Invalid password'},
'ESME_RINVSYSID' :{ 'hex': '0000000f', 'name': 'ESME_RINVSYSID', 'description': 'Invalid System ID'},
'ESME_RCANCELFAIL' :{ 'hex': '00000011', 'name': 'ESME_RCANCELFAIL', 'description': 'Cancel SM Failed'},
'ESME_RREPLACEFAIL' :{ 'hex': '00000013', 'name': 'ESME_RREPLACEFAIL', 'description': 'Replace SM Failed'},
'ESME_RMSGQFUL' :{ 'hex': '00000014', 'name': 'ESME_RMSGQFUL', 'description': 'Message queue full'},
'ESME_RINVSERTYP' :{ 'hex': '00000015', 'name': 'ESME_RINVSERTYP', 'description': 'Invalid service type'},
'ESME_RINVNUMDESTS' :{ 'hex': '00000033', 'name': 'ESME_RINVNUMDESTS', 'description': 'Invalid number of destinations'},
'ESME_RINVDLNAME' :{ 'hex': '00000034', 'name': 'ESME_RINVDLNAME', 'description': 'Invalid distribution list name'},
'ESME_RINVDESTFLAG' :{ 'hex': '00000040', 'name': 'ESME_RINVDESTFLAG', 'description': 'Destination flag is invalid (submit_multi)'},
'ESME_RINVSUBREP' :{ 'hex': '00000042', 'name': 'ESME_RINVSUBREP', 'description': "Invalid `submit with replace' request (i.e. submit_sm with replace_if_present_flag set)"},
'ESME_RINVESMCLASS' :{ 'hex': '00000043', 'name': 'ESME_RINVESMCLASS', 'description': 'Invalid esm_class field data'},
'ESME_RCNTSUBDL' :{ 'hex': '00000044', 'name': 'ESME_RCNTSUBDL', 'description': 'Cannot submit to distribution list'},
'ESME_RSUBMITFAIL' :{ 'hex': '00000045', 'name': 'ESME_RSUBMITFAIL', 'description': 'submit_sm or submit_multi failed'},
'ESME_RINVSRCTON' :{ 'hex': '00000048', 'name': 'ESME_RINVSRCTON', 'description': 'Invalid source address TON'},
'ESME_RINVSRCNPI' :{ 'hex': '00000049', 'name': 'ESME_RINVSRCNPI', 'description': 'Invalid source address NPI'},
'ESME_RINVDSTTON' :{ 'hex': '00000050', 'name': 'ESME_RINVDSTTON', 'description': 'Invalid destination address TON'},
'ESME_RINVDSTNPI' :{ 'hex': '00000051', 'name': 'ESME_RINVDSTNPI', 'description': 'Invalid destination address NPI'},
'ESME_RINVSYSTYP' :{ 'hex': '00000053', 'name': 'ESME_RINVSYSTYP', 'description': 'Invalid system_type field'},
'ESME_RINVREPFLAG' :{ 'hex': '00000054', 'name': 'ESME_RINVREPFLAG', 'description': 'Invalid replace_if_present flag'},
'ESME_RINVNUMMSGS' :{ 'hex': '00000055', 'name': 'ESME_RINVNUMMSGS', 'description': 'Invalid number of messages'},
'ESME_RTHROTTLED' :{ 'hex': '00000058', 'name': 'ESME_RTHROTTLED', 'description': 'Throttling error (ESME has exceeded allowed message limits)'},
'ESME_RINVSCHED' :{ 'hex': '00000061', 'name': 'ESME_RINVSCHED', 'description': 'Invalid scheduled delivery time'},
'ESME_RINVEXPIRY' :{ 'hex': '00000062', 'name': 'ESME_RINVEXPIRY', 'description': 'Invalid message validity period (expiry time)'},
'ESME_RINVDFTMSGID' :{ 'hex': '00000063', 'name': 'ESME_RINVDFTMSGID', 'description': 'Predefined message invalid or not found'},
'ESME_RX_T_APPN' :{ 'hex': '00000064', 'name': 'ESME_RX_T_APPN', 'description': 'ESME Receiver Temporary App Error Code'},
'ESME_RX_P_APPN' :{ 'hex': '00000065', 'name': 'ESME_RX_P_APPN', 'description': 'ESME Receiver Permanent App Error Code'},
'ESME_RX_R_APPN' :{ 'hex': '00000066', 'name': 'ESME_RX_R_APPN', 'description': 'ESME Receiver Reject Message Error Code'},
'ESME_RQUERYFAIL' :{ 'hex': '00000067', 'name': 'ESME_RQUERYFAIL', 'description': 'query_sm request failed'},
'ESME_RINVOPTPARSTREAM': {'hex': '000000c0', 'name': 'ESME_RINVOPTPARSTREAM', 'description': 'Error in the optional part of the PDU Body'},
'ESME_ROPTPARNOTALLWD' :{ 'hex': '000000c1', 'name': 'ESME_ROPTPARNOTALLWD', 'description': 'Optional paramenter not allowed'},
'ESME_RINVPARLEN' :{ 'hex': '000000c2', 'name': 'ESME_RINVPARLEN', 'description': 'Invalid parameter length'},
'ESME_RMISSINGOPTPARAM': {'hex': '000000c3', 'name': 'ESME_RMISSINGOPTPARAM', 'description': 'Expected optional parameter missing'},
'ESME_RINVOPTPARAMVAL' :{ 'hex': '000000c4', 'name': 'ESME_RINVOPTPARAMVAL', 'description': 'Invalid optional parameter value'},
'ESME_RDELIVERYFAILURE': {'hex': '000000fe', 'name': 'ESME_RDELIVERYFAILURE', 'description': 'Delivery Failure (used for data_sm_resp)'},
'ESME_RUNKNOWNERR' :{ 'hex': '000000ff', 'name': 'ESME_RUNKNOWNERR', 'description': 'Unknown error'}
}
def command_status_hex_by_name(n):
return command_status_by_name.get(n, {}).get('hex')
# Type of Number (TON) - SMPP v3.4, section 5.2.5, table 5-3, page 117
maps['addr_ton_by_name'] = {
'unknown' : '00',
'international' : '01',
'national' : '02',
'network_specific' : '03',
'subscriber_number': '04',
'alphanumeric' : '05',
'abbreviated' : '06'
}
maps['addr_ton_by_hex'] = {
'00': 'unknown',
'01': 'international',
'02': 'national',
'03': 'network_specific',
'04': 'subscriber_number',
'05': 'alphanumeric',
'06': 'abbreviated'
}
# Numberic Plan Indicator (NPI) - SMPP v3.4, section 5.2.6, table 5-4, page 118
maps['addr_npi_by_name'] = {
'unknown' : '00',
'ISDN' : '01',
'data' : '03',
'telex' : '04',
'land_mobile': '06',
'national' : '08',
'private' : '09',
'ERMES' : '0a',
'internet' : '0e',
'WAP' : '12'
}
maps['addr_npi_by_hex'] = {
'00': 'unknown',
'01': 'ISDN',
'03': 'data',
'04': 'telex',
'06': 'land_mobile',
'08': 'national',
'09': 'private',
'0a': 'ERMES',
'0e': 'internet',
'12': 'WAP'
}
# ESM Class bits - SMPP v3.4, section 5.2.12, page 121
maps['esm_class_bits'] = {
'mode_mask' : '03',
'type_mask' : '3c',
'feature_mask' : 'c0',
'mode_default' : '00',
'mode_datagram' : '01',
'mode_forward' : '02',
'mode_store_and_forward' : '03',
'type_default' : '00',
'type_delivery_receipt' : '04',
'type_delivery_ack' : '08',
'type_0011' : '0a',
'type_user_ack' : '10',
'type_0101' : '14',
'type_conversation_abort' : '18',
'type_0111' : '1a',
'type_intermed_deliv_notif' : '20',
'type_1001' : '24',
'type_1010' : '28',
'type_1011' : '2a',
'type_1100' : '30',
'type_1101' : '34',
'type_1110' : '38',
'type_1111' : '3a',
'feature_none' : '00',
'feature_UDHI' : '40',
'feature_reply_path' : '80',
'feature_UDHI_and_reply_path': 'c0'
}
# Registered Delivery bits - SMPP v3.4, section 5.2.17, page 124
maps['registered_delivery_bits'] = {
'receipt_mask' : '03',
'ack_mask' : '0c',
'intermed_notif_mask' : '80',
'receipt_none' : '00',
'receipt_always' : '01',
'receipt_on_fail' : '02',
'receipt_res' : '03',
'ack_none' : '00',
'ack_delivery' : '04',
'ack_user' : '08',
'ack_delivery_and_user': '0c',
'intermed_notif_none' : '00',
'intermed_notif' : '10'
}
# submit_multi dest_flag constants - SMPP v3.4, section 5.2.25, page 129
# maps['dest_flag_by_name'] = {
# 'SME Address' :1,
# 'Distribution List Name': 2
# }
# Message State codes returned in query_sm_resp PDUs - SMPP v3.4, section 5.2.28, table 5-6, page 130
maps['message_state_by_name'] = {
'ENROUTE' : 1,
'DELIVERED' : 2,
'EXPIRED' : 3,
'DELETED' : 4,
'UNDELIVERABLE': 5,
'ACCEPTED' : 6,
'UNKNOWN' : 7,
'REJECTED' : 8
}
# Facility Code bits for SMPP v4
maps['facility_code_bits'] = {
'GF_PVCY' : '00000001',
'GF_SUBADDR': '00000002',
'NF_CC' : '00080000',
'NF_PDC' : '00010000',
'NF_IS136' : '00020000',
'NF_IS95A' : '00040000'
}
# Optional Parameter Tags - SMPP v3.4, section 5.3.2, Table 5-7, page 132-133
optional_parameter_tag_by_hex = {
'0005': {'hex': '0005', 'name': 'dest_addr_subunit', 'type': 'integer', 'tech': 'GSM'}, # SMPP v3.4, section 5.3.2.1, page 134
'0006': {'hex': '0006', 'name': 'dest_network_type', 'type': 'integer', 'tech': 'Generic'}, # SMPP v3.4, section 5.3.2.3, page 135
'0007': {'hex': '0007', 'name': 'dest_bearer_type', 'type': 'integer', 'tech': 'Generic'}, # SMPP v3.4, section 5.3.2.5, page 136
'0008': {'hex': '0008', 'name': 'dest_telematics_id', 'type': 'integer', 'tech': 'GSM', 'min': 2}, # SMPP v3.4, section 5.3.2.7, page 137
'000d': {'hex': '000d', 'name': 'source_addr_subunit', 'type': 'integer', 'tech': 'GSM'}, # SMPP v3.4, section 5.3.2.2, page 134
'000e': {'hex': '000e', 'name': 'source_network_type', 'type': 'integer', 'tech': 'Generic'}, # SMPP v3.4, section 5.3.2.4, page 135
'000f': {'hex': '000f', 'name': 'source_bearer_type', 'type': 'integer', 'tech': 'Generic'}, # SMPP v3.4, section 5.3.2.6, page 136
'0010': {'hex': '0010', 'name': 'source_telematics_id', 'type': 'integer', 'tech': 'GSM'}, # SMPP v3.4, section 5.3.2.8, page 137
'0017': {'hex': '0017', 'name': 'qos_time_to_live', 'type': 'integer', 'tech': 'Generic', 'min': 4}, # SMPP v3.4, section 5.3.2.9, page 138
'0019': {'hex': '0019', 'name': 'payload_type', 'type': 'integer', 'tech': 'Generic'}, # SMPP v3.4, section 5.3.2.10, page 138
'001d': {'hex': '001d', 'name': 'additional_status_info_text', 'type': 'string', 'tech': 'Generic'}, # SMPP v3.4, section 5.3.2.11, page 139
'001e': {'hex': '001e', 'name': 'receipted_message_id', 'type': 'string', 'tech': 'Generic'}, # SMPP v3.4, section 5.3.2.12, page 139
'0030': {'hex': '0030', 'name': 'ms_msg_wait_facilities', 'type': 'bitmask', 'tech': 'GSM'}, # SMPP v3.4, section 5.3.2.13, page 140
'0101': {'hex': '0101', 'name': 'PVCY_AuthenticationStr', 'type': None, 'tech': '? (J-Phone)'}, # v4 page 58-62
'0201': {'hex': '0201', 'name': 'privacy_indicator', 'type': 'integer', 'tech': 'CDMA, TDMA'}, # SMPP v3.4, section 5.3.2.14, page 141
'0202': {'hex': '0202', 'name': 'source_subaddress', 'type': 'hex', 'tech': 'CDMA, TDMA', 'min': 2}, # SMPP v3.4, section 5.3.2.15, page 142
'0203': {'hex': '0203', 'name': 'dest_subaddress', 'type': 'hex', 'tech': 'CDMA, TDMA', 'min': 2}, # SMPP v3.4, section 5.3.2.16, page 143
'0204': {'hex': '0204', 'name': 'user_message_reference', 'type': 'integer', 'tech': 'Generic', 'min': 2}, # SMPP v3.4, section 5.3.2.17, page 143
'0205': {'hex': '0205', 'name': 'user_response_code', 'type': 'integer', 'tech': 'CDMA, TDMA'}, # SMPP v3.4, section 5.3.2.18, page 144
'020a': {'hex': '020a', 'name': 'source_port', 'type': 'integer', 'tech': 'Generic', 'min': 2}, # SMPP v3.4, section 5.3.2.20, page 145
'020b': {'hex': '020b', 'name': 'destination_port', 'type': 'integer', 'tech': 'Generic', 'min': 2}, # SMPP v3.4, section 5.3.2.21, page 145
'020c': {'hex': '020c', 'name': 'sar_msg_ref_num', 'type': 'integer', 'tech': 'Generic', 'min': 2}, # SMPP v3.4, section 5.3.2.22, page 146
'020d': {'hex': '020d', 'name': 'language_indicator', 'type': 'integer', 'tech': 'CDMA, TDMA'}, # SMPP v3.4, section 5.3.2.19, page 144
'020e': {'hex': '020e', 'name': 'sar_total_segments', 'type': 'integer', 'tech': 'Generic'}, # SMPP v3.4, section 5.3.2.23, page 147
'020f': {'hex': '020f', 'name': 'sar_segment_seqnum', 'type': 'integer', 'tech': 'Generic'}, # SMPP v3.4, section 5.3.2.24, page 147
'0210': {'hex': '0210', 'name': 'sc_interface_version', 'type': 'integer', 'tech': 'Generic'}, # SMPP v3.4, section 5.3.2.25, page 148
'0301': {'hex': '0301', 'name': 'CC_CBN', 'type': None, 'tech': 'V4'}, # v4 page 70
'0302': {'hex': '0302', 'name': 'callback_num_pres_ind', 'type': 'bitmask', 'tech': 'TDMA'}, # SMPP v3.4, section 5.3.2.37, page 156
'0303': {'hex': '0303', 'name': 'callback_num_atag', 'type': 'hex', 'tech': 'TDMA'}, # SMPP v3.4, section 5.3.2.38, page 157
'0304': {'hex': '0304', 'name': 'number_of_messages', 'type': 'integer', 'tech': 'CDMA'}, # SMPP v3.4, section 5.3.2.39, page 158
'0381': {'hex': '0381', 'name': 'callback_num', 'type': 'hex', 'tech': 'CDMA, TDMA, GSM, iDEN', 'min': 4}, # SMPP v3.4, section 5.3.2.36, page 155
'0420': {'hex': '0420', 'name': 'dpf_result', 'type': 'integer', 'tech': 'Generic'}, # SMPP v3.4, section 5.3.2.28, page 149
'0421': {'hex': '0421', 'name': 'set_dpf', 'type': 'integer', 'tech': 'Generic'}, # SMPP v3.4, section 5.3.2.29, page 150
'0422': {'hex': '0422', 'name': 'ms_availability_status', 'type': 'integer', 'tech': 'Generic'}, # SMPP v3.4, section 5.3.2.30, page 151
'0423': {'hex': '0423', 'name': 'network_error_code', 'type': 'hex', 'tech': 'Generic', 'min': 3}, # SMPP v3.4, section 5.3.2.31, page 152
'0424': {'hex': '0424', 'name': 'message_payload', 'type': 'hex', 'tech': 'Generic'}, # SMPP v3.4, section 5.3.2.32, page 153
'0425': {'hex': '0425', 'name': 'delivery_failure_reason', 'type': 'integer', 'tech': 'Generic'}, # SMPP v3.4, section 5.3.2.33, page 153
'0426': {'hex': '0426', 'name': 'more_messages_to_send', 'type': 'integer', 'tech': 'GSM'}, # SMPP v3.4, section 5.3.2.34, page 154
'0427': {'hex': '0427', 'name': 'message_state', 'type': 'integer', 'tech': 'Generic'}, # SMPP v3.4, section 5.3.2.35, page 154
'0428': {'hex': '0428', 'name': 'congestion_state', 'type': None, 'tech': 'Generic'},
'0501': {'hex': '0501', 'name': 'ussd_service_op', 'type': 'hex', 'tech': 'GSM (USSD)'}, # SMPP v3.4, section 5.3.2.44, page 161
'0600': {'hex': '0600', 'name': 'broadcast_channel_indicator', 'type': None, 'tech': 'GSM'},
'0601': {'hex': '0601', 'name': 'broadcast_content_type', 'type': None, 'tech': 'CDMA, TDMA, GSM'},
'0602': {'hex': '0602', 'name': 'broadcast_content_type_info', 'type': None, 'tech': 'CDMA, TDMA'},
'0603': {'hex': '0603', 'name': 'broadcast_message_class', 'type': None, 'tech': 'GSM'},
'0604': {'hex': '0604', 'name': 'broadcast_rep_num', 'type': None, 'tech': 'GSM'},
'0605': {'hex': '0605', 'name': 'broadcast_frequency_interval', 'type': None, 'tech': 'CDMA, TDMA, GSM'},
'0606': {'hex': '0606', 'name': 'broadcast_area_identifier', 'type': None, 'tech': 'CDMA, TDMA, GSM'},
'0607': {'hex': '0607', 'name': 'broadcast_error_status', 'type': None, 'tech': 'CDMA, TDMA, GSM'},
'0608': {'hex': '0608', 'name': 'broadcast_area_success', 'type': None, 'tech': 'GSM'},
'0609': {'hex': '0609', 'name': 'broadcast_end_time', 'type': None, 'tech': 'CDMA, TDMA, GSM'},
'060a': {'hex': '060a', 'name': 'broadcast_service_group', 'type': None, 'tech': 'CDMA, TDMA'},
'060b': {'hex': '060b', 'name': 'billing_identification', 'type': None, 'tech': 'Generic'},
'060d': {'hex': '060d', 'name': 'source_network_id', 'type': None, 'tech': 'Generic'},
'060e': {'hex': '060e', 'name': 'dest_network_id', 'type': None, 'tech': 'Generic'},
'060f': {'hex': '060f', 'name': 'source_node_id', 'type': None, 'tech': 'Generic'},
'0610': {'hex': '0610', 'name': 'dest_node_id', 'type': None, 'tech': 'Generic'},
'0611': {'hex': '0611', 'name': 'dest_addr_np_resolution', 'type': None, 'tech': 'CDMA, TDMA (US Only)'},
'0612': {'hex': '0612', 'name': 'dest_addr_np_information', 'type': None, 'tech': 'CDMA, TDMA (US Only)'},
'0613': {'hex': '0613', 'name': 'dest_addr_np_country', 'type': None, 'tech': 'CDMA, TDMA (US Only)'},
'1101': {'hex': '1101', 'name': 'PDC_MessageClass', 'type': None, 'tech': '? (J-Phone)'}, # v4 page 75
'1102': {'hex': '1102', 'name': 'PDC_PresentationOption', 'type': None, 'tech': '? (J-Phone)'}, # v4 page 76
'1103': {'hex': '1103', 'name': 'PDC_AlertMechanism', 'type': None, 'tech': '? (J-Phone)'}, # v4 page 76
'1104': {'hex': '1104', 'name': 'PDC_Teleservice', 'type': None, 'tech': '? (J-Phone)'}, # v4 page 77
'1105': {'hex': '1105', 'name': 'PDC_MultiPartMessage', 'type': None, 'tech': '? (J-Phone)'}, # v4 page 77
'1106': {'hex': '1106', 'name': 'PDC_PredefinedMsg', 'type': None, 'tech': '? (J-Phone)'}, # v4 page 78
'1201': {'hex': '1201', 'name': 'display_time', 'type': 'integer', 'tech': 'CDMA, TDMA'}, # SMPP v3.4, section 5.3.2.26, page 148
'1203': {'hex': '1203', 'name': 'sms_signal', 'type': 'integer', 'tech': 'TDMA', 'min': 2}, # SMPP v3.4, section 5.3.2.40, page 158
'1204': {'hex': '1204', 'name': 'ms_validity', 'type': 'integer', 'tech': 'CDMA, TDMA'}, # SMPP v3.4, section 5.3.2.27, page 149
'1304': {'hex': '1304', 'name': 'IS95A_AlertOnDelivery', 'type': None, 'tech': 'CDMA'}, # v4 page 85
'1306': {'hex': '1306', 'name': 'IS95A_LanguageIndicator', 'type': None, 'tech': 'CDMA'}, # v4 page 86
'130c': {'hex': '130c', 'name': 'alert_on_message_delivery', 'type': None, 'tech': 'CDMA'}, # SMPP v3.4, section 5.3.2.41, page 159
'1380': {'hex': '1380', 'name': 'its_reply_type', 'type': 'integer', 'tech': 'CDMA'}, # SMPP v3.4, section 5.3.2.42, page 159
'1383': {'hex': '1383', 'name': 'its_session_info', 'type': 'hex', 'tech': 'CDMA', 'min': 2}, # SMPP v3.4, section 5.3.2.43, page 160
'1402': {'hex': '1402', 'name': 'operator_id', 'type': None, 'tech': 'vendor extension'},
'1403': {'hex': '1403', 'name': 'tariff', 'type': None, 'tech': 'Mobile Network Code vendor extension'},
'1450': {'hex': '1450', 'name': 'mcc', 'type': None, 'tech': 'Mobile Country Code vendor extension'},
'1451': {'hex': '1451', 'name': 'mnc', 'type': None, 'tech': 'Mobile Network Code vendor extension'}
}
def optional_parameter_tag_name_by_hex(x):
return optional_parameter_tag_by_hex.get(x, {}).get('name')
def optional_parameter_tag_type_by_hex(x):
return optional_parameter_tag_by_hex.get(x, {}).get('type')
def optional_parameter_tag_min_by_hex(x):
return optional_parameter_tag_by_hex.get(x, {}).get('min', 0)
optional_parameter_tag_by_name = {
'dest_addr_subunit' :{ 'hex': '0005', 'name': 'dest_addr_subunit', 'type': 'integer', 'tech': 'GSM'}, # SMPP v3.4, section 5.3.2.1, page 134
'dest_network_type' :{ 'hex': '0006', 'name': 'dest_network_type', 'type': 'integer', 'tech': 'Generic'}, # SMPP v3.4, section 5.3.2.3, page 135
'dest_bearer_type' :{ 'hex': '0007', 'name': 'dest_bearer_type', 'type': 'integer', 'tech': 'Generic'}, # SMPP v3.4, section 5.3.2.5, page 136
'dest_telematics_id' :{ 'hex': '0008', 'name': 'dest_telematics_id', 'type': 'integer', 'tech': 'GSM'}, # SMPP v3.4, section 5.3.2.7, page 137
'source_addr_subunit' :{ 'hex': '000d', 'name': 'source_addr_subunit', 'type': 'integer', 'tech': 'GSM'}, # SMPP v3.4, section 5.3.2.2, page 134
'source_network_type' :{ 'hex': '000e', 'name': 'source_network_type', 'type': 'integer', 'tech': 'Generic'}, # SMPP v3.4, section 5.3.2.4, page 135
'source_bearer_type' :{ 'hex': '000f', 'name': 'source_bearer_type', 'type': 'integer', 'tech': 'Generic'}, # SMPP v3.4, section 5.3.2.6, page 136
'source_telematics_id' :{ 'hex': '0010', 'name': 'source_telematics_id', 'type': 'integer', 'tech': 'GSM'}, # SMPP v3.4, section 5.3.2.8, page 137
'qos_time_to_live' :{ 'hex': '0017', 'name': 'qos_time_to_live', 'type': 'integer', 'tech': 'Generic'}, # SMPP v3.4, section 5.3.2.9, page 138
'payload_type' :{ 'hex': '0019', 'name': 'payload_type', 'type': 'integer', 'tech': 'Generic'}, # SMPP v3.4, section 5.3.2.10, page 138
'additional_status_info_text' :{ 'hex': '001d', 'name': 'additional_status_info_text', 'type': 'string', 'tech': 'Generic'}, # SMPP v3.4, section 5.3.2.11, page 139
'receipted_message_id' :{ 'hex': '001e', 'name': 'receipted_message_id', 'type': 'string', 'tech': 'Generic'}, # SMPP v3.4, section 5.3.2.12, page 139
'ms_msg_wait_facilities' :{ 'hex': '0030', 'name': 'ms_msg_wait_facilities', 'type': 'bitmask', 'tech': 'GSM'}, # SMPP v3.4, section 5.3.2.13, page 140
'PVCY_AuthenticationStr' :{ 'hex': '0101', 'name': 'PVCY_AuthenticationStr', 'type': None, 'tech': '? (J-Phone)'}, # v4 page 58-62
'privacy_indicator' :{ 'hex': '0201', 'name': 'privacy_indicator', 'type': 'integer', 'tech': 'CDMA, TDMA'}, # SMPP v3.4, section 5.3.2.14, page 141
'source_subaddress' :{ 'hex': '0202', 'name': 'source_subaddress', 'type': 'hex', 'tech': 'CDMA, TDMA'}, # SMPP v3.4, section 5.3.2.15, page 142
'dest_subaddress' :{ 'hex': '0203', 'name': 'dest_subaddress', 'type': 'hex', 'tech': 'CDMA, TDMA'}, # SMPP v3.4, section 5.3.2.16, page 143
'user_message_reference' :{ 'hex': '0204', 'name': 'user_message_reference', 'type': 'integer', 'tech': 'Generic'}, # SMPP v3.4, section 5.3.2.17, page 143
'user_response_code' :{ 'hex': '0205', 'name': 'user_response_code', 'type': 'integer', 'tech': 'CDMA, TDMA'}, # SMPP v3.4, section 5.3.2.18, page 144
'source_port' :{ 'hex': '020a', 'name': 'source_port', 'type': 'integer', 'tech': 'Generic'}, # SMPP v3.4, section 5.3.2.20, page 145
'destination_port' :{ 'hex': '020b', 'name': 'destination_port', 'type': 'integer', 'tech': 'Generic'}, # SMPP v3.4, section 5.3.2.21, page 145
'sar_msg_ref_num' :{ 'hex': '020c', 'name': 'sar_msg_ref_num', 'type': 'integer', 'tech': 'Generic'}, # SMPP v3.4, section 5.3.2.22, page 146
'language_indicator' :{ 'hex': '020d', 'name': 'language_indicator', 'type': 'integer', 'tech': 'CDMA, TDMA'}, # SMPP v3.4, section 5.3.2.19, page 144
'sar_total_segments' :{ 'hex': '020e', 'name': 'sar_total_segments', 'type': 'integer', 'tech': 'Generic'}, # SMPP v3.4, section 5.3.2.23, page 147
'sar_segment_seqnum' :{ 'hex': '020f', 'name': 'sar_segment_seqnum', 'type': 'integer', 'tech': 'Generic'}, # SMPP v3.4, section 5.3.2.24, page 147
'sc_interface_version' :{ 'hex': '0210', 'name': 'sc_interface_version', 'type': 'integer', 'tech': 'Generic'}, # SMPP v3.4, section 5.3.2.25, page 148
'CC_CBN' :{ 'hex': '0301', 'name': 'CC_CBN', 'type': None, 'tech': 'V4'}, # v4 page 70
'callback_num_pres_ind' :{ 'hex': '0302', 'name': 'callback_num_pres_ind', 'type': 'bitmask', 'tech': 'TDMA'}, # SMPP v3.4, section 5.3.2.37, page 156
'callback_num_atag' :{ 'hex': '0303', 'name': 'callback_num_atag', 'type': 'hex', 'tech': 'TDMA'}, # SMPP v3.4, section 5.3.2.38, page 157
'number_of_messages' :{ 'hex': '0304', 'name': 'number_of_messages', 'type': 'integer', 'tech': 'CDMA'}, # SMPP v3.4, section 5.3.2.39, page 158
'callback_num' :{ 'hex': '0381', 'name': 'callback_num', 'type': 'hex', 'tech': 'CDMA, TDMA, GSM, iDEN'}, # SMPP v3.4, section 5.3.2.36, page 155
'dpf_result' :{ 'hex': '0420', 'name': 'dpf_result', 'type': 'integer', 'tech': 'Generic'}, # SMPP v3.4, section 5.3.2.28, page 149
'set_dpf' :{ 'hex': '0421', 'name': 'set_dpf', 'type': 'integer', 'tech': 'Generic'}, # SMPP v3.4, section 5.3.2.29, page 150
'ms_availability_status' :{ 'hex': '0422', 'name': 'ms_availability_status', 'type': 'integer', 'tech': 'Generic'}, # SMPP v3.4, section 5.3.2.30, page 151
'network_error_code' :{ 'hex': '0423', 'name': 'network_error_code', 'type': 'hex', 'tech': 'Generic'}, # SMPP v3.4, section 5.3.2.31, page 152
'message_payload' :{ 'hex': '0424', 'name': 'message_payload', 'type': 'hex', 'tech': 'Generic'}, # SMPP v3.4, section 5.3.2.32, page 153
'delivery_failure_reason' :{ 'hex': '0425', 'name': 'delivery_failure_reason', 'type': 'integer', 'tech': 'Generic'}, # SMPP v3.4, section 5.3.2.33, page 153
'more_messages_to_send' :{ 'hex': '0426', 'name': 'more_messages_to_send', 'type': 'integer', 'tech': 'GSM'}, # SMPP v3.4, section 5.3.2.34, page 154
'message_state' :{ 'hex': '0427', 'name': 'message_state', 'type': 'integer', 'tech': 'Generic'}, # SMPP v3.4, section 5.3.2.35, page 154
'congestion_state' :{ 'hex': '0428', 'name': 'congestion_state', 'type': None, 'tech': 'Generic'},
'ussd_service_op' :{ 'hex': '0501', 'name': 'ussd_service_op', 'type': 'hex', 'tech': 'GSM (USSD)'}, # SMPP v3.4, section 5.3.2.44, page 161
'broadcast_channel_indicator' :{ 'hex': '0600', 'name': 'broadcast_channel_indicator', 'type': None, 'tech': 'GSM'},
'broadcast_content_type' :{ 'hex': '0601', 'name': 'broadcast_content_type', 'type': None, 'tech': 'CDMA, TDMA, GSM'},
'broadcast_content_type_info' :{ 'hex': '0602', 'name': 'broadcast_content_type_info', 'type': None, 'tech': 'CDMA, TDMA'},
'broadcast_message_class' :{ 'hex': '0603', 'name': 'broadcast_message_class', 'type': None, 'tech': 'GSM'},
'broadcast_rep_num' :{ 'hex': '0604', 'name': 'broadcast_rep_num', 'type': None, 'tech': 'GSM'},
'broadcast_frequency_interval': {'hex': '0605', 'name': 'broadcast_frequency_interval', 'type': None, 'tech': 'CDMA, TDMA, GSM'},
'broadcast_area_identifier' :{ 'hex': '0606', 'name': 'broadcast_area_identifier', 'type': None, 'tech': 'CDMA, TDMA, GSM'},
'broadcast_error_status' :{ 'hex': '0607', 'name': 'broadcast_error_status', 'type': None, 'tech': 'CDMA, TDMA, GSM'},
'broadcast_area_success' :{ 'hex': '0608', 'name': 'broadcast_area_success', 'type': None, 'tech': 'GSM'},
'broadcast_end_time' :{ 'hex': '0609', 'name': 'broadcast_end_time', 'type': None, 'tech': 'CDMA, TDMA, GSM'},
'broadcast_service_group' :{ 'hex': '060a', 'name': 'broadcast_service_group', 'type': None, 'tech': 'CDMA, TDMA'},
'billing_identification' :{ 'hex': '060b', 'name': 'billing_identification', 'type': None, 'tech': 'Generic'},
'source_network_id' :{ 'hex': '060d', 'name': 'source_network_id', 'type': None, 'tech': 'Generic'},
'dest_network_id' :{ 'hex': '060e', 'name': 'dest_network_id', 'type': None, 'tech': 'Generic'},
'source_node_id' :{ 'hex': '060f', 'name': 'source_node_id', 'type': None, 'tech': 'Generic'},
'dest_node_id' :{ 'hex': '0610', 'name': 'dest_node_id', 'type': None, 'tech': 'Generic'},
'dest_addr_np_resolution' :{ 'hex': '0611', 'name': 'dest_addr_np_resolution', 'type': None, 'tech': 'CDMA, TDMA (US Only)'},
'dest_addr_np_information' :{ 'hex': '0612', 'name': 'dest_addr_np_information', 'type': None, 'tech': 'CDMA, TDMA (US Only)'},
'dest_addr_np_country' :{ 'hex': '0613', 'name': 'dest_addr_np_country', 'type': None, 'tech': 'CDMA, TDMA (US Only)'},
'PDC_MessageClass' :{ 'hex': '1101', 'name': 'PDC_MessageClass', 'type': None, 'tech': '? (J-Phone)'}, # v4 page 75
'PDC_PresentationOption' :{ 'hex': '1102', 'name': 'PDC_PresentationOption', 'type': None, 'tech': '? (J-Phone)'}, # v4 page 76
'PDC_AlertMechanism' :{ 'hex': '1103', 'name': 'PDC_AlertMechanism', 'type': None, 'tech': '? (J-Phone)'}, # v4 page 76
'PDC_Teleservice' :{ 'hex': '1104', 'name': 'PDC_Teleservice', 'type': None, 'tech': '? (J-Phone)'}, # v4 page 77
'PDC_MultiPartMessage' :{ 'hex': '1105', 'name': 'PDC_MultiPartMessage', 'type': None, 'tech': '? (J-Phone)'}, # v4 page 77
'PDC_PredefinedMsg' :{ 'hex': '1106', 'name': 'PDC_PredefinedMsg', 'type': None, 'tech': '? (J-Phone)'}, # v4 page 78
'display_time' :{ 'hex': '1201', 'name': 'display_time', 'type': 'integer', 'tech': 'CDMA, TDMA'}, # SMPP v3.4, section 5.3.2.26, page 148
'sms_signal' :{ 'hex': '1203', 'name': 'sms_signal', 'type': 'integer', 'tech': 'TDMA'}, # SMPP v3.4, section 5.3.2.40, page 158
'ms_validity' :{ 'hex': '1204', 'name': 'ms_validity', 'type': 'integer', 'tech': 'CDMA, TDMA'}, # SMPP v3.4, section 5.3.2.27, page 149
'IS95A_AlertOnDelivery' :{ 'hex': '1304', 'name': 'IS95A_AlertOnDelivery', 'type': None, 'tech': 'CDMA'}, # v4 page 85
'IS95A_LanguageIndicator' :{ 'hex': '1306', 'name': 'IS95A_LanguageIndicator', 'type': None, 'tech': 'CDMA'}, # v4 page 86
'alert_on_message_delivery' :{ 'hex': '130c', 'name': 'alert_on_message_delivery', 'type': None, 'tech': 'CDMA'}, # SMPP v3.4, section 5.3.2.41, page 159
'its_reply_type' :{ 'hex': '1380', 'name': 'its_reply_type', 'type': 'integer', 'tech': 'CDMA'}, # SMPP v3.4, section 5.3.2.42, page 159
'its_session_info' :{ 'hex': '1383', 'name': 'its_session_info', 'type': 'hex', 'tech': 'CDMA'}, # SMPP v3.4, section 5.3.2.43, page 160
'operator_id' :{ 'hex': '1402', 'name': 'operator_id', 'type': None, 'tech': 'vendor extension'},
'tariff' :{ 'hex': '1403', 'name': 'tariff', 'type': None, 'tech': 'Mobile Network Code vendor extension'},
'mcc' :{ 'hex': '1450', 'name': 'mcc', 'type': None, 'tech': 'Mobile Country Code vendor extension'},
'mnc' :{ 'hex': '1451', 'name': 'mnc', 'type': None, 'tech': 'Mobile Network Code vendor extension'}
}
def optional_parameter_tag_hex_by_name(n):
return optional_parameter_tag_by_name.get(n, {}).get('hex')
# Decoding functions #######################################################
def unpack_pdu(pdu_bin):
return decode_pdu(binascii.b2a_hex(pdu_bin))
def decode_pdu(pdu_hex):
hex_ref = [pdu_hex]
pdu = {}
pdu['header'] = decode_header(hex_ref)
command = pdu['header'].get('command_id', None)
if command is not None:
body = decode_body(command, hex_ref)
if len(body) > 0:
pdu['body'] = body
return pdu
def decode_header(hex_ref):
pdu_hex = hex_ref[0]
header = {}
(command_length, command_id, command_status, sequence_number, hex_ref[0]) = \
(pdu_hex[0:8], pdu_hex[8:16], pdu_hex[16:24], pdu_hex[24:32], pdu_hex[32:])
length = int(command_length, 16)
command = command_id_name_by_hex(command_id)
status = command_status_name_by_hex(command_status)
sequence = int(sequence_number, 16)
header = {}
header['command_length'] = length
header['command_id'] = command
header['command_status'] = status
header['sequence_number'] = sequence
return header
def decode_body(command, hex_ref):
body = {}
if command is not None:
fields = mandatory_parameter_list_by_command_name(command)
mandatory = decode_mandatory_parameters(fields, hex_ref)
if len(mandatory) > 0:
body['mandatory_parameters'] = mandatory
optional = decode_optional_parameters(hex_ref)
if len(optional) > 0:
body['optional_parameters'] = optional
return body
def decode_mandatory_parameters(fields, hex_ref):
mandatory_parameters = {}
if len(hex_ref[0]) > 1:
for field in fields:
# old = len(hex_ref[0])
data = ''
octet = ''
count = 0
if field['var'] is True or field['var'] is False:
while (len(hex_ref[0]) > 1 and (count < field['min'] or
(field['var'] is True and count < field['max']+1 and
octet != '00'))):
octet = octpop(hex_ref)
data += octet
count += 1
elif field['type'] in ['string', 'xstring']:
count = mandatory_parameters[field['var']]
if count == 0:
data = None
else:
for i in range(count):
if len(hex_ref[0]) > 1:
data += octpop(hex_ref)
else:
count = mandatory_parameters[field['var']]
if field['map'] is not None:
mandatory_parameters[field['name']] = maps[field['map']+'_by_hex'].get(data, None)
if field['map'] is None or mandatory_parameters[field['name']] is None:
mandatory_parameters[field['name']] = decode_hex_type(data, field['type'], count, hex_ref)
# print field['type'], (old - len(hex_ref[0]))/2, repr(data), field['name'], mandatory_parameters[field['name']]
return mandatory_parameters
def decode_optional_parameters(hex_ref):
optional_parameters = []
hex = hex_ref[0]
while len(hex) > 0:
if len(hex) < 8:
# We don't have enough data here for this to be a valid param.
# TODO: Something better than `print` here.
print "Invalid optional param data, ignoring: %s" % (hex,)
break
(tag_hex, length_hex, rest) = (hex[0:4], hex[4:8], hex[8:])
tag = optional_parameter_tag_name_by_hex(tag_hex)
if tag is None:
tag = tag_hex
length = int(length_hex, 16)
(value_hex, tail) = (rest[0:length*2], rest[length*2:])
if len(value_hex) == 0:
value = None
else:
value = decode_hex_type(value_hex, optional_parameter_tag_type_by_hex(tag_hex))
hex = tail
optional_parameters.append({'tag': tag, 'length': length, 'value': value})
return optional_parameters
def decode_hex_type(hex, type, count=0, hex_ref=['']):
if hex is None:
return hex
elif type == 'integer':
return int(hex, 16)
elif type == 'string':
return re.sub('00', '', hex).decode('hex')
elif type == 'xstring':
return hex.decode('hex')
elif (type == 'dest_address' or type == 'unsuccess_sme'):
list = []
fields = mandatory_parameter_list_by_command_name(type)
for i in range(count):
item = decode_mandatory_parameters(fields, hex_ref)
if item.get('dest_flag', None) == 1: # 'dest_address' only
subfields = mandatory_parameter_list_by_command_name('sme_dest_address')
rest = decode_mandatory_parameters(subfields, hex_ref)
item.update(rest)
elif item.get('dest_flag', None) == 2: # 'dest_address' only
subfields = mandatory_parameter_list_by_command_name('distribution_list')
rest = decode_mandatory_parameters(subfields, hex_ref)
item.update(rest)
list.append(item)
return list
else:
return hex
def octpop(hex_ref):
octet = None
if len(hex_ref[0]) > 1:
(octet, hex_ref[0]) = (hex_ref[0][0:2], hex_ref[0][2:])
return octet
# Encoding functions
def pack_pdu(pdu_obj):
return binascii.a2b_hex(encode_pdu(pdu_obj))
def encode_pdu(pdu_obj):
header = pdu_obj.get('header', {})
body = pdu_obj.get('body', {})
mandatory = body.get('mandatory_parameters', {})
optional = body.get('optional_parameters', [])
body_hex = ''
fields = mandatory_parameter_list_by_command_name(header['command_id'])
body_hex += encode_mandatory_parameters(mandatory, fields)
for opt in optional:
body_hex += encode_optional_parameter(opt['tag'], opt['value'])
actual_length = 16 + len(body_hex)/2
command_length = '%08x' % actual_length
command_id = command_id_hex_by_name(header['command_id'])
command_status = command_status_hex_by_name(header['command_status'])
sequence_number = '%08x' % header['sequence_number']
pdu_hex = command_length + command_id + command_status + sequence_number + body_hex
return pdu_hex
def encode_mandatory_parameters(mandatory_obj, fields):
mandatory_hex_array = []
index_names = {}
index = 0
for field in fields:
param = mandatory_obj.get(field['name'], None)
param_length = None
if param is not None or field['min'] > 0:
map = None
if field['map'] is not None:
map = maps.get(field['map']+'_by_name', None)
if isinstance(param, list):
hex_list = []
for item in param:
flagfields = mandatory_parameter_list_by_command_name(field['type'])
plusfields = []
if item.get('dest_flag', None) == 1:
plusfields = mandatory_parameter_list_by_command_name('sme_dest_address')
elif item.get('dest_flag', None) == 2:
plusfields = mandatory_parameter_list_by_command_name('distribution_list')
hex_item = encode_mandatory_parameters(item, flagfields + plusfields)
if isinstance(hex_item, str) and len(hex_item) > 0:
hex_list.append(hex_item)
param_length = len(hex_list)
mandatory_hex_array.append(''.join(hex_list))
else:
hex_param = encode_param_type(
param, field['type'], field['min'], field['max'], map)
param_length = len(hex_param)/2
mandatory_hex_array.append(hex_param)
index_names[field['name']] = index
length_index = index_names.get(field['var'], None)
if length_index is not None and param_length is not None:
mandatory_hex_array[length_index] = encode_param_type(
param_length,
'integer',
len(mandatory_hex_array[length_index])/2)
index += 1
return ''.join(mandatory_hex_array)
def encode_optional_parameter(tag, value):
optional_hex_array = []
tag_hex = optional_parameter_tag_hex_by_name(tag)
if tag_hex is not None:
value_hex = encode_param_type(
value,
optional_parameter_tag_type_by_hex(tag_hex),
optional_parameter_tag_min_by_hex(tag_hex),
)
length_hex = '%04x' % (len(value_hex)/2)
optional_hex_array.append(tag_hex + length_hex + value_hex)
return ''.join(optional_hex_array)
def encode_param_type(param, type, min=0, max=None, map=None):
if param is None:
hex = None
elif map is not None:
if type == 'integer' and isinstance(param, int):
hex = ('%0'+str(min*2)+'x') % param
else:
hex = map.get(param, ('%0'+str(min*2)+'x') % 0)
elif type == 'integer':
hex = ('%0'+str(min*2)+'x') % int(param)
elif type == 'string':
hex = param.encode('hex') + '00'
elif type == 'xstring':
hex = param.encode('hex')
elif type == 'bitmask':
hex = param
elif type == 'hex':
hex = param
else:
hex = None
if hex:
if len(hex) % 2:
# pad odd length hex strings
hex = '0' + hex
if None not in (max, hex) and len(hex) > 2 * max:
raise ValueError("Value exceeds maximum size of %s." % (max,))
return hex
| smpp/__init__.py | 78,858 | Inserting certain referenced dicts in here means they can be declared in the same order as in the spec. SMPP PDU Definition - SMPP v3.4, section 4, page 45 SMPP v3.4, section 4.1.1, table 4-1, page 46 SMPP v3.4, section 4.1.2, table 4-2, page 47 SMPP v3.4, section 4.1.3, table 4-3, page 48 SMPP v3.4, section 4.1.4, table 4-4, page 50 SMPP v3.4, section 4.1.5, table 4-5, page 51 SMPP v3.4, section 4.1.6, table 4-6, page 52 SMPP v3.4, section 4.1.7.1, page 54 SMPP v3.4, section 4.2.1, table 4-7, page 56 SMPP v3.4, section 4.2.2, table 4-8, page 56 SMPP v3.4, section 4.3.1, table 4-9, page 57 SMPP v3.4, section 4.4.1, table 4-10, page 59-61 SMPP v3.4, section 4.4.2, table 4-11, page 67 SMPP v3.4, section 4.5.1, table 4-12, page 69-71 SMPP v3.4, section 4.5.1.1, table 4-13, page 75 'sme_dest_address' or 'distribution_list' goes here SMPP v3.4, section 4.5.1.1, table 4-14, page 75 SMPP v3.4, section 4.5.1.2, table 4-15, page 75 SMPP v3.4, section 4.5.2, table 4-16, page 76 SMPP v3.4, section 4.5.2.1, table 4-17, page 77 SMPP v3.4, section 4.6.1, table 4-18, page 79-81 SMPP v3.4, section 4.6.2, table 4-19, page 85 SMPP v3.4, section 4.7.1, table 4-20, page 87-88 SMPP v3.4, section 4.7.2, table 4-21, page 93 SMPP v3.4, section 4.8.1, table 4-22, page 95 SMPP v3.4, section 4.7.2, table 4-21, page 93 SMPP v3.4, section 4.9.1, table 4-24, page 98-99 SMPP v3.4, section 4.9.2, table 4-25, page 100 SMPP v3.4, section 4.10.1, table 4-26, page 102-103 SMPP v3.4, section 4.10.2, table 4-27, page 104 SMPP v3.4, section 4.11.1, table 4-28, page 106 SMPP v3.4, section 4.11.2, table 4-29, page 106 SMPP v3.4, section 4.12.1, table 4-30, page 108 Command IDs - SMPP v3.4, section 5.1.2.1, table 5-1, page 110-111 v4 codes v4 codes SMPP Error Codes (ESME) - SMPP v3.4, section 5.1.3, table 5-2, page 112-114 Type of Number (TON) - SMPP v3.4, section 5.2.5, table 5-3, page 117 Numberic Plan Indicator (NPI) - SMPP v3.4, section 5.2.6, table 5-4, page 118 ESM Class bits - SMPP v3.4, section 5.2.12, page 121 Registered Delivery bits - SMPP v3.4, section 5.2.17, page 124 submit_multi dest_flag constants - SMPP v3.4, section 5.2.25, page 129 maps['dest_flag_by_name'] = { 'SME Address' :1, 'Distribution List Name': 2 } Message State codes returned in query_sm_resp PDUs - SMPP v3.4, section 5.2.28, table 5-6, page 130 Facility Code bits for SMPP v4 Optional Parameter Tags - SMPP v3.4, section 5.3.2, Table 5-7, page 132-133 SMPP v3.4, section 5.3.2.1, page 134 SMPP v3.4, section 5.3.2.3, page 135 SMPP v3.4, section 5.3.2.5, page 136 SMPP v3.4, section 5.3.2.7, page 137 SMPP v3.4, section 5.3.2.2, page 134 SMPP v3.4, section 5.3.2.4, page 135 SMPP v3.4, section 5.3.2.6, page 136 SMPP v3.4, section 5.3.2.8, page 137 SMPP v3.4, section 5.3.2.9, page 138 SMPP v3.4, section 5.3.2.10, page 138 SMPP v3.4, section 5.3.2.11, page 139 SMPP v3.4, section 5.3.2.12, page 139 SMPP v3.4, section 5.3.2.13, page 140 v4 page 58-62 SMPP v3.4, section 5.3.2.14, page 141 SMPP v3.4, section 5.3.2.15, page 142 SMPP v3.4, section 5.3.2.16, page 143 SMPP v3.4, section 5.3.2.17, page 143 SMPP v3.4, section 5.3.2.18, page 144 SMPP v3.4, section 5.3.2.20, page 145 SMPP v3.4, section 5.3.2.21, page 145 SMPP v3.4, section 5.3.2.22, page 146 SMPP v3.4, section 5.3.2.19, page 144 SMPP v3.4, section 5.3.2.23, page 147 SMPP v3.4, section 5.3.2.24, page 147 SMPP v3.4, section 5.3.2.25, page 148 v4 page 70 SMPP v3.4, section 5.3.2.37, page 156 SMPP v3.4, section 5.3.2.38, page 157 SMPP v3.4, section 5.3.2.39, page 158 SMPP v3.4, section 5.3.2.36, page 155 SMPP v3.4, section 5.3.2.28, page 149 SMPP v3.4, section 5.3.2.29, page 150 SMPP v3.4, section 5.3.2.30, page 151 SMPP v3.4, section 5.3.2.31, page 152 SMPP v3.4, section 5.3.2.32, page 153 SMPP v3.4, section 5.3.2.33, page 153 SMPP v3.4, section 5.3.2.34, page 154 SMPP v3.4, section 5.3.2.35, page 154 SMPP v3.4, section 5.3.2.44, page 161 v4 page 75 v4 page 76 v4 page 76 v4 page 77 v4 page 77 v4 page 78 SMPP v3.4, section 5.3.2.26, page 148 SMPP v3.4, section 5.3.2.40, page 158 SMPP v3.4, section 5.3.2.27, page 149 v4 page 85 v4 page 86 SMPP v3.4, section 5.3.2.41, page 159 SMPP v3.4, section 5.3.2.42, page 159 SMPP v3.4, section 5.3.2.43, page 160 SMPP v3.4, section 5.3.2.1, page 134 SMPP v3.4, section 5.3.2.3, page 135 SMPP v3.4, section 5.3.2.5, page 136 SMPP v3.4, section 5.3.2.7, page 137 SMPP v3.4, section 5.3.2.2, page 134 SMPP v3.4, section 5.3.2.4, page 135 SMPP v3.4, section 5.3.2.6, page 136 SMPP v3.4, section 5.3.2.8, page 137 SMPP v3.4, section 5.3.2.9, page 138 SMPP v3.4, section 5.3.2.10, page 138 SMPP v3.4, section 5.3.2.11, page 139 SMPP v3.4, section 5.3.2.12, page 139 SMPP v3.4, section 5.3.2.13, page 140 v4 page 58-62 SMPP v3.4, section 5.3.2.14, page 141 SMPP v3.4, section 5.3.2.15, page 142 SMPP v3.4, section 5.3.2.16, page 143 SMPP v3.4, section 5.3.2.17, page 143 SMPP v3.4, section 5.3.2.18, page 144 SMPP v3.4, section 5.3.2.20, page 145 SMPP v3.4, section 5.3.2.21, page 145 SMPP v3.4, section 5.3.2.22, page 146 SMPP v3.4, section 5.3.2.19, page 144 SMPP v3.4, section 5.3.2.23, page 147 SMPP v3.4, section 5.3.2.24, page 147 SMPP v3.4, section 5.3.2.25, page 148 v4 page 70 SMPP v3.4, section 5.3.2.37, page 156 SMPP v3.4, section 5.3.2.38, page 157 SMPP v3.4, section 5.3.2.39, page 158 SMPP v3.4, section 5.3.2.36, page 155 SMPP v3.4, section 5.3.2.28, page 149 SMPP v3.4, section 5.3.2.29, page 150 SMPP v3.4, section 5.3.2.30, page 151 SMPP v3.4, section 5.3.2.31, page 152 SMPP v3.4, section 5.3.2.32, page 153 SMPP v3.4, section 5.3.2.33, page 153 SMPP v3.4, section 5.3.2.34, page 154 SMPP v3.4, section 5.3.2.35, page 154 SMPP v3.4, section 5.3.2.44, page 161 v4 page 75 v4 page 76 v4 page 76 v4 page 77 v4 page 77 v4 page 78 SMPP v3.4, section 5.3.2.26, page 148 SMPP v3.4, section 5.3.2.40, page 158 SMPP v3.4, section 5.3.2.27, page 149 v4 page 85 v4 page 86 SMPP v3.4, section 5.3.2.41, page 159 SMPP v3.4, section 5.3.2.42, page 159 SMPP v3.4, section 5.3.2.43, page 160 Decoding functions old = len(hex_ref[0]) print field['type'], (old - len(hex_ref[0]))/2, repr(data), field['name'], mandatory_parameters[field['name']] We don't have enough data here for this to be a valid param. TODO: Something better than `print` here. 'dest_address' only 'dest_address' only Encoding functions pad odd length hex strings | 6,344 | en | 0.240071 |
# Copyright 2018 Iguazio
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
from base64 import b64encode
from nuclio.build import mlrun_footer
import mlrun
from ..model import ModelObj
from ..utils import generate_object_uri
from .utils import enrich_function_from_dict
class FunctionReference(ModelObj):
"""function reference/template, point to function and add/override resources"""
def __init__(
self,
url=None,
image=None,
requirements=None,
code=None,
spec=None,
kind=None,
name=None,
):
self.url = url
self.kind = kind
self.image = image
self.requirements = requirements
self.name = name
if hasattr(spec, "to_dict"):
spec = spec.to_dict()
self.spec = spec
self.code = code
self._function = None
self._address = None
def is_empty(self):
if self.url or self.code or self.spec:
return False
return True
def fullname(self, parent):
return f"{parent.metadata.name}-{self.name}"
def uri(self, parent, tag=None, hash_key=None, fullname=True):
name = self.fullname(parent) if fullname else self.name
return generate_object_uri(
parent.metadata.project,
name,
tag=tag or parent.metadata.tag,
hash_key=hash_key,
)
@property
def function_object(self):
"""get the generated function object"""
return self._function
def to_function(self, default_kind=None):
"""generate a function object from the ref definitions"""
if self.url and "://" not in self.url:
if not os.path.isfile(self.url):
raise OSError(f"{self.url} not found")
kind = self.kind or default_kind
if self.url:
if (
self.url.endswith(".yaml")
or self.url.startswith("db://")
or self.url.startswith("hub://")
):
func = mlrun.import_function(self.url)
if self.image:
func.spec.image = self.image
elif self.url.endswith(".ipynb"):
func = mlrun.code_to_function(
self.name, filename=self.url, image=self.image, kind=kind
)
elif self.url.endswith(".py"):
# todo: support code text as input (for UI)
if not self.image:
raise ValueError(
"image must be provided with py code files, "
"use function object for more control/settings"
)
func = mlrun.code_to_function(
self.name, filename=self.url, image=self.image, kind=kind
)
else:
raise ValueError(f"unsupported function url {self.url} or no spec")
if self.spec:
func = enrich_function_from_dict(func, self.spec)
elif self.code is not None:
code = self.code
if kind == mlrun.runtimes.RuntimeKinds.serving:
code = code + mlrun_footer.format(
mlrun.runtimes.serving.serving_subkind
)
func = mlrun.new_function(self.name, kind=kind, image=self.image)
data = b64encode(code.encode("utf-8")).decode("utf-8")
func.spec.build.functionSourceCode = data
if kind not in mlrun.runtimes.RuntimeKinds.nuclio_runtimes():
func.spec.default_handler = "handler"
if self.spec:
func = enrich_function_from_dict(func, self.spec)
elif self.spec:
func = mlrun.new_function(self.name, runtime=self.spec)
else:
raise ValueError("url or spec or code must be specified")
if self.requirements:
func.with_requirements(self.requirements)
self._function = func
return func
@property
def address(self):
return self._address
def deploy(self, **kwargs):
"""deploy the function"""
self._address = self._function.deploy(**kwargs)
return self._address
| mlrun/runtimes/function_reference.py | 4,734 | function reference/template, point to function and add/override resources
deploy the function
get the generated function object
generate a function object from the ref definitions
Copyright 2018 Iguazio Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. todo: support code text as input (for UI) | 766 | en | 0.795771 |
# -*- coding: utf-8 -*-
# Copyright 2019 Cohesity Inc.
class HypervBackupEnvParams(object):
"""Implementation of the 'HyperVBackupEnvParams' model.
Message to capture any additional backup params for a HyperV environment.
Attributes:
allow_crash_consistent_snapshot (bool): Whether to fallback to take a
crash-consistent snapshot incase taking an app-consistent snapshot
fails.
"""
# Create a mapping from Model property names to API property names
_names = {
"allow_crash_consistent_snapshot":'allowCrashConsistentSnapshot'
}
def __init__(self,
allow_crash_consistent_snapshot=None):
"""Constructor for the HypervBackupEnvParams class"""
# Initialize members of the class
self.allow_crash_consistent_snapshot = allow_crash_consistent_snapshot
@classmethod
def from_dictionary(cls,
dictionary):
"""Creates an instance of this model from a dictionary
Args:
dictionary (dictionary): A dictionary representation of the object as
obtained from the deserialization of the server's response. The keys
MUST match property names in the API description.
Returns:
object: An instance of this structure class.
"""
if dictionary is None:
return None
# Extract variables from the dictionary
allow_crash_consistent_snapshot = dictionary.get('allowCrashConsistentSnapshot')
# Return an object of this model
return cls(allow_crash_consistent_snapshot)
| cohesity_management_sdk/models/hyperv_backup_env_params.py | 1,631 | Implementation of the 'HyperVBackupEnvParams' model.
Message to capture any additional backup params for a HyperV environment.
Attributes:
allow_crash_consistent_snapshot (bool): Whether to fallback to take a
crash-consistent snapshot incase taking an app-consistent snapshot
fails.
Constructor for the HypervBackupEnvParams class
Creates an instance of this model from a dictionary
Args:
dictionary (dictionary): A dictionary representation of the object as
obtained from the deserialization of the server's response. The keys
MUST match property names in the API description.
Returns:
object: An instance of this structure class.
-*- coding: utf-8 -*- Copyright 2019 Cohesity Inc. Create a mapping from Model property names to API property names Initialize members of the class Extract variables from the dictionary Return an object of this model | 890 | en | 0.749928 |
""" This file contains SPRKKRAtoms - an enhanced version of Atoms to be used
with SPRKKR """
from ase import Atoms
from ..common.unique_values import UniqueValuesMapping
import spglib
from ase.spacegroup import Spacegroup
import numpy as np
from ..sprkkr.sites import Site
from ..common.misc import numpy_index
class SPRKKRAtoms(Atoms):
""" ASE Atoms object extended by the data necessary for SPR-KKR calculations """
@staticmethod
def promote_ase_atoms(obj, symmetry=None):
""" Convert ASE Atoms object to the one usable by SPRKKR.
For the case of the usability it is a bit ugly hack: The __class__ attribute
is replaced so the extra methods and properties of the objects will
be available.
Parameters
----------
obj: ase.Atoms
The atoms object to be promoted to be used for SPRKKR calculations
symmetry: boolean or None
The sites property of the resulting object will consider the symmetry of the structure.
I.e., the by-symmetry-equal atomic sites will share the same sites object.
Default None is the same as True, however it does not change the symmetry
of the already promoted obj passed into the routine.
"""
if obj and not isinstance(obj, SPRKKRAtoms):
if obj.__class__ is Atoms:
obj.__class__ = SPRKKRAtoms
else:
if not isinstance(obj, Atoms):
raise(f'Can not promote class {obj} of class {obj.__class__} to {SPRKKRAtoms}')
class SprKKrAtomsEx(obj.__class__, SPRKKRAtoms):
pass
obj.__class__ = SprKKrAtomsEx
obj._init(True if symmetry is None else symmetry)
else:
if symmetry is not None:
obj.symmetry = symmetry
return obj
def __init__(self, *args, symmetry=True, potential=None, **kwargs):
"""
Creates SPRKKRAtoms atoms
Parameters
----------
*args: list
The positionals arguments of ase.Atoms.__init__
symmetry: boolean
The symmetry will be computed when the sites property will be initialized.
I.e., the by-symmetry-equal atomic sites will share the same sites object.
**kwargs: dict
The named arguments of ase.Atoms.__init__
"""
self._init(symmetry, potential)
super().__init__(*args, **kwargs)
def _init(self, symmetry=True, potential=None):
""" The initialization of the additional (not-in-ASE) properties. To be used
by constructor and by promote_ase_atoms"""
self._unique_sites = None
self._potential = potential
self._symmetry = symmetry
@property
def symmetry(self):
"""
Whether the sites property is/will be generated using symmetry, i.e.
whether the Sites objects in the sites property will be shared among
symmetric atomic sites.
"""
return self._symmetry
@symmetry.setter
def symmetry(self, value):
"""
Recomputes the sites with enabled/disabled symmetry if the value of the property
has changed.
"""
if self._symmetry == value:
return
self._symmetry = value
if self._unique_sites is not None:
if value:
self._compute_sites_symmetry()
else:
self._cancel_sites_symmetry()
def compute_spacegroup_for_atomic_numbers(self, atomic_numbers=None, symprec=1e-5):
""" Return spacegroup that suits to the atoms' cell structure and to the given
atomic_numbers (not necessary the real ones, they can be just ''labels'').
"""
atomic_numbers = atomic_numbers if atomic_numbers is not None else self.get_atomic_numbers()
sg = spglib.get_spacegroup((self.get_cell(),
self.get_scaled_positions(),
atomic_numbers),
symprec=symprec)
if sg is None:
return None
sg_no = int(sg[sg.find('(') + 1:sg.find(')')])
spacegroup = Spacegroup(sg_no)
return spacegroup
def compute_sites_symmetry(self, spacegroup=None, atomic_numbers=None, consider_old=False, symprec=1e-5):
""" SPRKKR has some properties shared by all by-symmetry-equal sites.
This method initializes _sites property, that holds these properties:
makes identical all the atoms on the "symmetry identical positions" with
the same atomic number.
The method is called automatically when the sites property is firstly accessed.
The effect of the method is the nearly same as setting the symmetry property.
However, setting the symmetry property on an 'already symmetrized' object has
no effect, while this methods always recompute the sites property.
Parameters
----------
spacegroup: Spacegroup
If not None, the given spacegroup is used for determining the symmetry,
instead of the one determined by cell geometry.
atomic_numbers: [ int ]
Atomic numbers used to determine the spacegroup (if it is not given) to compute
the symmetry. The atomic numbers can be ''virtual'', just to denote the equivalence
of the sites.
The array should have the same length as the number of atoms in the unit cell.
If None, self.symbols are used.
consider_old: bool
If True, and _unique_sites is not None, the non-symmetry-equivalent sites won't
be equivalent in the newly computed symmetry.
symprec: float
A threshold for spatial error for the symmetry computing. See spglib.get_spacegroup
"""
self._symmetry = True
SPRKKRAtoms._compute_sites_symmetry(**locals())
def _compute_sites_symmetry(self, spacegroup=None, atomic_numbers=None, consider_old=False, symprec=1e-5):
""" See compute_sites_symmetry - this metod does just the same, but it does not set the symmetry property."""
occupation = self.info.get('occupancy', {})
if not spacegroup and self._symmetry:
if atomic_numbers:
mapping = UniqueValuesMapping(atomic_numbers)
else:
mapping = UniqueValuesMapping(self.get_atomic_numbers())
if consider_old and self._unique_sites:
mapping = mapping.merge(self._unique_sites)
if occupation:
def gen_occ():
for i in range(len(mapping)):
val = occupation.get(i, None)
if val is None:
yield val
else:
yield tuple((k, val[k]) for k in val)
mapping = mapping.merge(gen_occ())
spacegroup = self.compute_spacegroup_for_atomic_numbers(mapping.mapping, symprec=symprec)
self.info['spacegroup'] = spacegroup
if not spacegroup:
return self.cancel_sites_symmetry()
tags = spacegroup.tag_sites(self.get_scaled_positions())
mapping = mapping.merge( tags )
tags = mapping.mapping
sites = np.empty(len(tags), dtype=object)
uniq, umap = np.unique(tags, return_inverse = True)
used = set()
for i in range(len(uniq)):
index = umap == i
if self._unique_sites is not None:
#first non-none of the given index
possible = (i for i in self._unique_sites[index])
site = next(filter(None, possible), None)
if site in used:
site = site.copy()
else:
used.add(site)
else:
site = None
if not site:
symbol = self.symbols[ numpy_index(umap,i)]
for ai in np.where(index)[0]:
if ai in occupation and occupation[ai]:
symbol = occupation[ai]
site = Site(self, symbol)
sites[index] = site
self.sites = sites
def cancel_sites_symmetry(self):
""" Cancel the use of symmetry in the structure, i.e., makes the Site object
uniqe (not shared) for each atomic site.
Calling this method is nearly equivalent to the setting the symmetry property
to False, however, this method always recompute the sites object, while
setting symmetry=False recomputes the sites property only if it was previously
set to False.
"""
self._symmetry = False
self._cancel_sites_symmetry()
def _cancel_sites_symmetry(self):
""" See cancel_sites_symmetry - this metod does just the same, but it does not set the symmetry property."""
sites = np.empty(len(self), dtype=object)
used = set()
occupation = self.info.get('occupancy', {})
for i in range(len(self)):
if self._unique_sites is not None:
site=self._unique_sites[i]
if site in used:
site = site.copy()
else:
used.add(site)
else:
symbol = occupation[i] if i in occupation and occupation[i] else \
self.symbols[i]
site = Site(self, symbol)
sites[i] = site
self.sites = sites
@property
def sites(self):
""" The sites property holds all the information for the SPR-KKR package:
atomic types (including number of semicore and valence electrons),
occupancy, symmetries, meshes...
Some of the properties are stored in the ASE atoms properties
(e.g. occupancy, atomic symbol), however, ASE is not able to hold them
all and/or to describe fully the SPR-KKR options; thus, these properties
are hold in this array.
The changes made on this array are reflected (as is possible)
to the ASE properties, but the opposite does not hold - to reflect the changes
in these properties please create a new Atoms object with given properties.
"""
if self._unique_sites is None:
self._compute_sites_symmetry()
return self._unique_sites
@sites.setter
def sites(self, v):
""" Set the sites property and update all other dependent
properties (symbols, occupancy) according to the sites """
an = np.zeros(len(v), dtype= int)
occ = {}
for i,j in enumerate(v):
occ[i] = j.occupation.as_dict
an[i] = j.occupation.primary_atomic_number
self.set_atomic_numbers(an)
self.info['occupancy'] = occ
self._unique_sites = v
@property
def potential(self):
if self._potential is None:
self._potential = potentials.Potential.from_atoms(self)
return self._potential
@potential.setter
def potential(self, potential):
self._potential = potential
def reset_sprkkr_potential(self):
for i in self.sites:
i.reset()
if self._potential:
self._potential.reset(update_atoms = False)
self._potential.set_from_atoms()
#at the last - to avoid circular imports
from ..potentials import potentials
| src/ase2sprkkr/sprkkr/sprkkr_atoms.py | 11,396 | ASE Atoms object extended by the data necessary for SPR-KKR calculations
Creates SPRKKRAtoms atoms
Parameters
----------
*args: list
The positionals arguments of ase.Atoms.__init__
symmetry: boolean
The symmetry will be computed when the sites property will be initialized.
I.e., the by-symmetry-equal atomic sites will share the same sites object.
**kwargs: dict
The named arguments of ase.Atoms.__init__
See cancel_sites_symmetry - this metod does just the same, but it does not set the symmetry property.
See compute_sites_symmetry - this metod does just the same, but it does not set the symmetry property.
The initialization of the additional (not-in-ASE) properties. To be used
by constructor and by promote_ase_atoms
Cancel the use of symmetry in the structure, i.e., makes the Site object
uniqe (not shared) for each atomic site.
Calling this method is nearly equivalent to the setting the symmetry property
to False, however, this method always recompute the sites object, while
setting symmetry=False recomputes the sites property only if it was previously
set to False.
SPRKKR has some properties shared by all by-symmetry-equal sites.
This method initializes _sites property, that holds these properties:
makes identical all the atoms on the "symmetry identical positions" with
the same atomic number.
The method is called automatically when the sites property is firstly accessed.
The effect of the method is the nearly same as setting the symmetry property.
However, setting the symmetry property on an 'already symmetrized' object has
no effect, while this methods always recompute the sites property.
Parameters
----------
spacegroup: Spacegroup
If not None, the given spacegroup is used for determining the symmetry,
instead of the one determined by cell geometry.
atomic_numbers: [ int ]
Atomic numbers used to determine the spacegroup (if it is not given) to compute
the symmetry. The atomic numbers can be ''virtual'', just to denote the equivalence
of the sites.
The array should have the same length as the number of atoms in the unit cell.
If None, self.symbols are used.
consider_old: bool
If True, and _unique_sites is not None, the non-symmetry-equivalent sites won't
be equivalent in the newly computed symmetry.
symprec: float
A threshold for spatial error for the symmetry computing. See spglib.get_spacegroup
Return spacegroup that suits to the atoms' cell structure and to the given
atomic_numbers (not necessary the real ones, they can be just ''labels'').
Convert ASE Atoms object to the one usable by SPRKKR.
For the case of the usability it is a bit ugly hack: The __class__ attribute
is replaced so the extra methods and properties of the objects will
be available.
Parameters
----------
obj: ase.Atoms
The atoms object to be promoted to be used for SPRKKR calculations
symmetry: boolean or None
The sites property of the resulting object will consider the symmetry of the structure.
I.e., the by-symmetry-equal atomic sites will share the same sites object.
Default None is the same as True, however it does not change the symmetry
of the already promoted obj passed into the routine.
The sites property holds all the information for the SPR-KKR package:
atomic types (including number of semicore and valence electrons),
occupancy, symmetries, meshes...
Some of the properties are stored in the ASE atoms properties
(e.g. occupancy, atomic symbol), however, ASE is not able to hold them
all and/or to describe fully the SPR-KKR options; thus, these properties
are hold in this array.
The changes made on this array are reflected (as is possible)
to the ASE properties, but the opposite does not hold - to reflect the changes
in these properties please create a new Atoms object with given properties.
Set the sites property and update all other dependent
properties (symbols, occupancy) according to the sites
Whether the sites property is/will be generated using symmetry, i.e.
whether the Sites objects in the sites property will be shared among
symmetric atomic sites.
Recomputes the sites with enabled/disabled symmetry if the value of the property
has changed.
This file contains SPRKKRAtoms - an enhanced version of Atoms to be used
with SPRKKR
first non-none of the given indexat the last - to avoid circular imports | 4,324 | en | 0.838233 |
"""
Open Orchestrator Cloud Radio Access Network
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
from django import forms
from .models import Image
class ImageForm(forms.ModelForm):
def __init__(self, *args, **kwargs):
super(ImageForm, self).__init__(*args, **kwargs)
self.fields['architecture'] = forms.ChoiceField(required=True,
choices=[('amd64', 'amd64'),
('i386', 'i386')])
self.fields['format'] = forms.ChoiceField(required=True,
widget=forms.Select(attrs={"onChange": 'select(this);'}),
choices=[('OpenStack', 'OpenStack'),
('Azure','Azure'),
('AWS','AWS'),
('GCE', 'GCE'),
("Libvirt", "Libvirt"),
("VirtualBox", "VirtualBox"),
("Docker", "Docker"),])
class Meta:
model = Image
fields = [
"name",
"version",
"format",
"architecture",
]
| oocran/django/images/forms.py | 1,952 | Open Orchestrator Cloud Radio Access Network
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. | 568 | en | 0.862218 |
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Mon Jul 8 23:53:58 2019
@author: yanyanyu
"""
from spark import start_spark
from pyspark import SparkConf
from pyspark import SparkFiles
from pyspark.sql import Row
def main():
spark,conf=start_spark()
steps_per_floor_=conf['steps_per_floor']
pass
def extract(spark):
df=spark.read.parquet('tests/test_data/employees')
return df
def transform(df,steps_per_floor_,spark):
df.createOrReplaceTempView("table1")
df_transformed=spark.sql("select id, concat(first_name,' ' , second_name) as name, floor* %s as steps_to_desk from table1"%steps_per_floor_)
return df_transformed
def load(df):
df.coalesce(1).write.csv('loaded_data', mode='overwrite', header=True)
def create_test_data(spark,conf):
local_records=[
Row(id=1, first_name='nancy', second_name="yan", floor=1),
Row(id=2, first_name='Dan', second_name='Sommerville', floor=1),
Row(id=3, first_name='Alex', second_name='Ioannides', floor=2),
Row(id=4, first_name='Ken', second_name='Lai', floor=2),
Row(id=5, first_name='Stu', second_name='White', floor=3),
Row(id=6, first_name='Mark', second_name='Sweeting', floor=3),
Row(id=7, first_name='Phil', second_name='Bird', floor=4),
Row(id=8, first_name='Kim', second_name='Suter', floor=4)
]
df=spark.createDataFrame(local_records)
df_tf=transform(df,conf['steps_per_floor'],spark)
df_tf.coalesce(1).write.parquet('tests/test_data/employees_report',mode='overwrite')
| examples_pyspark/pyspark_small_project/etl_job.py | 1,617 | Created on Mon Jul 8 23:53:58 2019
@author: yanyanyu
!/usr/bin/env python3 -*- coding: utf-8 -*- | 99 | en | 0.456094 |
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import migrations, models
from django.conf import settings
class Migration(migrations.Migration):
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
('report_database', '0003_auto_20160501_1646'),
]
operations = [
migrations.AddField(
model_name='report',
name='shared_users',
field=models.ManyToManyField(related_name='shared_users', to=settings.AUTH_USER_MODEL),
),
]
| report_database/migrations/0004_report_shared_users.py | 563 | -*- coding: utf-8 -*- | 21 | en | 0.767281 |
# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# https://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
import boto3
from boto3.exceptions import ResourceNotExistsError
import botocore.session
from tests import unittest
def identity(self, x):
return x
class TestResourceCustomization(unittest.TestCase):
def setUp(self):
self.botocore_session = botocore.session.get_session()
def add_new_method(self, name):
def handler(class_attributes, **kwargs):
class_attributes[name] = identity
return handler
def test_can_inject_method_onto_resource(self):
session = boto3.Session(botocore_session=self.botocore_session)
self.botocore_session.register('creating-resource-class.s3',
self.add_new_method(name='my_method'))
resource = session.resource('s3')
self.assertTrue(hasattr(resource, 'my_method'))
self.assertEqual(resource.my_method('anything'), 'anything')
class TestSessionErrorMessages(unittest.TestCase):
def test_has_good_error_message_when_no_resource(self):
bad_resource_name = 'doesnotexist'
err_regex = (
'%s.*resource does not exist.' % bad_resource_name
)
with self.assertRaisesRegex(ResourceNotExistsError, err_regex):
boto3.resource(bad_resource_name)
class TestGetAvailableSubresources(unittest.TestCase):
def test_s3_available_subresources_exists(self):
s3 = boto3.resource('s3')
self.assertTrue(hasattr(s3, 'get_available_subresources'))
| tests/functional/test_resource.py | 2,029 | Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance with the License. A copy of the License is located at https://aws.amazon.com/apache2.0/ or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. | 537 | en | 0.910893 |
# -*- coding: utf-8 -*-
#BEGIN_HEADER
# The header block is where all import statements should live
from __future__ import print_function
import os
import re
import uuid
import requests
import json
import psutil
import subprocess
import numpy as np
import yaml
import time
from pprint import pformat
from installed_clients.WorkspaceClient import Workspace
from installed_clients.ReadsUtilsClient import ReadsUtils # @IgnorePep8
from installed_clients.baseclient import ServerError
from installed_clients.AssemblyUtilClient import AssemblyUtil
from installed_clients.KBaseReportClient import KBaseReport
from installed_clients.kb_quastClient import kb_quast
from installed_clients.kb_ea_utilsClient import kb_ea_utils
from kb_SPAdes.utils.spades_assembler import SPAdesAssembler
class ShockException(Exception):
pass
#END_HEADER
class kb_SPAdes:
'''
Module Name:
kb_SPAdes
Module Description:
A KBase module: kb_SPAdes
A wrapper for the SPAdes assembler with hybrid features supported.
http://bioinf.spbau.ru/spades
Always runs in careful mode.
Runs 3 threads / CPU.
Maximum memory use is set to available memory - 1G.
Autodetection is used for the PHRED quality offset and k-mer sizes.
A coverage cutoff is not specified.
'''
######## WARNING FOR GEVENT USERS ####### noqa
# Since asynchronous IO can lead to methods - even the same method -
# interrupting each other, you must be *very* careful when using global
# state. A method could easily clobber the state set by another while
# the latter method is running.
######################################### noqa
VERSION = "1.2.0"
GIT_URL = "https://github.com/qzzhang/kb_SPAdes"
GIT_COMMIT_HASH = "5b7e88d6993728abc26c93cfef780ee7feb16c63"
#BEGIN_CLASS_HEADER
# Class variables and functions can be defined in this block
DISABLE_SPADES_OUTPUT = False # should be False in production
PARAM_IN_WS = 'workspace_name'
PARAM_IN_LIB = 'read_libraries'
PARAM_IN_CS_NAME = 'output_contigset_name'
PARAM_IN_DNA_SOURCE = 'dna_source'
PARAM_IN_SINGLE_CELL = 'single_cell'
PARAM_IN_METAGENOME = 'metagenomic'
PARAM_IN_PLASMID = 'plasmid'
PARAM_IN_MIN_CONTIG_LENGTH = 'min_contig_length'
PARAM_IN_KMER_SIZES = 'kmer_sizes'
PARAM_IN_SKIP_ERR_CORRECT = 'skip_error_correction'
INVALID_WS_OBJ_NAME_RE = re.compile('[^\\w\\|._-]')
INVALID_WS_NAME_RE = re.compile('[^\\w:._-]')
THREADS_PER_CORE = 3
MAX_THREADS = 64 # per email thread with Anton Korobeynikov
MAX_THREADS_META = 128 # Increase threads for metagenomic assemblies
MEMORY_OFFSET_GB = 1 # 1GB
MIN_MEMORY_GB = 5
MAX_MEMORY_GB_SPADES = 500
MAX_MEMORY_GB_META_SPADES = 1000
GB = 1000000000
URL_WS = 'workspace-url'
URL_SHOCK = 'shock-url'
URL_KB_END = 'kbase-endpoint'
TRUE = 'true'
FALSE = 'false'
def log(self, message, prefix_newline=False):
print(('\n' if prefix_newline else '') +
str(time.time()) + ': ' + str(message))
def check_shock_response(self, response, errtxt):
if not response.ok:
try:
err = json.loads(response.content)['error'][0]
except:
# this means shock is down or not responding.
self.log("Couldn't parse response error content from Shock: " +
response.content)
response.raise_for_status()
raise ShockException(errtxt + str(err))
# Helper script borrowed from the transform service, logger removed
def upload_file_to_shock(self, file_path, token):
"""
Use HTTP multi-part POST to save a file to a SHOCK instance.
"""
if token is None:
raise Exception("Authentication token required!")
header = {'Authorization': "Oauth {0}".format(token)}
if file_path is None:
raise Exception("No file given for upload to SHOCK!")
with open(os.path.abspath(file_path), 'rb') as data_file:
files = {'upload': data_file}
response = requests.post(
self.shockURL + '/node', headers=header, files=files,
stream=True, allow_redirects=True)
self.check_shock_response(
response, ('Error trying to upload contig FASTA file {} to Shock: '
).format(file_path))
return response.json()['data']
# spades is configured with yaml
#
def generate_spades_yaml(self, reads_data):
left = [] # fwd in fr orientation
right = [] # rev
single = [] # single end reads
pacbio = [] # pacbio CLR reads (for pacbio CCS use -s option.)
interlaced = []
illumina_present = 0
iontorrent_present = 0
for read in reads_data:
seq_tech = read['seq_tech']
if seq_tech == "PacBio CLR":
pacbio.append(read['fwd_file'])
elif read['type'] == "paired":
if 'rev_file' in read and read['rev_file']:
left.append(read['fwd_file'])
right.append(read['rev_file'])
else:
interlaced.append(read['fwd_file'])
elif read['type'] == "single":
single.append(read['fwd_file'])
if seq_tech == "IonTorrent":
iontorrent_present = 1
elif seq_tech == "Illumina":
illumina_present = 1
if (illumina_present == 1 and iontorrent_present == 1):
raise ValueError('Both IonTorrent and Illumina read libraries exist. ' +
'SPAdes can not assemble them together.')
yml = []
yml_index_counter = 0
# Pacbio CLR ahs to be run with at least one single end or paired end library
other_reads_present_for_pacbio = 0
if left or interlaced:
yml.append({'type': 'paired-end',
'orientation': 'fr'})
if left:
yml[yml_index_counter]['left reads'] = left
yml[yml_index_counter]['right reads'] = right
if interlaced:
yml[yml_index_counter]['interlaced reads'] = interlaced
yml_index_counter += 1
other_reads_present_for_pacbio = 1
if single:
yml.append({'type': "single"})
yml[yml_index_counter]['single reads'] = single
yml_index_counter += 1
other_reads_present_for_pacbio = 1
if pacbio:
if other_reads_present_for_pacbio == 1:
yml.append({'type': "pacbio"})
yml[yml_index_counter]['single reads'] = pacbio
yml_index_counter += 1
else:
# RAISE AN ERROR AS PACBIO REQUIRES AT LEAST
# ONE SINGLE OR PAIRED ENDS LIBRARY
raise ValueError('Per SPAdes requirements : If doing PacBio CLR reads, you must ' +
'also supply at least one paired end or single end reads library')
yml_path = os.path.join(self.scratch, 'run.yaml')
with open(yml_path, 'w') as yml_file:
yaml.safe_dump(yml, yml_file)
return yml_path, iontorrent_present
def exec_spades(self, dna_source, reads_data, phred_type, kmer_sizes, skip_error_correction):
mem = (psutil.virtual_memory().available / self.GB -
self.MEMORY_OFFSET_GB)
if mem < self.MIN_MEMORY_GB:
raise ValueError(
'Only ' + str(psutil.virtual_memory().available) +
' bytes of memory are available. The SPAdes wrapper will' +
' not run without at least ' +
str(self.MIN_MEMORY_GB + self.MEMORY_OFFSET_GB) +
' gigabytes available')
if dna_source == self.PARAM_IN_METAGENOME:
max_mem = self.MAX_MEMORY_GB_META_SPADES
max_threads = self.MAX_THREADS_META
else:
max_mem = self.MAX_MEMORY_GB_SPADES
max_threads = self.MAX_THREADS
threads = min(max_threads, psutil.cpu_count() * self.THREADS_PER_CORE)
if mem > max_mem:
mem = max_mem
outdir = os.path.join(self.scratch, 'spades_output_dir')
if not os.path.exists(outdir):
os.makedirs(outdir)
tmpdir = os.path.join(self.scratch, 'spades_tmp_dir')
if not os.path.exists(tmpdir):
os.makedirs(tmpdir)
cmd = ['spades.py', '--threads', str(threads),
'--memory', str(mem), '-o', outdir, '--tmp-dir', tmpdir]
print("THE DNA SOURCE IS : " + str(dna_source))
if dna_source == self.PARAM_IN_SINGLE_CELL:
cmd += ['--sc']
if dna_source == self.PARAM_IN_PLASMID:
cmd += ['--plasmid']
# The plasmid assembly can only be run on a single library
if len(reads_data) > 1:
raise ValueError('Plasmid assembly requires that one ' +
'and only one library as input. ' +
str(len(reads_data)) + ' libraries detected.')
if dna_source == self.PARAM_IN_METAGENOME:
cmd += ['--meta']
# The metagenome assembly can only be run on a single library
# The library must be paired end.
if len(reads_data) > 1 or reads_data[0]['type'] != 'paired':
error_msg = 'Metagenome assembly requires that one and ' + \
'only one paired end library as input.'
if len(reads_data) > 1:
error_msg += ' ' + str(len(reads_data)) + \
' libraries detected.'
raise ValueError(error_msg)
else:
cmd += ['--careful']
cmd += ['--phred-offset', phred_type]
if kmer_sizes is not None:
cmd += ['-k ' + kmer_sizes]
if skip_error_correction == 1:
cmd += ['--only-assembler']
# print("LENGTH OF READSDATA IN EXEC: " + str(len(reads_data)))
# print("READS DATA: " + str(reads_data))
# print("SPADES YAML: " + str(self.generate_spades_yaml(reads_data)))
spades_yaml_path, iontorrent_present = self.generate_spades_yaml(reads_data)
if iontorrent_present == 1:
cmd += ['--iontorrent']
cmd += ['--dataset', spades_yaml_path]
self.log('Running SPAdes command line:')
print("SPADES CMD:" + str(cmd))
self.log(cmd)
if self.DISABLE_SPADES_OUTPUT:
with open(os.devnull, 'w') as null:
p = subprocess.Popen(cmd, cwd=self.scratch, shell=False,
stdout=null)
else:
p = subprocess.Popen(cmd, cwd=self.scratch, shell=False)
retcode = p.wait()
self.log('Return code: ' + str(retcode))
if p.returncode != 0:
raise ValueError('Error running SPAdes, return code: ' +
str(retcode) + '\n')
return outdir
# adapted from
# https://github.com/kbase/transform/blob/master/plugins/scripts/convert/trns_transform_KBaseFile_AssemblyFile_to_KBaseGenomes_ContigSet.py
# which was adapted from an early version of
# https://github.com/kbase/transform/blob/master/plugins/scripts/upload/trns_transform_FASTA_DNA_Assembly_to_KBaseGenomes_ContigSet.py
def load_stats(self, input_file_name):
self.log('Starting conversion of FASTA to KBaseGenomeAnnotations.Assembly')
self.log('Building Object.')
if not os.path.isfile(input_file_name):
raise Exception('The input file name {0} is not a file!'.format(
input_file_name))
with open(input_file_name, 'r') as input_file_handle:
contig_id = None
sequence_len = 0
fasta_dict = dict()
first_header_found = False
# Pattern for replacing white space
pattern = re.compile(r'\s+')
for current_line in input_file_handle:
if (current_line[0] == '>'):
# found a header line
# Wrap up previous fasta sequence
if not first_header_found:
first_header_found = True
else:
fasta_dict[contig_id] = sequence_len
sequence_len = 0
fasta_header = current_line.replace('>', '').strip()
try:
contig_id = fasta_header.strip().split(' ', 1)[0]
except:
contig_id = fasta_header.strip()
else:
sequence_len += len(re.sub(pattern, '', current_line))
# wrap up last fasta sequence, should really make this a method
if not first_header_found:
raise Exception("There are no contigs in this file")
else:
fasta_dict[contig_id] = sequence_len
return fasta_dict
def load_report(self, input_file_name, params, wsname):
fasta_stats = self.load_stats(input_file_name)
lengths = [fasta_stats[contig_id] for contig_id in fasta_stats]
assembly_ref = params[self.PARAM_IN_WS] + '/' + params[self.PARAM_IN_CS_NAME]
report = ''
report += 'Assembly saved to: ' + assembly_ref + '\n'
report += 'Assembled into ' + str(len(lengths)) + ' contigs.\n'
report += 'Avg Length: ' + str(sum(lengths) / float(len(lengths))) + \
' bp.\n'
# compute a simple contig length distribution
bins = 10
counts, edges = np.histogram(lengths, bins) # @UndefinedVariable
report += 'Contig Length Distribution (# of contigs -- min to max ' +\
'basepairs):\n'
for c in range(bins):
report += ' ' + str(counts[c]) + '\t--\t' + str(edges[c]) +\
' to ' + str(edges[c + 1]) + ' bp\n'
print('Running QUAST')
kbq = kb_quast(self.callbackURL)
quastret = kbq.run_QUAST({'files': [{'path': input_file_name,
'label': params[self.PARAM_IN_CS_NAME]}]})
print('Saving report')
kbr = KBaseReport(self.callbackURL)
report_info = kbr.create_extended_report({
'message': report,
'objects_created': [{'ref': assembly_ref, 'description': 'Assembled contigs'}],
'direct_html_link_index': 0,
'html_links': [{'shock_id': quastret['shock_id'],
'name': 'report.html',
'label': 'QUAST report'}],
'report_object_name': 'kb_megahit_report_' + str(uuid.uuid4()),
'workspace_name': params['workspace_name']
})
reportName = report_info['name']
reportRef = report_info['ref']
return reportName, reportRef
def make_ref(self, object_info):
return str(object_info[6]) + '/' + str(object_info[0]) + \
'/' + str(object_info[4])
def determine_unknown_phreds(self, reads,
phred64_reads,
phred33_reads,
unknown_phred_reads,
reftoname):
print("IN UNKNOWN CHECKING")
eautils = kb_ea_utils(self.callbackURL)
for ref in unknown_phred_reads:
rds = reads[ref]
obj_name = reftoname[ref]
files_to_check = []
f = rds['files']
if f['type'] == 'interleaved':
files_to_check.append(f['fwd'])
elif f['type'] == 'paired':
files_to_check.append(f['fwd'])
files_to_check.append(f['rev'])
elif f['type'] == 'single':
files_to_check.append(f['fwd'])
# print("FILES TO CHECK:" + str(files_to_check))
for file_path in files_to_check:
ea_stats_dict = eautils.calculate_fastq_stats({'read_library_path': file_path})
# print("EA UTILS STATS : " + str(ea_stats_dict))
if ea_stats_dict['phred_type'] == '33':
phred33_reads.add(obj_name)
elif ea_stats_dict['phred_type'] == '64':
phred64_reads.add(obj_name)
else:
raise ValueError(('Reads object {} ({}) phred type is not of the ' +
'expected value of 33 or 64. It had a phred type of ' +
'{}').format(obj_name, rds, ea_stats_dict['phred_type']))
return phred64_reads, phred33_reads
def check_reads(self, params, reads, reftoname):
phred64_reads, phred33_reads, unknown_phred_reads = (set() for i in range(3))
for ref in reads:
rds = reads[ref]
obj_name = reftoname[ref]
obj_ref = rds['ref']
if rds['phred_type'] == '33':
phred33_reads.add(obj_name)
elif rds['phred_type'] == '64':
phred64_reads.add(obj_name)
else:
unknown_phred_reads.add(ref)
if rds['read_orientation_outward'] == self.TRUE:
raise ValueError(
('Reads object {} ({}) is marked as having outward ' +
'oriented reads, which SPAdes does not ' +
'support.').format(obj_name, obj_ref))
# ideally types would be firm enough that we could rely on the
# metagenomic boolean. However KBaseAssembly doesn't have the field
# and it's optional anyway. Ideally fix those issues and then set
# the --meta command line flag automatically based on the type
# Dylan: removing these requirements because too much work for user to go all the way
# back and reimport reads with "single_genome" flag set opposite. Additionally, now
# that "metagenomic" assembly is now an explicit App instead of an option, this check
# is far less necessary
# if (rds['single_genome'] == self.TRUE and
# params[self.PARAM_IN_DNA_SOURCE] ==
# self.PARAM_IN_METAGENOME):
# raise ValueError(
# ('Reads object {} ({}) is marked as containing dna from ' +
# 'a single genome but the assembly method was specified ' +
# 'as metagenomic').format(obj_name, obj_ref))
if (rds['single_genome'] == self.FALSE and
params[self.PARAM_IN_DNA_SOURCE] !=
self.PARAM_IN_METAGENOME):
raise ValueError(
('Reads object {} ({}) is marked as containing ' +
'metagenomic data but the assembly method was not ' +
'specified as metagenomic').format(obj_name, obj_ref))
# IF UNKNOWN TYPE NEED TO DETERMINE PHRED TYPE USING EAUTILS
if len(unknown_phred_reads) > 0:
phred64_reads, phred33_reads = \
self.determine_unknown_phreds(reads, phred64_reads, phred33_reads,
unknown_phred_reads, reftoname)
# IF THERE ARE READS OF BOTH PHRED 33 and 64, throw an error
if (len(phred64_reads) > 0) and (len(phred33_reads) > 0):
raise ValueError(
('The set of Reads objects passed in have reads that have different ' +
'phred type scores. SPAdes does not support assemblies of ' +
'reads with different phred type scores.\nThe following read objects ' +
'have phred 33 scores : {}.\nThe following read objects have phred 64 ' +
'scores : {}').format(", ".join(phred33_reads), ", ".join(phred64_reads)))
elif len(phred64_reads) > 0:
return '64'
elif len(phred33_reads) > 0:
return '33'
else:
raise ValueError('The phred type of the read(s) was unable to be determined')
def process_params(self, params):
if (self.PARAM_IN_WS not in params or
not params[self.PARAM_IN_WS]):
raise ValueError(self.PARAM_IN_WS + ' parameter is required')
if self.INVALID_WS_NAME_RE.search(params[self.PARAM_IN_WS]):
raise ValueError('Invalid workspace name ' +
params[self.PARAM_IN_WS])
if self.PARAM_IN_LIB not in params:
raise ValueError(self.PARAM_IN_LIB + ' parameter is required')
if type(params[self.PARAM_IN_LIB]) != list:
raise ValueError(self.PARAM_IN_LIB + ' must be a list')
if not params[self.PARAM_IN_LIB]:
raise ValueError('At least one reads library must be provided')
# for l in params[self.PARAM_IN_LIB]:
# print("PARAM_IN_LIB : " + str(l))
# if self.INVALID_WS_OBJ_NAME_RE.search(l):
# raise ValueError('Invalid workspace object name ' + l)
if (self.PARAM_IN_CS_NAME not in params or
not params[self.PARAM_IN_CS_NAME]):
raise ValueError(self.PARAM_IN_CS_NAME + ' parameter is required')
if self.INVALID_WS_OBJ_NAME_RE.search(params[self.PARAM_IN_CS_NAME]):
raise ValueError('Invalid workspace object name ' +
params[self.PARAM_IN_CS_NAME])
if self.PARAM_IN_DNA_SOURCE in params:
s = params[self.PARAM_IN_DNA_SOURCE]
# print("FOUND THE DNA SOURCE: " + str(params[self.PARAM_IN_DNA_SOURCE]))
if s not in [self.PARAM_IN_SINGLE_CELL,
self.PARAM_IN_METAGENOME,
self.PARAM_IN_PLASMID]:
params[self.PARAM_IN_DNA_SOURCE] = None
else:
params[self.PARAM_IN_DNA_SOURCE] = None
# print("PARAMS ARE:" + str(params))
if self.PARAM_IN_MIN_CONTIG_LENGTH in params:
if not isinstance(params[self.PARAM_IN_MIN_CONTIG_LENGTH], int):
raise ValueError('min_contig_length must be of type int')
if self.PARAM_IN_KMER_SIZES in params and params[self.PARAM_IN_KMER_SIZES] is not None:
print("KMER_SIZES: " + ",".join(str(num) for num in params[self.PARAM_IN_KMER_SIZES]))
if self.PARAM_IN_SKIP_ERR_CORRECT in params and params[self.PARAM_IN_SKIP_ERR_CORRECT] is not None:
print("SKIP ERR CORRECTION: " + str(params[self.PARAM_IN_SKIP_ERR_CORRECT]))
#END_CLASS_HEADER
# config contains contents of config file in a hash or None if it couldn't
# be found
def __init__(self, config):
#BEGIN_CONSTRUCTOR
self.cfg = config
self.cfg['SDK_CALLBACK_URL'] = os.environ['SDK_CALLBACK_URL']
self.cfg['KB_AUTH_TOKEN'] = os.environ['KB_AUTH_TOKEN']
self.callbackURL = self.cfg['SDK_CALLBACK_URL']
self.log('Callback URL: ' + self.callbackURL)
self.workspaceURL = config[self.URL_WS]
self.shockURL = config[self.URL_SHOCK]
self.catalogURL = config[self.URL_KB_END] + '/catalog'
self.scratch = os.path.abspath(config['scratch'])
if not os.path.exists(self.scratch):
os.makedirs(self.scratch)
#END_CONSTRUCTOR
pass
def run_SPAdes(self, ctx, params):
"""
Run SPAdes on paired end libraries
:param params: instance of type "SPAdesParams" (Input parameters for
running SPAdes. workspace_name - the name of the workspace from
which to take input and store output. output_contigset_name - the
name of the output contigset read_libraries - a list of Illumina
PairedEndLibrary files in FASTQ or BAM format. dna_source -
(optional) the source of the DNA used for sequencing
'single_cell': DNA amplified from a single cell via MDA anything
else: Standard DNA sample from multiple cells. Default value is
None. min_contig_length - (optional) integer to filter out contigs
with length < min_contig_length from the SPAdes output. Default
value is 0 implying no filter. kmer_sizes - (optional) K-mer
sizes, Default values: 33, 55, 77, 99, 127 (all values must be
odd, less than 128 and listed in ascending order) In the absence
of these values, K values are automatically selected.
skip_error_correction - (optional) Assembly only (No error
correction). By default this is disabled.) -> structure: parameter
"workspace_name" of String, parameter "output_contigset_name" of
String, parameter "read_libraries" of list of type
"paired_end_lib" (The workspace object name of a PairedEndLibrary
file, whether of the KBaseAssembly or KBaseFile type.), parameter
"dna_source" of String, parameter "min_contig_length" of Long,
parameter "kmer_sizes" of list of Long, parameter
"skip_error_correction" of type "bool" (A boolean. 0 = false,
anything else = true.)
:returns: instance of type "SPAdesOutput" (Output parameters for
SPAdes run. report_name - the name of the KBaseReport.Report
workspace object. report_ref - the workspace reference of the
report.) -> structure: parameter "report_name" of String,
parameter "report_ref" of String
"""
# ctx is the context object
# return variables are: output
#BEGIN run_SPAdes
# A whole lot of this is adapted or outright copied from
# https://github.com/msneddon/MEGAHIT
self.log('Running run_SPAdes with params:\n' + pformat(params))
token = ctx['token']
# the reads should really be specified as a list of absolute ws refs
# but the narrative doesn't do that yet
self.process_params(params)
# get absolute refs from ws
wsname = params[self.PARAM_IN_WS]
obj_ids = []
for r in params[self.PARAM_IN_LIB]:
obj_ids.append({'ref': r if '/' in r else (wsname + '/' + r)})
ws = Workspace(self.workspaceURL, token=token)
ws_info = ws.get_object_info_new({'objects': obj_ids})
reads_params = []
reftoname = {}
for wsi, oid in zip(ws_info, obj_ids):
ref = oid['ref']
reads_params.append(ref)
obj_name = wsi[1]
reftoname[ref] = wsi[7] + '/' + obj_name
readcli = ReadsUtils(self.callbackURL, token=ctx['token'])
typeerr = ('Supported types: KBaseFile.SingleEndLibrary ' +
'KBaseFile.PairedEndLibrary ' +
'KBaseAssembly.SingleEndLibrary ' +
'KBaseAssembly.PairedEndLibrary')
try:
reads = readcli.download_reads({'read_libraries': reads_params,
'interleaved': 'false',
'gzipped': None
})['files']
except ServerError as se:
self.log('logging stacktrace from dynamic client error')
self.log(se.data)
if typeerr in se.message:
prefix = se.message.split('.')[0]
raise ValueError(
prefix + '. Only the types ' +
'KBaseAssembly.PairedEndLibrary ' +
'and KBaseFile.PairedEndLibrary are supported')
else:
raise
self.log('Got reads data from converter:\n' + pformat(reads))
phred_type = self.check_reads(params, reads, reftoname)
reads_data = []
for ref in reads:
reads_name = reftoname[ref]
f = reads[ref]['files']
# print ("REF:" + str(ref))
# print ("READS REF:" + str(reads[ref]))
seq_tech = reads[ref]["sequencing_tech"]
if f['type'] == 'interleaved':
reads_data.append({'fwd_file': f['fwd'], 'type': 'paired',
'seq_tech': seq_tech})
elif f['type'] == 'paired':
reads_data.append({'fwd_file': f['fwd'], 'rev_file': f['rev'],
'type': 'paired', 'seq_tech': seq_tech})
elif f['type'] == 'single':
reads_data.append({'fwd_file': f['fwd'], 'type': 'single',
'seq_tech': seq_tech})
else:
raise ValueError('Something is very wrong with read lib' + reads_name)
kmer_sizes = None
if self.PARAM_IN_KMER_SIZES in params and params[self.PARAM_IN_KMER_SIZES] is not None:
if (len(params[self.PARAM_IN_KMER_SIZES])) > 0:
kmer_sizes = ",".join(str(num) for num in params[self.PARAM_IN_KMER_SIZES])
skip_error_correction = 0
if self.PARAM_IN_SKIP_ERR_CORRECT in params and params[self.PARAM_IN_SKIP_ERR_CORRECT] is not None:
if params[self.PARAM_IN_SKIP_ERR_CORRECT] == 1:
skip_error_correction = 1
spades_out = self.exec_spades(params[self.PARAM_IN_DNA_SOURCE],
reads_data,
phred_type,
kmer_sizes,
skip_error_correction)
self.log('SPAdes output dir: ' + spades_out)
# parse the output and save back to KBase
output_contigs = os.path.join(spades_out, 'scaffolds.fasta')
self.log('Uploading FASTA file to Assembly')
assemblyUtil = AssemblyUtil(self.callbackURL, token=ctx['token'], service_ver='release')
if params.get('min_contig_length', 0) > 0:
assemblyUtil.save_assembly_from_fasta(
{'file': {'path': output_contigs},
'workspace_name': wsname,
'assembly_name': params[self.PARAM_IN_CS_NAME],
'min_contig_length': params['min_contig_length']
})
# load report from scaffolds.fasta.filtered.fa
report_name, report_ref = self.load_report(
output_contigs + '.filtered.fa', params, wsname)
else:
assemblyUtil.save_assembly_from_fasta(
{'file': {'path': output_contigs},
'workspace_name': wsname,
'assembly_name': params[self.PARAM_IN_CS_NAME]
})
# load report from scaffolds.fasta
report_name, report_ref = self.load_report(
output_contigs, params, wsname)
output = {'report_name': report_name,
'report_ref': report_ref
}
#END run_SPAdes
# At some point might do deeper type checking...
if not isinstance(output, dict):
raise ValueError('Method run_SPAdes return value ' +
'output is not type dict as required.')
# return the results
return [output]
def run_HybridSPAdes(self, ctx, params):
"""
Run HybridSPAdes on paired end libraries with PacBio CLR and Oxford Nanopore reads
:param params: instance of type "HybridSPAdesParams" (------To run
HybridSPAdes 3.13.0 you need at least one library of the following
types:------ 1) Illumina paired-end/high-quality
mate-pairs/unpaired reads 2) IonTorrent paired-end/high-quality
mate-pairs/unpaired reads 3) PacBio CCS reads Version 3.13.0 of
SPAdes supports paired-end reads, mate-pairs and unpaired reads.
SPAdes can take as input several paired-end and mate-pair
libraries simultaneously. workspace_name - the name of the
workspace from which to take input and store output.
output_contigset_name - the name of the output contigset
read_libraries - a list of Illumina or IonTorrent
paired-end/high-quality mate-pairs/unpaired reads
long_reads_libraries - a list of PacBio, Oxford Nanopore Sanger
reads and/or additional contigs dna_source - the source of the DNA
used for sequencing 'single_cell': DNA amplified from a single
cell via MDA anything else: Standard DNA sample from multiple
cells. Default value is None. pipeline_options - a list of string
specifying how the SPAdes pipeline should be run kmer_sizes -
(optional) K-mer sizes, Default values: 21, 33, 55, 77, 99, 127
(all values must be odd, less than 128 and listed in ascending
order) In the absence of these values, K values are automatically
selected. min_contig_length - integer to filter out contigs with
length < min_contig_length from the HybridSPAdes output. Default
value is 0 implying no filter. @optional dna_source @optional
pipeline_options @optional kmer_sizes @optional min_contig_length)
-> structure: parameter "workspace_name" of String, parameter
"output_contigset_name" of String, parameter "reads_libraries" of
list of type "ReadsParams" (parameter groups--define attributes
for specifying inputs with YAML data set file (advanced) The
following attributes are available: - orientation ("fr", "rf",
"ff") - type ("paired-end", "mate-pairs", "hq-mate-pairs",
"single", "pacbio", "nanopore", "sanger", "trusted-contigs",
"untrusted-contigs") - interlaced reads (comma-separated list of
files with interlaced reads) - left reads (comma-separated list of
files with left reads) - right reads (comma-separated list of
files with right reads) - single reads (comma-separated list of
files with single reads or unpaired reads from paired library) -
merged reads (comma-separated list of files with merged reads)) ->
structure: parameter "lib_ref" of type "obj_ref" (An X/Y/Z style
KBase object reference), parameter "orientation" of String,
parameter "lib_type" of String, parameter "long_reads_libraries"
of list of type "LongReadsParams" -> structure: parameter
"long_reads_ref" of type "obj_ref" (An X/Y/Z style KBase object
reference), parameter "long_reads_type" of String, parameter
"dna_source" of String, parameter "pipeline_options" of list of
String, parameter "kmer_sizes" of list of Long, parameter
"min_contig_length" of Long, parameter "create_report" of type
"bool" (A boolean. 0 = false, anything else = true.)
:returns: instance of type "SPAdesOutput" (Output parameters for
SPAdes run. report_name - the name of the KBaseReport.Report
workspace object. report_ref - the workspace reference of the
report.) -> structure: parameter "report_name" of String,
parameter "report_ref" of String
"""
# ctx is the context object
# return variables are: output
#BEGIN run_HybridSPAdes
self.log('Running run_HybridSPAdes with params:\n{}'.format(
json.dumps(params, indent=1)))
spades_assembler = SPAdesAssembler(self.cfg, ctx.provenance())
output = spades_assembler.run_hybrid_spades(params)
#END run_HybridSPAdes
# At some point might do deeper type checking...
if not isinstance(output, dict):
raise ValueError('Method run_HybridSPAdes return value ' +
'output is not type dict as required.')
# return the results
return [output]
def run_metaSPAdes(self, ctx, params):
"""
Run SPAdes on paired end libraries for metagenomes
:param params: instance of type "SPAdesParams" (Input parameters for
running SPAdes. workspace_name - the name of the workspace from
which to take input and store output. output_contigset_name - the
name of the output contigset read_libraries - a list of Illumina
PairedEndLibrary files in FASTQ or BAM format. dna_source -
(optional) the source of the DNA used for sequencing
'single_cell': DNA amplified from a single cell via MDA anything
else: Standard DNA sample from multiple cells. Default value is
None. min_contig_length - (optional) integer to filter out contigs
with length < min_contig_length from the SPAdes output. Default
value is 0 implying no filter. kmer_sizes - (optional) K-mer
sizes, Default values: 33, 55, 77, 99, 127 (all values must be
odd, less than 128 and listed in ascending order) In the absence
of these values, K values are automatically selected.
skip_error_correction - (optional) Assembly only (No error
correction). By default this is disabled.) -> structure: parameter
"workspace_name" of String, parameter "output_contigset_name" of
String, parameter "read_libraries" of list of type
"paired_end_lib" (The workspace object name of a PairedEndLibrary
file, whether of the KBaseAssembly or KBaseFile type.), parameter
"dna_source" of String, parameter "min_contig_length" of Long,
parameter "kmer_sizes" of list of Long, parameter
"skip_error_correction" of type "bool" (A boolean. 0 = false,
anything else = true.)
:returns: instance of type "SPAdesOutput" (Output parameters for
SPAdes run. report_name - the name of the KBaseReport.Report
workspace object. report_ref - the workspace reference of the
report.) -> structure: parameter "report_name" of String,
parameter "report_ref" of String
"""
# ctx is the context object
# return variables are: output
#BEGIN run_metaSPAdes
output = self.run_SPAdes(ctx,params)[0]
#END run_metaSPAdes
# At some point might do deeper type checking...
if not isinstance(output, dict):
raise ValueError('Method run_metaSPAdes return value ' +
'output is not type dict as required.')
# return the results
return [output]
def status(self, ctx):
#BEGIN_STATUS
returnVal = {'state': "OK",
'message': "",
'version': self.VERSION,
'git_url': self.GIT_URL,
'git_commit_hash': self.GIT_COMMIT_HASH}
del ctx # shut up pep8
#END_STATUS
return [returnVal]
| lib/kb_SPAdes/kb_SPAdesImpl.py | 38,864 | Module Name:
kb_SPAdes
Module Description:
A KBase module: kb_SPAdes
A wrapper for the SPAdes assembler with hybrid features supported.
http://bioinf.spbau.ru/spades
Always runs in careful mode.
Runs 3 threads / CPU.
Maximum memory use is set to available memory - 1G.
Autodetection is used for the PHRED quality offset and k-mer sizes.
A coverage cutoff is not specified.
Run HybridSPAdes on paired end libraries with PacBio CLR and Oxford Nanopore reads
:param params: instance of type "HybridSPAdesParams" (------To run
HybridSPAdes 3.13.0 you need at least one library of the following
types:------ 1) Illumina paired-end/high-quality
mate-pairs/unpaired reads 2) IonTorrent paired-end/high-quality
mate-pairs/unpaired reads 3) PacBio CCS reads Version 3.13.0 of
SPAdes supports paired-end reads, mate-pairs and unpaired reads.
SPAdes can take as input several paired-end and mate-pair
libraries simultaneously. workspace_name - the name of the
workspace from which to take input and store output.
output_contigset_name - the name of the output contigset
read_libraries - a list of Illumina or IonTorrent
paired-end/high-quality mate-pairs/unpaired reads
long_reads_libraries - a list of PacBio, Oxford Nanopore Sanger
reads and/or additional contigs dna_source - the source of the DNA
used for sequencing 'single_cell': DNA amplified from a single
cell via MDA anything else: Standard DNA sample from multiple
cells. Default value is None. pipeline_options - a list of string
specifying how the SPAdes pipeline should be run kmer_sizes -
(optional) K-mer sizes, Default values: 21, 33, 55, 77, 99, 127
(all values must be odd, less than 128 and listed in ascending
order) In the absence of these values, K values are automatically
selected. min_contig_length - integer to filter out contigs with
length < min_contig_length from the HybridSPAdes output. Default
value is 0 implying no filter. @optional dna_source @optional
pipeline_options @optional kmer_sizes @optional min_contig_length)
-> structure: parameter "workspace_name" of String, parameter
"output_contigset_name" of String, parameter "reads_libraries" of
list of type "ReadsParams" (parameter groups--define attributes
for specifying inputs with YAML data set file (advanced) The
following attributes are available: - orientation ("fr", "rf",
"ff") - type ("paired-end", "mate-pairs", "hq-mate-pairs",
"single", "pacbio", "nanopore", "sanger", "trusted-contigs",
"untrusted-contigs") - interlaced reads (comma-separated list of
files with interlaced reads) - left reads (comma-separated list of
files with left reads) - right reads (comma-separated list of
files with right reads) - single reads (comma-separated list of
files with single reads or unpaired reads from paired library) -
merged reads (comma-separated list of files with merged reads)) ->
structure: parameter "lib_ref" of type "obj_ref" (An X/Y/Z style
KBase object reference), parameter "orientation" of String,
parameter "lib_type" of String, parameter "long_reads_libraries"
of list of type "LongReadsParams" -> structure: parameter
"long_reads_ref" of type "obj_ref" (An X/Y/Z style KBase object
reference), parameter "long_reads_type" of String, parameter
"dna_source" of String, parameter "pipeline_options" of list of
String, parameter "kmer_sizes" of list of Long, parameter
"min_contig_length" of Long, parameter "create_report" of type
"bool" (A boolean. 0 = false, anything else = true.)
:returns: instance of type "SPAdesOutput" (Output parameters for
SPAdes run. report_name - the name of the KBaseReport.Report
workspace object. report_ref - the workspace reference of the
report.) -> structure: parameter "report_name" of String,
parameter "report_ref" of String
Run SPAdes on paired end libraries
:param params: instance of type "SPAdesParams" (Input parameters for
running SPAdes. workspace_name - the name of the workspace from
which to take input and store output. output_contigset_name - the
name of the output contigset read_libraries - a list of Illumina
PairedEndLibrary files in FASTQ or BAM format. dna_source -
(optional) the source of the DNA used for sequencing
'single_cell': DNA amplified from a single cell via MDA anything
else: Standard DNA sample from multiple cells. Default value is
None. min_contig_length - (optional) integer to filter out contigs
with length < min_contig_length from the SPAdes output. Default
value is 0 implying no filter. kmer_sizes - (optional) K-mer
sizes, Default values: 33, 55, 77, 99, 127 (all values must be
odd, less than 128 and listed in ascending order) In the absence
of these values, K values are automatically selected.
skip_error_correction - (optional) Assembly only (No error
correction). By default this is disabled.) -> structure: parameter
"workspace_name" of String, parameter "output_contigset_name" of
String, parameter "read_libraries" of list of type
"paired_end_lib" (The workspace object name of a PairedEndLibrary
file, whether of the KBaseAssembly or KBaseFile type.), parameter
"dna_source" of String, parameter "min_contig_length" of Long,
parameter "kmer_sizes" of list of Long, parameter
"skip_error_correction" of type "bool" (A boolean. 0 = false,
anything else = true.)
:returns: instance of type "SPAdesOutput" (Output parameters for
SPAdes run. report_name - the name of the KBaseReport.Report
workspace object. report_ref - the workspace reference of the
report.) -> structure: parameter "report_name" of String,
parameter "report_ref" of String
Run SPAdes on paired end libraries for metagenomes
:param params: instance of type "SPAdesParams" (Input parameters for
running SPAdes. workspace_name - the name of the workspace from
which to take input and store output. output_contigset_name - the
name of the output contigset read_libraries - a list of Illumina
PairedEndLibrary files in FASTQ or BAM format. dna_source -
(optional) the source of the DNA used for sequencing
'single_cell': DNA amplified from a single cell via MDA anything
else: Standard DNA sample from multiple cells. Default value is
None. min_contig_length - (optional) integer to filter out contigs
with length < min_contig_length from the SPAdes output. Default
value is 0 implying no filter. kmer_sizes - (optional) K-mer
sizes, Default values: 33, 55, 77, 99, 127 (all values must be
odd, less than 128 and listed in ascending order) In the absence
of these values, K values are automatically selected.
skip_error_correction - (optional) Assembly only (No error
correction). By default this is disabled.) -> structure: parameter
"workspace_name" of String, parameter "output_contigset_name" of
String, parameter "read_libraries" of list of type
"paired_end_lib" (The workspace object name of a PairedEndLibrary
file, whether of the KBaseAssembly or KBaseFile type.), parameter
"dna_source" of String, parameter "min_contig_length" of Long,
parameter "kmer_sizes" of list of Long, parameter
"skip_error_correction" of type "bool" (A boolean. 0 = false,
anything else = true.)
:returns: instance of type "SPAdesOutput" (Output parameters for
SPAdes run. report_name - the name of the KBaseReport.Report
workspace object. report_ref - the workspace reference of the
report.) -> structure: parameter "report_name" of String,
parameter "report_ref" of String
Use HTTP multi-part POST to save a file to a SHOCK instance.
-*- coding: utf-8 -*-BEGIN_HEADER The header block is where all import statements should live @IgnorePep8END_HEADER WARNING FOR GEVENT USERS noqa Since asynchronous IO can lead to methods - even the same method - interrupting each other, you must be *very* careful when using global state. A method could easily clobber the state set by another while the latter method is running. noqaBEGIN_CLASS_HEADER Class variables and functions can be defined in this block should be False in production per email thread with Anton Korobeynikov Increase threads for metagenomic assemblies 1GB this means shock is down or not responding. Helper script borrowed from the transform service, logger removed spades is configured with yaml fwd in fr orientation rev single end reads pacbio CLR reads (for pacbio CCS use -s option.) Pacbio CLR ahs to be run with at least one single end or paired end library RAISE AN ERROR AS PACBIO REQUIRES AT LEAST ONE SINGLE OR PAIRED ENDS LIBRARY The plasmid assembly can only be run on a single library The metagenome assembly can only be run on a single library The library must be paired end. print("LENGTH OF READSDATA IN EXEC: " + str(len(reads_data))) print("READS DATA: " + str(reads_data)) print("SPADES YAML: " + str(self.generate_spades_yaml(reads_data))) adapted from https://github.com/kbase/transform/blob/master/plugins/scripts/convert/trns_transform_KBaseFile_AssemblyFile_to_KBaseGenomes_ContigSet.py which was adapted from an early version of https://github.com/kbase/transform/blob/master/plugins/scripts/upload/trns_transform_FASTA_DNA_Assembly_to_KBaseGenomes_ContigSet.py Pattern for replacing white space found a header line Wrap up previous fasta sequence wrap up last fasta sequence, should really make this a method compute a simple contig length distribution @UndefinedVariable print("FILES TO CHECK:" + str(files_to_check)) print("EA UTILS STATS : " + str(ea_stats_dict)) ideally types would be firm enough that we could rely on the metagenomic boolean. However KBaseAssembly doesn't have the field and it's optional anyway. Ideally fix those issues and then set the --meta command line flag automatically based on the type Dylan: removing these requirements because too much work for user to go all the way back and reimport reads with "single_genome" flag set opposite. Additionally, now that "metagenomic" assembly is now an explicit App instead of an option, this check is far less necessary if (rds['single_genome'] == self.TRUE and params[self.PARAM_IN_DNA_SOURCE] == self.PARAM_IN_METAGENOME): raise ValueError( ('Reads object {} ({}) is marked as containing dna from ' + 'a single genome but the assembly method was specified ' + 'as metagenomic').format(obj_name, obj_ref)) IF UNKNOWN TYPE NEED TO DETERMINE PHRED TYPE USING EAUTILS IF THERE ARE READS OF BOTH PHRED 33 and 64, throw an error for l in params[self.PARAM_IN_LIB]: print("PARAM_IN_LIB : " + str(l)) if self.INVALID_WS_OBJ_NAME_RE.search(l): raise ValueError('Invalid workspace object name ' + l) print("FOUND THE DNA SOURCE: " + str(params[self.PARAM_IN_DNA_SOURCE])) print("PARAMS ARE:" + str(params))END_CLASS_HEADER config contains contents of config file in a hash or None if it couldn't be foundBEGIN_CONSTRUCTOREND_CONSTRUCTOR ctx is the context object return variables are: outputBEGIN run_SPAdes A whole lot of this is adapted or outright copied from https://github.com/msneddon/MEGAHIT the reads should really be specified as a list of absolute ws refs but the narrative doesn't do that yet get absolute refs from ws print ("REF:" + str(ref)) print ("READS REF:" + str(reads[ref])) parse the output and save back to KBase load report from scaffolds.fasta.filtered.fa load report from scaffolds.fastaEND run_SPAdes At some point might do deeper type checking... return the results ctx is the context object return variables are: outputBEGIN run_HybridSPAdesEND run_HybridSPAdes At some point might do deeper type checking... return the results ctx is the context object return variables are: outputBEGIN run_metaSPAdesEND run_metaSPAdes At some point might do deeper type checking... return the resultsBEGIN_STATUS shut up pep8END_STATUS | 12,025 | en | 0.651245 |
import os
import tensorflow as tf
from tensorkit.log import logger, Color
class Restore(object):
def __init__(self):
self._var_list = None
self._restore_saver = None
self._restore_optimistic = False
self.restore_ckpt_file = None
self._inited = False
def init(self, var_list=None, ckpt_dir=None, ckpt_file=None, optimistic=False):
"""
:param var_list: vars for restore
:param ckpt_dir: prefix of model files.
:param ckpt_file: exact name of model file, priority is higher than `ckpt_dir`
:param optimistic: only restore weights of same names with model.
:return:
"""
assert (var_list is None) or (len(var_list) > 0), 'invalid var_list: {}'.format(var_list)
assert ckpt_dir is not None or ckpt_file is not None, 'ckpt_dir and ckpt_file are both None'
self._var_list = var_list
self._restore_optimistic = optimistic
if ckpt_file is None:
assert os.path.exists(ckpt_dir), 'invalid checkpoint dir: %s' % ckpt_dir
# get ckpt file.
self.restore_ckpt_file = tf.train.latest_checkpoint(os.path.dirname(ckpt_dir + os.sep))
else:
self.restore_ckpt_file = ckpt_file
self._inited = True
return self
def restore(self, sess):
assert self._inited, 'make sure init() before restore()'
if self._restore_vars(sess):
logger.info('- succeed restore variables from: {}'.format(self.restore_ckpt_file))
return True
return False
def _restore_vars(self, sess):
"""
:param sess:
:return: boolean for successful or not
"""
if not self._restore_optimistic:
if self.restore_ckpt_file is None:
logger.warn(
Color.yellow('No checkpoint file for restore vars, checkpoint file is None', bold=True))
return False
self._restore_saver = tf.train.Saver(self._var_list, name='tk_restore')
self._restore_saver.restore(sess, self.restore_ckpt_file)
return True
else:
return self._optimistic_restore_model(sess)
def _optimistic_restore_model(self, sess):
"""
restore weights of same names with model.
:param sess:
:return:
"""
if self.restore_ckpt_file is None:
logger.warn(Color.yellow('No ckpt file for restore vars, ckpt file is None'))
return False
reader = tf.train.NewCheckpointReader(self.restore_ckpt_file)
saved_shapes = reader.get_variable_to_shape_map()
if self._var_list is None:
restore_key2vars = {var.name.split(':')[0]: var for var in tf.global_variables()}
elif isinstance(self._var_list, list):
restore_key2vars = {var.name.split(':')[0]: var for var in self._var_list}
elif isinstance(self._var_list, dict):
restore_key2vars = self._var_list
else:
raise RuntimeError('type error {}'.format(self._var_list))
assert len(restore_key2vars) > 0
restore_key2vars = sorted([(k, v) for k, v in restore_key2vars.items() if k in saved_shapes])
msg = []
var_list = dict()
with tf.variable_scope('', reuse=True):
for key, var in restore_key2vars:
var_shape = var.get_shape().as_list()
if var_shape == saved_shapes[key]:
var_list[key] = var
var_name = var.name[:var.name.index(':')]
msg.append('- restoring variable: {}'.format(var_name)
if var_name == key else
'- restoring variable {} from {}'.format(var_name, key))
else:
msg.append(Color.yellow(
'- variable({}) with inconsistent shape: {}(graph) != {}(ckpt)'.format(
key, var_shape, saved_shapes[key])
))
if len(var_list) != 0:
msg += ['- total variable count: {}'.format(len(var_list))]
logger.info('\n'.join(msg))
saver = tf.train.Saver(var_list, name='tk_restore')
saver.restore(sess, self.restore_ckpt_file)
return True
else:
logger.warn(Color.yellow('No vars need to restore from file: {}'.format(self.restore_ckpt_file)))
return False
def __str__(self):
content = 'RESTORE_OPTIMISTIC: %s' \
'\nRESTORE_CHECKPOINT_FILE: %s' % (self._restore_optimistic, self.restore_ckpt_file)
return content
| tensorkit/restore.py | 4,687 | restore weights of same names with model.
:param sess:
:return:
:param sess:
:return: boolean for successful or not
:param var_list: vars for restore
:param ckpt_dir: prefix of model files.
:param ckpt_file: exact name of model file, priority is higher than `ckpt_dir`
:param optimistic: only restore weights of same names with model.
:return:
get ckpt file. | 369 | en | 0.711414 |
import config as c
import random as r
def print_map(map_grid):
print("= " * (len(map_grid) + 2))
for row in map_grid:
print("||", end='')
print(*row, sep=" ", end='')
print("||")
print("= " * (len(map_grid) + 2))
# Builds map with all of one type of tile
# Should be WALL or FLOOR
def init_empty_map(dimension, default_tile):
map_grid = []
for i in range(dimension):
map_grid.append([default_tile] * dimension)
return map_grid
# def build_ruins(dimension, p_mod):
# map_grid = init_empty_map(dimension, c.FLOOR)
# build_dungeon_walls(map_grid, p_mod)
# return map_grid
# Randomly populate wall tiles across an empty dungeon floor
def build_dungeon_walls(map_grid, p_mod):
for y in range(0, len(map_grid)):
for x in range(0, len(map_grid)):
# Determine if wall tile will be populated
if r.randint(0,100) / 100 < p_mod:
map_grid[y][x] = c.WALL
def build_wall_clusters(map_grid, p_mod):
for y in range(0, len(map_grid) - 1):
for x in range(0, len(map_grid) - 1):
# Determine if a few tiles will be populated
if r.randint(0,100) / 100 < p_mod:
build_cluster(map_grid, y, x)
# Populate a cluster of 2-3 tiles on the map
# Does not check for overlap of existing wall tiles
def build_cluster(map_grid, row, column):
itr = r.randint(1,3)
while itr > 0:
map_grid[row][column] = c.WALL
next_direction = r.choice(get_valid_cardinals(map_grid, row, column, False))
row += c.CARDINAL_VECTORS[next_direction][c.Y_INDEX]
column += c.CARDINAL_VECTORS[next_direction][c.X_INDEX]
itr -= 1
# Returns a subset of cardinal directions which you could move from a given tile on a map
# 'diaganol' is a flag for whether or not to consider diaganol adjacency
def get_valid_cardinals(map_grid, row, column, diaganol):
valid_cardinals = []
if row > 0:
valid_cardinals.append(c.NORTH)
if column > 0:
valid_cardinals.append(c.WEST)
if row < len(map_grid) - 1:
valid_cardinals.append(c.SOUTH)
if column < len(map_grid) - 1:
valid_cardinals.append(c.EAST)
if diaganol:
if row > 0 and column > 0:
valid_cardinals.append(c.NORTHWEST)
if row > 0 and column < len(map_grid) - 1:
valid_cardinals.append(c.NORTHEAST)
if row < len(map_grid) - 1 and column > 0:
valid_cardinals.append(c.SOUTHWEST)
if row < len(map_grid) - 1 and column < len(map_grid) - 1:
valid_cardinals.append(c.SOUTHEAST)
return valid_cardinals
# Clears all tiles of a given type, which have no adjacent matching tiles
# Default clear state is a FLOOR tile
# This considers diagonal adjacency
def remove_adjacentless_tiles(map_grid, tile_type):
for y in range(0, len(map_grid)):
for x in range(0, len(map_grid)):
if map_grid[y][x] == tile_type and has_adjacent_tile(map_grid, y, x) is not True:
map_grid[y][x] = c.FLOOR
# TODO Debug
def has_adjacent_tile(map_grid, y, x):
tile_type = map_grid[y][x]
cardinals = get_valid_cardinals(map_grid, y, x, True)
for cardinal in cardinals:
y_adj = y + c.CARDINAL_VECTORS[cardinal][c.Y_INDEX]
x_adj = x + c.CARDINAL_VECTORS[cardinal][c.X_INDEX]
if map_grid[y_adj][x_adj] == tile_type:
return True
return False | map.py | 3,433 | Builds map with all of one type of tile Should be WALL or FLOOR def build_ruins(dimension, p_mod): map_grid = init_empty_map(dimension, c.FLOOR) build_dungeon_walls(map_grid, p_mod) return map_grid Randomly populate wall tiles across an empty dungeon floor Determine if wall tile will be populated Determine if a few tiles will be populated Populate a cluster of 2-3 tiles on the map Does not check for overlap of existing wall tiles Returns a subset of cardinal directions which you could move from a given tile on a map 'diaganol' is a flag for whether or not to consider diaganol adjacency Clears all tiles of a given type, which have no adjacent matching tiles Default clear state is a FLOOR tile This considers diagonal adjacency TODO Debug | 757 | en | 0.81798 |
# Nodes represent a definition of a value in our graph of operators.
from typing import TYPE_CHECKING, Union, Callable, Any, Tuple, List, Optional, Dict, Set
from ._compatibility import compatibility
from .immutable_collections import immutable_dict, immutable_list
import torch
import builtins
import types
from torch.fx.operator_schemas import normalize_function, normalize_module, ArgsKwargsPair
if TYPE_CHECKING:
from .graph import Graph
BaseArgumentTypes = Union[str, int, float, bool, torch.dtype, torch.Tensor, torch.device, torch.memory_format]
base_types = BaseArgumentTypes.__args__ # type: ignore[attr-defined]
Target = Union[Callable[..., Any], str]
Argument = Optional[Union[
Tuple[Any, ...], # actually Argument, but mypy can't represent recursive types
List[Any], # actually Argument
Dict[str, Any], # actually Argument
slice, # Slice[Argument, Argument, Argument], but slice is not a templated type in typing
'Node',
BaseArgumentTypes
]]
_side_effectful_functions: Set[Callable] = {
torch._assert, torch.ops.profiler._record_function_enter,
torch.ops.profiler._record_function_exit}
# this is fixed on master, WAR for 1.5
def _find_module_of_method(orig_method: Callable[..., Any]) -> str:
name = orig_method.__name__
module = orig_method.__module__
if module is not None:
return module
for guess in [torch, torch.nn.functional]:
if getattr(guess, name, None) is orig_method:
return guess.__name__
raise RuntimeError(f'cannot find module for {orig_method}')
# Borrowed from CPython typing module
# https://github.com/python/cpython/blob/f90dc36c15d7fee0efaf6d39e97be0bdf2683e93/Lib/typing.py#L156
def _type_repr(obj):
"""Return the repr() of an object, special-casing types (internal helper).
If obj is a type, we return a shorter version than the default
type.__repr__, based on the module and qualified name, which is
typically enough to uniquely identify a type. For everything
else, we fall back on repr(obj).
"""
# HACK: In Python 3.6, type aliases from ``typing`` are instances of ``type``, but in
# later Python versions, type aliases are not instances of ``type``!! We want
# all type aliases to fall through to ``repr``, so if we have a type that is
# in the module typing, don't go down this path.
if isinstance(obj, type) and obj.__module__ != 'typing':
if obj.__module__ == 'builtins':
return obj.__qualname__
return f'{obj.__module__}.{obj.__qualname__}'
if obj is ...:
return('...')
if isinstance(obj, types.FunctionType):
return obj.__name__
return repr(obj)
def _get_qualified_name(func: Callable[..., Any]) -> str:
# things like getattr just appear in builtins
if getattr(builtins, func.__name__, None) is func:
return func.__name__
name = func.__name__
module = _find_module_of_method(func)
module = module.replace('torch._ops', 'torch.ops') # WAR for bug in how torch.ops assigns module
return f'{module}.{name}'
def _format_arg(arg) -> str:
if isinstance(arg, list):
items = ', '.join(_format_arg(a) for a in arg)
return f'[{items}]'
elif isinstance(arg, tuple):
items = ', '.join(_format_arg(a) for a in arg)
maybe_comma = ',' if len(arg) == 1 else ''
return f'({items}{maybe_comma})'
elif isinstance(arg, dict):
items_str = ', '.join(f'{k}: {_format_arg(v)}' for k, v in arg.items())
return f'{{{items_str}}}'
if isinstance(arg, Node):
return '%' + str(arg)
else:
return str(arg)
@compatibility(is_backward_compatible=True)
class Node:
"""
``Node`` is the data structure that represents individual operations within
a ``Graph``. For the most part, Nodes represent callsites to various entities,
such as operators, methods, and Modules (some exceptions include nodes that
specify function inputs and outputs). Each ``Node`` has a function specified
by its ``op`` property. The ``Node`` semantics for each value of ``op`` are as follows:
- ``placeholder`` represents a function input. The ``name`` attribute specifies the name this value will take on.
``target`` is similarly the name of the argument. ``args`` holds either: 1) nothing, or 2) a single argument
denoting the default parameter of the function input. ``kwargs`` is don't-care. Placeholders correspond to
the function parameters (e.g. ``x``) in the graph printout.
- ``get_attr`` retrieves a parameter from the module hierarchy. ``name`` is similarly the name the result of the
fetch is assigned to. ``target`` is the fully-qualified name of the parameter's position in the module hierarchy.
``args`` and ``kwargs`` are don't-care
- ``call_function`` applies a free function to some values. ``name`` is similarly the name of the value to assign
to. ``target`` is the function to be applied. ``args`` and ``kwargs`` represent the arguments to the function,
following the Python calling convention
- ``call_module`` applies a module in the module hierarchy's ``forward()`` method to given arguments. ``name`` is
as previous. ``target`` is the fully-qualified name of the module in the module hierarchy to call.
``args`` and ``kwargs`` represent the arguments to invoke the module on, *including the self argument*.
- ``call_method`` calls a method on a value. ``name`` is as similar. ``target`` is the string name of the method
to apply to the ``self`` argument. ``args`` and ``kwargs`` represent the arguments to invoke the module on,
*including the self argument*
- ``output`` contains the output of the traced function in its ``args[0]`` attribute. This corresponds to the "return" statement
in the Graph printout.
"""
@compatibility(is_backward_compatible=True)
def __init__(self, graph: 'Graph', name: str, op: str, target: 'Target',
args: Tuple['Argument', ...], kwargs: Dict[str, 'Argument'],
return_type : Optional[Any] = None) -> None:
"""
Instantiate an instance of ``Node``. Note: most often, you want to use the
Graph APIs, i.e. ``Graph.call_module``, ``Graph.call_method``, etc. rather
than instantiating a ``Node`` directly.
Args:
graph (Graph): The ``Graph`` to which this ``Node`` should belong.
name (str): The name to which the output of this ``Node`` should be assigned
op (str): The opcode for this ``Node``. Can be one of 'placeholder',
'call_method', 'call_module', 'call_function', 'get_attr',
'output'
target ('Target'): The target this op should call. See the broader
``Node`` docstring for more details.
args (Tuple['Argument']): The args to be passed to ``target``
kwargs (Dict[str, 'Argument']): The kwargs to be passed to ``target``
return_type (Optional[Any]): The python type expression representing the
type of the output of this node. This field can be used for
annotation of values in the generated code or for other types
of analyses.
"""
self.graph = graph
self.name = name # unique name of value being created
assert op in ['placeholder', 'call_method', 'call_module', 'call_function', 'get_attr', 'output', 'root']
self.op = op # the kind of operation = placeholder|call_method|call_module|call_function|get_attr
if op == 'call_function':
if not callable(target):
raise ValueError(f'Node [graph = {graph}, name = \'{name}\'] target {target} has type {torch.typename(target)} '
'but a Callable is expected')
else:
if not isinstance(target, str):
raise ValueError(f'Node [graph = {graph}, name = \'{name}\'] target {target} has type {torch.typename(target)} '
'but a str is expected')
self.target = target # for method/module/function, the name of the method/module/function/attr
# being invoked, e.g add, layer1, or torch.add
# All `Node`-valued inputs. Key is the Node, value is don't-care.
# The public API for this is `all_input_nodes`, this private attribute
# should not be accessed directly.
self._input_nodes : Dict[Node, None] = {}
self.__update_args_kwargs(map_arg(args, lambda x: x), map_arg(kwargs, lambda x: x)) # type: ignore[arg-type]
# All of the nodes that use the value produced by this Node
# Note one user may correspond to several uses, e.g. the node fo ``x + x``
# would appear once here, but represents two uses.
#
# Is a dict to act as an "ordered set". Keys are significant, value dont-care
self.users : Dict['Node', None] = {}
# Type expression representing the output value of this node.
# This should contain the same class of Type objects that would appear
# as type annotations for function inputs/outputs.
#
# For placeholder nodes, this value will be used to type-annotate the
# generated function parameters.
# For the return node, this value will be used to type-annotate the
# generated function return type. (Note this is a special case. ``return``
# does not produce a value, it's more of a notation. Thus, this value
# describes the type of args[0] in the ``return`` node.
self.type : Optional[Any] = return_type
self._prev = self
self._next = self
self._erased = False
# If set, use this fn to print this node
self._repr_fn : Optional[Callable[[Node], str]] = None
self._stack_trace : Optional[str] = None
# Dictionary to store metadata passes need to do their
# transformations. This metadata is preserved across node copies
self.meta : Dict[str, Any] = {}
@property
def next(self) -> 'Node':
"""
Returns the next ``Node`` in the linked list of Nodes.
Returns:
The next ``Node`` in the linked list of Nodes.
"""
return self._next
@property
def prev(self) -> 'Node':
"""
Returns the previous ``Node`` in the linked list of Nodes.
Returns:
The previous ``Node`` in the linked list of Nodes.
"""
return self._prev
@compatibility(is_backward_compatible=True)
def prepend(self, x: 'Node') -> None:
"""
Insert x before this node in the list of nodes in the graph. Example::
Before: p -> self
bx -> x -> ax
After: p -> x -> self
bx -> ax
Args:
x (Node): The node to put before this node. Must be a member of the same graph.
"""
assert self.graph == x.graph, "Attempting to move a Node into a different Graph"
x._remove_from_list()
p = self._prev
p._next, x._prev = x, p
x._next, self._prev = self, x
@compatibility(is_backward_compatible=True)
def append(self, x: 'Node') -> None:
"""
Insert x after this node in the list of nodes in the graph.
Equvalent to ``self.next.prepend(x)``
Args:
x (Node): The node to put after this node. Must be a member of the same graph.
"""
self._next.prepend(x)
def _remove_from_list(self):
p, n = self._prev, self._next
p._next, n._prev = n, p
@property
def args(self) -> Tuple[Argument, ...]:
"""
The tuple of arguments to this ``Node``. The interpretation of arguments
depends on the node's opcode. See the :class:`Node` docstring for more
information.
Assignment to this property is allowed. All accounting of uses and users
is updated automatically on assignment.
"""
return self._args
@args.setter
def args(self, a : Tuple[Argument, ...]):
"""
Set the tuple of arguments to this Node. The interpretation of arguments
depends on the node's opcode. See the ``fx.Graph`` docstring for more
information.
"""
# DO NOT CALL `__update_args_kwargs` directly. The correct way to
# set `args` is via direct assignment, i.e. `node.args = new_args`
self.__update_args_kwargs(map_arg(a, lambda x: x), self._kwargs) # type: ignore[arg-type]
@property
def kwargs(self) -> Dict[str, Argument]:
"""
The dict of keyword arguments to this ``Node``. The interpretation of arguments
depends on the node's opcode. See the :class:`Node` docstring for more
information.
Assignment to this property is allowed. All accounting of uses and users
is updated automatically on assignment.
"""
return self._kwargs
@kwargs.setter
def kwargs(self, k : Dict[str, Argument]):
"""
Set the dict of kwargs to this Node. The interpretation of arguments
depends on the node's opcode. See the ``fx.Graph`` docstring for more
information.
"""
# DO NOT CALL `__update_args_kwargs` directly. The correct way to
# set `args` is via direct assignment, i.e. `node.kwargs = new_kwargs`
self.__update_args_kwargs(self._args, map_arg(k, lambda x: x)) # type: ignore[arg-type]
@property
def all_input_nodes(self) -> List['Node']:
"""
Return all Nodes that are inputs to this Node. This is equivalent to
iterating over ``args`` and ``kwargs`` and only collecting the values that
are Nodes.
Returns:
List of ``Nodes`` that appear in the ``args`` and ``kwargs`` of this
``Node``, in that order.
"""
return list(self._input_nodes.keys())
@compatibility(is_backward_compatible=True)
def update_arg(self, idx : int, arg : Argument) -> None:
"""
Update an existing positional argument to contain the new value
``arg``. After calling, ``self.args[idx] == arg``.
Args:
idx (int): The index into ``self.args`` of the element to update
arg (Argument): The new argument value to write into ``args``
"""
args = list(self.args)
args[idx] = arg
self.args = tuple(args)
@compatibility(is_backward_compatible=True)
def update_kwarg(self, key : str, arg : Argument) -> None:
"""
Update an existing keyword argument to contain the new value
``arg``. After calling, ``self.kwargs[key] == arg``.
Args:
key (str): The key in ``self.kwargs`` of the element to update
arg (Argument): The new argument value to write into ``kwargs``
"""
kwargs = dict(self.kwargs)
kwargs[key] = arg
self.kwargs = kwargs
@property
def stack_trace(self) -> Optional[str]:
"""
Return the Python stack trace that was recorded during tracing, if any.
This property is usually populated by `Tracer.create_proxy`. To record
stack traces during tracing for debug purposes, set
`record_stack_traces = True` on the `Tracer` instance.
"""
return self._stack_trace
@stack_trace.setter
def stack_trace(self, trace : Optional[str]):
self._stack_trace = trace
def __update_args_kwargs(self, new_args : Tuple['Argument', ...], new_kwargs : Dict[str, 'Argument']):
"""
This API is internal. Do *not* call it directly.
"""
self._args = new_args
self._kwargs = new_kwargs
for old_use in self._input_nodes.keys():
old_use.users.pop(self)
self._input_nodes = {}
map_arg(self._args, lambda n: self._input_nodes.setdefault(n))
map_arg(self._kwargs, lambda n: self._input_nodes.setdefault(n))
for new_use in self._input_nodes.keys():
new_use.users.setdefault(self)
def __repr__(self) -> str:
if self._repr_fn:
return self._repr_fn(self)
return self.name
def _pretty_print_target(self, target):
"""
Make target printouts more user-friendly.
1) builtins will be printed as `builtins.xyz`
2) operators will be printed as `operator.xyz`
3) other callables will be printed with qualfied name, e.g. torch.add
"""
if isinstance(target, str):
return target
if hasattr(target, '__module__'):
if not hasattr(target, '__name__'):
# Just to be defensive, if we don't have `__name__`, get the
# qualname. Not sure if this happens for any members of `operator`
# or `builtins`. This fallback path is not as good, since e.g.
# things in `operator` have `_operator` as their __module__.
return _get_qualified_name(target)
if target.__module__ == 'builtins':
return f'builtins.{target.__name__}'
elif target.__module__ == '_operator':
return f'operator.{target.__name__}'
return _get_qualified_name(target)
@compatibility(is_backward_compatible=True)
def format_node(self,
placeholder_names: List[str] = None,
maybe_return_typename: List[str] = None) -> Optional[str]:
"""
Return a descriptive string representation of ``self``.
This method can be used with no arguments as a debugging
utility.
This function is also used internally in the ``__str__`` method
of ``Graph``. Together, the strings in ``placeholder_names``
and ``maybe_return_typename`` make up the signature of the
autogenerated ``forward`` function in this Graph's surrounding
GraphModule. ``placeholder_names`` and ``maybe_return_typename``
should not be used otherwise.
Args:
placeholder_names: A list that will store formatted strings
representing the placeholders in the generated
``forward`` function. Internal use only.
maybe_return_typename: A single-element list that will store
a formatted string representing the output of the
generated ``forward`` function. Internal use only.
Returns:
str: If 1) we're using ``format_node`` as an internal helper
in the ``__str__`` method of ``Graph``, and 2) ``self``
is a placeholder Node, return ``None``. Otherwise,
return a descriptive string representation of the
current Node.
"""
if self.op == 'placeholder':
assert isinstance(self.target, str)
arg_str = self.target
arg_str += arg_str + f': {_type_repr(self.type)}' if self.type else ''
if placeholder_names:
placeholder_names.append(arg_str)
return None
maybe_typename = f'{_type_repr(self.type)} ' if self.type else ''
default_val = '(default=' + str(self.args[0]) + ')' if self.args else ''
return f'%{self.name} : {maybe_typename}[#users={len(self.users)}] = {self.op}[target={self.target}]{default_val}'
elif self.op == 'get_attr':
maybe_typename = f'{_type_repr(self.type)} ' if self.type is not None else ''
return f'%{self.name} : {maybe_typename}[#users={len(self.users)}] = ' \
f'{self.op}[target={self._pretty_print_target(self.target)}]'
elif self.op == 'output':
if self.type and maybe_return_typename:
maybe_return_typename[0] = f' -> {_type_repr(self.type)}'
return f'return {self.args[0]}'
else:
maybe_typename = f'{_type_repr(self.type)} ' if self.type is not None else ''
return f'%{self.name} : {maybe_typename}[#users={len(self.users)}] = ' \
f'{self.op}[target={self._pretty_print_target(self.target)}](' \
f'args = {_format_arg(self.args)}, kwargs = {_format_arg(self.kwargs)})'
@compatibility(is_backward_compatible=True)
def replace_all_uses_with(self, replace_with : 'Node') -> List['Node']:
"""
Replace all uses of ``self`` in the Graph with the Node ``replace_with``.
Args:
replace_with (Node): The node to replace all uses of ``self`` with.
Returns:
The list of Nodes on which this change was made.
"""
to_process = list(self.users)
for use_node in to_process:
def maybe_replace_node(n : Node) -> Node:
if n == self:
return replace_with
else:
return n
new_args = map_arg(use_node.args, maybe_replace_node)
new_kwargs = map_arg(use_node.kwargs, maybe_replace_node)
assert isinstance(new_args, tuple)
assert isinstance(new_kwargs, dict)
use_node.__update_args_kwargs(new_args, new_kwargs)
assert len(self.users) == 0
return to_process
@compatibility(is_backward_compatible=False)
def is_impure(self):
"""
Returns whether this op is impure, i.e. if its op is a placeholder or
output, or if a call_function or call_module which is impure.
Returns:
bool: If the op is impure or not.
"""
if self.op in {"placeholder", "output"}:
return True
# Check if an impure function.
if self.op == "call_function":
return self.target in _side_effectful_functions
# Check if an impure module.
if self.op == "call_module":
assert (
self.graph.owning_module is not None
), "self.graph.owning_module not set for purity check"
target_mod = self.graph.owning_module.get_submodule(self.target)
assert (
target_mod is not None
), f"Did not find expected submodule target {self.target}"
return getattr(target_mod, "_is_impure", False)
return False
@compatibility(is_backward_compatible=False)
def normalized_arguments(
self, root : torch.nn.Module, arg_types : Optional[Tuple[Any]] = None,
kwarg_types : Optional[Dict[str, Any]] = None,
normalize_to_only_use_kwargs : bool = False) -> Optional[ArgsKwargsPair]:
"""
Returns normalized arguments to Python targets. This means that
`args/kwargs` will be matched up to the module/functional's
signature and return exclusively kwargs in positional order
if `normalize_to_only_use_kwargs` is true.
Also populates default values. Does not support positional-only
parameters or varargs parameters.
Supports module calls.
May require `arg_types` and `kwarg_types` in order to disambiguate overloads.
Args:
root (torch.nn.Module): Module upon which to resolve module targets.
arg_types (Optional[Tuple[Any]]): Tuple of arg types for the args
kwarg_types (Optional[Dict[str, Any]]): Dict of arg types for the kwargs
normalize_to_only_use_kwargs (bool): Whether to normalize to only use kwargs.
Returns:
Returns NamedTuple ArgsKwargsPair, or `None` if not successful.
"""
if self.op == 'call_function':
assert callable(self.target)
return normalize_function(self.target, self.args, self.kwargs, arg_types, kwarg_types) # type: ignore[arg-type]
elif self.op == 'call_module':
assert isinstance(self.target, str)
return normalize_module(root, self.target, self.args, self.kwargs) # type: ignore[arg-type]
return None
@compatibility(is_backward_compatible=True)
def replace_input_with(self, old_input: 'Node', new_input: 'Node'):
"""
Loop through input nodes of ``self``, and replace all instances of
``old_input`` with ``new_input``.
Args:
old_input (Node): The old input node to be replaced.
new_input (Node): The new input node to replace ``old_input``.
"""
def maybe_replace_node(n : Node) -> Node:
return new_input if n == old_input else n
new_args = map_arg(self.args, maybe_replace_node)
new_kwargs = map_arg(self.kwargs, maybe_replace_node)
assert isinstance(new_args, tuple)
assert isinstance(new_kwargs, dict)
self.__update_args_kwargs(new_args, new_kwargs)
@compatibility(is_backward_compatible=True)
def map_arg(a: Argument, fn: Callable[[Node], Argument]) -> Argument:
"""
Apply fn to each Node appearing arg. arg may be a list, tuple, slice, or dict with string keys.
"""
assert callable(fn), "torch.fx.map_arg(a, fn): fn must be a callable"
return map_aggregate(a, lambda x: fn(x) if isinstance(x, Node) else x)
@compatibility(is_backward_compatible=True)
def map_aggregate(a: Argument, fn: Callable[[Argument], Argument]) -> Argument:
"""
Apply fn to each Node appearing arg. arg may be a list, tuple, slice, or dict with string keys.
"""
if isinstance(a, tuple):
return tuple(map_aggregate(elem, fn) for elem in a)
elif isinstance(a, list):
return immutable_list(map_aggregate(elem, fn) for elem in a)
elif isinstance(a, dict):
return immutable_dict((k, map_aggregate(v, fn)) for k, v in a.items())
elif isinstance(a, slice):
return slice(map_aggregate(a.start, fn), map_aggregate(a.stop, fn), map_aggregate(a.step, fn))
else:
return fn(a)
| venv/Lib/site-packages/torch/fx/node.py | 26,552 | ``Node`` is the data structure that represents individual operations within
a ``Graph``. For the most part, Nodes represent callsites to various entities,
such as operators, methods, and Modules (some exceptions include nodes that
specify function inputs and outputs). Each ``Node`` has a function specified
by its ``op`` property. The ``Node`` semantics for each value of ``op`` are as follows:
- ``placeholder`` represents a function input. The ``name`` attribute specifies the name this value will take on.
``target`` is similarly the name of the argument. ``args`` holds either: 1) nothing, or 2) a single argument
denoting the default parameter of the function input. ``kwargs`` is don't-care. Placeholders correspond to
the function parameters (e.g. ``x``) in the graph printout.
- ``get_attr`` retrieves a parameter from the module hierarchy. ``name`` is similarly the name the result of the
fetch is assigned to. ``target`` is the fully-qualified name of the parameter's position in the module hierarchy.
``args`` and ``kwargs`` are don't-care
- ``call_function`` applies a free function to some values. ``name`` is similarly the name of the value to assign
to. ``target`` is the function to be applied. ``args`` and ``kwargs`` represent the arguments to the function,
following the Python calling convention
- ``call_module`` applies a module in the module hierarchy's ``forward()`` method to given arguments. ``name`` is
as previous. ``target`` is the fully-qualified name of the module in the module hierarchy to call.
``args`` and ``kwargs`` represent the arguments to invoke the module on, *including the self argument*.
- ``call_method`` calls a method on a value. ``name`` is as similar. ``target`` is the string name of the method
to apply to the ``self`` argument. ``args`` and ``kwargs`` represent the arguments to invoke the module on,
*including the self argument*
- ``output`` contains the output of the traced function in its ``args[0]`` attribute. This corresponds to the "return" statement
in the Graph printout.
Instantiate an instance of ``Node``. Note: most often, you want to use the
Graph APIs, i.e. ``Graph.call_module``, ``Graph.call_method``, etc. rather
than instantiating a ``Node`` directly.
Args:
graph (Graph): The ``Graph`` to which this ``Node`` should belong.
name (str): The name to which the output of this ``Node`` should be assigned
op (str): The opcode for this ``Node``. Can be one of 'placeholder',
'call_method', 'call_module', 'call_function', 'get_attr',
'output'
target ('Target'): The target this op should call. See the broader
``Node`` docstring for more details.
args (Tuple['Argument']): The args to be passed to ``target``
kwargs (Dict[str, 'Argument']): The kwargs to be passed to ``target``
return_type (Optional[Any]): The python type expression representing the
type of the output of this node. This field can be used for
annotation of values in the generated code or for other types
of analyses.
This API is internal. Do *not* call it directly.
Make target printouts more user-friendly.
1) builtins will be printed as `builtins.xyz`
2) operators will be printed as `operator.xyz`
3) other callables will be printed with qualfied name, e.g. torch.add
Return the repr() of an object, special-casing types (internal helper).
If obj is a type, we return a shorter version than the default
type.__repr__, based on the module and qualified name, which is
typically enough to uniquely identify a type. For everything
else, we fall back on repr(obj).
Return all Nodes that are inputs to this Node. This is equivalent to
iterating over ``args`` and ``kwargs`` and only collecting the values that
are Nodes.
Returns:
List of ``Nodes`` that appear in the ``args`` and ``kwargs`` of this
``Node``, in that order.
Insert x after this node in the list of nodes in the graph.
Equvalent to ``self.next.prepend(x)``
Args:
x (Node): The node to put after this node. Must be a member of the same graph.
The tuple of arguments to this ``Node``. The interpretation of arguments
depends on the node's opcode. See the :class:`Node` docstring for more
information.
Assignment to this property is allowed. All accounting of uses and users
is updated automatically on assignment.
Set the tuple of arguments to this Node. The interpretation of arguments
depends on the node's opcode. See the ``fx.Graph`` docstring for more
information.
Return a descriptive string representation of ``self``.
This method can be used with no arguments as a debugging
utility.
This function is also used internally in the ``__str__`` method
of ``Graph``. Together, the strings in ``placeholder_names``
and ``maybe_return_typename`` make up the signature of the
autogenerated ``forward`` function in this Graph's surrounding
GraphModule. ``placeholder_names`` and ``maybe_return_typename``
should not be used otherwise.
Args:
placeholder_names: A list that will store formatted strings
representing the placeholders in the generated
``forward`` function. Internal use only.
maybe_return_typename: A single-element list that will store
a formatted string representing the output of the
generated ``forward`` function. Internal use only.
Returns:
str: If 1) we're using ``format_node`` as an internal helper
in the ``__str__`` method of ``Graph``, and 2) ``self``
is a placeholder Node, return ``None``. Otherwise,
return a descriptive string representation of the
current Node.
Returns whether this op is impure, i.e. if its op is a placeholder or
output, or if a call_function or call_module which is impure.
Returns:
bool: If the op is impure or not.
The dict of keyword arguments to this ``Node``. The interpretation of arguments
depends on the node's opcode. See the :class:`Node` docstring for more
information.
Assignment to this property is allowed. All accounting of uses and users
is updated automatically on assignment.
Set the dict of kwargs to this Node. The interpretation of arguments
depends on the node's opcode. See the ``fx.Graph`` docstring for more
information.
Apply fn to each Node appearing arg. arg may be a list, tuple, slice, or dict with string keys.
Apply fn to each Node appearing arg. arg may be a list, tuple, slice, or dict with string keys.
Returns the next ``Node`` in the linked list of Nodes.
Returns:
The next ``Node`` in the linked list of Nodes.
Returns normalized arguments to Python targets. This means that
`args/kwargs` will be matched up to the module/functional's
signature and return exclusively kwargs in positional order
if `normalize_to_only_use_kwargs` is true.
Also populates default values. Does not support positional-only
parameters or varargs parameters.
Supports module calls.
May require `arg_types` and `kwarg_types` in order to disambiguate overloads.
Args:
root (torch.nn.Module): Module upon which to resolve module targets.
arg_types (Optional[Tuple[Any]]): Tuple of arg types for the args
kwarg_types (Optional[Dict[str, Any]]): Dict of arg types for the kwargs
normalize_to_only_use_kwargs (bool): Whether to normalize to only use kwargs.
Returns:
Returns NamedTuple ArgsKwargsPair, or `None` if not successful.
Insert x before this node in the list of nodes in the graph. Example::
Before: p -> self
bx -> x -> ax
After: p -> x -> self
bx -> ax
Args:
x (Node): The node to put before this node. Must be a member of the same graph.
Returns the previous ``Node`` in the linked list of Nodes.
Returns:
The previous ``Node`` in the linked list of Nodes.
Replace all uses of ``self`` in the Graph with the Node ``replace_with``.
Args:
replace_with (Node): The node to replace all uses of ``self`` with.
Returns:
The list of Nodes on which this change was made.
Loop through input nodes of ``self``, and replace all instances of
``old_input`` with ``new_input``.
Args:
old_input (Node): The old input node to be replaced.
new_input (Node): The new input node to replace ``old_input``.
Return the Python stack trace that was recorded during tracing, if any.
This property is usually populated by `Tracer.create_proxy`. To record
stack traces during tracing for debug purposes, set
`record_stack_traces = True` on the `Tracer` instance.
Update an existing positional argument to contain the new value
``arg``. After calling, ``self.args[idx] == arg``.
Args:
idx (int): The index into ``self.args`` of the element to update
arg (Argument): The new argument value to write into ``args``
Update an existing keyword argument to contain the new value
``arg``. After calling, ``self.kwargs[key] == arg``.
Args:
key (str): The key in ``self.kwargs`` of the element to update
arg (Argument): The new argument value to write into ``kwargs``
Nodes represent a definition of a value in our graph of operators. type: ignore[attr-defined] actually Argument, but mypy can't represent recursive types actually Argument actually Argument Slice[Argument, Argument, Argument], but slice is not a templated type in typing this is fixed on master, WAR for 1.5 Borrowed from CPython typing module https://github.com/python/cpython/blob/f90dc36c15d7fee0efaf6d39e97be0bdf2683e93/Lib/typing.pyL156 HACK: In Python 3.6, type aliases from ``typing`` are instances of ``type``, but in later Python versions, type aliases are not instances of ``type``!! We want all type aliases to fall through to ``repr``, so if we have a type that is in the module typing, don't go down this path. things like getattr just appear in builtins WAR for bug in how torch.ops assigns module unique name of value being created the kind of operation = placeholder|call_method|call_module|call_function|get_attr for method/module/function, the name of the method/module/function/attr being invoked, e.g add, layer1, or torch.add All `Node`-valued inputs. Key is the Node, value is don't-care. The public API for this is `all_input_nodes`, this private attribute should not be accessed directly. type: ignore[arg-type] All of the nodes that use the value produced by this Node Note one user may correspond to several uses, e.g. the node fo ``x + x`` would appear once here, but represents two uses. Is a dict to act as an "ordered set". Keys are significant, value dont-care Type expression representing the output value of this node. This should contain the same class of Type objects that would appear as type annotations for function inputs/outputs. For placeholder nodes, this value will be used to type-annotate the generated function parameters. For the return node, this value will be used to type-annotate the generated function return type. (Note this is a special case. ``return`` does not produce a value, it's more of a notation. Thus, this value describes the type of args[0] in the ``return`` node. If set, use this fn to print this node Dictionary to store metadata passes need to do their transformations. This metadata is preserved across node copies DO NOT CALL `__update_args_kwargs` directly. The correct way to set `args` is via direct assignment, i.e. `node.args = new_args` type: ignore[arg-type] DO NOT CALL `__update_args_kwargs` directly. The correct way to set `args` is via direct assignment, i.e. `node.kwargs = new_kwargs` type: ignore[arg-type] Just to be defensive, if we don't have `__name__`, get the qualname. Not sure if this happens for any members of `operator` or `builtins`. This fallback path is not as good, since e.g. things in `operator` have `_operator` as their __module__. Check if an impure function. Check if an impure module. type: ignore[arg-type] type: ignore[arg-type] | 11,775 | en | 0.742337 |
"""
Copyright 2021 Anderson Faustino da Silva
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import os
import sys
from absl import app, flags, logging
from tqdm import tqdm
from optcache.essentials import IO
from optcache.essentials import Goals
from optcache.algorithms import SGA
flags.DEFINE_integer('generations',
100,
'Number of generations')
flags.DEFINE_integer('seed',
None,
'The seed')
flags.DEFINE_integer('dimension',
100,
'Poblem dimension (individual length)')
flags.DEFINE_integer('population',
100,
'Population size')
flags.DEFINE_integer('param_s',
1,
'Number of best individuals to use or size of the tournament')
flags.DEFINE_float('param_m',
1.0,
'Distribution index')
flags.DEFINE_float('cr',
0.9,
'Crossover probability')
flags.DEFINE_float('m',
0.1,
'Mutation probability')
flags.DEFINE_enum('mutation',
'polynomial',
['polynomial', 'gaussian', 'uniform'],
'Mutation')
flags.DEFINE_enum('selection',
'tournament',
['tournament', 'truncated'],
'Selection')
flags.DEFINE_enum('crossover',
'exponential',
['exponential', 'binomial', 'single'],
'Cossover')
flags.DEFINE_string('passes_filename',
None,
'Filename (yaml) that describes the passes to use')
flags.mark_flag_as_required('passes_filename')
def execute(argv):
"""Generate genetic sequences for each benchmark"""
del argv
FLAGS = flags.FLAGS
# The benchmarks
benchmarks = IO.load_yaml(FLAGS.benchmarks_filename)
if not benchmarks:
logging.error('There are no benchmarks to process')
sys.exit(1)
# Verify benchmark directory
if not os.path.isdir(FLAGS.benchmarks_directory):
logging.error('Benchmarks directory {} does not exist.'.format(
FLAGS.benchmarks_directory)
)
sys.exit(1)
# Create the results directory
try:
os.makedirs(FLAGS.results_directory)
except FileExistsError:
pass
# Initialize a SGA object
sga = SGA(FLAGS.generations,
FLAGS.population,
FLAGS.cr,
FLAGS.m,
FLAGS.param_m,
FLAGS.param_s,
FLAGS.crossover,
FLAGS.mutation,
FLAGS.selection,
FLAGS.seed,
FLAGS.dimension,
FLAGS.passes_filename,
Goals.prepare_goals(FLAGS.goals, FLAGS.weights),
'opt',
FLAGS.benchmarks_directory,
FLAGS.working_set,
FLAGS.times,
FLAGS.tool,
FLAGS.verify_output)
# Process each benchmark
for benchmark in tqdm(benchmarks, desc='Processing'):
index = benchmark.find('.')
bench_dir = benchmark[:index]
bench_name = benchmark[index+1:]
bench_dir = os.path.join(FLAGS.results_directory,
bench_dir)
# Create the results directory for the suite
try:
os.makedirs(bench_dir)
except FileExistsError:
pass
filename = '{}/{}.yaml'.format(bench_dir, bench_name)
if FLAGS.verify_report and os.path.isfile(filename):
continue
sga.run(benchmark)
if sga.results:
IO.dump_yaml(sga.results,
filename,
FLAGS.report_only_the_best)
# Execute
if __name__ == '__main__':
flags.DEFINE_list('goals',
None,
'Goals')
flags.DEFINE_list('weights',
None,
'Weights')
flags.DEFINE_string('benchmarks_directory',
None,
'Benchmarks directory')
flags.DEFINE_integer('working_set',
0,
'Working set',
lower_bound=0)
flags.DEFINE_integer('times',
3,
'Execution/compile times',
lower_bound=3)
flags.DEFINE_enum('tool',
'perf',
['perf', 'hyperfine'],
'Execution tool')
flags.DEFINE_boolean('verify_output',
False,
'The value of the goal is only valid if the ouput is correct')
# app
flags.DEFINE_string('benchmarks_filename',
None,
'Benchmarks')
flags.DEFINE_string('results_directory',
None,
'Results directory')
flags.DEFINE_boolean('verify_report',
True,
'Do not process the benchmark if a report exists')
flags.DEFINE_boolean('report_only_the_best',
False,
'Store only the best result')
flags.mark_flag_as_required('goals')
flags.mark_flag_as_required('weights')
flags.mark_flag_as_required('benchmarks_filename')
flags.mark_flag_as_required('benchmarks_directory')
flags.mark_flag_as_required('results_directory')
app.run(execute)
| examples/algorithms/sga.py | 6,061 | Generate genetic sequences for each benchmark
Copyright 2021 Anderson Faustino da Silva
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
The benchmarks Verify benchmark directory Create the results directory Initialize a SGA object Process each benchmark Create the results directory for the suite Execute app | 783 | en | 0.791814 |
import numpy as np
import matplotlib.pyplot as plt
import os, sys
sys.path.append(os.path.join(os.path.dirname(__file__)))
import plot_settings
from test_utilities import gausspuls_coeff, gausspulse, gauss_ft
# time domain plot
fc = 5e6
bandwidth = 2/3
bwr = -6
t_vals = np.linspace(-3/fc, 3/fc, 200)
h = gausspulse(t_vals, fc, bandwidth, bwr)
plt.figure()
plt.plot(t_vals, h)
plt.xlim([-6e-7, 6e-7])
plt.grid()
plt.xlabel("Time [seconds]")
ax = plt.gca()
ax.axes.yaxis.set_ticklabels([])
plt.tight_layout()
fp = os.path.join(os.path.dirname(__file__), "figures", "_fig1p6a.pdf")
plt.savefig(fp, dpi=300)
# frequency domain pulse
f_vals = np.linspace(-3*fc-1e3, 3*fc+1e3, 1000)
a = gausspuls_coeff(fc, bandwidth, bwr)
H = gauss_ft(f_vals, a, fc=fc)
H = H / max(H)
plt.figure()
plt.semilogx(f_vals, 20*np.log10(np.abs(H)))
plt.axvline(x=fc, c='r', label="$f_c$")
plt.grid()
plt.autoscale(enable=True, axis='x', tight=True)
plt.ylabel("[dB]")
plt.legend(loc=3)
plt.xlabel("Frequency [Hz]")
plt.ylim([-40,0])
plt.tight_layout()
fp = os.path.join(os.path.dirname(__file__), "figures", "_fig1p6b.pdf")
plt.savefig(fp, dpi=300)
plt.show()
| report_results/fig1p6_pulse_shape.py | 1,143 | time domain plot frequency domain pulse | 39 | en | 0.70615 |
#!/usr/bin/env python
"""Django's command-line utility for administrative tasks."""
import os
import sys
def main():
"""Run administrative tasks."""
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'apibox.settings')
try:
from django.core.management import execute_from_command_line
except ImportError as exc:
raise ImportError(
"Couldn't import Django. Are you sure it's installed and "
"available on your PYTHONPATH environment variable? Did you "
"forget to activate a virtual environment?"
) from exc
execute_from_command_line(sys.argv)
if __name__ == '__main__':
main()
| manage.py | 662 | Run administrative tasks.
Django's command-line utility for administrative tasks.
!/usr/bin/env python | 103 | en | 0.725633 |
#!/usr/bin/python3
import subprocess
import sys
import json
import math
import os
from os.path import expanduser
from tempfile import TemporaryFile
def get_workspace():
handle = subprocess.Popen(
["i3-msg", "-t", "get_workspaces"], stdout=subprocess.PIPE)
output = handle.communicate()[0]
data = json.loads(output.decode())
data = sorted(data, key=lambda k: k['name'])
for i in data:
if(i['focused']):
return i['name']
def get_workspaces():
handle = subprocess.Popen(
["i3-msg", "-t", "get_workspaces"], stdout=subprocess.PIPE)
output = handle.communicate()[0]
data = json.loads(output.decode())
data = sorted(data, key=lambda k: k['name'])
arr = []
for i in data:
arr.append(i['name'])
return arr
def move_to(num):
subprocess.Popen(
["i3-msg", "move container to workspace " + str(num)],
stdout=subprocess.PIPE)
def go_to(num):
subprocess.Popen(["i3-msg", "workspace "+str(num)], stdout=subprocess.PIPE)
def dmenu_fetch(inputstr):
t = TemporaryFile()
t.write(bytes(inputstr, 'UTF-8'))
t.seek(0)
dmenu_run = subprocess.Popen(
["dmenu", "-b"], stdout=subprocess.PIPE, stdin=t)
output = (dmenu_run.communicate()[0]).decode().strip()
return output
def open_app(workspace):
home = expanduser("~")
cache = home+"/.cache/dmenu_run"
check_new_programs(home, cache)
applications = open(cache)
dmenu_run = subprocess.Popen(
["dmenu", "-b"], stdout=subprocess.PIPE, stdin=applications)
output = (dmenu_run.communicate()[0]).decode().strip()
subprocess.Popen(
["i3-msg", "workspace " + workspace + "; exec " + output],
stdout=subprocess.PIPE)
def check_new_programs(home, cachefile):
PATH = os.environ.get('PATH')
check = subprocess.Popen(
[home + "/.i3/scripts/dmenu_update"], stdout=subprocess.PIPE)
check.communicate()
if len(sys.argv) < 1:
print("Error not enough arguements")
else:
command = sys.argv[1]
switch_number = 1 # default switch number
if len(sys.argv) == 3:
# they passed in a number to move to
try:
switch_number = int(sys.argv[2])
except ValueError:
pass
# get the workspace number
workspace_name = get_workspace()
workspace_val = 1 # default value if name parseing fails
workspace_prefix = ''
try:
match_set = '0123456789-'
# only look for digits in the number
workspace_val = int(
''.join(
filter(
lambda x: x in match_set,
workspace_name)))
# include - in the ignore list incase it is a negative number
workspace_prefix = ''.join(
filter(
lambda x: x not in match_set,
workspace_name))
except ValueError:
pass
print(workspace_prefix)
# handle the commands
if command == 'up':
workspace_val += 10
elif command == 'down':
workspace_val -= 10
elif command == 'next':
workspace_val += 1
elif command == 'prev':
workspace_val -= 1
elif command == 'go':
# go to workspace in block
workspace_rounded = int(math.floor(workspace_val/10))*10
workspace_rounded += switch_number
go_to(workspace_prefix + str(workspace_rounded))
elif command == 'move':
# move the current container to the selected workspace
workspace_rounded = int(math.floor(workspace_val/10))*10
workspace_rounded += switch_number
move_to(workspace_prefix + str(workspace_rounded))
elif command == 'open':
open_app(workspace_name)
elif command == 'dynamic':
# dynamic tagging
command2 = sys.argv[2]
workspaces = get_workspaces()
inputstr = '\n'.join(workspaces)
result = dmenu_fetch(inputstr)
if command2 == 'go':
go_to(result)
elif command2 == 'move':
move_to(result)
if len(sys.argv) == 3:
# not a go or move, command2 is argv2
command2 = sys.argv[2]
if command == 'up' or command == 'down' or command == 'prev' or command == 'next':
if command2 == 'go':
go_to(workspace_prefix + str(workspace_val))
elif command2 == 'move':
move_to(workspace_prefix + str(workspace_val))
| scripts/workspace_controller.py | 4,434 | !/usr/bin/python3 default switch number they passed in a number to move to get the workspace number default value if name parseing fails only look for digits in the number include - in the ignore list incase it is a negative number handle the commands go to workspace in block move the current container to the selected workspace dynamic tagging not a go or move, command2 is argv2 | 381 | en | 0.722751 |
# this is a library that can be used to update and create parts
# of the bulkdata
import urllib
import json
import time
from django.conf import settings
from apps.bulk.models import Character, Corporation, Alliance
from apps.static.models import Crpnpccorporations
from connection import connection
#Make sure not to many requests are made to evewho.com
def who_connect():
timestamp = int(time.time())
if who_connect.timestamp + 30 >= timestamp:
who_connect.counter += 1
if who_connect.counter == settings.EVE_WHO_REQUESTS:
time.sleep(30)
who_connect.counter = 0
who_connect.timestamp = timestamp
else:
who_connect.timestamp = timestamp
who_connect.counter = 0
who_connect.timestamp = int(time.time())
who_connect.counter = 0
#get the basepart of the api url
def get_url(category, pk, page=0):
return "http://evewho.com/api.php?type=%s&id=%d&page=%d" % (
category,
pk,
page
)
# get the data from url
def json_object(url):
response = urllib.urlopen(url)
data = json.loads(response.read())
return data
#temp function
def remaining_alliances():
id_list = []
for alli in Alliance.objects.all():
if not Corporation.objects.filter(allianceid=alli.allianceid).exists():
id_list.append(alli.allianceid)
for pk in id_list:
pages = True
page = 0
while pages:
who_connect()
data = json_object(get_url("allilist", pk, page=page))
for char in data['characters']:
if not Character.objects.filter(
characterid=char['character_id']
).exists():
Character.objects.create(
characterid=char["character_id"],
corporationid=char["corporation_id"],
allianceid=char["alliance_id"],
name=char["name"],
)
if not Corporation.objects.filter(
corporationid=char["corporation_id"]
).exists():
corp = getattr(connection, "corporationsheet")(
char["corporation_id"]
)
try:
corp = Corporation(
corporationid=corp.corporationID,
corporationname=corp.corporationName,
ticker=corp.ticker,
ceoid=corp.ceoID,
ceoname=corp.ceoName,
allianceid=corp.allianceID,
alliancename=corp.allianceName,
stationid=corp.stationID,
description=unicode(corp.description),
url=corp.url,
taxrate=int(corp.taxRate),
membercount=corp.memberCount,
)
corp.save()
print corp.corporationname
except Exception, e:
print e
if len(data['characters']) == 200:
page += 1
else:
pages = False
| utils/whoapi.py | 3,311 | this is a library that can be used to update and create parts of the bulkdataMake sure not to many requests are made to evewho.comget the basepart of the api url get the data from urltemp function | 196 | en | 0.905902 |
# Copyright (c) 2011 Citrix Systems, Inc.
# Copyright 2011 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
The VMware API utility module.
"""
from oslo.config import cfg
from oslo.vmware import vim_util as vutil
import suds
from nova.i18n import _
from nova.openstack.common import log as logging
vmware_opts = cfg.IntOpt('maximum_objects', default=100,
help='The maximum number of ObjectContent data '
'objects that should be returned in a single '
'result. A positive value will cause the '
'operation to suspend the retrieval when the '
'count of objects reaches the specified '
'maximum. The server may still limit the count '
'to something less than the configured value. '
'Any remaining objects may be retrieved with '
'additional requests.')
CONF = cfg.CONF
CONF.register_opt(vmware_opts, 'vmware')
LOG = logging.getLogger(__name__)
def object_to_dict(obj, list_depth=1):
"""Convert Suds object into serializable format.
The calling function can limit the amount of list entries that
are converted.
"""
d = {}
for k, v in suds.sudsobject.asdict(obj).iteritems():
if hasattr(v, '__keylist__'):
d[k] = object_to_dict(v, list_depth=list_depth)
elif isinstance(v, list):
d[k] = []
used = 0
for item in v:
used = used + 1
if used > list_depth:
break
if hasattr(item, '__keylist__'):
d[k].append(object_to_dict(item, list_depth=list_depth))
else:
d[k].append(item)
else:
d[k] = v
return d
def get_moref(value, type):
return vutil.get_moref(value, type)
def get_object_properties(vim, collector, mobj, type, properties):
"""Gets the properties of the Managed object specified."""
client_factory = vim.client.factory
if mobj is None:
return None
usecoll = collector
if usecoll is None:
usecoll = vim.service_content.propertyCollector
property_filter_spec = client_factory.create('ns0:PropertyFilterSpec')
property_spec = client_factory.create('ns0:PropertySpec')
property_spec.all = (properties is None or len(properties) == 0)
property_spec.pathSet = properties
property_spec.type = type
object_spec = client_factory.create('ns0:ObjectSpec')
object_spec.obj = mobj
object_spec.skip = False
property_filter_spec.propSet = [property_spec]
property_filter_spec.objectSet = [object_spec]
options = client_factory.create('ns0:RetrieveOptions')
options.maxObjects = CONF.vmware.maximum_objects
return vim.RetrievePropertiesEx(usecoll, specSet=[property_filter_spec],
options=options)
def get_dynamic_property(vim, mobj, type, property_name):
"""Gets a particular property of the Managed Object."""
property_dict = get_dynamic_properties(vim, mobj, type, [property_name])
return property_dict.get(property_name)
def get_dynamic_properties(vim, mobj, type, property_names):
"""Gets the specified properties of the Managed Object."""
obj_content = get_object_properties(vim, None, mobj, type, property_names)
if obj_content is None:
return {}
if hasattr(obj_content, 'token'):
cancel_retrieve(vim, obj_content.token)
property_dict = {}
if obj_content.objects:
if hasattr(obj_content.objects[0], 'propSet'):
dynamic_properties = obj_content.objects[0].propSet
if dynamic_properties:
for prop in dynamic_properties:
property_dict[prop.name] = prop.val
# The object may have information useful for logging
if hasattr(obj_content.objects[0], 'missingSet'):
for m in obj_content.objects[0].missingSet:
LOG.warning(_("Unable to retrieve value for %(path)s "
"Reason: %(reason)s"),
{'path': m.path,
'reason': m.fault.localizedMessage})
return property_dict
def get_objects(vim, type, properties_to_collect=None, all=False):
"""Gets the list of objects of the type specified."""
return vutil.get_objects(vim, type, CONF.vmware.maximum_objects,
properties_to_collect, all)
def get_inner_objects(vim, base_obj, path, inner_type,
properties_to_collect=None, all=False):
"""Gets the list of inner objects of the type specified."""
client_factory = vim.client.factory
base_type = base_obj._type
traversal_spec = vutil.build_traversal_spec(client_factory, 'inner',
base_type, path, False, [])
object_spec = vutil.build_object_spec(client_factory,
base_obj,
[traversal_spec])
property_spec = vutil.build_property_spec(client_factory, type_=inner_type,
properties_to_collect=properties_to_collect,
all_properties=all)
property_filter_spec = vutil.build_property_filter_spec(client_factory,
[property_spec], [object_spec])
options = client_factory.create('ns0:RetrieveOptions')
options.maxObjects = CONF.vmware.maximum_objects
return vim.RetrievePropertiesEx(
vim.service_content.propertyCollector,
specSet=[property_filter_spec], options=options)
def cancel_retrieve(vim, token):
"""Cancels the retrieve operation."""
return vim.CancelRetrievePropertiesEx(
vim.service_content.propertyCollector,
token=token)
def continue_to_get_objects(vim, token):
"""Continues to get the list of objects of the type specified."""
return vim.ContinueRetrievePropertiesEx(
vim.service_content.propertyCollector,
token=token)
def get_prop_spec(client_factory, spec_type, properties):
"""Builds the Property Spec Object."""
prop_spec = client_factory.create('ns0:PropertySpec')
prop_spec.type = spec_type
prop_spec.pathSet = properties
return prop_spec
def get_obj_spec(client_factory, obj, select_set=None):
"""Builds the Object Spec object."""
obj_spec = client_factory.create('ns0:ObjectSpec')
obj_spec.obj = obj
obj_spec.skip = False
if select_set is not None:
obj_spec.selectSet = select_set
return obj_spec
def get_prop_filter_spec(client_factory, obj_spec, prop_spec):
"""Builds the Property Filter Spec Object."""
prop_filter_spec = client_factory.create('ns0:PropertyFilterSpec')
prop_filter_spec.propSet = prop_spec
prop_filter_spec.objectSet = obj_spec
return prop_filter_spec
def get_properties_for_a_collection_of_objects(vim, type,
obj_list, properties):
"""Gets the list of properties for the collection of
objects of the type specified.
"""
client_factory = vim.client.factory
if len(obj_list) == 0:
return []
prop_spec = get_prop_spec(client_factory, type, properties)
lst_obj_specs = []
for obj in obj_list:
lst_obj_specs.append(get_obj_spec(client_factory, obj))
prop_filter_spec = get_prop_filter_spec(client_factory,
lst_obj_specs, [prop_spec])
options = client_factory.create('ns0:RetrieveOptions')
options.maxObjects = CONF.vmware.maximum_objects
return vim.RetrievePropertiesEx(
vim.service_content.propertyCollector,
specSet=[prop_filter_spec], options=options)
def get_about_info(vim):
"""Get the About Info from the service content."""
return vim.service_content.about
| fs_patches_of_hybrid_cloud/cherry_for_111T/nova_cascaded/nova/virt/vmwareapi/vim_util.py | 8,596 | Cancels the retrieve operation.
Continues to get the list of objects of the type specified.
Get the About Info from the service content.
Gets the specified properties of the Managed Object.
Gets a particular property of the Managed Object.
Gets the list of inner objects of the type specified.
Builds the Object Spec object.
Gets the properties of the Managed object specified.
Gets the list of objects of the type specified.
Builds the Property Filter Spec Object.
Builds the Property Spec Object.
Gets the list of properties for the collection of
objects of the type specified.
Convert Suds object into serializable format.
The calling function can limit the amount of list entries that
are converted.
The VMware API utility module.
Copyright (c) 2011 Citrix Systems, Inc. Copyright 2011 OpenStack Foundation Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. The object may have information useful for logging | 1,414 | en | 0.841088 |
####################
# ES-DOC CIM Questionnaire
# Copyright (c) 2017 ES-DOC. All rights reserved.
#
# University of Colorado, Boulder
# http://cires.colorado.edu/
#
# This project is distributed according to the terms of the MIT license [http://www.opensource.org/licenses/MIT].
####################
from django.contrib import messages
from django.core.exceptions import ObjectDoesNotExist
from django.core.urlresolvers import reverse
from django.http import HttpResponseRedirect
from django.shortcuts import render_to_response
from django.utils.translation import ugettext_lazy as _
from Q.questionnaire.models.models_customizations import QModelCustomization
from Q.questionnaire.models.models_realizations import QModelRealization, get_new_realizations, get_existing_realizations, set_owner
from Q.questionnaire.models.models_users import is_member_of, is_user_of
from Q.questionnaire.views.views_base import validate_view_arguments as validate_view_arguments_base, add_parameters_to_context, get_key_from_request, get_or_create_cached_object
from Q.questionnaire.views.views_errors import q_error
from Q.questionnaire.views.views_legacy import redirect_legacy_projects
from Q.questionnaire.q_utils import evaluate_lazy_object, add_parameters_to_url
def validate_view_arguments(project_name=None, ontology_key=None, document_type=None):
"""
extends the "validate_view_arguments" fn in "views_base"
by adding a check that there is a default customization associated w/ this project/ontology/proxy
:param project_name:
:param ontology_key:
:param document_type:
:return:
"""
model_customization = None
validity, project, ontology, model_proxy, msg = validate_view_arguments_base(
project_name=project_name,
ontology_key=ontology_key,
document_type=document_type
)
if not validity:
return validity, project, ontology, model_proxy, model_customization, msg
try:
model_customization = QModelCustomization.objects.get(
project=project,
proxy=model_proxy,
is_default=True,
)
except ObjectDoesNotExist:
msg = _(
"There is no default customization associated with this document type for this project."
"<br/>Please <a href='mailto:{0}?subject=Missing%20Customization&body=Please%20create%20a%20customization%20for%20the%20%22{1}%22%20document%20type.'>contact</a>"
" the project administrator for assistance."
).format(project.email, model_proxy.fully_qualified_name)
validity = False
return validity, project, ontology, model_proxy, model_customization, msg
return validity, project, ontology, model_proxy, model_customization, msg
@redirect_legacy_projects
def q_edit_new(request, project_name=None, ontology_key=None, document_type=None):
# save any request parameters...
# (in case of redirection)
context = add_parameters_to_context(request)
# check the arguments...
validity, project, ontology, model_proxy, model_customization, msg = validate_view_arguments(
project_name=project_name,
ontology_key=ontology_key,
document_type=document_type
)
if not validity:
return q_error(request, msg)
# check authentication...
# (not using "@login_required" b/c some projects ignore authentication)
current_user = request.user
if project.authenticated:
if not current_user.is_authenticated():
next_page = add_parameters_to_url(reverse("account_login"), next=request.path)
return HttpResponseRedirect(next_page)
if not is_user_of(current_user, project):
next_page = reverse("project", kwargs={"project_name": project_name})
msg = "You have tried to view a restricted resource for this project. Please consider joining."
messages.add_message(request, messages.WARNING, msg)
return HttpResponseRedirect(next_page)
# get (or set) realization objects from the cache...
session_key = get_key_from_request(request)
cached_realizations_key = "{0}_realizations".format(session_key)
model_realization = get_or_create_cached_object(request.session, cached_realizations_key,
get_new_realizations,
**{
"project": project,
"ontology": ontology,
"model_proxy": model_proxy,
"key": model_proxy.name,
}
)
if current_user.is_authenticated():
set_owner(model_realization, evaluate_lazy_object(current_user))
model_realization.is_root = True # TODO: COME UP W/ A BETTER WAY OF DEALING W/ "is_root"
# no forms are created here,
# instead the load-on-demand paradigm is used,
# work out various paths, so that ng can reload things as needed...
view_url_dirname = request.path.rsplit('/', 1)[0]
api_url_dirname = reverse("realization-list").rsplit('/', 1)[0]
# gather all the extra information required by the template...
template_context = {
"project": project,
"ontology": ontology,
"proxy": model_proxy,
"view_url_dirname": view_url_dirname,
"api_url_dirname": api_url_dirname,
"session_key": session_key,
"customization": model_customization,
"realization": model_realization,
"read_only": "false", # passing "false" instead of False b/c this is a JS variable
}
return render_to_response('questionnaire/q_edit.html', template_context, context_instance=context)
@redirect_legacy_projects
def q_edit_existing(request, project_name=None, ontology_key=None, document_type=None, realization_pk=None):
# save any request parameters...
# (in case of redirection)
context = add_parameters_to_context(request)
# check the arguments...
validity, project, ontology, model_proxy, model_customization, msg = validate_view_arguments(
project_name=project_name,
ontology_key=ontology_key,
document_type=document_type
)
if not validity:
return q_error(request, msg)
# check authentication...
# (not using "@login_required" b/c some projects ignore authentication)
current_user = request.user
if project.authenticated:
if not current_user.is_authenticated():
next_page = add_parameters_to_url(reverse("account_login"), next=request.path)
return HttpResponseRedirect(next_page)
if not is_user_of(current_user, project):
next_page = reverse("project", kwargs={"project_name": project_name})
msg = "You have tried to view a restricted resource for this project. Please consider joining."
messages.add_message(request, messages.WARNING, msg)
return HttpResponseRedirect(next_page)
# get (or set) realization objects from the cache...
# note that unlike in "q_edit_new" above, this bit is enclosed in a try/catch block
try:
session_key = get_key_from_request(request)
cached_realizations_key = "{0}_realizations".format(session_key)
model_realization = get_or_create_cached_object(request.session, cached_realizations_key,
get_existing_realizations,
**{
"project": project,
"ontology": ontology,
"model_proxy": model_proxy,
"model_id": realization_pk
}
)
except ObjectDoesNotExist:
msg = "Cannot find a document with an id of '{0}' for that project/ontology/document type combination.".format(realization_pk)
return q_error(request, msg)
# no forms are created here,
# instead the load-on-demand paradigm is used,
# work out various paths, so that ng can reload things as needed...
# (notice these are slightly different than in "q_edit_new" above
view_url_dirname = request.path.rsplit('/', 1)[0]
api_url_dirname = reverse("realization-detail", kwargs={"pk": model_realization.pk}).rsplit('/', 2)[0]
# gather all the extra information required by the template...
template_context = {
"project": project,
"ontology": ontology,
"proxy": model_proxy,
"view_url_dirname": view_url_dirname,
"api_url_dirname": api_url_dirname,
"session_key": session_key,
"customization": model_customization,
"realization": model_realization,
"read_only": "false", # passing "false" instead of False b/c this is a JS variable
}
return render_to_response('questionnaire/q_edit.html', template_context, context_instance=context)
@redirect_legacy_projects
def q_view_new(request, project_name=None, ontology_key=None, document_type=None):
"""
this is never exposed by templates
but a user might still try to navigate explicitly to this URL
just return an error telling them not to try that
:param request:
:param project_name:
:param ontology_key:
:param document_type:
:return:
"""
# save any request parameters...
# (in case of redirection)
context = add_parameters_to_context(request)
# check the arguments...
validity, project, ontology, model_proxy, model_customization, msg = validate_view_arguments(
project_name=project_name,
ontology_key=ontology_key,
document_type=document_type
)
if not validity:
return q_error(request, msg)
# and then let the user know that they can't vew a _new_ document...
msg = "The ES-DOC Questionnaire only supports viewing of <em>existing</em> documents."
return q_error(request, msg)
@redirect_legacy_projects
def q_view_existing(request, project_name=None, ontology_key=None, document_type=None, realization_pk=None):
"""
this is exactly the same as "q_edit_existing" except:
there are no authentication checks,
the template_context & template are different.
:param request:
:param project_name:
:param ontology_key:
:param document_type:
:param realization_pk:
:return:
"""
# save any request parameters...
# (in case of redirection)
context = add_parameters_to_context(request)
# check the arguments...
validity, project, ontology, model_proxy, model_customization, msg = validate_view_arguments(
project_name=project_name,
ontology_key=ontology_key,
document_type=document_type
)
if not validity:
return q_error(request, msg)
# no need to check authentication
# get (or set) realization objects from the cache...
# note that unlike in "q_edit_new" above, this bit is enclosed in a try/catch block
try:
session_key = get_key_from_request(request)
cached_realizations_key = "{0}_realizations".format(session_key)
model_realization = get_or_create_cached_object(request.session, cached_realizations_key,
get_existing_realizations,
**{
"project": project,
"ontology": ontology,
"model_proxy": model_proxy,
"model_id": realization_pk
}
)
except ObjectDoesNotExist:
msg = "Cannot find a document with an id of '{0}' for that project/ontology/document type combination.".format(realization_pk)
return q_error(request, msg)
# no forms are created here,
# instead the load-on-demand paradigm is used,
# work out various paths, so that ng can reload things as needed...
# (notice these are slightly different than in "q_edit_new" above
view_url_dirname = request.path.rsplit('/', 1)[0]
api_url_dirname = reverse("realization-detail", kwargs={"pk": model_realization.pk}).rsplit('/', 2)[0]
# gather all the extra information required by the template...
template_context = {
"project": project,
"ontology": ontology,
"proxy": model_proxy,
"view_url_dirname": view_url_dirname,
"api_url_dirname": api_url_dirname,
"session_key": session_key,
"customization": model_customization,
"realization": model_realization,
"read_only": "true", # passing "true" instead of True b/c this is a JS variable
}
return render_to_response('questionnaire/q_view.html', template_context, context_instance=context)
@redirect_legacy_projects
def q_get_existing(request, project_name=None, ontology_key=None, document_type=None):
"""
this is meant to be used from external requests (ie: further_info_url)
where uniquely identifying model fields (including pk) are passed
if a unique realization cannot be found then an error is returned
otherwise the response is routed to "q_edit_existing"
:param request:
:param project_name:
:param ontology_key:
:param document_type:
:param realization_pk:
:return:
"""
# check the arguments...
validity, project, ontology, model_proxy, model_customization, msg = validate_view_arguments(
project_name=project_name,
ontology_key=ontology_key,
document_type=document_type
)
if not validity:
return q_error(request, msg)
model_realizations = QModelRealization.objects.filter(project=project, proxy=model_proxy)
additional_parameters = request.GET.copy()
for key, value in additional_parameters.iteritems():
if key == "pk" or key == "guid":
try:
return HttpResponseRedirect(reverse("edit_existing", kwargs={
"project_name": project_name,
"ontology_key": ontology_key,
"document_type": document_type,
"realization_pk": model_realizations.get(**{key: value}).pk
}))
except (ObjectDoesNotExist, ValueError):
msg = "There is no '{0}' document with a {1} of '{2}'".format(model_proxy, key, value)
return q_error(request, msg)
else:
try:
property_proxy = model_proxy.property_proxies.get(name=key)
if property_proxy.field_type == "ATOMIC":
model_realizations = model_realizations.filter(properties__proxy=property_proxy).has_atomic_value(value)
elif property_proxy.field_type == "ENUMERATION":
formatted_values = [fv for fv in map(lambda v: v.strip(), value.split(',')) if fv]
model_realizations = model_realizations.filter(properties__proxy=property_proxy).has_enumeration_values(formatted_values)
else: # property_proxy_field_type == "RELATIONSHIP"
# TODO:
msg = "Unable to support getting a document by relationship_field"
return q_error(request, msg)
except ObjectDoesNotExist:
msg = "There is no '{0}' property for the '{0}' document_type".format(key, model_proxy)
return q_error(request, msg)
if model_realizations.count() != 1:
msg = "Unable to uniquely identify '{0}' document_type with the following properties: '{1}'".format(
model_proxy,
", ".join(["{0}: {1}".format(p[0], p[1]) for p in additional_parameters.items()])
)
return q_error(request, msg)
return HttpResponseRedirect(reverse("edit_existing", kwargs={
"project_name": project_name,
"ontology_key": ontology_key,
"document_type": document_type,
"realization_pk": model_realizations.first().pk
}))
| Q/questionnaire/views/views_realizations.py | 15,547 | this is meant to be used from external requests (ie: further_info_url)
where uniquely identifying model fields (including pk) are passed
if a unique realization cannot be found then an error is returned
otherwise the response is routed to "q_edit_existing"
:param request:
:param project_name:
:param ontology_key:
:param document_type:
:param realization_pk:
:return:
this is exactly the same as "q_edit_existing" except:
there are no authentication checks,
the template_context & template are different.
:param request:
:param project_name:
:param ontology_key:
:param document_type:
:param realization_pk:
:return:
this is never exposed by templates
but a user might still try to navigate explicitly to this URL
just return an error telling them not to try that
:param request:
:param project_name:
:param ontology_key:
:param document_type:
:return:
extends the "validate_view_arguments" fn in "views_base"
by adding a check that there is a default customization associated w/ this project/ontology/proxy
:param project_name:
:param ontology_key:
:param document_type:
:return:
ES-DOC CIM Questionnaire Copyright (c) 2017 ES-DOC. All rights reserved. University of Colorado, Boulder http://cires.colorado.edu/ This project is distributed according to the terms of the MIT license [http://www.opensource.org/licenses/MIT]. save any request parameters... (in case of redirection) check the arguments... check authentication... (not using "@login_required" b/c some projects ignore authentication) get (or set) realization objects from the cache... TODO: COME UP W/ A BETTER WAY OF DEALING W/ "is_root" no forms are created here, instead the load-on-demand paradigm is used, work out various paths, so that ng can reload things as needed... gather all the extra information required by the template... passing "false" instead of False b/c this is a JS variable save any request parameters... (in case of redirection) check the arguments... check authentication... (not using "@login_required" b/c some projects ignore authentication) get (or set) realization objects from the cache... note that unlike in "q_edit_new" above, this bit is enclosed in a try/catch block no forms are created here, instead the load-on-demand paradigm is used, work out various paths, so that ng can reload things as needed... (notice these are slightly different than in "q_edit_new" above gather all the extra information required by the template... passing "false" instead of False b/c this is a JS variable save any request parameters... (in case of redirection) check the arguments... and then let the user know that they can't vew a _new_ document... save any request parameters... (in case of redirection) check the arguments... no need to check authentication get (or set) realization objects from the cache... note that unlike in "q_edit_new" above, this bit is enclosed in a try/catch block no forms are created here, instead the load-on-demand paradigm is used, work out various paths, so that ng can reload things as needed... (notice these are slightly different than in "q_edit_new" above gather all the extra information required by the template... passing "true" instead of True b/c this is a JS variable check the arguments... property_proxy_field_type == "RELATIONSHIP" TODO: | 3,284 | en | 0.800682 |
# -*- coding: utf-8 -*-
# Copyright 2015 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from tests import unittest
from synapse.events.builder import EventBuilder
from synapse.crypto.event_signing import add_hashes_and_signatures
from unpaddedbase64 import decode_base64
import nacl.signing
# Perform these tests using given secret key so we get entirely deterministic
# signatures output that we can test against.
SIGNING_KEY_SEED = decode_base64(
"YJDBA9Xnr2sVqXD9Vj7XVUnmFZcZrlw8Md7kMW+3XA1"
)
KEY_ALG = "ed25519"
KEY_VER = 1
KEY_NAME = "%s:%d" % (KEY_ALG, KEY_VER)
HOSTNAME = "domain"
class EventSigningTestCase(unittest.TestCase):
def setUp(self):
self.signing_key = nacl.signing.SigningKey(SIGNING_KEY_SEED)
self.signing_key.alg = KEY_ALG
self.signing_key.version = KEY_VER
def test_sign_minimal(self):
builder = EventBuilder(
{
'event_id': "$0:domain",
'origin': "domain",
'origin_server_ts': 1000000,
'signatures': {},
'type': "X",
'unsigned': {'age_ts': 1000000},
},
)
add_hashes_and_signatures(builder, HOSTNAME, self.signing_key)
event = builder.build()
self.assertTrue(hasattr(event, 'hashes'))
self.assertIn('sha256', event.hashes)
self.assertEquals(
event.hashes['sha256'],
"6tJjLpXtggfke8UxFhAKg82QVkJzvKOVOOSjUDK4ZSI",
)
self.assertTrue(hasattr(event, 'signatures'))
self.assertIn(HOSTNAME, event.signatures)
self.assertIn(KEY_NAME, event.signatures["domain"])
self.assertEquals(
event.signatures[HOSTNAME][KEY_NAME],
"2Wptgo4CwmLo/Y8B8qinxApKaCkBG2fjTWB7AbP5Uy+"
"aIbygsSdLOFzvdDjww8zUVKCmI02eP9xtyJxc/cLiBA",
)
def test_sign_message(self):
builder = EventBuilder(
{
'content': {
'body': "Here is the message content",
},
'event_id': "$0:domain",
'origin': "domain",
'origin_server_ts': 1000000,
'type': "m.room.message",
'room_id': "!r:domain",
'sender': "@u:domain",
'signatures': {},
'unsigned': {'age_ts': 1000000},
}
)
add_hashes_and_signatures(builder, HOSTNAME, self.signing_key)
event = builder.build()
self.assertTrue(hasattr(event, 'hashes'))
self.assertIn('sha256', event.hashes)
self.assertEquals(
event.hashes['sha256'],
"onLKD1bGljeBWQhWZ1kaP9SorVmRQNdN5aM2JYU2n/g",
)
self.assertTrue(hasattr(event, 'signatures'))
self.assertIn(HOSTNAME, event.signatures)
self.assertIn(KEY_NAME, event.signatures["domain"])
self.assertEquals(
event.signatures[HOSTNAME][KEY_NAME],
"Wm+VzmOUOz08Ds+0NTWb1d4CZrVsJSikkeRxh6aCcUw"
"u6pNC78FunoD7KNWzqFn241eYHYMGCA5McEiVPdhzBA"
)
| tests/crypto/test_event_signing.py | 3,619 | -*- coding: utf-8 -*- Copyright 2015 OpenMarket Ltd Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Perform these tests using given secret key so we get entirely deterministic signatures output that we can test against. | 693 | en | 0.853696 |
from core.datasource import DataSource
from core.model.member import KarmaMember, Member
# karma database service class, perform operations on the configured mongodb.
from util.config import config, profile
class KarmaService:
def __init__(self):
self._karma = DataSource(config['database']['host'], config['database']['port'],
config['database']['username'], config['database']['password'],
config['database']['name']).db.karma
self._filter_query = dict(guild_id="", member_id="")
self._channel_query = dict(guild_id="", member_id="", channel_id="", message_id="")
self._increase_karma = {"$inc": {'karma': 1}}
self._decrease_karma = {"$inc": {'karma': -1}}
# update or insert karma member if not exist on first karma
# check on inc if inc or dec query should be applied.
def upsert_karma_member(self, member: KarmaMember, inc: bool) -> None:
self._channel_query['guild_id'] = member.guild_id
self._channel_query['member_id'] = member.member_id
self._channel_query['channel_id'] = member.channel_id
self._channel_query['message_id'] = member.message_id
if inc:
self._karma.update_one(filter=self._channel_query, update=self._increase_karma,
upsert=True)
else:
self._karma.delete_one(filter=self._channel_query)
# remove all karma, regardless of channel
def delete_all_karma(self, guild_id: str, member_id: str) -> None:
filter_member = dict(guild_id=guild_id, member_id=member_id)
self._karma.delete_many(filter=filter_member)
# aggregate overall karma of a member
def aggregate_member_by_karma(self, member: KarmaMember) -> int:
self._filter_query['guild_id'] = member.guild_id
self._filter_query['member_id'] = member.member_id
pipeline = [{"$unwind": "$karma"}, {"$match": self._filter_query},
{"$group": {"_id": {"member_id": "$member_id"}, "karma": {"$sum": "$karma"}}}]
doc_cursor = self._karma.aggregate(pipeline)
for doc in doc_cursor:
return doc['karma']
def aggregate_member_by_channels(self, member: KarmaMember):
self._filter_query['guild_id'] = member.guild_id
self._filter_query['member_id'] = member.member_id
pipeline = [{"$unwind": "$karma"}, {"$match": self._filter_query},
{"$group": {"_id": {"member_id": "$member_id", "channel_id": "$channel_id"},
"karma": {"$sum": "$karma"}}}, {"$limit": profile()['channels']},
{"$sort": {"karma": -1}}]
doc_cursor = self._karma.aggregate(pipeline)
return doc_cursor
class BlockerService:
def __init__(self):
self._blacklist = DataSource(config['database']['host'], config['database']['port'],
config['database']['username'], config['database']['password'],
config['database']['name']).db.blacklist
self._filter_query = dict(guild_id="", member_id="")
def blacklist(self, member: Member):
self._filter_query['guild_id'] = member.guild_id
self._filter_query['member_id'] = member.member_id
self._blacklist.update_one(filter=self._filter_query, update={'$set': {
'guild_id': '{}'.format(member.guild_id),
'member_id': '{}'.format(member.member_id)
}}, upsert=True)
def whitelist(self, member: Member):
self._filter_query['guild_id'] = member.guild_id
self._filter_query['member_id'] = member.member_id
self._blacklist.delete_one(filter=self._filter_query)
def find_member(self, member: Member):
self._filter_query['guild_id'] = member.guild_id
self._filter_query['member_id'] = member.member_id
return self._blacklist.find_one(filter=self._filter_query)
| core/service/karma_service.py | 3,984 | karma database service class, perform operations on the configured mongodb. update or insert karma member if not exist on first karma check on inc if inc or dec query should be applied. remove all karma, regardless of channel aggregate overall karma of a member | 261 | en | 0.848897 |
# Copyright 2021 solo-learn development team.
# Permission is hereby granted, free of charge, to any person obtaining a copy of
# this software and associated documentation files (the "Software"), to deal in
# the Software without restriction, including without limitation the rights to use,
# copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the
# Software, and to permit persons to whom the Software is furnished to do so,
# subject to the following conditions:
# The above copyright notice and this permission notice shall be included in all copies
# or substantial portions of the Software.
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
# INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
# PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE
# FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
import torch
from solo.utils.misc import gather
def test_gather_layer():
X = torch.randn(10, 30, requires_grad=True)
X_gathered = gather(X)
assert isinstance(X, torch.Tensor)
dummy_loss = torch.mm(X_gathered, X_gathered.T).sum()
dummy_loss.backward()
assert X.grad is not None
| tests/utils/test_gather.py | 1,403 | Copyright 2021 solo-learn development team. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. | 1,064 | en | 0.878381 |
import numpy as np
import sys
import os
import re
import ntpath
from subprocess import call
#### DISCLAIMER: This script uses the `pythonSubmit.py` format
#### that has been replaced by the `runSubmit.py` and
#### `compasConfigDefault.yaml` combo as of v02.25.10.
#### The `pythonSubmit.py` format will eventually become deprecated.
# Check if we are using python 3
python_version = sys.version_info[0]
print("python_version =", python_version)
class pythonProgramOptions:
"""
A class to store and access COMPAS program options in python
"""
# Do './COMPAS --help' to see all options
#-- Define variables
# environment variable COMPAS_EXECUTABLE_PATH is used for docker runs
# if COMPAS_EXECUTABLE_PATH is not set (== None) we assume this is an
# interactive run with python3
# if COMPAS_EXECUTABLE_PATH is set (!= None) we assume this is a run
# inside a docker container - we have different directories inside a
# docker container (src, obj, bin), and the COMPAS executable resides
# in the bin directory (rather than the src directory)
compas_executable_override = os.environ.get('COMPAS_EXECUTABLE_PATH')
if (compas_executable_override is None):
# we should fix this one day - we should not assume that the COMPAS executable
# is in the 'src' directory. The standard is to put the object files created
# by the compile into the 'obj' directory, and the executable files created by
# the link in the 'bin' directory.
#
# for now though, because this is how everybody expects it to be, we'll just check
# that the path to the root directory (the parent directory of the directory in
# which we expect the executable to reside - for now, 'src') is set to something.
compas_root_dir = os.environ.get('COMPAS_ROOT_DIR')
assert compas_root_dir is not None, "Unable to locate the COMPAS executable: check that the environment variable COMPAS_ROOT_DIR is set correctly, and the COMPAS executable exists."
# construct path to executable
#
# ideally we wouldn't have the 'src' directory name (or any other directory name)
# prepended to the executable name - if we just execute the executable name on its
# own, as long as the user navigates to the directory in which the executable resides
# they don't need to set the COMPAS_ROOT_DIR environment variable
compas_executable = os.path.join(compas_root_dir, 'src/COMPAS')
else:
compas_executable = compas_executable_override
# check that a file with the correct name exists where we expect it to
assert os.path.isfile(compas_executable), "Unable to locate the COMPAS executable: check that the environment variable COMPAS_ROOT_DIR is set correctly, and the COMPAS executable exists."
enable_warnings = False # option to enable/disable warning messages
number_of_systems = 10 # number of systems per batch
populationPrinting = False
randomSeedFileName = 'randomSeed.txt'
if os.path.isfile(randomSeedFileName):
random_seed = int(np.loadtxt(randomSeedFileName))
else:
random_seed = 0 # If you want a random seed, use: np.random.randint(2,2**63-1)
# environment variable COMPAS_LOGS_OUTPUT_DIR_PATH is used primarily for docker runs
# if COMPAS_LOGS_OUTPUT_DIR_PATH is set (!= None) it is used as the value for the
# --output-path option
# if COMPAS_LOGS_OUTPUT_DIR_PATH is not set (== None) the current working directory
# is used as the value for the --output-path option
compas_logs_output_override = os.environ.get('COMPAS_LOGS_OUTPUT_DIR_PATH')
if (compas_logs_output_override is None):
output = os.getcwd()
output_container = None # names the directory to be created and in which log files are created. Default in COMPAS is "COMPAS_Output"
else:
output = compas_logs_output_override
output_container = None
# environment variable COMPAS_INPUT_DIR_PATH is used primarily for docker runs
# if COMPAS_INPUT_DIR_PATH is set (!= None) it is prepended to input filenames
# (such as grid_filename and logfile_definitions)
# if COMPAS_INPUT_DIR_PATH is not set (== None) the current working directory
# is prepended to input filenames
compas_input_path_override = os.environ.get('COMPAS_INPUT_DIR_PATH')
#-- option to make a grid of hyperparameter values at which to produce populations.
#-- If this is set to true, it will divide the number_of_binaries parameter equally
#-- amoungst the grid points (as closely as possible). See the hyperparameterGrid method below
#-- for more details. If this is set to True, some hyperparameter values defined in this method'gridOutputs/'+str(i)
#-- will be overwritten
hyperparameterGrid = False
hyperparameterList = False
shareSeeds = False
notes_hdrs = None # no annotations header strings (no annotations)
notes = None # no annotations
mode = 'BSE' # evolving single stars (SSE) or binaries (BSE)?
grid_filename = 'grid.txt' # grid file name (e.g. 'mygrid.txt')
if grid_filename != None:
# if the grid filename supplied is already fully-qualified, leave it as is
head, tail = ntpath.split(grid_filename) # split into pathname and base filename
if head == '' or head == '.': # no path (or CWD) - add path as required
grid_filename = tail or ntpath.basename(head)
if compas_input_path_override == None:
grid_filename = os.getcwd() + '/' + grid_filename.strip("'\"")
else:
grid_filename = compas_input_path_override + '/' + grid_filename.strip("'\"")
logfile_definitions = None # logfile record definitions file name (e.g. 'logdefs.txt')
if logfile_definitions != None:
# if the grid filename supplied is already fully-qualified, leave it as is
head, tail = ntpath.split(logfile_definitions) # split into pathname and base filename
if head == '' or head == '.': # no path (or CWD) - add path as required
logfile_definitions = tail or ntpath.basename(head)
if compas_input_path_override == None:
logfile_definitions = os.getcwd() + '/' + logfile_definitions.strip("'\"")
else:
logfile_definitions = compas_input_path_override + '/' + logfile_definitions.strip("'\"")
initial_mass = None # initial mass for SSE
initial_mass_1 = None # primary initial mass for BSE
initial_mass_2 = None # secondary initial mass for BSE
mass_ratio = None
eccentricity = None # eccentricity for BSE
semi_major_axis = None # semi-major axis for BSE
orbital_period = None # orbital period for BSE
use_mass_loss = True
mass_transfer = True
detailed_output = True # WARNING: this creates a data heavy file
RLOFPrinting = True
evolve_unbound_systems = False
quiet = False
metallicity = 0.0142 # metallicity for both SSE and BSE - Solar metallicity Asplund+2010
allow_rlof_at_birth = True # allow binaries that have one or both stars in RLOF at birth to evolve?
allow_touching_at_birth = False # record binaries that have stars touching at birth in output files?
chemically_homogeneous_evolution = 'PESSIMISTIC' # chemically homogeneous evolution. Options are 'NONE', 'OPTIMISTIC' and 'PESSIMISTIC'
switch_log = False
common_envelope_alpha = 1.0
common_envelope_lambda = 0.1 # Only if using 'LAMBDA_FIXED'
common_envelope_lambda_prescription = 'LAMBDA_NANJING' # Xu & Li 2010
common_envelope_slope_Kruckow = -5.0/6.0
stellar_zeta_prescription = 'SOBERMAN'
common_envelope_revised_energy_formalism = False
common_envelope_maximum_donor_mass_revised_energy_formalism = 2.0
common_envelope_recombination_energy_density = 1.5E13
common_envelope_alpha_thermal = 1.0 # lambda = alpha_th*lambda_b + (1-alpha_th)*lambda_g
common_envelope_lambda_multiplier = 1.0 # Multiply common envelope lambda by some constant
common_envelope_allow_main_sequence_survive = True # Allow main sequence stars to survive CE. Was previously False by default
common_envelope_mass_accretion_prescription = 'ZERO'
common_envelope_mass_accretion_min = 0.04 # For 'MACLEOD+2014' [Msol]
common_envelope_mass_accretion_max = 0.10 # For 'MACLEOD+2014' [Msol]
envelope_state_prescription = 'LEGACY'
common_envelope_allow_radiative_envelope_survive = False
common_envelope_allow_immediate_RLOF_post_CE_survive = False
mass_loss_prescription = 'VINK'
luminous_blue_variable_prescription = 'HURLEY_ADD'
luminous_blue_variable_multiplier = 1.5
overall_wind_mass_loss_multiplier = 1.0
wolf_rayet_multiplier = 1.0
cool_wind_mass_loss_multiplier = 1.0
check_photon_tiring_limit = False
circularise_binary_during_mass_transfer = True
angular_momentum_conservation_during_circularisation = False
mass_transfer_angular_momentum_loss_prescription = 'ISOTROPIC'
mass_transfer_accretion_efficiency_prescription = 'THERMAL'
mass_transfer_fa = 0.5 # Only if using mass_transfer_accretion_efficiency_prescription = 'FIXED'
mass_transfer_jloss = 1.0 # Only if using mass_transfer_angular_momentum_loss_prescription = 'FIXED'
mass_transfer_rejuvenation_prescription = 'STARTRACK'
mass_transfer_thermal_limit_accretor= 'CFACTOR'
mass_transfer_thermal_limit_C= 10.0
eddington_accretion_factor = 1 # multiplication Factor for eddington accretion onto NS&BH
case_BB_stability_prescription = 'ALWAYS_STABLE'
zeta_Main_Sequence = 2.0
zeta_Radiative_Envelope_Giant = 6.5
maximum_evolution_time = 13700.0 # Maximum physical time a system can be evolved [Myrs]
maximum_number_timesteps = 99999
timestep_multiplier = 0.1 # Optional multiplier relative to default time step duration
initial_mass_function = 'KROUPA'
initial_mass_min = 5.0 # Use 1.0 for LRNe, 5.0 for DCOs [Msol]
initial_mass_max = 150.0 # Stellar tracks extrapolated above 50 Msol (Hurley+2000) [Msol]
initial_mass_power = 0.0
semi_major_axis_distribution = 'FLATINLOG'
semi_major_axis_min = 0.01 # [AU]
semi_major_axis_max = 1000.0 # [AU]
orbital_period_distribution = 'FLATINLOG'
orbital_period_min = 1.1 # [days]
orbital_period_max = 1000 # [days]
mass_ratio_distribution = 'FLAT'
mass_ratio_min = 0.01
mass_ratio_max = 1.0
minimum_secondary_mass = 0.1 # Brown dwarf limit [Msol]
eccentricity_distribution = 'ZERO'
eccentricity_min = 0.0
eccentricity_max = 1.0
metallicity_distribution = 'ZSOLAR'
metallicity_min = 0.0001
metallicity_max = 0.03
pulsar_birth_magnetic_field_distribution = 'ZERO'
pulsar_birth_magnetic_field_min = 11.0 # [log10(B/G)]
pulsar_birth_magnetic_field_max = 13.0 # [log10(B/G)]
pulsar_birth_spin_period_distribution = "ZERO"
pulsar_birth_spin_period_min = 10.0 # [ms]
pulsar_birth_spin_period_max = 100.0 # [ms]
pulsar_magnetic_field_decay_timescale = 1000.0 # [Myr]
pulsar_magnetic_field_decay_massscale = 0.025 # [Msol]
pulsar_minimum_magnetic_field = 8.0 # [log10(B/G)]
evolvePulsars = False
rotational_velocity_distribution = 'ZERO'
neutron_star_equation_of_state = 'SSE'
neutrino_mass_loss_BH_formation = "FIXED_MASS" # "FIXED_FRACTION"
neutrino_mass_loss_BH_formation_value = 0.1 # Either fraction or mass (Msol) to lose
remnant_mass_prescription = 'FRYER2012' #
fryer_supernova_engine = 'DELAYED'
black_hole_kicks = 'FALLBACK'
kick_magnitude_distribution = 'MAXWELLIAN'
kick_magnitude_sigma_CCSN_NS = 265.0 # [km/s]
kick_magnitude_sigma_CCSN_BH = 265.0 # [km/s]
kick_magnitude_sigma_ECSN = 30.0 # [km/s]
kick_magnitude_sigma_USSN = 30.0 # [km/s]
fix_dimensionless_kick_magnitude = -1
kick_direction = 'ISOTROPIC'
kick_direction_power = 0.0
kick_scaling_factor = 1.0
kick_magnitude_maximum = -1.0
kick_magnitude_random = None # (SSE) used to draw the kick magnitude for the star should it undergo a supernova event
kick_magnitude = None # (SSE) (drawn) kick magnitude for the star should it undergo a supernova event [km/s]
kick_magnitude_random_1 = None # (BSE) used to draw the kick magnitude for the primary star should it undergo a supernova event
kick_magnitude_1 = None # (BSE) (drawn) kick magnitude for the primary star should it undergo a supernova event [km/s]
kick_theta_1 = None # (BSE) angle between the orbital plane and the 'z' axis of the supernova vector for the primary star should it undergo a supernova event [radians]
kick_phi_1 = None # (BSE) angle between 'x' and 'y', both in the orbital plane of the supernova vector, for the primary star should it undergo a supernova event [radians]
kick_mean_anomaly_1 = None # (BSE) mean anomaly at the instant of the supernova for the primary star should it undergo a supernova event - should be uniform in [0, 2pi) [radians]
kick_magnitude_random_2 = None # (BSE) used to draw the kick velocity for the secondary star should it undergo a supernova event
kick_magnitude_2 = None # (BSE) (drawn) kick magnitude for the secondary star should it undergo a supernova event [km/s]
kick_theta_2 = None # (BSE) angle between the orbital plane and the 'z' axis of the supernova vector for the secondary star should it undergo a supernova event [radians]
kick_phi_2 = None # (BSE) angle between 'x' and 'y', both in the orbital plane of the supernova vector, for the secondary star should it undergo a supernova event [radians]
kick_mean_anomaly_2 = None # (BSE) mean anomaly at the instant of the supernova for the secondary star should it undergo a supernova event - should be uniform in [0, 2pi) [radians]
muller_mandel_kick_multiplier_BH = 200.0 # scaling prefactor for BH kicks when using the 'MULLERMANDEL' kick magnitude distribution
muller_mandel_kick_multiplier_NS = 400.0 # scaling prefactor for NS kicks when using the 'MULLERMANDEL' kick magnitude distribution
pair_instability_supernovae = True
PISN_lower_limit = 60.0 # Minimum core mass for PISN [Msol]
PISN_upper_limit = 135.0 # Maximum core mass for PISN [Msol]
pulsation_pair_instability = True
PPI_lower_limit = 35.0 # Minimum core mass for PPI [Msol]
PPI_upper_limit = 60.0 # Maximum core mass for PPI [Msol]
pulsational_pair_instability_prescription = 'MARCHANT'
maximum_neutron_star_mass = 2.5 # [Msol]
add_options_to_sysparms = 'GRID' # should all option values be added to system parameters files? options are 'ALWAYS', 'GRID', and 'NEVER'
log_level = 0
log_classes = []
debug_level = 0
debug_classes = []
logfile_name_prefix = None
logfile_type = 'HDF5'
hdf5_chunk_size = 100000
hdf5_buffer_size = 1
# set the logfile names here
#
# set to None (e.g. logfile_BSE_supernovae = None) to use the default filename
# set to a string (e.g. logfile_BSE_supernovae = 'mySNfilename') to use that string as the filename
# set to empty string (e.g. logfile_BSE_supernovae = '""') to disable logging for that file (the file will not be created)
#
# We don't really need the 'BSE' or 'SSE' prefixes any more - they were put there because
# prior to the implementation of the containing folder it was too hard to locate the files
# created by a COMPAS run - especially the detailed output files. Now that the output
# files are created inside a containing folder for each run there is really no need for
# the prefixes - and if we don't have the prefixes we can share some of the options
# (e.g. specifying the supernovae filename doesn't need to have separate options for
# SSE and BSE - we really just need one (we only ever run in one mode or the other))
#
# For now though, I'll leave them as is - we can change this when (if) we decide to
# drop the prefixes
logfile_common_envelopes = None
logfile_detailed_output = None
logfile_double_compact_objects = None
logfile_rlof_parameters = None
logfile_pulsar_evolution = None
logfile_supernovae = None
logfile_switch_log = None
logfile_system_parameters = None
debug_to_file = False
errors_to_file = False
def booleanChoices(self):
booleanChoices = [
self.enable_warnings,
self.use_mass_loss,
self.mass_transfer,
self.detailed_output,
self.evolve_unbound_systems,
self.populationPrinting,
self.RLOFPrinting,
self.circularise_binary_during_mass_transfer,
self.angular_momentum_conservation_during_circularisation,
self.pair_instability_supernovae,
self.pulsation_pair_instability,
self.quiet,
self.common_envelope_allow_main_sequence_survive,
self.common_envelope_allow_radiative_envelope_survive,
self.common_envelope_allow_immediate_RLOF_post_CE_survive,
self.evolvePulsars,
self.debug_to_file,
self.errors_to_file,
self.allow_rlof_at_birth,
self.allow_touching_at_birth,
self.switch_log,
self.check_photon_tiring_limit
]
return booleanChoices
def booleanCommands(self):
booleanCommands = [
'--enable-warnings',
'--use-mass-loss',
'--mass-transfer',
'--detailed-output',
'--evolve-unbound-systems',
'--population-data-printing',
'--rlof-printing',
'--circularise-binary-during-mass-transfer',
'--angular-momentum-conservation-during-circularisation',
'--pair-instability-supernovae',
'--pulsational-pair-instability',
'--quiet',
'--common-envelope-allow-main-sequence-survive',
'--common-envelope-allow-radiative-envelope-survive',
'--common-envelope-allow-immediate-rlof-post-ce-survive',
'--evolve-pulsars',
'--debug-to-file',
'--errors-to-file',
'--allow-rlof-at-birth',
'--allow-touching-at-birth',
'--switch-log',
'--check-photon-tiring-limit'
]
return booleanCommands
def numericalChoices(self):
numericalChoices = [
self.number_of_systems,
self.initial_mass,
self.initial_mass_1,
self.initial_mass_2,
self.eccentricity,
self.semi_major_axis,
self.orbital_period,
self.metallicity,
self.common_envelope_alpha,
self.common_envelope_lambda,
self.common_envelope_slope_Kruckow,
self.common_envelope_alpha_thermal,
self.common_envelope_lambda_multiplier,
self.luminous_blue_variable_multiplier,
self.overall_wind_mass_loss_multiplier,
self.wolf_rayet_multiplier,
self.cool_wind_mass_loss_multiplier,
self.mass_transfer_fa,
self.mass_transfer_jloss,
self.maximum_evolution_time,
self.maximum_number_timesteps,
self.timestep_multiplier,
self.initial_mass_min,
self.initial_mass_max,
self.initial_mass_power,
self.semi_major_axis_min,
self.semi_major_axis_max,
self.mass_ratio,
self.mass_ratio_min,
self.mass_ratio_max,
self.minimum_secondary_mass,
self.eccentricity_min,
self.eccentricity_max,
self.metallicity_min,
self.metallicity_max,
self.pulsar_birth_magnetic_field_min,
self.pulsar_birth_magnetic_field_max,
self.pulsar_birth_spin_period_min,
self.pulsar_birth_spin_period_max,
self.pulsar_magnetic_field_decay_timescale,
self.pulsar_magnetic_field_decay_massscale,
self.pulsar_minimum_magnetic_field,
self.orbital_period_min,
self.orbital_period_max,
self.kick_magnitude_sigma_CCSN_NS,
self.kick_magnitude_sigma_CCSN_BH,
self.fix_dimensionless_kick_magnitude,
self.kick_direction_power,
self.random_seed,
self.mass_transfer_thermal_limit_C,
self.eddington_accretion_factor,
self.PISN_lower_limit,
self.PISN_upper_limit,
self.PPI_lower_limit,
self.PPI_upper_limit,
self.maximum_neutron_star_mass,
self.kick_magnitude_sigma_ECSN,
self.kick_magnitude_sigma_USSN,
self.kick_scaling_factor,
self.common_envelope_maximum_donor_mass_revised_energy_formalism,
self.common_envelope_recombination_energy_density,
self.common_envelope_mass_accretion_max,
self.common_envelope_mass_accretion_min,
self.zeta_Main_Sequence,
self.zeta_Radiative_Envelope_Giant,
self.kick_magnitude_maximum,
self.kick_magnitude_random,
self.kick_magnitude,
self.kick_magnitude_random_1,
self.kick_magnitude_1,
self.kick_theta_1,
self.kick_phi_1,
self.kick_mean_anomaly_1,
self.kick_magnitude_random_2,
self.kick_magnitude_2,
self.kick_theta_2,
self.kick_phi_2,
self.kick_mean_anomaly_2,
self.muller_mandel_kick_multiplier_BH,
self.muller_mandel_kick_multiplier_NS,
self.log_level,
self.debug_level,
self.hdf5_chunk_size,
self.hdf5_buffer_size,
self.neutrino_mass_loss_BH_formation_value
]
return numericalChoices
def numericalCommands(self):
numericalCommands = [
'--number-of-systems',
'--initial-mass',
'--initial-mass-1',
'--initial-mass-2',
'--eccentricity',
'--semi-major-axis',
'--orbital-period',
'--metallicity',
'--common-envelope-alpha',
'--common-envelope-lambda',
'--common-envelope-slope-kruckow',
'--common-envelope-alpha-thermal',
'--common-envelope-lambda-multiplier',
'--luminous-blue-variable-multiplier',
'--overall-wind-mass-loss-multiplier',
'--wolf-rayet-multiplier',
'--cool-wind-mass-loss-multiplier',
'--mass-transfer-fa',
'--mass-transfer-jloss',
'--maximum-evolution-time',
'--maximum-number-timestep-iterations',
'--timestep-multiplier',
'--initial-mass-min',
'--initial-mass-max',
'--initial-mass-power',
'--semi-major-axis-min',
'--semi-major-axis-max',
'--mass-ratio',
'--mass-ratio-min',
'--mass-ratio-max',
'--minimum-secondary-mass',
'--eccentricity-min',
'--eccentricity-max',
'--metallicity-min',
'--metallicity-max',
'--pulsar-birth-magnetic-field-distribution-min',
'--pulsar-birth-magnetic-field-distribution-max',
'--pulsar-birth-spin-period-distribution-min',
'--pulsar-birth-spin-period-distribution-max',
'--pulsar-magnetic-field-decay-timescale',
'--pulsar-magnetic-field-decay-massscale',
'--pulsar-minimum-magnetic-field',
'--orbital-period-min',
'--orbital-period-max',
'--kick-magnitude-sigma-CCSN-NS',
'--kick-magnitude-sigma-CCSN-BH',
'--fix-dimensionless-kick-magnitude',
'--kick-direction-power',
'--random-seed',
'--mass-transfer-thermal-limit-C',
'--eddington-accretion-factor',
'--pisn-lower-limit',
'--pisn-upper-limit',
'--ppi-lower-limit',
'--ppi-upper-limit',
'--maximum-neutron-star-mass',
'--kick-magnitude-sigma-ECSN',
'--kick-magnitude-sigma-USSN',
'--kick-scaling-factor',
'--maximum-mass-donor-nandez-ivanova',
'--common-envelope-recombination-energy-density',
'--common-envelope-mass-accretion-max',
'--common-envelope-mass-accretion-min',
'--zeta-main-sequence',
'--zeta-radiative-envelope-giant',
'--kick-magnitude-max',
'--kick-magnitude-random',
'--kick-magnitude',
'--kick-magnitude-random-1',
'--kick-magnitude-1',
'--kick-theta-1',
'--kick-phi-1',
'--kick-mean-anomaly-1',
'--kick-magnitude-random-2',
'--kick-magnitude-2',
'--kick-theta-2',
'--kick-phi-2',
'--kick-mean-anomaly-2',
'--muller-mandel-kick-multiplier-BH',
'--muller-mandel-kick-multiplier-NS',
'--log-level',
'--debug-level',
'--hdf5-chunk-size',
'--hdf5-buffer-size',
'--neutrino-mass-loss-BH-formation-value'
]
return numericalCommands
def stringChoices(self):
stringChoices = [
self.notes_hdrs,
self.notes,
self.mode,
self.case_BB_stability_prescription,
self.chemically_homogeneous_evolution,
self.luminous_blue_variable_prescription,
self.mass_loss_prescription,
self.mass_transfer_angular_momentum_loss_prescription,
self.mass_transfer_accretion_efficiency_prescription,
self.mass_transfer_rejuvenation_prescription,
self.initial_mass_function,
self.semi_major_axis_distribution,
self.orbital_period_distribution,
self.mass_ratio_distribution,
self.eccentricity_distribution,
self.metallicity_distribution,
self.rotational_velocity_distribution,
self.remnant_mass_prescription,
self.fryer_supernova_engine,
self.black_hole_kicks,
self.kick_magnitude_distribution,
self.kick_direction,
self.output,
self.output_container,
self.common_envelope_lambda_prescription,
self.stellar_zeta_prescription,
self.mass_transfer_thermal_limit_accretor,
self.pulsational_pair_instability_prescription,
self.neutron_star_equation_of_state,
self.pulsar_birth_magnetic_field_distribution,
self.pulsar_birth_spin_period_distribution,
self.common_envelope_mass_accretion_prescription,
self.envelope_state_prescription,
self.logfile_name_prefix,
self.logfile_type,
self.logfile_definitions,
self.grid_filename,
self.logfile_common_envelopes,
self.logfile_detailed_output,
self.logfile_double_compact_objects,
self.logfile_pulsar_evolution,
self.logfile_rlof_parameters,
self.logfile_supernovae,
self.logfile_switch_log,
self.logfile_system_parameters,
self.neutrino_mass_loss_BH_formation,
self.add_options_to_sysparms
]
return stringChoices
def stringCommands(self):
stringCommands = [
'--notes-hdrs',
'--notes',
'--mode',
'--case-BB-stability-prescription',
'--chemically-homogeneous-evolution',
'--luminous-blue-variable-prescription',
'--mass-loss-prescription',
'--mass-transfer-angular-momentum-loss-prescription',
'--mass-transfer-accretion-efficiency-prescription',
'--mass-transfer-rejuvenation-prescription',
'--initial-mass-function',
'--semi-major-axis-distribution',
'--orbital-period-distribution',
'--mass-ratio-distribution',
'--eccentricity-distribution',
'--metallicity-distribution',
'--rotational-velocity-distribution',
'--remnant-mass-prescription',
'--fryer-supernova-engine',
'--black-hole-kicks',
'--kick-magnitude-distribution',
'--kick-direction',
'--output-path',
'--output-container',
'--common-envelope-lambda-prescription',
'--stellar-zeta-prescription',
'--mass-transfer-thermal-limit-accretor',
'--pulsational-pair-instability-prescription',
'--neutron-star-equation-of-state',
'--pulsar-birth-magnetic-field-distribution',
'--pulsar-birth-spin-period-distribution',
'--common-envelope-mass-accretion-prescription',
'--envelope-state-prescription',
'--logfile-name-prefix',
'--logfile-type',
'--logfile-definitions',
'--grid',
'--logfile-common-envelopes',
'--logfile-detailed-output',
'--logfile-double-compact-objects',
'--logfile-pulsar-evolution',
'--logfile-rlof-parameters',
'--logfile-supernovae',
'--logfile-switch-log',
'--logfile-system-parameters',
'--neutrino-mass-loss-BH-formation',
'--add-options-to-sysparms'
]
return stringCommands
def listChoices(self):
listChoices = [
self.log_classes,
self.debug_classes
]
return listChoices
def listCommands(self):
listCommands = [
'--log-classes',
'--debug-classes'
]
return listCommands
def generateCommandLineOptionsDict(self):
"""
This function generates a dictionary mapping COMPAS options to their specified
values (or empty strings for boolean options). These can be combined into a string
and run directly as a terminal command, or passed to the stroopwafel interface
where some of them may be overwritten. Options not to be included in the command
line should be set to pythons None (except booleans, which should be set to False)
Parameters
-----------
self : pythonProgramOptions
Contains program options
Returns
--------
commands : str or list of strs
"""
booleanChoices = self.booleanChoices()
booleanCommands = self.booleanCommands()
nBoolean = len(booleanChoices)
assert len(booleanCommands) == nBoolean
numericalChoices = self.numericalChoices()
numericalCommands = self.numericalCommands()
nNumerical = len(numericalChoices)
assert len(numericalCommands) == nNumerical
stringChoices = self.stringChoices()
stringCommands = self.stringCommands()
nString = len(stringChoices)
assert len(stringCommands) == nString
listChoices = self.listChoices()
listCommands = self.listCommands()
nList = len(listChoices)
assert len(listCommands) == nList
### Collect all options into a dictionary mapping option name to option value
command = {'compas_executable' : self.compas_executable}
for i in range(nBoolean):
if booleanChoices[i] == True:
command.update({booleanCommands[i] : ''})
elif booleanChoices[i] == False:
command.update({booleanCommands[i] : 'False'})
for i in range(nNumerical):
if not numericalChoices[i] == None:
command.update({numericalCommands[i] : str(numericalChoices[i])})
for i in range(nString):
if not stringChoices[i] == None:
command.update({stringCommands[i] : cleanStringParameter(stringChoices[i])})
for i in range(nList):
if listChoices[i]:
command.update({listCommands[i] : ' '.join(map(str,listChoices[i]))})
return command
def combineCommandLineOptionsDictIntoShellCommand(commandOptions):
"""
Write out the compas input parameters into a shell string.
Ensure the Compas executable is first, and not repeated.
Options are non-ordered.
"""
shellCommand = commandOptions['compas_executable']
del commandOptions['compas_executable']
for key, val in commandOptions.items():
shellCommand += ' ' + key + ' ' + val
return shellCommand
def cleanStringParameter(str_param):
""" clean up string parameters to avoid confusing Boost """
if str_param is not None:
# strip any quotes from the ends of the string
str_param = str_param.strip("'\"")
# escape any unescaped spaces or quotes within the string
escapes = [" ", "'", "\""]
for escape in escapes:
str_param = re.sub(r"(?<!\\){}".format(escape), r"\{}".format(escape), str_param)
return str_param
if __name__ == "__main__":
#-- Get the program options
programOptions = pythonProgramOptions()
commandOptions = programOptions.generateCommandLineOptionsDict()
#-- Convert options into a shell string
shellCommand = combineCommandLineOptionsDictIntoShellCommand(commandOptions)
#-- Run exectute COMPAS shell string
print(shellCommand)
call(shellCommand,shell=True)
| utils/example_plots/methods_paper_plots/fig_5_HR_diagram/pythonSubmit.py | 36,202 | A class to store and access COMPAS program options in python
clean up string parameters to avoid confusing Boost
Write out the compas input parameters into a shell string.
Ensure the Compas executable is first, and not repeated.
Options are non-ordered.
This function generates a dictionary mapping COMPAS options to their specified
values (or empty strings for boolean options). These can be combined into a string
and run directly as a terminal command, or passed to the stroopwafel interface
where some of them may be overwritten. Options not to be included in the command
line should be set to pythons None (except booleans, which should be set to False)
Parameters
-----------
self : pythonProgramOptions
Contains program options
Returns
--------
commands : str or list of strs
DISCLAIMER: This script uses the `pythonSubmit.py` format that has been replaced by the `runSubmit.py` and `compasConfigDefault.yaml` combo as of v02.25.10. The `pythonSubmit.py` format will eventually become deprecated. Check if we are using python 3 Do './COMPAS --help' to see all options-- Define variables environment variable COMPAS_EXECUTABLE_PATH is used for docker runs if COMPAS_EXECUTABLE_PATH is not set (== None) we assume this is an interactive run with python3 if COMPAS_EXECUTABLE_PATH is set (!= None) we assume this is a run inside a docker container - we have different directories inside a docker container (src, obj, bin), and the COMPAS executable resides in the bin directory (rather than the src directory) we should fix this one day - we should not assume that the COMPAS executable is in the 'src' directory. The standard is to put the object files created by the compile into the 'obj' directory, and the executable files created by the link in the 'bin' directory. for now though, because this is how everybody expects it to be, we'll just check that the path to the root directory (the parent directory of the directory in which we expect the executable to reside - for now, 'src') is set to something. construct path to executable ideally we wouldn't have the 'src' directory name (or any other directory name) prepended to the executable name - if we just execute the executable name on its own, as long as the user navigates to the directory in which the executable resides they don't need to set the COMPAS_ROOT_DIR environment variable check that a file with the correct name exists where we expect it to option to enable/disable warning messages number of systems per batch If you want a random seed, use: np.random.randint(2,2**63-1) environment variable COMPAS_LOGS_OUTPUT_DIR_PATH is used primarily for docker runs if COMPAS_LOGS_OUTPUT_DIR_PATH is set (!= None) it is used as the value for the --output-path option if COMPAS_LOGS_OUTPUT_DIR_PATH is not set (== None) the current working directory is used as the value for the --output-path option names the directory to be created and in which log files are created. Default in COMPAS is "COMPAS_Output" environment variable COMPAS_INPUT_DIR_PATH is used primarily for docker runs if COMPAS_INPUT_DIR_PATH is set (!= None) it is prepended to input filenames (such as grid_filename and logfile_definitions) if COMPAS_INPUT_DIR_PATH is not set (== None) the current working directory is prepended to input filenames-- option to make a grid of hyperparameter values at which to produce populations.-- If this is set to true, it will divide the number_of_binaries parameter equally-- amoungst the grid points (as closely as possible). See the hyperparameterGrid method below-- for more details. If this is set to True, some hyperparameter values defined in this method'gridOutputs/'+str(i)-- will be overwritten no annotations header strings (no annotations) no annotations evolving single stars (SSE) or binaries (BSE)? grid file name (e.g. 'mygrid.txt') if the grid filename supplied is already fully-qualified, leave it as is split into pathname and base filename no path (or CWD) - add path as required logfile record definitions file name (e.g. 'logdefs.txt') if the grid filename supplied is already fully-qualified, leave it as is split into pathname and base filename no path (or CWD) - add path as required initial mass for SSE primary initial mass for BSE secondary initial mass for BSE eccentricity for BSE semi-major axis for BSE orbital period for BSE WARNING: this creates a data heavy file metallicity for both SSE and BSE - Solar metallicity Asplund+2010 allow binaries that have one or both stars in RLOF at birth to evolve? record binaries that have stars touching at birth in output files? chemically homogeneous evolution. Options are 'NONE', 'OPTIMISTIC' and 'PESSIMISTIC' Only if using 'LAMBDA_FIXED' Xu & Li 2010 lambda = alpha_th*lambda_b + (1-alpha_th)*lambda_g Multiply common envelope lambda by some constant Allow main sequence stars to survive CE. Was previously False by default For 'MACLEOD+2014' [Msol] For 'MACLEOD+2014' [Msol] Only if using mass_transfer_accretion_efficiency_prescription = 'FIXED' Only if using mass_transfer_angular_momentum_loss_prescription = 'FIXED' multiplication Factor for eddington accretion onto NS&BH Maximum physical time a system can be evolved [Myrs] Optional multiplier relative to default time step duration Use 1.0 for LRNe, 5.0 for DCOs [Msol] Stellar tracks extrapolated above 50 Msol (Hurley+2000) [Msol] [AU] [AU] [days] [days] Brown dwarf limit [Msol] [log10(B/G)] [log10(B/G)] [ms] [ms] [Myr] [Msol] [log10(B/G)] "FIXED_FRACTION" Either fraction or mass (Msol) to lose [km/s] [km/s] [km/s] [km/s] (SSE) used to draw the kick magnitude for the star should it undergo a supernova event (SSE) (drawn) kick magnitude for the star should it undergo a supernova event [km/s] (BSE) used to draw the kick magnitude for the primary star should it undergo a supernova event (BSE) (drawn) kick magnitude for the primary star should it undergo a supernova event [km/s] (BSE) angle between the orbital plane and the 'z' axis of the supernova vector for the primary star should it undergo a supernova event [radians] (BSE) angle between 'x' and 'y', both in the orbital plane of the supernova vector, for the primary star should it undergo a supernova event [radians] (BSE) mean anomaly at the instant of the supernova for the primary star should it undergo a supernova event - should be uniform in [0, 2pi) [radians] (BSE) used to draw the kick velocity for the secondary star should it undergo a supernova event (BSE) (drawn) kick magnitude for the secondary star should it undergo a supernova event [km/s] (BSE) angle between the orbital plane and the 'z' axis of the supernova vector for the secondary star should it undergo a supernova event [radians] (BSE) angle between 'x' and 'y', both in the orbital plane of the supernova vector, for the secondary star should it undergo a supernova event [radians] (BSE) mean anomaly at the instant of the supernova for the secondary star should it undergo a supernova event - should be uniform in [0, 2pi) [radians] scaling prefactor for BH kicks when using the 'MULLERMANDEL' kick magnitude distribution scaling prefactor for NS kicks when using the 'MULLERMANDEL' kick magnitude distribution Minimum core mass for PISN [Msol] Maximum core mass for PISN [Msol] Minimum core mass for PPI [Msol] Maximum core mass for PPI [Msol] [Msol] should all option values be added to system parameters files? options are 'ALWAYS', 'GRID', and 'NEVER' set the logfile names here set to None (e.g. logfile_BSE_supernovae = None) to use the default filename set to a string (e.g. logfile_BSE_supernovae = 'mySNfilename') to use that string as the filename set to empty string (e.g. logfile_BSE_supernovae = '""') to disable logging for that file (the file will not be created) We don't really need the 'BSE' or 'SSE' prefixes any more - they were put there because prior to the implementation of the containing folder it was too hard to locate the files created by a COMPAS run - especially the detailed output files. Now that the output files are created inside a containing folder for each run there is really no need for the prefixes - and if we don't have the prefixes we can share some of the options (e.g. specifying the supernovae filename doesn't need to have separate options for SSE and BSE - we really just need one (we only ever run in one mode or the other)) For now though, I'll leave them as is - we can change this when (if) we decide to drop the prefixes Collect all options into a dictionary mapping option name to option value strip any quotes from the ends of the string escape any unescaped spaces or quotes within the string-- Get the program options-- Convert options into a shell string-- Run exectute COMPAS shell string | 8,731 | en | 0.830982 |
from django.contrib.auth import get_user_model
from django.utils.translation import gettext_lazy as _
from rest_framework import HTTP_HEADER_ENCODING, authentication
from .exceptions import AuthenticationFailed, InvalidToken, TokenError
from .settings import api_settings
AUTH_HEADER_TYPES = api_settings.AUTH_HEADER_TYPES
if not isinstance(api_settings.AUTH_HEADER_TYPES, (list, tuple)):
AUTH_HEADER_TYPES = (AUTH_HEADER_TYPES,)
AUTH_HEADER_TYPE_BYTES = set(
h.encode(HTTP_HEADER_ENCODING)
for h in AUTH_HEADER_TYPES
)
class JWTAuthentication(authentication.BaseAuthentication):
"""
An authentication plugin that authenticates requests through a JSON web
token provided in a request header.
"""
www_authenticate_realm = 'api'
media_type = 'application/json'
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.user_model = get_user_model()
def authenticate(self, request):
header = self.get_header(request)
if header is None:
return None
raw_token = self.get_raw_token(header)
if raw_token is None:
return None
validated_token = self.get_validated_token(raw_token)
return self.get_user(validated_token), validated_token
def authenticate_header(self, request):
return '{0} realm="{1}"'.format(
AUTH_HEADER_TYPES[0],
self.www_authenticate_realm,
)
def get_header(self, request):
"""
Extracts the header containing the JSON web token from the given
request.
"""
header = request.META.get(api_settings.AUTH_HEADER_NAME)
if isinstance(header, str):
# Work around django test client oddness
header = header.encode(HTTP_HEADER_ENCODING)
return header
def get_raw_token(self, header):
"""
Extracts an unvalidated JSON web token from the given "Authorization"
header value.
"""
parts = header.split()
if len(parts) == 0:
# Empty AUTHORIZATION header sent
return None
if parts[0] not in AUTH_HEADER_TYPE_BYTES:
# Assume the header does not contain a JSON web token
return None
if len(parts) != 2:
raise AuthenticationFailed(
_('Authorization header must contain two space-delimited values'),
code='bad_authorization_header',
)
return parts[1]
def get_validated_token(self, raw_token):
"""
Validates an encoded JSON web token and returns a validated token
wrapper object.
"""
messages = []
for AuthToken in api_settings.AUTH_TOKEN_CLASSES:
try:
return AuthToken(raw_token)
except TokenError as e:
messages.append({'token_class': AuthToken.__name__,
'token_type': AuthToken.token_type,
'message': e.args[0]})
raise InvalidToken({
'detail': _('Given token not valid for any token type'),
'messages': messages,
})
def get_user(self, validated_token):
"""
Attempts to find and return a user using the given validated token.
"""
try:
user_id = validated_token[api_settings.USER_ID_CLAIM]
except KeyError:
raise InvalidToken(_('Token contained no recognizable user identification'))
try:
user = self.user_model.objects.get(**{api_settings.USER_ID_FIELD: user_id})
except self.user_model.DoesNotExist:
raise AuthenticationFailed(_('User not found'), code='user_not_found')
if not user.is_active:
raise AuthenticationFailed(_('User is inactive'), code='user_inactive')
return user
class JWTTokenUserAuthentication(JWTAuthentication):
def get_user(self, validated_token):
"""
Returns a stateless user object which is backed by the given validated
token.
"""
if api_settings.USER_ID_CLAIM not in validated_token:
# The TokenUser class assumes tokens will have a recognizable user
# identifier claim.
raise InvalidToken(_('Token contained no recognizable user identification'))
return api_settings.TOKEN_USER_CLASS(validated_token)
def default_user_authentication_rule(user):
# Prior to Django 1.10, inactive users could be authenticated with the
# default `ModelBackend`. As of Django 1.10, the `ModelBackend`
# prevents inactive users from authenticating. App designers can still
# allow inactive users to authenticate by opting for the new
# `AllowAllUsersModelBackend`. However, we explicitly prevent inactive
# users from authenticating to enforce a reasonable policy and provide
# sensible backwards compatibility with older Django versions.
return True if user is not None and user.is_active else False
| webpersonal/env/Lib/site-packages/rest_framework_simplejwt/authentication.py | 5,057 | An authentication plugin that authenticates requests through a JSON web
token provided in a request header.
Extracts the header containing the JSON web token from the given
request.
Extracts an unvalidated JSON web token from the given "Authorization"
header value.
Attempts to find and return a user using the given validated token.
Returns a stateless user object which is backed by the given validated
token.
Validates an encoded JSON web token and returns a validated token
wrapper object.
Work around django test client oddness Empty AUTHORIZATION header sent Assume the header does not contain a JSON web token The TokenUser class assumes tokens will have a recognizable user identifier claim. Prior to Django 1.10, inactive users could be authenticated with the default `ModelBackend`. As of Django 1.10, the `ModelBackend` prevents inactive users from authenticating. App designers can still allow inactive users to authenticate by opting for the new `AllowAllUsersModelBackend`. However, we explicitly prevent inactive users from authenticating to enforce a reasonable policy and provide sensible backwards compatibility with older Django versions. | 1,162 | en | 0.709103 |
import random
import numpy as np
import torch
import torch.nn as nn
from torch.autograd import Variable
import torch.nn.functional as F
from .attention import Attention
from .baseRNN import BaseRNN
if torch.cuda.is_available():
import torch.cuda as device
else:
import torch as device
class DecoderRNN(BaseRNN):
r"""
Provides functionality for decoding in a seq2seq framework, with an option for attention.
Args:
vocab_size (int): size of the vocabulary
max_len (int): a maximum allowed length for the sequence to be processed
hidden_size (int): the number of features in the hidden state `h`
sos_id (int): index of the start of sentence symbol
eos_id (int): index of the end of sentence symbol
n_layers (int, optional): number of recurrent layers (default: 1)
rnn_cell (str, optional): type of RNN cell (default: gru)
bidirectional (bool, optional): if the encoder is bidirectional (default False)
input_dropout_p (float, optional): dropout probability for the input sequence (default: 0)
dropout_p (float, optional): dropout probability for the output sequence (default: 0)
use_attention(bool, optional): flag indication whether to use attention mechanism or not (default: false)
Attributes:
KEY_ATTN_SCORE (str): key used to indicate attention weights in `ret_dict`
KEY_LENGTH (str): key used to indicate a list representing lengths of output sequences in `ret_dict`
KEY_SEQUENCE (str): key used to indicate a list of sequences in `ret_dict`
Inputs: inputs, encoder_hidden, encoder_outputs, function, teacher_forcing_ratio
- **inputs** (batch, seq_len, input_size): list of sequences, whose length is the batch size and within which
each sequence is a list of token IDs. It is used for teacher forcing when provided. (default `None`)
- **encoder_hidden** (num_layers * num_directions, batch_size, hidden_size): tensor containing the features in the
hidden state `h` of encoder. Used as the initial hidden state of the decoder. (default `None`)
- **encoder_outputs** (batch, seq_len, hidden_size): tensor with containing the outputs of the encoder.
Used for attention mechanism (default is `None`).
- **function** (torch.nn.Module): A function used to generate symbols from RNN hidden state
(default is `torch.nn.functional.log_softmax`).
- **teacher_forcing_ratio** (float): The probability that teacher forcing will be used. A random number is
drawn uniformly from 0-1 for every decoding token, and if the sample is smaller than the given value,
teacher forcing would be used (default is 0).
Outputs: decoder_outputs, decoder_hidden, ret_dict
- **decoder_outputs** (seq_len, batch, vocab_size): list of tensors with size (batch_size, vocab_size) containing
the outputs of the decoding function.
- **decoder_hidden** (num_layers * num_directions, batch, hidden_size): tensor containing the last hidden
state of the decoder.
- **ret_dict**: dictionary containing additional information as follows {*KEY_LENGTH* : list of integers
representing lengths of output sequences, *KEY_SEQUENCE* : list of sequences, where each sequence is a list of
predicted token IDs }.
"""
KEY_ATTN_SCORE = 'attention_score'
KEY_LENGTH = 'length'
KEY_SEQUENCE = 'sequence'
def __init__(self, vocab_size, max_len, hidden_size,
sos_id, eos_id,
n_layers=1, rnn_cell='gru', bidirectional=False,
input_dropout_p=0, dropout_p=0, use_attention=False):
super(DecoderRNN, self).__init__(vocab_size, max_len, hidden_size,
input_dropout_p, dropout_p,
n_layers, rnn_cell)
self.bidirectional_encoder = bidirectional
self.rnn = self.rnn_cell(hidden_size, hidden_size, n_layers, batch_first=True, dropout=dropout_p)
self.output_size = vocab_size
self.max_length = max_len
self.use_attention = use_attention
self.eos_id = eos_id
self.sos_id = sos_id
self.init_input = None
self.embedding = nn.Embedding(self.output_size, self.hidden_size)
if use_attention:
self.attention = Attention(self.hidden_size)
self.out = nn.Linear(self.hidden_size, self.output_size)
def forward_step(self, input_var, hidden, encoder_outputs, function):
batch_size = input_var.size(0)
output_size = input_var.size(1)
embedded = self.embedding(input_var)
embedded = self.input_dropout(embedded)
output, hidden = self.rnn(embedded, hidden)
attn = None
if self.use_attention:
output, attn = self.attention(output, encoder_outputs)
predicted_softmax = function(self.out(output.view(-1, self.hidden_size))).view(batch_size, output_size, -1)
return predicted_softmax, hidden, attn
def forward(self, inputs=None, encoder_hidden=None, encoder_outputs=None,
function=F.log_softmax, teacher_forcing_ratio=0):
ret_dict = dict()
if self.use_attention:
ret_dict[DecoderRNN.KEY_ATTN_SCORE] = list()
inputs, batch_size, max_length = self._validate_args(inputs, encoder_hidden, encoder_outputs,
function, teacher_forcing_ratio)
decoder_hidden = self._init_state(encoder_hidden)
use_teacher_forcing = True if random.random() < teacher_forcing_ratio else False
decoder_outputs = []
sequence_symbols = []
lengths = np.array([max_length] * batch_size)
def decode(step, step_output, step_attn):
decoder_outputs.append(step_output)
if self.use_attention:
ret_dict[DecoderRNN.KEY_ATTN_SCORE].append(step_attn)
symbols = decoder_outputs[-1].topk(1)[1]
sequence_symbols.append(symbols)
eos_batches = symbols.data.eq(self.eos_id)
if eos_batches.dim() > 0:
eos_batches = eos_batches.cpu().view(-1).numpy()
update_idx = ((lengths > step) & eos_batches) != 0
lengths[update_idx] = len(sequence_symbols)
return symbols
# Manual unrolling is used to support random teacher forcing.
# If teacher_forcing_ratio is True or False instead of a probability, the unrolling can be done in graph
if use_teacher_forcing:
decoder_input = inputs[:, :-1]
decoder_output, decoder_hidden, attn = self.forward_step(decoder_input, decoder_hidden, encoder_outputs,
function=function)
for di in range(decoder_output.size(1)):
step_output = decoder_output[:, di, :]
if attn is not None:
step_attn = attn[:, di, :]
else:
step_attn = None
decode(di, step_output, step_attn)
else:
decoder_input = inputs[:, 0].unsqueeze(1)
for di in range(max_length):
decoder_output, decoder_hidden, step_attn = self.forward_step(decoder_input, decoder_hidden, encoder_outputs,
function=function)
step_output = decoder_output.squeeze(1)
symbols = decode(di, step_output, step_attn)
decoder_input = symbols
ret_dict[DecoderRNN.KEY_SEQUENCE] = sequence_symbols
ret_dict[DecoderRNN.KEY_LENGTH] = lengths.tolist()
return decoder_outputs, decoder_hidden, ret_dict
def _init_state(self, encoder_hidden):
""" Initialize the encoder hidden state. """
if encoder_hidden is None:
return None
if isinstance(encoder_hidden, tuple):
encoder_hidden = tuple([self._cat_directions(h) for h in encoder_hidden])
else:
encoder_hidden = self._cat_directions(encoder_hidden)
return encoder_hidden
def _cat_directions(self, h):
""" If the encoder is bidirectional, do the following transformation.
(#directions * #layers, #batch, hidden_size) -> (#layers, #batch, #directions * hidden_size)
"""
if self.bidirectional_encoder:
h = torch.cat([h[0:h.size(0):2], h[1:h.size(0):2]], 2)
return h
def _validate_args(self, inputs, encoder_hidden, encoder_outputs, function, teacher_forcing_ratio):
if self.use_attention:
if encoder_outputs is None:
raise ValueError("Argument encoder_outputs cannot be None when attention is used.")
# inference batch size
if inputs is None and encoder_hidden is None:
batch_size = 1
else:
if inputs is not None:
batch_size = inputs.size(0)
else:
if self.rnn_cell is nn.LSTM:
batch_size = encoder_hidden[0].size(1)
elif self.rnn_cell is nn.GRU:
batch_size = encoder_hidden.size(1)
# set default input and max decoding length
if inputs is None:
if teacher_forcing_ratio > 0:
raise ValueError("Teacher forcing has to be disabled (set 0) when no inputs is provided.")
inputs = Variable(torch.LongTensor([self.sos_id] * batch_size),
volatile=True).view(batch_size, 1)
if torch.cuda.is_available():
inputs = inputs.cuda()
max_length = self.max_length
else:
max_length = inputs.size(1) - 1 # minus the start of sequence symbol
return inputs, batch_size, max_length
| seq2seq/models/DecoderRNN.py | 9,892 | Provides functionality for decoding in a seq2seq framework, with an option for attention.
Args:
vocab_size (int): size of the vocabulary
max_len (int): a maximum allowed length for the sequence to be processed
hidden_size (int): the number of features in the hidden state `h`
sos_id (int): index of the start of sentence symbol
eos_id (int): index of the end of sentence symbol
n_layers (int, optional): number of recurrent layers (default: 1)
rnn_cell (str, optional): type of RNN cell (default: gru)
bidirectional (bool, optional): if the encoder is bidirectional (default False)
input_dropout_p (float, optional): dropout probability for the input sequence (default: 0)
dropout_p (float, optional): dropout probability for the output sequence (default: 0)
use_attention(bool, optional): flag indication whether to use attention mechanism or not (default: false)
Attributes:
KEY_ATTN_SCORE (str): key used to indicate attention weights in `ret_dict`
KEY_LENGTH (str): key used to indicate a list representing lengths of output sequences in `ret_dict`
KEY_SEQUENCE (str): key used to indicate a list of sequences in `ret_dict`
Inputs: inputs, encoder_hidden, encoder_outputs, function, teacher_forcing_ratio
- **inputs** (batch, seq_len, input_size): list of sequences, whose length is the batch size and within which
each sequence is a list of token IDs. It is used for teacher forcing when provided. (default `None`)
- **encoder_hidden** (num_layers * num_directions, batch_size, hidden_size): tensor containing the features in the
hidden state `h` of encoder. Used as the initial hidden state of the decoder. (default `None`)
- **encoder_outputs** (batch, seq_len, hidden_size): tensor with containing the outputs of the encoder.
Used for attention mechanism (default is `None`).
- **function** (torch.nn.Module): A function used to generate symbols from RNN hidden state
(default is `torch.nn.functional.log_softmax`).
- **teacher_forcing_ratio** (float): The probability that teacher forcing will be used. A random number is
drawn uniformly from 0-1 for every decoding token, and if the sample is smaller than the given value,
teacher forcing would be used (default is 0).
Outputs: decoder_outputs, decoder_hidden, ret_dict
- **decoder_outputs** (seq_len, batch, vocab_size): list of tensors with size (batch_size, vocab_size) containing
the outputs of the decoding function.
- **decoder_hidden** (num_layers * num_directions, batch, hidden_size): tensor containing the last hidden
state of the decoder.
- **ret_dict**: dictionary containing additional information as follows {*KEY_LENGTH* : list of integers
representing lengths of output sequences, *KEY_SEQUENCE* : list of sequences, where each sequence is a list of
predicted token IDs }.
If the encoder is bidirectional, do the following transformation.
(#directions * #layers, #batch, hidden_size) -> (#layers, #batch, #directions * hidden_size)
Initialize the encoder hidden state.
Manual unrolling is used to support random teacher forcing. If teacher_forcing_ratio is True or False instead of a probability, the unrolling can be done in graph inference batch size set default input and max decoding length minus the start of sequence symbol | 3,361 | en | 0.691797 |
import torch
from mmcv import Config
from mmcv.parallel import MMDataParallel
from mmcv.runner import load_checkpoint
from mmdet.apis import single_gpu_mergetiles_visualize
from mmdet.core import wrap_fp16_model
from mmdet.datasets import build_dataloader, build_dataset
from mmdet.models import build_detector
import argparse
def parse_args():
parser = argparse.ArgumentParser(description='Visualize result with tile-cropped images')
parser.add_argument('config', help='test config file path')
parser.add_argument('checkpoint', help='checkpoint file')
args = parser.parse_args()
return args
def main():
args = parse_args()
cfg = Config.fromfile(args.config)
# set cudnn_benchmark
if cfg.get('cudnn_benchmark', False):
torch.backends.cudnn.benchmark = True
cfg.model.pretrained = None
cfg.data.test.test_mode = True
# build the dataloader
dataset = build_dataset(cfg.data.test)
data_loader = build_dataloader(
dataset,
samples_per_gpu=1,
workers_per_gpu=1,
#workers_per_gpu=cfg.data.workers_per_gpu,
dist=False,
shuffle=False)
# build the model and load checkpoint
model = build_detector(cfg.model, train_cfg=None, test_cfg=cfg.test_cfg)
fp16_cfg = cfg.get('fp16', None)
if fp16_cfg is not None:
wrap_fp16_model(model)
# checkpoint = load_checkpoint(model, args.checkpoint, map_location='cpu')
checkpoint = load_checkpoint(model, args.checkpoint, map_location='cuda')
# old versions did not save class info in checkpoints, this walkaround is
# for backward compatibility
if 'CLASSES' in checkpoint['meta']:
model.CLASSES = checkpoint['meta']['CLASSES']
else:
model.CLASSES = dataset.CLASSES
model = MMDataParallel(model, device_ids=[0])
single_gpu_mergetiles_visualize(model, data_loader, 0.8)
if __name__ == "__main__":
main()
| rtools/dota_result_visualize.py | 1,935 | set cudnn_benchmark build the dataloaderworkers_per_gpu=cfg.data.workers_per_gpu, build the model and load checkpoint checkpoint = load_checkpoint(model, args.checkpoint, map_location='cpu') old versions did not save class info in checkpoints, this walkaround is for backward compatibility | 289 | en | 0.771434 |
#!/usr/bin/env python
# encoding: utf-8
"""
Mobi.py
Created by Elliot Kroo on 2009-12-25.
Copyright (c) 2009 Elliot Kroo. All rights reserved.
"""
import sys
import os
import unittest
from struct import *
from pprint import pprint
import mobi.utils
from mobi.lz77 import uncompress_lz77
class Mobi:
def parse(self) -> object:
""" reads in the file, then parses record tables"""
self.contents = self.f.read()
self.header = self.parseHeader()
self.records = self.parseRecordInfoList()
self.readRecord0()
def readRecord(self, recordnum, disable_compression=False):
if self.config:
if self.config['palmdoc']['Compression'] == 1 or disable_compression:
return self.contents[self.records[recordnum]['record Data Offset']:self.records[recordnum+1]['record Data Offset']];
# elif self.config['palmdoc']['Compression'] == 2:
# result = uncompress_lz77(self.contents[self.records[recordnum]['record Data Offset']:self.records[recordnum+1]['record Data Offset']-self.config['mobi']['extra bytes']])
# return result
def readImageRecord(self, imgnum):
if self.config:
recordnum = self.config['mobi']['First Image index'] + imgnum;
return self.readRecord(recordnum, disable_compression=True);
def author(self):
"Returns the author of the book"
return self.config['exth']['records'][100]
def title(self):
"Returns the title of the book"
return self.config['mobi']['Full Name']
########### Private API ###########################
def __init__(self, filename: object) -> object:
try:
if isinstance(filename, str):
self.f = open(filename, "rb");
else:
self.f = filename;
except IOError as e:
sys.stderr.write("Could not open %s! " % filename);
raise e;
self.offset = 0;
def __iter__(self):
if not self.config: return
for record in range(1, self.config['mobi']['First Non-book index'] - 1):
yield self.readRecord(record)
def parseRecordInfoList(self):
records = {};
# read in all records in info list
for recordID in range(self.header['number of records']):
headerfmt = '>II'
headerlen = calcsize(headerfmt)
fields = [
"record Data Offset",
"UniqueID",
]
# create tuple with info
results = zip(fields, unpack(headerfmt, self.contents[self.offset:self.offset+headerlen]))
# increment offset into file
self.offset += headerlen
# convert tuple to dictionary
resultsDict = utils.toDict(results);
# futz around with the unique ID record, as the uniqueID's top 8 bytes are
# really the "record attributes":
resultsDict['record Attributes'] = (resultsDict['UniqueID'] & 0xFF000000) >> 24;
resultsDict['UniqueID'] = resultsDict['UniqueID'] & 0x00FFFFFF;
# store into the records dict
records[resultsDict['UniqueID']] = resultsDict;
return records;
def parseHeader(self):
headerfmt = '>32shhIIIIII4s4sIIH'
headerlen = calcsize(headerfmt)
fields = [
"name",
"attributes",
"version",
"created",
"modified",
"backup",
"modnum",
"appInfoId",
"sortInfoID",
"type",
"creator",
"uniqueIDseed",
"nextRecordListID",
"number of records"
]
# unpack header, zip up into list of tuples
results = zip(fields, unpack(headerfmt, self.contents[self.offset:self.offset+headerlen]))
# increment offset into file
self.offset += headerlen
# convert tuple array to dictionary
resultsDict = utils.toDict(results);
return resultsDict
def readRecord0(self):
palmdocHeader = self.parsePalmDOCHeader();
MobiHeader = self.parseMobiHeader();
exthHeader = None
if MobiHeader['Has EXTH Header']:
exthHeader = self.parseEXTHHeader();
self.config = {
'palmdoc': palmdocHeader,
'mobi' : MobiHeader,
'exth' : exthHeader
}
def parseEXTHHeader(self):
headerfmt = '>III'
headerlen = calcsize(headerfmt)
fields = [
'identifier',
'header length',
'record Count'
]
# unpack header, zip up into list of tuples
results = zip(fields, unpack(headerfmt, self.contents[self.offset:self.offset+headerlen]))
# convert tuple array to dictionary
resultsDict = utils.toDict(results);
self.offset += headerlen;
resultsDict['records'] = {};
for record in range(resultsDict['record Count']):
recordType, recordLen = unpack(">II", self.contents[self.offset:self.offset+8]);
recordData = self.contents[self.offset+8:self.offset+recordLen];
resultsDict['records'][recordType] = recordData;
self.offset += recordLen;
return resultsDict;
def parseMobiHeader(self):
headerfmt = '> IIII II 40s III IIIII IIII I 36s IIII 8s HHIIIII'
headerlen = calcsize(headerfmt)
fields = [
"identifier",
"header length",
"Mobi type",
"text Encoding",
"Unique-ID",
"Generator version",
"-Reserved",
"First Non-book index",
"Full Name Offset",
"Full Name Length",
"Language",
"Input Language",
"Output Language",
"Format version",
"First Image index",
"First Huff Record",
"Huff Record Count",
"First DATP Record",
"DATP Record Count",
"EXTH flags",
"-36 unknown bytes, if Mobi is long enough",
"DRM Offset",
"DRM Count",
"DRM Size",
"DRM Flags",
"-Usually Zeros, unknown 8 bytes",
"-Unknown",
"Last Image Record",
"-Unknown",
"FCIS record",
"-Unknown",
"FLIS record",
"Unknown"
]
# unpack header, zip up into list of tuples
results = zip(fields, unpack(headerfmt, self.contents[self.offset:self.offset+headerlen]))
# convert tuple array to dictionary
resultsDict = utils.toDict(results);
resultsDict['Start Offset'] = self.offset;
resultsDict['Full Name'] = (self.contents[
self.records[0]['record Data Offset'] + resultsDict['Full Name Offset'] :
self.records[0]['record Data Offset'] + resultsDict['Full Name Offset'] + resultsDict['Full Name Length']])
resultsDict['Has DRM'] = resultsDict['DRM Offset'] != 0xFFFFFFFF;
resultsDict['Has EXTH Header'] = (resultsDict['EXTH flags'] & 0x40) != 0;
self.offset += resultsDict['header length'];
def onebits(x, width=16):
return len(list(filter(lambda x: x == "1", (str((x>>i)&1) for i in range(width-1, -1, -1)))));
resultsDict['extra bytes'] = 2*onebits(unpack(">H", self.contents[self.offset-2:self.offset])[0] & 0xFFFE)
return resultsDict;
def parsePalmDOCHeader(self):
headerfmt = '>HHIHHHH'
headerlen = calcsize(headerfmt)
fields = [
"Compression",
"Unused",
"text length",
"record count",
"record size",
"Encryption Type",
"Unknown"
]
offset = self.records[0]['record Data Offset'];
# create tuple with info
results = zip(fields, unpack(headerfmt, self.contents[offset:offset+headerlen]))
# convert tuple array to dictionary
resultsDict = utils.toDict(results);
self.offset = offset+headerlen;
return resultsDict
class MobiTests(unittest.TestCase):
def setUp(self):
self.mobitest = Mobi("../test/病者生存.mobi");
def testParse(self):
self.mobitest.parse();
pprint (self.mobitest.config)
def testRead(self):
self.mobitest.parse();
content = ""
for i in range(1,5):
content += self.mobitest.readRecord(i);
def testImage(self):
self.mobitest.parse();
pprint (self.mobitest.records);
for record in range(4):
f = open("imagerecord%d.jpg" % record, 'w')
f.write(self.mobitest.readImageRecord(record));
f.close();
def testAuthorTitle(self):
self.mobitest.parse()
self.assertEqual(self.mobitest.author(), 'Charles Darwin')
self.assertEqual(self.mobitest.title(), 'The Origin of Species by means '+
'of Natural Selection, 6th Edition')
if __name__ == '__main__':
unittest.main()
| dbookbee/mobi/__init__.py | 8,115 | Returns the author of the book
reads in the file, then parses record tables
Returns the title of the book
Mobi.py
Created by Elliot Kroo on 2009-12-25.
Copyright (c) 2009 Elliot Kroo. All rights reserved.
!/usr/bin/env python encoding: utf-8 elif self.config['palmdoc']['Compression'] == 2: result = uncompress_lz77(self.contents[self.records[recordnum]['record Data Offset']:self.records[recordnum+1]['record Data Offset']-self.config['mobi']['extra bytes']]) return result Private API read in all records in info list create tuple with info increment offset into file convert tuple to dictionary futz around with the unique ID record, as the uniqueID's top 8 bytes are really the "record attributes": store into the records dict unpack header, zip up into list of tuples increment offset into file convert tuple array to dictionary unpack header, zip up into list of tuples convert tuple array to dictionary unpack header, zip up into list of tuples convert tuple array to dictionary create tuple with info convert tuple array to dictionary | 1,050 | en | 0.656903 |
# coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is regenerated.
# --------------------------------------------------------------------------
from typing import Any, AsyncIterable, Callable, Dict, Generic, Optional, TypeVar, Union
import warnings
from azure.core.async_paging import AsyncItemPaged, AsyncList
from azure.core.exceptions import ClientAuthenticationError, HttpResponseError, ResourceExistsError, ResourceNotFoundError, map_error
from azure.core.pipeline import PipelineResponse
from azure.core.pipeline.transport import AsyncHttpResponse, HttpRequest
from azure.core.polling import AsyncLROPoller, AsyncNoPolling, AsyncPollingMethod
from azure.mgmt.core.exceptions import ARMErrorFormat
from azure.mgmt.core.polling.async_arm_polling import AsyncARMPolling
from ... import models
T = TypeVar('T')
ClsType = Optional[Callable[[PipelineResponse[HttpRequest, AsyncHttpResponse], T, Dict[str, Any]], Any]]
class SecurityRulesOperations:
"""SecurityRulesOperations async operations.
You should not instantiate this class directly. Instead, you should create a Client instance that
instantiates it for you and attaches it as an attribute.
:ivar models: Alias to model classes used in this operation group.
:type models: ~azure.mgmt.network.v2018_07_01.models
:param client: Client for service requests.
:param config: Configuration of service client.
:param serializer: An object model serializer.
:param deserializer: An object model deserializer.
"""
models = models
def __init__(self, client, config, serializer, deserializer) -> None:
self._client = client
self._serialize = serializer
self._deserialize = deserializer
self._config = config
async def _delete_initial(
self,
resource_group_name: str,
network_security_group_name: str,
security_rule_name: str,
**kwargs
) -> None:
cls = kwargs.pop('cls', None) # type: ClsType[None]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
api_version = "2018-07-01"
# Construct URL
url = self._delete_initial.metadata['url'] # type: ignore
path_format_arguments = {
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str'),
'networkSecurityGroupName': self._serialize.url("network_security_group_name", network_security_group_name, 'str'),
'securityRuleName': self._serialize.url("security_rule_name", security_rule_name, 'str'),
'subscriptionId': self._serialize.url("self._config.subscription_id", self._config.subscription_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
query_parameters['api-version'] = self._serialize.query("api_version", api_version, 'str')
# Construct headers
header_parameters = {} # type: Dict[str, Any]
request = self._client.delete(url, query_parameters, header_parameters)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200, 202, 204]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, error_format=ARMErrorFormat)
if cls:
return cls(pipeline_response, None, {})
_delete_initial.metadata = {'url': '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkSecurityGroups/{networkSecurityGroupName}/securityRules/{securityRuleName}'} # type: ignore
async def begin_delete(
self,
resource_group_name: str,
network_security_group_name: str,
security_rule_name: str,
**kwargs
) -> AsyncLROPoller[None]:
"""Deletes the specified network security rule.
:param resource_group_name: The name of the resource group.
:type resource_group_name: str
:param network_security_group_name: The name of the network security group.
:type network_security_group_name: str
:param security_rule_name: The name of the security rule.
:type security_rule_name: str
:keyword callable cls: A custom type or function that will be passed the direct response
:keyword str continuation_token: A continuation token to restart a poller from a saved state.
:keyword polling: True for ARMPolling, False for no polling, or a
polling object for personal polling strategy
:paramtype polling: bool or ~azure.core.polling.AsyncPollingMethod
:keyword int polling_interval: Default waiting time between two polls for LRO operations if no Retry-After header is present.
:return: An instance of AsyncLROPoller that returns either None or the result of cls(response)
:rtype: ~azure.core.polling.AsyncLROPoller[None]
:raises ~azure.core.exceptions.HttpResponseError:
"""
polling = kwargs.pop('polling', True) # type: Union[bool, AsyncPollingMethod]
cls = kwargs.pop('cls', None) # type: ClsType[None]
lro_delay = kwargs.pop(
'polling_interval',
self._config.polling_interval
)
cont_token = kwargs.pop('continuation_token', None) # type: Optional[str]
if cont_token is None:
raw_result = await self._delete_initial(
resource_group_name=resource_group_name,
network_security_group_name=network_security_group_name,
security_rule_name=security_rule_name,
cls=lambda x,y,z: x,
**kwargs
)
kwargs.pop('error_map', None)
kwargs.pop('content_type', None)
def get_long_running_output(pipeline_response):
if cls:
return cls(pipeline_response, None, {})
if polling is True: polling_method = AsyncARMPolling(lro_delay, **kwargs)
elif polling is False: polling_method = AsyncNoPolling()
else: polling_method = polling
if cont_token:
return AsyncLROPoller.from_continuation_token(
polling_method=polling_method,
continuation_token=cont_token,
client=self._client,
deserialization_callback=get_long_running_output
)
else:
return AsyncLROPoller(self._client, raw_result, get_long_running_output, polling_method)
begin_delete.metadata = {'url': '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkSecurityGroups/{networkSecurityGroupName}/securityRules/{securityRuleName}'} # type: ignore
async def get(
self,
resource_group_name: str,
network_security_group_name: str,
security_rule_name: str,
**kwargs
) -> "models.SecurityRule":
"""Get the specified network security rule.
:param resource_group_name: The name of the resource group.
:type resource_group_name: str
:param network_security_group_name: The name of the network security group.
:type network_security_group_name: str
:param security_rule_name: The name of the security rule.
:type security_rule_name: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: SecurityRule, or the result of cls(response)
:rtype: ~azure.mgmt.network.v2018_07_01.models.SecurityRule
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.SecurityRule"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
api_version = "2018-07-01"
accept = "application/json"
# Construct URL
url = self.get.metadata['url'] # type: ignore
path_format_arguments = {
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str'),
'networkSecurityGroupName': self._serialize.url("network_security_group_name", network_security_group_name, 'str'),
'securityRuleName': self._serialize.url("security_rule_name", security_rule_name, 'str'),
'subscriptionId': self._serialize.url("self._config.subscription_id", self._config.subscription_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
query_parameters['api-version'] = self._serialize.query("api_version", api_version, 'str')
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
request = self._client.get(url, query_parameters, header_parameters)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, error_format=ARMErrorFormat)
deserialized = self._deserialize('SecurityRule', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
get.metadata = {'url': '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkSecurityGroups/{networkSecurityGroupName}/securityRules/{securityRuleName}'} # type: ignore
async def _create_or_update_initial(
self,
resource_group_name: str,
network_security_group_name: str,
security_rule_name: str,
security_rule_parameters: "models.SecurityRule",
**kwargs
) -> "models.SecurityRule":
cls = kwargs.pop('cls', None) # type: ClsType["models.SecurityRule"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
api_version = "2018-07-01"
content_type = kwargs.pop("content_type", "application/json")
accept = "application/json"
# Construct URL
url = self._create_or_update_initial.metadata['url'] # type: ignore
path_format_arguments = {
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str'),
'networkSecurityGroupName': self._serialize.url("network_security_group_name", network_security_group_name, 'str'),
'securityRuleName': self._serialize.url("security_rule_name", security_rule_name, 'str'),
'subscriptionId': self._serialize.url("self._config.subscription_id", self._config.subscription_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
query_parameters['api-version'] = self._serialize.query("api_version", api_version, 'str')
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Content-Type'] = self._serialize.header("content_type", content_type, 'str')
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
body_content_kwargs = {} # type: Dict[str, Any]
body_content = self._serialize.body(security_rule_parameters, 'SecurityRule')
body_content_kwargs['content'] = body_content
request = self._client.put(url, query_parameters, header_parameters, **body_content_kwargs)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200, 201]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, error_format=ARMErrorFormat)
if response.status_code == 200:
deserialized = self._deserialize('SecurityRule', pipeline_response)
if response.status_code == 201:
deserialized = self._deserialize('SecurityRule', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
_create_or_update_initial.metadata = {'url': '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkSecurityGroups/{networkSecurityGroupName}/securityRules/{securityRuleName}'} # type: ignore
async def begin_create_or_update(
self,
resource_group_name: str,
network_security_group_name: str,
security_rule_name: str,
security_rule_parameters: "models.SecurityRule",
**kwargs
) -> AsyncLROPoller["models.SecurityRule"]:
"""Creates or updates a security rule in the specified network security group.
:param resource_group_name: The name of the resource group.
:type resource_group_name: str
:param network_security_group_name: The name of the network security group.
:type network_security_group_name: str
:param security_rule_name: The name of the security rule.
:type security_rule_name: str
:param security_rule_parameters: Parameters supplied to the create or update network security
rule operation.
:type security_rule_parameters: ~azure.mgmt.network.v2018_07_01.models.SecurityRule
:keyword callable cls: A custom type or function that will be passed the direct response
:keyword str continuation_token: A continuation token to restart a poller from a saved state.
:keyword polling: True for ARMPolling, False for no polling, or a
polling object for personal polling strategy
:paramtype polling: bool or ~azure.core.polling.AsyncPollingMethod
:keyword int polling_interval: Default waiting time between two polls for LRO operations if no Retry-After header is present.
:return: An instance of AsyncLROPoller that returns either SecurityRule or the result of cls(response)
:rtype: ~azure.core.polling.AsyncLROPoller[~azure.mgmt.network.v2018_07_01.models.SecurityRule]
:raises ~azure.core.exceptions.HttpResponseError:
"""
polling = kwargs.pop('polling', True) # type: Union[bool, AsyncPollingMethod]
cls = kwargs.pop('cls', None) # type: ClsType["models.SecurityRule"]
lro_delay = kwargs.pop(
'polling_interval',
self._config.polling_interval
)
cont_token = kwargs.pop('continuation_token', None) # type: Optional[str]
if cont_token is None:
raw_result = await self._create_or_update_initial(
resource_group_name=resource_group_name,
network_security_group_name=network_security_group_name,
security_rule_name=security_rule_name,
security_rule_parameters=security_rule_parameters,
cls=lambda x,y,z: x,
**kwargs
)
kwargs.pop('error_map', None)
kwargs.pop('content_type', None)
def get_long_running_output(pipeline_response):
deserialized = self._deserialize('SecurityRule', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
if polling is True: polling_method = AsyncARMPolling(lro_delay, **kwargs)
elif polling is False: polling_method = AsyncNoPolling()
else: polling_method = polling
if cont_token:
return AsyncLROPoller.from_continuation_token(
polling_method=polling_method,
continuation_token=cont_token,
client=self._client,
deserialization_callback=get_long_running_output
)
else:
return AsyncLROPoller(self._client, raw_result, get_long_running_output, polling_method)
begin_create_or_update.metadata = {'url': '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkSecurityGroups/{networkSecurityGroupName}/securityRules/{securityRuleName}'} # type: ignore
def list(
self,
resource_group_name: str,
network_security_group_name: str,
**kwargs
) -> AsyncIterable["models.SecurityRuleListResult"]:
"""Gets all security rules in a network security group.
:param resource_group_name: The name of the resource group.
:type resource_group_name: str
:param network_security_group_name: The name of the network security group.
:type network_security_group_name: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: An iterator like instance of either SecurityRuleListResult or the result of cls(response)
:rtype: ~azure.core.async_paging.AsyncItemPaged[~azure.mgmt.network.v2018_07_01.models.SecurityRuleListResult]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["models.SecurityRuleListResult"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
api_version = "2018-07-01"
accept = "application/json"
def prepare_request(next_link=None):
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
if not next_link:
# Construct URL
url = self.list.metadata['url'] # type: ignore
path_format_arguments = {
'resourceGroupName': self._serialize.url("resource_group_name", resource_group_name, 'str'),
'networkSecurityGroupName': self._serialize.url("network_security_group_name", network_security_group_name, 'str'),
'subscriptionId': self._serialize.url("self._config.subscription_id", self._config.subscription_id, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
query_parameters['api-version'] = self._serialize.query("api_version", api_version, 'str')
request = self._client.get(url, query_parameters, header_parameters)
else:
url = next_link
query_parameters = {} # type: Dict[str, Any]
request = self._client.get(url, query_parameters, header_parameters)
return request
async def extract_data(pipeline_response):
deserialized = self._deserialize('SecurityRuleListResult', pipeline_response)
list_of_elem = deserialized.value
if cls:
list_of_elem = cls(list_of_elem)
return deserialized.next_link or None, AsyncList(list_of_elem)
async def get_next(next_link=None):
request = prepare_request(next_link)
pipeline_response = await self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, error_format=ARMErrorFormat)
return pipeline_response
return AsyncItemPaged(
get_next, extract_data
)
list.metadata = {'url': '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkSecurityGroups/{networkSecurityGroupName}/securityRules'} # type: ignore
| sdk/network/azure-mgmt-network/azure/mgmt/network/v2018_07_01/aio/operations/_security_rules_operations.py | 20,920 | SecurityRulesOperations async operations.
You should not instantiate this class directly. Instead, you should create a Client instance that
instantiates it for you and attaches it as an attribute.
:ivar models: Alias to model classes used in this operation group.
:type models: ~azure.mgmt.network.v2018_07_01.models
:param client: Client for service requests.
:param config: Configuration of service client.
:param serializer: An object model serializer.
:param deserializer: An object model deserializer.
Gets all security rules in a network security group.
:param resource_group_name: The name of the resource group.
:type resource_group_name: str
:param network_security_group_name: The name of the network security group.
:type network_security_group_name: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: An iterator like instance of either SecurityRuleListResult or the result of cls(response)
:rtype: ~azure.core.async_paging.AsyncItemPaged[~azure.mgmt.network.v2018_07_01.models.SecurityRuleListResult]
:raises: ~azure.core.exceptions.HttpResponseError
coding=utf-8 -------------------------------------------------------------------------- Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. See License.txt in the project root for license information. Code generated by Microsoft (R) AutoRest Code Generator. Changes may cause incorrect behavior and will be lost if the code is regenerated. -------------------------------------------------------------------------- type: ClsType[None] Construct URL type: ignore Construct parameters type: Dict[str, Any] Construct headers type: Dict[str, Any] type: ignore type: Union[bool, AsyncPollingMethod] type: ClsType[None] type: Optional[str] type: ignore type: ClsType["models.SecurityRule"] Construct URL type: ignore Construct parameters type: Dict[str, Any] Construct headers type: Dict[str, Any] type: ignore type: ClsType["models.SecurityRule"] Construct URL type: ignore Construct parameters type: Dict[str, Any] Construct headers type: Dict[str, Any] type: Dict[str, Any] type: ignore type: Union[bool, AsyncPollingMethod] type: ClsType["models.SecurityRule"] type: Optional[str] type: ignore type: ClsType["models.SecurityRuleListResult"] Construct headers type: Dict[str, Any] Construct URL type: ignore Construct parameters type: Dict[str, Any] type: Dict[str, Any] type: ignore | 2,438 | en | 0.511493 |
"""
Copyright (c) 2014-2015 F-Secure
See LICENSE for details
"""
import re
import inspect
import datetime
from copy import copy
from collections import defaultdict
import isodate
import pytz
from .errors import ValidationError, DeclarationError
class BaseField(object):
"""
Superclass for all fields
description (None|string = None)
help text to be shown in schema. This should include the reasons why this field actually needs to exist.
required (bool = False)
flag that specifes if the field has to be present
\*\*kwargs
extra parameters that are not programmatically supported
"""
verbose_name = "unknown_type"
def __init__(self, description=None, required=True, **kwargs):
self.description = description
self.kwargs = kwargs
self.required = required
def _to_python(self, val):
""" Transforms primitive data (e.g. dict, list, str, int, bool, float) to a python object """
return val
def _validate(self, val):
""" Validates incoming data against constraints defined via field declaration """
if self.required and val is None:
raise ValidationError("Value is required and thus cannot be None")
def deserialize(self, val):
""" Converts data passed over the wire or from the script into sth. to be used in python scripts """
rval = self._to_python(val)
self._validate(rval)
return rval
def serialize(self, val):
""" Converts python object into sth. that can be sent over the wire """
return val
def get_schema(self):
rval = {
"description": self.description,
"type": self.verbose_name,
"required": self.required
}
rval.update(self.kwargs)
return rval
class BaseIsoField(BaseField):
""" Represents time entity that can be either a native object or ISO 8601 datetime string.
The item is
`serialized <https://docs.python.org/2/library/datetime.html#datetime.datetime.isoformat>`_ into ISO 8601 string.
"""
def _parse(self, val):
""" Supposed to transform the value into a valid Python type using a respective isodate function """
raise NotImplementedError
def _to_python(self, val):
val = super(BaseIsoField, self)._to_python(val)
if val is None:
return None
if isinstance(val, basestring):
try:
# Parse datetime
val = self._parse(val)
except ValueError:
raise ValidationError("Datetime timestamp has to be a string in ISO 8601 format")
return val
def serialize(self, val):
if val is None:
return None
return val.isoformat()
class DateTimeField(BaseIsoField):
""" datetime object serialized into YYYY-MM-DDThh:mm:ss.sTZD.
E.g.: 2013-09-30T11:32:39.984847 """
verbose_name = "datetime"
def _parse(self, val):
return isodate.parse_datetime(val)
def _to_python(self, val):
val = super(DateTimeField, self)._to_python(val)
if val is None:
return None
# Convert to naive UTC
if hasattr(val, "tzinfo") and val.tzinfo:
val = val.astimezone(pytz.utc)
val = val.replace(tzinfo=None)
return val
class DateField(BaseIsoField):
""" date object serialized into YYYY-MM-DD.
E.g.: 2013-09-30 """
verbose_name = "date"
def _parse(self, val):
return isodate.parse_date(val)
class TimeField(BaseIsoField):
""" time object serialized into hh:mm:ssTZD.
E.g.: 11:32:39.984847 """
verbose_name = "time"
def _parse(self, val):
return isodate.parse_time(val)
def _to_python(self, val):
val = super(TimeField, self)._to_python(val)
if val is None:
return None
# Convert to naive UTC
if hasattr(val, "tzinfo") and val.tzinfo:
dt = datetime.datetime.combine(datetime.date.today(), val)
dt = dt.astimezone(pytz.utc)
dt = dt.replace(tzinfo=None)
val = dt.time()
return val
class DurationField(BaseIsoField):
""" timedelta object serialized into PnYnMnDTnHnMnS.
E.g.: P105DT9H52M49.448422S"""
verbose_name = "duration"
def _parse(self, val):
return isodate.parse_duration(val)
def serialize(self, val):
if val is None:
return None
return isodate.duration_isoformat(val)
class BaseSimpleField(BaseField):
python_type = None
def __init__(self, default=None, **kwargs):
super(BaseSimpleField, self).__init__(**kwargs)
try:
self.default = self._to_python(default)
except ValidationError, e:
raise DeclarationError("default: %s" % str(e))
def _to_python(self, val):
if val is None:
return None
try:
return self.python_type(val)
except ValueError:
raise ValidationError("Conversion of value %r failed" % val)
def get_schema(self):
rval = super(BaseSimpleField, self).get_schema()
rval["default"] = self.default
return rval
class IndexableField(BaseSimpleField):
def __init__(self, choices=None, invalid_choices=None, **kwargs):
super(IndexableField, self).__init__(**kwargs)
if choices is not None:
if not isinstance(choices, (list, tuple)):
raise DeclarationError("choices has to be a list or tuple")
tempo = []
for i in xrange(len(choices)):
try:
tempo.append(self._to_python(choices[i]))
except Exception, e:
raise DeclarationError("[%d]: %s" % (i, str(e)))
choices = tempo
if invalid_choices is not None:
if not isinstance(invalid_choices, (list, tuple)):
raise DeclarationError("invalid_choices has to be a list or tuple")
tempo = []
for i in xrange(len(invalid_choices)):
try:
tempo.append(self._to_python(invalid_choices[i]))
except Exception, e:
raise DeclarationError("[%d]: %s" % (i, str(e)))
invalid_choices = tempo
if self.default is not None:
if invalid_choices and self.default in invalid_choices:
raise DeclarationError("default value is in invalid_choices")
if choices and self.default not in choices:
raise DeclarationError("default value is not in choices")
if invalid_choices and choices:
inter = set(choices).intersection(set(invalid_choices))
if inter:
raise DeclarationError("these choices are stated as both valid and invalid: %r" % inter)
self.choices, self.invalid_choices = choices, invalid_choices
def _validate(self, val):
super(IndexableField, self)._validate(val)
if val is None:
return
if self.choices and val not in self.choices:
raise ValidationError("Val %r must be one of %r" % (val, self.choices))
if self.invalid_choices and val in self.invalid_choices:
raise ValidationError("Val %r must NOT be one of %r" % (val, self.invalid_choices))
def get_schema(self):
rval = super(IndexableField, self).get_schema()
rval["choices"] = self.choices
rval["invalid_choices"] = self.invalid_choices
return rval
class DigitField(IndexableField):
""" Base class for fields that represent numbers
min_val (int|long|float = None)
Minumum threshold for incoming value
max_val (int|long|float = None)
Maximum threshold for imcoming value
"""
def __init__(self, min_val=None, max_val=None, **kwargs):
super(DigitField, self).__init__(**kwargs)
min_val = self._to_python(min_val)
max_val = self._to_python(max_val)
value_check = min_val or max_val
if self.choices is not None and value_check is not None:
raise DeclarationError("choices and min or max value limits do not make sense together")
if min_val is not None and max_val is not None:
if max_val < min_val:
raise DeclarationError("max val is less than min_val")
if self.default is not None:
if min_val is not None and self.default < min_val:
raise DeclarationError("default value is too small")
if max_val is not None and self.default > max_val:
raise DeclarationError("default value is too big")
self.min_val, self.max_val = min_val, max_val
def _to_python(self, val):
if not isinstance(val, (basestring, int, long, float, type(None))):
raise ValidationError("Has to be a digit or a string convertable to digit")
return super(DigitField, self)._to_python(val)
def _validate(self, val):
super(DigitField, self)._validate(val)
if val is None:
return
if self.min_val is not None and val < self.min_val:
raise ValidationError("Digit %r is too small. Has to be at least %r." % (val, self.min_val))
if self.max_val is not None and val > self.max_val:
raise ValidationError("Digit %r is too big. Has to be at max %r." % (val, self.max_val))
def get_schema(self):
rval = super(DigitField, self).get_schema()
rval.update({
"min_val": self.min_val,
"max_val": self.max_val
})
return rval
class IntegerField(DigitField):
""" Transforms input data that could be any number or a string value with that number into *long* """
python_type = long
verbose_name = "int"
class FloatField(DigitField):
""" Transforms input data that could be any number or a string value with that number into *float* """
python_type = float
verbose_name = "float"
class StringField(IndexableField):
""" Represents any arbitrary text
regex (string = None)
`Python regular expression <https://docs.python.org/2/library/re.html#regular-expression-syntax>`_
used to validate the string.
min_length (int = None)
Minimum size of string value
max_length (int = None)
Maximum size of string value
"""
python_type = unicode
verbose_name = "string"
def __init__(self, regex=None, min_length=None, max_length=None, **kwargs):
super(StringField, self).__init__(**kwargs)
def _set(name, transform_f, val):
if val is not None:
try:
val = transform_f(val)
except Exception, e:
raise DeclarationError("%s: %s" % (name, str(e)))
setattr(self, name, val)
val_check = min_length or max_length or regex
if self.choices and val_check is not None:
raise DeclarationError("choices and value checkers do not make sense together")
_set("regex", re.compile, regex)
_set("min_length", int, min_length)
_set("max_length", int, max_length)
def _to_python(self, val):
if not isinstance(val, (basestring, type(None))):
raise ValidationError("Has to be string")
return super(StringField, self)._to_python(val)
def _validate(self, val):
super(StringField, self)._validate(val)
if val is None:
return
if self.min_length is not None:
if len(val) < self.min_length:
raise ValidationError("Length is too small. Is %r has to be at least %r." % (len(val),
self.min_length))
if self.max_length is not None:
if len(val) > self.max_length:
raise ValidationError("Length is too small. Is %r has to be at least %r." % (len(val),
self.max_length))
reg = self.regex
if reg is not None:
if not reg.match(val):
raise ValidationError("%r did not match regexp %r" % (val, reg.pattern))
def get_schema(self):
rval = super(StringField, self).get_schema()
rval.update({
"regex": getattr(self.regex, "pattern", None),
"min_length": self.min_length,
"max_length": self.max_length})
return rval
class BooleanField(BaseSimpleField):
""" Expects only a boolean value as incoming data """
verbose_name = "boolean"
python_type = bool
def _to_python(self, val):
if not isinstance(val, (bool, type(None))):
raise ValidationError("Has to be a digit or a string convertable to digit")
return super(BooleanField, self)._to_python(val)
PRIMITIVE_TYPES_MAP = {
int: IntegerField,
float: FloatField,
str: StringField,
unicode: StringField,
basestring: StringField,
bool: BooleanField
}
def wrap_into_field(simple_type):
if not isinstance(simple_type, BaseField):
field_class = PRIMITIVE_TYPES_MAP.get(simple_type, None)
if field_class:
return field_class()
else:
return ObjectField(simple_type)
return simple_type
class ListField(BaseField):
""" Represents a collection of primitives. Serialized into a list.
item_type (python primitve|Field instance)
value is used by list field to validate individual items
python primitive are internally mapped to Field instances according to
:data:`PRIMITIVE_TYPES_MAP <resource_api.interfaces.PRIMITIVE_TYPES_MAP>`
"""
verbose_name = "list"
def __init__(self, item_type, **kwargs):
super(ListField, self).__init__(**kwargs)
self.item_type = wrap_into_field(item_type)
def deserialize(self, val):
self._validate(val)
if val is None:
return val
errors = []
rval = []
if not isinstance(val, list):
raise ValidationError("Has to be list")
for item in val:
try:
rval.append(self.item_type.deserialize(item))
except ValidationError, e:
errors.append([val.index(item), e.message])
if errors:
raise ValidationError(errors)
return rval
def get_schema(self):
rval = super(ListField, self).get_schema()
rval["schema"] = self.item_type.get_schema()
return rval
def serialize(self, val):
return [self.item_type.serialize(item) for item in val]
class ObjectField(BaseField):
""" Represents a nested document/mapping of primitives. Serialized into a dict.
schema (class):
schema to be used for validation of the nested document, it does not have to be Schema subclass - just a
collection of fields
ObjectField can be declared via two different ways.
First, if there is a reusable schema defined elsewhere:
>>> class Sample(Schema):
>>> object_field = ObjectField(ExternalSchema, required=False, description="Zen")
Second, if the field is supposed to have a unique custom schema:
>>> class Sample(Schema):
>>> object_field = ObjectField(required=False, description="Zen", schema=dict(
>>> "foo": StringField()
>>> ))
"""
verbose_name = "dict"
def __init__(self, schema, **kwargs):
super(ObjectField, self).__init__(**kwargs)
if isinstance(schema, dict):
class Tmp(Schema):
pass
for key, value in schema.iteritems():
setattr(Tmp, key, value)
schema = Tmp
elif inspect.isclass(schema) and not issubclass(schema, Schema):
class Tmp(schema, Schema):
pass
schema = Tmp
self._schema = schema()
def deserialize(self, val):
self._validate(val)
if val is None:
return val
return self._schema.deserialize(val)
def get_schema(self):
return {
"type": self.verbose_name,
"schema": self._schema.get_schema()
}
def serialize(self, val):
return self._schema.serialize(val)
class Schema(object):
""" Base class for containers that would hold one or many fields.
it has one class attribute that may be used to alter shcema's validation flow
has_additional_fields (bool = False)
If *True* it shall be possible to have extra fields inside input data that will not be validated
NOTE: when defining schemas do not use any of the following reserved keywords:
- find_fields
- deserialize
- get_schema
- serialize
- has_additional_fields
"""
has_additional_fields = False
def __init__(self, validate_required_constraint=True, with_errors=True):
self._required_fields = set()
self._defaults = {}
self._validate_required_constraint, self._with_errors = validate_required_constraint, with_errors
self.fields = {}
for field_name in dir(self):
field = getattr(self, field_name)
if not isinstance(field, BaseField):
continue
self._add_field(field_name, copy(field))
def _add_field(self, field_name, field):
setattr(self, field_name, field)
self.fields[field_name] = field
if isinstance(field, BaseField) and field.required:
self._required_fields.add(field_name)
if isinstance(field, BaseSimpleField) and field.default is not None:
self._defaults[field_name] = field.default
def find_fields(self, **kwargs):
""" Returns a set of fields where each field contains one or more specified keyword arguments """
rval = set()
for key, value in kwargs.iteritems():
for field_name, field in self.fields.iteritems():
if field.kwargs.get(key) == value:
rval.add(field_name)
return rval
def deserialize(self, data, validate_required_constraint=True, with_errors=True):
""" Validates and transforms input data into something that is used withing data access layer
data (dict)
Incoming data
validate_required_constraint (bool = True)
If *False*, schema will not validate required constraint of the fields inside
with_errors (bool = True)
If *False*, all fields that contain errors are silently excluded
@raises ValidationError
When one or more fields has errors and *with_errors=True*
"""
if not isinstance(data, dict):
raise ValidationError({"__all__": "Has to be a dict"})
transformed = dict(self._defaults)
errors = defaultdict(list)
for key, value in data.iteritems():
field = self.fields.get(key)
if field is None:
if self.has_additional_fields:
transformed[key] = value
else:
errors["__all__"].append("Field %r is not defined" % key)
continue
try:
transformed[key] = field.deserialize(value)
except ValidationError, e:
errors[key].append(e.message)
if validate_required_constraint:
for field in self._required_fields:
if transformed.get(field) is None and field not in errors:
errors[field].append("Required field is missing")
if errors and with_errors:
raise ValidationError(errors)
else:
return transformed
def get_schema(self):
""" Returns a JSONizable schema that could be transfered over the wire """
rval = {}
for field_name, field in self.fields.iteritems():
rval[field_name] = field.get_schema()
if self.has_additional_fields:
rval["has_additional_fields"] = True
return rval
def serialize(self, val):
""" Transforms outgoing data into a JSONizable dict """
rval = {}
for key, value in val.iteritems():
field = self.fields.get(key)
if field:
rval[key] = field.serialize(value)
elif self.has_additional_fields:
rval[key] = value
else:
pass
return rval
| src/resource_api/schema.py | 20,619 | Parse datetime Convert to naive UTC Convert to naive UTC | 56 | en | 0.677463 |
"""
The tool to check the availability or syntax of domains, IPv4, IPv6 or URL.
::
██████╗ ██╗ ██╗███████╗██╗ ██╗███╗ ██╗ ██████╗███████╗██████╗ ██╗ ███████╗
██╔══██╗╚██╗ ██╔╝██╔════╝██║ ██║████╗ ██║██╔════╝██╔════╝██╔══██╗██║ ██╔════╝
██████╔╝ ╚████╔╝ █████╗ ██║ ██║██╔██╗ ██║██║ █████╗ ██████╔╝██║ █████╗
██╔═══╝ ╚██╔╝ ██╔══╝ ██║ ██║██║╚██╗██║██║ ██╔══╝ ██╔══██╗██║ ██╔══╝
██║ ██║ ██║ ╚██████╔╝██║ ╚████║╚██████╗███████╗██████╔╝███████╗███████╗
╚═╝ ╚═╝ ╚═╝ ╚═════╝ ╚═╝ ╚═══╝ ╚═════╝╚══════╝╚═════╝ ╚══════╝╚══════╝
Provides the cleaning interface.
Author:
Nissar Chababy, @funilrys, contactTATAfunilrysTODTODcom
Special thanks:
https://pyfunceble.github.io/special-thanks.html
Contributors:
https://pyfunceble.github.io/contributors.html
Project link:
https://github.com/funilrys/PyFunceble
Project documentation:
https://pyfunceble.readthedocs.io///en/master/
Project homepage:
https://pyfunceble.github.io/
License:
::
MIT License
Copyright (c) 2017, 2018, 2019, 2020 Nissar Chababy
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
"""
from os import sep as directory_separator
from os import walk
import PyFunceble
class Clean:
"""
Provide the cleaning logic.
.. note::
By cleaning we mean the cleaning of the :code:`output` directory.
:param list_to_test: The list of domains we are testing.
:type list_to_test: list|None
:param bool clean_all:
Tell the subsystem if we need to clean all.
Which include, of course, the output directory but also
all other file(s) generated by our system.
:param str file_path:
The path to the file we tested.
.. note::
This is only relevant if you use the MariaDB/MySQL database.
"""
def __init__(self, clean_all=False, file_path=None):
# We clean the output directory.
self.almost_everything(clean_all, file_path)
@classmethod
def file_to_delete(cls, all_files=False):
"""
Return the list of file to delete.
"""
# We initiate the directory we have to look for.
directory = "{0}{1}".format(
PyFunceble.OUTPUT_DIRECTORY, PyFunceble.OUTPUTS.parent_directory
)
if not directory.endswith(directory_separator): # pragma: no cover
# For safety, if it does not ends with the directory separator, we append it
# to its end.
directory += directory_separator
# We initiate a variable which will save the list of file to delete.
result = []
for root, _, files in walk(directory):
# We walk in the directory and get all files and sub-directories.
for file in files:
# If there is files in the current sub-directory, we loop
# through the list of files.
if file in [".gitignore", ".keep"]:
continue
if (
not all_files and "logs" in root and ".log" in file
): # pragma: no cover
continue
# The file is not into our list of file we do not have to delete.
if root.endswith(directory_separator):
# The root ends with the directory separator.
# We construct the path and append the full path to the result.
result.append(root + file)
else:
# The root directory does not ends with the directory separator.
# We construct the path by appending the directory separator
# between the root and the filename and append the full path to
# the result.
result.append(root + directory_separator + file) # pragma: no cover
# We return our list of file to delete.
return result
@classmethod
def databases_to_delete(cls): # pragma: no cover
"""
Set the databases files to delete.
"""
# We initate the result variable.
result = []
if PyFunceble.CONFIGURATION.db_type == "json":
# We initiate the directory we have to look for.
directory = PyFunceble.CONFIG_DIRECTORY
# We append the dir_structure file.
result.append(
"{0}{1}".format(
directory, PyFunceble.OUTPUTS.default_files.dir_structure
)
)
# We append the iana file.
result.append(
"{0}{1}".format(directory, PyFunceble.OUTPUTS.default_files.iana)
)
# We append the public suffix file.
result.append(
"{0}{1}".format(
directory, PyFunceble.OUTPUTS.default_files.public_suffix
)
)
# We append the inactive database file.
result.append(
"{0}{1}".format(directory, PyFunceble.OUTPUTS.default_files.inactive_db)
)
# We append the mining database file.
result.append(
"{0}{1}".format(directory, PyFunceble.OUTPUTS.default_files.mining)
)
return result
def almost_everything(self, clean_all=False, file_path=False):
"""
Delete almost all discovered files.
:param bool clean_all:
Tell the subsystem if we have to clean everything instesd
of almost everything.
"""
if (
"do_not_clean" not in PyFunceble.INTERN
or not PyFunceble.INTERN["do_not_clean"]
):
# We get the list of file to delete.
to_delete = self.file_to_delete(clean_all)
if (
not PyFunceble.abstracts.Version.is_local_cloned() and clean_all
): # pragma: no cover
to_delete.extend(self.databases_to_delete())
for file in to_delete:
# We loop through the list of file to delete.
# And we delete the currently read file.
PyFunceble.helpers.File(file).delete()
PyFunceble.LOGGER.info(f"Deleted: {file}")
if clean_all: # pragma: no cover
to_avoid = ["whois"]
else:
to_avoid = ["whois", "auto_continue", "inactive", "mining"]
if not file_path:
query = "DELETE FROM {0}"
else: # pragma: no cover
query = "DELETE FROM {0} WHERE file_path = %(file_path)s"
if PyFunceble.CONFIGURATION.db_type in [
"mariadb",
"mysql",
]: # pragma: no cover
with PyFunceble.engine.MySQL() as connection:
for database_name in [
y
for x, y in PyFunceble.engine.MySQL.tables.items()
if x not in to_avoid
]:
lquery = query.format(database_name)
with connection.cursor() as cursor:
cursor.execute(lquery, {"file_path": file_path})
PyFunceble.LOGGER.info(
"Cleaned the data related to "
f"{repr(file_path)} from the {database_name} table."
)
if (
not PyFunceble.abstracts.Version.is_local_cloned() and clean_all
): # pragma: no cover
PyFunceble.load_config()
PyFunceble.LOGGER.info(f"Reloaded configuration.")
| PyFunceble/output/clean.py | 9,585 | Provide the cleaning logic.
.. note::
By cleaning we mean the cleaning of the :code:`output` directory.
:param list_to_test: The list of domains we are testing.
:type list_to_test: list|None
:param bool clean_all:
Tell the subsystem if we need to clean all.
Which include, of course, the output directory but also
all other file(s) generated by our system.
:param str file_path:
The path to the file we tested.
.. note::
This is only relevant if you use the MariaDB/MySQL database.
Delete almost all discovered files.
:param bool clean_all:
Tell the subsystem if we have to clean everything instesd
of almost everything.
Set the databases files to delete.
Return the list of file to delete.
The tool to check the availability or syntax of domains, IPv4, IPv6 or URL.
::
██████╗ ██╗ ██╗███████╗██╗ ██╗███╗ ██╗ ██████╗███████╗██████╗ ██╗ ███████╗
██╔══██╗╚██╗ ██╔╝██╔════╝██║ ██║████╗ ██║██╔════╝██╔════╝██╔══██╗██║ ██╔════╝
██████╔╝ ╚████╔╝ █████╗ ██║ ██║██╔██╗ ██║██║ █████╗ ██████╔╝██║ █████╗
██╔═══╝ ╚██╔╝ ██╔══╝ ██║ ██║██║╚██╗██║██║ ██╔══╝ ██╔══██╗██║ ██╔══╝
██║ ██║ ██║ ╚██████╔╝██║ ╚████║╚██████╗███████╗██████╔╝███████╗███████╗
╚═╝ ╚═╝ ╚═╝ ╚═════╝ ╚═╝ ╚═══╝ ╚═════╝╚══════╝╚═════╝ ╚══════╝╚══════╝
Provides the cleaning interface.
Author:
Nissar Chababy, @funilrys, contactTATAfunilrysTODTODcom
Special thanks:
https://pyfunceble.github.io/special-thanks.html
Contributors:
https://pyfunceble.github.io/contributors.html
Project link:
https://github.com/funilrys/PyFunceble
Project documentation:
https://pyfunceble.readthedocs.io///en/master/
Project homepage:
https://pyfunceble.github.io/
License:
::
MIT License
Copyright (c) 2017, 2018, 2019, 2020 Nissar Chababy
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
We clean the output directory. We initiate the directory we have to look for. pragma: no cover For safety, if it does not ends with the directory separator, we append it to its end. We initiate a variable which will save the list of file to delete. We walk in the directory and get all files and sub-directories. If there is files in the current sub-directory, we loop through the list of files. pragma: no cover The file is not into our list of file we do not have to delete. The root ends with the directory separator. We construct the path and append the full path to the result. The root directory does not ends with the directory separator. We construct the path by appending the directory separator between the root and the filename and append the full path to the result. pragma: no cover We return our list of file to delete. pragma: no cover We initate the result variable. We initiate the directory we have to look for. We append the dir_structure file. We append the iana file. We append the public suffix file. We append the inactive database file. We append the mining database file. We get the list of file to delete. pragma: no cover We loop through the list of file to delete. And we delete the currently read file. pragma: no cover pragma: no cover pragma: no cover pragma: no cover | 4,246 | en | 0.783106 |
"""
ANIMATE RIGID OBJECTS IN BLENDER.
Requirements:
------------------------------------------------------------------------------
IMPORTANT! This has only been tested with Blender 2.79 API.
Warnings:
------------------------------------------------------------------------------
Do not expect all blends to be perfect; we did additional filtering of
generated blends to ensure that random data is well-formed.
Execution:
------------------------------------------------------------------------------
This script is intended to run inside blender launched in background mode.
Sample invocation is:
blender --background --python-exit-code 1 --factory-startup \
--python blender/animate_main.py -- \
--set_env_lighting_image=$ENVMAPS \
--obj_file="$OBJ" \
--output_blend="$OFILE"
Capabilities:
------------------------------------------------------------------------------
Uses Blender's rigid body simulator to animate objects in the input file and
output a blend file with the animation.
"""
import bpy
import argparse
import logging
import math
import os
import sys
import random
import time
import traceback
# Add to path to make sure we can import modules inside Blender.
__sdir = os.path.dirname(os.path.realpath(__file__))
if __sdir not in sys.path:
sys.path.append(__sdir)
import rigid_body_util
import geo_util
import render_util
LOG = logging.getLogger(__name__)
if __name__ == "__main__":
try:
# FLAGS
# --------------------------------------------------------------------------
parser = argparse.ArgumentParser(
description='Utility to animate shapenet models randomly.')
parser.add_argument(
'--obj_file', action='store', type=str, required=True,
help='Input OBJ file.')
parser.add_argument(
'--simple_diagnostic', action='store_true', default=False,
help='If true, does not animate, but just imports and runs diagnostic info.')
parser.add_argument(
'--set_env_lighting_image', action='store', default='',
help='Image or directory of images; set to set environment lighting.')
parser.add_argument(
'--p_breaking', action='store', type=float, default=0.5,
help='Probability of breaking.')
parser.add_argument(
'--p_cam_track', action='store', type=float, default=0.5)
parser.add_argument(
'--p_bouncy', action='store', type=float, default=0.3)
parser.add_argument(
'--p_warp_time', action='store', type=float, default=0.3)
parser.add_argument(
'--p_tilt_floor', action='store', type=float, default=0.2)
parser.add_argument(
'--diagnostic_frame_prefix', action='store', default='')
parser.add_argument(
'--output_blend', action='store', type=str, required=True)
# Parse only arguments after --
# --------------------------------------------------------------------------
argv = sys.argv
if "--" not in argv:
argv = [] # as if no args are passed
else:
argv = argv[argv.index("--") + 1:]
args = parser.parse_args(argv)
random.seed(time.time())
render_util.set_width_height(1500, 1500)
if args.set_env_lighting_image:
render_util.setup_realistic_lighting(args.set_env_lighting_image, 3.0, False)
if args.simple_diagnostic:
rigid_body_util.obj_import_diagnostic(args.obj_file)
cam = geo_util.create_random_camera(
geo_util.BBox([-1.0,-1.0,0.0], [1.0, 1.0, 1.0]),
1.0, 1.0, 1.0)
else:
floor, objects = rigid_body_util.obj_import_animate(
args.obj_file,
allow_breaking=(random.random() < args.p_breaking))
cam = geo_util.create_random_camera(
geo_util.BBox([-1.0,-1.0,0.0], [1.0, 1.0, 1.0]),
1.0, 1.0, 1.0)
# Note: one can't truly slow down the simulation without altering
# the result in blender; empirically this gives a reasonable alternative
# timing
if random.random() < args.p_warp_time:
rigid_body_util.set_rigidbody_world_properties(
steps_per_sec=60, time_scale=0.5, solver_its=random.randint(3, 6))
if random.random() < args.p_tilt_floor:
axis = random.randint(0, 1)
angle = random.uniform(-math.pi * 0.2, math.pi * 0.2)
floor.rotation_euler[axis] = angle
if random.random() < args.p_bouncy:
restitution = random.uniform(0.38, 0.5)
for ob in objects + [floor]:
ob.rigid_body.restitution = restitution
if random.random() < args.p_cam_track:
geo_util.add_camera_track_constraint(
cam, objects[random.randint(0, len(objects) - 1)])
# bpy.context.scene.world.light_settings.samples = 2
bpy.ops.file.pack_all()
print('Saving blend to %s' % args.output_blend.replace('.blend', '_unbaked.blend'))
geo_util.save_blend(args.output_blend.replace('.blend', '_unbaked.blend'))
rigid_body_util.bake_simulation_bugfix()
print('Saving blend to %s' % args.output_blend)
geo_util.save_blend(args.output_blend)
if len(args.diagnostic_frame_prefix) > 0:
render_util.render_animation(args.diagnostic_frame_prefix, 1)
except Exception as e:
tb = traceback.format_exc()
LOG.critical(tb)
LOG.critical('Script failed')
raise e
| creativeflow/blender/animate_main.py | 5,720 | ANIMATE RIGID OBJECTS IN BLENDER.
Requirements:
------------------------------------------------------------------------------
IMPORTANT! This has only been tested with Blender 2.79 API.
Warnings:
------------------------------------------------------------------------------
Do not expect all blends to be perfect; we did additional filtering of
generated blends to ensure that random data is well-formed.
Execution:
------------------------------------------------------------------------------
This script is intended to run inside blender launched in background mode.
Sample invocation is:
blender --background --python-exit-code 1 --factory-startup --python blender/animate_main.py -- --set_env_lighting_image=$ENVMAPS --obj_file="$OBJ" --output_blend="$OFILE"
Capabilities:
------------------------------------------------------------------------------
Uses Blender's rigid body simulator to animate objects in the input file and
output a blend file with the animation.
Add to path to make sure we can import modules inside Blender. FLAGS -------------------------------------------------------------------------- Parse only arguments after -- -------------------------------------------------------------------------- as if no args are passed Note: one can't truly slow down the simulation without altering the result in blender; empirically this gives a reasonable alternative timing bpy.context.scene.world.light_settings.samples = 2 | 1,497 | en | 0.506539 |
# Copyright 2021 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
import argparse
import numpy as np
import mindspore.context as context
import mindspore.nn as nn
from mindspore import Tensor
from mindspore.nn import TrainOneStepCell, WithLossCell
from src.model import LeNet5
from src.adam import AdamWeightDecayOp
parser = argparse.ArgumentParser(description="test_fl_lenet")
parser.add_argument("--device_target", type=str, default="CPU")
parser.add_argument("--server_mode", type=str, default="FEDERATED_LEARNING")
parser.add_argument("--ms_role", type=str, default="MS_WORKER")
parser.add_argument("--worker_num", type=int, default=0)
parser.add_argument("--server_num", type=int, default=1)
parser.add_argument("--scheduler_ip", type=str, default="127.0.0.1")
parser.add_argument("--scheduler_port", type=int, default=8113)
parser.add_argument("--fl_server_port", type=int, default=6666)
parser.add_argument("--start_fl_job_threshold", type=int, default=1)
parser.add_argument("--start_fl_job_time_window", type=int, default=3000)
parser.add_argument("--update_model_ratio", type=float, default=1.0)
parser.add_argument("--update_model_time_window", type=int, default=3000)
parser.add_argument("--fl_name", type=str, default="Lenet")
parser.add_argument("--fl_iteration_num", type=int, default=25)
parser.add_argument("--client_epoch_num", type=int, default=20)
parser.add_argument("--client_batch_size", type=int, default=32)
parser.add_argument("--client_learning_rate", type=float, default=0.1)
parser.add_argument("--scheduler_manage_port", type=int, default=11202)
parser.add_argument("--config_file_path", type=str, default="")
parser.add_argument("--encrypt_type", type=str, default="NOT_ENCRYPT")
# parameters for encrypt_type='DP_ENCRYPT'
parser.add_argument("--dp_eps", type=float, default=50.0)
parser.add_argument("--dp_delta", type=float, default=0.01) # 1/worker_num
parser.add_argument("--dp_norm_clip", type=float, default=1.0)
# parameters for encrypt_type='PW_ENCRYPT'
parser.add_argument("--share_secrets_ratio", type=float, default=1.0)
parser.add_argument("--cipher_time_window", type=int, default=300000)
parser.add_argument("--reconstruct_secrets_threshold", type=int, default=3)
args, _ = parser.parse_known_args()
device_target = args.device_target
server_mode = args.server_mode
ms_role = args.ms_role
worker_num = args.worker_num
server_num = args.server_num
scheduler_ip = args.scheduler_ip
scheduler_port = args.scheduler_port
fl_server_port = args.fl_server_port
start_fl_job_threshold = args.start_fl_job_threshold
start_fl_job_time_window = args.start_fl_job_time_window
update_model_ratio = args.update_model_ratio
update_model_time_window = args.update_model_time_window
share_secrets_ratio = args.share_secrets_ratio
cipher_time_window = args.cipher_time_window
reconstruct_secrets_threshold = args.reconstruct_secrets_threshold
fl_name = args.fl_name
fl_iteration_num = args.fl_iteration_num
client_epoch_num = args.client_epoch_num
client_batch_size = args.client_batch_size
client_learning_rate = args.client_learning_rate
scheduler_manage_port = args.scheduler_manage_port
config_file_path = args.config_file_path
dp_eps = args.dp_eps
dp_delta = args.dp_delta
dp_norm_clip = args.dp_norm_clip
encrypt_type = args.encrypt_type
ctx = {
"enable_fl": True,
"server_mode": server_mode,
"ms_role": ms_role,
"worker_num": worker_num,
"server_num": server_num,
"scheduler_ip": scheduler_ip,
"scheduler_port": scheduler_port,
"fl_server_port": fl_server_port,
"start_fl_job_threshold": start_fl_job_threshold,
"start_fl_job_time_window": start_fl_job_time_window,
"update_model_ratio": update_model_ratio,
"update_model_time_window": update_model_time_window,
"share_secrets_ratio": share_secrets_ratio,
"cipher_time_window": cipher_time_window,
"reconstruct_secrets_threshold": reconstruct_secrets_threshold,
"fl_name": fl_name,
"fl_iteration_num": fl_iteration_num,
"client_epoch_num": client_epoch_num,
"client_batch_size": client_batch_size,
"client_learning_rate": client_learning_rate,
"scheduler_manage_port": scheduler_manage_port,
"config_file_path": config_file_path,
"dp_eps": dp_eps,
"dp_delta": dp_delta,
"dp_norm_clip": dp_norm_clip,
"encrypt_type": encrypt_type
}
context.set_context(mode=context.GRAPH_MODE, device_target=device_target, save_graphs=False)
context.set_fl_context(**ctx)
if __name__ == "__main__":
epoch = 5
np.random.seed(0)
network = LeNet5(62)
criterion = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
net_opt = nn.Momentum(network.trainable_params(), 0.01, 0.9)
net_adam_opt = AdamWeightDecayOp(network.trainable_params(), weight_decay=0.1)
net_with_criterion = WithLossCell(network, criterion)
train_network = TrainOneStepCell(net_with_criterion, net_opt)
train_network.set_train()
losses = []
for _ in range(epoch):
data = Tensor(np.random.rand(32, 3, 32, 32).astype(np.float32))
label = Tensor(np.random.randint(0, 61, (32)).astype(np.int32))
loss = train_network(data, label).asnumpy()
losses.append(loss)
print(losses)
| tests/st/fl/mobile/test_mobile_lenet.py | 5,807 | Copyright 2021 Huawei Technologies Co., Ltd Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ============================================================================ parameters for encrypt_type='DP_ENCRYPT' 1/worker_num parameters for encrypt_type='PW_ENCRYPT' | 733 | en | 0.740875 |
# -*- coding: utf-8 -*-
# Generated by Django 1.10.4 on 2016-12-12 00:16
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='Account',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('password', models.CharField(max_length=128, verbose_name='password')),
('last_login', models.DateTimeField(blank=True, null=True, verbose_name='last login')),
('email', models.EmailField(max_length=254, unique=True)),
('username', models.CharField(max_length=40, unique=True)),
('first_name', models.CharField(blank=True, max_length=40)),
('last_name', models.CharField(blank=True, max_length=40)),
('tagline', models.CharField(blank=True, max_length=140)),
('is_admin', models.BooleanField(default=False)),
('created_at', models.DateTimeField(auto_now_add=True)),
('updated_at', models.DateTimeField(auto_now=True)),
],
options={
'abstract': False,
},
),
]
| authentication/migrations/0001_initial.py | 1,344 | -*- coding: utf-8 -*- Generated by Django 1.10.4 on 2016-12-12 00:16 | 68 | en | 0.752644 |
#! /usr/bin/env python3.6
from selenium import webdriver
import time
browser = webdriver.Chrome(executable_path='/home/coslate/anaconda3/bin/chromedriver')
#url = 'https://stats.nba.com/leaders'
url = 'http://stats.nba.com/teams/traditional/#!?sort=W_PCT&dir=-1'
browser.get(url)
time.sleep(5)
#browser.find_element_by_xpath('/html/body/main/div[2]/div/div[2]/div/div/div[1]/div[1]/div/div/label/select/option[3]').click()
#browser.find_element_by_xpath('/html/body/main/div[2]/div/div[2]/div/div/div[1]/div[2]/div/div/label/select/option[2]').click()
#browser.find_element_by_xpath('/html/body/main/div[2]/div/div[2]/div/div/nba-stat-table/div[3]/div/div/select/option[1]').click()
#table = browser.find_element_by_class_name('nba-stat-table__overflow')
table = browser.find_elements_by_xpath('/html/body/main/div[2]/div/div[2]/div/div/nba-stat-table/div[2]/div[1]/table/tbody')
line1 = browser.find_element_by_xpath('//tr[@index="0"]')
print(line1.text)
print("All the window handles : ")
print(browser.window_handles) # 查看所有window handles
print("The current window handle : ")
print(browser.current_window_handle) # 查看所有window handles
browser.close()
| crawler/test_code/test_selenium.py | 1,177 | ! /usr/bin/env python3.6url = 'https://stats.nba.com/leaders'browser.find_element_by_xpath('/html/body/main/div[2]/div/div[2]/div/div/div[1]/div[1]/div/div/label/select/option[3]').click()browser.find_element_by_xpath('/html/body/main/div[2]/div/div[2]/div/div/div[1]/div[2]/div/div/label/select/option[2]').click()browser.find_element_by_xpath('/html/body/main/div[2]/div/div[2]/div/div/nba-stat-table/div[3]/div/div/select/option[1]').click()table = browser.find_element_by_class_name('nba-stat-table__overflow') 查看所有window handles 查看所有window handles | 552 | en | 0.132153 |
# Large amount of credit goes to:
# https://github.com/keras-team/keras-contrib/blob/master/examples/improved_wgan.py
# which I've used as a reference for this implementation
from __future__ import print_function, division
from keras.datasets import mnist
from keras.layers.merge import _Merge
from keras.layers import Input, Dense, Reshape, Flatten, Dropout
from keras.layers import BatchNormalization, Activation, ZeroPadding2D
from keras.layers.advanced_activations import LeakyReLU
from keras.layers.convolutional import UpSampling2D, Conv2D
from keras.models import Sequential, Model
from keras.optimizers import RMSprop
from functools import partial
import keras.backend as K
import matplotlib.pyplot as plt
import sys
import numpy as np
class RandomWeightedAverage(_Merge):
"""Provides a (random) weighted average between real and generated image samples"""
def _merge_function(self, inputs):
alpha = K.random_uniform((32, 1, 1, 1))
return (alpha * inputs[0]) + ((1 - alpha) * inputs[1])
class WGANGP():
def __init__(self):
self.img_rows = 28
self.img_cols = 28
self.channels = 1
self.img_shape = (self.img_rows, self.img_cols, self.channels)
self.latent_dim = 100
# Following parameter and optimizer set as recommended in paper
self.n_critic = 5
optimizer = RMSprop(lr=0.00005)
# Build the generator and critic
self.generator = self.build_generator()
self.critic = self.build_critic()
#-------------------------------
# Construct Computational Graph
# for the Critic
#-------------------------------
# Freeze generator's layers while training critic
self.generator.trainable = False
# Image input (real sample)
real_img = Input(shape=self.img_shape)
# Noise input
z_disc = Input(shape=(self.latent_dim,))
# Generate image based of noise (fake sample)
fake_img = self.generator(z_disc)
# Discriminator determines validity of the real and fake images
fake = self.critic(fake_img)
valid = self.critic(real_img)
# Construct weighted average between real and fake images
interpolated_img = RandomWeightedAverage()([real_img, fake_img])
# Determine validity of weighted sample
validity_interpolated = self.critic(interpolated_img)
# Use Python partial to provide loss function with additional
# 'averaged_samples' argument
partial_gp_loss = partial(self.gradient_penalty_loss,
averaged_samples=interpolated_img)
partial_gp_loss.__name__ = 'gradient_penalty' # Keras requires function names
self.critic_model = Model(inputs=[real_img, z_disc],
outputs=[valid, fake, validity_interpolated])
self.critic_model.compile(loss=[self.wasserstein_loss,
self.wasserstein_loss,
partial_gp_loss],
optimizer=optimizer,
loss_weights=[1, 1, 10])
#-------------------------------
# Construct Computational Graph
# for Generator
#-------------------------------
# For the generator we freeze the critic's layers
self.critic.trainable = False
self.generator.trainable = True
# Sampled noise for input to generator
z_gen = Input(shape=(self.latent_dim,))
# Generate images based of noise
img = self.generator(z_gen)
# Discriminator determines validity
valid = self.critic(img)
# Defines generator model
self.generator_model = Model(z_gen, valid)
self.generator_model.compile(loss=self.wasserstein_loss, optimizer=optimizer)
def gradient_penalty_loss(self, y_true, y_pred, averaged_samples):
"""
Computes gradient penalty based on prediction and weighted real / fake samples
"""
gradients = K.gradients(y_pred, averaged_samples)[0]
# compute the euclidean norm by squaring ...
gradients_sqr = K.square(gradients)
# ... summing over the rows ...
gradients_sqr_sum = K.sum(gradients_sqr,
axis=np.arange(1, len(gradients_sqr.shape)))
# ... and sqrt
gradient_l2_norm = K.sqrt(gradients_sqr_sum)
# compute lambda * (1 - ||grad||)^2 still for each single sample
gradient_penalty = K.square(1 - gradient_l2_norm)
# return the mean as loss over all the batch samples
return K.mean(gradient_penalty)
def wasserstein_loss(self, y_true, y_pred):
return K.mean(y_true * y_pred)
def build_generator(self):
model = Sequential()
model.add(Dense(128 * 7 * 7, activation="relu", input_dim=self.latent_dim))
model.add(Reshape((7, 7, 128)))
model.add(UpSampling2D())
model.add(Conv2D(128, kernel_size=4, padding="same"))
model.add(BatchNormalization(momentum=0.8))
model.add(Activation("relu"))
model.add(UpSampling2D())
model.add(Conv2D(64, kernel_size=4, padding="same"))
model.add(BatchNormalization(momentum=0.8))
model.add(Activation("relu"))
model.add(Conv2D(self.channels, kernel_size=4, padding="same"))
model.add(Activation("tanh"))
model.summary()
noise = Input(shape=(self.latent_dim,))
img = model(noise)
return Model(noise, img)
def build_critic(self):
model = Sequential()
model.add(Conv2D(16, kernel_size=3, strides=2, input_shape=self.img_shape, padding="same"))
model.add(LeakyReLU(alpha=0.2))
model.add(Dropout(0.25))
model.add(Conv2D(32, kernel_size=3, strides=2, padding="same"))
model.add(ZeroPadding2D(padding=((0,1),(0,1))))
model.add(BatchNormalization(momentum=0.8))
model.add(LeakyReLU(alpha=0.2))
model.add(Dropout(0.25))
model.add(Conv2D(64, kernel_size=3, strides=2, padding="same"))
model.add(BatchNormalization(momentum=0.8))
model.add(LeakyReLU(alpha=0.2))
model.add(Dropout(0.25))
model.add(Conv2D(128, kernel_size=3, strides=1, padding="same"))
model.add(BatchNormalization(momentum=0.8))
model.add(LeakyReLU(alpha=0.2))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(1))
model.summary()
img = Input(shape=self.img_shape)
validity = model(img)
return Model(img, validity)
def train(self, epochs, batch_size, sample_interval=50):
# Load the dataset
(X_train, _), (_, _) = mnist.load_data()
# Rescale -1 to 1
X_train = (X_train.astype(np.float32) - 127.5) / 127.5
X_train = np.expand_dims(X_train, axis=3)
# Adversarial ground truths
valid = -np.ones((batch_size, 1))
fake = np.ones((batch_size, 1))
dummy = np.zeros((batch_size, 1)) # Dummy gt for gradient penalty
for epoch in range(epochs):
for _ in range(self.n_critic):
# ---------------------
# Train Discriminator
# ---------------------
# Select a random batch of images
idx = np.random.randint(0, X_train.shape[0], batch_size)
imgs = X_train[idx]
# Sample generator input
noise = np.random.normal(0, 1, (batch_size, self.latent_dim))
# Train the critic
d_loss = self.critic_model.train_on_batch([imgs, noise],
[valid, fake, dummy])
# ---------------------
# Train Generator
# ---------------------
g_loss = self.generator_model.train_on_batch(noise, valid)
# Plot the progress
print ("%d [D loss: %f] [G loss: %f]" % (epoch, d_loss[0], g_loss))
# If at save interval => save generated image samples
if epoch % sample_interval == 0:
self.sample_images(epoch)
def sample_images(self, epoch):
r, c = 5, 5
noise = np.random.normal(0, 1, (r * c, self.latent_dim))
gen_imgs = self.generator.predict(noise)
# Rescale images 0 - 1
gen_imgs = 0.5 * gen_imgs + 0.5
fig, axs = plt.subplots(r, c)
cnt = 0
for i in range(r):
for j in range(c):
axs[i,j].imshow(gen_imgs[cnt, :,:,0], cmap='gray')
axs[i,j].axis('off')
cnt += 1
fig.savefig("images/mnist_%d.png" % epoch)
plt.close()
if __name__ == '__main__':
wgan = WGANGP()
wgan.train(epochs=30000, batch_size=32, sample_interval=100)
| wgan_gp/wgan_gp.py | 8,958 | Provides a (random) weighted average between real and generated image samples
Computes gradient penalty based on prediction and weighted real / fake samples
Large amount of credit goes to: https://github.com/keras-team/keras-contrib/blob/master/examples/improved_wgan.py which I've used as a reference for this implementation Following parameter and optimizer set as recommended in paper Build the generator and critic------------------------------- Construct Computational Graph for the Critic------------------------------- Freeze generator's layers while training critic Image input (real sample) Noise input Generate image based of noise (fake sample) Discriminator determines validity of the real and fake images Construct weighted average between real and fake images Determine validity of weighted sample Use Python partial to provide loss function with additional 'averaged_samples' argument Keras requires function names------------------------------- Construct Computational Graph for Generator------------------------------- For the generator we freeze the critic's layers Sampled noise for input to generator Generate images based of noise Discriminator determines validity Defines generator model compute the euclidean norm by squaring ... ... summing over the rows ... ... and sqrt compute lambda * (1 - ||grad||)^2 still for each single sample return the mean as loss over all the batch samples Load the dataset Rescale -1 to 1 Adversarial ground truths Dummy gt for gradient penalty --------------------- Train Discriminator --------------------- Select a random batch of images Sample generator input Train the critic --------------------- Train Generator --------------------- Plot the progress If at save interval => save generated image samples Rescale images 0 - 1 | 1,807 | en | 0.760732 |
"""
sphinx.writers.texinfo
~~~~~~~~~~~~~~~~~~~~~~
Custom docutils writer for Texinfo.
:copyright: Copyright 2007-2022 by the Sphinx team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
import re
import textwrap
import warnings
from os import path
from typing import (TYPE_CHECKING, Any, Dict, Iterable, Iterator, List, Optional, Pattern, Set,
Tuple, Union, cast)
from docutils import nodes, writers
from docutils.nodes import Element, Node, Text
from sphinx import __display_version__, addnodes
from sphinx.deprecation import RemovedInSphinx50Warning
from sphinx.domains import IndexEntry
from sphinx.domains.index import IndexDomain
from sphinx.errors import ExtensionError
from sphinx.locale import _, __, admonitionlabels
from sphinx.util import logging
from sphinx.util.docutils import SphinxTranslator
from sphinx.util.i18n import format_date
from sphinx.writers.latex import collected_footnote
if TYPE_CHECKING:
from sphinx.builders.texinfo import TexinfoBuilder
logger = logging.getLogger(__name__)
COPYING = """\
@quotation
%(project)s %(release)s, %(date)s
%(author)s
Copyright @copyright{} %(copyright)s
@end quotation
"""
TEMPLATE = """\
\\input texinfo @c -*-texinfo-*-
@c %%**start of header
@setfilename %(filename)s
@documentencoding UTF-8
@ifinfo
@*Generated by Sphinx """ + __display_version__ + """.@*
@end ifinfo
@settitle %(title)s
@defindex ge
@paragraphindent %(paragraphindent)s
@exampleindent %(exampleindent)s
@finalout
%(direntry)s
@definfoenclose strong,`,'
@definfoenclose emph,`,'
@c %%**end of header
@copying
%(copying)s
@end copying
@titlepage
@title %(title)s
@insertcopying
@end titlepage
@contents
@c %%** start of user preamble
%(preamble)s
@c %%** end of user preamble
@ifnottex
@node Top
@top %(title)s
@insertcopying
@end ifnottex
@c %%**start of body
%(body)s
@c %%**end of body
@bye
"""
def find_subsections(section: Element) -> List[nodes.section]:
"""Return a list of subsections for the given ``section``."""
result = []
for child in section:
if isinstance(child, nodes.section):
result.append(child)
continue
elif isinstance(child, nodes.Element):
result.extend(find_subsections(child))
return result
def smart_capwords(s: str, sep: str = None) -> str:
"""Like string.capwords() but does not capitalize words that already
contain a capital letter."""
words = s.split(sep)
for i, word in enumerate(words):
if all(x.islower() for x in word):
words[i] = word.capitalize()
return (sep or ' ').join(words)
class TexinfoWriter(writers.Writer):
"""Texinfo writer for generating Texinfo documents."""
supported = ('texinfo', 'texi')
settings_spec: Tuple[str, Any, Tuple[Tuple[str, List[str], Dict[str, str]], ...]] = (
'Texinfo Specific Options', None, (
("Name of the Info file", ['--texinfo-filename'], {'default': ''}),
('Dir entry', ['--texinfo-dir-entry'], {'default': ''}),
('Description', ['--texinfo-dir-description'], {'default': ''}),
('Category', ['--texinfo-dir-category'], {'default':
'Miscellaneous'})))
settings_defaults: Dict = {}
output: str = None
visitor_attributes = ('output', 'fragment')
def __init__(self, builder: "TexinfoBuilder") -> None:
super().__init__()
self.builder = builder
def translate(self) -> None:
visitor = self.builder.create_translator(self.document, self.builder)
self.visitor = cast(TexinfoTranslator, visitor)
self.document.walkabout(visitor)
self.visitor.finish()
for attr in self.visitor_attributes:
setattr(self, attr, getattr(self.visitor, attr))
class TexinfoTranslator(SphinxTranslator):
builder: "TexinfoBuilder" = None
ignore_missing_images = False
default_elements = {
'author': '',
'body': '',
'copying': '',
'date': '',
'direntry': '',
'exampleindent': 4,
'filename': '',
'paragraphindent': 0,
'preamble': '',
'project': '',
'release': '',
'title': '',
}
def __init__(self, document: nodes.document, builder: "TexinfoBuilder") -> None:
super().__init__(document, builder)
self.init_settings()
self.written_ids: Set[str] = set() # node names and anchors in output
# node names and anchors that should be in output
self.referenced_ids: Set[str] = set()
self.indices: List[Tuple[str, str]] = [] # (node name, content)
self.short_ids: Dict[str, str] = {} # anchors --> short ids
self.node_names: Dict[str, str] = {} # node name --> node's name to display
self.node_menus: Dict[str, List[str]] = {} # node name --> node's menu entries
self.rellinks: Dict[str, List[str]] = {} # node name --> (next, previous, up)
self.collect_indices()
self.collect_node_names()
self.collect_node_menus()
self.collect_rellinks()
self.body: List[str] = []
self.context: List[str] = []
self.descs: List[addnodes.desc] = []
self.previous_section: nodes.section = None
self.section_level = 0
self.seen_title = False
self.next_section_ids: Set[str] = set()
self.escape_newlines = 0
self.escape_hyphens = 0
self.curfilestack: List[str] = []
self.footnotestack: List[Dict[str, List[Union[collected_footnote, bool]]]] = [] # NOQA
self.in_footnote = 0
self.in_samp = 0
self.handled_abbrs: Set[str] = set()
self.colwidths: List[int] = None
def finish(self) -> None:
if self.previous_section is None:
self.add_menu('Top')
for index in self.indices:
name, content = index
pointers = tuple([name] + self.rellinks[name])
self.body.append('\n@node %s,%s,%s,%s\n' % pointers)
self.body.append('@unnumbered %s\n\n%s\n' % (name, content))
while self.referenced_ids:
# handle xrefs with missing anchors
r = self.referenced_ids.pop()
if r not in self.written_ids:
self.body.append('@anchor{%s}@w{%s}\n' % (r, ' ' * 30))
self.ensure_eol()
self.fragment = ''.join(self.body)
self.elements['body'] = self.fragment
self.output = TEMPLATE % self.elements
# -- Helper routines
def init_settings(self) -> None:
elements = self.elements = self.default_elements.copy()
elements.update({
# if empty, the title is set to the first section title
'title': self.settings.title,
'author': self.settings.author,
# if empty, use basename of input file
'filename': self.settings.texinfo_filename,
'release': self.escape(self.config.release),
'project': self.escape(self.config.project),
'copyright': self.escape(self.config.copyright),
'date': self.escape(self.config.today or
format_date(self.config.today_fmt or _('%b %d, %Y'),
language=self.config.language))
})
# title
title: str = self.settings.title
if not title:
title_node = self.document.next_node(nodes.title)
title = title_node.astext() if title_node else '<untitled>'
elements['title'] = self.escape_id(title) or '<untitled>'
# filename
if not elements['filename']:
elements['filename'] = self.document.get('source') or 'untitled'
if elements['filename'][-4:] in ('.txt', '.rst'): # type: ignore
elements['filename'] = elements['filename'][:-4] # type: ignore
elements['filename'] += '.info' # type: ignore
# direntry
if self.settings.texinfo_dir_entry:
entry = self.format_menu_entry(
self.escape_menu(self.settings.texinfo_dir_entry),
'(%s)' % elements['filename'],
self.escape_arg(self.settings.texinfo_dir_description))
elements['direntry'] = ('@dircategory %s\n'
'@direntry\n'
'%s'
'@end direntry\n') % (
self.escape_id(self.settings.texinfo_dir_category), entry)
elements['copying'] = COPYING % elements
# allow the user to override them all
elements.update(self.settings.texinfo_elements)
def collect_node_names(self) -> None:
"""Generates a unique id for each section.
Assigns the attribute ``node_name`` to each section."""
def add_node_name(name: str) -> str:
node_id = self.escape_id(name)
nth, suffix = 1, ''
while node_id + suffix in self.written_ids or \
node_id + suffix in self.node_names:
nth += 1
suffix = '<%s>' % nth
node_id += suffix
self.written_ids.add(node_id)
self.node_names[node_id] = name
return node_id
# must have a "Top" node
self.document['node_name'] = 'Top'
add_node_name('Top')
add_node_name('top')
# each index is a node
self.indices = [(add_node_name(name), content)
for name, content in self.indices]
# each section is also a node
for section in self.document.findall(nodes.section):
title = cast(nodes.TextElement, section.next_node(nodes.Titular))
name = title.astext() if title else '<untitled>'
section['node_name'] = add_node_name(name)
def collect_node_menus(self) -> None:
"""Collect the menu entries for each "node" section."""
node_menus = self.node_menus
targets: List[Element] = [self.document]
targets.extend(self.document.findall(nodes.section))
for node in targets:
assert 'node_name' in node and node['node_name']
entries = [s['node_name'] for s in find_subsections(node)]
node_menus[node['node_name']] = entries
# try to find a suitable "Top" node
title = self.document.next_node(nodes.title)
top = title.parent if title else self.document
if not isinstance(top, (nodes.document, nodes.section)):
top = self.document
if top is not self.document:
entries = node_menus[top['node_name']]
entries += node_menus['Top'][1:]
node_menus['Top'] = entries
del node_menus[top['node_name']]
top['node_name'] = 'Top'
# handle the indices
for name, _content in self.indices:
node_menus[name] = []
node_menus['Top'].append(name)
def collect_rellinks(self) -> None:
"""Collect the relative links (next, previous, up) for each "node"."""
rellinks = self.rellinks
node_menus = self.node_menus
for id in node_menus:
rellinks[id] = ['', '', '']
# up's
for id, entries in node_menus.items():
for e in entries:
rellinks[e][2] = id
# next's and prev's
for id, entries in node_menus.items():
for i, id in enumerate(entries):
# First child's prev is empty
if i != 0:
rellinks[id][1] = entries[i - 1]
# Last child's next is empty
if i != len(entries) - 1:
rellinks[id][0] = entries[i + 1]
# top's next is its first child
try:
first = node_menus['Top'][0]
except IndexError:
pass
else:
rellinks['Top'][0] = first
rellinks[first][1] = 'Top'
# -- Escaping
# Which characters to escape depends on the context. In some cases,
# namely menus and node names, it's not possible to escape certain
# characters.
def escape(self, s: str) -> str:
"""Return a string with Texinfo command characters escaped."""
s = s.replace('@', '@@')
s = s.replace('{', '@{')
s = s.replace('}', '@}')
# prevent `` and '' quote conversion
s = s.replace('``', "`@w{`}")
s = s.replace("''", "'@w{'}")
return s
def escape_arg(self, s: str) -> str:
"""Return an escaped string suitable for use as an argument
to a Texinfo command."""
s = self.escape(s)
# commas are the argument delimiters
s = s.replace(',', '@comma{}')
# normalize white space
s = ' '.join(s.split()).strip()
return s
def escape_id(self, s: str) -> str:
"""Return an escaped string suitable for node names and anchors."""
bad_chars = ',:()'
for bc in bad_chars:
s = s.replace(bc, ' ')
if re.search('[^ .]', s):
# remove DOTs if name contains other characters
s = s.replace('.', ' ')
s = ' '.join(s.split()).strip()
return self.escape(s)
def escape_menu(self, s: str) -> str:
"""Return an escaped string suitable for menu entries."""
s = self.escape_arg(s)
s = s.replace(':', ';')
s = ' '.join(s.split()).strip()
return s
def ensure_eol(self) -> None:
"""Ensure the last line in body is terminated by new line."""
if self.body and self.body[-1][-1:] != '\n':
self.body.append('\n')
def format_menu_entry(self, name: str, node_name: str, desc: str) -> str:
if name == node_name:
s = '* %s:: ' % (name,)
else:
s = '* %s: %s. ' % (name, node_name)
offset = max((24, (len(name) + 4) % 78))
wdesc = '\n'.join(' ' * offset + l for l in
textwrap.wrap(desc, width=78 - offset))
return s + wdesc.strip() + '\n'
def add_menu_entries(self, entries: List[str], reg: Pattern = re.compile(r'\s+---?\s+')
) -> None:
for entry in entries:
name = self.node_names[entry]
# special formatting for entries that are divided by an em-dash
try:
parts = reg.split(name, 1)
except TypeError:
# could be a gettext proxy
parts = [name]
if len(parts) == 2:
name, desc = parts
else:
desc = ''
name = self.escape_menu(name)
desc = self.escape(desc)
self.body.append(self.format_menu_entry(name, entry, desc))
def add_menu(self, node_name: str) -> None:
entries = self.node_menus[node_name]
if not entries:
return
self.body.append('\n@menu\n')
self.add_menu_entries(entries)
if (node_name != 'Top' or
not self.node_menus[entries[0]] or
self.config.texinfo_no_detailmenu):
self.body.append('\n@end menu\n')
return
def _add_detailed_menu(name: str) -> None:
entries = self.node_menus[name]
if not entries:
return
self.body.append('\n%s\n\n' % (self.escape(self.node_names[name],)))
self.add_menu_entries(entries)
for subentry in entries:
_add_detailed_menu(subentry)
self.body.append('\n@detailmenu\n'
' --- The Detailed Node Listing ---\n')
for entry in entries:
_add_detailed_menu(entry)
self.body.append('\n@end detailmenu\n'
'@end menu\n')
def tex_image_length(self, width_str: str) -> str:
match = re.match(r'(\d*\.?\d*)\s*(\S*)', width_str)
if not match:
# fallback
return width_str
res = width_str
amount, unit = match.groups()[:2]
if not unit or unit == "px":
# pixels: let TeX alone
return ''
elif unit == "%":
# a4paper: textwidth=418.25368pt
res = "%d.0pt" % (float(amount) * 4.1825368)
return res
def collect_indices(self) -> None:
def generate(content: List[Tuple[str, List[IndexEntry]]], collapsed: bool) -> str:
ret = ['\n@menu\n']
for _letter, entries in content:
for entry in entries:
if not entry[3]:
continue
name = self.escape_menu(entry[0])
sid = self.get_short_id('%s:%s' % (entry[2], entry[3]))
desc = self.escape_arg(entry[6])
me = self.format_menu_entry(name, sid, desc)
ret.append(me)
ret.append('@end menu\n')
return ''.join(ret)
indices_config = self.config.texinfo_domain_indices
if indices_config:
for domain in self.builder.env.domains.values():
for indexcls in domain.indices:
indexname = '%s-%s' % (domain.name, indexcls.name)
if isinstance(indices_config, list):
if indexname not in indices_config:
continue
content, collapsed = indexcls(domain).generate(
self.builder.docnames)
if not content:
continue
self.indices.append((indexcls.localname,
generate(content, collapsed)))
# only add the main Index if it's not empty
domain = cast(IndexDomain, self.builder.env.get_domain('index'))
for docname in self.builder.docnames:
if domain.entries[docname]:
self.indices.append((_('Index'), '\n@printindex ge\n'))
break
# this is copied from the latex writer
# TODO: move this to sphinx.util
def collect_footnotes(self, node: Element) -> Dict[str, List[Union[collected_footnote, bool]]]: # NOQA
def footnotes_under(n: Element) -> Iterator[nodes.footnote]:
if isinstance(n, nodes.footnote):
yield n
else:
for c in n.children:
if isinstance(c, addnodes.start_of_file):
continue
elif isinstance(c, nodes.Element):
yield from footnotes_under(c)
fnotes: Dict[str, List[Union[collected_footnote, bool]]] = {}
for fn in footnotes_under(node):
label = cast(nodes.label, fn[0])
num = label.astext().strip()
fnotes[num] = [collected_footnote('', *fn.children), False]
return fnotes
# -- xref handling
def get_short_id(self, id: str) -> str:
"""Return a shorter 'id' associated with ``id``."""
# Shorter ids improve paragraph filling in places
# that the id is hidden by Emacs.
try:
sid = self.short_ids[id]
except KeyError:
sid = hex(len(self.short_ids))[2:]
self.short_ids[id] = sid
return sid
def add_anchor(self, id: str, node: Node) -> None:
if id.startswith('index-'):
return
id = self.curfilestack[-1] + ':' + id
eid = self.escape_id(id)
sid = self.get_short_id(id)
for id in (eid, sid):
if id not in self.written_ids:
self.body.append('@anchor{%s}' % id)
self.written_ids.add(id)
def add_xref(self, id: str, name: str, node: Node) -> None:
name = self.escape_menu(name)
sid = self.get_short_id(id)
if self.config.texinfo_cross_references:
self.body.append('@ref{%s,,%s}' % (sid, name))
self.referenced_ids.add(sid)
self.referenced_ids.add(self.escape_id(id))
else:
self.body.append(name)
# -- Visiting
def visit_document(self, node: Element) -> None:
self.footnotestack.append(self.collect_footnotes(node))
self.curfilestack.append(node.get('docname', ''))
if 'docname' in node:
self.add_anchor(':doc', node)
def depart_document(self, node: Element) -> None:
self.footnotestack.pop()
self.curfilestack.pop()
def visit_Text(self, node: Text) -> None:
s = self.escape(node.astext())
if self.escape_newlines:
s = s.replace('\n', ' ')
if self.escape_hyphens:
# prevent "--" and "---" conversion
s = s.replace('-', '@w{-}')
self.body.append(s)
def depart_Text(self, node: Text) -> None:
pass
def visit_section(self, node: Element) -> None:
self.next_section_ids.update(node.get('ids', []))
if not self.seen_title:
return
if self.previous_section:
self.add_menu(self.previous_section['node_name'])
else:
self.add_menu('Top')
node_name = node['node_name']
pointers = tuple([node_name] + self.rellinks[node_name])
self.body.append('\n@node %s,%s,%s,%s\n' % pointers)
for id in sorted(self.next_section_ids):
self.add_anchor(id, node)
self.next_section_ids.clear()
self.previous_section = cast(nodes.section, node)
self.section_level += 1
def depart_section(self, node: Element) -> None:
self.section_level -= 1
headings = (
'@unnumbered',
'@chapter',
'@section',
'@subsection',
'@subsubsection',
)
rubrics = (
'@heading',
'@subheading',
'@subsubheading',
)
def visit_title(self, node: Element) -> None:
if not self.seen_title:
self.seen_title = True
raise nodes.SkipNode
parent = node.parent
if isinstance(parent, nodes.table):
return
if isinstance(parent, (nodes.Admonition, nodes.sidebar, nodes.topic)):
raise nodes.SkipNode
elif not isinstance(parent, nodes.section):
logger.warning(__('encountered title node not in section, topic, table, '
'admonition or sidebar'),
location=node)
self.visit_rubric(node)
else:
try:
heading = self.headings[self.section_level]
except IndexError:
heading = self.headings[-1]
self.body.append('\n%s ' % heading)
def depart_title(self, node: Element) -> None:
self.body.append('\n\n')
def visit_rubric(self, node: Element) -> None:
if len(node) == 1 and node.astext() in ('Footnotes', _('Footnotes')):
raise nodes.SkipNode
try:
rubric = self.rubrics[self.section_level]
except IndexError:
rubric = self.rubrics[-1]
self.body.append('\n%s ' % rubric)
self.escape_newlines += 1
def depart_rubric(self, node: Element) -> None:
self.escape_newlines -= 1
self.body.append('\n\n')
def visit_subtitle(self, node: Element) -> None:
self.body.append('\n\n@noindent\n')
def depart_subtitle(self, node: Element) -> None:
self.body.append('\n\n')
# -- References
def visit_target(self, node: Element) -> None:
# postpone the labels until after the sectioning command
parindex = node.parent.index(node)
try:
try:
next = node.parent[parindex + 1]
except IndexError:
# last node in parent, look at next after parent
# (for section of equal level)
next = node.parent.parent[node.parent.parent.index(node.parent)]
if isinstance(next, nodes.section):
if node.get('refid'):
self.next_section_ids.add(node['refid'])
self.next_section_ids.update(node['ids'])
return
except (IndexError, AttributeError):
pass
if 'refuri' in node:
return
if node.get('refid'):
self.add_anchor(node['refid'], node)
for id in node['ids']:
self.add_anchor(id, node)
def depart_target(self, node: Element) -> None:
pass
def visit_reference(self, node: Element) -> None:
# an xref's target is displayed in Info so we ignore a few
# cases for the sake of appearance
if isinstance(node.parent, (nodes.title, addnodes.desc_type)):
return
if isinstance(node[0], nodes.image):
return
name = node.get('name', node.astext()).strip()
uri = node.get('refuri', '')
if not uri and node.get('refid'):
uri = '%' + self.curfilestack[-1] + '#' + node['refid']
if not uri:
return
if uri.startswith('mailto:'):
uri = self.escape_arg(uri[7:])
name = self.escape_arg(name)
if not name or name == uri:
self.body.append('@email{%s}' % uri)
else:
self.body.append('@email{%s,%s}' % (uri, name))
elif uri.startswith('#'):
# references to labels in the same document
id = self.curfilestack[-1] + ':' + uri[1:]
self.add_xref(id, name, node)
elif uri.startswith('%'):
# references to documents or labels inside documents
hashindex = uri.find('#')
if hashindex == -1:
# reference to the document
id = uri[1:] + '::doc'
else:
# reference to a label
id = uri[1:].replace('#', ':')
self.add_xref(id, name, node)
elif uri.startswith('info:'):
# references to an external Info file
uri = uri[5:].replace('_', ' ')
uri = self.escape_arg(uri)
id = 'Top'
if '#' in uri:
uri, id = uri.split('#', 1)
id = self.escape_id(id)
name = self.escape_menu(name)
if name == id:
self.body.append('@ref{%s,,,%s}' % (id, uri))
else:
self.body.append('@ref{%s,,%s,%s}' % (id, name, uri))
else:
uri = self.escape_arg(uri)
name = self.escape_arg(name)
show_urls = self.config.texinfo_show_urls
if self.in_footnote:
show_urls = 'inline'
if not name or uri == name:
self.body.append('@indicateurl{%s}' % uri)
elif show_urls == 'inline':
self.body.append('@uref{%s,%s}' % (uri, name))
elif show_urls == 'no':
self.body.append('@uref{%s,,%s}' % (uri, name))
else:
self.body.append('%s@footnote{%s}' % (name, uri))
raise nodes.SkipNode
def depart_reference(self, node: Element) -> None:
pass
def visit_number_reference(self, node: Element) -> None:
text = nodes.Text(node.get('title', '#'))
self.visit_Text(text)
raise nodes.SkipNode
def visit_title_reference(self, node: Element) -> None:
text = node.astext()
self.body.append('@cite{%s}' % self.escape_arg(text))
raise nodes.SkipNode
# -- Blocks
def visit_paragraph(self, node: Element) -> None:
self.body.append('\n')
def depart_paragraph(self, node: Element) -> None:
self.body.append('\n')
def visit_block_quote(self, node: Element) -> None:
self.body.append('\n@quotation\n')
def depart_block_quote(self, node: Element) -> None:
self.ensure_eol()
self.body.append('@end quotation\n')
def visit_literal_block(self, node: Element) -> None:
self.body.append('\n@example\n')
def depart_literal_block(self, node: Element) -> None:
self.ensure_eol()
self.body.append('@end example\n')
visit_doctest_block = visit_literal_block
depart_doctest_block = depart_literal_block
def visit_line_block(self, node: Element) -> None:
if not isinstance(node.parent, nodes.line_block):
self.body.append('\n\n')
self.body.append('@display\n')
def depart_line_block(self, node: Element) -> None:
self.body.append('@end display\n')
if not isinstance(node.parent, nodes.line_block):
self.body.append('\n\n')
def visit_line(self, node: Element) -> None:
self.escape_newlines += 1
def depart_line(self, node: Element) -> None:
self.body.append('@w{ }\n')
self.escape_newlines -= 1
# -- Inline
def visit_strong(self, node: Element) -> None:
self.body.append('@strong{')
def depart_strong(self, node: Element) -> None:
self.body.append('}')
def visit_emphasis(self, node: Element) -> None:
element = 'emph' if not self.in_samp else 'var'
self.body.append('@%s{' % element)
def depart_emphasis(self, node: Element) -> None:
self.body.append('}')
def is_samp(self, node: Element) -> bool:
return 'samp' in node['classes']
def visit_literal(self, node: Element) -> None:
if self.is_samp(node):
self.in_samp += 1
self.body.append('@code{')
def depart_literal(self, node: Element) -> None:
if self.is_samp(node):
self.in_samp -= 1
self.body.append('}')
def visit_superscript(self, node: Element) -> None:
self.body.append('@w{^')
def depart_superscript(self, node: Element) -> None:
self.body.append('}')
def visit_subscript(self, node: Element) -> None:
self.body.append('@w{[')
def depart_subscript(self, node: Element) -> None:
self.body.append(']}')
# -- Footnotes
def visit_footnote(self, node: Element) -> None:
raise nodes.SkipNode
def visit_collected_footnote(self, node: Element) -> None:
self.in_footnote += 1
self.body.append('@footnote{')
def depart_collected_footnote(self, node: Element) -> None:
self.body.append('}')
self.in_footnote -= 1
def visit_footnote_reference(self, node: Element) -> None:
num = node.astext().strip()
try:
footnode, used = self.footnotestack[-1][num]
except (KeyError, IndexError) as exc:
raise nodes.SkipNode from exc
# footnotes are repeated for each reference
footnode.walkabout(self) # type: ignore
raise nodes.SkipChildren
def visit_citation(self, node: Element) -> None:
self.body.append('\n')
for id in node.get('ids'):
self.add_anchor(id, node)
self.escape_newlines += 1
def depart_citation(self, node: Element) -> None:
self.escape_newlines -= 1
def visit_citation_reference(self, node: Element) -> None:
self.body.append('@w{[')
def depart_citation_reference(self, node: Element) -> None:
self.body.append(']}')
# -- Lists
def visit_bullet_list(self, node: Element) -> None:
bullet = node.get('bullet', '*')
self.body.append('\n\n@itemize %s\n' % bullet)
def depart_bullet_list(self, node: Element) -> None:
self.ensure_eol()
self.body.append('@end itemize\n')
def visit_enumerated_list(self, node: Element) -> None:
# doesn't support Roman numerals
enum = node.get('enumtype', 'arabic')
starters = {'arabic': '',
'loweralpha': 'a',
'upperalpha': 'A'}
start = node.get('start', starters.get(enum, ''))
self.body.append('\n\n@enumerate %s\n' % start)
def depart_enumerated_list(self, node: Element) -> None:
self.ensure_eol()
self.body.append('@end enumerate\n')
def visit_list_item(self, node: Element) -> None:
self.body.append('\n@item ')
def depart_list_item(self, node: Element) -> None:
pass
# -- Option List
def visit_option_list(self, node: Element) -> None:
self.body.append('\n\n@table @option\n')
def depart_option_list(self, node: Element) -> None:
self.ensure_eol()
self.body.append('@end table\n')
def visit_option_list_item(self, node: Element) -> None:
pass
def depart_option_list_item(self, node: Element) -> None:
pass
def visit_option_group(self, node: Element) -> None:
self.at_item_x = '@item'
def depart_option_group(self, node: Element) -> None:
pass
def visit_option(self, node: Element) -> None:
self.escape_hyphens += 1
self.body.append('\n%s ' % self.at_item_x)
self.at_item_x = '@itemx'
def depart_option(self, node: Element) -> None:
self.escape_hyphens -= 1
def visit_option_string(self, node: Element) -> None:
pass
def depart_option_string(self, node: Element) -> None:
pass
def visit_option_argument(self, node: Element) -> None:
self.body.append(node.get('delimiter', ' '))
def depart_option_argument(self, node: Element) -> None:
pass
def visit_description(self, node: Element) -> None:
self.body.append('\n')
def depart_description(self, node: Element) -> None:
pass
# -- Definitions
def visit_definition_list(self, node: Element) -> None:
self.body.append('\n\n@table @asis\n')
def depart_definition_list(self, node: Element) -> None:
self.ensure_eol()
self.body.append('@end table\n')
def visit_definition_list_item(self, node: Element) -> None:
self.at_item_x = '@item'
def depart_definition_list_item(self, node: Element) -> None:
pass
def visit_term(self, node: Element) -> None:
for id in node.get('ids'):
self.add_anchor(id, node)
# anchors and indexes need to go in front
for n in node[::]:
if isinstance(n, (addnodes.index, nodes.target)):
n.walkabout(self)
node.remove(n)
self.body.append('\n%s ' % self.at_item_x)
self.at_item_x = '@itemx'
def depart_term(self, node: Element) -> None:
pass
def visit_classifier(self, node: Element) -> None:
self.body.append(' : ')
def depart_classifier(self, node: Element) -> None:
pass
def visit_definition(self, node: Element) -> None:
self.body.append('\n')
def depart_definition(self, node: Element) -> None:
pass
# -- Tables
def visit_table(self, node: Element) -> None:
self.entry_sep = '@item'
def depart_table(self, node: Element) -> None:
self.body.append('\n@end multitable\n\n')
def visit_tabular_col_spec(self, node: Element) -> None:
pass
def depart_tabular_col_spec(self, node: Element) -> None:
pass
def visit_colspec(self, node: Element) -> None:
self.colwidths.append(node['colwidth'])
if len(self.colwidths) != self.n_cols:
return
self.body.append('\n\n@multitable ')
for n in self.colwidths:
self.body.append('{%s} ' % ('x' * (n + 2)))
def depart_colspec(self, node: Element) -> None:
pass
def visit_tgroup(self, node: Element) -> None:
self.colwidths = []
self.n_cols = node['cols']
def depart_tgroup(self, node: Element) -> None:
pass
def visit_thead(self, node: Element) -> None:
self.entry_sep = '@headitem'
def depart_thead(self, node: Element) -> None:
pass
def visit_tbody(self, node: Element) -> None:
pass
def depart_tbody(self, node: Element) -> None:
pass
def visit_row(self, node: Element) -> None:
pass
def depart_row(self, node: Element) -> None:
self.entry_sep = '@item'
def visit_entry(self, node: Element) -> None:
self.body.append('\n%s\n' % self.entry_sep)
self.entry_sep = '@tab'
def depart_entry(self, node: Element) -> None:
for _i in range(node.get('morecols', 0)):
self.body.append('\n@tab\n')
# -- Field Lists
def visit_field_list(self, node: Element) -> None:
pass
def depart_field_list(self, node: Element) -> None:
pass
def visit_field(self, node: Element) -> None:
self.body.append('\n')
def depart_field(self, node: Element) -> None:
self.body.append('\n')
def visit_field_name(self, node: Element) -> None:
self.ensure_eol()
self.body.append('@*')
def depart_field_name(self, node: Element) -> None:
self.body.append(': ')
def visit_field_body(self, node: Element) -> None:
pass
def depart_field_body(self, node: Element) -> None:
pass
# -- Admonitions
def visit_admonition(self, node: Element, name: str = '') -> None:
if not name:
title = cast(nodes.title, node[0])
name = self.escape(title.astext())
self.body.append('\n@cartouche\n@quotation %s ' % name)
def _visit_named_admonition(self, node: Element) -> None:
label = admonitionlabels[node.tagname]
self.body.append('\n@cartouche\n@quotation %s ' % label)
def depart_admonition(self, node: Element) -> None:
self.ensure_eol()
self.body.append('@end quotation\n'
'@end cartouche\n')
visit_attention = _visit_named_admonition
depart_attention = depart_admonition
visit_caution = _visit_named_admonition
depart_caution = depart_admonition
visit_danger = _visit_named_admonition
depart_danger = depart_admonition
visit_error = _visit_named_admonition
depart_error = depart_admonition
visit_hint = _visit_named_admonition
depart_hint = depart_admonition
visit_important = _visit_named_admonition
depart_important = depart_admonition
visit_note = _visit_named_admonition
depart_note = depart_admonition
visit_tip = _visit_named_admonition
depart_tip = depart_admonition
visit_warning = _visit_named_admonition
depart_warning = depart_admonition
# -- Misc
def visit_docinfo(self, node: Element) -> None:
raise nodes.SkipNode
def visit_generated(self, node: Element) -> None:
raise nodes.SkipNode
def visit_header(self, node: Element) -> None:
raise nodes.SkipNode
def visit_footer(self, node: Element) -> None:
raise nodes.SkipNode
def visit_container(self, node: Element) -> None:
if node.get('literal_block'):
self.body.append('\n\n@float LiteralBlock\n')
def depart_container(self, node: Element) -> None:
if node.get('literal_block'):
self.body.append('\n@end float\n\n')
def visit_decoration(self, node: Element) -> None:
pass
def depart_decoration(self, node: Element) -> None:
pass
def visit_topic(self, node: Element) -> None:
# ignore TOC's since we have to have a "menu" anyway
if 'contents' in node.get('classes', []):
raise nodes.SkipNode
title = cast(nodes.title, node[0])
self.visit_rubric(title)
self.body.append('%s\n' % self.escape(title.astext()))
self.depart_rubric(title)
def depart_topic(self, node: Element) -> None:
pass
def visit_transition(self, node: Element) -> None:
self.body.append('\n\n%s\n\n' % ('_' * 66))
def depart_transition(self, node: Element) -> None:
pass
def visit_attribution(self, node: Element) -> None:
self.body.append('\n\n@center --- ')
def depart_attribution(self, node: Element) -> None:
self.body.append('\n\n')
def visit_raw(self, node: Element) -> None:
format = node.get('format', '').split()
if 'texinfo' in format or 'texi' in format:
self.body.append(node.astext())
raise nodes.SkipNode
def visit_figure(self, node: Element) -> None:
self.body.append('\n\n@float Figure\n')
def depart_figure(self, node: Element) -> None:
self.body.append('\n@end float\n\n')
def visit_caption(self, node: Element) -> None:
if (isinstance(node.parent, nodes.figure) or
(isinstance(node.parent, nodes.container) and
node.parent.get('literal_block'))):
self.body.append('\n@caption{')
else:
logger.warning(__('caption not inside a figure.'),
location=node)
def depart_caption(self, node: Element) -> None:
if (isinstance(node.parent, nodes.figure) or
(isinstance(node.parent, nodes.container) and
node.parent.get('literal_block'))):
self.body.append('}\n')
def visit_image(self, node: Element) -> None:
if node['uri'] in self.builder.images:
uri = self.builder.images[node['uri']]
else:
# missing image!
if self.ignore_missing_images:
return
uri = node['uri']
if uri.find('://') != -1:
# ignore remote images
return
name, ext = path.splitext(uri)
# width and height ignored in non-tex output
width = self.tex_image_length(node.get('width', ''))
height = self.tex_image_length(node.get('height', ''))
alt = self.escape_arg(node.get('alt', ''))
filename = "%s-figures/%s" % (self.elements['filename'][:-5], name) # type: ignore
self.body.append('\n@image{%s,%s,%s,%s,%s}\n' %
(filename, width, height, alt, ext[1:]))
def depart_image(self, node: Element) -> None:
pass
def visit_compound(self, node: Element) -> None:
pass
def depart_compound(self, node: Element) -> None:
pass
def visit_sidebar(self, node: Element) -> None:
self.visit_topic(node)
def depart_sidebar(self, node: Element) -> None:
self.depart_topic(node)
def visit_label(self, node: Element) -> None:
# label numbering is automatically generated by Texinfo
if self.in_footnote:
raise nodes.SkipNode
else:
self.body.append('@w{(')
def depart_label(self, node: Element) -> None:
self.body.append(')} ')
def visit_legend(self, node: Element) -> None:
pass
def depart_legend(self, node: Element) -> None:
pass
def visit_substitution_reference(self, node: Element) -> None:
pass
def depart_substitution_reference(self, node: Element) -> None:
pass
def visit_substitution_definition(self, node: Element) -> None:
raise nodes.SkipNode
def visit_system_message(self, node: Element) -> None:
self.body.append('\n@verbatim\n'
'<SYSTEM MESSAGE: %s>\n'
'@end verbatim\n' % node.astext())
raise nodes.SkipNode
def visit_comment(self, node: Element) -> None:
self.body.append('\n')
for line in node.astext().splitlines():
self.body.append('@c %s\n' % line)
raise nodes.SkipNode
def visit_problematic(self, node: Element) -> None:
self.body.append('>>')
def depart_problematic(self, node: Element) -> None:
self.body.append('<<')
def unimplemented_visit(self, node: Element) -> None:
logger.warning(__("unimplemented node type: %r"), node,
location=node)
def unknown_departure(self, node: Node) -> None:
pass
# -- Sphinx specific
def visit_productionlist(self, node: Element) -> None:
self.visit_literal_block(None)
names = []
productionlist = cast(Iterable[addnodes.production], node)
for production in productionlist:
names.append(production['tokenname'])
maxlen = max(len(name) for name in names)
for production in productionlist:
if production['tokenname']:
for id in production.get('ids'):
self.add_anchor(id, production)
s = production['tokenname'].ljust(maxlen) + ' ::='
else:
s = '%s ' % (' ' * maxlen)
self.body.append(self.escape(s))
self.body.append(self.escape(production.astext() + '\n'))
self.depart_literal_block(None)
raise nodes.SkipNode
def visit_production(self, node: Element) -> None:
pass
def depart_production(self, node: Element) -> None:
pass
def visit_literal_emphasis(self, node: Element) -> None:
self.body.append('@code{')
def depart_literal_emphasis(self, node: Element) -> None:
self.body.append('}')
def visit_literal_strong(self, node: Element) -> None:
self.body.append('@code{')
def depart_literal_strong(self, node: Element) -> None:
self.body.append('}')
def visit_index(self, node: Element) -> None:
# terminate the line but don't prevent paragraph breaks
if isinstance(node.parent, nodes.paragraph):
self.ensure_eol()
else:
self.body.append('\n')
for entry in node['entries']:
typ, text, tid, text2, key_ = entry
text = self.escape_menu(text)
self.body.append('@geindex %s\n' % text)
def visit_versionmodified(self, node: Element) -> None:
self.body.append('\n')
def depart_versionmodified(self, node: Element) -> None:
self.body.append('\n')
def visit_start_of_file(self, node: Element) -> None:
# add a document target
self.next_section_ids.add(':doc')
self.curfilestack.append(node['docname'])
self.footnotestack.append(self.collect_footnotes(node))
def depart_start_of_file(self, node: Element) -> None:
self.curfilestack.pop()
self.footnotestack.pop()
def visit_centered(self, node: Element) -> None:
txt = self.escape_arg(node.astext())
self.body.append('\n\n@center %s\n\n' % txt)
raise nodes.SkipNode
def visit_seealso(self, node: Element) -> None:
self.body.append('\n\n@subsubheading %s\n\n' %
admonitionlabels['seealso'])
def depart_seealso(self, node: Element) -> None:
self.body.append('\n')
def visit_meta(self, node: Element) -> None:
raise nodes.SkipNode
def visit_glossary(self, node: Element) -> None:
pass
def depart_glossary(self, node: Element) -> None:
pass
def visit_acks(self, node: Element) -> None:
bullet_list = cast(nodes.bullet_list, node[0])
list_items = cast(Iterable[nodes.list_item], bullet_list)
self.body.append('\n\n')
self.body.append(', '.join(n.astext() for n in list_items) + '.')
self.body.append('\n\n')
raise nodes.SkipNode
#############################################################
# Domain-specific object descriptions
#############################################################
# Top-level nodes for descriptions
##################################
def visit_desc(self, node: addnodes.desc) -> None:
self.descs.append(node)
self.at_deffnx = '@deffn'
def depart_desc(self, node: addnodes.desc) -> None:
self.descs.pop()
self.ensure_eol()
self.body.append('@end deffn\n')
def visit_desc_signature(self, node: Element) -> None:
self.escape_hyphens += 1
objtype = node.parent['objtype']
if objtype != 'describe':
for id in node.get('ids'):
self.add_anchor(id, node)
# use the full name of the objtype for the category
try:
domain = self.builder.env.get_domain(node.parent['domain'])
name = domain.get_type_name(domain.object_types[objtype],
self.config.primary_domain == domain.name)
except (KeyError, ExtensionError):
name = objtype
# by convention, the deffn category should be capitalized like a title
category = self.escape_arg(smart_capwords(name))
self.body.append('\n%s {%s} ' % (self.at_deffnx, category))
self.at_deffnx = '@deffnx'
self.desc_type_name = name
def depart_desc_signature(self, node: Element) -> None:
self.body.append("\n")
self.escape_hyphens -= 1
self.desc_type_name = None
def visit_desc_signature_line(self, node: Element) -> None:
pass
def depart_desc_signature_line(self, node: Element) -> None:
pass
def visit_desc_content(self, node: Element) -> None:
pass
def depart_desc_content(self, node: Element) -> None:
pass
def visit_desc_inline(self, node: Element) -> None:
pass
def depart_desc_inline(self, node: Element) -> None:
pass
# Nodes for high-level structure in signatures
##############################################
def visit_desc_name(self, node: Element) -> None:
pass
def depart_desc_name(self, node: Element) -> None:
pass
def visit_desc_addname(self, node: Element) -> None:
pass
def depart_desc_addname(self, node: Element) -> None:
pass
def visit_desc_type(self, node: Element) -> None:
pass
def depart_desc_type(self, node: Element) -> None:
pass
def visit_desc_returns(self, node: Element) -> None:
self.body.append(' -> ')
def depart_desc_returns(self, node: Element) -> None:
pass
def visit_desc_parameterlist(self, node: Element) -> None:
self.body.append(' (')
self.first_param = 1
def depart_desc_parameterlist(self, node: Element) -> None:
self.body.append(')')
def visit_desc_parameter(self, node: Element) -> None:
if not self.first_param:
self.body.append(', ')
else:
self.first_param = 0
text = self.escape(node.astext())
# replace no-break spaces with normal ones
text = text.replace(' ', '@w{ }')
self.body.append(text)
raise nodes.SkipNode
def visit_desc_optional(self, node: Element) -> None:
self.body.append('[')
def depart_desc_optional(self, node: Element) -> None:
self.body.append(']')
def visit_desc_annotation(self, node: Element) -> None:
# Try to avoid duplicating info already displayed by the deffn category.
# e.g.
# @deffn {Class} Foo
# -- instead of --
# @deffn {Class} class Foo
txt = node.astext().strip()
if ((self.descs and txt == self.descs[-1]['objtype']) or
(self.desc_type_name and txt in self.desc_type_name.split())):
raise nodes.SkipNode
def depart_desc_annotation(self, node: Element) -> None:
pass
##############################################
def visit_inline(self, node: Element) -> None:
pass
def depart_inline(self, node: Element) -> None:
pass
def visit_abbreviation(self, node: Element) -> None:
abbr = node.astext()
self.body.append('@abbr{')
if node.hasattr('explanation') and abbr not in self.handled_abbrs:
self.context.append(',%s}' % self.escape_arg(node['explanation']))
self.handled_abbrs.add(abbr)
else:
self.context.append('}')
def depart_abbreviation(self, node: Element) -> None:
self.body.append(self.context.pop())
def visit_manpage(self, node: Element) -> None:
return self.visit_literal_emphasis(node)
def depart_manpage(self, node: Element) -> None:
return self.depart_literal_emphasis(node)
def visit_download_reference(self, node: Element) -> None:
pass
def depart_download_reference(self, node: Element) -> None:
pass
def visit_hlist(self, node: Element) -> None:
self.visit_bullet_list(node)
def depart_hlist(self, node: Element) -> None:
self.depart_bullet_list(node)
def visit_hlistcol(self, node: Element) -> None:
pass
def depart_hlistcol(self, node: Element) -> None:
pass
def visit_pending_xref(self, node: Element) -> None:
pass
def depart_pending_xref(self, node: Element) -> None:
pass
def visit_math(self, node: Element) -> None:
self.body.append('@math{' + self.escape_arg(node.astext()) + '}')
raise nodes.SkipNode
def visit_math_block(self, node: Element) -> None:
if node.get('label'):
self.add_anchor(node['label'], node)
self.body.append('\n\n@example\n%s\n@end example\n\n' %
self.escape_arg(node.astext()))
raise nodes.SkipNode
@property
def desc(self) -> Optional[addnodes.desc]:
warnings.warn('TexinfoWriter.desc is deprecated.', RemovedInSphinx50Warning)
if len(self.descs):
return self.descs[-1]
else:
return None
| sphinx/writers/texinfo.py | 53,356 | Texinfo writer for generating Texinfo documents.
Collect the menu entries for each "node" section.
Generates a unique id for each section.
Assigns the attribute ``node_name`` to each section.
Collect the relative links (next, previous, up) for each "node".
Ensure the last line in body is terminated by new line.
Return a string with Texinfo command characters escaped.
Return an escaped string suitable for use as an argument
to a Texinfo command.
Return an escaped string suitable for node names and anchors.
Return an escaped string suitable for menu entries.
Return a list of subsections for the given ``section``.
Return a shorter 'id' associated with ``id``.
Like string.capwords() but does not capitalize words that already
contain a capital letter.
sphinx.writers.texinfo
~~~~~~~~~~~~~~~~~~~~~~
Custom docutils writer for Texinfo.
:copyright: Copyright 2007-2022 by the Sphinx team, see AUTHORS.
:license: BSD, see LICENSE for details.
node names and anchors in output node names and anchors that should be in output (node name, content) anchors --> short ids node name --> node's name to display node name --> node's menu entries node name --> (next, previous, up) NOQA handle xrefs with missing anchors -- Helper routines if empty, the title is set to the first section title if empty, use basename of input file title filename type: ignore type: ignore type: ignore direntry allow the user to override them all must have a "Top" node each index is a node each section is also a node try to find a suitable "Top" node handle the indices up's next's and prev's First child's prev is empty Last child's next is empty top's next is its first child -- Escaping Which characters to escape depends on the context. In some cases, namely menus and node names, it's not possible to escape certain characters. prevent `` and '' quote conversion commas are the argument delimiters normalize white space remove DOTs if name contains other characters special formatting for entries that are divided by an em-dash could be a gettext proxy fallback pixels: let TeX alone a4paper: textwidth=418.25368pt only add the main Index if it's not empty this is copied from the latex writer TODO: move this to sphinx.util NOQA -- xref handling Shorter ids improve paragraph filling in places that the id is hidden by Emacs. -- Visiting prevent "--" and "---" conversion -- References postpone the labels until after the sectioning command last node in parent, look at next after parent (for section of equal level) an xref's target is displayed in Info so we ignore a few cases for the sake of appearance references to labels in the same document references to documents or labels inside documents reference to the document reference to a label references to an external Info file -- Blocks -- Inline -- Footnotes footnotes are repeated for each reference type: ignore -- Lists doesn't support Roman numerals -- Option List -- Definitions anchors and indexes need to go in front -- Tables -- Field Lists -- Admonitions -- Misc ignore TOC's since we have to have a "menu" anyway missing image! ignore remote images width and height ignored in non-tex output type: ignore label numbering is automatically generated by Texinfo -- Sphinx specific terminate the line but don't prevent paragraph breaks add a document target Domain-specific object descriptions Top-level nodes for descriptions use the full name of the objtype for the category by convention, the deffn category should be capitalized like a title Nodes for high-level structure in signatures replace no-break spaces with normal ones Try to avoid duplicating info already displayed by the deffn category. e.g. @deffn {Class} Foo -- instead of -- @deffn {Class} class Foo | 3,732 | en | 0.770669 |
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Thu Feb 22 17:28:54 2018
@author: galengao
This is the original analysis code as it exists in the environment where it was writen and initially run.
Portions and modifications of this script constitute all other .py scripts in this directory.
"""
import numpy as np
import pandas as pd
from collections import Counter
import matplotlib.pyplot as plt
import seaborn as sns
### Helper Function to Load in the Data ###
def load_data(coh, thresh=False):
"""Load in the hg38 and hg19 gistic thresholded data. Assume GISTIC runs
for each tumor type live in a parent directory (hg38_gistic or hg19_gistic)
one level up from this script."""
if thresh:
hg38 = '../hg38_gistic/'+coh+'/all_thresholded.by_genes.txt'
hg19 = '../hg19_gistic/'+coh+'/all_thresholded.by_genes.txt'
hg38drops = ['Cytoband', 'Locus ID']
else:
hg38 = '../hg38_gistic/'+coh+'/all_data_by_genes.txt'
hg19 = '../hg19_gistic/'+coh+'/all_data_by_genes.txt'
hg38drops = ['Cytoband', 'Gene ID']
df_hg19 = pd.read_table(hg19, index_col=[0]).drop(['Cytoband', 'Locus ID'], axis=1)
df_hg38 = pd.read_table(hg38, index_col=[0]).drop(hg38drops, axis=1)
same_samps = list(set(df_hg38.columns) & set(df_hg19.columns))
same_genes = list(set(df_hg38.index) & set(df_hg19.index))
print(coh, len(same_genes), len(same_samps))
return df_hg38[same_samps].T[same_genes], df_hg19[same_samps].T[same_genes]
return df_hg38, df_hg19
### Raw Copy Number Values Analysis Code ###
def raw_value_comparison(coh, plot=False):
"""Return the average differences in raw copy number values between the
gene-level calls in hg19 and hg38 for each gene for a given tumor type
'coh.' If plot=True, plot the genes' differences in a histogram."""
# load in the data
df_38, df_19 = load_data(coh, thresh=False)
# compute average sample-by-sample differences for each gene
df_s = df_38 - df_19
avg_diff = {g:np.average(df_s[g]) for g in df_s.columns.get_level_values('Gene Symbol')}
# take note of which genes are altered more than our threshold of 4*std
results = []
std = np.std([avg_diff[x] for x in avg_diff])
for g in avg_diff:
if avg_diff[g] > 4 * std:
results.append([coh, 'Pos', g, avg_diff[g]])
elif avg_diff[g] < -4 * std:
results.append([coh, 'Neg', g, avg_diff[g]])
if plot:
plt.hist([avg_diff[x] for x in avg_diff], bins=1000)
plt.title(coh, fontsize=16)
plt.xlabel('Average CN Difference Between Alignments', fontsize=14)
plt.ylabel('Genes', fontsize=14)
sns.despine()
plt.savefig('./genehists/'+coh+'_genehist.pdf')
plt.savefig('./genehists/'+coh+'_genehist.png')
plt.clf()
return results
def sequential_cohort_test_raw_values(cohs, plot=False):
"""Sequentially compare raw gene-level calls for the given tumor types."""
c_results = []
for coh in cohs: # perform raw value comparison for each cohort
c_results += raw_value_comparison(coh, plot=plot)
# compile results together
df_r = pd.DataFrame(c_results, columns=['Cohort', 'Direction', 'Gene', 'Difference'])
gcount = Counter(df_r['Gene'])
pos_gcount = Counter(df_r[df_r['Direction']=='Pos']['Gene'])
neg_gcount = Counter(df_r[df_r['Direction']=='Neg']['Gene'])
df = pd.DataFrame([gcount[x] for x in gcount], index=gcount.keys(), columns=['Count'])
df['Count_pos'] = [pos_gcount[x] if x in pos_gcount else 0 for x in gcount]
df['Count_neg'] = [neg_gcount[x] if x in neg_gcount else 0 for x in gcount]
if plot: # write output
plt.plot(np.sort([gcount[x] for x in gcount])[::-1], 'b-')
plt.xlabel('Gene by Rank', fontsize=16)
plt.ylabel('Number of Occurences', fontsize=16)
sns.despine()
plt.savefig('GeneDevianceDropoff.pdf')
plt.savefig('GeneDevianceDropoff.png')
df_r.to_csv('./genehists/LargestDifferences.tsv', sep='\t', index=False)
df.to_csv('./genehists/LargestDifferenceGenes_ByCount.tsv', sep='\t', index=True)
### Thresholded Copy Number Values Analysis Code ###
def thresholded_value_comparison(df_hg38, df_hg19, metric='hamming'):
"""Compare -2,-1,0,1,2 gene-level thresholded calls. metric can be either
hamming (number of discrepancies in each gene) or manhattan (sum of
'distances' between each gene so a 1 to -1 change is 2). Returns a vector
of each gene's metric."""
out = []
for i, g in enumerate(df_hg38.columns):
if metric == 'hamming':
out.append(sum(df_hg19[g] != df_hg38[g])/len(df_hg19))
elif metric == 'manhattan':
out.append(sum(abs((df_hg19[g] - df_hg38[g]))))
return pd.DataFrame(out, index=df_hg38.columns)
def sequential_cohort_test_thresholded_values(cohs):
"""Compare thresholded gene-level calls for input tumor types."""
df_out = pd.DataFrame([])
for coh in cohs:
df_hg38, df_hg19 = load_data(coh, thresh=True)
df_results = thresholded_value_comparison(df_hg38, df_hg19, metric='hamming')
df_results.columns = [coh]
df_out = df_out.join(df_results, how='outer')
df_out.to_csv('../readout/DiscordantSampleFractions_perGene_perCohort_thresholdedCalls.tsv', sep='\t')
return df_out
def plot_fractionDisagreements_perCohort(cohs):
"""Visualize fraction of samples with disagreements in thresholded copy
number for each gene. Run sequential_cohort_test_thresholded_values()
before this function."""
# Read in data written by sequential_cohort_test_thresholded_values
df = sequential_cohort_test_thresholded_values(cohs)
df_box = pd.melt(df.reset_index(), id_vars='Gene Symbol').set_index('Gene Symbol')
df_box.columns = ['Tumor Type', 'Fraction of Samples with Disagreements']
dft = df.T
dft['med_degenerates'] = df.median(axis=0)
boxorder = dft.sort_values('med_degenerates', axis=0).index
# read in copy number burden data (requires aneuploidy RecurrentSCNA calls)
df_cn = pd.read_table('../../PanCanAneuploidy/bin/PANCAN_armonly_ASandpuritycalls_092817_xcellcalls.txt', index_col=0, usecols=[0,1,2,16])
coh_medians = [int(np.median(df_cn[df_cn['Type']==x]['RecurrentSCNA'].dropna())) for x in df_cn.Type.unique()]
df_med = pd.DataFrame(coh_medians, index=df_cn.Type.unique(), columns=['med'])
# plot it out
pal = sns.color_palette('Blues', max(df_med.med)-min(df_med.med)+1)
my_pal = {c: pal[df_med.at[c,'med']] for c in df_med.index}
g = sns.boxplot(x=df_box.columns[0], y=df_box.columns[1], data=df_box, \
order=boxorder, fliersize=1, palette=my_pal, linewidth=0.5)
newxticks = [x+' ('+str(df_med.loc[x]['med'])+')' for x in boxorder]
g.set_xticklabels(newxticks, rotation=90)
plt.ylabel('Fraction with Disagreements', fontsize=12)
sns.despine()
plt.gcf().set_size_inches((8,3))
plt.savefig('2_thresholdedCN_boxplot.pdf', bbox_inches='tight')
plt.savefig('2_thresholdedCN_boxplot.png', bbox_inches='tight')
### Significantly Altered Focal Peaks Analysis Code ###
def peakgene_overlaps(combos, same_genes, normalize=False):
"""Count the number of genes that overlap when examing the hg19 & hg38
GISTIC runs' focal peaks."""
venn_numbers, gsu, gsi = [], [], []
for coh, ad in combos:
print(coh)
# put all significant genes in a list
fnames = ['../hg19_gistic/'+coh+ad+'genes.conf_99.txt', '../hg38_gistic/'+coh+ad+'genes.txt']
df38 = pd.read_table(fnames[0], index_col=0).drop(['q value','residual q value','wide peak boundaries'])
df19 = pd.read_table(fnames[1], index_col=0).drop(['q value','residual q value','wide peak boundaries'])
g_38 = set([x for col in df38.columns for x in df38[col].dropna()]) & same_genes
g_19 = set([x for col in df19.columns for x in df19[col].dropna()]) & same_genes
intersect, union = g_38 & g_19, g_38 | g_19
gsu.append(union)
gsi.append(intersect)
if normalize:
venn_numbers.append([len(g_19-intersect)/len(union),len(intersect)/len(union), len(g_38-intersect)/len(union)])
else:
venn_numbers.append([len(g_19-intersect),len(intersect), len(g_38-intersect)])
index = [x[0]+'_'+x[1][1:-1] for x in combos]
return pd.DataFrame(venn_numbers, index=index, columns=['hg19 only','Intersection','hg38 only'])
def plot_peakgene_overlaps(combos, same_genes, write=False):
"""Visualize the results of peakgene_overlaps function in bargraph form."""
df_out = peakgene_overlaps(combos, same_genes, normalize=False)
df_d, df_a = df_out[df_out.index.str.split('_').str[-1] == 'del'], \
df_out[df_out.index.str.split('_').str[-1] == 'amp']
for x in zip((df_d, df_a), ('Deletion Peak Memberships', 'Amplification Peak Memberships')):
x[0].index = x[0].index.str.split('_').str[0]
x[0].plot.bar(stacked=True, color=['#af8dc3', '#f7f7f7', '#7fbf7b'], linewidth=1, edgecolor='k')
plt.gca().set_xticklabels(x[0].index, rotation=90)
plt.title(x[1], fontsize=18)
plt.gcf().set_size_inches(10,8)
sns.despine()
plt.savefig(x[1].split(' ')[0]+'_peakMemberships.pdf', bbox_inches='tight')
plt.savefig(x[1].split(' ')[0]+'_peakMemberships.png', bbox_inches='tight')
plt.clf()
if write:
df_out.to_csv('VennStats_focalpeaks.tsv', sep='\t')
### Conservation of Significant Copy Number Driver Events Analysis Code ###
def documented_driver_differences():
"""Scan and analyze manually currated DocumentedDriverDifferences.txt file.
Returns: 1) Number of driver genes called in both hg19 & hg38 GISTIC peaks
2) Number of drivers missing in hg38 peaks that appeared in hg19 peaks and
3) Number of drivers present in hg38 peaks but absent from hg19 peaks."""
# read in table of documented driver differences
# (this table needs a manual curation to be generated)
df = pd.read_table('../DocumentedDriverDifferences.txt', index_col=0)
# process entries to have just yes/no calls (without parens & brackets)
df['hg19?'] = df['present in hg19?'].str.strip(')').str.strip('(').str.strip('[').str.strip(']')
df['hg38?'] = df['present in hg38?'].str.strip(')').str.strip('(').str.strip('[').str.strip(']')
# number of documented drivers that match in hg19 & hg38
matches = sum(df['hg19?'] == df['hg38?'])
# number of documented drivers that are in hg19 but not hg38 & vice versa
lostdrivers = len(df[(df['hg19?'] == 'yes') & (df['hg38?'] == 'no')])
recovereddrivers = len(df[(df['hg19?'] == 'no') & (df['hg38?'] == 'yes')])
# Return in order
return matches, lostdrivers, recovereddrivers
# set up the tumor types we want to analyze
cohs = ['ACC','BLCA','CESC','CHOL','COAD','DLBC','ESCA','GBM', 'HNSC','KICH',\
'KIRC','KIRP','LAML','LGG','LIHC','LUAD','LUSC','OV','PAAD','PCPG',\
'PRAD','READ','SARC','SKCM','STAD','TGCT','THCA','THYM','UCEC','UCS','UVM']
ads = ['/amp_', '/del_']
combos = [(c, a) for c in cohs for a in ads]
# grab list of genes present in both hg19 & hg38
df_hg38 = pd.read_table('../hg38_gistic/CHOL/all_thresholded.by_genes.txt', index_col=0, usecols=[0,1])
df_hg19 = pd.read_table('../hg19_gistic/CHOL/all_thresholded.by_genes.txt', index_col=0, usecols=[0,1])
same_genes = set(df_hg38.index) & set(df_hg19.index)
# action lines -- run the analysis
sequential_cohort_test_raw_values(cohs, plot=True)
plot_fractionDisagreements_perCohort(cohs)
plot_peakgene_overlaps(combos, same_genes, write=True)
print(documented_driver_differences())
| scripts/AnalysisCode.py | 11,800 | Scan and analyze manually currated DocumentedDriverDifferences.txt file.
Returns: 1) Number of driver genes called in both hg19 & hg38 GISTIC peaks
2) Number of drivers missing in hg38 peaks that appeared in hg19 peaks and
3) Number of drivers present in hg38 peaks but absent from hg19 peaks.
Load in the hg38 and hg19 gistic thresholded data. Assume GISTIC runs
for each tumor type live in a parent directory (hg38_gistic or hg19_gistic)
one level up from this script.
Count the number of genes that overlap when examing the hg19 & hg38
GISTIC runs' focal peaks.
Visualize fraction of samples with disagreements in thresholded copy
number for each gene. Run sequential_cohort_test_thresholded_values()
before this function.
Visualize the results of peakgene_overlaps function in bargraph form.
Return the average differences in raw copy number values between the
gene-level calls in hg19 and hg38 for each gene for a given tumor type
'coh.' If plot=True, plot the genes' differences in a histogram.
Sequentially compare raw gene-level calls for the given tumor types.
Compare thresholded gene-level calls for input tumor types.
Compare -2,-1,0,1,2 gene-level thresholded calls. metric can be either
hamming (number of discrepancies in each gene) or manhattan (sum of
'distances' between each gene so a 1 to -1 change is 2). Returns a vector
of each gene's metric.
Created on Thu Feb 22 17:28:54 2018
@author: galengao
This is the original analysis code as it exists in the environment where it was writen and initially run.
Portions and modifications of this script constitute all other .py scripts in this directory.
!/usr/bin/env python3 -*- coding: utf-8 -*- Helper Function to Load in the Data Raw Copy Number Values Analysis Code load in the data compute average sample-by-sample differences for each gene take note of which genes are altered more than our threshold of 4*std perform raw value comparison for each cohort compile results together write output Thresholded Copy Number Values Analysis Code Read in data written by sequential_cohort_test_thresholded_values read in copy number burden data (requires aneuploidy RecurrentSCNA calls) plot it out Significantly Altered Focal Peaks Analysis Code put all significant genes in a list Conservation of Significant Copy Number Driver Events Analysis Code read in table of documented driver differences (this table needs a manual curation to be generated) process entries to have just yes/no calls (without parens & brackets) number of documented drivers that match in hg19 & hg38 number of documented drivers that are in hg19 but not hg38 & vice versa Return in order set up the tumor types we want to analyze grab list of genes present in both hg19 & hg38 action lines -- run the analysis | 2,763 | en | 0.873616 |
from colored import fg, stylize, attr
import requests as rq
from yaspin import yaspin
version = "0.4beta"
greeting = stylize("""
╭────────────────────────────────────────────────────────────────╮
│ Добро пожаловать в │
│ _____ _ _ ____ _ ___ │
│ | ____| |(_)_ _ _ __ / ___| | |_ _| │
│ | _| | || | | | | '__| | | | | | │
│ | |___| || | |_| | | | |___| |___ | | │
│ |_____|_|/ |\__,_|_| \____|_____|___| │
│ |__/ │
│ вер. 0.6.1beta │
╰────────────────────────────────────────────────────────────────╯
""", fg("magenta"), attr("bold"))
API_URL = "https://markbook.eljur.ru/apiv3/"
DEVKEY = "9235e26e80ac2c509c48fe62db23642c"
VENDOR = "markbook"
lessons = []
time_style = fg("green") + attr("bold")
room_style = fg("yellow") + attr("bold")
day_of_week_style = fg("orange_1") + attr("bold")
non_academ_style = fg("cyan")
separator_style = fg("medium_purple_1") + attr("bold")
separator = stylize("::", separator_style)
# yakuri354 - Для обозначения времени окон
# butukay - Я бы назвал это костылём } < Немогу удалить
# yakuri354 ~> ну я согласен, но а как ещё окна отображать
lessons_time = {
"1": "08:30:00_09:10:00",
"2": "09:30:00_10:10:00",
"3": "10:20:00_11:00:00",
"4": "11:10:00_11:50:00",
"5": "12:00:00_12:40:00",
"6": "13:30:00_14:10:00",
"7": "14:20:00_15:00:00",
"8": "15:10:00_15:50:00",
"9": "16:20:00_17:00:00",
"10": "17:10:00_17:50:00",
"11": "18:00:00_18:40:00"
}
# Объект ученика
class Student:
def __init__(self, token=None, login=None):
self.token = token
self.login = login
rules_params = {
"DEVKEY": DEVKEY,
"vendor": VENDOR,
"out_format": "json",
"auth_token": self.token,
}
user_info = rq.get(API_URL + "getrules", params=rules_params).json()["response"]
if user_info["error"] is not None or "":
print("Ошибка при получении информации об ученике: " + user_info["error"])
raise LookupError(user_info["error"])
self.student_id = user_info["result"]["name"]
self.name = user_info["result"]["relations"]["students"][self.student_id]["title"]
self.grade = user_info["result"]["relations"]["students"][self.student_id]["class"]
self.city = user_info["result"]["city"]
self.email = user_info["result"]["email"]
self.fullname = user_info["result"]["title"]
self.gender = user_info["result"]["gender"]
self.school = user_info["result"]["relations"]["schools"][0]["title"]
def __str__(self):
text = ""
text += "\nИмя: " + self.name
text += "\nКласс: " + str(self.grade)
text += "\nГород: " + self.city
text += "\nШкола: " + self.school
text += "\nПол: " + "Мужской" if self.gender == "male" else "Женский"
text += "\nЛогин: " + self.login
text += "\nЭл. Почта: " + self.email
return text
def get_schedule(self, date=None, silent=False):
load_spinner = None
if not silent:
load_spinner = yaspin(text="Загрузка...")
load_spinner.text = "[Получение дневника из журнала...]"
if date is None:
date = "20191118-20191124"
diary = rq.get(
API_URL + "getschedule",
params={
"devkey": DEVKEY,
"vendor": VENDOR,
"out_format": "json",
"student": self.student_id,
"auth_token": self.token,
"days": date,
"rings": "true"
}
).json()['response']
if diary["error"] is not None:
if not silent:
load_spinner.text = ""
load_spinner.fail(stylize("Ошибка получения расписания: " + diary["error"], fg("red")))
raise LookupError(diary["error"])
schedule = diary['result']['students'][str(self.student_id)]
if not silent:
load_spinner.text = ""
load_spinner.ok(stylize("[Расписание успешно получено!] ", fg("green")))
return schedule
# Получение информации об ученике через запрос getrules
def info(self, extended=False):
if not extended:
return self.student_id, self.name, self.grade
else:
return {
"student_id": self.student_id,
"fullname": self.name,
"grade": self.grade,
"city": self.city,
"email": self.email,
"gender": self.gender,
"school": self.school
}
| eljur.py | 5,600 | yakuri354 - Для обозначения времени окон butukay - Я бы назвал это костылём } < Немогу удалить yakuri354 ~> ну я согласен, но а как ещё окна отображать Объект ученика Получение информации об ученике через запрос getrules | 247 | ru | 0.994596 |
from datetime import date
from dateutil.relativedelta import relativedelta
from django.contrib.auth.models import Group
from django.contrib.gis.geos import GEOSGeometry
from django.core.mail import send_mail
from ..api.get_table import *
from ..utils.get_data import has_access, is_int, is_float
from ..water_network.models import ElementType
def log_element(elem, request):
transaction = Transaction(user=request.user)
transaction.save()
elem.save()
elem.log_add(transaction)
def add_consumer_element(request):
first_name = request.POST.get("firstname", None)
last_name = request.POST.get("lastname", None)
gender = request.POST.get("gender", None)
address = request.POST.get("address", None)
sub = request.POST.get("subconsumer", None)
phone = request.POST.get("phone", None)
outlet_id = request.POST.get("mainOutlet", None)
if sub is None or not is_int(sub):
return HttpResponse("Impossible, certains champs devraient être des entiers", status=400)
outlet = Element.objects.filter(id=outlet_id).first()
if outlet is None:
return HttpResponse("La sortie d'eau spécifiée n'a pas été trouvée, "
"impossible d'ajouter le consommateur", status=400)
if not has_access(outlet, request):
return HttpResponse("Vous n'avez pas les droits sur cet élément de réseau", status=403)
consumer = Consumer(last_name=last_name, first_name=first_name, gender=gender, location=address,
phone_number=phone, household_size=sub, water_outlet=outlet) # Creation
log_element(consumer, request)
if outlet.type != ElementType.INDIVIDUAL.name:
price, duration = outlet.get_price_and_duration()
creation = date.today()
expiration = creation + relativedelta(months=duration)
invoice = Invoice(consumer=consumer, water_outlet=outlet, amount=price,
creation=creation, expiration=expiration)
invoice.save()
json_object = {
'data': consumer.descript(),
'type': 'add',
'table': 'consumer'
}
return HttpResponse(json.dumps(json_object), status=200)
def add_network_element(request):
if request.user.profile.zone is None:
return HttpResponse("Vous n'êtes pas connecté en tant que gestionnaire de zone", status=403)
type = request.POST.get("type", None).upper()
loc = request.POST.get("localization", None)
state = request.POST.get("state", None).upper()
name = ElementType[type].value + " " + loc
zone = Zone.objects.filter(name=request.user.profile.zone).first()
if zone is None:
return HttpResponse("Impossible de trouver la zone gérée pas l'utilisateur", status=400)
element = Element(name=name, type=type, status=state, location=loc, zone=zone) # Creation
log_element(element, request)
json_object = {
'data': element.network_descript(),
'type': 'add',
'table': 'water_element'
}
return HttpResponse(json.dumps(json_object), status=200)
def add_report_element(request):
values = json.loads(request.body.decode("utf-8"))
for index, elem in enumerate(values["selectedOutlets"]):
outlet = Element.objects.filter(id=elem).first()
if outlet is None:
return HttpResponse("La sortie d'eau concernée par ce rapport n'a pas été trouvée", status=400)
if not has_access(outlet, request):
return HttpResponse("Vous n'avez pas les droits sur cet élément de réseau", status=403)
active = values["isActive"]
if active:
hour_activity = values["inputHours"]
day_activity = values["inputDays"]
if not is_int(hour_activity) or not is_int(day_activity):
return HttpResponse("Impossible, certains champs devraient être des entiers", status=400)
data = values["details"][index]["perCubic"] != "none"
if data:
meters_distr = values["details"][index]["cubic"]
value_meter = values["details"][index]["perCubic"]
recette = values["details"][index]["bill"]
if not is_float(meters_distr) or not is_float(value_meter) or not is_float(recette):
return HttpResponse("Impossible, certains champs devraient être des entiers", status=400)
report_line = Report(water_outlet=outlet, was_active=active, has_data=data,
hours_active=hour_activity, days_active=day_activity,
quantity_distributed=meters_distr, price=value_meter, recette=recette)
if outlet.type == ElementType.INDIVIDUAL.name: # Create an invoice for individual outlets
consumer = Consumer.objects.filter(water_outlet=outlet).first()
if consumer is not None:
amount = int(meters_distr) * int(value_meter)
creation = date.today()
expiration = creation + relativedelta(months=1)
invoice = Invoice(consumer=consumer, water_outlet=outlet, creation=creation,
expiration=expiration, amount=amount)
invoice.save()
else:
report_line = Report(water_outlet=outlet, was_active=active, has_data=data,
hours_active=hour_activity, days_active=day_activity)
if outlet.type == ElementType.INDIVIDUAL.name:
consumer = Consumer.objects.filter(water_outlet=outlet).first()
if consumer is not None:
amount = outlet.zone.indiv_base_price
creation = date.today()
expiration = creation + relativedelta(months=1)
invoice = Invoice(consumer=consumer, water_outlet=outlet, creation=creation,
expiration=expiration, amount=amount)
invoice.save()
else:
report_line = Report(water_outlet=outlet, was_active=active)
log_element(report_line, request)
return HttpResponse(status=200)
def add_zone_element(request):
if request.user.profile.zone is None:
return HttpResponse("Vous n'êtes pas connecté en tant que gestionnaire de zone", status=403)
name = request.POST.get("name", None)
fountain_price = request.POST.get("fountain-price", 0)
fountain_duration = request.POST.get("fountain-duration", 1)
kiosk_price = request.POST.get("kiosk-price", 0)
kiosk_duration = request.POST.get("kiosk-duration", 1)
indiv_base_price = request.POST.get("indiv-price", 0)
if not is_int(fountain_price) or not is_int(fountain_duration) \
or not is_int(kiosk_price) or not is_int(kiosk_duration) \
or not is_int(indiv_base_price):
return HttpResponse("Impossible, certains champs devraient être des entiers", status=400)
if Zone.objects.filter(name=name).first() is not None:
return HttpResponse("Une zone avec ce nom existe déjà dans l'application, "
"veuillez en choisir un autre", status=400)
superzone = Zone.objects.filter(name=request.user.profile.zone).first()
if superzone is None:
return HttpResponse("Impossible de trouver la zone gérée pas l'utilisateur", status=400)
zone = Zone(name=name, superzone=superzone, subzones=[name],
fountain_price=fountain_price, fountain_duration=fountain_duration,
kiosk_price=kiosk_price, kiosk_duration=kiosk_duration,
indiv_base_price=indiv_base_price)
while superzone is not None:
superzone.subzones.append(name)
superzone.save()
superzone = superzone.superzone
log_element(zone, request)
json_object = {
'data': zone.descript(),
'type': 'add',
'table': 'zone'
}
return HttpResponse(json.dumps(json_object), status=200)
def add_collaborator_element(request):
if request.user.profile.zone is None:
return HttpResponse("Vous n'êtes pas connecté en tant que gestionnaire de zone", status=403)
first_name = request.POST.get("firstname", None)
last_name = request.POST.get("lastname", None)
username = request.POST.get("id", None)
password = User.objects.make_random_password() # New random password
email = request.POST.get("email", None)
type = request.POST.get("type", None)
phone = request.POST.get("phone", None)
if User.objects.filter(username=username).first() is not None:
return HttpResponse("Cet utilisateur existe déjà ! Vérifier que son identifiant est bien unique", status=400)
user = User.objects.create_user(username=username, email=email, password=password,
first_name=first_name, last_name=last_name)
user.profile.phone_number = phone
if type == "fountain-manager":
outlet_ids = request.POST.get("outlets", None).split(',')
if len(outlet_ids) < 1:
user.delete()
return HttpResponse("Vous n'avez pas choisi de fontaine a attribuer !", status=400)
outlets = Element.objects.filter(id__in=outlet_ids) if len(outlet_ids) > 1 else \
Element.objects.filter(id=outlet_ids[0])
if len(outlets) < 1:
user.delete()
return HttpResponse("Impossible d'attribuer cette fontaine au gestionnaire", status=400)
for outlet in outlets:
if not has_access(outlet, request):
user.delete()
return HttpResponse("Vous n'avez pas les droits sur cet élément de réseau", status=403)
outlet.manager_names = outlet.get_managers()
outlet.save()
user.profile.outlets.append(outlet.id)
my_group = Group.objects.get(name='Gestionnaire de fontaine')
my_group.user_set.add(user)
tab = [user.username, user.last_name, user.first_name, user.profile.get_phone_number(),
user.email, "Gestionnaire de fontaine", user.profile.get_zone(), user.profile.outlets]
elif type == "zone-manager":
zone_id = request.POST.get("zone", None)
zone = Zone.objects.filter(id=zone_id).first()
if zone is None:
user.delete()
return HttpResponse("Impossible d'attribuer cette zone au gestionnaire", status=400)
if zone.name not in request.user.profile.zone.subzones:
user.delete()
return HttpResponse("Vous n'avez pas les droits sur cette zone", status=403)
user.profile.zone = zone
my_group = Group.objects.get(name='Gestionnaire de zone')
my_group.user_set.add(user)
tab = [user.username, user.last_name, user.first_name, user.profile.get_phone_number(),
user.email, "Gestionnaire de zone", user.profile.zone.name, user.profile.outlets]
else:
user.delete()
return HttpResponse("Impossible d'ajouter l'utilisateur", status=400)
send_mail('Bienvenue sur haitiwater !',
'Bienvenue sur haitiwater. Voici votre mot de passe autogénéré : ' + password + '\n' +
'Veuillez vous connecter pour le modifier.\n' +
'Pour rappel, votre identifiant est : ' + username,
'', [email], fail_silently=False)
log_element(user.profile, request)
json_object = {
'data': tab,
'type': 'add',
'table': 'manager'
}
return HttpResponse(json.dumps(json_object), status=200)
def add_ticket_element(request):
outlet_id = request.POST.get("id_outlet", None)
type = request.POST.get("type", None).upper()
comment = request.POST.get("comment", None)
urgency = request.POST.get('urgency', None).upper()
image = request.FILES.get("picture", None)
outlet = Element.objects.filter(id=outlet_id).first()
if outlet is None:
return HttpResponse("Impossible de trouver la sortie d'eau correspondante au ticket", status=400)
if not has_access(outlet, request):
return HttpResponse("Vous n'avez pas les droits sur cet élément de réseau", status=403)
if image:
import uuid
extension = image.name.split(".")
filename = str(uuid.uuid4())
image.name = filename + "." + extension[1]
ticket = Ticket(water_outlet=outlet, type=type, comment=comment, urgency=urgency, image=image)
log_element(ticket, request)
json_object = {
'data': ticket.descript(),
'type': 'add',
'table': 'ticket'
}
return HttpResponse(json.dumps(json_object), status=200)
def add_payment_element(request):
id_consumer = request.POST.get("id_consumer", None)
amount = request.POST.get("amount", None)
if not is_float(amount):
return HttpResponse("Impossible, certains champs devraient être des entiers", status=400)
consumer = Consumer.objects.filter(id=id_consumer).first()
if not consumer:
return HttpResponse("Impossible de trouver l'utilisateur", status=400)
elif not has_access(consumer.water_outlet, request):
return HttpResponse("Vous n'avez pas les droits sur ce consommateur", status=403)
outlet = consumer.water_outlet
payment = Payment(consumer=consumer, water_outlet=outlet, amount=amount)
log_element(payment, request)
json_object = {
'data': payment.descript(),
'type': 'add',
'table': 'payment',
'consumer': payment.infos()["Identifiant consommateur"]
}
return HttpResponse(json.dumps(json_object), status=200)
def add_location_element(request, elem):
body = request.body.decode('utf-8')
json_value = json.loads(body)
poly = GEOSGeometry(str(json_value["geometry"]))
lon, lat = 0, 0
if len(poly.coord_seq) == 1:
lon, lat = poly[0], poly[1]
loc = Location(elem=elem, lat=lat, lon=lon, poly=poly, json_representation=body)
log_element(loc, request)
json_object = {
'data': [loc.elem.name, loc.json_representation],
'type': 'add',
'id': loc.elem.id,
'table': 'water_element_details'
}
return HttpResponse(json.dumps(json_object), status=200)
| code/haitiwater/apps/api/add_table.py | 14,381 | Creation Creation Create an invoice for individual outlets New random password | 78 | en | 0.766852 |
# Author: Luke Bloy <bloyl@chop.edu>
#
# License: BSD (3-clause)
import numpy as np
import os.path as op
import datetime
import calendar
from .utils import _load_mne_locs
from ...utils import logger, warn
from ..utils import _read_segments_file
from ..base import BaseRaw
from ..meas_info import _empty_info
from ..constants import FIFF
def read_raw_artemis123(input_fname, preload=False, verbose=None):
"""Read Artemis123 data as raw object.
Parameters
----------
input_fname : str
Path to the data file (extension ``.bin``). The header file with the
same file name stem and an extension ``.txt`` is expected to be found
in the same directory.
preload : bool or str (default False)
Preload data into memory for data manipulation and faster indexing.
If True, the data will be preloaded into memory (fast, requires
large amount of memory). If preload is a string, preload is the
file name of a memory-mapped file which is used to store the data
on the hard drive (slower, requires less memory).
verbose : bool, str, int, or None
If not None, override default verbose level (see mne.verbose).
Returns
-------
raw : Instance of Raw
A Raw object containing the data.
See Also
--------
mne.io.Raw : Documentation of attribute and methods.
"""
return RawArtemis123(input_fname, preload=preload, verbose=verbose)
def _get_artemis123_info(fname):
"""Function for extracting info from artemis123 header files."""
fname = op.splitext(op.abspath(fname))[0]
header = fname + '.txt'
logger.info('Reading header...')
# key names for artemis channel info...
chan_keys = ['name', 'scaling', 'FLL_Gain', 'FLL_Mode', 'FLL_HighPass',
'FLL_AutoReset', 'FLL_ResetLock']
header_info = dict()
header_info['filter_hist'] = []
header_info['comments'] = ''
header_info['channels'] = []
with open(header, 'r') as fid:
# section flag
# 0 - None
# 1 - main header
# 2 - channel header
# 3 - comments
# 4 - length
# 5 - filtering History
sectionFlag = 0
for line in fid:
# skip emptylines or header line for channel info
if ((not line.strip()) or
(sectionFlag == 2 and line.startswith('DAQ Map'))):
continue
# set sectionFlag
if line.startswith('<end'):
sectionFlag = 0
elif line.startswith("<start main header>"):
sectionFlag = 1
elif line.startswith("<start per channel header>"):
sectionFlag = 2
elif line.startswith("<start comments>"):
sectionFlag = 3
elif line.startswith("<start length>"):
sectionFlag = 4
elif line.startswith("<start filtering history>"):
sectionFlag = 5
else:
# parse header info lines
# part of main header - lines are name value pairs
if sectionFlag == 1:
values = line.strip().split('\t')
if len(values) == 1:
values.append('')
header_info[values[0]] = values[1]
# part of channel header - lines are Channel Info
elif sectionFlag == 2:
values = line.strip().split('\t')
if len(values) != 7:
raise IOError('Error parsing line \n\t:%s\n' % line +
'from file %s' % header)
tmp = dict()
for k, v in zip(chan_keys, values):
tmp[k] = v
header_info['channels'].append(tmp)
elif sectionFlag == 3:
header_info['comments'] = '%s%s' \
% (header_info['comments'], line.strip())
elif sectionFlag == 4:
header_info['num_samples'] = int(line.strip())
elif sectionFlag == 5:
header_info['filter_hist'].append(line.strip())
for k in ['Temporal Filter Active?', 'Decimation Active?',
'Spatial Filter Active?']:
if(header_info[k] != 'FALSE'):
warn('%s - set to but is not supported' % k)
if(header_info['filter_hist']):
warn('Non-Empty Filter histroy found, BUT is not supported' % k)
# build mne info struct
info = _empty_info(float(header_info['Rate Out']))
# Attempt to get time/date from fname
# Artemis123 files saved from the scanner observe the following
# naming convention 'Artemis_Data_YYYY-MM-DD-HHh-MMm_[chosen by user].bin'
try:
date = datetime.datetime.strptime(
op.basename(fname).split('_')[2], '%Y-%m-%d-%Hh-%Mm')
meas_date = calendar.timegm(date.utctimetuple())
except Exception:
meas_date = None
# build subject info
subject_info = {'id': header_info['Subject ID']}
# build description
desc = ''
for k in ['Purpose', 'Notes']:
desc += '{} : {}\n'.format(k, header_info[k])
desc += 'Comments : {}'.format(header_info['comments'])
info = _empty_info(float(header_info['Rate Out']))
info.update({'filename': fname, 'meas_date': meas_date,
'description': desc, 'buffer_size_sec': 1.,
'subject_info': subject_info,
'proj_name': header_info['Project Name']})
# Channel Names by type
ref_mag_names = ['REF_001', 'REF_002', 'REF_003',
'REF_004', 'REF_005', 'REF_006']
ref_grad_names = ['REF_007', 'REF_008', 'REF_009',
'REF_010', 'REF_011', 'REF_012']
# load mne loc dictionary
loc_dict = _load_mne_locs()
info['chs'] = []
info['bads'] = []
for i, chan in enumerate(header_info['channels']):
# build chs struct
t = {'cal': float(chan['scaling']), 'ch_name': chan['name'],
'logno': i + 1, 'scanno': i + 1, 'range': 1.0,
'unit_mul': FIFF.FIFF_UNITM_NONE,
'coord_frame': FIFF.FIFFV_COORD_DEVICE}
t['loc'] = loc_dict.get(chan['name'], np.zeros(12))
if (chan['name'].startswith('MEG')):
t['coil_type'] = FIFF.FIFFV_COIL_ARTEMIS123_GRAD
t['kind'] = FIFF.FIFFV_MEG_CH
# While gradiometer units are T/m, the meg sensors referred to as
# gradiometers report the field difference between 2 pick-up coils.
# Therefore the units of the measurements should be T
# *AND* the baseline (difference between pickup coils)
# should not be used in leadfield / forwardfield computations.
t['unit'] = FIFF.FIFF_UNIT_T
t['unit_mul'] = FIFF.FIFF_UNITM_F
# 3 axis referance magnetometers
elif (chan['name'] in ref_mag_names):
t['coil_type'] = FIFF.FIFFV_COIL_ARTEMIS123_REF_MAG
t['kind'] = FIFF.FIFFV_REF_MEG_CH
t['unit'] = FIFF.FIFF_UNIT_T
t['unit_mul'] = FIFF.FIFF_UNITM_F
# reference gradiometers
elif (chan['name'] in ref_grad_names):
t['coil_type'] = FIFF.FIFFV_COIL_ARTEMIS123_REF_GRAD
t['kind'] = FIFF.FIFFV_REF_MEG_CH
# While gradiometer units are T/m, the meg sensors referred to as
# gradiometers report the field difference between 2 pick-up coils.
# Therefore the units of the measurements should be T
# *AND* the baseline (difference between pickup coils)
# should not be used in leadfield / forwardfield computations.
t['unit'] = FIFF.FIFF_UNIT_T
t['unit_mul'] = FIFF.FIFF_UNITM_F
# other reference channels are unplugged and should be ignored.
elif (chan['name'].startswith('REF')):
t['coil_type'] = FIFF.FIFFV_COIL_NONE
t['kind'] = FIFF.FIFFV_MISC_CH
t['unit'] = FIFF.FIFF_UNIT_V
info['bads'].append(t['ch_name'])
elif (chan['name'].startswith(('AUX', 'TRG', 'MIO'))):
t['coil_type'] = FIFF.FIFFV_COIL_NONE
t['unit'] = FIFF.FIFF_UNIT_V
if (chan['name'].startswith('TRG')):
t['kind'] = FIFF.FIFFV_STIM_CH
else:
t['kind'] = FIFF.FIFFV_MISC_CH
else:
raise ValueError('Channel does not match expected' +
' channel Types:"%s"' % chan['name'])
# incorporate mulitplier (unit_mul) into calibration
t['cal'] *= 10 ** t['unit_mul']
t['unit_mul'] = FIFF.FIFF_UNITM_NONE
# append this channel to the info
info['chs'].append(t)
if (chan['FLL_ResetLock'] == 'TRUE'):
info['bads'].append(t['ch_name'])
# reduce info['bads'] to unique set
info['bads'] = list(set(info['bads']))
info._update_redundant()
return info, header_info
class RawArtemis123(BaseRaw):
"""Raw object from Artemis123 file.
Parameters
----------
input_fname : str
Path to the Artemis123 data file (ending in ``'.bin'``).
preload : bool or str (default False)
Preload data into memory for data manipulation and faster indexing.
If True, the data will be preloaded into memory (fast, requires
large amount of memory). If preload is a string, preload is the
file name of a memory-mapped file which is used to store the data
on the hard drive (slower, requires less memory).
verbose : bool, str, int, or None
If not None, override default verbose level (see mne.verbose).
See Also
--------
mne.io.Raw : Documentation of attribute and methods.
"""
def __init__(self, input_fname, preload=False, verbose=None): # noqa: D102
info, header_info = _get_artemis123_info(input_fname)
last_samps = [header_info['num_samples'] - 1]
super(RawArtemis123, self).__init__(
info, preload, filenames=[input_fname], raw_extras=[header_info],
last_samps=last_samps, orig_format=np.float32,
verbose=verbose)
def _read_segment_file(self, data, idx, fi, start, stop, cals, mult):
"""Read a chunk of raw data."""
_read_segments_file(self, data, idx, fi, start,
stop, cals, mult, dtype='>f4')
| mne/io/artemis123/artemis123.py | 10,473 | Raw object from Artemis123 file.
Parameters
----------
input_fname : str
Path to the Artemis123 data file (ending in ``'.bin'``).
preload : bool or str (default False)
Preload data into memory for data manipulation and faster indexing.
If True, the data will be preloaded into memory (fast, requires
large amount of memory). If preload is a string, preload is the
file name of a memory-mapped file which is used to store the data
on the hard drive (slower, requires less memory).
verbose : bool, str, int, or None
If not None, override default verbose level (see mne.verbose).
See Also
--------
mne.io.Raw : Documentation of attribute and methods.
Function for extracting info from artemis123 header files.
Read a chunk of raw data.
Read Artemis123 data as raw object.
Parameters
----------
input_fname : str
Path to the data file (extension ``.bin``). The header file with the
same file name stem and an extension ``.txt`` is expected to be found
in the same directory.
preload : bool or str (default False)
Preload data into memory for data manipulation and faster indexing.
If True, the data will be preloaded into memory (fast, requires
large amount of memory). If preload is a string, preload is the
file name of a memory-mapped file which is used to store the data
on the hard drive (slower, requires less memory).
verbose : bool, str, int, or None
If not None, override default verbose level (see mne.verbose).
Returns
-------
raw : Instance of Raw
A Raw object containing the data.
See Also
--------
mne.io.Raw : Documentation of attribute and methods.
Author: Luke Bloy <bloyl@chop.edu> License: BSD (3-clause) key names for artemis channel info... section flag 0 - None 1 - main header 2 - channel header 3 - comments 4 - length 5 - filtering History skip emptylines or header line for channel info set sectionFlag parse header info lines part of main header - lines are name value pairs part of channel header - lines are Channel Info build mne info struct Attempt to get time/date from fname Artemis123 files saved from the scanner observe the following naming convention 'Artemis_Data_YYYY-MM-DD-HHh-MMm_[chosen by user].bin' build subject info build description Channel Names by type load mne loc dictionary build chs struct While gradiometer units are T/m, the meg sensors referred to as gradiometers report the field difference between 2 pick-up coils. Therefore the units of the measurements should be T *AND* the baseline (difference between pickup coils) should not be used in leadfield / forwardfield computations. 3 axis referance magnetometers reference gradiometers While gradiometer units are T/m, the meg sensors referred to as gradiometers report the field difference between 2 pick-up coils. Therefore the units of the measurements should be T *AND* the baseline (difference between pickup coils) should not be used in leadfield / forwardfield computations. other reference channels are unplugged and should be ignored. incorporate mulitplier (unit_mul) into calibration append this channel to the info reduce info['bads'] to unique set noqa: D102 | 3,149 | en | 0.764032 |
"""The DAS response.
The DAS response describes the attributes associated with a dataset and its
variables. Together with the DDS the DAS response completely describes the
metadata of a dataset, allowing it to be introspected and data to be
downloaded.
"""
try:
from functools import singledispatch
except ImportError:
from singledispatch import singledispatch
from collections import Iterable
from six import string_types, integer_types
from six.moves import map
import numpy as np
from ..model import (DatasetType, BaseType,
StructureType, SequenceType,
GridType)
from ..lib import encode, quote, __version__, NUMPY_TO_DAP2_TYPEMAP
from .lib import BaseResponse
INDENT = ' ' * 4
class DASResponse(BaseResponse):
"""The DAS response."""
__version__ = __version__
def __init__(self, dataset):
BaseResponse.__init__(self, dataset)
self.headers.extend([
('Content-description', 'dods_das'),
('Content-type', 'text/plain; charset=ascii'),
])
def __iter__(self):
for line in das(self.dataset):
try:
yield line.encode('ascii')
except UnicodeDecodeError:
yield line.encode('UTF-8')
@singledispatch
def das(var, level=0):
"""Single dispatcher that generates the DAS response."""
raise StopIteration
@das.register(DatasetType)
def _datasettype(var, level=0):
yield '{indent}Attributes {{\n'.format(indent=level*INDENT)
for attr in sorted(var.attributes.keys()):
values = var.attributes[attr]
for line in build_attributes(attr, values, level+1):
yield line
for child in var.children():
for line in das(child, level=level+1):
yield line
yield '{indent}}}\n'.format(indent=level*INDENT)
@das.register(StructureType)
@das.register(SequenceType)
def _structuretype(var, level=0):
yield '{indent}{name} {{\n'.format(indent=level*INDENT, name=var.name)
for attr in sorted(var.attributes.keys()):
values = var.attributes[attr]
for line in build_attributes(attr, values, level+1):
yield line
for child in var.children():
for line in das(child, level=level+1):
yield line
yield '{indent}}}\n'.format(indent=level*INDENT)
@das.register(BaseType)
@das.register(GridType)
def _basetypegridtype(var, level=0):
yield '{indent}{name} {{\n'.format(indent=level*INDENT, name=var.name)
for attr in sorted(var.attributes.keys()):
values = var.attributes[attr]
if np.asarray(values).size > 0:
for line in build_attributes(attr, values, level+1):
yield line
yield '{indent}}}\n'.format(indent=level*INDENT)
def build_attributes(attr, values, level=0):
"""Recursive function to build the DAS."""
# check for metadata
if isinstance(values, dict):
yield '{indent}{attr} {{\n'.format(indent=(level)*INDENT, attr=attr)
for k, v in values.items():
for line in build_attributes(k, v, level+1):
yield line
yield '{indent}}}\n'.format(indent=(level)*INDENT)
else:
# get type
type = get_type(values)
# encode values
if (isinstance(values, string_types) or
not isinstance(values, Iterable) or
getattr(values, 'shape', None) == ()):
values = [encode(values)]
else:
values = map(encode, values)
yield '{indent}{type} {attr} {values};\n'.format(
indent=(level)*INDENT,
type=type,
attr=quote(attr),
values=', '.join(values))
def get_type(values):
"""Extract the type of a variable.
This function tries to determine the DAP type of a Python variable using
several methods. Returns the DAP type as a string.
"""
if hasattr(values, 'dtype'):
return NUMPY_TO_DAP2_TYPEMAP[values.dtype.char]
elif isinstance(values, string_types) or not isinstance(values, Iterable):
return type_convert(values)
else:
# if there are several values, they may have different types, so we
# need to convert all of them and use a precedence table
types = [type_convert(val) for val in values]
precedence = ['String', 'Float64', 'Int32']
types.sort(key=precedence.index)
return types[0]
def type_convert(obj):
"""Map Python objects to the corresponding Opendap types.
Returns the DAP representation of the type as a string.
"""
if isinstance(obj, float):
return 'Float64'
elif isinstance(obj, integer_types):
return 'Int32'
else:
return 'String'
| src/pydap/responses/das.py | 4,733 | The DAS response.
Recursive function to build the DAS.
Single dispatcher that generates the DAS response.
Extract the type of a variable.
This function tries to determine the DAP type of a Python variable using
several methods. Returns the DAP type as a string.
Map Python objects to the corresponding Opendap types.
Returns the DAP representation of the type as a string.
The DAS response.
The DAS response describes the attributes associated with a dataset and its
variables. Together with the DDS the DAS response completely describes the
metadata of a dataset, allowing it to be introspected and data to be
downloaded.
check for metadata get type encode values if there are several values, they may have different types, so we need to convert all of them and use a precedence table | 790 | en | 0.826689 |
# -*- coding: utf-8 -*-
# Generated by Django 1.10.5 on 2017-11-11 04:06
from __future__ import unicode_literals
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
initial = True
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name='SignedTermsAndConditions',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
],
),
migrations.CreateModel(
name='TermsAndConditions',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('date', models.DateField(auto_now_add=True, help_text='Date of publication', verbose_name='Creation date')),
('markdown', models.TextField(editable=False, help_text='Formatted in Markdown', verbose_name='Terms and conditions)')),
],
),
migrations.AddField(
model_name='signedtermsandconditions',
name='terms',
field=models.ForeignKey(help_text='Terms agreed with user', on_delete=django.db.models.deletion.CASCADE, to='terms.TermsAndConditions'),
),
migrations.AddField(
model_name='signedtermsandconditions',
name='user',
field=models.OneToOneField(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL),
),
]
| seven23/models/terms/migrations/0001_initial.py | 1,634 | -*- coding: utf-8 -*- Generated by Django 1.10.5 on 2017-11-11 04:06 | 68 | en | 0.542058 |
# pypi
from pyramid.httpexceptions import HTTPFound
from pyramid.view import view_config
# local
from ..lib.handler import Handler
from ...lib import db as lib_db
from ...lib import errors
from ...model import utils as model_utils
# ==============================================================================
class ViewAdminOperations(Handler):
def _parse__event_type(self):
event_type = self.request.params.get("event_type", None)
event_type_id = None
if event_type:
try:
event_type_id = model_utils.OperationsEventType.from_string(event_type)
except AttributeError:
event_type = None
return (event_type, event_type_id)
def _parse__event_type_ids(self):
"""turns the request's `event_type=operations__update_recents__global` into an id."""
event_type_id = None
event_type = self.request.params.get("event_type", None)
if event_type:
try:
event_type_id = model_utils.OperationsEventType.from_string(event_type)
except AttributeError:
event_type = None
event_type_id = None
if event_type_id:
return (event_type_id,)
return None
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
@view_config(route_name="admin:operations", renderer=None)
def operations(self):
return HTTPFound(
"%s/operations/log"
% self.request.registry.settings["app_settings"]["admin_prefix"]
)
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
@view_config(
route_name="admin:operations:log", renderer="/admin/operations-log.mako"
)
@view_config(
route_name="admin:operations:log_paginated",
renderer="/admin/operations-log.mako",
)
def operations_log(self):
_items_per_page = 25
(event_type, event_type_id) = self._parse__event_type()
event_type_ids = (event_type_id,) if event_type_id else None
items_count = lib_db.get.get__OperationsEvent__count(
self.request.api_context, event_type_ids=event_type_ids
)
_url_template = (
"%s/operations/log/{0}"
% self.request.registry.settings["app_settings"]["admin_prefix"]
)
if event_type:
_url_template = "%s/operations/log/{0}?event_type=%s" % (
self.request.registry.settings["app_settings"]["admin_prefix"],
event_type,
)
(pager, offset) = self._paginate(
items_count, url_template=_url_template, items_per_page=_items_per_page
)
items_paged = lib_db.get.get__OperationsEvent__paginated(
self.request.api_context,
event_type_ids=event_type_ids,
limit=_items_per_page,
offset=offset,
)
return {
"project": "peter_sslers",
"OperationsEvent__count": items_count,
"OperationsEvents": items_paged,
"pager": pager,
"enable_redis": self.request.registry.settings["app_settings"][
"enable_redis"
],
"enable_nginx": self.request.registry.settings["app_settings"][
"enable_nginx"
],
"event_type": event_type,
}
@view_config(
route_name="admin:operations:log:focus",
renderer="/admin/operations-log-focus.mako",
)
def operations_log_focus(self):
item = lib_db.get.get__OperationsEvent__by_id(
self.request.api_context, self.request.matchdict["id"], eagerload_log=True
)
if not item:
raise ValueError("no item")
return {
"project": "peter_sslers",
"OperationsEvent": item,
"enable_redis": self.request.registry.settings["app_settings"][
"enable_redis"
],
"enable_nginx": self.request.registry.settings["app_settings"][
"enable_nginx"
],
}
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
@view_config(
route_name="admin:operations:redis", renderer="/admin/operations-redis.mako"
)
@view_config(
route_name="admin:operations:redis_paginated",
renderer="/admin/operations-redis.mako",
)
def admin_redis(self):
try:
# could raise `lib.errors.InvalidRequest`
# is this needed for viewing logs though?
# self._ensure_redis()
_items_per_page = 25
items_count = lib_db.get.get__OperationsEvent__count(
self.request.api_context,
event_type_ids=(
model_utils.OperationsEventType.from_string(
"operations__redis_prime"
),
),
)
url_template = (
"%s/operations/redis/log/{0}"
% self.request.registry.settings["app_settings"]["admin_prefix"]
)
(pager, offset) = self._paginate(
items_count,
url_template=url_template,
items_per_page=_items_per_page,
)
items_paged = lib_db.get.get__OperationsEvent__paginated(
self.request.api_context,
event_type_ids=(
model_utils.OperationsEventType.from_string(
"operations__redis_prime"
),
),
limit=_items_per_page,
offset=offset,
)
return {
"project": "peter_sslers",
"OperationsEvent__count": items_count,
"OperationsEvents": items_paged,
"pager": pager,
"enable_redis": self.request.registry.settings["app_settings"][
"enable_redis"
],
}
except errors.InvalidRequest as exc:
if self.request.wants_json:
return {
"result": "error",
"error": exc.args[0],
}
raise HTTPFound(
"%s?result=error&error=%s"
% (
self.request.registry.settings["app_settings"]["admin_prefix"],
exc.as_querystring,
)
)
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
@view_config(
route_name="admin:operations:nginx", renderer="/admin/operations-nginx.mako"
)
@view_config(
route_name="admin:operations:nginx_paginated",
renderer="/admin/operations-nginx.mako",
)
def admin_nginx(self):
try:
# could raise `lib.errors.InvalidRequest`
# is this needed for viewing logs though?
# self._ensure_nginx()
_items_per_page = 25
_event_type_ids = (
model_utils.OperationsEventType.from_string(
"operations__nginx_cache_expire"
),
model_utils.OperationsEventType.from_string(
"operations__nginx_cache_flush"
),
)
items_count = lib_db.get.get__OperationsEvent__count(
self.request.api_context, event_type_ids=_event_type_ids
)
url_template = (
"%s/operations/nginx/log/{0}"
% self.request.registry.settings["app_settings"]["admin_prefix"]
)
(pager, offset) = self._paginate(
items_count,
url_template=url_template,
items_per_page=_items_per_page,
)
items_paged = lib_db.get.get__OperationsEvent__paginated(
self.request.api_context,
event_type_ids=_event_type_ids,
limit=_items_per_page,
offset=offset,
)
return {
"project": "peter_sslers",
"OperationsEvent__count": items_count,
"OperationsEvents": items_paged,
"pager": pager,
"enable_nginx": self.request.registry.settings["app_settings"][
"enable_nginx"
],
}
except errors.InvalidRequest as exc:
if self.request.wants_json:
return {
"result": "error",
"error": exc.args[0],
}
raise HTTPFound(
"%s/?result=error&&error=%s"
% (
self.request.registry.settings["app_settings"]["admin_prefix"],
exc.as_querystring,
)
)
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
@view_config(
route_name="admin:operations:object_log",
renderer="/admin/operations-object_log.mako",
)
@view_config(
route_name="admin:operations:object_log_paginated",
renderer="/admin/operations-object_log.mako",
)
def object_log(self):
_items_per_page = 25
items_count = lib_db.get.get__OperationsObjectEvent__count(
self.request.api_context
)
url_template = (
"%s/operations/object-log/{0}"
% self.request.registry.settings["app_settings"]["admin_prefix"]
)
(pager, offset) = self._paginate(
items_count,
url_template=url_template,
items_per_page=_items_per_page,
)
items_paged = lib_db.get.get__OperationsObjectEvent__paginated(
self.request.api_context, limit=_items_per_page, offset=offset
)
return {
"project": "peter_sslers",
"OperationsObjectEvent__count": items_count,
"OperationsObjectEvents": items_paged,
"pager": pager,
"enable_redis": self.request.registry.settings["app_settings"][
"enable_redis"
],
"enable_nginx": self.request.registry.settings["app_settings"][
"enable_nginx"
],
}
@view_config(
route_name="admin:operations:object_log:focus",
renderer="/admin/operations-object_log-focus.mako",
)
def operations_object_log_focus(self):
item = lib_db.get.get__OperationsObjectEvent__by_id(
self.request.api_context, self.request.matchdict["id"], eagerload_log=True
)
if not item:
raise ValueError("no item")
return {
"project": "peter_sslers",
"OperationsObjectEvent": item,
"enable_redis": self.request.registry.settings["app_settings"][
"enable_redis"
],
"enable_nginx": self.request.registry.settings["app_settings"][
"enable_nginx"
],
}
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| src/peter_sslers/web/views_admin/operation.py | 11,602 | turns the request's `event_type=operations__update_recents__global` into an id.
pypi local ============================================================================== - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - could raise `lib.errors.InvalidRequest` is this needed for viewing logs though? self._ensure_redis() - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - could raise `lib.errors.InvalidRequest` is this needed for viewing logs though? self._ensure_nginx() - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - | 1,187 | it | 0.437631 |
#######################################################################
# Copyright (C) 2017 Shangtong Zhang(zhangshangtong.cpp@gmail.com) #
# Permission given to modify the code as long as you keep this #
# declaration at the top #
#######################################################################
from .network_utils import *
from .network_bodies import *
from .hyper_bodies import *
from .hypernetwork_ops import *
from ..utils.hypernet_heads_defs import *
from ..component.samplers import *
class VanillaHyperNet(nn.Module, BaseNet):
def __init__(self, output_dim, body):
super(VanillaHyperNet, self).__init__()
self.mixer = False
self.config = VanillaNet_config(body.feature_dim, output_dim)
self.fc_head = LinearGenerator(self.config['fc_head'])
self.body = body
self.to(Config.DEVICE)
def sample_model_seed(self):
if not self.mixer:
self.model_seed = {
'fc_head_z': torch.rand(self.fc_head.config['n_gen'], particles, self.z_dim).to(Config.DEVICE)
}
else:
self.model_seed = torch.rand(particles, self.s_dim)
def forward(self, x, z=None):
phi = self.body(tensor(x, z))
y = self.fc_head(z[0], phi)
return y
class DuelingHyperNet(nn.Module, BaseNet):
def __init__(self, action_dim, body, hidden, dist, particles):
super(DuelingHyperNet, self).__init__()
self.mixer = False
self.config = DuelingNet_config(body.feature_dim, action_dim)
self.config['fc_value'] = self.config['fc_value']._replace(d_hidden=hidden)
self.config['fc_advantage'] = self.config['fc_advantage']._replace(d_hidden=hidden)
self.fc_value = LinearGenerator(self.config['fc_value']).cuda()
self.fc_advantage = LinearGenerator(self.config['fc_advantage']).cuda()
self.features = body
self.s_dim = self.config['s_dim']
self.z_dim = self.config['z_dim']
self.n_gen = self.config['n_gen']
self.particles = particles
self.noise_sampler = NoiseSampler(dist, self.z_dim, self.particles)
# self.sample_model_seed()
self.to(Config.DEVICE)
def sample_model_seed(self):
sample_z = self.noise_sampler.sample().to(Config.DEVICE)
# sample_z = sample_z.unsqueeze(0).repeat(self.features.config['n_gen'], 1, 1)
sample_z = sample_z.unsqueeze(0).repeat(self.particles, 1)
self.model_seed = {
'value_z': sample_z,
'advantage_z': sample_z,
}
def set_model_seed(self, seed):
self.model_seed = seed
def forward(self, x, to_numpy=False, theta=None):
if not isinstance(x, torch.cuda.FloatTensor):
x = tensor(x)
if x.shape[0] == 1 and x.shape[1] == 1: ## dm_env returns one too many dimensions
x = x[0]
phi = self.body(x)
return self.head(phi)
def body(self, x=None):
if not isinstance(x, torch.cuda.FloatTensor):
x = tensor(x)
return self.features(x)
def head(self, phi):
phi = phi.repeat(self.particles, 1, 1) # since we have a deterministic body with many heads
value = self.fc_value(self.model_seed['value_z'], phi)
advantage = self.fc_advantage(self.model_seed['advantage_z'], phi)
q = value.expand_as(advantage) + (advantage - advantage.mean(-1, keepdim=True).expand_as(advantage))
return q
def sample_model(self, component):
param_sets = []
if component == 'q':
param_sets.extend(self.fc_value(z=self.model_seed['value_z']))
param_sets.extend(self.fc_advantage(z=self.model_seed['advantage_z']))
return param_sets
def predict_action(self, x, pred, to_numpy=False):
x = tensor(x)
q = self(x)
if pred == 'max':
max_q, max_q_idx = q.max(-1) # max over q values
max_actor = max_q.max(0)[1] # max over particles
action = q[max_actor].argmax()
elif pred == 'rand':
idx = np.random.choice(self.particles, 1)[0]
action = q[idx].max(0)[1]
elif pred == 'mean':
action_means = q.mean(0) #[actions]
action = action_means.argmax()
if to_numpy:
action = action.cpu().detach().numpy()
return action
| deep_rl/network/hyper_heads.py | 4,473 | Copyright (C) 2017 Shangtong Zhang(zhangshangtong.cpp@gmail.com) Permission given to modify the code as long as you keep this declaration at the top self.sample_model_seed() sample_z = sample_z.unsqueeze(0).repeat(self.features.config['n_gen'], 1, 1) dm_env returns one too many dimensions since we have a deterministic body with many heads max over q values max over particles[actions] | 444 | en | 0.753257 |
from django.utils import timezone
from .forms import SchedDayForm
class AdminCommonMixin(object):
"""
common methods for all admin class
set default values for owner, date, etc
"""
def save_model(self, request, obj, form, change):
try:
obj.created_by = request.user
except:
pass
super().save_model(request, obj, form, change)
def get_queryset(self, request):
"""
read queryset if is superuser
or read owns objects
"""
qs = super().get_queryset(request)
if request.user.is_superuser:
return qs
return qs.filter(created_by=request.user)
def response_change(self, request, obj):
"""
get from response change some custom action from post
ej: '_custom_action' in request.POST:
"""
if '_custom_action' in request.POST:
pass
return super().response_change(request, obj)
def response_add(self, request, obj):
"""
get from response change some custom action from post
ej: '_custom_action' in request.POST:
"""
if '_custom_action' in request.POST:
pass
return super().response_add(request, obj)
class CalendarActionMixin(object):
def save_model(self, request, obj, form, change):
try:
obj.created_by = request.user
except:
pass
super().save_model(request, obj, form, change)
def get_queryset(self, request):
"""
read queryset if is superuser
or read owns objects
"""
qs = super().get_queryset(request)
if request.user.is_superuser:
return qs
return qs.filter(created_by=request.user)
def changelist_view(self, request, extra_context=None):
response = super().changelist_view(
request,
extra_context=extra_context
)
try:
# get only when times of days are set
qs = response.context_data['cl'].queryset.timesofdays()
except (AttributeError, KeyError):
return response
response.context_data['scheduled_days'] = qs
return response
| schedules/mixins.py | 2,241 | common methods for all admin class
set default values for owner, date, etc
read queryset if is superuser
or read owns objects
read queryset if is superuser
or read owns objects
get from response change some custom action from post
ej: '_custom_action' in request.POST:
get from response change some custom action from post
ej: '_custom_action' in request.POST:
get only when times of days are set | 398 | en | 0.810766 |
# coding=utf-8
# Copyright 2020 The HuggingFace NLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""DailyDialog: A Manually Labelled Multi-turn Dialogue Dataset"""
from __future__ import absolute_import, division, print_function
import os
from zipfile import ZipFile
import nlp
_CITATION = """\
@InProceedings{li2017dailydialog,
author = {Li, Yanran and Su, Hui and Shen, Xiaoyu and Li, Wenjie and Cao, Ziqiang and Niu, Shuzi},
title = {DailyDialog: A Manually Labelled Multi-turn Dialogue Dataset},
booktitle = {Proceedings of The 8th International Joint Conference on Natural Language Processing (IJCNLP 2017)},
year = {2017}
}
"""
_DESCRIPTION = """\
We develop a high-quality multi-turn dialog dataset, DailyDialog, which is intriguing in several aspects.
The language is human-written and less noisy. The dialogues in the dataset reflect our daily communication way
and cover various topics about our daily life. We also manually label the developed dataset with communication
intention and emotion information. Then, we evaluate existing approaches on DailyDialog dataset and hope it
benefit the research field of dialog systems.
"""
_URL = "http://yanran.li/files/ijcnlp_dailydialog.zip"
act_label = {
"0": "__dummy__", # Added to be compatible out-of-the-box with nlp.ClassLabel
"1": "inform",
"2": "question",
"3": "directive",
"4": "commissive",
}
emotion_label = {
"0": "no emotion",
"1": "anger",
"2": "disgust",
"3": "fear",
"4": "happiness",
"5": "sadness",
"6": "surprise",
}
class DailyDialog(nlp.GeneratorBasedBuilder):
"""DailyDialog: A Manually Labelled Multi-turn Dialogue Dataset"""
VERSION = nlp.Version("1.0.0")
__EOU__ = "__eou__"
def _info(self):
return nlp.DatasetInfo(
description=_DESCRIPTION,
features=nlp.Features(
{
"dialog": nlp.features.Sequence(
nlp.Value("string")
),
"act": nlp.features.Sequence(
nlp.ClassLabel(names=list(act_label.values()))
),
"emotion": nlp.features.Sequence(
nlp.ClassLabel(names=list(emotion_label.values()))
),
}
),
supervised_keys=None,
homepage="http://yanran.li/dailydialog",
citation=_CITATION,
)
def _split_generators(self, dl_manager: nlp.DownloadManager):
"""Returns SplitGenerators."""
# dl_manager is a nlp.download.DownloadManager that can be used to
# download and extract URLs
dl_dir = dl_manager.download_and_extract(_URL)
data_dir = os.path.join(dl_dir, "ijcnlp_dailydialog")
# The splits are nested inside the zip
for name in ("train", "validation", "test"):
zip_fpath = os.path.join(data_dir, f"{name}.zip")
with ZipFile(zip_fpath) as zip_file:
zip_file.extractall(path=data_dir)
zip_file.close()
return [
nlp.SplitGenerator(
name=nlp.Split.TRAIN,
# These kwargs will be passed to _generate_examples
gen_kwargs={
"file_path": os.path.join(data_dir, "train", "dialogues_train.txt"),
"act_path": os.path.join(data_dir, "train", "dialogues_act_train.txt"),
"emotion_path": os.path.join(data_dir, "train", "dialogues_emotion_train.txt"),
"split": "train",
},
),
nlp.SplitGenerator(
name=nlp.Split.TEST,
# These kwargs will be passed to _generate_examples
gen_kwargs={
"file_path": os.path.join(data_dir, "test", "dialogues_test.txt"),
"act_path": os.path.join(data_dir, "test", "dialogues_act_test.txt"),
"emotion_path": os.path.join(data_dir, "test", "dialogues_emotion_test.txt"),
"split": "test",
},
),
nlp.SplitGenerator(
name=nlp.Split.VALIDATION,
# These kwargs will be passed to _generate_examples
gen_kwargs={
"file_path": os.path.join(data_dir, "validation", "dialogues_validation.txt"),
"act_path": os.path.join(data_dir, "validation", "dialogues_act_validation.txt"),
"emotion_path": os.path.join(data_dir, "validation", "dialogues_emotion_validation.txt"),
"split": "dev",
},
),
]
def _generate_examples(self, file_path, act_path, emotion_path, split):
""" Yields examples. """
# Yields (key, example) tuples from the dataset
with open(file_path, "r", encoding="utf-8") as f, open(act_path, "r", encoding="utf-8") as act, open(
emotion_path, "r", encoding="utf-8"
) as emotion:
for i, (line_f, line_act, line_emotion) in enumerate(zip(f, act, emotion)):
if len(line_f.strip()) == 0:
break
dialog = line_f.split(self.__EOU__)[:-1]
act = line_act.split(" ")[:-1]
emotion = line_emotion.split(" ")[:-1]
assert len(dialog) == len(act) == len(emotion), "Different turns btw dialogue & emotion & action"
yield f"{split}-{i}", {
"dialog": dialog,
"act": [act_label[x] for x in act],
"emotion": [emotion_label[x] for x in emotion],
}
| datasets/daily_dialog/daily_dialog.py | 6,224 | DailyDialog: A Manually Labelled Multi-turn Dialogue Dataset
Yields examples.
Returns SplitGenerators.
DailyDialog: A Manually Labelled Multi-turn Dialogue Dataset
coding=utf-8 Copyright 2020 The HuggingFace NLP Authors Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Added to be compatible out-of-the-box with nlp.ClassLabel dl_manager is a nlp.download.DownloadManager that can be used to download and extract URLs The splits are nested inside the zip These kwargs will be passed to _generate_examples These kwargs will be passed to _generate_examples These kwargs will be passed to _generate_examples Yields (key, example) tuples from the dataset | 1,126 | en | 0.793973 |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright © 2007 Free Software Foundation, Inc. <https://fsf.org/>
#
# Licensed under the GNU General Public License, version 3 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://jxself.org/translations/gpl-3.zh.shtml
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import time
import json
from functools import wraps
from flask_restful import ResponseBase
from sfo_server.models import SfoServerUser, SfoAccountManagerMethod, SfoServerAccessLog
from sfo_server.resource.common import timestamp_format
from flask import request, g, session
def access_log_decorate(func):
"""
用于记录用户登录后访问网址行为的装饰器
:param func:
:return:
"""
@wraps(func)
def wrapper(*args, **kwargs):
access_user = request.headers.get('X-Real-IP ', request.remote_addr)
access_method = request.method
access_path = request.path
access_time = timestamp_format(time.time())
resp = func(*args, **kwargs)
access_result = resp[0].get('status')
access_message = resp[0].get('message', 'Internal Server Error') if resp else 'Internal Server Error'
SfoServerAccessLog.add_access_log(access_user, access_method, access_path, access_time, access_result, access_message)
return resp
return wrapper
def login_required(func):
"""
验证是否登录
:param func:
:return:
"""
@wraps(func)
def wrapper(*args, **kwargs):
user_account = session.get('username', '')
if user_account:
login_user = SfoServerUser.query_user_by_account(user_account)
g.user = login_user
return func(*args, **kwargs)
else:
return ResponseBase(json.dumps({'status': 401, "message": u'请先登录'}),
status=401, content_type='application/json')
return wrapper
def permission_required(*resources):
"""
权限验证的前提是用户已经登录
权限验证
:param resources: 控制的资源对象
"""
def decorate(func):
@wraps(func)
def wrapper(*args, **kwargs):
method = func.__name__
resource_names = [resource.__tablename__ for resource in resources]
need_permission = set([method + '_' + resource_name for resource_name in resource_names])
user = getattr(g, 'user', '')
has_permission_set = set()
is_clusteradmin = user.is_clusteradmin if user else 0
if is_clusteradmin:
return func(*args, **kwargs)
if user:
for role in user.roles:
for permission in role.permissions:
has_permission_set.add(permission.permission_name)
if not need_permission.issubset(has_permission_set):
return ResponseBase(json.dumps({'status': 403, 'message': u'权限不足,请联系管理员'}),
status=403, content_type='application/json')
else:
return func(*args, **kwargs)
else:
return ResponseBase(json.dumps({'status': 401, "message": u'请先登录'}),
status=401, content_type='application/json')
return wrapper
return decorate
| sfo_server/decorate.py | 3,725 | 用于记录用户登录后访问网址行为的装饰器
:param func:
:return:
验证是否登录
:param func:
:return:
权限验证的前提是用户已经登录
权限验证
:param resources: 控制的资源对象
!/usr/bin/env python -*- coding: utf-8 -*- Copyright © 2007 Free Software Foundation, Inc. <https://fsf.org/> Licensed under the GNU General Public License, version 3 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://jxself.org/translations/gpl-3.zh.shtml Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. | 763 | en | 0.758358 |
# Copyright 2013-2021 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack import *
import os
class PyPybind11(CMakePackage):
"""pybind11 -- Seamless operability between C++11 and Python.
pybind11 is a lightweight header-only library that exposes C++ types in
Python and vice versa, mainly to create Python bindings of existing C++
code. Its goals and syntax are similar to the excellent Boost.Python
library by David Abrahams: to minimize boilerplate code in traditional
extension modules by inferring type information using compile-time
introspection."""
homepage = "https://pybind11.readthedocs.io"
url = "https://github.com/pybind/pybind11/archive/v2.6.2.tar.gz"
git = "https://github.com/pybind/pybind11.git"
maintainers = ['ax3l']
version('master', branch='master')
version('2.6.2', sha256='8ff2fff22df038f5cd02cea8af56622bc67f5b64534f1b83b9f133b8366acff2')
version('2.6.1', sha256='cdbe326d357f18b83d10322ba202d69f11b2f49e2d87ade0dc2be0c5c34f8e2a')
version('2.5.0', sha256='97504db65640570f32d3fdf701c25a340c8643037c3b69aec469c10c93dc8504', preferred=True)
version('2.4.3', sha256='1eed57bc6863190e35637290f97a20c81cfe4d9090ac0a24f3bbf08f265eb71d')
version('2.3.0', sha256='0f34838f2c8024a6765168227ba587b3687729ebf03dc912f88ff75c7aa9cfe8')
version('2.2.4', sha256='b69e83658513215b8d1443544d0549b7d231b9f201f6fc787a2b2218b408181e')
version('2.2.3', sha256='3a3b7b651afab1c5ba557f4c37d785a522b8030dfc765da26adc2ecd1de940ea')
version('2.2.2', sha256='b639a2b2cbf1c467849660801c4665ffc1a4d0a9e153ae1996ed6f21c492064e')
version('2.2.1', sha256='f8bd1509578b2a1e7407d52e6ee8afe64268909a1bbda620ca407318598927e7')
version('2.2.0', sha256='1b0fda17c650c493f5862902e90f426df6751da8c0b58c05983ab009951ed769')
version('2.1.1', sha256='f2c6874f1ea5b4ad4ffffe352413f7d2cd1a49f9050940805c2a082348621540')
version('2.1.0', sha256='2860f2b8d0c9f65f0698289a161385f59d099b7ead1bf64e8993c486f2b93ee0')
depends_on('py-setuptools', type='build')
extends('python')
# compiler support
conflicts('%gcc@:4.7')
conflicts('%clang@:3.2')
conflicts('%intel@:16')
def cmake_args(self):
args = []
args.append('-DPYTHON_EXECUTABLE:FILEPATH=%s'
% self.spec['python'].command.path)
args += [
self.define('PYBIND11_TEST', self.run_tests)
]
return args
def setup_build_environment(self, env):
env.set('PYBIND11_USE_CMAKE', 1)
# https://github.com/pybind/pybind11/pull/1995
@when('@:2.4.99')
def patch(self):
""" see https://github.com/spack/spack/issues/13559 """
filter_file('import sys',
'import sys; return "{0}"'.format(self.prefix.include),
'pybind11/__init__.py',
string=True)
def install(self, spec, prefix):
super(PyPybind11, self).install(spec, prefix)
setup_py('install', '--single-version-externally-managed', '--root=/',
'--prefix={0}'.format(prefix))
@run_after('install')
@on_package_attributes(run_tests=True)
def install_test(self):
with working_dir('spack-test', create=True):
# test include helper points to right location
python = self.spec['python'].command
py_inc = python(
'-c',
'import pybind11 as py; ' +
self.spec['python'].package.print_string('py.get_include()'),
output=str).strip()
for inc in [py_inc, self.prefix.include]:
inc_file = join_path(inc, 'pybind11', 'pybind11.h')
assert os.path.isfile(inc_file)
| var/spack/repos/builtin/packages/py-pybind11/package.py | 3,865 | pybind11 -- Seamless operability between C++11 and Python.
pybind11 is a lightweight header-only library that exposes C++ types in
Python and vice versa, mainly to create Python bindings of existing C++
code. Its goals and syntax are similar to the excellent Boost.Python
library by David Abrahams: to minimize boilerplate code in traditional
extension modules by inferring type information using compile-time
introspection.
see https://github.com/spack/spack/issues/13559
Copyright 2013-2021 Lawrence Livermore National Security, LLC and other Spack Project Developers. See the top-level COPYRIGHT file for details. SPDX-License-Identifier: (Apache-2.0 OR MIT) compiler support https://github.com/pybind/pybind11/pull/1995 test include helper points to right location | 772 | en | 0.843012 |
# Copyright 2015 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""URL endpoint to allow Buildbot slaves to post data to the dashboard."""
import copy
import json
import logging
import math
import re
from google.appengine.api import datastore_errors
from google.appengine.api import taskqueue
from google.appengine.ext import ndb
from dashboard import math_utils
from dashboard import post_data_handler
from dashboard.common import datastore_hooks
from dashboard.models import graph_data
_TASK_QUEUE_NAME = 'new-points-queue'
# Number of rows to process per task queue task. This limits the task size
# and execution time (Limits: 100KB object size and 10 minutes execution time).
_TASK_QUEUE_SIZE = 32
# Max length for a Row property name.
_MAX_COLUMN_NAME_LENGTH = 25
# Maximum length of a value for a string property.
_STRING_COLUMN_MAX_LENGTH = 400
# Maximum number of properties for a Row.
_MAX_NUM_COLUMNS = 30
# Maximum length for a test path. This limit is required because the test path
# used as the string ID for TestContainer (the parent in the datastore for Row
# entities), and datastore imposes a maximum string ID length.
_MAX_TEST_PATH_LENGTH = 500
class BadRequestError(Exception):
"""An error indicating that a 400 response status should be returned."""
pass
class AddPointHandler(post_data_handler.PostDataHandler):
"""URL endpoint to post data to the dashboard."""
def post(self):
"""Validates data parameter and add task to queue to process points.
The row data comes from a "data" parameter, which is a JSON encoding of a
list of dictionaries, each of which represents one performance result
(one point in a graph) and associated data.
[
{
"master": "ChromiumPerf",
"bot": "xp-release-dual-core",
"test": "dromaeo/dom/modify",
"revision": 123456789,
"value": 24.66,
"error": 2.33,
"units": "ms",
"supplemental_columns": {
"d_median": 24234.12,
"d_mean": 23.553,
"r_webkit": 423340,
...
},
...
},
...
]
In general, the required fields are "master", "bot", "test" (which together
form the test path which identifies the series that this point belongs to),
and "revision" and "value", which are the X and Y values for the point.
This API also supports the Dashboard JSON v1.0 format (go/telemetry-json),
the first producer of which is Telemetry. Telemetry provides lightweight
serialization of values it produces, as JSON. If a dashboard JSON object is
passed, it will be a single dict rather than a list, with the test,
value, error, and units fields replaced by a chart_data field containing a
Chart JSON dict (see design doc, and example below). Dashboard JSON v1.0 is
processed by converting it into rows (which can be viewed as Dashboard JSON
v0).
{
"master": "ChromiumPerf",
<other row fields>,
"chart_data": {
"foo": {
"bar": {
"type": "scalar",
"name": "foo.bar",
"units": "ms",
"value": 4.2,
},
"summary": {
"type": "list_of_scalar_values",
"name": "foo",
"units": "ms",
"values": [4.2, 5.7, 6.8],
"std": 1.30512,
},
},
}
Request parameters:
data: JSON encoding of a list of dictionaries.
Outputs:
Empty 200 response with if successful,
200 response with warning message if optional data is invalid,
403 response with error message if sender IP is not white-listed,
400 response with error message if required data is invalid.
500 with error message otherwise.
"""
datastore_hooks.SetPrivilegedRequest()
if not self._CheckIpAgainstWhitelist():
# TODO(qyearsley): Add test coverage. See catapult:#1346.
return
data = self.request.get('data')
if not data:
# TODO(qyearsley): Add test coverage. See catapult:#1346.
self.ReportError('Missing "data" parameter.', status=400)
return
try:
data = json.loads(self.request.get('data'))
except ValueError:
self.ReportError('Invalid JSON string.', status=400)
return
logging.info('Received data: %s', data)
try:
if type(data) is dict:
if data.get('chart_data'):
data = _DashboardJsonToRawRows(data)
if not data:
return # No data to add, bail out.
else:
self.ReportError(
'Data should be a list of rows or a Dashboard JSON v1.0 dict.',
status=400)
return
test_map = _ConstructTestPathMap(data)
for row_dict in data:
_ValidateRowDict(row_dict, test_map)
_AddTasks(data)
except BadRequestError as error:
# If any of the data was invalid, abort immediately and return an error.
self.ReportError(error.message, status=400)
def _DashboardJsonToRawRows(dash_json_dict):
"""Formats a Dashboard JSON dict as a list of row dicts.
For the dashboard to begin accepting the Telemetry Dashboard JSON format
as per go/telemetry-json, this function chunks a Dashboard JSON literal
into rows and passes the resulting list to _AddTasks.
Args:
dash_json_dict: A dashboard JSON v1.0 dict.
Returns:
A list of dicts, each of which represents a point.
Raises:
AssertionError: The given argument wasn't a dict.
BadRequestError: The content of the input wasn't valid.
"""
assert type(dash_json_dict) is dict
# A Dashboard JSON dict should at least have all charts coming from the
# same master, bot and rev. It can contain multiple charts, however.
if not dash_json_dict.get('master'):
raise BadRequestError('No master name given.')
if not dash_json_dict.get('bot'):
raise BadRequestError('No bot name given.')
if not dash_json_dict.get('point_id'):
raise BadRequestError('No point_id number given.')
if not dash_json_dict.get('chart_data'):
raise BadRequestError('No chart data given.')
test_suite_name = _TestSuiteName(dash_json_dict)
chart_data = dash_json_dict.get('chart_data', {})
charts = chart_data.get('charts', {})
if not charts:
return [] # No charts implies no data to add.
# Links to about:tracing traces are listed under 'trace'; if they
# exist copy them to a separate dictionary and delete from the chartjson
# so that we don't try to process them as data points.
tracing_links = None
if 'trace' in charts:
tracing_links = charts['trace'].copy()
del charts['trace']
row_template = _MakeRowTemplate(dash_json_dict)
benchmark_description = chart_data.get('benchmark_description', '')
trace_rerun_options = dict(chart_data.get('trace_rerun_options', []))
is_ref = bool(dash_json_dict.get('is_ref'))
rows = []
for chart in charts:
for trace in charts[chart]:
# Need to do a deep copy here so we don't copy a_tracing_uri data.
row = copy.deepcopy(row_template)
specific_vals = _FlattenTrace(
test_suite_name, chart, trace, charts[chart][trace], is_ref,
tracing_links, benchmark_description)
# Telemetry may validly produce rows that represent a value of NaN. To
# avoid getting into messy situations with alerts, we do not add such
# rows to be processed.
if not (math.isnan(specific_vals['value']) or
math.isnan(specific_vals['error'])):
if specific_vals['tracing_uri']:
row['supplemental_columns']['a_tracing_uri'] = specific_vals[
'tracing_uri']
if trace_rerun_options:
row['supplemental_columns']['a_trace_rerun_options'] = (
trace_rerun_options)
row.update(specific_vals)
rows.append(row)
return rows
def _TestSuiteName(dash_json_dict):
"""Extracts a test suite name from Dashboard JSON.
The dashboard JSON may contain a field "test_suite_name". If this is not
present or it is None, the dashboard will fall back to using "benchmark_name"
in the "chart_data" dict.
"""
if dash_json_dict.get('test_suite_name'):
return dash_json_dict['test_suite_name']
try:
return dash_json_dict['chart_data']['benchmark_name']
except KeyError as e:
raise BadRequestError('Could not find test suite name. ' + e.message)
def _AddTasks(data):
"""Puts tasks on queue for adding data.
Args:
data: A list of dictionaries, each of which represents one point.
"""
task_list = []
for data_sublist in _Chunk(data, _TASK_QUEUE_SIZE):
task_list.append(taskqueue.Task(
url='/add_point_queue',
params={'data': json.dumps(data_sublist)}))
queue = taskqueue.Queue(_TASK_QUEUE_NAME)
for task_sublist in _Chunk(task_list, taskqueue.MAX_TASKS_PER_ADD):
# Calling get_result waits for all tasks to be added. It's possible that
# this is different, and maybe faster, than just calling queue.add.
queue.add_async(task_sublist).get_result()
def _Chunk(items, chunk_size):
"""Breaks a long list into sub-lists of a particular size."""
chunks = []
for i in range(0, len(items), chunk_size):
chunks.append(items[i:i + chunk_size])
return chunks
def _MakeRowTemplate(dash_json_dict):
"""Produces a template for rows created from a Dashboard JSON v1.0 dict.
_DashboardJsonToRawRows adds metadata fields to every row that it creates.
These include things like master, bot, point ID, versions, and other
supplementary data. This method produces a dict containing this metadata
to which row-specific information (like value and error) can be added.
Some metadata needs to be transformed to conform to the v0 format, and this
method is also responsible for that transformation.
Some validation is deferred until after the input is converted to a list
of row dicts, since revision format correctness is checked on a per-point
basis.
Args:
dash_json_dict: A dashboard JSON v1.0 dict.
Returns:
A dict containing data to include in each row dict that is created from
|dash_json_dict|.
"""
row_template = dash_json_dict.copy()
del row_template['chart_data']
del row_template['point_id']
row_template['revision'] = dash_json_dict['point_id']
annotations = row_template['supplemental']
versions = row_template['versions']
del row_template['supplemental']
del row_template['versions']
row_template['supplemental_columns'] = {}
supplemental = row_template['supplemental_columns']
for annotation in annotations:
supplemental['a_' + annotation] = annotations[annotation]
for version in versions:
supplemental['r_' + version] = versions[version]
return row_template
def _FlattenTrace(test_suite_name, chart_name, trace_name, trace,
is_ref=False, tracing_links=None, benchmark_description=''):
"""Takes a trace dict from dashboard JSON and readies it for display.
Traces can be either scalars or lists; if scalar we take the value directly;
if list we average the values and compute their standard deviation. We also
extract fields that are normally part of v0 row dicts that are uploaded
using add_point but are actually part of traces in the v1.0 format.
Args:
test_suite_name: The name of the test suite (benchmark).
chart_name: The name of the chart to which this trace belongs.
trace_name: The name of the passed trace.
trace: A trace dict extracted from a dashboard JSON chart.
is_ref: A boolean which indicates whether this trace comes from a
reference build.
tracing_links: A dictionary mapping trace names to about:tracing trace
urls in cloud storage
benchmark_description: A string documenting the benchmark suite to which
this trace belongs.
Returns:
A dict containing units, value, and error for this trace.
Raises:
BadRequestError: The data wasn't valid.
"""
if '@@' in chart_name:
tir_label, chart_name = chart_name.split('@@')
chart_name = chart_name + '/' + tir_label
value, error = _ExtractValueAndError(trace)
# If there is a link to an about:tracing trace in cloud storage for this
# test trace_name, cache it.
tracing_uri = None
if (tracing_links and
trace_name in tracing_links and
'cloud_url' in tracing_links[trace_name]):
tracing_uri = tracing_links[trace_name]['cloud_url'].replace('\\/', '/')
trace_name = _EscapeName(trace_name)
if trace_name == 'summary':
subtest_name = chart_name
else:
subtest_name = chart_name + '/' + trace_name
name = test_suite_name + '/' + subtest_name
if trace_name == 'summary' and is_ref:
name += '/ref'
elif trace_name != 'summary' and is_ref:
name += '_ref'
row_dict = {
'test': name,
'value': value,
'error': error,
'units': trace['units'],
'tracing_uri': tracing_uri,
'benchmark_description': benchmark_description,
}
if 'improvement_direction' in trace:
improvement_direction_str = trace['improvement_direction']
if improvement_direction_str is None:
raise BadRequestError('improvement_direction must not be None')
row_dict['higher_is_better'] = _ImprovementDirectionToHigherIsBetter(
improvement_direction_str)
return row_dict
def _ExtractValueAndError(trace):
"""Returns the value and measure of error from a chartjson trace dict.
Args:
trace: A dict that has one "result" from a performance test, e.g. one
"value" in a Telemetry test, with the keys "trace_type", "value", etc.
Returns:
A pair (value, error) where |value| is a float and |error| is some measure
of variance used to show error bars; |error| could be None.
Raises:
BadRequestError: Data format was invalid.
"""
trace_type = trace.get('type')
if trace_type == 'scalar':
value = trace.get('value')
if value is None and trace.get('none_value_reason'):
return float('nan'), 0
try:
return float(value), 0
except:
raise BadRequestError('Expected scalar value, got: %r' % value)
if trace_type == 'list_of_scalar_values':
values = trace.get('values')
if not isinstance(values, list) and values is not None:
# Something else (such as a single scalar, or string) was given.
raise BadRequestError('Expected list of scalar values, got: %r' % values)
if not values or None in values:
# None was included or values is None; this is not an error if there
# is a reason.
if trace.get('none_value_reason'):
return float('nan'), float('nan')
raise BadRequestError('Expected list of scalar values, got: %r' % values)
if not all(_IsNumber(v) for v in values):
raise BadRequestError('Non-number found in values list: %r' % values)
value = math_utils.Mean(values)
std = trace.get('std')
if std is not None:
error = std
else:
error = math_utils.StandardDeviation(values)
return value, error
if trace_type == 'histogram':
return _GeomMeanAndStdDevFromHistogram(trace)
raise BadRequestError('Invalid value type in chart object: %r' % trace_type)
def _IsNumber(v):
return isinstance(v, float) or isinstance(v, int) or isinstance(v, long)
def _EscapeName(name):
"""Escapes a trace name so it can be stored in a row.
Args:
name: A string representing a name.
Returns:
An escaped version of the name.
"""
return re.sub(r'[\:|=/#&,]', '_', name)
def _GeomMeanAndStdDevFromHistogram(histogram):
"""Generates the geom. mean and std. dev. for a histogram.
A histogram is a collection of numerical buckets with associated
counts; a bucket can either represent a number of instances of a single
value ('low'), or from within a range of values (in which case 'high' will
specify the upper bound). We compute the statistics by treating the
histogram analogously to a list of individual values, where the counts tell
us how many of each value there are.
Args:
histogram: A histogram dict with a list 'buckets' of buckets.
Returns:
The geometric mean and standard deviation of the given histogram.
"""
# Note: This code comes originally from
# build/scripts/common/chromium_utils.py and was used initially for
# processing histogram results on the buildbot side previously.
if 'buckets' not in histogram:
# TODO(qyearsley): Add test coverage. See catapult:#1346.
return 0.0, 0.0
count = 0
sum_of_logs = 0
for bucket in histogram['buckets']:
if 'high' in bucket:
bucket['mean'] = (bucket['low'] + bucket['high']) / 2.0
else:
# TODO(qyearsley): Add test coverage. See catapult:#1346.
bucket['mean'] = bucket['low']
if bucket['mean'] > 0:
sum_of_logs += math.log(bucket['mean']) * bucket['count']
count += bucket['count']
if count == 0:
return 0.0, 0.0
sum_of_squares = 0
geom_mean = math.exp(sum_of_logs / count)
for bucket in histogram['buckets']:
if bucket['mean'] > 0:
sum_of_squares += (bucket['mean'] - geom_mean) ** 2 * bucket['count']
return geom_mean, math.sqrt(sum_of_squares / count)
def _ImprovementDirectionToHigherIsBetter(improvement_direction_str):
"""Converts an improvement direction string to a higher_is_better boolean.
Args:
improvement_direction_str: a string, either 'up' or 'down'.
Returns:
A boolean expressing the appropriate higher_is_better value.
Raises:
BadRequestError: if improvement_direction_str is invalid.
"""
# If improvement_direction is provided, we want to use it. Otherwise, by not
# providing it we'll fall back to unit-info.json
# TODO(eakuefner): Fail instead of falling back after fixing crbug.com/459450.
if improvement_direction_str == 'up':
return True
elif improvement_direction_str == 'down':
return False
else:
raise BadRequestError('Invalid improvement direction string: ' +
improvement_direction_str)
def _ConstructTestPathMap(row_dicts):
"""Makes a mapping from test paths to last added revision."""
last_added_revision_keys = []
for row in row_dicts:
if not ('master' in row and 'bot' in row and 'test' in row):
continue
path = '%s/%s/%s' % (row['master'], row['bot'], row['test'].strip('/'))
if len(path) > _MAX_TEST_PATH_LENGTH:
continue
last_added_revision_keys.append(ndb.Key('LastAddedRevision', path))
try:
last_added_revision_entities = ndb.get_multi(last_added_revision_keys)
except datastore_errors.BadRequestError:
# TODO(qyearsley): Add test coverage. See catapult:#1346.
logging.warn('Datastore BadRequestError when getting %s',
repr(last_added_revision_keys))
return {}
return {r.key.string_id(): r.revision
for r in last_added_revision_entities if r is not None}
def _ValidateRowDict(row, test_map):
"""Checks all fields in the input dictionary.
Args:
row: A dictionary which represents one point.
test_map: A dictionary mapping test paths to last added revision.
Raises:
BadRequestError: The input was not valid.
"""
required_fields = ['master', 'bot', 'test']
for field in required_fields:
if field not in row:
raise BadRequestError('No "%s" field in row dict.' % field)
_ValidateMasterBotTest(row['master'], row['bot'], row['test'])
_ValidateRowId(row, test_map)
GetAndValidateRowProperties(row)
def _ValidateMasterBotTest(master, bot, test):
"""Validates the master, bot, and test properties of a row dict."""
# Trailing and leading slashes in the test name are ignored.
# The test name must consist of at least a test suite plus sub-test.
test = test.strip('/')
if '/' not in test:
raise BadRequestError('Test name must have more than one part.')
if len(test.split('/')) > graph_data.MAX_TEST_ANCESTORS:
raise BadRequestError('Invalid test name: %s' % test)
# The master and bot names have just one part.
if '/' in master or '/' in bot:
raise BadRequestError('Illegal slash in master or bot name.')
_ValidateTestPath('%s/%s/%s' % (master, bot, test))
def _ValidateTestPath(test_path):
"""Checks whether all the parts of the test path are valid."""
# A test with a test path length over the max key length shouldn't be
# created, since the test path is used in TestContainer keys.
if len(test_path) > _MAX_TEST_PATH_LENGTH:
raise BadRequestError('Test path too long: %s' % test_path)
# Stars are reserved for test path patterns, so they can't be used in names.
if '*' in test_path:
raise BadRequestError('Illegal asterisk in test name.')
for name in test_path.split('/'):
_ValidateTestPathPartName(name)
def _ValidateTestPathPartName(name):
"""Checks whether a Master, Bot or TestMetadata name is OK."""
# NDB Datastore doesn't allow key names to start and with "__" and "__".
if name.startswith('__') and name.endswith('__'):
raise BadRequestError(
'Invalid name: "%s". Names cannot start and end with "__".' % name)
def _ValidateRowId(row_dict, test_map):
"""Checks whether the ID for a Row is OK.
Args:
row_dict: A dictionary with new point properties, including "revision".
test_map: A dictionary mapping test paths to the last previously added
revision for each test.
Raises:
BadRequestError: The revision is not acceptable for some reason.
"""
row_id = GetAndValidateRowId(row_dict)
# Get the last added revision number for this test.
master, bot, test = row_dict['master'], row_dict['bot'], row_dict['test']
test_path = '%s/%s/%s' % (master, bot, test)
last_row_id = test_map.get(test_path)
if not last_row_id:
# Could be first point in test.
logging.warning('Test %s has no last added revision entry.', test_path)
return
allow_jump = (
master.endswith('Internal') or
(master.endswith('QA') and bot.startswith('release-tests-')))
if not _IsAcceptableRowId(row_id, last_row_id, allow_jump=allow_jump):
raise BadRequestError(
'Invalid ID (revision) %d; compared to previous ID %s, it was larger '
'or smaller by too much.' % (row_id, last_row_id))
def _IsAcceptableRowId(row_id, last_row_id, allow_jump=False):
"""Checks whether the given row id (aka revision) is not too large or small.
For each data series (i.e. TestMetadata entity), we assume that row IDs are
monotonically increasing. On a given chart, points are sorted by these
row IDs. This way, points can arrive out of order but still be shown
correctly in the chart.
However, sometimes a bot might start to use a different *type* of row ID;
for example it might change from revision numbers or build numbers to
timestamps, or from timestamps to build numbers. This causes a lot of
problems, including points being put out of order.
If a sender of data actually wants to switch to a different type of
row ID, it would be much cleaner for them to start sending it under a new
chart name.
Args:
row_id: The proposed Row entity id (usually sent as "revision")
last_row_id: The previous Row id, or None if there were none previous.
Returns:
True if acceptable, False otherwise.
"""
if last_row_id is None:
# TODO(qyearsley): Add test coverage. See catapult:#1346.
return True
if row_id <= 0:
# TODO(qyearsley): Add test coverage. See catapult:#1346.
return False
# Too big of a decrease.
if row_id < 0.5 * last_row_id:
return False
# TODO(perezju): We temporarily allow for a big jump on special cased bots,
# while we migrate from using commit position to timestamp as row id.
# The jump is only allowed into a timestamp falling within Aug-Dec 2016.
# This special casing should be removed after finishing the migration.
if allow_jump and 1470009600 < row_id < 1483228800:
return True
# Too big of an increase.
if row_id > 2 * last_row_id:
return False
return True
def GetAndValidateRowId(row_dict):
"""Returns the integer ID for a new Row.
This method is also responsible for validating the input fields related
to making the new row ID.
Args:
row_dict: A dictionary obtained from the input JSON.
Returns:
An integer row ID.
Raises:
BadRequestError: The input wasn't formatted properly.
"""
if 'revision' not in row_dict:
raise BadRequestError('Required field "revision" missing.')
try:
return int(row_dict['revision'])
except (ValueError, TypeError):
raise BadRequestError('Bad value for "revision", should be numerical.')
def GetAndValidateRowProperties(row):
"""From the object received, make a dictionary of properties for a Row.
This includes the default "value" and "error" columns as well as all
supplemental columns, but it doesn't include "revision", and it doesn't
include input fields that are properties of the parent TestMetadata, such as
"units".
This method is responsible for validating all properties that are to be
properties of the new Row.
Args:
row: A dictionary obtained from the input JSON.
Returns:
A dictionary of the properties and property values to set when creating
a Row. This will include "value" and "error" as well as all supplemental
columns.
Raises:
BadRequestError: The properties weren't formatted correctly.
"""
columns = {}
# Value and error must be floating point numbers.
if 'value' not in row:
raise BadRequestError('No "value" given.')
try:
columns['value'] = float(row['value'])
except (ValueError, TypeError):
raise BadRequestError('Bad value for "value", should be numerical.')
if 'error' in row:
try:
error = float(row['error'])
columns['error'] = error
except (ValueError, TypeError):
logging.warn('Bad value for "error".')
columns.update(_GetSupplementalColumns(row))
return columns
def _GetSupplementalColumns(row):
"""Gets a dict of supplemental columns.
If any columns are invalid, a warning is logged and they just aren't included,
but no exception is raised.
Individual rows may specify up to _MAX_NUM_COLUMNS extra data, revision,
and annotation columns. These columns must follow formatting rules for
their type. Invalid columns are dropped with an error log, but the valid
data will still be graphed.
Args:
row: A dict, possibly with the key "supplemental_columns", the value of
which should be a dict.
Returns:
A dict of valid supplemental columns.
"""
columns = {}
for (name, value) in row.get('supplemental_columns', {}).iteritems():
# Don't allow too many columns
if len(columns) == _MAX_NUM_COLUMNS:
logging.warn('Too many columns, some being dropped.')
break
value = _CheckSupplementalColumn(name, value)
if value:
columns[name] = value
return columns
def _CheckSupplementalColumn(name, value):
"""Returns a possibly modified value for a supplemental column, or None."""
# Check length of column name.
name = str(name)
if len(name) > _MAX_COLUMN_NAME_LENGTH:
logging.warn('Supplemental column name too long.')
return None
# The column name has a prefix which indicates type of value.
if name[:2] not in ('d_', 'r_', 'a_'):
logging.warn('Bad column name "%s", invalid prefix.', name)
return None
# The d_ prefix means "data column", intended to hold numbers.
if name.startswith('d_'):
try:
value = float(value)
except (ValueError, TypeError):
logging.warn('Bad value for column "%s", should be numerical.', name)
return None
# The r_ prefix means "revision", and the value should look like a number,
# a version number, or a git commit hash.
if name.startswith('r_'):
revision_patterns = [
r'^\d+$',
r'^\d+\.\d+\.\d+\.\d+$',
r'^[A-Fa-f0-9]{40}$',
]
if (not value or len(str(value)) > _STRING_COLUMN_MAX_LENGTH or
not any(re.match(p, str(value)) for p in revision_patterns)):
logging.warn('Bad value for revision column "%s".', name)
return None
value = str(value)
if name.startswith('a_'):
# Annotation column, should be a short string.
if len(str(value)) > _STRING_COLUMN_MAX_LENGTH:
logging.warn('Value for "%s" too long, max length is %d.',
name, _STRING_COLUMN_MAX_LENGTH)
return None
return value
| dashboard/dashboard/add_point.py | 28,460 | URL endpoint to post data to the dashboard.
An error indicating that a 400 response status should be returned.
Returns the integer ID for a new Row.
This method is also responsible for validating the input fields related
to making the new row ID.
Args:
row_dict: A dictionary obtained from the input JSON.
Returns:
An integer row ID.
Raises:
BadRequestError: The input wasn't formatted properly.
From the object received, make a dictionary of properties for a Row.
This includes the default "value" and "error" columns as well as all
supplemental columns, but it doesn't include "revision", and it doesn't
include input fields that are properties of the parent TestMetadata, such as
"units".
This method is responsible for validating all properties that are to be
properties of the new Row.
Args:
row: A dictionary obtained from the input JSON.
Returns:
A dictionary of the properties and property values to set when creating
a Row. This will include "value" and "error" as well as all supplemental
columns.
Raises:
BadRequestError: The properties weren't formatted correctly.
Puts tasks on queue for adding data.
Args:
data: A list of dictionaries, each of which represents one point.
Returns a possibly modified value for a supplemental column, or None.
Breaks a long list into sub-lists of a particular size.
Makes a mapping from test paths to last added revision.
Formats a Dashboard JSON dict as a list of row dicts.
For the dashboard to begin accepting the Telemetry Dashboard JSON format
as per go/telemetry-json, this function chunks a Dashboard JSON literal
into rows and passes the resulting list to _AddTasks.
Args:
dash_json_dict: A dashboard JSON v1.0 dict.
Returns:
A list of dicts, each of which represents a point.
Raises:
AssertionError: The given argument wasn't a dict.
BadRequestError: The content of the input wasn't valid.
Escapes a trace name so it can be stored in a row.
Args:
name: A string representing a name.
Returns:
An escaped version of the name.
Returns the value and measure of error from a chartjson trace dict.
Args:
trace: A dict that has one "result" from a performance test, e.g. one
"value" in a Telemetry test, with the keys "trace_type", "value", etc.
Returns:
A pair (value, error) where |value| is a float and |error| is some measure
of variance used to show error bars; |error| could be None.
Raises:
BadRequestError: Data format was invalid.
Takes a trace dict from dashboard JSON and readies it for display.
Traces can be either scalars or lists; if scalar we take the value directly;
if list we average the values and compute their standard deviation. We also
extract fields that are normally part of v0 row dicts that are uploaded
using add_point but are actually part of traces in the v1.0 format.
Args:
test_suite_name: The name of the test suite (benchmark).
chart_name: The name of the chart to which this trace belongs.
trace_name: The name of the passed trace.
trace: A trace dict extracted from a dashboard JSON chart.
is_ref: A boolean which indicates whether this trace comes from a
reference build.
tracing_links: A dictionary mapping trace names to about:tracing trace
urls in cloud storage
benchmark_description: A string documenting the benchmark suite to which
this trace belongs.
Returns:
A dict containing units, value, and error for this trace.
Raises:
BadRequestError: The data wasn't valid.
Generates the geom. mean and std. dev. for a histogram.
A histogram is a collection of numerical buckets with associated
counts; a bucket can either represent a number of instances of a single
value ('low'), or from within a range of values (in which case 'high' will
specify the upper bound). We compute the statistics by treating the
histogram analogously to a list of individual values, where the counts tell
us how many of each value there are.
Args:
histogram: A histogram dict with a list 'buckets' of buckets.
Returns:
The geometric mean and standard deviation of the given histogram.
Gets a dict of supplemental columns.
If any columns are invalid, a warning is logged and they just aren't included,
but no exception is raised.
Individual rows may specify up to _MAX_NUM_COLUMNS extra data, revision,
and annotation columns. These columns must follow formatting rules for
their type. Invalid columns are dropped with an error log, but the valid
data will still be graphed.
Args:
row: A dict, possibly with the key "supplemental_columns", the value of
which should be a dict.
Returns:
A dict of valid supplemental columns.
Converts an improvement direction string to a higher_is_better boolean.
Args:
improvement_direction_str: a string, either 'up' or 'down'.
Returns:
A boolean expressing the appropriate higher_is_better value.
Raises:
BadRequestError: if improvement_direction_str is invalid.
Checks whether the given row id (aka revision) is not too large or small.
For each data series (i.e. TestMetadata entity), we assume that row IDs are
monotonically increasing. On a given chart, points are sorted by these
row IDs. This way, points can arrive out of order but still be shown
correctly in the chart.
However, sometimes a bot might start to use a different *type* of row ID;
for example it might change from revision numbers or build numbers to
timestamps, or from timestamps to build numbers. This causes a lot of
problems, including points being put out of order.
If a sender of data actually wants to switch to a different type of
row ID, it would be much cleaner for them to start sending it under a new
chart name.
Args:
row_id: The proposed Row entity id (usually sent as "revision")
last_row_id: The previous Row id, or None if there were none previous.
Returns:
True if acceptable, False otherwise.
Produces a template for rows created from a Dashboard JSON v1.0 dict.
_DashboardJsonToRawRows adds metadata fields to every row that it creates.
These include things like master, bot, point ID, versions, and other
supplementary data. This method produces a dict containing this metadata
to which row-specific information (like value and error) can be added.
Some metadata needs to be transformed to conform to the v0 format, and this
method is also responsible for that transformation.
Some validation is deferred until after the input is converted to a list
of row dicts, since revision format correctness is checked on a per-point
basis.
Args:
dash_json_dict: A dashboard JSON v1.0 dict.
Returns:
A dict containing data to include in each row dict that is created from
|dash_json_dict|.
Extracts a test suite name from Dashboard JSON.
The dashboard JSON may contain a field "test_suite_name". If this is not
present or it is None, the dashboard will fall back to using "benchmark_name"
in the "chart_data" dict.
Validates the master, bot, and test properties of a row dict.
Checks all fields in the input dictionary.
Args:
row: A dictionary which represents one point.
test_map: A dictionary mapping test paths to last added revision.
Raises:
BadRequestError: The input was not valid.
Checks whether the ID for a Row is OK.
Args:
row_dict: A dictionary with new point properties, including "revision".
test_map: A dictionary mapping test paths to the last previously added
revision for each test.
Raises:
BadRequestError: The revision is not acceptable for some reason.
Checks whether all the parts of the test path are valid.
Checks whether a Master, Bot or TestMetadata name is OK.
Validates data parameter and add task to queue to process points.
The row data comes from a "data" parameter, which is a JSON encoding of a
list of dictionaries, each of which represents one performance result
(one point in a graph) and associated data.
[
{
"master": "ChromiumPerf",
"bot": "xp-release-dual-core",
"test": "dromaeo/dom/modify",
"revision": 123456789,
"value": 24.66,
"error": 2.33,
"units": "ms",
"supplemental_columns": {
"d_median": 24234.12,
"d_mean": 23.553,
"r_webkit": 423340,
...
},
...
},
...
]
In general, the required fields are "master", "bot", "test" (which together
form the test path which identifies the series that this point belongs to),
and "revision" and "value", which are the X and Y values for the point.
This API also supports the Dashboard JSON v1.0 format (go/telemetry-json),
the first producer of which is Telemetry. Telemetry provides lightweight
serialization of values it produces, as JSON. If a dashboard JSON object is
passed, it will be a single dict rather than a list, with the test,
value, error, and units fields replaced by a chart_data field containing a
Chart JSON dict (see design doc, and example below). Dashboard JSON v1.0 is
processed by converting it into rows (which can be viewed as Dashboard JSON
v0).
{
"master": "ChromiumPerf",
<other row fields>,
"chart_data": {
"foo": {
"bar": {
"type": "scalar",
"name": "foo.bar",
"units": "ms",
"value": 4.2,
},
"summary": {
"type": "list_of_scalar_values",
"name": "foo",
"units": "ms",
"values": [4.2, 5.7, 6.8],
"std": 1.30512,
},
},
}
Request parameters:
data: JSON encoding of a list of dictionaries.
Outputs:
Empty 200 response with if successful,
200 response with warning message if optional data is invalid,
403 response with error message if sender IP is not white-listed,
400 response with error message if required data is invalid.
500 with error message otherwise.
URL endpoint to allow Buildbot slaves to post data to the dashboard.
Copyright 2015 The Chromium Authors. All rights reserved. Use of this source code is governed by a BSD-style license that can be found in the LICENSE file. Number of rows to process per task queue task. This limits the task size and execution time (Limits: 100KB object size and 10 minutes execution time). Max length for a Row property name. Maximum length of a value for a string property. Maximum number of properties for a Row. Maximum length for a test path. This limit is required because the test path used as the string ID for TestContainer (the parent in the datastore for Row entities), and datastore imposes a maximum string ID length. TODO(qyearsley): Add test coverage. See catapult:1346. TODO(qyearsley): Add test coverage. See catapult:1346. No data to add, bail out. If any of the data was invalid, abort immediately and return an error. A Dashboard JSON dict should at least have all charts coming from the same master, bot and rev. It can contain multiple charts, however. No charts implies no data to add. Links to about:tracing traces are listed under 'trace'; if they exist copy them to a separate dictionary and delete from the chartjson so that we don't try to process them as data points. Need to do a deep copy here so we don't copy a_tracing_uri data. Telemetry may validly produce rows that represent a value of NaN. To avoid getting into messy situations with alerts, we do not add such rows to be processed. Calling get_result waits for all tasks to be added. It's possible that this is different, and maybe faster, than just calling queue.add. If there is a link to an about:tracing trace in cloud storage for this test trace_name, cache it. Something else (such as a single scalar, or string) was given. None was included or values is None; this is not an error if there is a reason. Note: This code comes originally from build/scripts/common/chromium_utils.py and was used initially for processing histogram results on the buildbot side previously. TODO(qyearsley): Add test coverage. See catapult:1346. TODO(qyearsley): Add test coverage. See catapult:1346. If improvement_direction is provided, we want to use it. Otherwise, by not providing it we'll fall back to unit-info.json TODO(eakuefner): Fail instead of falling back after fixing crbug.com/459450. TODO(qyearsley): Add test coverage. See catapult:1346. Trailing and leading slashes in the test name are ignored. The test name must consist of at least a test suite plus sub-test. The master and bot names have just one part. A test with a test path length over the max key length shouldn't be created, since the test path is used in TestContainer keys. Stars are reserved for test path patterns, so they can't be used in names. NDB Datastore doesn't allow key names to start and with "__" and "__". Get the last added revision number for this test. Could be first point in test. TODO(qyearsley): Add test coverage. See catapult:1346. TODO(qyearsley): Add test coverage. See catapult:1346. Too big of a decrease. TODO(perezju): We temporarily allow for a big jump on special cased bots, while we migrate from using commit position to timestamp as row id. The jump is only allowed into a timestamp falling within Aug-Dec 2016. This special casing should be removed after finishing the migration. Too big of an increase. Value and error must be floating point numbers. Don't allow too many columns Check length of column name. The column name has a prefix which indicates type of value. The d_ prefix means "data column", intended to hold numbers. The r_ prefix means "revision", and the value should look like a number, a version number, or a git commit hash. Annotation column, should be a short string. | 13,430 | en | 0.853517 |
import spacy
# 读取zh_core_web_md流程
nlp = spacy.load("zh_core_web_md")
# 处理文本
doc = nlp("两只老虎跑得快")
for token in doc:
print(token.text)
# 获取词符"老虎"的向量
laohu_vector = doc[2].vector
print(laohu_vector)
| exercises/zh/solution_02_09.py | 252 | 读取zh_core_web_md流程 处理文本 获取词符"老虎"的向量 | 35 | zh | 0.87396 |
'''
Second example calculation from:
Smart, S. E., & Mazziotti, D. A. (2021). Lowering tomography costs in quantum simulation
with a symmetry projected operator basis. Physical Review A, 103(1), 012420.
https://doi.org/10.1103/PhysRevA.103.012420
Here we are simuatling a noisy quantum system using a tunable noise model provided from an actual quantum device, and comparing the tomography of the 2-RDM under the default and symmetry projected techniques with the ideal 2-RDM.
'''
import numpy as np
import sys
from math import pi
import qiskit.providers.aer.noise as noise
from noise_model.deconstruct import *
from hqca.hamiltonian import *
from hqca.instructions import *
from hqca.processes import *
from hqca.acse import *
from hqca.core import *
from hqca.core.primitives import *
from pyscf import gto
from hqca.transforms import *
from functools import partial
from hqca.tools import *
from hqca.state_tomography import *
np.set_printoptions(precision=3)
import qiskit
class Ins(Instructions):
def __init__(self,coeff):
self._gates =[[(coeff,),self._test]]
def _test(self,Q,coeff):
Q.si(0)
Q.Cx(1,0)
Q.Cx(2,1)
Q.Cx(3,2)
Q.Rx(3,coeff[0])
Q.Rx(1,coeff[1])
Q.Cx(3,2)
Q.Cx(2,1)
Q.Cx(1,0)
Q.Cx(3,2)
Q.Ry(3,coeff[2])
Q.Cx(3,2)
Q.s(0)
@property
def gates(self):
return self._gates
@gates.setter
def gates(self,a):
self._gates = a
def split_matrix(rdm):
N = rdm.rdm.shape[0]
R = int(np.sqrt(N))
nn = np.zeros(rdm.rdm.shape,dtype=np.complex_)
ne = np.zeros(rdm.rdm.shape,dtype=np.complex_)
ee = np.zeros(rdm.rdm.shape,dtype=np.complex_)
for i in range(N):
p,r = i//R,i%R
for j in range(N):
q,s = j//R,j%R
ind = tuple([p,q,r,s])
if len(set(ind))==2:
nn[i,j]=rdm.rdm[i,j]
elif len(set(ind))==3:
ne[i,j]=rdm.rdm[i,j]
elif len(set(ind))==4:
ee[i,j]=rdm.rdm[i,j]
return nn,ne,ee
n = 0
# generate mol object
mol = gto.Mole()
mol.atom=[['H',(0,0,0)],['H',(2.0,0,0)]]
mol.basis='sto-3g'
mol.spin=0
mol.build()
N = []
eig = []
norm = []
ham = MolecularHamiltonian(mol,transform=JordanWigner)
st = StorageACSE(ham)
qs = QuantumStorage()
qs0 = QuantumStorage()
pr = StandardProcess()
qs0.set_algorithm(st)
# set Nq, number of shots, and error strength
Nq = 4
Ns = 8192
error = 0.0
# qs0, ideal
# qs, noisy simulated
qs0.set_backend(
backend='statevector_simulator',
Nq=Nq,
Nq_ancilla=0,
num_shots=Ns,
provider='Aer')
qs.set_algorithm(st)
# can specify provider='IBMQ' and an appropriate backend if desired
qs.set_backend(
backend='qasm_simulator',
Nq=Nq,
num_shots=Ns,
provider='Aer')
nm = model_v2(scaling=error,name='./noise_model/110220_ibmq_bogota')
qs.set_noise_model(custom=True,
noise_model=nm)
tomo = []
tomo_sim = []
coefficients = np.load('./noise_model/coefficients.npy')
# runs the tomography in sets of 5...suited for particular constraints on quantum device access
# but can be easily modified
for q in range(5):
coeffs = coefficients[q*5:q*5+5,:]
for coeff in coeffs:
print(coeff)
# run 1
tomo0 = StandardTomography(qs0,verbose=False)
tomo0.generate(real=True,imag=True,
simplify=True,transform=JordanWigner,
method='gt',strategy='lf')
ins0 = Ins(coeff)
tomo0.set(ins0)
tomo1 = StandardTomography(qs,verbose=False)
tomo1.generate(real=True,imag=True,
simplify=True,transform=JordanWigner,
method='gt',strategy='lf')
ins = Ins(coeff)
tomo1.set(ins)
tomo2 = ReducedTomography(qs,verbose=False)
tomo2.generate(real=True,imag=True,
simplify=True,transform=JordanWigner,
method='gt',strategy='lf')
ins = Ins(coeff)
tomo2.set(ins)
tomo_sim.append(tomo0)
tomo.append(tomo1)
tomo.append(tomo2)
run_multiple(tomo[q*10:(q*10+10)],qs)
run_multiple(tomo_sim[q*5:(q*5+5)],qs0)
for item in tomo:
print(item.counts['ZZZZ'])
print('Constructing..')
for t in tomo:
t.construct(processor=pr)
for t in tomo_sim:
t.construct(processor=pr)
for i in range(len(coefficients)):
print(coefficients[i,:])
tomo0 = tomo_sim[i]
tomo1 = tomo[i*2]
tomo2 = tomo[i*2+1]
st.analysis(tomo0.rdm)
st.analysis(tomo1.rdm)
st.analysis(tomo2.rdm)
tomo0.rdm.contract()
tomo1.rdm.contract()
tomo2.rdm.contract()
e0 = np.linalg.eigvalsh(tomo0.rdm.rdm)
e1 = np.linalg.eigvalsh(tomo1.rdm.rdm)
e2 = np.linalg.eigvalsh(tomo2.rdm.rdm)
d01 = tomo0.rdm-tomo1.rdm
d02 = tomo0.rdm-tomo2.rdm
d12 = tomo1.rdm-tomo2.rdm
d01.contract()
d12.contract()
d02.contract()
N01 = np.linalg.norm(d01.rdm,ord='fro')
N02 = np.linalg.norm(d02.rdm,ord='fro')
N12 = np.linalg.norm(d12.rdm,ord='fro')
print('Difference D0-D1: {}'.format(N01))
print('Difference D0-D2: {}'.format(N02))
print('Difference D1-D2: {}'.format(N12))
norm.append([N01,N02,N12])
print('--- --- --- --- --- ---')
print('Frombenius norm of D01, D02, and D12 for each run')
norm = np.asmatrix(norm)
print(norm)
print('--- --- --- --- --- ---')
print(' average (std dev)')
for i,l in zip(range(norm.shape[1]),['D01','D02','D12']):
print('{}: {:.6f} {:.6f}'.format(l,np.average(norm[:,i]),np.std(norm[:,i])))
| examples/r2021_pra_tomography/02_pra_example_2.py | 5,606 | Second example calculation from:
Smart, S. E., & Mazziotti, D. A. (2021). Lowering tomography costs in quantum simulation
with a symmetry projected operator basis. Physical Review A, 103(1), 012420.
https://doi.org/10.1103/PhysRevA.103.012420
Here we are simuatling a noisy quantum system using a tunable noise model provided from an actual quantum device, and comparing the tomography of the 2-RDM under the default and symmetry projected techniques with the ideal 2-RDM.
generate mol object set Nq, number of shots, and error strength qs0, ideal qs, noisy simulated can specify provider='IBMQ' and an appropriate backend if desired runs the tomography in sets of 5...suited for particular constraints on quantum device access but can be easily modified run 1 | 779 | en | 0.720822 |
# ----------------------------------------------------------------------------
# Imports:
# ----------------------------------------------------------------------------
from dpa.action import Action, ActionError
from dpa.ptask.action.sync import _PTaskSyncAction
from dpa.location import current_location_code
from dpa.shell.output import Style
# ----------------------------------------------------------------------------
# Classes:
# ----------------------------------------------------------------------------
class PTaskSourceAction(_PTaskSyncAction):
"""Source the contents of one ptask into another."""
# ------------------------------------------------------------------------
def execute(self):
try:
super(PTaskSourceAction, self).execute()
except ActionError as e:
raise ActionError("Unable to source ptask: " + str(e))
else:
print "\nSuccessfully sourced: ",
if self.source_version:
print Style.bright + str(self.source_version.spec) + \
Style.reset + "\n"
else:
print Style.bright + str(self.source.spec) + " [latest]" + \
Style.reset + "\n"
# ------------------------------------------------------------------------
def validate(self):
super(PTaskSourceAction, self).validate()
# ---- make sure the destination location is the current location.
cur_loc_code = current_location_code()
if self.destination_version:
dest_loc_code = self.destination_version.location_code
else:
dest_loc_code = self.destination_latest_version.location_code
if cur_loc_code != dest_loc_code:
raise ActionError("Destination location must be this location.")
| dpa/ptask/action/source.py | 1,820 | ---------------------------------------------------------------------------- Imports: ---------------------------------------------------------------------------- ---------------------------------------------------------------------------- Classes: ---------------------------------------------------------------------------- ------------------------------------------------------------------------ ------------------------------------------------------------------------ ---- make sure the destination location is the current location. | 536 | en | 0.216276 |
"""
Module: 'collections' on esp32 1.11.0
"""
# MCU: (sysname='esp32', nodename='esp32', release='1.11.0', version='v1.11 on 2019-05-29', machine='ESP32 module with ESP32')
# Stubber: 1.3.2
class OrderedDict:
''
def clear():
pass
def copy():
pass
def fromkeys():
pass
def get():
pass
def items():
pass
def keys():
pass
def pop():
pass
def popitem():
pass
def setdefault():
pass
def update():
pass
def values():
pass
class deque:
''
def append():
pass
def popleft():
pass
def namedtuple():
pass
| stubs/micropython-esp32-1_11/collections.py | 678 | Module: 'collections' on esp32 1.11.0
MCU: (sysname='esp32', nodename='esp32', release='1.11.0', version='v1.11 on 2019-05-29', machine='ESP32 module with ESP32') Stubber: 1.3.2 | 179 | en | 0.166619 |
import os
import lcd
from Maix import GPIO
from board import board_info
from fpioa_manager import fm
# import uos
S_IFDIR = 0o040000 # directory
# noinspection PyPep8Naming
def S_IFMT(mode):
"""Return the portion of the file's mode that describes the
file type.
"""
return mode & 0o170000
# noinspection PyPep8Naming
def S_ISDIR(mode):
"""Return True if mode is from a directory."""
return S_IFMT(mode) == S_IFDIR
def sizeof_fmt(num, suffix='B'):
for unit in ['', 'K', 'M', 'G', 'T', 'P', 'E', 'Z']:
if abs(num) < 1024.0:
return "%3.1f%s%s" % (num, unit, suffix)
num /= 1024.0
return "%.1f%s%s" % (num, 'Yi', suffix)
class ExplorerApp:
def __init__(self):
self.current_offset = 0
self.current_selected_index = 0
self.__initialized = False
self.is_dirty = True;
def __lazy_init(self):
self.current_dir_files = os.listdir("/sd/")
print(self.current_dir_files)
self.__initialized = True
def on_top_button_changed(self, state):
if state == "pressed":
print("pressed")
self.current_selected_index += 1
if self.current_selected_index >= len(self.current_dir_files):
self.current_selected_index = 0
if self.current_selected_index >= 7:
self.current_offset = self.current_selected_index - 6
else:
self.current_offset = 0
print("current_selected=", self.current_selected_index,
"current_offset=", self.current_offset)
self.is_dirty = True
def on_draw(self):
self.is_dirty = False
if not self.__initialized:
self.__lazy_init()
x_offset = 4
y_offset = 6
lcd.clear()
for i in range(self.current_offset, len(self.current_dir_files)):
# gc.collect()
file_name = self.current_dir_files[i]
print(file_name)
try:
f_stat = os.stat('/sd/' + file_name)
if S_ISDIR(f_stat[0]):
file_name = file_name + '/'
# gc.collect()
file_readable_size = sizeof_fmt(f_stat[6])
lcd.draw_string(lcd.width() - 50, y_offset,
file_readable_size, lcd.WHITE, lcd.BLUE)
except Exception as e:
print("-------------------->", e)
is_current = self.current_selected_index == i
line = "%s %d %s" % ("->" if is_current else " ", i, file_name)
lcd.draw_string(x_offset, y_offset, line, lcd.WHITE, lcd.RED)
# gc.collect()
y_offset += 18
if y_offset > lcd.height():
print(y_offset, lcd.height(), "y_offset > height(), break")
break
lcd.init()
lcd.rotation(2) # Rotate the lcd 180deg
def test_irq(gpio, pin_num=None):
value = gpio.value()
state = "released" if value else "pressed"
print("key", gpio, state)
global app, key1, key2
if gpio is key2:
app.on_top_button_changed(state)
fm.register(board_info.BUTTON_A, fm.fpioa.GPIOHS21)
fm.register(board_info.BUTTON_B, fm.fpioa.GPIOHS22)
# fm.register(board_info.BUTTON_A, fm.fpioa.GPIOHS21, force=True)
key1=GPIO(GPIO.GPIOHS21, GPIO.IN, GPIO.PULL_UP)
key2=GPIO(GPIO.GPIOHS22, GPIO.IN, GPIO.PULL_UP)
key1.irq(test_irq, GPIO.IRQ_BOTH, GPIO.WAKEUP_NOT_SUPPORT, 7)
key2.irq(test_irq, GPIO.IRQ_BOTH, GPIO.WAKEUP_NOT_SUPPORT, 7)
app = ExplorerApp()
while True:
if app.is_dirty:
app.on_draw()
time.sleep_ms(1)
else:
time.sleep_ms(100)
| others/explorer_standalone.py | 3,683 | Return the portion of the file's mode that describes the
file type.
Return True if mode is from a directory.
import uos directory noinspection PyPep8Naming noinspection PyPep8Naming gc.collect() gc.collect() gc.collect() Rotate the lcd 180deg fm.register(board_info.BUTTON_A, fm.fpioa.GPIOHS21, force=True) | 309 | en | 0.365742 |
from tytus.parser.team21.Analisis_Ascendente.Instrucciones.Expresiones.Expresion import Expresion
from tytus.parser.team21.Analisis_Ascendente.Instrucciones.expresion import Primitivo
from tytus.parser.team21.Analisis_Ascendente.Instrucciones.instruccion import Instruccion
from tytus.parser.team21.Analisis_Ascendente.storageManager.jsonMode import *
import tytus.parser.team21.Analisis_Ascendente.Tabla_simbolos.TablaSimbolos as TS
from datetime import date,datetime
todoBien = True
#INSERT INTO
class InsertInto(Instruccion):
def __init__(self,caso, id, listaId, values,fila,columna):
self.caso=caso
self.id = id
self.listaId = listaId
self.values = values
self.fila = fila
self.columna = columna
def ejecutar(insertinto,ts,consola,exceptions):
if ts.validar_sim("usedatabase1234") == 1:
# nombre de la bd
bdactual = ts.buscar_sim("usedatabase1234")
# se busca el simbolo y por lo tanto se pide el entorno de la bd
BD = ts.buscar_sim(bdactual.valor)
entornoBD = BD.Entorno
dataainsertar =[]
if entornoBD.validar_sim(insertinto.id) == 1:
simbolo_tabla = entornoBD.buscar_sim(insertinto.id)
entornoTabla = simbolo_tabla.Entorno
indices_a_buscar=[]
if insertinto.caso==1:
print("caso1")
for data in insertinto.listaId:
contador = 1
for columna in entornoTabla.simbolos:
if data.id == columna:
indices_a_buscar.append(contador)
break
contador=contador+1
print(indices_a_buscar)
lista = entornoTabla.simbolos
contador = 1
for columna in lista:
if not contador in indices_a_buscar:
print("((((((((((((((((((((((((((((((((((((((")
if "NOTNULL" in lista.get(columna).valor:
global todoBien
todoBien = False
consola.append(f"Error esta columna no puede ser nula {columna}")
break
else:
todoBien = True
contador=contador+1
for data in insertinto.listaId:
if entornoTabla.validar_sim(data.id)==-1:
consola.append(f"Error no hay coincidencia de ids en {data.id}")
todoBien = False
for data in insertinto.values:
print("val :",data.valor)
if todoBien:
contadoraux= 1
i = 0
todobien = True
for data in entornoTabla.simbolos:
if contadoraux in indices_a_buscar:
todobien = comprobar_tipos(dataainsertar, i, insertinto.values, data, entornoTabla.simbolos,
entornoTabla, consola, exceptions, BD, simbolo_tabla,ts)
i = i + 1
else:
dataainsertar.append(str(None))
if not todobien:
consola.append("No se insertaron los datos, columnas inconsistentes")
todobien = False
break
contadoraux =contadoraux+1
if todobien:
insert(BD.id, simbolo_tabla.id, dataainsertar)
consola.append(f"insert en la tabla {insertinto.id}, exitoso\n")
else:
consola.append(f"Campos insconsistentes")
consola.append(f"insert en la tabla {insertinto.id}, exitoso\n")
else:
consola.append(f"datos dectectados como no nulos")
todoBien=True
else:
print("caso 2")
if len(insertinto.values) == len(entornoTabla.simbolos):
i =0
todobien = True
for data in entornoTabla.simbolos:
todobien = comprobar_tipos(dataainsertar,i,insertinto.values,data,entornoTabla.simbolos,entornoTabla,consola,exceptions,BD,simbolo_tabla,ts)
if not todobien:
consola.append("No se insertaron los datos, columnas inconsistentes")
todobien= False
break
i=i+1
if todobien:
insert(BD.id,simbolo_tabla.id,dataainsertar)
consola.append(f"insert en la tabla {insertinto.id}, exitoso\n")
else:
consola.append(f"Campos insconsistentes")
else:
consola.append(f"La cantidad de columnas esperadas es de {len(entornoTabla.simbolos)} para insersar en tabla {insertinto.id}")
exceptions.append(f"Error semantico-22023-invalid_parameter_value -{insertinto.fila}-{insertinto.columna}")
else:
consola.append(f"42P01 undefined_table, no existe la tabla {insertinto.id}")
exceptions.append(f"Error semantico-42P01- 42P01 undefined_table, no existe la tabla {insertinto.id}-fila-columna")
else:
consola.append("42P12 invalid_database_definition, Error al insertar\n")
consola.append("22005 error_in_assignment, No se ha seleccionado una BD\n")
exceptions.append("Error semantico-22005 error_in_assignment-No se ha seleccionado DB-fila-columna")
def comprobar_tipos(datainsertar,index,lista_valores,campo,lista_tabla,ts,Consola,exception,bd,tabla,globall):
print("estoy aqui !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!")
todobien = False
# print(lista_valores[index].valor)
# print(date.fromisoformat(lista_valores[index].valor))
# print(isinstance(date.fromisoformat(lista_valores[index].valor),date))
# print('DATE' in str(lista_tabla.get(campo).tipo).upper())
datafinal = None
if isinstance(lista_valores[index],Instruccion):
datafinal = Expresion.Resolver(lista_valores[index],ts,Consola,exception)
datainsertar.append(datafinal)
else:
datafinal = lista_valores[index].valor
datainsertar.append(datafinal)
print(datafinal)
if isinstance(datafinal,int) and 'INTEGER' in str(lista_tabla.get(campo).tipo).upper():
todobien = True
todobien = comprobarcheck(lista_tabla.get(campo).Entorno,1,datafinal,lista_tabla.get(campo).id,ts,Consola,exception)
todobien = comprobar_caracteristicas(lista_tabla.get(campo).valor,datafinal,Consola,exception,bd,tabla,index)
elif isinstance(datafinal,float) and 'DOUBLE' in str(lista_tabla.get(campo).tipo).upper() or 'DECIMAL' in str(lista_tabla.get(campo).tipo).upper():
todobien = True
todobien = comprobarcheck(lista_tabla.get(campo).Entorno,1,datafinal,lista_tabla.get(campo).id,ts,Consola,exception)
todobien = comprobar_caracteristicas(lista_tabla.get(campo).valor, datafinal, Consola, exception, bd, tabla, index)
elif str(datafinal).upper() == 'TRUE' or str(datafinal).upper() == 'FALSE' and 'BOOLEAN' in str(lista_tabla.get(campo).tipo).upper():
todobien = True
todobien = comprobarcheck(lista_tabla.get(campo).Entorno,1,datafinal,lista_tabla.get(campo).id,ts,Consola,exception)
todobien = comprobar_caracteristicas(lista_tabla.get(campo).valor, datafinal, Consola, exception, bd, tabla,
index)
elif isinstance(datafinal,str) and 'TEXT' in str(lista_tabla.get(campo).tipo).upper():
todobien = True
todobien = comprobarcheck(lista_tabla.get(campo).Entorno,1,datafinal,lista_tabla.get(campo).id,ts,Consola,exception)
todobien = comprobar_caracteristicas(lista_tabla.get(campo).valor, datafinal, Consola, exception, bd, tabla,
index)
elif isinstance(str(datafinal),str) and 'VARCHAR' in str(lista_tabla.get(campo).tipo).upper() or 'CHARACTERVARYING' in str(lista_tabla.get(campo).tipo).upper() or 'CHARACTER' in str(lista_tabla.get(campo).tipo).upper() or 'CHAR' in str(lista_tabla.get(campo).tipo).upper():
todobien = True
cantidad = str(lista_tabla.get(campo).tipo).split("-")[1]
if len(str(datafinal)) <= int(cantidad):
todobien = True
todobien = comprobarcheck(lista_tabla.get(campo).Entorno,1,str(datafinal),lista_tabla.get(campo).id,ts,Consola,exception)
todobien = comprobar_caracteristicas(lista_tabla.get(campo).valor, datafinal, Consola, exception, bd, tabla,
index)
else:
todobien = False
elif isinstance(datafinal,float) and 'MONEY' in str(lista_tabla.get(campo).tipo).upper():
todobien = True
todobien = comprobarcheck(lista_tabla.get(campo).Entorno,1,datafinal,lista_tabla.get(campo).id,ts,Consola,exception)
todobien = comprobar_caracteristicas(lista_tabla.get(campo).valor, datafinal, Consola, exception, bd, tabla,
index)
elif isinstance(datafinal,int) and 'MONEY' in str(lista_tabla.get(campo).tipo).upper():
todobien = True
try:
todobien = comprobarcheck(lista_tabla.get(campo).Entorno,1,datafinal,lista_tabla.get(campo).id,ts,Consola,exception)
todobien = comprobar_caracteristicas(lista_tabla.get(campo).valor, datafinal, Consola, exception, bd, tabla,
index)
except:
todobien = False
elif 'DATE' in str(lista_tabla.get(campo).tipo).upper():
try:
#todobien= isinstance(date.fromisoformat(str(datafinal)), date)
todobien = comprobarcheck(lista_tabla.get(campo).Entorno, 1, datafinal, lista_tabla.get(campo).id, ts,Consola, exception)
todobien = comprobar_caracteristicas(lista_tabla.get(campo).valor, datafinal, Consola, exception, bd,
tabla, index)
except:
print("error de tipo")
todobien = False
else:
try:
print("%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%5")
print(lista_tabla.get(campo).tipo)
for data in globall.simbolos:
print(":: ",data)
if globall.validar_sim(str(lista_tabla.get(campo).tipo).lower()) == 1:
print("$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$4")
for data in ts.simbolos:
print(";;; ",data)
simbolo_enumo = globall.buscar_sim(str(lista_tabla.get(campo).tipo).lower())
if datafinal in simbolo_enumo.valor:
todobien = True
Consola.append("!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!11")
else:
print("no encotrado")
except:
todobien= False
return todobien
def comprobarcheck(expresion,data,valor,nombre_columna,ts,Consola,exception):
valor_retorno=True
print("que pedo",data)
#if data == 1:
print("-> ",expresion)
if expresion != None:
for datos in expresion:
dataiz = datos.iz
datade = datos.dr
operador= datos.operador
if nombre_columna != dataiz.id:
valor_retorno=False
break
valor_retorno = Expresion.Resolver(Expresion(Primitivo(valor,1,1),datade,operador,1,1),ts,Consola,exception)
return valor_retorno
def comprobar_caracteristicas(tipo_caracteristica,data,Consola,Exception,nombre_bd,nombre_tabla,posicion):
devolver=True
print("->>>>>",tipo_caracteristica)
if tipo_caracteristica != None:
print("aqui estamos")
for caracteristica in tipo_caracteristica:
print(caracteristica)
if "NOTNULL" in str(caracteristica):
if data == None:
Consola.append("Dato encontrado con not null, debe llevar un valor")
devolver=False
break
elif "UNIQUE" in str(caracteristica) or "PRIMARYKEY" in str(caracteristica):
print(nombre_bd.id,nombre_tabla.id)
datas = extractTable(nombre_bd.id,nombre_tabla.id)
print("unique or primary -> ",posicion)
for fila in datas:
if str(fila[posicion])== str(data):
devolver= False
Consola.append("Constraint unique active")
print(fila[posicion])
print(data)
return devolver | parser/team21/Analisis_Ascendente/Instrucciones/Insert/insert.py | 13,476 | INSERT INTO nombre de la bd se busca el simbolo y por lo tanto se pide el entorno de la bd print(lista_valores[index].valor) print(date.fromisoformat(lista_valores[index].valor)) print(isinstance(date.fromisoformat(lista_valores[index].valor),date)) print('DATE' in str(lista_tabla.get(campo).tipo).upper())todobien= isinstance(date.fromisoformat(str(datafinal)), date)if data == 1: | 394 | es | 0.362164 |
from django.db import models
from osoba.models import ServiseID, Company
from django.utils.translation import gettext as _
class SHPK(models.Model):
name = models.CharField(max_length=512, verbose_name=_('Name'))
short_name = models.CharField(max_length=512, verbose_name=_('Short name'))
def __str__(self):
return self.name[:50]
class Meta:
verbose_name = _('SHPK')
verbose_name_plural = _('SHPK')
class ZvannyaName(models.Model):
zv_id = models.AutoField(primary_key=True)
zv_name = models.TextField()
zv_short_name = models.CharField(max_length=20)
def __str__(self):
return '{}__{}'.format(self.zv_id, self.zv_name)
class Meta:
managed = False
db_table = 'zvannya_name'
class Staff(models.Model):
# Штатка
#порядковий номер в штатці
unicum = models.PositiveBigIntegerField(verbose_name= _('Unic number'), blank=True)
company = models.ForeignKey(Company, on_delete=models.CASCADE, blank=True, verbose_name= _('Company'))
name = models.CharField(max_length=512, verbose_name=_('Name'))
shpk = models.ForeignKey(SHPK, on_delete=models.CASCADE, blank=True, verbose_name= _('shpk'))
ocoba = models.ForeignKey(ServiseID, on_delete=models.CASCADE, blank=True, verbose_name= _('ocoba'), null=True)
vos = models.CharField(max_length=512, verbose_name= _('VOS'))
poz = models.CharField(max_length=512, verbose_name= _('pozyvnyy'), blank=True)
salary = models.PositiveBigIntegerField(verbose_name= _('salary'), blank=True)
tariff_category = models.PositiveBigIntegerField(verbose_name= _('tariff category'), blank=True)
vacant = models.BooleanField(verbose_name= _('Vacant'), blank=True, null=True, default=True)
def __str__(self):
return self.name[:50]
class Meta:
verbose_name = _('Staff')
verbose_name_plural = _('Staff')
class Adresa(models.Model):
adr_id = models.AutoField(primary_key=True)
adr_n_id = models.IntegerField()
adresa = models.CharField(max_length=360)
class Meta:
managed = False
db_table = 'adresa'
class Nakaz(models.Model):
nak_id = models.AutoField(primary_key=True)
nak_n_id = models.IntegerField()
nak_status_id = models.IntegerField()
zvidky = models.IntegerField()
kudy = models.IntegerField()
nak_data = models.DateField( blank=True, null=True)
nak_nomer = models.IntegerField()
povern = models.DateField( blank=True, null=True)
tmp = models.IntegerField()
class Meta:
managed = False
db_table = 'nakaz'
class NakazNomer(models.Model):
n_nak_id = models.AutoField(primary_key=True)
n_nak_data = models.DateField( blank=True, null=True)
n_nak_nomer = models.IntegerField()
class Meta:
managed = False
db_table = 'nakaz_nomer'
class NakazPlace(models.Model):
nak_place_id = models.AutoField(primary_key=True)
nak_place_name = models.CharField(max_length=120)
class Meta:
managed = False
db_table = 'nakaz_place'
class PosadaName(models.Model):
pos_id = models.AutoField(primary_key=True)
pos_name = models.TextField()
def __str__(self):
return '{}__{}'.format(self.pos_id, self.pos_name[:50])
class Meta:
managed = False
db_table = 'posada_name'
class PidrozdilName(models.Model):
p_id = models.AutoField(primary_key=True)
por_nomer = models.IntegerField()
p_name = models.TextField()
p_short_name = models.CharField(max_length=32)
p_full_name = models.CharField(max_length=200)
active = models.IntegerField()
def __str__(self):
return '{}__{}'.format(self.p_id, self.p_name[:50])
class Meta:
managed = False
db_table = 'pidrozdil_name'
class Shtatka(models.Model):
pos_id = models.AutoField(primary_key=True)
p = models.ForeignKey(PidrozdilName, to_field='p_id', on_delete=models.PROTECT, related_name='+' )
sh = models.ForeignKey(PosadaName, to_field='pos_id', on_delete=models.PROTECT, related_name='+' )
zv_sh = models.ForeignKey(ZvannyaName, to_field='zv_id', on_delete=models.PROTECT, related_name='+' )
dopusk = models.CharField(max_length=1)
vos = models.CharField(max_length=12)
oklad = models.CharField(max_length=12)
vidsotok = models.IntegerField()
nomer_kniga = models.IntegerField()
class Meta:
managed = False
db_table = 'shtatka'
def __str__(self):
return '{}__{}'.format(self.pos_id, self.sh)
class OsvitaName(models.Model):
osv_name_id = models.AutoField(primary_key=True)
osv_name = models.CharField(max_length=100)
def __str__(self):
return '{}__{}'.format(self.osv_name_id, self.osv_name)
class Meta:
managed = False
db_table = 'osvita_name'
class SimStanName(models.Model):
s_stan_name_id = models.AutoField(primary_key=True)
s_stan_name = models.CharField(max_length=30)
def __str__(self):
return '{}__{}'.format(self.s_stan_name_id, self.s_stan_name)
class Meta:
managed = False
db_table = 'sim_stan_name'
class StatsName(models.Model):
s_stats_name_id = models.AutoField(primary_key=True)
s_stats_name = models.CharField(max_length=1)
def __str__(self):
return '{}__{}'.format(self.s_stats_name_id, self.s_stats_name)
class Meta:
managed = False
db_table = 'stats_name'
class StatusName(models.Model):
s_id = models.AutoField(primary_key=True)
s_name = models.CharField(max_length=128)
def __str__(self):
return '{}__{}'.format(self.s_id, self.s_name)
class Meta:
managed = False
db_table = 'status_name'
class Name(models.Model):
n_id= models.AutoField(primary_key=True)
name = models.TextField()
short_name = models.TextField()
pseudo = models.CharField(max_length=128)
zv = models.ForeignKey(ZvannyaName, to_field='zv_id', on_delete=models.PROTECT, related_name='+' )
data_zv = models.CharField(max_length=100)
pos = models.ForeignKey(Shtatka, to_field='pos_id', on_delete=models.PROTECT, related_name='+' )
pos_id_old = models.IntegerField(null=True, blank=True)
p_id = models.IntegerField() #wtf?
kontr = models.IntegerField(null=True, blank=True)
data_narod = models.DateField( blank=True, null=True)
adresa_nar = models.CharField(max_length=200)
data_mob = models.DateField( blank=True, null=True)
vijskomat = models.CharField(max_length=100)
data_zarah = models.DateField( blank=True, null=True)
nomer_nakazu_ok = models.CharField(max_length=10)
data_nakazu_ok = models.DateField( blank=True, null=True)
chiy_nakaz = models.CharField(max_length=50)
kontrakt = models.DateField( blank=True, null=True)
kontrakt_strok = models.CharField(max_length=50)
kontrakt_zak = models.DateField( blank=True, null=True)
nomer_nakazu = models.IntegerField()#wtf?
data_zviln = models.DateField( blank=True, null=True)
nomer_nakazu_zviln = models.IntegerField()#wtf?
nomer_pasp = models.CharField(max_length=100)
code_nomer = models.CharField(max_length=10)
voen_nomer = models.CharField(max_length=25)
grupa_krovi = models.CharField(max_length=15)
osvita = models.ForeignKey(OsvitaName, to_field='osv_name_id', on_delete=models.PROTECT, related_name='+' )
specialnist = models.CharField(max_length=500)
zvp = models.CharField(max_length=100)
fahova = models.CharField(max_length=100)
liderstvo = models.CharField(max_length=100)
perem = models.CharField(max_length=50)
persh_kontr = models.CharField(max_length=50)
ozdor = models.CharField(max_length=50)
mdspp = models.CharField(max_length=50)
sim_stan = models.ForeignKey(SimStanName, to_field='s_stan_name_id', on_delete=models.PROTECT, related_name='+' )
stats = models.ForeignKey(StatsName, to_field='s_stats_name_id', on_delete=models.PROTECT, related_name='+' )
status = models.ForeignKey(StatusName, to_field='s_id', on_delete=models.PROTECT, related_name='+' )
status2 = models.IntegerField()
notes = models.TextField()
notes1 = models.TextField()
def __str__(self):
return self.name[:50]
class Meta:
managed = False
db_table = 'name'
class Peremish(models.Model):
perem_id = models.AutoField(primary_key=True)
perem_n_id = models.IntegerField()
perem_status_id = models.IntegerField()
zvidky = models.IntegerField()
kudy = models.IntegerField()
perem_data = models.DateField( blank=True, null=True)
nakaz_id = models.IntegerField()
povern = models.DateField( blank=True, null=True)
class Meta:
managed = False
db_table = 'peremish'
class Phones(models.Model):
ph_id = models.AutoField(primary_key=True)
n_id = models.IntegerField()
ph_nomer = models.TextField()
class Meta:
managed = False
db_table = 'phones'
class PidrozdilId(models.Model):
p_id = models.AutoField(primary_key=True)
p_parent_id = models.IntegerField()
isparent = models.IntegerField()
class Meta:
managed = False
db_table = 'pidrozdil_id'
class Priznach(models.Model):
prizn_id = models.AutoField(primary_key=True)
prizn_n_id = models.IntegerField()
prizn_data = models.DateField( blank=True, null=True)
prizn_pos_id = models.IntegerField()
class Meta:
managed = False
db_table = 'priznach'
class PriznachOld(models.Model):
prizn_id = models.AutoField(primary_key=True)
prizn_n_id = models.IntegerField()
prizn_data = models.DateField( blank=True, null=True)
prizn_pos_id = models.IntegerField()
class Meta:
managed = False
db_table = 'priznach_old'
class PriznachOld2(models.Model):
prizn_id = models.AutoField(primary_key=True)
prizn_n_id = models.IntegerField()
prizn_data = models.DateField( blank=True, null=True)
prizn_pos_id = models.IntegerField()
class Meta:
managed = False
db_table = 'priznach_old_2'
class Ridny(models.Model):
rid_id = models.AutoField(primary_key=True)
rid_n_id = models.IntegerField()
rid_name_id = models.IntegerField()
rid_name = models.CharField(max_length=200)
rid_data_nar = models.DateField( blank=True, null=True)
rid_phone = models.IntegerField()
rid_notes = models.CharField(max_length=500)
class Meta:
managed = False
db_table = 'ridny'
class RidnyName(models.Model):
rid_name_id = models.AutoField(primary_key=True)
rid_name_name = models.CharField(max_length=50)
class Meta:
managed = False
db_table = 'ridny_name'
class ShtatkaOld(models.Model):
pos_id = models.AutoField(primary_key=True)
p_id = models.IntegerField()
sh_id = models.IntegerField()
zv_sh_id = models.IntegerField()
vos = models.CharField(max_length=12)
oklad = models.CharField(max_length=12)
nomer_kniga = models.IntegerField()
class Meta:
managed = False
db_table = 'shtatka_old'
class ShtatkaOld2(models.Model):
pos_id = models.AutoField(primary_key=True)
p_id = models.IntegerField()
sh_id = models.IntegerField()
zv_sh_id = models.IntegerField()
vos = models.CharField(max_length=12)
oklad = models.CharField(max_length=12)
nomer_kniga = models.IntegerField()
class Meta:
managed = False
db_table = 'shtatka_old_2'
class Table32(models.Model):
col_1 = models.CharField(db_column='COL 1', max_length=10, blank=True, null=True) # Field name made lowercase. Field renamed to remove unsuitable characters.
col_2 = models.IntegerField(db_column='COL 2', blank=True, null=True) # Field name made lowercase. Field renamed to remove unsuitable characters.
class Meta:
managed = False
db_table = 'table 32'
class Table35(models.Model):
col_1 = models.CharField(db_column='COL 1', max_length=10, blank=True, null=True) # Field name made lowercase. Field renamed to remove unsuitable characters.
col_2 = models.IntegerField(db_column='COL 2', blank=True, null=True) # Field name made lowercase. Field renamed to remove unsuitable characters.
class Meta:
managed = False
db_table = 'table 35'
class Temp(models.Model):
number_1 = models.IntegerField(db_column='1') # Field renamed because it wasn't a valid Python identifier.
number_2 = models.TextField(db_column='2') # Field renamed because it wasn't a valid Python identifier.
class Meta:
managed = False
db_table = 'temp'
class Tmp(models.Model):
number_1 = models.IntegerField(db_column='1') # Field renamed because it wasn't a valid Python identifier.
number_2 = models.TextField(db_column='2') # Field renamed because it wasn't a valid Python identifier.
class Meta:
managed = False
db_table = 'tmp'
class Vysluga(models.Model):
vys_id = models.AutoField(primary_key=True)
vys_n_id = models.IntegerField()
vys_data_mob = models.DateField( blank=True, null=True)
vys_data_zvil = models.DateField( blank=True, null=True)
class Meta:
managed = False
db_table = 'vysluga'
class VyslugaNormy(models.Model):
rokiv = models.IntegerField()
nadbavka = models.IntegerField()
class Meta:
managed = False
db_table = 'vysluga_normy'
class VyslugaZv(models.Model):
vys_zv_id = models.AutoField(primary_key=True)
vys_zv_n_id = models.IntegerField()
data_zv = models.DateField( blank=True, null=True)
class Meta:
managed = False
db_table = 'vysluga_zv'
class ZbrStatusName(models.Model):
zbr_status_id = models.AutoField(primary_key=True)
zbr_status_name = models.CharField(max_length=20)
class Meta:
managed = False
db_table = 'zbr_status_name'
class Zbroya(models.Model):
zbr_id = models.AutoField(primary_key=True)
zbr_type = models.IntegerField()
nomer = models.CharField(max_length=128)
n_id = models.IntegerField()
magazin = models.IntegerField()
zbr_status = models.IntegerField()
zbr_note = models.CharField(max_length=256)
class Meta:
managed = False
db_table = 'zbroya'
class ZbroyaAll(models.Model):
zbr_type = models.IntegerField()
nomer = models.CharField(max_length=128)
rota = models.IntegerField()
class Meta:
managed = False
db_table = 'zbroya_all'
class ZbroyaName(models.Model):
zbr_id = models.AutoField(primary_key=True)
zbr_name = models.CharField(max_length=128)
class Meta:
managed = False
db_table = 'zbroya_name'
class ZbroyaSklad(models.Model):
zbr_type = models.IntegerField()
nomer = models.CharField(max_length=256)
class Meta:
managed = False
db_table = 'zbroya_sklad'
class ZvGrupaName(models.Model):
zv_gr_id = models.AutoField(primary_key=True)
zv_gr_name = models.CharField(max_length=20)
class Meta:
managed = False
db_table = 'zv_grupa_name'
class ZvannyaId(models.Model):
zv_id = models.IntegerField(unique=True)
zv_gr_id = models.IntegerField()
zv_okl = models.IntegerField()
class Meta:
managed = False
db_table = 'zvannya_id'
class ZvilnComent(models.Model):
zv_com_id = models.AutoField(primary_key=True)
zv_com_n_id = models.IntegerField()
zv_coment = models.CharField(max_length=500)
class Meta:
managed = False
db_table = 'zviln_coment'
class Kontrakt(models.Model):
kontrakt_com_id = models.AutoField(primary_key=True)
kontrakt_com_n = models.ForeignKey(Name, to_field='n_id', on_delete=models.PROTECT, related_name='+')# models.IntegerField()
kontrakt_date = models.DateField( blank=True, null=True)
kontrakt_srok = models.IntegerField()
kontrakt_zak = models.DateField( blank=True, null=True)
class Meta:
managed = False
db_table = 'kontrakt'
| zampol/staff/models.py | 16,122 | Штаткапорядковий номер в штатціwtf?wtf?wtf? Field name made lowercase. Field renamed to remove unsuitable characters. Field name made lowercase. Field renamed to remove unsuitable characters. Field name made lowercase. Field renamed to remove unsuitable characters. Field name made lowercase. Field renamed to remove unsuitable characters. Field renamed because it wasn't a valid Python identifier. Field renamed because it wasn't a valid Python identifier. Field renamed because it wasn't a valid Python identifier. Field renamed because it wasn't a valid Python identifier. models.IntegerField() | 597 | en | 0.788995 |
# -*- coding: utf-8 -*-
"""Define general test helper attributes and utilities."""
import os
import sys
TRAVIS=os.getenv("TRAVIS_PYTHON_VERSION") is not None
PYTHON_VERSION = "%s.%s" % (sys.version_info.major, sys.version_info.minor)
TMP_DIR="/tmp"
| tests/test_helper.py | 250 | Define general test helper attributes and utilities.
-*- coding: utf-8 -*- | 76 | en | 0.700675 |
import ipdb
import medis.speckle_nulling.sn_hardware as hardware
import medis.speckle_nulling.sn_preprocessing as pre
import numpy as np
import os
import astropy.io.fits as pf
import medis.speckle_nulling.sn_filehandling as flh
from configobj import ConfigObj
def build_median(imagelist, outputfile = None):
"""Takes a list of image paths and builds a median image"""
first = True
for image in imagelist:
hdulist= pf.open(image)
data = pre.combine_quadrants(hdulist)
#data= hdulist[0].data
#data = pre.combine_quadrants(data)
#filesused.append(image+'; ')
if first:
imcube = data[:,:,np.newaxis]
first = False
else:
np.concatenate((imcube, data[:,:,np.newaxis]), axis=2)
hdulist.close()
medimage = np.median(imcube, axis=2)
if outputfile is not None:
print "Writing median image to "+outputfile
strfiles = [x+'; ' for x in imagelist]
strfilesused = ("Files used to create master image: "+
''.join(strfiles))
flh.writeout(medimage, outputfile,
comment = strfilesused)
return medimage
def build_master_flat(mfminusmd, badpix=None,
kernelsize = 9,
outputfile = 'masterflat.fits',
removezeros = True):
"""removes bad pixels from a background subtracted master flat"""
im1 = pre.removebadpix(mfminusmd, badpix, kernelsize=kernelsize)
ans = im1/np.mean(im1)
if removezeros:
ans=pre.removebadpix(ans, ans==0, kernelsize = kernelsize)
flh.writeout(ans, outputfile)
return ans
def build_master_dark(rawdark, badpix = None, outputfile='masterdark.fits'):
ans=pre.removebadpix(rawdark, badpix)
flh.writeout(ans, outputfile)
return ans
def build_badpixmask(image,
method='gaussfit',outputfile = 'badpix.fits'):
if method == 'gaussfit':
masterbadpixelmask = pre.locate_badpix(image, sigmaclip = 2.5)
print "Writing badpix image to "+outputfile
flh.writeout(masterbadpixelmask, outputfile)
return masterbadpixelmask
if __name__ == "__main__":
hardwareconfigfile = 'speckle_instruments.ini'
configfilename = 'speckle_null_config.ini'
config = ConfigObj(configfilename)
bgdconfig= config['BACKGROUNDS_CAL']
outputdir = config['BACKGROUNDS_CAL']['dir']
pharo = hardware.PHARO_COM('PHARO',
configfile = hardwareconfigfile)
print ("\n\n\n\nThis script is meant to tell PHARO to take a bunch of backgrounds,\n darks, and flats, then assemble them into the correctly formatted 1024x1024 region \nthat we care about, and place them in the following directory:")
print config['BACKGROUNDS_CAL']['dir']
print ('\n\n\n\nIf this script does not work, my advice would be to bypass it completely and do it manually take some flats, backgrounds and darks. \nAssemble them yourselves (see sn_preprocessing.py), \nparticularly combine_quadrants, locate_badpix, and save them as masterflat.fits, masterdark.fits, badpix.fits in the same directory mentioned above')
filetypes = ['backgrounds',
'flats', 'flatdarks']
for ftype in filetypes:
imnames = []
commandstring = ("\n\n\n\nSet up Pharo to the configurations to take "+ftype.upper()+" then hit any key. ")
s= raw_input(commandstring)
if ftype == 'backgrounds':
for i in range(int(config['BACKGROUNDS_CAL']['N'])):
fname = pharo.take_src_return_imagename(
exptime = bgdconfig['bgdtime'])
imnames.append(fname)
print ftype.upper()+" taken so far: "
print imnames
background = build_median(imnames,
outputfile = os.path.join(outputdir, 'medbackground.fits'))
ipdb.set_trace()
if ftype == 'flats':
for i in range(int(config['BACKGROUNDS_CAL']['N'])):
fname = pharo.take_src_return_imagename(
exptime = bgdconfig['flattime'])
imnames.append(fname)
print ftype.upper()+" taken so far: "
print imnames
med_flat = build_median(imnames, outputfile=os.path.join(outputdir, 'medflat.fits'))
#XXXX fix flat fielding
if ftype == 'flatdarks':
for i in range(int(config['BACKGROUNDS_CAL']['N'])):
fname = pharo.take_src_return_imagename(
exptime = bgdconfig['flattime'])
imnames.append(fname)
print ftype.upper()+" taken so far: "
print imnames
med_flatdark = build_median(imnames, outputfile=os.path.join(outputdir, 'medflatdark.fits'))
bp = build_badpixmask(med_flatdark,
outputfile = os.path.join(outputdir,'badpix.fits'))
#bp = build_badpixmask(targ_bkgd-cal_bkgd,
# outputfile = os.path.join(outputdir,'badpix.fits'))
mf = build_master_flat(med_flat-med_flatdark, badpix=bp,
outputfile = os.path.join(outputdir,'masterflat.fits'))
| medis/speckle_nulling/take_flats_and_darks_old.py | 5,251 | data= hdulist[0].datadata = pre.combine_quadrants(data)filesused.append(image+'; ')XXXX fix flat fieldingbp = build_badpixmask(targ_bkgd-cal_bkgd, outputfile = os.path.join(outputdir,'badpix.fits')) | 204 | en | 0.335031 |
class Solution:
def canPlaceFlowers(self, flowerbed, n: int) -> bool:
# Even with an empty list, the maximum amount we can place
# is len(flowerbed) // 2 (+ 1 if odd, +0 if even)
length = len(flowerbed)
if n > (length // 2) + 1 * (length & 1):
return False
# bail early. fits everywhere.
if n == 0:
return True
# bail early if [0].
if length == 1:
return flowerbed[0] == 0
# Start counting from 2 pos if [_, 0, ...]
if flowerbed[1] == 0:
# but decrement n if [0, 0, ...]
if flowerbed[0] == 0:
n -= 1
if n == 0:
return True
i = 2
# Start counting from 3rd pos if [_, 1, ...]
else:
i = 3
# Go through the flower bed and check adjacent positions.
while i < length:
# if available, check adjacent.
if flowerbed[i] == 0:
j, k = i - 1, i + 1
# previous is 0, check next and jump appropriately.
if flowerbed[j] == 0:
if k < length:
if flowerbed[k] == 0:
n -= 1
else:
# jump over and go two steps over
# to try that slot.
i += 3
continue
elif k == length:
return n <= 1
if n == 0:
return True
# go two positions over. Either we filled it or it was
# already a one.
i += 2
return n == 0
| Python/Algorithms/605.py | 1,734 | Even with an empty list, the maximum amount we can place is len(flowerbed) // 2 (+ 1 if odd, +0 if even) bail early. fits everywhere. bail early if [0]. Start counting from 2 pos if [_, 0, ...] but decrement n if [0, 0, ...] Start counting from 3rd pos if [_, 1, ...] Go through the flower bed and check adjacent positions. if available, check adjacent. previous is 0, check next and jump appropriately. jump over and go two steps over to try that slot. go two positions over. Either we filled it or it was already a one. | 521 | en | 0.896816 |
def belong(in_list1: list, in_list2: list) -> list:
"""
Check wheter or not all the element in list in_list1 belong into in_list2
:param in_list1: the source list
:param in_list2: the target list where to find the element in in_list1
:return: return True if the statement is verified otherwise return False
"""
return all(element in in_list2 for element in in_list1)
if __name__ == "__main__":
print(belong([1,2,3,4],[4,5,6,5,7,0,4,2,3])) | Python/List/14.belong.py | 471 | Check wheter or not all the element in list in_list1 belong into in_list2
:param in_list1: the source list
:param in_list2: the target list where to find the element in in_list1
:return: return True if the statement is verified otherwise return False | 250 | en | 0.533722 |
#
# PySNMP MIB module CISCO-WAN-FR-CONN-MIB (http://snmplabs.com/pysmi)
# ASN.1 source file:///Users/davwang4/Dev/mibs.snmplabs.com/asn1/CISCO-WAN-FR-CONN-MIB
# Produced by pysmi-0.3.4 at Wed May 1 12:20:25 2019
# On host DAVWANG4-M-1475 platform Darwin version 18.5.0 by user davwang4
# Using Python version 3.7.3 (default, Mar 27 2019, 09:23:15)
#
OctetString, ObjectIdentifier, Integer = mibBuilder.importSymbols("ASN1", "OctetString", "ObjectIdentifier", "Integer")
NamedValues, = mibBuilder.importSymbols("ASN1-ENUMERATION", "NamedValues")
ConstraintsIntersection, ValueRangeConstraint, ConstraintsUnion, ValueSizeConstraint, SingleValueConstraint = mibBuilder.importSymbols("ASN1-REFINEMENT", "ConstraintsIntersection", "ValueRangeConstraint", "ConstraintsUnion", "ValueSizeConstraint", "SingleValueConstraint")
frameRelay, frChan = mibBuilder.importSymbols("BASIS-MIB", "frameRelay", "frChan")
ciscoWan, = mibBuilder.importSymbols("CISCOWAN-SMI", "ciscoWan")
ModuleCompliance, ObjectGroup, NotificationGroup = mibBuilder.importSymbols("SNMPv2-CONF", "ModuleCompliance", "ObjectGroup", "NotificationGroup")
Gauge32, MibScalar, MibTable, MibTableRow, MibTableColumn, Bits, IpAddress, iso, TimeTicks, ModuleIdentity, Counter64, ObjectIdentity, Unsigned32, MibIdentifier, Integer32, NotificationType, Counter32 = mibBuilder.importSymbols("SNMPv2-SMI", "Gauge32", "MibScalar", "MibTable", "MibTableRow", "MibTableColumn", "Bits", "IpAddress", "iso", "TimeTicks", "ModuleIdentity", "Counter64", "ObjectIdentity", "Unsigned32", "MibIdentifier", "Integer32", "NotificationType", "Counter32")
TruthValue, TextualConvention, DisplayString = mibBuilder.importSymbols("SNMPv2-TC", "TruthValue", "TextualConvention", "DisplayString")
ciscoWanFrConnMIB = ModuleIdentity((1, 3, 6, 1, 4, 1, 351, 150, 47))
ciscoWanFrConnMIB.setRevisions(('2002-09-18 00:00',))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
if mibBuilder.loadTexts: ciscoWanFrConnMIB.setRevisionsDescriptions(('Initial version of the MIB. The content of this MIB was originally available in CISCO-WAN-AXIPOP-MIB defined using SMIv1. The applicable objects from CISCO-WAN-AXIPOP-MIB are defined using SMIv2 in this MIB. Also the descriptions of some of the objects have been modified.',))
if mibBuilder.loadTexts: ciscoWanFrConnMIB.setLastUpdated('200209180000Z')
if mibBuilder.loadTexts: ciscoWanFrConnMIB.setOrganization('Cisco Systems, Inc.')
if mibBuilder.loadTexts: ciscoWanFrConnMIB.setContactInfo(' Cisco Systems Customer Service Postal: 170 W Tasman Drive San Jose, CA 95134 USA Tel: +1 800 553-NETS E-mail: cs-wanatm@cisco.com')
if mibBuilder.loadTexts: ciscoWanFrConnMIB.setDescription("The MIB module to configure the Frame Relay connection configuration. Terminologies Used: SIW - Frame-Relay-to ATM Service Interworking. In SIW, the ATM port connected to a frame relay port does not need to be aware that it is connected to an interworking function. This is explained in document FRF.8. NIW - Frame-Relay-to ATM Network Interworking. In NIW, the ATM port connected to a frame relay port does need to be aware that it is connected to an interworking function. PVC - Permanent Virtual Circuit OR Permanent Virtual Connection A frame relay logical link, whose endpoints and class of service are defined by network management. A PVC consists of the originating frame relay network element address, originating DLCI, terminating frame relay network element address and terminating DLCI. This is controlled by PAR(Portable Auto Route) controller. SPVC - Soft Permanent Virtual Circuits. This is a PVC controlled by PNNI Controller. Frame Relay PVC/SPVC end-point/Channel is referred to as frame Relay connection in this MIB. Traffic shaping parameters: CIR, EIR, Bc, Be, DE, Tc, AR corresponding to rate of the physical interface. CIR - Committed Information Rate. This is the rate of traffic that the PVC will support as 'comitted' traffic. The committed rate(in bits per second) at which the ingress access interface trunk interfaces, and egress access interface of a frame relay network transfer information to the destination frame relay end system under normal conditions. The rate is averaged over a minimum time interval Tc. AR - Access Rate The maximum number of bits per second that an end station can transmit into the network is bounded by the acess rate of the user-network interface. The line speed of the user network connection limits the access rate. Bc - Committed Burst Size The maximum amount of data(in bits) that the network agrees to transfer, under normal conditions during a time interval Tc. The data is in bytes in the current implementation. Be - Excess Burst Size The maximum amount of uncommitted data(in bits) in excess of BC that a frame relay network can attempt to deliver during a time interval Tc. This data generally is delivered with a low probability than Bc. The network treats Be data as discard eligible. The data is in bytes in the current implementation. Tc - The committed rate measurement interval. The time interval during which the user can send only BC committed amount of data and BE excess amount of data. EIR - Excess Information Rate This is the bandwidth in excess of CIR the PVC will be allowed to burst on a a given PVC. The average rate at which excess traffic is to be policed. This number is computed based on Bc, Be, CIR and Tc. DE - Discard Eligibility Frame Forwarding Port: Frame Forwarding Ports are identified by portType = frame-forward(3). NOTE: The objects related to frame relay ports are available in ifTable,if ifTable is implemented in service module/card. Following Service Modules support ifTable: FRSM-12 ")
frChanCnfGrp = MibIdentifier((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1))
frChanCnfGrpTable = MibTable((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1), )
if mibBuilder.loadTexts: frChanCnfGrpTable.setStatus('current')
if mibBuilder.loadTexts: frChanCnfGrpTable.setDescription('This table is for configuring connection parameters for frame relay connections.')
frChanCnfGrpEntry = MibTableRow((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1), ).setIndexNames((0, "CISCO-WAN-FR-CONN-MIB", "chanNum"))
if mibBuilder.loadTexts: frChanCnfGrpEntry.setStatus('current')
if mibBuilder.loadTexts: frChanCnfGrpEntry.setDescription('An entry for each frame relay connection.')
chanNum = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 1), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 2147483647))).setMaxAccess("readonly")
if mibBuilder.loadTexts: chanNum.setStatus('current')
if mibBuilder.loadTexts: chanNum.setDescription('The value of this object identifies the frame relay connection/channel index. Note that the actual range of the index supported by a card depends on the type of card. Supported Range for different Card Types: FRSM-4T1/E1 : Range is 16..271 (256 entries) FRSM-8T1/E1 : Range is 16..1015 (1000 entries) FRSM-T3/E3/HS2/ /HS2B-HSSI/T3B/E3B : Range is 16..2015 (2000 entries) FRSM-2CT3/HS2B-12IN1: Range is 16..4015 (4000 entries) For FRSM12 Card : Range is 16..16015 for Lower 16 bits Upper 16 bits contain Chassis Number and logical slot number.')
chanRowStatus = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 2), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3, 4))).clone(namedValues=NamedValues(("add", 1), ("del", 2), ("mod", 3), ("outOfService", 4)))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: chanRowStatus.setStatus('current')
if mibBuilder.loadTexts: chanRowStatus.setDescription('This object is used for adding/modifying/deleting the channel. add(1) : For adding the frame relay connections. delete(2): For deleting frame relay connections. mod(3) : For Modifying frame relay connections. This is also used for uping the connection. outOfService(4) : Bring the Frame relay connection down.')
chanPortNum = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 3), Integer32()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: chanPortNum.setStatus('current')
if mibBuilder.loadTexts: chanPortNum.setDescription("This object refers to the frame relay port on which channel is created. This is a mandatory object for creating the channel. For FRSM12 Card: This object contains the port's ifIndex value. ")
dLCI = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 4), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 8388607))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: dLCI.setStatus('current')
if mibBuilder.loadTexts: dLCI.setDescription("The value of this object is the DLCI number of the channel. This is a mandatory object for creating the channel. All the connections on the same port should have a unique DLCI number. Note that if we are adding a channel to a port that has LMI signalling enabled, then we can not use DLCI number 0(ANNEX A & D) and 1023(STRATA LMI). The value of this object can be only 1000 if the portType = frame-forward(3) on which the frame relay connection is being created. That is, only one Frame Relay Connection can be created on a Frame Forwarding Port. For portHeaderLen = twoOctets(1) following restrictions apply. Range supported is '0..1023' DLCI values 0,1007, 1023 can not be used. For portHeaderLen = fourOctets(2) following restrictions apply. Range supported is '0..8388607' DLCI values 0,8257535 can not be used. ")
egressQSelect = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 5), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3))).clone(namedValues=NamedValues(("highPriority", 1), ("lowPriority", 2), ("notSupported", 3))).clone('lowPriority')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: egressQSelect.setStatus('current')
if mibBuilder.loadTexts: egressQSelect.setDescription('Selects one out of two possible port queues. The default port queue number is 1 which is the high pririty queue. 1 = High priority queue 2 = Low priority queue 3 = Indicates that this entry is not used (eg: in FRSM-VHS, chanServType indicates the channel service type and would determine the queue to which the channel gets mapped) For FRSM12 Card: This object is used to select between the two ATM-COS queues in the egress direction. ')
ingressQDepth = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 6), Integer32().subtype(subtypeSpec=ValueRangeConstraint(4510, 2097151)).clone(65535)).setMaxAccess("readwrite")
if mibBuilder.loadTexts: ingressQDepth.setStatus('current')
if mibBuilder.loadTexts: ingressQDepth.setDescription("This variable sets the max depth for queue, before it starts dropping the cells. It is defined in terms of number of bytes. In all cards except the FRSM-VHS card, the range is limited to (4510..'ffff'h). ingressQDepth should be greater than ingressQECNThresh and ingressQDEThresh For FRSM12 Card: Not Supported ")
ingressQECNThresh = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 7), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 2097151)).clone(6553)).setMaxAccess("readwrite")
if mibBuilder.loadTexts: ingressQECNThresh.setStatus('current')
if mibBuilder.loadTexts: ingressQECNThresh.setDescription("This variable sets the max depth for queue, before it starts flow control. It is defined in terms of number of bytes. In all cards except the FRSM-VHS card, the range is limited to (0..'ffff'h). For FRSM12 Card: Not Supported ")
ingressQDEThresh = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 8), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 2097151)).clone(32767)).setMaxAccess("readwrite")
if mibBuilder.loadTexts: ingressQDEThresh.setStatus('current')
if mibBuilder.loadTexts: ingressQDEThresh.setDescription("This variable sets the max depth for queue, before they become discard eligible. It is defined in terms of number of bytes. In all cards except the FRSM-VHS card, the range is limited to (0..'ffff'h). For FRSM12 Card: Not Supported ")
egressQDepth = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 9), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 2097151)).clone(65535)).setMaxAccess("readwrite")
if mibBuilder.loadTexts: egressQDepth.setStatus('current')
if mibBuilder.loadTexts: egressQDepth.setDescription("This variable sets the max depth for queue, before it starts dropping the cells. It is defined in terms of number of bytes. In all cards except the FRSM-VHS card, the range is limited to (0..'ffff'h). egressQDepth should be greater than egressQDEThresh and egressQECNThresh For FRSM12 Card: Not Supported ")
egressQDEThresh = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 10), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 2097151)).clone(32767)).setMaxAccess("readwrite")
if mibBuilder.loadTexts: egressQDEThresh.setStatus('current')
if mibBuilder.loadTexts: egressQDEThresh.setDescription("This variable sets the max depth for queue, before they become discard eligible. It is defined in terms of number of bytes. In all cards except the FRSM-VHS card, the range is limited to (0..'ffff'h). For FRSM12 Card: Not Supported ")
egressQECNThresh = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 11), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 2097151)).clone(6553)).setMaxAccess("readwrite")
if mibBuilder.loadTexts: egressQECNThresh.setStatus('current')
if mibBuilder.loadTexts: egressQECNThresh.setDescription("This variable sets the max depth for queue, before it starts flow control. It is defined in terms of number of bytes. In all cards except the FRSM-VHS card, the range is limited to (0..'ffff'h). For FRSM12 Card: Not Supported ")
deTaggingEnable = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 12), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("enable", 1), ("disable", 2))).clone('disable')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: deTaggingEnable.setStatus('current')
if mibBuilder.loadTexts: deTaggingEnable.setDescription('This object enables/disables the DE tagging. The tagging is enabled only in the ingress direction. For FRSM12 Card: When this object is disabled, the ingress policer will never set the DE bit to 1 in the Frame Relay frames even if the incoming frame exceeds the Bc bucket. ')
cir = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 13), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 52000000)).clone(2400)).setUnits('bps').setMaxAccess("readwrite")
if mibBuilder.loadTexts: cir.setStatus('current')
if mibBuilder.loadTexts: cir.setDescription('The value of this object is equal to the CIR parameter for this frame relay connection. The CIR value have to be less than or equal to the port speed. Any value from 1 to 2399 will be rounded off to 2400. Range supported for different interfaces/card: For E1 interface : Range is 0..2048000 For T1 interface : Range is 0..1536000 For E3 interface : Range is 0..34368000 For T3 interface : Range is 0..44736000 For HSSI : Range is 0..52000000 For FRSM-2CT3 : Range is 0..1536000 For FRSM-HS2B-12IN1: Range is 0..10240000 The CIR value can be 0 only for chanServType = uBR(5). ')
bc = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 14), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 2097151)).clone(5100)).setUnits('bytes').setMaxAccess("readwrite")
if mibBuilder.loadTexts: bc.setStatus('current')
if mibBuilder.loadTexts: bc.setDescription('The value of this object is equal to the committed burst size(BC) parameter for this PVC endpoint. The value of bc can not be 0 when cir is non zero. The value of bc has to be 0 if cir is 0. The peak value for bc in FRSM-VHS cards is (2^21 -1), i.e. 2097151 and for all other cards, it is 65535. For FRSM-VHS cards, the relation between CIR and Bc should be such that Tc is always less than 512 seconds. ')
be = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 15), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 2097151)).clone(5100)).setUnits('bytes').setMaxAccess("readwrite")
if mibBuilder.loadTexts: be.setStatus('current')
if mibBuilder.loadTexts: be.setDescription('The value of this object is euqal to the excess burst size(Be) parameter for this PVC endpoint. The value be can not be 0 when cir is 0. The peak value for be : For FRSM-VHS and FRSM12 cards is (2^21 -1), i.e. 2097151 and For all other cards, it is 65535. For FRSM-VHS cards, setting the value of 2091751 will cause the policing to be disabled. ')
ibs = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 16), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 2097151)).clone(100)).setMaxAccess("readwrite")
if mibBuilder.loadTexts: ibs.setStatus('current')
if mibBuilder.loadTexts: ibs.setDescription('The value of this object is euqal to the excess burst size(Be) parameter for this PVC endpoint. The value of ibs should be less or equal to bc when cir is greater than 0. The value of ibs has to be 0 when cir is 0. The peak value for ibs in FRSM-VHS cards is (2^21 -1), i.e. 2097151 and for all other cards, it is 65535. For FRSM12 Card: Not Supported ')
foreSightEnable = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 17), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("enable", 1), ("disable", 2))).clone('disable')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: foreSightEnable.setStatus('current')
if mibBuilder.loadTexts: foreSightEnable.setDescription('This variable enables/disables foreSight option. Following objects can be modified only when this object is set to enable(1): qir, mir, pir The RATE CONTROL FEATURE has to be ON in order to enable foresight and also modify its parameter. For FRSM12 Card: Not Supported ')
qir = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 18), Integer32().subtype(subtypeSpec=ValueRangeConstraint(160, 6400000)).clone(160)).setUnits('fastpackets-per-second').setMaxAccess("readwrite")
if mibBuilder.loadTexts: qir.setStatus('current')
if mibBuilder.loadTexts: qir.setDescription('The value of this object is euqal to the quiescent information rate for Foresight. The unit is 1 Cell/Sec = 16 fastpackets/sec. Following information about cps is for reference only: The peak value for qir in FRSM-VHS cards is 285714 cps and for all other cards, it is 10000 cps. For FRSM-VHS cards, cell will be the ATM cell (48 byte payload). For FRSM12 Card: Not Supported')
mir = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 19), Integer32().subtype(subtypeSpec=ValueRangeConstraint(160, 6400000)).clone(160)).setUnits('fastpackets-per-second').setMaxAccess("readwrite")
if mibBuilder.loadTexts: mir.setStatus('current')
if mibBuilder.loadTexts: mir.setDescription('The value of this object is euqal to the minimum information rate for Foresight. The unit is 1 Cell/Sec = 16 fastpackets/sec. is equal to 16 fastpackets/sec. Following information about cps is for reference only: The peak value for qir in FRSM-VHS cards is 285714 cps and for all other cards, it is 10000 cps. For FRSM-VHS cards, cell will be the ATM cell (48 byte payload). For FRSM12 Card: Not Supported ')
pir = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 20), Integer32().subtype(subtypeSpec=ValueRangeConstraint(160, 6400000)).clone(160)).setUnits('fastpackets-per-second').setMaxAccess("readwrite")
if mibBuilder.loadTexts: pir.setStatus('current')
if mibBuilder.loadTexts: pir.setDescription('The value of this object is euqal to the peak information rate for Foresight. The unit is 1 Cell/Sec = 16 fastpackets/sec. is equal to 16 fastpackets/sec. Following information about cps is for reference only: The peak value for qir in FRSM-VHS cards is 285714 cps and for all other cards, it is 10000 cps. For FRSM-VHS cards, cell will be the ATM cell (48 byte payload). For FRSM12 Card: Not Supported ')
chanLocRmtLpbkState = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 21), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("enable", 1), ("disable", 2))).clone('disable')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: chanLocRmtLpbkState.setStatus('current')
if mibBuilder.loadTexts: chanLocRmtLpbkState.setDescription('This variable enables or disables the remote loopback for each channel. When you enable this option on a connection (channel) then all the cells that are coming from the network side would be looped back toward the network and all the frames coming from the user side would be dropped. This channel remote loopback has nothing to do with the chanTestType option, each one does a different function. For example, the channel remote loopback is used for looping the data toward the network and if this connection is terminated on an IPX then they can put a test equipment and measure some of the characteristics of the network. For FRSM12 Card: Not Supported ')
chanTestType = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 22), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3))).clone(namedValues=NamedValues(("testcon", 1), ("testdelay", 2), ("notest", 3))).clone('notest')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: chanTestType.setStatus('current')
if mibBuilder.loadTexts: chanTestType.setDescription('The chanTestType starts testing the continuity or delay of a connection. It sends specific cell patterns toward the network and the terminating end of this connection has to be an MGX8220 or ASI of a BPX in order for this test to be working. The receiving node would loop back when it receives these cells. The test should be done in about couple of seconds. The testcon tests the continuity of the connection and testdelay uses the same test except that it measures for delay through the network. To test the delay follow this procedure: a- set chanTestType to testdelay b- read chanTestState till it is Pass or Fail c- Read chanRTDResult for the delay if it is Pass *Note that the chanTestType would go back to notest when the test is completed To test the continuity follow this procedure: a- set chanTestType to testcon b- read chanTestState till it is Pass or Fail *Note that the chanTestType would go back to notest when the test is completed You CAN NOT select 2 tests back to back, you have to select one and wait for the result and then start the other one. SYNTAX When you select testdelay This is the type of the test 1 = Test Continuity 2 = Test Delay 3 = No Test ')
chanTestState = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 23), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3, 4))).clone(namedValues=NamedValues(("passed", 1), ("failed", 2), ("inprogress", 3), ("notinprogress", 4))).clone('notinprogress')).setMaxAccess("readonly")
if mibBuilder.loadTexts: chanTestState.setStatus('current')
if mibBuilder.loadTexts: chanTestState.setDescription('This shows the state of the test When you add a connection then the chanTestState becomes notinprogress and when you select any test, it would go to inprogress state and after it completes the test, it will go to failed or passed state. 1 = Passed 2 = Failed 3 = In Progress 4 = Not In Progress ')
chanRTDResult = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 24), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 65535)).clone(65535)).setMaxAccess("readonly")
if mibBuilder.loadTexts: chanRTDResult.setStatus('current')
if mibBuilder.loadTexts: chanRTDResult.setDescription('This is round trip delay in milliseconds. When you select testdelay option for the chanTestType, the result of the test that is measured in milliseconds can be read in chanRTDResult. ')
chanType = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 25), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3, 4, 5, 6))).clone(namedValues=NamedValues(("frNIW", 1), ("frSIW-transparent", 2), ("frSIW-translate", 3), ("frFUNI", 4), ("frForward", 5), ("frNIWReplace", 6)))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: chanType.setReference('FRF.8')
if mibBuilder.loadTexts: chanType.setStatus('current')
if mibBuilder.loadTexts: chanType.setDescription("The value of this object is used for setting the channel type of a frame relay connection. If set with values frSIW-transparent(2) and frSIW-translate(3), all PVC data is subject to service interworking translation and mapping in both Frame Relay-to-ATM and ATM-to-Frame relay directions. The possible values are : frNIW(1) : Frame-Relay-to ATM Network Interworking(NIW-unicast). The traffic crosses the network as ATM Cells. frSIW-transparent(2): Service InterWorking with out any SDU translation. In transparent mode, the service module does not translate. frSIW-translate(3) : Service InterWorking with SDU translation. In translation mode, service module translates protocol between the FR NLPID encapsulation(RFC 1490) and ATM LCC encapsulation(RFC 1483). Translation mode support includes address resolution by transforming address resolution protocol (ARP, RFC 826) and inverse ARP(RFC 1293) between the frame relay and ATM Formats. frFUNI(4) : Frame based UNI: mode-1a which is ALL5. frForward(5) : frame forwarding. Frame forwarding operates same as standard frame relay except: * 2 byte Q.922 header is not assumed or interpreted. * All frames received are mapped to a specific connection if it exists. Otherwise the frames are dropped. * No DE/CLP or FECN/EFCI mapping is performed. * 'llegal Header count' and 'invalid DLCI' statistics are not kept/applicable. frNIWReplace(6) : Frame Relay network interworking with DLCI in FR-SSCS(Frame Relay Specific Convergence Sublayer)PDU always set to 1022. ")
chanFECNconfig = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 26), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("mapEFCI", 1), ("setEFCIzero", 2)))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: chanFECNconfig.setReference('FRF.8, section 4.3.1.1')
if mibBuilder.loadTexts: chanFECNconfig.setStatus('current')
if mibBuilder.loadTexts: chanFECNconfig.setDescription('The value of this object specifies how to map from FECN field in the frame Relay PDU to the EFCI field in the ATM cells. This object does not apply to NIW. This is applicable only for SIW. mapEFCI(1) : Maps the FECN bits in frame-relay to EFCI bit in the ATM cells. This value is valid only for SIW. setEFCIzero(2): Set EFCI = 0. Do not map FECN to EFCI.')
chanDEtoCLPmap = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 27), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3))).clone(namedValues=NamedValues(("mapCLP", 1), ("setCLPzero", 2), ("setCLPone", 3)))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: chanDEtoCLPmap.setReference('FRF.5, FRF.8')
if mibBuilder.loadTexts: chanDEtoCLPmap.setStatus('current')
if mibBuilder.loadTexts: chanDEtoCLPmap.setDescription('The value of this object specifies how to map from DE bit on the Frame Relay side to CLP bit on the ATM side. mapCLP(1) : Map DE bit to CLP bit in ATM cell. setCLPzero(2) : Ignore DE bit. Set CLP to 0. setCLPone(3) : Ignore DE bit. Set CLP to 1. For FRSM12 Card: Should not be mapCLP for chanType of frForward. ')
chanCLPtoDEmap = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 28), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3, 4))).clone(namedValues=NamedValues(("mapDE", 1), ("setDEzero", 2), ("setDEone", 3), ("ignoreCLP", 4)))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: chanCLPtoDEmap.setReference('FRF.8, section 4.2.2 FRF.5, section 4.4.2')
if mibBuilder.loadTexts: chanCLPtoDEmap.setStatus('current')
if mibBuilder.loadTexts: chanCLPtoDEmap.setDescription('The value of this object enables mapping of Cell Loss Priority(CLP) bit on the ATM Side to Discard Eligibility(DE) bit on the Frame relay side. The possible values are : mapDE(1) : Map CLP bit to DE bit. Valid for SIW and NIW. setDEzero(2) : Ignore CLP. Set DE bit to 0. Valid for SIW. setDEone(3) : Ignore CLP. Set DE bit to 1. Valid for SIW. ignoreCLP(4) : Ignore CLP. No change in receieved DE bit. Valid for NIW. For FRSM12 Card: Should be ignoreCLP for chanType of frForward. Should not be setDEzero/setDEone for chanType of frNIW and frNIWReplace. Should not be ignoreCLP for chanType of frSIW-transparent and frSIW-translate. ')
chanIngrPercentUtil = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 29), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 100)).clone(100)).setUnits('percentage').setMaxAccess("readwrite")
if mibBuilder.loadTexts: chanIngrPercentUtil.setStatus('current')
if mibBuilder.loadTexts: chanIngrPercentUtil.setDescription('The ingress utilization on a frame relay connection.')
chanEgrPercentUtil = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 30), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 100)).clone(100)).setUnits('percentage').setMaxAccess("readwrite")
if mibBuilder.loadTexts: chanEgrPercentUtil.setStatus('current')
if mibBuilder.loadTexts: chanEgrPercentUtil.setDescription('The egress utilization on a frame relay connection.')
chanEgrSrvRate = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 31), Integer32().subtype(subtypeSpec=ValueRangeConstraint(2400, 52000000)).clone(2400)).setMaxAccess("readwrite")
if mibBuilder.loadTexts: chanEgrSrvRate.setStatus('current')
if mibBuilder.loadTexts: chanEgrSrvRate.setDescription('The value of this object identifies egress CIR value for a frame relay connection. The value of this object must be less than or equal(<=) to the port speed. The value supported depends upon the interface and service module(card) type. For E1 Service Module : Range is 2400..2048000 For T1 Service Module : Range is 2400..1536000 2CT3 Module : For E3 Service Module : Range is 2400..34368000 For T3 Service Module : Range is 2400..44736000 For HSSI Service Module : Range is 2400..52000000 For FRSM12 Card: This object is used only for CAC and the range will be same as the range for cir object. The Maximum value is 44736000m. ')
chanOvrSubOvrRide = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 32), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("disable", 1), ("enable", 2))).clone('disable')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: chanOvrSubOvrRide.setStatus('current')
if mibBuilder.loadTexts: chanOvrSubOvrRide.setDescription('The value of this object enables/disables the oversubscription on a connection. This object allows one to add a new connection on a port even if it is over subscribed. For FRSM12 Card: Not Supported.')
chanFrConnType = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 33), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3, 4, 5, 6))).clone(namedValues=NamedValues(("pvc", 1), ("svc", 2), ("spvc", 3), ("par", 4), ("pnni", 5), ("tag", 6))).clone('pvc')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: chanFrConnType.setStatus('current')
if mibBuilder.loadTexts: chanFrConnType.setDescription('The value of this object is used for configuring connection type of a frame relay connection. The possible values are : pvc(1) : Permanent Virtual Connection svc(2) : Switched Virtual Connection spvc(3) : Soft PVC. par(4) : Portable Auto Route Connection. Valid only for trunk connection pnni(5) : PNNI Connection Valid only for trunk connection tag(6) : Tag/MPLS Connection Valid only for trunk connection For FRSM12 Card: Not Supported.')
frCDRNumber = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 34), Integer32()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: frCDRNumber.setStatus('current')
if mibBuilder.loadTexts: frCDRNumber.setDescription('The value of this object identifies the CDR(Call Detail Record) number. This is the key to correlate cell/frame counts, start/end record. For FRSM12 Card: Not Supported ')
frLocalVpi = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 35), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 65535))).setMaxAccess("readonly")
if mibBuilder.loadTexts: frLocalVpi.setStatus('current')
if mibBuilder.loadTexts: frLocalVpi.setDescription('The value of this object provides the VPI value for the local endpoint. This object in conjunction with frLocalVci and frLocalNSAP represents the local end point of this connection. The service module sets this to value 0.')
frLocalVci = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 36), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 65535))).setMaxAccess("readonly")
if mibBuilder.loadTexts: frLocalVci.setStatus('current')
if mibBuilder.loadTexts: frLocalVci.setDescription("The value of this object provides the VCI value for the local endpoint. This object in conjunction with frLocalVpi and frLocalNSAP represents the local end point of this connection. The service module assigns this value specified in object 'dLCI'.")
frLocalNSAP = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 37), OctetString().subtype(subtypeSpec=ValueSizeConstraint(0, 20))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: frLocalNSAP.setStatus('current')
if mibBuilder.loadTexts: frLocalNSAP.setDescription('The value of this object identifies the NSAP address of the frame relay connection. The value of this object follows the format: Prefix : 13 Bytes Cisco ID : 2 bytes Reserved : 1 byte Slot Number : 1 byte Port Number : 2 bytes ESL : 1 byte For FRSM12 Card: This object will have the NSAP format as required by the PNNI controller ')
frRemoteVpi = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 38), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 65535))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: frRemoteVpi.setStatus('current')
if mibBuilder.loadTexts: frRemoteVpi.setDescription('The value of this object identifies the VPI value of remote end point of this connection. The frRemoteVpi, frRemoteVci and frRemoteNSAP identifies the remote end point of this connection.')
frRemoteVci = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 39), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 65535))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: frRemoteVci.setStatus('current')
if mibBuilder.loadTexts: frRemoteVci.setDescription('The value of this object identifies the VCI value of remote end point of this connection. The frRemoteVpi, frRemoteVci and frRemoteNSAP identifies the remote end point of this connection.')
frRemoteNSAP = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 40), OctetString().subtype(subtypeSpec=ValueSizeConstraint(0, 20))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: frRemoteNSAP.setStatus('current')
if mibBuilder.loadTexts: frRemoteNSAP.setDescription('The value of this object identifies the NSAP address of the frame relay connection. The value of this object follows the format: Prefix : 13 Bytes Cisco ID : 2 bytes Reserved : 1 byte Slot Number : 1 byte Port Number : 2 bytes ESL : 1 byte.')
frMastership = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 41), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3))).clone(namedValues=NamedValues(("master", 1), ("slave", 2), ("unknown", 3)))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: frMastership.setStatus('current')
if mibBuilder.loadTexts: frMastership.setDescription(' This is used by PXM to determine if this end point is master or slave, a new type unknown is added to identify the SM in MGX8220 shelf and the SM in MGX shelf. In AXIS shelf, user can still use addchan to add a channel without specifying X/Y/P parameters. But in MGX shelf, if the user uses addchan without X/Y/P set (based on this object being set to type 3 unknown), SPM on PXM will reject the request. It must be supplied in connection setup request. In the feeder mode, this is always set to master. ')
frVpcFlag = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 42), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("vpc", 1), ("vcc", 2))).clone('vcc')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: frVpcFlag.setStatus('current')
if mibBuilder.loadTexts: frVpcFlag.setDescription(" This represents the connection type, used for PXM to identify VPC/VCC but FRSM card doesn't use it always set to vcc for FRSM card For FRSM12 Card: For chanFrConnType = pnni(5), this object is set to vcc(2) always.")
frConnServiceType = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 43), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3, 4, 5, 6, 7, 8, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32))).clone(namedValues=NamedValues(("cbr", 1), ("vbr", 2), ("notUsed", 3), ("ubr", 4), ("atfr", 5), ("abrstd", 6), ("abrfst", 7), ("vbrrt", 8), ("cbr1", 21), ("vbr1rt", 22), ("vbr2rt", 23), ("vbr3rt", 24), ("vbr1nrt", 25), ("vbr2nrt", 26), ("vbr3nrt", 27), ("ubr1", 28), ("ubr2", 29), ("stdabr", 30), ("cbr2", 31), ("cbr3", 32))).clone('atfr')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: frConnServiceType.setStatus('current')
if mibBuilder.loadTexts: frConnServiceType.setDescription("This specifies the service type 1 ==> Constant Bit Rate 2 ==> Variable Bit Rate 3 ==> Not used 4 ==> Unspecified Bit Rate 5 ==> ATM frame relay 6 ==> standard ABR 7 ==> foresight ABR Note that this is used by PXM card, SV+ doesn't need to set it, if not set in the connection setup request, it'll be defaulted to ATFR type for FRSM. Also to make it compatible with existing AUSM MIB definition, value 3 is not used. The following types are being added for PNNI support. and are based on UNI 4.0 cbr1 (21) - CBR.1 vbr1rt (22) - Real time VBR.1 vbr2rt (23) - Real time VBR.2 vbr3rt (24) - Real time VBR.3 vbr1nrt(25) - Non Real time VBR.1 vbr2nrt(26) - Non Real time VBR.2 vbr3nrt(27) - Non Real time VBR.3 ubr1 (28) - UBR.1 ubr2 (29) - UBR.2 stdabr (30) - TM 4.0 compliant standard ABR cbr2 (31) - CBR.2 cbr3 (32) - CBR.3 For FRSM12 Card: Not Supported. Derived from chanServType. ")
frRoutingPriority = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 44), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 15)).clone(1)).setMaxAccess("readwrite")
if mibBuilder.loadTexts: frRoutingPriority.setStatus('current')
if mibBuilder.loadTexts: frRoutingPriority.setDescription(' This is used by PXM to determine how important this connection is when selecting connections to route ')
frMaxCost = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 45), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 2147483647))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: frMaxCost.setStatus('current')
if mibBuilder.loadTexts: frMaxCost.setDescription("The value of this object specifies the Maximum allowed cost. It is related to Cost Based Routing. This is used by PXM so that it won't choose a path with a cost greater than this configured level. This is not necessary to be provided in the connection setup request, if not provided, the default value 255 will be used. Also the range supported depends upon the controller configured : Controller Range Default Value chanFrConnType = par(2) 1..65535 255 chanFrConnType = pnni(5) 1..2147483647 2147483647. ")
frRestrictTrunkType = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 46), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3))).clone(namedValues=NamedValues(("norestriction", 1), ("terrestrialTrunk", 2), ("sateliteTrunk", 3))).clone('norestriction')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: frRestrictTrunkType.setStatus('current')
if mibBuilder.loadTexts: frRestrictTrunkType.setDescription(' Restricted trunk type for routing, used by PXM. It specifies that the connection either cannot be routed over satelite trunks, or terrestrial trunks, or it can be on any type of trunk. It is not necessary to be provide in the connection setup request, the default value is norestriction(1). For FRSM12 Card: Not Supported ')
frConnPCR = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 47), Integer32()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: frConnPCR.setStatus('current')
if mibBuilder.loadTexts: frConnPCR.setDescription("The value of this object identifies the PCR(Peak Cell Rate). If not provided in the connection setup request, it'll be derived from object 'pir'. For FRSM12 Card: Default value is (1.44 * CIR) ")
frConnRemotePCR = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 48), Integer32()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: frConnRemotePCR.setStatus('current')
if mibBuilder.loadTexts: frConnRemotePCR.setDescription(' Peak cell rate of the other end, if not set, will be set to the same as local end PCR (frConnPCR). However, note that if the CIRs for both local and remote end are set to the different value (i.e., asymmetric conn), then this should be set differently from local end PCR. For FRSM12 Card: Default value is frConnPCR ')
frConnMCR = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 49), Integer32()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: frConnMCR.setStatus('current')
if mibBuilder.loadTexts: frConnMCR.setDescription(" Minimum cell rate, if not provided in the connection setup request, it'll be derived from object 'mir'. For FRSM12 Card: Default value is frConnPCR ")
frConnRemoteMCR = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 50), Integer32()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: frConnRemoteMCR.setStatus('current')
if mibBuilder.loadTexts: frConnRemoteMCR.setDescription(' Minimum cell rate of the other end, if not set, will be set to the same as local end MCR (frConnMCR). However, note that if the CIRs for both local and remote end are set to the different value (i.e., asymmetric conn), then this should be set differently from local end MCR. For FRSM12 Card: Default value is frConnMCR ')
frConnPercentUtil = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 51), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 100)).clone(100)).setMaxAccess("readwrite")
if mibBuilder.loadTexts: frConnPercentUtil.setStatus('current')
if mibBuilder.loadTexts: frConnPercentUtil.setDescription("This is the expected long-term utilization of the channel by this end-point. If this is not specified in the connection setup request, it'll be defaulted to 100 percent For FRSM12 Card: Not Supported ")
frConnRemotePercentUtil = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 52), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 100))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: frConnRemotePercentUtil.setStatus('current')
if mibBuilder.loadTexts: frConnRemotePercentUtil.setDescription("This is the expected long-term utilization of the channel by the other end-point. If this is not specified in the connection setup request, it'll be set to be the same as the local end frConnPercentUtil value assuming that the connection is symmetric. In a asymmetric connection, this object is supposed to be set. For FRSM12 Card: Not Supported.")
frConnForeSightEnable = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 53), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("enable", 1), ("disable", 2)))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: frConnForeSightEnable.setStatus('current')
if mibBuilder.loadTexts: frConnForeSightEnable.setDescription("This object is used by the controller(PAR/PNNI/TAG) to set up the Qbin for the connection, if this is not set, it'll be defaulted by SM to the same as foreSightEnable in the end point parameters. For FRSM12 Card: Not Supported.")
frConnFGCRAEnable = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 54), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("enable", 1), ("disable", 2))).clone('disable')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: frConnFGCRAEnable.setStatus('current')
if mibBuilder.loadTexts: frConnFGCRAEnable.setDescription('The value of this object is used for enabling/disabling Frame based GCRA (early packet discard). For FRSM12 Card: Not Supported.')
chanServType = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 55), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3, 4, 5, 6, 7, 8, 9))).clone(namedValues=NamedValues(("highpriority", 1), ("rtVBR", 2), ("nrtVBR", 3), ("aBR", 4), ("uBR", 5), ("queue6", 6), ("queue7", 7), ("queue8", 8), ("stdABR", 9)))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: chanServType.setStatus('current')
if mibBuilder.loadTexts: chanServType.setDescription('The value of this object indicates the indicates the class of the connection. 1-High priority (typically CBR connections) 2- real-time VBR 3- non-real time VBR 4- Available Bit Rate 5- Unspecified Bit Rate 9- Standard ABR There are 8 queues actually but only 4 are being used (the 4 queues are for CBR, VBR-rt, <VBR-nrt and ABR>, UBR traffic). This object is suported only in FRSM-VHS and FRSM-8T1E1. For FRSM-8T1E1, a 0 indicates that the connections are of old model type where chanServType object is unused. For FRSM12 Card: The types aBR, queue6, queue7, queue8 are not supported This object can not be modified after a frame relay connection has been created.')
chanServiceRateOverride = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 56), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("enable", 1), ("disable", 2))).clone('disable')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: chanServiceRateOverride.setStatus('current')
if mibBuilder.loadTexts: chanServiceRateOverride.setDescription('This variable sets the SAR IR programming option. Foresight and chanServiceRateOverride are mutually exclusive. For FRSM12 Card: Not Supported.')
chanServiceRate = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 57), Integer32().subtype(subtypeSpec=ValueRangeConstraint(160, 6400000))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: chanServiceRate.setStatus('current')
if mibBuilder.loadTexts: chanServiceRate.setDescription('This is the rate to which IR can be set to when chanServiceRateOverride is set to enable(1). If chanServiceRateOverride is disable(2) then this object does not have any significance. For FRSM-8T1/8E1,this is defined in fastpackets/sec. For FRSM-VHS, this is defined in atm cells per second. For VHS the range in cells per second will be 10 to 400000 cps. For FRSM12 Card: Not Supported.')
zeroCirConEir = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 58), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 52000000))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: zeroCirConEir.setStatus('current')
if mibBuilder.loadTexts: zeroCirConEir.setDescription("The value of this object defines defines EIR value for '0' CIR connection. If the value is '0', EIR is set to port speed. If zeroCirConEir is non-zero value, EIR is set to value of this object, and this value is used for policing in ingress direction. This object is valid only for a zero cir connection. zeroCirConEir has to be less than or equal to the port speed.")
chanReroute = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 59), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("true", 1), ("false", 2))).clone('false')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: chanReroute.setStatus('current')
if mibBuilder.loadTexts: chanReroute.setDescription(' This is used by the administrator to trigger the re-routing of the connection. The rerouting takes effect, when this object is set to true(1). When set to false (2), no action is taken. A get on this object always returns false (2). This object is not applicable to MGX Release 1.x. ')
frConnSCR = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 60), Integer32()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: frConnSCR.setStatus('current')
if mibBuilder.loadTexts: frConnSCR.setDescription(' Sustained cell rate, Used for VBR connections setup with PNNI controller. For FRSM12 Card: Default value is frConnPCR This object is not applicable to MGX Release 1.x.')
frConnRemoteSCR = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 61), Integer32()).setMaxAccess("readwrite")
if mibBuilder.loadTexts: frConnRemoteSCR.setStatus('current')
if mibBuilder.loadTexts: frConnRemoteSCR.setDescription(' Sustained cell rate of the other end, Used for VBR connections setup with PNNI controller. For FRSM12 Card: Default value is frConnSCR This object is not applicable to MGX Release 1.x ')
frConnTemplateId = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 62), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 17)).clone(17)).setMaxAccess("readwrite")
if mibBuilder.loadTexts: frConnTemplateId.setStatus('current')
if mibBuilder.loadTexts: frConnTemplateId.setDescription('This object specifies the template identifier for the connection template associated with this connection. The valid range for templates is 1..16. A value of 17 indicates no template is associated with this connection For FRSM12 Card: Not Supported This object is not applicable to MGX Release 1.x ')
frConnAdminStatus = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 63), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("up", 1), ("down", 2))).clone('down')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: frConnAdminStatus.setStatus('current')
if mibBuilder.loadTexts: frConnAdminStatus.setDescription('This object specifies channel admin status. This object is not applicable to MGX Release 1.x.')
frChanCnfChangeCount = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 64), Counter32()).setMaxAccess("readonly")
if mibBuilder.loadTexts: frChanCnfChangeCount.setStatus('current')
if mibBuilder.loadTexts: frChanCnfChangeCount.setDescription('This object is added only for FRSM12 card. This counter tracks the number of configuration changes that happen on a channel. The counter is associated only with the end point and NOT with the connection itself. This counter is used by the NMS to determine if a connection configuration had been modified and requires an upload. This functionality is conventionally achieved by time stamping using a time-of-day clock. However, in switches where time-of-day clock is not available, the following scheme is used: The upload counter is incremented, when: * assignment of connection to an end point channel. This happens when a connection is added and assigned this channel number. * de-assignment of connection from a channel number. This happens when a connection is deleted and the end point resource is released. * When there is a configuration change done to the connection that is associated with this end point channel number. In a new system, an unutilised resouce (channel number) has a counter value of zero. When a connection is added to this channel end point, the counter is incremented. And is incremented for any of the above operations. When a connection is deleted the value of this counter is incremented and preserved till a new connection gets associated with this channel end point. This object is not applicable to MGX Release 1.x.')
frChanCnfIgnoreIncomingDE = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 65), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("enable", 1), ("disable", 2)))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: frChanCnfIgnoreIncomingDE.setStatus('current')
if mibBuilder.loadTexts: frChanCnfIgnoreIncomingDE.setDescription('This object is added for FRSM12 card. When this object is enabled, the incoming frames with DE(Discard Eligible) bit set to 1 are counted in the Bc bucket instead of Be bucket This object is not applicable to MGX Release 1.x.')
frChanOamCCEnable = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 66), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("enable", 1), ("disable", 2))).clone('disable')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: frChanOamCCEnable.setStatus('current')
if mibBuilder.loadTexts: frChanOamCCEnable.setDescription('This object is added for FRSM12 card. This object serves to enable or disable continuity check(CC) on a connection endpoint. When continuity check is enabled on an endpoint, the endpoint anticipates OAM CC cells from its peer endpoint. OAM CC cells are sent when the peer endpoint does not have traffic cells to send. If the connection is idle and this endpoint has not received OAM CC cells for a period of 3.5 +/- 0.5 seconds, it declares continuity failure. This object serves to administratively control the CC feature. Typical implementations (of this feature) may choose to ignore this control or impose other conditions to actually enable CC cell flow. However, if this object is set to false(2), then this feature should be disabled This object is not applicable to MGX Release 1.x.')
frChanStatsEnable = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 67), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("enable", 1), ("disable", 2))).clone('disable')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: frChanStatsEnable.setStatus('current')
if mibBuilder.loadTexts: frChanStatsEnable.setDescription(' This object serves the purpose of enabling/disabling statistics collection on a per connection basis. In implementations which do not have such limitations, this object can be set to enable(1) for all connections. Limits imposed by software or hardware implementations could restrict the amount of statistical data that can be maintained in a physical entity (like a service module card). Hence there could be a need to restrict statistics collection to a smaller subset. This object is not applicable to MGX Release 1.x.')
frChanLocalLpbkEnable = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 68), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("enable", 1), ("disable", 2))).clone('disable')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: frChanLocalLpbkEnable.setStatus('current')
if mibBuilder.loadTexts: frChanLocalLpbkEnable.setDescription('This object is added for FRSM12 card. This object when enabled adds a channel-level loopback towards the port side. If the connection is in loopback, Connection MIB (FrChanCnfGrpEntry) variables cannot be modified. This object is not applicable to MGX Release 1.x. ')
frChanUpcEnable = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 69), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("enable", 1), ("disable", 2))).clone('enable')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: frChanUpcEnable.setStatus('current')
if mibBuilder.loadTexts: frChanUpcEnable.setDescription(' This object is added for FRSM12 card. This object when disabled, disables Frame Relay policing. This object is not applicable to MGX Release 1.x. ')
frChanSlaveType = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 70), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2))).clone(namedValues=NamedValues(("persistentSlave", 1), ("nonPersistentSlave", 2))).clone('persistentSlave')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: frChanSlaveType.setStatus('current')
if mibBuilder.loadTexts: frChanSlaveType.setDescription("This object is added for FRSM12 card. This object indicates whether a master endpoint has a persistent slave or not. A connection with a master and a non-persistent slave is considered a single-ended SPVC. This object is only meaningful when 'frMastership' contains the value of 'master(1)'. And this variable must be used with 'frMastership' to decide if a connection is single-ended or not. This object is not applicable to MGX Release 1.x.")
frConnRemoteMBS = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 71), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 5000000)).clone(1024)).setMaxAccess("readwrite")
if mibBuilder.loadTexts: frConnRemoteMBS.setStatus('current')
if mibBuilder.loadTexts: frConnRemoteMBS.setDescription("Remote Maximum Burst Size in terms of number of cells. This object should be set by the user in cases when the remote end of the connection is an ATM end-point where the Local MBS can be explicitly specified. In such cases, this element should be set to be equal to the remote end-point's local MBS. This object is not applicable to MGX Release 1.x. ")
frChanPrefRouteId = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 72), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 65535))).setMaxAccess("readwrite")
if mibBuilder.loadTexts: frChanPrefRouteId.setStatus('current')
if mibBuilder.loadTexts: frChanPrefRouteId.setDescription("This object serves to to associate a preferred route with a connection. The value '0' means no preferred route is associated with this connection. Usage: - If the value of this set to 0, the object frChanDirectRoute is automatically set to FALSE by the switch. - The preferred route is defined in cwaPrefRouteConfTable object.")
frChanDirectRoute = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 1, 1, 73), TruthValue().clone('false')).setMaxAccess("readwrite")
if mibBuilder.loadTexts: frChanDirectRoute.setStatus('current')
if mibBuilder.loadTexts: frChanDirectRoute.setDescription('This object serves to associate a prefer route as directed route (correspond to the prefer route object frChanPrefRouteId). A directed route specifies that the associated preferred route is the only permission route for the connection to take. Should the associated preferred route be unavailable, the connection is failed. The object is not applicable if there is no associated preferred route with the connection.')
chanNumNextAvailable = MibScalar((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 2), Integer32().subtype(subtypeSpec=ValueRangeConstraint(16, 2147483647))).setMaxAccess("readonly")
if mibBuilder.loadTexts: chanNumNextAvailable.setStatus('current')
if mibBuilder.loadTexts: chanNumNextAvailable.setDescription("This variable contains the next UNUSED channel number of the maximum possible value(depends upon the service module). This number can be used in channel config table, the ChanNumNextAvailable gets updated if the number gets used to create a logical channel. A '0' indicates that no more channels are available. For FRSM12 Card: Not Supported.")
frstdABRCnfGrpTable = MibTable((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 3), )
if mibBuilder.loadTexts: frstdABRCnfGrpTable.setStatus('current')
if mibBuilder.loadTexts: frstdABRCnfGrpTable.setDescription('This table is used for configuring ABR parameters on a frame relay connection. ')
frstdABRCnfGrpEntry = MibTableRow((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 3, 1), ).setIndexNames((0, "CISCO-WAN-FR-CONN-MIB", "frstdABRcnfChanNum"))
if mibBuilder.loadTexts: frstdABRCnfGrpEntry.setStatus('current')
if mibBuilder.loadTexts: frstdABRCnfGrpEntry.setDescription('An entry in ABR Configuration table.')
frstdABRcnfChanNum = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 3, 1, 1), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 2147483647))).setMaxAccess("readonly")
if mibBuilder.loadTexts: frstdABRcnfChanNum.setStatus('current')
if mibBuilder.loadTexts: frstdABRcnfChanNum.setDescription('Refers to the virtual connection index. The value supported depends upon the type of service module. Supported Range for different Card Types: FRSM-4T1/E1 : supported range is 16..271 (256 entries) FRSM-8T1/E1 : supported range is 16..1015 (1000 entries) FRSM-T3/E3/HS2/ /HS2B-HSSI/T3B/E3B : supported range is 16..2015 (2000 entries) FRSM-2CT3/HS2B-12IN1: supported range is 16..4015 (4000 entries) FRSM12 Card: Byte 3 = Chassis Number, Byte 2 = Slot Number, Byte 1 & 0 = channel Number. Lower two bytes range from 16..16015 (16000 entries) ')
frstdABRTBE = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 3, 1, 2), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 16777215)).clone(16777215)).setUnits('cells').setMaxAccess("readwrite")
if mibBuilder.loadTexts: frstdABRTBE.setStatus('current')
if mibBuilder.loadTexts: frstdABRTBE.setDescription('The value of this object is equal to Transient Buffer Exposure(TBE). The TBE is a negotiated number of cells that the network would like to limit the source to sending during startup periods, before the first RM-cell returns.')
frstdABRFRTT = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 3, 1, 3), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 16700))).setUnits('milli-seconds').setMaxAccess("readwrite")
if mibBuilder.loadTexts: frstdABRFRTT.setStatus('current')
if mibBuilder.loadTexts: frstdABRFRTT.setDescription('The value of this object is equal to Fixed Round-Trip Time(FRTT). The FRTT is sum of the fixed propogation delays from the source to a destination network. The Value 0 signifies that FRTT is not available.')
frstdABRRDF = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 3, 1, 4), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 32768)).clone(16)).setMaxAccess("readwrite")
if mibBuilder.loadTexts: frstdABRRDF.setStatus('current')
if mibBuilder.loadTexts: frstdABRRDF.setDescription('The value of this object is equal to Rate Decrease Factor(RDF). The RDF controls the rate decrease which occurs when backward RM-cells with CI=1 are received. Larger values lead to faster rate decrease. The value specified has to be inverted to arrive at the actual value. The valid values possible are only powers of 2; i.e. 1, 2, 4, 8 ..... 32768. The SNMP agent has to verify this compliance.')
frstdABRRIF = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 3, 1, 5), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 32768)).clone(64)).setMaxAccess("readwrite")
if mibBuilder.loadTexts: frstdABRRIF.setStatus('current')
if mibBuilder.loadTexts: frstdABRRIF.setDescription('The value of this object is equal to Rate Increase Factor(RIF). The RIF controls the rate increase which occurs when a backward RM-cell is received with CI=0 and NI=0. The value specified has to be inverted to arrive at the actual value. The valid values possible are only powers of 2; i.e. 1, 2, 4, 8 ..... 32768. The SNMP agent has to verify this compliance.')
frstdABRNrm = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 3, 1, 6), Integer32().subtype(subtypeSpec=ValueRangeConstraint(2, 256)).clone(64)).setMaxAccess("readwrite")
if mibBuilder.loadTexts: frstdABRNrm.setStatus('current')
if mibBuilder.loadTexts: frstdABRNrm.setDescription('The value of this object is equal to number of cells a source may send for each forward RM cell. The valid values possible are only powers of 2 starting from 2; i.e. 2, 4, 8 ..... 256. The SNMP agent has to verify this compliance.')
frstdABRTrm = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 3, 1, 7), Integer32().subtype(subtypeSpec=ValueRangeConstraint(3, 255)).clone(255)).setUnits('milli-seconds').setMaxAccess("readwrite")
if mibBuilder.loadTexts: frstdABRTrm.setStatus('current')
if mibBuilder.loadTexts: frstdABRTrm.setDescription('The value of this object is equal to Upper bound on the time between forward RM cells for an active source.')
frstdABRCDF = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 3, 1, 8), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 64)).clone(16)).setMaxAccess("readwrite")
if mibBuilder.loadTexts: frstdABRCDF.setStatus('current')
if mibBuilder.loadTexts: frstdABRCDF.setDescription('The value of this object is equal to Cutoff Decrease Factor(CDF). The value specified has to be inverted to arrive at the actual value. The valid values possible are 0 and only powers of 2; i.e., 1, 2, 4, 8, 16, 32, 64. The SNMP agent has to verify this compliance.')
frstdABRADTF = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 3, 1, 9), Integer32().subtype(subtypeSpec=ValueRangeConstraint(10, 10230)).clone(500)).setUnits('milli-seconds').setMaxAccess("readwrite")
if mibBuilder.loadTexts: frstdABRADTF.setStatus('current')
if mibBuilder.loadTexts: frstdABRADTF.setDescription('The value of this object is equal to ACR Decrease Time Factor(ADTF). The Granularity allowed is 10 milli seconds. i.e. 10,20,30 etc. The SNMP agent has to verify this compliance.')
frstdABRICR = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 3, 1, 10), Integer32().subtype(subtypeSpec=ValueRangeConstraint(10, 400000)).clone(10)).setUnits('cells-per-second').setMaxAccess("readwrite")
if mibBuilder.loadTexts: frstdABRICR.setStatus('current')
if mibBuilder.loadTexts: frstdABRICR.setDescription('The value of this object is equal to Initial Cell Rate(ICR). The ICR is the rate at which the source should send initially and after an idle period. This includes the bandwidth allocated for both data cells as well as all in-rate RM cells.')
frstdABRMCR = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 3, 1, 11), Integer32().subtype(subtypeSpec=ValueRangeConstraint(10, 400000)).clone(10)).setUnits('cells-per-second').setMaxAccess("readwrite")
if mibBuilder.loadTexts: frstdABRMCR.setStatus('current')
if mibBuilder.loadTexts: frstdABRMCR.setDescription('The value of this object is equal to Minimum Cell Rate(MCR). The MCR is the rate at which the source is allowed to send. This includes the bandwidth allocated for both data cells as well as all in-rate RM cells.')
frstdABRPCR = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 1, 3, 1, 12), Integer32().subtype(subtypeSpec=ValueRangeConstraint(10, 400000)).clone(10)).setUnits('cells-per-second').setMaxAccess("readwrite")
if mibBuilder.loadTexts: frstdABRPCR.setStatus('current')
if mibBuilder.loadTexts: frstdABRPCR.setDescription('The value of this object is equal to Peak Cell Rate(PCR). The PCR is the rate at which the source is allowed to send. This includes the bandwidth allocated for both data cells as well as all in-rate RM cells.')
frChanStateGrp = MibIdentifier((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 2))
frChanStateGrpTable = MibTable((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 2, 1), )
if mibBuilder.loadTexts: frChanStateGrpTable.setStatus('current')
if mibBuilder.loadTexts: frChanStateGrpTable.setDescription('Table of transmit/receive states of channels.')
frChanStateGrpEntry = MibTableRow((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 2, 1, 1), ).setIndexNames((0, "CISCO-WAN-FR-CONN-MIB", "stateChanNum"))
if mibBuilder.loadTexts: frChanStateGrpEntry.setStatus('current')
if mibBuilder.loadTexts: frChanStateGrpEntry.setDescription('An entry for FrChannelStateGrpEntry.')
stateChanNum = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 2, 1, 1, 1), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 2147483647))).setMaxAccess("readonly")
if mibBuilder.loadTexts: stateChanNum.setStatus('current')
if mibBuilder.loadTexts: stateChanNum.setDescription("The value of this object refers to frame relay connection. The value must be same as the value of the object 'chanNum' in frChanCnfGrpTable.")
chanState = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 2, 1, 1, 2), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3, 4))).clone(namedValues=NamedValues(("notConfigured", 1), ("okay", 2), ("alarm", 3), ("failed", 4)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: chanState.setStatus('current')
if mibBuilder.loadTexts: chanState.setDescription('This variable indicates the LMI state of the VC (channel). The possible values are : notConfigured(1): Connection Not configured okay(2) : Connection is in Ok state alarm(3) : Connection is in alarm failed(4) : Connection is in failed state. This is applicable only for PNNI.')
xmtAbitState = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 2, 1, 1, 3), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3))).clone(namedValues=NamedValues(("off", 1), ("sendingAequal1", 2), ("sendingAequal0", 3)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: xmtAbitState.setStatus('current')
if mibBuilder.loadTexts: xmtAbitState.setDescription('The value of this object identifies the A bit transmit state. The possible values are : off(1) : LMI is off sendingAequal1(2) : LMI is on and connection is O.K. sendingAequal0(3) : LMI is on and connection is failed.')
rcvAbitState = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 2, 1, 1, 4), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3))).clone(namedValues=NamedValues(("off", 1), ("rcvingAequal1", 2), ("rcvingAequal0", 3)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: rcvAbitState.setStatus('current')
if mibBuilder.loadTexts: rcvAbitState.setDescription('The value of this object identifies the A bit receive state. The possible values are : off(1) : LMI is off rcvingAequal1(2) : LMI is on and connection is O.K. rcvingAequal0(3) : LMI is on and connection is failed.')
xmtATMState = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 2, 1, 1, 5), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3))).clone(namedValues=NamedValues(("notSending", 1), ("sendingAIS", 2), ("sendingFERF", 3)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: xmtATMState.setStatus('current')
if mibBuilder.loadTexts: xmtATMState.setDescription('This variable indicates the transmit state of the VC (channel) on the ATM side. The possible values are : notSending(1) : Not sending any state sendingAIS(2) : Sending AIS OAM state sendingFERF(2) : Sending FERF OAM state.')
rcvATMState = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 2, 1, 1, 6), Integer32().subtype(subtypeSpec=ConstraintsUnion(SingleValueConstraint(1, 2, 3))).clone(namedValues=NamedValues(("notRcving", 1), ("rcvingAIS", 2), ("rcvingFERF", 3)))).setMaxAccess("readonly")
if mibBuilder.loadTexts: rcvATMState.setStatus('current')
if mibBuilder.loadTexts: rcvATMState.setDescription('This variable indicates the receive state of the VC (channel) on the ATM side. The possible values are : notRcving(1) : Not receiving any state rcvingAIS(2) : Receiving AIS OAM rcvingFERF(2) : Receiving FERF OAM.')
chanStatusBitMap = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 2, 2, 1, 1, 7), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 255))).setMaxAccess("readonly")
if mibBuilder.loadTexts: chanStatusBitMap.setStatus('current')
if mibBuilder.loadTexts: chanStatusBitMap.setDescription('This variable indicates the consolidated bit map of the channel alarm state. Individual bit positions are as defined below. Bit position Fail/Alarm Reason ------------ ---------- ------ 0 Alarm Reserved 1 Alarm n/w side AIS/RDI Rx 2 Fail Conditioned(A bit from n/w) 3 Alarm Reserved 4 Fail CC failed/RAS failed 5 Fail Mismatch 6 Alarm ingress A bit (LMI) 7 Alarm Reserved Fail bitmap mask : 0x34 Alarm bitmap mask: 0xCB This object is not applicable to MGX Release 1.x. ')
frEndPtMapGrp = MibIdentifier((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 3))
frEndPtMapGrpTable = MibTable((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 3, 1), )
if mibBuilder.loadTexts: frEndPtMapGrpTable.setStatus('current')
if mibBuilder.loadTexts: frEndPtMapGrpTable.setDescription('This is the Endpoint Mapping table for Frame Relay connections.')
frEndPtMapGrpEntry = MibTableRow((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 3, 1, 1), ).setIndexNames((0, "CISCO-WAN-FR-CONN-MIB", "endPortNum"), (0, "CISCO-WAN-FR-CONN-MIB", "endDLCI"))
if mibBuilder.loadTexts: frEndPtMapGrpEntry.setStatus('current')
if mibBuilder.loadTexts: frEndPtMapGrpEntry.setDescription('An entry in the frame relay connection Endpoint table.')
endPortNum = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 3, 1, 1, 1), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 2147483647))).setMaxAccess("readonly")
if mibBuilder.loadTexts: endPortNum.setStatus('current')
if mibBuilder.loadTexts: endPortNum.setDescription("This object identifies the frame relay logical port. The value for this object must be same as 'portNum' object in frPortCnfPortGrpTable. If ifTable is is implemented in a service module, this object must be same as the ifIndex of frame relay port.")
endDLCI = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 3, 1, 1, 2), Integer32().subtype(subtypeSpec=ValueRangeConstraint(0, 8388607))).setMaxAccess("readonly")
if mibBuilder.loadTexts: endDLCI.setStatus('current')
if mibBuilder.loadTexts: endDLCI.setDescription('The value of this object is equal to the DLCI value for this PVC endpoint.')
endChanNum = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 3, 1, 1, 3), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 2147483647))).setMaxAccess("readonly")
if mibBuilder.loadTexts: endChanNum.setStatus('current')
if mibBuilder.loadTexts: endChanNum.setDescription("The value of this object identifies the frame relay connection number. The value of this object is same as the value of 'chanNum' object in frChanCnfGrpTable. This object contains value 0, if port.dlci is a multicast group.")
endLineNum = MibTableColumn((1, 3, 6, 1, 4, 1, 351, 110, 5, 1, 3, 1, 1, 4), Integer32().subtype(subtypeSpec=ValueRangeConstraint(1, 2147483647))).setMaxAccess("readonly")
if mibBuilder.loadTexts: endLineNum.setStatus('current')
if mibBuilder.loadTexts: endLineNum.setDescription('The value of this object is equal to the physical line(for example T1/E1) or ifIndex on which connection is provisioned. If ifTable is not implemented in a service module, then the range is from 1 to Maximum number of lines supported. If ifTable is is implemented in a service module, this object must be same as the ifIndex of the interface (ifType=ds1(18),ds3(30)). The value supported for this object depends upon the type of service module: FRSM-4T1/E1 : Range is from 1..4 FRSM-8T1/E1 : Range is from 1..8 FRSM-T3/E3/HS2: Range is from 1..2 FRSM-2CT3 : Range is from 1..56 with ifTable Support: must refer to ifIndex of the interface. ')
ciscoWanFrConnMIBConformance = MibIdentifier((1, 3, 6, 1, 4, 1, 351, 150, 47, 2))
ciscoWanFrConnMIBGroups = MibIdentifier((1, 3, 6, 1, 4, 1, 351, 150, 47, 2, 1))
ciscoWanFrConnMIBCompliances = MibIdentifier((1, 3, 6, 1, 4, 1, 351, 150, 47, 2, 2))
ciscoWanFrConnCompliance = ModuleCompliance((1, 3, 6, 1, 4, 1, 351, 150, 47, 2, 2, 1)).setObjects(("CISCO-WAN-FR-CONN-MIB", "ciscoWanFrConnGroup"), ("CISCO-WAN-FR-CONN-MIB", "ciscoWanFrConnTestGroup"), ("CISCO-WAN-FR-CONN-MIB", "ciscoWanFrConnStateGroup"), ("CISCO-WAN-FR-CONN-MIB", "ciscoWanFrConnEndptGroup"), ("CISCO-WAN-FR-CONN-MIB", "ciscoWanFrConnABRGroup"), ("CISCO-WAN-FR-CONN-MIB", "ciscoWanFrConnForesightGroup"), ("CISCO-WAN-FR-CONN-MIB", "ciscoWanFrConnQueueGroup"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
ciscoWanFrConnCompliance = ciscoWanFrConnCompliance.setStatus('current')
if mibBuilder.loadTexts: ciscoWanFrConnCompliance.setDescription('The compliance statement for SNMP entities which support Frame realy connection MIB.')
ciscoWanFrConnGroup = ObjectGroup((1, 3, 6, 1, 4, 1, 351, 150, 47, 2, 1, 1)).setObjects(("CISCO-WAN-FR-CONN-MIB", "chanNum"), ("CISCO-WAN-FR-CONN-MIB", "chanRowStatus"), ("CISCO-WAN-FR-CONN-MIB", "chanPortNum"), ("CISCO-WAN-FR-CONN-MIB", "dLCI"), ("CISCO-WAN-FR-CONN-MIB", "egressQSelect"), ("CISCO-WAN-FR-CONN-MIB", "deTaggingEnable"), ("CISCO-WAN-FR-CONN-MIB", "cir"), ("CISCO-WAN-FR-CONN-MIB", "bc"), ("CISCO-WAN-FR-CONN-MIB", "be"), ("CISCO-WAN-FR-CONN-MIB", "ibs"), ("CISCO-WAN-FR-CONN-MIB", "chanLocRmtLpbkState"), ("CISCO-WAN-FR-CONN-MIB", "chanType"), ("CISCO-WAN-FR-CONN-MIB", "chanFECNconfig"), ("CISCO-WAN-FR-CONN-MIB", "chanDEtoCLPmap"), ("CISCO-WAN-FR-CONN-MIB", "chanCLPtoDEmap"), ("CISCO-WAN-FR-CONN-MIB", "chanIngrPercentUtil"), ("CISCO-WAN-FR-CONN-MIB", "chanEgrPercentUtil"), ("CISCO-WAN-FR-CONN-MIB", "chanEgrSrvRate"), ("CISCO-WAN-FR-CONN-MIB", "chanOvrSubOvrRide"), ("CISCO-WAN-FR-CONN-MIB", "chanFrConnType"), ("CISCO-WAN-FR-CONN-MIB", "frCDRNumber"), ("CISCO-WAN-FR-CONN-MIB", "frLocalVpi"), ("CISCO-WAN-FR-CONN-MIB", "frLocalVci"), ("CISCO-WAN-FR-CONN-MIB", "frLocalNSAP"), ("CISCO-WAN-FR-CONN-MIB", "frRemoteVpi"), ("CISCO-WAN-FR-CONN-MIB", "frRemoteVci"), ("CISCO-WAN-FR-CONN-MIB", "frRemoteNSAP"), ("CISCO-WAN-FR-CONN-MIB", "frMastership"), ("CISCO-WAN-FR-CONN-MIB", "frVpcFlag"), ("CISCO-WAN-FR-CONN-MIB", "frConnServiceType"), ("CISCO-WAN-FR-CONN-MIB", "frRoutingPriority"), ("CISCO-WAN-FR-CONN-MIB", "frMaxCost"), ("CISCO-WAN-FR-CONN-MIB", "frRestrictTrunkType"), ("CISCO-WAN-FR-CONN-MIB", "frConnPCR"), ("CISCO-WAN-FR-CONN-MIB", "frConnRemotePCR"), ("CISCO-WAN-FR-CONN-MIB", "frConnMCR"), ("CISCO-WAN-FR-CONN-MIB", "frConnRemoteMCR"), ("CISCO-WAN-FR-CONN-MIB", "frConnPercentUtil"), ("CISCO-WAN-FR-CONN-MIB", "frConnRemotePercentUtil"), ("CISCO-WAN-FR-CONN-MIB", "frConnForeSightEnable"), ("CISCO-WAN-FR-CONN-MIB", "frConnFGCRAEnable"), ("CISCO-WAN-FR-CONN-MIB", "chanServType"), ("CISCO-WAN-FR-CONN-MIB", "chanServiceRateOverride"), ("CISCO-WAN-FR-CONN-MIB", "chanServiceRate"), ("CISCO-WAN-FR-CONN-MIB", "zeroCirConEir"), ("CISCO-WAN-FR-CONN-MIB", "chanReroute"), ("CISCO-WAN-FR-CONN-MIB", "frConnSCR"), ("CISCO-WAN-FR-CONN-MIB", "frConnRemoteSCR"), ("CISCO-WAN-FR-CONN-MIB", "frConnTemplateId"), ("CISCO-WAN-FR-CONN-MIB", "frConnAdminStatus"), ("CISCO-WAN-FR-CONN-MIB", "frChanCnfChangeCount"), ("CISCO-WAN-FR-CONN-MIB", "frChanCnfIgnoreIncomingDE"), ("CISCO-WAN-FR-CONN-MIB", "frChanOamCCEnable"), ("CISCO-WAN-FR-CONN-MIB", "frChanStatsEnable"), ("CISCO-WAN-FR-CONN-MIB", "frChanLocalLpbkEnable"), ("CISCO-WAN-FR-CONN-MIB", "frChanUpcEnable"), ("CISCO-WAN-FR-CONN-MIB", "frChanSlaveType"), ("CISCO-WAN-FR-CONN-MIB", "frConnRemoteMBS"), ("CISCO-WAN-FR-CONN-MIB", "chanNumNextAvailable"), ("CISCO-WAN-FR-CONN-MIB", "frChanPrefRouteId"), ("CISCO-WAN-FR-CONN-MIB", "frChanDirectRoute"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
ciscoWanFrConnGroup = ciscoWanFrConnGroup.setStatus('current')
if mibBuilder.loadTexts: ciscoWanFrConnGroup.setDescription('A collection of objects providing information applicable to a Frame Relay Connection.')
ciscoWanFrConnForesightGroup = ObjectGroup((1, 3, 6, 1, 4, 1, 351, 150, 47, 2, 1, 2)).setObjects(("CISCO-WAN-FR-CONN-MIB", "foreSightEnable"), ("CISCO-WAN-FR-CONN-MIB", "qir"), ("CISCO-WAN-FR-CONN-MIB", "mir"), ("CISCO-WAN-FR-CONN-MIB", "pir"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
ciscoWanFrConnForesightGroup = ciscoWanFrConnForesightGroup.setStatus('current')
if mibBuilder.loadTexts: ciscoWanFrConnForesightGroup.setDescription('A collection of objects related to foresight feature of a frame realay connection.')
ciscoWanFrConnQueueGroup = ObjectGroup((1, 3, 6, 1, 4, 1, 351, 150, 47, 2, 1, 3)).setObjects(("CISCO-WAN-FR-CONN-MIB", "ingressQDepth"), ("CISCO-WAN-FR-CONN-MIB", "ingressQDEThresh"), ("CISCO-WAN-FR-CONN-MIB", "ingressQECNThresh"), ("CISCO-WAN-FR-CONN-MIB", "egressQDepth"), ("CISCO-WAN-FR-CONN-MIB", "egressQDEThresh"), ("CISCO-WAN-FR-CONN-MIB", "egressQECNThresh"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
ciscoWanFrConnQueueGroup = ciscoWanFrConnQueueGroup.setStatus('current')
if mibBuilder.loadTexts: ciscoWanFrConnQueueGroup.setDescription('A collection of objects related to queue depth egress/ingress thresholds.')
ciscoWanFrConnTestGroup = ObjectGroup((1, 3, 6, 1, 4, 1, 351, 150, 47, 2, 1, 4)).setObjects(("CISCO-WAN-FR-CONN-MIB", "chanTestType"), ("CISCO-WAN-FR-CONN-MIB", "chanTestState"), ("CISCO-WAN-FR-CONN-MIB", "chanRTDResult"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
ciscoWanFrConnTestGroup = ciscoWanFrConnTestGroup.setStatus('current')
if mibBuilder.loadTexts: ciscoWanFrConnTestGroup.setDescription('A collection of objects related to testing Frame relay connections.')
ciscoWanFrConnStateGroup = ObjectGroup((1, 3, 6, 1, 4, 1, 351, 150, 47, 2, 1, 5)).setObjects(("CISCO-WAN-FR-CONN-MIB", "stateChanNum"), ("CISCO-WAN-FR-CONN-MIB", "chanState"), ("CISCO-WAN-FR-CONN-MIB", "xmtAbitState"), ("CISCO-WAN-FR-CONN-MIB", "rcvAbitState"), ("CISCO-WAN-FR-CONN-MIB", "xmtATMState"), ("CISCO-WAN-FR-CONN-MIB", "rcvATMState"), ("CISCO-WAN-FR-CONN-MIB", "chanStatusBitMap"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
ciscoWanFrConnStateGroup = ciscoWanFrConnStateGroup.setStatus('current')
if mibBuilder.loadTexts: ciscoWanFrConnStateGroup.setDescription('A collection of objects related to state of Frame Relay connections.')
ciscoWanFrConnEndptGroup = ObjectGroup((1, 3, 6, 1, 4, 1, 351, 150, 47, 2, 1, 6)).setObjects(("CISCO-WAN-FR-CONN-MIB", "endPortNum"), ("CISCO-WAN-FR-CONN-MIB", "endDLCI"), ("CISCO-WAN-FR-CONN-MIB", "endChanNum"), ("CISCO-WAN-FR-CONN-MIB", "endLineNum"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
ciscoWanFrConnEndptGroup = ciscoWanFrConnEndptGroup.setStatus('current')
if mibBuilder.loadTexts: ciscoWanFrConnEndptGroup.setDescription('A collection of objects related to Endpoint mapping in Frame Relay Connections.')
ciscoWanFrConnABRGroup = ObjectGroup((1, 3, 6, 1, 4, 1, 351, 150, 47, 2, 1, 7)).setObjects(("CISCO-WAN-FR-CONN-MIB", "frstdABRcnfChanNum"), ("CISCO-WAN-FR-CONN-MIB", "frstdABRTBE"), ("CISCO-WAN-FR-CONN-MIB", "frstdABRFRTT"), ("CISCO-WAN-FR-CONN-MIB", "frstdABRRDF"), ("CISCO-WAN-FR-CONN-MIB", "frstdABRRIF"), ("CISCO-WAN-FR-CONN-MIB", "frstdABRNrm"), ("CISCO-WAN-FR-CONN-MIB", "frstdABRTrm"), ("CISCO-WAN-FR-CONN-MIB", "frstdABRCDF"), ("CISCO-WAN-FR-CONN-MIB", "frstdABRADTF"), ("CISCO-WAN-FR-CONN-MIB", "frstdABRICR"), ("CISCO-WAN-FR-CONN-MIB", "frstdABRMCR"), ("CISCO-WAN-FR-CONN-MIB", "frstdABRPCR"))
if getattr(mibBuilder, 'version', (0, 0, 0)) > (4, 4, 0):
ciscoWanFrConnABRGroup = ciscoWanFrConnABRGroup.setStatus('current')
if mibBuilder.loadTexts: ciscoWanFrConnABRGroup.setDescription('A collection of objects related to ABR in a frame relay connection.')
mibBuilder.exportSymbols("CISCO-WAN-FR-CONN-MIB", ciscoWanFrConnMIBGroups=ciscoWanFrConnMIBGroups, frCDRNumber=frCDRNumber, frChanStateGrpEntry=frChanStateGrpEntry, frChanStatsEnable=frChanStatsEnable, chanServType=chanServType, frConnRemotePCR=frConnRemotePCR, chanDEtoCLPmap=chanDEtoCLPmap, frLocalNSAP=frLocalNSAP, stateChanNum=stateChanNum, frConnSCR=frConnSCR, chanServiceRate=chanServiceRate, frstdABRNrm=frstdABRNrm, ciscoWanFrConnCompliance=ciscoWanFrConnCompliance, qir=qir, ciscoWanFrConnQueueGroup=ciscoWanFrConnQueueGroup, frChanCnfGrpEntry=frChanCnfGrpEntry, frChanLocalLpbkEnable=frChanLocalLpbkEnable, frstdABRCDF=frstdABRCDF, frChanCnfGrp=frChanCnfGrp, ciscoWanFrConnStateGroup=ciscoWanFrConnStateGroup, frstdABRRDF=frstdABRRDF, be=be, xmtATMState=xmtATMState, frRoutingPriority=frRoutingPriority, frstdABRTrm=frstdABRTrm, chanEgrPercentUtil=chanEgrPercentUtil, foreSightEnable=foreSightEnable, chanPortNum=chanPortNum, chanFECNconfig=chanFECNconfig, frConnRemotePercentUtil=frConnRemotePercentUtil, chanServiceRateOverride=chanServiceRateOverride, PYSNMP_MODULE_ID=ciscoWanFrConnMIB, frEndPtMapGrp=frEndPtMapGrp, frstdABRMCR=frstdABRMCR, frstdABRTBE=frstdABRTBE, ciscoWanFrConnTestGroup=ciscoWanFrConnTestGroup, frstdABRcnfChanNum=frstdABRcnfChanNum, bc=bc, egressQDepth=egressQDepth, frEndPtMapGrpTable=frEndPtMapGrpTable, frRemoteVci=frRemoteVci, chanTestType=chanTestType, frstdABRRIF=frstdABRRIF, frMaxCost=frMaxCost, chanEgrSrvRate=chanEgrSrvRate, frChanCnfGrpTable=frChanCnfGrpTable, frConnPercentUtil=frConnPercentUtil, ciscoWanFrConnEndptGroup=ciscoWanFrConnEndptGroup, ciscoWanFrConnMIBCompliances=ciscoWanFrConnMIBCompliances, frConnTemplateId=frConnTemplateId, ingressQDEThresh=ingressQDEThresh, ciscoWanFrConnABRGroup=ciscoWanFrConnABRGroup, mir=mir, xmtAbitState=xmtAbitState, frRemoteVpi=frRemoteVpi, ingressQDepth=ingressQDepth, frChanUpcEnable=frChanUpcEnable, chanFrConnType=chanFrConnType, chanRowStatus=chanRowStatus, egressQDEThresh=egressQDEThresh, egressQSelect=egressQSelect, chanNum=chanNum, rcvAbitState=rcvAbitState, ibs=ibs, endDLCI=endDLCI, ciscoWanFrConnMIB=ciscoWanFrConnMIB, frstdABRCnfGrpEntry=frstdABRCnfGrpEntry, frChanDirectRoute=frChanDirectRoute, frChanSlaveType=frChanSlaveType, frRestrictTrunkType=frRestrictTrunkType, frConnServiceType=frConnServiceType, frstdABRPCR=frstdABRPCR, frstdABRADTF=frstdABRADTF, frEndPtMapGrpEntry=frEndPtMapGrpEntry, chanType=chanType, frMastership=frMastership, frLocalVpi=frLocalVpi, frConnRemoteSCR=frConnRemoteSCR, pir=pir, frConnAdminStatus=frConnAdminStatus, frConnRemoteMBS=frConnRemoteMBS, frChanOamCCEnable=frChanOamCCEnable, chanReroute=chanReroute, chanNumNextAvailable=chanNumNextAvailable, chanTestState=chanTestState, frChanCnfChangeCount=frChanCnfChangeCount, frLocalVci=frLocalVci, frChanPrefRouteId=frChanPrefRouteId, frstdABRICR=frstdABRICR, frstdABRFRTT=frstdABRFRTT, chanLocRmtLpbkState=chanLocRmtLpbkState, ciscoWanFrConnForesightGroup=ciscoWanFrConnForesightGroup, ciscoWanFrConnMIBConformance=ciscoWanFrConnMIBConformance, dLCI=dLCI, frConnRemoteMCR=frConnRemoteMCR, chanState=chanState, frVpcFlag=frVpcFlag, chanRTDResult=chanRTDResult, frConnMCR=frConnMCR, cir=cir, frConnForeSightEnable=frConnForeSightEnable, rcvATMState=rcvATMState, chanStatusBitMap=chanStatusBitMap, frConnFGCRAEnable=frConnFGCRAEnable, frRemoteNSAP=frRemoteNSAP, zeroCirConEir=zeroCirConEir, frChanStateGrpTable=frChanStateGrpTable, egressQECNThresh=egressQECNThresh, chanOvrSubOvrRide=chanOvrSubOvrRide, deTaggingEnable=deTaggingEnable, chanCLPtoDEmap=chanCLPtoDEmap, chanIngrPercentUtil=chanIngrPercentUtil, frConnPCR=frConnPCR, frChanCnfIgnoreIncomingDE=frChanCnfIgnoreIncomingDE, endChanNum=endChanNum, endLineNum=endLineNum, ingressQECNThresh=ingressQECNThresh, frChanStateGrp=frChanStateGrp, endPortNum=endPortNum, ciscoWanFrConnGroup=ciscoWanFrConnGroup, frstdABRCnfGrpTable=frstdABRCnfGrpTable)
| pysnmp-with-texts/CISCO-WAN-FR-CONN-MIB.py | 86,389 | PySNMP MIB module CISCO-WAN-FR-CONN-MIB (http://snmplabs.com/pysmi) ASN.1 source file:///Users/davwang4/Dev/mibs.snmplabs.com/asn1/CISCO-WAN-FR-CONN-MIB Produced by pysmi-0.3.4 at Wed May 1 12:20:25 2019 On host DAVWANG4-M-1475 platform Darwin version 18.5.0 by user davwang4 Using Python version 3.7.3 (default, Mar 27 2019, 09:23:15) | 336 | en | 0.442734 |
"""
Adds troposphere methods for adding scaling to a cluster
"""
from troposphere.awslambda import Function, Code, Environment, Permission
from troposphere import Ref, Sub, GetAtt
from troposphere.iam import Role, Policy
from troposphere.events import Target, Rule
from troposphere.ssm import Parameter
from ecs_cluster_deployer.utils import sanitize_cfn_resource_name
def add_scaling(spot_fleet, template, cluster_name):
""" Add scaling resources to a cluster """
ssm_param = Parameter(
'Scale{}'.format(sanitize_cfn_resource_name(spot_fleet.get('name'))),
Type="String",
Value="0",
Name=Sub("/ecs-maestro/${ClusterName}/${Version}/scaletime")
)
template.add_resource(ssm_param)
function_name = sanitize_cfn_resource_name(cluster_name)
autoscaling_role = Role(
"AutoscalingRole",
AssumeRolePolicyDocument={
"Statement": [{
"Effect": "Allow",
"Action": "sts:AssumeRole",
"Principal": {"Service": "lambda.amazonaws.com"},
}]
},
Policies=[
Policy(
PolicyName="ec2-spot-fleet-scaler",
PolicyDocument={
"Statement": [{
"Effect": "Allow",
"Action": [
"cloudwatch:Get*",
"ec2:DescribeSpotFleetRequests",
"ec2:ModifySpotFleetRequest",
"logs:*",
"ecs:ListContainerInstances",
"ecs:Update*",
"ecs:ListTasks",
"s3:GetEncryptionConfiguration"
],
"Resource": "*"
}, {
"Effect": "Allow",
"Action": [
"ssm:Get*",
"ssm:Put*",
"ssm:Delete*"
],
"Resource": [
{"Fn::Sub": "arn:aws:ssm:${AWS::Region}:${AWS::AccountId}:parameter/ecs-maestro/${ClusterName}/*"}
]
}]
}
),
Policy(
PolicyName="DeleteStack",
PolicyDocument={
"Statement": [{
"Effect": "Allow",
"Action": [
"lambda:InvokeFunction",
],
"Resource": [
{"Fn::Sub": "arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:"+function_name+"ASGCleanupLambda"}]
}]
}
)
]
)
template.add_resource(autoscaling_role)
scaling_lambda = Function(
'ScalingLambda{}'.format(sanitize_cfn_resource_name(spot_fleet.get('name'))),
Code=Code(
S3Bucket=Sub("${S3Bucket}"),
S3Key=Sub("${S3Prefix}/deployment.zip")
),
Handler="scaling.scale_spot.lambda_handler",
Role=GetAtt(autoscaling_role, "Arn"),
Environment=Environment(
Variables={
"CLUSTER_NAME": Sub("${ClusterName}"),
"SPOT_FLEET": Ref(
"SpotFleet{}".format(
sanitize_cfn_resource_name(
spot_fleet.get('name')
)
)
),
"STATUS": Sub("${Status}"),
"VERSION": Sub("${Version}"),
"SCALE_IN_THRESHOLD": Sub("${SpotTaskThresholdIn}"),
"SCALE_OUT_THRESHOLD": Sub("${SpotTaskThresholdOut}"),
"MAX_WEIGHT": Sub("${SpotMaxWeight}"),
"MIN_WEIGHT": Sub("${SpotMinWeight}")
}
),
Timeout=900,
MemorySize=128,
Runtime="python3.7",
)
template.add_resource(scaling_lambda)
CronScaling = Rule(
"CronScaling{}".format(
sanitize_cfn_resource_name(spot_fleet.get('name'))
),
ScheduleExpression="rate(1 minute)",
Description="Cron for cluster stats",
Targets=[
Target(
Id="1",
Arn=GetAtt(scaling_lambda, "Arn"))
]
)
template.add_resource(CronScaling)
ScalingPerm = Permission(
"ScalePerm{}".format(
sanitize_cfn_resource_name(spot_fleet.get('name'))
),
Action="lambda:InvokeFunction",
FunctionName=GetAtt(scaling_lambda, "Arn"),
Principal="events.amazonaws.com",
SourceArn=GetAtt(CronScaling, "Arn")
)
template.add_resource(ScalingPerm)
| ecs_cluster_deployer/compute/lambda_scaler.py | 4,805 | Add scaling resources to a cluster
Adds troposphere methods for adding scaling to a cluster | 92 | en | 0.651314 |
#!/usr/bin/python
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.helpers import ModuleRes, CartridgeException, cartridge_errcodes
from ansible.module_utils.helpers import get_control_console
from ansible.module_utils.helpers import dynamic_box_cfg_params
import os
argument_spec = {
'restarted': {'required': False, 'type': 'bool'},
'control_sock': {'required': True, 'type': 'str'},
'appname': {'required': True, 'type': 'str'},
'instance_conf_file': {'required': True, 'type': 'str'},
'conf_section_name': {'required': True, 'type': 'str'},
'cluster_cookie': {'required': True, 'type': 'str'},
'cartridge_defaults': {'required': True, 'type': 'dict'},
'config': {'required': True, 'type': 'dict'},
'stateboard': {'required': True, 'type': 'bool'}
}
def read_yaml_file_section(filepath, control_console, section):
sections = control_console.eval('''
local file = require('fio').open('{}')
if file == nil then
error('Failed to open instance config file')
end
local buf = {{}}
while true do
local val = file:read(1024)
if val == nil then
error('Failed to read from instance config file')
elseif val == '' then
break
end
table.insert(buf, val)
end
file:close()
local data = table.concat(buf, '')
local ok, ret = pcall(require('yaml').decode, data)
if not ok then
error('Failed to decode instance config from YAML')
end
return ret
'''.format(filepath))
if section not in sections:
errmsg = 'File {} does not contain section: {}'.format(filepath, section)
raise CartridgeException(cartridge_errcodes.MISSED_SECTION, errmsg)
return sections[section]
def check_conf_updated(new_conf, old_conf, ignore_keys=[]):
# check new conf keys
for key, value in new_conf.items():
if key not in ignore_keys:
if key not in old_conf or old_conf[key] != value:
return True
# check old conf keys
for key, value in old_conf.items():
if key not in ignore_keys:
if key not in new_conf or new_conf[key] != value:
return True
return False
def get_current_cfg(control_console):
return control_console.eval('''
return type(box.cfg) ~= 'function' and box.cfg or box.NULL
''')
def needs_restart(params):
restarted = params['restarted']
if restarted is True:
return ModuleRes(success=True, changed=True)
if restarted is False:
return ModuleRes(success=True, changed=False)
stateboard = params['stateboard']
control_sock = params['control_sock']
appname = params['appname']
new_default_conf = params['cartridge_defaults']
new_instance_conf = params['config']
cluster_cookie = params['cluster_cookie']
instance_conf_file = params['instance_conf_file']
conf_section_name = params['conf_section_name']
default_conf_path = '/etc/tarantool/conf.d/{}.yml'.format(appname)
app_code_path = '/usr/share/tarantool/{}'.format(appname)
# check if instance was not started yet
if not os.path.exists(control_sock):
return ModuleRes(success=True, changed=True)
try:
control_console = get_control_console(control_sock)
except CartridgeException as e:
allowed_errcodes = [
cartridge_errcodes.SOCKET_NOT_FOUND,
cartridge_errcodes.FAILED_TO_CONNECT_TO_SOCKET,
cartridge_errcodes.INSTANCE_IS_NOT_STARTED_YET
]
if e.code in allowed_errcodes:
return ModuleRes(success=True, changed=True)
last_restart_time = os.path.getmtime(control_sock)
# check if application code was updated
package_update_time = os.path.getmtime(app_code_path)
if last_restart_time < package_update_time:
return ModuleRes(success=True, changed=True)
# check if instance config was changed (except memtx_memory)
current_instance_conf = read_yaml_file_section(
instance_conf_file,
control_console,
conf_section_name
)
if check_conf_updated(new_instance_conf, current_instance_conf, dynamic_box_cfg_params):
return ModuleRes(success=True, changed=True)
if not stateboard:
# check if default config was changed (except memtx_memory)
current_default_conf = read_yaml_file_section(
default_conf_path,
control_console,
appname
)
new_default_conf.update({'cluster_cookie': cluster_cookie})
if check_conf_updated(new_default_conf, current_default_conf, dynamic_box_cfg_params):
return ModuleRes(success=True, changed=True)
current_cfg = get_current_cfg(control_console)
for param_name in dynamic_box_cfg_params:
new_value = None
if param_name in new_instance_conf:
new_value = new_instance_conf[param_name]
elif not stateboard and param_name in new_default_conf:
new_value = new_default_conf[param_name]
# This code is ran after attempt to change parameter in runtime
# If current parameter wasn't changed to the new value,
# it mean that instance should be restarted to apply change
if new_value is not None:
if current_cfg[param_name] != new_value:
return ModuleRes(success=True, changed=True)
return ModuleRes(success=True, changed=False)
def main():
module = AnsibleModule(argument_spec=argument_spec)
try:
res = needs_restart(module.params)
except CartridgeException as e:
module.fail_json(msg=str(e))
if res.success is True:
module.exit_json(changed=res.changed, meta=res.meta)
else:
module.fail_json(msg=res.msg)
if __name__ == '__main__':
main()
| library/cartridge_needs_restart.py | 5,931 | !/usr/bin/python check new conf keys check old conf keys check if instance was not started yet check if application code was updated check if instance config was changed (except memtx_memory) check if default config was changed (except memtx_memory) This code is ran after attempt to change parameter in runtime If current parameter wasn't changed to the new value, it mean that instance should be restarted to apply change | 423 | en | 0.888001 |
import sys, os
sys.path.append('../../') #get rid of this at some point with central test script or when package is built
os.chdir('../../')
import MSI.simulations.instruments.shock_tube as st
import MSI.cti_core.cti_processor as pr
import MSI.optimization.matrix_loader as ml
import MSI.optimization.opt_runner as opt
import MSI.simulations.absorbance.curve_superimpose as csp
import MSI.simulations.yaml_parser as yp
import MSI.optimization.shock_tube_optimization_shell_six_param_fit as stMSIspf
import cantera as ct
import pandas as pd
import numpy as np
import MSI.utilities.plotting_script as plotter
import MSI.utilities.post_processor as post_processor
files_to_include = [['Pirraglia_0.yaml']]
numer_of_iterations = 3
cti_file = 'glarborg_custom.cti'
working_directory = 'MSI/data/H_O2'
reaction_uncertainty_csv = 'glarborg_reaction_uncertainty.csv'
master_reaction_equation_cti_name = 'master_reactions_glarborg.cti'
#rate_constant_target_value_data = 'burke_target_value_single_reactions.csv'
#this would be an empty string '' if you do not want to include it
run_with_k_target_values = 'On'
master_equation_reactions = ['H2O2 + OH <=> H2O + HO2',
'2 HO2 <=> H2O2 + O2',
'HO2 + OH <=> H2O + O2',
'2 OH <=> H2O + O',
'CH3 + HO2 <=> CH4 + O2',
'CH3 + HO2 <=> CH3O + OH']
#master_index = [2,3,4,5,6,7]
master_index = [2,3,4,5,6,7]
master_equation_uncertainty_df = pd.read_csv('MSI/data/H_O2/six_parameter_fit_large_uncertainty.csv')
#this could be 'On'
rate_constant_target_value_data_for_plotting = 'FFCM1_target_reactions_1_plotting.csv'
rate_constant_target_value_data = 'FFCM1_target_reactions_1.csv'
rate_constant_target_value_data_extra = 'FFCM1_target_reactions_extra_data.csv'
#start here
six_parameter_fit_sensitivities = {'H2O2 + OH <=> H2O + HO2':{'A':np.array([-13.37032086, 32.42060027, 19.23022032, 6.843287462 , 36.62853824 ,-0.220309785 ,-0.099366346, -4.134352081]),
'n':np.array([1.948532282, -5.341557065, -3.337497841, -1.025292166, -5.813524857, 0.011862923 ,0.061801326, 0.581628835]),
'Ea':np.array([-0.463042822, 1.529151218, 0.808025472 ,0.359889935, -0.021309254, -0.098013004, -0.102022118, -0.097024727]),
'c':np.array([0.00163576, -0.008645666, -0.003111179, -0.002541995, 0.014228149 ,0.001263134, 0.001236963, -0.000390567]),
'd':np.array([1.071992802, -2.780550365, -1.71391034 ,-0.274481751, -4.491132406, -0.054960894, 0.049553379, 0.270885383]),
'f':np.array([-0.027060156, 0.056903076, 0.041102936 ,0.001361221, 0.144385439, 0.003136796 ,0.001374015, -0.006089248])},
'2 HO2 <=> H2O2 + O2': {'A':np.array([-12.93733217, 24.39245077 ,17.73177606, 4.37803475, 33.44985889, 0.381601192 ,3.748890308]),
'n':np.array([1.872602872, -4.096806067, -3.09439453 ,-0.63226683, -5.125008418, -0.061610462, -0.677953862]),
'Ea':np.array([-0.463903763 ,1.259537237, 0.826684258 ,0.257400116, 0.803882706 ,2.20E-05, 0.181336266]),
'c':np.array([0.002069572, -0.008314769, -0.00424128 ,-0.002016113, 0.000134642 ,0.000122049 ,-0.001026567]),
'd':np.array([0.981856324, -1.847383095, -1.493544053, 0.016222685, -3.428753345, -0.050708107, -0.526284003]),
'f':np.array([-0.022628436, 0.023558844, 0.031573523 ,-0.00732987, 0.096573278 ,0.001668073, 0.01033547])},
'HO2 + OH <=> H2O + O2': {'A':np.array([-4.795727446, 6.426354909 ,4.878258417, 2.472791017, 7.856296474, 1.328033302 ,-3.457932692, -0.349839371, 2.331070924 ,2.403555921, -0.165397001, 0.246540172 ,0.722946077]),
'n':np.array([0.624241134, -1.321082842, -1.032242319, -0.36532386, -1.112545721, -0.188622956, 0.421083939 ,0.038859478 ,-0.360855106, -0.38989218, 0.029669899 ,-0.04371581, -0.130487515]),
'Ea':np.array([-0.259799111, 0.205620792 ,0.130799794, 0.137023666 ,0.379232542, 6.19E-02, -0.198196699, -0.023548432, 0.118069394 ,0.104383314 ,-0.003830947, 0.011566499 ,-0.073557828]),
'c':np.array([0.00161312, -0.001906694, -0.000863021, -0.00105112 ,-0.002185605, -0.000334461, 0.001817049 ,0.000170761, -0.000859313, -0.000653029, -3.11E-06 ,-6.37E-05, 0.00047058]),
'd':np.array([0.124499363, -0.645652135, -0.535188558, 0.052734001 ,-0.45181066, -0.082250635, 0.034779283, -0.011522821, 0.017057742, -0.165960963, 0.057288687, -0.012776017, -0.192422381]),
'f':np.array([0.002033109, -0.011099716, 0.005351213 ,-0.007623667, 0.005327017 ,0.001259485,0.00245957, 0.000976725 ,-0.004879845, 0.001903886 ,-0.001838669 ,0.000252269, 0.004691829])},
'2 OH <=> H2O + O': {'A': np.array([-5.40485067, 18.96061659 ,8.089301961, 6.953940096 ,-12.54280438, -3.264972401, 2.106487623 ,-1.657943467, 1.614935 ,-1.536463599]),
'n': np.array([0.803274875, -3.167851673, -1.607661056, -1.041258197, 1.679914849, 0.466415264 ,-0.326136934, 0.355297684 ,-0.16618967, 0.253903734]),
'Ea': np.array([0.147285831, 0.605814544, -0.062253282, 0.372322712, -1.884116555, -0.281992263, 0.099465537 ,0.030650483, 0.176069015 ,-0.056967886]),
'c': np.array([-0.003001658, -0.001870536, 0.003820535 ,-0.002753277, 0.014224162, 0.00032969 ,-0.000627241, -0.001081979, -0.002009835, 0.000255318]),
'd':np.array([0.446957978, -1.467039994, -1.298391635, -0.402720385, 0.568106728 ,0.229877892, -0.194395052, 1.033858025 ,0.527183366, 0.308743056]),
'f':np.array([-0.010053913, 0.025128322, 0.035579811 ,0.00515753 ,-0.0083511, -0.00512885, 0.003954, -0.029711993 ,-0.01986861, -0.007691647])},
'CH3 + HO2 <=> CH4 + O2': {'A':np.array([.007845,-.89278,-.94908]),
'n':np.array([-0.00104,-.36888,.154462]),
'Ea':np.array([.504278,-.44379,-0.03181]),
'c':np.array([0,0,0]),
'd':np.array([0,0,0]),
'f':np.array([0,0,0])},
'CH3 + HO2 <=> CH3O + OH': {'A':np.array([1.319108,-.92151]),
'n':np.array([-.04282,.150846]),
'Ea':np.array([0.024285,-0.02956]),
'c':np.array([0,0]),
'd':np.array([0,0]),
'f':np.array([0,0])}}
molecular_parameter_sensitivities = {'H2O2 + OH <=> H2O + HO2':{'A':np.array([-0.373074255, -5.658058364,-2.203911028,1.69333527,-7.110529947,-0.272049596,1.373125254,-0.644666166]),
'n':np.array([0.043611058, 0.15417925, -0.208413633, -0.306031876, 0.81053055, 0.031772359 ,-0.136901806, 0.073807424]),
'Ea':np.array([0.419762882, -1.301125209, -0.681648059, -0.091866582, -2.353326781, -0.064230907, 0.047721593 ,0.147941186])},
'2 HO2 <=> H2O2 + O2': {'A':np.array([-0.166005487, -6.797175212, -2.798300682, 1.973896891 ,-4.354910767, -0.082067357, -3.839749825]),
'n':np.array([0.018748596, 0.294710827 ,-0.135488286, -0.332967052, 0.4930396, 0.009470627 ,0.409095255]),
'Ea':np.array([0.459015825, -1.401810899, -0.722040616, -0.066133729, -1.52807633 ,-0.021832631, -0.411667639])},
'HO2 + OH <=> H2O + O2': {'A':np.array([-1.30109642, -11.63457509, -4.680271526, 0.782373804 , -0.016083278, 0.005513255 ,-1.738426278, -0.232013539, 0.884067816 ,-0.500473791, 0.399272687 ,0.062255923 ,-1.667253993]),
'n':np.array([0.152797314, 1.1181845, 0.306250902 ,-0.164846884, -0.008229148, -0.001531881, 0.195875814 ,0.026844834, -0.18238354 ,0.017363927, -0.055634983 ,-0.017324495, 0.218771679]),
'Ea':np.array([0.101558432, -1.638858106, -0.704325409, -0.119041648, -0.307281167, -0.04872945, 0.001603412 ,0.000324159, -0.08089174, -0.148811902, 0.027266121 ,-0.002907638, -0.237949453])},
'2 OH <=> H2O + O': {'A': np.array([0.299144373, -2.662684629, -6.643003014, 0.370230493 ,-3.354253502, -0.271981922, -0.581195748, 9.774024441 , 5.90328859, 2.272800133]),
'n': np.array([-0.028599275, -0.071787028, 0.572722706 ,-0.109709456, 0.381272207 ,0.03153973 ,0.061282516, -1.341475144, -0.835422411, -0.302994441]),
'Ea': np.array([0.535103651, -1.054606857, -0.989721261, -0.169631331, -1.099840578, -0.069647609, -0.101285313, 0.74522721, 0.352517552 ,0.205464658])},
'CH3 + HO2 <=> CH4 + O2': {'A':np.array([.007845,-.89278,-.94908]),
'n':np.array([-0.00104,-.36888,.154462]),
'Ea':np.array([.504278,-.44379,-0.03181])},
'CH3 + HO2 <=> CH3O + OH': {'A':np.array([1.319108,-.92151]),
'n':np.array([-.04282,.150846]),
'Ea':np.array([0.024285,-0.02956])}}
six_parameter_fit_nominal_parameters_dict = {'H2O2 + OH <=> H2O + HO2':{'A':4.64E-06,'n':5.605491008,'Ea':-5440.266692,'c':126875776.1,'d':0.000441194,'f':-5.35E-13},
'2 HO2 <=> H2O2 + O2':{'A':1.30E+04,'n':1.997152351,'Ea':-3628.04407,'c':93390973.44,'d':-0.000732521,'f':8.20E-12} ,
'HO2 + OH <=> H2O + O2':{'A':1.41E+18,'n':-2.05344973,'Ea':-232.0064051,'c':15243859.12,'d':-0.001187694,'f':8.01E-12},
'2 OH <=> H2O + O':{'A':354.5770856,'n':2.938741717,'Ea':-1836.492972,'c':12010735.18,'d':-4.87E-05,'f':1.22E-12},
'CH3 + HO2 <=> CH4 + O2':{'A':3.19e3,'n':2.670857,'Ea':-4080.73,'c':0.0,'d':0.0,'f':0.0},
'CH3 + HO2 <=> CH3O + OH':{'A':8.38e11,'n':.29,'Ea':-785.45,'c':0.0,'d':0.0,'f':0.0}}
MSI_st_instance_one = stMSIspf.MSI_shocktube_optimization_six_parameter_fit(cti_file,
.01,
1,
1,
working_directory,
files_to_include,
reaction_uncertainty_csv,rate_constant_target_value_data,
master_equation_reactions = master_equation_reactions,
molecular_parameter_sensitivities = molecular_parameter_sensitivities,
six_parameter_fit_sensitivities = six_parameter_fit_sensitivities,
master_reaction_equation_cti_name = master_reaction_equation_cti_name,
master_index = master_index,
master_equation_uncertainty_df = master_equation_uncertainty_df,
six_paramter_fit_nominal_parameters_dict = six_parameter_fit_nominal_parameters_dict)
MSI_st_instance_one.one_run_shock_tube_optimization()
S_matrix_original = MSI_st_instance_one.S_matrix
exp_dict_list_original = MSI_st_instance_one.experiment_dictonaries
original_covariance = MSI_st_instance_one.covarience
X_one_itteration = MSI_st_instance_one.X
MSI_st_instance_one.deltaXAsNsEas
#need to fix this and return _s_matrix and y_matrix
MSI_st_instance_two = stMSIspf.MSI_shocktube_optimization_six_parameter_fit(cti_file,
.01,
1,
1,
working_directory,
files_to_include,
reaction_uncertainty_csv,rate_constant_target_value_data,
master_equation_reactions = master_equation_reactions,
molecular_parameter_sensitivities = molecular_parameter_sensitivities,
six_parameter_fit_sensitivities = six_parameter_fit_sensitivities,
master_reaction_equation_cti_name = master_reaction_equation_cti_name,
master_index = master_index,
master_equation_uncertainty_df = master_equation_uncertainty_df,
six_paramter_fit_nominal_parameters_dict = six_parameter_fit_nominal_parameters_dict)
#
#
#
#
#
#ALL OF THIS STUFF CAN PROBABLY GO INTO SOME SORT OF CLASS
delta_X_list = MSI_st_instance_two.multiple_shock_tube_runs(numer_of_iterations)
deltaXAsNsEas = MSI_st_instance_two.deltaXAsNsEas
physical_obervable_updates_list = MSI_st_instance_two.physical_obervable_updates_list
absorbance_observables_updates_list = MSI_st_instance_two.absorbance_coef_update_dict
Ydf = MSI_st_instance_two.Y_data_frame
Zdf = MSI_st_instance_two.z_data_frame
experimental_dicts = MSI_st_instance_two.experiment_dictonaries
z_matrix = MSI_st_instance_two.z_matrix
s_matrix = MSI_st_instance_two.s_matrix
y = MSI_st_instance_two.y_matrix
Y_matrix = MSI_st_instance_two.Y_matrix
S_matrix = MSI_st_instance_two.S_matrix
X = MSI_st_instance_two.X
Xdf = MSI_st_instance_two.X_data_frame
covarience = MSI_st_instance_two.covarience
exp_dict_list_optimized_extra_reaction = MSI_st_instance_two.experiment_dictonaries
parsed_yaml_list = MSI_st_instance_two.list_of_parsed_yamls
sigma = MSI_st_instance_two.sigma
X = MSI_st_instance_two.X
delta_X = MSI_st_instance_two.delta_X
molecular_parameter_updates = MSI_st_instance_two.delta_x_molecular_params_by_reaction_dict
nominal_dict_six_p_fit = MSI_st_instance_two.six_paramter_fit_nominal_parameters_dict
original_diag = np.diag(original_covariance)
#target_value_rate_constant_csv = 'MSI/data/test_data/FFCM1_custom_target_value_test.csv'
original_cti_file = MSI_st_instance_two.data_directory +'/'+ MSI_st_instance_two.cti_file_name
experiment_dict_uncertainty = MSI_st_instance_two.experiment_dict_uncertainty_original
target_value_csv = MSI_st_instance_two.data_directory +'/'+ MSI_st_instance_two.k_target_values_csv
six_parameter_fit_dict_optimized = MSI_st_instance_two.updated_six_parameter_fits_dict
if run_with_k_target_values == 'On' or run_with_k_target_values == 'on':
k_target_value_S_matrix = MSI_st_instance_two.k_target_values_for_s
else:
k_target_value_S_matrix = None
##########################################################################################################################
#PLOTTING##
##########################################################################################################################
#csv_file_sigma = MSI_st_instance_two.data_directory +'/'+'sigma_for_uncertainty_weighted_sensitivity_FFCM1.csv'
csv_file_sigma = MSI_st_instance_two.data_directory +'/'+'sigma_for_uncertainty_weighted_sensitivity_glarborg.csv'
#csv_file_sigma = ''
plotting_instance = plotter.Plotting(S_matrix,
s_matrix,
Y_matrix,
Y_matrix,
z_matrix,
X,
sigma,
covarience,
original_covariance,
S_matrix_original,
exp_dict_list_optimized_extra_reaction,
exp_dict_list_original,
parsed_yaml_list,
Ydf,
target_value_rate_constant_csv= MSI_st_instance_two.data_directory +'/'+ rate_constant_target_value_data_for_plotting ,
target_value_rate_constant_csv_extra_values = MSI_st_instance_two.data_directory +'/'+rate_constant_target_value_data_extra,
k_target_value_S_matrix =k_target_value_S_matrix,
k_target_values=run_with_k_target_values,
working_directory = working_directory,
sigma_uncertainty_weighted_sensitivity_csv=csv_file_sigma)
#csv_file_sigma = MSI_st_instance_two.data_directory +'/'+'sigma_for_uncertainty_weighted_sensitivity_updated.csv'
observable_counter_and_absorbance_wl,length_of_experimental_data = plotting_instance.lengths_of_experimental_data()
sigmas_optimized,test = plotting_instance.calculating_sigmas(S_matrix,covarience)
sigmas_original,test2 = plotting_instance.calculating_sigmas(S_matrix_original,original_covariance)
plotting_instance.plotting_observables(sigmas_original = sigmas_original,sigmas_optimized= sigmas_optimized)
diag = plotting_instance.getting_matrix_diag(covarience)
#plotting_instance.Y_matrix_plotter(Y_matrix,exp_dict_list_optimized,y,sigma)
#
#
#plotting_instance.plotting_rate_constants(optimized_cti_file=MSI_st_instance_two.new_cti_file,
# original_cti_file=original_cti_file,
# initial_temperature=250,
# final_temperature=2500)
sensitivity, top_sensitivity = plotting_instance.sort_top_uncertainty_weighted_sens()
obs = plotting_instance.plotting_uncertainty_weighted_sens()
plotting_instance.plotting_rate_constants_six_paramter_fit(optimized_cti_file=MSI_st_instance_two.new_cti_file,
original_cti_file=original_cti_file,
initial_temperature=250,
final_temperature=2500,
master_equation_reactions = master_equation_reactions,
six_parameter_fit_dict_optimized = six_parameter_fit_dict_optimized,
six_parameter_fit_dict_nominal = six_parameter_fit_nominal_parameters_dict,
six_parameter_fit_sensitivity_dict =six_parameter_fit_sensitivities )
#plotting_instance.plotting_X_itterations(list_of_X_values_to_plot = [0,1,2,3,4,5,50],list_of_X_array=X_list,number_of_iterations=numer_of_iterations)
post_processor_instance = post_processor.post_processing(optimized_cti_file = MSI_st_instance_two.new_cti_file,
original_cti_file = original_cti_file,
kinetic_paramter_dictonary = MSI_st_instance_two.kinetic_paramter_dict,
master_equation_reactions=master_equation_reactions,
six_parameter_fit_nominal_parameters_dict = six_parameter_fit_nominal_parameters_dict,
six_parameter_fit_optimized_paramter_dict = six_parameter_fit_dict_optimized,
exp_dict_list_optimized = exp_dict_list_optimized_extra_reaction,
exp_dict_list_original = exp_dict_list_original,
parsed_yaml_list = parsed_yaml_list)
kinetic_paramters_dict = post_processor_instance.create_active_kinetic_paramter_dictonary()
physical_params_dict = post_processor_instance.create_active_physical_paramter_dictonary()
| tests/shock_tube_optimization_shell_six_paramter_fit_test_modified.py | 22,491 | get rid of this at some point with central test script or when package is builtrate_constant_target_value_data = 'burke_target_value_single_reactions.csv'this would be an empty string '' if you do not want to include it master_index = [2,3,4,5,6,7]this could be 'On'start here need to fix this and return _s_matrix and y_matrixALL OF THIS STUFF CAN PROBABLY GO INTO SOME SORT OF CLASStarget_value_rate_constant_csv = 'MSI/data/test_data/FFCM1_custom_target_value_test.csv'PLOTTINGcsv_file_sigma = MSI_st_instance_two.data_directory +'/'+'sigma_for_uncertainty_weighted_sensitivity_FFCM1.csv'csv_file_sigma = ''csv_file_sigma = MSI_st_instance_two.data_directory +'/'+'sigma_for_uncertainty_weighted_sensitivity_updated.csv'plotting_instance.Y_matrix_plotter(Y_matrix,exp_dict_list_optimized,y,sigma)plotting_instance.plotting_rate_constants(optimized_cti_file=MSI_st_instance_two.new_cti_file, original_cti_file=original_cti_file, initial_temperature=250, final_temperature=2500)plotting_instance.plotting_X_itterations(list_of_X_values_to_plot = [0,1,2,3,4,5,50],list_of_X_array=X_list,number_of_iterations=numer_of_iterations) | 1,221 | en | 0.546867 |
# Copyright 2020 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""StackPush and StackPop op"""
from mindspore.ops.op_info_register import op_info_register, AiCPURegOp, DataType
stack_init_op_info = AiCPURegOp("StackInit") \
.fusion_type("OPAQUE") \
.attr("index", "int") \
.get_op_info()
stack_push_op_info = AiCPURegOp("StackPush") \
.fusion_type("OPAQUE") \
.input(0, "src", "required") \
.attr("index", "int") \
.dtype_format(DataType.U8_Default) \
.dtype_format(DataType.U16_Default) \
.dtype_format(DataType.U32_Default) \
.dtype_format(DataType.U64_Default) \
.dtype_format(DataType.I8_Default) \
.dtype_format(DataType.I16_Default) \
.dtype_format(DataType.I32_Default) \
.dtype_format(DataType.I64_Default) \
.dtype_format(DataType.F16_Default) \
.dtype_format(DataType.F32_Default) \
.dtype_format(DataType.F64_Default) \
.dtype_format(DataType.BOOL_Default) \
.get_op_info()
stack_pop_op_info = AiCPURegOp("StackPop") \
.fusion_type("OPAQUE") \
.output(0, "dst", "required") \
.attr("index", "int") \
.dtype_format(DataType.U8_Default) \
.dtype_format(DataType.U16_Default) \
.dtype_format(DataType.U32_Default) \
.dtype_format(DataType.U64_Default) \
.dtype_format(DataType.I8_Default) \
.dtype_format(DataType.I16_Default) \
.dtype_format(DataType.I32_Default) \
.dtype_format(DataType.I64_Default) \
.dtype_format(DataType.F16_Default) \
.dtype_format(DataType.F32_Default) \
.dtype_format(DataType.F64_Default) \
.dtype_format(DataType.BOOL_Default) \
.get_op_info()
stack_destroy_op_info = AiCPURegOp("StackDestroy") \
.fusion_type("OPAQUE") \
.attr("index", "int") \
.get_op_info()
@op_info_register(stack_init_op_info)
def _stack_init_aicpu():
"""StackInit aicpu register"""
return
@op_info_register(stack_push_op_info)
def _stack_push_aicpu():
"""StackPush aicpu register"""
return
@op_info_register(stack_pop_op_info)
def _stack_pop_aicpu():
"""StackPop aicpu register"""
return
@op_info_register(stack_destroy_op_info)
def _stack_destroy_aicpu():
"""StackDestroy aicpu register"""
return
| mindspore/ops/_op_impl/aicpu/stack_push_pop.py | 2,809 | StackDestroy aicpu register
StackInit aicpu register
StackPop aicpu register
StackPush aicpu register
StackPush and StackPop op
Copyright 2020 Huawei Technologies Co., Ltd Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ============================================================================ | 768 | en | 0.780292 |
from tree import TreeNode
def min_depth(self, root):
"""
:type root: TreeNode
:rtype: int
"""
if root is None:
return 0
if root.left is not None or root.right is not None:
return max(self.minDepth(root.left), self.minDepth(root.right))+1
return min(self.minDepth(root.left), self.minDepth(root.right)) + 1
# iterative
def min_height(root):
if root is None:
return 0
height = 0
level = [root]
while level:
height += 1
new_level = []
for node in level:
if node.left is None and node.right is None:
return height
if node.left is not None:
new_level.append(node.left)
if node.right is not None:
new_level.append(node.right)
level = new_level
return height
def print_tree(root):
if root is not None:
print(root.val)
print_tree(root.left)
print_tree(root.right)
if __name__ == '__main__':
tree = TreeNode(10)
tree.left = TreeNode(12)
tree.right = TreeNode(15)
tree.left.left = TreeNode(25)
tree.left.left.right = TreeNode(100)
tree.left.right = TreeNode(30)
tree.right.left = TreeNode(36)
height = min_height(tree)
print_tree(tree)
print("height:", height)
| algorithms/tree/min_height.py | 1,318 | :type root: TreeNode
:rtype: int
iterative | 44 | en | 0.441816 |
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
import os
from django.core.management.base import BaseCommand
from django.conf import settings
from lib.l10n_utils.gettext import merge_lang_files
class Command(BaseCommand):
help = 'Merges gettext strings into .lang files'
def add_arguments(self, parser):
# Positional arguments
parser.add_argument('langs', nargs='*')
def handle(self, *args, **options):
langs = options['langs']
if not langs:
langs = os.listdir(os.path.join(settings.ROOT, 'locale'))
langs = filter(lambda x: x != 'templates', langs)
langs = filter(lambda x: x[0] != '.', langs)
merge_lang_files(langs)
| lib/l10n_utils/management/commands/l10n_merge.py | 867 | This Source Code Form is subject to the terms of the Mozilla Public License, v. 2.0. If a copy of the MPL was not distributed with this file, You can obtain one at http://mozilla.org/MPL/2.0/. Positional arguments | 213 | en | 0.89557 |
#-*- coding: utf-8 -*-
from DBP.models import Base, session
from DBP.models.user import User
from sqlalchemy.orm import class_mapper
from sqlalchemy.inspection import inspect
from sqlalchemy.sql import func
from sqlalchemy.dialects.mysql import INTEGER,VARCHAR, DATETIME
from datetime import datetime
import csv
import io
from openpyxl import Workbook
from openpyxl import load_workbook
def utf_8_encoder(unicode_csv_data):
for line in unicode_csv_data:
yield line.encode('utf-8')
class OriginalData (object):
def __init__(self, length, name, mappinginfo):
self.length = length
self.name = name
cols = inspect(self.__class__).columns
if len(mappinginfo) != len(cols) -3:
raise TypeError
for col in mappinginfo:
setattr(self,str( u"sch_"+col["label"]["name"]),int(col["col"]))
def dict(self):
data = {
"id" : self.id,
"length" : self.length,
"name" : self.name,
"mapinfo" : self.mapList()
}
return data
def getInfo(self):
data = self.dict()
data["parsednum"] = len(self.parseds)
data["tasknum"] = sum(map(lambda x: len(x.tasks),self.parseds))
return data
def mapList(self):
maplist = list()
for col in filter(lambda x: x.name[:3] == u"sch", inspect(self.__class__).columns ):
maplist.append(getattr(self,col.name))
return maplist
def getSchema(self):
return filter(lambda x: x.name[:3] == u"sch", inspect(self.__class__).columns )
def loadcsv(self,submitter,csvread,nth,duration_start,duration_end):
reader = csv.reader(csvread, delimiter=',', quotechar="'")
csvwrite = io.BytesIO()
writer = csv.writer(csvwrite, delimiter=',', quotechar="'")
maplist = self.mapList()
counter = 0
dupset = set()
dupcounter = 0
nullcount = dict()
schema = self.getSchema()
for col in schema:
nullcount[col.name] = 0
for rrow in reader:
crow = list()
for mapnum, col in zip(maplist, schema):
crow.append(rrow[mapnum])
if rrow[mapnum] == "":
nullcount[col.name] +=1
dupset.add(unicode(crow))
writer.writerow(crow)
counter += 1
evaluator = User.randomEvaluator()
parsedmodel = self.parsedclass(nth,duration_start,duration_end,csvwrite,counter, counter - len(dupset))
parsedmodel.submitterid = submitter.id
parsedmodel.evaluatorid = evaluator.id
self.taskrow.addUser(evaluator)
for col in schema :
setattr(parsedmodel,"null_" + col.name[4:] , nullcount[col.name] / (counter*1.0) )
self.parseds.append(parsedmodel)
session.commit()
return parsedmodel
def loadxlsx(self,submitter,xlsxread,nth,duration_start,duration_end):
wb = load_workbook(xlsxread)
ws = wb.active
csvwrite = io.BytesIO()
writer = csv.writer(csvwrite, delimiter=',', quotechar="'")
maplist = self.mapList()
counter = 0
dupset = set()
dupcounter = 0
nullcount = dict()
schema = self.getSchema()
for col in schema:
nullcount[col.name] = 0
for rrow in ws.rows:
crow = list()
for mapnum, col in zip(maplist, schema):
if type(rrow[mapnum].value) == datetime:
crow.append(rrow[mapnum].value.strftime("%Y-%m-%d %H:%M"))
else :
crow.append(rrow[mapnum].value)
if rrow[mapnum].value == "":
nullcount[col.name] +=1
dupset.add(unicode(crow))
utfrow = list ()
for x in crow:
if type(x) == unicode :
utfrow.append(x.encode("utf8"))
else :
utfrow.append(x)
writer.writerow(utfrow)
counter += 1
evaluator = User.randomEvaluator()
parsedmodel = self.parsedclass(nth,duration_start,duration_end,csvwrite,counter, counter - len(dupset))
parsedmodel.submitterid = submitter.id
parsedmodel.evaluatorid = evaluator.id
self.taskrow.addUser(evaluator)
for col in schema :
setattr(parsedmodel,"null_" + col.name[4:] , nullcount[col.name] / (counter*1.0) )
self.parseds.append(parsedmodel)
session.commit()
return parsedmodel
def getInfoByUser(self,user):
data = self.dict()
data["nth"] = self.getNextnth (user)
return data
def getNextnth(self,user):
nth = session.query( func.max(self.parsedclass.nth)).filter(self.parsedclass.originalid == self.id).filter(self.parsedclass.submitterid == user.id).first()
if nth[0]:
return nth[0] +1
else :
return 1
class ParsedData (object):
def __init__(self,nth,duration_start,duration_end, csvfile, tuplenum,duplicatetuplenum):
self.nth = nth
self.duration_start = duration_start
self.duration_end = duration_end
self.file = csvfile.getvalue()
self.tuplenum = tuplenum
self.duplicatetuplenum = duplicatetuplenum
def parsecsv(self):
csvread = io.StringIO(self.file.decode("utf8"))
reader = csv.reader(utf_8_encoder(csvread), delimiter=',', quotechar="'")
parsedlist = list()
for row in reader:
tsmodel = self.taskclass(User.getUser(self.submitterid).name, self.id)
for (column, data) in zip(filter(lambda x: x.name[:3] == u"sch", inspect(self.taskclass).columns ), row):
if type(column.type) == INTEGER:
try :
setattr(tsmodel,column.name, int(data))
except :
setattr(tsmodel,column.name, None)
elif type(column.type) == DATETIME:
try :
setattr(tsmodel,column.name, datetime.strptime( data, "%Y-%m-%d %H:%M"))
except :
setattr(tsmodel,column.name, None)
else :
setattr(tsmodel,column.name, data)
parsedlist.append(tsmodel)
return parsedlist
def insertcsv(self):
if self.pnp != "Pass":
return False
session.bulk_save_objects(self.parsecsv())
session.commit()
return True
def dict(self):
return {
"id" : self.id,
"nth" : self.nth,
"tuplenum" : self.tuplenum,
"duplicatetuplenum" : self.duplicatetuplenum,
"duration_start" : self.duration_start.isoformat(),
"duration_end" : self.duration_end.isoformat(),
"status" : self.status,
"score" : self.score,
"pnp" : self.pnp,
"submitter" : User.getUser(self.submitterid).name,
"original" : self.original.name,
"evaluator": User.getUser(self.evaluatorid).name,
"nullratio" : self.nullInfo()
}
def evaluate(self, score,pnp):
self.status = "Evaluated"
self.score = 5 * score + 25 *( 1.0 - self.duplicatetuplenum/(self.tuplenum * 1.0) ) + 25 * (1.0 - sum(map(lambda x : x['ratio'] ,self.nullInfo()))/(len(self.nullInfo())*1.0))
self.pnp = pnp
session.commit()
def nullInfo(self):
nulllist = list()
for col in filter(lambda x: x.name[:4] == u"null", inspect(self.__class__).columns ):
nulllist.append(dict(ratio=getattr(self,col.name) ,name = col.name[5:] ))
return nulllist
class TaskData (object):
def __init__ (self,submittername, parsedid):
self.submittername = submittername
self.parsedid = parsedid
| DBP/models/instance.py | 6,647 | -*- coding: utf-8 -*- | 21 | en | 0.767281 |
# Copyright (C) 2017 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
#
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import shutil
import tempfile
import unittest
from devstack_local_conf import LocalConf
from collections import OrderedDict
class TestDevstackLocalConf(unittest.TestCase):
def setUp(self):
self.tmpdir = tempfile.mkdtemp()
def tearDown(self):
shutil.rmtree(self.tmpdir)
def test_plugins(self):
"Test that plugins without dependencies work"
localrc = {'test_localrc': '1'}
local_conf = {'install':
{'nova.conf':
{'main':
{'test_conf': '2'}}}}
services = {'cinder': True}
# We use ordereddict here to make sure the plugins are in the
# *wrong* order for testing.
plugins = OrderedDict([
('bar', 'git://git.openstack.org/openstack/bar-plugin'),
('foo', 'git://git.openstack.org/openstack/foo-plugin'),
('baz', 'git://git.openstack.org/openstack/baz-plugin'),
])
p = dict(localrc=localrc,
local_conf=local_conf,
base_services=[],
services=services,
plugins=plugins,
base_dir='./test',
path=os.path.join(self.tmpdir, 'test.local.conf'))
lc = LocalConf(p.get('localrc'),
p.get('local_conf'),
p.get('base_services'),
p.get('services'),
p.get('plugins'),
p.get('base_dir'),
p.get('projects'),
p.get('project'))
lc.write(p['path'])
plugins = []
with open(p['path']) as f:
for line in f:
if line.startswith('enable_plugin'):
plugins.append(line.split()[1])
self.assertEqual(['bar', 'baz', 'foo'], plugins)
def test_plugin_deps(self):
"Test that plugins with dependencies work"
os.makedirs(os.path.join(self.tmpdir, 'foo-plugin', 'devstack'))
os.makedirs(os.path.join(self.tmpdir, 'foo-plugin', '.git'))
os.makedirs(os.path.join(self.tmpdir, 'bar-plugin', 'devstack'))
os.makedirs(os.path.join(self.tmpdir, 'bar-plugin', '.git'))
with open(os.path.join(
self.tmpdir,
'foo-plugin', 'devstack', 'settings'), 'w') as f:
f.write('define_plugin foo\n')
with open(os.path.join(
self.tmpdir,
'bar-plugin', 'devstack', 'settings'), 'w') as f:
f.write('define_plugin bar\n')
f.write('plugin_requires bar foo\n')
localrc = {'test_localrc': '1'}
local_conf = {'install':
{'nova.conf':
{'main':
{'test_conf': '2'}}}}
services = {'cinder': True}
# We use ordereddict here to make sure the plugins are in the
# *wrong* order for testing.
plugins = OrderedDict([
('bar', 'git://git.openstack.org/openstack/bar-plugin'),
('foo', 'git://git.openstack.org/openstack/foo-plugin'),
])
p = dict(localrc=localrc,
local_conf=local_conf,
base_services=[],
services=services,
plugins=plugins,
base_dir=self.tmpdir,
path=os.path.join(self.tmpdir, 'test.local.conf'))
def test_libs_from_git(self):
"Test that LIBS_FROM_GIT is auto-generated"
projects = {
'git.openstack.org/openstack/nova': {
'required': True,
'short_name': 'nova',
},
'git.openstack.org/openstack/oslo.messaging': {
'required': True,
'short_name': 'oslo.messaging',
},
'git.openstack.org/openstack/devstack-plugin': {
'required': False,
'short_name': 'devstack-plugin',
},
}
project = {
'short_name': 'glance',
}
p = dict(base_services=[],
base_dir='./test',
path=os.path.join(self.tmpdir, 'test.local.conf'),
projects=projects,
project=project)
lc = LocalConf(p.get('localrc'),
p.get('local_conf'),
p.get('base_services'),
p.get('services'),
p.get('plugins'),
p.get('base_dir'),
p.get('projects'),
p.get('project'))
lc.write(p['path'])
lfg = None
with open(p['path']) as f:
for line in f:
if line.startswith('LIBS_FROM_GIT'):
lfg = line.strip().split('=')[1]
self.assertEqual('nova,oslo.messaging,glance', lfg)
def test_overridelibs_from_git(self):
"Test that LIBS_FROM_GIT can be overridden"
localrc = {'LIBS_FROM_GIT': 'oslo.db'}
projects = {
'git.openstack.org/openstack/nova': {
'required': True,
'short_name': 'nova',
},
'git.openstack.org/openstack/oslo.messaging': {
'required': True,
'short_name': 'oslo.messaging',
},
'git.openstack.org/openstack/devstack-plugin': {
'required': False,
'short_name': 'devstack-plugin',
},
}
p = dict(localrc=localrc,
base_services=[],
base_dir='./test',
path=os.path.join(self.tmpdir, 'test.local.conf'),
projects=projects)
lc = LocalConf(p.get('localrc'),
p.get('local_conf'),
p.get('base_services'),
p.get('services'),
p.get('plugins'),
p.get('base_dir'),
p.get('projects'),
p.get('project'))
lc.write(p['path'])
lfg = None
with open(p['path']) as f:
for line in f:
if line.startswith('LIBS_FROM_GIT'):
lfg = line.strip().split('=')[1]
self.assertEqual('oslo.db', lfg)
def test_plugin_circular_deps(self):
"Test that plugins with circular dependencies fail"
os.makedirs(os.path.join(self.tmpdir, 'foo-plugin', 'devstack'))
os.makedirs(os.path.join(self.tmpdir, 'foo-plugin', '.git'))
os.makedirs(os.path.join(self.tmpdir, 'bar-plugin', 'devstack'))
os.makedirs(os.path.join(self.tmpdir, 'bar-plugin', '.git'))
with open(os.path.join(
self.tmpdir,
'foo-plugin', 'devstack', 'settings'), 'w') as f:
f.write('define_plugin foo\n')
f.write('plugin_requires foo bar\n')
with open(os.path.join(
self.tmpdir,
'bar-plugin', 'devstack', 'settings'), 'w') as f:
f.write('define_plugin bar\n')
f.write('plugin_requires bar foo\n')
localrc = {'test_localrc': '1'}
local_conf = {'install':
{'nova.conf':
{'main':
{'test_conf': '2'}}}}
services = {'cinder': True}
# We use ordereddict here to make sure the plugins are in the
# *wrong* order for testing.
plugins = OrderedDict([
('bar', 'git://git.openstack.org/openstack/bar-plugin'),
('foo', 'git://git.openstack.org/openstack/foo-plugin'),
])
p = dict(localrc=localrc,
local_conf=local_conf,
base_services=[],
services=services,
plugins=plugins,
base_dir=self.tmpdir,
path=os.path.join(self.tmpdir, 'test.local.conf'))
with self.assertRaises(Exception):
lc = LocalConf(p.get('localrc'),
p.get('local_conf'),
p.get('base_services'),
p.get('services'),
p.get('plugins'),
p.get('base_dir'))
lc.write(p['path'])
if __name__ == '__main__':
unittest.main()
| roles/write-devstack-local-conf/library/test.py | 8,942 | Test that LIBS_FROM_GIT is auto-generated
Test that LIBS_FROM_GIT can be overridden
Test that plugins with circular dependencies fail
Test that plugins with dependencies work
Test that plugins without dependencies work
Copyright (C) 2017 Red Hat, Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. We use ordereddict here to make sure the plugins are in the *wrong* order for testing. We use ordereddict here to make sure the plugins are in the *wrong* order for testing. We use ordereddict here to make sure the plugins are in the *wrong* order for testing. | 1,035 | en | 0.876369 |
# Copyright 2018 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from datetime import datetime
from kubernetes import client as k8s_client
from kubernetes import config
import time
import logging
import re
from .. import dsl
class K8sHelper(object):
""" Kubernetes Helper """
def __init__(self):
if not self._configure_k8s():
raise Exception('K8sHelper __init__ failure')
def _configure_k8s(self):
try:
config.load_kube_config()
logging.info('Found local kubernetes config. Initialized with kube_config.')
except:
logging.info('Cannot Find local kubernetes config. Trying in-cluster config.')
config.load_incluster_config()
logging.info('Initialized with in-cluster config.')
self._api_client = k8s_client.ApiClient()
self._corev1 = k8s_client.CoreV1Api(self._api_client)
return True
def _create_k8s_job(self, yaml_spec):
""" _create_k8s_job creates a kubernetes job based on the yaml spec """
pod = k8s_client.V1Pod(metadata=k8s_client.V1ObjectMeta(generate_name=yaml_spec['metadata']['generateName']))
container = k8s_client.V1Container(name = yaml_spec['spec']['containers'][0]['name'],
image = yaml_spec['spec']['containers'][0]['image'],
args = yaml_spec['spec']['containers'][0]['args'],
volume_mounts = [k8s_client.V1VolumeMount(
name=yaml_spec['spec']['containers'][0]['volumeMounts'][0]['name'],
mount_path=yaml_spec['spec']['containers'][0]['volumeMounts'][0]['mountPath'],
)],
env = [k8s_client.V1EnvVar(
name=yaml_spec['spec']['containers'][0]['env'][0]['name'],
value=yaml_spec['spec']['containers'][0]['env'][0]['value'],
)])
pod.spec = k8s_client.V1PodSpec(restart_policy=yaml_spec['spec']['restartPolicy'],
containers = [container],
service_account_name=yaml_spec['spec']['serviceAccountName'],
volumes=[k8s_client.V1Volume(
name=yaml_spec['spec']['volumes'][0]['name'],
secret=k8s_client.V1SecretVolumeSource(
secret_name=yaml_spec['spec']['volumes'][0]['secret']['secretName'],
)
)])
try:
api_response = self._corev1.create_namespaced_pod(yaml_spec['metadata']['namespace'], pod)
return api_response.metadata.name, True
except k8s_client.rest.ApiException as e:
logging.exception("Exception when calling CoreV1Api->create_namespaced_pod: {}\n".format(str(e)))
return '', False
def _wait_for_k8s_job(self, pod_name, yaml_spec, timeout):
""" _wait_for_k8s_job waits for the job to complete """
status = 'running'
start_time = datetime.now()
while status in ['pending', 'running']:
# Pod pending values: https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/V1PodStatus.md
try:
api_response = self._corev1.read_namespaced_pod(pod_name, yaml_spec['metadata']['namespace'])
status = api_response.status.phase.lower()
time.sleep(5)
elapsed_time = (datetime.now() - start_time).seconds
logging.info('{} seconds: waiting for job to complete'.format(elapsed_time))
if elapsed_time > timeout:
logging.info('Kubernetes job timeout')
return False
except k8s_client.rest.ApiException as e:
logging.exception('Exception when calling CoreV1Api->read_namespaced_pod: {}\n'.format(str(e)))
return False
return status == 'succeeded'
def _delete_k8s_job(self, pod_name, yaml_spec):
""" _delete_k8s_job deletes a pod """
try:
api_response = self._corev1.delete_namespaced_pod(pod_name, yaml_spec['metadata']['namespace'], body=k8s_client.V1DeleteOptions())
except k8s_client.rest.ApiException as e:
logging.exception('Exception when calling CoreV1Api->delete_namespaced_pod: {}\n'.format(str(e)))
def _read_pod_log(self, pod_name, yaml_spec):
try:
api_response = self._corev1.read_namespaced_pod_log(pod_name, yaml_spec['metadata']['namespace'])
except k8s_client.rest.ApiException as e:
logging.exception('Exception when calling CoreV1Api->read_namespaced_pod_log: {}\n'.format(str(e)))
return False
return api_response
def run_job(self, yaml_spec, timeout=600):
""" run_job runs a kubernetes job and clean up afterwards """
pod_name, succ = self._create_k8s_job(yaml_spec)
if not succ:
return False
# timeout in seconds
succ = self._wait_for_k8s_job(pod_name, yaml_spec, timeout)
if not succ:
logging.info('Kubernetes job failed.')
return False
#TODO: investigate the read log error
# print(self._read_pod_log(pod_name, yaml_spec))
self._delete_k8s_job(pod_name, yaml_spec)
return succ
@staticmethod
def sanitize_k8s_name(name):
"""From _make_kubernetes_name
sanitize_k8s_name cleans and converts the names in the workflow.
"""
return re.sub('-+', '-', re.sub('[^-0-9a-z]+', '-', name.lower())).lstrip('-').rstrip('-')
@staticmethod
def convert_k8s_obj_to_json(k8s_obj):
"""
Builds a JSON K8s object.
If obj is None, return None.
If obj is str, int, long, float, bool, return directly.
If obj is datetime.datetime, datetime.date
convert to string in iso8601 format.
If obj is list, sanitize each element in the list.
If obj is dict, return the dict.
If obj is swagger model, return the properties dict.
Args:
obj: The data to serialize.
Returns: The serialized form of data.
"""
from six import text_type, integer_types, iteritems
PRIMITIVE_TYPES = (float, bool, bytes, text_type) + integer_types
from datetime import date, datetime
if k8s_obj is None:
return None
elif isinstance(k8s_obj, PRIMITIVE_TYPES):
return k8s_obj
elif isinstance(k8s_obj, list):
return [K8sHelper.convert_k8s_obj_to_json(sub_obj)
for sub_obj in k8s_obj]
elif isinstance(k8s_obj, tuple):
return tuple(K8sHelper.convert_k8s_obj_to_json(sub_obj)
for sub_obj in k8s_obj)
elif isinstance(k8s_obj, (datetime, date)):
return k8s_obj.isoformat()
elif isinstance(k8s_obj, dsl.PipelineParam):
if isinstance(k8s_obj.value, str):
return k8s_obj.value
return '{{inputs.parameters.%s}}' % k8s_obj.full_name
if isinstance(k8s_obj, dict):
obj_dict = k8s_obj
else:
# Convert model obj to dict except
# attributes `swagger_types`, `attribute_map`
# and attributes which value is not None.
# Convert attribute name to json key in
# model definition for request.
obj_dict = {k8s_obj.attribute_map[attr]: getattr(k8s_obj, attr)
for attr, _ in iteritems(k8s_obj.swagger_types)
if getattr(k8s_obj, attr) is not None}
return {key: K8sHelper.convert_k8s_obj_to_json(val)
for key, val in iteritems(obj_dict)} | sdk/python/kfp/compiler/_k8s_helper.py | 7,990 | Kubernetes Helper
_create_k8s_job creates a kubernetes job based on the yaml spec
_delete_k8s_job deletes a pod
_wait_for_k8s_job waits for the job to complete
Builds a JSON K8s object.
If obj is None, return None.
If obj is str, int, long, float, bool, return directly.
If obj is datetime.datetime, datetime.date
convert to string in iso8601 format.
If obj is list, sanitize each element in the list.
If obj is dict, return the dict.
If obj is swagger model, return the properties dict.
Args:
obj: The data to serialize.
Returns: The serialized form of data.
run_job runs a kubernetes job and clean up afterwards
From _make_kubernetes_name
sanitize_k8s_name cleans and converts the names in the workflow.
Copyright 2018 Google LLC Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Pod pending values: https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/V1PodStatus.md timeout in secondsTODO: investigate the read log error print(self._read_pod_log(pod_name, yaml_spec)) Convert model obj to dict except attributes `swagger_types`, `attribute_map` and attributes which value is not None. Convert attribute name to json key in model definition for request. | 1,663 | en | 0.753673 |
"""Webhook tests for mobile_app."""
import logging
import pytest
from homeassistant.components.camera import SUPPORT_STREAM as CAMERA_SUPPORT_STREAM
from homeassistant.components.mobile_app.const import CONF_SECRET
from homeassistant.components.zone import DOMAIN as ZONE_DOMAIN
from homeassistant.const import CONF_WEBHOOK_ID
from homeassistant.core import callback
from homeassistant.exceptions import HomeAssistantError
from homeassistant.setup import async_setup_component
from .const import CALL_SERVICE, FIRE_EVENT, REGISTER_CLEARTEXT, RENDER_TEMPLATE, UPDATE
from tests.async_mock import patch
from tests.common import async_mock_service
_LOGGER = logging.getLogger(__name__)
def encrypt_payload(secret_key, payload):
"""Return a encrypted payload given a key and dictionary of data."""
try:
from nacl.secret import SecretBox
from nacl.encoding import Base64Encoder
except (ImportError, OSError):
pytest.skip("libnacl/libsodium is not installed")
return
import json
keylen = SecretBox.KEY_SIZE
prepped_key = secret_key.encode("utf-8")
prepped_key = prepped_key[:keylen]
prepped_key = prepped_key.ljust(keylen, b"\0")
payload = json.dumps(payload).encode("utf-8")
return (
SecretBox(prepped_key).encrypt(payload, encoder=Base64Encoder).decode("utf-8")
)
def decrypt_payload(secret_key, encrypted_data):
"""Return a decrypted payload given a key and a string of encrypted data."""
try:
from nacl.secret import SecretBox
from nacl.encoding import Base64Encoder
except (ImportError, OSError):
pytest.skip("libnacl/libsodium is not installed")
return
import json
keylen = SecretBox.KEY_SIZE
prepped_key = secret_key.encode("utf-8")
prepped_key = prepped_key[:keylen]
prepped_key = prepped_key.ljust(keylen, b"\0")
decrypted_data = SecretBox(prepped_key).decrypt(
encrypted_data, encoder=Base64Encoder
)
decrypted_data = decrypted_data.decode("utf-8")
return json.loads(decrypted_data)
async def test_webhook_handle_render_template(create_registrations, webhook_client):
"""Test that we render templates properly."""
resp = await webhook_client.post(
"/api/webhook/{}".format(create_registrations[1]["webhook_id"]),
json=RENDER_TEMPLATE,
)
assert resp.status == 200
json = await resp.json()
assert json == {"one": "Hello world"}
async def test_webhook_handle_call_services(hass, create_registrations, webhook_client):
"""Test that we call services properly."""
calls = async_mock_service(hass, "test", "mobile_app")
resp = await webhook_client.post(
"/api/webhook/{}".format(create_registrations[1]["webhook_id"]),
json=CALL_SERVICE,
)
assert resp.status == 200
assert len(calls) == 1
async def test_webhook_handle_fire_event(hass, create_registrations, webhook_client):
"""Test that we can fire events."""
events = []
@callback
def store_event(event):
"""Helepr to store events."""
events.append(event)
hass.bus.async_listen("test_event", store_event)
resp = await webhook_client.post(
"/api/webhook/{}".format(create_registrations[1]["webhook_id"]), json=FIRE_EVENT
)
assert resp.status == 200
json = await resp.json()
assert json == {}
assert len(events) == 1
assert events[0].data["hello"] == "yo world"
async def test_webhook_update_registration(webhook_client, authed_api_client):
"""Test that a we can update an existing registration via webhook."""
register_resp = await authed_api_client.post(
"/api/mobile_app/registrations", json=REGISTER_CLEARTEXT
)
assert register_resp.status == 201
register_json = await register_resp.json()
webhook_id = register_json[CONF_WEBHOOK_ID]
update_container = {"type": "update_registration", "data": UPDATE}
update_resp = await webhook_client.post(
f"/api/webhook/{webhook_id}", json=update_container
)
assert update_resp.status == 200
update_json = await update_resp.json()
assert update_json["app_version"] == "2.0.0"
assert CONF_WEBHOOK_ID not in update_json
assert CONF_SECRET not in update_json
async def test_webhook_handle_get_zones(hass, create_registrations, webhook_client):
"""Test that we can get zones properly."""
await async_setup_component(
hass, ZONE_DOMAIN, {ZONE_DOMAIN: {}},
)
resp = await webhook_client.post(
"/api/webhook/{}".format(create_registrations[1]["webhook_id"]),
json={"type": "get_zones"},
)
assert resp.status == 200
json = await resp.json()
assert len(json) == 1
zones = sorted(json, key=lambda entry: entry["entity_id"])
assert zones[0]["entity_id"] == "zone.home"
async def test_webhook_handle_get_config(hass, create_registrations, webhook_client):
"""Test that we can get config properly."""
resp = await webhook_client.post(
"/api/webhook/{}".format(create_registrations[1]["webhook_id"]),
json={"type": "get_config"},
)
assert resp.status == 200
json = await resp.json()
if "components" in json:
json["components"] = set(json["components"])
if "whitelist_external_dirs" in json:
json["whitelist_external_dirs"] = set(json["whitelist_external_dirs"])
hass_config = hass.config.as_dict()
expected_dict = {
"latitude": hass_config["latitude"],
"longitude": hass_config["longitude"],
"elevation": hass_config["elevation"],
"unit_system": hass_config["unit_system"],
"location_name": hass_config["location_name"],
"time_zone": hass_config["time_zone"],
"components": hass_config["components"],
"version": hass_config["version"],
"theme_color": "#03A9F4", # Default frontend theme color
}
assert expected_dict == json
async def test_webhook_returns_error_incorrect_json(
webhook_client, create_registrations, caplog
):
"""Test that an error is returned when JSON is invalid."""
resp = await webhook_client.post(
"/api/webhook/{}".format(create_registrations[1]["webhook_id"]), data="not json"
)
assert resp.status == 400
json = await resp.json()
assert json == {}
assert "invalid JSON" in caplog.text
async def test_webhook_handle_decryption(webhook_client, create_registrations):
"""Test that we can encrypt/decrypt properly."""
key = create_registrations[0]["secret"]
data = encrypt_payload(key, RENDER_TEMPLATE["data"])
container = {"type": "render_template", "encrypted": True, "encrypted_data": data}
resp = await webhook_client.post(
"/api/webhook/{}".format(create_registrations[0]["webhook_id"]), json=container
)
assert resp.status == 200
webhook_json = await resp.json()
assert "encrypted_data" in webhook_json
decrypted_data = decrypt_payload(key, webhook_json["encrypted_data"])
assert decrypted_data == {"one": "Hello world"}
async def test_webhook_requires_encryption(webhook_client, create_registrations):
"""Test that encrypted registrations only accept encrypted data."""
resp = await webhook_client.post(
"/api/webhook/{}".format(create_registrations[0]["webhook_id"]),
json=RENDER_TEMPLATE,
)
assert resp.status == 400
webhook_json = await resp.json()
assert "error" in webhook_json
assert webhook_json["success"] is False
assert webhook_json["error"]["code"] == "encryption_required"
async def test_webhook_update_location(hass, webhook_client, create_registrations):
"""Test that location can be updated."""
resp = await webhook_client.post(
"/api/webhook/{}".format(create_registrations[1]["webhook_id"]),
json={
"type": "update_location",
"data": {"gps": [1, 2], "gps_accuracy": 10, "altitude": -10},
},
)
assert resp.status == 200
state = hass.states.get("device_tracker.test_1_2")
assert state is not None
assert state.attributes["latitude"] == 1.0
assert state.attributes["longitude"] == 2.0
assert state.attributes["gps_accuracy"] == 10
assert state.attributes["altitude"] == -10
async def test_webhook_enable_encryption(hass, webhook_client, create_registrations):
"""Test that encryption can be added to a reg initially created without."""
webhook_id = create_registrations[1]["webhook_id"]
enable_enc_resp = await webhook_client.post(
f"/api/webhook/{webhook_id}", json={"type": "enable_encryption"},
)
assert enable_enc_resp.status == 200
enable_enc_json = await enable_enc_resp.json()
assert len(enable_enc_json) == 1
assert CONF_SECRET in enable_enc_json
key = enable_enc_json["secret"]
enc_required_resp = await webhook_client.post(
f"/api/webhook/{webhook_id}", json=RENDER_TEMPLATE,
)
assert enc_required_resp.status == 400
enc_required_json = await enc_required_resp.json()
assert "error" in enc_required_json
assert enc_required_json["success"] is False
assert enc_required_json["error"]["code"] == "encryption_required"
enc_data = encrypt_payload(key, RENDER_TEMPLATE["data"])
container = {
"type": "render_template",
"encrypted": True,
"encrypted_data": enc_data,
}
enc_resp = await webhook_client.post(f"/api/webhook/{webhook_id}", json=container)
assert enc_resp.status == 200
enc_json = await enc_resp.json()
assert "encrypted_data" in enc_json
decrypted_data = decrypt_payload(key, enc_json["encrypted_data"])
assert decrypted_data == {"one": "Hello world"}
async def test_webhook_camera_stream_non_existent(
hass, create_registrations, webhook_client
):
"""Test fetching camera stream URLs for a non-existent camera."""
webhook_id = create_registrations[1]["webhook_id"]
resp = await webhook_client.post(
f"/api/webhook/{webhook_id}",
json={
"type": "stream_camera",
"data": {"camera_entity_id": "camera.doesnt_exist"},
},
)
assert resp.status == 400
webhook_json = await resp.json()
assert webhook_json["success"] is False
async def test_webhook_camera_stream_non_hls(
hass, create_registrations, webhook_client
):
"""Test fetching camera stream URLs for a non-HLS/stream-supporting camera."""
hass.states.async_set("camera.non_stream_camera", "idle", {"supported_features": 0})
webhook_id = create_registrations[1]["webhook_id"]
resp = await webhook_client.post(
f"/api/webhook/{webhook_id}",
json={
"type": "stream_camera",
"data": {"camera_entity_id": "camera.non_stream_camera"},
},
)
assert resp.status == 200
webhook_json = await resp.json()
assert webhook_json["hls_path"] is None
assert (
webhook_json["mjpeg_path"]
== "/api/camera_proxy_stream/camera.non_stream_camera"
)
async def test_webhook_camera_stream_stream_available(
hass, create_registrations, webhook_client
):
"""Test fetching camera stream URLs for an HLS/stream-supporting camera."""
hass.states.async_set(
"camera.stream_camera", "idle", {"supported_features": CAMERA_SUPPORT_STREAM}
)
webhook_id = create_registrations[1]["webhook_id"]
with patch(
"homeassistant.components.camera.async_request_stream",
return_value="/api/streams/some_hls_stream",
):
resp = await webhook_client.post(
f"/api/webhook/{webhook_id}",
json={
"type": "stream_camera",
"data": {"camera_entity_id": "camera.stream_camera"},
},
)
assert resp.status == 200
webhook_json = await resp.json()
assert webhook_json["hls_path"] == "/api/streams/some_hls_stream"
assert webhook_json["mjpeg_path"] == "/api/camera_proxy_stream/camera.stream_camera"
async def test_webhook_camera_stream_stream_available_but_errors(
hass, create_registrations, webhook_client
):
"""Test fetching camera stream URLs for an HLS/stream-supporting camera but that streaming errors."""
hass.states.async_set(
"camera.stream_camera", "idle", {"supported_features": CAMERA_SUPPORT_STREAM}
)
webhook_id = create_registrations[1]["webhook_id"]
with patch(
"homeassistant.components.camera.async_request_stream",
side_effect=HomeAssistantError(),
):
resp = await webhook_client.post(
f"/api/webhook/{webhook_id}",
json={
"type": "stream_camera",
"data": {"camera_entity_id": "camera.stream_camera"},
},
)
assert resp.status == 200
webhook_json = await resp.json()
assert webhook_json["hls_path"] is None
assert webhook_json["mjpeg_path"] == "/api/camera_proxy_stream/camera.stream_camera"
| tests/components/mobile_app/test_webhook.py | 13,006 | Return a decrypted payload given a key and a string of encrypted data.
Return a encrypted payload given a key and dictionary of data.
Helepr to store events.
Webhook tests for mobile_app.
Default frontend theme color | 218 | en | 0.648925 |
# (C) Copyright 2016 Hewlett Packard Enterprise Development LP
# Copyright 2016 FUJITSU LIMITED
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import monascastatsd
from oslo_config import cfg
from oslo_log import log
from monasca_notification.common.repositories import exceptions
from monasca_notification.notification import Notification
LOG = log.getLogger(__name__)
CONF = cfg.CONF
NOTIFICATION_DIMENSIONS = {'service': 'monitoring',
'component': 'monasca-notification'}
def get_db_repo():
repo_driver = CONF.database.repo_driver
LOG.debug('Enabling the %s RDB repository', repo_driver)
return repo_driver(CONF)
def construct_notification_object(db_repo, notification_json):
try:
notification = Notification(notification_json['id'],
notification_json['type'],
notification_json['name'],
notification_json['address'],
notification_json['period'],
notification_json['retry_count'],
notification_json['raw_alarm'])
# Grab notification method from database to see if it was changed
stored_notification = grab_stored_notification_method(db_repo, notification.id)
# Notification method was deleted
if stored_notification is None:
LOG.debug("Notification method {0} was deleted from database. "
"Will stop sending.".format(notification.id))
return None
# Update notification method with most up to date values
else:
notification.name = stored_notification[0]
notification.type = stored_notification[1]
notification.address = stored_notification[2]
notification.period = stored_notification[3]
return notification
except exceptions.DatabaseException:
LOG.warn("Error querying mysql for notification method. "
"Using currently cached method.")
return notification
except Exception as e:
LOG.warn("Error when attempting to construct notification {0}".format(e))
return None
def grab_stored_notification_method(db_repo, notification_id):
try:
stored_notification = db_repo.get_notification(notification_id)
except exceptions.DatabaseException:
LOG.debug('Database Error. Attempting reconnect')
stored_notification = db_repo.get_notification(notification_id)
return stored_notification
def get_statsd_client(dimensions=None):
local_dims = dimensions.copy() if dimensions else {}
local_dims.update(NOTIFICATION_DIMENSIONS)
if CONF.statsd.enable:
LOG.debug("Stablishing connection with statsd on {0}:{1}"
.format(CONF.statsd.host, CONF.statsd.port))
client = monascastatsd.Client(name='monasca',
host=CONF.statsd.host,
port=CONF.statsd.port,
dimensions=local_dims)
else:
LOG.debug("Overriding monascastatsd.Client to use it offline")
client = OfflineClient(name='monasca',
host=CONF.statsd.host,
port=CONF.statsd.port,
dimensions=local_dims)
return client
class OfflineClient(monascastatsd.Client):
def _set_connection(self, connection, host, port):
if connection is None:
self.connection = OfflineConnection(host=host,
port=port,
max_buffer_size=self._max_buffer_size)
else:
self.connection = connection
class OfflineConnection(monascastatsd.Connection):
def __init__(self, host='localhost', port=8125, max_buffer_size=50):
"""Initialize an Offline Connection object.
>>> monascastatsd = MonascaStatsd()
:name: the name for this client. Everything sent by this client
will be prefixed by name
:param host: the host of the MonascaStatsd server.
:param port: the port of the MonascaStatsd server.
:param max_buffer_size: Maximum number of metric to buffer before
sending to the server if sending metrics in batch
"""
self.max_buffer_size = max_buffer_size
self._send = self._send_to_server
self.connect(host, port)
self.encoding = 'utf-8'
def connect(self, host, port):
"""Avoid to connect to the monascastatsd server.
"""
pass
def _send_to_server(self, packet):
pass
| monasca_notification/common/utils.py | 5,281 | Initialize an Offline Connection object.
>>> monascastatsd = MonascaStatsd()
:name: the name for this client. Everything sent by this client
will be prefixed by name
:param host: the host of the MonascaStatsd server.
:param port: the port of the MonascaStatsd server.
:param max_buffer_size: Maximum number of metric to buffer before
sending to the server if sending metrics in batch
Avoid to connect to the monascastatsd server.
(C) Copyright 2016 Hewlett Packard Enterprise Development LP Copyright 2016 FUJITSU LIMITED Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Grab notification method from database to see if it was changed Notification method was deleted Update notification method with most up to date values | 1,210 | en | 0.854416 |
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
__author__ = "Christian Heider Nielsen"
__doc__ = r"""
"""
import h5py
import torch
import torch.utils
import torch.utils.data
from .h5_mnist_data import download_binary_mnist
def load_binary_mnist(cfg, **kwcfg):
fname = cfg.data_dir / "binary_mnist.h5"
if not fname.exists():
print("Downloading binary MNIST data...")
download_binary_mnist(fname)
f = h5py.File(str(fname), "r")
x_train = f["train"][::]
x_val = f["valid"][::]
x_test = f["test"][::]
train = torch.utils.data.TensorDataset(torch.from_numpy(x_train))
train_loader = torch.utils.data.DataLoader(
train, batch_size=cfg.batch_size, shuffle=True, **kwcfg
)
validation = torch.utils.data.TensorDataset(torch.from_numpy(x_val))
val_loader = torch.utils.data.DataLoader(
validation, batch_size=cfg.test_batch_size, shuffle=False
)
test = torch.utils.data.TensorDataset(torch.from_numpy(x_test))
test_loader = torch.utils.data.DataLoader(
test, batch_size=cfg.test_batch_size, shuffle=False
)
return train_loader, val_loader, test_loader
| samples/regression/vae/flow/data_loader.py | 1,163 | !/usr/bin/env python3 -*- coding: utf-8 -*- | 43 | fr | 0.304089 |
import json
import os
# set working directory
def gen_json(t):
print(os.getcwd())
# read log file
with open('Screenshots/Screenshoot_meta.txt', 'r') as f:
log = f.read()
data = {"camera_angle_x": 0.6911112070083618}
frames = []
line_cnt = 0
for line in log.split('\n'):
try:
record = {"file_path": "{}}_{:04d}".format(t,
line_cnt), "rotation": 4.0, "transform_matrix": eval(line)}
except:
pass
frames.append(record)
line_cnt += 1
data["frames"] = frames
data_json = json.dumps(data)
with open('Screenshots/Screenshoot_meta.json', 'w') as ff:
ff.write(data_json)
# %%
# %%
| my_src/meta_json.py | 802 | set working directory read log file %% %% | 41 | en | 0.767718 |
# This program and the accompanying materials are made available under the
# terms of the Mozilla Public License v2.0 which accompanies this distribution,
# and is available at https://www.mozilla.org/en-US/MPL/2.0/
from pyincore.utils.cgeoutputprocess import CGEOutputProcess
import os
PYINCOREPATH = "path-to-pyincore"
TESTAPATH = "pyincore/tests/pyincore/analyses/"
def run_convert_cge_json():
# run the JoplinCGE analysis first to get results, csv files
cge_json = CGEOutputProcess()
filepath = os.path.join(PYINCOREPATH, TESTAPATH, "joplincge")
cge_json.get_cge_household_count(None,
os.path.join(filepath, "joplin-pop-disl-results.csv"),
"cge_total_household_count.json")
cge_json.get_cge_gross_income(None,
os.path.join(filepath, "gross-income.csv"),
"cge_total_household_income.json")
cge_json.get_cge_employment(None, None,
os.path.join(filepath, "pre-disaster-factor-demand.csv"),
os.path.join(filepath, "post-disaster-factor-demand.csv"),
"cge_employment.json")
cge_json.get_cge_domestic_supply(None,
os.path.join(filepath, "domestic-supply.csv"),
"cge_domestic_supply.json")
return True
if __name__ == '__main__':
run_convert_cge_json()
| tests/pyincore/utils/test_csvoutputjson.py | 1,503 | This program and the accompanying materials are made available under the terms of the Mozilla Public License v2.0 which accompanies this distribution, and is available at https://www.mozilla.org/en-US/MPL/2.0/ run the JoplinCGE analysis first to get results, csv files | 268 | en | 0.875409 |
import numpy as np
import matplotlib.pyplot as plt
import time
from IPython import display
# Implemented methods
methods = ['DynProg', 'ValIter'];
# Some colours
LIGHT_RED = '#FFC4CC';
LIGHT_GREEN = '#95FD99';
BLACK = '#000000';
WHITE = '#FFFFFF';
LIGHT_PURPLE = '#E8D0FF';
LIGHT_ORANGE = '#FAE0C3';
SEB_GREEN = '#52B92C';
BUSTED_BLUE = '#5993B5'
class RobbingBanks:
# Actions
STAY = 0
MOVE_LEFT = 1
MOVE_RIGHT = 2
MOVE_UP = 3
MOVE_DOWN = 4
# Give names to actions
actions_names = {
STAY: "stay",
MOVE_LEFT: "move left",
MOVE_RIGHT: "move right",
MOVE_UP: "move up",
MOVE_DOWN: "move down"
}
# Reward values
def __init__(self, town_map):
""" Constructor of the environment town_map.
"""
self.STEP_REWARD = 0
self.BANK_REWARD = 10
self.CAUGHT_REWARD = -50
self.town_map = town_map;
self.initial_state = np.array([0,0,1,2])
self.actions = self.__actions();
self.states, self.map = self.__states();
self.n_actions = len(self.actions);
self.n_states = len(self.states);
self.transition_probabilities = self.__transitions();
self.rewards = self.__rewards();
def __actions(self):
actions = dict();
actions[self.STAY] = np.array([0, 0]);
actions[self.MOVE_LEFT] = np.array([0,-1]);
actions[self.MOVE_RIGHT] = np.array([0, 1]);
actions[self.MOVE_UP] = np.array([-1,0]);
actions[self.MOVE_DOWN] = np.array([1,0]);
return actions;
def __states(self):
states = dict();
states_vec = dict();
s = 0;
for i in range(self.town_map.shape[0]):
for j in range(self.town_map.shape[1]):
for k in range(self.town_map.shape[0]):
for l in range(self.town_map.shape[1]):
states[s] = np.array([i,j,k,l]);
states_vec[(i,j,k,l)] = s;
s += 1;
return states, states_vec
def __move(self, state, action):
""" Makes a step in the town_map, given a current position and an action.
If the action STAY or an inadmissible action is used, the robber stays in place.
:return integer next_cell corresponding to position (x,y) x (x,y) on the town_map that agent transitions to.
"""
# Compute the future position given current (state, action)
row = self.states[state][0] + self.actions[action][0];
col = self.states[state][1] + self.actions[action][1];
# Is the future position an impossible one ?
hitting_town_walls = (row == -1) or (row == self.town_map.shape[0]) or \
(col == -1) or (col == self.town_map.shape[1])
# Based on the impossiblity check return the next state.
list_police_pos = self.__police_positions(state)
new_police_pos = list_police_pos[np.random.randint(len(list_police_pos))]
#caught = (row, col) == (new_police_pos[0], new_police_pos[1])
caught = all(self.states[state][0:2] == self.states[state][2:])
if caught:
return self.map[tuple(self.initial_state)];
#Hot take: If you "unintentionally" hit the wall, the result should be that you (and the police) stay in place since it's not a "deliberate" move
elif hitting_town_walls:
return state
else:
return self.map[(row, col, new_police_pos[0], new_police_pos[1])];
def __police_positions(self, state):
"""
Input: The state as an int
Returns: A list of possible new minotaur positions from current state
"""
agent_pos = self.states[state][0:2]
police_pos = self.states[state][2:]
diff_pos = np.sign(agent_pos - police_pos)
list_pos = [[1,0], [-1,0], [0, diff_pos[1]]] if diff_pos[0] == 0 else [[0,1], [0,-1], [diff_pos[0],0]] if diff_pos[1] == 0 else [[0,diff_pos[1]], [diff_pos[0],0]]
list_pos += police_pos
list_pos = list(filter(None,[tuple(pos)*(0<=pos[0]<self.town_map.shape[0] and 0<=pos[1]<self.town_map.shape[1]) for pos in list_pos]))
return list_pos
def __transitions(self):
""" Computes the transition probabilities for every state action pair.
:return numpy.tensor transition probabilities: tensor of transition
probabilities of dimension S*S*A
"""
# Initialize the transition probailities tensor (S,S,A)
dimensions = (self.n_states,self.n_states,self.n_actions);
transition_probabilities = np.zeros(dimensions);
# Compute the transition probabilities. Note that the transitions
# are deterministic.
for s in range(self.n_states):
#if we are in the same position as the police, we return to initial
if (self.states[s][0],self.states[s][1])==(self.states[s][2],self.states[s][3]):
transition_probabilities[self.initial_state, s, :] = 1/3
else:
for a in range(self.n_actions):
list_pos = self.__police_positions(s) #police positions
for police_pos in list_pos:
next_s = self.__move(s,a);
new_pos = np.copy(self.states[next_s])
new_pos[2:] = police_pos
next_s = self.map[tuple(new_pos)]
transition_probabilities[next_s, s, a] = 1/len(list_pos);
return transition_probabilities;
def __rewards(self):
rewards = np.zeros((self.n_states, self.n_actions));
# rewards[i,j,k] = r(s' | s, a): tensor of rewards of dimension S x S x A
for s in range(self.n_states):
list_pos = self.__police_positions(s)
for a in range(self.n_actions):
next_s = self.__move(s,a);
#if we can get caught in the next move
if (tuple(self.states[next_s][0:2]) in list_pos):
#if our next position is not a bank
if self.town_map[tuple(self.states[next_s][0:2])] != 1:
rewards[s,a] = self.CAUGHT_REWARD/len(list_pos)
#if our next position is a bank
if self.town_map[tuple(self.states[next_s][0:2])] == 1:
rewards[s,a] = self.CAUGHT_REWARD/len(list_pos) + (len(list_pos)-1)*self.BANK_REWARD/len(list_pos)
#if we cannot get caught in the next move
else:
#reward for standing in a bank
if self.town_map[tuple(self.states[next_s][0:2])] == 1:
rewards[s,a] = self.BANK_REWARD
# list_pos = self.__police_positions(s)
# for a in range(self.n_actions):
# next_s = self.__move(s,a);
return rewards;
def simulate(self,policy):
path = list();
# Initialize current state, next state and time
t = 1;
s = self.map[tuple(self.initial_state)];
# Add the starting position in the town_map to the path
path.append(self.initial_state);
# Move to next state given the policy and the current state
next_s = self.__move(s,policy[s]);
# Add the position in the town_map corresponding to the next state
# to the pygame.freetype.path
path.append(self.states[next_s]);
# Loop while state is not the goal state
T = 40
while t<T:
# Update state
s = next_s;
# Move to next state given the policy and the current state
next_s = self.__move(s,policy[s]);
# Add the position in the town_map corresponding to the next state
# to the path
path.append(self.states[next_s])
# Update time and state for next iteration
t +=1;
return path
def show(self):
print('The states are :')
print(self.states)
print('The actions are:')
print(self.actions)
print('The mapping of the states:')
print(self.map)
print('The rewards:')
print(self.rewards)
def value_iteration(env, gamma, epsilon):
""" Solves the shortest path problem using value iteration
:input town_map env : The town_map environment in which we seek to
find the shortest path.
:input float gamma : The discount factor.
:input float epsilon : accuracy of the value iteration procedure.
:return numpy.array V : Optimal values for every state at every
time, dimension S*T
:return numpy.array policy: Optimal time-varying policy at every state,
dimension S*T
"""
# The value itearation algorithm requires the knowledge of :
# - Transition probabilities
# - Rewards
# - State space
# - Action space
# - The finite horizon
p = env.transition_probabilities;
r = env.rewards;
n_states = env.n_states;
n_actions = env.n_actions;
# Required variables and temporary ones for the VI to run
V = np.zeros(n_states);
Q = np.zeros((n_states, n_actions));
BV = np.zeros(n_states);
# Iteration counter
n = 0;
# Tolerance error
tol = (1 - gamma)* epsilon/gamma;
#tol = 100
# Initialization of the VI
for s in range(n_states):
for a in range(n_actions):
Q[s, a] = r[s, a] + gamma*np.dot(p[:,s,a],V);
BV = np.max(Q, 1);
# Iterate until convergence
while np.linalg.norm(V - BV) >= tol and n < 2600:
# Increment by one the numbers of iteration
n += 1;
# Update the value function
V = np.copy(BV);
# Compute the new BV
for s in range(n_states):
for a in range(n_actions):
Q[s, a] = r[s, a] + gamma*np.dot(p[:,s,a],V);
BV = np.max(Q, 1);
# Show error
#print(np.linalg.norm(V - BV))
# Compute policy
policy = np.argmax(Q,1);
# Return the obtained policy
return V, policy;
def draw_town_map(town_map):
# Map a color to each cell in the town_map
col_map = {0: WHITE, 1: BLACK, 2: LIGHT_GREEN, -6: LIGHT_RED, -1: LIGHT_RED};
# Give a color to each cell
rows,cols = town_map.shape;
colored_town_map = [[col_map[town_map[j,i]] for i in range(cols)] for j in range(rows)];
# Create figure of the size of the town_map
fig = plt.figure(1, figsize=(cols,rows));
# Remove the axis ticks and add title title
ax = plt.gca();
ax.set_title('The town_map');
ax.set_xticks([]);
ax.set_yticks([]);
# Give a color to each cell
rows,cols = town_map.shape;
colored_town_map = [[col_map[town_map[j,i]] for i in range(cols)] for j in range(rows)];
# Create figure of the size of the town_map
fig = plt.figure(1, figsize=(cols,rows))
# Create a table to color
grid = plt.table(cellText=None,
cellColours=colored_town_map,
cellLoc='center',
loc=(0,0),
edges='closed');
# Modify the hight and width of the cells in the table
tc = grid.properties()['children']
for cell in tc:
cell.set_height(1.0/rows);
cell.set_width(1.0/cols);
def animate_solution(town_map, path, save_anim = False, until_caught = False, gamma = 0):
# Map a color to each cell in the town_map
col_map = {0: WHITE, 1: SEB_GREEN, 2: LIGHT_GREEN, -6: LIGHT_RED, -1: LIGHT_RED};
# Size of the town_map
rows,cols = town_map.shape;
# Create figure of the size of the town_map
fig = plt.figure(1, figsize=(cols,rows));
# Remove the axis ticks and add title title
ax = plt.gca();
ax.set_title('Policy simulation: $\lambda$ = %0.1f' %gamma);
ax.set_xticks([]);
ax.set_yticks([]);
# Give a color to each cell
colored_town_map = [[col_map[town_map[j,i]] for i in range(cols)] for j in range(rows)];
# Create figure of the size of the town_map
fig = plt.figure(1, figsize=(cols,rows))
# Create a table to color
grid = plt.table(cellText=None,
cellColours=colored_town_map,
cellLoc='center',
loc=(0,0),
edges='closed');
# Modify the hight and width of the cells in the table
tc = grid.properties()['children']
for cell in tc:
cell.set_height(1.0/rows);
cell.set_width(1.0/cols);
# Update the color at each frame
path_robber = [tuple(p)[0:2] for p in path]
path_police = [tuple(p)[2:] for p in path]
for i in range(len(path_robber)):
if i == 0:
grid.get_celld()[(path_robber[i])].set_facecolor(LIGHT_ORANGE)
grid.get_celld()[(path_robber[i])].get_text().set_text('Robber')
grid.get_celld()[(path_police[i])].set_facecolor(LIGHT_RED)
grid.get_celld()[(path_police[i])].get_text().set_text('Police')
if save_anim:
plt.savefig('optimal_policy_'+str(i))
else:
if until_caught and path_robber[i] == path_police[i]:
grid.get_celld()[(path_robber[i-1])].set_facecolor(col_map[town_map[path_robber[i-1]]])
grid.get_celld()[(path_robber[i-1])].get_text().set_text('')
grid.get_celld()[(path_police[i-1])].set_facecolor(col_map[town_map[path_police[i-1]]])
grid.get_celld()[(path_police[i-1])].get_text().set_text('')
grid.get_celld()[(path_police[i])].set_facecolor(BUSTED_BLUE)
grid.get_celld()[(path_police[i])].get_text().set_text('BUSTED')
print("BUSTED!!!", gamma)
if save_anim:
plt.savefig(str(gamma)+'_'+str(i)+'.png')
break
if save_anim:
plt.savefig(str(gamma)+'_'+str(i)+'.png')
grid.get_celld()[(path_robber[i-1])].set_facecolor(col_map[town_map[path_robber[i-1]]])
grid.get_celld()[(path_robber[i-1])].get_text().set_text('')
grid.get_celld()[(path_police[i-1])].set_facecolor(col_map[town_map[path_police[i-1]]])
grid.get_celld()[(path_police[i-1])].get_text().set_text('')
grid.get_celld()[(path_robber[i])].set_facecolor(LIGHT_ORANGE)
grid.get_celld()[(path_robber[i])].get_text().set_text('Robber')
grid.get_celld()[(path_police[i])].set_facecolor(LIGHT_RED)
grid.get_celld()[(path_police[i])].get_text().set_text('Police')
grid.get_celld()[0,0].get_text().set_text('SEB')
grid.get_celld()[0,0].get_text().set_color('white')
grid.get_celld()[0,5].get_text().set_text('SEB')
grid.get_celld()[0,5].get_text().set_color('white')
grid.get_celld()[2,0].get_text().set_text('SEB')
grid.get_celld()[2,0].get_text().set_color('white')
grid.get_celld()[2,5].get_text().set_text('SEB')
grid.get_celld()[2,5].get_text().set_color('white')
plt.pause(0.7)
plt.show()
town_map= np.array([
[ 1, 0, 0, 0, 0, 1],
[ 0, 0, 0, 0, 0, 0],
[ 1, 0, 0, 0, 0, 1]
])
rb = RobbingBanks(town_map)
p=rb.transition_probabilities
n=rb.n_states
for s in range(n):
summ=np.sum(p[:,s,3])
if summ>1:
print(rb.states[s])
# PLOTTING VALUE_FUNC(INIT_STATE) AS A FUNCTION OF LAMBDA/GAMMA
"""
gammas = np.linspace(0.01,1,100,endpoint=False)
values = []
for gamma in gammas:
V, policy = value_iteration(rb, gamma, epsilon = 1e-6)
values.append(V[rb.map[(0,0,1,2)]])
plt.semilogy(gammas,values,'--')
plt.xlabel('Discount rate $\lambda$')
plt.ylabel('Value function V')
plt.title('Effect of $\lambda$ on V')
plt.plot()
#plt.show()
plt.savefig('Value_2b.png')
"""
# PLOTTING OPTIMAL POLICY FOR DIFFERENT LAMBDAS
"""
gammas = [0.1,0.5,0.8]
for gamma in gammas:
V, policy = value_iteration(rb, gamma, 1e-6)
path = rb.simulate(policy)
animate_solution(town_map, path, save_anim = False, until_caught = True,gamma=gamma)
""" | Assignment 2/robbing_banks.py | 16,523 | Constructor of the environment town_map.
Makes a step in the town_map, given a current position and an action.
If the action STAY or an inadmissible action is used, the robber stays in place.
:return integer next_cell corresponding to position (x,y) x (x,y) on the town_map that agent transitions to.
Input: The state as an int
Returns: A list of possible new minotaur positions from current state
Computes the transition probabilities for every state action pair.
:return numpy.tensor transition probabilities: tensor of transition
probabilities of dimension S*S*A
Solves the shortest path problem using value iteration
:input town_map env : The town_map environment in which we seek to
find the shortest path.
:input float gamma : The discount factor.
:input float epsilon : accuracy of the value iteration procedure.
:return numpy.array V : Optimal values for every state at every
time, dimension S*T
:return numpy.array policy: Optimal time-varying policy at every state,
dimension S*T
Implemented methods Some colours Actions Give names to actions Reward values Compute the future position given current (state, action) Is the future position an impossible one ? Based on the impossiblity check return the next state.caught = (row, col) == (new_police_pos[0], new_police_pos[1])Hot take: If you "unintentionally" hit the wall, the result should be that you (and the police) stay in place since it's not a "deliberate" move Initialize the transition probailities tensor (S,S,A) Compute the transition probabilities. Note that the transitions are deterministic.if we are in the same position as the police, we return to initialpolice positions rewards[i,j,k] = r(s' | s, a): tensor of rewards of dimension S x S x Aif we can get caught in the next moveif our next position is not a bankif our next position is a bankif we cannot get caught in the next movereward for standing in a bank list_pos = self.__police_positions(s) for a in range(self.n_actions): next_s = self.__move(s,a); Initialize current state, next state and time Add the starting position in the town_map to the path Move to next state given the policy and the current state Add the position in the town_map corresponding to the next state to the pygame.freetype.path Loop while state is not the goal state Update state Move to next state given the policy and the current state Add the position in the town_map corresponding to the next state to the path Update time and state for next iteration The value itearation algorithm requires the knowledge of : - Transition probabilities - Rewards - State space - Action space - The finite horizon Required variables and temporary ones for the VI to run Iteration counter Tolerance errortol = 100 Initialization of the VI Iterate until convergence Increment by one the numbers of iteration Update the value function Compute the new BV Show errorprint(np.linalg.norm(V - BV)) Compute policy Return the obtained policy Map a color to each cell in the town_map Give a color to each cell Create figure of the size of the town_map Remove the axis ticks and add title title Give a color to each cell Create figure of the size of the town_map Create a table to color Modify the hight and width of the cells in the table Map a color to each cell in the town_map Size of the town_map Create figure of the size of the town_map Remove the axis ticks and add title title Give a color to each cell Create figure of the size of the town_map Create a table to color Modify the hight and width of the cells in the table Update the color at each frame PLOTTING VALUE_FUNC(INIT_STATE) AS A FUNCTION OF LAMBDA/GAMMA PLOTTING OPTIMAL POLICY FOR DIFFERENT LAMBDAS | 3,782 | en | 0.776706 |
# -*- coding: utf-8 -*-
# Copyright 2018, IBM.
#
# This source code is licensed under the Apache License, Version 2.0 found in
# the LICENSE.txt file in the root directory of this source tree.
# pylint: disable=invalid-name, bad-continuation
"""Provider for local backends."""
import logging
from qiskit._qiskiterror import QISKitError
from qiskit.backends import BaseProvider
from .qasm_simulator_cpp import CliffordSimulatorCpp, QasmSimulatorCpp
from .qasm_simulator_py import QasmSimulatorPy
from .statevector_simulator_cpp import StatevectorSimulatorCpp
from .statevector_simulator_py import StatevectorSimulatorPy
from .unitary_simulator_py import UnitarySimulatorPy
logger = logging.getLogger(__name__)
SDK_STANDARD_BACKENDS = [
QasmSimulatorCpp,
QasmSimulatorPy,
StatevectorSimulatorCpp,
StatevectorSimulatorPy,
UnitarySimulatorPy,
CliffordSimulatorCpp,
]
class LocalProvider(BaseProvider):
"""Provider for local backends."""
def __init__(self, *args, **kwargs):
super().__init__(args, kwargs)
# Populate the list of local backends.
self.backends = self._verify_local_backends()
def get_backend(self, name):
return self.backends[name]
def available_backends(self, filters=None):
# pylint: disable=arguments-differ
backends = self.backends
filters = filters or {}
for key, value in filters.items():
backends = {name: instance for name, instance in backends.items()
if instance.configuration.get(key) == value}
return list(backends.values())
def aliased_backend_names(self):
return {
'local_qasm_simulator': ['local_qasm_simulator_cpp',
'local_qasm_simulator_py'],
'local_statevector_simulator': ['local_statevector_simulator_cpp',
'local_statevector_simulator_py'],
'local_unitary_simulator': ['local_unitary_simulator_cpp',
'local_unitary_simulator_py']
# TODO: restore after clifford simulator release
# 'local_clifford_simulator': ['local_clifford_simulator_cpp']
}
def deprecated_backend_names(self):
return {
'local_qiskit_simulator': 'local_qasm_simulator_cpp',
'wood_simulator': 'local_qasm_simulator_cpp',
}
@classmethod
def _verify_local_backends(cls):
"""
Return the local backends in `SDK_STANDARD_BACKENDS` that are
effectively available (as some of them might depend on the presence
of an optional dependency or on the existence of a binary).
Returns:
dict[str:BaseBackend]: a dict of the local backends instances for
the backends that could be instantiated, keyed by backend name.
"""
ret = {}
for backend_cls in SDK_STANDARD_BACKENDS:
try:
backend_instance = cls._get_backend_instance(backend_cls)
backend_name = backend_instance.configuration['name']
ret[backend_name] = backend_instance
except QISKitError as e:
# Ignore backends that could not be initialized.
logger.info('local backend %s is not available: %s',
backend_cls, str(e))
return ret
@classmethod
def _get_backend_instance(cls, backend_cls):
"""
Return an instance of a backend from its class.
Args:
backend_cls (class): Backend class.
Returns:
BaseBackend: a backend instance.
Raises:
QISKitError: if the backend could not be instantiated or does not
provide a valid configuration containing a name.
"""
# Verify that the backend can be instantiated.
try:
backend_instance = backend_cls()
except Exception as err:
raise QISKitError('Backend %s could not be instantiated: %s' %
(cls, err))
# Verify that the instance has a minimal valid configuration.
try:
_ = backend_instance.configuration['name']
except (LookupError, TypeError):
raise QISKitError('Backend %s has an invalid configuration')
return backend_instance
| qiskit/backends/local/localprovider.py | 4,407 | Provider for local backends.
Return an instance of a backend from its class.
Args:
backend_cls (class): Backend class.
Returns:
BaseBackend: a backend instance.
Raises:
QISKitError: if the backend could not be instantiated or does not
provide a valid configuration containing a name.
Return the local backends in `SDK_STANDARD_BACKENDS` that are
effectively available (as some of them might depend on the presence
of an optional dependency or on the existence of a binary).
Returns:
dict[str:BaseBackend]: a dict of the local backends instances for
the backends that could be instantiated, keyed by backend name.
Provider for local backends.
-*- coding: utf-8 -*- Copyright 2018, IBM. This source code is licensed under the Apache License, Version 2.0 found in the LICENSE.txt file in the root directory of this source tree. pylint: disable=invalid-name, bad-continuation Populate the list of local backends. pylint: disable=arguments-differ TODO: restore after clifford simulator release 'local_clifford_simulator': ['local_clifford_simulator_cpp'] Ignore backends that could not be initialized. Verify that the backend can be instantiated. Verify that the instance has a minimal valid configuration. | 1,237 | en | 0.803416 |
# -*- coding: utf-8 -*-
"""
product.py
Implementing Add listing wizard for downstream modules:
* In the __setup__ method of `product.listing.add.start` in downstream
module, add the type as a valid channel type. Since this is non trivial
a convenience method `add_source` is provided which will add the source
type in an idempotent fashion.
* Implement a StateView in the `product.listing.add` wizard with the name
`start_<source_name>`. This StateView can change the state to other state
views or transitions. Eventually it should end with the `end` state.
"""
from collections import defaultdict
from trytond.pool import PoolMeta, Pool
from trytond.wizard import Wizard, Button, StateTransition, StateView
from trytond.transaction import Transaction
from trytond.model import ModelView, fields, ModelSQL, Unique
from trytond.pyson import Eval, Bool
__metaclass__ = PoolMeta
__all__ = [
'ProductSaleChannelListing', 'Product', 'AddProductListing',
'AddProductListingStart', 'TemplateSaleChannelListing',
'Template'
]
class AddProductListingStart(ModelView):
"Add listing form start"
__name__ = 'product.listing.add.start'
product = fields.Many2One(
'product.product', 'Product', readonly=True
)
channel = fields.Many2One(
'sale.channel', 'Channel', required=True,
domain=[('source', 'in', [])]
)
channel_source = fields.Function(
fields.Char("Channel Source"),
getter="on_change_with_channel_source"
)
@fields.depends('channel')
def on_change_with_channel_source(self, name=None):
return self.channel and self.channel.source
@classmethod
def add_source(cls, source):
"""
A convenience method for downstream modules to add channel
source types once they have implemented the step in the wizard
below.
This method must be called from `__setup__` method of downstream
module.
"""
source_leaf = cls.channel.domain[0][2]
if source not in source_leaf:
source_leaf.append(source)
class AddProductListing(Wizard):
"Add product Channel Listing Wizard"
__name__ = 'product.listing.add'
start = StateView(
'product.listing.add.start',
'sale_channel.listing_add_start_form', [
Button('Cancel', 'end', 'tryton-cancel'),
Button('Next', 'next', 'tryton-go-next', default=True),
]
)
next = StateTransition()
def default_start(self, fields):
return {
'product': Transaction().context['active_id']
}
def transition_next(self):
return 'start_%s' % self.start.channel.source
class Template:
"Product Template"
__name__ = 'product.template'
channel_listings = fields.One2Many(
'product.template.channel_listing', 'template', 'Channel Listings'
)
class TemplateSaleChannelListing(ModelSQL, ModelView):
"""
Template - Sale Channel
This model keeps a record of a template's association with Sale Channels.
"""
__name__ = 'product.template.channel_listing'
channel = fields.Many2One(
'sale.channel', 'Sale Channel',
domain=[('source', '!=', 'manual')],
select=True, required=True,
ondelete='RESTRICT'
)
template = fields.Many2One(
'product.template', 'Product Template', required=True,
select=True, ondelete='CASCADE'
)
template_identifier = fields.Char(
'Template Identifier', select=True, required=True
)
@classmethod
def __setup__(cls):
"""
Setup the class and define constraints
"""
super(TemplateSaleChannelListing, cls).__setup__()
table = cls.__table__()
cls._sql_constraints += [(
'channel_template_unique',
Unique(table, table.channel, table.template_identifier, table.template), # noqa
'Product Template is already mapped to this channel with same identifier' # noqa
)]
class Product:
"Product"
__name__ = "product.product"
channel_listings = fields.One2Many(
'product.product.channel_listing', 'product', 'Channel Listings',
)
@classmethod
def __setup__(cls):
super(Product, cls).__setup__()
cls._buttons.update({
'add_listing': {},
})
@classmethod
@ModelView.button_action('sale_channel.wizard_add_listing')
def add_listing(cls, products):
pass
@classmethod
def create_from(cls, channel, product_data):
"""
Create the product for the channel
"""
raise NotImplementedError(
"create_from is not implemented in product for %s channels"
% channel.source
)
class ProductSaleChannelListing(ModelSQL, ModelView):
'''Product - Sale Channel
This model keeps a record of a product's association with Sale Channels.
A product can be listed on multiple marketplaces
'''
__name__ = 'product.product.channel_listing'
# TODO: Only show channels where this ability is there. For example
# showing a manual channel is pretty much useless
channel = fields.Many2One(
'sale.channel', 'Sale Channel',
domain=[('source', '!=', 'manual')],
required=True, select=True,
ondelete='RESTRICT'
)
product = fields.Many2One(
'product.product', 'Product', select=True,
states={'required': Eval('state') == 'active'},
ondelete='CASCADE', depends=['state']
)
product_identifier = fields.Char(
"Product Identifier", select=True, required=True
)
state = fields.Selection([
('active', 'Active'),
('disabled', 'Disabled'),
], 'State', required=True, select=True)
channel_source = fields.Function(
fields.Char("Channel Source"),
getter="get_channel_source"
)
quantity = fields.Function(
fields.Float(
'Quantity',
digits=(16, Eval('unit_digits', 2)), depends=['unit_digits']
), 'get_availability_fields'
)
unit_digits = fields.Function(
fields.Integer('Unit Digits'), 'get_unit_digits'
)
availability_type_used = fields.Function(
fields.Selection([
('bucket', 'Bucket'),
('quantity', 'Quantity'),
('infinite', 'Infinite'),
], 'Type'), 'get_availability_fields'
)
availability_used = fields.Function(
fields.Selection([
('in_stock', 'In-Stock'),
('out_of_stock', 'Out Of Stock'),
], 'Availability', states={
'invisible': ~Bool(Eval('availability_type_used') == 'bucket')
}, depends=['availability_type_used']),
'get_availability_fields'
)
listing_url = fields.Function(
fields.Char('Listing URL'), 'get_listing_url'
)
@classmethod
def search_rec_name(cls, name, clause):
return [
'OR',
('product',) + tuple(clause[1:]),
('product_identifier',) + tuple(clause[1:]),
]
@classmethod
def get_unit_digits(cls, records, name):
result = {r.id: r.product.default_uom.digits if r.product else 2
for r in records}
return result
@classmethod
def get_listing_url(cls, records, name):
"""
Downstream modules should implement this function
and return a valid url
"""
return dict.fromkeys([r.id for r in records])
@classmethod
def get_availability_fields(cls, listings, names):
listing_ids = map(int, listings)
values = defaultdict(lambda: dict.fromkeys(listing_ids, None))
for name in names:
# Just call the default dict once so all fields have values
# even if product is absent
values[name]
for listing in listings:
if listing.product:
availability = listing.get_availability()
values['availability_type_used'][listing.id] = \
availability['type']
values['availability_used'][listing.id] = availability.get(
'value'
)
values['quantity'][listing.id] = availability.get('quantity')
return values
@classmethod
def get_channel_source(cls, records, name):
result = {r.id: r.channel and r.channel.source for r in records}
return result
@fields.depends('channel')
def on_change_with_channel_source(self, name=None):
return self.channel and self.channel.source
@classmethod
def __setup__(cls):
'''
Setup the class and define constraints
'''
super(ProductSaleChannelListing, cls).__setup__()
table = cls.__table__()
cls._sql_constraints += [
(
'channel_product_identifier_uniq',
Unique(table, table.channel, table.product_identifier),
'This external product is already mapped with same channel.'
)
]
cls._buttons.update({
'export_inventory_button': {},
})
@staticmethod
def default_state():
return 'active'
@classmethod
def create_from(cls, channel, product_data):
"""
Create a listing for the product from channel and data
"""
raise NotImplementedError(
"create_from is not implemented in channel listing for %s channels"
% channel.source
)
@classmethod
@ModelView.button
def export_inventory_button(cls, listings):
return cls.export_bulk_inventory(listings)
def export_inventory(self):
"""
Export listing.product inventory to listing.channel
Since external channels are implemented by downstream modules, it is
the responsibility of those channels to implement exporting or call
super to delegate.
"""
raise NotImplementedError(
"Export inventory is not implemented for %s channels"
% self.channel.source
)
@classmethod
def export_bulk_inventory(cls, listings):
"""
Export listing.product inventory to listing.channel in bulk
Since external channels are implemented by downstream modules, it is
the responsibility of those channels to implement bulk exporting for
respective channels.
Default behaviour is to export inventory individually.
"""
for listing in listings:
listing.export_inventory()
def import_product_image(self):
"""
Import specific product image from external channel based on product
identifier.
Since external channels are implemented by downstream modules, it is
the responsibility of those channels to implement importing or call
super to delegate.
"""
raise NotImplementedError(
"Method import_product_image is not implemented for %s channel yet"
% self.source
)
def get_availability_context(self):
"""
Allow overriding the context used to compute availability of
products.
"""
return {
'locations': [self.channel.warehouse.id],
}
def get_availability(self):
"""
Return the availability of the product for this listing
"""
Product = Pool().get('product.product')
with Transaction().set_context(**self.get_availability_context()):
rv = {'type': 'bucket', 'value': None, 'quantity': None}
if self.product:
product = Product(self.product.id)
rv['quantity'] = product.quantity
if rv['quantity'] > 0:
rv['value'] = 'in_stock'
else:
rv['value'] = 'out_of_stock'
return rv
| product.py | 11,959 | Add product Channel Listing Wizard
Add listing form start
Product
Product - Sale Channel
This model keeps a record of a product's association with Sale Channels.
A product can be listed on multiple marketplaces
Product Template
Template - Sale Channel
This model keeps a record of a template's association with Sale Channels.
Setup the class and define constraints
Setup the class and define constraints
A convenience method for downstream modules to add channel
source types once they have implemented the step in the wizard
below.
This method must be called from `__setup__` method of downstream
module.
Create the product for the channel
Create a listing for the product from channel and data
Export listing.product inventory to listing.channel in bulk
Since external channels are implemented by downstream modules, it is
the responsibility of those channels to implement bulk exporting for
respective channels.
Default behaviour is to export inventory individually.
Export listing.product inventory to listing.channel
Since external channels are implemented by downstream modules, it is
the responsibility of those channels to implement exporting or call
super to delegate.
Return the availability of the product for this listing
Allow overriding the context used to compute availability of
products.
Downstream modules should implement this function
and return a valid url
Import specific product image from external channel based on product
identifier.
Since external channels are implemented by downstream modules, it is
the responsibility of those channels to implement importing or call
super to delegate.
product.py
Implementing Add listing wizard for downstream modules:
* In the __setup__ method of `product.listing.add.start` in downstream
module, add the type as a valid channel type. Since this is non trivial
a convenience method `add_source` is provided which will add the source
type in an idempotent fashion.
* Implement a StateView in the `product.listing.add` wizard with the name
`start_<source_name>`. This StateView can change the state to other state
views or transitions. Eventually it should end with the `end` state.
-*- coding: utf-8 -*- noqa noqa TODO: Only show channels where this ability is there. For example showing a manual channel is pretty much useless Just call the default dict once so all fields have values even if product is absent | 2,396 | en | 0.866166 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.