column              type            lengths / values
------              ----            ----------------
nwo                 stringlengths   5 to 86
sha                 stringlengths   40 to 40
path                stringlengths   4 to 189
language            stringclasses   1 value
identifier          stringlengths   1 to 94
parameters          stringlengths   2 to 4.03k
argument_list       stringclasses   1 value
return_statement    stringlengths   0 to 11.5k
docstring           stringlengths   1 to 33.2k
docstring_summary   stringlengths   0 to 5.15k
docstring_tokens    list
function            stringlengths   34 to 151k
function_tokens     list
url                 stringlengths   90 to 278
edvardHua/Articles
60606c2dfc030d1dd3ac8e573be4367207efec8c
神经网络中 BP 算法的原理与 Python 实现源码解析/mnist_loader.py
python
load_data
()
return (training_data, validation_data, test_data)
Return the MNIST data as a tuple containing the training data, the validation data, and the test data. The ``training_data`` is returned as a tuple with two entries. The first entry contains the actual training images. This is a numpy ndarray with 50,000 entries. Each entry is, in turn, a numpy ndarray with 784 values, representing the 28 * 28 = 784 pixels in a single MNIST image. The second entry in the ``training_data`` tuple is a numpy ndarray containing 50,000 entries. Those entries are just the digit values (0...9) for the corresponding images contained in the first entry of the tuple. The ``validation_data`` and ``test_data`` are similar, except each contains only 10,000 images. This is a nice data format, but for use in neural networks it's helpful to modify the format of the ``training_data`` a little. That's done in the wrapper function ``load_data_wrapper()``, see below.
Return the MNIST data as a tuple containing the training data, the validation data, and the test data.
[ "Return", "the", "MNIST", "data", "as", "a", "tuple", "containing", "the", "training", "data", "the", "validation", "data", "and", "the", "test", "data", "." ]
def load_data():
    """Return the MNIST data as a tuple containing the training data,
    the validation data, and the test data.

    The ``training_data`` is returned as a tuple with two entries.
    The first entry contains the actual training images.  This is a
    numpy ndarray with 50,000 entries.  Each entry is, in turn, a
    numpy ndarray with 784 values, representing the 28 * 28 = 784
    pixels in a single MNIST image.

    The second entry in the ``training_data`` tuple is a numpy ndarray
    containing 50,000 entries.  Those entries are just the digit
    values (0...9) for the corresponding images contained in the first
    entry of the tuple.

    The ``validation_data`` and ``test_data`` are similar, except
    each contains only 10,000 images.

    This is a nice data format, but for use in neural networks it's
    helpful to modify the format of the ``training_data`` a little.
    That's done in the wrapper function ``load_data_wrapper()``, see
    below.
    """
    f = gzip.open('./data/mnist.pkl.gz', 'rb')
    training_data, validation_data, test_data = cPickle.load(f)
    f.close()
    return (training_data, validation_data, test_data)
[ "def", "load_data", "(", ")", ":", "f", "=", "gzip", ".", "open", "(", "'./data/mnist.pkl.gz'", ",", "'rb'", ")", "training_data", ",", "validation_data", ",", "test_data", "=", "cPickle", ".", "load", "(", "f", ")", "f", ".", "close", "(", ")", "retur...
https://github.com/edvardHua/Articles/blob/60606c2dfc030d1dd3ac8e573be4367207efec8c/神经网络中 BP 算法的原理与 Python 实现源码解析/mnist_loader.py#L19-L45
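The ``load_data`` row above reads a gzipped pickle of three ``(images, labels)`` splits. A minimal sketch of the same gzip + pickle round-trip on synthetic data (the file name and tiny split sizes here are made up for illustration; ``pickle`` stands in for Python 2's ``cPickle``):

```python
import gzip
import os
import pickle
import tempfile

# Synthetic stand-in for mnist.pkl.gz: three (images, labels) splits,
# each image a flat 784-value vector as the docstring describes.
training_data = ([[0.0] * 784 for _ in range(5)], [3, 1, 4, 1, 5])
validation_data = ([[0.0] * 784 for _ in range(2)], [9, 2])
test_data = ([[0.0] * 784 for _ in range(2)], [6, 5])

path = os.path.join(tempfile.mkdtemp(), 'mini_mnist.pkl.gz')
with gzip.open(path, 'wb') as f:
    pickle.dump((training_data, validation_data, test_data), f)

# Load it back the same way load_data() does.
with gzip.open(path, 'rb') as f:
    tr, va, te = pickle.load(f)

assert len(tr[0]) == 5 and len(tr[0][0]) == 784
```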
psi4/psi4
be533f7f426b6ccc263904e55122899b16663395
psi4/driver/driver.py
python
properties
(*args, **kwargs)
r"""Function to compute various properties.

:aliases: prop()

:returns: none.

.. caution:: Some features are not yet implemented. Buy a developer a coffee.

   - This function at present has limited functionality. Consult the keywords
     sections of other modules for further property capabilities.

.. list-table::
   :header-rows: 1

   * - Name
     - Calls Method
     - Reference
     - Supported Properties
   * - scf
     - Self-consistent field method(s)
     - RHF/ROHF/UHF
     - Listed :ref:`here <sec:oeprop>`
   * - hf
     - HF Self-consistent field method(s)
     - RHF/ROHF/UHF
     - Listed :ref:`here <sec:oeprop>`
   * - mp2
     - MP2 with density fitting only (mp2_type df)
     - RHF
     - Listed :ref:`here <sec:oeprop>`
   * - cc2
     - 2nd-order approximate CCSD
     - RHF
     - dipole, quadrupole, polarizability, rotation, roa_tensor
   * - ccsd
     - Coupled cluster singles and doubles (CCSD)
     - RHF
     - dipole, quadrupole, polarizability, rotation, roa_tensor
   * - dct
     - density cumulant (functional) theory :ref:`[manual] <sec:dct>`
     - RHF/UHF
     - Listed :ref:`here <sec:oeprop>`
   * - omp2
     - orbital-optimized second-order MP perturbation theory :ref:`[manual] <sec:occ_oo>`
     - RHF/UHF
     - Listed :ref:`here <sec:oeprop>`; density fitted only
   * - omp3
     - orbital-optimized third-order MP perturbation theory :ref:`[manual] <sec:occ_oo>`
     - RHF/UHF
     - Listed :ref:`here <sec:oeprop>`; density fitted only
   * - omp2.5
     - orbital-optimized MP2.5 :ref:`[manual] <sec:occ_oo>`
     - RHF/UHF
     - Listed :ref:`here <sec:oeprop>`; density fitted only
   * - olccd
     - orbital optimized LCCD :ref:`[manual] <sec:occ_oo>`
     - RHF/UHF
     - Listed :ref:`here <sec:oeprop>`; density fitted only
   * - eom-cc2
     - 2nd-order approximate EOM-CCSD
     - RHF
     - oscillator_strength, rotational_strength
   * - eom-ccsd
     - Equation-of-motion CCSD (EOM-CCSD)
     - RHF
     - oscillator_strength, rotational_strength
   * - cisd, cisdt, cisdtq, ci5, ..., fci
     - Configuration interaction
     - RHF/ROHF
     - Listed :ref:`here <sec:oeprop>`, transition_dipole, transition_quadrupole
   * - casscf, rasscf
     - Multi-configurational SCF
     - RHF/ROHF
     - Listed :ref:`here <sec:oeprop>`, transition_dipole, transition_quadrupole
   * - adc(0), adc(1), ..., adc(3), cvs-adc(0), ..., cvs-adc(3)
     - Algebraic-diagrammatic construction methods :ref:`[manual] <sec:adc>`
     - RHF/UHF
     - dipole, transition_dipole, oscillator_strength, rotational_strength

:type name: str
:param name: ``'ccsd'`` || etc.

    First argument, usually unlabeled. Indicates the computational method
    to be applied to the system.

:type properties: List[str]
:param properties: |dl| ``[]`` |dr| || ``['rotation', 'polarizability', 'oscillator_strength', 'roa']`` || etc.

    Indicates which properties should be computed. Defaults to dipole and quadrupole.

:type molecule: :ref:`molecule <op_py_molecule>`
:param molecule: ``h2o`` || etc.

    The target molecule, if not the last molecule defined.

:examples:

>>> # [1] Optical rotation calculation
>>> properties('cc2', properties=['rotation'])
r"""Function to compute various properties.
[ "r", "Function", "to", "compute", "various", "properties", "." ]
def properties(*args, **kwargs):
    r"""Function to compute various properties.

    :aliases: prop()

    :returns: none.

    .. caution:: Some features are not yet implemented. Buy a developer a coffee.

       - This function at present has limited functionality. Consult the keywords
         sections of other modules for further property capabilities.

    .. list-table::
       :header-rows: 1

       * - Name
         - Calls Method
         - Reference
         - Supported Properties
       * - scf
         - Self-consistent field method(s)
         - RHF/ROHF/UHF
         - Listed :ref:`here <sec:oeprop>`
       * - hf
         - HF Self-consistent field method(s)
         - RHF/ROHF/UHF
         - Listed :ref:`here <sec:oeprop>`
       * - mp2
         - MP2 with density fitting only (mp2_type df)
         - RHF
         - Listed :ref:`here <sec:oeprop>`
       * - cc2
         - 2nd-order approximate CCSD
         - RHF
         - dipole, quadrupole, polarizability, rotation, roa_tensor
       * - ccsd
         - Coupled cluster singles and doubles (CCSD)
         - RHF
         - dipole, quadrupole, polarizability, rotation, roa_tensor
       * - dct
         - density cumulant (functional) theory :ref:`[manual] <sec:dct>`
         - RHF/UHF
         - Listed :ref:`here <sec:oeprop>`
       * - omp2
         - orbital-optimized second-order MP perturbation theory :ref:`[manual] <sec:occ_oo>`
         - RHF/UHF
         - Listed :ref:`here <sec:oeprop>`; density fitted only
       * - omp3
         - orbital-optimized third-order MP perturbation theory :ref:`[manual] <sec:occ_oo>`
         - RHF/UHF
         - Listed :ref:`here <sec:oeprop>`; density fitted only
       * - omp2.5
         - orbital-optimized MP2.5 :ref:`[manual] <sec:occ_oo>`
         - RHF/UHF
         - Listed :ref:`here <sec:oeprop>`; density fitted only
       * - olccd
         - orbital optimized LCCD :ref:`[manual] <sec:occ_oo>`
         - RHF/UHF
         - Listed :ref:`here <sec:oeprop>`; density fitted only
       * - eom-cc2
         - 2nd-order approximate EOM-CCSD
         - RHF
         - oscillator_strength, rotational_strength
       * - eom-ccsd
         - Equation-of-motion CCSD (EOM-CCSD)
         - RHF
         - oscillator_strength, rotational_strength
       * - cisd, cisdt, cisdtq, ci5, ..., fci
         - Configuration interaction
         - RHF/ROHF
         - Listed :ref:`here <sec:oeprop>`, transition_dipole, transition_quadrupole
       * - casscf, rasscf
         - Multi-configurational SCF
         - RHF/ROHF
         - Listed :ref:`here <sec:oeprop>`, transition_dipole, transition_quadrupole
       * - adc(0), adc(1), ..., adc(3), cvs-adc(0), ..., cvs-adc(3)
         - Algebraic-diagrammatic construction methods :ref:`[manual] <sec:adc>`
         - RHF/UHF
         - dipole, transition_dipole, oscillator_strength, rotational_strength

    :type name: str
    :param name: ``'ccsd'`` || etc.

        First argument, usually unlabeled. Indicates the computational method
        to be applied to the system.

    :type properties: List[str]
    :param properties: |dl| ``[]`` |dr| || ``['rotation', 'polarizability', 'oscillator_strength', 'roa']`` || etc.

        Indicates which properties should be computed. Defaults to dipole and quadrupole.

    :type molecule: :ref:`molecule <op_py_molecule>`
    :param molecule: ``h2o`` || etc.

        The target molecule, if not the last molecule defined.

    :examples:

    >>> # [1] Optical rotation calculation
    >>> properties('cc2', properties=['rotation'])

    """
    kwargs = p4util.kwargs_lower(kwargs)

    # Make sure the molecule the user provided is the active one
    molecule = kwargs.pop('molecule', core.get_active_molecule())
    molecule.update_geometry()
    kwargs['molecule'] = molecule

    # Allow specification of methods to arbitrary order
    lowername = args[0].lower()
    lowername, level = driver_util.parse_arbitrary_order(lowername)
    if level:
        kwargs['level'] = level

    if "/" in lowername:
        return driver_cbs._cbs_gufunc(properties, lowername, ptype='properties', **kwargs)

    return_wfn = kwargs.pop('return_wfn', False)
    props = kwargs.get('properties', ['dipole', 'quadrupole'])
    if len(args) > 1:
        props += args[1:]
    kwargs['properties'] = p4util.drop_duplicates(props)

    optstash = driver_util._set_convergence_criterion('properties', lowername, 6, 10, 6, 10, 8)
    wfn = procedures['properties'][lowername](lowername, **kwargs)

    optstash.restore()

    if return_wfn:
        return (core.variable('CURRENT ENERGY'), wfn)
    else:
        return core.variable('CURRENT ENERGY')
[ "def", "properties", "(", "*", "args", ",", "*", "*", "kwargs", ")", ":", "kwargs", "=", "p4util", ".", "kwargs_lower", "(", "kwargs", ")", "# Make sure the molecule the user provided is the active one", "molecule", "=", "kwargs", ".", "pop", "(", "'molecule'", ...
https://github.com/psi4/psi4/blob/be533f7f426b6ccc263904e55122899b16663395/psi4/driver/driver.py#L802-L914
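The body merges the default property list with any extra positional property names, then deduplicates while preserving order via ``p4util.drop_duplicates``. A sketch of that merging logic in plain Python (``drop_duplicates`` here is a hypothetical stand-in for the psi4 helper, whose exact implementation is not shown in the source):

```python
def drop_duplicates(seq):
    # Hypothetical stand-in for p4util.drop_duplicates:
    # keep the first occurrence of each item, preserving order.
    seen = []
    for item in seq:
        if item not in seen:
            seen.append(item)
    return seen

# Mirror of the body's logic: defaults, then extra positional names.
kwargs = {}
args = ('ccsd', 'rotation', 'dipole')
props = kwargs.get('properties', ['dipole', 'quadrupole'])
props += list(args[1:])
print(drop_duplicates(props))  # ['dipole', 'quadrupole', 'rotation']
```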
KhronosGroup/Vulkan-Headers
b32da5329b50e3cb96229aaecba9ded032fe29cc
registry/reg.py
python
FeatureInfo.__init__
(self, elem)
feature name string (e.g. 'VK_KHR_surface')
feature name string (e.g. 'VK_KHR_surface')
[ "feature", "name", "string", "(", "e", ".", "g", ".", "VK_KHR_surface", ")" ]
def __init__(self, elem):
    BaseInfo.__init__(self, elem)
    self.name = elem.get('name')
    "feature name string (e.g. 'VK_KHR_surface')"
    self.emit = False
    "has this feature been defined already?"
    self.sortorder = int(elem.get('sortorder', 0))
    """explicit numeric sort key within feature and extension groups.
    Defaults to 0."""

    # Determine element category (vendor). Only works
    # for <extension> elements.
    if elem.tag == 'feature':
        # Element category (vendor) is meaningless for <feature>
        self.category = 'VERSION'
        """category, e.g. VERSION or khr/vendor tag"""
        self.version = elem.get('name')
        """feature name string"""
        self.versionNumber = elem.get('number')
        """versionNumber - API version number, taken from the 'number'
        attribute of <feature>. Extensions do not have API version
        numbers and are assigned number 0."""
        self.number = "0"
        self.supported = None
    else:
        # Extract vendor portion of <APIprefix>_<vendor>_<name>
        self.category = self.name.split('_', 2)[1]
        self.version = "0"
        self.versionNumber = "0"
        self.number = elem.get('number')
        """extension number, used for ordering and for assigning
        enumerant offsets. <feature> features do not have extension
        numbers and are assigned number 0."""

        # If there is no 'number' attribute, use 0, so sorting works
        if self.number is None:
            self.number = 0
        self.supported = elem.get('supported')
[ "def", "__init__", "(", "self", ",", "elem", ")", ":", "BaseInfo", ".", "__init__", "(", "self", ",", "elem", ")", "self", ".", "name", "=", "elem", ".", "get", "(", "'name'", ")", "self", ".", "emit", "=", "False", "\"has this feature been defined alrea...
https://github.com/KhronosGroup/Vulkan-Headers/blob/b32da5329b50e3cb96229aaecba9ded032fe29cc/registry/reg.py#L244-L286
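The vendor-category extraction in the ``else`` branch is a one-liner worth seeing in isolation: the extension name follows the pattern ``<APIprefix>_<vendor>_<name>``, and splitting on at most two underscores isolates the vendor tag.

```python
# Derive the vendor category from an extension name of the form
# <APIprefix>_<vendor>_<name>, as FeatureInfo.__init__ does.
name = 'VK_KHR_surface'
category = name.split('_', 2)[1]
print(category)  # KHR
```

Limiting the split to two underscores matters: a name like ``VK_EXT_debug_utils`` still yields ``EXT``, because the remainder of the name stays in the third piece.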
windystrife/UnrealEngine_NVIDIAGameWorks
b50e6338a7c5b26374d66306ebc7807541ff815e
Engine/Extras/ThirdPartyNotUE/emsdk/Win64/python/2.7.5.3_64bit/Lib/optparse.py
python
OptionParser.disable_interspersed_args
(self)
Set parsing to stop on the first non-option. Use this if you have a command processor which runs another command that has options of its own and you want to make sure these options don't get confused.
Set parsing to stop on the first non-option. Use this if you have a command processor which runs another command that has options of its own and you want to make sure these options don't get confused.
[ "Set", "parsing", "to", "stop", "on", "the", "first", "non", "-", "option", ".", "Use", "this", "if", "you", "have", "a", "command", "processor", "which", "runs", "another", "command", "that", "has", "options", "of", "its", "own", "and", "you", "want", ...
def disable_interspersed_args(self):
    """Set parsing to stop on the first non-option.  Use this if you have
    a command processor which runs another command that has options of
    its own and you want to make sure these options don't get
    confused.
    """
    self.allow_interspersed_args = False
[ "def", "disable_interspersed_args", "(", "self", ")", ":", "self", ".", "allow_interspersed_args", "=", "False" ]
https://github.com/windystrife/UnrealEngine_NVIDIAGameWorks/blob/b50e6338a7c5b26374d66306ebc7807541ff815e/Engine/Extras/ThirdPartyNotUE/emsdk/Win64/python/2.7.5.3_64bit/Lib/optparse.py#L1295-L1301
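The effect is easiest to see on a wrapper command: once interspersed args are disabled, everything from the first positional argument onward is left untouched for the sub-command rather than parsed as the wrapper's own options.

```python
from optparse import OptionParser

parser = OptionParser()
parser.add_option('-v', action='store_true', dest='verbose')
parser.disable_interspersed_args()

# Parsing stops at 'subcmd', so '-x' is passed through to the
# sub-command instead of being rejected as an unknown option.
opts, args = parser.parse_args(['-v', 'subcmd', '-x', 'file'])
print(opts.verbose, args)  # True ['subcmd', '-x', 'file']
```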
Tencent/CMONGO
c40380caa14e05509f46993aa8b8da966b09b0b5
src/third_party/scons-2.5.0/scons-local-2.5.0/SCons/Script/Interactive.py
python
SConsInteractiveCmd.do_version
(self, argv)
\ version Prints SCons version information.
\ version Prints SCons version information.
[ "\\", "version", "Prints", "SCons", "version", "information", "." ]
def do_version(self, argv):
    """\
    version                   Prints SCons version information.
    """
    sys.stdout.write(self.parser.version + '\n')
[ "def", "do_version", "(", "self", ",", "argv", ")", ":", "sys", ".", "stdout", ".", "write", "(", "self", ".", "parser", ".", "version", "+", "'\\n'", ")" ]
https://github.com/Tencent/CMONGO/blob/c40380caa14e05509f46993aa8b8da966b09b0b5/src/third_party/scons-2.5.0/scons-local-2.5.0/SCons/Script/Interactive.py#L359-L363
domino-team/openwrt-cc
8b181297c34d14d3ca521cc9f31430d561dbc688
package/gli-pub/openwrt-node-packages-master/node/node-v6.9.1/tools/gyp/pylib/gyp/input.py
python
CheckedEval
(file_contents)
return CheckNode(c3[0], [])
Return the eval of a gyp file. The gyp file is restricted to dictionaries and lists only, and repeated keys are not allowed. Note that this is slower than eval() is.
Return the eval of a gyp file.
[ "Return", "the", "eval", "of", "a", "gyp", "file", "." ]
def CheckedEval(file_contents):
    """Return the eval of a gyp file.

    The gyp file is restricted to dictionaries and lists only, and
    repeated keys are not allowed.

    Note that this is slower than eval() is.
    """
    ast = compiler.parse(file_contents)
    assert isinstance(ast, Module)
    c1 = ast.getChildren()
    assert c1[0] is None
    assert isinstance(c1[1], Stmt)
    c2 = c1[1].getChildren()
    assert isinstance(c2[0], Discard)
    c3 = c2[0].getChildren()
    assert len(c3) == 1
    return CheckNode(c3[0], [])
[ "def", "CheckedEval", "(", "file_contents", ")", ":", "ast", "=", "compiler", ".", "parse", "(", "file_contents", ")", "assert", "isinstance", "(", "ast", ",", "Module", ")", "c1", "=", "ast", ".", "getChildren", "(", ")", "assert", "c1", "[", "0", "]"...
https://github.com/domino-team/openwrt-cc/blob/8b181297c34d14d3ca521cc9f31430d561dbc688/package/gli-pub/openwrt-node-packages-master/node/node-v6.9.1/tools/gyp/pylib/gyp/input.py#L171-L189
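``CheckedEval`` relies on the ``compiler`` module, which exists only in Python 2. A sketch of the same restricted-evaluation idea on Python 3 uses ``ast``; note this sketch does not reproduce ``CheckNode``'s rejection of repeated dictionary keys, so it is a looser check than the original:

```python
import ast

def checked_eval(file_contents):
    # Parse instead of eval(), then let literal_eval do the
    # restriction: it refuses names, calls, and attribute access,
    # so a gyp-style file of dicts/lists cannot execute code.
    node = ast.parse(file_contents, mode='eval')
    return ast.literal_eval(node)

print(checked_eval("{'targets': ['a', 'b']}"))  # {'targets': ['a', 'b']}
```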
apple/turicreate
cce55aa5311300e3ce6af93cb45ba791fd1bdf49
deps/src/libxml2-2.9.1/python/libxml2class.py
python
ValidCtxt.validateElement
(self, doc, elem)
return ret
Try to validate the subtree under an element
Try to validate the subtree under an element
[ "Try", "to", "validate", "the", "subtree", "under", "an", "element" ]
def validateElement(self, doc, elem):
    """Try to validate the subtree under an element """
    if doc is None:
        doc__o = None
    else:
        doc__o = doc._o
    if elem is None:
        elem__o = None
    else:
        elem__o = elem._o
    ret = libxml2mod.xmlValidateElement(self._o, doc__o, elem__o)
    return ret
[ "def", "validateElement", "(", "self", ",", "doc", ",", "elem", ")", ":", "if", "doc", "is", "None", ":", "doc__o", "=", "None", "else", ":", "doc__o", "=", "doc", ".", "_o", "if", "elem", "is", "None", ":", "elem__o", "=", "None", "else", ":", "...
https://github.com/apple/turicreate/blob/cce55aa5311300e3ce6af93cb45ba791fd1bdf49/deps/src/libxml2-2.9.1/python/libxml2class.py#L6367-L6374
OkCupid/okws
1c337392c676ccb4e9a4c92d11d5d2fada6427d2
contrib/pub2-upgrade.py
python
PubLexer.__init__
(self, **kwargs)
Ply magic to turn this class into a scanner given the class variables we set below.
Ply magic to turn this class into a scanner given the class variables we set below.
[ "Ply", "magic", "to", "turn", "this", "class", "into", "a", "scanner", "given", "the", "class", "variables", "we", "set", "below", "." ]
def __init__(self, **kwargs):
    """Ply magic to turn this class into a scanner given the class
    variables we set below."""
    self.lexer = ply.lex.lex(module=self, **kwargs)
    self._lineno = 1
[ "def", "__init__", "(", "self", ",", "*", "*", "kwargs", ")", ":", "self", ".", "lexer", "=", "ply", ".", "lex", ".", "lex", "(", "module", "=", "self", ",", "*", "*", "kwargs", ")", "self", ".", "_lineno", "=", "1" ]
https://github.com/OkCupid/okws/blob/1c337392c676ccb4e9a4c92d11d5d2fada6427d2/contrib/pub2-upgrade.py#L78-L83
wxWidgets/wxPython-Classic
19571e1ae65f1ac445f5491474121998c97a1bf0
src/gtk/_controls.py
python
TreeCtrl.Toggle
(*args, **kwargs)
return _controls_.TreeCtrl_Toggle(*args, **kwargs)
Toggle(self, TreeItemId item)
Toggle(self, TreeItemId item)
[ "Toggle", "(", "self", "TreeItemId", "item", ")" ]
def Toggle(*args, **kwargs):
    """Toggle(self, TreeItemId item)"""
    return _controls_.TreeCtrl_Toggle(*args, **kwargs)
[ "def", "Toggle", "(", "*", "args", ",", "*", "*", "kwargs", ")", ":", "return", "_controls_", ".", "TreeCtrl_Toggle", "(", "*", "args", ",", "*", "*", "kwargs", ")" ]
https://github.com/wxWidgets/wxPython-Classic/blob/19571e1ae65f1ac445f5491474121998c97a1bf0/src/gtk/_controls.py#L5491-L5493
mantidproject/mantid
03deeb89254ec4289edb8771e0188c2090a02f32
scripts/Inelastic/CrystalField/function.py
python
PhysicalProperties.__init__
(self, typeid, *args, **kwargs)
Initialize physical properties environment.

:param typeid: a flag or string (case insensitive) indicating the type of physical properties.
    "Cp" or "Cv" or "HeatCap*" or 1: Data is heat capacity in J/mol/K
    "chi" or "susc*" or 2: Data is magnetic susceptibility
    "mag*" or "M(H)" or 3: Data is magnetisation vs field
    "mom*" or "M(T)" or 4: Data is magnetic moment vs temperature
:param hdir: the direction of the applied magnetic field for susceptibility or M(T) measurements
:param hmag: the magnitude in Tesla of the magnetic field for M(T)
:param temperature: the temperature in Kelvin of measurements of M(H)
:param inverse: a boolean indicating whether susceptibility is chi or 1/chi or M(T) or 1/M(T)
:param unit: the unit the data was measured in. Either: 'bohr', 'SI' or 'cgs'.
:param lambda: (susceptibility only) the value of the exchange constant in inverse susc units
:param chi0: (susceptibility only) the value of the residual (background) susceptibility

typeid is required in all cases, and all other parameters may be specified as keyword
arguments; otherwise the syntax is:

PhysicalProperties('Cp')  # No further parameters required for heat capacity
PhysicalProperties('chi', hdir, inverse, unit, lambda, chi0)
PhysicalProperties('chi', unit)
PhysicalProperties('mag', hdir, temp, unit)
PhysicalProperties('mag', unit)
PhysicalProperties('M(T)', hmag, hdir, inverse, unit)
PhysicalProperties('M(T)', unit)

Defaults are: hdir=[0, 0, 1]; hmag=1; temp=1; inverse=False; unit='cgs'; lambda=chi0=0.
Initialize physical properties environment.
[ "Initialize", "physical", "properties", "environment", "." ]
def __init__(self, typeid, *args, **kwargs):
    """
    Initialize physical properties environment.

    :param typeid: a flag or string (case insensitive) indicating the type of physical properties.
        "Cp" or "Cv" or "HeatCap*" or 1: Data is heat capacity in J/mol/K
        "chi" or "susc*" or 2: Data is magnetic susceptibility
        "mag*" or "M(H)" or 3: Data is magnetisation vs field
        "mom*" or "M(T)" or 4: Data is magnetic moment vs temperature
    :param hdir: the direction of the applied magnetic field for susceptibility or M(T) measurements
    :param hmag: the magnitude in Tesla of the magnetic field for M(T)
    :param temperature: the temperature in Kelvin of measurements of M(H)
    :param inverse: a boolean indicating whether susceptibility is chi or 1/chi or M(T) or 1/M(T)
    :param unit: the unit the data was measured in. Either: 'bohr', 'SI' or 'cgs'.
    :param lambda: (susceptibility only) the value of the exchange constant in inverse susc units
    :param chi0: (susceptibility only) the value of the residual (background) susceptibility

    typeid is required in all cases, and all other parameters may be specified as keyword
    arguments; otherwise the syntax is:

    PhysicalProperties('Cp')  # No further parameters required for heat capacity
    PhysicalProperties('chi', hdir, inverse, unit, lambda, chi0)
    PhysicalProperties('chi', unit)
    PhysicalProperties('mag', hdir, temp, unit)
    PhysicalProperties('mag', unit)
    PhysicalProperties('M(T)', hmag, hdir, inverse, unit)
    PhysicalProperties('M(T)', unit)

    Defaults are: hdir=[0, 0, 1]; hmag=1; temp=1; inverse=False; unit='cgs'; lambda=chi0=0.
    """
    self._physpropUnit = 'cgs'
    self._suscInverseFlag = False
    self._hdir = [0., 0., 1.]
    self._hmag = 1.
    self._physpropTemperature = 1.
    self._lambda = 0.  # Exchange parameter (for susceptibility only)
    self._chi0 = 0.    # Residual/background susceptibility (for susceptibility only)
    self._typeid = self._str2id(typeid) if isinstance(typeid, str) else int(typeid)
    try:
        initialiser = getattr(self, 'init' + str(self._typeid))
    except AttributeError:
        raise ValueError('Physical property type %s not recognised' % (str(typeid)))
    initialiser(*args, **kwargs)
[ "def", "__init__", "(", "self", ",", "typeid", ",", "*", "args", ",", "*", "*", "kwargs", ")", ":", "self", ".", "_physpropUnit", "=", "'cgs'", "self", ".", "_suscInverseFlag", "=", "False", "self", ".", "_hdir", "=", "[", "0.", ",", "0.", ",", "1....
https://github.com/mantidproject/mantid/blob/03deeb89254ec4289edb8771e0188c2090a02f32/scripts/Inelastic/CrystalField/function.py#L506-L548
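The constructor's last step is a ``getattr``-based dispatch: the numeric type id selects an ``init<N>`` method, and an unknown id surfaces as a ``ValueError``. A minimal sketch with a hypothetical class (the method names and fields below are illustrative, not Mantid's):

```python
class Dispatcher:
    # Sketch of the same getattr dispatch used by
    # PhysicalProperties.__init__.
    def __init__(self, typeid, *args, **kwargs):
        self._typeid = int(typeid)
        try:
            initialiser = getattr(self, 'init' + str(self._typeid))
        except AttributeError:
            raise ValueError('type %s not recognised' % typeid)
        initialiser(*args, **kwargs)

    def init1(self, unit='cgs'):
        self.kind, self.unit = 'heat capacity', unit

    def init2(self, unit='cgs'):
        self.kind, self.unit = 'susceptibility', unit

d = Dispatcher(2, unit='SI')
print(d.kind, d.unit)  # susceptibility SI
```

The pattern keeps the constructor closed while new type ids are added simply by defining another ``init<N>`` method.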
IfcOpenShell/IfcOpenShell
2c2954b11a9c9d581bef03240836d4567e69ad0b
src/blenderbim/blenderbim/bim/module/drawing/gizmos.py
python
ExtrusionWidget.update
(self, ctx)
updating object
updating object
[ "updating", "object" ]
def update(self, ctx):
    """updating object"""
    bpy.ops.bim.update_parametric_representation()
    target = ctx.object
    prop = target.data.BIMMeshProperties.ifc_parameters.get("IfcExtrudedAreaSolid/Depth")
    self.handle.target_set_prop("offset", prop, "value")
    self.guides.target_set_prop("depth", prop, "value")
[ "def", "update", "(", "self", ",", "ctx", ")", ":", "bpy", ".", "ops", ".", "bim", ".", "update_parametric_representation", "(", ")", "target", "=", "ctx", ".", "object", "prop", "=", "target", ".", "data", ".", "BIMMeshProperties", ".", "ifc_parameters", ...
https://github.com/IfcOpenShell/IfcOpenShell/blob/2c2954b11a9c9d581bef03240836d4567e69ad0b/src/blenderbim/blenderbim/bim/module/drawing/gizmos.py#L522-L528
bytefish/libfacerec
ce8dac925db5a42516d0ba698e54178d2bdfd19f
doc/source/ocv.py
python
OCVPyObject.handle_signature
(self, sig, signode)
return fullname, name_prefix
Transform a Python signature into RST nodes.
Returns (fully qualified name of the thing, classname if any).

If inside a class, the current class name is handled intelligently:
* it is stripped from the displayed name if present
* it is added to the full name (return value) if not present
Transform a Python signature into RST nodes. Returns (fully qualified name of the thing, classname if any).
[ "Transform", "a", "Python", "signature", "into", "RST", "nodes", ".", "Returns", "(", "fully", "qualified", "name", "of", "the", "thing", "classname", "if", "any", ")", "." ]
def handle_signature(self, sig, signode):
    """Transform a Python signature into RST nodes.
    Returns (fully qualified name of the thing, classname if any).

    If inside a class, the current class name is handled
    intelligently:
    * it is stripped from the displayed name if present
    * it is added to the full name (return value) if not present
    """
    signode += nodes.strong("Python:", "Python:")
    signode += addnodes.desc_name(" ", " ")
    m = py_sig_re.match(sig)
    if m is None:
        raise ValueError
    name_prefix, name, arglist, retann = m.groups()

    # determine module and class name (if applicable), as well as full name
    modname = self.options.get('module', self.env.temp_data.get('py:module'))
    classname = self.env.temp_data.get('py:class')
    if classname:
        add_module = False
        if name_prefix and name_prefix.startswith(classname):
            fullname = name_prefix + name
            # class name is given again in the signature
            name_prefix = name_prefix[len(classname):].lstrip('.')
        elif name_prefix:
            # class name is given in the signature, but different
            # (shouldn't happen)
            fullname = classname + '.' + name_prefix + name
        else:
            # class name is not given in the signature
            fullname = classname + '.' + name
    else:
        add_module = True
        if name_prefix:
            classname = name_prefix.rstrip('.')
            fullname = name_prefix + name
        else:
            classname = ''
            fullname = name

    signode['module'] = modname
    signode['class'] = classname
    signode['fullname'] = fullname

    sig_prefix = self.get_signature_prefix(sig)
    if sig_prefix:
        signode += addnodes.desc_annotation(sig_prefix, sig_prefix)

    if name_prefix:
        signode += addnodes.desc_addname(name_prefix, name_prefix)
    # exceptions are a special case, since they are documented in the
    # 'exceptions' module.
    elif add_module and self.env.config.add_module_names:
        modname = self.options.get('module', self.env.temp_data.get('py:module'))
        if modname and modname != 'exceptions':
            nodetext = modname + '.'
            signode += addnodes.desc_addname(nodetext, nodetext)

    signode += addnodes.desc_name(name, name)
    if not arglist:
        if self.needs_arglist():
            # for callables, add an empty parameter list
            signode += addnodes.desc_parameterlist()
        if retann:
            signode += addnodes.desc_returns(retann, retann)
        return fullname, name_prefix

    _pseudo_parse_arglist(signode, arglist)
    if retann:
        signode += addnodes.desc_returns(retann, retann)
    return fullname, name_prefix
[ "def", "handle_signature", "(", "self", ",", "sig", ",", "signode", ")", ":", "signode", "+=", "nodes", ".", "strong", "(", "\"Python:\"", ",", "\"Python:\"", ")", "signode", "+=", "addnodes", ".", "desc_name", "(", "\" \"", ",", "\" \"", ")", "m", "=", ...
https://github.com/bytefish/libfacerec/blob/ce8dac925db5a42516d0ba698e54178d2bdfd19f/doc/source/ocv.py#L128-L200
catboost/catboost
167f64f237114a4d10b2b4ee42adb4569137debe
contrib/tools/python3/src/Lib/multiprocessing/context.py
python
BaseContext.cpu_count
(self)
Returns the number of CPUs in the system
Returns the number of CPUs in the system
[ "Returns", "the", "number", "of", "CPUs", "in", "the", "system" ]
def cpu_count(self): '''Returns the number of CPUs in the system''' num = os.cpu_count() if num is None: raise NotImplementedError('cannot determine number of cpus') else: return num
[ "def", "cpu_count", "(", "self", ")", ":", "num", "=", "os", ".", "cpu_count", "(", ")", "if", "num", "is", "None", ":", "raise", "NotImplementedError", "(", "'cannot determine number of cpus'", ")", "else", ":", "return", "num" ]
https://github.com/catboost/catboost/blob/167f64f237114a4d10b2b4ee42adb4569137debe/contrib/tools/python3/src/Lib/multiprocessing/context.py#L41-L47
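The `cpu_count` record above wraps `os.cpu_count()`; a self-contained Python 3 sketch of the same logic (the standalone name `cpu_count_or_raise` is mine, not part of the record):

```python
import os

def cpu_count_or_raise():
    # Mirrors multiprocessing's BaseContext.cpu_count: os.cpu_count()
    # returns None when the platform cannot determine a count.
    num = os.cpu_count()
    if num is None:
        raise NotImplementedError('cannot determine number of cpus')
    return num
```

On any ordinary machine this returns a positive integer.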
happynear/caffe-windows
967eedf25009e334b7f6f933bb5e17aaaff5bef6
tools/extra/parse_log.py
python
write_csv
(output_filename, dict_list, delimiter, verbose=False)
Write a CSV file
Write a CSV file
[ "Write", "a", "CSV", "file" ]
def write_csv(output_filename, dict_list, delimiter, verbose=False): """Write a CSV file """ if not dict_list: if verbose: print('Not writing %s; no lines to write' % output_filename) return dialect = csv.excel dialect.delimiter = delimiter with open(output_filename, 'w') as f: dict_writer = csv.DictWriter(f, fieldnames=dict_list[0].keys(), dialect=dialect) dict_writer.writeheader() dict_writer.writerows(dict_list) if verbose: print('Wrote %s' % output_filename)
[ "def", "write_csv", "(", "output_filename", ",", "dict_list", ",", "delimiter", ",", "verbose", "=", "False", ")", ":", "if", "not", "dict_list", ":", "if", "verbose", ":", "print", "(", "'Not writing %s; no lines to write'", "%", "output_filename", ")", "return...
https://github.com/happynear/caffe-windows/blob/967eedf25009e334b7f6f933bb5e17aaaff5bef6/tools/extra/parse_log.py#L157-L175
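The `write_csv` record above mixes Python 2 and Python 3 print syntax; a Python 3 sketch of the same `csv.DictWriter` pattern, written against an arbitrary text stream so it can be exercised in memory (the name `write_dicts_as_csv` is hypothetical):

```python
import csv
import io

def write_dicts_as_csv(stream, dict_list, delimiter=','):
    # No rows: nothing to write, matching the original's early return.
    if not dict_list:
        return
    writer = csv.DictWriter(stream, fieldnames=list(dict_list[0].keys()),
                            delimiter=delimiter)
    writer.writeheader()
    writer.writerows(dict_list)

buf = io.StringIO()
write_dicts_as_csv(buf, [{'iter': 0, 'loss': 2.5}, {'iter': 20, 'loss': 1.9}])
```

With the default comma delimiter, two rows of parsed-log data produce a header line followed by one line per dict.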
devpack/android-python27
d42dd67565e104cf7b0b50eb473f615db3e69901
python-build-with-qt/sip-4.11.2/sipconfig.py
python
PythonModuleMakefile.__init__
(self, configuration, dstdir, srcdir=None, dir=None, makefile="Makefile", installs=None)
Initialise an instance of a parent Makefile. dstdir is the name of the directory where the module's Python code will be installed. srcdir is the name of the directory (relative to the directory in which the Makefile will be created) containing the module's Python code. It defaults to the same directory.
Initialise an instance of a parent Makefile.
[ "Initialise", "an", "instance", "of", "a", "parent", "Makefile", "." ]
def __init__(self, configuration, dstdir, srcdir=None, dir=None, makefile="Makefile", installs=None): """Initialise an instance of a parent Makefile. dstdir is the name of the directory where the module's Python code will be installed. srcdir is the name of the directory (relative to the directory in which the Makefile will be created) containing the module's Python code. It defaults to the same directory. """ Makefile.__init__(self, configuration, dir=dir, makefile=makefile, installs=installs) if not srcdir: srcdir = "." if dir: self._moddir = os.path.join(dir, srcdir) else: self._moddir = srcdir self._srcdir = srcdir self._dstdir = dstdir
[ "def", "__init__", "(", "self", ",", "configuration", ",", "dstdir", ",", "srcdir", "=", "None", ",", "dir", "=", "None", ",", "makefile", "=", "\"Makefile\"", ",", "installs", "=", "None", ")", ":", "Makefile", ".", "__init__", "(", "self", ",", "conf...
https://github.com/devpack/android-python27/blob/d42dd67565e104cf7b0b50eb473f615db3e69901/python-build-with-qt/sip-4.11.2/sipconfig.py#L1383-L1404
apple/turicreate
cce55aa5311300e3ce6af93cb45ba791fd1bdf49
deps/src/libxml2-2.9.1/python/libxml2class.py
python
outputBuffer.write
(self, len, buf)
return ret
Write the content of the array in the output I/O buffer This routine handles the I18N transcoding from internal UTF-8 The buffer is lossless, i.e. will store in case of partial or delayed writes.
Write the content of the array in the output I/O buffer This routine handles the I18N transcoding from internal UTF-8 The buffer is lossless, i.e. will store in case of partial or delayed writes.
[ "Write", "the", "content", "of", "the", "array", "in", "the", "output", "I", "/", "O", "buffer", "This", "routine", "handles", "the", "I18N", "transcoding", "from", "internal", "UTF", "-", "8", "The", "buffer", "is", "lossless", "i", ".", "e", ".", "wil...
def write(self, len, buf): """Write the content of the array in the output I/O buffer This routine handles the I18N transcoding from internal UTF-8 The buffer is lossless, i.e. will store in case of partial or delayed writes. """ ret = libxml2mod.xmlOutputBufferWrite(self._o, len, buf) return ret
[ "def", "write", "(", "self", ",", "len", ",", "buf", ")", ":", "ret", "=", "libxml2mod", ".", "xmlOutputBufferWrite", "(", "self", ".", "_o", ",", "len", ",", "buf", ")", "return", "ret" ]
https://github.com/apple/turicreate/blob/cce55aa5311300e3ce6af93cb45ba791fd1bdf49/deps/src/libxml2-2.9.1/python/libxml2class.py#L5322-L5328
krishauser/Klampt
972cc83ea5befac3f653c1ba20f80155768ad519
Python/python2_version/klampt/src/robotsim.py
python
WorldModel.remove
(self, *args)
return _robotsim.WorldModel_remove(self, *args)
remove(WorldModel self, RobotModel robot) remove(WorldModel self, RigidObjectModel object) remove(WorldModel self, TerrainModel terrain) Removes a robot, rigid object, or terrain from the world. It must be in this world or an exception is raised. IMPORTANT: All other RobotModel, RigidObjectModel, and TerrainModel references will be invalidated.
remove(WorldModel self, RobotModel robot) remove(WorldModel self, RigidObjectModel object) remove(WorldModel self, TerrainModel terrain)
[ "remove", "(", "WorldModel", "self", "RobotModel", "robot", ")", "remove", "(", "WorldModel", "self", "RigidObjectModel", "object", ")", "remove", "(", "WorldModel", "self", "TerrainModel", "terrain", ")" ]
def remove(self, *args): """ remove(WorldModel self, RobotModel robot) remove(WorldModel self, RigidObjectModel object) remove(WorldModel self, TerrainModel terrain) Removes a robot, rigid object, or terrain from the world. It must be in this world or an exception is raised. IMPORTANT: All other RobotModel, RigidObjectModel, and TerrainModel references will be invalidated. """ return _robotsim.WorldModel_remove(self, *args)
[ "def", "remove", "(", "self", ",", "*", "args", ")", ":", "return", "_robotsim", ".", "WorldModel_remove", "(", "self", ",", "*", "args", ")" ]
https://github.com/krishauser/Klampt/blob/972cc83ea5befac3f653c1ba20f80155768ad519/Python/python2_version/klampt/src/robotsim.py#L6027-L6044
ApolloAuto/apollo-platform
86d9dc6743b496ead18d597748ebabd34a513289
ros/third_party/lib_x86_64/python2.7/dist-packages/numpy/distutils/npy_pkg_config.py
python
parse_flags
(line)
return d
Parse a line from a config file containing compile flags. Parameters ---------- line : str A single line containing one or more compile flags. Returns ------- d : dict Dictionary of parsed flags, split into relevant categories. These categories are the keys of `d`: * 'include_dirs' * 'library_dirs' * 'libraries' * 'macros' * 'ignored'
Parse a line from a config file containing compile flags.
[ "Parse", "a", "line", "from", "a", "config", "file", "containing", "compile", "flags", "." ]
def parse_flags(line): """ Parse a line from a config file containing compile flags. Parameters ---------- line : str A single line containing one or more compile flags. Returns ------- d : dict Dictionary of parsed flags, split into relevant categories. These categories are the keys of `d`: * 'include_dirs' * 'library_dirs' * 'libraries' * 'macros' * 'ignored' """ lexer = shlex.shlex(line) lexer.whitespace_split = True d = {'include_dirs': [], 'library_dirs': [], 'libraries': [], 'macros': [], 'ignored': []} def next_token(t): if t.startswith('-I'): if len(t) > 2: d['include_dirs'].append(t[2:]) else: t = lexer.get_token() d['include_dirs'].append(t) elif t.startswith('-L'): if len(t) > 2: d['library_dirs'].append(t[2:]) else: t = lexer.get_token() d['library_dirs'].append(t) elif t.startswith('-l'): d['libraries'].append(t[2:]) elif t.startswith('-D'): d['macros'].append(t[2:]) else: d['ignored'].append(t) return lexer.get_token() t = lexer.get_token() while t: t = next_token(t) return d
[ "def", "parse_flags", "(", "line", ")", ":", "lexer", "=", "shlex", ".", "shlex", "(", "line", ")", "lexer", ".", "whitespace_split", "=", "True", "d", "=", "{", "'include_dirs'", ":", "[", "]", ",", "'library_dirs'", ":", "[", "]", ",", "'libraries'",...
https://github.com/ApolloAuto/apollo-platform/blob/86d9dc6743b496ead18d597748ebabd34a513289/ros/third_party/lib_x86_64/python2.7/dist-packages/numpy/distutils/npy_pkg_config.py#L37-L89
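The `parse_flags` record above walks `shlex` tokens and buckets them by prefix; a condensed standalone sketch of the same categorisation (the function name `parse_compile_flags` is mine):

```python
import shlex

def parse_compile_flags(line):
    # Same bucketing as numpy.distutils' parse_flags: -I/-L may carry
    # their argument inline or as the following token.
    lexer = shlex.shlex(line)
    lexer.whitespace_split = True
    d = {'include_dirs': [], 'library_dirs': [], 'libraries': [],
         'macros': [], 'ignored': []}
    t = lexer.get_token()
    while t:
        if t.startswith('-I'):
            d['include_dirs'].append(t[2:] if len(t) > 2 else lexer.get_token())
        elif t.startswith('-L'):
            d['library_dirs'].append(t[2:] if len(t) > 2 else lexer.get_token())
        elif t.startswith('-l'):
            d['libraries'].append(t[2:])
        elif t.startswith('-D'):
            d['macros'].append(t[2:])
        else:
            d['ignored'].append(t)
        t = lexer.get_token()
    return d

flags = parse_compile_flags('-I/usr/include -L/usr/lib -lm -DNDEBUG -O2')
```

Unrecognised tokens such as `-O2` land in the `ignored` bucket rather than being dropped.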
apple/turicreate
cce55aa5311300e3ce6af93cb45ba791fd1bdf49
src/python/turicreate/data_structures/sgraph.py
python
_edge_list_to_dataframe
(ls, src_column_name, dst_column_name)
return df
Convert a list of edges into dataframe.
Convert a list of edges into dataframe.
[ "Convert", "a", "list", "of", "edges", "into", "dataframe", "." ]
def _edge_list_to_dataframe(ls, src_column_name, dst_column_name): """ Convert a list of edges into dataframe. """ assert ( HAS_PANDAS ), "Cannot use dataframe because Pandas is not available or version is too low." cols = reduce(set.union, (set(e.attr.keys()) for e in ls)) df = pd.DataFrame( { src_column_name: [e.src_vid for e in ls], dst_column_name: [e.dst_vid for e in ls], } ) for c in cols: df[c] = [e.attr.get(c) for e in ls] return df
[ "def", "_edge_list_to_dataframe", "(", "ls", ",", "src_column_name", ",", "dst_column_name", ")", ":", "assert", "(", "HAS_PANDAS", ")", ",", "\"Cannot use dataframe because Pandas is not available or version is too low.\"", "cols", "=", "reduce", "(", "set", ".", "union"...
https://github.com/apple/turicreate/blob/cce55aa5311300e3ce6af93cb45ba791fd1bdf49/src/python/turicreate/data_structures/sgraph.py#L1320-L1336
microsoft/LightGBM
904b2d5158703c4900b68008617951dd2f9ff21b
python-package/lightgbm/basic.py
python
Booster.rollback_one_iter
(self)
return self
Rollback one iteration. Returns ------- self : Booster Booster with rolled back one iteration.
Rollback one iteration.
[ "Rollback", "one", "iteration", "." ]
def rollback_one_iter(self): """Rollback one iteration. Returns ------- self : Booster Booster with rolled back one iteration. """ _safe_call(_LIB.LGBM_BoosterRollbackOneIter( self.handle)) self.__is_predicted_cur_iter = [False for _ in range(self.__num_dataset)] return self
[ "def", "rollback_one_iter", "(", "self", ")", ":", "_safe_call", "(", "_LIB", ".", "LGBM_BoosterRollbackOneIter", "(", "self", ".", "handle", ")", ")", "self", ".", "__is_predicted_cur_iter", "=", "[", "False", "for", "_", "in", "range", "(", "self", ".", ...
https://github.com/microsoft/LightGBM/blob/904b2d5158703c4900b68008617951dd2f9ff21b/python-package/lightgbm/basic.py#L3052-L3063
fluffos/fluffos
bf54d5d4acef4de49dbed7d184849a7b7b354156
src/thirdparty/widecharwidth/generate.py
python
codepoints_to_carray_str
(cps)
return result
Given a list of codepoints, return a C array string representing their inclusive ranges.
Given a list of codepoints, return a C array string representing their inclusive ranges.
[ "Given", "a", "list", "of", "codepoints", "return", "a", "C", "array", "string", "representing", "their", "inclusive", "ranges", "." ]
def codepoints_to_carray_str(cps): """ Given a list of codepoints, return a C array string representing their inclusive ranges. """ global RANGE_CHARS result = "" ranges = merged_codepoints(cps) seps = gen_seps(len(ranges)) for (start, end) in ranges: result += "%s%s, %s%s%s" % (RANGE_CHARS[0], start.hex(), end.hex(), RANGE_CHARS[1], next(seps)) return result
[ "def", "codepoints_to_carray_str", "(", "cps", ")", ":", "global", "RANGE_CHARS", "result", "=", "\"\"", "ranges", "=", "merged_codepoints", "(", "cps", ")", "seps", "=", "gen_seps", "(", "len", "(", "ranges", ")", ")", "for", "(", "start", ",", "end", "...
https://github.com/fluffos/fluffos/blob/bf54d5d4acef4de49dbed7d184849a7b7b354156/src/thirdparty/widecharwidth/generate.py#L367-L375
aws/lumberyard
f85344403c1c2e77ec8c75deb2c116e97b713217
dev/Tools/Python/3.7.10/windows/Lib/site-packages/pip/_vendor/urllib3/packages/six.py
python
python_2_unicode_compatible
(klass)
return klass
A decorator that defines __unicode__ and __str__ methods under Python 2. Under Python 3 it does nothing. To support Python 2 and 3 with a single code base, define a __str__ method returning text and apply this decorator to the class.
A decorator that defines __unicode__ and __str__ methods under Python 2. Under Python 3 it does nothing.
[ "A", "decorator", "that", "defines", "__unicode__", "and", "__str__", "methods", "under", "Python", "2", ".", "Under", "Python", "3", "it", "does", "nothing", "." ]
def python_2_unicode_compatible(klass): """ A decorator that defines __unicode__ and __str__ methods under Python 2. Under Python 3 it does nothing. To support Python 2 and 3 with a single code base, define a __str__ method returning text and apply this decorator to the class. """ if PY2: if "__str__" not in klass.__dict__: raise ValueError( "@python_2_unicode_compatible cannot be applied " "to %s because it doesn't define __str__()." % klass.__name__ ) klass.__unicode__ = klass.__str__ klass.__str__ = lambda self: self.__unicode__().encode("utf-8") return klass
[ "def", "python_2_unicode_compatible", "(", "klass", ")", ":", "if", "PY2", ":", "if", "\"__str__\"", "not", "in", "klass", ".", "__dict__", ":", "raise", "ValueError", "(", "\"@python_2_unicode_compatible cannot be applied \"", "\"to %s because it doesn't define __str__().\...
https://github.com/aws/lumberyard/blob/f85344403c1c2e77ec8c75deb2c116e97b713217/dev/Tools/Python/3.7.10/windows/Lib/site-packages/pip/_vendor/urllib3/packages/six.py#L978-L994
google/syzygy
8164b24ebde9c5649c9a09e88a7fc0b0fcbd1bc5
syzygy/build/lastchange.py
python
WriteIfChanged
(file_name, contents)
Writes the specified contents to the specified file_name iff the contents are different than the current contents.
Writes the specified contents to the specified file_name iff the contents are different than the current contents.
[ "Writes", "the", "specified", "contents", "to", "the", "specified", "file_name", "iff", "the", "contents", "are", "different", "than", "the", "current", "contents", "." ]
def WriteIfChanged(file_name, contents): """ Writes the specified contents to the specified file_name iff the contents are different than the current contents. """ try: old_contents = open(file_name, 'r').read() except EnvironmentError: pass else: if contents == old_contents: _LOGGER.info('Contents unchanged, not writing file: %s', file_name) return os.unlink(file_name) _LOGGER.info('Contents changed, writing file: %s', file_name) open(file_name, 'w').write(contents)
[ "def", "WriteIfChanged", "(", "file_name", ",", "contents", ")", ":", "try", ":", "old_contents", "=", "open", "(", "file_name", ",", "'r'", ")", ".", "read", "(", ")", "except", "EnvironmentError", ":", "pass", "else", ":", "if", "contents", "==", "old_...
https://github.com/google/syzygy/blob/8164b24ebde9c5649c9a09e88a7fc0b0fcbd1bc5/syzygy/build/lastchange.py#L125-L140
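The compare-then-write pattern in `WriteIfChanged` can be sketched as a standalone helper that reports whether a write happened instead of logging; unlike the record above, this version overwrites in place rather than calling `os.unlink` first (all names here are mine):

```python
import os
import tempfile

def write_if_changed(file_name, contents):
    # Read the current contents, if any; a missing file counts as "changed".
    try:
        with open(file_name, 'r') as f:
            if f.read() == contents:
                return False  # unchanged: leave the file's mtime alone
    except OSError:
        pass
    with open(file_name, 'w') as f:
        f.write(contents)
    return True

path = os.path.join(tempfile.mkdtemp(), 'lastchange.txt')
```

Skipping the rewrite when contents match preserves the file's timestamp, which keeps build systems from rebuilding dependents unnecessarily.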
BitMEX/api-connectors
37a3a5b806ad5d0e0fc975ab86d9ed43c3bcd812
auto-generated/python/swagger_client/models/wallet.py
python
Wallet.delta_deposited
(self)
return self._delta_deposited
Gets the delta_deposited of this Wallet. # noqa: E501 :return: The delta_deposited of this Wallet. # noqa: E501 :rtype: float
Gets the delta_deposited of this Wallet. # noqa: E501
[ "Gets", "the", "delta_deposited", "of", "this", "Wallet", ".", "#", "noqa", ":", "E501" ]
def delta_deposited(self): """Gets the delta_deposited of this Wallet. # noqa: E501 :return: The delta_deposited of this Wallet. # noqa: E501 :rtype: float """ return self._delta_deposited
[ "def", "delta_deposited", "(", "self", ")", ":", "return", "self", ".", "_delta_deposited" ]
https://github.com/BitMEX/api-connectors/blob/37a3a5b806ad5d0e0fc975ab86d9ed43c3bcd812/auto-generated/python/swagger_client/models/wallet.py#L341-L348
timi-liuliang/echo
40a5a24d430eee4118314459ab7e03afcb3b8719
thirdparty/protobuf/python/google/protobuf/internal/wire_format.py
python
ZigZagEncode
(value)
return (value << 1) ^ (~0)
ZigZag Transform: Encodes signed integers so that they can be effectively used with varint encoding. See wire_format.h for more details.
ZigZag Transform: Encodes signed integers so that they can be effectively used with varint encoding. See wire_format.h for more details.
[ "ZigZag", "Transform", ":", "Encodes", "signed", "integers", "so", "that", "they", "can", "be", "effectively", "used", "with", "varint", "encoding", ".", "See", "wire_format", ".", "h", "for", "more", "details", "." ]
def ZigZagEncode(value): """ZigZag Transform: Encodes signed integers so that they can be effectively used with varint encoding. See wire_format.h for more details. """ if value >= 0: return value << 1 return (value << 1) ^ (~0)
[ "def", "ZigZagEncode", "(", "value", ")", ":", "if", "value", ">=", "0", ":", "return", "value", "<<", "1", "return", "(", "value", "<<", "1", ")", "^", "(", "~", "0", ")" ]
https://github.com/timi-liuliang/echo/blob/40a5a24d430eee4118314459ab7e03afcb3b8719/thirdparty/protobuf/python/google/protobuf/internal/wire_format.py#L100-L107
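`ZigZagEncode` above maps 0, -1, 1, -2, 2, … to 0, 1, 2, 3, 4, …; a sketch pairing it with the standard inverse transform (the decoder is not part of the record above):

```python
def zigzag_encode(value):
    # Non-negative n -> 2n; negative n -> -2n - 1, so small magnitudes
    # of either sign produce small unsigned results for varint coding.
    if value >= 0:
        return value << 1
    return (value << 1) ^ (~0)

def zigzag_decode(encoded):
    # Standard inverse: shift out the sign bit and conditionally flip.
    return (encoded >> 1) ^ -(encoded & 1)
```

Python's arbitrary-precision integers behave like the two's-complement arithmetic wire_format.h assumes, so the round trip holds for any int.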
makefile/frcnn
8d9b9ebf8be8315ba2f374d460121b0adf1df29c
scripts/cpp_lint.py
python
CheckForNewlineAtEOF
(filename, lines, error)
Logs an error if there is no newline char at the end of the file. Args: filename: The name of the current file. lines: An array of strings, each representing a line of the file. error: The function to call with any errors found.
Logs an error if there is no newline char at the end of the file.
[ "Logs", "an", "error", "if", "there", "is", "no", "newline", "char", "at", "the", "end", "of", "the", "file", "." ]
def CheckForNewlineAtEOF(filename, lines, error): """Logs an error if there is no newline char at the end of the file. Args: filename: The name of the current file. lines: An array of strings, each representing a line of the file. error: The function to call with any errors found. """ # The array lines() was created by adding two newlines to the # original file (go figure), then splitting on \n. # To verify that the file ends in \n, we just have to make sure the # last-but-two element of lines() exists and is empty. if len(lines) < 3 or lines[-2]: error(filename, len(lines) - 2, 'whitespace/ending_newline', 5, 'Could not find a newline character at the end of the file.')
[ "def", "CheckForNewlineAtEOF", "(", "filename", ",", "lines", ",", "error", ")", ":", "# The array lines() was created by adding two newlines to the", "# original file (go figure), then splitting on \\n.", "# To verify that the file ends in \\n, we just have to make sure the", "# last-but-...
https://github.com/makefile/frcnn/blob/8d9b9ebf8be8315ba2f374d460121b0adf1df29c/scripts/cpp_lint.py#L1508-L1523
wxWidgets/wxPython-Classic
19571e1ae65f1ac445f5491474121998c97a1bf0
src/osx_carbon/_gdi.py
python
RendererNative.DrawDropArrow
(*args, **kwargs)
return _gdi_.RendererNative_DrawDropArrow(*args, **kwargs)
DrawDropArrow(self, Window win, DC dc, Rect rect, int flags=0) Draw a drop down arrow that is suitable for use outside a combo box. Arrow will have a transparent background. ``rect`` is not entirely filled by the arrow. Instead, you should use bounding rectangle of a drop down button which arrow matches the size you need. ``flags`` may have the ``wx.CONTROL_PRESSED`` or ``wx.CONTROL_CURRENT`` bit set.
DrawDropArrow(self, Window win, DC dc, Rect rect, int flags=0)
[ "DrawDropArrow", "(", "self", "Window", "win", "DC", "dc", "Rect", "rect", "int", "flags", "=", "0", ")" ]
def DrawDropArrow(*args, **kwargs): """ DrawDropArrow(self, Window win, DC dc, Rect rect, int flags=0) Draw a drop down arrow that is suitable for use outside a combo box. Arrow will have a transparent background. ``rect`` is not entirely filled by the arrow. Instead, you should use bounding rectangle of a drop down button which arrow matches the size you need. ``flags`` may have the ``wx.CONTROL_PRESSED`` or ``wx.CONTROL_CURRENT`` bit set. """ return _gdi_.RendererNative_DrawDropArrow(*args, **kwargs)
[ "def", "DrawDropArrow", "(", "*", "args", ",", "*", "*", "kwargs", ")", ":", "return", "_gdi_", ".", "RendererNative_DrawDropArrow", "(", "*", "args", ",", "*", "*", "kwargs", ")" ]
https://github.com/wxWidgets/wxPython-Classic/blob/19571e1ae65f1ac445f5491474121998c97a1bf0/src/osx_carbon/_gdi.py#L7341-L7353
catboost/catboost
167f64f237114a4d10b2b4ee42adb4569137debe
contrib/tools/cython/Cython/Debugger/libcython.py
python
dont_suppress_errors
(function)
return wrapper
*sigh*, readline
*sigh*, readline
[ "*", "sigh", "*", "readline" ]
def dont_suppress_errors(function): "*sigh*, readline" @functools.wraps(function) def wrapper(*args, **kwargs): try: return function(*args, **kwargs) except Exception: traceback.print_exc() raise return wrapper
[ "def", "dont_suppress_errors", "(", "function", ")", ":", "@", "functools", ".", "wraps", "(", "function", ")", "def", "wrapper", "(", "*", "args", ",", "*", "*", "kwargs", ")", ":", "try", ":", "return", "function", "(", "*", "args", ",", "*", "*", ...
https://github.com/catboost/catboost/blob/167f64f237114a4d10b2b4ee42adb4569137debe/contrib/tools/cython/Cython/Debugger/libcython.py#L72-L82
wlanjie/AndroidFFmpeg
7baf9122f4b8e1c74e7baf4be5c422c7a5ba5aaf
tools/fdk-aac-build/armeabi-v7a/toolchain/lib/python2.7/email/encoders.py
python
encode_7or8bit
(msg)
Set the Content-Transfer-Encoding header to 7bit or 8bit.
Set the Content-Transfer-Encoding header to 7bit or 8bit.
[ "Set", "the", "Content", "-", "Transfer", "-", "Encoding", "header", "to", "7bit", "or", "8bit", "." ]
def encode_7or8bit(msg): """Set the Content-Transfer-Encoding header to 7bit or 8bit.""" orig = msg.get_payload() if orig is None: # There's no payload. For backwards compatibility we use 7bit msg['Content-Transfer-Encoding'] = '7bit' return # We play a trick to make this go fast. If encoding to ASCII succeeds, we # know the data must be 7bit, otherwise treat it as 8bit. try: orig.encode('ascii') except UnicodeError: msg['Content-Transfer-Encoding'] = '8bit' else: msg['Content-Transfer-Encoding'] = '7bit'
[ "def", "encode_7or8bit", "(", "msg", ")", ":", "orig", "=", "msg", ".", "get_payload", "(", ")", "if", "orig", "is", "None", ":", "# There's no payload. For backwards compatibility we use 7bit", "msg", "[", "'Content-Transfer-Encoding'", "]", "=", "'7bit'", "return...
https://github.com/wlanjie/AndroidFFmpeg/blob/7baf9122f4b8e1c74e7baf4be5c422c7a5ba5aaf/tools/fdk-aac-build/armeabi-v7a/toolchain/lib/python2.7/email/encoders.py#L63-L77
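The decision in `encode_7or8bit` — try an ASCII encode and fall back to 8bit — can be isolated as a small helper; this sketch assumes a `str` payload (the name `transfer_encoding_for` is hypothetical):

```python
def transfer_encoding_for(payload):
    # None payload: default to 7bit for backwards compatibility,
    # as in email.encoders.encode_7or8bit.
    if payload is None:
        return '7bit'
    try:
        payload.encode('ascii')  # succeeds only for pure-ASCII text
    except UnicodeError:
        return '8bit'
    return '7bit'
```

`UnicodeEncodeError` is a subclass of `UnicodeError`, so catching the latter covers the failed ASCII encode.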
msitt/blpapi-python
bebcf43668c9e5f5467b1f685f9baebbfc45bc87
src/blpapi/element.py
python
Element.getElement
(self, nameOrIndex)
return Element(res[1], self._getDataHolder())
Args: nameOrIndex (Name or str or int): Sub-element identifier Returns: Element: Sub-element identified by ``nameOrIndex`` Raises: Exception: If ``nameOrIndex`` is a string or a :class:`Name` and ``hasElement(nameOrIndex) != True``, or if ``nameOrIndex`` is an integer and ``nameOrIndex >= numElements()``. Also if this :class:`Element` is neither a sequence nor a choice.
Args: nameOrIndex (Name or str or int): Sub-element identifier
[ "Args", ":", "nameOrIndex", "(", "Name", "or", "str", "or", "int", ")", ":", "Sub", "-", "element", "identifier" ]
def getElement(self, nameOrIndex): """ Args: nameOrIndex (Name or str or int): Sub-element identifier Returns: Element: Sub-element identified by ``nameOrIndex`` Raises: Exception: If ``nameOrIndex`` is a string or a :class:`Name` and ``hasElement(nameOrIndex) != True``, or if ``nameOrIndex`` is an integer and ``nameOrIndex >= numElements()``. Also if this :class:`Element` is neither a sequence nor a choice. """ if not isinstance(nameOrIndex, int): self.__assertIsValid() name = getNamePair(nameOrIndex) res = internals.blpapi_Element_getElement(self.__handle, name[0], name[1]) _ExceptionUtil.raiseOnError(res[0]) return Element(res[1], self._getDataHolder()) self.__assertIsValid() res = internals.blpapi_Element_getElementAt(self.__handle, nameOrIndex) _ExceptionUtil.raiseOnError(res[0]) return Element(res[1], self._getDataHolder())
[ "def", "getElement", "(", "self", ",", "nameOrIndex", ")", ":", "if", "not", "isinstance", "(", "nameOrIndex", ",", "int", ")", ":", "self", ".", "__assertIsValid", "(", ")", "name", "=", "getNamePair", "(", "nameOrIndex", ")", "res", "=", "internals", "...
https://github.com/msitt/blpapi-python/blob/bebcf43668c9e5f5467b1f685f9baebbfc45bc87/src/blpapi/element.py#L569-L595
catboost/catboost
167f64f237114a4d10b2b4ee42adb4569137debe
contrib/tools/python/src/Lib/compiler/pycodegen.py
python
CodeGenerator.getCode
(self)
return self.graph.getCode()
Return a code object
Return a code object
[ "Return", "a", "code", "object" ]
def getCode(self): """Return a code object""" return self.graph.getCode()
[ "def", "getCode", "(", "self", ")", ":", "return", "self", ".", "graph", ".", "getCode", "(", ")" ]
https://github.com/catboost/catboost/blob/167f64f237114a4d10b2b4ee42adb4569137debe/contrib/tools/python/src/Lib/compiler/pycodegen.py#L246-L248
rapidsai/cudf
d5b2448fc69f17509304d594f029d0df56984962
python/cudf/cudf/core/abc.py
python
Serializable.device_deserialize
(cls, header, frames)
return obj
Perform device-side deserialization tasks. The primary purpose of this method is the creation of device memory buffers from host buffers where necessary. Parameters ---------- header : dict The metadata required to reconstruct the object. frames : list The Buffers or memoryviews that the object should contain. Returns ------- Serializable A new instance of `cls` (a subclass of `Serializable`) equivalent to the instance that was serialized to produce the header and frames. :meta private:
Perform device-side deserialization tasks.
[ "Perform", "device", "-", "side", "deserialization", "tasks", "." ]
def device_deserialize(cls, header, frames): """Perform device-side deserialization tasks. The primary purpose of this method is the creation of device memory buffers from host buffers where necessary. Parameters ---------- header : dict The metadata required to reconstruct the object. frames : list The Buffers or memoryviews that the object should contain. Returns ------- Serializable A new instance of `cls` (a subclass of `Serializable`) equivalent to the instance that was serialized to produce the header and frames. :meta private: """ typ = pickle.loads(header["type-serialized"]) frames = [ cudf.core.buffer.Buffer(f) if c else memoryview(f) for c, f in zip(header["is-cuda"], frames) ] assert all( (type(f._owner) is rmm.DeviceBuffer) if c else (type(f) is memoryview) for c, f in zip(header["is-cuda"], frames) ) obj = typ.deserialize(header, frames) return obj
[ "def", "device_deserialize", "(", "cls", ",", "header", ",", "frames", ")", ":", "typ", "=", "pickle", ".", "loads", "(", "header", "[", "\"type-serialized\"", "]", ")", "frames", "=", "[", "cudf", ".", "core", ".", "buffer", ".", "Buffer", "(", "f", ...
https://github.com/rapidsai/cudf/blob/d5b2448fc69f17509304d594f029d0df56984962/python/cudf/cudf/core/abc.py#L109-L144
google-ar/WebARonTango
e86965d2cbc652156b480e0fcf77c716745578cd
chromium/src/gpu/command_buffer/build_gles2_cmd_buffer.py
python
GLGenerator.WriteDocs
(self, filename)
Writes the command buffer doc version of the commands
Writes the command buffer doc version of the commands
[ "Writes", "the", "command", "buffer", "doc", "version", "of", "the", "commands" ]
def WriteDocs(self, filename): """Writes the command buffer doc version of the commands""" with CHeaderWriter(filename) as f: for func in self.functions: func.WriteDocs(f) f.write("\n") self.generated_cpp_filenames.append(filename)
[ "def", "WriteDocs", "(", "self", ",", "filename", ")", ":", "with", "CHeaderWriter", "(", "filename", ")", "as", "f", ":", "for", "func", "in", "self", ".", "functions", ":", "func", ".", "WriteDocs", "(", "f", ")", "f", ".", "write", "(", "\"\\n\"",...
https://github.com/google-ar/WebARonTango/blob/e86965d2cbc652156b480e0fcf77c716745578cd/chromium/src/gpu/command_buffer/build_gles2_cmd_buffer.py#L10027-L10033
wxWidgets/wxPython-Classic
19571e1ae65f1ac445f5491474121998c97a1bf0
src/osx_carbon/grid.py
python
GridRangeSelectEvent.GetBottomRow
(*args, **kwargs)
return _grid.GridRangeSelectEvent_GetBottomRow(*args, **kwargs)
GetBottomRow(self) -> int
GetBottomRow(self) -> int
[ "GetBottomRow", "(", "self", ")", "-", ">", "int" ]
def GetBottomRow(*args, **kwargs): """GetBottomRow(self) -> int""" return _grid.GridRangeSelectEvent_GetBottomRow(*args, **kwargs)
[ "def", "GetBottomRow", "(", "*", "args", ",", "*", "*", "kwargs", ")", ":", "return", "_grid", ".", "GridRangeSelectEvent_GetBottomRow", "(", "*", "args", ",", "*", "*", "kwargs", ")" ]
https://github.com/wxWidgets/wxPython-Classic/blob/19571e1ae65f1ac445f5491474121998c97a1bf0/src/osx_carbon/grid.py#L2413-L2415
apache/incubator-mxnet
f03fb23f1d103fec9541b5ae59ee06b1734a51d9
python/mxnet/symbol/symbol.py
python
Symbol.as_np_ndarray
(self)
return _Symbol(hdl)
Convert mx.sym.Symbol to mx.sym.np._Symbol.
Convert mx.sym.Symbol to mx.sym.np._Symbol.
[ "Convert", "mx", ".", "sym", ".", "Symbol", "to", "mx", ".", "sym", ".", "np", ".", "_Symbol", "." ]
def as_np_ndarray(self): """Convert mx.sym.Symbol to mx.sym.np._Symbol.""" from .numpy import _Symbol hdl = SymbolHandle() check_call(_LIB.MXShallowCopySymbol(self.handle, ctypes.byref(hdl))) return _Symbol(hdl)
[ "def", "as_np_ndarray", "(", "self", ")", ":", "from", ".", "numpy", "import", "_Symbol", "hdl", "=", "SymbolHandle", "(", ")", "check_call", "(", "_LIB", ".", "MXShallowCopySymbol", "(", "self", ".", "handle", ",", "ctypes", ".", "byref", "(", "hdl", ")...
https://github.com/apache/incubator-mxnet/blob/f03fb23f1d103fec9541b5ae59ee06b1734a51d9/python/mxnet/symbol/symbol.py#L63-L68
aws/lumberyard
f85344403c1c2e77ec8c75deb2c116e97b713217
dev/Code/Tools/AzCodeGenerator/Scripts/az_code_gen/clang_cpp.py
python
format_enums
(json_object)
Collects all enums and enum annotations found in json_object['objects'] and coalesces/moves them into json_object['enums'] @param json_object The raw JSON output from clang's AST dump
Collects all enums and enum annotations found in json_object['objects'] and coalesces/moves them into json_object['enums']
[ "Collects", "all", "enums", "and", "enum", "annotations", "found", "in", "json_object", "[", "objects", "]", "and", "coalesces", "/", "moves", "them", "into", "json_object", "[", "enums", "]" ]
def format_enums(json_object): """ Collects all enums and enum annotations found in json_object['objects'] and coalesces/moves them into json_object['enums'] @param json_object The raw JSON output from clang's AST dump """ enums = {} annotations = {} decls_to_remove = set() for object in json_object.get('objects', []): # objects with type 'enum SomeEnum' are enum annotations cpp_type = object['type'] if 'canonical_type' in object: cpp_type = object['canonical_type'] if cpp_type.startswith('enum '): enum = cpp_type.replace('enum ', '') annotations[enum] = object['annotations'] decls_to_remove.add(object['qualified_name']) # objects with type enum are the enums themselves elif cpp_type == 'enum': enum = object['qualified_name'] enums[enum] = object decls_to_remove.add(object['qualified_name']) # move enums into the top level enums entry and coalesce annotations into them json_object['enums'] = [] for name, enum in enums.items(): if name in annotations: # only include enums with annotations enum_annotations = annotations[name] enum.setdefault('annotations', {}).update(enum_annotations) json_object['enums'].append(enum) # Strip out top level decls that have been coalesced if len(decls_to_remove) > 0: json_object['objects'] = [obj for obj in json_object['objects'] if obj['qualified_name'] not in decls_to_remove]
[ "def", "format_enums", "(", "json_object", ")", ":", "enums", "=", "{", "}", "annotations", "=", "{", "}", "decls_to_remove", "=", "set", "(", ")", "for", "object", "in", "json_object", ".", "get", "(", "'objects'", ",", "[", "]", ")", ":", "# objects ...
https://github.com/aws/lumberyard/blob/f85344403c1c2e77ec8c75deb2c116e97b713217/dev/Code/Tools/AzCodeGenerator/Scripts/az_code_gen/clang_cpp.py#L114-L147
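`format_enums` above splits raw clang decls into enums and enum annotations, then coalesces the annotations into the matching enum entries. A minimal sketch of the same coalescing logic, on invented input data (the `decls` sample and `coalesce_enums` helper are hypothetical, not part of the Lumberyard script):

```python
def coalesce_enums(objects):
    """Split raw decl objects into annotated enums, mimicking format_enums.

    Objects typed 'enum' are enum declarations; objects typed 'enum X'
    carry annotations for enum X. Only annotated enums are returned.
    """
    enums, annotations = {}, {}
    for obj in objects:
        cpp_type = obj.get("canonical_type", obj["type"])
        if cpp_type.startswith("enum "):        # annotation carrier
            annotations[cpp_type[len("enum "):]] = obj["annotations"]
        elif cpp_type == "enum":                # the enum itself
            enums[obj["qualified_name"]] = obj
    result = []
    for name, enum in enums.items():
        if name in annotations:                 # only annotated enums survive
            enum.setdefault("annotations", {}).update(annotations[name])
            result.append(enum)
    return result

decls = [
    {"type": "enum", "qualified_name": "Color"},
    {"type": "enum Color", "qualified_name": "Color_attrs",
     "annotations": {"uuid": "1234"}},
    {"type": "enum", "qualified_name": "Shape"},   # no annotations -> dropped
]
coalesced = coalesce_enums(decls)
```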
Xilinx/Vitis-AI
fc74d404563d9951b57245443c73bef389f3657f
tools/Vitis-AI-Quantizer/vai_q_tensorflow1.x/tensorflow/contrib/seq2seq/python/ops/attention_wrapper.py
python
LuongMonotonicAttentionV2.__init__
(self, units, memory, memory_sequence_length=None, scale=False, sigmoid_noise=0., sigmoid_noise_seed=None, score_bias_init=0., mode="parallel", dtype=None, name="LuongMonotonicAttention", **kwargs)
Construct the Attention mechanism. Args: units: The depth of the query mechanism. memory: The memory to query; usually the output of an RNN encoder. This tensor should be shaped `[batch_size, max_time, ...]`. memory_sequence_length: (optional): Sequence lengths for the batch entries in memory. If provided, the memory tensor rows are masked with zeros for values past the respective sequence lengths. scale: Python boolean. Whether to scale the energy term. sigmoid_noise: Standard deviation of pre-sigmoid noise. See the docstring for `_monotonic_probability_fn` for more information. sigmoid_noise_seed: (optional) Random seed for pre-sigmoid noise. score_bias_init: Initial value for score bias scalar. It's recommended to initialize this to a negative value when the length of the memory is large. mode: How to compute the attention distribution. Must be one of 'recursive', 'parallel', or 'hard'. See the docstring for `tf.contrib.seq2seq.monotonic_attention` for more information. dtype: The data type for the query and memory layers of the attention mechanism. name: Name to use when creating ops. **kwargs: Dictionary that contains other common arguments for layer creation.
Construct the Attention mechanism.
[ "Construct", "the", "Attention", "mechanism", "." ]
def __init__(self, units, memory, memory_sequence_length=None, scale=False, sigmoid_noise=0., sigmoid_noise_seed=None, score_bias_init=0., mode="parallel", dtype=None, name="LuongMonotonicAttention", **kwargs): """Construct the Attention mechanism. Args: units: The depth of the query mechanism. memory: The memory to query; usually the output of an RNN encoder. This tensor should be shaped `[batch_size, max_time, ...]`. memory_sequence_length: (optional): Sequence lengths for the batch entries in memory. If provided, the memory tensor rows are masked with zeros for values past the respective sequence lengths. scale: Python boolean. Whether to scale the energy term. sigmoid_noise: Standard deviation of pre-sigmoid noise. See the docstring for `_monotonic_probability_fn` for more information. sigmoid_noise_seed: (optional) Random seed for pre-sigmoid noise. score_bias_init: Initial value for score bias scalar. It's recommended to initialize this to a negative value when the length of the memory is large. mode: How to compute the attention distribution. Must be one of 'recursive', 'parallel', or 'hard'. See the docstring for `tf.contrib.seq2seq.monotonic_attention` for more information. dtype: The data type for the query and memory layers of the attention mechanism. name: Name to use when creating ops. **kwargs: Dictionary that contains other common arguments for layer creation. 
""" # Set up the monotonic probability fn with supplied parameters if dtype is None: dtype = dtypes.float32 wrapped_probability_fn = functools.partial( _monotonic_probability_fn, sigmoid_noise=sigmoid_noise, mode=mode, seed=sigmoid_noise_seed) memory_layer = kwargs.pop("memory_layer", None) if not memory_layer: memory_layer = layers.Dense( units, name="memory_layer", use_bias=False, dtype=dtype) self.units = units self.scale = scale self.sigmoid_noise = sigmoid_noise self.sigmoid_noise_seed = sigmoid_noise_seed self.score_bias_init = score_bias_init self.mode = mode self.attention_g = None self.attention_score_bias = None super(LuongMonotonicAttentionV2, self).__init__( memory=memory, memory_sequence_length=memory_sequence_length, query_layer=None, memory_layer=memory_layer, probability_fn=wrapped_probability_fn, name=name, dtype=dtype, **kwargs)
[ "def", "__init__", "(", "self", ",", "units", ",", "memory", ",", "memory_sequence_length", "=", "None", ",", "scale", "=", "False", ",", "sigmoid_noise", "=", "0.", ",", "sigmoid_noise_seed", "=", "None", ",", "score_bias_init", "=", "0.", ",", "mode", "=...
https://github.com/Xilinx/Vitis-AI/blob/fc74d404563d9951b57245443c73bef389f3657f/tools/Vitis-AI-Quantizer/vai_q_tensorflow1.x/tensorflow/contrib/seq2seq/python/ops/attention_wrapper.py#L1814-L1879
mindspore-ai/mindspore
fb8fd3338605bb34fa5cea054e535a8b1d753fab
mindspore/python/mindspore/ops/_grad/grad_math_ops.py
python
get_bprop_tan
(self)
return bprop
Grad definition for `Tan` operation.
Grad definition for `Tan` operation.
[ "Grad", "definition", "for", "Tan", "operation", "." ]
def get_bprop_tan(self): """Grad definition for `Tan` operation.""" reciprocal = P.Reciprocal() square = P.Square() cos = P.Cos() def bprop(x, out, dout): cosx = cos(x) secx2 = square(reciprocal(cosx)) dx = secx2 * dout return (dx,) return bprop
[ "def", "get_bprop_tan", "(", "self", ")", ":", "reciprocal", "=", "P", ".", "Reciprocal", "(", ")", "square", "=", "P", ".", "Square", "(", ")", "cos", "=", "P", ".", "Cos", "(", ")", "def", "bprop", "(", "x", ",", "out", ",", "dout", ")", ":",...
https://github.com/mindspore-ai/mindspore/blob/fb8fd3338605bb34fa5cea054e535a8b1d753fab/mindspore/python/mindspore/ops/_grad/grad_math_ops.py#L1356-L1368
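The `bprop` above encodes the identity d/dx tan(x) = sec²(x) = 1/cos²(x). A pure-Python sanity check of that identity against a central finite difference (helper names invented):

```python
import math

def tan_grad(x):
    # analytic gradient used by the bprop: sec^2(x) = (1/cos(x))**2
    return (1.0 / math.cos(x)) ** 2

def finite_diff(f, x, h=1e-6):
    # central difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

x = 0.7
analytic = tan_grad(x)
numeric = finite_diff(math.tan, x)
```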
xiaolonw/caffe-video_triplet
c39ea1ad6e937ccf7deba4510b7e555165abf05f
scripts/cpp_lint.py
python
_IncludeState.CheckNextIncludeOrder
(self, header_type)
return ''
Returns a non-empty error message if the next header is out of order. This function also updates the internal state to be ready to check the next include. Args: header_type: One of the _XXX_HEADER constants defined above. Returns: The empty string if the header is in the right order, or an error message describing what's wrong.
Returns a non-empty error message if the next header is out of order.
[ "Returns", "a", "non", "-", "empty", "error", "message", "if", "the", "next", "header", "is", "out", "of", "order", "." ]
def CheckNextIncludeOrder(self, header_type): """Returns a non-empty error message if the next header is out of order. This function also updates the internal state to be ready to check the next include. Args: header_type: One of the _XXX_HEADER constants defined above. Returns: The empty string if the header is in the right order, or an error message describing what's wrong. """ error_message = ('Found %s after %s' % (self._TYPE_NAMES[header_type], self._SECTION_NAMES[self._section])) last_section = self._section if header_type == _C_SYS_HEADER: if self._section <= self._C_SECTION: self._section = self._C_SECTION else: self._last_header = '' return error_message elif header_type == _CPP_SYS_HEADER: if self._section <= self._CPP_SECTION: self._section = self._CPP_SECTION else: self._last_header = '' return error_message elif header_type == _LIKELY_MY_HEADER: if self._section <= self._MY_H_SECTION: self._section = self._MY_H_SECTION else: self._section = self._OTHER_H_SECTION elif header_type == _POSSIBLE_MY_HEADER: if self._section <= self._MY_H_SECTION: self._section = self._MY_H_SECTION else: # This will always be the fallback because we're not sure # enough that the header is associated with this file. self._section = self._OTHER_H_SECTION else: assert header_type == _OTHER_HEADER self._section = self._OTHER_H_SECTION if last_section != self._section: self._last_header = '' return ''
[ "def", "CheckNextIncludeOrder", "(", "self", ",", "header_type", ")", ":", "error_message", "=", "(", "'Found %s after %s'", "%", "(", "self", ".", "_TYPE_NAMES", "[", "header_type", "]", ",", "self", ".", "_SECTION_NAMES", "[", "self", ".", "_section", "]", ...
https://github.com/xiaolonw/caffe-video_triplet/blob/c39ea1ad6e937ccf7deba4510b7e555165abf05f/scripts/cpp_lint.py#L633-L684
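`CheckNextIncludeOrder` is a monotone state machine: each header category may only move the section index forward, and a backward move produces an error string. A stripped-down sketch of that idea (section table invented, far simpler than the lint script's real sections):

```python
# Simplified include-order checker: sections must be visited in
# non-decreasing order, mirroring _IncludeState.CheckNextIncludeOrder.
SECTIONS = {"c_system": 1, "cpp_system": 2, "my_header": 3, "other": 4}

class IncludeOrder:
    def __init__(self):
        self.section = 0  # nothing seen yet

    def check(self, header_type):
        """Return '' if header_type is in order, else an error message."""
        target = SECTIONS[header_type]
        if target < self.section:
            return "Found %s after a later section" % header_type
        self.section = target
        return ""

state = IncludeOrder()
ok1 = state.check("c_system")      # fine: first include
ok2 = state.check("cpp_system")    # fine: moves forward
err = state.check("c_system")      # out of order: C header after C++ headers
```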
wxWidgets/wxPython-Classic
19571e1ae65f1ac445f5491474121998c97a1bf0
wx/lib/masked/numctrl.py
python
NumCtrl.SetParameters
(self, **kwargs)
This function is used to initialize and reconfigure the control. See TimeCtrl module overview for available parameters.
This function is used to initialize and reconfigure the control. See TimeCtrl module overview for available parameters.
[ "This", "function", "is", "used", "to", "initialize", "and", "reconfigure", "the", "control", ".", "See", "TimeCtrl", "module", "overview", "for", "available", "parameters", "." ]
def SetParameters(self, **kwargs): """ This function is used to initialize and reconfigure the control. See TimeCtrl module overview for available parameters. """ ## dbg('NumCtrl::SetParameters', indent=1) maskededit_kwargs = {} reset_fraction_width = False if( (kwargs.has_key('integerWidth') and kwargs['integerWidth'] != self._integerWidth) or (kwargs.has_key('fractionWidth') and kwargs['fractionWidth'] != self._fractionWidth) or (kwargs.has_key('groupDigits') and kwargs['groupDigits'] != self._groupDigits) or (kwargs.has_key('autoSize') and kwargs['autoSize'] != self._autoSize) ): fields = {} if kwargs.has_key('fractionWidth'): if type(kwargs['fractionWidth']) != types.IntType: raise AttributeError('invalid fractionWidth (%s) specified; expected integer' % repr(kwargs['fractionWidth'])) elif kwargs['fractionWidth'] < 0: raise AttributeError('invalid fractionWidth (%s) specified; must be >= 0' % repr(kwargs['fractionWidth'])) else: if self._fractionWidth != kwargs['fractionWidth']: self._fractionWidth = kwargs['fractionWidth'] if self._fractionWidth: fracmask = '.' 
+ '#{%d}' % self._fractionWidth fields[1] = Field(defaultValue='0'*self._fractionWidth) emptyInvalid = False else: emptyInvalid = True fracmask = '' ## dbg('fracmask:', fracmask) if kwargs.has_key('integerWidth'): if type(kwargs['integerWidth']) != types.IntType: ## dbg(indent=0) raise AttributeError('invalid integerWidth (%s) specified; expected integer' % repr(kwargs['integerWidth'])) elif kwargs['integerWidth'] < 0: ## dbg(indent=0) raise AttributeError('invalid integerWidth (%s) specified; must be > 0' % repr(kwargs['integerWidth'])) else: self._integerWidth = kwargs['integerWidth'] if kwargs.has_key('groupDigits'): self._groupDigits = kwargs['groupDigits'] if self._groupDigits: self._groupSpace = (self._integerWidth - 1) / 3 else: self._groupSpace = 0 intmask = '#{%d}' % (self._integerWidth + self._groupSpace) ## dbg('intmask:', intmask) fields[0] = Field(formatcodes='r<>', emptyInvalid=emptyInvalid) maskededit_kwargs['fields'] = fields # don't bother to reprocess these arguments: if kwargs.has_key('integerWidth'): del kwargs['integerWidth'] if kwargs.has_key('fractionWidth'): del kwargs['fractionWidth'] maskededit_kwargs['mask'] = intmask+fracmask if kwargs.has_key('groupChar') or kwargs.has_key('decimalChar'): old_groupchar = self._groupChar # save so we can reformat properly old_decimalchar = self._decimalChar ## dbg("old_groupchar: '%s'" % old_groupchar) ## dbg("old_decimalchar: '%s'" % old_decimalchar) groupchar = old_groupchar decimalchar = old_decimalchar old_numvalue = self._GetNumValue(self._GetValue()) if kwargs.has_key('groupChar'): maskededit_kwargs['groupChar'] = kwargs['groupChar'] groupchar = kwargs['groupChar'] if kwargs.has_key('decimalChar'): maskededit_kwargs['decimalChar'] = kwargs['decimalChar'] decimalchar = kwargs['decimalChar'] # Add sanity check to make sure these are distinct, and if not, # raise attribute error if groupchar == decimalchar: raise AttributeError('groupChar and decimalChar must be distinct') # for all other parameters, 
assign keyword args as appropriate: for key, param_value in kwargs.items(): key = key.replace('Color', 'Colour') if key not in NumCtrl.valid_ctrl_params.keys(): raise AttributeError('invalid keyword argument "%s"' % key) elif key not in MaskedEditMixin.valid_ctrl_params.keys(): setattr(self, '_' + key, param_value) elif key in ('mask', 'autoformat'): # disallow explicit setting of mask raise AttributeError('invalid keyword argument "%s"' % key) else: maskededit_kwargs[key] = param_value ## dbg('kwargs:', kwargs) # reprocess existing format codes to ensure proper resulting format: formatcodes = self.GetCtrlParameter('formatcodes') if kwargs.has_key('allowNegative'): if kwargs['allowNegative'] and '-' not in formatcodes: formatcodes += '-' maskededit_kwargs['formatcodes'] = formatcodes elif not kwargs['allowNegative'] and '-' in formatcodes: formatcodes = formatcodes.replace('-','') maskededit_kwargs['formatcodes'] = formatcodes if kwargs.has_key('groupDigits'): if kwargs['groupDigits'] and ',' not in formatcodes: formatcodes += ',' maskededit_kwargs['formatcodes'] = formatcodes elif not kwargs['groupDigits'] and ',' in formatcodes: formatcodes = formatcodes.replace(',','') maskededit_kwargs['formatcodes'] = formatcodes if kwargs.has_key('selectOnEntry'): self._selectOnEntry = kwargs['selectOnEntry'] ## dbg("kwargs['selectOnEntry']?", kwargs['selectOnEntry'], "'S' in formatcodes?", 'S' in formatcodes) if kwargs['selectOnEntry'] and 'S' not in formatcodes: formatcodes += 'S' maskededit_kwargs['formatcodes'] = formatcodes elif not kwargs['selectOnEntry'] and 'S' in formatcodes: formatcodes = formatcodes.replace('S','') maskededit_kwargs['formatcodes'] = formatcodes if kwargs.has_key('autoSize'): self._autoSize = kwargs['autoSize'] if kwargs['autoSize'] and 'F' not in formatcodes: formatcodes += 'F' maskededit_kwargs['formatcodes'] = formatcodes elif not kwargs['autoSize'] and 'F' in formatcodes: formatcodes = formatcodes.replace('F', '') 
maskededit_kwargs['formatcodes'] = formatcodes if 'r' in formatcodes and self._fractionWidth: # top-level mask should only be right insert if no fractional # part will be shown; ie. if reconfiguring control, remove # previous "global" setting. formatcodes = formatcodes.replace('r', '') maskededit_kwargs['formatcodes'] = formatcodes if kwargs.has_key('limited'): if kwargs['limited'] and not self._limited: maskededit_kwargs['validRequired'] = True elif not kwargs['limited'] and self._limited: maskededit_kwargs['validRequired'] = False self._limited = kwargs['limited'] if kwargs.has_key('limitOnFieldChange'): if kwargs['limitOnFieldChange'] and not self._limitOnFieldChange: maskededit_kwargs['stopFieldChangeIfInvalid'] = True elif kwargs['limitOnFieldChange'] and self._limitOnFieldChange: maskededit_kwargs['stopFieldChangeIfInvalid'] = False ## dbg('maskededit_kwargs:', maskededit_kwargs) if maskededit_kwargs.keys(): self.SetCtrlParameters(**maskededit_kwargs) # Go ensure all the format codes necessary are present: orig_intformat = intformat = self.GetFieldParameter(0, 'formatcodes') if 'r' not in intformat: intformat += 'r' if '>' not in intformat: intformat += '>' if intformat != orig_intformat: if self._fractionWidth: self.SetFieldParameters(0, formatcodes=intformat) else: self.SetCtrlParameters(formatcodes=intformat) # Record end of integer and place cursor there unless selecting, or select entire field: integerStart, integerEnd = self._fields[0]._extent if not self._fields[0]._selectOnFieldEntry: self.SetInsertionPoint(0) self.SetInsertionPoint(integerEnd) self.SetSelection(integerEnd, integerEnd) else: self.SetInsertionPoint(0) # include any sign self.SetSelection(0, integerEnd) # Set min and max as appropriate: if kwargs.has_key('min'): min = kwargs['min'] if( self._max is None or min is None or (self._max is not None and self._max >= min) ): ## dbg('examining min') if min is not None: try: textmin = self._toGUI(min, apply_limits = False) except ValueError: ## 
dbg('min will not fit into control; ignoring', indent=0) raise ## dbg('accepted min') self._min = min else: ## dbg('ignoring min') pass if kwargs.has_key('max'): max = kwargs['max'] if( self._min is None or max is None or (self._min is not None and self._min <= max) ): ## dbg('examining max') if max is not None: try: textmax = self._toGUI(max, apply_limits = False) except ValueError: ## dbg('max will not fit into control; ignoring', indent=0) raise ## dbg('accepted max') self._max = max else: ## dbg('ignoring max') pass if kwargs.has_key('allowNegative'): self._allowNegative = kwargs['allowNegative'] # Ensure current value of control obeys any new restrictions imposed: text = self._GetValue() ## dbg('text value: "%s"' % text) if kwargs.has_key('groupChar') and self._groupChar != old_groupchar and text.find(old_groupchar) != -1: text = old_numvalue ## dbg('old_groupchar: "%s" newgroupchar: "%s"' % (old_groupchar, self._groupChar)) if kwargs.has_key('decimalChar') and self._decimalChar != old_decimalchar and text.find(old_decimalchar) != -1: text = old_numvalue if text != self._GetValue(): if self._decimalChar != '.': # ensure latest decimal char is in "numeric value" so it won't be removed # when going to the GUI: text = text.replace('.', self._decimalChar) newtext = self._toGUI(text) ## dbg('calling wx.TextCtrl.ChangeValue(self, %s)' % newtext) wx.TextCtrl.ChangeValue(self, newtext) value = self.GetValue() ## dbg('self._allowNegative?', self._allowNegative) if not self._allowNegative and self._isNeg: value = abs(value) ## dbg('abs(value):', value) self._isNeg = False elif not self._allowNone and BaseMaskedTextCtrl.GetValue(self) == '': if self._min > 0: value = self._min else: value = 0 sel_start, sel_to = self.GetSelection() if self.IsLimited() and self._min is not None and value < self._min: ## dbg('Set to min value:', self._min) self._ChangeValue(self._toGUI(self._min)) elif self.IsLimited() and self._max is not None and value > self._max: ## dbg('Setting to max 
value:', self._max) self._ChangeValue(self._toGUI(self._max)) else: # reformat current value as appropriate to possibly new conditions ## dbg('Reformatting value:', value) sel_start, sel_to = self.GetSelection() self._ChangeValue(self._toGUI(value)) self.Refresh()
[ "def", "SetParameters", "(", "self", ",", "*", "*", "kwargs", ")", ":", "## dbg('NumCtrl::SetParameters', indent=1)", "maskededit_kwargs", "=", "{", "}", "reset_fraction_width", "=", "False", "if", "(", "(", "kwargs", ".", "has_key", "(", "'integerWidth'", ...
https://github.com/wxWidgets/wxPython-Classic/blob/19571e1ae65f1ac445f5491474121998c97a1bf0/wx/lib/masked/numctrl.py#L633-L906
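`SetParameters` above funnels `**kwargs` through a whitelist and raises `AttributeError` for unknown keys. That validation pattern in isolation (the parameter set and `set_parameters` helper are invented):

```python
VALID_PARAMS = {"integerWidth", "fractionWidth", "groupDigits", "allowNegative"}

def set_parameters(current, **kwargs):
    """Validate keyword arguments against a whitelist before applying them,
    returning an updated copy of the settings dict."""
    for key in kwargs:
        if key not in VALID_PARAMS:
            raise AttributeError('invalid keyword argument "%s"' % key)
    updated = dict(current)
    updated.update(kwargs)
    return updated

settings = set_parameters({"integerWidth": 10}, fractionWidth=2)

try:
    set_parameters({}, bogus=1)     # unknown key is rejected
    rejected = False
except AttributeError:
    rejected = True
```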
wxWidgets/wxPython-Classic
19571e1ae65f1ac445f5491474121998c97a1bf0
src/msw/_controls.py
python
DragImage.__init__
(self, *args, **kwargs)
__init__(self, Bitmap image, Cursor cursor=wxNullCursor) -> DragImage
__init__(self, Bitmap image, Cursor cursor=wxNullCursor) -> DragImage
[ "__init__", "(", "self", "Bitmap", "image", "Cursor", "cursor", "=", "wxNullCursor", ")", "-", ">", "DragImage" ]
def __init__(self, *args, **kwargs): """__init__(self, Bitmap image, Cursor cursor=wxNullCursor) -> DragImage""" _controls_.DragImage_swiginit(self,_controls_.new_DragImage(*args, **kwargs))
[ "def", "__init__", "(", "self", ",", "*", "args", ",", "*", "*", "kwargs", ")", ":", "_controls_", ".", "DragImage_swiginit", "(", "self", ",", "_controls_", ".", "new_DragImage", "(", "*", "args", ",", "*", "*", "kwargs", ")", ")" ]
https://github.com/wxWidgets/wxPython-Classic/blob/19571e1ae65f1ac445f5491474121998c97a1bf0/src/msw/_controls.py#L6355-L6357
Xilinx/Vitis-AI
fc74d404563d9951b57245443c73bef389f3657f
tools/Vitis-AI-Quantizer/vai_q_tensorflow1.x/tensorflow/python/saved_model/nested_structure_coder.py
python
StructureCoder.encode_structure
(self, nested_structure)
return self._map_structure(nested_structure, self._get_encoders())
Encodes nested structures composed of encodable types into a proto. Args: nested_structure: Structure to encode. Returns: Encoded proto. Raises: NotEncodableError: For values for which there are no encoders.
Encodes nested structures composed of encodable types into a proto.
[ "Encodes", "nested", "structures", "composed", "of", "encodable", "types", "into", "a", "proto", "." ]
def encode_structure(self, nested_structure): """Encodes nested structures composed of encodable types into a proto. Args: nested_structure: Structure to encode. Returns: Encoded proto. Raises: NotEncodableError: For values for which there are no encoders. """ return self._map_structure(nested_structure, self._get_encoders())
[ "def", "encode_structure", "(", "self", ",", "nested_structure", ")", ":", "return", "self", ".", "_map_structure", "(", "nested_structure", ",", "self", ".", "_get_encoders", "(", ")", ")" ]
https://github.com/Xilinx/Vitis-AI/blob/fc74d404563d9951b57245443c73bef389f3657f/tools/Vitis-AI-Quantizer/vai_q_tensorflow1.x/tensorflow/python/saved_model/nested_structure_coder.py#L82-L94
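`encode_structure` delegates to `_map_structure` with a list of encoders, raising when no encoder accepts a value. A toy version of that recursive dispatch (the encoder table is invented, not TensorFlow's real coder registry):

```python
def encode(value, encoders):
    """Recursively encode a nested structure with a list of
    (can_encode, do_encode) pairs, raising for unencodable values."""
    if isinstance(value, (list, tuple)):
        return [encode(v, encoders) for v in value]
    for can_encode, do_encode in encoders:
        if can_encode(value):
            return do_encode(value)
    raise ValueError("no encoder for %r" % (value,))

encoders = [
    # bool before int: bool is a subclass of int in Python
    (lambda v: isinstance(v, bool), lambda v: {"bool": v}),
    (lambda v: isinstance(v, int), lambda v: {"int": v}),
    (lambda v: isinstance(v, str), lambda v: {"str": v}),
]
encoded = encode([1, "a", [True]], encoders)
```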
hanpfei/chromium-net
392cc1fa3a8f92f42e4071ab6e674d8e0482f83f
tools/json_schema_compiler/cc_generator.py
python
_Generator._InitializePropertyToDefault
(self, prop, dst)
return c
Initialize a model.Property to its default value inside an object. E.g. for optional enum "state", generate dst->state = STATE_NONE; dst: Type*
Initialize a model.Property to its default value inside an object.
[ "Initialize", "a", "model", ".", "Property", "to", "its", "default", "value", "inside", "an", "object", "." ]
def _InitializePropertyToDefault(self, prop, dst): """Initialize a model.Property to its default value inside an object. E.g. for optional enum "state", generate dst->state = STATE_NONE; dst: Type* """ c = Code() underlying_type = self._type_helper.FollowRef(prop.type_) if (underlying_type.property_type == PropertyType.ENUM and prop.optional): namespace_prefix = ('%s::' % underlying_type.namespace.unix_name if underlying_type.namespace != self._namespace else '') c.Append('%s->%s = %s%s;' % ( dst, prop.unix_name, namespace_prefix, self._type_helper.GetEnumNoneValue(prop.type_))) return c
[ "def", "_InitializePropertyToDefault", "(", "self", ",", "prop", ",", "dst", ")", ":", "c", "=", "Code", "(", ")", "underlying_type", "=", "self", ".", "_type_helper", ".", "FollowRef", "(", "prop", ".", "type_", ")", "if", "(", "underlying_type", ".", "...
https://github.com/hanpfei/chromium-net/blob/392cc1fa3a8f92f42e4071ab6e674d8e0482f83f/tools/json_schema_compiler/cc_generator.py#L1113-L1132
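The generator above emits a single C++ assignment for an optional enum property. The string-building step can be sketched on its own (function and argument names invented):

```python
def init_optional_enum(dst, prop_name, enum_none_value, namespace_prefix=""):
    """Build the 'dst->prop = NAMESPACE::ENUM_NONE;' line the generator
    appends for an optional enum property."""
    return "%s->%s = %s%s;" % (dst, prop_name, namespace_prefix, enum_none_value)

line = init_optional_enum("out", "state", "STATE_NONE")
qualified = init_optional_enum("out", "state", "STATE_NONE", "api::")
```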
krishauser/Klampt
972cc83ea5befac3f653c1ba20f80155768ad519
Python/python2_version/klampt/model/trajectory.py
python
Trajectory.eval_state
(self,t,endBehavior='halt')
return self.interpolate_state(self.milestones[i],self.milestones[i+1],u,self.times[i+1]-self.times[i])
Internal eval, used on the underlying state representation
Internal eval, used on the underlying state representation
[ "Internal", "eval", "used", "on", "the", "underlying", "state", "representation" ]
def eval_state(self,t,endBehavior='halt'): """Internal eval, used on the underlying state representation""" i,u = self.getSegment(t,endBehavior) if i<0: return self.milestones[0] elif i+1>=len(self.milestones): return self.milestones[-1] #linear interpolate between milestones[i] and milestones[i+1] return self.interpolate_state(self.milestones[i],self.milestones[i+1],u,self.times[i+1]-self.times[i])
[ "def", "eval_state", "(", "self", ",", "t", ",", "endBehavior", "=", "'halt'", ")", ":", "i", ",", "u", "=", "self", ".", "getSegment", "(", "t", ",", "endBehavior", ")", "if", "i", "<", "0", ":", "return", "self", ".", "milestones", "[", "0", "]...
https://github.com/krishauser/Klampt/blob/972cc83ea5befac3f653c1ba20f80155768ad519/Python/python2_version/klampt/model/trajectory.py#L188-L194
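`eval_state` finds the segment containing `t`, then linearly interpolates between its endpoint milestones, clamping to the endpoints under the `'halt'` end behavior. The core arithmetic for scalar milestones (the `eval_trajectory` helper is invented):

```python
from bisect import bisect_right

def eval_trajectory(times, milestones, t):
    """Piecewise-linear evaluation with 'halt' end behavior, mirroring
    Trajectory.eval_state for scalar milestones."""
    if t <= times[0]:
        return milestones[0]           # clamp before the start
    if t >= times[-1]:
        return milestones[-1]          # clamp after the end
    i = bisect_right(times, t) - 1     # segment index with times[i] <= t
    u = (t - times[i]) / (times[i + 1] - times[i])
    return milestones[i] + u * (milestones[i + 1] - milestones[i])

mid = eval_trajectory([0.0, 1.0, 2.0], [0.0, 10.0, 0.0], 0.5)
clamped = eval_trajectory([0.0, 1.0, 2.0], [0.0, 10.0, 0.0], 5.0)
```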
KratosMultiphysics/Kratos
0000833054ed0503424eb28205d6508d9ca6cbbc
kratos/python_scripts/gid_output_process.py
python
GiDOutputProcess.__get_gidpost_flag
(self, param, label, dictionary)
return value
Parse gidpost settings using an auxiliary dictionary of acceptable values.
Parse gidpost settings using an auxiliary dictionary of acceptable values.
[ "Parse", "gidpost", "settings", "using", "an", "auxiliary", "dictionary", "of", "acceptable", "values", "." ]
def __get_gidpost_flag(self, param, label, dictionary): '''Parse gidpost settings using an auxiliary dictionary of acceptable values.''' keystring = param[label].GetString() try: value = dictionary[keystring] except KeyError: msg = "{0} Error: Unknown value \"{1}\" read for parameter \"{2}\"".format(self.__class__.__name__,keystring,label) raise Exception(msg) return value
[ "def", "__get_gidpost_flag", "(", "self", ",", "param", ",", "label", ",", "dictionary", ")", ":", "keystring", "=", "param", "[", "label", "]", ".", "GetString", "(", ")", "try", ":", "value", "=", "dictionary", "[", "keystring", "]", "except", "KeyErro...
https://github.com/KratosMultiphysics/Kratos/blob/0000833054ed0503424eb28205d6508d9ca6cbbc/kratos/python_scripts/gid_output_process.py#L372-L382
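`__get_gidpost_flag` maps a user-supplied settings string to a flag through a dictionary, turning `KeyError` into a readable message. The same pattern in isolation (the flag table is invented; note the error message interpolates the offending key string, not the unbound result):

```python
WRITE_DEFORMED_FLAGS = {
    "WriteDeformed": True,
    "WriteUndeformed": False,
}

def parse_flag(keystring, label, dictionary):
    """Look up a settings string, converting KeyError into a message
    that names the offending value and the parameter it came from."""
    try:
        return dictionary[keystring]
    except KeyError:
        raise ValueError(
            'Unknown value "%s" read for parameter "%s"' % (keystring, label))

flag = parse_flag("WriteDeformed", "WriteDeformedMeshFlag", WRITE_DEFORMED_FLAGS)

try:
    parse_flag("Typo", "WriteDeformedMeshFlag", WRITE_DEFORMED_FLAGS)
    error_message = None
except ValueError as exc:
    error_message = str(exc)
```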
catboost/catboost
167f64f237114a4d10b2b4ee42adb4569137debe
contrib/python/prompt-toolkit/py3/prompt_toolkit/layout/containers.py
python
Container.get_children
(self)
return []
Return the list of child :class:`.Container` objects.
Return the list of child :class:`.Container` objects.
[ "Return", "the", "list", "of", "child", ":", "class", ":", ".", "Container", "objects", "." ]
def get_children(self) -> List["Container"]: """ Return the list of child :class:`.Container` objects. """ return []
[ "def", "get_children", "(", "self", ")", "->", "List", "[", "\"Container\"", "]", ":", "return", "[", "]" ]
https://github.com/catboost/catboost/blob/167f64f237114a4d10b2b4ee42adb4569137debe/contrib/python/prompt-toolkit/py3/prompt_toolkit/layout/containers.py#L153-L157
tensorflow/tensorflow
419e3a6b650ea4bd1b0cba23c4348f8a69f3272e
tensorflow/python/eager/function.py
python
_HigherOrderTapeGradientFunctions._forward_and_backward_functions
(self, inference_args, input_tangents)
return (forward_function, forward_graph, backward_function, output_indices, num_output_tangents)
Forward and backward functions suitable for higher-order gradients. Unlike in `_FirstOrderTapeGradientFunctions`, the backward function built by this method accepts gradients for all of the outputs of the returned forward function, including side outputs. Args: inference_args: A flat list of Tensors, arguments to the inference function. input_tangents: A flat list of Tensors, jvps associated with `inference_args`. Returns: A tuple of (forward_function, backward_function): forward_function: Takes the same inputs as the inference function, but returns side outputs used by backward_function in addition to the inference function's outputs. backward_function: Takes side outputs from forward_function and gradients with respect to all of its outputs, real and side. Returns gradients with respect to the inputs.
Forward and backward functions suitable for higher-order gradients.
[ "Forward", "and", "backward", "functions", "suitable", "for", "higher", "-", "order", "gradients", "." ]
def _forward_and_backward_functions(self, inference_args, input_tangents): """Forward and backward functions suitable for higher-order gradients. Unlike in `_FirstOrderTapeGradientFunctions`, the backward function built by this method accepts gradients for all of the outputs of the returned forward function, including side outputs. Args: inference_args: A flat list of Tensors, arguments to the inference function. input_tangents: A flat list of Tensors, jvps associated with `inference_args`. Returns: A tuple of (forward_function, backward_function): forward_function: Takes the same inputs as the inference function, but returns side outputs used by backward_function in addition to the inference function's outputs. backward_function: Takes side outputs from forward_function and gradients with respect to all of its outputs, real and side. Returns gradients with respect to the inputs. """ outputs = [] iteration_count = 0 # First we need to figure out how many side outputs from the forward pass # will be required. We do this in a temporary graph to avoid actually # running multiple copies of the backward pass (one per _GradientsHelper # call). # # While computing gradients, the backward function captures Tensors from # the forward function. We add these as side outputs of the original # function. However, we then need to accept output gradients with respect # to these side outputs for higher order gradients to work. Thus we loop # until the number of outputs of the function stabilizes. Note that this # is only required for tape gradients, where we need to declare in advance # all of the forward op's outputs: symbolic gradients with tf.gradients # instead rely on regenerating backward functions when higher-order # gradients are requested. while (len(outputs) < len(self._func_graph.outputs) # It's possible for gradient generation to add new ops to the forward # pass. If all of the new outputs are non-trainable, there's no # reason to continue. 
and any(backprop_util.IsTrainable(output) for output in self._func_graph.outputs[len(outputs):])): iteration_count += 1 if iteration_count >= 20 and iteration_count % 5 == 0: new_op_with_trainable_output = None num_new_trainable_outputs = 0 for output in self._func_graph.outputs[len(outputs):]: if backprop_util.IsTrainable(output): num_new_trainable_outputs += 1 new_op_with_trainable_output = output.op logging.warning( ("Determining side outputs for the function '{}' is taking longer " "than expected ({} iterations, typically this converges in 5 or " "so). This could indicate that a gradient registration is adding " "new ops to the forward pass every time gradients are generated. " "{} new trainable output(s) were added this iteration, one from " "the following op:\n {}\nThis may indicate a TensorFlow bug, or " "an issue in a tf.custom_gradient.") .format( self._func_graph.name, iteration_count, num_new_trainable_outputs, new_op_with_trainable_output)) outputs = list(self._func_graph.outputs) self._build_functions_for_outputs( outputs, inference_args, input_tangents) (forward_function, forward_graph, backward_function, output_indices, num_output_tangents) = ( self._build_functions_for_outputs( outputs, inference_args, input_tangents)) if (len(self._func_graph.outputs) > len(outputs) and any(backprop_util.IsTrainable(output) for output in self._func_graph.outputs[len(outputs):])): raise errors.InternalError( "Unexpectedly added new outputs to the forward function when " "building the backward function: " f"{self._func_graph.outputs[len(outputs):]}.") return (forward_function, forward_graph, backward_function, output_indices, num_output_tangents)
[ "def", "_forward_and_backward_functions", "(", "self", ",", "inference_args", ",", "input_tangents", ")", ":", "outputs", "=", "[", "]", "iteration_count", "=", "0", "# First we need to figure out how many side outputs from the forward pass", "# will be required. We do this in a ...
https://github.com/tensorflow/tensorflow/blob/419e3a6b650ea4bd1b0cba23c4348f8a69f3272e/tensorflow/python/eager/function.py#L1290-L1369
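The `while` loop above is a fixed-point iteration: it keeps rebuilding the backward pass until the number of forward-function outputs stops growing. A toy version of that convergence loop (the `step` function is invented and merely mimics side outputs being added until a cap is reached):

```python
def grow_until_stable(outputs, step, max_iter=50):
    """Iterate a growth step until the output list stops growing,
    mirroring the side-output fixed point in _forward_and_backward_functions."""
    for _ in range(max_iter):
        before = len(outputs)
        outputs = step(outputs)
        if len(outputs) == before:
            return outputs
    raise RuntimeError("did not converge")

def step(outputs):
    # toy growth rule: each pass doubles the output count, capped at 8,
    # so the iteration stabilizes after a few rounds
    needed = min(8, 2 * len(outputs) if outputs else 1)
    return outputs + ["side%d" % i for i in range(len(outputs), needed)]

stable = grow_until_stable(["out0"], step)
```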
Slicer/Slicer
ba9fadf332cb0303515b68d8d06a344c82e3e3e5
Base/Python/slicer/ScriptedLoadableModule.py
python
ScriptedLoadableModuleWidget._onModuleAboutToBeUnloaded
(self, moduleName)
This slot calls `cleanup()` if the module about to be unloaded is the current one.
This slot calls `cleanup()` if the module about to be unloaded is the current one.
[ "This", "slot", "calls", "cleanup", "()", "if", "the", "module", "about", "to", "be", "unloaded", "is", "the", "current", "one", "." ]
def _onModuleAboutToBeUnloaded(self, moduleName): """This slot calls `cleanup()` if the module about to be unloaded is the current one. """ if moduleName == self.moduleName: self.cleanup() slicer.app.moduleManager().disconnect( 'moduleAboutToBeUnloaded(QString)', self._onModuleAboutToBeUnloaded)
[ "def", "_onModuleAboutToBeUnloaded", "(", "self", ",", "moduleName", ")", ":", "if", "moduleName", "==", "self", ".", "moduleName", ":", "self", ".", "cleanup", "(", ")", "slicer", ".", "app", ".", "moduleManager", "(", ")", ".", "disconnect", "(", "'modul...
https://github.com/Slicer/Slicer/blob/ba9fadf332cb0303515b68d8d06a344c82e3e3e5/Base/Python/slicer/ScriptedLoadableModule.py#L110-L117
apache/arrow
af33dd1157eb8d7d9bfac25ebf61445b793b7943
dev/archery/archery/utils/source.py
python
ArrowSources.java
(self)
return self.path / "java"
Returns the java directory of an Arrow sources.
Returns the java directory of an Arrow sources.
[ "Returns", "the", "java", "directory", "of", "an", "Arrow", "sources", "." ]
def java(self):
    """
    Returns the java directory of an Arrow sources.
    """
    return self.path / "java"
[ "def", "java", "(", "self", ")", ":", "return", "self", ".", "path", "/", "\"java\"" ]
https://github.com/apache/arrow/blob/af33dd1157eb8d7d9bfac25ebf61445b793b7943/dev/archery/archery/utils/source.py#L72-L74
wxWidgets/wxPython-Classic
19571e1ae65f1ac445f5491474121998c97a1bf0
src/msw/_gdi.py
python
GraphicsMatrix.Get
(*args, **kwargs)
return _gdi_.GraphicsMatrix_Get(*args, **kwargs)
Get(self) --> (a, b, c, d, tx, ty) Gets the component values of the matrix and returns them as a tuple.
Get(self) --> (a, b, c, d, tx, ty)
[ "Get", "(", "self", ")", "--", ">", "(", "a", "b", "c", "d", "tx", "ty", ")" ]
def Get(*args, **kwargs):
    """
    Get(self) --> (a, b, c, d, tx, ty)

    Gets the component values of the matrix and returns them as a tuple.
    """
    return _gdi_.GraphicsMatrix_Get(*args, **kwargs)
[ "def", "Get", "(", "*", "args", ",", "*", "*", "kwargs", ")", ":", "return", "_gdi_", ".", "GraphicsMatrix_Get", "(", "*", "args", ",", "*", "*", "kwargs", ")" ]
https://github.com/wxWidgets/wxPython-Classic/blob/19571e1ae65f1ac445f5491474121998c97a1bf0/src/msw/_gdi.py#L5781-L5787
trilinos/Trilinos
6168be6dd51e35e1cd681e9c4b24433e709df140
packages/seacas/libraries/ioss/src/visualization/catalyst/phactori/phactori.py
python
UseMPIToFillInSharedListNPerProcess
( inThisProcVtkArray, inCountPerProc, outVtkArray, vtkArrayTypeIndex)
return mypid, numproc
given a list of N values on each process fill in on each process a list which contains all the entries from each process (i.e. it is N * number_of_processes long) will work on either vtkDoubleArray types or vtkIntArray types, with vtkArrayTypeIndex = 0 for vtkIntArray types and 1 for vtkDoubleArray types
given a list of N values on each process fill in on each process a list which contains all the entries from each process (i.e. it is N * number_of_processes long) will work on either vtkDoubleArray types or vtkIntArray types, with vtkArrayTypeIndex = 0 for vtkIntArray types and 1 for vtkDoubleArray types
[ "given", "a", "list", "of", "N", "values", "on", "each", "process", "fill", "in", "on", "each", "process", "a", "list", "which", "contains", "all", "the", "entries", "from", "each", "process", "(", "i", ".", "e", ".", "it", "is", "N", "*", "number_of...
def UseMPIToFillInSharedListNPerProcess(
    inThisProcVtkArray, inCountPerProc, outVtkArray, vtkArrayTypeIndex):
  """given a list of N values on each process fill in on each process a list
  which contains all the entries from each process (i.e. it is
  N * number_of_processes long) will work on either vtkDoubleArray types or
  vtkIntArray types, with vtkArrayTypeIndex = 0 for vtkIntArray types and 1
  for vtkDoubleArray types"""
  mypid = SmartGetLocalProcessId()
  pm = paraview.servermanager.vtkProcessModule.GetProcessModule()
  globalController = pm.GetGlobalController()
  #gLocalProcessId = globalController.GetLocalProcessId()
  numproc = globalController.GetNumberOfProcesses()
  if PhactoriDbg(100):
    myDebugPrint3("mypid: " + str(mypid) + " numproc: " + str(numproc) + \
        " count: " + str(inCountPerProc) + "\n")
  outVtkArray.SetNumberOfValues(inCountPerProc * numproc)
  if vtkArrayTypeIndex == 0:
    fromRemoteProcList = vtk.vtkIntArray()
  else:
    fromRemoteProcList = vtk.vtkDoubleArray()
  for ii in range(0,numproc):
    prndx = ii * inCountPerProc
    if ii == mypid:
      globalController.Broadcast(inThisProcVtkArray, ii)
      for jj in range(0,inCountPerProc):
        outVtkArray.SetValue(prndx + jj, inThisProcVtkArray.GetValue(jj))
    else:
      fromRemoteProcList.SetNumberOfValues(inCountPerProc)
      globalController.Broadcast(fromRemoteProcList, ii)
      for jj in range(0,inCountPerProc):
        outVtkArray.SetValue(prndx + jj, fromRemoteProcList.GetValue(jj))
  return mypid, numproc
[ "def", "UseMPIToFillInSharedListNPerProcess", "(", "inThisProcVtkArray", ",", "inCountPerProc", ",", "outVtkArray", ",", "vtkArrayTypeIndex", ")", ":", "mypid", "=", "SmartGetLocalProcessId", "(", ")", "pm", "=", "paraview", ".", "servermanager", ".", "vtkProcessModule"...
https://github.com/trilinos/Trilinos/blob/6168be6dd51e35e1cd681e9c4b24433e709df140/packages/seacas/libraries/ioss/src/visualization/catalyst/phactori/phactori.py#L2596-L2634
windystrife/UnrealEngine_NVIDIAGameWorks
b50e6338a7c5b26374d66306ebc7807541ff815e
Engine/Extras/ThirdPartyNotUE/emsdk/Win64/python/2.7.5.3_64bit/Lib/mailbox.py
python
Mailbox.__getitem__
(self, key)
Return the keyed message; raise KeyError if it doesn't exist.
Return the keyed message; raise KeyError if it doesn't exist.
[ "Return", "the", "keyed", "message", ";", "raise", "KeyError", "if", "it", "doesn", "t", "exist", "." ]
def __getitem__(self, key):
    """Return the keyed message; raise KeyError if it doesn't exist."""
    if not self._factory:
        return self.get_message(key)
    else:
        return self._factory(self.get_file(key))
[ "def", "__getitem__", "(", "self", ",", "key", ")", ":", "if", "not", "self", ".", "_factory", ":", "return", "self", ".", "get_message", "(", "key", ")", "else", ":", "return", "self", ".", "_factory", "(", "self", ".", "get_file", "(", "key", ")", ...
https://github.com/windystrife/UnrealEngine_NVIDIAGameWorks/blob/b50e6338a7c5b26374d66306ebc7807541ff815e/Engine/Extras/ThirdPartyNotUE/emsdk/Win64/python/2.7.5.3_64bit/Lib/mailbox.py#L79-L84
Kitware/ParaView
f760af9124ff4634b23ebbeab95a4f56e0261955
ThirdParty/cinema/paraview/tpl/cinema_python/images/camera_utils.py
python
nearest_camera
(poses, mnext)
return best
find index of the pose that is closest match to mnext
find index of the pose that is closest match to mnext
[ "find", "index", "of", "the", "pose", "that", "is", "closest", "match", "to", "mnext" ]
def nearest_camera(poses, mnext):
    """ find index of the pose that is closest match to mnext """
    # TODO: profile, and then if important, optimize this
    # 1: only need to compare 2 of 3 vectors since they are orthogonal
    # 2: will be faster still if we push the matrices into numpy
    #    (flatten too perhaps)
    # algorithm: reduce to distance value, and then pick the index of
    # pose with least distance to mnext
    best = -1
    bestdist = -1
    primary = 0
    secondary = 1
    tertiary = 2
    for i in range(0, len(poses)):
        cand = poses[i]
        dist1 = math.sqrt(
            math.pow(mnext[primary][0]-cand[primary][0], 2) +
            math.pow(mnext[primary][1]-cand[primary][1], 2) +
            math.pow(mnext[primary][2]-cand[primary][2], 2))
        dist2 = math.sqrt(
            math.pow(mnext[secondary][0]-cand[secondary][0], 2) +
            math.pow(mnext[secondary][1]-cand[secondary][1], 2) +
            math.pow(mnext[secondary][2]-cand[secondary][2], 2))
        dist3 = math.sqrt(
            math.pow(mnext[tertiary][0]-cand[tertiary][0], 2) +
            math.pow(mnext[tertiary][1]-cand[tertiary][1], 2) +
            math.pow(mnext[tertiary][2]-cand[tertiary][2], 2))
        dist = dist1*dist1+dist2*dist2+dist3*dist3
        if best == -1:
            bestdist = dist
            best = i
        if dist < bestdist:
            # print "NB", dist, i, dist-bestdist
            bestdist = dist
            best = i
    return best
[ "def", "nearest_camera", "(", "poses", ",", "mnext", ")", ":", "# TODO: profile, and then if important, optimize this", "# 1: only need to compare 2 of 3 vectors since they are orthogonal", "# 2: will be faster still if we push the matrices into numpy", "# (flatten too perhaps)", "# algorith...
https://github.com/Kitware/ParaView/blob/f760af9124ff4634b23ebbeab95a4f56e0261955/ThirdParty/cinema/paraview/tpl/cinema_python/images/camera_utils.py#L60-L98
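Because `nearest_camera` squares each `dist` before summing (`dist1*dist1 + dist2*dist2 + dist3*dist3`), the combined metric is exactly the sum of squared component differences over all three row vectors, so the `math.sqrt` calls cancel out. A minimal dependency-free sketch of that equivalent reduction (function and variable names are this sketch's own, not from the source):

```python
def nearest_pose(poses, target):
    # Equivalent metric: sum of squared component differences across
    # all three 3-vectors; the sqrt in the original is redundant since
    # each per-row distance is squared again before summing.
    best, bestdist = -1, None
    for i, cand in enumerate(poses):
        dist = sum((a - b) ** 2
                   for row_a, row_b in zip(target, cand)
                   for a, b in zip(row_a, row_b))
        if bestdist is None or dist < bestdist:
            best, bestdist = i, dist
    return best

poses = [[[0, 0, 0]] * 3, [[1, 1, 1]] * 3]
print(nearest_pose(poses, [[1, 1, 0]] * 3))  # 1
```

Like the original, ties keep the earliest pose, because a later candidate only wins with a strictly smaller distance.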
Xilinx/Vitis-AI
fc74d404563d9951b57245443c73bef389f3657f
tools/Vitis-AI-Quantizer/vai_q_tensorflow1.x/tensorflow/python/keras/activations.py
python
softsign
(x)
return nn.softsign(x)
Softsign activation function. Arguments: x: Input tensor. Returns: The softplus activation: `x / (abs(x) + 1)`.
Softsign activation function.
[ "Softsign", "activation", "function", "." ]
def softsign(x):
  """Softsign activation function.

  Arguments:
      x: Input tensor.

  Returns:
      The softplus activation: `x / (abs(x) + 1)`.
  """
  return nn.softsign(x)
[ "def", "softsign", "(", "x", ")", ":", "return", "nn", ".", "softsign", "(", "x", ")" ]
https://github.com/Xilinx/Vitis-AI/blob/fc74d404563d9951b57245443c73bef389f3657f/tools/Vitis-AI-Quantizer/vai_q_tensorflow1.x/tensorflow/python/keras/activations.py#L166-L175
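The record above documents the Keras `softsign` wrapper; note that the formula its docstring shows, `x / (abs(x) + 1)`, is the softsign function even though the upstream text labels it "softplus". A minimal pure-Python sketch of that formula, independent of TensorFlow:

```python
def softsign(x):
    # Softsign: x / (1 + |x|), a smooth activation bounded in (-1, 1).
    return x / (1.0 + abs(x))

print(softsign(1.0))   # 0.5
print(softsign(-3.0))  # -0.75
```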
catboost/catboost
167f64f237114a4d10b2b4ee42adb4569137debe
contrib/python/setuptools/py2/pkg_resources/__init__.py
python
ResourceManager.resource_listdir
(self, package_or_requirement, resource_name)
return get_provider(package_or_requirement).resource_listdir( resource_name )
List the contents of the named resource directory
List the contents of the named resource directory
[ "List", "the", "contents", "of", "the", "named", "resource", "directory" ]
def resource_listdir(self, package_or_requirement, resource_name):
    """List the contents of the named resource directory"""
    return get_provider(package_or_requirement).resource_listdir(
        resource_name
    )
[ "def", "resource_listdir", "(", "self", ",", "package_or_requirement", ",", "resource_name", ")", ":", "return", "get_provider", "(", "package_or_requirement", ")", ".", "resource_listdir", "(", "resource_name", ")" ]
https://github.com/catboost/catboost/blob/167f64f237114a4d10b2b4ee42adb4569137debe/contrib/python/setuptools/py2/pkg_resources/__init__.py#L1160-L1164
wy1iu/LargeMargin_Softmax_Loss
c3e9f20e4f16e2b4daf7d358a614366b9b39a6ec
scripts/cpp_lint.py
python
GetPreviousNonBlankLine
(clean_lines, linenum)
return ('', -1)
Return the most recent non-blank line and its line number. Args: clean_lines: A CleansedLines instance containing the file contents. linenum: The number of the line to check. Returns: A tuple with two elements. The first element is the contents of the last non-blank line before the current line, or the empty string if this is the first non-blank line. The second is the line number of that line, or -1 if this is the first non-blank line.
Return the most recent non-blank line and its line number.
[ "Return", "the", "most", "recent", "non", "-", "blank", "line", "and", "its", "line", "number", "." ]
def GetPreviousNonBlankLine(clean_lines, linenum):
  """Return the most recent non-blank line and its line number.

  Args:
    clean_lines: A CleansedLines instance containing the file contents.
    linenum: The number of the line to check.

  Returns:
    A tuple with two elements.  The first element is the contents of the last
    non-blank line before the current line, or the empty string if this is the
    first non-blank line.  The second is the line number of that line, or -1
    if this is the first non-blank line.
  """
  prevlinenum = linenum - 1
  while prevlinenum >= 0:
    prevline = clean_lines.elided[prevlinenum]
    if not IsBlankLine(prevline):     # if not a blank line...
      return (prevline, prevlinenum)
    prevlinenum -= 1
  return ('', -1)
[ "def", "GetPreviousNonBlankLine", "(", "clean_lines", ",", "linenum", ")", ":", "prevlinenum", "=", "linenum", "-", "1", "while", "prevlinenum", ">=", "0", ":", "prevline", "=", "clean_lines", ".", "elided", "[", "prevlinenum", "]", "if", "not", "IsBlankLine",...
https://github.com/wy1iu/LargeMargin_Softmax_Loss/blob/c3e9f20e4f16e2b4daf7d358a614366b9b39a6ec/scripts/cpp_lint.py#L3046-L3066
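The `GetPreviousNonBlankLine` record above walks backwards from `linenum - 1` until it hits a non-blank line. The same logic over a plain list of strings, dropping cpplint's `CleansedLines` wrapper and `IsBlankLine` helper (both replaced here by this sketch's own assumptions):

```python
def previous_non_blank_line(lines, linenum):
    # Walk backwards from the line before `linenum`; return the first
    # non-blank line with its index, or ('', -1) if none exists.
    prev = linenum - 1
    while prev >= 0:
        if lines[prev].strip():
            return (lines[prev], prev)
        prev -= 1
    return ('', -1)

print(previous_non_blank_line(["int x;", "", "int y;"], 2))  # ('int x;', 0)
```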
hanpfei/chromium-net
392cc1fa3a8f92f42e4071ab6e674d8e0482f83f
third_party/catapult/telemetry/telemetry/internal/backends/chrome/chrome_browser_backend.py
python
ChromeBrowserBackend._WaitForBrowserToComeUp
(self)
Wait for browser to come up.
Wait for browser to come up.
[ "Wait", "for", "browser", "to", "come", "up", "." ]
def _WaitForBrowserToComeUp(self):
  """ Wait for browser to come up. """
  try:
    timeout = self.browser_options.browser_startup_timeout
    util.WaitFor(self.HasBrowserFinishedLaunching, timeout=timeout)
  except (exceptions.TimeoutException, exceptions.ProcessGoneException) as e:
    if not self.IsBrowserRunning():
      raise exceptions.BrowserGoneException(self.browser, e)
    raise exceptions.BrowserConnectionGoneException(self.browser, e)
[ "def", "_WaitForBrowserToComeUp", "(", "self", ")", ":", "try", ":", "timeout", "=", "self", ".", "browser_options", ".", "browser_startup_timeout", "util", ".", "WaitFor", "(", "self", ".", "HasBrowserFinishedLaunching", ",", "timeout", "=", "timeout", ")", "ex...
https://github.com/hanpfei/chromium-net/blob/392cc1fa3a8f92f42e4071ab6e674d8e0482f83f/third_party/catapult/telemetry/telemetry/internal/backends/chrome/chrome_browser_backend.py#L166-L174
domino-team/openwrt-cc
8b181297c34d14d3ca521cc9f31430d561dbc688
package/gli-pub/openwrt-node-packages-master/node/node-v6.9.1/deps/v8_inspector/third_party/jinja2/ext/djangojinja2.py
python
get_template
(template_name, globals=None)
Load a template.
Load a template.
[ "Load", "a", "template", "." ]
def get_template(template_name, globals=None):
    """Load a template."""
    try:
        return get_env().get_template(template_name, globals=globals)
    except TemplateNotFound, e:
        raise TemplateDoesNotExist(str(e))
[ "def", "get_template", "(", "template_name", ",", "globals", "=", "None", ")", ":", "try", ":", "return", "get_env", "(", ")", ".", "get_template", "(", "template_name", ",", "globals", "=", "globals", ")", "except", "TemplateNotFound", ",", "e", ":", "rai...
https://github.com/domino-team/openwrt-cc/blob/8b181297c34d14d3ca521cc9f31430d561dbc688/package/gli-pub/openwrt-node-packages-master/node/node-v6.9.1/deps/v8_inspector/third_party/jinja2/ext/djangojinja2.py#L52-L57
tuttleofx/TuttleOFX
36fc4cae15092a84ea8c29b9c6658c7cabfadb6e
applications/sam/sam_ls.py
python
Sam_ls._printItem
(self, item, args, level)
Print the item depending on the command line options.
Print the item depending on the command line options.
[ "Print", "the", "item", "depending", "on", "the", "command", "line", "options", "." ]
def _printItem(self, item, args, level):
    """
    Print the item depending on the command line options.
    """
    itemType = item.getType()
    filePath = ''
    detailed = ''
    detailedSequence = ''

    # sam-ls --explode-sequences
    sequenceExploded = False
    if args.explodeSequences and itemType == sequenceParser.eTypeSequence:
        sequence = item.getSequence()
        for frameRange in sequence.getFrameRanges():
            # for each frame range, print a new item as sequence
            subSequence = sequenceParser.Sequence(sequence.getPrefix(), sequence.getFixedPadding(), sequence.getMaxPadding(), sequence.getSuffix(), frameRange.first, frameRange.last, frameRange.step)
            if subSequence.__str__() not in self._sequenceExploded:
                self._sequenceExploded.append(subSequence.__str__())
                sequenceExploded = True
                self._printItem(sequenceParser.Item(subSequence, item.getFolder()), args, level)
    # to skip recursivity
    if sequenceExploded:
        return

    # sam-ls -l
    if args.longListing:
        # type - date - size
        characterFromType = 'a'
        if itemType == sequenceParser.eTypeUndefined:
            characterFromType = '?'
        elif itemType == sequenceParser.eTypeFolder:
            characterFromType = 'd'
        elif itemType == sequenceParser.eTypeFile:
            characterFromType = 'f'
        elif itemType == sequenceParser.eTypeSequence:
            characterFromType = 's'
        elif itemType == sequenceParser.eTypeLink:
            characterFromType = 'l'

        # type - permissions - user - group - lastUpdate - size
        itemStat = sequenceParser.ItemStat(item)
        permissions = ''
        permissions += 'r' if itemStat.ownerCanRead else '-'
        permissions += 'w' if itemStat.ownerCanWrite else '-'
        permissions += 'x' if itemStat.ownerCanExecute else '-'
        permissions += 'r' if itemStat.groupCanRead else '-'
        permissions += 'w' if itemStat.groupCanWrite else '-'
        permissions += 'x' if itemStat.groupCanExecute else '-'
        permissions += 'r' if itemStat.otherCanRead else '-'
        permissions += 'w' if itemStat.otherCanWrite else '-'
        permissions += 'x' if itemStat.otherCanExecute else '-'
        lastUpdate = date.fromtimestamp(itemStat.modificationTime).strftime('%d/%m/%y')
        minSize = samUtils.getReadableSize(itemStat.minSize) if itemStat.minSize != itemStat.size else '-'
        maxSize = samUtils.getReadableSize(itemStat.maxSize) if itemStat.maxSize != itemStat.size else '-'
        detailed = '{:1}{:9}'.format(characterFromType, permissions)
        detailed += ' {:8} {:8} {:8}'.format(itemStat.getUserName(), itemStat.getGroupName(), lastUpdate)
        detailed += ' {:6} {:6} {:6} '.format(minSize, maxSize, samUtils.getReadableSize(itemStat.size))

        # only for sequences: [ begin : end ] nbFiles - nbMissingFiles
        if itemType == sequenceParser.eTypeSequence:
            sequence = item.getSequence()
            detailedSequence = ' [{first}:{last}] {nbFiles} files'.format(first=sequence.getFirstTime(), last=sequence.getLastTime(), nbFiles=sequence.getNbFiles())
            nbHoles = (sequence.getLastTime() - sequence.getFirstTime() + 1) - sequence.getNbFiles()
            if nbHoles:
                detailedSequence += ' - {nbHoles} missing files'.format(nbHoles=nbHoles)

    # sam-ls --absolute-path
    if args.absolutePath:
        filePath += os.path.abspath(item.getFolder())

    # sam-ls --relative-path
    if args.relativePath:
        filePath += item.getFolder()

    # filename
    filename = item.getFilename()
    # sam-ls --format
    if itemType == sequenceParser.eTypeSequence:
        filename = samUtils.getSequenceNameWithFormatting(item.getSequence(), args.format)

    # colors
    if itemType == sequenceParser.eTypeFolder:
        # blue is not visible without bold
        filePath = colored.blue(os.path.join(filePath, filename), bold=True)
    elif itemType == sequenceParser.eTypeFile:
        filePath = colored.green(os.path.join(filePath, filename))
    elif itemType == sequenceParser.eTypeSequence:
        # magenta is not visible without bold
        filePath = colored.magenta(os.path.join(filePath, filename), bold=True)
    elif itemType == sequenceParser.eTypeLink:
        filePath = colored.cyan(os.path.join(filePath, filename))
    else:
        filePath = colored.red(os.path.join(filePath, filename))

    # sam-ls -R / sam-ls -L / sam-ls --script
    indentTree = ''
    if args.recursive and level != 0 and not args.script:
        indentTree += '| ' * (level - 1)
        indentTree += '|__ '

    # display
    toPrint = detailed + filePath
    if not args.script:
        toPrint += detailedSequence

    # if first level or no tree formatting
    if level == 0 or args.script:
        puts(toPrint.format())
    else:
        with indent(level, quote=indentTree):
            puts(toPrint.format())
    self._itemPrinted.append(item.getAbsoluteFilepath())
[ "def", "_printItem", "(", "self", ",", "item", ",", "args", ",", "level", ")", ":", "itemType", "=", "item", ".", "getType", "(", ")", "filePath", "=", "''", "detailed", "=", "''", "detailedSequence", "=", "''", "# sam-ls --explode-sequences", "sequenceExplo...
https://github.com/tuttleofx/TuttleOFX/blob/36fc4cae15092a84ea8c29b9c6658c7cabfadb6e/applications/sam/sam_ls.py#L82-L199
thalium/icebox
99d147d5b9269222225443ce171b4fd46d8985d4
third_party/retdec-3.2/scripts/type_extractor/type_extractor/json_types.py
python
convert_func_types_to_type_for_json
(functions, types)
Converts parameters and return type of function declaration to json representation.
Converts parameters and return type of function declaration to json representation.
[ "Converts", "parameters", "and", "return", "type", "of", "function", "declaration", "to", "json", "representation", "." ]
def convert_func_types_to_type_for_json(functions, types):
    """Converts parameters and return type of function declaration to json
    representation."""
    for name, f_info in functions.items():
        t = parse_type_to_type_for_json(f_info.ret_type_text, types)
        if t.type_hash not in types:
            types[t.type_hash] = t
        f_info.ret_type = t.type_hash
        parse_params_to_json_types(f_info.params_list, types)
[ "def", "convert_func_types_to_type_for_json", "(", "functions", ",", "types", ")", ":", "for", "name", ",", "f_info", "in", "functions", ".", "items", "(", ")", ":", "t", "=", "parse_type_to_type_for_json", "(", "f_info", ".", "ret_type_text", ",", "types", ")...
https://github.com/thalium/icebox/blob/99d147d5b9269222225443ce171b4fd46d8985d4/third_party/retdec-3.2/scripts/type_extractor/type_extractor/json_types.py#L316-L323
oracle/graaljs
36a56e8e993d45fc40939a3a4d9c0c24990720f1
graal-nodejs/deps/npm/node_modules/node-gyp/gyp/pylib/gyp/generator/make.py
python
MakefileWriter.ComputeOutput
(self, spec)
return os.path.join(path, self.ComputeOutputBasename(spec))
Return the 'output' (full output path) of a gyp spec. E.g., the loadable module 'foobar' in directory 'baz' will produce '$(obj)/baz/libfoobar.so'
Return the 'output' (full output path) of a gyp spec.
[ "Return", "the", "output", "(", "full", "output", "path", ")", "of", "a", "gyp", "spec", "." ]
def ComputeOutput(self, spec):
    """Return the 'output' (full output path) of a gyp spec.

    E.g., the loadable module 'foobar' in directory 'baz' will produce
    '$(obj)/baz/libfoobar.so'
    """
    assert not self.is_mac_bundle

    path = os.path.join("$(obj)." + self.toolset, self.path)
    if self.type == "executable" or self._InstallImmediately():
        path = "$(builddir)"
    path = spec.get("product_dir", path)
    return os.path.join(path, self.ComputeOutputBasename(spec))
[ "def", "ComputeOutput", "(", "self", ",", "spec", ")", ":", "assert", "not", "self", ".", "is_mac_bundle", "path", "=", "os", ".", "path", ".", "join", "(", "\"$(obj).\"", "+", "self", ".", "toolset", ",", "self", ".", "path", ")", "if", "self", ".",...
https://github.com/oracle/graaljs/blob/36a56e8e993d45fc40939a3a4d9c0c24990720f1/graal-nodejs/deps/npm/node_modules/node-gyp/gyp/pylib/gyp/generator/make.py#L1512-L1524
catboost/catboost
167f64f237114a4d10b2b4ee42adb4569137debe
contrib/python/scipy/py2/scipy/linalg/_solvers.py
python
_are_validate_args
(a, b, q, r, e, s, eq_type='care')
return a, b, q, r, e, s, m, n, r_or_c, generalized_case
A helper function to validate the arguments supplied to the Riccati equation solvers. Any discrepancy found in the input matrices leads to a ``ValueError`` exception. Essentially, it performs: - a check whether the input is free of NaN and Infs. - a pass for the data through ``numpy.atleast_2d()`` - squareness check of the relevant arrays, - shape consistency check of the arrays, - singularity check of the relevant arrays, - symmetricity check of the relevant matrices, - a check whether the regular or the generalized version is asked. This function is used by ``solve_continuous_are`` and ``solve_discrete_are``. Parameters ---------- a, b, q, r, e, s : array_like Input data eq_type : str Accepted arguments are 'care' and 'dare'. Returns ------- a, b, q, r, e, s : ndarray Regularized input data m, n : int shape of the problem r_or_c : type Data type of the problem, returns float or complex gen_or_not : bool Type of the equation, True for generalized and False for regular ARE.
A helper function to validate the arguments supplied to the Riccati equation solvers. Any discrepancy found in the input matrices leads to a ``ValueError`` exception.
[ "A", "helper", "function", "to", "validate", "the", "arguments", "supplied", "to", "the", "Riccati", "equation", "solvers", ".", "Any", "discrepancy", "found", "in", "the", "input", "matrices", "leads", "to", "a", "ValueError", "exception", "." ]
def _are_validate_args(a, b, q, r, e, s, eq_type='care'):
    """
    A helper function to validate the arguments supplied to the
    Riccati equation solvers. Any discrepancy found in the input
    matrices leads to a ``ValueError`` exception.

    Essentially, it performs:

        - a check whether the input is free of NaN and Infs.
        - a pass for the data through ``numpy.atleast_2d()``
        - squareness check of the relevant arrays,
        - shape consistency check of the arrays,
        - singularity check of the relevant arrays,
        - symmetricity check of the relevant matrices,
        - a check whether the regular or the generalized version is asked.

    This function is used by ``solve_continuous_are`` and
    ``solve_discrete_are``.

    Parameters
    ----------
    a, b, q, r, e, s : array_like
        Input data
    eq_type : str
        Accepted arguments are 'care' and 'dare'.

    Returns
    -------
    a, b, q, r, e, s : ndarray
        Regularized input data
    m, n : int
        shape of the problem
    r_or_c : type
        Data type of the problem, returns float or complex
    gen_or_not : bool
        Type of the equation, True for generalized and False for regular ARE.

    """

    if not eq_type.lower() in ('dare', 'care'):
        raise ValueError("Equation type unknown. "
                         "Only 'care' and 'dare' is understood")

    a = np.atleast_2d(_asarray_validated(a, check_finite=True))
    b = np.atleast_2d(_asarray_validated(b, check_finite=True))
    q = np.atleast_2d(_asarray_validated(q, check_finite=True))
    r = np.atleast_2d(_asarray_validated(r, check_finite=True))

    # Get the correct data types otherwise Numpy complains
    # about pushing complex numbers into real arrays.
    r_or_c = complex if np.iscomplexobj(b) else float

    for ind, mat in enumerate((a, q, r)):
        if np.iscomplexobj(mat):
            r_or_c = complex

        if not np.equal(*mat.shape):
            raise ValueError("Matrix {} should be square.".format("aqr"[ind]))

    # Shape consistency checks
    m, n = b.shape
    if m != a.shape[0]:
        raise ValueError("Matrix a and b should have the same number of rows.")
    if m != q.shape[0]:
        raise ValueError("Matrix a and q should have the same shape.")
    if n != r.shape[0]:
        raise ValueError("Matrix b and r should have the same number of cols.")

    # Check if the data matrices q, r are (sufficiently) hermitian
    for ind, mat in enumerate((q, r)):
        if norm(mat - mat.conj().T, 1) > np.spacing(norm(mat, 1))*100:
            raise ValueError("Matrix {} should be symmetric/hermitian."
                             "".format("qr"[ind]))

    # Continuous time ARE should have a nonsingular r matrix.
    if eq_type == 'care':
        min_sv = svd(r, compute_uv=False)[-1]
        if min_sv == 0. or min_sv < np.spacing(1.)*norm(r, 1):
            raise ValueError('Matrix r is numerically singular.')

    # Check if the generalized case is required with omitted arguments
    # perform late shape checking etc.
    generalized_case = e is not None or s is not None

    if generalized_case:
        if e is not None:
            e = np.atleast_2d(_asarray_validated(e, check_finite=True))
            if not np.equal(*e.shape):
                raise ValueError("Matrix e should be square.")
            if m != e.shape[0]:
                raise ValueError("Matrix a and e should have the same shape.")
            # numpy.linalg.cond doesn't check for exact zeros and
            # emits a runtime warning. Hence the following manual check.
            min_sv = svd(e, compute_uv=False)[-1]
            if min_sv == 0. or min_sv < np.spacing(1.) * norm(e, 1):
                raise ValueError('Matrix e is numerically singular.')
            if np.iscomplexobj(e):
                r_or_c = complex
        if s is not None:
            s = np.atleast_2d(_asarray_validated(s, check_finite=True))
            if s.shape != b.shape:
                raise ValueError("Matrix b and s should have the same shape.")
            if np.iscomplexobj(s):
                r_or_c = complex

    return a, b, q, r, e, s, m, n, r_or_c, generalized_case
[ "def", "_are_validate_args", "(", "a", ",", "b", ",", "q", ",", "r", ",", "e", ",", "s", ",", "eq_type", "=", "'care'", ")", ":", "if", "not", "eq_type", ".", "lower", "(", ")", "in", "(", "'dare'", ",", "'care'", ")", ":", "raise", "ValueError",...
https://github.com/catboost/catboost/blob/167f64f237114a4d10b2b4ee42adb4569137debe/contrib/python/scipy/py2/scipy/linalg/_solvers.py#L739-L844
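The hermitian test in `_are_validate_args` compares the matrix 1-norm of `M - M^H` against 100 units of floating-point spacing of `||M||_1`. A pure-Python sketch of the same tolerance test for real matrices, using `math.ulp` in place of `np.spacing` (that substitution, and the function names, are this sketch's assumptions):

```python
import math

def one_norm(m):
    # Matrix 1-norm: maximum absolute column sum.
    return max(sum(abs(m[i][j]) for i in range(len(m)))
               for j in range(len(m[0])))

def is_sufficiently_symmetric(m):
    # Mirror of the validator's check: ||M - M^T||_1 <= 100 ulp(||M||_1).
    n = len(m)
    diff = [[m[i][j] - m[j][i] for j in range(n)] for i in range(n)]
    return one_norm(diff) <= math.ulp(one_norm(m)) * 100

print(is_sufficiently_symmetric([[2.0, 1.0], [1.0, 3.0]]))  # True
print(is_sufficiently_symmetric([[0.0, 1.0], [0.0, 0.0]]))  # False
```

Tying the tolerance to the magnitude of `M` itself (rather than a fixed epsilon) makes the check scale-invariant, which is why the validator uses `np.spacing(norm(mat, 1))` instead of a constant.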
mantidproject/mantid
03deeb89254ec4289edb8771e0188c2090a02f32
scripts/abins/input/casteploader.py
python
CASTEPLoader.read_vibrational_or_phonon_data
(self)
return self._rearrange_data(data=file_data)
Reads frequencies, weights of k-point vectors, k-point vectors, amplitudes of atomic displacements from a <>.phonon file. Save frequencies, weights of k-point vectors, k-point vectors, amplitudes of atomic displacements, hash of the phonon file (hash) to <>.hdf5 :returns: object of type AbinsData.
Reads frequencies, weights of k-point vectors, k-point vectors, amplitudes of atomic displacements from a <>.phonon file. Save frequencies, weights of k-point vectors, k-point vectors, amplitudes of atomic displacements, hash of the phonon file (hash) to <>.hdf5
[ "Reads", "frequencies", "weights", "of", "k", "-", "point", "vectors", "k", "-", "point", "vectors", "amplitudes", "of", "atomic", "displacements", "from", "a", "<", ">", ".", "phonon", "file", ".", "Save", "frequencies", "weights", "of", "k", "-", "point"...
def read_vibrational_or_phonon_data(self):
    """
    Reads frequencies, weights of k-point vectors, k-point vectors, amplitudes of atomic displacements
    from a <>.phonon file. Save frequencies, weights of k-point vectors, k-point vectors, amplitudes of
    atomic displacements, hash of the phonon file (hash) to <>.hdf5

    :returns: object of type AbinsData.
    """
    file_data = {}

    # Header regex. Looks for lines in the following format:
    # q-pt= 1 0.000000 0.000000 0.000000 1.0000000000 0.000000 0.000000 1.000000
    sum_rule_header = r"^ +q-pt=\s+\d+ +(%(s)s) +(%(s)s) +(%(s)s) +(%(s)s) + " \
                      r"(%(s)s) + (%(s)s) + (%(s)s)" % {'s': self._float_regex}
    no_sum_rule_header = r"^ +q-pt=\s+\d+ +(%(s)s) +(%(s)s) +(%(s)s) +(%(s)s)" % {'s': self._float_regex}

    if self._check_acoustic_sum():
        header_regex_str = sum_rule_header
        self._sum_rule = True
    else:
        header_regex_str = no_sum_rule_header
        self._sum_rule = False

    compiled_header_regex_str = re.compile(header_regex_str)
    compiled_no_sum_rule_header = re.compile(no_sum_rule_header)
    headers = {True: compiled_header_regex_str, False: compiled_no_sum_rule_header}

    eigenvectors_regex = re.compile(r"\s*Mode\s+Ion\s+X\s+Y\s+Z\s*")
    block_count = 0

    frequencies, weights, k_vectors, eigenvectors = [], [], [], []
    with open(self._clerk.get_input_filename(), "r") as f_handle:
        file_data.update(self._parse_phonon_file_header(f_handle))
        header_found = False

        while True:
            line = f_handle.readline()
            # Check we've reached the end of file
            if not line:
                break

            # Check if we've found a block of frequencies
            header_match = headers[block_count == 0].match(line)
            if header_match:
                header_found = True
                weight, k_vector = self._parse_block_header(header_match, block_count)
                weights.append(weight)
                k_vectors.append(k_vector)

                # Parse block of frequencies
                frequencies.append(self._parse_phonon_freq_block(f_handle=f_handle))
                block_count += 1

            vector_match = eigenvectors_regex.match(line)
            if vector_match and header_found:
                header_found = False
                vectors = self._parse_phonon_eigenvectors(f_handle=f_handle)
                eigenvectors.append(vectors)

    # normalise eigenvectors:
    # atomic displacements: [num_k, num_atom, num_freq, dim]
    disp = np.asarray(eigenvectors)
    # num_k: number of k-points
    # num_atom: number of atoms
    # num_freq: number of frequencies
    # dim: dimension (atoms vibrate in 3D so dim=3)
    # [num_k, num_atom, num_freq, dim] -> [num_k, num_atom, num_freq, dim, dim] -> [num_k, num_atom, num_freq]
    # -> [num_k, num_freq]
    norm = np.sum(np.trace(np.einsum('lkin, lkim->lkinm', disp, disp.conjugate()), axis1=3, axis2=4), axis=1)
    factor = 1.0 / np.sqrt(norm)
    disp = np.einsum('ijkl, ik-> ijkl', disp, factor)

    file_data.update({"frequencies": np.asarray(frequencies),
                      "weights": np.asarray(weights),
                      "k_vectors": np.asarray(k_vectors),
                      "atomic_displacements": disp
                      })

    # save stuff to hdf file
    data_to_save = ["frequencies", "weights", "k_vectors", "atomic_displacements", "unit_cell", "atoms"]
    data = {}
    for key in data_to_save:
        data[key] = file_data[key]
    self.save_ab_initio_data(data=data)

    return self._rearrange_data(data=file_data)
[ "def", "read_vibrational_or_phonon_data", "(", "self", ")", ":", "file_data", "=", "{", "}", "# Header regex. Looks for lines in the following format:", "# q-pt= 1 0.000000 0.000000 0.000000 1.0000000000 0.000000 0.000000 1.000000", "sum_rule_header", "=", "r\"^ +q-...
https://github.com/mantidproject/mantid/blob/03deeb89254ec4289edb8771e0188c2090a02f32/scripts/abins/input/casteploader.py#L188-L276
intel/caffe
3f494b442ee3f9d17a07b09ecbd5fa2bbda00836
examples/pycaffe/layers/pascal_multilabel_datalayers.py
python
PascalMultilabelDataLayerSync.forward
(self, bottom, top)
Load data.
Load data.
[ "Load", "data", "." ]
def forward(self, bottom, top):
    """
    Load data.
    """
    for itt in range(self.batch_size):
        # Use the batch loader to load the next image.
        im, multilabel = self.batch_loader.load_next_image()

        # Add directly to the caffe data layer
        top[0].data[itt, ...] = im
        top[1].data[itt, ...] = multilabel
[ "def", "forward", "(", "self", ",", "bottom", ",", "top", ")", ":", "for", "itt", "in", "range", "(", "self", ".", "batch_size", ")", ":", "# Use the batch loader to load the next image.", "im", ",", "multilabel", "=", "self", ".", "batch_loader", ".", "load...
https://github.com/intel/caffe/blob/3f494b442ee3f9d17a07b09ecbd5fa2bbda00836/examples/pycaffe/layers/pascal_multilabel_datalayers.py#L91-L101
shoaibrayeen/Programmers-Community
1d352fb3e6ac5e2e7d9472d90527bdcc8d5ec355
Data Structure/Stack/Max Area under a Histogram/SolutionByShreayan.py
python
largest_histogram_area
(histogram: List[int])
return max_area
Args: histogram: List[int] - an array of integers histograms representing the histogram's bar where the width of each bar is 1 Returns: max_area: int - the area of the largest rectangle in the histogram Given an array of integers histograms representing the histogram's bar histogram where the width of each bar is 1, return the area of the largest rectangle in the histogram. Testcases: >>> largest_histogram_area([2,1,5,6,2,3]) 10 >>> largest_histogram_area([2,4]) 4 >>> largest_histogram_area([-1,2,4,-8]) Traceback (most recent call last): ... AssertionError: Histogram can have only positive bars
Args: histogram: List[int] - an array of integers histograms representing the histogram's bar where the width of each bar is 1 Returns: max_area: int - the area of the largest rectangle in the histogram Given an array of integers histograms representing the histogram's bar histogram where the width of each bar is 1, return the area of the largest rectangle in the histogram.
[ "Args", ":", "histogram", ":", "List", "[", "int", "]", "-", "an", "array", "of", "integers", "histograms", "representing", "the", "histogram", "s", "bar", "where", "the", "width", "of", "each", "bar", "is", "1", "Returns", ":", "max_area", ":", "int", ...
def largest_histogram_area(histogram: List[int]) -> int: """ Args: histogram: List[int] - an array of integers histograms representing the histogram's bar where the width of each bar is 1 Returns: max_area: int - the area of the largest rectangle in the histogram Given an array of integers histograms representing the histogram's bar histogram where the width of each bar is 1, return the area of the largest rectangle in the histogram. Testcases: >>> largest_histogram_area([2,1,5,6,2,3]) 10 >>> largest_histogram_area([2,4]) 4 >>> largest_histogram_area([-1,2,4,-8]) Traceback (most recent call last): ... AssertionError: Histogram can have only positive bars """ assert (isinstance(histogram, list) and all(bar > 0 for bar in histogram)), f"Histogram can have only positive bars" histogram.append(0) # the stack maintain the indexes of buildings with ascending height. # before adding a new building pop the building who is taller than the new one. stack = [-1] max_area = 0 for i in range(len(histogram)): while histogram[i] < histogram[stack[-1]]: # The building popped out represent the height of a rectangle with the new building as the right # boundary and the current stack top as the left boundary. h = histogram[stack.pop()] # Calculate its area and update max_area of maximum area. w = i - stack[-1] - 1 max_area = max(max_area, h * w) stack.append(i) histogram.pop() return max_area
[ "def", "largest_histogram_area", "(", "histogram", ":", "List", "[", "int", "]", ")", "->", "int", ":", "assert", "(", "isinstance", "(", "histogram", ",", "list", ")", "and", "all", "(", "bar", ">", "0", "for", "bar", "in", "histogram", ")", ")", ",...
https://github.com/shoaibrayeen/Programmers-Community/blob/1d352fb3e6ac5e2e7d9472d90527bdcc8d5ec355/Data Structure/Stack/Max Area under a Histogram/SolutionByShreayan.py#L21-L57
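The monotonic-stack invariant in the record above can be checked with a compact standalone version (doctests trimmed; the sentinel `0` and the `stack = [-1]` base are the same trick as in the record):

```python
from typing import List

def largest_histogram_area(histogram: List[int]) -> int:
    # Monotonic stack of bar indexes with ascending heights.
    assert isinstance(histogram, list) and all(bar > 0 for bar in histogram), \
        "Histogram can have only positive bars"
    histogram = histogram + [0]          # sentinel bar flushes the stack at the end
    stack = [-1]                         # base marker; histogram[-1] is the sentinel 0
    max_area = 0
    for i, h_i in enumerate(histogram):
        while h_i < histogram[stack[-1]]:
            h = histogram[stack.pop()]   # popped bar fixes the rectangle height
            w = i - stack[-1] - 1        # width spans between the two remaining neighbours
            max_area = max(max_area, h * w)
        stack.append(i)
    return max_area

assert largest_histogram_area([2, 1, 5, 6, 2, 3]) == 10
assert largest_histogram_area([2, 4]) == 4
```

Each bar is pushed and popped at most once, so the whole scan is O(n) despite the nested loop.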
wlanjie/AndroidFFmpeg
7baf9122f4b8e1c74e7baf4be5c422c7a5ba5aaf
tools/fdk-aac-build/armeabi/toolchain/lib/python2.7/email/feedparser.py
python
FeedParser.close
(self)
return root
Parse all remaining data and return the root message object.
Parse all remaining data and return the root message object.
[ "Parse", "all", "remaining", "data", "and", "return", "the", "root", "message", "object", "." ]
def close(self): """Parse all remaining data and return the root message object.""" self._input.close() self._call_parse() root = self._pop_message() assert not self._msgstack # Look for final set of defects if root.get_content_maintype() == 'multipart' \ and not root.is_multipart(): root.defects.append(errors.MultipartInvariantViolationDefect()) return root
[ "def", "close", "(", "self", ")", ":", "self", ".", "_input", ".", "close", "(", ")", "self", ".", "_call_parse", "(", ")", "root", "=", "self", ".", "_pop_message", "(", ")", "assert", "not", "self", ".", "_msgstack", "# Look for final set of defects", ...
https://github.com/wlanjie/AndroidFFmpeg/blob/7baf9122f4b8e1c74e7baf4be5c422c7a5ba5aaf/tools/fdk-aac-build/armeabi/toolchain/lib/python2.7/email/feedparser.py#L165-L175
SFTtech/openage
d6a08c53c48dc1e157807471df92197f6ca9e04d
openage/convert/value_object/read/media/datfile/unit.py
python
BuildingAnnex.get_data_format_members
(cls, game_version)
return data_format
Return the members in this struct.
Return the members in this struct.
[ "Return", "the", "members", "in", "this", "struct", "." ]
def get_data_format_members(cls, game_version): """ Return the members in this struct. """ data_format = [ (READ_GEN, "unit_id", StorageType.ID_MEMBER, "int16_t"), (READ_GEN, "misplaced0", StorageType.FLOAT_MEMBER, "float"), (READ_GEN, "misplaced1", StorageType.FLOAT_MEMBER, "float"), ] return data_format
[ "def", "get_data_format_members", "(", "cls", ",", "game_version", ")", ":", "data_format", "=", "[", "(", "READ_GEN", ",", "\"unit_id\"", ",", "StorageType", ".", "ID_MEMBER", ",", "\"int16_t\"", ")", ",", "(", "READ_GEN", ",", "\"misplaced0\"", ",", "Storage...
https://github.com/SFTtech/openage/blob/d6a08c53c48dc1e157807471df92197f6ca9e04d/openage/convert/value_object/read/media/datfile/unit.py#L210-L220
aws/lumberyard
f85344403c1c2e77ec8c75deb2c116e97b713217
dev/Tools/Python/3.7.10/linux_x64/lib/python3.7/cgi.py
python
print_environ_usage
()
Dump a list of environment variables used by CGI as HTML.
Dump a list of environment variables used by CGI as HTML.
[ "Dump", "a", "list", "of", "environment", "variables", "used", "by", "CGI", "as", "HTML", "." ]
def print_environ_usage(): """Dump a list of environment variables used by CGI as HTML.""" print(""" <H3>These environment variables could have been set:</H3> <UL> <LI>AUTH_TYPE <LI>CONTENT_LENGTH <LI>CONTENT_TYPE <LI>DATE_GMT <LI>DATE_LOCAL <LI>DOCUMENT_NAME <LI>DOCUMENT_ROOT <LI>DOCUMENT_URI <LI>GATEWAY_INTERFACE <LI>LAST_MODIFIED <LI>PATH <LI>PATH_INFO <LI>PATH_TRANSLATED <LI>QUERY_STRING <LI>REMOTE_ADDR <LI>REMOTE_HOST <LI>REMOTE_IDENT <LI>REMOTE_USER <LI>REQUEST_METHOD <LI>SCRIPT_NAME <LI>SERVER_NAME <LI>SERVER_PORT <LI>SERVER_PROTOCOL <LI>SERVER_ROOT <LI>SERVER_SOFTWARE </UL> In addition, HTTP headers sent by the server may be passed in the environment as well. Here are some common variable names: <UL> <LI>HTTP_ACCEPT <LI>HTTP_CONNECTION <LI>HTTP_HOST <LI>HTTP_PRAGMA <LI>HTTP_REFERER <LI>HTTP_USER_AGENT </UL> """)
[ "def", "print_environ_usage", "(", ")", ":", "print", "(", "\"\"\"\n<H3>These environment variables could have been set:</H3>\n<UL>\n<LI>AUTH_TYPE\n<LI>CONTENT_LENGTH\n<LI>CONTENT_TYPE\n<LI>DATE_GMT\n<LI>DATE_LOCAL\n<LI>DOCUMENT_NAME\n<LI>DOCUMENT_ROOT\n<LI>DOCUMENT_URI\n<LI>GATEWAY_INTERFACE\n<LI>LAS...
https://github.com/aws/lumberyard/blob/f85344403c1c2e77ec8c75deb2c116e97b713217/dev/Tools/Python/3.7.10/linux_x64/lib/python3.7/cgi.py#L958-L999
aws/lumberyard
f85344403c1c2e77ec8c75deb2c116e97b713217
dev/Tools/Python/3.7.10/linux_x64/lib/python3.7/site-packages/pip/_vendor/distlib/_backport/tarfile.py
python
ExFileObject.close
(self)
Close the file object.
Close the file object.
[ "Close", "the", "file", "object", "." ]
def close(self): """Close the file object. """ self.closed = True
[ "def", "close", "(", "self", ")", ":", "self", ".", "closed", "=", "True" ]
https://github.com/aws/lumberyard/blob/f85344403c1c2e77ec8c75deb2c116e97b713217/dev/Tools/Python/3.7.10/linux_x64/lib/python3.7/site-packages/pip/_vendor/distlib/_backport/tarfile.py#L1809-L1815
facebook/ThreatExchange
31914a51820c73c8a0daffe62ccca29a6e3d359e
hasher-matcher-actioner/hmalib/common/s3_adapters.py
python
ThreatUpdateS3Store.get_signal_type_from_object_key
( cls, key: str )
return None
Inverses get_s3_object_key. Given an object key (potentially generated by this class), extracts the extension, compares that against known signal_types to see if any of them have the same indicator_type and returns that signal_type.
Inverses get_s3_object_key. Given an object key (potentially generated by this class), extracts the extension, compares that against known signal_types to see if any of them have the same indicator_type and returns that signal_type.
[ "Inverses", "get_s3_object_key", ".", "Given", "an", "object", "key", "(", "potentially", "generated", "by", "this", "class", ")", "extracts", "the", "extension", "compares", "that", "against", "known", "signal_types", "to", "see", "if", "any", "of", "them", "...
def get_signal_type_from_object_key( cls, key: str ) -> t.Optional[t.Type[SignalType]]: """ Inverses get_s3_object_key. Given an object key (potentially generated by this class), extracts the extension, compares that against known signal_types to see if any of them have the same indicator_type and returns that signal_type. """ # given s3://<foo_bucket>/threat_exchange_data/258601789084078.hash_pdq.te # .te and everything other than hash_pdq can be ignored. try: _, extension, _ = key.rsplit(".", 2) except ValueError: # key does not meet the structure necessary. Impossible to determine # signal_type return None for signal_type in KNOWN_SIGNAL_TYPES: if signal_type.INDICATOR_TYPE.lower() == extension: return signal_type # Hardcode for HASH_VIDEO_MD5 because threatexchange's VideoMD5 still # has HASH_MD5 as indicator_type if extension == "hash_video_md5": return VideoMD5Signal return None
[ "def", "get_signal_type_from_object_key", "(", "cls", ",", "key", ":", "str", ")", "->", "t", ".", "Optional", "[", "t", ".", "Type", "[", "SignalType", "]", "]", ":", "# given s3://<foo_bucket>/threat_exchange_data/258601789084078.hash_pdq.te", "# .te and everything ot...
https://github.com/facebook/ThreatExchange/blob/31914a51820c73c8a0daffe62ccca29a6e3d359e/hasher-matcher-actioner/hmalib/common/s3_adapters.py#L306-L333
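The key-parsing step in the record above hinges on `rsplit(".", 2)` unpacking into exactly three pieces. A standalone sketch of just that extraction (the helper name here is illustrative, not from the HMA codebase):

```python
def extension_of(key):
    # Pull the middle token out of "<id>.<indicator_type>.te" style object keys.
    try:
        _, extension, _ = key.rsplit(".", 2)
    except ValueError:
        return None          # key does not have the expected three-part structure
    return extension

assert extension_of("threat_exchange_data/258601789084078.hash_pdq.te") == "hash_pdq"
assert extension_of("no-dots-here") is None
```

Splitting from the right means extra dots earlier in the path (or the numeric id) cannot shift the extension token.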
hanpfei/chromium-net
392cc1fa3a8f92f42e4071ab6e674d8e0482f83f
tools/android/loading/cloud/frontend/clovis_frontend.py
python
GetTracePaths
(bucket)
return traces
Returns a list of trace files in a bucket. Finds and loads the trace databases, and returns their content as a list of paths. This function assumes a specific structure for the files in the bucket. These assumptions must match the behavior of the backend: - The trace databases are located in the bucket. - The trace databases files are the only objects with the TRACE_DATABASE_PREFIX prefix in their name. Returns: list: The list of paths to traces, as strings.
Returns a list of trace files in a bucket.
[ "Returns", "a", "list", "of", "trace", "files", "in", "a", "bucket", "." ]
def GetTracePaths(bucket): """Returns a list of trace files in a bucket. Finds and loads the trace databases, and returns their content as a list of paths. This function assumes a specific structure for the files in the bucket. These assumptions must match the behavior of the backend: - The trace databases are located in the bucket. - The trace databases files are the only objects with the TRACE_DATABASE_PREFIX prefix in their name. Returns: list: The list of paths to traces, as strings. """ traces = [] prefix = os.path.join('/', bucket, common.clovis_paths.TRACE_DATABASE_PREFIX) file_stats = cloudstorage.listbucket(prefix) for file_stat in file_stats: database_file = file_stat.filename clovis_logger.info('Loading trace database: ' + database_file) with cloudstorage.open(database_file) as remote_file: json_string = remote_file.read() if not json_string: clovis_logger.warning('Failed to download: ' + database_file) continue database = LoadingTraceDatabase.FromJsonString(json_string) if not database: clovis_logger.warning('Failed to parse: ' + database_file) continue for path in database.ToJsonDict(): traces.append(path) return traces
[ "def", "GetTracePaths", "(", "bucket", ")", ":", "traces", "=", "[", "]", "prefix", "=", "os", ".", "path", ".", "join", "(", "'/'", ",", "bucket", ",", "common", ".", "clovis_paths", ".", "TRACE_DATABASE_PREFIX", ")", "file_stats", "=", "cloudstorage", ...
https://github.com/hanpfei/chromium-net/blob/392cc1fa3a8f92f42e4071ab6e674d8e0482f83f/tools/android/loading/cloud/frontend/clovis_frontend.py#L236-L273
catboost/catboost
167f64f237114a4d10b2b4ee42adb4569137debe
contrib/python/scikit-learn/py2/sklearn/externals/six.py
python
iterlists
(d, **kw)
return iter(getattr(d, _iterlists)(**kw))
Return an iterator over the (key, [values]) pairs of a dictionary.
Return an iterator over the (key, [values]) pairs of a dictionary.
[ "Return", "an", "iterator", "over", "the", "(", "key", "[", "values", "]", ")", "pairs", "of", "a", "dictionary", "." ]
def iterlists(d, **kw): """Return an iterator over the (key, [values]) pairs of a dictionary.""" return iter(getattr(d, _iterlists)(**kw))
[ "def", "iterlists", "(", "d", ",", "*", "*", "kw", ")", ":", "return", "iter", "(", "getattr", "(", "d", ",", "_iterlists", ")", "(", "*", "*", "kw", ")", ")" ]
https://github.com/catboost/catboost/blob/167f64f237114a4d10b2b4ee42adb4569137debe/contrib/python/scikit-learn/py2/sklearn/externals/six.py#L441-L443
aws/lumberyard
f85344403c1c2e77ec8c75deb2c116e97b713217
dev/Gems/CloudGemMetric/v1/AWS/common-code/Lib/numpy/lib/function_base.py
python
trim_zeros
(filt, trim='fb')
return filt[first:last]
Trim the leading and/or trailing zeros from a 1-D array or sequence. Parameters ---------- filt : 1-D array or sequence Input array. trim : str, optional A string with 'f' representing trim from front and 'b' to trim from back. Default is 'fb', trim zeros from both front and back of the array. Returns ------- trimmed : 1-D array or sequence The result of trimming the input. The input data type is preserved. Examples -------- >>> a = np.array((0, 0, 0, 1, 2, 3, 0, 2, 1, 0)) >>> np.trim_zeros(a) array([1, 2, 3, 0, 2, 1]) >>> np.trim_zeros(a, 'b') array([0, 0, 0, ..., 0, 2, 1]) The input data type is preserved, list/tuple in means list/tuple out. >>> np.trim_zeros([0, 1, 2, 0]) [1, 2]
Trim the leading and/or trailing zeros from a 1-D array or sequence.
[ "Trim", "the", "leading", "and", "/", "or", "trailing", "zeros", "from", "a", "1", "-", "D", "array", "or", "sequence", "." ]
def trim_zeros(filt, trim='fb'): """ Trim the leading and/or trailing zeros from a 1-D array or sequence. Parameters ---------- filt : 1-D array or sequence Input array. trim : str, optional A string with 'f' representing trim from front and 'b' to trim from back. Default is 'fb', trim zeros from both front and back of the array. Returns ------- trimmed : 1-D array or sequence The result of trimming the input. The input data type is preserved. Examples -------- >>> a = np.array((0, 0, 0, 1, 2, 3, 0, 2, 1, 0)) >>> np.trim_zeros(a) array([1, 2, 3, 0, 2, 1]) >>> np.trim_zeros(a, 'b') array([0, 0, 0, ..., 0, 2, 1]) The input data type is preserved, list/tuple in means list/tuple out. >>> np.trim_zeros([0, 1, 2, 0]) [1, 2] """ first = 0 trim = trim.upper() if 'F' in trim: for i in filt: if i != 0.: break else: first = first + 1 last = len(filt) if 'B' in trim: for i in filt[::-1]: if i != 0.: break else: last = last - 1 return filt[first:last]
[ "def", "trim_zeros", "(", "filt", ",", "trim", "=", "'fb'", ")", ":", "first", "=", "0", "trim", "=", "trim", ".", "upper", "(", ")", "if", "'F'", "in", "trim", ":", "for", "i", "in", "filt", ":", "if", "i", "!=", "0.", ":", "break", "else", ...
https://github.com/aws/lumberyard/blob/f85344403c1c2e77ec8c75deb2c116e97b713217/dev/Gems/CloudGemMetric/v1/AWS/common-code/Lib/numpy/lib/function_base.py#L1574-L1622
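The `'f'`/`'b'` flag handling in the `trim_zeros` record above reduces to two directional scans. A minimal pure-Python sketch of the same logic (the NumPy version additionally preserves array types; this one only handles plain sequences):

```python
def trim_zeros(filt, trim='fb'):
    first, last = 0, len(filt)
    if 'F' in trim.upper():          # walk forward past leading zeros
        for i in filt:
            if i != 0:
                break
            first += 1
    if 'B' in trim.upper():          # walk backward past trailing zeros
        for i in filt[::-1]:
            if i != 0:
                break
            last -= 1
    return filt[first:last]

assert trim_zeros([0, 0, 1, 2, 3, 0]) == [1, 2, 3]
assert trim_zeros([0, 0, 1, 2, 3, 0], 'b') == [0, 0, 1, 2, 3]
assert trim_zeros([0, 1, 2, 0], 'f') == [1, 2, 0]
```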
facebook/fboss
60063db1df37c2ec0e7dcd0955c54885ea9bf7f0
build/fbcode_builder/getdeps/cache.py
python
ArtifactCache.download_to_file
(self, name, dest_file_name)
return False
If `name` exists in the cache, download it and place it in the specified `dest_file_name` location on the filesystem. If a transient issue was encountered a TransientFailure shall be raised. If `name` doesn't exist in the cache `False` shall be returned. If `dest_file_name` was successfully updated `True` shall be returned. All other conditions shall raise an appropriate exception.
If `name` exists in the cache, download it and place it in the specified `dest_file_name` location on the filesystem. If a transient issue was encountered a TransientFailure shall be raised. If `name` doesn't exist in the cache `False` shall be returned. If `dest_file_name` was successfully updated `True` shall be returned. All other conditions shall raise an appropriate exception.
[ "If", "name", "exists", "in", "the", "cache", "download", "it", "and", "place", "it", "in", "the", "specified", "dest_file_name", "location", "on", "the", "filesystem", ".", "If", "a", "transient", "issue", "was", "encountered", "a", "TransientFailure", "shall...
def download_to_file(self, name, dest_file_name): """If `name` exists in the cache, download it and place it in the specified `dest_file_name` location on the filesystem. If a transient issue was encountered a TransientFailure shall be raised. If `name` doesn't exist in the cache `False` shall be returned. If `dest_file_name` was successfully updated `True` shall be returned. All other conditions shall raise an appropriate exception.""" return False
[ "def", "download_to_file", "(", "self", ",", "name", ",", "dest_file_name", ")", ":", "return", "False" ]
https://github.com/facebook/fboss/blob/60063db1df37c2ec0e7dcd0955c54885ea9bf7f0/build/fbcode_builder/getdeps/cache.py#L13-L22
mindspore-ai/mindspore
fb8fd3338605bb34fa5cea054e535a8b1d753fab
mindspore/python/mindspore/communication/management.py
python
get_local_rank_size
(group=GlobalComm.WORLD_COMM_GROUP)
return _get_local_size_helper(group=_get_group(group), backend=GlobalComm.BACKEND)
Gets local rank size of the specified collective communication group. Note: GPU version of MindSpore doesn't support this method. This method should be used after init(). The user needs to preset communication environment variables before running the following example, please see the docstring of the mindspore.managerment. Args: group (str): The communication group to work on. The group is created by create_group or the default world communication group. Default: WORLD_COMM_GROUP. Returns: int, the local rank size where the calling process is within the group. Raises: TypeError: If group is not a string. ValueError: If backend is invalid. RuntimeError: If HCCL is not available or MindSpore is GPU version. Examples: >>> from mindspore.context import set_context >>> from mindspore.communication.management import init, get_local_rank_size >>> set_context(device_target="Ascend", device_num=16) # 2 server, each server with 8 NPU. >>> init() >>> local_rank_size = get_local_rank_size() >>> print("local_rank_size is: ", local_rank_size) local_rank_size is: 8
Gets local rank size of the specified collective communication group.
[ "Gets", "local", "rank", "size", "of", "the", "specified", "collective", "communication", "group", "." ]
def get_local_rank_size(group=GlobalComm.WORLD_COMM_GROUP): """ Gets local rank size of the specified collective communication group. Note: GPU version of MindSpore doesn't support this method. This method should be used after init(). The user needs to preset communication environment variables before running the following example, please see the docstring of the mindspore.managerment. Args: group (str): The communication group to work on. The group is created by create_group or the default world communication group. Default: WORLD_COMM_GROUP. Returns: int, the local rank size where the calling process is within the group. Raises: TypeError: If group is not a string. ValueError: If backend is invalid. RuntimeError: If HCCL is not available or MindSpore is GPU version. Examples: >>> from mindspore.context import set_context >>> from mindspore.communication.management import init, get_local_rank_size >>> set_context(device_target="Ascend", device_num=16) # 2 server, each server with 8 NPU. >>> init() >>> local_rank_size = get_local_rank_size() >>> print("local_rank_size is: ", local_rank_size) local_rank_size is: 8 """ if not isinstance(group, str): raise TypeError("For 'get_local_rank_size', the argument 'group' must be type of string, " "but got 'group' type : {}.".format(type(group))) return _get_local_size_helper(group=_get_group(group), backend=GlobalComm.BACKEND)
[ "def", "get_local_rank_size", "(", "group", "=", "GlobalComm", ".", "WORLD_COMM_GROUP", ")", ":", "if", "not", "isinstance", "(", "group", ",", "str", ")", ":", "raise", "TypeError", "(", "\"For 'get_local_rank_size', the argument 'group' must be type of string, \"", "\...
https://github.com/mindspore-ai/mindspore/blob/fb8fd3338605bb34fa5cea054e535a8b1d753fab/mindspore/python/mindspore/communication/management.py#L277-L309
aws/lumberyard
f85344403c1c2e77ec8c75deb2c116e97b713217
dev/Tools/Python/3.7.10/mac/Python.framework/Versions/3.7/lib/python3.7/smtplib.py
python
SMTP.noop
(self)
return self.docmd("noop")
SMTP 'noop' command -- doesn't do anything :>
SMTP 'noop' command -- doesn't do anything :>
[ "SMTP", "noop", "command", "--", "doesn", "t", "do", "anything", ":", ">" ]
def noop(self): """SMTP 'noop' command -- doesn't do anything :>""" return self.docmd("noop")
[ "def", "noop", "(", "self", ")", ":", "return", "self", ".", "docmd", "(", "\"noop\"", ")" ]
https://github.com/aws/lumberyard/blob/f85344403c1c2e77ec8c75deb2c116e97b713217/dev/Tools/Python/3.7.10/mac/Python.framework/Versions/3.7/lib/python3.7/smtplib.py#L512-L514
windystrife/UnrealEngine_NVIDIAGameWorks
b50e6338a7c5b26374d66306ebc7807541ff815e
Engine/Extras/ThirdPartyNotUE/emsdk/Win64/python/2.7.5.3_64bit/Lib/site-packages/sipconfig.py
python
_Macro.extend
(self, value)
Append each element of a value to the macro. value is the list of elements to append.
Append each element of a value to the macro.
[ "Append", "each", "element", "of", "a", "value", "to", "the", "macro", "." ]
def extend(self, value): """Append each element of a value to the macro. value is the list of elements to append. """ for el in value: self.append(el)
[ "def", "extend", "(", "self", ",", "value", ")", ":", "for", "el", "in", "value", ":", "self", ".", "append", "(", "el", ")" ]
https://github.com/windystrife/UnrealEngine_NVIDIAGameWorks/blob/b50e6338a7c5b26374d66306ebc7807541ff815e/Engine/Extras/ThirdPartyNotUE/emsdk/Win64/python/2.7.5.3_64bit/Lib/site-packages/sipconfig.py#L285-L291
nileshkulkarni/csm
0e6e0e7d4f725fd36f2414c0be4b9d83197aa1fc
csm/utils/transformations.py
python
Arcball.__init__
(self, initial=None)
Initialize virtual trackball control. initial : quaternion or rotation matrix
Initialize virtual trackball control.
[ "Initialize", "virtual", "trackball", "control", "." ]
def __init__(self, initial=None): """Initialize virtual trackball control. initial : quaternion or rotation matrix """ self._axis = None self._axes = None self._radius = 1.0 self._center = [0.0, 0.0] self._vdown = numpy.array([0.0, 0.0, 1.0]) self._constrain = False if initial is None: self._qdown = numpy.array([1.0, 0.0, 0.0, 0.0]) else: initial = numpy.array(initial, dtype=numpy.float64) if initial.shape == (4, 4): self._qdown = quaternion_from_matrix(initial) elif initial.shape == (4, ): initial /= vector_norm(initial) self._qdown = initial else: raise ValueError("initial not a quaternion or matrix") self._qnow = self._qpre = self._qdown
[ "def", "__init__", "(", "self", ",", "initial", "=", "None", ")", ":", "self", ".", "_axis", "=", "None", "self", ".", "_axes", "=", "None", "self", ".", "_radius", "=", "1.0", "self", ".", "_center", "=", "[", "0.0", ",", "0.0", "]", "self", "."...
https://github.com/nileshkulkarni/csm/blob/0e6e0e7d4f725fd36f2414c0be4b9d83197aa1fc/csm/utils/transformations.py#L1538-L1561
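The length-4 branch of the `Arcball.__init__` record above normalises the input quaternion in place. A standalone sketch of just that branch (the 4x4 matrix branch needs `quaternion_from_matrix` from the transformations module and is omitted; the helper name is illustrative):

```python
import numpy as np

def init_quaternion(initial):
    # A length-4 input is treated as a quaternion and normalised to unit length;
    # anything else is rejected, mirroring the ValueError in the record above.
    initial = np.array(initial, dtype=np.float64)
    if initial.shape == (4,):
        return initial / np.linalg.norm(initial)   # stands in for vector_norm
    raise ValueError("initial not a quaternion or matrix")

q = init_quaternion([1.0, 1.0, 1.0, 1.0])
assert np.allclose(np.linalg.norm(q), 1.0)
assert np.allclose(q, 0.5)
```

Normalising up front keeps every subsequent rotation composed from `_qdown` a proper unit quaternion.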
tensorflow/tensorflow
419e3a6b650ea4bd1b0cba23c4348f8a69f3272e
tensorflow/python/eager/context.py
python
Context.register_custom_device
(self, device_capsule, device_name, device_info_capsule)
Calls TFE_RegisterCustomDevice. See the non-member function.
Calls TFE_RegisterCustomDevice. See the non-member function.
[ "Calls", "TFE_RegisterCustomDevice", ".", "See", "the", "non", "-", "member", "function", "." ]
def register_custom_device(self, device_capsule, device_name, device_info_capsule): """Calls TFE_RegisterCustomDevice. See the non-member function.""" self.ensure_initialized() pywrap_tfe.TFE_Py_RegisterCustomDevice(self._handle, device_capsule, device_name, device_info_capsule)
[ "def", "register_custom_device", "(", "self", ",", "device_capsule", ",", "device_name", ",", "device_info_capsule", ")", ":", "self", ".", "ensure_initialized", "(", ")", "pywrap_tfe", ".", "TFE_Py_RegisterCustomDevice", "(", "self", ".", "_handle", ",", "device_ca...
https://github.com/tensorflow/tensorflow/blob/419e3a6b650ea4bd1b0cba23c4348f8a69f3272e/tensorflow/python/eager/context.py#L1310-L1315
forkineye/ESPixelStick
22926f1c0d1131f1369fc7cad405689a095ae3cb
dist/bin/pyserial/serial/serialposix.py
python
Serial.close
(self)
Close port
Close port
[ "Close", "port" ]
def close(self): """Close port""" if self.is_open: if self.fd is not None: os.close(self.fd) self.fd = None os.close(self.pipe_abort_read_w) os.close(self.pipe_abort_read_r) os.close(self.pipe_abort_write_w) os.close(self.pipe_abort_write_r) self.pipe_abort_read_r, self.pipe_abort_read_w = None, None self.pipe_abort_write_r, self.pipe_abort_write_w = None, None self.is_open = False
[ "def", "close", "(", "self", ")", ":", "if", "self", ".", "is_open", ":", "if", "self", ".", "fd", "is", "not", "None", ":", "os", ".", "close", "(", "self", ".", "fd", ")", "self", ".", "fd", "=", "None", "os", ".", "close", "(", "self", "."...
https://github.com/forkineye/ESPixelStick/blob/22926f1c0d1131f1369fc7cad405689a095ae3cb/dist/bin/pyserial/serial/serialposix.py#L447-L459
MythTV/mythtv
d282a209cb8be85d036f85a62a8ec971b67d45f4
mythtv/contrib/imports/mirobridge/mirobridge/mirobridge_interpreter_3_0_0.py
python
MiroInterpreter.do_rmfeed
(self, line)
rmfeed <name> -- Deletes a feed.
rmfeed <name> -- Deletes a feed.
[ "rmfeed", "<name", ">", "--", "Deletes", "a", "feed", "." ]
def do_rmfeed(self, line): """rmfeed <name> -- Deletes a feed.""" for tab in self.video_feed_tabs.get_all_tabs(): if tab.get_title() == line: tab.remove() return for tab in self.audio_feed_tabs.get_all_tabs(): if tab.get_title() == line: tab.remove() return print "Error: %s not found" % line
[ "def", "do_rmfeed", "(", "self", ",", "line", ")", ":", "for", "tab", "in", "self", ".", "video_feed_tabs", ".", "get_all_tabs", "(", ")", ":", "if", "tab", ".", "get_title", "(", ")", "==", "line", ":", "tab", ".", "remove", "(", ")", "return", "f...
https://github.com/MythTV/mythtv/blob/d282a209cb8be85d036f85a62a8ec971b67d45f4/mythtv/contrib/imports/mirobridge/mirobridge/mirobridge_interpreter_3_0_0.py#L170-L180
aws/lumberyard
f85344403c1c2e77ec8c75deb2c116e97b713217
dev/Tools/Python/3.7.10/windows/Lib/site-packages/pip/_vendor/html5lib/html5parser.py
python
HTMLParser.parse
(self, stream, *args, **kwargs)
return self.tree.getDocument()
Parse a HTML document into a well-formed tree :arg stream: a file-like object or string containing the HTML to be parsed The optional encoding parameter must be a string that indicates the encoding. If specified, that encoding will be used, regardless of any BOM or later declaration (such as in a meta element). :arg scripting: treat noscript elements as if JavaScript was turned on :returns: parsed tree Example: >>> from html5lib.html5parser import HTMLParser >>> parser = HTMLParser() >>> parser.parse('<html><body><p>This is a doc</p></body></html>') <Element u'{http://www.w3.org/1999/xhtml}html' at 0x7feac4909db0>
Parse a HTML document into a well-formed tree
[ "Parse", "a", "HTML", "document", "into", "a", "well", "-", "formed", "tree" ]
def parse(self, stream, *args, **kwargs): """Parse a HTML document into a well-formed tree :arg stream: a file-like object or string containing the HTML to be parsed The optional encoding parameter must be a string that indicates the encoding. If specified, that encoding will be used, regardless of any BOM or later declaration (such as in a meta element). :arg scripting: treat noscript elements as if JavaScript was turned on :returns: parsed tree Example: >>> from html5lib.html5parser import HTMLParser >>> parser = HTMLParser() >>> parser.parse('<html><body><p>This is a doc</p></body></html>') <Element u'{http://www.w3.org/1999/xhtml}html' at 0x7feac4909db0> """ self._parse(stream, False, None, *args, **kwargs) return self.tree.getDocument()
[ "def", "parse", "(", "self", ",", "stream", ",", "*", "args", ",", "*", "*", "kwargs", ")", ":", "self", ".", "_parse", "(", "stream", ",", "False", ",", "None", ",", "*", "args", ",", "*", "*", "kwargs", ")", "return", "self", ".", "tree", "."...
https://github.com/aws/lumberyard/blob/f85344403c1c2e77ec8c75deb2c116e97b713217/dev/Tools/Python/3.7.10/windows/Lib/site-packages/pip/_vendor/html5lib/html5parser.py#L262-L285
cztomczak/cefpython
5679f28cec18a57a56e298da2927aac8d8f83ad6
tools/automate.py
python
run_git
(command_line, working_dir)
return run_command("git %s" % command_line, working_dir)
Run git command using depot_tools.
Run git command using depot_tools.
[ "Run", "git", "command", "using", "depot_tools", "." ]
def run_git(command_line, working_dir): """Run git command using depot_tools.""" return run_command("git %s" % command_line, working_dir)
[ "def", "run_git", "(", "command_line", ",", "working_dir", ")", ":", "return", "run_command", "(", "\"git %s\"", "%", "command_line", ",", "working_dir", ")" ]
https://github.com/cztomczak/cefpython/blob/5679f28cec18a57a56e298da2927aac8d8f83ad6/tools/automate.py#L977-L979
mindspore-ai/mindspore
fb8fd3338605bb34fa5cea054e535a8b1d753fab
mindspore/python/mindspore/ops/composite/base.py
python
Shard.__init__
(self)
Initialize Shard.
Initialize Shard.
[ "Initialize", "Shard", "." ]
def __init__(self): """Initialize Shard.""" Shard_.__init__(self, 'Shard') self.shard_fn = None self.fn = None self.in_axes = None self.out_axes = None self.device = None self.level = None
[ "def", "__init__", "(", "self", ")", ":", "Shard_", ".", "__init__", "(", "self", ",", "'Shard'", ")", "self", ".", "shard_fn", "=", "None", "self", ".", "fn", "=", "None", "self", ".", "in_axes", "=", "None", "self", ".", "out_axes", "=", "None", ...
https://github.com/mindspore-ai/mindspore/blob/fb8fd3338605bb34fa5cea054e535a8b1d753fab/mindspore/python/mindspore/ops/composite/base.py#L744-L752
catboost/catboost
167f64f237114a4d10b2b4ee42adb4569137debe
contrib/python/pandas/py3/pandas/core/generic.py
python
NDFrame.__len__
(self)
return len(self._info_axis)
Returns length of info axis
Returns length of info axis
[ "Returns", "length", "of", "info", "axis" ]
def __len__(self) -> int: """Returns length of info axis""" return len(self._info_axis)
[ "def", "__len__", "(", "self", ")", "->", "int", ":", "return", "len", "(", "self", ".", "_info_axis", ")" ]
https://github.com/catboost/catboost/blob/167f64f237114a4d10b2b4ee42adb4569137debe/contrib/python/pandas/py3/pandas/core/generic.py#L1926-L1928
catboost/catboost
167f64f237114a4d10b2b4ee42adb4569137debe
contrib/python/ipython/py3/IPython/core/application.py
python
BaseIPythonApplication.init_config_files
(self)
[optionally] copy default config files into profile dir.
[optionally] copy default config files into profile dir.
[ "[", "optionally", "]", "copy", "default", "config", "files", "into", "profile", "dir", "." ]
def init_config_files(self): """[optionally] copy default config files into profile dir.""" self.config_file_paths.extend(ENV_CONFIG_DIRS) self.config_file_paths.extend(SYSTEM_CONFIG_DIRS) # copy config files path = self.builtin_profile_dir if self.copy_config_files: src = self.profile cfg = self.config_file_name if path and os.path.exists(os.path.join(path, cfg)): self.log.warning("Staging %r from %s into %r [overwrite=%s]"%( cfg, src, self.profile_dir.location, self.overwrite) ) self.profile_dir.copy_config_file(cfg, path=path, overwrite=self.overwrite) else: self.stage_default_config_file() else: # Still stage *bundled* config files, but not generated ones # This is necessary for `ipython profile=sympy` to load the profile # on the first go files = glob.glob(os.path.join(path, '*.py')) for fullpath in files: cfg = os.path.basename(fullpath) if self.profile_dir.copy_config_file(cfg, path=path, overwrite=False): # file was copied self.log.warning("Staging bundled %s from %s into %r"%( cfg, self.profile, self.profile_dir.location) )
[ "def", "init_config_files", "(", "self", ")", ":", "self", ".", "config_file_paths", ".", "extend", "(", "ENV_CONFIG_DIRS", ")", "self", ".", "config_file_paths", ".", "extend", "(", "SYSTEM_CONFIG_DIRS", ")", "# copy config files", "path", "=", "self", ".", "bu...
https://github.com/catboost/catboost/blob/167f64f237114a4d10b2b4ee42adb4569137debe/contrib/python/ipython/py3/IPython/core/application.py#L407-L435
Xilinx/Vitis-AI
fc74d404563d9951b57245443c73bef389f3657f
tools/Vitis-AI-Quantizer/vai_q_tensorflow1.x/tensorflow/python/distribute/input_lib.py
python
_SingleWorkerCallableIterator.get_next_as_list
(self, name=None)
Get next element from the callable.
Get next element from the callable.
[ "Get", "next", "element", "from", "the", "callable", "." ]
def get_next_as_list(self, name=None): """Get next element from the callable.""" del name with ops.device(self._worker): data_list = [self._fn() for _ in self._devices] return constant_op.constant(True), data_list
[ "def", "get_next_as_list", "(", "self", ",", "name", "=", "None", ")", ":", "del", "name", "with", "ops", ".", "device", "(", "self", ".", "_worker", ")", ":", "data_list", "=", "[", "self", ".", "_fn", "(", ")", "for", "_", "in", "self", ".", "_...
https://github.com/Xilinx/Vitis-AI/blob/fc74d404563d9951b57245443c73bef389f3657f/tools/Vitis-AI-Quantizer/vai_q_tensorflow1.x/tensorflow/python/distribute/input_lib.py#L940-L945
giuspen/cherrytree
84712f206478fcf9acf30174009ad28c648c6344
pygtk2/modules/imports.py
python
RedNotebookHandler.__init__
(self, dad, folderpath)
Machine boot
Machine boot
[ "Machine", "boot" ]
def __init__(self, dad, folderpath): """Machine boot""" self.folderpath = folderpath self.dad = dad self.xml_handler = machines.XMLHandler(self)
[ "def", "__init__", "(", "self", ",", "dad", ",", "folderpath", ")", ":", "self", ".", "folderpath", "=", "folderpath", "self", ".", "dad", "=", "dad", "self", ".", "xml_handler", "=", "machines", ".", "XMLHandler", "(", "self", ")" ]
https://github.com/giuspen/cherrytree/blob/84712f206478fcf9acf30174009ad28c648c6344/pygtk2/modules/imports.py#L502-L506
aws/lumberyard
f85344403c1c2e77ec8c75deb2c116e97b713217
dev/Gems/CloudGemMetric/v1/AWS/common-code/Lib/numpy/lib/npyio.py
python
ndfromtxt
(fname, **kwargs)
return genfromtxt(fname, **kwargs)
Load ASCII data stored in a file and return it as a single array. .. deprecated:: 1.17 `ndfromtxt` is a deprecated alias of `genfromtxt` which overwrites the ``usemask`` argument with `False` even when explicitly called as ``ndfromtxt(..., usemask=True)``. Use `genfromtxt` instead. Parameters ---------- fname, kwargs : For a description of input parameters, see `genfromtxt`. See Also -------- numpy.genfromtxt : generic function.
Load ASCII data stored in a file and return it as a single array.
[ "Load", "ASCII", "data", "stored", "in", "a", "file", "and", "return", "it", "as", "a", "single", "array", "." ]
def ndfromtxt(fname, **kwargs): """ Load ASCII data stored in a file and return it as a single array. .. deprecated:: 1.17 `ndfromtxt` is a deprecated alias of `genfromtxt` which overwrites the ``usemask`` argument with `False` even when explicitly called as ``ndfromtxt(..., usemask=True)``. Use `genfromtxt` instead. Parameters ---------- fname, kwargs : For a description of input parameters, see `genfromtxt`. See Also -------- numpy.genfromtxt : generic function. """ kwargs['usemask'] = False # Numpy 1.17 warnings.warn( "np.ndfromtxt is a deprecated alias of np.genfromtxt, " "prefer the latter.", DeprecationWarning, stacklevel=2) return genfromtxt(fname, **kwargs)
[ "def", "ndfromtxt", "(", "fname", ",", "*", "*", "kwargs", ")", ":", "kwargs", "[", "'usemask'", "]", "=", "False", "# Numpy 1.17", "warnings", ".", "warn", "(", "\"np.ndfromtxt is a deprecated alias of np.genfromtxt, \"", "\"prefer the latter.\"", ",", "DeprecationWa...
https://github.com/aws/lumberyard/blob/f85344403c1c2e77ec8c75deb2c116e97b713217/dev/Gems/CloudGemMetric/v1/AWS/common-code/Lib/numpy/lib/npyio.py#L2257-L2282
tensorflow/tensorflow
419e3a6b650ea4bd1b0cba23c4348f8a69f3272e
tensorflow/python/ops/clip_ops.py
python
_clip_by_value_grad
(op, grad)
return (gx, gy, gz)
Returns grad of clip_by_value.
Returns grad of clip_by_value.
[ "Returns", "grad", "of", "clip_by_value", "." ]
def _clip_by_value_grad(op, grad): """Returns grad of clip_by_value.""" x = op.inputs[0] y = op.inputs[1] z = op.inputs[2] gdtype = grad.dtype sx = array_ops.shape(x) sy = array_ops.shape(y) sz = array_ops.shape(z) gradshape = array_ops.shape(grad) zeros = array_ops.zeros(gradshape, gdtype) xymask = math_ops.less(x, y) xzmask = math_ops.greater(x, z) rx, ry = gen_array_ops.broadcast_gradient_args(sx, sy) rx, rz = gen_array_ops.broadcast_gradient_args(sx, sz) xgrad = array_ops.where(math_ops.logical_or(xymask, xzmask), zeros, grad) ygrad = array_ops.where(xymask, grad, zeros) zgrad = array_ops.where(xzmask, grad, zeros) gx = array_ops.reshape(math_ops.reduce_sum(xgrad, rx), sx) gy = array_ops.reshape(math_ops.reduce_sum(ygrad, ry), sy) gz = array_ops.reshape(math_ops.reduce_sum(zgrad, rz), sz) return (gx, gy, gz)
[ "def", "_clip_by_value_grad", "(", "op", ",", "grad", ")", ":", "x", "=", "op", ".", "inputs", "[", "0", "]", "y", "=", "op", ".", "inputs", "[", "1", "]", "z", "=", "op", ".", "inputs", "[", "2", "]", "gdtype", "=", "grad", ".", "dtype", "sx...
https://github.com/tensorflow/tensorflow/blob/419e3a6b650ea4bd1b0cba23c4348f8a69f3272e/tensorflow/python/ops/clip_ops.py#L127-L148
eventql/eventql
7ca0dbb2e683b525620ea30dc40540a22d5eb227
deps/3rdparty/spidermonkey/mozjs/python/mozbuild/mozbuild/controller/building.py
python
BuildMonitor._get_finder_cpu_usage
(self)
return None
Obtain the CPU usage of the Finder app on OS X. This is used to detect high CPU usage.
Obtain the CPU usage of the Finder app on OS X.
[ "Obtain", "the", "CPU", "usage", "of", "the", "Finder", "app", "on", "OS", "X", "." ]
def _get_finder_cpu_usage(self): """Obtain the CPU usage of the Finder app on OS X. This is used to detect high CPU usage. """ if not sys.platform.startswith('darwin'): return None if not psutil: return None for proc in psutil.process_iter(): if proc.name != 'Finder': continue if proc.username != getpass.getuser(): continue # Try to isolate system finder as opposed to other "Finder" # processes. if not proc.exe.endswith('CoreServices/Finder.app/Contents/MacOS/Finder'): continue return proc.get_cpu_times() return None
[ "def", "_get_finder_cpu_usage", "(", "self", ")", ":", "if", "not", "sys", ".", "platform", ".", "startswith", "(", "'darwin'", ")", ":", "return", "None", "if", "not", "psutil", ":", "return", "None", "for", "proc", "in", "psutil", ".", "process_iter", ...
https://github.com/eventql/eventql/blob/7ca0dbb2e683b525620ea30dc40540a22d5eb227/deps/3rdparty/spidermonkey/mozjs/python/mozbuild/mozbuild/controller/building.py#L265-L290
CRYTEK/CRYENGINE
232227c59a220cbbd311576f0fbeba7bb53b2a8c
Editor/Python/windows/Lib/site-packages/setuptools/_vendor/packaging/specifiers.py
python
BaseSpecifier.prereleases
(self)
Returns whether or not pre-releases as a whole are allowed by this specifier.
Returns whether or not pre-releases as a whole are allowed by this specifier.
[ "Returns", "whether", "or", "not", "pre", "-", "releases", "as", "a", "whole", "are", "allowed", "by", "this", "specifier", "." ]
def prereleases(self): """ Returns whether or not pre-releases as a whole are allowed by this specifier. """
[ "def", "prereleases", "(", "self", ")", ":" ]
https://github.com/CRYTEK/CRYENGINE/blob/232227c59a220cbbd311576f0fbeba7bb53b2a8c/Editor/Python/windows/Lib/site-packages/setuptools/_vendor/packaging/specifiers.py#L51-L55
hpi-xnor/BMXNet
ed0b201da6667887222b8e4b5f997c4f6b61943d
example/recommenders/movielens_data.py
python
load_mldata_iter
(filename, batch_size)
return mx.io.NDArrayIter(data={'user':user,'item':item},label={'score':score}, batch_size=batch_size, shuffle=True)
Not particularly fast code to parse the text file and load it into three NDArray's and produce an NDArrayIter
Not particularly fast code to parse the text file and load it into three NDArray's and produce an NDArrayIter
[ "Not", "particularly", "fast", "code", "to", "parse", "the", "text", "file", "and", "load", "it", "into", "three", "NDArray", "s", "and", "produce", "an", "NDArrayIter" ]
def load_mldata_iter(filename, batch_size): """Not particularly fast code to parse the text file and load it into three NDArray's and produce an NDArrayIter """ user = [] item = [] score = [] with file(filename) as f: for line in f: tks = line.strip().split('\t') if len(tks) != 4: continue user.append(int(tks[0])) item.append(int(tks[1])) score.append(float(tks[2])) user = mx.nd.array(user) item = mx.nd.array(item) score = mx.nd.array(score) return mx.io.NDArrayIter(data={'user':user,'item':item},label={'score':score}, batch_size=batch_size, shuffle=True)
[ "def", "load_mldata_iter", "(", "filename", ",", "batch_size", ")", ":", "user", "=", "[", "]", "item", "=", "[", "]", "score", "=", "[", "]", "with", "file", "(", "filename", ")", "as", "f", ":", "for", "line", "in", "f", ":", "tks", "=", "line"...
https://github.com/hpi-xnor/BMXNet/blob/ed0b201da6667887222b8e4b5f997c4f6b61943d/example/recommenders/movielens_data.py#L24-L43
libfive/libfive
ab5e354cf6fd992f80aaa9432c52683219515c8a
libfive/bind/python/libfive/stdlib/csg.py
python
difference
(a, b)
return Shape(stdlib.difference( args[0].ptr, args[1].ptr))
Subtracts the second shape from the first
Subtracts the second shape from the first
[ "Subtracts", "the", "second", "shape", "from", "the", "first" ]
def difference(a, b): """ Subtracts the second shape from the first """ args = [Shape.wrap(a), Shape.wrap(b)] return Shape(stdlib.difference( args[0].ptr, args[1].ptr))
[ "def", "difference", "(", "a", ",", "b", ")", ":", "args", "=", "[", "Shape", ".", "wrap", "(", "a", ")", ",", "Shape", ".", "wrap", "(", "b", ")", "]", "return", "Shape", "(", "stdlib", ".", "difference", "(", "args", "[", "0", "]", ".", "pt...
https://github.com/libfive/libfive/blob/ab5e354cf6fd992f80aaa9432c52683219515c8a/libfive/bind/python/libfive/stdlib/csg.py#L48-L54
wxWidgets/wxPython-Classic
19571e1ae65f1ac445f5491474121998c97a1bf0
src/osx_cocoa/aui.py
python
AuiToolBarItem.SetSticky
(*args, **kwargs)
return _aui.AuiToolBarItem_SetSticky(*args, **kwargs)
SetSticky(self, bool b)
SetSticky(self, bool b)
[ "SetSticky", "(", "self", "bool", "b", ")" ]
def SetSticky(*args, **kwargs): """SetSticky(self, bool b)""" return _aui.AuiToolBarItem_SetSticky(*args, **kwargs)
[ "def", "SetSticky", "(", "*", "args", ",", "*", "*", "kwargs", ")", ":", "return", "_aui", ".", "AuiToolBarItem_SetSticky", "(", "*", "args", ",", "*", "*", "kwargs", ")" ]
https://github.com/wxWidgets/wxPython-Classic/blob/19571e1ae65f1ac445f5491474121998c97a1bf0/src/osx_cocoa/aui.py#L1857-L1859
Xilinx/Vitis-AI
fc74d404563d9951b57245443c73bef389f3657f
tools/Vitis-AI-Quantizer/vai_q_tensorflow1.x/tensorflow/contrib/boosted_trees/estimator_batch/model.py
python
model_builder
(features, labels, mode, params, config, output_type=ModelBuilderOutputType.MODEL_FN_OPS)
return model_fn_ops
Multi-machine batch gradient descent tree model. Args: features: `Tensor` or `dict` of `Tensor` objects. labels: Labels used to train on. mode: Mode we are in. (TRAIN/EVAL/INFER) params: A dict of hyperparameters. The following hyperparameters are expected: * head: A `Head` instance. * learner_config: A config for the learner. * feature_columns: An iterable containing all the feature columns used by the model. * examples_per_layer: Number of examples to accumulate before growing a layer. It can also be a function that computes the number of examples based on the depth of the layer that's being built. * weight_column_name: The name of weight column. * center_bias: Whether a separate tree should be created for first fitting the bias. * override_global_step_value: If after the training is done, global step value must be reset to this value. This is particularly useful for hyper parameter tuning, which can't recognize early stopping due to the number of trees. If None, no override of global step will happen. config: `RunConfig` of the estimator. output_type: Whether to return ModelFnOps (old interface) or EstimatorSpec (new interface). Returns: A `ModelFnOps` object. Raises: ValueError: if inputs are not valid.
Multi-machine batch gradient descent tree model.
[ "Multi", "-", "machine", "batch", "gradient", "descent", "tree", "model", "." ]
def model_builder(features, labels, mode, params, config, output_type=ModelBuilderOutputType.MODEL_FN_OPS): """Multi-machine batch gradient descent tree model. Args: features: `Tensor` or `dict` of `Tensor` objects. labels: Labels used to train on. mode: Mode we are in. (TRAIN/EVAL/INFER) params: A dict of hyperparameters. The following hyperparameters are expected: * head: A `Head` instance. * learner_config: A config for the learner. * feature_columns: An iterable containing all the feature columns used by the model. * examples_per_layer: Number of examples to accumulate before growing a layer. It can also be a function that computes the number of examples based on the depth of the layer that's being built. * weight_column_name: The name of weight column. * center_bias: Whether a separate tree should be created for first fitting the bias. * override_global_step_value: If after the training is done, global step value must be reset to this value. This is particularly useful for hyper parameter tuning, which can't recognize early stopping due to the number of trees. If None, no override of global step will happen. config: `RunConfig` of the estimator. output_type: Whether to return ModelFnOps (old interface) or EstimatorSpec (new interface). Returns: A `ModelFnOps` object. Raises: ValueError: if inputs are not valid. """ head = params["head"] learner_config = params["learner_config"] examples_per_layer = params["examples_per_layer"] feature_columns = params["feature_columns"] weight_column_name = params["weight_column_name"] num_trees = params["num_trees"] use_core_libs = params["use_core_libs"] logits_modifier_function = params["logits_modifier_function"] output_leaf_index = params["output_leaf_index"] override_global_step_value = params.get("override_global_step_value", None) num_quantiles = params["num_quantiles"] if features is None: raise ValueError("At least one feature must be specified.") if config is None: raise ValueError("Missing estimator RunConfig.") if config.session_config is not None: session_config = config.session_config session_config.allow_soft_placement = True else: session_config = config_pb2.ConfigProto(allow_soft_placement=True) config = config.replace(session_config=session_config) center_bias = params["center_bias"] if isinstance(features, ops.Tensor): features = {features.name: features} # Make a shallow copy of features to ensure downstream usage # is unaffected by modifications in the model function. training_features = copy.copy(features) training_features.pop(weight_column_name, None) global_step = training_util.get_global_step() initial_ensemble = "" if learner_config.each_tree_start.nodes: if learner_config.each_tree_start_num_layers <= 0: raise ValueError("You must provide each_tree_start_num_layers.") num_layers = learner_config.each_tree_start_num_layers initial_ensemble = """ trees { %s } tree_weights: 0.1 tree_metadata { num_tree_weight_updates: 1 num_layers_grown: %d is_finalized: false } """ % (text_format.MessageToString( learner_config.each_tree_start), num_layers) tree_ensemble_proto = tree_config_pb2.DecisionTreeEnsembleConfig() text_format.Merge(initial_ensemble, tree_ensemble_proto) initial_ensemble = tree_ensemble_proto.SerializeToString() with ops.device(global_step.device): ensemble_handle = model_ops.tree_ensemble_variable( stamp_token=0, tree_ensemble_config=initial_ensemble, # Initialize the ensemble. name="ensemble_model") # Create GBDT model. gbdt_model = gbdt_batch.GradientBoostedDecisionTreeModel( is_chief=config.is_chief, num_ps_replicas=config.num_ps_replicas, ensemble_handle=ensemble_handle, center_bias=center_bias, examples_per_layer=examples_per_layer, learner_config=learner_config, feature_columns=feature_columns, logits_dimension=head.logits_dimension, features=training_features, use_core_columns=use_core_libs, output_leaf_index=output_leaf_index, num_quantiles=num_quantiles) with ops.name_scope("gbdt", "gbdt_optimizer"): predictions_dict = gbdt_model.predict(mode) logits = predictions_dict["predictions"] if logits_modifier_function: logits = logits_modifier_function(logits, features, mode) def _train_op_fn(loss): """Returns the op to optimize the loss.""" update_op = gbdt_model.train(loss, predictions_dict, labels) with ops.control_dependencies( [update_op]), (ops.colocate_with(global_step)): update_op = state_ops.assign_add(global_step, 1).op return update_op create_estimator_spec_op = getattr(head, "create_estimator_spec", None) training_hooks = [] if num_trees: if center_bias: num_trees += 1 finalized_trees, attempted_trees = gbdt_model.get_number_of_trees_tensor() training_hooks.append( trainer_hooks.StopAfterNTrees(num_trees, attempted_trees, finalized_trees, override_global_step_value)) if output_type == ModelBuilderOutputType.MODEL_FN_OPS: if use_core_libs and callable(create_estimator_spec_op): model_fn_ops = head.create_estimator_spec( features=features, mode=mode, labels=labels, train_op_fn=_train_op_fn, logits=logits) model_fn_ops = estimator_utils.estimator_spec_to_model_fn_ops( model_fn_ops) else: model_fn_ops = head.create_model_fn_ops( features=features, mode=mode, labels=labels, train_op_fn=_train_op_fn, logits=logits) if output_leaf_index and gbdt_batch.LEAF_INDEX in predictions_dict: model_fn_ops.predictions[gbdt_batch.LEAF_INDEX] = predictions_dict[ gbdt_batch.LEAF_INDEX] model_fn_ops.training_hooks.extend(training_hooks) return model_fn_ops elif output_type == ModelBuilderOutputType.ESTIMATOR_SPEC: assert callable(create_estimator_spec_op) estimator_spec = head.create_estimator_spec( features=features, mode=mode, labels=labels, train_op_fn=_train_op_fn, logits=logits) if output_leaf_index and gbdt_batch.LEAF_INDEX in predictions_dict: estimator_spec.predictions[gbdt_batch.LEAF_INDEX] = predictions_dict[ gbdt_batch.LEAF_INDEX] estimator_spec = estimator_spec._replace( training_hooks=training_hooks + list(estimator_spec.training_hooks)) return estimator_spec return model_fn_ops
[ "def", "model_builder", "(", "features", ",", "labels", ",", "mode", ",", "params", ",", "config", ",", "output_type", "=", "ModelBuilderOutputType", ".", "MODEL_FN_OPS", ")", ":", "head", "=", "params", "[", "\"head\"", "]", "learner_config", "=", "params", ...
https://github.com/Xilinx/Vitis-AI/blob/fc74d404563d9951b57245443c73bef389f3657f/tools/Vitis-AI-Quantizer/vai_q_tensorflow1.x/tensorflow/contrib/boosted_trees/estimator_batch/model.py#L41-L220
mapbox/mapbox-gl-native
cf734a2fec960025350d8de0d01ad38aeae155a0
scripts/clang-tidy-diff.py
python
run_tidy
(command, lines_by_file, queue, failed_files)
Takes filenames out of queue and runs clang-tidy on them.
Takes filenames out of queue and runs clang-tidy on them.
[ "Takes", "filenames", "out", "of", "queue", "and", "runs", "clang", "-", "tidy", "on", "them", "." ]
def run_tidy(command, lines_by_file, queue, failed_files): """Takes filenames out of queue and runs clang-tidy on them.""" while True: name = queue.get() line_filter_json = json.dumps([{"name" : name, "lines" : lines_by_file[name]}], separators = (',', ':')) if sys.platform == 'win32': line_filter_json = re.sub(r'"', r'"""', line_filter_json) else: line_filter_json = "'" + line_filter_json + "'"; invocation = list(command) invocation.append('-line-filter=' + line_filter_json) invocation.append(name) sys.stdout.write('Checking differences in {}...\n'.format(name)) return_code = subprocess.call(' '.join(invocation), shell=True) if return_code != 0: failed_files.append(name) queue.task_done()
[ "def", "run_tidy", "(", "command", ",", "lines_by_file", ",", "queue", ",", "failed_files", ")", ":", "while", "True", ":", "name", "=", "queue", ".", "get", "(", ")", "line_filter_json", "=", "json", ".", "dumps", "(", "[", "{", "\"name\"", ":", "name...
https://github.com/mapbox/mapbox-gl-native/blob/cf734a2fec960025350d8de0d01ad38aeae155a0/scripts/clang-tidy-diff.py#L43-L62
forkineye/ESPixelStick
22926f1c0d1131f1369fc7cad405689a095ae3cb
dist/bin/esptool/ecdsa/numbertheory.py
python
lcm
( *a )
return a[0]
Least common multiple. Usage: lcm( [ 3, 4, 5 ] ) or: lcm( 3, 4, 5 )
Least common multiple.
[ "Least", "common", "multiple", "." ]
def lcm( *a ): """Least common multiple. Usage: lcm( [ 3, 4, 5 ] ) or: lcm( 3, 4, 5 ) """ if len( a ) > 1: return reduce( lcm2, a ) if hasattr( a[0], "__iter__" ): return reduce( lcm2, a[0] ) return a[0]
[ "def", "lcm", "(", "*", "a", ")", ":", "if", "len", "(", "a", ")", ">", "1", ":", "return", "reduce", "(", "lcm2", ",", "a", ")", "if", "hasattr", "(", "a", "[", "0", "]", ",", "\"__iter__\"", ")", ":", "return", "reduce", "(", "lcm2", ",", ...
https://github.com/forkineye/ESPixelStick/blob/22926f1c0d1131f1369fc7cad405689a095ae3cb/dist/bin/esptool/ecdsa/numbertheory.py#L231-L240
catboost/catboost
167f64f237114a4d10b2b4ee42adb4569137debe
contrib/python/scipy/py2/scipy/signal/ltisys.py
python
_KNV0_loop
(ker_pole, transfer_matrix, poles, B, maxiter, rtol)
return stop, cur_rtol, nb_try
Loop over all poles one by one and apply KNV method 0 algorithm
Loop over all poles one by one and apply KNV method 0 algorithm
[ "Loop", "over", "all", "poles", "one", "by", "one", "and", "apply", "KNV", "method", "0", "algorithm" ]
def _KNV0_loop(ker_pole, transfer_matrix, poles, B, maxiter, rtol): """ Loop over all poles one by one and apply KNV method 0 algorithm """ # This method is useful only because we need to be able to call # _KNV0 from YT without looping over all poles, otherwise it would # have been fine to mix _KNV0_loop and _KNV0 in a single function stop = False nb_try = 0 while nb_try < maxiter and not stop: det_transfer_matrixb = np.abs(np.linalg.det(transfer_matrix)) for j in range(B.shape[0]): _KNV0(B, ker_pole, transfer_matrix, j, poles) det_transfer_matrix = np.max((np.sqrt(np.spacing(1)), np.abs(np.linalg.det(transfer_matrix)))) cur_rtol = np.abs((det_transfer_matrix - det_transfer_matrixb) / det_transfer_matrix) if cur_rtol < rtol and det_transfer_matrix > np.sqrt(np.spacing(1)): # Convergence test from YT page 21 stop = True nb_try += 1 return stop, cur_rtol, nb_try
[ "def", "_KNV0_loop", "(", "ker_pole", ",", "transfer_matrix", ",", "poles", ",", "B", ",", "maxiter", ",", "rtol", ")", ":", "# This method is useful only because we need to be able to call", "# _KNV0 from YT without looping over all poles, otherwise it would", "# have been fine ...
https://github.com/catboost/catboost/blob/167f64f237114a4d10b2b4ee42adb4569137debe/contrib/python/scipy/py2/scipy/signal/ltisys.py#L2854-L2877
wxWidgets/wxPython-Classic
19571e1ae65f1ac445f5491474121998c97a1bf0
src/osx_carbon/media.py
python
MediaCtrl.Pause
(*args, **kwargs)
return _media.MediaCtrl_Pause(*args, **kwargs)
Pause(self) -> bool
Pause(self) -> bool
[ "Pause", "(", "self", ")", "-", ">", "bool" ]
def Pause(*args, **kwargs): """Pause(self) -> bool""" return _media.MediaCtrl_Pause(*args, **kwargs)
[ "def", "Pause", "(", "*", "args", ",", "*", "*", "kwargs", ")", ":", "return", "_media", ".", "MediaCtrl_Pause", "(", "*", "args", ",", "*", "*", "kwargs", ")" ]
https://github.com/wxWidgets/wxPython-Classic/blob/19571e1ae65f1ac445f5491474121998c97a1bf0/src/osx_carbon/media.py#L113-L115