def contains_all(self, array):
    """Test if `array` is an array of real numbers."""
    dtype = getattr(array, 'dtype', None)
    if dtype is None:
        dtype = np.result_type(*array)
    return is_real_dtype(dtype)
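A self-contained sketch of the same real-dtype check, using plain NumPy predicates in place of the library's `is_real_dtype` helper (the "real means numeric but not complex" reading is an assumption about that helper's behavior):

```python
import numpy as np

def contains_all(array):
    # Resolve the dtype: arrays carry one directly; for plain
    # sequences, let NumPy infer the common type of the elements.
    dtype = getattr(array, 'dtype', None)
    if dtype is None:
        dtype = np.result_type(*array)
    # Real here: numeric and not complex.
    return (np.issubdtype(dtype, np.number)
            and not np.issubdtype(dtype, np.complexfloating))

print(contains_all(np.array([1.0, 2.0])))   # float array -> real
print(contains_all([1, 2, 3]))              # plain int list -> real
print(contains_all(np.array([1j, 2j])))     # complex -> not real
```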
def batch_delete_jobs(
    self,
    parent,
    filter_,
    retry=google.api_core.gapic_v1.method.DEFAULT,
    timeout=google.api_core.gapic_v1.method.DEFAULT,
    metadata=None,
):
    """
    Deletes a list of ``Job``\ s by filter.

    Example:
        >>> from google.cloud import talent_v4beta1
        >>>
        >>> client = talent_v4beta1.JobServiceClient()
        >>>
        >>> parent = client.project_path('[PROJECT]')
        >>>
        >>> # TODO: Initialize `filter_`:
        >>> filter_ = ''
        >>>
        >>> client.batch_delete_jobs(parent, filter_)

    Args:
        parent (str): Required. The resource name of the project under which
            the job is created. The format is "projects/{project\_id}", for
            example, "projects/api-test-project".
        filter_ (str): Required. The filter string specifies the jobs to be
            deleted. Supported operator: =, AND

            The fields eligible for filtering are:

            -  ``companyName`` (Required)
            -  ``requisitionId`` (Required)

            Sample Query: companyName = "projects/api-test-project/companies/123"
            AND requisitionId = "req-1"
        retry (Optional[google.api_core.retry.Retry]): A retry object used to
            retry requests. If ``None`` is specified, requests will not be
            retried.
        timeout (Optional[float]): The amount of time, in seconds, to wait for
            the request to complete. Note that if ``retry`` is specified, the
            timeout applies to each individual attempt.
        metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata
            that is provided to the method.

    Raises:
        google.api_core.exceptions.GoogleAPICallError: If the request failed
            for any reason.
        google.api_core.exceptions.RetryError: If the request failed due to a
            retryable error and retry attempts failed.
        ValueError: If the parameters are invalid.
    """
    # Wrap the transport method to add retry and timeout logic.
    if "batch_delete_jobs" not in self._inner_api_calls:
        self._inner_api_calls[
            "batch_delete_jobs"
        ] = google.api_core.gapic_v1.method.wrap_method(
            self.transport.batch_delete_jobs,
            default_retry=self._method_configs["BatchDeleteJobs"].retry,
            default_timeout=self._method_configs["BatchDeleteJobs"].timeout,
            client_info=self._client_info,
        )

    request = job_service_pb2.BatchDeleteJobsRequest(parent=parent, filter=filter_)
    self._inner_api_calls["batch_delete_jobs"](
        request, retry=retry, timeout=timeout, metadata=metadata
    )
def get_gender(self, name, country=None):
    """Returns the best gender for the given name and country pair."""
    if not self.case_sensitive:
        name = name.lower()
    if name not in self.names:
        return self.unknown_value
    elif not country:
        def counter(country_values):
            # list() is needed on Python 3, where map() returns an
            # iterator and len() would fail on it.
            country_values = list(map(ord, country_values.replace(" ", "")))
            return (len(country_values),
                    sum(map(lambda c: c > 64 and c - 55 or c - 48,
                            country_values)))
        return self._most_popular_gender(name, counter)
    elif country in self.__class__.COUNTRIES:
        index = self.__class__.COUNTRIES.index(country)
        counter = lambda e: (ord(e[index]) - 32, 0)
        return self._most_popular_gender(name, counter)
    else:
        raise NoCountryError("No such country: %s" % country)
def start(dispatcher, future, *, loop=None, skip_updates=None,
          on_startup=None, on_shutdown=None):
    """
    Execute Future.

    :param dispatcher: instance of Dispatcher
    :param future: future
    :param loop: instance of AbstractEventLoop
    :param skip_updates:
    :param on_startup:
    :param on_shutdown:
    :return:
    """
    executor = Executor(dispatcher, skip_updates=skip_updates, loop=loop)
    _setup_callbacks(executor, on_startup, on_shutdown)
    return executor.start(future)
def to_integer(value, ctx):
    """
    Tries conversion of any value to an integer
    """
    if isinstance(value, bool):
        return 1 if value else 0
    elif isinstance(value, int):
        return value
    elif isinstance(value, Decimal):
        try:
            val = int(value.to_integral_exact(ROUND_HALF_UP))
            if isinstance(val, int):
                return val
        except ArithmeticError:
            pass
    elif isinstance(value, str):
        try:
            return int(value)
        except ValueError:
            pass

    raise EvaluationError("Can't convert '%s' to an integer" % str(value))
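A standalone sketch of the same conversion ladder, with a plain `ValueError` standing in for the module's `EvaluationError` (a hypothetical substitution so the example runs without the library):

```python
from decimal import Decimal, ROUND_HALF_UP

def to_integer(value):
    # Order matters: bool is a subclass of int, so test it first.
    if isinstance(value, bool):
        return 1 if value else 0
    elif isinstance(value, int):
        return value
    elif isinstance(value, Decimal):
        try:
            # Round halves away from zero, then take the int value.
            return int(value.to_integral_value(rounding=ROUND_HALF_UP))
        except ArithmeticError:
            pass
    elif isinstance(value, str):
        try:
            return int(value)
        except ValueError:
            pass
    raise ValueError("Can't convert %r to an integer" % (value,))

print(to_integer(True))            # 1
print(to_integer(Decimal("2.5")))  # 3 (half-up rounding)
print(to_integer("42"))            # 42
```

Note that a `float` falls through every branch and raises, mirroring the original's deliberate refusal to guess at float rounding.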
def inspect_obj(obj):
    '''Learn what there is to be learned from our target.

    Given an object at `obj`, which must be a function, method or class,
    return a configuration *discovered* from the name of the object and
    its parameter list. This function is responsible for doing runtime
    reflection and providing understandable failure modes.

    The return value is a dictionary with three keys: ``name``,
    ``required`` and ``defaults``. ``name`` is the name of the
    function/method/class. ``required`` is a list of parameters *without*
    default values. ``defaults`` is a dictionary mapping parameter names
    to default values. The sets of parameter names in ``required`` and
    ``defaults`` are disjoint.

    When given a class, the parameters are taken from its ``__init__``
    method.

    Note that this function is purposefully conservative in the things
    it will auto-configure. All of the following things will result in a
    :exc:`yakonfig.ProgrammerError` exception being raised:

    1. A parameter list that contains tuple unpacking. (This is invalid
       syntax in Python 3.)
    2. A parameter list that contains variable arguments (``*args``) or
       variable keyword words (``**kwargs``). This restriction forces an
       auto-configurable to explicitly state all configuration.

    Similarly, if given an object that isn't a function/method/class, a
    :exc:`yakonfig.ProgrammerError` will be raised.

    If reflection cannot be performed on ``obj``, then a ``TypeError``
    is raised.
    '''
    skip_params = 0
    if inspect.isfunction(obj):
        name = obj.__name__
        inspect_obj = obj
        skip_params = 0
    elif inspect.ismethod(obj):
        name = obj.im_func.__name__
        inspect_obj = obj
        skip_params = 1  # self
    elif inspect.isclass(obj):
        inspect_obj = None
        if hasattr(obj, '__dict__') and '__new__' in obj.__dict__:
            inspect_obj = obj.__new__
        elif hasattr(obj, '__init__'):
            inspect_obj = obj.__init__
        else:
            raise ProgrammerError(
                'Class "%s" does not have a "__new__" or "__init__" '
                'method, so it cannot be auto configured.' % str(obj))
        name = obj.__name__
        if hasattr(obj, 'config_name'):
            name = obj.config_name
        if not inspect.ismethod(inspect_obj) \
                and not inspect.isfunction(inspect_obj):
            raise ProgrammerError(
                '"%s.%s" is not a method/function (it is a "%s").'
                % (str(obj), inspect_obj.__name__, type(inspect_obj)))
        skip_params = 1  # self
    else:
        raise ProgrammerError(
            'Expected a function, method or class to '
            'automatically configure, but got a "%s" '
            '(type: "%s").' % (repr(obj), type(obj)))

    argspec = inspect.getargspec(inspect_obj)
    if argspec.varargs is not None or argspec.keywords is not None:
        raise ProgrammerError(
            'The auto-configurable "%s" cannot contain '
            '"*args" or "**kwargs" in its list of '
            'parameters.' % repr(obj))
    if not all(isinstance(arg, string_types) for arg in argspec.args):
        raise ProgrammerError(
            'Expected an auto-configurable with no nested '
            'parameters, but "%s" seems to contain some '
            'tuple unpacking: "%s"' % (repr(obj), argspec.args))

    defaults = argspec.defaults or []
    # The index into `argspec.args` at which keyword arguments with default
    # values starts.
    i_defaults = len(argspec.args) - len(defaults)
    return {
        'name': name,
        'required': argspec.args[skip_params:i_defaults],
        'defaults': dict([(k, defaults[i])
                          for i, k in enumerate(argspec.args[i_defaults:])]),
    }
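The required/defaults split above relies on the Python 2-era `inspect.getargspec` (and `im_func`); on Python 3 the same partition can be sketched with `inspect.signature`. This is a minimal sketch of the idea, not the library's implementation:

```python
import inspect

def split_params(func):
    # Partition parameters into those without defaults ("required")
    # and a mapping of those with defaults, rejecting *args/**kwargs
    # just as the conservative original does.
    required, defaults = [], {}
    for name, param in inspect.signature(func).parameters.items():
        if param.kind in (param.VAR_POSITIONAL, param.VAR_KEYWORD):
            raise TypeError("*args/**kwargs are not auto-configurable")
        if param.default is param.empty:
            required.append(name)
        else:
            defaults[name] = param.default
    return {'name': func.__name__, 'required': required, 'defaults': defaults}

def connect(host, port=5432, timeout=10.0):
    pass

print(split_params(connect))
# {'name': 'connect', 'required': ['host'],
#  'defaults': {'port': 5432, 'timeout': 10.0}}
```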
def delete_one_word(self, word=RIGHT):
    """Delete one word to the right or to the left of the cursor."""
    assert word in (self.RIGHT, self.LEFT)
    if word == self.RIGHT:
        papy = self.text.find(' ', self.cursor) + 1
        if not papy:
            papy = len(self.text)
        self.text = self.text[:self.cursor] + self.text[papy:]
    else:
        papy = self.text.rfind(' ', 0, self.cursor)
        if papy == -1:
            papy = 0
        self.text = self.text[:papy] + self.text[self.cursor:]
        self.cursor = papy
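The cursor arithmetic can be exercised standalone. This sketch reproduces the two branches as plain functions over a string (a simplification of the widget's `text`/`cursor` state, assumed here for illustration):

```python
def delete_word_right(text, cursor):
    # Cut from the cursor to just past the next space (or end of text).
    end = text.find(' ', cursor) + 1
    if not end:                # find() returned -1: no space to the right
        end = len(text)
    return text[:cursor] + text[end:], cursor

def delete_word_left(text, cursor):
    # Cut from the previous space (or start of text) up to the cursor;
    # the cursor moves back to the cut point.
    start = text.rfind(' ', 0, cursor)
    if start == -1:
        start = 0
    return text[:start] + text[cursor:], start

print(delete_word_right("hello brave world", 6))  # ('hello world', 6)
print(delete_word_left("hello brave world", 11))  # ('hello world', 5)
```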
def check_existing_vr_tag(self):
    """
    Checks if version-release tag (primary not floating tag) exists
    already, and fails plugin if it does.
    """
    primary_images = get_primary_images(self.workflow)
    if not primary_images:
        return

    vr_image = None
    for image in primary_images:
        if '-' in image.tag:
            vr_image = image
            break
    if not vr_image:
        return

    should_fail = False
    for registry_name, registry in self.registries.items():
        pullspec = vr_image.copy()
        pullspec.registry = registry_name
        insecure = registry.get('insecure', False)
        secret = registry.get('secret', None)

        manifest_list = get_manifest_list(pullspec, registry_name, insecure, secret)
        if manifest_list:
            self.log.error("Primary tag already exists in registry: %s", pullspec)
            should_fail = True
    if should_fail:
        raise RuntimeError("Primary tag already exists in registry")
def scopusParser(scopusFile):
    """Parses a scopus file, _scopusFile_, to extract the individual lines as
    [ScopusRecords](../classes/ScopusRecord.html#metaknowledge.scopus.ScopusRecord).

    A Scopus file is a csv (Comma-separated values) with a complete header,
    see [`scopus.scopusHeader`](#metaknowledge.scopus) for the entries, and
    each line after it containing a record's entry. The string valued entries
    are quoted with double quotes which means double quotes inside them can
    cause issues, see
    [scopusRecordParser()](#metaknowledge.scopus.recordScopus.scopusRecordParser)
    for more information.

    # Parameters

    _scopusFile_ : `str`

    > A path to a valid scopus file, use
    [isScopusFile()](#metaknowledge.scopus.scopusHandlers.isScopusFile) to verify

    # Returns

    `set[ScopusRecord]`

    > Records for each of the entries
    """
    # assumes the file is Scopus
    recSet = set()
    error = None
    lineNum = 0
    try:
        with open(scopusFile, 'r', encoding='utf-8') as openfile:
            # Get rid of the BOM
            openfile.read(1)
            header = openfile.readline()[:-1].split(',')
            if len(set(header) ^ set(scopusHeader)) == 0:
                header = None
            lineNum = 0
            try:
                for line, row in enumerate(openfile, start=2):
                    lineNum = line
                    recSet.add(ScopusRecord(row, header=header,
                                            sFile=scopusFile, sLine=line))
            except BadScopusFile as e:
                if error is None:
                    error = BadScopusFile(
                        "The file '{}' becomes unparsable after line: {}, "
                        "due to the error: {} ".format(scopusFile, lineNum, e))
    except (csv.Error, UnicodeDecodeError):
        if error is None:
            error = BadScopusFile(
                "The file '{}' has parts of it that are unparsable starting "
                "at line: {}.".format(scopusFile, lineNum))
    return recSet, error
def evaluator(self, candidates, args):
    """Return the fitness values for the given candidates."""
    fitness = []
    if self._use_ants:
        for candidate in candidates:
            total = 0
            for c in candidate:
                total += self.weights[c.element[0]][c.element[1]]
            last = (candidate[-1].element[1], candidate[0].element[0])
            total += self.weights[last[0]][last[1]]
            fitness.append(1 / total)
    else:
        for candidate in candidates:
            total = 0
            for src, dst in zip(candidate, candidate[1:] + [candidate[0]]):
                total += self.weights[src][dst]
            fitness.append(1 / total)
    return fitness
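The non-ant branch is plain reciprocal tour-length fitness: shorter tours score higher, and the `zip` pairs each city with the next, wrapping around to close the tour. A self-contained sketch over a toy distance matrix (hypothetical data, standing in for `self.weights`):

```python
def tour_fitness(candidates, weights):
    # Fitness is the reciprocal of total tour length; the zip pairs
    # each city with its successor, wrapping back to the start.
    fitness = []
    for candidate in candidates:
        total = 0
        for src, dst in zip(candidate, candidate[1:] + [candidate[0]]):
            total += weights[src][dst]
        fitness.append(1 / total)
    return fitness

# 3-city symmetric distance matrix.
weights = [[0, 2, 4],
           [2, 0, 3],
           [4, 3, 0]]
# Tour 0 -> 1 -> 2 -> 0 has length 2 + 3 + 4 = 9, fitness 1/9.
print(tour_fitness([[0, 1, 2]], weights))
```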
def configure_discover(self, ns, definition):
    """
    Register a swagger endpoint for a set of operations.
    """
    @self.add_route(ns.singleton_path, Operation.Discover, ns)
    def discover():
        swagger = build_swagger(self.graph, ns, self.find_matching_endpoints(ns))
        g.hide_body = True
        return make_response(swagger)
def find_proxy_plugin(component, plugin_name):
    """Attempt to find a proxy plugin provided by a specific component

    Args:
        component (string): The name of the component that provides the plugin
        plugin_name (string): The name of the plugin to load

    Returns:
        TileBusProxyPlugin: The plugin, if found, otherwise raises DataError
    """
    reg = ComponentRegistry()
    plugins = reg.load_extensions('iotile.proxy_plugin', comp_filter=component,
                                  class_filter=TileBusProxyPlugin,
                                  product_name='proxy_plugin')

    for _name, plugin in plugins:
        if plugin.__name__ == plugin_name:
            return plugin

    raise DataError("Could not find proxy plugin module in registered "
                    "components or installed distributions",
                    component=component, name=plugin_name)
Attempt to find a proxy plugin provided by a specific component Args: component (string): The name of the component that provides the plugin plugin_name (string): The name of the plugin to load Returns: TileBuxProxyPlugin: The plugin, if found, otherwise raises DataError
Below is the the instruction that describes the task: ### Input: Attempt to find a proxy plugin provided by a specific component Args: component (string): The name of the component that provides the plugin plugin_name (string): The name of the plugin to load Returns: TileBuxProxyPlugin: The plugin, if found, otherwise raises DataError ### Response: def find_proxy_plugin(component, plugin_name): """ Attempt to find a proxy plugin provided by a specific component Args: component (string): The name of the component that provides the plugin plugin_name (string): The name of the plugin to load Returns: TileBuxProxyPlugin: The plugin, if found, otherwise raises DataError """ reg = ComponentRegistry() plugins = reg.load_extensions('iotile.proxy_plugin', comp_filter=component, class_filter=TileBusProxyPlugin, product_name='proxy_plugin') for _name, plugin in plugins: if plugin.__name__ == plugin_name: return plugin raise DataError("Could not find proxy plugin module in registered components or installed distributions", component=component, name=plugin_name)
def get_all_by_product_id(self, product_id, **kwargs): """ Gets all Build Configurations of a Product This method makes a synchronous HTTP request by default. To make an asynchronous HTTP request, please define a `callback` function to be invoked when receiving the response. >>> def callback_function(response): >>> pprint(response) >>> >>> thread = api.get_all_by_product_id(product_id, callback=callback_function) :param callback function: The callback function for asynchronous request. (optional) :param int product_id: Product id (required) :param int page_index: Page Index :param int page_size: Pagination size :param str sort: Sorting RSQL :param str q: RSQL Query :return: BuildConfigurationPage If the method is called asynchronously, returns the request thread. """ kwargs['_return_http_data_only'] = True if kwargs.get('callback'): return self.get_all_by_product_id_with_http_info(product_id, **kwargs) else: (data) = self.get_all_by_product_id_with_http_info(product_id, **kwargs) return data
Gets all Build Configurations of a Product This method makes a synchronous HTTP request by default. To make an asynchronous HTTP request, please define a `callback` function to be invoked when receiving the response. >>> def callback_function(response): >>> pprint(response) >>> >>> thread = api.get_all_by_product_id(product_id, callback=callback_function) :param callback function: The callback function for asynchronous request. (optional) :param int product_id: Product id (required) :param int page_index: Page Index :param int page_size: Pagination size :param str sort: Sorting RSQL :param str q: RSQL Query :return: BuildConfigurationPage If the method is called asynchronously, returns the request thread.
Below is the the instruction that describes the task: ### Input: Gets all Build Configurations of a Product This method makes a synchronous HTTP request by default. To make an asynchronous HTTP request, please define a `callback` function to be invoked when receiving the response. >>> def callback_function(response): >>> pprint(response) >>> >>> thread = api.get_all_by_product_id(product_id, callback=callback_function) :param callback function: The callback function for asynchronous request. (optional) :param int product_id: Product id (required) :param int page_index: Page Index :param int page_size: Pagination size :param str sort: Sorting RSQL :param str q: RSQL Query :return: BuildConfigurationPage If the method is called asynchronously, returns the request thread. ### Response: def get_all_by_product_id(self, product_id, **kwargs): """ Gets all Build Configurations of a Product This method makes a synchronous HTTP request by default. To make an asynchronous HTTP request, please define a `callback` function to be invoked when receiving the response. >>> def callback_function(response): >>> pprint(response) >>> >>> thread = api.get_all_by_product_id(product_id, callback=callback_function) :param callback function: The callback function for asynchronous request. (optional) :param int product_id: Product id (required) :param int page_index: Page Index :param int page_size: Pagination size :param str sort: Sorting RSQL :param str q: RSQL Query :return: BuildConfigurationPage If the method is called asynchronously, returns the request thread. """ kwargs['_return_http_data_only'] = True if kwargs.get('callback'): return self.get_all_by_product_id_with_http_info(product_id, **kwargs) else: (data) = self.get_all_by_product_id_with_http_info(product_id, **kwargs) return data
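The sync/async dispatch convention used by get_all_by_product_id can be sketched without the generated client: with a callback, the *_with_http_info variant returns the worker thread; without one, the data is unwrapped and returned. Everything below (fetch, fetch_with_http_info, the payload) is a made-up stand-in, not the real swagger-generated API.

```python
# Hypothetical stand-in for the generated client's dispatch pattern:
# a callback makes the call asynchronous (a thread is returned), no
# callback makes it synchronous (the data itself is returned).
import threading

def fetch_with_http_info(product_id, callback=None):
    payload = {"id": product_id}  # stands in for the HTTP response body
    if callback:
        thread = threading.Thread(target=callback, args=(payload,))
        thread.start()
        return thread
    return payload

def fetch(product_id, **kwargs):
    if kwargs.get("callback"):
        return fetch_with_http_info(product_id, **kwargs)
    data = fetch_with_http_info(product_id, **kwargs)
    return data

print(fetch(123))  # synchronous path returns the data directly
```

On the asynchronous path the caller must join the returned thread (or otherwise wait) before relying on the callback's side effects.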
def throw_random_private( lengths, regions, save_interval_func, allow_overlap=False, three_args=True ): """ (Internal function; we expect calls only through the interface functions above) `lengths`: A list containing the length of each interval to be generated. `regions`: A list of regions in which intervals can be placed, sorted by decreasing length. Elements are triples of the form (length, start, extra), This list CAN BE MODIFIED by this function. `save_interval_func`: A function accepting three arguments which will be passed the (start,stop,extra) for each generated interval. """ # Implementation: # We keep a list of the regions, sorted from largest to smallest. We then # place each length by following steps: # (1) construct a candidate counts array (cc array) # (2) choose a candidate at random # (3) find region containing that candidate # (4) map candidate to position in that region # (5) split region if not allowing overlaps # (6) report placed segment # # The cc array is only constructed if there's a change (different length # to place, or the region list has changed). It contains, for each # region, the total number of number of candidate positions in regions # *preceding* it in the region list: # cc[i] = sum over k in 0..(i-1) of length[i] - L + 1 # where N is the number of regions and L is the length being thrown. # At the same time, we determine the total number of candidates (the total # number of places the current length can be placed) and the index range # of regions into which the length will fit. # # example: # for L = 20 # i = 0 1 2 3 4 5 6 7 8 9 # length[i] = 96 66 56 50 48 40 29 17 11 8 # cc[i] = 0 77 124 161 192 221 242 X X X # candidates = 252 # lo_rgn = 0 # hi_rgn = 6 # # The candidate is chosen in (0..candidates-1). The candidate counts # array allows us to do a binary search to locate the region that holds that # candidate. Continuing the example above, we choose a random candidate # s in (0..251). 
If s happens to be in (124..160), it will be mapped to # region 2 at start position s-124. # # During the binary search, if we are looking at region 3, if s < cc[3] # then the desired region is region 2 or lower. Otherwise it is region 3 or # higher. min_length = min( lengths ) prev_length = None # (force initial cc array construction) cc = [0] * (len( regions ) + len(lengths) - 1) num_thrown = 0 for length in lengths: # construct cc array (only needed if length has changed or region list has # changed) if length != prev_length: prev_length = length assert len( cc ) >= len( regions ) candidates = 0 hi_rgn = 0 for region in regions: rgn_len = region[0] if rgn_len < length: break cc[hi_rgn] = candidates candidates += rgn_len - length + 1 hi_rgn += 1 if candidates == 0: raise MaxtriesException( "No region can fit an interval of length %d (we threw %d of %d)" \ % ( length, num_thrown,len( lengths ) ) ) hi_rgn -= 1 # Select a candidate s = random.randrange( candidates ) #.. #..for ix in range( len( regions ) ): #.. region = regions[ix] #.. if ix <= hi_rgn: print "%2s: %5s %5s %5s" % ( ix, region[1], region[0], cc[ix] ) #.. 
else: print "%2s: %5s %5s %5s" % ( ix, region[1], region[0], "X" ) #..print "s = %s (of %s candidates)" % ( s, candidates ) # Locate region containing that candidate, by binary search lo = 0 hi = hi_rgn while hi > lo: mid = (lo + hi + 1) / 2 # (we round up to prevent infinite loop) if s < cc[mid]: hi = mid-1 # (s < num candidates from 0..mid-1) else: lo = mid # (s >= num candidates from 0..mid-1) s -= cc[lo] # If we are not allowing overlaps we will remove the placed interval # from the region list if allow_overlap: rgn_length, rgn_start, rgn_extra = regions[lo] else: # Remove the chosen region and split rgn_length, rgn_start, rgn_extra = regions.pop( lo ) rgn_end = rgn_start + rgn_length assert s >= 0 assert rgn_start + s + length <= rgn_end, "Expected: %d + %d + %d == %d <= %d" % ( rgn_start, s, length, rgn_start + s + length, rgn_end ) regions.reverse() if s >= min_length: bisect.insort( regions, ( s, rgn_start, rgn_extra ) ) if s + length <= rgn_length - min_length: bisect.insort( regions, ( rgn_length - ( s + length ), rgn_start + s + length, rgn_extra ) ) regions.reverse() prev_length = None # (force cc array construction) # Save the new interval if (three_args): save_interval_func( rgn_start + s, rgn_start + s + length, rgn_extra ) else: save_interval_func( rgn_start + s, rgn_start + s + length ) num_thrown += 1
(Internal function; we expect calls only through the interface functions above) `lengths`: A list containing the length of each interval to be generated. `regions`: A list of regions in which intervals can be placed, sorted by decreasing length. Elements are triples of the form (length, start, extra), This list CAN BE MODIFIED by this function. `save_interval_func`: A function accepting three arguments which will be passed the (start,stop,extra) for each generated interval.
Below is the the instruction that describes the task: ### Input: (Internal function; we expect calls only through the interface functions above) `lengths`: A list containing the length of each interval to be generated. `regions`: A list of regions in which intervals can be placed, sorted by decreasing length. Elements are triples of the form (length, start, extra), This list CAN BE MODIFIED by this function. `save_interval_func`: A function accepting three arguments which will be passed the (start,stop,extra) for each generated interval. ### Response: def throw_random_private( lengths, regions, save_interval_func, allow_overlap=False, three_args=True ): """ (Internal function; we expect calls only through the interface functions above) `lengths`: A list containing the length of each interval to be generated. `regions`: A list of regions in which intervals can be placed, sorted by decreasing length. Elements are triples of the form (length, start, extra), This list CAN BE MODIFIED by this function. `save_interval_func`: A function accepting three arguments which will be passed the (start,stop,extra) for each generated interval. """ # Implementation: # We keep a list of the regions, sorted from largest to smallest. We then # place each length by following steps: # (1) construct a candidate counts array (cc array) # (2) choose a candidate at random # (3) find region containing that candidate # (4) map candidate to position in that region # (5) split region if not allowing overlaps # (6) report placed segment # # The cc array is only constructed if there's a change (different length # to place, or the region list has changed). It contains, for each # region, the total number of number of candidate positions in regions # *preceding* it in the region list: # cc[i] = sum over k in 0..(i-1) of length[i] - L + 1 # where N is the number of regions and L is the length being thrown. 
# At the same time, we determine the total number of candidates (the total # number of places the current length can be placed) and the index range # of regions into which the length will fit. # # example: # for L = 20 # i = 0 1 2 3 4 5 6 7 8 9 # length[i] = 96 66 56 50 48 40 29 17 11 8 # cc[i] = 0 77 124 161 192 221 242 X X X # candidates = 252 # lo_rgn = 0 # hi_rgn = 6 # # The candidate is chosen in (0..candidates-1). The candidate counts # array allows us to do a binary search to locate the region that holds that # candidate. Continuing the example above, we choose a random candidate # s in (0..251). If s happens to be in (124..160), it will be mapped to # region 2 at start position s-124. # # During the binary search, if we are looking at region 3, if s < cc[3] # then the desired region is region 2 or lower. Otherwise it is region 3 or # higher. min_length = min( lengths ) prev_length = None # (force initial cc array construction) cc = [0] * (len( regions ) + len(lengths) - 1) num_thrown = 0 for length in lengths: # construct cc array (only needed if length has changed or region list has # changed) if length != prev_length: prev_length = length assert len( cc ) >= len( regions ) candidates = 0 hi_rgn = 0 for region in regions: rgn_len = region[0] if rgn_len < length: break cc[hi_rgn] = candidates candidates += rgn_len - length + 1 hi_rgn += 1 if candidates == 0: raise MaxtriesException( "No region can fit an interval of length %d (we threw %d of %d)" \ % ( length, num_thrown,len( lengths ) ) ) hi_rgn -= 1 # Select a candidate s = random.randrange( candidates ) #.. #..for ix in range( len( regions ) ): #.. region = regions[ix] #.. if ix <= hi_rgn: print "%2s: %5s %5s %5s" % ( ix, region[1], region[0], cc[ix] ) #.. 
else: print "%2s: %5s %5s %5s" % ( ix, region[1], region[0], "X" ) #..print "s = %s (of %s candidates)" % ( s, candidates ) # Locate region containing that candidate, by binary search lo = 0 hi = hi_rgn while hi > lo: mid = (lo + hi + 1) / 2 # (we round up to prevent infinite loop) if s < cc[mid]: hi = mid-1 # (s < num candidates from 0..mid-1) else: lo = mid # (s >= num candidates from 0..mid-1) s -= cc[lo] # If we are not allowing overlaps we will remove the placed interval # from the region list if allow_overlap: rgn_length, rgn_start, rgn_extra = regions[lo] else: # Remove the chosen region and split rgn_length, rgn_start, rgn_extra = regions.pop( lo ) rgn_end = rgn_start + rgn_length assert s >= 0 assert rgn_start + s + length <= rgn_end, "Expected: %d + %d + %d == %d <= %d" % ( rgn_start, s, length, rgn_start + s + length, rgn_end ) regions.reverse() if s >= min_length: bisect.insort( regions, ( s, rgn_start, rgn_extra ) ) if s + length <= rgn_length - min_length: bisect.insort( regions, ( rgn_length - ( s + length ), rgn_start + s + length, rgn_extra ) ) regions.reverse() prev_length = None # (force cc array construction) # Save the new interval if (three_args): save_interval_func( rgn_start + s, rgn_start + s + length, rgn_extra ) else: save_interval_func( rgn_start + s, rgn_start + s + length ) num_thrown += 1
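The candidate-counts lookup that the comments above describe can be sketched standalone. The function below is a hypothetical extraction of the binary search (written with Python 3 floor division for the rounded-up midpoint), fed the cc values from the worked example in the comments.

```python
# Sketch of mapping a random candidate s to its region: cc[i] holds the
# number of candidate start positions in regions 0..i-1, so the region
# containing s is found by binary search and the offset within it is
# s - cc[region]. The cc values come from the worked example (L = 20).

def locate_region(cc, hi_rgn, s):
    lo, hi = 0, hi_rgn
    while hi > lo:
        mid = (lo + hi + 1) // 2  # round up to prevent an infinite loop
        if s < cc[mid]:
            hi = mid - 1  # s precedes all candidates in region mid
        else:
            lo = mid      # s is in region mid or later
    return lo, s - cc[lo]

cc = [0, 77, 124, 161, 192, 221, 242]
print(locate_region(cc, 6, 130))  # candidate 130 -> region 2, offset 6
```

This matches the example in the comments: candidates 124..160 map to region 2, so s = 130 lands at start position 6 within that region.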
def _post_process_specs(specs): """Post-process specs after pure parsing Casts any number expected values into integers Args ---- specs : Notes ----- Modifies the specs object """ integer_specs = ['DIMENSION', 'CAPACITY'] for s in integer_specs: specs[s] = int(specs[s])
Post-process specs after pure parsing Casts any number expected values into integers Args ---- specs : Notes ----- Modifies the specs object
Below is the instruction that describes the task: ### Input: Post-process specs after pure parsing Casts any number expected values into integers Args ---- specs : Notes ----- Modifies the specs object ### Response: def _post_process_specs(specs): """Post-process specs after pure parsing Casts any number expected values into integers Args ---- specs : Notes ----- Modifies the specs object """ integer_specs = ['DIMENSION', 'CAPACITY'] for s in integer_specs: specs[s] = int(specs[s])
def WritePythonFile(file_descriptor, package, version, printer): """Write the given extended file descriptor to out.""" _WriteFile(file_descriptor, package, version, _ProtoRpcPrinter(printer))
Write the given extended file descriptor to out.
Below is the instruction that describes the task: ### Input: Write the given extended file descriptor to out. ### Response: def WritePythonFile(file_descriptor, package, version, printer): """Write the given extended file descriptor to out.""" _WriteFile(file_descriptor, package, version, _ProtoRpcPrinter(printer))
def unwrap(self, message, conf_req=True, qop_req=None, supplementary=False): """ Takes a token that has been generated by the peer application with :meth:`wrap`, verifies and optionally decrypts it, using this security context's cryptographic keys. The `supplementary` parameter determines how this method deals with replayed, unsequential, too-old or missing tokens, as follows: If the `supplementary` parameter is False (the default), and if a replayed or otherwise out-of-sequence token is detected, this method raises a :exc:`~gssapi.error.GSSCException`. If no replay or out-of-sequence token is detected, this method returns the unwrapped message only. If `supplementary` is True, instead of raising an exception when a replayed or out-of-sequence token is detected, this method returns a tuple ``(unwrapped_message, supplementary_info)`` where ``supplementary_info`` is a tuple containing zero or more of the constants :const:`~gssapi.S_DUPLICATE_TOKEN`, :const:`~gssapi.S_OLD_TOKEN`, :const:`~gssapi.S_UNSEQ_TOKEN` and :const:`~gssapi.S_GAP_TOKEN`. The supplementary info tells the caller whether a replayed or out-of-sequence message was detected. The caller must check this and decide how to handle the message if any of the flags are set. For a reference to the meaning of the flags, check `RFC 2744 Section 3.9.1 <http://tools.ietf.org/html/rfc2744#section-3.9.1>` for the corresponding GSS_S_OLD_TOKEN, etc, constants. :param message: The wrapped message token :type message: bytes :param conf_req: Whether to require confidentiality (encryption) :type conf_req: bool :param qop_req: The quality of protection required. It is recommended to not change this from the default None as most GSSAPI implementations do not support it. :param supplementary: Whether to also return supplementary info. :type supplementary: bool :returns: the verified and decrypted message if `supplementary` is False, or a tuple ``(unwrapped_message, supplementary_info)`` if `supplementary` is True. 
:raises: :exc:`~gssapi.error.GSSException` if :attr:`integrity_negotiated` is false, or if the verification or decryption fails, if the message was modified, or if confidentiality was required (`conf_req` was True) but the message did not have confidentiality protection applied (was not encrypted), or if the `qop_req` parameter was set and it did not match the QOP applied to the message, or if a replayed or out-of-sequence message was detected. """ if not (self.flags & C.GSS_C_INTEG_FLAG): raise GSSException("No integrity protection negotiated.") if not (self.established or (self.flags & C.GSS_C_PROT_READY_FLAG)): raise GSSException("Protection not yet ready.") minor_status = ffi.new('OM_uint32[1]') output_buffer = ffi.new('gss_buffer_desc[1]') message_buffer = ffi.new('gss_buffer_desc[1]') message_buffer[0].length = len(message) c_str_message = ffi.new('char[]', message) message_buffer[0].value = c_str_message conf_state = ffi.new('int[1]') qop_state = ffi.new('gss_qop_t[1]') retval = C.gss_unwrap( minor_status, self._ctx[0], message_buffer, output_buffer, conf_state, qop_state ) try: if GSS_ERROR(retval): if minor_status[0] and self.mech_type: raise _exception_for_status(retval, minor_status[0], self.mech_type) else: raise _exception_for_status(retval, minor_status[0]) output = _buf_to_str(output_buffer[0]) if conf_req and not conf_state[0]: raise GSSException("No confidentiality protection.") if qop_req is not None and qop_req != qop_state[0]: raise GSSException("QOP {0} does not match required value {1}.".format(qop_state[0], qop_req)) supp_bits = _status_bits(retval) if supplementary: return output, supp_bits elif len(supp_bits) > 0: # Raise if unseq/replayed token detected raise _exception_for_status(retval, minor_status[0], token=output) else: return output finally: if output_buffer[0].length != 0: C.gss_release_buffer(minor_status, output_buffer)
Takes a token that has been generated by the peer application with :meth:`wrap`, verifies and optionally decrypts it, using this security context's cryptographic keys. The `supplementary` parameter determines how this method deals with replayed, unsequential, too-old or missing tokens, as follows: If the `supplementary` parameter is False (the default), and if a replayed or otherwise out-of-sequence token is detected, this method raises a :exc:`~gssapi.error.GSSCException`. If no replay or out-of-sequence token is detected, this method returns the unwrapped message only. If `supplementary` is True, instead of raising an exception when a replayed or out-of-sequence token is detected, this method returns a tuple ``(unwrapped_message, supplementary_info)`` where ``supplementary_info`` is a tuple containing zero or more of the constants :const:`~gssapi.S_DUPLICATE_TOKEN`, :const:`~gssapi.S_OLD_TOKEN`, :const:`~gssapi.S_UNSEQ_TOKEN` and :const:`~gssapi.S_GAP_TOKEN`. The supplementary info tells the caller whether a replayed or out-of-sequence message was detected. The caller must check this and decide how to handle the message if any of the flags are set. For a reference to the meaning of the flags, check `RFC 2744 Section 3.9.1 <http://tools.ietf.org/html/rfc2744#section-3.9.1>` for the corresponding GSS_S_OLD_TOKEN, etc, constants. :param message: The wrapped message token :type message: bytes :param conf_req: Whether to require confidentiality (encryption) :type conf_req: bool :param qop_req: The quality of protection required. It is recommended to not change this from the default None as most GSSAPI implementations do not support it. :param supplementary: Whether to also return supplementary info. :type supplementary: bool :returns: the verified and decrypted message if `supplementary` is False, or a tuple ``(unwrapped_message, supplementary_info)`` if `supplementary` is True. 
:raises: :exc:`~gssapi.error.GSSException` if :attr:`integrity_negotiated` is false, or if the verification or decryption fails, if the message was modified, or if confidentiality was required (`conf_req` was True) but the message did not have confidentiality protection applied (was not encrypted), or if the `qop_req` parameter was set and it did not match the QOP applied to the message, or if a replayed or out-of-sequence message was detected.
Below is the the instruction that describes the task: ### Input: Takes a token that has been generated by the peer application with :meth:`wrap`, verifies and optionally decrypts it, using this security context's cryptographic keys. The `supplementary` parameter determines how this method deals with replayed, unsequential, too-old or missing tokens, as follows: If the `supplementary` parameter is False (the default), and if a replayed or otherwise out-of-sequence token is detected, this method raises a :exc:`~gssapi.error.GSSCException`. If no replay or out-of-sequence token is detected, this method returns the unwrapped message only. If `supplementary` is True, instead of raising an exception when a replayed or out-of-sequence token is detected, this method returns a tuple ``(unwrapped_message, supplementary_info)`` where ``supplementary_info`` is a tuple containing zero or more of the constants :const:`~gssapi.S_DUPLICATE_TOKEN`, :const:`~gssapi.S_OLD_TOKEN`, :const:`~gssapi.S_UNSEQ_TOKEN` and :const:`~gssapi.S_GAP_TOKEN`. The supplementary info tells the caller whether a replayed or out-of-sequence message was detected. The caller must check this and decide how to handle the message if any of the flags are set. For a reference to the meaning of the flags, check `RFC 2744 Section 3.9.1 <http://tools.ietf.org/html/rfc2744#section-3.9.1>` for the corresponding GSS_S_OLD_TOKEN, etc, constants. :param message: The wrapped message token :type message: bytes :param conf_req: Whether to require confidentiality (encryption) :type conf_req: bool :param qop_req: The quality of protection required. It is recommended to not change this from the default None as most GSSAPI implementations do not support it. :param supplementary: Whether to also return supplementary info. :type supplementary: bool :returns: the verified and decrypted message if `supplementary` is False, or a tuple ``(unwrapped_message, supplementary_info)`` if `supplementary` is True. 
:raises: :exc:`~gssapi.error.GSSException` if :attr:`integrity_negotiated` is false, or if the verification or decryption fails, if the message was modified, or if confidentiality was required (`conf_req` was True) but the message did not have confidentiality protection applied (was not encrypted), or if the `qop_req` parameter was set and it did not match the QOP applied to the message, or if a replayed or out-of-sequence message was detected. ### Response: def unwrap(self, message, conf_req=True, qop_req=None, supplementary=False): """ Takes a token that has been generated by the peer application with :meth:`wrap`, verifies and optionally decrypts it, using this security context's cryptographic keys. The `supplementary` parameter determines how this method deals with replayed, unsequential, too-old or missing tokens, as follows: If the `supplementary` parameter is False (the default), and if a replayed or otherwise out-of-sequence token is detected, this method raises a :exc:`~gssapi.error.GSSCException`. If no replay or out-of-sequence token is detected, this method returns the unwrapped message only. If `supplementary` is True, instead of raising an exception when a replayed or out-of-sequence token is detected, this method returns a tuple ``(unwrapped_message, supplementary_info)`` where ``supplementary_info`` is a tuple containing zero or more of the constants :const:`~gssapi.S_DUPLICATE_TOKEN`, :const:`~gssapi.S_OLD_TOKEN`, :const:`~gssapi.S_UNSEQ_TOKEN` and :const:`~gssapi.S_GAP_TOKEN`. The supplementary info tells the caller whether a replayed or out-of-sequence message was detected. The caller must check this and decide how to handle the message if any of the flags are set. For a reference to the meaning of the flags, check `RFC 2744 Section 3.9.1 <http://tools.ietf.org/html/rfc2744#section-3.9.1>` for the corresponding GSS_S_OLD_TOKEN, etc, constants. 
:param message: The wrapped message token :type message: bytes :param conf_req: Whether to require confidentiality (encryption) :type conf_req: bool :param qop_req: The quality of protection required. It is recommended to not change this from the default None as most GSSAPI implementations do not support it. :param supplementary: Whether to also return supplementary info. :type supplementary: bool :returns: the verified and decrypted message if `supplementary` is False, or a tuple ``(unwrapped_message, supplementary_info)`` if `supplementary` is True. :raises: :exc:`~gssapi.error.GSSException` if :attr:`integrity_negotiated` is false, or if the verification or decryption fails, if the message was modified, or if confidentiality was required (`conf_req` was True) but the message did not have confidentiality protection applied (was not encrypted), or if the `qop_req` parameter was set and it did not match the QOP applied to the message, or if a replayed or out-of-sequence message was detected. 
""" if not (self.flags & C.GSS_C_INTEG_FLAG): raise GSSException("No integrity protection negotiated.") if not (self.established or (self.flags & C.GSS_C_PROT_READY_FLAG)): raise GSSException("Protection not yet ready.") minor_status = ffi.new('OM_uint32[1]') output_buffer = ffi.new('gss_buffer_desc[1]') message_buffer = ffi.new('gss_buffer_desc[1]') message_buffer[0].length = len(message) c_str_message = ffi.new('char[]', message) message_buffer[0].value = c_str_message conf_state = ffi.new('int[1]') qop_state = ffi.new('gss_qop_t[1]') retval = C.gss_unwrap( minor_status, self._ctx[0], message_buffer, output_buffer, conf_state, qop_state ) try: if GSS_ERROR(retval): if minor_status[0] and self.mech_type: raise _exception_for_status(retval, minor_status[0], self.mech_type) else: raise _exception_for_status(retval, minor_status[0]) output = _buf_to_str(output_buffer[0]) if conf_req and not conf_state[0]: raise GSSException("No confidentiality protection.") if qop_req is not None and qop_req != qop_state[0]: raise GSSException("QOP {0} does not match required value {1}.".format(qop_state[0], qop_req)) supp_bits = _status_bits(retval) if supplementary: return output, supp_bits elif len(supp_bits) > 0: # Raise if unseq/replayed token detected raise _exception_for_status(retval, minor_status[0], token=output) else: return output finally: if output_buffer[0].length != 0: C.gss_release_buffer(minor_status, output_buffer)
def calc_temperature_stats(self): """ Calculates statistics in order to derive diurnal patterns of temperature """ self.temp.max_delta = melodist.get_shift_by_data(self.data.temp, self._lon, self._lat, self._timezone) self.temp.mean_course = melodist.util.calculate_mean_daily_course_by_month(self.data.temp, normalize=True)
Calculates statistics in order to derive diurnal patterns of temperature
Below is the instruction that describes the task: ### Input: Calculates statistics in order to derive diurnal patterns of temperature ### Response: def calc_temperature_stats(self): """ Calculates statistics in order to derive diurnal patterns of temperature """ self.temp.max_delta = melodist.get_shift_by_data(self.data.temp, self._lon, self._lat, self._timezone) self.temp.mean_course = melodist.util.calculate_mean_daily_course_by_month(self.data.temp, normalize=True)
def optimize(self, timeSeries, forecastingMethods=None, startingPercentage=0.0, endPercentage=100.0): """Runs the optimization of the given TimeSeries. :param TimeSeries timeSeries: TimeSeries instance that requires an optimized forecast. :param list forecastingMethods: List of forecastingMethods that will be used for optimization. :param float startingPercentage: Defines the start of the interval. This has to be a value in [0.0, 100.0]. It represents the value, where the error calculation should be started. 25.0 for example means that the first 25% of all calculated errors will be ignored. :param float endPercentage: Defines the end of the interval. This has to be a value in [0.0, 100.0]. It represents the value, after which all error values will be ignored. 90.0 for example means that the last 10% of all local errors will be ignored. :return: Returns the optimized forecasting method, the corresponding error measure and the forecasting methods parameters. :rtype: [BaseForecastingMethod, BaseErrorMeasure, Dictionary] :raise: Raises a :py:exc:`ValueError` ValueError if no forecastingMethods is empty. """ if forecastingMethods is None or len(forecastingMethods) == 0: raise ValueError("forecastingMethods cannot be empty.") self._startingPercentage = startingPercentage self._endPercentage = endPercentage results = [] for forecastingMethod in forecastingMethods: results.append([forecastingMethod] + self.optimize_forecasting_method(timeSeries, forecastingMethod)) # get the forecasting method with the smallest error bestForecastingMethod = min(results, key=lambda item: item[1].get_error(self._startingPercentage, self._endPercentage)) for parameter in bestForecastingMethod[2]: bestForecastingMethod[0].set_parameter(parameter, bestForecastingMethod[2][parameter]) return bestForecastingMethod
Runs the optimization of the given TimeSeries. :param TimeSeries timeSeries: TimeSeries instance that requires an optimized forecast. :param list forecastingMethods: List of forecastingMethods that will be used for optimization. :param float startingPercentage: Defines the start of the interval. This has to be a value in [0.0, 100.0]. It represents the value, where the error calculation should be started. 25.0 for example means that the first 25% of all calculated errors will be ignored. :param float endPercentage: Defines the end of the interval. This has to be a value in [0.0, 100.0]. It represents the value, after which all error values will be ignored. 90.0 for example means that the last 10% of all local errors will be ignored. :return: Returns the optimized forecasting method, the corresponding error measure and the forecasting method's parameters. :rtype: [BaseForecastingMethod, BaseErrorMeasure, Dictionary] :raise: Raises a :py:exc:`ValueError` if forecastingMethods is empty.
Below is the instruction that describes the task: ### Input: Runs the optimization of the given TimeSeries. :param TimeSeries timeSeries: TimeSeries instance that requires an optimized forecast. :param list forecastingMethods: List of forecastingMethods that will be used for optimization. :param float startingPercentage: Defines the start of the interval. This has to be a value in [0.0, 100.0]. It represents the value, where the error calculation should be started. 25.0 for example means that the first 25% of all calculated errors will be ignored. :param float endPercentage: Defines the end of the interval. This has to be a value in [0.0, 100.0]. It represents the value, after which all error values will be ignored. 90.0 for example means that the last 10% of all local errors will be ignored. :return: Returns the optimized forecasting method, the corresponding error measure and the forecasting method's parameters. :rtype: [BaseForecastingMethod, BaseErrorMeasure, Dictionary] :raise: Raises a :py:exc:`ValueError` if forecastingMethods is empty. ### Response: def optimize(self, timeSeries, forecastingMethods=None, startingPercentage=0.0, endPercentage=100.0): """Runs the optimization of the given TimeSeries. :param TimeSeries timeSeries: TimeSeries instance that requires an optimized forecast. :param list forecastingMethods: List of forecastingMethods that will be used for optimization. :param float startingPercentage: Defines the start of the interval. This has to be a value in [0.0, 100.0]. It represents the value, where the error calculation should be started. 25.0 for example means that the first 25% of all calculated errors will be ignored. :param float endPercentage: Defines the end of the interval. This has to be a value in [0.0, 100.0]. It represents the value, after which all error values will be ignored. 90.0 for example means that the last 10% of all local errors will be ignored. 
:return: Returns the optimized forecasting method, the corresponding error measure and the forecasting method's parameters. :rtype: [BaseForecastingMethod, BaseErrorMeasure, Dictionary] :raise: Raises a :py:exc:`ValueError` if forecastingMethods is empty. """ if forecastingMethods is None or len(forecastingMethods) == 0: raise ValueError("forecastingMethods cannot be empty.") self._startingPercentage = startingPercentage self._endPercentage = endPercentage results = [] for forecastingMethod in forecastingMethods: results.append([forecastingMethod] + self.optimize_forecasting_method(timeSeries, forecastingMethod)) # get the forecasting method with the smallest error bestForecastingMethod = min(results, key=lambda item: item[1].get_error(self._startingPercentage, self._endPercentage)) for parameter in bestForecastingMethod[2]: bestForecastingMethod[0].set_parameter(parameter, bestForecastingMethod[2][parameter]) return bestForecastingMethod
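The selection step above reduces to a `min()` over `[method, error, parameters]` entries. A minimal self-contained sketch of that step — the method names are made-up stand-ins, and plain floats stand in for the `BaseErrorMeasure.get_error(...)` values:

```python
# Hypothetical results in the same shape as the optimizer builds them:
# [forecasting method, error measure, parameter dictionary].
results = [
    ["ExponentialSmoothing", 4.2, {"smoothingFactor": 0.3}],
    ["HoltMethod", 2.9, {"smoothingFactor": 0.5, "trendSmoothingFactor": 0.1}],
]

# Pick the entry with the smallest error, as the min() call above does.
best = min(results, key=lambda item: item[1])
print(best[0])  # HoltMethod
```

The winning entry's parameter dictionary is then what gets written back into the method via `set_parameter`.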
def removeSegment(self, segment, preserveCurve=False): """ Remove segment from the contour. If ``preserveCurve`` is set to ``True`` an attempt will be made to preserve the shape of the curve if the environment supports that functionality. """ if not isinstance(segment, int): segment = self.segments.index(segment) segment = normalizers.normalizeIndex(segment) if segment >= self._len__segments(): raise ValueError("No segment located at index %d." % segment) preserveCurve = normalizers.normalizeBoolean(preserveCurve) self._removeSegment(segment, preserveCurve)
Remove segment from the contour. If ``preserveCurve`` is set to ``True`` an attempt will be made to preserve the shape of the curve if the environment supports that functionality.
Below is the instruction that describes the task: ### Input: Remove segment from the contour. If ``preserveCurve`` is set to ``True`` an attempt will be made to preserve the shape of the curve if the environment supports that functionality. ### Response: def removeSegment(self, segment, preserveCurve=False): """ Remove segment from the contour. If ``preserveCurve`` is set to ``True`` an attempt will be made to preserve the shape of the curve if the environment supports that functionality. """ if not isinstance(segment, int): segment = self.segments.index(segment) segment = normalizers.normalizeIndex(segment) if segment >= self._len__segments(): raise ValueError("No segment located at index %d." % segment) preserveCurve = normalizers.normalizeBoolean(preserveCurve) self._removeSegment(segment, preserveCurve)
def set_archive_layout(self, archive_id, layout_type, stylesheet=None): """ Use this method to change the layout of videos in an OpenTok archive :param String archive_id: The ID of the archive that will be updated :param String layout_type: The layout type for the archive. Valid values are: 'bestFit', 'custom', 'horizontalPresentation', 'pip' and 'verticalPresentation' :param String stylesheet: Optional. CSS used to style the custom layout. Specify this only if you set the type property to 'custom' """ payload = { 'type': layout_type, } if layout_type == 'custom': if stylesheet is not None: payload['stylesheet'] = stylesheet endpoint = self.endpoints.set_archive_layout_url(archive_id) response = requests.put( endpoint, data=json.dumps(payload), headers=self.json_headers(), proxies=self.proxies, timeout=self.timeout ) if response.status_code == 200: pass elif response.status_code == 400: raise ArchiveError('Invalid request. This response may indicate that the data in your request is invalid JSON. It may also indicate that you passed in invalid layout options.') elif response.status_code == 403: raise AuthError('Authentication error.') else: raise RequestError('OpenTok server error.', response.status_code)
Use this method to change the layout of videos in an OpenTok archive :param String archive_id: The ID of the archive that will be updated :param String layout_type: The layout type for the archive. Valid values are: 'bestFit', 'custom', 'horizontalPresentation', 'pip' and 'verticalPresentation' :param String stylesheet: Optional. CSS used to style the custom layout. Specify this only if you set the type property to 'custom'
Below is the instruction that describes the task: ### Input: Use this method to change the layout of videos in an OpenTok archive :param String archive_id: The ID of the archive that will be updated :param String layout_type: The layout type for the archive. Valid values are: 'bestFit', 'custom', 'horizontalPresentation', 'pip' and 'verticalPresentation' :param String stylesheet: Optional. CSS used to style the custom layout. Specify this only if you set the type property to 'custom' ### Response: def set_archive_layout(self, archive_id, layout_type, stylesheet=None): """ Use this method to change the layout of videos in an OpenTok archive :param String archive_id: The ID of the archive that will be updated :param String layout_type: The layout type for the archive. Valid values are: 'bestFit', 'custom', 'horizontalPresentation', 'pip' and 'verticalPresentation' :param String stylesheet: Optional. CSS used to style the custom layout. Specify this only if you set the type property to 'custom' """ payload = { 'type': layout_type, } if layout_type == 'custom': if stylesheet is not None: payload['stylesheet'] = stylesheet endpoint = self.endpoints.set_archive_layout_url(archive_id) response = requests.put( endpoint, data=json.dumps(payload), headers=self.json_headers(), proxies=self.proxies, timeout=self.timeout ) if response.status_code == 200: pass elif response.status_code == 400: raise ArchiveError('Invalid request. This response may indicate that the data in your request is invalid JSON. It may also indicate that you passed in invalid layout options.') elif response.status_code == 403: raise AuthError('Authentication error.') else: raise RequestError('OpenTok server error.', response.status_code)
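The payload construction is the only branching logic in `set_archive_layout` and can be checked in isolation. A sketch — the helper name here is a made-up stand-in, not part of the OpenTok SDK:

```python
import json

def build_layout_payload(layout_type, stylesheet=None):
    # Mirrors the payload logic above: the stylesheet is attached only
    # when the layout type is 'custom'.
    payload = {"type": layout_type}
    if layout_type == "custom" and stylesheet is not None:
        payload["stylesheet"] = stylesheet
    return json.dumps(payload)

print(build_layout_payload("bestFit"))  # {"type": "bestFit"}
print(build_layout_payload("custom", "stream { float: left; }"))
```

Note that a stylesheet passed with any non-custom layout type is silently dropped, matching the nested `if` above.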
def iter_doc_filepaths(self, **kwargs): """Generator that iterates over all detected documents and returns the filesystem path to each doc. Order is by shard, but arbitrary within shards. @TEMP not locked to prevent doc creation/deletion """ for shard in self._shards: for doc_id, blob in shard.iter_doc_filepaths(**kwargs): yield doc_id, blob
Generator that iterates over all detected documents and returns the filesystem path to each doc. Order is by shard, but arbitrary within shards. @TEMP not locked to prevent doc creation/deletion
Below is the instruction that describes the task: ### Input: Generator that iterates over all detected documents and returns the filesystem path to each doc. Order is by shard, but arbitrary within shards. @TEMP not locked to prevent doc creation/deletion ### Response: def iter_doc_filepaths(self, **kwargs): """Generator that iterates over all detected documents and returns the filesystem path to each doc. Order is by shard, but arbitrary within shards. @TEMP not locked to prevent doc creation/deletion """ for shard in self._shards: for doc_id, blob in shard.iter_doc_filepaths(**kwargs): yield doc_id, blob
def listified_tokenizer(source): """Tokenizes *source* and returns the tokens as a list of lists.""" io_obj = io.StringIO(source) return [list(a) for a in tokenize.generate_tokens(io_obj.readline)]
Tokenizes *source* and returns the tokens as a list of lists.
Below is the instruction that describes the task: ### Input: Tokenizes *source* and returns the tokens as a list of lists. ### Response: def listified_tokenizer(source): """Tokenizes *source* and returns the tokens as a list of lists.""" io_obj = io.StringIO(source) return [list(a) for a in tokenize.generate_tokens(io_obj.readline)]
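Since `listified_tokenizer` relies only on the standard library, its behavior is easy to verify with a quick usage sketch (token type codes are mapped to names via `tokenize.tok_name`):

```python
import io
import tokenize

def listified_tokenizer(source):
    """Tokenizes *source* and returns the tokens as a list of lists."""
    io_obj = io.StringIO(source)
    return [list(a) for a in tokenize.generate_tokens(io_obj.readline)]

tokens = listified_tokenizer("x = 1\n")
# Each entry is [type, string, start, end, line]; look up the type names:
names = [tokenize.tok_name[tok[0]] for tok in tokens]
print(names[:3])  # ['NAME', 'OP', 'NUMBER']
```

Converting each named tuple to a plain list makes the token entries mutable, which is useful for token-rewriting passes.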
def delete_bond(self, n, m): """ implementation of bond removal """ self.remove_edge(n, m) self.flush_cache()
implementation of bond removal
Below is the instruction that describes the task: ### Input: implementation of bond removal ### Response: def delete_bond(self, n, m): """ implementation of bond removal """ self.remove_edge(n, m) self.flush_cache()
def isPeregrine(self): """ Returns whether this object is peregrine. """ return isPeregrine(self.obj.id, self.obj.sign, self.obj.signlon)
Returns whether this object is peregrine.
Below is the instruction that describes the task: ### Input: Returns whether this object is peregrine. ### Response: def isPeregrine(self): """ Returns whether this object is peregrine. """ return isPeregrine(self.obj.id, self.obj.sign, self.obj.signlon)
def dpss(npts, fw, number_of_tapers, auto_spline=True, npts_max=None): """ Calculates DPSS also known as Slepian sequences or Slepian tapers. Calculation of the DPSS (Discrete Prolate Spheroidal Sequences) and the corresponding eigenvalues. The (1 - eigenvalue) terms are also calculated. Wraps the ``dpss()`` subroutine from the Fortran library. By default this routine will use spline interpolation if sequences with more than 200,000 samples are requested. .. note:: The tapers are the eigenvectors of the tridiagonal matrix sigma(i, j) [see Slepian(1978) eq 14 and 25]. They are also the eigenvectors of the Toeplitz matrix, eq. 18. :param npts: The number of points in the series. :type npts: int :param fw: The time-bandwidth product (number of Rayleigh bins). :type fw: float :param number_of_tapers: The desired number of tapers. :type number_of_tapers: int :param auto_spline: Whether or not to automatically use spline interpolation for ``npts`` > 200000. :type auto_spline: bool :param npts_max: The number of actual points to calculate the DPSS. If this number is smaller than ``npts``, spline interpolation will be performed, regardless of the value of ``auto_spline``. :type npts_max: None or int :returns: ``(v, lambda, theta)`` with ``v(npts, number_of_tapers)`` the eigenvectors (tapers), ``lambda`` the eigenvalues of the ``v``'s and ``theta`` the 1 - ``lambda`` (energy outside the bandwidth) values. .. rubric:: Example This example demonstrates how to calculate and plot the first five DPSS tapers. >>> import matplotlib.pyplot as plt >>> from mtspec import dpss >>> tapers, lamb, theta = dpss(512, 2.5, 5) >>> for i in range(5): ... plt.plot(tapers[:, i]) .. plot:: # Same as the code snippet in the docstring, just a bit prettier. 
import matplotlib.pyplot as plt plt.style.use("ggplot") from mtspec import dpss tapers, lamb, theta = dpss(512, 2.5, 5) for i in range(5): plt.plot(tapers[:, i]) plt.xlim(0, 512) plt.ylim(-0.09, 0.09) plt.tight_layout() """ mt = _MtspecType("float64") v = mt.empty((npts, number_of_tapers)) lamb = mt.empty(number_of_tapers) theta = mt.empty(number_of_tapers) # Set auto_spline to True. if npts_max and npts_max < npts: auto_spline = True # Always set npts_max. else: npts_max = 200000 # Call either the spline routine or the normal routine. if auto_spline is True and npts > npts_max: mtspeclib.dpss_spline_( C.byref(C.c_int(npts_max)), C.byref(C.c_int(npts)), C.byref(C.c_double(fw)), C.byref(C.c_int(number_of_tapers)), mt.p(v), mt.p(lamb), mt.p(theta)) else: mtspeclib.dpss_(C.byref(C.c_int(npts)), C.byref(C.c_double(fw)), C.byref(C.c_int(number_of_tapers)), mt.p(v), mt.p(lamb), mt.p(theta)) return (v, lamb, theta)
Calculates DPSS also known as Slepian sequences or Slepian tapers. Calculation of the DPSS (Discrete Prolate Spheroidal Sequences) and the corresponding eigenvalues. The (1 - eigenvalue) terms are also calculated. Wraps the ``dpss()`` subroutine from the Fortran library. By default this routine will use spline interpolation if sequences with more than 200,000 samples are requested. .. note:: The tapers are the eigenvectors of the tridiagonal matrix sigma(i, j) [see Slepian(1978) eq 14 and 25]. They are also the eigenvectors of the Toeplitz matrix, eq. 18. :param npts: The number of points in the series. :type npts: int :param fw: The time-bandwidth product (number of Rayleigh bins). :type fw: float :param number_of_tapers: The desired number of tapers. :type number_of_tapers: int :param auto_spline: Whether or not to automatically use spline interpolation for ``npts`` > 200000. :type auto_spline: bool :param npts_max: The number of actual points to calculate the DPSS. If this number is smaller than ``npts``, spline interpolation will be performed, regardless of the value of ``auto_spline``. :type npts_max: None or int :returns: ``(v, lambda, theta)`` with ``v(npts, number_of_tapers)`` the eigenvectors (tapers), ``lambda`` the eigenvalues of the ``v``'s and ``theta`` the 1 - ``lambda`` (energy outside the bandwidth) values. .. rubric:: Example This example demonstrates how to calculate and plot the first five DPSS tapers. >>> import matplotlib.pyplot as plt >>> from mtspec import dpss >>> tapers, lamb, theta = dpss(512, 2.5, 5) >>> for i in range(5): ... plt.plot(tapers[:, i]) .. plot:: # Same as the code snippet in the docstring, just a bit prettier. import matplotlib.pyplot as plt plt.style.use("ggplot") from mtspec import dpss tapers, lamb, theta = dpss(512, 2.5, 5) for i in range(5): plt.plot(tapers[:, i]) plt.xlim(0, 512) plt.ylim(-0.09, 0.09) plt.tight_layout()
Below is the the instruction that describes the task: ### Input: Calculates DPSS also known as Slepian sequences or Slepian tapers. Calculation of the DPSS (Discrete Prolate Spheroidal Sequences) and the correspondent eigenvalues. The (1 - eigenvalue) terms are also calculated. Wraps the ``dpss()`` subroutine from the Fortran library. By default this routine will use spline interpolation if sequences with more than 200.000 samples are requested. .. note:: The tapers are the eigenvectors of the tridiagonal matrix sigma(i, j) [see Slepian(1978) eq 14 and 25]. They are also the eigenvectors of the Toeplitz matrix, eq. 18. :param npts: The number of points in the series. :type npts: int :param fw: The time-bandwidth product (number of Rayleigh bins). :type fw: float :param number_of_tapers: The desired number of tapers. :type number_of_tapers: int :param auto_spline: Whether or not to automatically use spline interpolation for ``npts`` > 200000. :type auto_spline: bool :param npts_max: The number of actual points to calculate the DPSS. If this number is smaller than ``npts``, spline interpolation will be performed, regardless of the value of ``auto_spline``. :type npts_max: None or int :returns: ``(v, lambda, theta)`` with ``v(npts, number_of_tapers)`` the eigenvectors (tapers), ``lambda`` the eigenvalues of the ``v``'s and ``theta`` the 1 - ``lambda`` (energy outside the bandwidth) values. .. rubric:: Example This example demonstrates how to calculate and plot the first five DPSS'. >>> import matplotlib.pyplot as plt >>> from mtspec import dpss >>> tapers, lamb, theta = dpss(512, 2.5, 5) >>> for i in range(5): ... plt.plot(tapers[:, i]) .. plot :: # Same as the code snippet in the docstring, just a bit prettier. 
import matplotlib.pyplot as plt plt.style.use("ggplot") from mtspec import dpss tapers, lamb, theta = dpss(512, 2.5, 5) for i in range(5): plt.plot(tapers[:, i]) plt.xlim(0, 512) plt.ylim(-0.09, 0.09) plt.tight_layout() ### Response: def dpss(npts, fw, number_of_tapers, auto_spline=True, npts_max=None): """ Calculates DPSS also known as Slepian sequences or Slepian tapers. Calculation of the DPSS (Discrete Prolate Spheroidal Sequences) and the correspondent eigenvalues. The (1 - eigenvalue) terms are also calculated. Wraps the ``dpss()`` subroutine from the Fortran library. By default this routine will use spline interpolation if sequences with more than 200.000 samples are requested. .. note:: The tapers are the eigenvectors of the tridiagonal matrix sigma(i, j) [see Slepian(1978) eq 14 and 25]. They are also the eigenvectors of the Toeplitz matrix, eq. 18. :param npts: The number of points in the series. :type npts: int :param fw: The time-bandwidth product (number of Rayleigh bins). :type fw: float :param number_of_tapers: The desired number of tapers. :type number_of_tapers: int :param auto_spline: Whether or not to automatically use spline interpolation for ``npts`` > 200000. :type auto_spline: bool :param npts_max: The number of actual points to calculate the DPSS. If this number is smaller than ``npts``, spline interpolation will be performed, regardless of the value of ``auto_spline``. :type npts_max: None or int :returns: ``(v, lambda, theta)`` with ``v(npts, number_of_tapers)`` the eigenvectors (tapers), ``lambda`` the eigenvalues of the ``v``'s and ``theta`` the 1 - ``lambda`` (energy outside the bandwidth) values. .. rubric:: Example This example demonstrates how to calculate and plot the first five DPSS'. >>> import matplotlib.pyplot as plt >>> from mtspec import dpss >>> tapers, lamb, theta = dpss(512, 2.5, 5) >>> for i in range(5): ... plt.plot(tapers[:, i]) .. plot :: # Same as the code snippet in the docstring, just a bit prettier. 
import matplotlib.pyplot as plt plt.style.use("ggplot") from mtspec import dpss tapers, lamb, theta = dpss(512, 2.5, 5) for i in range(5): plt.plot(tapers[:, i]) plt.xlim(0, 512) plt.ylim(-0.09, 0.09) plt.tight_layout() """ mt = _MtspecType("float64") v = mt.empty((npts, number_of_tapers)) lamb = mt.empty(number_of_tapers) theta = mt.empty(number_of_tapers) # Set auto_spline to True. if npts_max and npts_max < npts: auto_spline = True # Always set npts_max. else: npts_max = 200000 # Call either the spline routine or the normal routine. if auto_spline is True and npts > npts_max: mtspeclib.dpss_spline_( C.byref(C.c_int(npts_max)), C.byref(C.c_int(npts)), C.byref(C.c_double(fw)), C.byref(C.c_int(number_of_tapers)), mt.p(v), mt.p(lamb), mt.p(theta)) else: mtspeclib.dpss_(C.byref(C.c_int(npts)), C.byref(C.c_double(fw)), C.byref(C.c_int(number_of_tapers)), mt.p(v), mt.p(lamb), mt.p(theta)) return (v, lamb, theta)
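Leaving the Fortran call aside, the spline-vs-direct branching in `dpss()` can be traced with a small stand-in. The function name below is hypothetical; the thresholds mirror the code above:

```python
def choose_dpss_routine(npts, auto_spline=True, npts_max=None):
    # Mirrors the branch selection in dpss(): an explicit npts_max smaller
    # than npts forces spline interpolation; otherwise the 200,000-sample
    # default threshold applies.
    if npts_max and npts_max < npts:
        auto_spline = True
    else:
        npts_max = 200000
    return "spline" if (auto_spline is True and npts > npts_max) else "direct"

print(choose_dpss_routine(512))                # direct
print(choose_dpss_routine(500000))             # spline
print(choose_dpss_routine(512, npts_max=100))  # spline (forced by npts_max)
```

Note that an `npts_max` greater than or equal to `npts` is overwritten with the 200,000 default, exactly as in the original.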
def state(self, state): """Update the status of a build""" state = state.lower() if state not in valid_states: raise ValueError("Build state must have a value from:\n{}".format(", ".join(valid_states))) self.obj['state'] = state self.changes.append("Updating build:{}.state={}" .format(self.obj['name'], state)) return self
Update the status of a build
Below is the instruction that describes the task: ### Input: Update the status of a build ### Response: def state(self, state): """Update the status of a build""" state = state.lower() if state not in valid_states: raise ValueError("Build state must have a value from:\n{}".format(", ".join(valid_states))) self.obj['state'] = state self.changes.append("Updating build:{}.state={}" .format(self.obj['name'], state)) return self
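A reduced sketch of the guard above. The `valid_states` set here is hypothetical (its real contents live elsewhere in the module), and the helper name is made up:

```python
valid_states = {"enqueued", "running", "passed", "failed"}  # hypothetical values

def normalize_state(state):
    # Mirrors the guard above: lowercase first, then a membership check.
    state = state.lower()
    if state not in valid_states:
        raise ValueError(
            "Build state must have a value from:\n{}".format(", ".join(sorted(valid_states))))
    return state

print(normalize_state("PASSED"))  # passed
```

Lowercasing before the membership test is what makes the setter case-insensitive.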
def process_quote(self, data): """Quote push handler.""" for ix, row in data.iterrows(): symbol = row['code'] tick = self._tick_dict.get(symbol, None) if not tick: tick = TinyQuoteData() tick.symbol = symbol self._tick_dict[symbol] = tick tick.date = row['data_date'].replace('-', '') tick.time = row['data_time'] # with GLOBAL.dt_lock: if tick.date and tick.time: tick.datetime = datetime.strptime(' '.join([tick.date, tick.time]), '%Y%m%d %H:%M:%S') else: return tick.openPrice = row['open_price'] tick.highPrice = row['high_price'] tick.lowPrice = row['low_price'] tick.preClosePrice = row['prev_close_price'] # 1.25: added the order-book price spread to help compute the correct order submission price; requires NiuNiu client v3.42.4961.125 or later if 'price_spread' in row: tick.priceSpread = row['price_spread'] tick.lastPrice = row['last_price'] tick.volume = row['volume'] new_tick = copy(tick) self._notify_new_tick_event(new_tick)
Quote push handler.
Below is the instruction that describes the task: ### Input: Quote push handler. ### Response: def process_quote(self, data): """Quote push handler.""" for ix, row in data.iterrows(): symbol = row['code'] tick = self._tick_dict.get(symbol, None) if not tick: tick = TinyQuoteData() tick.symbol = symbol self._tick_dict[symbol] = tick tick.date = row['data_date'].replace('-', '') tick.time = row['data_time'] # with GLOBAL.dt_lock: if tick.date and tick.time: tick.datetime = datetime.strptime(' '.join([tick.date, tick.time]), '%Y%m%d %H:%M:%S') else: return tick.openPrice = row['open_price'] tick.highPrice = row['high_price'] tick.lowPrice = row['low_price'] tick.preClosePrice = row['prev_close_price'] # 1.25: added the order-book price spread to help compute the correct order submission price; requires NiuNiu client v3.42.4961.125 or later if 'price_spread' in row: tick.priceSpread = row['price_spread'] tick.lastPrice = row['last_price'] tick.volume = row['volume'] new_tick = copy(tick) self._notify_new_tick_event(new_tick)
def _cleanup_factory(self): """Build a cleanup closure that doesn't increase our ref count""" _self = weakref.proxy(self) def wrapper(): try: _self.close(timeout=0) except (ReferenceError, AttributeError): pass return wrapper
Build a cleanup closure that doesn't increase our ref count
Below is the instruction that describes the task: ### Input: Build a cleanup closure that doesn't increase our ref count ### Response: def _cleanup_factory(self): """Build a cleanup closure that doesn't increase our ref count""" _self = weakref.proxy(self) def wrapper(): try: _self.close(timeout=0) except (ReferenceError, AttributeError): pass return wrapper
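The weak-proxy trick can be demonstrated standalone: the returned closure does not keep its object alive, and calling it after collection is silently ignored. A minimal sketch with a made-up `Client` class:

```python
import weakref

class Client:
    def __init__(self):
        self.closed = False
        self._cleanup = self._cleanup_factory()

    def close(self, timeout=None):
        self.closed = True

    def _cleanup_factory(self):
        # The closure holds only a weak proxy, so it does not add a
        # strong reference to self.
        _self = weakref.proxy(self)
        def wrapper():
            try:
                _self.close(timeout=0)
            except (ReferenceError, AttributeError):
                pass
        return wrapper

c = Client()
cleanup = c._cleanup
cleanup()
print(c.closed)  # True
del c
cleanup()  # referent is gone; the ReferenceError is swallowed
```

If `wrapper` captured `self` directly, storing it on the instance would create a reference cycle and keep the object alive past its last external reference.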
def _group_changes(cur, wanted, remove=False): ''' Determine if the groups need to be changed ''' old = set(cur) new = set(wanted) if (remove and old != new) or (not remove and not new.issubset(old)): return True return False
Determine if the groups need to be changed
Below is the instruction that describes the task: ### Input: Determine if the groups need to be changed ### Response: def _group_changes(cur, wanted, remove=False): ''' Determine if the groups need to be changed ''' old = set(cur) new = set(wanted) if (remove and old != new) or (not remove and not new.issubset(old)): return True return False
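The set logic above condenses to a single boolean expression: with `remove=True` the groups must match exactly, while the default only requires the wanted groups to already be present. A behavior-equivalent sketch:

```python
def group_changes(cur, wanted, remove=False):
    # remove=True: current groups must equal the wanted set exactly.
    # remove=False: wanted groups only need to be a subset of current.
    old = set(cur)
    new = set(wanted)
    return (remove and old != new) or (not remove and not new.issubset(old))

print(group_changes(["wheel", "adm"], ["wheel"]))               # False
print(group_changes(["wheel"], ["wheel", "adm"]))               # True
print(group_changes(["wheel", "adm"], ["wheel"], remove=True))  # True
```

The asymmetry is deliberate: without `remove`, extra existing groups are left alone; with it, any difference triggers a change.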
def value(self, extra=None): """The value used for processing. Can be a tuple, with optional extra bits. """ if isinstance(self.code, WithExtra): if not 0 <= extra < 1 << self.extraBits(): raise ValueError("value: extra value doesn't fit in extraBits") return self.code.value(self.index, extra) if extra is not None: raise ValueError('value: no extra bits for this code') return self.code.value(self.index)
The value used for processing. Can be a tuple, with optional extra bits.
Below is the instruction that describes the task: ### Input: The value used for processing. Can be a tuple, with optional extra bits. ### Response: def value(self, extra=None): """The value used for processing. Can be a tuple, with optional extra bits. """ if isinstance(self.code, WithExtra): if not 0 <= extra < 1 << self.extraBits(): raise ValueError("value: extra value doesn't fit in extraBits") return self.code.value(self.index, extra) if extra is not None: raise ValueError('value: no extra bits for this code') return self.code.value(self.index)
def _linux_bradd(br): ''' Internal, creates the bridge ''' brctl = _tool_path('brctl') return __salt__['cmd.run']('{0} addbr {1}'.format(brctl, br), python_shell=False)
Internal, creates the bridge
Below is the instruction that describes the task: ### Input: Internal, creates the bridge ### Response: def _linux_bradd(br): ''' Internal, creates the bridge ''' brctl = _tool_path('brctl') return __salt__['cmd.run']('{0} addbr {1}'.format(brctl, br), python_shell=False)
def editproject(self, project_id, **kwargs): """ Edit an existing project. :param name: new project name :param path: custom repository name for new project. By default generated based on name :param default_branch: the default branch :param description: short project description :param issues_enabled: :param merge_requests_enabled: :param wiki_enabled: :param snippets_enabled: :param public: if true, same as setting visibility_level = 20 :param visibility_level: :return: """ data = {"id": project_id} if kwargs: data.update(kwargs) request = requests.put( '{0}/{1}'.format(self.projects_url, project_id), headers=self.headers, data=data, verify=self.verify_ssl, auth=self.auth, timeout=self.timeout) if request.status_code == 200: return True elif request.status_code == 400: if "Your param's are invalid" in request.text: print(request.text) return False else: return False
Edit an existing project. :param name: new project name :param path: custom repository name for new project. By default generated based on name :param default_branch: the default branch :param description: short project description :param issues_enabled: :param merge_requests_enabled: :param wiki_enabled: :param snippets_enabled: :param public: if true, same as setting visibility_level = 20 :param visibility_level: :return:
Below is the instruction that describes the task: ### Input: Edit an existing project. :param name: new project name :param path: custom repository name for new project. By default generated based on name :param default_branch: the default branch :param description: short project description :param issues_enabled: :param merge_requests_enabled: :param wiki_enabled: :param snippets_enabled: :param public: if true, same as setting visibility_level = 20 :param visibility_level: :return: ### Response: def editproject(self, project_id, **kwargs): """ Edit an existing project. :param name: new project name :param path: custom repository name for new project. By default generated based on name :param default_branch: the default branch :param description: short project description :param issues_enabled: :param merge_requests_enabled: :param wiki_enabled: :param snippets_enabled: :param public: if true, same as setting visibility_level = 20 :param visibility_level: :return: """ data = {"id": project_id} if kwargs: data.update(kwargs) request = requests.put( '{0}/{1}'.format(self.projects_url, project_id), headers=self.headers, data=data, verify=self.verify_ssl, auth=self.auth, timeout=self.timeout) if request.status_code == 200: return True elif request.status_code == 400: if "Your param's are invalid" in request.text: print(request.text) return False else: return False
def _read_txt(self, fin_txt, get_goids_only, exclude_ungrouped): """Read GO file. Store results in: section2goids sections_seen. Return goids_fin.""" goids_sec = [] with open(fin_txt) as istrm: # Lines starting with a GO ID will have that GO ID read and stored. # * Lines that do not start with a GO ID will be ignored. # * Text after the 10 characters in a GO ID will be ignored. section_name = None for line in istrm: if line[:3] == "GO:": goids_sec.append(line[:10]) elif not get_goids_only and ":" in line: mtch = self.srch_section.match(line) if mtch: secstr = mtch.group(1) if section_name is not None and goids_sec: self.section2goids[section_name] = goids_sec if not exclude_ungrouped or secstr != HdrgosSections.secdflt: section_name = secstr self.sections_seen.append(section_name) else: section_name = None goids_sec = [] if section_name is not None and goids_sec: self.section2goids[section_name] = goids_sec return goids_sec
Read GO file. Store results in: section2goids sections_seen. Return goids_fin.
Below is the the instruction that describes the task: ### Input: Read GO file. Store results in: section2goids sections_seen. Return goids_fin. ### Response: def _read_txt(self, fin_txt, get_goids_only, exclude_ungrouped): """Read GO file. Store results in: section2goids sections_seen. Return goids_fin.""" goids_sec = [] with open(fin_txt) as istrm: # Lines starting with a GO ID will have that GO ID read and stored. # * Lines that do not start with a GO ID will be ignored. # * Text after the 10 characters in a GO ID will be ignored. section_name = None for line in istrm: if line[:3] == "GO:": goids_sec.append(line[:10]) elif not get_goids_only and ":" in line: mtch = self.srch_section.match(line) if mtch: secstr = mtch.group(1) if section_name is not None and goids_sec: self.section2goids[section_name] = goids_sec if not exclude_ungrouped or secstr != HdrgosSections.secdflt: section_name = secstr self.sections_seen.append(section_name) else: section_name = None goids_sec = [] if section_name is not None and goids_sec: self.section2goids[section_name] = goids_sec return goids_sec
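The GO-ID branch of `_read_txt` can be exercised on its own. A reduced sketch that only collects IDs, ignoring the section-header machinery (the helper name is made up):

```python
def collect_goids(lines):
    # Reduced sketch of _read_txt's first branch: lines starting with
    # "GO:" contribute their first 10 characters (the GO ID); everything
    # else is ignored.
    goids = []
    for line in lines:
        if line[:3] == "GO:":
            goids.append(line[:10])
    return goids

sample = [
    "GO:0008150 biological_process",
    "# a comment line",
    "GO:0003674 molecular_function",
]
print(collect_goids(sample))  # ['GO:0008150', 'GO:0003674']
```

The fixed 10-character slice works because GO IDs always have the form `GO:` followed by seven digits.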
def _CheckPacketSize(cursor): """Checks that MySQL packet size is big enough for expected query size.""" cur_packet_size = int(_ReadVariable("max_allowed_packet", cursor)) if cur_packet_size < MAX_PACKET_SIZE: raise Error( "MySQL max_allowed_packet of {0} is required, got {1}. " "Please set max_allowed_packet={0} in your MySQL config.".format( MAX_PACKET_SIZE, cur_packet_size))
Checks that MySQL packet size is big enough for expected query size.
Below is the instruction that describes the task: ### Input: Checks that MySQL packet size is big enough for expected query size. ### Response: def _CheckPacketSize(cursor): """Checks that MySQL packet size is big enough for expected query size.""" cur_packet_size = int(_ReadVariable("max_allowed_packet", cursor)) if cur_packet_size < MAX_PACKET_SIZE: raise Error( "MySQL max_allowed_packet of {0} is required, got {1}. " "Please set max_allowed_packet={0} in your MySQL config.".format( MAX_PACKET_SIZE, cur_packet_size))
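The check itself is a plain comparison. A sketch with a hypothetical `MAX_PACKET_SIZE` (the real constant is defined elsewhere in the module) and `RuntimeError` standing in for the module's `Error` class:

```python
MAX_PACKET_SIZE = 40 << 20  # hypothetical required minimum, in bytes

def check_packet_size(cur_packet_size):
    # Mirrors the guard above: refuse to proceed when the server's
    # max_allowed_packet is below the required minimum.
    if cur_packet_size < MAX_PACKET_SIZE:
        raise RuntimeError(
            "MySQL max_allowed_packet of {0} is required, got {1}. "
            "Please set max_allowed_packet={0} in your MySQL config.".format(
                MAX_PACKET_SIZE, cur_packet_size))

check_packet_size(MAX_PACKET_SIZE)  # meets the minimum, no exception
```

Failing fast at startup like this surfaces a misconfigured server before any oversized query is attempted.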
def delete(self): 'Delete this file and return the new, deleted JFSFile' #url = '%s?dl=true' % self.path r = self.jfs.post(url=self.path, params={'dl':'true'}) return r
Delete this file and return the new, deleted JFSFile
Below is the instruction that describes the task: ### Input: Delete this file and return the new, deleted JFSFile ### Response: def delete(self): 'Delete this file and return the new, deleted JFSFile' #url = '%s?dl=true' % self.path r = self.jfs.post(url=self.path, params={'dl':'true'}) return r
def _handle_multiple_svcallers(data, stage): """Retrieve configured structural variation caller, handling multiple. """ svs = get_svcallers(data) # special cases -- prioritization if stage == "ensemble" and dd.get_svprioritize(data): svs.append("prioritize") out = [] for svcaller in svs: if svcaller in _get_callers([data], stage): base = copy.deepcopy(data) # clean SV callers present in multiple rounds and not this caller final_svs = [] for sv in data.get("sv", []): if (stage == "ensemble" or sv["variantcaller"] == svcaller or sv["variantcaller"] not in svs or svcaller not in _get_callers([data], stage, special_cases=True)): final_svs.append(sv) base["sv"] = final_svs base["config"]["algorithm"]["svcaller"] = svcaller base["config"]["algorithm"]["svcaller_orig"] = svs out.append(base) return out
Retrieve configured structural variation caller, handling multiple.
Below is the instruction that describes the task: ### Input: Retrieve configured structural variation caller, handling multiple. ### Response: def _handle_multiple_svcallers(data, stage): """Retrieve configured structural variation caller, handling multiple. """ svs = get_svcallers(data) # special cases -- prioritization if stage == "ensemble" and dd.get_svprioritize(data): svs.append("prioritize") out = [] for svcaller in svs: if svcaller in _get_callers([data], stage): base = copy.deepcopy(data) # clean SV callers present in multiple rounds and not this caller final_svs = [] for sv in data.get("sv", []): if (stage == "ensemble" or sv["variantcaller"] == svcaller or sv["variantcaller"] not in svs or svcaller not in _get_callers([data], stage, special_cases=True)): final_svs.append(sv) base["sv"] = final_svs base["config"]["algorithm"]["svcaller"] = svcaller base["config"]["algorithm"]["svcaller_orig"] = svs out.append(base) return out
def _get_flavor(): """ Download flavor from github """ target = op.join("seqcluster", "flavor") url = "https://github.com/lpantano/seqcluster.git" if not os.path.exists(target): # shutil.rmtree("seqcluster") subprocess.check_call(["git", "clone","-b", "flavor", "--single-branch", url]) return op.abspath(target)
Download flavor from github
Below is the instruction that describes the task: ### Input: Download flavor from github ### Response: def _get_flavor(): """ Download flavor from github """ target = op.join("seqcluster", "flavor") url = "https://github.com/lpantano/seqcluster.git" if not os.path.exists(target): # shutil.rmtree("seqcluster") subprocess.check_call(["git", "clone","-b", "flavor", "--single-branch", url]) return op.abspath(target)
def do_cons(self, params): """ \x1b[1mNAME\x1b[0m cons - Executes the cons four-letter command \x1b[1mSYNOPSIS\x1b[0m cons [hosts] [match] \x1b[1mOPTIONS\x1b[0m * hosts: the hosts to connect to (default: the current connected host) * match: only output lines that include the given string (default: '') \x1b[1mEXAMPLES\x1b[0m > cons /127.0.0.1:40535[0](queued=0,recved=1,sent=0) ... """ hosts = params.hosts if params.hosts != "" else None if hosts is not None and invalid_hosts(hosts): self.show_output("List of hosts has the wrong syntax.") return if self._zk is None: self._zk = XClient() try: content = get_matching(self._zk.cons(hosts), params.match) self.show_output(content) except XClient.CmdFailed as ex: self.show_output(str(ex))
\x1b[1mNAME\x1b[0m cons - Executes the cons four-letter command \x1b[1mSYNOPSIS\x1b[0m cons [hosts] [match] \x1b[1mOPTIONS\x1b[0m * hosts: the hosts to connect to (default: the current connected host) * match: only output lines that include the given string (default: '') \x1b[1mEXAMPLES\x1b[0m > cons /127.0.0.1:40535[0](queued=0,recved=1,sent=0) ...
Below is the instruction that describes the task: ### Input: \x1b[1mNAME\x1b[0m cons - Executes the cons four-letter command \x1b[1mSYNOPSIS\x1b[0m cons [hosts] [match] \x1b[1mOPTIONS\x1b[0m * hosts: the hosts to connect to (default: the current connected host) * match: only output lines that include the given string (default: '') \x1b[1mEXAMPLES\x1b[0m > cons /127.0.0.1:40535[0](queued=0,recved=1,sent=0) ... ### Response: def do_cons(self, params): """ \x1b[1mNAME\x1b[0m cons - Executes the cons four-letter command \x1b[1mSYNOPSIS\x1b[0m cons [hosts] [match] \x1b[1mOPTIONS\x1b[0m * hosts: the hosts to connect to (default: the current connected host) * match: only output lines that include the given string (default: '') \x1b[1mEXAMPLES\x1b[0m > cons /127.0.0.1:40535[0](queued=0,recved=1,sent=0) ... """ hosts = params.hosts if params.hosts != "" else None if hosts is not None and invalid_hosts(hosts): self.show_output("List of hosts has the wrong syntax.") return if self._zk is None: self._zk = XClient() try: content = get_matching(self._zk.cons(hosts), params.match) self.show_output(content) except XClient.CmdFailed as ex: self.show_output(str(ex))
def _sort_course_modes(self, modes): """ Sort the course mode dictionaries by slug according to the COURSE_MODE_SORT_ORDER constant. Arguments: modes (list): A list of course mode dictionaries. Returns: list: A list with the course modes dictionaries sorted by slug. """ def slug_weight(mode): """ Assign a weight to the course mode dictionary based on the position of its slug in the sorting list. """ sorting_slugs = COURSE_MODE_SORT_ORDER sorting_slugs_size = len(sorting_slugs) if mode['slug'] in sorting_slugs: return sorting_slugs_size - sorting_slugs.index(mode['slug']) return 0 # Sort slug weights in descending order return sorted(modes, key=slug_weight, reverse=True)
Sort the course mode dictionaries by slug according to the COURSE_MODE_SORT_ORDER constant. Arguments: modes (list): A list of course mode dictionaries. Returns: list: A list with the course modes dictionaries sorted by slug.
Below is the instruction that describes the task: ### Input: Sort the course mode dictionaries by slug according to the COURSE_MODE_SORT_ORDER constant. Arguments: modes (list): A list of course mode dictionaries. Returns: list: A list with the course modes dictionaries sorted by slug. ### Response: def _sort_course_modes(self, modes): """ Sort the course mode dictionaries by slug according to the COURSE_MODE_SORT_ORDER constant. Arguments: modes (list): A list of course mode dictionaries. Returns: list: A list with the course modes dictionaries sorted by slug. """ def slug_weight(mode): """ Assign a weight to the course mode dictionary based on the position of its slug in the sorting list. """ sorting_slugs = COURSE_MODE_SORT_ORDER sorting_slugs_size = len(sorting_slugs) if mode['slug'] in sorting_slugs: return sorting_slugs_size - sorting_slugs.index(mode['slug']) return 0 # Sort slug weights in descending order return sorted(modes, key=slug_weight, reverse=True)
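The slug-weight sort above is self-contained enough to run on its own; a minimal sketch, where the sort-order list is a hypothetical stand-in for the real COURSE_MODE_SORT_ORDER constant:

```python
# Hypothetical stand-in for COURSE_MODE_SORT_ORDER.
COURSE_MODE_SORT_ORDER = ['verified', 'professional', 'audit']

def sort_modes(modes):
    size = len(COURSE_MODE_SORT_ORDER)

    def slug_weight(mode):
        # Slugs earlier in the sort order get larger weights; unknown slugs get 0.
        if mode['slug'] in COURSE_MODE_SORT_ORDER:
            return size - COURSE_MODE_SORT_ORDER.index(mode['slug'])
        return 0

    # Descending weight puts preferred modes first; sorted() is stable on ties.
    return sorted(modes, key=slug_weight, reverse=True)

modes = [{'slug': 'audit'}, {'slug': 'verified'}, {'slug': 'honor'}]
print([m['slug'] for m in sort_modes(modes)])
```

Since `sorted` is stable, modes with equal weight (e.g. two unknown slugs) keep their original relative order.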
def _check_min(par, minval, name, unit, verb): r"""Check minimum value of parameter.""" scalar = False if par.shape == (): scalar = True par = np.atleast_1d(par) if minval is not None: ipar = np.where(par < minval) par[ipar] = minval if verb > 0 and np.size(ipar) != 0: print('* WARNING :: ' + name + ' < ' + str(minval) + ' ' + unit + ' are set to ' + str(minval) + ' ' + unit + '!') if scalar: return np.squeeze(par) else: return par
r"""Check minimum value of parameter.
Below is the instruction that describes the task: ### Input: r"""Check minimum value of parameter. ### Response: def _check_min(par, minval, name, unit, verb): r"""Check minimum value of parameter.""" scalar = False if par.shape == (): scalar = True par = np.atleast_1d(par) if minval is not None: ipar = np.where(par < minval) par[ipar] = minval if verb > 0 and np.size(ipar) != 0: print('* WARNING :: ' + name + ' < ' + str(minval) + ' ' + unit + ' are set to ' + str(minval) + ' ' + unit + '!') if scalar: return np.squeeze(par) else: return par
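Stripped of the numpy scalar handling, the clamping in _check_min reduces to an element-wise maximum; a minimal pure-Python sketch (the name/unit arguments here are illustrative):

```python
def clamp_min(values, minval, name='par', unit='m', verb=1):
    # Count how many values fall below the floor, warning like _check_min does.
    n_low = sum(1 for v in values if v < minval)
    if verb > 0 and n_low:
        print('* WARNING :: ' + name + ' < ' + str(minval) + ' ' + unit +
              ' are set to ' + str(minval) + ' ' + unit + '!')
    # Replace every value below minval with minval.
    return [max(v, minval) for v in values]

print(clamp_min([0.5, 2.0, -1.0], 1.0))
```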
def MergeMembers(self): """Add shadow group members to the group if gshadow is used. Normally group and shadow should be in sync, but no guarantees. Merges the two stores as membership in either file may confer membership. """ for group_name, members in iteritems(self.gshadow_members): group = self.entry.get(group_name) if group and group.pw_entry.store == self.shadow_store: group.members = members.union(group.members)
Add shadow group members to the group if gshadow is used. Normally group and shadow should be in sync, but no guarantees. Merges the two stores as membership in either file may confer membership.
Below is the instruction that describes the task: ### Input: Add shadow group members to the group if gshadow is used. Normally group and shadow should be in sync, but no guarantees. Merges the two stores as membership in either file may confer membership. ### Response: def MergeMembers(self): """Add shadow group members to the group if gshadow is used. Normally group and shadow should be in sync, but no guarantees. Merges the two stores as membership in either file may confer membership. """ for group_name, members in iteritems(self.gshadow_members): group = self.entry.get(group_name) if group and group.pw_entry.store == self.shadow_store: group.members = members.union(group.members)
def register(self, func): """ Register function to templates. """ if callable(func): self.functions[func.__name__] = func return func
Register function to templates.
Below is the instruction that describes the task: ### Input: Register function to templates. ### Response: def register(self, func): """ Register function to templates. """ if callable(func): self.functions[func.__name__] = func return func
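Because register returns the function it stores, it works naturally as a decorator; a standalone sketch with a module-level dict standing in for self.functions:

```python
functions = {}

def register(func):
    # Store the callable under its own name so templates can look it up,
    # then hand it back unchanged so it can act as a decorator.
    if callable(func):
        functions[func.__name__] = func
        return func

@register
def greet(name):
    return 'hello ' + name

print(functions['greet']('world'))
```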
def updateSigner(self, identifier, signer): """ Update signer for an already present identifier. The passed signer should have the same identifier as `identifier` or an error is raised. Also if the existing identifier has an alias in the wallet then the passed signer is given the same alias :param identifier: existing identifier in the wallet :param signer: new signer to update too :return: """ if identifier != signer.identifier: raise ValueError("Passed signer has identifier {} but it should" " have been {}".format(signer.identifier, identifier)) if identifier not in self.idsToSigners: raise KeyError("Identifier {} not present in wallet". format(identifier)) oldSigner = self.idsToSigners[identifier] if oldSigner.alias and oldSigner.alias in self.aliasesToIds: logger.debug('Changing alias of passed signer to {}'. format(oldSigner.alias)) signer.alias = oldSigner.alias self.idsToSigners[identifier] = signer
Update signer for an already present identifier. The passed signer should have the same identifier as `identifier` or an error is raised. Also if the existing identifier has an alias in the wallet then the passed signer is given the same alias :param identifier: existing identifier in the wallet :param signer: new signer to update too :return:
Below is the instruction that describes the task: ### Input: Update signer for an already present identifier. The passed signer should have the same identifier as `identifier` or an error is raised. Also if the existing identifier has an alias in the wallet then the passed signer is given the same alias :param identifier: existing identifier in the wallet :param signer: new signer to update too :return: ### Response: def updateSigner(self, identifier, signer): """ Update signer for an already present identifier. The passed signer should have the same identifier as `identifier` or an error is raised. Also if the existing identifier has an alias in the wallet then the passed signer is given the same alias :param identifier: existing identifier in the wallet :param signer: new signer to update too :return: """ if identifier != signer.identifier: raise ValueError("Passed signer has identifier {} but it should" " have been {}".format(signer.identifier, identifier)) if identifier not in self.idsToSigners: raise KeyError("Identifier {} not present in wallet". format(identifier)) oldSigner = self.idsToSigners[identifier] if oldSigner.alias and oldSigner.alias in self.aliasesToIds: logger.debug('Changing alias of passed signer to {}'. format(oldSigner.alias)) signer.alias = oldSigner.alias self.idsToSigners[identifier] = signer
def connect(self, version = 3, clean_session = 1, will = None): """Connect to server.""" self.clean_session = clean_session self.will = None if will is not None: self.will = NyamukMsg( topic = will['topic'], # unicode text needs to be utf8 encoded to be sent on the wire # str or bytearray are kept as it is payload = utf8encode(will.get('message','')), qos = will.get('qos', 0), retain = will.get('retain', False) ) #CONNECT packet pkt = MqttPkt() pkt.connect_build(self, self.keep_alive, clean_session, version = version) #create socket self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) if self.ssl: opts = { 'do_handshake_on_connect': True, 'ssl_version': ssl.PROTOCOL_TLSv1 } opts.update(self.ssl_opts) #print opts, self.port try: self.sock = ssl.wrap_socket(self.sock, **opts) except Exception, e: self.logger.error("failed to initiate SSL connection: {0}".format(e)) return NC.ERR_UNKNOWN nyamuk_net.setkeepalives(self.sock) self.logger.info("Connecting to server ....%s", self.server) err = nyamuk_net.connect(self.sock,(self.server, self.port)) #print self.sock.cipher() if err != None: self.logger.error(err[1]) return NC.ERR_UNKNOWN #set to nonblock self.sock.setblocking(0) return self.packet_queue(pkt)
Connect to server.
Below is the instruction that describes the task: ### Input: Connect to server. ### Response: def connect(self, version = 3, clean_session = 1, will = None): """Connect to server.""" self.clean_session = clean_session self.will = None if will is not None: self.will = NyamukMsg( topic = will['topic'], # unicode text needs to be utf8 encoded to be sent on the wire # str or bytearray are kept as it is payload = utf8encode(will.get('message','')), qos = will.get('qos', 0), retain = will.get('retain', False) ) #CONNECT packet pkt = MqttPkt() pkt.connect_build(self, self.keep_alive, clean_session, version = version) #create socket self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) if self.ssl: opts = { 'do_handshake_on_connect': True, 'ssl_version': ssl.PROTOCOL_TLSv1 } opts.update(self.ssl_opts) #print opts, self.port try: self.sock = ssl.wrap_socket(self.sock, **opts) except Exception, e: self.logger.error("failed to initiate SSL connection: {0}".format(e)) return NC.ERR_UNKNOWN nyamuk_net.setkeepalives(self.sock) self.logger.info("Connecting to server ....%s", self.server) err = nyamuk_net.connect(self.sock,(self.server, self.port)) #print self.sock.cipher() if err != None: self.logger.error(err[1]) return NC.ERR_UNKNOWN #set to nonblock self.sock.setblocking(0) return self.packet_queue(pkt)
def grant_db_access(conn, schema, table, role): r"""Gives access to database users/ groups Parameters ---------- conn : sqlalchemy connection object A valid connection to a database schema : str The database schema table : str The database table role : str database role that access is granted to """ grant_str = """GRANT ALL ON TABLE {schema}.{table} TO {role} WITH GRANT OPTION;""".format(schema=schema, table=table, role=role) conn.execute(grant_str)
r"""Gives access to database users/ groups Parameters ---------- conn : sqlalchemy connection object A valid connection to a database schema : str The database schema table : str The database table role : str database role that access is granted to
Below is the instruction that describes the task: ### Input: r"""Gives access to database users/ groups Parameters ---------- conn : sqlalchemy connection object A valid connection to a database schema : str The database schema table : str The database table role : str database role that access is granted to ### Response: def grant_db_access(conn, schema, table, role): r"""Gives access to database users/ groups Parameters ---------- conn : sqlalchemy connection object A valid connection to a database schema : str The database schema table : str The database table role : str database role that access is granted to """ grant_str = """GRANT ALL ON TABLE {schema}.{table} TO {role} WITH GRANT OPTION;""".format(schema=schema, table=table, role=role) conn.execute(grant_str)
def _r1r2_standard(self, word, vowels): """ Return the standard interpretations of the string regions R1 and R2. R1 is the region after the first non-vowel following a vowel, or is the null region at the end of the word if there is no such non-vowel. R2 is the region after the first non-vowel following a vowel in R1, or is the null region at the end of the word if there is no such non-vowel. :param word: The word whose regions R1 and R2 are determined. :type word: str or unicode :param vowels: The vowels of the respective language that are used to determine the regions R1 and R2. :type vowels: unicode :return: (r1,r2), the regions R1 and R2 for the respective word. :rtype: tuple :note: This helper method is invoked by the respective stem method of the subclasses DutchStemmer, FinnishStemmer, FrenchStemmer, GermanStemmer, ItalianStemmer, PortugueseStemmer, RomanianStemmer, and SpanishStemmer. It is not to be invoked directly! :note: A detailed description of how to define R1 and R2 can be found at http://snowball.tartarus.org/texts/r1r2.html """ r1 = "" r2 = "" for i in range(1, len(word)): if word[i] not in vowels and word[i-1] in vowels: r1 = word[i+1:] break for i in range(1, len(r1)): if r1[i] not in vowels and r1[i-1] in vowels: r2 = r1[i+1:] break return (r1, r2)
Return the standard interpretations of the string regions R1 and R2. R1 is the region after the first non-vowel following a vowel, or is the null region at the end of the word if there is no such non-vowel. R2 is the region after the first non-vowel following a vowel in R1, or is the null region at the end of the word if there is no such non-vowel. :param word: The word whose regions R1 and R2 are determined. :type word: str or unicode :param vowels: The vowels of the respective language that are used to determine the regions R1 and R2. :type vowels: unicode :return: (r1,r2), the regions R1 and R2 for the respective word. :rtype: tuple :note: This helper method is invoked by the respective stem method of the subclasses DutchStemmer, FinnishStemmer, FrenchStemmer, GermanStemmer, ItalianStemmer, PortugueseStemmer, RomanianStemmer, and SpanishStemmer. It is not to be invoked directly! :note: A detailed description of how to define R1 and R2 can be found at http://snowball.tartarus.org/texts/r1r2.html
Below is the instruction that describes the task: ### Input: Return the standard interpretations of the string regions R1 and R2. R1 is the region after the first non-vowel following a vowel, or is the null region at the end of the word if there is no such non-vowel. R2 is the region after the first non-vowel following a vowel in R1, or is the null region at the end of the word if there is no such non-vowel. :param word: The word whose regions R1 and R2 are determined. :type word: str or unicode :param vowels: The vowels of the respective language that are used to determine the regions R1 and R2. :type vowels: unicode :return: (r1,r2), the regions R1 and R2 for the respective word. :rtype: tuple :note: This helper method is invoked by the respective stem method of the subclasses DutchStemmer, FinnishStemmer, FrenchStemmer, GermanStemmer, ItalianStemmer, PortugueseStemmer, RomanianStemmer, and SpanishStemmer. It is not to be invoked directly! :note: A detailed description of how to define R1 and R2 can be found at http://snowball.tartarus.org/texts/r1r2.html ### Response: def _r1r2_standard(self, word, vowels): """ Return the standard interpretations of the string regions R1 and R2. R1 is the region after the first non-vowel following a vowel, or is the null region at the end of the word if there is no such non-vowel. R2 is the region after the first non-vowel following a vowel in R1, or is the null region at the end of the word if there is no such non-vowel. :param word: The word whose regions R1 and R2 are determined. :type word: str or unicode :param vowels: The vowels of the respective language that are used to determine the regions R1 and R2. :type vowels: unicode :return: (r1,r2), the regions R1 and R2 for the respective word. :rtype: tuple :note: This helper method is invoked by the respective stem method of the subclasses DutchStemmer, FinnishStemmer, FrenchStemmer, GermanStemmer, ItalianStemmer, PortugueseStemmer, RomanianStemmer, and SpanishStemmer.
It is not to be invoked directly! :note: A detailed description of how to define R1 and R2 can be found at http://snowball.tartarus.org/texts/r1r2.html """ r1 = "" r2 = "" for i in range(1, len(word)): if word[i] not in vowels and word[i-1] in vowels: r1 = word[i+1:] break for i in range(1, len(r1)): if r1[i] not in vowels and r1[i-1] in vowels: r2 = r1[i+1:] break return (r1, r2)
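The R1/R2 computation above can be exercised standalone; a minimal sketch using 'aeiou' as an illustrative vowel set, which reproduces the textbook example from the Snowball notes ('beautiful' gives R1 = 'iful', R2 = 'ul'):

```python
def r1r2(word, vowels='aeiou'):
    # R1: region after the first non-vowel that follows a vowel.
    # R2: the same rule applied once more, inside R1.
    r1, r2 = '', ''
    for i in range(1, len(word)):
        if word[i] not in vowels and word[i - 1] in vowels:
            r1 = word[i + 1:]
            break
    for i in range(1, len(r1)):
        if r1[i] not in vowels and r1[i - 1] in vowels:
            r2 = r1[i + 1:]
            break
    return r1, r2

print(r1r2('beautiful'))  # ('iful', 'ul')
print(r1r2('beauty'))     # ('y', '')
```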
def _process_reservations(self, reservations): """ Given a dict with the structure of a response from boto3.ec2.describe_instances(...), find the public/private ips. :param reservations: :return: """ reservations = reservations['Reservations'] private_ip_addresses = [] private_hostnames = [] public_ips = [] public_hostnames = [] for reservation in reservations: for instance in reservation['Instances']: private_ip_addresses.append(instance['PrivateIpAddress']) private_hostnames.append(instance['PrivateDnsName']) if 'PublicIpAddress' in instance: public_ips.append(instance['PublicIpAddress']) elif not self.remove_nones: public_ips.append(None) if ('PublicDnsName' in instance) & (not self.remove_nones): public_hostnames.append(instance['PublicDnsName']) elif not self.remove_nones: public_hostnames.append(None) return { 'private': { 'ips': private_ip_addresses, 'hostnames': private_hostnames }, 'public': { 'ips': public_ips, 'hostnames': public_hostnames }, 'reservations': reservations }
Given a dict with the structure of a response from boto3.ec2.describe_instances(...), find the public/private ips. :param reservations: :return:
Below is the instruction that describes the task: ### Input: Given a dict with the structure of a response from boto3.ec2.describe_instances(...), find the public/private ips. :param reservations: :return: ### Response: def _process_reservations(self, reservations): """ Given a dict with the structure of a response from boto3.ec2.describe_instances(...), find the public/private ips. :param reservations: :return: """ reservations = reservations['Reservations'] private_ip_addresses = [] private_hostnames = [] public_ips = [] public_hostnames = [] for reservation in reservations: for instance in reservation['Instances']: private_ip_addresses.append(instance['PrivateIpAddress']) private_hostnames.append(instance['PrivateDnsName']) if 'PublicIpAddress' in instance: public_ips.append(instance['PublicIpAddress']) elif not self.remove_nones: public_ips.append(None) if ('PublicDnsName' in instance) & (not self.remove_nones): public_hostnames.append(instance['PublicDnsName']) elif not self.remove_nones: public_hostnames.append(None) return { 'private': { 'ips': private_ip_addresses, 'hostnames': private_hostnames }, 'public': { 'ips': public_ips, 'hostnames': public_hostnames }, 'reservations': reservations }
def do_work_unit(self, args): '''print basic details about work units''' work_spec_name = self._get_work_spec_name(args) for work_unit_name in args.unit: status = self.task_master.get_work_unit_status(work_spec_name, work_unit_name) self.stdout.write('{0} ({1!r})\n' .format(work_unit_name, status['status'])) if 'expiration' in status: when = time.ctime(status['expiration']) if status == 'available': if status['expiration'] == 0: self.stdout.write(' Never scheduled\n') else: self.stdout.write(' Available since: {0}\n' .format(when)) else: self.stdout.write(' Expires: {0}\n'.format(when)) if 'worker_id' in status: try: heartbeat = self.task_master.get_heartbeat(status['worker_id']) except: heartbeat = None if heartbeat: hostname = (heartbeat.get('fqdn', None) or heartbeat.get('hostname', None) or '') ipaddrs = ', '.join(heartbeat.get('ipaddrs', ())) if hostname and ipaddrs: summary = '{0} on {1}'.format(hostname, ipaddrs) else: summary = hostname + ipaddrs else: summary = 'No information' self.stdout.write(' Worker: {0} ({1})\n'.format( status['worker_id'], summary)) if 'traceback' in status: self.stdout.write(' Traceback:\n{0}\n'.format( status['traceback'])) if 'depends_on' in status: self.stdout.write(' Depends on:\n') for what in status['depends_on']: self.stdout.write(' {0!r}\n'.format(what))
print basic details about work units
Below is the instruction that describes the task: ### Input: print basic details about work units ### Response: def do_work_unit(self, args): '''print basic details about work units''' work_spec_name = self._get_work_spec_name(args) for work_unit_name in args.unit: status = self.task_master.get_work_unit_status(work_spec_name, work_unit_name) self.stdout.write('{0} ({1!r})\n' .format(work_unit_name, status['status'])) if 'expiration' in status: when = time.ctime(status['expiration']) if status == 'available': if status['expiration'] == 0: self.stdout.write(' Never scheduled\n') else: self.stdout.write(' Available since: {0}\n' .format(when)) else: self.stdout.write(' Expires: {0}\n'.format(when)) if 'worker_id' in status: try: heartbeat = self.task_master.get_heartbeat(status['worker_id']) except: heartbeat = None if heartbeat: hostname = (heartbeat.get('fqdn', None) or heartbeat.get('hostname', None) or '') ipaddrs = ', '.join(heartbeat.get('ipaddrs', ())) if hostname and ipaddrs: summary = '{0} on {1}'.format(hostname, ipaddrs) else: summary = hostname + ipaddrs else: summary = 'No information' self.stdout.write(' Worker: {0} ({1})\n'.format( status['worker_id'], summary)) if 'traceback' in status: self.stdout.write(' Traceback:\n{0}\n'.format( status['traceback'])) if 'depends_on' in status: self.stdout.write(' Depends on:\n') for what in status['depends_on']: self.stdout.write(' {0!r}\n'.format(what))
def create_element_dict(self): """Convert a UNTL Python object into a UNTL Python dictionary.""" untl_dict = {} # Loop through all UNTL elements in the Python object. for element in self.children: # If an entry for the element list hasn't been made in the # dictionary, start an empty element list. if element.tag not in untl_dict: untl_dict[element.tag] = [] # Create a dictionary to put the element into. # Add any qualifier. element_dict = {} if element.qualifier is not None: element_dict['qualifier'] = element.qualifier # Add any children that have content. if len(element.contained_children) > 0: child_dict = {} for child in element.children: if child.content is not None: child_dict[child.tag] = child.content # Set the element's content as the dictionary # of children elements. element_dict['content'] = child_dict # The element has content, but no children. elif element.content is not None: element_dict['content'] = element.content # Append the dictionary element to the element list. untl_dict[element.tag].append(element_dict) return untl_dict
Convert a UNTL Python object into a UNTL Python dictionary.
Below is the instruction that describes the task: ### Input: Convert a UNTL Python object into a UNTL Python dictionary. ### Response: def create_element_dict(self): """Convert a UNTL Python object into a UNTL Python dictionary.""" untl_dict = {} # Loop through all UNTL elements in the Python object. for element in self.children: # If an entry for the element list hasn't been made in the # dictionary, start an empty element list. if element.tag not in untl_dict: untl_dict[element.tag] = [] # Create a dictionary to put the element into. # Add any qualifier. element_dict = {} if element.qualifier is not None: element_dict['qualifier'] = element.qualifier # Add any children that have content. if len(element.contained_children) > 0: child_dict = {} for child in element.children: if child.content is not None: child_dict[child.tag] = child.content # Set the element's content as the dictionary # of children elements. element_dict['content'] = child_dict # The element has content, but no children. elif element.content is not None: element_dict['content'] = element.content # Append the dictionary element to the element list. untl_dict[element.tag].append(element_dict) return untl_dict
def qrotate(vector, axis, angle): """Rotate *vector* around *axis* by *angle* (in radians). *vector* is a matrix of column vectors, as is *axis*. This function uses quaternion rotation. """ n_axis = axis / vnorm(axis) sin_angle = np.expand_dims(np.sin(angle / 2), 0) if np.ndim(n_axis) == 1: n_axis = np.expand_dims(n_axis, 1) p__ = np.dot(n_axis, sin_angle)[:, np.newaxis] else: p__ = n_axis * sin_angle q__ = Quaternion(np.cos(angle / 2), p__) shape = vector.shape return np.einsum("kj, ikj->ij", vector.reshape((3, -1)), q__.rotation_matrix()[:3, :3]).reshape(shape)
Rotate *vector* around *axis* by *angle* (in radians). *vector* is a matrix of column vectors, as is *axis*. This function uses quaternion rotation.
Below is the instruction that describes the task: ### Input: Rotate *vector* around *axis* by *angle* (in radians). *vector* is a matrix of column vectors, as is *axis*. This function uses quaternion rotation. ### Response: def qrotate(vector, axis, angle): """Rotate *vector* around *axis* by *angle* (in radians). *vector* is a matrix of column vectors, as is *axis*. This function uses quaternion rotation. """ n_axis = axis / vnorm(axis) sin_angle = np.expand_dims(np.sin(angle / 2), 0) if np.ndim(n_axis) == 1: n_axis = np.expand_dims(n_axis, 1) p__ = np.dot(n_axis, sin_angle)[:, np.newaxis] else: p__ = n_axis * sin_angle q__ = Quaternion(np.cos(angle / 2), p__) shape = vector.shape return np.einsum("kj, ikj->ij", vector.reshape((3, -1)), q__.rotation_matrix()[:3, :3]).reshape(shape)
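For a single vector, the quaternion rotation underlying qrotate can be written without the matrix machinery; a minimal pure-Python sketch that builds q = (cos(a/2), n·sin(a/2)) and computes v' = q v q* with the Hamilton product:

```python
import math

def quat_mul(q1, q2):
    # Hamilton product of two quaternions (w, x, y, z).
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def quat_rotate(v, axis, angle):
    # Normalize the axis, build the rotation quaternion, then conjugate:
    # v' = q * (0, v) * q_conj.
    norm = math.sqrt(sum(a * a for a in axis))
    s, c = math.sin(angle / 2), math.cos(angle / 2)
    q = (c, axis[0] / norm * s, axis[1] / norm * s, axis[2] / norm * s)
    q_conj = (q[0], -q[1], -q[2], -q[3])
    _, x, y, z = quat_mul(quat_mul(q, (0.0,) + tuple(v)), q_conj)
    return (x, y, z)

# Rotating the x unit vector 90 degrees around z yields the y unit vector.
print(quat_rotate((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), math.pi / 2))
```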
def generate(self, where): """ where can be a : dict,list,tuple,string """ if where is None: return None q = self.table.queryables() try: return Expr(where, queryables=q, encoding=self.table.encoding) except NameError: # raise a nice message, suggesting that the user should use # data_columns raise ValueError( "The passed where expression: {0}\n" " contains an invalid variable reference\n" " all of the variable references must be a " "reference to\n" " an axis (e.g. 'index' or 'columns'), or a " "data_column\n" " The currently defined references are: {1}\n" .format(where, ','.join(q.keys())) )
where can be a : dict,list,tuple,string
Below is the instruction that describes the task: ### Input: where can be a : dict,list,tuple,string ### Response: def generate(self, where): """ where can be a : dict,list,tuple,string """ if where is None: return None q = self.table.queryables() try: return Expr(where, queryables=q, encoding=self.table.encoding) except NameError: # raise a nice message, suggesting that the user should use # data_columns raise ValueError( "The passed where expression: {0}\n" " contains an invalid variable reference\n" " all of the variable references must be a " "reference to\n" " an axis (e.g. 'index' or 'columns'), or a " "data_column\n" " The currently defined references are: {1}\n" .format(where, ','.join(q.keys())) )
def compare_root_path(path_cost1, path_cost2, bridge_id1, bridge_id2, port_id1, port_id2): """ Decide the port of the side near a root bridge. It is compared by the following priorities. 1. root path cost 2. designated bridge ID value 3. designated port ID value """ result = Stp._cmp_value(path_cost1, path_cost2) if not result: result = Stp._cmp_value(bridge_id1, bridge_id2) if not result: result = Stp._cmp_value(port_id1, port_id2) return result
Decide the port of the side near a root bridge. It is compared by the following priorities. 1. root path cost 2. designated bridge ID value 3. designated port ID value
Below is the instruction that describes the task: ### Input: Decide the port of the side near a root bridge. It is compared by the following priorities. 1. root path cost 2. designated bridge ID value 3. designated port ID value ### Response: def compare_root_path(path_cost1, path_cost2, bridge_id1, bridge_id2, port_id1, port_id2): """ Decide the port of the side near a root bridge. It is compared by the following priorities. 1. root path cost 2. designated bridge ID value 3. designated port ID value """ result = Stp._cmp_value(path_cost1, path_cost2) if not result: result = Stp._cmp_value(bridge_id1, bridge_id2) if not result: result = Stp._cmp_value(port_id1, port_id2) return result
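Assuming Stp._cmp_value is a three-way comparison (negative/zero/positive), the priority chain above collapses to an or-expression that falls through only on ties; a standalone sketch:

```python
def cmp_value(a, b):
    # Three-way comparison: -1 if a < b, 0 if equal, 1 if a > b
    # (a stand-in for the Stp._cmp_value helper, assumed to behave this way).
    return (a > b) - (a < b)

def compare_root_path(cost1, cost2, bridge1, bridge2, port1, port2):
    # Compare by root path cost first, then designated bridge ID,
    # then designated port ID; `or` skips to the next level only on a tie.
    return (cmp_value(cost1, cost2)
            or cmp_value(bridge1, bridge2)
            or cmp_value(port1, port2))
```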
def extend_children(children): """ extend_children(children) Returns a set containing nearest conditionally stochastic (Stochastic, not Deterministic) descendants. """ new_children = copy(children) need_recursion = False dtrm_children = set() for child in children: if isinstance(child, Deterministic): new_children |= child.children dtrm_children.add(child) need_recursion = True new_children -= dtrm_children if need_recursion: new_children = extend_children(new_children) return new_children
extend_children(children) Returns a set containing nearest conditionally stochastic (Stochastic, not Deterministic) descendants.
Below is the instruction that describes the task: ### Input: extend_children(children) Returns a set containing nearest conditionally stochastic (Stochastic, not Deterministic) descendants. ### Response: def extend_children(children): """ extend_children(children) Returns a set containing nearest conditionally stochastic (Stochastic, not Deterministic) descendants. """ new_children = copy(children) need_recursion = False dtrm_children = set() for child in children: if isinstance(child, Deterministic): new_children |= child.children dtrm_children.add(child) need_recursion = True new_children -= dtrm_children if need_recursion: new_children = extend_children(new_children) return new_children
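A runnable sketch of the record above, using minimal stand-in node classes; the real `Stochastic`/`Deterministic` come from PyMC and are much richer than these hypothetical stubs:

```python
from copy import copy

class Stochastic:
    # Minimal stand-in for pymc's Stochastic: just a name and a child set.
    def __init__(self, name, children=()):
        self.name = name
        self.children = set(children)

class Deterministic(Stochastic):
    # Stand-in for pymc's Deterministic; only isinstance() checks matter here.
    pass

def extend_children(children):
    new_children = copy(children)
    need_recursion = False
    dtrm_children = set()
    for child in children:
        if isinstance(child, Deterministic):
            new_children |= child.children
            dtrm_children.add(child)
            need_recursion = True
    new_children -= dtrm_children
    if need_recursion:
        new_children = extend_children(new_children)
    return new_children

# Chain d (deterministic) -> b (stochastic): extending {d} skips d
# and lands on the nearest stochastic descendant b.
b = Stochastic('b')
d = Deterministic('d', children={b})
result = extend_children({d})
print(sorted(node.name for node in result))  # ['b']
```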
def update(self, d): """ This function makes an update according to the provided target and the last used input vector. **Args:** * `d` : target (float or 1-dimensional array). Size depends on number of MLP outputs. **Returns:** * `e` : error used for update (float or 1-dimensional array). Size corresponds to size of input `d`. """ # update output layer e = d - self.y error = np.copy(e) if self.outputs == 1: dw = self.mu * e * self.x w = np.copy(self.w)[1:] else: dw = self.mu * np.outer(e, self.x) w = np.copy(self.w)[:,1:] self.w += dw # update hidden layers for l in reversed(self.layers): w, e = l.update(w, e) return error
This function makes an update according to the provided target and the last used input vector. **Args:** * `d` : target (float or 1-dimensional array). Size depends on number of MLP outputs. **Returns:** * `e` : error used for update (float or 1-dimensional array). Size corresponds to size of input `d`.
Below is the instruction that describes the task: ### Input: This function makes an update according to the provided target and the last used input vector. **Args:** * `d` : target (float or 1-dimensional array). Size depends on number of MLP outputs. **Returns:** * `e` : error used for update (float or 1-dimensional array). Size corresponds to size of input `d`. ### Response: def update(self, d): """ This function makes an update according to the provided target and the last used input vector. **Args:** * `d` : target (float or 1-dimensional array). Size depends on number of MLP outputs. **Returns:** * `e` : error used for update (float or 1-dimensional array). Size corresponds to size of input `d`. """ # update output layer e = d - self.y error = np.copy(e) if self.outputs == 1: dw = self.mu * e * self.x w = np.copy(self.w)[1:] else: dw = self.mu * np.outer(e, self.x) w = np.copy(self.w)[:,1:] self.w += dw # update hidden layers for l in reversed(self.layers): w, e = l.update(w, e) return error
def experiments_predictions_create(self, experiment_id, model_id, argument_defs, name, arguments=None, properties=None): """Create new model run for given experiment. Parameters ---------- experiment_id : string Unique experiment identifier model_id : string Unique identifier of model to run name : string User-provided name for the model run argument_defs : list(attribute.AttributeDefinition) Definition of valid arguments for the given model arguments : list(dict('name':...,'value:...')), optional List of attribute instances properties : Dictionary, optional Set of model run properties. Returns ------- ModelRunHandle Handle for created model run or None if experiment is unknown """ # Get experiment to ensure that it exists if self.experiments_get(experiment_id) is None: return None # Return created model run return self.predictions.create_object( name, experiment_id, model_id, argument_defs, arguments=arguments, properties=properties )
Create new model run for given experiment. Parameters ---------- experiment_id : string Unique experiment identifier model_id : string Unique identifier of model to run name : string User-provided name for the model run argument_defs : list(attribute.AttributeDefinition) Definition of valid arguments for the given model arguments : list(dict('name':...,'value:...')), optional List of attribute instances properties : Dictionary, optional Set of model run properties. Returns ------- ModelRunHandle Handle for created model run or None if experiment is unknown
Below is the instruction that describes the task: ### Input: Create new model run for given experiment. Parameters ---------- experiment_id : string Unique experiment identifier model_id : string Unique identifier of model to run name : string User-provided name for the model run argument_defs : list(attribute.AttributeDefinition) Definition of valid arguments for the given model arguments : list(dict('name':...,'value:...')), optional List of attribute instances properties : Dictionary, optional Set of model run properties. Returns ------- ModelRunHandle Handle for created model run or None if experiment is unknown ### Response: def experiments_predictions_create(self, experiment_id, model_id, argument_defs, name, arguments=None, properties=None): """Create new model run for given experiment. Parameters ---------- experiment_id : string Unique experiment identifier model_id : string Unique identifier of model to run name : string User-provided name for the model run argument_defs : list(attribute.AttributeDefinition) Definition of valid arguments for the given model arguments : list(dict('name':...,'value:...')), optional List of attribute instances properties : Dictionary, optional Set of model run properties. Returns ------- ModelRunHandle Handle for created model run or None if experiment is unknown """ # Get experiment to ensure that it exists if self.experiments_get(experiment_id) is None: return None # Return created model run return self.predictions.create_object( name, experiment_id, model_id, argument_defs, arguments=arguments, properties=properties )
def get_intercom_data(self): """Specify the data sent to Intercom API according to event type""" data = { "event_name": self.get_type_display(), # event type "created_at": calendar.timegm(self.created.utctimetuple()), # date "metadata": self.metadata } if self.user: data["user_id"] = self.user.intercom_id return data
Specify the data sent to Intercom API according to event type
Below is the instruction that describes the task: ### Input: Specify the data sent to Intercom API according to event type ### Response: def get_intercom_data(self): """Specify the data sent to Intercom API according to event type""" data = { "event_name": self.get_type_display(), # event type "created_at": calendar.timegm(self.created.utctimetuple()), # date "metadata": self.metadata } if self.user: data["user_id"] = self.user.intercom_id return data
def _update_url_map(self): ''' Assemble any dynamic or configurable URLs ''' if HAS_WEBSOCKETS: self.url_map.update({ 'ws': WebsocketEndpoint, }) # Allow the Webhook URL to be overridden from the conf. self.url_map.update({ self.apiopts.get('webhook_url', 'hook').lstrip('/'): Webhook, }) # Enable the single-page JS app URL. self.url_map.update({ self.apiopts.get('app_path', 'app').lstrip('/'): App, })
Assemble any dynamic or configurable URLs
Below is the instruction that describes the task: ### Input: Assemble any dynamic or configurable URLs ### Response: def _update_url_map(self): ''' Assemble any dynamic or configurable URLs ''' if HAS_WEBSOCKETS: self.url_map.update({ 'ws': WebsocketEndpoint, }) # Allow the Webhook URL to be overridden from the conf. self.url_map.update({ self.apiopts.get('webhook_url', 'hook').lstrip('/'): Webhook, }) # Enable the single-page JS app URL. self.url_map.update({ self.apiopts.get('app_path', 'app').lstrip('/'): App, })
def __valueKeyWithHeaderIndex(self, values): """ This is a helper function, so that we can match decision values with row index as represented in header index. Args: values (dict): Normally this will have dict of header values and values from decision Return: >>> return() { values[headerName] : int(headerName index in header array), ... } """ matchingIndexes = {} for index, name in enumerate(self.header): if name in values: matchingIndexes[index] = values[name] return matchingIndexes
This is a helper function, so that we can match decision values with row index as represented in header index. Args: values (dict): Normally this will have dict of header values and values from decision Return: >>> return() { values[headerName] : int(headerName index in header array), ... }
Below is the instruction that describes the task: ### Input: This is a helper function, so that we can match decision values with row index as represented in header index. Args: values (dict): Normally this will have dict of header values and values from decision Return: >>> return() { values[headerName] : int(headerName index in header array), ... } ### Response: def __valueKeyWithHeaderIndex(self, values): """ This is a helper function, so that we can match decision values with row index as represented in header index. Args: values (dict): Normally this will have dict of header values and values from decision Return: >>> return() { values[headerName] : int(headerName index in header array), ... } """ matchingIndexes = {} for index, name in enumerate(self.header): if name in values: matchingIndexes[index] = values[name] return matchingIndexes
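The helper in the record above only needs `self.header`, so its mapping logic can be illustrated as a standalone function; the function name and sample header below are hypothetical:

```python
def value_key_with_header_index(header, values):
    # Same logic as the method above, with the header passed in explicitly
    # instead of read from self.header.
    matching_indexes = {}
    for index, name in enumerate(header):
        if name in values:
            matching_indexes[index] = values[name]
    return matching_indexes

header = ['age', 'income', 'approved']  # hypothetical decision-table header
print(value_key_with_header_index(header, {'income': 50000, 'approved': 'yes'}))
# {1: 50000, 2: 'yes'}
```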
def libvlc_media_list_insert_media(p_ml, p_md, i_pos): '''Insert media instance in media list on a position The L{libvlc_media_list_lock} should be held upon entering this function. @param p_ml: a media list instance. @param p_md: a media instance. @param i_pos: position in array where to insert. @return: 0 on success, -1 if the media list is read-only. ''' f = _Cfunctions.get('libvlc_media_list_insert_media', None) or \ _Cfunction('libvlc_media_list_insert_media', ((1,), (1,), (1,),), None, ctypes.c_int, MediaList, Media, ctypes.c_int) return f(p_ml, p_md, i_pos)
Insert media instance in media list on a position The L{libvlc_media_list_lock} should be held upon entering this function. @param p_ml: a media list instance. @param p_md: a media instance. @param i_pos: position in array where to insert. @return: 0 on success, -1 if the media list is read-only.
Below is the instruction that describes the task: ### Input: Insert media instance in media list on a position The L{libvlc_media_list_lock} should be held upon entering this function. @param p_ml: a media list instance. @param p_md: a media instance. @param i_pos: position in array where to insert. @return: 0 on success, -1 if the media list is read-only. ### Response: def libvlc_media_list_insert_media(p_ml, p_md, i_pos): '''Insert media instance in media list on a position The L{libvlc_media_list_lock} should be held upon entering this function. @param p_ml: a media list instance. @param p_md: a media instance. @param i_pos: position in array where to insert. @return: 0 on success, -1 if the media list is read-only. ''' f = _Cfunctions.get('libvlc_media_list_insert_media', None) or \ _Cfunction('libvlc_media_list_insert_media', ((1,), (1,), (1,),), None, ctypes.c_int, MediaList, Media, ctypes.c_int) return f(p_ml, p_md, i_pos)
def delegators_count(self, account): """ Get number of delegators for a specific representative **account** .. version 8.0 required :param account: Account to get number of delegators for :type account: str :raises: :py:exc:`nano.rpc.RPCException` >>> rpc.delegators_count( ... account="xrb_1111111111111111111111111111111111111111111111111117353trpda" ... ) 2 """ account = self._process_value(account, 'account') payload = {"account": account} resp = self.call('delegators_count', payload) return int(resp['count'])
Get number of delegators for a specific representative **account** .. version 8.0 required :param account: Account to get number of delegators for :type account: str :raises: :py:exc:`nano.rpc.RPCException` >>> rpc.delegators_count( ... account="xrb_1111111111111111111111111111111111111111111111111117353trpda" ... ) 2
Below is the instruction that describes the task: ### Input: Get number of delegators for a specific representative **account** .. version 8.0 required :param account: Account to get number of delegators for :type account: str :raises: :py:exc:`nano.rpc.RPCException` >>> rpc.delegators_count( ... account="xrb_1111111111111111111111111111111111111111111111111117353trpda" ... ) 2 ### Response: def delegators_count(self, account): """ Get number of delegators for a specific representative **account** .. version 8.0 required :param account: Account to get number of delegators for :type account: str :raises: :py:exc:`nano.rpc.RPCException` >>> rpc.delegators_count( ... account="xrb_1111111111111111111111111111111111111111111111111117353trpda" ... ) 2 """ account = self._process_value(account, 'account') payload = {"account": account} resp = self.call('delegators_count', payload) return int(resp['count'])
def select_release(self, highest_allowed_release): """ Select the newest release that is not newer than the given release. :param highest_allowed_release: The identifier of the release that sets the upper bound for the selection (a string). :returns: The selected release object. :raises: :exc:`~vcs_repo_mgr.exceptions.NoMatchingReleasesError` when no matching releases are found. """ matching_releases = [] highest_allowed_key = natsort_key(highest_allowed_release) for release in self.ordered_releases: release_key = natsort_key(release.identifier) if release_key <= highest_allowed_key: matching_releases.append(release) if not matching_releases: msg = "No releases below or equal to %r found in repository!" raise NoMatchingReleasesError(msg % highest_allowed_release) return matching_releases[-1]
Select the newest release that is not newer than the given release. :param highest_allowed_release: The identifier of the release that sets the upper bound for the selection (a string). :returns: The selected release object. :raises: :exc:`~vcs_repo_mgr.exceptions.NoMatchingReleasesError` when no matching releases are found.
Below is the instruction that describes the task: ### Input: Select the newest release that is not newer than the given release. :param highest_allowed_release: The identifier of the release that sets the upper bound for the selection (a string). :returns: The selected release object. :raises: :exc:`~vcs_repo_mgr.exceptions.NoMatchingReleasesError` when no matching releases are found. ### Response: def select_release(self, highest_allowed_release): """ Select the newest release that is not newer than the given release. :param highest_allowed_release: The identifier of the release that sets the upper bound for the selection (a string). :returns: The selected release object. :raises: :exc:`~vcs_repo_mgr.exceptions.NoMatchingReleasesError` when no matching releases are found. """ matching_releases = [] highest_allowed_key = natsort_key(highest_allowed_release) for release in self.ordered_releases: release_key = natsort_key(release.identifier) if release_key <= highest_allowed_key: matching_releases.append(release) if not matching_releases: msg = "No releases below or equal to %r found in repository!" raise NoMatchingReleasesError(msg % highest_allowed_release) return matching_releases[-1]
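The selection logic in the record above can be sketched standalone. The `natsort_key` below is a simplified stand-in for the real natural-sort helper, and plain strings replace the release objects of the original (it assumes dotted numeric identifiers):

```python
import re

def natsort_key(s):
    # Simplified natural-sort key: digit runs compare numerically, so
    # '1.10' sorts after '1.9'. Assumes dotted numeric release identifiers.
    return [int(part) if part.isdigit() else part
            for part in re.split(r'(\d+)', s)]

def select_release(releases, highest_allowed_release):
    matching = [r for r in releases
                if natsort_key(r) <= natsort_key(highest_allowed_release)]
    if not matching:
        raise ValueError("No releases below or equal to %r found!"
                         % highest_allowed_release)
    return max(matching, key=natsort_key)

print(select_release(['1.9', '1.10', '2.0'], '1.11'))  # 1.10
```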
def class_details(self, title=None): """ Generates the class details. :param title: optional title :type title: str :return: the details :rtype: str """ if title is None: return javabridge.call( self.jobject, "toClassDetailsString", "()Ljava/lang/String;") else: return javabridge.call( self.jobject, "toClassDetailsString", "(Ljava/lang/String;)Ljava/lang/String;", title)
Generates the class details. :param title: optional title :type title: str :return: the details :rtype: str
Below is the instruction that describes the task: ### Input: Generates the class details. :param title: optional title :type title: str :return: the details :rtype: str ### Response: def class_details(self, title=None): """ Generates the class details. :param title: optional title :type title: str :return: the details :rtype: str """ if title is None: return javabridge.call( self.jobject, "toClassDetailsString", "()Ljava/lang/String;") else: return javabridge.call( self.jobject, "toClassDetailsString", "(Ljava/lang/String;)Ljava/lang/String;", title)
def Reynolds_factor(FL, C, d, Rev, full_trim=True): r'''Calculates the Reynolds number factor `FR` for a valve with a Reynolds number `Rev`, diameter `d`, flow coefficient `C`, liquid pressure recovery factor `FL`, and with either full or reduced trim, all according to IEC 60534 calculations. If full trim: .. math:: F_{R,1a} = 1 + \left(\frac{0.33F_L^{0.5}}{n_1^{0.25}}\right)\log_{10} \left(\frac{Re_v}{10000}\right) .. math:: F_{R,2} = \min(\frac{0.026}{F_L}\sqrt{n_1 Re_v},\; 1) .. math:: n_1 = \frac{N_2}{\left(\frac{C}{d^2}\right)^2} .. math:: F_R = F_{R,2} \text{ if Rev < 10 else } \min(F_{R,1a}, F_{R,2}) Otherwise : .. math:: F_{R,3a} = 1 + \left(\frac{0.33F_L^{0.5}}{n_2^{0.25}}\right)\log_{10} \left(\frac{Re_v}{10000}\right) .. math:: F_{R,4} = \frac{0.026}{F_L}\sqrt{n_2 Re_v} .. math:: n_2 = 1 + N_{32}\left(\frac{C}{d}\right)^{2/3} .. math:: F_R = F_{R,4} \text{ if Rev < 10 else } \min(F_{R,3a}, F_{R,4}) Parameters ---------- FL : float Liquid pressure recovery factor of a control valve without attached fittings [] C : float Metric Kv valve flow coefficient (flow rate of water at a pressure drop of 1 bar) [m^3/hr] d : float Diameter of the valve [m] Rev : float Valve reynolds number [-] full_trim : bool Whether or not the valve has full trim Returns ------- FR : float Reynolds number factor for laminar or transitional flow [] Examples -------- In Example 4, compressible flow with small flow trim sized for gas flow (Cv in the problem was converted to Kv here to make FR match with N32, N2): >>> Reynolds_factor(FL=0.98, C=0.015483, d=15., Rev=1202., full_trim=False) 0.7148753122302025 References ---------- .. [1] IEC 60534-2-1 / ISA-75.01.01-2007 ''' if full_trim: n1 = N2/(min(C/d**2, 0.04))**2 # C/d**2 must not exceed 0.04 FR_1a = 1 + (0.33*FL**0.5)/n1**0.25*log10(Rev/10000.) FR_2 = 0.026/FL*(n1*Rev)**0.5 if Rev < 10: FR = FR_2 else: FR = min(FR_2, FR_1a) else: n2 = 1 + N32*(C/d**2)**(2/3.) FR_3a = 1 + (0.33*FL**0.5)/n2**0.25*log10(Rev/10000.) 
FR_4 = min(0.026/FL*(n2*Rev)**0.5, 1) if Rev < 10: FR = FR_4 else: FR = min(FR_3a, FR_4) return FR
r'''Calculates the Reynolds number factor `FR` for a valve with a Reynolds number `Rev`, diameter `d`, flow coefficient `C`, liquid pressure recovery factor `FL`, and with either full or reduced trim, all according to IEC 60534 calculations. If full trim: .. math:: F_{R,1a} = 1 + \left(\frac{0.33F_L^{0.5}}{n_1^{0.25}}\right)\log_{10} \left(\frac{Re_v}{10000}\right) .. math:: F_{R,2} = \min(\frac{0.026}{F_L}\sqrt{n_1 Re_v},\; 1) .. math:: n_1 = \frac{N_2}{\left(\frac{C}{d^2}\right)^2} .. math:: F_R = F_{R,2} \text{ if Rev < 10 else } \min(F_{R,1a}, F_{R,2}) Otherwise : .. math:: F_{R,3a} = 1 + \left(\frac{0.33F_L^{0.5}}{n_2^{0.25}}\right)\log_{10} \left(\frac{Re_v}{10000}\right) .. math:: F_{R,4} = \frac{0.026}{F_L}\sqrt{n_2 Re_v} .. math:: n_2 = 1 + N_{32}\left(\frac{C}{d}\right)^{2/3} .. math:: F_R = F_{R,4} \text{ if Rev < 10 else } \min(F_{R,3a}, F_{R,4}) Parameters ---------- FL : float Liquid pressure recovery factor of a control valve without attached fittings [] C : float Metric Kv valve flow coefficient (flow rate of water at a pressure drop of 1 bar) [m^3/hr] d : float Diameter of the valve [m] Rev : float Valve reynolds number [-] full_trim : bool Whether or not the valve has full trim Returns ------- FR : float Reynolds number factor for laminar or transitional flow [] Examples -------- In Example 4, compressible flow with small flow trim sized for gas flow (Cv in the problem was converted to Kv here to make FR match with N32, N2): >>> Reynolds_factor(FL=0.98, C=0.015483, d=15., Rev=1202., full_trim=False) 0.7148753122302025 References ---------- .. [1] IEC 60534-2-1 / ISA-75.01.01-2007
Below is the instruction that describes the task: ### Input: r'''Calculates the Reynolds number factor `FR` for a valve with a Reynolds number `Rev`, diameter `d`, flow coefficient `C`, liquid pressure recovery factor `FL`, and with either full or reduced trim, all according to IEC 60534 calculations. If full trim: .. math:: F_{R,1a} = 1 + \left(\frac{0.33F_L^{0.5}}{n_1^{0.25}}\right)\log_{10} \left(\frac{Re_v}{10000}\right) .. math:: F_{R,2} = \min(\frac{0.026}{F_L}\sqrt{n_1 Re_v},\; 1) .. math:: n_1 = \frac{N_2}{\left(\frac{C}{d^2}\right)^2} .. math:: F_R = F_{R,2} \text{ if Rev < 10 else } \min(F_{R,1a}, F_{R,2}) Otherwise : .. math:: F_{R,3a} = 1 + \left(\frac{0.33F_L^{0.5}}{n_2^{0.25}}\right)\log_{10} \left(\frac{Re_v}{10000}\right) .. math:: F_{R,4} = \frac{0.026}{F_L}\sqrt{n_2 Re_v} .. math:: n_2 = 1 + N_{32}\left(\frac{C}{d}\right)^{2/3} .. math:: F_R = F_{R,4} \text{ if Rev < 10 else } \min(F_{R,3a}, F_{R,4}) Parameters ---------- FL : float Liquid pressure recovery factor of a control valve without attached fittings [] C : float Metric Kv valve flow coefficient (flow rate of water at a pressure drop of 1 bar) [m^3/hr] d : float Diameter of the valve [m] Rev : float Valve reynolds number [-] full_trim : bool Whether or not the valve has full trim Returns ------- FR : float Reynolds number factor for laminar or transitional flow [] Examples -------- In Example 4, compressible flow with small flow trim sized for gas flow (Cv in the problem was converted to Kv here to make FR match with N32, N2): >>> Reynolds_factor(FL=0.98, C=0.015483, d=15., Rev=1202., full_trim=False) 0.7148753122302025 References ---------- .. [1] IEC 60534-2-1 / ISA-75.01.01-2007 ### Response: def Reynolds_factor(FL, C, d, Rev, full_trim=True): r'''Calculates the Reynolds number factor `FR` for a valve with a Reynolds number `Rev`, diameter `d`, flow coefficient `C`, liquid pressure recovery factor `FL`, and with either full or reduced trim, all according to IEC 60534 calculations.
If full trim: .. math:: F_{R,1a} = 1 + \left(\frac{0.33F_L^{0.5}}{n_1^{0.25}}\right)\log_{10} \left(\frac{Re_v}{10000}\right) .. math:: F_{R,2} = \min(\frac{0.026}{F_L}\sqrt{n_1 Re_v},\; 1) .. math:: n_1 = \frac{N_2}{\left(\frac{C}{d^2}\right)^2} .. math:: F_R = F_{R,2} \text{ if Rev < 10 else } \min(F_{R,1a}, F_{R,2}) Otherwise : .. math:: F_{R,3a} = 1 + \left(\frac{0.33F_L^{0.5}}{n_2^{0.25}}\right)\log_{10} \left(\frac{Re_v}{10000}\right) .. math:: F_{R,4} = \frac{0.026}{F_L}\sqrt{n_2 Re_v} .. math:: n_2 = 1 + N_{32}\left(\frac{C}{d}\right)^{2/3} .. math:: F_R = F_{R,4} \text{ if Rev < 10 else } \min(F_{R,3a}, F_{R,4}) Parameters ---------- FL : float Liquid pressure recovery factor of a control valve without attached fittings [] C : float Metric Kv valve flow coefficient (flow rate of water at a pressure drop of 1 bar) [m^3/hr] d : float Diameter of the valve [m] Rev : float Valve reynolds number [-] full_trim : bool Whether or not the valve has full trim Returns ------- FR : float Reynolds number factor for laminar or transitional flow [] Examples -------- In Example 4, compressible flow with small flow trim sized for gas flow (Cv in the problem was converted to Kv here to make FR match with N32, N2): >>> Reynolds_factor(FL=0.98, C=0.015483, d=15., Rev=1202., full_trim=False) 0.7148753122302025 References ---------- .. [1] IEC 60534-2-1 / ISA-75.01.01-2007 ''' if full_trim: n1 = N2/(min(C/d**2, 0.04))**2 # C/d**2 must not exceed 0.04 FR_1a = 1 + (0.33*FL**0.5)/n1**0.25*log10(Rev/10000.) FR_2 = 0.026/FL*(n1*Rev)**0.5 if Rev < 10: FR = FR_2 else: FR = min(FR_2, FR_1a) else: n2 = 1 + N32*(C/d**2)**(2/3.) FR_3a = 1 + (0.33*FL**0.5)/n2**0.25*log10(Rev/10000.) FR_4 = min(0.026/FL*(n2*Rev)**0.5, 1) if Rev < 10: FR = FR_4 else: FR = min(FR_3a, FR_4) return FR
def receive(self): ''' Return the message received and the address. ''' try: msg, addr = self.skt.recvfrom(self.buffer_size) except socket.error as error: log.error('Received listener socket error: %s', error, exc_info=True) raise ListenerException(error) log.debug('[%s] Received %s from %s', msg, addr, time.time()) return msg, addr[0]
Return the message received and the address.
Below is the instruction that describes the task: ### Input: Return the message received and the address. ### Response: def receive(self): ''' Return the message received and the address. ''' try: msg, addr = self.skt.recvfrom(self.buffer_size) except socket.error as error: log.error('Received listener socket error: %s', error, exc_info=True) raise ListenerException(error) log.debug('[%s] Received %s from %s', msg, addr, time.time()) return msg, addr[0]
def do_WhoHasRequest(self, apdu): """Respond to a Who-Has request.""" if _debug: WhoHasIHaveServices._debug("do_WhoHasRequest, %r", apdu) # ignore this if there's no local device if not self.localDevice: if _debug: WhoIsIAmServices._debug(" - no local device") return # if this has limits, check them like Who-Is if apdu.limits is not None: # extract the parameters low_limit = apdu.limits.deviceInstanceRangeLowLimit high_limit = apdu.limits.deviceInstanceRangeHighLimit # check for consistent parameters if (low_limit is None): raise MissingRequiredParameter("deviceInstanceRangeLowLimit required") if (low_limit < 0) or (low_limit > 4194303): raise ParameterOutOfRange("deviceInstanceRangeLowLimit out of range") if (high_limit is None): raise MissingRequiredParameter("deviceInstanceRangeHighLimit required") if (high_limit < 0) or (high_limit > 4194303): raise ParameterOutOfRange("deviceInstanceRangeHighLimit out of range") # see we should respond if (self.localDevice.objectIdentifier[1] < low_limit): return if (self.localDevice.objectIdentifier[1] > high_limit): return # find the object if apdu.object.objectIdentifier is not None: obj = self.objectIdentifier.get(apdu.object.objectIdentifier, None) elif apdu.object.objectName is not None: obj = self.objectName.get(apdu.object.objectName, None) else: raise InconsistentParameters("object identifier or object name required") # maybe we don't have it if not obj: return # send out the response self.i_have(obj, address=apdu.pduSource)
Respond to a Who-Has request.
Below is the instruction that describes the task: ### Input: Respond to a Who-Has request. ### Response: def do_WhoHasRequest(self, apdu): """Respond to a Who-Has request.""" if _debug: WhoHasIHaveServices._debug("do_WhoHasRequest, %r", apdu) # ignore this if there's no local device if not self.localDevice: if _debug: WhoIsIAmServices._debug(" - no local device") return # if this has limits, check them like Who-Is if apdu.limits is not None: # extract the parameters low_limit = apdu.limits.deviceInstanceRangeLowLimit high_limit = apdu.limits.deviceInstanceRangeHighLimit # check for consistent parameters if (low_limit is None): raise MissingRequiredParameter("deviceInstanceRangeLowLimit required") if (low_limit < 0) or (low_limit > 4194303): raise ParameterOutOfRange("deviceInstanceRangeLowLimit out of range") if (high_limit is None): raise MissingRequiredParameter("deviceInstanceRangeHighLimit required") if (high_limit < 0) or (high_limit > 4194303): raise ParameterOutOfRange("deviceInstanceRangeHighLimit out of range") # see we should respond if (self.localDevice.objectIdentifier[1] < low_limit): return if (self.localDevice.objectIdentifier[1] > high_limit): return # find the object if apdu.object.objectIdentifier is not None: obj = self.objectIdentifier.get(apdu.object.objectIdentifier, None) elif apdu.object.objectName is not None: obj = self.objectName.get(apdu.object.objectName, None) else: raise InconsistentParameters("object identifier or object name required") # maybe we don't have it if not obj: return # send out the response self.i_have(obj, address=apdu.pduSource)
def transform(graph): '''core transform function for graph. ''' graphs = [] for _ in range(Constant.N_NEIGHBOURS * 2): random_num = randrange(3) temp_graph = None if random_num == 0: temp_graph = to_deeper_graph(deepcopy(graph)) elif random_num == 1: temp_graph = to_wider_graph(deepcopy(graph)) elif random_num == 2: temp_graph = to_skip_connection_graph(deepcopy(graph)) if temp_graph is not None and temp_graph.size() <= Constant.MAX_MODEL_SIZE: graphs.append(temp_graph) if len(graphs) >= Constant.N_NEIGHBOURS: break return graphs
core transform function for graph.
Below is the instruction that describes the task: ### Input: core transform function for graph. ### Response: def transform(graph): '''core transform function for graph. ''' graphs = [] for _ in range(Constant.N_NEIGHBOURS * 2): random_num = randrange(3) temp_graph = None if random_num == 0: temp_graph = to_deeper_graph(deepcopy(graph)) elif random_num == 1: temp_graph = to_wider_graph(deepcopy(graph)) elif random_num == 2: temp_graph = to_skip_connection_graph(deepcopy(graph)) if temp_graph is not None and temp_graph.size() <= Constant.MAX_MODEL_SIZE: graphs.append(temp_graph) if len(graphs) >= Constant.N_NEIGHBOURS: break return graphs
def delete(self, timeout=-1, custom_headers=None, force=False): """Deletes current resource. Args: timeout: Timeout in seconds. custom_headers: Allows to set custom http headers. force: Flag to force the operation. """ uri = self.data['uri'] logger.debug("Delete resource (uri = %s)" % (str(uri))) return self._helper.delete(uri, timeout=timeout, custom_headers=custom_headers, force=force)
Deletes current resource. Args: timeout: Timeout in seconds. custom_headers: Allows to set custom http headers. force: Flag to force the operation.
Below is the instruction that describes the task: ### Input: Deletes current resource. Args: timeout: Timeout in seconds. custom_headers: Allows to set custom http headers. force: Flag to force the operation. ### Response: def delete(self, timeout=-1, custom_headers=None, force=False): """Deletes current resource. Args: timeout: Timeout in seconds. custom_headers: Allows to set custom http headers. force: Flag to force the operation. """ uri = self.data['uri'] logger.debug("Delete resource (uri = %s)" % (str(uri))) return self._helper.delete(uri, timeout=timeout, custom_headers=custom_headers, force=force)
def QA_SU_save_future_day_all(engine, client=DATABASE): """save future_day_all Arguments: engine {[type]} -- [description] Keyword Arguments: client {[type]} -- [description] (default: {DATABASE}) """ engine = select_save_engine(engine) engine.QA_SU_save_future_day_all(client=client)
save future_day_all Arguments: engine {[type]} -- [description] Keyword Arguments: client {[type]} -- [description] (default: {DATABASE})
Below is the instruction that describes the task: ### Input: save future_day_all Arguments: engine {[type]} -- [description] Keyword Arguments: client {[type]} -- [description] (default: {DATABASE}) ### Response: def QA_SU_save_future_day_all(engine, client=DATABASE): """save future_day_all Arguments: engine {[type]} -- [description] Keyword Arguments: client {[type]} -- [description] (default: {DATABASE}) """ engine = select_save_engine(engine) engine.QA_SU_save_future_day_all(client=client)
def angular_distance_fast(ra1, dec1, ra2, dec2): """ Compute angular distance using the Haversine formula. Use this one when you know you will never ask for points at their antipodes. If this is not the case, use the angular_distance function which is slower, but works also for antipodes. :param ra1: :param dec1: :param ra2: :param dec2: :return: """ lon1 = np.deg2rad(ra1) lat1 = np.deg2rad(dec1) lon2 = np.deg2rad(ra2) lat2 = np.deg2rad(dec2) dlon = lon2 - lon1 dlat = lat2 - lat1 a = np.sin(dlat/2.0)**2 + np.cos(lat1) * np.cos(lat2) * np.sin(dlon /2.0)**2 c = 2 * np.arcsin(np.sqrt(a)) return np.rad2deg(c)
Compute angular distance using the Haversine formula. Use this one when you know you will never ask for points at their antipodes. If this is not the case, use the angular_distance function which is slower, but works also for antipodes. :param ra1: :param dec1: :param ra2: :param dec2: :return:
Below is the instruction that describes the task: ### Input: Compute angular distance using the Haversine formula. Use this one when you know you will never ask for points at their antipodes. If this is not the case, use the angular_distance function, which is slower but also works for antipodes. :param ra1: :param dec1: :param ra2: :param dec2: :return: ### Response: def angular_distance_fast(ra1, dec1, ra2, dec2): """ Compute angular distance using the Haversine formula. Use this one when you know you will never ask for points at their antipodes. If this is not the case, use the angular_distance function, which is slower but also works for antipodes. :param ra1: :param dec1: :param ra2: :param dec2: :return: """ lon1 = np.deg2rad(ra1) lat1 = np.deg2rad(dec1) lon2 = np.deg2rad(ra2) lat2 = np.deg2rad(dec2) dlon = lon2 - lon1 dlat = lat2 - lat1 a = np.sin(dlat/2.0)**2 + np.cos(lat1) * np.cos(lat2) * np.sin(dlon/2.0)**2 c = 2 * np.arcsin(np.sqrt(a)) return np.rad2deg(c)
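To sanity-check the Haversine record above, here is a minimal self-contained run. The test points (equator to pole along one meridian, and a pair of identical points) are illustrative choices, not from the original source:

```python
import numpy as np

def angular_distance_fast(ra1, dec1, ra2, dec2):
    # Haversine formula; all angles in degrees. Not reliable at exact antipodes.
    lon1, lat1 = np.deg2rad(ra1), np.deg2rad(dec1)
    lon2, lat2 = np.deg2rad(ra2), np.deg2rad(dec2)
    dlon, dlat = lon2 - lon1, lat2 - lat1
    a = np.sin(dlat / 2.0) ** 2 + np.cos(lat1) * np.cos(lat2) * np.sin(dlon / 2.0) ** 2
    return np.rad2deg(2 * np.arcsin(np.sqrt(a)))

# Equator to pole along one meridian is a quarter circle: ~90 degrees
print(angular_distance_fast(0.0, 0.0, 0.0, 90.0))
```

Because the formula works on half-angle sines, it stays numerically stable for small separations, which is the reason it is preferred over the naive arccos of a dot product.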
def linter_functions_from_filters(whitelist=None, blacklist=None): """Yield tuples of _LinterFunction matching whitelist but not blacklist.""" def _keyvalue_pair_if(dictionary, condition): """Return a key-value pair in dictionary if condition matched.""" return { k: v for (k, v) in dictionary.items() if condition(k) } def _check_list(check_list, cond): """Return function testing against a list if the list exists.""" def _check_against_list(key): """Return true if list exists and condition passes.""" return cond(check_list, key) if check_list is not None else True return _check_against_list linter_functions = LINTER_FUNCTIONS linter_functions = _keyvalue_pair_if(linter_functions, _check_list(whitelist, lambda l, k: k in l)) linter_functions = _keyvalue_pair_if(linter_functions, _check_list(blacklist, lambda l, k: k not in l)) for code, linter_function in linter_functions.items(): yield (code, linter_function)
Yield tuples of _LinterFunction matching whitelist but not blacklist.
Below is the the instruction that describes the task: ### Input: Yield tuples of _LinterFunction matching whitelist but not blacklist. ### Response: def linter_functions_from_filters(whitelist=None, blacklist=None): """Yield tuples of _LinterFunction matching whitelist but not blacklist.""" def _keyvalue_pair_if(dictionary, condition): """Return a key-value pair in dictionary if condition matched.""" return { k: v for (k, v) in dictionary.items() if condition(k) } def _check_list(check_list, cond): """Return function testing against a list if the list exists.""" def _check_against_list(key): """Return true if list exists and condition passes.""" return cond(check_list, key) if check_list is not None else True return _check_against_list linter_functions = LINTER_FUNCTIONS linter_functions = _keyvalue_pair_if(linter_functions, _check_list(whitelist, lambda l, k: k in l)) linter_functions = _keyvalue_pair_if(linter_functions, _check_list(blacklist, lambda l, k: k not in l)) for code, linter_function in linter_functions.items(): yield (code, linter_function)
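The whitelist/blacklist filtering in the record above can be exercised with a stand-in registry. The codes and checker callables below are invented for illustration, and the generator is a simplified sketch of the same semantics (whitelist narrows the set of codes, blacklist then removes from it):

```python
# Stand-in registry; the real LINTER_FUNCTIONS maps lint codes to checker callables.
LINTER_FUNCTIONS = {
    "D100": lambda text: "missing docstring" in text,
    "E201": lambda text: "  " in text,
    "W101": lambda text: text.endswith(" "),
}

def linter_functions_from_filters(whitelist=None, blacklist=None):
    """Yield (code, function) pairs matching whitelist but not blacklist."""
    for code, fn in LINTER_FUNCTIONS.items():
        if whitelist is not None and code not in whitelist:
            continue
        if blacklist is not None and code in blacklist:
            continue
        yield code, fn

codes = sorted(c for c, _ in linter_functions_from_filters(
    whitelist={"D100", "E201"}, blacklist={"E201"}))
print(codes)  # -> ['D100']
```

A `None` filter means "no restriction", which is why the original builds the membership checks lazily rather than defaulting the lists to empty collections.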
def clone(self, data=None, shared_data=True, new_type=None, link=True, *args, **overrides): """Clones the object, overriding data and parameters. Args: data: New data replacing the existing data shared_data (bool, optional): Whether to use existing data new_type (optional): Type to cast object to link (bool, optional): Whether clone should be linked Determines whether Streams and Links attached to original object will be inherited. *args: Additional arguments to pass to constructor **overrides: New keyword arguments to pass to constructor Returns: Cloned object """ settings = dict(self.get_param_values()) if settings.get('group', None) != self._group: settings.pop('group') if settings.get('label', None) != self._label: settings.pop('label') if new_type is None: clone_type = self.__class__ else: clone_type = new_type new_params = new_type.params() settings = {k: v for k, v in settings.items() if k in new_params} settings = dict(settings, **overrides) if 'id' not in settings and new_type in [type(self), None]: settings['id'] = self.id if data is None and shared_data: data = self.data if link: settings['plot_id'] = self._plot_id # Apply name mangling for __ attribute pos_args = getattr(self, '_' + type(self).__name__ + '__pos_params', []) with item_check(not shared_data and self._check_items): return clone_type(data, *args, **{k:v for k,v in settings.items() if k not in pos_args})
Clones the object, overriding data and parameters. Args: data: New data replacing the existing data shared_data (bool, optional): Whether to use existing data new_type (optional): Type to cast object to link (bool, optional): Whether clone should be linked Determines whether Streams and Links attached to original object will be inherited. *args: Additional arguments to pass to constructor **overrides: New keyword arguments to pass to constructor Returns: Cloned object
Below is the the instruction that describes the task: ### Input: Clones the object, overriding data and parameters. Args: data: New data replacing the existing data shared_data (bool, optional): Whether to use existing data new_type (optional): Type to cast object to link (bool, optional): Whether clone should be linked Determines whether Streams and Links attached to original object will be inherited. *args: Additional arguments to pass to constructor **overrides: New keyword arguments to pass to constructor Returns: Cloned object ### Response: def clone(self, data=None, shared_data=True, new_type=None, link=True, *args, **overrides): """Clones the object, overriding data and parameters. Args: data: New data replacing the existing data shared_data (bool, optional): Whether to use existing data new_type (optional): Type to cast object to link (bool, optional): Whether clone should be linked Determines whether Streams and Links attached to original object will be inherited. *args: Additional arguments to pass to constructor **overrides: New keyword arguments to pass to constructor Returns: Cloned object """ settings = dict(self.get_param_values()) if settings.get('group', None) != self._group: settings.pop('group') if settings.get('label', None) != self._label: settings.pop('label') if new_type is None: clone_type = self.__class__ else: clone_type = new_type new_params = new_type.params() settings = {k: v for k, v in settings.items() if k in new_params} settings = dict(settings, **overrides) if 'id' not in settings and new_type in [type(self), None]: settings['id'] = self.id if data is None and shared_data: data = self.data if link: settings['plot_id'] = self._plot_id # Apply name mangling for __ attribute pos_args = getattr(self, '_' + type(self).__name__ + '__pos_params', []) with item_check(not shared_data and self._check_items): return clone_type(data, *args, **{k:v for k,v in settings.items() if k not in pos_args})
def get_table_info(*table_names): """Yield (table_name, interface, fields) tuples, mapping each table_name to the Interface that table exists in :param *table_names: the tables you are searching for """ if table_names: for table_name in table_names: for name, inter in get_interfaces().items(): if inter.has_table(table_name): yield table_name, inter, inter.get_fields(table_name) else: for name, inter in get_interfaces().items(): table_names = inter.get_tables() for table_name in table_names: yield table_name, inter, inter.get_fields(table_name)
Yield (table_name, interface, fields) tuples, mapping each table_name to the Interface that table exists in :param *table_names: the tables you are searching for
Below is the instruction that describes the task: ### Input: Yield (table_name, interface, fields) tuples, mapping each table_name to the Interface that table exists in :param *table_names: the tables you are searching for ### Response: def get_table_info(*table_names): """Yield (table_name, interface, fields) tuples, mapping each table_name to the Interface that table exists in :param *table_names: the tables you are searching for """ if table_names: for table_name in table_names: for name, inter in get_interfaces().items(): if inter.has_table(table_name): yield table_name, inter, inter.get_fields(table_name) else: for name, inter in get_interfaces().items(): table_names = inter.get_tables() for table_name in table_names: yield table_name, inter, inter.get_fields(table_name)
def all_edges(self, node): """ Returns the set of incoming and outgoing edges. """ return set(self.inc_edges(node) + self.out_edges(node))
Returns the set of incoming and outgoing edges.
Below is the instruction that describes the task: ### Input: Returns the set of incoming and outgoing edges. ### Response: def all_edges(self, node): """ Returns the set of incoming and outgoing edges. """ return set(self.inc_edges(node) + self.out_edges(node))
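A throwaway graph stub makes the set union in the `all_edges` record above concrete. The edge-list representation and the `inc_edges`/`out_edges` implementations here are assumptions for illustration, not the original library's:

```python
class TinyGraph:
    def __init__(self, edges):
        self.edges = list(edges)  # (src, dst) pairs

    def inc_edges(self, node):
        return [e for e in self.edges if e[1] == node]

    def out_edges(self, node):
        return [e for e in self.edges if e[0] == node]

    def all_edges(self, node):
        # set() collapses a self-loop, which appears in both lists
        return set(self.inc_edges(node) + self.out_edges(node))

g = TinyGraph([("a", "b"), ("b", "c"), ("b", "b")])
print(sorted(g.all_edges("b")))  # -> [('a', 'b'), ('b', 'b'), ('b', 'c')]
```

The union (rather than plain concatenation) matters exactly when a node has a self-loop: that edge is both incoming and outgoing, and the set keeps a single copy.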
def delete(self, file_id): """Delete a file from GridFS by ``"_id"``. Removes all data belonging to the file with ``"_id"``: `file_id`. .. warning:: Any processes/threads reading from the file while this method is executing will likely see an invalid/corrupt file. Care should be taken to avoid concurrent reads to a file while it is being deleted. :Parameters: - `file_id`: ``"_id"`` of the file to delete .. versionadded:: 1.6 """ return defer.DeferredList([ self.__files.remove({"_id": file_id}, safe=True), self.__chunks.remove({"files_id": file_id}) ])
Delete a file from GridFS by ``"_id"``. Removes all data belonging to the file with ``"_id"``: `file_id`. .. warning:: Any processes/threads reading from the file while this method is executing will likely see an invalid/corrupt file. Care should be taken to avoid concurrent reads to a file while it is being deleted. :Parameters: - `file_id`: ``"_id"`` of the file to delete .. versionadded:: 1.6
Below is the the instruction that describes the task: ### Input: Delete a file from GridFS by ``"_id"``. Removes all data belonging to the file with ``"_id"``: `file_id`. .. warning:: Any processes/threads reading from the file while this method is executing will likely see an invalid/corrupt file. Care should be taken to avoid concurrent reads to a file while it is being deleted. :Parameters: - `file_id`: ``"_id"`` of the file to delete .. versionadded:: 1.6 ### Response: def delete(self, file_id): """Delete a file from GridFS by ``"_id"``. Removes all data belonging to the file with ``"_id"``: `file_id`. .. warning:: Any processes/threads reading from the file while this method is executing will likely see an invalid/corrupt file. Care should be taken to avoid concurrent reads to a file while it is being deleted. :Parameters: - `file_id`: ``"_id"`` of the file to delete .. versionadded:: 1.6 """ return defer.DeferredList([ self.__files.remove({"_id": file_id}, safe=True), self.__chunks.remove({"files_id": file_id}) ])
def _discover_models(self): """ Return a dict containing a list of cassandra.cqlengine.Model classes within installed App. """ apps = get_installed_apps() connection = self.connection.connection.alias keyspace = self.connection.connection.keyspace for app in apps: self._cql_models[app.__name__] = get_cql_models( app, connection=connection, keyspace=keyspace)
Return a dict containing a list of cassandra.cqlengine.Model classes within installed App.
Below is the the instruction that describes the task: ### Input: Return a dict containing a list of cassandra.cqlengine.Model classes within installed App. ### Response: def _discover_models(self): """ Return a dict containing a list of cassandra.cqlengine.Model classes within installed App. """ apps = get_installed_apps() connection = self.connection.connection.alias keyspace = self.connection.connection.keyspace for app in apps: self._cql_models[app.__name__] = get_cql_models( app, connection=connection, keyspace=keyspace)
def _route(self, mapper): """ Set up the route(s) corresponding to the limit. This controls which limits are checked against the request. :param mapper: The routes.Mapper object to add the route to. """ # Build up the keyword arguments to feed to connect() kwargs = dict(conditions=dict(function=self._filter)) # Restrict the verbs if self.verbs: kwargs['conditions']['method'] = self.verbs # Add requirements, if provided if self.requirements: kwargs['requirements'] = self.requirements # Hook to allow subclasses to override arguments to connect() uri = self.route(self.uri, kwargs) # Create the route mapper.connect(None, uri, **kwargs)
Set up the route(s) corresponding to the limit. This controls which limits are checked against the request. :param mapper: The routes.Mapper object to add the route to.
Below is the the instruction that describes the task: ### Input: Set up the route(s) corresponding to the limit. This controls which limits are checked against the request. :param mapper: The routes.Mapper object to add the route to. ### Response: def _route(self, mapper): """ Set up the route(s) corresponding to the limit. This controls which limits are checked against the request. :param mapper: The routes.Mapper object to add the route to. """ # Build up the keyword arguments to feed to connect() kwargs = dict(conditions=dict(function=self._filter)) # Restrict the verbs if self.verbs: kwargs['conditions']['method'] = self.verbs # Add requirements, if provided if self.requirements: kwargs['requirements'] = self.requirements # Hook to allow subclasses to override arguments to connect() uri = self.route(self.uri, kwargs) # Create the route mapper.connect(None, uri, **kwargs)
def find_optimal_allocation(self, tokens): """ Finds the longest, non-overlapping ranges of tokens that match phrases stored in the TokenTrie :param tokens: the tokens to search :type tokens: list of str :return: Optimal allocation of tokens to phrases :rtype: list of TokenTrie.Token """ token_ranges = self.find_tracked_words(tokens) token_ranges.sort() for offset in range(1, len(token_ranges)): to_be_removed = [] for candidate in token_ranges[offset:]: for i in range(offset): if token_ranges[i].overlaps_with(candidate): to_be_removed.append(candidate) break token_ranges = [token for token in token_ranges if token not in to_be_removed] token_ranges.sort(key=lambda token: token.get_start_index()) return token_ranges
Finds the longest, non-overlapping ranges of tokens that match phrases stored in the TokenTrie :param tokens: the tokens to search :type tokens: list of str :return: Optimal allocation of tokens to phrases :rtype: list of TokenTrie.Token
Below is the instruction that describes the task: ### Input: Finds the longest, non-overlapping ranges of tokens that match phrases stored in the TokenTrie :param tokens: the tokens to search :type tokens: list of str :return: Optimal allocation of tokens to phrases :rtype: list of TokenTrie.Token ### Response: def find_optimal_allocation(self, tokens): """ Finds the longest, non-overlapping ranges of tokens that match phrases stored in the TokenTrie :param tokens: the tokens to search :type tokens: list of str :return: Optimal allocation of tokens to phrases :rtype: list of TokenTrie.Token """ token_ranges = self.find_tracked_words(tokens) token_ranges.sort() for offset in range(1, len(token_ranges)): to_be_removed = [] for candidate in token_ranges[offset:]: for i in range(offset): if token_ranges[i].overlaps_with(candidate): to_be_removed.append(candidate) break token_ranges = [token for token in token_ranges if token not in to_be_removed] token_ranges.sort(key=lambda token: token.get_start_index()) return token_ranges
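The allocation strategy in the record above — visit ranges in priority order, drop any later range that overlaps an already-kept one, then re-sort by start index — can be sketched with plain (start, end) tuples standing in for TokenTrie.Token. The Token comparison order is not shown in the source, so longest-first (ties broken leftmost) is an assumption here:

```python
def overlaps(a, b):
    # Half-open token ranges [start, end)
    return a[0] < b[1] and b[0] < a[1]

def find_optimal_allocation(ranges):
    """Greedy sketch: keep ranges in priority order (longest first, then
    leftmost), dropping any range that overlaps an already-kept one, and
    return the survivors ordered by start index."""
    ordered = sorted(ranges, key=lambda r: (-(r[1] - r[0]), r[0]))
    kept = []
    for cand in ordered:
        if not any(overlaps(cand, k) for k in kept):
            kept.append(cand)
    return sorted(kept)

# (1, 4) is the longest range, so both shorter overlapping ranges are dropped
print(find_optimal_allocation([(0, 2), (1, 4), (3, 5)]))  # -> [(1, 4)]
```

Note this greedy pass maximizes priority, not total coverage: in the example above, keeping (0, 2) and (3, 5) would cover more tokens, but the longest single match wins.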
def rank_genes_groups( adata, groupby, use_raw=True, groups: Union[str, Iterable[str]] = 'all', reference='rest', n_genes=100, rankby_abs=False, key_added=None, copy=False, method='t-test_overestim_var', corr_method='benjamini-hochberg', log_transformed=True, **kwds ): """Rank genes for characterizing groups. Parameters ---------- adata : :class:`~anndata.AnnData` Annotated data matrix. groupby : `str` The key of the observations grouping to consider. use_raw : `bool`, optional (default: `True`) Use `raw` attribute of `adata` if present. groups Subset of groups, e.g. `['g1', 'g2', 'g3']`, to which comparison shall be restricted, or `'all'` (default), for all groups. reference : `str`, optional (default: `'rest'`) If `'rest'`, compare each group to the union of the rest of the group. If a group identifier, compare with respect to this group. n_genes : `int`, optional (default: 100) The number of genes that appear in the returned tables. method : `{'logreg', 't-test', 'wilcoxon', 't-test_overestim_var'}`, optional (default: 't-test_overestim_var') If 't-test', uses t-test, if 'wilcoxon', uses Wilcoxon-Rank-Sum. If 't-test_overestim_var', overestimates variance of each group. If 'logreg' uses logistic regression, see [Ntranos18]_, `here <https://github.com/theislab/scanpy/issues/95>`__ and `here <http://www.nxn.se/valent/2018/3/5/actionable-scrna-seq-clusters>`__, for why this is meaningful. corr_method : `{'benjamini-hochberg', 'bonferroni'}`, optional (default: 'benjamini-hochberg') p-value correction method. Used only for 't-test', 't-test_overestim_var', and 'wilcoxon' methods. rankby_abs : `bool`, optional (default: `False`) Rank genes by the absolute value of the score, not by the score. The returned scores are never the absolute values. **kwds : keyword parameters Are passed to test methods. 
Currently this affects only parameters that are passed to `sklearn.linear_model.LogisticRegression <http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html>`__. For instance, you can pass `penalty='l1'` to try to come up with a minimal set of genes that are good predictors (sparse solution meaning few non-zero fitted coefficients). Returns ------- **names** : structured `np.ndarray` (`.uns['rank_genes_groups']`) Structured array to be indexed by group id storing the gene names. Ordered according to scores. **scores** : structured `np.ndarray` (`.uns['rank_genes_groups']`) Structured array to be indexed by group id storing the z-score underlying the computation of a p-value for each gene for each group. Ordered according to scores. **logfoldchanges** : structured `np.ndarray` (`.uns['rank_genes_groups']`) Structured array to be indexed by group id storing the log2 fold change for each gene for each group. Ordered according to scores. Only provided if method is 't-test' like. Note: this is an approximation calculated from mean-log values. **pvals** : structured `np.ndarray` (`.uns['rank_genes_groups']`) p-values. **pvals_adj** : structured `np.ndarray` (`.uns['rank_genes_groups']`) Corrected p-values. Notes ----- There are slight inconsistencies depending on whether sparse or dense data are passed. See `here <https://github.com/theislab/scanpy/blob/master/scanpy/tests/test_rank_genes_groups.py>`__. 
Examples -------- >>> adata = sc.datasets.pbmc68k_reduced() >>> sc.tl.rank_genes_groups(adata, 'bulk_labels', method='wilcoxon') # to visualize the results >>> sc.pl.rank_genes_groups(adata) """ if 'only_positive' in kwds: rankby_abs = not kwds.pop('only_positive') # backwards compat logg.info('ranking genes', r=True) avail_methods = {'t-test', 't-test_overestim_var', 'wilcoxon', 'logreg'} if method not in avail_methods: raise ValueError('Method must be one of {}.'.format(avail_methods)) avail_corr = {'benjamini-hochberg', 'bonferroni'} if corr_method not in avail_corr: raise ValueError('Correction method must be one of {}.'.format(avail_corr)) adata = adata.copy() if copy else adata utils.sanitize_anndata(adata) # for clarity, rename variable groups_order = groups if isinstance(groups, str) else list(groups) if isinstance(groups_order, list) and isinstance(groups_order[0], int): groups_order = [str(n) for n in groups_order] if reference != 'rest' and reference not in set(groups_order): groups_order += [reference] if (reference != 'rest' and reference not in set(adata.obs[groupby].cat.categories)): raise ValueError('reference = {} needs to be one of groupby = {}.' 
.format(reference, adata.obs[groupby].cat.categories.tolist())) groups_order, groups_masks = utils.select_groups( adata, groups_order, groupby) if key_added is None: key_added = 'rank_genes_groups' adata.uns[key_added] = {} adata.uns[key_added]['params'] = { 'groupby': groupby, 'reference': reference, 'method': method, 'use_raw': use_raw, 'corr_method': corr_method, } # adata_comp mocks an AnnData object if use_raw is True # otherwise it's just the AnnData object adata_comp = adata if adata.raw is not None and use_raw: adata_comp = adata.raw X = adata_comp.X # for clarity, rename variable n_genes_user = n_genes # make sure indices are not OoB in case there are less genes than n_genes if n_genes_user > X.shape[1]: n_genes_user = X.shape[1] # in the following, n_genes is simply another name for the total number of genes n_genes = X.shape[1] n_groups = groups_masks.shape[0] ns = np.zeros(n_groups, dtype=int) for imask, mask in enumerate(groups_masks): ns[imask] = np.where(mask)[0].size logg.msg('consider \'{}\' groups:'.format(groupby), groups_order, v=4) logg.msg('with sizes:', ns, v=4) if reference != 'rest': ireference = np.where(groups_order == reference)[0][0] reference_indices = np.arange(adata_comp.n_vars, dtype=int) rankings_gene_scores = [] rankings_gene_names = [] rankings_gene_logfoldchanges = [] rankings_gene_pvals = [] rankings_gene_pvals_adj = [] if method in {'t-test', 't-test_overestim_var'}: from scipy import stats from statsmodels.stats.multitest import multipletests # loop over all masks and compute means, variances and sample numbers means = np.zeros((n_groups, n_genes)) vars = np.zeros((n_groups, n_genes)) for imask, mask in enumerate(groups_masks): means[imask], vars[imask] = _get_mean_var(X[mask]) # test each either against the union of all other groups or against a # specific group for igroup in range(n_groups): if reference == 'rest': mask_rest = ~groups_masks[igroup] else: if igroup == ireference: continue else: mask_rest = 
groups_masks[ireference] mean_group, var_group = means[igroup], vars[igroup] mean_rest, var_rest = _get_mean_var(X[mask_rest]) ns_group = ns[igroup] # number of observations in group if method == 't-test': ns_rest = np.where(mask_rest)[0].size elif method == 't-test_overestim_var': ns_rest = ns[igroup] # hack for overestimating the variance for small groups else: raise ValueError('Method does not exist.') scores, pvals = stats.ttest_ind_from_stats( mean1=mean_group, std1=np.sqrt(var_group), nobs1=ns_group, mean2=mean_rest, std2=np.sqrt(var_rest), nobs2=ns_rest, equal_var=False # Welch's ) # Fold change foldchanges = (np.expm1(mean_group) + 1e-9) / (np.expm1(mean_rest) + 1e-9) # add small value to remove 0's scores[np.isnan(scores)] = 0 # I think it's only nan when means are the same and vars are 0 pvals[np.isnan(pvals)] = 1 # This also has to happen for Benjamini Hochberg if corr_method == 'benjamini-hochberg': _, pvals_adj, _, _ = multipletests(pvals, alpha=0.05, method='fdr_bh') elif corr_method == 'bonferroni': pvals_adj = np.minimum(pvals * n_genes, 1.0) scores_sort = np.abs(scores) if rankby_abs else scores partition = np.argpartition(scores_sort, -n_genes_user)[-n_genes_user:] partial_indices = np.argsort(scores_sort[partition])[::-1] global_indices = reference_indices[partition][partial_indices] rankings_gene_scores.append(scores[global_indices]) rankings_gene_logfoldchanges.append(np.log2(foldchanges[global_indices])) rankings_gene_names.append(adata_comp.var_names[global_indices]) rankings_gene_pvals.append(pvals[global_indices]) rankings_gene_pvals_adj.append(pvals_adj[global_indices]) elif method == 'logreg': # if reference is not set, then the groups listed will be compared to the rest # if reference is set, then the groups listed will be compared only to the other groups listed from sklearn.linear_model import LogisticRegression reference = groups_order[0] if len(groups) == 1: raise Exception('Cannot perform logistic regression on a single cluster.') 
adata_copy = adata[adata.obs[groupby].isin(groups_order)] adata_comp = adata_copy if adata.raw is not None and use_raw: adata_comp = adata_copy.raw X = adata_comp.X clf = LogisticRegression(**kwds) clf.fit(X, adata_copy.obs[groupby].cat.codes) scores_all = clf.coef_ for igroup, group in enumerate(groups_order): if len(groups_order) <= 2: # binary logistic regression scores = scores_all[0] else: scores = scores_all[igroup] partition = np.argpartition(scores, -n_genes_user)[-n_genes_user:] partial_indices = np.argsort(scores[partition])[::-1] global_indices = reference_indices[partition][partial_indices] rankings_gene_scores.append(scores[global_indices]) rankings_gene_names.append(adata_comp.var_names[global_indices]) if len(groups_order) <= 2: break elif method == 'wilcoxon': from scipy import stats from statsmodels.stats.multitest import multipletests CONST_MAX_SIZE = 10000000 means = np.zeros((n_groups, n_genes)) vars = np.zeros((n_groups, n_genes)) # initialize space for z-scores scores = np.zeros(n_genes) # First loop: Loop over all genes if reference != 'rest': for imask, mask in enumerate(groups_masks): means[imask], vars[imask] = _get_mean_var(X[mask]) # for fold-change only if imask == ireference: continue else: mask_rest = groups_masks[ireference] ns_rest = np.where(mask_rest)[0].size mean_rest, var_rest = _get_mean_var(X[mask_rest]) # for fold-change only if ns_rest <= 25 or ns[imask] <= 25: logg.hint('Few observations in a group for ' 'normal approximation (<=25). 
Lower test accuracy.') n_active = ns[imask] m_active = ns_rest # Now calculate gene expression ranking in chunkes: chunk = [] # Calculate chunk frames n_genes_max_chunk = floor(CONST_MAX_SIZE / (n_active + m_active)) if n_genes_max_chunk < n_genes: chunk_index = n_genes_max_chunk while chunk_index < n_genes: chunk.append(chunk_index) chunk_index = chunk_index + n_genes_max_chunk chunk.append(n_genes) else: chunk.append(n_genes) left = 0 # Calculate rank sums for each chunk for the current mask for chunk_index, right in enumerate(chunk): # Check if issparse is true: AnnData objects are currently sparse.csr or ndarray. if issparse(X): df1 = pd.DataFrame(data=X[mask, left:right].todense()) df2 = pd.DataFrame(data=X[mask_rest, left:right].todense(), index=np.arange(start=n_active, stop=n_active + m_active)) else: df1 = pd.DataFrame(data=X[mask, left:right]) df2 = pd.DataFrame(data=X[mask_rest, left:right], index=np.arange(start=n_active, stop=n_active + m_active)) df1 = df1.append(df2) ranks = df1.rank() # sum up adjusted_ranks to calculate W_m,n scores[left:right] = np.sum(ranks.loc[0:n_active, :]) left = right scores = (scores - (n_active * (n_active + m_active + 1) / 2)) / sqrt( (n_active * m_active * (n_active + m_active + 1) / 12)) scores[np.isnan(scores)] = 0 pvals = 2 * stats.distributions.norm.sf(np.abs(scores)) if corr_method == 'benjamini-hochberg': pvals[np.isnan(pvals)] = 1 # set Nan values to 1 to properly convert using Benhjamini Hochberg _, pvals_adj, _, _ = multipletests(pvals, alpha=0.05, method='fdr_bh') elif corr_method == 'bonferroni': pvals_adj = np.minimum(pvals * n_genes, 1.0) # Fold change foldchanges = (np.expm1(means[imask]) + 1e-9) / (np.expm1(mean_rest) + 1e-9) # add small value to remove 0's scores_sort = np.abs(scores) if rankby_abs else scores partition = np.argpartition(scores_sort, -n_genes_user)[-n_genes_user:] partial_indices = np.argsort(scores_sort[partition])[::-1] global_indices = reference_indices[partition][partial_indices] 
rankings_gene_scores.append(scores[global_indices]) rankings_gene_names.append(adata_comp.var_names[global_indices]) rankings_gene_logfoldchanges.append(np.log2(foldchanges[global_indices])) rankings_gene_pvals.append(pvals[global_indices]) rankings_gene_pvals_adj.append(pvals_adj[global_indices]) # If no reference group exists, ranking needs only to be done once (full mask) else: scores = np.zeros((n_groups, n_genes)) chunk = [] n_cells = X.shape[0] n_genes_max_chunk = floor(CONST_MAX_SIZE / n_cells) if n_genes_max_chunk < n_genes: chunk_index = n_genes_max_chunk while chunk_index < n_genes: chunk.append(chunk_index) chunk_index = chunk_index + n_genes_max_chunk chunk.append(n_genes) else: chunk.append(n_genes) left = 0 for chunk_index, right in enumerate(chunk): # Check if issparse is true if issparse(X): df1 = pd.DataFrame(data=X[:, left:right].todense()) else: df1 = pd.DataFrame(data=X[:, left:right]) ranks = df1.rank() # sum up adjusted_ranks to calculate W_m,n for imask, mask in enumerate(groups_masks): scores[imask, left:right] = np.sum(ranks.loc[mask, :]) left = right for imask, mask in enumerate(groups_masks): mask_rest = ~groups_masks[imask] means[imask], vars[imask] = _get_mean_var(X[mask]) #for fold-change mean_rest, var_rest = _get_mean_var(X[mask_rest]) # for fold-change scores[imask, :] = (scores[imask, :] - (ns[imask] * (n_cells + 1) / 2)) / sqrt( (ns[imask] * (n_cells - ns[imask]) * (n_cells + 1) / 12)) scores[np.isnan(scores)] = 0 pvals = 2 * stats.distributions.norm.sf(np.abs(scores[imask,:])) if corr_method == 'benjamini-hochberg': pvals[np.isnan(pvals)] = 1 # set Nan values to 1 to properly convert using Benhjamini Hochberg _, pvals_adj, _, _ = multipletests(pvals, alpha=0.05, method='fdr_bh') elif corr_method == 'bonferroni': pvals_adj = np.minimum(pvals * n_genes, 1.0) # Fold change foldchanges = (np.expm1(means[imask]) + 1e-9) / (np.expm1(mean_rest) + 1e-9) # add small value to remove 0's scores_sort = np.abs(scores) if rankby_abs else 
scores partition = np.argpartition(scores_sort[imask, :], -n_genes_user)[-n_genes_user:] partial_indices = np.argsort(scores_sort[imask, partition])[::-1] global_indices = reference_indices[partition][partial_indices] rankings_gene_scores.append(scores[imask, global_indices]) rankings_gene_names.append(adata_comp.var_names[global_indices]) rankings_gene_logfoldchanges.append(np.log2(foldchanges[global_indices])) rankings_gene_pvals.append(pvals[global_indices]) rankings_gene_pvals_adj.append(pvals_adj[global_indices]) groups_order_save = [str(g) for g in groups_order] if (reference != 'rest' and method != 'logreg') or (method == 'logreg' and len(groups) == 2): groups_order_save = [g for g in groups_order if g != reference] adata.uns[key_added]['scores'] = np.rec.fromarrays( [n for n in rankings_gene_scores], dtype=[(rn, 'float32') for rn in groups_order_save]) adata.uns[key_added]['names'] = np.rec.fromarrays( [n for n in rankings_gene_names], dtype=[(rn, 'U50') for rn in groups_order_save]) if method in {'t-test', 't-test_overestim_var', 'wilcoxon'}: adata.uns[key_added]['logfoldchanges'] = np.rec.fromarrays( [n for n in rankings_gene_logfoldchanges], dtype=[(rn, 'float32') for rn in groups_order_save]) adata.uns[key_added]['pvals'] = np.rec.fromarrays( [n for n in rankings_gene_pvals], dtype=[(rn, 'float64') for rn in groups_order_save]) adata.uns[key_added]['pvals_adj'] = np.rec.fromarrays( [n for n in rankings_gene_pvals_adj], dtype=[(rn, 'float64') for rn in groups_order_save]) logg.info(' finished', time=True, end=' ' if _settings_verbosity_greater_or_equal_than(3) else '\n') logg.hint( 'added to `.uns[\'{}\']`\n' ' \'names\', sorted np.recarray to be indexed by group ids\n' ' \'scores\', sorted np.recarray to be indexed by group ids\n' .format(key_added) + (' \'logfoldchanges\', sorted np.recarray to be indexed by group ids\n' ' \'pvals\', sorted np.recarray to be indexed by group ids\n' ' \'pvals_adj\', sorted np.recarray to be indexed by group ids' if 
method in {'t-test', 't-test_overestim_var', 'wilcoxon'} else '')) return adata if copy else None
Rank genes for characterizing groups.

Parameters
----------
adata : :class:`~anndata.AnnData`
    Annotated data matrix.
groupby : `str`
    The key of the observations grouping to consider.
use_raw : `bool`, optional (default: `True`)
    Use `raw` attribute of `adata` if present.
groups
    Subset of groups, e.g. `['g1', 'g2', 'g3']`, to which comparison shall
    be restricted, or `'all'` (default), for all groups.
reference : `str`, optional (default: `'rest'`)
    If `'rest'`, compare each group to the union of the rest of the group.
    If a group identifier, compare with respect to this group.
n_genes : `int`, optional (default: 100)
    The number of genes that appear in the returned tables.
method : `{'logreg', 't-test', 'wilcoxon', 't-test_overestim_var'}`, optional (default: 't-test_overestim_var')
    If 't-test', uses t-test, if 'wilcoxon', uses Wilcoxon-Rank-Sum. If
    't-test_overestim_var', overestimates variance of each group. If
    'logreg' uses logistic regression, see [Ntranos18]_, `here
    <https://github.com/theislab/scanpy/issues/95>`__ and `here
    <http://www.nxn.se/valent/2018/3/5/actionable-scrna-seq-clusters>`__,
    for why this is meaningful.
corr_method : `{'benjamini-hochberg', 'bonferroni'}`, optional (default: 'benjamini-hochberg')
    p-value correction method. Used only for 't-test',
    't-test_overestim_var', and 'wilcoxon' methods.
rankby_abs : `bool`, optional (default: `False`)
    Rank genes by the absolute value of the score, not by the score. The
    returned scores are never the absolute values.
**kwds : keyword parameters
    Are passed to test methods. Currently this affects only parameters that
    are passed to `sklearn.linear_model.LogisticRegression
    <http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html>`__.
    For instance, you can pass `penalty='l1'` to try to come up with a
    minimal set of genes that are good predictors (sparse solution meaning
    few non-zero fitted coefficients).

Returns
-------
**names** : structured `np.ndarray` (`.uns['rank_genes_groups']`)
    Structured array to be indexed by group id storing the gene
    names. Ordered according to scores.
**scores** : structured `np.ndarray` (`.uns['rank_genes_groups']`)
    Structured array to be indexed by group id storing the z-score
    underlying the computation of a p-value for each gene for each
    group. Ordered according to scores.
**logfoldchanges** : structured `np.ndarray` (`.uns['rank_genes_groups']`)
    Structured array to be indexed by group id storing the log2
    fold change for each gene for each group. Ordered according to
    scores. Only provided if method is 't-test' like.
    Note: this is an approximation calculated from mean-log values.
**pvals** : structured `np.ndarray` (`.uns['rank_genes_groups']`)
    p-values.
**pvals_adj** : structured `np.ndarray` (`.uns['rank_genes_groups']`)
    Corrected p-values.

Notes
-----
There are slight inconsistencies depending on whether sparse
or dense data are passed. See `here <https://github.com/theislab/scanpy/blob/master/scanpy/tests/test_rank_genes_groups.py>`__.

Examples
--------
>>> adata = sc.datasets.pbmc68k_reduced()
>>> sc.tl.rank_genes_groups(adata, 'bulk_labels', method='wilcoxon')

# to visualize the results
>>> sc.pl.rank_genes_groups(adata)
Below is the instruction that describes the task:
### Input:
Rank genes for characterizing groups.

Parameters
----------
adata : :class:`~anndata.AnnData`
    Annotated data matrix.
groupby : `str`
    The key of the observations grouping to consider.
use_raw : `bool`, optional (default: `True`)
    Use `raw` attribute of `adata` if present.
groups
    Subset of groups, e.g. `['g1', 'g2', 'g3']`, to which comparison shall
    be restricted, or `'all'` (default), for all groups.
reference : `str`, optional (default: `'rest'`)
    If `'rest'`, compare each group to the union of the rest of the group.
    If a group identifier, compare with respect to this group.
n_genes : `int`, optional (default: 100)
    The number of genes that appear in the returned tables.
method : `{'logreg', 't-test', 'wilcoxon', 't-test_overestim_var'}`, optional (default: 't-test_overestim_var')
    If 't-test', uses t-test, if 'wilcoxon', uses Wilcoxon-Rank-Sum. If
    't-test_overestim_var', overestimates variance of each group. If
    'logreg' uses logistic regression, see [Ntranos18]_, `here
    <https://github.com/theislab/scanpy/issues/95>`__ and `here
    <http://www.nxn.se/valent/2018/3/5/actionable-scrna-seq-clusters>`__,
    for why this is meaningful.
corr_method : `{'benjamini-hochberg', 'bonferroni'}`, optional (default: 'benjamini-hochberg')
    p-value correction method. Used only for 't-test',
    't-test_overestim_var', and 'wilcoxon' methods.
rankby_abs : `bool`, optional (default: `False`)
    Rank genes by the absolute value of the score, not by the score. The
    returned scores are never the absolute values.
**kwds : keyword parameters
    Are passed to test methods. Currently this affects only parameters that
    are passed to `sklearn.linear_model.LogisticRegression
    <http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html>`__.
    For instance, you can pass `penalty='l1'` to try to come up with a
    minimal set of genes that are good predictors (sparse solution meaning
    few non-zero fitted coefficients).

Returns
-------
**names** : structured `np.ndarray` (`.uns['rank_genes_groups']`)
    Structured array to be indexed by group id storing the gene
    names. Ordered according to scores.
**scores** : structured `np.ndarray` (`.uns['rank_genes_groups']`)
    Structured array to be indexed by group id storing the z-score
    underlying the computation of a p-value for each gene for each
    group. Ordered according to scores.
**logfoldchanges** : structured `np.ndarray` (`.uns['rank_genes_groups']`)
    Structured array to be indexed by group id storing the log2
    fold change for each gene for each group. Ordered according to
    scores. Only provided if method is 't-test' like.
    Note: this is an approximation calculated from mean-log values.
**pvals** : structured `np.ndarray` (`.uns['rank_genes_groups']`)
    p-values.
**pvals_adj** : structured `np.ndarray` (`.uns['rank_genes_groups']`)
    Corrected p-values.

Notes
-----
There are slight inconsistencies depending on whether sparse
or dense data are passed. See `here <https://github.com/theislab/scanpy/blob/master/scanpy/tests/test_rank_genes_groups.py>`__.

Examples
--------
>>> adata = sc.datasets.pbmc68k_reduced()
>>> sc.tl.rank_genes_groups(adata, 'bulk_labels', method='wilcoxon')

# to visualize the results
>>> sc.pl.rank_genes_groups(adata)
### Response:
def rank_genes_groups(
        adata,
        groupby,
        use_raw=True,
        groups: Union[str, Iterable[str]] = 'all',
        reference='rest',
        n_genes=100,
        rankby_abs=False,
        key_added=None,
        copy=False,
        method='t-test_overestim_var',
        corr_method='benjamini-hochberg',
        log_transformed=True,
        **kwds
):
    """Rank genes for characterizing groups.

    Parameters
    ----------
    adata : :class:`~anndata.AnnData`
        Annotated data matrix.
    groupby : `str`
        The key of the observations grouping to consider.
    use_raw : `bool`, optional (default: `True`)
        Use `raw` attribute of `adata` if present.
    groups
        Subset of groups, e.g. `['g1', 'g2', 'g3']`, to which comparison shall
        be restricted, or `'all'` (default), for all groups.
    reference : `str`, optional (default: `'rest'`)
        If `'rest'`, compare each group to the union of the rest of the group.
        If a group identifier, compare with respect to this group.
    n_genes : `int`, optional (default: 100)
        The number of genes that appear in the returned tables.
    method : `{'logreg', 't-test', 'wilcoxon', 't-test_overestim_var'}`, optional (default: 't-test_overestim_var')
        If 't-test', uses t-test, if 'wilcoxon', uses Wilcoxon-Rank-Sum. If
        't-test_overestim_var', overestimates variance of each group. If
        'logreg' uses logistic regression, see [Ntranos18]_, `here
        <https://github.com/theislab/scanpy/issues/95>`__ and `here
        <http://www.nxn.se/valent/2018/3/5/actionable-scrna-seq-clusters>`__,
        for why this is meaningful.
    corr_method : `{'benjamini-hochberg', 'bonferroni'}`, optional (default: 'benjamini-hochberg')
        p-value correction method. Used only for 't-test',
        't-test_overestim_var', and 'wilcoxon' methods.
    rankby_abs : `bool`, optional (default: `False`)
        Rank genes by the absolute value of the score, not by the score. The
        returned scores are never the absolute values.
    **kwds : keyword parameters
        Are passed to test methods. Currently this affects only parameters that
        are passed to `sklearn.linear_model.LogisticRegression
        <http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html>`__.
        For instance, you can pass `penalty='l1'` to try to come up with a
        minimal set of genes that are good predictors (sparse solution meaning
        few non-zero fitted coefficients).

    Returns
    -------
    **names** : structured `np.ndarray` (`.uns['rank_genes_groups']`)
        Structured array to be indexed by group id storing the gene
        names. Ordered according to scores.
    **scores** : structured `np.ndarray` (`.uns['rank_genes_groups']`)
        Structured array to be indexed by group id storing the z-score
        underlying the computation of a p-value for each gene for each
        group. Ordered according to scores.
    **logfoldchanges** : structured `np.ndarray` (`.uns['rank_genes_groups']`)
        Structured array to be indexed by group id storing the log2
        fold change for each gene for each group. Ordered according to
        scores. Only provided if method is 't-test' like.
        Note: this is an approximation calculated from mean-log values.
    **pvals** : structured `np.ndarray` (`.uns['rank_genes_groups']`)
        p-values.
    **pvals_adj** : structured `np.ndarray` (`.uns['rank_genes_groups']`)
        Corrected p-values.

    Notes
    -----
    There are slight inconsistencies depending on whether sparse
    or dense data are passed. See `here <https://github.com/theislab/scanpy/blob/master/scanpy/tests/test_rank_genes_groups.py>`__.

    Examples
    --------
    >>> adata = sc.datasets.pbmc68k_reduced()
    >>> sc.tl.rank_genes_groups(adata, 'bulk_labels', method='wilcoxon')

    # to visualize the results
    >>> sc.pl.rank_genes_groups(adata)
    """
    if 'only_positive' in kwds:
        rankby_abs = not kwds.pop('only_positive')  # backwards compat
    logg.info('ranking genes', r=True)
    avail_methods = {'t-test', 't-test_overestim_var', 'wilcoxon', 'logreg'}
    if method not in avail_methods:
        raise ValueError('Method must be one of {}.'.format(avail_methods))
    avail_corr = {'benjamini-hochberg', 'bonferroni'}
    if corr_method not in avail_corr:
        raise ValueError('Correction method must be one of {}.'.format(avail_corr))

    adata = adata.copy() if copy else adata
    utils.sanitize_anndata(adata)
    # for clarity, rename variable
    groups_order = groups if isinstance(groups, str) else list(groups)
    if isinstance(groups_order, list) and isinstance(groups_order[0], int):
        groups_order = [str(n) for n in groups_order]
    if reference != 'rest' and reference not in set(groups_order):
        groups_order += [reference]
    if (reference != 'rest'
            and reference not in set(adata.obs[groupby].cat.categories)):
        raise ValueError('reference = {} needs to be one of groupby = {}.'
                         .format(reference,
                                 adata.obs[groupby].cat.categories.tolist()))

    groups_order, groups_masks = utils.select_groups(
        adata, groups_order, groupby)

    if key_added is None:
        key_added = 'rank_genes_groups'
    adata.uns[key_added] = {}
    adata.uns[key_added]['params'] = {
        'groupby': groupby,
        'reference': reference,
        'method': method,
        'use_raw': use_raw,
        'corr_method': corr_method,
    }

    # adata_comp mocks an AnnData object if use_raw is True
    # otherwise it's just the AnnData object
    adata_comp = adata
    if adata.raw is not None and use_raw:
        adata_comp = adata.raw
    X = adata_comp.X

    # for clarity, rename variable
    n_genes_user = n_genes
    # make sure indices are not OoB in case there are less genes than n_genes
    if n_genes_user > X.shape[1]:
        n_genes_user = X.shape[1]
    # in the following, n_genes is simply another name for the total number of genes
    n_genes = X.shape[1]

    n_groups = groups_masks.shape[0]
    ns = np.zeros(n_groups, dtype=int)
    for imask, mask in enumerate(groups_masks):
        ns[imask] = np.where(mask)[0].size
    logg.msg('consider \'{}\' groups:'.format(groupby), groups_order, v=4)
    logg.msg('with sizes:', ns, v=4)
    if reference != 'rest':
        ireference = np.where(groups_order == reference)[0][0]
    reference_indices = np.arange(adata_comp.n_vars, dtype=int)

    rankings_gene_scores = []
    rankings_gene_names = []
    rankings_gene_logfoldchanges = []
    rankings_gene_pvals = []
    rankings_gene_pvals_adj = []

    if method in {'t-test', 't-test_overestim_var'}:
        from scipy import stats
        from statsmodels.stats.multitest import multipletests
        # loop over all masks and compute means, variances and sample numbers
        means = np.zeros((n_groups, n_genes))
        vars = np.zeros((n_groups, n_genes))
        for imask, mask in enumerate(groups_masks):
            means[imask], vars[imask] = _get_mean_var(X[mask])
        # test each either against the union of all other groups or against a
        # specific group
        for igroup in range(n_groups):
            if reference == 'rest':
                mask_rest = ~groups_masks[igroup]
            else:
                if igroup == ireference:
                    continue
                else:
                    mask_rest = groups_masks[ireference]
            mean_group, var_group = means[igroup], vars[igroup]
            mean_rest, var_rest = _get_mean_var(X[mask_rest])

            ns_group = ns[igroup]  # number of observations in group
            if method == 't-test':
                ns_rest = np.where(mask_rest)[0].size
            elif method == 't-test_overestim_var':
                ns_rest = ns[igroup]  # hack for overestimating the variance for small groups
            else:
                raise ValueError('Method does not exist.')

            scores, pvals = stats.ttest_ind_from_stats(
                mean1=mean_group, std1=np.sqrt(var_group), nobs1=ns_group,
                mean2=mean_rest, std2=np.sqrt(var_rest), nobs2=ns_rest,
                equal_var=False  # Welch's
            )

            # Fold change
            foldchanges = (np.expm1(mean_group) + 1e-9) / (np.expm1(mean_rest) + 1e-9)  # add small value to remove 0's

            scores[np.isnan(scores)] = 0  # I think it's only nan when means are the same and vars are 0
            pvals[np.isnan(pvals)] = 1  # This also has to happen for Benjamini Hochberg

            if corr_method == 'benjamini-hochberg':
                _, pvals_adj, _, _ = multipletests(pvals, alpha=0.05, method='fdr_bh')
            elif corr_method == 'bonferroni':
                pvals_adj = np.minimum(pvals * n_genes, 1.0)

            scores_sort = np.abs(scores) if rankby_abs else scores
            partition = np.argpartition(scores_sort, -n_genes_user)[-n_genes_user:]
            partial_indices = np.argsort(scores_sort[partition])[::-1]
            global_indices = reference_indices[partition][partial_indices]
            rankings_gene_scores.append(scores[global_indices])
            rankings_gene_logfoldchanges.append(np.log2(foldchanges[global_indices]))
            rankings_gene_names.append(adata_comp.var_names[global_indices])
            rankings_gene_pvals.append(pvals[global_indices])
            rankings_gene_pvals_adj.append(pvals_adj[global_indices])

    elif method == 'logreg':
        # if reference is not set, then the groups listed will be compared to the rest
        # if reference is set, then the groups listed will be compared only to the other groups listed
        from sklearn.linear_model import LogisticRegression
        reference = groups_order[0]
        if len(groups) == 1:
            raise Exception('Cannot perform logistic regression on a single cluster.')
        adata_copy = adata[adata.obs[groupby].isin(groups_order)]
        adata_comp = adata_copy
        if adata.raw is not None and use_raw:
            adata_comp = adata_copy.raw
        X = adata_comp.X

        clf = LogisticRegression(**kwds)
        clf.fit(X, adata_copy.obs[groupby].cat.codes)
        scores_all = clf.coef_
        for igroup, group in enumerate(groups_order):
            if len(groups_order) <= 2:  # binary logistic regression
                scores = scores_all[0]
            else:
                scores = scores_all[igroup]

            partition = np.argpartition(scores, -n_genes_user)[-n_genes_user:]
            partial_indices = np.argsort(scores[partition])[::-1]
            global_indices = reference_indices[partition][partial_indices]
            rankings_gene_scores.append(scores[global_indices])
            rankings_gene_names.append(adata_comp.var_names[global_indices])
            if len(groups_order) <= 2:
                break

    elif method == 'wilcoxon':
        from scipy import stats
        from statsmodels.stats.multitest import multipletests
        CONST_MAX_SIZE = 10000000

        means = np.zeros((n_groups, n_genes))
        vars = np.zeros((n_groups, n_genes))

        # initialize space for z-scores
        scores = np.zeros(n_genes)
        # First loop: Loop over all genes
        if reference != 'rest':
            for imask, mask in enumerate(groups_masks):
                means[imask], vars[imask] = _get_mean_var(X[mask])  # for fold-change only

                if imask == ireference:
                    continue
                else:
                    mask_rest = groups_masks[ireference]
                ns_rest = np.where(mask_rest)[0].size
                mean_rest, var_rest = _get_mean_var(X[mask_rest])  # for fold-change only

                if ns_rest <= 25 or ns[imask] <= 25:
                    logg.hint('Few observations in a group for '
                              'normal approximation (<=25). Lower test accuracy.')
                n_active = ns[imask]
                m_active = ns_rest

                # Now calculate gene expression ranking in chunkes:
                chunk = []
                # Calculate chunk frames
                n_genes_max_chunk = floor(CONST_MAX_SIZE / (n_active + m_active))
                if n_genes_max_chunk < n_genes:
                    chunk_index = n_genes_max_chunk
                    while chunk_index < n_genes:
                        chunk.append(chunk_index)
                        chunk_index = chunk_index + n_genes_max_chunk
                    chunk.append(n_genes)
                else:
                    chunk.append(n_genes)

                left = 0
                # Calculate rank sums for each chunk for the current mask
                for chunk_index, right in enumerate(chunk):
                    # Check if issparse is true: AnnData objects are currently sparse.csr or ndarray.
                    if issparse(X):
                        df1 = pd.DataFrame(data=X[mask, left:right].todense())
                        df2 = pd.DataFrame(data=X[mask_rest, left:right].todense(),
                                           index=np.arange(start=n_active, stop=n_active + m_active))
                    else:
                        df1 = pd.DataFrame(data=X[mask, left:right])
                        df2 = pd.DataFrame(data=X[mask_rest, left:right],
                                           index=np.arange(start=n_active, stop=n_active + m_active))
                    df1 = df1.append(df2)
                    ranks = df1.rank()
                    # sum up adjusted_ranks to calculate W_m,n
                    scores[left:right] = np.sum(ranks.loc[0:n_active, :])
                    left = right

                scores = (scores - (n_active * (n_active + m_active + 1) / 2)) / sqrt(
                    (n_active * m_active * (n_active + m_active + 1) / 12))
                scores[np.isnan(scores)] = 0
                pvals = 2 * stats.distributions.norm.sf(np.abs(scores))

                if corr_method == 'benjamini-hochberg':
                    pvals[np.isnan(pvals)] = 1  # set Nan values to 1 to properly convert using Benjamini Hochberg
                    _, pvals_adj, _, _ = multipletests(pvals, alpha=0.05, method='fdr_bh')
                elif corr_method == 'bonferroni':
                    pvals_adj = np.minimum(pvals * n_genes, 1.0)

                # Fold change
                foldchanges = (np.expm1(means[imask]) + 1e-9) / (np.expm1(mean_rest) + 1e-9)  # add small value to remove 0's
                scores_sort = np.abs(scores) if rankby_abs else scores
                partition = np.argpartition(scores_sort, -n_genes_user)[-n_genes_user:]
                partial_indices = np.argsort(scores_sort[partition])[::-1]
                global_indices = reference_indices[partition][partial_indices]
                rankings_gene_scores.append(scores[global_indices])
                rankings_gene_names.append(adata_comp.var_names[global_indices])
                rankings_gene_logfoldchanges.append(np.log2(foldchanges[global_indices]))
                rankings_gene_pvals.append(pvals[global_indices])
                rankings_gene_pvals_adj.append(pvals_adj[global_indices])

        # If no reference group exists, ranking needs only to be done once (full mask)
        else:
            scores = np.zeros((n_groups, n_genes))
            chunk = []
            n_cells = X.shape[0]
            n_genes_max_chunk = floor(CONST_MAX_SIZE / n_cells)
            if n_genes_max_chunk < n_genes:
                chunk_index = n_genes_max_chunk
                while chunk_index < n_genes:
                    chunk.append(chunk_index)
                    chunk_index = chunk_index + n_genes_max_chunk
                chunk.append(n_genes)
            else:
                chunk.append(n_genes)
            left = 0
            for chunk_index, right in enumerate(chunk):
                # Check if issparse is true
                if issparse(X):
                    df1 = pd.DataFrame(data=X[:, left:right].todense())
                else:
                    df1 = pd.DataFrame(data=X[:, left:right])
                ranks = df1.rank()
                # sum up adjusted_ranks to calculate W_m,n
                for imask, mask in enumerate(groups_masks):
                    scores[imask, left:right] = np.sum(ranks.loc[mask, :])
                left = right

            for imask, mask in enumerate(groups_masks):
                mask_rest = ~groups_masks[imask]
                means[imask], vars[imask] = _get_mean_var(X[mask])  # for fold-change
                mean_rest, var_rest = _get_mean_var(X[mask_rest])  # for fold-change

                scores[imask, :] = (scores[imask, :] - (ns[imask] * (n_cells + 1) / 2)) / sqrt(
                    (ns[imask] * (n_cells - ns[imask]) * (n_cells + 1) / 12))
                scores[np.isnan(scores)] = 0
                pvals = 2 * stats.distributions.norm.sf(np.abs(scores[imask, :]))

                if corr_method == 'benjamini-hochberg':
                    pvals[np.isnan(pvals)] = 1  # set Nan values to 1 to properly convert using Benjamini Hochberg
                    _, pvals_adj, _, _ = multipletests(pvals, alpha=0.05, method='fdr_bh')
                elif corr_method == 'bonferroni':
                    pvals_adj = np.minimum(pvals * n_genes, 1.0)

                # Fold change
                foldchanges = (np.expm1(means[imask]) + 1e-9) / (np.expm1(mean_rest) + 1e-9)  # add small value to remove 0's
                scores_sort = np.abs(scores) if rankby_abs else scores
                partition = np.argpartition(scores_sort[imask, :], -n_genes_user)[-n_genes_user:]
                partial_indices = np.argsort(scores_sort[imask, partition])[::-1]
                global_indices = reference_indices[partition][partial_indices]
                rankings_gene_scores.append(scores[imask, global_indices])
                rankings_gene_names.append(adata_comp.var_names[global_indices])
                rankings_gene_logfoldchanges.append(np.log2(foldchanges[global_indices]))
                rankings_gene_pvals.append(pvals[global_indices])
                rankings_gene_pvals_adj.append(pvals_adj[global_indices])

    groups_order_save = [str(g) for g in groups_order]
    if (reference != 'rest' and method != 'logreg') or (method == 'logreg' and len(groups) == 2):
        groups_order_save = [g for g in groups_order if g != reference]

    adata.uns[key_added]['scores'] = np.rec.fromarrays(
        [n for n in rankings_gene_scores],
        dtype=[(rn, 'float32') for rn in groups_order_save])
    adata.uns[key_added]['names'] = np.rec.fromarrays(
        [n for n in rankings_gene_names],
        dtype=[(rn, 'U50') for rn in groups_order_save])

    if method in {'t-test', 't-test_overestim_var', 'wilcoxon'}:
        adata.uns[key_added]['logfoldchanges'] = np.rec.fromarrays(
            [n for n in rankings_gene_logfoldchanges],
            dtype=[(rn, 'float32') for rn in groups_order_save])
        adata.uns[key_added]['pvals'] = np.rec.fromarrays(
            [n for n in rankings_gene_pvals],
            dtype=[(rn, 'float64') for rn in groups_order_save])
        adata.uns[key_added]['pvals_adj'] = np.rec.fromarrays(
            [n for n in rankings_gene_pvals_adj],
            dtype=[(rn, 'float64') for rn in groups_order_save])

    logg.info('    finished', time=True,
              end=' ' if _settings_verbosity_greater_or_equal_than(3) else '\n')
    logg.hint(
        'added to `.uns[\'{}\']`\n'
        '    \'names\', sorted np.recarray to be indexed by group ids\n'
        '    \'scores\', sorted np.recarray to be indexed by group ids\n'
        .format(key_added)
        + ('    \'logfoldchanges\', sorted np.recarray to be indexed by group ids\n'
           '    \'pvals\', sorted np.recarray to be indexed by group ids\n'
           '    \'pvals_adj\', sorted np.recarray to be indexed by group ids'
           if method in {'t-test', 't-test_overestim_var', 'wilcoxon'} else ''))
    return adata if copy else None
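The record above adjusts p-values with `multipletests(pvals, alpha=0.05, method='fdr_bh')`. As a minimal plain-Python sketch of what that call computes (an illustrative re-implementation, not statsmodels' code), the Benjamini-Hochberg step-up procedure ranks p-values, scales each by `m / rank`, and enforces monotonicity from the largest rank down:

```python
def benjamini_hochberg(pvals):
    """Benjamini-Hochberg step-up FDR adjustment -- a plain-Python sketch
    of the pvals_adj that multipletests(..., method='fdr_bh') returns."""
    m = len(pvals)
    # rank p-values ascending, remembering their original positions
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_min = 1.0
    # walk from the largest rank down, enforcing monotonicity
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * m / rank)
        adjusted[i] = running_min
    return adjusted
```

The Bonferroni branch in the same code is simply `min(p * n_genes, 1.0)` per gene.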
def create_graph_from_data(self, data):
    """Use CGNN to create a graph from scratch. All the possible structures
    are tested, which leads to a super exponential complexity. It would be
    preferable to start from a graph skeleton for large graphs.

    Args:
        data (pandas.DataFrame): Observational data on which causal
            discovery has to be performed.

    Returns:
        networkx.DiGraph: Solution given by CGNN.
    """
    warnings.warn("An exhaustive search of the causal structure of CGNN without"
                  " skeleton is super-exponential in the number of variables.")

    # Building all possible candidates:
    nb_vars = len(list(data.columns))
    data = scale(data.values).astype('float32')
    candidates = [np.reshape(np.array(i), (nb_vars, nb_vars)) for i
                  in itertools.product([0, 1], repeat=nb_vars*nb_vars)
                  if (np.trace(np.reshape(np.array(i), (nb_vars, nb_vars))) == 0
                      and nx.is_directed_acyclic_graph(nx.DiGraph(np.reshape(np.array(i), (nb_vars, nb_vars)))))]

    warnings.warn("A total of {} graphs will be evaluated.".format(len(candidates)))
    scores = [parallel_graph_evaluation(data, i, nh=self.nh, nb_runs=self.nb_runs,
                                        gpu=self.gpu, nb_jobs=self.nb_jobs, lr=self.lr,
                                        train_epochs=self.train_epochs,
                                        test_epochs=self.test_epochs, verbose=self.verbose)
              for i in candidates]
    final_candidate = candidates[scores.index(min(scores))]
    output = np.zeros(final_candidate.shape)

    # Retrieve the confidence score on each edge.
    for (i, j), x in np.ndenumerate(final_candidate):
        if x > 0:
            cand = final_candidate
            cand[i, j] = 0
            output[i, j] = min(scores) - scores[candidates.index(cand)]

    return nx.DiGraph(candidates[output],
                      {idx: i for idx, i in enumerate(data.columns)})

Use CGNN to create a graph from scratch. All the possible structures
are tested, which leads to a super exponential complexity. It would be
preferable to start from a graph skeleton for large graphs.

Args:
    data (pandas.DataFrame): Observational data on which causal
        discovery has to be performed.

Returns:
    networkx.DiGraph: Solution given by CGNN.

Below is the instruction that describes the task:
### Input:
Use CGNN to create a graph from scratch. All the possible structures
are tested, which leads to a super exponential complexity. It would be
preferable to start from a graph skeleton for large graphs.

Args:
    data (pandas.DataFrame): Observational data on which causal
        discovery has to be performed.

Returns:
    networkx.DiGraph: Solution given by CGNN.
### Response:
def create_graph_from_data(self, data):
    """Use CGNN to create a graph from scratch. All the possible structures
    are tested, which leads to a super exponential complexity. It would be
    preferable to start from a graph skeleton for large graphs.

    Args:
        data (pandas.DataFrame): Observational data on which causal
            discovery has to be performed.

    Returns:
        networkx.DiGraph: Solution given by CGNN.
    """
    warnings.warn("An exhaustive search of the causal structure of CGNN without"
                  " skeleton is super-exponential in the number of variables.")

    # Building all possible candidates:
    nb_vars = len(list(data.columns))
    data = scale(data.values).astype('float32')
    candidates = [np.reshape(np.array(i), (nb_vars, nb_vars)) for i
                  in itertools.product([0, 1], repeat=nb_vars*nb_vars)
                  if (np.trace(np.reshape(np.array(i), (nb_vars, nb_vars))) == 0
                      and nx.is_directed_acyclic_graph(nx.DiGraph(np.reshape(np.array(i), (nb_vars, nb_vars)))))]

    warnings.warn("A total of {} graphs will be evaluated.".format(len(candidates)))
    scores = [parallel_graph_evaluation(data, i, nh=self.nh, nb_runs=self.nb_runs,
                                        gpu=self.gpu, nb_jobs=self.nb_jobs, lr=self.lr,
                                        train_epochs=self.train_epochs,
                                        test_epochs=self.test_epochs, verbose=self.verbose)
              for i in candidates]
    final_candidate = candidates[scores.index(min(scores))]
    output = np.zeros(final_candidate.shape)

    # Retrieve the confidence score on each edge.
    for (i, j), x in np.ndenumerate(final_candidate):
        if x > 0:
            cand = final_candidate
            cand[i, j] = 0
            output[i, j] = min(scores) - scores[candidates.index(cand)]

    return nx.DiGraph(candidates[output],
                      {idx: i for idx, i in enumerate(data.columns)})
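The candidate-building comprehension above keeps exactly the zero-trace binary adjacency matrices that encode a DAG, which is why the search is super-exponential. A dependency-free sketch (helper names are mine, not part of the CDT package) shows how fast that candidate set grows:

```python
import itertools


def is_dag(mat, n):
    """True if the 0/1 adjacency matrix encodes a directed acyclic graph.

    Kahn-style check: repeatedly peel off nodes with no incoming edge
    from the remaining set; a cycle leaves a non-empty residue."""
    remaining = set(range(n))
    while remaining:
        free = {v for v in remaining
                if not any(mat[u][v] for u in remaining)}
        if not free:
            return False  # every remaining node sits on a cycle
        remaining -= free
    return True


def count_dag_candidates(n):
    """Count the structures the exhaustive search enumerates:
    n x n binary matrices with zero trace that are acyclic."""
    count = 0
    for bits in itertools.product([0, 1], repeat=n * n):
        mat = [list(bits[r * n:(r + 1) * n]) for r in range(n)]
        if any(mat[i][i] for i in range(n)):
            continue  # nonzero trace (self-loop) is rejected up front
        if is_dag(mat, n):
            count += 1
    return count
```

Already at 3 variables there are 25 candidates, and at 4 there are 543 -- each of which CGNN must train and score.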
def attribute_exists(self, attribute, section):
    """
    Checks if given attribute exists.

    Usage::

        >>> content = ["[Section A]\\n", "; Comment.\\n", "Attribute 1 = \\"Value A\\"\\n", "\\n", \
"[Section B]\\n", "Attribute 2 = \\"Value B\\"\\n"]
        >>> sections_file_parser = SectionsFileParser()
        >>> sections_file_parser.content = content
        >>> sections_file_parser.parse()
        <foundations.parsers.SectionsFileParser object at 0x234564563>
        >>> sections_file_parser.attribute_exists("Attribute 1", "Section A")
        True
        >>> sections_file_parser.attribute_exists("Attribute 2", "Section A")
        False

    :param attribute: Attribute to check existence.
    :type attribute: unicode
    :param section: Section to search attribute into.
    :type section: unicode
    :return: Attribute existence.
    :rtype: bool
    """

    if foundations.namespace.remove_namespace(attribute, root_only=True) in self.get_attributes(
            section, strip_namespaces=True):
        LOGGER.debug("> '{0}' attribute exists in '{1}' section.".format(attribute, section))
        return True
    else:
        LOGGER.debug("> '{0}' attribute doesn't exist in '{1}' section.".format(attribute, section))
        return False

Checks if given attribute exists.

Usage::

    >>> content = ["[Section A]\\n", "; Comment.\\n", "Attribute 1 = \\"Value A\\"\\n", "\\n", \
"[Section B]\\n", "Attribute 2 = \\"Value B\\"\\n"]
    >>> sections_file_parser = SectionsFileParser()
    >>> sections_file_parser.content = content
    >>> sections_file_parser.parse()
    <foundations.parsers.SectionsFileParser object at 0x234564563>
    >>> sections_file_parser.attribute_exists("Attribute 1", "Section A")
    True
    >>> sections_file_parser.attribute_exists("Attribute 2", "Section A")
    False

:param attribute: Attribute to check existence.
:type attribute: unicode
:param section: Section to search attribute into.
:type section: unicode
:return: Attribute existence.
:rtype: bool

Below is the instruction that describes the task:
### Input:
Checks if given attribute exists.

Usage::

    >>> content = ["[Section A]\\n", "; Comment.\\n", "Attribute 1 = \\"Value A\\"\\n", "\\n", \
"[Section B]\\n", "Attribute 2 = \\"Value B\\"\\n"]
    >>> sections_file_parser = SectionsFileParser()
    >>> sections_file_parser.content = content
    >>> sections_file_parser.parse()
    <foundations.parsers.SectionsFileParser object at 0x234564563>
    >>> sections_file_parser.attribute_exists("Attribute 1", "Section A")
    True
    >>> sections_file_parser.attribute_exists("Attribute 2", "Section A")
    False

:param attribute: Attribute to check existence.
:type attribute: unicode
:param section: Section to search attribute into.
:type section: unicode
:return: Attribute existence.
:rtype: bool
### Response:
def attribute_exists(self, attribute, section):
    """
    Checks if given attribute exists.

    Usage::

        >>> content = ["[Section A]\\n", "; Comment.\\n", "Attribute 1 = \\"Value A\\"\\n", "\\n", \
"[Section B]\\n", "Attribute 2 = \\"Value B\\"\\n"]
        >>> sections_file_parser = SectionsFileParser()
        >>> sections_file_parser.content = content
        >>> sections_file_parser.parse()
        <foundations.parsers.SectionsFileParser object at 0x234564563>
        >>> sections_file_parser.attribute_exists("Attribute 1", "Section A")
        True
        >>> sections_file_parser.attribute_exists("Attribute 2", "Section A")
        False

    :param attribute: Attribute to check existence.
    :type attribute: unicode
    :param section: Section to search attribute into.
    :type section: unicode
    :return: Attribute existence.
    :rtype: bool
    """

    if foundations.namespace.remove_namespace(attribute, root_only=True) in self.get_attributes(
            section, strip_namespaces=True):
        LOGGER.debug("> '{0}' attribute exists in '{1}' section.".format(attribute, section))
        return True
    else:
        LOGGER.debug("> '{0}' attribute doesn't exist in '{1}' section.".format(attribute, section))
        return False
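`attribute_exists` boils down to a membership test after stripping namespace prefixes on both sides. A self-contained sketch of that core (the `|` separator and helper names are assumptions for illustration; the real logic lives in `foundations.namespace`):

```python
def remove_namespace(name, separator="|", root_only=False):
    """Drop namespace prefixes: 'Section A|Attribute 1' -> 'Attribute 1'.
    With root_only=True only the first (root) prefix is removed."""
    if separator not in name:
        return name
    if root_only:
        return name.split(separator, 1)[1]
    return name.rsplit(separator, 1)[-1]


def attribute_exists(attribute, section_attributes):
    """Membership test on namespace-stripped attribute names."""
    stripped = {remove_namespace(a) for a in section_attributes}
    return remove_namespace(attribute, root_only=True) in stripped
```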
def _find_primary_without_dot_start(self, offset):
    """It tries to find the undotted primary start

    It is different from `self._get_atom_start()` in that it
    follows function calls, too; such as in ``f(x)``.
    """
    last_atom = offset
    offset = self._find_last_non_space_char(last_atom)
    while offset > 0 and self.code[offset] in ')]':
        last_atom = self._find_parens_start(offset)
        offset = self._find_last_non_space_char(last_atom - 1)
    if offset >= 0 and (self.code[offset] in '"\'})]' or
                        self._is_id_char(offset)):
        atom_start = self._find_atom_start(offset)
        if not keyword.iskeyword(self.code[atom_start:offset + 1]):
            return atom_start
    return last_atom

It tries to find the undotted primary start

It is different from `self._get_atom_start()` in that it
follows function calls, too; such as in ``f(x)``.

Below is the instruction that describes the task:
### Input:
It tries to find the undotted primary start

It is different from `self._get_atom_start()` in that it
follows function calls, too; such as in ``f(x)``.
### Response:
def _find_primary_without_dot_start(self, offset):
    """It tries to find the undotted primary start

    It is different from `self._get_atom_start()` in that it
    follows function calls, too; such as in ``f(x)``.
    """
    last_atom = offset
    offset = self._find_last_non_space_char(last_atom)
    while offset > 0 and self.code[offset] in ')]':
        last_atom = self._find_parens_start(offset)
        offset = self._find_last_non_space_char(last_atom - 1)
    if offset >= 0 and (self.code[offset] in '"\'})]' or
                        self._is_id_char(offset)):
        atom_start = self._find_atom_start(offset)
        if not keyword.iskeyword(self.code[atom_start:offset + 1]):
            return atom_start
    return last_atom
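The loop above leans on `self._find_parens_start(offset)` to jump from a closing `)` or `]` back to its matching opener before continuing leftwards. That helper, sketched here as a standalone function (rope's real version also has to skip strings and comments), is a straightforward backwards depth count:

```python
def find_parens_start(code, offset):
    """Given the offset of a closing ')' or ']', walk left to its opener."""
    pairs = {')': '(', ']': '['}
    closer = code[offset]
    opener = pairs[closer]
    depth = 0
    for i in range(offset, -1, -1):
        if code[i] == closer:
            depth += 1          # nested close: one more opener to find
        elif code[i] == opener:
            depth -= 1
            if depth == 0:
                return i        # matching opener reached
    raise ValueError("unbalanced parentheses")
```

For `"g(f(x), y)"`, the closer at offset 9 matches the `(` at offset 1, so scanning then continues from the `g` before it -- that is how the primary in `f(x).attr` is followed past the call.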
def getResultsRange(self):
    """Returns the valid result range for this routine analysis based on
    the results ranges defined in the Analysis Request this routine
    analysis is assigned to.

    A routine analysis will be considered out of range if its result falls
    out of the range defined in "min" and "max". If there are values set
    for "warn_min" and "warn_max", these are used to compute the shoulders
    in both ends of the range. Thus, an analysis can be out of range, but
    be within shoulders still.

    :return: A dictionary with keys "min", "max", "warn_min" and "warn_max"
    :rtype: dict
    """
    specs = ResultsRangeDict()
    analysis_request = self.getRequest()
    if not analysis_request:
        return specs

    keyword = self.getKeyword()
    ar_ranges = analysis_request.getResultsRange()
    # Get the result range that corresponds to this specific analysis
    an_range = [rr for rr in ar_ranges if rr.get('keyword', '') == keyword]
    return an_range and an_range[0] or specs

Returns the valid result range for this routine analysis based on
the results ranges defined in the Analysis Request this routine
analysis is assigned to.

A routine analysis will be considered out of range if its result falls
out of the range defined in "min" and "max". If there are values set
for "warn_min" and "warn_max", these are used to compute the shoulders
in both ends of the range. Thus, an analysis can be out of range, but
be within shoulders still.

:return: A dictionary with keys "min", "max", "warn_min" and "warn_max"
:rtype: dict

Below is the instruction that describes the task:
### Input:
Returns the valid result range for this routine analysis based on
the results ranges defined in the Analysis Request this routine
analysis is assigned to.

A routine analysis will be considered out of range if its result falls
out of the range defined in "min" and "max". If there are values set
for "warn_min" and "warn_max", these are used to compute the shoulders
in both ends of the range. Thus, an analysis can be out of range, but
be within shoulders still.

:return: A dictionary with keys "min", "max", "warn_min" and "warn_max"
:rtype: dict
### Response:
def getResultsRange(self):
    """Returns the valid result range for this routine analysis based on
    the results ranges defined in the Analysis Request this routine
    analysis is assigned to.

    A routine analysis will be considered out of range if its result falls
    out of the range defined in "min" and "max". If there are values set
    for "warn_min" and "warn_max", these are used to compute the shoulders
    in both ends of the range. Thus, an analysis can be out of range, but
    be within shoulders still.

    :return: A dictionary with keys "min", "max", "warn_min" and "warn_max"
    :rtype: dict
    """
    specs = ResultsRangeDict()
    analysis_request = self.getRequest()
    if not analysis_request:
        return specs

    keyword = self.getKeyword()
    ar_ranges = analysis_request.getResultsRange()
    # Get the result range that corresponds to this specific analysis
    an_range = [rr for rr in ar_ranges if rr.get('keyword', '') == keyword]
    return an_range and an_range[0] or specs
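The lookup at the end of `getResultsRange` is just "first dict whose `keyword` field matches, else the empty spec". A dependency-free sketch of that selection (plain dicts stand in for `ResultsRangeDict`; names are illustrative):

```python
def get_results_range(keyword, ar_ranges, default=None):
    """Return the first results-range dict whose 'keyword' matches,
    falling back to an (empty) default spec, as the method above does."""
    matches = [rr for rr in ar_ranges if rr.get('keyword', '') == keyword]
    return matches[0] if matches else (default if default is not None else {})
```

Note that the original's `an_range and an_range[0] or specs` idiom would also fall back to `specs` if the matching dict were empty or falsy; the explicit `if matches` form avoids that edge case.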
def _set_load_balance(self, v, load=False): """ Setter method for load_balance, mapped from YANG variable /interface/port_channel/load_balance (enumeration) If this variable is read-only (config: false) in the source YANG file, then _set_load_balance is considered as a private method. Backends looking to populate this variable should do so via calling thisObj._set_load_balance() directly. """ if hasattr(v, "_utype"): v = v._utype(v) try: t = YANGDynClass(v,base=RestrictedClassType(base_type=unicode, restriction_type="dict_key", restriction_arg={u'src-dst-ip-port': {'value': 6}, u'src-mac-vid': {'value': 2}, u'src-dst-ip': {'value': 4}, u'src-dst-ip-mac-vid': {'value': 5}, u'dst-mac-vid': {'value': 1}, u'src-dst-mac-vid': {'value': 3}, u'src-dst-ip-mac-vid-port': {'value': 7}},), default=unicode("src-dst-ip-mac-vid-port"), is_leaf=True, yang_name="load-balance", rest_name="load-balance", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Load balancing Commands'}}, namespace='urn:brocade.com:mgmt:brocade-interface', defining_module='brocade-interface', yang_type='enumeration', is_config=True) except (TypeError, ValueError): raise ValueError({ 'error-string': """load_balance must be of a type compatible with enumeration""", 'defined-type': "brocade-interface:enumeration", 'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_type="dict_key", restriction_arg={u'src-dst-ip-port': {'value': 6}, u'src-mac-vid': {'value': 2}, u'src-dst-ip': {'value': 4}, u'src-dst-ip-mac-vid': {'value': 5}, u'dst-mac-vid': {'value': 1}, u'src-dst-mac-vid': {'value': 3}, u'src-dst-ip-mac-vid-port': {'value': 7}},), default=unicode("src-dst-ip-mac-vid-port"), is_leaf=True, yang_name="load-balance", rest_name="load-balance", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Load balancing Commands'}}, namespace='urn:brocade.com:mgmt:brocade-interface', defining_module='brocade-interface', yang_type='enumeration', is_config=True)""", }) self.__load_balance = t if hasattr(self, '_set'): self._set()
Setter method for load_balance, mapped from YANG variable /interface/port_channel/load_balance (enumeration) If this variable is read-only (config: false) in the source YANG file, then _set_load_balance is considered as a private method. Backends looking to populate this variable should do so via calling thisObj._set_load_balance() directly.
Below is the instruction that describes the task: ### Input: Setter method for load_balance, mapped from YANG variable /interface/port_channel/load_balance (enumeration) If this variable is read-only (config: false) in the source YANG file, then _set_load_balance is considered as a private method. Backends looking to populate this variable should do so via calling thisObj._set_load_balance() directly. ### Response: def _set_load_balance(self, v, load=False): """ Setter method for load_balance, mapped from YANG variable /interface/port_channel/load_balance (enumeration) If this variable is read-only (config: false) in the source YANG file, then _set_load_balance is considered as a private method. Backends looking to populate this variable should do so via calling thisObj._set_load_balance() directly. """ if hasattr(v, "_utype"): v = v._utype(v) try: t = YANGDynClass(v,base=RestrictedClassType(base_type=unicode, restriction_type="dict_key", restriction_arg={u'src-dst-ip-port': {'value': 6}, u'src-mac-vid': {'value': 2}, u'src-dst-ip': {'value': 4}, u'src-dst-ip-mac-vid': {'value': 5}, u'dst-mac-vid': {'value': 1}, u'src-dst-mac-vid': {'value': 3}, u'src-dst-ip-mac-vid-port': {'value': 7}},), default=unicode("src-dst-ip-mac-vid-port"), is_leaf=True, yang_name="load-balance", rest_name="load-balance", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Load balancing Commands'}}, namespace='urn:brocade.com:mgmt:brocade-interface', defining_module='brocade-interface', yang_type='enumeration', is_config=True) except (TypeError, ValueError): raise ValueError({ 'error-string': """load_balance must be of a type compatible with enumeration""", 'defined-type': "brocade-interface:enumeration", 'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_type="dict_key", restriction_arg={u'src-dst-ip-port': {'value': 6}, u'src-mac-vid': {'value': 2}, u'src-dst-ip': {'value': 4}, u'src-dst-ip-mac-vid': {'value': 5}, u'dst-mac-vid': {'value': 1}, u'src-dst-mac-vid': {'value': 3}, u'src-dst-ip-mac-vid-port': {'value': 7}},), default=unicode("src-dst-ip-mac-vid-port"), is_leaf=True, yang_name="load-balance", rest_name="load-balance", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Load balancing Commands'}}, namespace='urn:brocade.com:mgmt:brocade-interface', defining_module='brocade-interface', yang_type='enumeration', is_config=True)""", }) self.__load_balance = t if hasattr(self, '_set'): self._set()
def metadata_wrapper(fn): """Save metadata of last api call.""" @functools.wraps(fn) def wrapped_f(self, *args, **kwargs): self.last_metadata = {} self.last_metadata["url"] = self.configuration.host + args[0] self.last_metadata["method"] = args[1] self.last_metadata["timestamp"] = time.time() try: return fn(self, *args, **kwargs) except Exception as e: self.last_metadata["exception"] = e raise return wrapped_f
Save metadata of last api call.
Below is the instruction that describes the task: ### Input: Save metadata of last api call. ### Response: def metadata_wrapper(fn): """Save metadata of last api call.""" @functools.wraps(fn) def wrapped_f(self, *args, **kwargs): self.last_metadata = {} self.last_metadata["url"] = self.configuration.host + args[0] self.last_metadata["method"] = args[1] self.last_metadata["timestamp"] = time.time() try: return fn(self, *args, **kwargs) except Exception as e: self.last_metadata["exception"] = e raise return wrapped_f
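Though the surrounding client class is not shown in the record above, the behavior of `metadata_wrapper` can be sketched standalone; the `_Config`/`_Client` classes and the host URL below are hypothetical stand-ins:

```python
import functools
import time


def metadata_wrapper(fn):
    """Save metadata of last api call (duplicated from the record above)."""
    @functools.wraps(fn)
    def wrapped_f(self, *args, **kwargs):
        self.last_metadata = {}
        self.last_metadata["url"] = self.configuration.host + args[0]
        self.last_metadata["method"] = args[1]
        self.last_metadata["timestamp"] = time.time()
        try:
            return fn(self, *args, **kwargs)
        except Exception as e:
            self.last_metadata["exception"] = e
            raise
    return wrapped_f


class _Config:
    # hypothetical configuration object; only .host is required by the wrapper
    host = "https://api.example.com"


class _Client:
    configuration = _Config()

    @metadata_wrapper
    def call(self, path, method):
        return (path, method)


client = _Client()
client.call("/v1/ping", "GET")
print(client.last_metadata["url"])     # https://api.example.com/v1/ping
print(client.last_metadata["method"])  # GET
```

Note the wrapper records the exception and re-raises it, so callers still see failures while `last_metadata` keeps the diagnostic trail.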
def fire(self, target, topic, content, callback=None): """ Fires a message """ message = self.__make_message(topic, content) if callback is not None: self.__callbacks[message['uid']] = ('fire', callback) self.__client.send_message(target, json.dumps(message), message['uid'])
Fires a message
Below is the instruction that describes the task: ### Input: Fires a message ### Response: def fire(self, target, topic, content, callback=None): """ Fires a message """ message = self.__make_message(topic, content) if callback is not None: self.__callbacks[message['uid']] = ('fire', callback) self.__client.send_message(target, json.dumps(message), message['uid'])
def plugin_get(name): """ Return plugin class. @param name: the cms label. """ plugins = plugins_base_get() for plugin in plugins: if plugin.Meta.label == name: return plugin raise RuntimeError('CMS "%s" not known.' % name)
Return plugin class. @param name: the cms label.
Below is the instruction that describes the task: ### Input: Return plugin class. @param name: the cms label. ### Response: def plugin_get(name): """ Return plugin class. @param name: the cms label. """ plugins = plugins_base_get() for plugin in plugins: if plugin.Meta.label == name: return plugin raise RuntimeError('CMS "%s" not known.' % name)
def notify(self, method, params=None): """Send a JSON RPC notification to the client. Args: method (str): The method name of the notification to send params (any): The payload of the notification """ log.debug('Sending notification: %s %s', method, params) message = { 'jsonrpc': JSONRPC_VERSION, 'method': method, } if params is not None: message['params'] = params self._consumer(message)
Send a JSON RPC notification to the client. Args: method (str): The method name of the notification to send params (any): The payload of the notification
Below is the instruction that describes the task: ### Input: Send a JSON RPC notification to the client. Args: method (str): The method name of the notification to send params (any): The payload of the notification ### Response: def notify(self, method, params=None): """Send a JSON RPC notification to the client. Args: method (str): The method name of the notification to send params (any): The payload of the notification """ log.debug('Sending notification: %s %s', method, params) message = { 'jsonrpc': JSONRPC_VERSION, 'method': method, } if params is not None: message['params'] = params self._consumer(message)
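As a hedged illustration of the payload `notify` builds: the `JSONRPC_VERSION` value "2.0" is an assumption (the constant is not shown in the record), and the consumer here is just a list append standing in for the transport:

```python
JSONRPC_VERSION = "2.0"  # assumed value; the real constant is defined elsewhere

sent = []  # stands in for the transport-level consumer


def notify(method, params=None, _consumer=sent.append):
    """Build a JSON RPC notification (no ``id`` field) and hand it off."""
    message = {"jsonrpc": JSONRPC_VERSION, "method": method}
    if params is not None:
        message["params"] = params
    _consumer(message)


notify("textDocument/didOpen", {"uri": "file:///tmp/x.py"})
notify("shutdown")  # params omitted, so no "params" key is added
```

The absence of an `id` field is what distinguishes a JSON RPC notification from a request, so no response is expected for these messages.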
def history(self, channel, **kwargs): """ https://api.slack.com/methods/im.history """ self.params.update({ 'channel': channel, }) if kwargs: self.params.update(kwargs) return FromUrl('https://slack.com/api/im.history', self._requests)(data=self.params).get()
https://api.slack.com/methods/im.history
Below is the instruction that describes the task: ### Input: https://api.slack.com/methods/im.history ### Response: def history(self, channel, **kwargs): """ https://api.slack.com/methods/im.history """ self.params.update({ 'channel': channel, }) if kwargs: self.params.update(kwargs) return FromUrl('https://slack.com/api/im.history', self._requests)(data=self.params).get()
def re_flags(flags, custom=ReFlags): """Parse regexp flag string. Parameters ---------- flags: `str` Flag string. custom: `IntEnum`, optional Custom flag enum (default: None). Returns ------- (`int`, `int`) (flags for `re.compile`, custom flags) Raises ------ ValueError """ re_, custom_ = 0, 0 for flag in flags.upper(): try: re_ |= getattr(re, flag) except AttributeError: if custom is not None: try: custom_ |= getattr(custom, flag) except AttributeError: raise ValueError('Invalid custom flag "%s"' % flag) else: raise ValueError('Invalid regexp flag "%s"' % flag) return re_, custom_
Parse regexp flag string. Parameters ---------- flags: `str` Flag string. custom: `IntEnum`, optional Custom flag enum (default: None). Returns ------- (`int`, `int`) (flags for `re.compile`, custom flags) Raises ------ ValueError
Below is the instruction that describes the task: ### Input: Parse regexp flag string. Parameters ---------- flags: `str` Flag string. custom: `IntEnum`, optional Custom flag enum (default: None). Returns ------- (`int`, `int`) (flags for `re.compile`, custom flags) Raises ------ ValueError ### Response: def re_flags(flags, custom=ReFlags): """Parse regexp flag string. Parameters ---------- flags: `str` Flag string. custom: `IntEnum`, optional Custom flag enum (default: None). Returns ------- (`int`, `int`) (flags for `re.compile`, custom flags) Raises ------ ValueError """ re_, custom_ = 0, 0 for flag in flags.upper(): try: re_ |= getattr(re, flag) except AttributeError: if custom is not None: try: custom_ |= getattr(custom, flag) except AttributeError: raise ValueError('Invalid custom flag "%s"' % flag) else: raise ValueError('Invalid regexp flag "%s"' % flag) return re_, custom_
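The flag-splitting logic above can be exercised standalone. The `ReFlags` enum is not shown in the record, so this sketch substitutes a hypothetical custom enum and passes it explicitly (the default is swapped to `None` for the same reason):

```python
import enum
import re


def re_flags(flags, custom=None):
    """Split a flag string into (re flags, custom flags), as in the record above."""
    re_, custom_ = 0, 0
    for flag in flags.upper():
        try:
            # letters that name re module flags (I, M, S, X, ...) go here
            re_ |= getattr(re, flag)
        except AttributeError:
            if custom is not None:
                try:
                    custom_ |= getattr(custom, flag)
                except AttributeError:
                    raise ValueError('Invalid custom flag "%s"' % flag)
            else:
                raise ValueError('Invalid regexp flag "%s"' % flag)
    return re_, custom_


class MyFlags(enum.IntEnum):
    """Hypothetical stand-in for the ReFlags enum referenced above."""
    G = 1  # 'G' is not a flag attribute on the re module, so it falls through


std, cust = re_flags("ig", custom=MyFlags)
```

Here `"ig"` is upper-cased to `"IG"`: `I` resolves to `re.IGNORECASE`, while `G` misses on the `re` module and is looked up on the custom enum instead.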
def raw_query(self, query, format=None, pretty=False): '''Executes a YQL query and returns a response >>>... >>> resp = yql.raw_query('select * from weather.forecast where woeid=2502265') >>> ''' if format: format = format else: format = self.format payload = self._payload_builder(query, format=format) response = self.execute_query(payload) if pretty: response = self.response_builder(response) return response
Executes a YQL query and returns a response >>>... >>> resp = yql.raw_query('select * from weather.forecast where woeid=2502265') >>>
Below is the instruction that describes the task: ### Input: Executes a YQL query and returns a response >>>... >>> resp = yql.raw_query('select * from weather.forecast where woeid=2502265') >>> ### Response: def raw_query(self, query, format=None, pretty=False): '''Executes a YQL query and returns a response >>>... >>> resp = yql.raw_query('select * from weather.forecast where woeid=2502265') >>> ''' if format: format = format else: format = self.format payload = self._payload_builder(query, format=format) response = self.execute_query(payload) if pretty: response = self.response_builder(response) return response
def ValidateSyntax(rdf_artifact): """Validates artifact syntax. This method can be used to validate individual artifacts as they are loaded, without needing all artifacts to be loaded first, as for Validate(). Args: rdf_artifact: RDF object artifact. Raises: ArtifactSyntaxError: If artifact syntax is invalid. """ if not rdf_artifact.doc: raise rdf_artifacts.ArtifactSyntaxError(rdf_artifact, "missing doc") for supp_os in rdf_artifact.supported_os: valid_os = rdf_artifact.SUPPORTED_OS_LIST if supp_os not in valid_os: detail = "invalid `supported_os` ('%s' not in %s)" % (supp_os, valid_os) raise rdf_artifacts.ArtifactSyntaxError(rdf_artifact, detail) for condition in rdf_artifact.conditions: # FIXME(hanuszczak): It does not look like the code below can throw # `ConditionException`. Do we really need it then? try: of = objectfilter.Parser(condition).Parse() of.Compile(objectfilter.BaseFilterImplementation) except rdf_artifacts.ConditionError as e: detail = "invalid condition '%s'" % condition raise rdf_artifacts.ArtifactSyntaxError(rdf_artifact, detail, e) for label in rdf_artifact.labels: if label not in rdf_artifact.ARTIFACT_LABELS: raise rdf_artifacts.ArtifactSyntaxError(rdf_artifact, "invalid label '%s'" % label) # Anything listed in provides must be defined in the KnowledgeBase valid_provides = rdf_client.KnowledgeBase().GetKbFieldNames() for kb_var in rdf_artifact.provides: if kb_var not in valid_provides: detail = "broken `provides` ('%s' not in %s)" % (kb_var, valid_provides) raise rdf_artifacts.ArtifactSyntaxError(rdf_artifact, detail) # Any %%blah%% path dependencies must be defined in the KnowledgeBase for dep in GetArtifactPathDependencies(rdf_artifact): if dep not in valid_provides: detail = "broken path dependencies ('%s' not in %s)" % (dep, valid_provides) raise rdf_artifacts.ArtifactSyntaxError(rdf_artifact, detail) for source in rdf_artifact.sources: try: source.Validate() except rdf_artifacts.ArtifactSourceSyntaxError as e: raise rdf_artifacts.ArtifactSyntaxError(rdf_artifact, "bad source", e)
Validates artifact syntax. This method can be used to validate individual artifacts as they are loaded, without needing all artifacts to be loaded first, as for Validate(). Args: rdf_artifact: RDF object artifact. Raises: ArtifactSyntaxError: If artifact syntax is invalid.
Below is the instruction that describes the task: ### Input: Validates artifact syntax. This method can be used to validate individual artifacts as they are loaded, without needing all artifacts to be loaded first, as for Validate(). Args: rdf_artifact: RDF object artifact. Raises: ArtifactSyntaxError: If artifact syntax is invalid. ### Response: def ValidateSyntax(rdf_artifact): """Validates artifact syntax. This method can be used to validate individual artifacts as they are loaded, without needing all artifacts to be loaded first, as for Validate(). Args: rdf_artifact: RDF object artifact. Raises: ArtifactSyntaxError: If artifact syntax is invalid. """ if not rdf_artifact.doc: raise rdf_artifacts.ArtifactSyntaxError(rdf_artifact, "missing doc") for supp_os in rdf_artifact.supported_os: valid_os = rdf_artifact.SUPPORTED_OS_LIST if supp_os not in valid_os: detail = "invalid `supported_os` ('%s' not in %s)" % (supp_os, valid_os) raise rdf_artifacts.ArtifactSyntaxError(rdf_artifact, detail) for condition in rdf_artifact.conditions: # FIXME(hanuszczak): It does not look like the code below can throw # `ConditionException`. Do we really need it then? try: of = objectfilter.Parser(condition).Parse() of.Compile(objectfilter.BaseFilterImplementation) except rdf_artifacts.ConditionError as e: detail = "invalid condition '%s'" % condition raise rdf_artifacts.ArtifactSyntaxError(rdf_artifact, detail, e) for label in rdf_artifact.labels: if label not in rdf_artifact.ARTIFACT_LABELS: raise rdf_artifacts.ArtifactSyntaxError(rdf_artifact, "invalid label '%s'" % label) # Anything listed in provides must be defined in the KnowledgeBase valid_provides = rdf_client.KnowledgeBase().GetKbFieldNames() for kb_var in rdf_artifact.provides: if kb_var not in valid_provides: detail = "broken `provides` ('%s' not in %s)" % (kb_var, valid_provides) raise rdf_artifacts.ArtifactSyntaxError(rdf_artifact, detail) # Any %%blah%% path dependencies must be defined in the KnowledgeBase for dep in GetArtifactPathDependencies(rdf_artifact): if dep not in valid_provides: detail = "broken path dependencies ('%s' not in %s)" % (dep, valid_provides) raise rdf_artifacts.ArtifactSyntaxError(rdf_artifact, detail) for source in rdf_artifact.sources: try: source.Validate() except rdf_artifacts.ArtifactSourceSyntaxError as e: raise rdf_artifacts.ArtifactSyntaxError(rdf_artifact, "bad source", e)
def perform_bulk_pubmed_query(self): """ If 'bulk_pubmed_query' contains any content, perform a bulk PubMed query, add the publications to the publication set, and save. """ if self.bulk_pubmed_query: failed_queries = [] pmid_list = re.findall(r'(\d+)(?:[\s,]+|$)', self.bulk_pubmed_query) for pmid in pmid_list: try: p, created = Publication.objects.get_or_create(pmid=pmid) except: failed_queries.append(pmid) else: self.publications.add(p.id) if failed_queries: failed_queries.sort(key=int) self.bulk_pubmed_query = 'FAILED QUERIES: {}'.format(', '.join(failed_queries)) else: self.bulk_pubmed_query = ''
If 'bulk_pubmed_query' contains any content, perform a bulk PubMed query, add the publications to the publication set, and save.
Below is the instruction that describes the task: ### Input: If 'bulk_pubmed_query' contains any content, perform a bulk PubMed query, add the publications to the publication set, and save. ### Response: def perform_bulk_pubmed_query(self): """ If 'bulk_pubmed_query' contains any content, perform a bulk PubMed query, add the publications to the publication set, and save. """ if self.bulk_pubmed_query: failed_queries = [] pmid_list = re.findall(r'(\d+)(?:[\s,]+|$)', self.bulk_pubmed_query) for pmid in pmid_list: try: p, created = Publication.objects.get_or_create(pmid=pmid) except: failed_queries.append(pmid) else: self.publications.add(p.id) if failed_queries: failed_queries.sort(key=int) self.bulk_pubmed_query = 'FAILED QUERIES: {}'.format(', '.join(failed_queries)) else: self.bulk_pubmed_query = ''
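The PMID parsing in `perform_bulk_pubmed_query` rests on a single regex: digit runs terminated by whitespace, commas, or end of string. A standalone check (the sample query string is made up):

```python
import re

# digits captured in group 1, delimiter (whitespace/comma run or end) discarded
BULK_QUERY_RE = r'(\d+)(?:[\s,]+|$)'

pmids = re.findall(BULK_QUERY_RE, '123, 456\n789 101112')
print(pmids)  # ['123', '456', '789', '101112']
```

Because `findall` returns only the capturing group, the delimiters never leak into the PMID list, and mixed separators (commas, spaces, newlines) are all accepted.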
def _on_dynamodb_exception(self, error): """Dynamically handle DynamoDB exceptions, returning HTTP error responses. :param exceptions.DynamoDBException error: """ if isinstance(error, exceptions.ConditionalCheckFailedException): raise web.HTTPError(409, reason='Condition Check Failure') elif isinstance(error, exceptions.NoCredentialsError): if _no_creds_should_return_429(): raise web.HTTPError(429, reason='Instance Credentials Failure') elif isinstance(error, (exceptions.ThroughputExceeded, exceptions.ThrottlingException)): raise web.HTTPError(429, reason='Too Many Requests') if hasattr(self, 'logger'): self.logger.error('DynamoDB Error: %s', error) raise web.HTTPError(500, reason=str(error))
Dynamically handle DynamoDB exceptions, returning HTTP error responses. :param exceptions.DynamoDBException error:
Below is the instruction that describes the task: ### Input: Dynamically handle DynamoDB exceptions, returning HTTP error responses. :param exceptions.DynamoDBException error: ### Response: def _on_dynamodb_exception(self, error): """Dynamically handle DynamoDB exceptions, returning HTTP error responses. :param exceptions.DynamoDBException error: """ if isinstance(error, exceptions.ConditionalCheckFailedException): raise web.HTTPError(409, reason='Condition Check Failure') elif isinstance(error, exceptions.NoCredentialsError): if _no_creds_should_return_429(): raise web.HTTPError(429, reason='Instance Credentials Failure') elif isinstance(error, (exceptions.ThroughputExceeded, exceptions.ThrottlingException)): raise web.HTTPError(429, reason='Too Many Requests') if hasattr(self, 'logger'): self.logger.error('DynamoDB Error: %s', error) raise web.HTTPError(500, reason=str(error))
def _norm(self, x): """Return the norm of ``x``. This method is intended to be private. Public callers should resort to `norm` which is type-checked. """ return float(np.sqrt(self.inner(x, x).real))
Return the norm of ``x``. This method is intended to be private. Public callers should resort to `norm` which is type-checked.
Below is the instruction that describes the task: ### Input: Return the norm of ``x``. This method is intended to be private. Public callers should resort to `norm` which is type-checked. ### Response: def _norm(self, x): """Return the norm of ``x``. This method is intended to be private. Public callers should resort to `norm` which is type-checked. """ return float(np.sqrt(self.inner(x, x).real))
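The `_norm` above is the standard inner-product norm, ||x|| = sqrt(<x, x>). A minimal pure-Python sketch on a toy real vector space (the `RnSpace` class is illustrative, not part of the library in the record):

```python
import math


class RnSpace:
    """Toy real vector space showing the norm-from-inner-product pattern."""

    def inner(self, x, y):
        # Euclidean inner product on plain Python sequences
        return sum(a * b for a, b in zip(x, y))

    def _norm(self, x):
        # same pattern as the record above; the original takes .real to cope
        # with complex-valued inner products, which is unneeded over the reals
        return float(math.sqrt(self.inner(x, x)))


space = RnSpace()
print(space._norm([3.0, 4.0]))  # 5.0
```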
def write_json_plan(self, proposed_layout, proposed_plan_file): """Dump proposed json plan to given output file for future usage.""" with open(proposed_plan_file, 'w') as output: json.dump(proposed_layout, output)
Dump proposed json plan to given output file for future usage.
Below is the instruction that describes the task: ### Input: Dump proposed json plan to given output file for future usage. ### Response: def write_json_plan(self, proposed_layout, proposed_plan_file): """Dump proposed json plan to given output file for future usage.""" with open(proposed_plan_file, 'w') as output: json.dump(proposed_layout, output)
def get_status_display(self, **kwargs): """ Define how status is displayed in UIs (add units etc.). """ if 'value' in kwargs: value = kwargs['value'] else: value = self.status if self.show_stdev_seconds: stdev = self.stdev(self.show_stdev_seconds) return f'{value}±{stdev:2.2}' else: return str(value)
Define how status is displayed in UIs (add units etc.).
Below is the instruction that describes the task: ### Input: Define how status is displayed in UIs (add units etc.). ### Response: def get_status_display(self, **kwargs): """ Define how status is displayed in UIs (add units etc.). """ if 'value' in kwargs: value = kwargs['value'] else: value = self.status if self.show_stdev_seconds: stdev = self.stdev(self.show_stdev_seconds) return f'{value}±{stdev:2.2}' else: return str(value)
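The `f'{value}±{stdev:2.2}'` expression in `get_status_display` relies on the format mini-language: a precision of `.2` with no presentation type behaves like `g`, keeping two significant digits of the deviation. A quick check with made-up numbers:

```python
value, stdev = 7.5, 0.12345
display = f'{value}±{stdev:2.2}'
print(display)  # 7.5±0.12
```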
def _setup_language_variables(self, lang: str): # pragma: no cover """Check for language availability and presence of tokenizer file, then read punctuation characters for language and build tokenizer file path. :param lang: The language argument given to the class. :type lang: str :rtype (str, str, str) """ assert lang in PUNCTUATION.keys(), \ 'Sentence tokenizer not available for {0} language.'.format(lang) internal_punctuation = PUNCTUATION[lang]['internal'] external_punctuation = PUNCTUATION[lang]['external'] file = PUNCTUATION[lang]['file'] tokenizer_path = os.path.join(os.path.expanduser(self._get_models_path(language=lang)), file) assert os.path.isfile(tokenizer_path), \ 'CLTK linguistics data not found for language {0}'.format(lang) return internal_punctuation, external_punctuation, tokenizer_path
Check for language availability and presence of tokenizer file, then read punctuation characters for language and build tokenizer file path. :param lang: The language argument given to the class. :type lang: str :rtype (str, str, str)
Below is the instruction that describes the task: ### Input: Check for language availability and presence of tokenizer file, then read punctuation characters for language and build tokenizer file path. :param lang: The language argument given to the class. :type lang: str :rtype (str, str, str) ### Response: def _setup_language_variables(self, lang: str): # pragma: no cover """Check for language availability and presence of tokenizer file, then read punctuation characters for language and build tokenizer file path. :param lang: The language argument given to the class. :type lang: str :rtype (str, str, str) """ assert lang in PUNCTUATION.keys(), \ 'Sentence tokenizer not available for {0} language.'.format(lang) internal_punctuation = PUNCTUATION[lang]['internal'] external_punctuation = PUNCTUATION[lang]['external'] file = PUNCTUATION[lang]['file'] tokenizer_path = os.path.join(os.path.expanduser(self._get_models_path(language=lang)), file) assert os.path.isfile(tokenizer_path), \ 'CLTK linguistics data not found for language {0}'.format(lang) return internal_punctuation, external_punctuation, tokenizer_path
def add_ngram(sequences, token_indice, ngram_range=2): """ Augment the input list of list (sequences) by appending n-grams values. Example: adding bi-gram >>> sequences = [[1, 3, 4, 5], [1, 3, 7, 9, 2]] >>> token_indice = {(1, 3): 1337, (9, 2): 42, (4, 5): 2017} >>> add_ngram(sequences, token_indice, ngram_range=2) [[1, 3, 4, 5, 1337, 2017], [1, 3, 7, 9, 2, 1337, 42]] Example: adding tri-gram >>> sequences = [[1, 3, 4, 5], [1, 3, 7, 9, 2]] >>> token_indice = {(1, 3): 1337, (9, 2): 42, (4, 5): 2017, (7, 9, 2): 2018} >>> add_ngram(sequences, token_indice, ngram_range=3) [[1, 3, 4, 5, 1337], [1, 3, 7, 9, 2, 1337, 2018]] """ new_sequences = [] for input_list in sequences: new_list = input_list[:] for i in range(len(new_list) - ngram_range + 1): for ngram_value in range(2, ngram_range + 1): ngram = tuple(new_list[i:i + ngram_value]) if ngram in token_indice: new_list.append(token_indice[ngram]) new_sequences.append(new_list) return new_sequences
Augment the input list of list (sequences) by appending n-grams values. Example: adding bi-gram >>> sequences = [[1, 3, 4, 5], [1, 3, 7, 9, 2]] >>> token_indice = {(1, 3): 1337, (9, 2): 42, (4, 5): 2017} >>> add_ngram(sequences, token_indice, ngram_range=2) [[1, 3, 4, 5, 1337, 2017], [1, 3, 7, 9, 2, 1337, 42]] Example: adding tri-gram >>> sequences = [[1, 3, 4, 5], [1, 3, 7, 9, 2]] >>> token_indice = {(1, 3): 1337, (9, 2): 42, (4, 5): 2017, (7, 9, 2): 2018} >>> add_ngram(sequences, token_indice, ngram_range=3) [[1, 3, 4, 5, 1337], [1, 3, 7, 9, 2, 1337, 2018]]
Below is the instruction that describes the task: ### Input: Augment the input list of list (sequences) by appending n-grams values. Example: adding bi-gram >>> sequences = [[1, 3, 4, 5], [1, 3, 7, 9, 2]] >>> token_indice = {(1, 3): 1337, (9, 2): 42, (4, 5): 2017} >>> add_ngram(sequences, token_indice, ngram_range=2) [[1, 3, 4, 5, 1337, 2017], [1, 3, 7, 9, 2, 1337, 42]] Example: adding tri-gram >>> sequences = [[1, 3, 4, 5], [1, 3, 7, 9, 2]] >>> token_indice = {(1, 3): 1337, (9, 2): 42, (4, 5): 2017, (7, 9, 2): 2018} >>> add_ngram(sequences, token_indice, ngram_range=3) [[1, 3, 4, 5, 1337], [1, 3, 7, 9, 2, 1337, 2018]] ### Response: def add_ngram(sequences, token_indice, ngram_range=2): """ Augment the input list of list (sequences) by appending n-grams values. Example: adding bi-gram >>> sequences = [[1, 3, 4, 5], [1, 3, 7, 9, 2]] >>> token_indice = {(1, 3): 1337, (9, 2): 42, (4, 5): 2017} >>> add_ngram(sequences, token_indice, ngram_range=2) [[1, 3, 4, 5, 1337, 2017], [1, 3, 7, 9, 2, 1337, 42]] Example: adding tri-gram >>> sequences = [[1, 3, 4, 5], [1, 3, 7, 9, 2]] >>> token_indice = {(1, 3): 1337, (9, 2): 42, (4, 5): 2017, (7, 9, 2): 2018} >>> add_ngram(sequences, token_indice, ngram_range=3) [[1, 3, 4, 5, 1337], [1, 3, 7, 9, 2, 1337, 2018]] """ new_sequences = [] for input_list in sequences: new_list = input_list[:] for i in range(len(new_list) - ngram_range + 1): for ngram_value in range(2, ngram_range + 1): ngram = tuple(new_list[i:i + ngram_value]) if ngram in token_indice: new_list.append(token_indice[ngram]) new_sequences.append(new_list) return new_sequences
def setdefault(self, sid, value, dtype=F64): """ Works like `dict.setdefault`: if the `sid` key is missing, it fills it with an array and returns the associated ProbabilityCurve :param sid: site ID :param value: value used to fill the returned ProbabilityCurve :param dtype: dtype used internally (F32 or F64) """ try: return self[sid] except KeyError: array = numpy.empty((self.shape_y, self.shape_z), dtype) array.fill(value) pc = ProbabilityCurve(array) self[sid] = pc return pc
Works like `dict.setdefault`: if the `sid` key is missing, it fills it with an array and returns the associated ProbabilityCurve :param sid: site ID :param value: value used to fill the returned ProbabilityCurve :param dtype: dtype used internally (F32 or F64)
Below is the instruction that describes the task: ### Input: Works like `dict.setdefault`: if the `sid` key is missing, it fills it with an array and returns the associated ProbabilityCurve :param sid: site ID :param value: value used to fill the returned ProbabilityCurve :param dtype: dtype used internally (F32 or F64) ### Response: def setdefault(self, sid, value, dtype=F64): """ Works like `dict.setdefault`: if the `sid` key is missing, it fills it with an array and returns the associated ProbabilityCurve :param sid: site ID :param value: value used to fill the returned ProbabilityCurve :param dtype: dtype used internally (F32 or F64) """ try: return self[sid] except KeyError: array = numpy.empty((self.shape_y, self.shape_z), dtype) array.fill(value) pc = ProbabilityCurve(array) self[sid] = pc return pc