def comment_sync(self, comment):
    """Update comments to host and notify subscribers"""
    self.host.update(key="comment", value=comment)
    self.host.emit("commented", comment=comment)
def calcTm(seq, mv_conc=50, dv_conc=0, dntp_conc=0.8, dna_conc=50,
           max_nn_length=60, tm_method='santalucia',
           salt_corrections_method='santalucia'):
    ''' Calculate the melting temperature (Tm) of a DNA sequence.

    Note that NN thermodynamics will be used to calculate the Tm of sequences
    up to 60 bp in length, after which point the following formula will be
    used::

        Tm = 81.5 + 16.6(log10([mv_conc])) + 0.41(%GC) - 600/length

    Args:
        seq (str)                               : DNA sequence
        mv_conc (float/int, optional)           : Monovalent cation conc. (mM)
        dv_conc (float/int, optional)           : Divalent cation conc. (mM)
        dntp_conc (float/int, optional)         : dNTP conc. (mM)
        dna_conc (float/int, optional)          : DNA conc. (nM)
        max_nn_length (int, optional)           : Maximum length for
                                                  nearest-neighbor calcs
        tm_method (str, optional)               : Tm calculation method
                                                  (breslauer or santalucia)
        salt_corrections_method (str, optional) : Salt correction method
                                                  (schildkraut, owczarzy,
                                                  santalucia)

    Returns:
        The melting temperature in degrees Celsius (float).
    '''
    _setThermoArgs(**locals())
    return _THERMO_ANALYSIS.calcTm(seq)
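The long-sequence fallback formula quoted in the docstring above can be sketched as a standalone helper. Note this is an illustrative sketch, not the primer3 implementation: the name `tm_long` is invented here, and the conversion of `mv_conc` from mM to molar is an assumption, since the published formula takes a molar sodium concentration.

```python
import math

def tm_long(seq, mv_conc=50):
    """Long-sequence fallback: 81.5 + 16.6*log10([Na+]) + 0.41*(%GC) - 600/len."""
    # mv_conc arrives in mM (per the docstring above); the formula expects a
    # molar concentration, so divide by 1000 -- an assumption made here.
    gc_pct = 100.0 * sum(seq.count(base) for base in 'GC') / len(seq)
    return (81.5
            + 16.6 * math.log10(mv_conc / 1000.0)
            + 0.41 * gc_pct
            - 600.0 / len(seq))
```

As a sanity check, GC-rich sequences melt higher than AT-rich ones of the same length, which the `0.41 * gc_pct` term captures.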
def attach_log_stream(self):
    """A log stream can only be attached if the container uses a
    json-file log driver.
    """
    if self.has_api_logs:
        self.log_stream = self.attach(stdout=True, stderr=True, stream=True)
def havespace(self, scriptname, scriptsize):
    """Ask for available space.

    See MANAGESIEVE specifications, section 2.5

    :param scriptname: script's name
    :param scriptsize: script's size
    :rtype: boolean
    """
    code, data = self.__send_command(
        "HAVESPACE", [scriptname.encode("utf-8"), scriptsize])
    return code == "OK"
def get_export(self, export_type, generate=False, wait=False,
               wait_timeout=None):
    """
    Downloads a data export over HTTP. Returns a `Requests Response
    <http://docs.python-requests.org/en/master/api/#requests.Response>`_
    object containing the content of the export.

    - **export_type** is a string specifying which type of export should be
      downloaded.
    - **generate** is a boolean specifying whether to generate a new export
      and wait for it to be ready, or to just download the latest export.
    - **wait** is a boolean specifying whether to wait for an in-progress
      export to finish, if there is one. Has no effect if ``generate`` is
      ``True``.
    - **wait_timeout** is the number of seconds to wait if ``wait`` is
      ``True``. Has no effect if ``wait`` is ``False`` or if ``generate``
      is ``True``.

    The returned :py:class:`.Response` object has two additional attributes
    as a convenience for working with the CSV content; **csv_reader** and
    **csv_dictreader**, which are wrappers for :py:meth:`.csv.reader` and
    :py:class:`csv.DictReader` respectively. These wrappers take care of
    correctly decoding the export content for the CSV parser.

    Example::

        classification_export = Project(1234).get_export('classifications')
        for row in classification_export.csv_reader():
            print(row)

        classification_export = Project(1234).get_export('classifications')
        for row in classification_export.csv_dictreader():
            print(row)
    """
    if generate:
        self.generate_export(export_type)

    if generate or wait:
        export = self.wait_export(export_type, wait_timeout)
    else:
        export = self.describe_export(export_type)

    if export_type in TALK_EXPORT_TYPES:
        media_url = export['data_requests'][0]['url']
    else:
        media_url = export['media'][0]['src']

    response = requests.get(media_url, stream=True)
    response.csv_reader = functools.partial(
        csv.reader,
        response.iter_lines(decode_unicode=True),
    )
    response.csv_dictreader = functools.partial(
        csv.DictReader,
        response.iter_lines(decode_unicode=True),
    )
    return response
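The `csv_reader`/`csv_dictreader` convenience attributes are just `functools.partial` wrappers that pre-bind a line iterator as the CSV module's input. A minimal sketch of the same pattern, using an in-memory iterator of decoded lines in place of `response.iter_lines(decode_unicode=True)` (the column names here are made up for illustration):

```python
import csv
import functools

# Stand-in for response.iter_lines(decode_unicode=True): the csv module
# accepts any iterator of decoded text lines.
lines = iter(['subject_id,label', '17,cat', '42,dog'])

# Pre-bind the line source; calling csv_dictreader() builds the reader.
csv_dictreader = functools.partial(csv.DictReader, lines)

rows = list(csv_dictreader())
assert rows == [{'subject_id': '17', 'label': 'cat'},
                {'subject_id': '42', 'label': 'dog'}]
```

One caveat carries over to the real thing: the underlying iterator can only be consumed once, so the reader cannot be re-run on the same response.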
def hasUserAddEditPermission(self):
    """Checks if the current user has privileges to access the editing view.

    From Jira LIMS-1549:
    - Creation/Edit: Lab manager, Client Contact, Lab Clerk,
      Client Contact (for Client-specific SRTs)

    :returns: True/False
    """
    mtool = getToolByName(self, 'portal_membership')
    checkPermission = mtool.checkPermission
    # bika_samplinground_workflow.csv defines the ModifyPortalContent
    # statements; clients have the ModifyPortalContent permission enabled,
    # so here we have to check whether the client satisfies the condition
    # described in the docstring.
    if (checkPermission(ModifyPortalContent, self)
            or checkPermission(AddPortalContent, self)) \
            and 'Client' in api.user.get_current().getRoles():
        # Check whether the current user is one of the client's contacts
        userID = api.user.get_current().id
        contact_objs = self.getContacts()
        contact_ids = [obj.getUsername() for obj in contact_objs]
        return userID in contact_ids
    return (checkPermission(ModifyPortalContent, self)
            or checkPermission(AddPortalContent, self))
def from_clause(cls, clause):
    """ Factory method """
    fn_name = clause[0]
    if fn_name == "size":
        return SizeConstraint.from_clause(clause)
    elif fn_name == "attribute_type":
        return TypeConstraint.from_clause(clause)
    else:
        field = clause[1]
        if len(clause) > 2:
            return cls(fn_name, field, resolve(clause[2]))
        else:
            return cls(fn_name, field)
def track_request(self, name, url, success, start_time=None, duration=None,
                  response_code=None, http_method=None, properties=None,
                  measurements=None, request_id=None):
    """Sends a single request that was captured for the application.

    Args:
        name (str). the name for this request. All requests with the same
            name will be grouped together.
        url (str). the actual URL for this request (to show in individual
            request instances).
        success (bool). true if the request ended in success, false otherwise.
        start_time (str). the start time of the request. The value should
            look the same as the one returned by
            :func:`datetime.isoformat()`. (defaults to: None)
        duration (int). the number of milliseconds that this request lasted.
            (defaults to: None)
        response_code (str). the response code that this request returned.
            (defaults to: None)
        http_method (str). the HTTP method that triggered this request.
            (defaults to: None)
        properties (dict). the set of custom properties the client wants
            attached to this data item. (defaults to: None)
        measurements (dict). the set of custom measurements the client wants
            to attach to this data item. (defaults to: None)
        request_id (str). the id for this request. If None, a new uuid will
            be generated. (defaults to: None)
    """
    data = channel.contracts.RequestData()
    data.id = request_id or str(uuid.uuid4())
    data.name = name
    data.url = url
    data.success = success
    data.start_time = start_time or datetime.datetime.utcnow().isoformat() + 'Z'
    data.duration = self.__ms_to_duration(duration)
    # str(None) is the truthy string 'None', so `str(response_code) or '200'`
    # would never fall back to the default; test for None explicitly.
    data.response_code = str(response_code) if response_code is not None else '200'
    data.http_method = http_method or 'GET'
    if properties:
        data.properties = properties
    if measurements:
        data.measurements = measurements
    self.track(data, self._context)
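A pitfall worth demonstrating from the response-code handling: writing `str(x) or default` never falls back, because `str(None)` is the non-empty, hence truthy, string `'None'`.

```python
# str() of None is the truthy string 'None', so an `or` fallback
# placed after str() is unreachable.
assert str(None) == 'None'
assert (str(None) or '200') == 'None'

# Testing for None before converting gives the intended default.
response_code = None
code = str(response_code) if response_code is not None else '200'
assert code == '200'
```

The same reasoning applies to any `or`-based default applied after a conversion that can never produce a falsy value.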
def _srn_store_single_run(self, traj, recursive=True,
                          store_data=pypetconstants.STORE_DATA,
                          max_depth=None):
    """ Stores a single run instance to disk (only meta data)"""
    if store_data != pypetconstants.STORE_NOTHING:
        self._logger.debug('Storing Data of single run `%s`.' % traj.v_crun)
        if max_depth is None:
            max_depth = float('inf')
        for name_pair in traj._new_nodes:
            _, name = name_pair
            parent_group, child_node = traj._new_nodes[name_pair]
            if not child_node._stored:
                self._tree_store_sub_branch(
                    parent_group, name,
                    store_data=store_data,
                    with_links=True,
                    recursive=recursive,
                    max_depth=max_depth - child_node.v_depth,
                    hdf5_group=None)
        for name_pair in traj._new_links:
            _, link = name_pair
            parent_group, _ = traj._new_links[name_pair]
            self._tree_store_sub_branch(
                parent_group, link,
                store_data=store_data,
                with_links=True,
                recursive=recursive,
                max_depth=max_depth - parent_group.v_depth - 1,
                hdf5_group=None)
def date_director(**kwargs):
    """Direct which class should be used based on the date qualifier
    or if the date should be converted at all.
    """
    # If the date is a creation date, return the element object.
    if kwargs.get('qualifier') == 'creation':
        return ETD_MSDate(content=kwargs.get('content').strip())
    elif kwargs.get('qualifier') != 'digitized':
        # Return the element object.
        return ETD_MSDate(content=kwargs.get('content').strip())
    else:
        return None
def load(self, stream):
    """ Load properties from an open file stream """
    # For the time being only accept file input streams
    if not _is_file(stream):
        raise TypeError('Argument should be a file object!')
    # Check for the opened mode
    if stream.mode != 'r':
        raise ValueError('Stream should be opened in read-only mode!')
    try:
        lines = stream.readlines()
        self.__parse(lines)
    except IOError:
        raise
def get(self):
    """
    Parse a response into string format and clear out its temporary containers
    :return: The parsed response message
    :rtype : str
    """
    self._log.debug('Converting Response object to string format')
    response = ''.join(map(str, self._response)).strip()

    self._log.debug('Resetting parent Trigger temporary containers')
    self.stars = {
        'normalized': (),
        'case_preserved': (),
        'raw': ()
    }
    user = self.trigger.user
    self.trigger.user = None

    if self.redirect:
        self._log.info('Redirecting response to: {msg}'.format(msg=response))
        groups = self.agentml.request_log.most_recent().groups
        response = self.agentml.get_reply(user.id, response, groups)
        if not response:
            self._log.info('Failed to retrieve a valid response when redirecting')
            return ''

    return response
def set_goterm(self, go2obj):
    """Set goterm and copy GOTerm's name and namespace."""
    if self.GO in go2obj:
        goterm = go2obj[self.GO]
        self.goterm = goterm
        self.name = goterm.name
        self.depth = goterm.depth
        self.NS = self.namespace2NS[self.goterm.namespace]
def dependencies(self, name=None, prefix=None, pkgs=None, channels=None,
                 dep=True):
    """Get dependency list for packages to be installed in an env."""
    if not pkgs or not isinstance(pkgs, (list, tuple)):
        raise TypeError('must specify a list of one or more packages to '
                        'install into existing environment')

    cmd_list = ['install', '--dry-run', '--json', '--force-pscheck']

    if not dep:
        cmd_list.extend(['--no-deps'])

    if name:
        cmd_list.extend(['--name', name])
    elif prefix:
        cmd_list.extend(['--prefix', prefix])

    cmd_list.extend(pkgs)

    # TODO: Check if correct
    if channels:
        cmd_list.extend(['--override-channels'])
        for channel in channels:
            cmd_list.extend(['--channel', channel])

    return self._call_and_parse(cmd_list)
def migrate_tables(tables, engine_name=None):
    """
    Used to migrate dynamic table to database

    :param tables: tables name list, such as ['user']
    """
    from alembic.migration import MigrationContext

    engine = engine_manager[engine_name]
    mc = MigrationContext.configure(engine.session().connection)
    script = get_migrate_script(mc, tables, engine.metadata)
    run_migrate_script(mc, script)
def get_content(self, r):
    """Abstraction for grabbing content from a returned response"""
    if self.type == 'exp_file':
        # don't use the decoded r.text
        return r.content
    elif self.type == 'version':
        return r.content
    else:
        if self.fmt == 'json':
            content = {}
            # Decode; watch out for bad/empty json
            try:
                content = json.loads(r.text, strict=False)
            except ValueError:
                # reraise for requests that shouldn't send empty json;
                # a `return` inside `finally` would swallow this exception
                if not self.expect_empty_json():
                    raise
            return content
        else:
            return r.text
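The JSON branch above has to be careful about where it returns: a `return` statement inside a `finally` block discards any exception that is propagating at that moment, so a re-raised `ValueError` would silently vanish. A minimal demonstration:

```python
def swallowing():
    try:
        raise ValueError('bad json')
    finally:
        # The return here discards the in-flight ValueError entirely.
        return {}

# No exception escapes, even though one was raised inside the try block.
assert swallowing() == {}
```

Returning after the `try`/`except` statement instead lets the re-raise propagate while still covering the successful path.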
def encode_nibbles(nibbles): """ The Hex Prefix function """ if is_nibbles_terminated(nibbles): flag = HP_FLAG_2 else: flag = HP_FLAG_0 raw_nibbles = remove_nibbles_terminator(nibbles) is_odd = len(raw_nibbles) % 2 if is_odd: flagged_nibbles = tuple(itertools.chain( (flag + 1,), raw_nibbles, )) else: flagged_nibbles = tuple(itertools.chain( (flag, 0), raw_nibbles, )) prefixed_value = nibbles_to_bytes(flagged_nibbles) return prefixed_value
The Hex Prefix function
Below is the instruction that describes the task: ### Input: The Hex Prefix function ### Response: def encode_nibbles(nibbles): """ The Hex Prefix function """ if is_nibbles_terminated(nibbles): flag = HP_FLAG_2 else: flag = HP_FLAG_0 raw_nibbles = remove_nibbles_terminator(nibbles) is_odd = len(raw_nibbles) % 2 if is_odd: flagged_nibbles = tuple(itertools.chain( (flag + 1,), raw_nibbles, )) else: flagged_nibbles = tuple(itertools.chain( (flag, 0), raw_nibbles, )) prefixed_value = nibbles_to_bytes(flagged_nibbles) return prefixed_value
def conditional_probability_alive(self, frequency, recency, T): """ Compute conditional probability alive. Compute the probability that a customer with history (frequency, recency, T) is currently alive. From http://www.brucehardie.com/notes/021/palive_for_BGNBD.pdf Parameters ---------- frequency: array or scalar historical frequency of customer. recency: array or scalar historical recency of customer. T: array or scalar age of the customer. Returns ------- array value representing a probability """ r, alpha, a, b = self._unload_params("r", "alpha", "a", "b") log_div = (r + frequency) * np.log((alpha + T) / (alpha + recency)) + np.log( a / (b + np.maximum(frequency, 1) - 1) ) return np.atleast_1d(np.where(frequency == 0, 1.0, expit(-log_div)))
Compute conditional probability alive. Compute the probability that a customer with history (frequency, recency, T) is currently alive. From http://www.brucehardie.com/notes/021/palive_for_BGNBD.pdf Parameters ---------- frequency: array or scalar historical frequency of customer. recency: array or scalar historical recency of customer. T: array or scalar age of the customer. Returns ------- array value representing a probability
Below is the instruction that describes the task: ### Input: Compute conditional probability alive. Compute the probability that a customer with history (frequency, recency, T) is currently alive. From http://www.brucehardie.com/notes/021/palive_for_BGNBD.pdf Parameters ---------- frequency: array or scalar historical frequency of customer. recency: array or scalar historical recency of customer. T: array or scalar age of the customer. Returns ------- array value representing a probability ### Response: def conditional_probability_alive(self, frequency, recency, T): """ Compute conditional probability alive. Compute the probability that a customer with history (frequency, recency, T) is currently alive. From http://www.brucehardie.com/notes/021/palive_for_BGNBD.pdf Parameters ---------- frequency: array or scalar historical frequency of customer. recency: array or scalar historical recency of customer. T: array or scalar age of the customer. Returns ------- array value representing a probability """ r, alpha, a, b = self._unload_params("r", "alpha", "a", "b") log_div = (r + frequency) * np.log((alpha + T) / (alpha + recency)) + np.log( a / (b + np.maximum(frequency, 1) - 1) ) return np.atleast_1d(np.where(frequency == 0, 1.0, expit(-log_div)))
def replace_gist_tags(generator): """Replace gist tags in the article content.""" from jinja2 import Template template = Template(gist_template) should_cache = generator.context.get('GIST_CACHE_ENABLED') cache_location = generator.context.get('GIST_CACHE_LOCATION') pygments_style = generator.context.get('GIST_PYGMENTS_STYLE') body = None for article in generator.articles: for match in gist_regex.findall(article._content): gist_id = match[1] filename = None filetype = None if match[3]: filename = match[3] if match[5]: filetype = match[5] logger.info('[gist]: Found gist id {} with filename {} and filetype {}'.format( gist_id, filename, filetype, )) if should_cache: body = get_cache(cache_location, gist_id, filename) # Fetch the gist if not body: logger.info('[gist]: Gist did not exist in cache, fetching...') body = fetch_gist(gist_id, filename) if should_cache: logger.info('[gist]: Saving gist to cache...') set_cache(cache_location, gist_id, body, filename) else: logger.info('[gist]: Found gist in cache.') # Create a context to render with context = generator.context.copy() context.update({ 'script_url': script_url(gist_id, filename), 'code': render_code(body, filetype, pygments_style) }) # Render the template replacement = template.render(context) article._content = article._content.replace(match[0], replacement)
Replace gist tags in the article content.
Below is the instruction that describes the task: ### Input: Replace gist tags in the article content. ### Response: def replace_gist_tags(generator): """Replace gist tags in the article content.""" from jinja2 import Template template = Template(gist_template) should_cache = generator.context.get('GIST_CACHE_ENABLED') cache_location = generator.context.get('GIST_CACHE_LOCATION') pygments_style = generator.context.get('GIST_PYGMENTS_STYLE') body = None for article in generator.articles: for match in gist_regex.findall(article._content): gist_id = match[1] filename = None filetype = None if match[3]: filename = match[3] if match[5]: filetype = match[5] logger.info('[gist]: Found gist id {} with filename {} and filetype {}'.format( gist_id, filename, filetype, )) if should_cache: body = get_cache(cache_location, gist_id, filename) # Fetch the gist if not body: logger.info('[gist]: Gist did not exist in cache, fetching...') body = fetch_gist(gist_id, filename) if should_cache: logger.info('[gist]: Saving gist to cache...') set_cache(cache_location, gist_id, body, filename) else: logger.info('[gist]: Found gist in cache.') # Create a context to render with context = generator.context.copy() context.update({ 'script_url': script_url(gist_id, filename), 'code': render_code(body, filetype, pygments_style) }) # Render the template replacement = template.render(context) article._content = article._content.replace(match[0], replacement)
def _sample_item(self, **kwargs): """Sample an item from the pool according to the instrumental distribution """ loc = np.random.choice(self._n_items, p = self._inst_pmf) weight = (1/self._n_items)/self._inst_pmf[loc] return loc, weight, {}
Sample an item from the pool according to the instrumental distribution
Below is the instruction that describes the task: ### Input: Sample an item from the pool according to the instrumental distribution ### Response: def _sample_item(self, **kwargs): """Sample an item from the pool according to the instrumental distribution """ loc = np.random.choice(self._n_items, p = self._inst_pmf) weight = (1/self._n_items)/self._inst_pmf[loc] return loc, weight, {}
def _get_stddevs(self, C, stddev_types, nsites): """ Compute total standard deviation, see table 4.2, page 50. """ stddevs = [] for stddev_type in stddev_types: assert stddev_type in self.DEFINED_FOR_STANDARD_DEVIATION_TYPES if stddev_type == const.StdDev.TOTAL: stddevs.append(C['sigma'] + np.zeros(nsites, dtype=float)) return stddevs
Compute total standard deviation, see table 4.2, page 50.
Below is the instruction that describes the task: ### Input: Compute total standard deviation, see table 4.2, page 50. ### Response: def _get_stddevs(self, C, stddev_types, nsites): """ Compute total standard deviation, see table 4.2, page 50. """ stddevs = [] for stddev_type in stddev_types: assert stddev_type in self.DEFINED_FOR_STANDARD_DEVIATION_TYPES if stddev_type == const.StdDev.TOTAL: stddevs.append(C['sigma'] + np.zeros(nsites, dtype=float)) return stddevs
def rename_get_variable(mapping): """ Args: mapping(dict): an old -> new mapping for variable basename. e.g. {'kernel': 'W'} Returns: A context where the variables are renamed. """ def custom_getter(getter, name, *args, **kwargs): splits = name.split('/') basename = splits[-1] if basename in mapping: basename = mapping[basename] splits[-1] = basename name = '/'.join(splits) return getter(name, *args, **kwargs) return custom_getter_scope(custom_getter)
Args: mapping(dict): an old -> new mapping for variable basename. e.g. {'kernel': 'W'} Returns: A context where the variables are renamed.
Below is the instruction that describes the task: ### Input: Args: mapping(dict): an old -> new mapping for variable basename. e.g. {'kernel': 'W'} Returns: A context where the variables are renamed. ### Response: def rename_get_variable(mapping): """ Args: mapping(dict): an old -> new mapping for variable basename. e.g. {'kernel': 'W'} Returns: A context where the variables are renamed. """ def custom_getter(getter, name, *args, **kwargs): splits = name.split('/') basename = splits[-1] if basename in mapping: basename = mapping[basename] splits[-1] = basename name = '/'.join(splits) return getter(name, *args, **kwargs) return custom_getter_scope(custom_getter)
def dev(): """Define dev stage""" env.roledefs = { 'web': ['192.168.1.2'], 'lb': ['192.168.1.2'], } env.user = 'vagrant' env.backends = env.roledefs['web'] env.server_name = 'django_search_model-dev.net' env.short_server_name = 'django_search_model-dev' env.static_folder = '/site_media/' env.server_ip = '192.168.1.2' env.no_shared_sessions = False env.server_ssl_on = False env.goal = 'dev' env.socket_port = '8001' env.map_settings = {} execute(build_env)
Define dev stage
Below is the instruction that describes the task: ### Input: Define dev stage ### Response: def dev(): """Define dev stage""" env.roledefs = { 'web': ['192.168.1.2'], 'lb': ['192.168.1.2'], } env.user = 'vagrant' env.backends = env.roledefs['web'] env.server_name = 'django_search_model-dev.net' env.short_server_name = 'django_search_model-dev' env.static_folder = '/site_media/' env.server_ip = '192.168.1.2' env.no_shared_sessions = False env.server_ssl_on = False env.goal = 'dev' env.socket_port = '8001' env.map_settings = {} execute(build_env)
def _dms_formatter(latitude, longitude, mode, unistr=False): """Generate a human readable DM/DMS location string. Args: latitude (float): Location's latitude longitude (float): Location's longitude mode (str): Coordinate formatting system to use unistr (bool): Whether to use extended character set """ if unistr: chars = ('°', '′', '″') else: chars = ('°', "'", '"') latitude_dms = tuple(map(abs, utils.to_dms(latitude, mode))) longitude_dms = tuple(map(abs, utils.to_dms(longitude, mode))) text = [] if mode == 'dms': text.append('%%02i%s%%02i%s%%02i%s' % chars % latitude_dms) else: text.append('%%02i%s%%05.2f%s' % chars[:2] % latitude_dms) text.append('S' if latitude < 0 else 'N') if mode == 'dms': text.append(', %%03i%s%%02i%s%%02i%s' % chars % longitude_dms) else: text.append(', %%03i%s%%05.2f%s' % chars[:2] % longitude_dms) text.append('W' if longitude < 0 else 'E') return text
Generate a human readable DM/DMS location string. Args: latitude (float): Location's latitude longitude (float): Location's longitude mode (str): Coordinate formatting system to use unistr (bool): Whether to use extended character set
Below is the instruction that describes the task: ### Input: Generate a human readable DM/DMS location string. Args: latitude (float): Location's latitude longitude (float): Location's longitude mode (str): Coordinate formatting system to use unistr (bool): Whether to use extended character set ### Response: def _dms_formatter(latitude, longitude, mode, unistr=False): """Generate a human readable DM/DMS location string. Args: latitude (float): Location's latitude longitude (float): Location's longitude mode (str): Coordinate formatting system to use unistr (bool): Whether to use extended character set """ if unistr: chars = ('°', '′', '″') else: chars = ('°', "'", '"') latitude_dms = tuple(map(abs, utils.to_dms(latitude, mode))) longitude_dms = tuple(map(abs, utils.to_dms(longitude, mode))) text = [] if mode == 'dms': text.append('%%02i%s%%02i%s%%02i%s' % chars % latitude_dms) else: text.append('%%02i%s%%05.2f%s' % chars[:2] % latitude_dms) text.append('S' if latitude < 0 else 'N') if mode == 'dms': text.append(', %%03i%s%%02i%s%%02i%s' % chars % longitude_dms) else: text.append(', %%03i%s%%05.2f%s' % chars[:2] % longitude_dms) text.append('W' if longitude < 0 else 'E') return text
def extractFile(self, filename): """ This function will extract a single file from the remote zip without downloading the entire zip file. The filename argument should match whatever is in the 'filename' key of the tableOfContents. """ files = [x for x in self.tableOfContents if x['filename'] == filename] if len(files) == 0: raise FileNotFoundException() fileRecord = files[0] # got here? need to fetch the file size metaheadroom = 1024 # should be enough request = urllib2.Request(self.zipURI) start = fileRecord['filestart'] end = fileRecord['filestart'] + fileRecord['compressedsize'] + metaheadroom request.headers['Range'] = "bytes=%s-%s" % (start, end) handle = urllib2.urlopen(request) # make sure the response is ranged return_range = handle.headers.get('Content-Range') if return_range != "bytes %d-%d/%s" % (start, end, self.filesize): raise Exception("Ranged requests are not supported for this URI") filedata = handle.read() # find start of raw file data zip_n = unpack("H", filedata[26:28])[0] zip_m = unpack("H", filedata[28:30])[0] # check compressed size has_data_descriptor = bool(unpack("H", filedata[6:8])[0] & 8) comp_size = unpack("I", filedata[18:22])[0] if comp_size == 0 and has_data_descriptor: # assume compressed size in the Central Directory is correct comp_size = fileRecord['compressedsize'] elif comp_size != fileRecord['compressedsize']: raise Exception("Something went wrong. Directory and file header disagree of compressed file size") raw_zip_data = filedata[30 + zip_n + zip_m: 30 + zip_n + zip_m + comp_size] uncompressed_data = "" # can't decompress if stored without compression compression_method = unpack("H", filedata[8:10])[0] if compression_method == 0: return raw_zip_data dec = zlib.decompressobj(-zlib.MAX_WBITS) for chunk in raw_zip_data: rv = dec.decompress(chunk) if rv: uncompressed_data = uncompressed_data + rv return uncompressed_data
This function will extract a single file from the remote zip without downloading the entire zip file. The filename argument should match whatever is in the 'filename' key of the tableOfContents.
Below is the instruction that describes the task: ### Input: This function will extract a single file from the remote zip without downloading the entire zip file. The filename argument should match whatever is in the 'filename' key of the tableOfContents. ### Response: def extractFile(self, filename): """ This function will extract a single file from the remote zip without downloading the entire zip file. The filename argument should match whatever is in the 'filename' key of the tableOfContents. """ files = [x for x in self.tableOfContents if x['filename'] == filename] if len(files) == 0: raise FileNotFoundException() fileRecord = files[0] # got here? need to fetch the file size metaheadroom = 1024 # should be enough request = urllib2.Request(self.zipURI) start = fileRecord['filestart'] end = fileRecord['filestart'] + fileRecord['compressedsize'] + metaheadroom request.headers['Range'] = "bytes=%s-%s" % (start, end) handle = urllib2.urlopen(request) # make sure the response is ranged return_range = handle.headers.get('Content-Range') if return_range != "bytes %d-%d/%s" % (start, end, self.filesize): raise Exception("Ranged requests are not supported for this URI") filedata = handle.read() # find start of raw file data zip_n = unpack("H", filedata[26:28])[0] zip_m = unpack("H", filedata[28:30])[0] # check compressed size has_data_descriptor = bool(unpack("H", filedata[6:8])[0] & 8) comp_size = unpack("I", filedata[18:22])[0] if comp_size == 0 and has_data_descriptor: # assume compressed size in the Central Directory is correct comp_size = fileRecord['compressedsize'] elif comp_size != fileRecord['compressedsize']: raise Exception("Something went wrong. Directory and file header disagree of compressed file size") raw_zip_data = filedata[30 + zip_n + zip_m: 30 + zip_n + zip_m + comp_size] uncompressed_data = "" # can't decompress if stored without compression compression_method = unpack("H", filedata[8:10])[0] if compression_method == 0: return raw_zip_data dec = zlib.decompressobj(-zlib.MAX_WBITS) for chunk in raw_zip_data: rv = dec.decompress(chunk) if rv: uncompressed_data = uncompressed_data + rv return uncompressed_data
def logical_raid_levels(self): """Gets the raid level for each logical volume :returns the set of list of raid levels configured. """ lg_raid_lvls = set() for member in self.get_members(): lg_raid_lvls.add(mappings.RAID_LEVEL_MAP_REV.get(member.raid)) return lg_raid_lvls
Gets the raid level for each logical volume :returns the set of list of raid levels configured.
Below is the instruction that describes the task: ### Input: Gets the raid level for each logical volume :returns the set of list of raid levels configured. ### Response: def logical_raid_levels(self): """Gets the raid level for each logical volume :returns the set of list of raid levels configured. """ lg_raid_lvls = set() for member in self.get_members(): lg_raid_lvls.add(mappings.RAID_LEVEL_MAP_REV.get(member.raid)) return lg_raid_lvls
def delete_process_by_id(self, process_type_id): """DeleteProcessById. [Preview API] Removes a process of a specific ID. :param str process_type_id: """ route_values = {} if process_type_id is not None: route_values['processTypeId'] = self._serialize.url('process_type_id', process_type_id, 'str') self._send(http_method='DELETE', location_id='02cc6a73-5cfb-427d-8c8e-b49fb086e8af', version='5.0-preview.2', route_values=route_values)
DeleteProcessById. [Preview API] Removes a process of a specific ID. :param str process_type_id:
Below is the instruction that describes the task: ### Input: DeleteProcessById. [Preview API] Removes a process of a specific ID. :param str process_type_id: ### Response: def delete_process_by_id(self, process_type_id): """DeleteProcessById. [Preview API] Removes a process of a specific ID. :param str process_type_id: """ route_values = {} if process_type_id is not None: route_values['processTypeId'] = self._serialize.url('process_type_id', process_type_id, 'str') self._send(http_method='DELETE', location_id='02cc6a73-5cfb-427d-8c8e-b49fb086e8af', version='5.0-preview.2', route_values=route_values)
def getChild(self, path, request): """ This is necessary because the parent class would call proxy.ReverseProxyResource instead of CacheProxyResource """ return CacheProxyResource( self.host, self.port, self.path + '/' + urlquote(path, safe=""), self.reactor )
This is necessary because the parent class would call proxy.ReverseProxyResource instead of CacheProxyResource
Below is the instruction that describes the task: ### Input: This is necessary because the parent class would call proxy.ReverseProxyResource instead of CacheProxyResource ### Response: def getChild(self, path, request): """ This is necessary because the parent class would call proxy.ReverseProxyResource instead of CacheProxyResource """ return CacheProxyResource( self.host, self.port, self.path + '/' + urlquote(path, safe=""), self.reactor )
def symmetrize_JMS_dict(C): """For a dictionary with JMS Wilson coefficients but keys that might not be in the non-redundant basis, return a dictionary with keys from the basis and values conjugated if necessary.""" wc_keys = set(wcxf.Basis['WET', 'JMS'].all_wcs) Cs = {} for op, v in C.items(): if '_' not in op or op in wc_keys: Cs[op] = v continue name, ind = op.split('_') if name in C_symm_keys[5]: i, j, k, l = ind indnew = ''.join([j, i, l, k]) Cs['_'.join([name, indnew])] = v.conjugate() elif name in C_symm_keys[41]: i, j, k, l = ind indnew = ''.join([k, l, i, j]) Cs['_'.join([name, indnew])] = v elif name in C_symm_keys[4]: i, j, k, l = ind indnew = ''.join([l, k, j, i]) newname = '_'.join([name, indnew]) if newname in wc_keys: Cs[newname] = v.conjugate() else: indnew = ''.join([j, i, l, k]) newname = '_'.join([name, indnew]) if newname in wc_keys: Cs[newname] = v.conjugate() else: indnew = ''.join([k, l, i, j]) newname = '_'.join([name, indnew]) Cs[newname] = v elif name in C_symm_keys[9]: i, j, k, l = ind indnew = ''.join([j, i, k, l]) Cs['_'.join([name, indnew])] = -v return Cs
For a dictionary with JMS Wilson coefficients but keys that might not be in the non-redundant basis, return a dictionary with keys from the basis and values conjugated if necessary.
Below is the instruction that describes the task: ### Input: For a dictionary with JMS Wilson coefficients but keys that might not be in the non-redundant basis, return a dictionary with keys from the basis and values conjugated if necessary. ### Response: def symmetrize_JMS_dict(C): """For a dictionary with JMS Wilson coefficients but keys that might not be in the non-redundant basis, return a dictionary with keys from the basis and values conjugated if necessary.""" wc_keys = set(wcxf.Basis['WET', 'JMS'].all_wcs) Cs = {} for op, v in C.items(): if '_' not in op or op in wc_keys: Cs[op] = v continue name, ind = op.split('_') if name in C_symm_keys[5]: i, j, k, l = ind indnew = ''.join([j, i, l, k]) Cs['_'.join([name, indnew])] = v.conjugate() elif name in C_symm_keys[41]: i, j, k, l = ind indnew = ''.join([k, l, i, j]) Cs['_'.join([name, indnew])] = v elif name in C_symm_keys[4]: i, j, k, l = ind indnew = ''.join([l, k, j, i]) newname = '_'.join([name, indnew]) if newname in wc_keys: Cs[newname] = v.conjugate() else: indnew = ''.join([j, i, l, k]) newname = '_'.join([name, indnew]) if newname in wc_keys: Cs[newname] = v.conjugate() else: indnew = ''.join([k, l, i, j]) newname = '_'.join([name, indnew]) Cs[newname] = v elif name in C_symm_keys[9]: i, j, k, l = ind indnew = ''.join([j, i, k, l]) Cs['_'.join([name, indnew])] = -v return Cs
def check_status(self): """ tests both the ext_url and local_url to see if the database is running returns: True if a connection can be made False if the connection cannot be made """ log = logging.getLogger("%s.%s" % (self.log_name, inspect.stack()[0][3])) log.setLevel(self.log_level) if self.url: return True try: result = requests.get(self.ext_url) self.url = self.ext_url return True except requests.exceptions.ConnectionError: pass try: result = requests.get(self.local_url) log.warning("Url '%s' not connecting. Using local_url '%s'" % \ (self.ext_url, self.local_url)) self.url = self.local_url return True except requests.exceptions.ConnectionError: self.url = None log.warning("Unable to connect using urls: %s" % set([self.ext_url, self.local_url])) return False
tests both the ext_url and local_url to see if the database is running returns: True if a connection can be made False if the connection cannot be made
Below is the instruction that describes the task: ### Input: tests both the ext_url and local_url to see if the database is running returns: True if a connection can be made False if the connection cannot be made ### Response: def check_status(self): """ tests both the ext_url and local_url to see if the database is running returns: True if a connection can be made False if the connection cannot be made """ log = logging.getLogger("%s.%s" % (self.log_name, inspect.stack()[0][3])) log.setLevel(self.log_level) if self.url: return True try: result = requests.get(self.ext_url) self.url = self.ext_url return True except requests.exceptions.ConnectionError: pass try: result = requests.get(self.local_url) log.warning("Url '%s' not connecting. Using local_url '%s'" % \ (self.ext_url, self.local_url)) self.url = self.local_url return True except requests.exceptions.ConnectionError: self.url = None log.warning("Unable to connect using urls: %s" % set([self.ext_url, self.local_url])) return False
def _unichr(i): """ Helper function for taking a Unicode scalar value and returning a Unicode character. :param i: Unicode scalar value to convert. :return: Unicode character """ if not isinstance(i, int): raise TypeError try: return six.unichr(i) except ValueError: # Workaround the error "ValueError: unichr() arg not in range(0x10000) (narrow Python build)" return struct.pack("i", i).decode("utf-32")
Helper function for taking a Unicode scalar value and returning a Unicode character. :param i: Unicode scalar value to convert. :return: Unicode character
Below is the instruction that describes the task: ### Input: Helper function for taking a Unicode scalar value and returning a Unicode character. :param i: Unicode scalar value to convert. :return: Unicode character ### Response: def _unichr(i): """ Helper function for taking a Unicode scalar value and returning a Unicode character. :param i: Unicode scalar value to convert. :return: Unicode character """ if not isinstance(i, int): raise TypeError try: return six.unichr(i) except ValueError: # Workaround the error "ValueError: unichr() arg not in range(0x10000) (narrow Python build)" return struct.pack("i", i).decode("utf-32")
def _SID_call_prep(align_bams, items, ref_file, assoc_files, region=None, out_file=None): """Preparation work for SomaticIndelDetector. """ base_config = items[0]["config"] for x in align_bams: bam.index(x, base_config) params = ["-R", ref_file, "-T", "SomaticIndelDetector", "-U", "ALLOW_N_CIGAR_READS"] # Limit per base read start count to between 200-10000, i.e. from any base # can no more 10000 new reads begin. # Further, limit maxNumberOfReads accordingly, otherwise SID discards # windows for high coverage panels. paired = vcfutils.get_paired_bams(align_bams, items) params += ["--read_filter", "NotPrimaryAlignment"] params += ["-I:tumor", paired.tumor_bam] min_af = float(get_in(paired.tumor_config, ("algorithm", "min_allele_fraction"), 10)) / 100.0 if paired.normal_bam is not None: params += ["-I:normal", paired.normal_bam] # notice there must be at least 4 reads of coverage in normal params += ["--filter_expressions", "T_COV<6||N_COV<4||T_INDEL_F<%s||T_INDEL_CF<0.7" % min_af] else: params += ["--unpaired"] params += ["--filter_expressions", "COV<6||INDEL_F<%s||INDEL_CF<0.7" % min_af] if region: params += ["-L", bamprep.region_to_gatk(region), "--interval_set_rule", "INTERSECTION"] return params
Preparation work for SomaticIndelDetector.
Below is the instruction that describes the task: ### Input: Preparation work for SomaticIndelDetector. ### Response: def _SID_call_prep(align_bams, items, ref_file, assoc_files, region=None, out_file=None): """Preparation work for SomaticIndelDetector. """ base_config = items[0]["config"] for x in align_bams: bam.index(x, base_config) params = ["-R", ref_file, "-T", "SomaticIndelDetector", "-U", "ALLOW_N_CIGAR_READS"] # Limit per base read start count to between 200-10000, i.e. from any base # can no more 10000 new reads begin. # Further, limit maxNumberOfReads accordingly, otherwise SID discards # windows for high coverage panels. paired = vcfutils.get_paired_bams(align_bams, items) params += ["--read_filter", "NotPrimaryAlignment"] params += ["-I:tumor", paired.tumor_bam] min_af = float(get_in(paired.tumor_config, ("algorithm", "min_allele_fraction"), 10)) / 100.0 if paired.normal_bam is not None: params += ["-I:normal", paired.normal_bam] # notice there must be at least 4 reads of coverage in normal params += ["--filter_expressions", "T_COV<6||N_COV<4||T_INDEL_F<%s||T_INDEL_CF<0.7" % min_af] else: params += ["--unpaired"] params += ["--filter_expressions", "COV<6||INDEL_F<%s||INDEL_CF<0.7" % min_af] if region: params += ["-L", bamprep.region_to_gatk(region), "--interval_set_rule", "INTERSECTION"] return params
def body_json(soup, base_url=None): """ Get body json and then alter it with section wrapping and removing boxed-text """ body_content = body(soup, remove_key_info_box=True, base_url=base_url) # Wrap in a section if the first block is not a section if (body_content and len(body_content) > 0 and "type" in body_content[0] and body_content[0]["type"] != "section"): # Wrap this one new_body_section = OrderedDict() new_body_section["type"] = "section" new_body_section["id"] = "s0" new_body_section["title"] = "Main text" new_body_section["content"] = [] for body_block in body_content: new_body_section["content"].append(body_block) new_body = [] new_body.append(new_body_section) body_content = new_body body_content_rewritten = elifetools.json_rewrite.rewrite_json("body_json", soup, body_content) return body_content_rewritten
Get body json and then alter it with section wrapping and removing boxed-text
Below is the instruction that describes the task: ### Input: Get body json and then alter it with section wrapping and removing boxed-text ### Response: def body_json(soup, base_url=None): """ Get body json and then alter it with section wrapping and removing boxed-text """ body_content = body(soup, remove_key_info_box=True, base_url=base_url) # Wrap in a section if the first block is not a section if (body_content and len(body_content) > 0 and "type" in body_content[0] and body_content[0]["type"] != "section"): # Wrap this one new_body_section = OrderedDict() new_body_section["type"] = "section" new_body_section["id"] = "s0" new_body_section["title"] = "Main text" new_body_section["content"] = [] for body_block in body_content: new_body_section["content"].append(body_block) new_body = [] new_body.append(new_body_section) body_content = new_body body_content_rewritten = elifetools.json_rewrite.rewrite_json("body_json", soup, body_content) return body_content_rewritten
def execute_procedure(self, name, args=None, kwargs=None): """ Call the concrete python function corresponding to given RPC Method `name` and return the result. Raise RPCUnknownMethod, AuthenticationFailed, RPCInvalidParams or any Exception sub-class. """ _method = registry.get_method(name, self.entry_point, self.protocol) if not _method: raise RPCUnknownMethod(name) logger.debug('Check authentication / permissions for method {} and user {}' .format(name, self.request.user)) if not _method.check_permissions(self.request): raise AuthenticationFailed(name) logger.debug('RPC method {} will be executed'.format(name)) # Replace default None value with empty instance of corresponding type args = args or [] kwargs = kwargs or {} # If the RPC method needs to access some internals, update kwargs dict if _method.accept_kwargs: kwargs.update({ REQUEST_KEY: self.request, ENTRY_POINT_KEY: self.entry_point, PROTOCOL_KEY: self.protocol, HANDLER_KEY: self, }) if six.PY2: method_std, encoding = _method.str_standardization, _method.str_std_encoding args = modernrpc.compat.standardize_strings(args, strtype=method_std, encoding=encoding) kwargs = modernrpc.compat.standardize_strings(kwargs, strtype=method_std, encoding=encoding) logger.debug('Params: args = {} - kwargs = {}'.format(args, kwargs)) try: # Call the rpc method, as standard python function return _method.function(*args, **kwargs) except TypeError as e: # If given arguments cannot be transmitted properly to python function, # raise an Invalid Params exceptions raise RPCInvalidParams(str(e))
Call the concrete python function corresponding to given RPC Method `name` and return the result. Raise RPCUnknownMethod, AuthenticationFailed, RPCInvalidParams or any Exception sub-class.
Below is the instruction that describes the task: ### Input: Call the concrete python function corresponding to given RPC Method `name` and return the result. Raise RPCUnknownMethod, AuthenticationFailed, RPCInvalidParams or any Exception sub-class. ### Response: def execute_procedure(self, name, args=None, kwargs=None): """ Call the concrete python function corresponding to given RPC Method `name` and return the result. Raise RPCUnknownMethod, AuthenticationFailed, RPCInvalidParams or any Exception sub-class. """ _method = registry.get_method(name, self.entry_point, self.protocol) if not _method: raise RPCUnknownMethod(name) logger.debug('Check authentication / permissions for method {} and user {}' .format(name, self.request.user)) if not _method.check_permissions(self.request): raise AuthenticationFailed(name) logger.debug('RPC method {} will be executed'.format(name)) # Replace default None value with empty instance of corresponding type args = args or [] kwargs = kwargs or {} # If the RPC method needs to access some internals, update kwargs dict if _method.accept_kwargs: kwargs.update({ REQUEST_KEY: self.request, ENTRY_POINT_KEY: self.entry_point, PROTOCOL_KEY: self.protocol, HANDLER_KEY: self, }) if six.PY2: method_std, encoding = _method.str_standardization, _method.str_std_encoding args = modernrpc.compat.standardize_strings(args, strtype=method_std, encoding=encoding) kwargs = modernrpc.compat.standardize_strings(kwargs, strtype=method_std, encoding=encoding) logger.debug('Params: args = {} - kwargs = {}'.format(args, kwargs)) try: # Call the rpc method, as standard python function return _method.function(*args, **kwargs) except TypeError as e: # If given arguments cannot be transmitted properly to python function, # raise an Invalid Params exceptions raise RPCInvalidParams(str(e))
def _load_tasks(self, project): '''load tasks from database''' task_queue = project.task_queue for task in self.taskdb.load_tasks( self.taskdb.ACTIVE, project.name, self.scheduler_task_fields ): taskid = task['taskid'] _schedule = task.get('schedule', self.default_schedule) priority = _schedule.get('priority', self.default_schedule['priority']) exetime = _schedule.get('exetime', self.default_schedule['exetime']) task_queue.put(taskid, priority, exetime) project.task_loaded = True logger.debug('project: %s loaded %d tasks.', project.name, len(task_queue)) if project not in self._cnt['all']: self._update_project_cnt(project.name) self._cnt['all'].value((project.name, 'pending'), len(project.task_queue))
load tasks from database
def publish_attrs(self, upcount=1): """ Magic function which inject all attrs into the callers namespace :param upcount int, how many stack levels we go up :return: """ frame = inspect.currentframe() i = upcount while True: if frame.f_back is None: break frame = frame.f_back i -= 1 if i == 0: break for k, v in self.__dict__.items(): frame.f_globals[k] = v
Magic function which inject all attrs into the callers namespace :param upcount int, how many stack levels we go up :return:
def confirm_deliveries(self): """Set the channel to confirm that each message has been successfully delivered. :raises AMQPChannelError: Raises if the channel encountered an error. :raises AMQPConnectionError: Raises if the connection encountered an error. :return: """ self._confirming_deliveries = True confirm_frame = specification.Confirm.Select() return self.rpc_request(confirm_frame)
Set the channel to confirm that each message has been successfully delivered. :raises AMQPChannelError: Raises if the channel encountered an error. :raises AMQPConnectionError: Raises if the connection encountered an error. :return:
def send_keys(self, keysToSend): """ Send Keys to the Alert. :Args: - keysToSend: The text to be sent to Alert. """ if self.driver.w3c: self.driver.execute(Command.W3C_SET_ALERT_VALUE, {'value': keys_to_typing(keysToSend), 'text': keysToSend}) else: self.driver.execute(Command.SET_ALERT_VALUE, {'text': keysToSend})
Send Keys to the Alert. :Args: - keysToSend: The text to be sent to Alert.
def is_bin(ip): """Return true if the IP address is in binary notation.""" try: ip = str(ip) if len(ip) != 32: return False dec = int(ip, 2) except (TypeError, ValueError): return False if dec > 4294967295 or dec < 0: return False return True
Return true if the IP address is in binary notation.
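A quick standalone check of the `is_bin` helper's behavior; the function is restated here verbatim so the snippet runs on its own:

```python
def is_bin(ip):
    """Return true if the IP address is in binary notation."""
    try:
        ip = str(ip)
        if len(ip) != 32:
            return False
        dec = int(ip, 2)
    except (TypeError, ValueError):
        return False
    if dec > 4294967295 or dec < 0:
        return False
    return True

# "192.168.0.1" written as 32 binary digits:
print(is_bin("11000000101010000000000000000001"))   # True
print(is_bin("1100000010101000"))                   # False -- only 16 digits
print(is_bin("1" * 31 + "x"))                       # False -- 'x' is not a binary digit
```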
def to_frame(self, frame, current_frame=None, **kwargs): """ TODO: Parameters ---------- frame : `gala.potential.CFrameBase` The frame to transform to. current_frame : `gala.potential.CFrameBase` (optional) If the Orbit has no associated Hamiltonian, this specifies the current frame of the orbit. Returns ------- orbit : `gala.dynamics.Orbit` The orbit in the new reference frame. """ kw = kwargs.copy() # TODO: need a better way to do this! from ..potential.frame.builtin import ConstantRotatingFrame for fr in [frame, current_frame, self.frame]: if isinstance(fr, ConstantRotatingFrame): if 't' not in kw: kw['t'] = self.t psp = super(Orbit, self).to_frame(frame, current_frame, **kw) return Orbit(pos=psp.pos, vel=psp.vel, t=self.t, frame=frame, potential=self.potential)
TODO: Parameters ---------- frame : `gala.potential.CFrameBase` The frame to transform to. current_frame : `gala.potential.CFrameBase` (optional) If the Orbit has no associated Hamiltonian, this specifies the current frame of the orbit. Returns ------- orbit : `gala.dynamics.Orbit` The orbit in the new reference frame.
def _addDatasetAction(self, dataset): """ Adds an action for the inputed dataset to the toolbar :param dataset | <XChartDataset> """ # create the toolbar action action = QAction(dataset.name(), self) action.setIcon(XColorIcon(dataset.color())) action.setCheckable(True) action.setChecked(True) action.setData(wrapVariant(dataset)) action.toggled.connect(self.toggleDataset) self.uiDatasetTBAR.addAction(action)
Adds an action for the inputed dataset to the toolbar :param dataset | <XChartDataset>
def path(cls, ref): """ :return: string to absolute path at which the reflog of the given ref instance would be found. The path is not guaranteed to point to a valid file though. :param ref: SymbolicReference instance""" return osp.join(ref.repo.git_dir, "logs", to_native_path(ref.path))
:return: string to absolute path at which the reflog of the given ref instance would be found. The path is not guaranteed to point to a valid file though. :param ref: SymbolicReference instance
def _get_dependencies_of(name, location=None): ''' Returns list of first level dependencies of the given installed dap or dap from Dapi if not installed If a location is specified, this only checks for dap installed in that path and return [] if the dap is not located there ''' if not location: detailed_dap_list = get_installed_daps_detailed() if name not in detailed_dap_list: return _get_api_dependencies_of(name) location = detailed_dap_list[name][0]['location'] meta = '{d}/meta/{dap}.yaml'.format(d=location, dap=name) try: data = yaml.load(open(meta), Loader=Loader) except IOError: return [] return data.get('dependencies', [])
Returns list of first level dependencies of the given installed dap or dap from Dapi if not installed If a location is specified, this only checks for dap installed in that path and return [] if the dap is not located there
def parse_gene_list(path: str, graph: Graph, anno_type: str = "name") -> list: """Parse a list of genes and return them if they are in the network. :param str path: The path of input file. :param Graph graph: The graph with genes as nodes. :param str anno_type: The type of annotation with two options:name-Entrez ID, symbol-HGNC symbol. :return list: A list of genes, all of which are in the network. """ # read the file genes = pd.read_csv(path, header=None)[0].tolist() genes = [str(int(gene)) for gene in genes] # get those genes which are in the network ind = [] if anno_type == "name": ind = graph.vs.select(name_in=genes).indices elif anno_type == "symbol": ind = graph.vs.select(symbol_in=genes).indices else: raise Exception("The type can either be name or symbol, {} is not " "supported".format(anno_type)) genes = graph.vs[ind][anno_type] return genes
Parse a list of genes and return them if they are in the network. :param str path: The path of input file. :param Graph graph: The graph with genes as nodes. :param str anno_type: The type of annotation with two options:name-Entrez ID, symbol-HGNC symbol. :return list: A list of genes, all of which are in the network.
def text(self, encoding=None, errors='strict'): r""" Open this file, read it in, return the content as a string. This uses 'U' mode in Python 2.3 and later, so '\r\n' and '\r' are automatically translated to '\n'. Optional arguments: encoding - The Unicode encoding (or character set) of the file. If present, the content of the file is decoded and returned as a unicode object; otherwise it is returned as an 8-bit str. errors - How to handle Unicode errors; see help(str.decode) for the options. Default is 'strict'. """ if encoding is None: # 8-bit f = self.open(_textmode) try: return f.read() finally: f.close() else: # Unicode f = codecs.open(self, 'r', encoding, errors) # (Note - Can't use 'U' mode here, since codecs.open # doesn't support 'U' mode, even in Python 2.3.) try: t = f.read() finally: f.close() return (t.replace(u'\r\n', u'\n') .replace(u'\r\x85', u'\n') .replace(u'\r', u'\n') .replace(u'\x85', u'\n') .replace(u'\u2028', u'\n'))
r""" Open this file, read it in, return the content as a string. This uses 'U' mode in Python 2.3 and later, so '\r\n' and '\r' are automatically translated to '\n'. Optional arguments: encoding - The Unicode encoding (or character set) of the file. If present, the content of the file is decoded and returned as a unicode object; otherwise it is returned as an 8-bit str. errors - How to handle Unicode errors; see help(str.decode) for the options. Default is 'strict'.
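The newline handling at the end of `text()` is the interesting part: five Unicode line-ending conventions are collapsed into `'\n'`. A minimal standalone sketch of just that step (`normalize_newlines` is a hypothetical name, not part of the original API):

```python
def normalize_newlines(text):
    # Collapse the line-ending conventions handled by text() into '\n':
    # CRLF, CR+NEL, bare CR, NEL (U+0085), and LINE SEPARATOR (U+2028).
    # Replacement order matters: the two-character sequences go first,
    # otherwise '\r\n' would become '\n\n'.
    return (text.replace('\r\n', '\n')
                .replace('\r\x85', '\n')
                .replace('\r', '\n')
                .replace('\x85', '\n')
                .replace('\u2028', '\n'))

sample = 'one\r\ntwo\rthree\x85four\u2028five'
print(normalize_newlines(sample).split('\n'))
# ['one', 'two', 'three', 'four', 'five']
```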
def determine_final_config(config_module): """Determines the final additions and replacements. Combines the config module with the defaults. Args: config_module: The loaded local configuration module. Returns: Config: the final configuration. """ config = Config( DEFAULT_LIBRARY_RC_ADDITIONS, DEFAULT_LIBRARY_RC_REPLACEMENTS, DEFAULT_TEST_RC_ADDITIONS, DEFAULT_TEST_RC_REPLACEMENTS) for field in config._fields: if hasattr(config_module, field): config = config._replace(**{field: getattr(config_module, field)}) return config
Determines the final additions and replacements. Combines the config module with the defaults. Args: config_module: The loaded local configuration module. Returns: Config: the final configuration.
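The overlay pattern in `determine_final_config` — start from the defaults, then `_replace` any field the loaded module defines — can be sketched self-contained. The field names and `FakeModule` below are illustrative stand-ins, not the real defaults:

```python
from collections import namedtuple

Config = namedtuple('Config', ['library_rc_additions', 'test_rc_additions'])
DEFAULTS = Config(library_rc_additions=[], test_rc_additions=[])

def merge_config(config_module, defaults=DEFAULTS):
    # Overlay any attribute the module defines onto the defaults,
    # mirroring the loop over config._fields above.
    config = defaults
    for field in config._fields:
        if hasattr(config_module, field):
            config = config._replace(**{field: getattr(config_module, field)})
    return config

class FakeModule:  # stand-in for a loaded local config module
    library_rc_additions = ['disable=C0111']

cfg = merge_config(FakeModule)
print(cfg.library_rc_additions)  # ['disable=C0111'] -- taken from the module
print(cfg.test_rc_additions)     # [] -- default kept
```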
def log_request_success(self, method, full_url, path, body, status_code, response, duration): """ Log a successful API call. """ if body and not isinstance(body, dict): body = body.decode('utf-8') logger.info( '%s %s [status:%s request:%.3fs]', method, full_url, status_code, duration ) logger.debug('> %s', body) logger.debug('< %s', response) if tracer.isEnabledFor(logging.INFO): if self.url_prefix: path = path.replace(self.url_prefix, '', 1) tracer.info("curl -X%s 'http://localhost:8047%s' -d '%s'", method, path, self._pretty_json(body) if body else '') if tracer.isEnabledFor(logging.DEBUG): tracer.debug('#[%s] (%.3fs)\n#%s', status_code, duration, self._pretty_json(response).replace('\n', '\n#') if response else '')
Log a successful API call.
def delete_query(self, query_id): """Delete query in device query service. :param int query_id: ID of the query to delete (Required) :return: void """ api = self._get_api(device_directory.DefaultApi) api.device_query_destroy(query_id) return
Delete query in device query service. :param int query_id: ID of the query to delete (Required) :return: void
def rdcandump(filename, count=None, is_not_log_file_format=False, interface=None): """Read a candump log file and return a packet list count: read only <count> packets is_not_log_file_format: read input with candumps stdout format interfaces: return only packets from a specified interface """ try: if isinstance(filename, six.string_types): file = open(filename, "rb") else: file = filename pkts = list() ifilter = None if interface is not None: if isinstance(interface, six.string_types): ifilter = [interface] else: ifilter = interface for l in file.readlines(): if is_not_log_file_format: h, data = l.split(b']') intf, idn, le = h.split() t = None else: t, intf, f = l.split() idn, data = f.split(b'#') le = None t = float(t[1:-1]) if ifilter is not None and intf.decode('ASCII') not in ifilter: continue data = data.replace(b' ', b'') data = data.strip() pkt = CAN(identifier=int(idn, 16), data=binascii.unhexlify(data)) if le is not None: pkt.length = int(le[1:]) else: pkt.length = len(pkt.data) if len(idn) > 3: pkt.flags = 0b100 if t is not None: pkt.time = t pkts.append(pkt) if count is not None and len(pkts) >= count: break finally: file.close() return pkts
Read a candump log file and return a packet list count: read only <count> packets is_not_log_file_format: read input with candumps stdout format interfaces: return only packets from a specified interface
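For the log-file branch of `rdcandump`, each line looks like `(1609459200.123) can0 1F334455#DEADBEEF`. A stdlib-only sketch of parsing one such line (`parse_candump_line` is a hypothetical helper mirroring the field handling above, minus the scapy `CAN` packet):

```python
import binascii

def parse_candump_line(line):
    # Split a candump log-file line: "(<time>) <iface> <id>#<hexdata>"
    t, intf, frame = line.split()
    idn, data = frame.split('#')
    return {
        'time': float(t[1:-1]),           # strip surrounding parentheses
        'interface': intf,
        'identifier': int(idn, 16),
        'data': binascii.unhexlify(data),
        'extended': len(idn) > 3,         # >3 hex digits implies a 29-bit ID
    }

pkt = parse_candump_line('(1609459200.123) can0 1F334455#DEADBEEF')
print(hex(pkt['identifier']), pkt['extended'])  # 0x1f334455 True
```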
def expression(self, text): """expression = number , op_mult , expression | expression_terminal , op_mult , number , [operator , expression] | expression_terminal , op_add , [operator , expression] | expression_terminal , [operator , expression] ; """ self._attempting(text) return alternation([ # number , op_mult , expression concatenation([ self.number, self.op_mult, self.expression ], ignore_whitespace=True), # expression_terminal , op_mult , number , [operator , expression] concatenation([ self.expression_terminal, self.op_mult, self.number, option( concatenation([ self.operator, self.expression ], ignore_whitespace=True) ) ], ignore_whitespace=True), # expression_terminal , op_add , [operator , expression] concatenation([ self.expression_terminal, self.op_add, option( concatenation([ self.operator, self.expression ], ignore_whitespace=True) ) ], ignore_whitespace=True), # expression_terminal , [operator , expression] concatenation([ self.expression_terminal, option( concatenation([ self.operator, self.expression ], ignore_whitespace=True) ) ], ignore_whitespace=True) ])(text).retyped(TokenType.expression)
expression = number , op_mult , expression | expression_terminal , op_mult , number , [operator , expression] | expression_terminal , op_add , [operator , expression] | expression_terminal , [operator , expression] ;
def _flat_values(self): """Return tuple of mean values as found in cube response. Mean data may include missing items represented by a dict like {'?': -1} in the cube response. These are replaced by np.nan in the returned value. """ return tuple( np.nan if type(x) is dict else x for x in self._cube_dict["result"]["measures"]["mean"]["data"] )
Return tuple of mean values as found in cube response. Mean data may include missing items represented by a dict like {'?': -1} in the cube response. These are replaced by np.nan in the returned value.
def load(cls, filename, name=None): """ Load yaml configuration from filename. """ if not os.path.exists(filename): return {} name = name or filename if name not in cls._conffiles: with open(filename) as fdesc: content = yaml.load(fdesc, YAMLLoader) # in case the file is empty if content is None: content = {} cls._conffiles[name] = content return cls._conffiles[name]
Load yaml configuration from filename.
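The key behavior of `load()` is the per-name cache: a file is parsed once and later calls return the same object. A self-contained sketch of the same pattern, using `json` instead of `yaml` so it runs without third-party dependencies:

```python
import json
import os
import tempfile

_conffiles = {}

def load_config(filename, name=None):
    # Same structure as load() above: missing file -> {},
    # otherwise parse once and serve subsequent calls from the cache.
    if not os.path.exists(filename):
        return {}
    name = name or filename
    if name not in _conffiles:
        with open(filename) as fdesc:
            content = json.load(fdesc)
        if content is None:  # e.g. a file containing just "null"
            content = {}
        _conffiles[name] = content
    return _conffiles[name]

with tempfile.NamedTemporaryFile('w', suffix='.json', delete=False) as f:
    f.write('{"debug": true}')
    path = f.name

first = load_config(path)
second = load_config(path)   # cache hit: same object, file not re-parsed
print(first == {'debug': True}, first is second)  # True True
os.remove(path)
```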
def AsRegEx(self): """Return the current glob as a simple regex. Note: No interpolation is performed. Returns: A RegularExpression() object. """ parts = self.__class__.REGEX_SPLIT_PATTERN.split(self._value) result = u"".join(self._ReplaceRegExPart(p) for p in parts) return rdf_standard.RegularExpression(u"(?i)\\A%s\\Z" % result)
Return the current glob as a simple regex. Note: No interpolation is performed. Returns: A RegularExpression() object.
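For comparison, the standard library's `fnmatch.translate` performs a similar glob-to-regex conversion; prefixing `(?i)` approximates the case-insensitive, fully anchored pattern that `AsRegEx` builds (the GRR version additionally splits on its own `REGEX_SPLIT_PATTERN` so only non-wildcard parts are escaped):

```python
import fnmatch
import re

# translate() yields an anchored pattern; (?i) adds case-insensitivity
# like the "(?i)\A...\Z" wrapper above.
pattern = re.compile('(?i)' + fnmatch.translate('*.EXE'))

print(bool(pattern.match('payload.exe')))      # True: matched case-insensitively
print(bool(pattern.match('payload.exe.bak')))  # False: anchored at the end
```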
def add_canstrat_striplogs(self, path, uwi_transform=None, name='canstrat'): """ This may be too specific a method... just move it to the workflow. Requires striplog. """ from striplog import Striplog uwi_transform = uwi_transform or utils.null for w in self.__list: try: dat_file = utils.find_file(str(uwi_transform(w.uwi)), path) except: print("- Skipping {}: something went wrong".format(w.uwi)) continue if dat_file is None: print("- Omitting {}: no data".format(w.uwi)) continue # If we got here, we're using it. print("+ Adding {} from {}".format(w.uwi, dat_file)) w.data[name] = Striplog.from_canstrat(dat_file) return
This may be too specific a method... just move it to the workflow. Requires striplog.
def old_properties_names_to_new(self): # pragma: no cover, never called """Convert old Nagios2 names to Nagios3 new names TODO: still useful? :return: None """ for i in itertools.chain(iter(list(self.items.values())), iter(list(self.templates.values()))): i.old_properties_names_to_new()
Convert old Nagios2 names to Nagios3 new names TODO: still useful? :return: None
def _parse_proto(prototxt_fname):
    """Parse Caffe prototxt into symbol string
    """
    proto = caffe_parser.read_prototxt(prototxt_fname)

    # process data layer
    input_name, input_dim, layers = _get_input(proto)
    # only support single input, so always use `data` as the input data
    mapping = {input_name: 'data'}
    need_flatten = {input_name: False}
    symbol_string = "import mxnet as mx\ndata = mx.symbol.Variable(name='data')\n"

    flatten_count = 0
    output_name = ""
    prev_name = None
    _output_name = {}

    # convert reset layers one by one
    for i, layer in enumerate(layers):
        type_string = ''
        param_string = ''
        skip_layer = False
        name = re.sub('[-/]', '_', layer.name)
        for k in range(len(layer.bottom)):
            if layer.bottom[k] in _output_name:
                _output_name[layer.bottom[k]]['count'] = _output_name[layer.bottom[k]]['count'] + 1
            else:
                _output_name[layer.bottom[k]] = {'count': 0}
        for k in range(len(layer.top)):
            if layer.top[k] in _output_name:
                _output_name[layer.top[k]]['count'] = _output_name[layer.top[k]]['count'] + 1
            else:
                _output_name[layer.top[k]] = {'count': 0, 'name': name}
        if layer.type == 'Convolution' or layer.type == 4:
            type_string = 'mx.symbol.Convolution'
            param_string = _convert_conv_param(layer.convolution_param)
            need_flatten[name] = True
        if layer.type == 'Deconvolution' or layer.type == 39:
            type_string = 'mx.symbol.Deconvolution'
            param_string = _convert_conv_param(layer.convolution_param)
            need_flatten[name] = True
        if layer.type == 'Pooling' or layer.type == 17:
            type_string = 'mx.symbol.Pooling'
            param_string = _convert_pooling_param(layer.pooling_param)
            need_flatten[name] = True
        if layer.type == 'ReLU' or layer.type == 18:
            type_string = 'mx.symbol.Activation'
            param_string = "act_type='relu'"
            param = layer.relu_param
            if hasattr(param, 'negative_slope'):
                if param.negative_slope > 0:
                    type_string = 'mx.symbol.LeakyReLU'
                    param_string = "act_type='leaky', slope=%f" % param.negative_slope
            need_flatten[name] = need_flatten[mapping[layer.bottom[0]]]
        if layer.type == 'TanH' or layer.type == 23:
            type_string = 'mx.symbol.Activation'
            param_string = "act_type='tanh'"
            need_flatten[name] = need_flatten[mapping[layer.bottom[0]]]
        if layer.type == 'Sigmoid' or layer.type == 19:
            type_string = 'mx.symbol.Activation'
            param_string = "act_type='sigmoid'"
            need_flatten[name] = need_flatten[mapping[layer.bottom[0]]]
        if layer.type == 'LRN' or layer.type == 15:
            type_string = 'mx.symbol.LRN'
            param = layer.lrn_param
            param_string = "alpha=%f, beta=%f, knorm=%f, nsize=%d" % (
                param.alpha, param.beta, param.k, param.local_size)
            need_flatten[name] = True
        if layer.type == 'InnerProduct' or layer.type == 14:
            type_string = 'mx.symbol.FullyConnected'
            param = layer.inner_product_param
            param_string = "num_hidden=%d, no_bias=%s" % (
                param.num_output, not param.bias_term)
            need_flatten[name] = False
        if layer.type == 'Dropout' or layer.type == 6:
            type_string = 'mx.symbol.Dropout'
            param = layer.dropout_param
            param_string = "p=%f" % param.dropout_ratio
            need_flatten[name] = need_flatten[mapping[layer.bottom[0]]]
        if layer.type == 'Softmax' or layer.type == 20:
            type_string = 'mx.symbol.SoftmaxOutput'
        if layer.type == 'Flatten' or layer.type == 8:
            type_string = 'mx.symbol.Flatten'
            need_flatten[name] = False
        if layer.type == 'Split' or layer.type == 22:
            type_string = 'split'  # will process later
        if layer.type == 'Concat' or layer.type == 3:
            type_string = 'mx.symbol.Concat'
            need_flatten[name] = True
        if layer.type == 'Crop':
            type_string = 'mx.symbol.Crop'
            need_flatten[name] = True
            param_string = 'center_crop=True'
        if layer.type == 'BatchNorm':
            type_string = 'mx.symbol.BatchNorm'
            param = layer.batch_norm_param
            # CuDNN requires eps to be greater than 1e-05
            # We compensate for this change in convert_model
            epsilon = param.eps
            if (epsilon <= 1e-05):
                epsilon = 1e-04
            # if next layer is scale, don't fix gamma
            fix_gamma = layers[i + 1].type != 'Scale'
            param_string = 'use_global_stats=%s, fix_gamma=%s, eps=%f' % (
                param.use_global_stats, fix_gamma, epsilon)
            need_flatten[name] = need_flatten[mapping[layer.bottom[0]]]
        if layer.type == 'Scale':
            assert layers[i - 1].type == 'BatchNorm'
            need_flatten[name] = need_flatten[mapping[layer.bottom[0]]]
            skip_layer = True
            prev_name = re.sub('[-/]', '_', layers[i - 1].name)
        if layer.type == 'PReLU':
            type_string = 'mx.symbol.LeakyReLU'
            param = layer.prelu_param
            param_string = "act_type='prelu', slope=%f" % param.filler.value
            need_flatten[name] = need_flatten[mapping[layer.bottom[0]]]
        if layer.type == 'Eltwise':
            type_string = 'mx.symbol.broadcast_add'
            param = layer.eltwise_param
            param_string = ""
            need_flatten[name] = False
        if layer.type == 'Reshape':
            type_string = 'mx.symbol.Reshape'
            need_flatten[name] = False
            param = layer.reshape_param
            param_string = "shape=(%s)" % (','.join(param.shape.dim),)
        if layer.type == 'AbsVal':
            type_string = 'mx.symbol.abs'
            need_flatten[name] = need_flatten[mapping[layer.bottom[0]]]

        if skip_layer:
            assert len(layer.bottom) == 1
            symbol_string += "%s = %s\n" % (name, prev_name)
        elif type_string == '':
            raise ValueError('Unknown layer %s!' % layer.type)
        elif type_string != 'split':
            bottom = layer.bottom
            if param_string != "":
                param_string = ", " + param_string
            if len(bottom) == 1:
                if need_flatten[mapping[bottom[0]]] and type_string == 'mx.symbol.FullyConnected':
                    flatten_name = "flatten_%d" % flatten_count
                    symbol_string += "%s=mx.symbol.Flatten(name='%s', data=%s)\n" % (
                        flatten_name, flatten_name, mapping[bottom[0]])
                    flatten_count += 1
                    need_flatten[flatten_name] = False
                    bottom[0] = flatten_name
                    mapping[bottom[0]] = bottom[0]
                symbol_string += "%s = %s(name='%s', data=%s %s)\n" % (
                    name, type_string, name, mapping[bottom[0]], param_string)
            else:
                if layer.type == 'Eltwise' and param.operation == 1 and len(param.coeff) > 0:
                    symbol_string += "%s = " % name
                    symbol_string += " + ".join(["%s * %s" % (
                        mapping[bottom[i]], param.coeff[i]) for i in range(len(param.coeff))])
                    symbol_string += "\n"
                else:
                    symbol_string += "%s = %s(name='%s', *[%s] %s)\n" % (
                        name, type_string, name, ','.join(
                            [mapping[x] for x in bottom]), param_string)
        for j in range(len(layer.top)):
            mapping[layer.top[j]] = name
        output_name = name
    output_name = []
    for i in _output_name:
        if 'name' in _output_name[i] and _output_name[i]['count'] == 0:
            output_name.append(_output_name[i]['name'])
    return symbol_string, output_name, input_dim
Parse Caffe prototxt into symbol string
def train(self):
    """ Train model with transformed data """
    for i, model in enumerate(self.models):
        N = [int(i * len(self.y)) for i in self.lc_range]
        for n in N:
            X = self.X[:n]
            y = self.y[:n]
            e = Experiment(X, y, model.estimator, self.scores,
                           self.validation_method)
            e.log_folder = self.log_folder
            e.train()
Train model with transformed data
def first_from_generator(generator):
    """Pull the first value from a generator and return it, closing the generator

    :param generator: A generator, this will be mapped onto a list and the
        first item extracted.
    :return: None if there are no items, or the first item otherwise.
    :internal:
    """
    try:
        result = next(generator)
    except StopIteration:
        result = None
    finally:
        generator.close()
    return result
Pull the first value from a generator and return it, closing the generator :param generator: A generator, this will be mapped onto a list and the first item extracted. :return: None if there are no items, or the first item otherwise. :internal:
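The helper above is easy to exercise in isolation. The sketch below re-declares it (so the snippet is self-contained) and shows both cases from the docstring: a value is pulled and the generator is closed, or `None` comes back when the generator is already empty.

```python
def first_from_generator(generator):
    """Return the first yielded value (or None), closing the generator."""
    try:
        result = next(generator)
    except StopIteration:
        result = None
    finally:
        generator.close()
    return result

# A value exists: it is returned and the generator is closed.
gen = (n * n for n in range(5))
print(first_from_generator(gen))  # 0

# No values: StopIteration is swallowed and None is returned.
empty = (x for x in [])
print(first_from_generator(empty))  # None
```

Because the `finally` clause always runs, the remaining items of a non-empty generator are discarded after the first one is retrieved.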
def deduplicate(list_object):
    """Rebuild `list_object` removing duplicated and keeping order"""
    new = []
    for item in list_object:
        if item not in new:
            new.append(item)
    return new
Rebuild `list_object` removing duplicated and keeping order
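`deduplicate` tests membership against the output list, so it is O(n²) but works even for unhashable items. The sketch below re-declares it and adds a hypothetical O(n) set-based variant (`deduplicate_fast` is not part of the original API; it requires hashable items).

```python
def deduplicate(list_object):
    """Rebuild `list_object` removing duplicates and keeping order."""
    new = []
    for item in list_object:
        if item not in new:
            new.append(item)
    return new

def deduplicate_fast(list_object):
    """O(n) variant with the same order-preserving contract (hashable items only)."""
    seen = set()
    out = []
    for item in list_object:
        if item not in seen:
            seen.add(item)
            out.append(item)
    return out

print(deduplicate([3, 1, 3, 2, 1]))       # [3, 1, 2]
print(deduplicate_fast([3, 1, 3, 2, 1]))  # [3, 1, 2]
```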
def get_pause_time(self, speed=ETHER_SPEED_MBIT_1000):
    """
    get pause time for given link speed in seconds

    :param speed: select link speed to get the pause time for, must be
        ETHER_SPEED_MBIT_[10,100,1000]
    :return: pause time in seconds
    :raises MACControlInvalidSpeedException: on invalid speed selector
    """
    try:
        return self.pause_time * {
            ETHER_SPEED_MBIT_10: (0.0000001 * 512),
            ETHER_SPEED_MBIT_100: (0.00000001 * 512),
            ETHER_SPEED_MBIT_1000: (0.000000001 * 512 * 2)
        }[speed]
    except KeyError:
        raise MACControlInvalidSpeedException(
            'Invalid speed selector given. '
            'Must be one of ETHER_SPEED_MBIT_[10,100,1000]')
get pause time for given link speed in seconds :param speed: select link speed to get the pause time for, must be ETHER_SPEED_MBIT_[10,100,1000] :return: pause time in seconds :raises MACControlInvalidSpeedException: on invalid speed selector
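The lookup table above maps a speed selector to the duration of one pause quantum (512 bit times) in seconds, which is then scaled by the frame's `pause_time` field. The standalone sketch below mirrors that arithmetic; the selector constants and the exception class are local stand-ins, since their real definitions live elsewhere in the module.

```python
# Stand-ins for the module-level speed selectors used above.
ETHER_SPEED_MBIT_10 = 10
ETHER_SPEED_MBIT_100 = 100
ETHER_SPEED_MBIT_1000 = 1000

class MACControlInvalidSpeedException(Exception):
    pass

# Seconds per pause quantum (512 bit times) at each link speed.
_QUANTUM_SECONDS = {
    ETHER_SPEED_MBIT_10: 0.0000001 * 512,
    ETHER_SPEED_MBIT_100: 0.00000001 * 512,
    ETHER_SPEED_MBIT_1000: 0.000000001 * 512 * 2,
}

def pause_seconds(pause_time, speed=ETHER_SPEED_MBIT_1000):
    """Convert a pause_time field (in quanta) to seconds at a given speed."""
    try:
        return pause_time * _QUANTUM_SECONDS[speed]
    except KeyError:
        raise MACControlInvalidSpeedException(
            'Invalid speed selector given. '
            'Must be one of ETHER_SPEED_MBIT_[10,100,1000]')

# 0xFFFF quanta * 512 bit-times * 10 ns/bit at 100 Mbit/s is roughly 0.336 s.
print(pause_seconds(0xFFFF, ETHER_SPEED_MBIT_100))
```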
def sample_bitstrings(self, n_samples):
    """
    Sample bitstrings from the distribution defined by the wavefunction.

    Qubit 0 is at ``out[:, 0]``.

    :param n_samples: The number of bitstrings to sample
    :return: An array of shape (n_samples, n_qubits)
    """
    if self.rs is None:
        raise ValueError("You have tried to perform a stochastic operation without setting the "
                         "random state of the simulator. Might I suggest using a PyQVM object?")
    probabilities = np.abs(self.wf) ** 2
    possible_bitstrings = all_bitstrings(self.n_qubits)
    inds = self.rs.choice(2 ** self.n_qubits, n_samples, p=probabilities)
    bitstrings = possible_bitstrings[inds, :]
    bitstrings = np.flip(bitstrings, axis=1)  # qubit ordering: 0 on the left.
    return bitstrings
Sample bitstrings from the distribution defined by the wavefunction. Qubit 0 is at ``out[:, 0]``. :param n_samples: The number of bitstrings to sample :return: An array of shape (n_samples, n_qubits)
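The same sampling recipe can be sketched without the simulator class: square the amplitudes, draw state indices with those probabilities, then map each index to its bit pattern. `all_bitstrings` is replaced here by a hypothetical inline enumeration that emits the least-significant bit first, so qubit 0 already lands in column 0 and no flip is needed (this assumes qubit 0 is the low bit of the state index, matching the flip in the original).

```python
import numpy as np

def sample_bitstrings(wf, n_samples, rs):
    """Sample measurement bitstrings from a state vector `wf`.

    `wf` is a length 2**n complex amplitude vector; returns an
    (n_samples, n_qubits) integer array with qubit 0 in column 0.
    """
    n_qubits = int(np.log2(len(wf)))
    probabilities = np.abs(wf) ** 2
    # Row i holds the binary digits of i, least-significant bit first.
    possible = (np.arange(2 ** n_qubits)[:, None] >> np.arange(n_qubits)) & 1
    inds = rs.choice(2 ** n_qubits, n_samples, p=probabilities)
    return possible[inds, :]

rs = np.random.RandomState(42)
wf = np.zeros(4)
wf[3] = 1.0                           # the |11> state on two qubits
print(sample_bitstrings(wf, 3, rs))   # every row is [1, 1]
```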
def delete(self, key, *keys):
    """Delete a key."""
    fut = self.execute(b'DEL', key, *keys)
    return wait_convert(fut, int)
Delete a key.
def tgread_bool(self):
    """Reads a Telegram boolean value."""
    value = self.read_int(signed=False)
    if value == 0x997275b5:  # boolTrue
        return True
    elif value == 0xbc799737:  # boolFalse
        return False
    else:
        raise RuntimeError('Invalid boolean code {}'.format(hex(value)))
Reads a Telegram boolean value.
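The two magic numbers are the TL constructor ids for `boolTrue` and `boolFalse`, read as an unsigned 32-bit little-endian integer. A standalone sketch of the same decoding — the buffer-based `read_bool` is a stand-in for the reader method, which pulls the integer from its own stream:

```python
import struct

BOOL_TRUE = 0x997275b5   # TL constructor id for boolTrue (from the snippet)
BOOL_FALSE = 0xbc799737  # TL constructor id for boolFalse (from the snippet)

def read_bool(buf, offset=0):
    """Decode a TL-serialized boolean from a little-endian byte buffer."""
    (value,) = struct.unpack_from('<I', buf, offset)
    if value == BOOL_TRUE:
        return True
    elif value == BOOL_FALSE:
        return False
    raise RuntimeError('Invalid boolean code {}'.format(hex(value)))

print(read_bool(struct.pack('<I', BOOL_TRUE)))   # True
print(read_bool(struct.pack('<I', BOOL_FALSE)))  # False
```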
def main(arguments=None):
    """
    *The main function used when ``cl_utils.py`` is run as a single script
    from the cl, or when installed as a cl command*
    """
    # setup the command-line util settings
    su = tools(
        arguments=arguments,
        docString=__doc__,
        logLevel="DEBUG",
        options_first=False,
        projectName="sloancone",
        tunnel=False
    )
    arguments, settings, log, dbConn = su.setup()

    # tab completion for raw_input
    readline.set_completer_delims(' \t\n;')
    readline.parse_and_bind("tab: complete")
    readline.set_completer(tab_complete)

    # unpack remaining cl arguments using `exec` to setup the variable names
    # automatically
    for arg, val in arguments.iteritems():
        if arg[0] == "-":
            varname = arg.replace("-", "") + "Flag"
        else:
            varname = arg.replace("<", "").replace(">", "")
        if isinstance(val, str) or isinstance(val, unicode):
            exec(varname + " = '%s'" % (val,))
        else:
            exec(varname + " = %s" % (val,))
        if arg == "--dbConn":
            dbConn = val
        log.debug('%s = %s' % (varname, val,))

    ## START LOGGING ##
    startTime = times.get_now_sql_datetime()
    log.info(
        '--- STARTING TO RUN THE cl_utils.py AT %s' % (startTime,))

    # set options interactively if user requests
    if "interactiveFlag" in locals() and interactiveFlag:

        # load previous settings
        moduleDirectory = os.path.dirname(__file__) + "/resources"
        pathToPickleFile = "%(moduleDirectory)s/previousSettings.p" % locals()
        try:
            with open(pathToPickleFile):
                pass
            previousSettingsExist = True
        except:
            previousSettingsExist = False
        previousSettings = {}
        if previousSettingsExist:
            previousSettings = pickle.load(open(pathToPickleFile, "rb"))

        # x-raw-input
        # x-boolean-raw-input
        # x-raw-input-with-default-value-from-previous-settings

        # save the most recently used requests
        pickleMeObjects = []
        pickleMe = {}
        theseLocals = locals()
        for k in pickleMeObjects:
            pickleMe[k] = theseLocals[k]
        pickle.dump(pickleMe, open(pathToPickleFile, "wb"))

    # CALL FUNCTIONS/OBJECTS

    if "dbConn" in locals() and dbConn:
        dbConn.commit()
        dbConn.close()

    ## FINISH LOGGING ##
    endTime = times.get_now_sql_datetime()
    runningTime = times.calculate_time_difference(startTime, endTime)
    log.info('-- FINISHED ATTEMPT TO RUN THE cl_utils.py AT %s (RUNTIME: %s) --' %
             (endTime, runningTime, ))

    return
*The main function used when ``cl_utils.py`` is run as a single script from the cl, or when installed as a cl command*
def selected(self):
    """
    returns the list of selected component names. if no component is
    selected, return the ones marked as default. If the block is required
    and no component was indicated as default, then the first component is
    selected.
    """
    selected = self._selected
    if len(self._selected) == 0 and self.required:
        # nothing has been selected yet BUT the component is required
        selected = self.defaults
    return selected
returns the list of selected component names. if no component is selected, return the ones marked as default. If the block is required and no component was indicated as default, then the first component is selected.
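The fallback logic above can be illustrated with a minimal stand-in class. `ComponentBlock` and its attribute layout are hypothetical — they mirror only the fields the property touches (`_selected`, `required`, `defaults`).

```python
class ComponentBlock:
    """Minimal stand-in illustrating the `selected` fallback."""

    def __init__(self, defaults, required=True):
        self._selected = []
        self.defaults = defaults
        self.required = required

    @property
    def selected(self):
        selected = self._selected
        if len(self._selected) == 0 and self.required:
            # nothing has been selected yet BUT the component is required
            selected = self.defaults
        return selected

block = ComponentBlock(defaults=['core'])
print(block.selected)       # nothing chosen yet, so falls back to ['core']
block._selected = ['gui']
print(block.selected)       # an explicit selection wins: ['gui']
```

An optional (non-required) block with no selection simply returns the empty list rather than its defaults.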
def _check_completion_errors(self):
    """
    Parses four potential errors that can cause jobs to crash: inability to
    transform coordinates due to a bad symmetric specification, an input
    file that fails to pass inspection, and errors reading and writing
    files.
    """
    if read_pattern(
            self.text, {
                "key": r"Coordinates do not transform within specified threshold"
            },
            terminate_on_match=True).get('key') == [[]]:
        self.data["errors"] += ["failed_to_transform_coords"]
    elif read_pattern(
            self.text, {
                "key": r"The Q\-Chem input file has failed to pass inspection"
            },
            terminate_on_match=True).get('key') == [[]]:
        self.data["errors"] += ["input_file_error"]
    elif read_pattern(
            self.text, {
                "key": r"Error opening input stream"
            },
            terminate_on_match=True).get('key') == [[]]:
        self.data["errors"] += ["failed_to_read_input"]
    elif read_pattern(
            self.text, {
                "key": r"FileMan error: End of file reached prematurely"
            },
            terminate_on_match=True).get('key') == [[]]:
        self.data["errors"] += ["IO_error"]
    elif read_pattern(
            self.text, {
                "key": r"Could not find \$molecule section in ParseQInput"
            },
            terminate_on_match=True).get('key') == [[]]:
        self.data["errors"] += ["read_molecule_error"]
    elif read_pattern(
            self.text, {
                "key": r"Welcome to Q-Chem"
            },
            terminate_on_match=True).get('key') != [[]]:
        self.data["errors"] += ["never_called_qchem"]
    else:
        self.data["errors"] += ["unknown_error"]
Parses four potential errors that can cause jobs to crash: inability to transform coordinates due to a bad symmetric specification, an input file that fails to pass inspection, and errors reading and writing files.
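The elif chain above is essentially an ordered scan of regex patterns over the output text, with a final check that the program banner ever appeared. A self-contained sketch of the same idea — the patterns are taken from the snippet, while `classify_failure` is a hypothetical stand-in for the `read_pattern`-based logic:

```python
import re

# Ordered (pattern, error tag) checks, mirroring the elif chain above.
_ERROR_PATTERNS = [
    (r"Coordinates do not transform within specified threshold",
     "failed_to_transform_coords"),
    (r"The Q\-Chem input file has failed to pass inspection",
     "input_file_error"),
    (r"Error opening input stream", "failed_to_read_input"),
    (r"FileMan error: End of file reached prematurely", "IO_error"),
    (r"Could not find \$molecule section in ParseQInput",
     "read_molecule_error"),
]

def classify_failure(text):
    """Return the first matching error tag for a failed run's output."""
    for pattern, tag in _ERROR_PATTERNS:
        if re.search(pattern, text):
            return tag
    # If the banner never appeared, the program was never launched at all.
    if not re.search(r"Welcome to Q-Chem", text):
        return "never_called_qchem"
    return "unknown_error"

print(classify_failure("...Error opening input stream..."))  # failed_to_read_input
```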
def get_url_for_get(url, parameters=None):
    # type: (str, Optional[Dict]) -> str
    """Get full url for GET request including parameters

    Args:
        url (str): URL to download
        parameters (Optional[Dict]): Parameters to pass. Defaults to None.

    Returns:
        str: Full url
    """
    spliturl = urlsplit(url)
    getparams = OrderedDict(parse_qsl(spliturl.query))
    if parameters is not None:
        getparams.update(parameters)
    spliturl = spliturl._replace(query=urlencode(getparams))
    return urlunsplit(spliturl)
Get full url for GET request including parameters Args: url (str): URL to download parameters (Optional[Dict]): Parameters to pass. Defaults to None. Returns: str: Full url
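Because the function only relies on standard `urllib` machinery, it runs as-is once the Python 3 imports are supplied. A self-contained sketch with a usage example — note that keys already present in the query are overridden in place rather than duplicated:

```python
from collections import OrderedDict
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

def get_url_for_get(url, parameters=None):
    """Merge `parameters` into the query string of `url`."""
    spliturl = urlsplit(url)
    getparams = OrderedDict(parse_qsl(spliturl.query))
    if parameters is not None:
        getparams.update(parameters)
    spliturl = spliturl._replace(query=urlencode(getparams))
    return urlunsplit(spliturl)

# 'page' is overridden, 'q' is appended.
print(get_url_for_get('https://example.com/api?page=1', {'page': '2', 'q': 'x'}))
# https://example.com/api?page=2&q=x
```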
async def set_access_string(self, **params):
    """Writes content access string to database
    """
    if params.get("message"):
        params = json.loads(params.get("message", "{}"))

    cid = int(params.get("cid", "0"))
    seller_access_string = params.get("seller_access_string")
    seller_pubkey = params.get("seller_pubkey")
    coinid = params.get("coinid")
    try:
        coinid = coinid.replace("TEST", "")
    except:
        pass

    database = client[coinid]
    collection = database[settings.CONTENT]

    content = await collection.find_one({"cid": cid})
    if not content:
        return {"error": 404, "reason": "Content not found"}

    if not all([cid, seller_access_string, seller_pubkey]):
        return {"error": 400, "reason": "Missed required fields"}

    await collection.find_one_and_update(
        {"cid": cid}, {"$set": {"seller_access_string": seller_access_string}})
    await collection.find_one_and_update(
        {"cid": cid}, {"$set": {"seller_pubkey": seller_pubkey}})

    content = await collection.find_one({"cid": cid})
    return {i: content[i] for i in content if i != "_id"}
Writes content access string to database
def set_tensor_final(self, tensor_name):
    """Denotes a tensor as a final output of the computation.

    Args:
      tensor_name: a string, name of a tensor in the graph.
    """
    tensor = self._name_to_tensor(tensor_name)
    self._final_tensors.add(tensor)
Denotes a tensor as a final output of the computation. Args: tensor_name: a string, name of a tensor in the graph.
def _make_random_string(length):
    """Returns a random lowercase, uppercase, alphanumerical string.

    :param int length: The length in bytes of the string to generate.
    """
    chars = string.ascii_lowercase + string.ascii_uppercase + string.digits
    return ''.join(random.choice(chars) for x in range(length))
Returns a random lowercase, uppercase, alphanumerical string.

:param int length: The length in bytes of the string to generate.
Below is the instruction that describes the task:

### Input:
Returns a random lowercase, uppercase, alphanumerical string.

:param int length: The length in bytes of the string to generate.

### Response:
def _make_random_string(length):
    """Returns a random lowercase, uppercase, alphanumerical string.

    :param int length: The length in bytes of the string to generate.
    """
    chars = string.ascii_lowercase + string.ascii_uppercase + string.digits
    return ''.join(random.choice(chars) for x in range(length))
def send_apply_request(self, socket, f, args=None, kwargs=None,
                       subheader=None, track=False, ident=None):
    """construct and send an apply message via a socket.

    This is the principal method with which all engine execution is performed
    by views.
    """
    if self._closed:
        raise RuntimeError("Client cannot be used after its sockets have been closed")

    # defaults:
    args = args if args is not None else []
    kwargs = kwargs if kwargs is not None else {}
    subheader = subheader if subheader is not None else {}

    # validate arguments
    if not callable(f) and not isinstance(f, Reference):
        raise TypeError("f must be callable, not %s"%type(f))
    if not isinstance(args, (tuple, list)):
        raise TypeError("args must be tuple or list, not %s"%type(args))
    if not isinstance(kwargs, dict):
        raise TypeError("kwargs must be dict, not %s"%type(kwargs))
    if not isinstance(subheader, dict):
        raise TypeError("subheader must be dict, not %s"%type(subheader))

    bufs = util.pack_apply_message(f,args,kwargs)

    msg = self.session.send(socket, "apply_request", buffers=bufs, ident=ident,
                            subheader=subheader, track=track)

    msg_id = msg['header']['msg_id']
    self.outstanding.add(msg_id)
    if ident:
        # possibly routed to a specific engine
        if isinstance(ident, list):
            ident = ident[-1]
        if ident in self._engines.values():
            # save for later, in case of engine death
            self._outstanding_dict[ident].add(msg_id)
    self.history.append(msg_id)
    self.metadata[msg_id]['submitted'] = datetime.now()

    return msg
construct and send an apply message via a socket. This is the principal method with which all engine execution is performed by views.
Below is the instruction that describes the task:

### Input:
construct and send an apply message via a socket.

This is the principal method with which all engine execution is performed
by views.

### Response:
def send_apply_request(self, socket, f, args=None, kwargs=None,
                       subheader=None, track=False, ident=None):
    """construct and send an apply message via a socket.

    This is the principal method with which all engine execution is performed
    by views.
    """
    if self._closed:
        raise RuntimeError("Client cannot be used after its sockets have been closed")

    # defaults:
    args = args if args is not None else []
    kwargs = kwargs if kwargs is not None else {}
    subheader = subheader if subheader is not None else {}

    # validate arguments
    if not callable(f) and not isinstance(f, Reference):
        raise TypeError("f must be callable, not %s"%type(f))
    if not isinstance(args, (tuple, list)):
        raise TypeError("args must be tuple or list, not %s"%type(args))
    if not isinstance(kwargs, dict):
        raise TypeError("kwargs must be dict, not %s"%type(kwargs))
    if not isinstance(subheader, dict):
        raise TypeError("subheader must be dict, not %s"%type(subheader))

    bufs = util.pack_apply_message(f,args,kwargs)

    msg = self.session.send(socket, "apply_request", buffers=bufs, ident=ident,
                            subheader=subheader, track=track)

    msg_id = msg['header']['msg_id']
    self.outstanding.add(msg_id)
    if ident:
        # possibly routed to a specific engine
        if isinstance(ident, list):
            ident = ident[-1]
        if ident in self._engines.values():
            # save for later, in case of engine death
            self._outstanding_dict[ident].add(msg_id)
    self.history.append(msg_id)
    self.metadata[msg_id]['submitted'] = datetime.now()

    return msg
def create_history_model(self, model, inherited):
    """
    Creates a historical model to associate with the model provided.
    """
    attrs = {
        "__module__": self.module,
        "_history_excluded_fields": self.excluded_fields,
    }

    app_module = "%s.models" % model._meta.app_label

    if inherited:
        # inherited use models module
        attrs["__module__"] = model.__module__
    elif model.__module__ != self.module:
        # registered under different app
        attrs["__module__"] = self.module
    elif app_module != self.module:
        # Abuse an internal API because the app registry is loading.
        app = apps.app_configs[model._meta.app_label]
        models_module = app.name
        attrs["__module__"] = models_module

    fields = self.copy_fields(model)
    attrs.update(fields)
    attrs.update(self.get_extra_fields(model, fields))
    # type in python2 wants str as a first argument
    attrs.update(Meta=type(str("Meta"), (), self.get_meta_options(model)))

    if self.table_name is not None:
        attrs["Meta"].db_table = self.table_name

    # Set as the default then check for overrides
    name = self.get_history_model_name(model)

    registered_models[model._meta.db_table] = model
    return python_2_unicode_compatible(type(str(name), self.bases, attrs))
Creates a historical model to associate with the model provided.
Below is the instruction that describes the task:

### Input:
Creates a historical model to associate with the model provided.

### Response:
def create_history_model(self, model, inherited):
    """
    Creates a historical model to associate with the model provided.
    """
    attrs = {
        "__module__": self.module,
        "_history_excluded_fields": self.excluded_fields,
    }

    app_module = "%s.models" % model._meta.app_label

    if inherited:
        # inherited use models module
        attrs["__module__"] = model.__module__
    elif model.__module__ != self.module:
        # registered under different app
        attrs["__module__"] = self.module
    elif app_module != self.module:
        # Abuse an internal API because the app registry is loading.
        app = apps.app_configs[model._meta.app_label]
        models_module = app.name
        attrs["__module__"] = models_module

    fields = self.copy_fields(model)
    attrs.update(fields)
    attrs.update(self.get_extra_fields(model, fields))
    # type in python2 wants str as a first argument
    attrs.update(Meta=type(str("Meta"), (), self.get_meta_options(model)))

    if self.table_name is not None:
        attrs["Meta"].db_table = self.table_name

    # Set as the default then check for overrides
    name = self.get_history_model_name(model)

    registered_models[model._meta.db_table] = model
    return python_2_unicode_compatible(type(str(name), self.bases, attrs))
def _timestamp_regulator(self):
    """
    Makes a dictionary whose keys are audio file basenames and whose
    values are a list of word blocks from unregulated timestamps and
    updates the main timestamp attribute. After all done, purges
    unregulated ones.
    In case the audio file was large enough to be splitted, it adds
    seconds to correct timing and in case the timestamp was manually
    loaded, it leaves it alone.
    Note that the difference between self.__timestamps and
    self.__timestamps_unregulated is that in the regulated version,
    right after the word, a list of word blocks must appear. However in
    the unregulated version, after a word, a list of individual splits
    containing word blocks would appear!
    """
    unified_timestamps = _PrettyDefaultDict(list)
    staged_files = self._list_audio_files(sub_dir="staging")
    for timestamp_basename in self.__timestamps_unregulated:
        if len(self.__timestamps_unregulated[timestamp_basename]) > 1:
            # File has been splitted
            timestamp_name = ''.join(timestamp_basename.split('.')[:-1])
            staged_splitted_files_of_timestamp = list(
                filter(lambda staged_file: (
                    timestamp_name == staged_file[:-3] and
                    all([(x in set(map(str, range(10))))
                         for x in staged_file[-3:]])),
                    staged_files))
            if len(staged_splitted_files_of_timestamp) == 0:
                self.__errors[(time(), timestamp_basename)] = {
                    "reason": "Missing staged file",
                    "current_staged_files": staged_files}
                continue
            staged_splitted_files_of_timestamp.sort()
            unified_timestamp = list()
            for staging_digits, splitted_file in enumerate(
                    self.__timestamps_unregulated[timestamp_basename]):
                prev_splits_sec = 0
                if int(staging_digits) != 0:
                    prev_splits_sec = self._get_audio_duration_seconds(
                        "{}/staging/{}{:03d}".format(
                            self.src_dir, timestamp_name, staging_digits - 1))
                for word_block in splitted_file:
                    unified_timestamp.append(
                        _WordBlock(
                            word=word_block.word,
                            start=round(word_block.start + prev_splits_sec, 2),
                            end=round(word_block.end + prev_splits_sec, 2)))
            unified_timestamps[str(timestamp_basename)] += unified_timestamp
        else:
            unified_timestamps[timestamp_basename] += \
                self.__timestamps_unregulated[timestamp_basename][0]
    self.__timestamps.update(unified_timestamps)
    self.__timestamps_unregulated = _PrettyDefaultDict(list)
Makes a dictionary whose keys are audio file basenames and whose values are a list of word blocks from unregulated timestamps and updates the main timestamp attribute. After all done, purges unregulated ones. In case the audio file was large enough to be splitted, it adds seconds to correct timing and in case the timestamp was manually loaded, it leaves it alone. Note that the difference between self.__timestamps and self.__timestamps_unregulated is that in the regulated version, right after the word, a list of word blocks must appear. However in the unregulated version, after a word, a list of individual splits containing word blocks would appear!
Below is the instruction that describes the task:

### Input:
Makes a dictionary whose keys are audio file basenames and whose
values are a list of word blocks from unregulated timestamps and
updates the main timestamp attribute. After all done, purges
unregulated ones.
In case the audio file was large enough to be splitted, it adds
seconds to correct timing and in case the timestamp was manually
loaded, it leaves it alone.
Note that the difference between self.__timestamps and
self.__timestamps_unregulated is that in the regulated version,
right after the word, a list of word blocks must appear. However in
the unregulated version, after a word, a list of individual splits
containing word blocks would appear!

### Response:
def _timestamp_regulator(self):
    """
    Makes a dictionary whose keys are audio file basenames and whose
    values are a list of word blocks from unregulated timestamps and
    updates the main timestamp attribute. After all done, purges
    unregulated ones.
    In case the audio file was large enough to be splitted, it adds
    seconds to correct timing and in case the timestamp was manually
    loaded, it leaves it alone.
    Note that the difference between self.__timestamps and
    self.__timestamps_unregulated is that in the regulated version,
    right after the word, a list of word blocks must appear. However in
    the unregulated version, after a word, a list of individual splits
    containing word blocks would appear!
    """
    unified_timestamps = _PrettyDefaultDict(list)
    staged_files = self._list_audio_files(sub_dir="staging")
    for timestamp_basename in self.__timestamps_unregulated:
        if len(self.__timestamps_unregulated[timestamp_basename]) > 1:
            # File has been splitted
            timestamp_name = ''.join(timestamp_basename.split('.')[:-1])
            staged_splitted_files_of_timestamp = list(
                filter(lambda staged_file: (
                    timestamp_name == staged_file[:-3] and
                    all([(x in set(map(str, range(10))))
                         for x in staged_file[-3:]])),
                    staged_files))
            if len(staged_splitted_files_of_timestamp) == 0:
                self.__errors[(time(), timestamp_basename)] = {
                    "reason": "Missing staged file",
                    "current_staged_files": staged_files}
                continue
            staged_splitted_files_of_timestamp.sort()
            unified_timestamp = list()
            for staging_digits, splitted_file in enumerate(
                    self.__timestamps_unregulated[timestamp_basename]):
                prev_splits_sec = 0
                if int(staging_digits) != 0:
                    prev_splits_sec = self._get_audio_duration_seconds(
                        "{}/staging/{}{:03d}".format(
                            self.src_dir, timestamp_name, staging_digits - 1))
                for word_block in splitted_file:
                    unified_timestamp.append(
                        _WordBlock(
                            word=word_block.word,
                            start=round(word_block.start + prev_splits_sec, 2),
                            end=round(word_block.end + prev_splits_sec, 2)))
            unified_timestamps[str(timestamp_basename)] += unified_timestamp
        else:
            unified_timestamps[timestamp_basename] += \
                self.__timestamps_unregulated[timestamp_basename][0]
    self.__timestamps.update(unified_timestamps)
    self.__timestamps_unregulated = _PrettyDefaultDict(list)
def catch_errors(f):
    """
    Catches specific errors in admin actions and shows a friendly error.
    """
    @functools.wraps(f)
    def wrapper(self, request, *args, **kwargs):
        try:
            return f(self, request, *args, **kwargs)
        except exceptions.CertificateExpired:
            self.message_user(
                request,
                _('The AFIP Taxpayer certificate has expired.'),
                messages.ERROR,
            )
        except exceptions.UntrustedCertificate:
            self.message_user(
                request,
                _('The AFIP Taxpayer certificate is untrusted.'),
                messages.ERROR,
            )
        except exceptions.CorruptCertificate:
            self.message_user(
                request,
                _('The AFIP Taxpayer certificate is corrupt.'),
                messages.ERROR,
            )
        except exceptions.AuthenticationError as e:
            logger.exception('AFIP auth failed')
            self.message_user(
                request,
                _('An unknown authentication error has ocurred: %s') % e,
                messages.ERROR,
            )

    return wrapper
Catches specific errors in admin actions and shows a friendly error.
Below is the instruction that describes the task:

### Input:
Catches specific errors in admin actions and shows a friendly error.

### Response:
def catch_errors(f):
    """
    Catches specific errors in admin actions and shows a friendly error.
    """
    @functools.wraps(f)
    def wrapper(self, request, *args, **kwargs):
        try:
            return f(self, request, *args, **kwargs)
        except exceptions.CertificateExpired:
            self.message_user(
                request,
                _('The AFIP Taxpayer certificate has expired.'),
                messages.ERROR,
            )
        except exceptions.UntrustedCertificate:
            self.message_user(
                request,
                _('The AFIP Taxpayer certificate is untrusted.'),
                messages.ERROR,
            )
        except exceptions.CorruptCertificate:
            self.message_user(
                request,
                _('The AFIP Taxpayer certificate is corrupt.'),
                messages.ERROR,
            )
        except exceptions.AuthenticationError as e:
            logger.exception('AFIP auth failed')
            self.message_user(
                request,
                _('An unknown authentication error has ocurred: %s') % e,
                messages.ERROR,
            )

    return wrapper
def _ExtractJQuery(self, jquery_raw):
    """Extracts values from a JQuery string.

    Args:
      jquery_raw (str): JQuery string.

    Returns:
      dict[str, str]: extracted values.
    """
    data_part = ''
    if not jquery_raw:
        return {}

    if '[' in jquery_raw:
        _, _, first_part = jquery_raw.partition('[')
        data_part, _, _ = first_part.partition(']')
    elif jquery_raw.startswith('//'):
        _, _, first_part = jquery_raw.partition('{')
        data_part = '{{{0:s}'.format(first_part)
    elif '({' in jquery_raw:
        _, _, first_part = jquery_raw.partition('(')
        data_part, _, _ = first_part.rpartition(')')

    if not data_part:
        return {}

    try:
        data_dict = json.loads(data_part)
    except ValueError:
        return {}

    return data_dict
Extracts values from a JQuery string.

Args:
  jquery_raw (str): JQuery string.

Returns:
  dict[str, str]: extracted values.
Below is the instruction that describes the task:

### Input:
Extracts values from a JQuery string.

Args:
  jquery_raw (str): JQuery string.

Returns:
  dict[str, str]: extracted values.

### Response:
def _ExtractJQuery(self, jquery_raw):
    """Extracts values from a JQuery string.

    Args:
      jquery_raw (str): JQuery string.

    Returns:
      dict[str, str]: extracted values.
    """
    data_part = ''
    if not jquery_raw:
        return {}

    if '[' in jquery_raw:
        _, _, first_part = jquery_raw.partition('[')
        data_part, _, _ = first_part.partition(']')
    elif jquery_raw.startswith('//'):
        _, _, first_part = jquery_raw.partition('{')
        data_part = '{{{0:s}'.format(first_part)
    elif '({' in jquery_raw:
        _, _, first_part = jquery_raw.partition('(')
        data_part, _, _ = first_part.rpartition(')')

    if not data_part:
        return {}

    try:
        data_dict = json.loads(data_part)
    except ValueError:
        return {}

    return data_dict
def wrap(self, value, session=None):
    ''' Validates that ``value`` is an ObjectId (or hex representation
        of one), then returns it '''
    self.validate_wrap(value)
    if isinstance(value, bytes) or isinstance(value, basestring):
        return ObjectId(value)
    return value
Validates that ``value`` is an ObjectId (or hex representation of one), then returns it
Below is the instruction that describes the task:

### Input:
Validates that ``value`` is an ObjectId (or hex representation
of one), then returns it

### Response:
def wrap(self, value, session=None):
    ''' Validates that ``value`` is an ObjectId (or hex representation
        of one), then returns it '''
    self.validate_wrap(value)
    if isinstance(value, bytes) or isinstance(value, basestring):
        return ObjectId(value)
    return value
def _send_ffe(self, pid, app_id, app_flags, fr):
    """Send a flood-fill end packet.

    The cores and regions that the application should be loaded to will
    have been specified by a stream of flood-fill core select packets
    (FFCS).
    """
    arg1 = (NNCommands.flood_fill_end << 24) | pid
    arg2 = (app_id << 24) | (app_flags << 18)
    self._send_scp(255, 255, 0, SCPCommands.nearest_neighbour_packet,
                   arg1, arg2, fr)
Send a flood-fill end packet. The cores and regions that the application should be loaded to will have been specified by a stream of flood-fill core select packets (FFCS).
Below is the instruction that describes the task:

### Input:
Send a flood-fill end packet.

The cores and regions that the application should be loaded to will
have been specified by a stream of flood-fill core select packets
(FFCS).

### Response:
def _send_ffe(self, pid, app_id, app_flags, fr):
    """Send a flood-fill end packet.

    The cores and regions that the application should be loaded to will
    have been specified by a stream of flood-fill core select packets
    (FFCS).
    """
    arg1 = (NNCommands.flood_fill_end << 24) | pid
    arg2 = (app_id << 24) | (app_flags << 18)
    self._send_scp(255, 255, 0, SCPCommands.nearest_neighbour_packet,
                   arg1, arg2, fr)
def get_parser():
    """
    This is a helper method to return an argparse parser, to be used with the
    Sphinx argparse plugin for documentation.
    """
    manager = cfg.build_manager()
    source = cfg.build_command_line_source(prog='prospector', description=None)
    return source.build_parser(manager.settings, None)
This is a helper method to return an argparse parser, to be used with the Sphinx argparse plugin for documentation.
Below is the instruction that describes the task:

### Input:
This is a helper method to return an argparse parser, to be used with the
Sphinx argparse plugin for documentation.

### Response:
def get_parser():
    """
    This is a helper method to return an argparse parser, to be used with the
    Sphinx argparse plugin for documentation.
    """
    manager = cfg.build_manager()
    source = cfg.build_command_line_source(prog='prospector', description=None)
    return source.build_parser(manager.settings, None)
def resolver(self, vocab_data, attribute):
    """Pull the requested attribute based on the given vocabulary
    and content.
    """
    term_list = vocab_data.get(self.content_vocab, [])
    # Loop through the terms from the vocabulary.
    for term_dict in term_list:
        # Match the name to the current content.
        if term_dict['name'] == self.content:
            return term_dict[attribute]
    return self.content
Pull the requested attribute based on the given vocabulary and content.
Below is the instruction that describes the task:

### Input:
Pull the requested attribute based on the given vocabulary
and content.

### Response:
def resolver(self, vocab_data, attribute):
    """Pull the requested attribute based on the given vocabulary
    and content.
    """
    term_list = vocab_data.get(self.content_vocab, [])
    # Loop through the terms from the vocabulary.
    for term_dict in term_list:
        # Match the name to the current content.
        if term_dict['name'] == self.content:
            return term_dict[attribute]
    return self.content
def cs_bahdanau_attention(key, context, hidden_size, depth, projected_align=False):
    """ It is a implementation of the Bahdanau et al. attention mechanism. Based on the papers:
        https://arxiv.org/abs/1409.0473 "Neural Machine Translation by Jointly Learning to Align and Translate"
        https://andre-martins.github.io/docs/emnlp2017_final.pdf "Learning What's Easy: Fully Differentiable Neural Easy-First Taggers"
    Args:
        key: A tensorflow tensor with dimensionality [None, None, key_size]
        context: A tensorflow tensor with dimensionality [None, None, max_num_tokens, token_size]
        hidden_size: Number of units in hidden representation
        depth: Number of csoftmax usages
        projected_align: Using bidirectional lstm for hidden representation of context.
            If true, beetween input and attention mechanism insert layer of bidirectional lstm
            with dimensionality [hidden_size].
            If false, bidirectional lstm is not used.
    Returns:
        output: Tensor at the output with dimensionality [None, None, depth * hidden_size]
    """
    if hidden_size % 2 != 0:
        raise ValueError("hidden size must be dividable by two")
    batch_size = tf.shape(context)[0]
    max_num_tokens, token_size = context.get_shape().as_list()[-2:]
    r_context = tf.reshape(context, shape=[-1, max_num_tokens, token_size])

    # projected context: [None, max_num_tokens, token_size]
    projected_context = tf.layers.dense(r_context, token_size,
                                        kernel_initializer=xav(),
                                        name='projected_context')
    # projected_key: [None, None, hidden_size]
    projected_key = tf.layers.dense(key, hidden_size,
                                    kernel_initializer=xav(),
                                    name='projected_key')
    r_projected_key = \
        tf.tile(tf.reshape(projected_key, shape=[-1, 1, hidden_size]),
                [1, max_num_tokens, 1])

    lstm_fw_cell = tf.nn.rnn_cell.LSTMCell(hidden_size//2)
    lstm_bw_cell = tf.nn.rnn_cell.LSTMCell(hidden_size//2)
    (output_fw, output_bw), states = \
        tf.nn.bidirectional_dynamic_rnn(cell_fw=lstm_fw_cell,
                                        cell_bw=lstm_bw_cell,
                                        inputs=projected_context,
                                        dtype=tf.float32)
    # bilstm_output: [-1, max_num_tokens, hidden_size]
    bilstm_output = tf.concat([output_fw, output_bw], -1)

    concat_h_state = tf.concat([r_projected_key, output_fw, output_bw], -1)
    if projected_align:
        log.info("Using projected attention alignment")
        h_state_for_attn_alignment = bilstm_output
        aligned_h_state = csoftmax_attention.attention_bah_block(
            concat_h_state, h_state_for_attn_alignment, depth)
        output = \
            tf.reshape(aligned_h_state,
                       shape=[batch_size, -1, depth * hidden_size])
    else:
        log.info("Using without projected attention alignment")
        h_state_for_attn_alignment = projected_context
        aligned_h_state = csoftmax_attention.attention_bah_block(
            concat_h_state, h_state_for_attn_alignment, depth)
        output = \
            tf.reshape(aligned_h_state,
                       shape=[batch_size, -1, depth * token_size])
    return output
It is a implementation of the Bahdanau et al. attention mechanism. Based on the papers:
    https://arxiv.org/abs/1409.0473 "Neural Machine Translation by Jointly Learning to Align and Translate"
    https://andre-martins.github.io/docs/emnlp2017_final.pdf "Learning What's Easy: Fully Differentiable Neural Easy-First Taggers"

Args:
    key: A tensorflow tensor with dimensionality [None, None, key_size]
    context: A tensorflow tensor with dimensionality [None, None, max_num_tokens, token_size]
    hidden_size: Number of units in hidden representation
    depth: Number of csoftmax usages
    projected_align: Using bidirectional lstm for hidden representation of context.
        If true, beetween input and attention mechanism insert layer of bidirectional lstm
        with dimensionality [hidden_size].
        If false, bidirectional lstm is not used.

Returns:
    output: Tensor at the output with dimensionality [None, None, depth * hidden_size]
Below is the instruction that describes the task:

### Input:
It is a implementation of the Bahdanau et al. attention mechanism. Based on the papers:
    https://arxiv.org/abs/1409.0473 "Neural Machine Translation by Jointly Learning to Align and Translate"
    https://andre-martins.github.io/docs/emnlp2017_final.pdf "Learning What's Easy: Fully Differentiable Neural Easy-First Taggers"

Args:
    key: A tensorflow tensor with dimensionality [None, None, key_size]
    context: A tensorflow tensor with dimensionality [None, None, max_num_tokens, token_size]
    hidden_size: Number of units in hidden representation
    depth: Number of csoftmax usages
    projected_align: Using bidirectional lstm for hidden representation of context.
        If true, beetween input and attention mechanism insert layer of bidirectional lstm
        with dimensionality [hidden_size].
        If false, bidirectional lstm is not used.

Returns:
    output: Tensor at the output with dimensionality [None, None, depth * hidden_size]

### Response:
def cs_bahdanau_attention(key, context, hidden_size, depth, projected_align=False):
    """ It is a implementation of the Bahdanau et al. attention mechanism. Based on the papers:
        https://arxiv.org/abs/1409.0473 "Neural Machine Translation by Jointly Learning to Align and Translate"
        https://andre-martins.github.io/docs/emnlp2017_final.pdf "Learning What's Easy: Fully Differentiable Neural Easy-First Taggers"
    Args:
        key: A tensorflow tensor with dimensionality [None, None, key_size]
        context: A tensorflow tensor with dimensionality [None, None, max_num_tokens, token_size]
        hidden_size: Number of units in hidden representation
        depth: Number of csoftmax usages
        projected_align: Using bidirectional lstm for hidden representation of context.
            If true, beetween input and attention mechanism insert layer of bidirectional lstm
            with dimensionality [hidden_size].
            If false, bidirectional lstm is not used.
    Returns:
        output: Tensor at the output with dimensionality [None, None, depth * hidden_size]
    """
    if hidden_size % 2 != 0:
        raise ValueError("hidden size must be dividable by two")
    batch_size = tf.shape(context)[0]
    max_num_tokens, token_size = context.get_shape().as_list()[-2:]
    r_context = tf.reshape(context, shape=[-1, max_num_tokens, token_size])

    # projected context: [None, max_num_tokens, token_size]
    projected_context = tf.layers.dense(r_context, token_size,
                                        kernel_initializer=xav(),
                                        name='projected_context')
    # projected_key: [None, None, hidden_size]
    projected_key = tf.layers.dense(key, hidden_size,
                                    kernel_initializer=xav(),
                                    name='projected_key')
    r_projected_key = \
        tf.tile(tf.reshape(projected_key, shape=[-1, 1, hidden_size]),
                [1, max_num_tokens, 1])

    lstm_fw_cell = tf.nn.rnn_cell.LSTMCell(hidden_size//2)
    lstm_bw_cell = tf.nn.rnn_cell.LSTMCell(hidden_size//2)
    (output_fw, output_bw), states = \
        tf.nn.bidirectional_dynamic_rnn(cell_fw=lstm_fw_cell,
                                        cell_bw=lstm_bw_cell,
                                        inputs=projected_context,
                                        dtype=tf.float32)
    # bilstm_output: [-1, max_num_tokens, hidden_size]
    bilstm_output = tf.concat([output_fw, output_bw], -1)

    concat_h_state = tf.concat([r_projected_key, output_fw, output_bw], -1)
    if projected_align:
        log.info("Using projected attention alignment")
        h_state_for_attn_alignment = bilstm_output
        aligned_h_state = csoftmax_attention.attention_bah_block(
            concat_h_state, h_state_for_attn_alignment, depth)
        output = \
            tf.reshape(aligned_h_state,
                       shape=[batch_size, -1, depth * hidden_size])
    else:
        log.info("Using without projected attention alignment")
        h_state_for_attn_alignment = projected_context
        aligned_h_state = csoftmax_attention.attention_bah_block(
            concat_h_state, h_state_for_attn_alignment, depth)
        output = \
            tf.reshape(aligned_h_state,
                       shape=[batch_size, -1, depth * token_size])
    return output
def get_version_from_list(v, vlist):
    """See if we can match v (string) in vlist (list of strings)
    Linux has to match in a fuzzy way."""
    if is_windows:
        # Simple case, just find it in the list
        if v in vlist:
            return v
        else:
            return None
    else:
        # Fuzzy match: normalize version number first, but still return
        # original non-normalized form.
        fuzz = 0.001
        for vi in vlist:
            if math.fabs(linux_ver_normalize(vi) - linux_ver_normalize(v)) < fuzz:
                return vi
        # Not found
        return None
See if we can match v (string) in vlist (list of strings) Linux has to match in a fuzzy way.
Below is the instruction that describes the task:

### Input:
See if we can match v (string) in vlist (list of strings)
Linux has to match in a fuzzy way.

### Response:
def get_version_from_list(v, vlist):
    """See if we can match v (string) in vlist (list of strings)
    Linux has to match in a fuzzy way."""
    if is_windows:
        # Simple case, just find it in the list
        if v in vlist:
            return v
        else:
            return None
    else:
        # Fuzzy match: normalize version number first, but still return
        # original non-normalized form.
        fuzz = 0.001
        for vi in vlist:
            if math.fabs(linux_ver_normalize(vi) - linux_ver_normalize(v)) < fuzz:
                return vi
        # Not found
        return None
def to_vec3(self):
    """Convert this vector4 instance into a vector3 instance."""
    vec3 = Vector3()
    vec3.x = self.x
    vec3.y = self.y
    vec3.z = self.z
    if self.w != 0:
        vec3 /= self.w
    return vec3
Convert this vector4 instance into a vector3 instance.
Below is the instruction that describes the task:

### Input:
Convert this vector4 instance into a vector3 instance.

### Response:
def to_vec3(self):
    """Convert this vector4 instance into a vector3 instance."""
    vec3 = Vector3()
    vec3.x = self.x
    vec3.y = self.y
    vec3.z = self.z
    if self.w != 0:
        vec3 /= self.w
    return vec3
def updateMappingsOnDeviceType(self, thingTypeId, logicalInterfaceId, mappingsObject, notificationStrategy = "never"):
    """
    Add mappings for a thing type.
    Parameters:
        - thingTypeId (string) - the thing type
        - logicalInterfaceId (string) - the id of the application interface these mappings are for
        - notificationStrategy (string) - the notification strategy to use for these mappings
        - mappingsObject (Python dictionary corresponding to JSON object) example:

        { # eventid -> { property -> eventid property expression }
          "status" : {
            "eventCount" : "($state.eventCount == -1) ? $event.d.count : ($state.eventCount+1)",
          }
        }

    Throws APIException on failure.
    """
    req = ApiClient.oneThingTypeMappingUrl % (self.host, "/draft", thingTypeId, logicalInterfaceId)
    try:
        mappings = json.dumps({
            "logicalInterfaceId" : logicalInterfaceId,
            "notificationStrategy" : notificationStrategy,
            "propertyMappings" : mappingsObject
        })
    except Exception as exc:
        raise ibmiotf.APIException(-1, "Exception formatting mappings object to JSON", exc)
    resp = requests.put(req, auth=self.credentials,
                        headers={"Content-Type":"application/json"},
                        data=mappings, verify=self.verify)
    if resp.status_code == 200:
        self.logger.debug("Thing type mappings updated for logical interface")
    else:
        raise ibmiotf.APIException(resp.status_code, "HTTP error updating thing type mappings for logical interface", resp)
    return resp.json()
Add mappings for a thing type.
Parameters:
    - thingTypeId (string) - the thing type
    - logicalInterfaceId (string) - the id of the application interface these mappings are for
    - notificationStrategy (string) - the notification strategy to use for these mappings
    - mappingsObject (Python dictionary corresponding to JSON object) example:

    { # eventid -> { property -> eventid property expression }
      "status" : {
        "eventCount" : "($state.eventCount == -1) ? $event.d.count : ($state.eventCount+1)",
      }
    }

Throws APIException on failure.
Below is the instruction that describes the task:

### Input:
Add mappings for a thing type.
Parameters:
    - thingTypeId (string) - the thing type
    - logicalInterfaceId (string) - the id of the application interface these mappings are for
    - notificationStrategy (string) - the notification strategy to use for these mappings
    - mappingsObject (Python dictionary corresponding to JSON object) example:

    { # eventid -> { property -> eventid property expression }
      "status" : {
        "eventCount" : "($state.eventCount == -1) ? $event.d.count : ($state.eventCount+1)",
      }
    }

Throws APIException on failure.

### Response:
def updateMappingsOnDeviceType(self, thingTypeId, logicalInterfaceId, mappingsObject, notificationStrategy = "never"):
    """
    Add mappings for a thing type.
    Parameters:
        - thingTypeId (string) - the thing type
        - logicalInterfaceId (string) - the id of the application interface these mappings are for
        - notificationStrategy (string) - the notification strategy to use for these mappings
        - mappingsObject (Python dictionary corresponding to JSON object) example:

        { # eventid -> { property -> eventid property expression }
          "status" : {
            "eventCount" : "($state.eventCount == -1) ? $event.d.count : ($state.eventCount+1)",
          }
        }

    Throws APIException on failure.
    """
    req = ApiClient.oneThingTypeMappingUrl % (self.host, "/draft", thingTypeId, logicalInterfaceId)
    try:
        mappings = json.dumps({
            "logicalInterfaceId" : logicalInterfaceId,
            "notificationStrategy" : notificationStrategy,
            "propertyMappings" : mappingsObject
        })
    except Exception as exc:
        raise ibmiotf.APIException(-1, "Exception formatting mappings object to JSON", exc)
    resp = requests.put(req, auth=self.credentials,
                        headers={"Content-Type":"application/json"},
                        data=mappings, verify=self.verify)
    if resp.status_code == 200:
        self.logger.debug("Thing type mappings updated for logical interface")
    else:
        raise ibmiotf.APIException(resp.status_code, "HTTP error updating thing type mappings for logical interface", resp)
    return resp.json()
def read_frames(self): ''' Read frames from the transport and process them. Some transports may choose to do this in the background, in several threads, and so on. ''' # It's possible in a concurrent environment that our transport handle # has gone away, so handle that cleanly. # TODO: Consider moving this block into Translator base class. In many # ways it belongs there. One of the problems though is that this is # essentially the read loop. Each Transport has different rules for # how to kick this off, and in the case of gevent, this is how a # blocking call to read from the socket is kicked off. if self._transport is None: return # Send a heartbeat (if needed) self._channels[0].send_heartbeat() data = self._transport.read(self._heartbeat) current_time = time.time() if data is None: # Wait for 2 heartbeat intervals before giving up. See AMQP 4.2.7: # "If a peer detects no incoming traffic (i.e. received octets) for two heartbeat intervals or longer, # it should close the connection" if self._heartbeat and (current_time-self._last_octet_time > 2*self._heartbeat): msg = 'Heartbeats not received from %s for %d seconds' % (self._host, 2*self._heartbeat) self.transport_closed(msg=msg) raise ConnectionClosed('Connection is closed: ' + msg) return self._last_octet_time = current_time reader = Reader(data) p_channels = set() try: for frame in Frame.read_frames(reader): if self._debug > 1: self.logger.debug("READ: %s", frame) self._frames_read += 1 ch = self.channel(frame.channel_id) ch.buffer_frame(frame) p_channels.add(ch) except Frame.FrameError as e: # Frame error in the peer, disconnect self.close(reply_code=501, reply_text='frame error from %s : %s' % ( self._host, str(e)), class_id=0, method_id=0, disconnect=True) raise ConnectionClosed("connection is closed: %s : %s" % (self._close_info['reply_code'], self._close_info['reply_text'])) # NOTE: we process channels after buffering unused data in order to # preserve the integrity of the input stream in case a channel needs to # read input, such as when a channel framing error necessitates the use # of the synchronous channel.close method. See `Channel.process_frames`. # # HACK: read the buffer contents and re-buffer. Would prefer to pass # buffer back, but there's no good way of asking the total size of the # buffer, comparing to tell(), and then re-buffering. There's also no # ability to clear the buffer up to the current position. It would be # awesome if we could free that memory without a new allocation. if reader.tell() < len(data): self._transport.buffer(data[reader.tell():]) self._transport.process_channels(p_channels)
Read frames from the transport and process them. Some transports may choose to do this in the background, in several threads, and so on.
Below is the instruction that describes the task: ### Input: Read frames from the transport and process them. Some transports may choose to do this in the background, in several threads, and so on. ### Response: def read_frames(self): ''' Read frames from the transport and process them. Some transports may choose to do this in the background, in several threads, and so on. ''' # It's possible in a concurrent environment that our transport handle # has gone away, so handle that cleanly. # TODO: Consider moving this block into Translator base class. In many # ways it belongs there. One of the problems though is that this is # essentially the read loop. Each Transport has different rules for # how to kick this off, and in the case of gevent, this is how a # blocking call to read from the socket is kicked off. if self._transport is None: return # Send a heartbeat (if needed) self._channels[0].send_heartbeat() data = self._transport.read(self._heartbeat) current_time = time.time() if data is None: # Wait for 2 heartbeat intervals before giving up. See AMQP 4.2.7: # "If a peer detects no incoming traffic (i.e. received octets) for two heartbeat intervals or longer, # it should close the connection" if self._heartbeat and (current_time-self._last_octet_time > 2*self._heartbeat): msg = 'Heartbeats not received from %s for %d seconds' % (self._host, 2*self._heartbeat) self.transport_closed(msg=msg) raise ConnectionClosed('Connection is closed: ' + msg) return self._last_octet_time = current_time reader = Reader(data) p_channels = set() try: for frame in Frame.read_frames(reader): if self._debug > 1: self.logger.debug("READ: %s", frame) self._frames_read += 1 ch = self.channel(frame.channel_id) ch.buffer_frame(frame) p_channels.add(ch) except Frame.FrameError as e: # Frame error in the peer, disconnect self.close(reply_code=501, reply_text='frame error from %s : %s' % ( self._host, str(e)), class_id=0, method_id=0, disconnect=True) raise ConnectionClosed("connection is closed: %s : %s" % (self._close_info['reply_code'], self._close_info['reply_text'])) # NOTE: we process channels after buffering unused data in order to # preserve the integrity of the input stream in case a channel needs to # read input, such as when a channel framing error necessitates the use # of the synchronous channel.close method. See `Channel.process_frames`. # # HACK: read the buffer contents and re-buffer. Would prefer to pass # buffer back, but there's no good way of asking the total size of the # buffer, comparing to tell(), and then re-buffering. There's also no # ability to clear the buffer up to the current position. It would be # awesome if we could free that memory without a new allocation. if reader.tell() < len(data): self._transport.buffer(data[reader.tell():]) self._transport.process_channels(p_channels)
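The two-heartbeat-interval shutdown rule used in `read_frames` above can be isolated into a small pure function for illustration (a sketch, not part of the original library):

```python
def heartbeat_expired(heartbeat, last_octet_time, now):
    """AMQP 4.2.7 rule from the record above: give up after two silent intervals."""
    # A falsy heartbeat (0 or None) means heartbeats are disabled entirely.
    return bool(heartbeat) and (now - last_octet_time > 2 * heartbeat)

# With a 3-second heartbeat: 5s of silence is tolerated, 7s is not.
print(heartbeat_expired(3, 100.0, 105.0))  # False
print(heartbeat_expired(3, 100.0, 107.0))  # True
print(heartbeat_expired(0, 100.0, 999.0))  # False: heartbeats disabled
```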
def generate(categorize=unicodedata.category, group_class=RangeGroup): ''' Generate a dict of RangeGroups for each unicode character category, including general ones. :param categorize: category function, defaults to unicodedata.category. :type categorize: callable :param group_class: class for range groups, defaults to RangeGroup :type group_class: type :returns: dictionary of categories and range groups :rtype: dict of RangeGroup ''' categories = collections.defaultdict(list) last_category = None last_range = None for c in range(sys.maxunicode + 1): category = categorize(chr(c)) if category != last_category: last_category = category last_range = [c, c + 1] categories[last_category].append(last_range) else: last_range[1] += 1 categories = {k: group_class(v) for k, v in categories.items()} categories.update({ k: merge(*map(categories.__getitem__, g)) for k, g in itertools.groupby(sorted(categories), key=lambda k: k[0]) }) return categories
Generate a dict of RangeGroups for each unicode character category, including general ones. :param categorize: category function, defaults to unicodedata.category. :type categorize: callable :param group_class: class for range groups, defaults to RangeGroup :type group_class: type :returns: dictionary of categories and range groups :rtype: dict of RangeGroup
Below is the instruction that describes the task: ### Input: Generate a dict of RangeGroups for each unicode character category, including general ones. :param categorize: category function, defaults to unicodedata.category. :type categorize: callable :param group_class: class for range groups, defaults to RangeGroup :type group_class: type :returns: dictionary of categories and range groups :rtype: dict of RangeGroup ### Response: def generate(categorize=unicodedata.category, group_class=RangeGroup): ''' Generate a dict of RangeGroups for each unicode character category, including general ones. :param categorize: category function, defaults to unicodedata.category. :type categorize: callable :param group_class: class for range groups, defaults to RangeGroup :type group_class: type :returns: dictionary of categories and range groups :rtype: dict of RangeGroup ''' categories = collections.defaultdict(list) last_category = None last_range = None for c in range(sys.maxunicode + 1): category = categorize(chr(c)) if category != last_category: last_category = category last_range = [c, c + 1] categories[last_category].append(last_range) else: last_range[1] += 1 categories = {k: group_class(v) for k, v in categories.items()} categories.update({ k: merge(*map(categories.__getitem__, g)) for k, g in itertools.groupby(sorted(categories), key=lambda k: k[0]) }) return categories
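The range-building loop in `generate` above can be demonstrated standalone on a small slice of the codepoint space; this sketch drops the `RangeGroup`/`merge` post-processing, which is specific to the source library:

```python
import collections
import unicodedata

def category_ranges(lo, hi, categorize=unicodedata.category):
    """Sketch of the loop above, limited to codepoints in [lo, hi)."""
    categories = collections.defaultdict(list)
    last_category = None
    last_range = None
    for c in range(lo, hi):
        category = categorize(chr(c))
        if category != last_category:
            # Category changed: open a new half-open range [c, c+1).
            last_category = category
            last_range = [c, c + 1]
            categories[category].append(last_range)
        else:
            # Same category as the previous codepoint: extend the range.
            last_range[1] += 1
    return dict(categories)

ranges = category_ranges(0x41, 0x5B)  # 'A'..'Z', all uppercase letters
print(ranges)  # {'Lu': [[65, 91]]}
```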
def intersects_id(self, ray_origins, ray_directions, return_locations=False, multiple_hits=True, **kwargs): """ Find the intersections between the current mesh and a list of rays. Parameters ------------ ray_origins: (m,3) float, ray origin points ray_directions: (m,3) float, ray direction vectors multiple_hits: bool, consider multiple hits of each ray or not return_locations: bool, return hit locations or not Returns ----------- index_triangle: (h,) int, index of triangles hit index_ray: (h,) int, index of ray that hit triangle locations: (h,3) float, (optional) position of intersection in space """ (index_tri, index_ray, locations) = ray_triangle_id(triangles=self.mesh.triangles, ray_origins=ray_origins, ray_directions=ray_directions, tree=self.mesh.triangles_tree, multiple_hits=multiple_hits, triangles_normal=self.mesh.face_normals) if return_locations: if len(index_tri) == 0: return index_tri, index_ray, locations unique = grouping.unique_rows(np.column_stack((locations, index_ray)))[0] return index_tri[unique], index_ray[unique], locations[unique] return index_tri, index_ray
Find the intersections between the current mesh and a list of rays. Parameters ------------ ray_origins: (m,3) float, ray origin points ray_directions: (m,3) float, ray direction vectors multiple_hits: bool, consider multiple hits of each ray or not return_locations: bool, return hit locations or not Returns ----------- index_triangle: (h,) int, index of triangles hit index_ray: (h,) int, index of ray that hit triangle locations: (h,3) float, (optional) position of intersection in space
Below is the instruction that describes the task: ### Input: Find the intersections between the current mesh and a list of rays. Parameters ------------ ray_origins: (m,3) float, ray origin points ray_directions: (m,3) float, ray direction vectors multiple_hits: bool, consider multiple hits of each ray or not return_locations: bool, return hit locations or not Returns ----------- index_triangle: (h,) int, index of triangles hit index_ray: (h,) int, index of ray that hit triangle locations: (h,3) float, (optional) position of intersection in space ### Response: def intersects_id(self, ray_origins, ray_directions, return_locations=False, multiple_hits=True, **kwargs): """ Find the intersections between the current mesh and a list of rays. Parameters ------------ ray_origins: (m,3) float, ray origin points ray_directions: (m,3) float, ray direction vectors multiple_hits: bool, consider multiple hits of each ray or not return_locations: bool, return hit locations or not Returns ----------- index_triangle: (h,) int, index of triangles hit index_ray: (h,) int, index of ray that hit triangle locations: (h,3) float, (optional) position of intersection in space """ (index_tri, index_ray, locations) = ray_triangle_id(triangles=self.mesh.triangles, ray_origins=ray_origins, ray_directions=ray_directions, tree=self.mesh.triangles_tree, multiple_hits=multiple_hits, triangles_normal=self.mesh.face_normals) if return_locations: if len(index_tri) == 0: return index_tri, index_ray, locations unique = grouping.unique_rows(np.column_stack((locations, index_ray)))[0] return index_tri[unique], index_ray[unique], locations[unique] return index_tri, index_ray
def create_dbsnapshot(self, snapshot_id, dbinstance_id): """ Create a new DB snapshot. :type snapshot_id: string :param snapshot_id: The identifier for the DBSnapshot :type dbinstance_id: string :param dbinstance_id: The source identifier for the RDS instance from which the snapshot is created. :rtype: :class:`boto.rds.dbsnapshot.DBSnapshot` :return: The newly created DBSnapshot """ params = {'DBSnapshotIdentifier' : snapshot_id, 'DBInstanceIdentifier' : dbinstance_id} return self.get_object('CreateDBSnapshot', params, DBSnapshot)
Create a new DB snapshot. :type snapshot_id: string :param snapshot_id: The identifier for the DBSnapshot :type dbinstance_id: string :param dbinstance_id: The source identifier for the RDS instance from which the snapshot is created. :rtype: :class:`boto.rds.dbsnapshot.DBSnapshot` :return: The newly created DBSnapshot
Below is the instruction that describes the task: ### Input: Create a new DB snapshot. :type snapshot_id: string :param snapshot_id: The identifier for the DBSnapshot :type dbinstance_id: string :param dbinstance_id: The source identifier for the RDS instance from which the snapshot is created. :rtype: :class:`boto.rds.dbsnapshot.DBSnapshot` :return: The newly created DBSnapshot ### Response: def create_dbsnapshot(self, snapshot_id, dbinstance_id): """ Create a new DB snapshot. :type snapshot_id: string :param snapshot_id: The identifier for the DBSnapshot :type dbinstance_id: string :param dbinstance_id: The source identifier for the RDS instance from which the snapshot is created. :rtype: :class:`boto.rds.dbsnapshot.DBSnapshot` :return: The newly created DBSnapshot """ params = {'DBSnapshotIdentifier' : snapshot_id, 'DBInstanceIdentifier' : dbinstance_id} return self.get_object('CreateDBSnapshot', params, DBSnapshot)
def subset(self, service=None): """Subset the dataset. Open the remote dataset and get a client for talking to ``service``. Parameters ---------- service : str, optional The name of the service for subsetting the dataset. Defaults to 'NetcdfSubset' or 'NetcdfServer', in that order, depending on the services listed in the catalog. Returns ------- a client for communicating using ``service`` """ if service is None: for serviceName in self.ncssServiceNames: if serviceName in self.access_urls: service = serviceName break else: raise RuntimeError('Subset access is not available for this dataset.') elif service not in self.ncssServiceNames: raise ValueError(service + ' is not a valid service for subset. Options are: ' + ', '.join(self.ncssServiceNames)) return self.access_with_service(service)
Subset the dataset. Open the remote dataset and get a client for talking to ``service``. Parameters ---------- service : str, optional The name of the service for subsetting the dataset. Defaults to 'NetcdfSubset' or 'NetcdfServer', in that order, depending on the services listed in the catalog. Returns ------- a client for communicating using ``service``
Below is the instruction that describes the task: ### Input: Subset the dataset. Open the remote dataset and get a client for talking to ``service``. Parameters ---------- service : str, optional The name of the service for subsetting the dataset. Defaults to 'NetcdfSubset' or 'NetcdfServer', in that order, depending on the services listed in the catalog. Returns ------- a client for communicating using ``service`` ### Response: def subset(self, service=None): """Subset the dataset. Open the remote dataset and get a client for talking to ``service``. Parameters ---------- service : str, optional The name of the service for subsetting the dataset. Defaults to 'NetcdfSubset' or 'NetcdfServer', in that order, depending on the services listed in the catalog. Returns ------- a client for communicating using ``service`` """ if service is None: for serviceName in self.ncssServiceNames: if serviceName in self.access_urls: service = serviceName break else: raise RuntimeError('Subset access is not available for this dataset.') elif service not in self.ncssServiceNames: raise ValueError(service + ' is not a valid service for subset. Options are: ' + ', '.join(self.ncssServiceNames)) return self.access_with_service(service)
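The service-fallback logic in `subset` above reduces to a small standalone function; this is a sketch in which the instance attributes are adapted into plain arguments:

```python
def pick_subset_service(access_urls, ncss_service_names=("NetcdfSubset", "NetcdfServer")):
    """Return the first preferred service the catalog actually offers."""
    for name in ncss_service_names:
        if name in access_urls:
            return name
    # Mirrors the for/else branch in the record above.
    raise RuntimeError("Subset access is not available for this dataset.")

# A catalog offering only the older service name falls through to it.
print(pick_subset_service({"NetcdfServer": "http://example/ncss", "OPENDAP": "http://example/dap"}))
```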
def sentences(self): """ Returns the instances by sentences, and yields a list of tokens, similar to the pywsd.semcor.sentences. >>> coarse_wsd = SemEval2007_Coarse_WSD() >>> for sent in coarse_wsd.sentences(): >>> for token in sent: >>> print token >>> break >>> break word(id=None, text=u'Your', offset=None, sentid=0, paraid=u'd001', term=None) """ for sentid, ys in enumerate(self.yield_sentences()): sent, context_sent, context_doc, inst2ans, textid = ys instances = {} for instance in sent.findAll('instance'): instid = instance['id'] lemma = instance['lemma'] word = instance.text instances[instid] = Instance(instid, lemma, word) tokens = [] for i in sent: # Iterates through BeautifulSoup object. if str(i).startswith('<instance'): # BeautifulSoup.Tag instid = sent.find('instance')['id'] inst = instances[instid] answer = inst2ans[instid] term = Term(instid, answer.pos, inst.lemma, answer.sensekey, type='open') tokens.append(Word(instid, inst.word, sentid, textid, term)) else: # if BeautifulSoup.NavigableString tokens+=[Word(None, w, sentid, textid, None) for w in i.split()] yield tokens
Returns the instances by sentences, and yields a list of tokens, similar to the pywsd.semcor.sentences. >>> coarse_wsd = SemEval2007_Coarse_WSD() >>> for sent in coarse_wsd.sentences(): >>> for token in sent: >>> print token >>> break >>> break word(id=None, text=u'Your', offset=None, sentid=0, paraid=u'd001', term=None)
Below is the instruction that describes the task: ### Input: Returns the instances by sentences, and yields a list of tokens, similar to the pywsd.semcor.sentences. >>> coarse_wsd = SemEval2007_Coarse_WSD() >>> for sent in coarse_wsd.sentences(): >>> for token in sent: >>> print token >>> break >>> break word(id=None, text=u'Your', offset=None, sentid=0, paraid=u'd001', term=None) ### Response: def sentences(self): """ Returns the instances by sentences, and yields a list of tokens, similar to the pywsd.semcor.sentences. >>> coarse_wsd = SemEval2007_Coarse_WSD() >>> for sent in coarse_wsd.sentences(): >>> for token in sent: >>> print token >>> break >>> break word(id=None, text=u'Your', offset=None, sentid=0, paraid=u'd001', term=None) """ for sentid, ys in enumerate(self.yield_sentences()): sent, context_sent, context_doc, inst2ans, textid = ys instances = {} for instance in sent.findAll('instance'): instid = instance['id'] lemma = instance['lemma'] word = instance.text instances[instid] = Instance(instid, lemma, word) tokens = [] for i in sent: # Iterates through BeautifulSoup object. if str(i).startswith('<instance'): # BeautifulSoup.Tag instid = sent.find('instance')['id'] inst = instances[instid] answer = inst2ans[instid] term = Term(instid, answer.pos, inst.lemma, answer.sensekey, type='open') tokens.append(Word(instid, inst.word, sentid, textid, term)) else: # if BeautifulSoup.NavigableString tokens+=[Word(None, w, sentid, textid, None) for w in i.split()] yield tokens
def build(self, builder): """Build XML by appending to builder""" builder.start("Symbol", {}) for child in self.translations: child.build(builder) builder.end("Symbol")
Build XML by appending to builder
Below is the instruction that describes the task: ### Input: Build XML by appending to builder ### Response: def build(self, builder): """Build XML by appending to builder""" builder.start("Symbol", {}) for child in self.translations: child.build(builder) builder.end("Symbol")
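The `build` method above follows a `start`/`end` builder protocol; the standard library's `xml.etree.ElementTree.TreeBuilder` exposes the same interface, so the pattern can be demonstrated without the source's classes. The `TranslatedText` child here is invented for the example:

```python
import xml.etree.ElementTree as ET

builder = ET.TreeBuilder()
builder.start("Symbol", {})
# In the record above, each child translation calls child.build(builder) here;
# we add one hypothetical child by hand to show the nesting.
builder.start("TranslatedText", {"lang": "en"})
builder.data("kg")
builder.end("TranslatedText")
builder.end("Symbol")

root = builder.close()
print(ET.tostring(root).decode())
```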
def autoconf(self): """Implements Munin Plugin Auto-Configuration Option. @return: True if plugin can be auto-configured, False otherwise. """ nginxInfo = NginxInfo(self._host, self._port, self._user, self._password, self._statuspath, self._ssl) return nginxInfo is not None
Implements Munin Plugin Auto-Configuration Option. @return: True if plugin can be auto-configured, False otherwise.
Below is the instruction that describes the task: ### Input: Implements Munin Plugin Auto-Configuration Option. @return: True if plugin can be auto-configured, False otherwise. ### Response: def autoconf(self): """Implements Munin Plugin Auto-Configuration Option. @return: True if plugin can be auto-configured, False otherwise. """ nginxInfo = NginxInfo(self._host, self._port, self._user, self._password, self._statuspath, self._ssl) return nginxInfo is not None
def paginate_queryset(self, queryset, request, view=None): """ adds `max_count` as a running tally of the largest table size. Used for calculating next/previous links later """ result = super(MultipleModelLimitOffsetPagination, self).paginate_queryset(queryset, request, view) try: if self.max_count < self.count: self.max_count = self.count except AttributeError: self.max_count = self.count try: self.total += self.count except AttributeError: self.total = self.count return result
adds `max_count` as a running tally of the largest table size. Used for calculating next/previous links later
Below is the instruction that describes the task: ### Input: adds `max_count` as a running tally of the largest table size. Used for calculating next/previous links later ### Response: def paginate_queryset(self, queryset, request, view=None): """ adds `max_count` as a running tally of the largest table size. Used for calculating next/previous links later """ result = super(MultipleModelLimitOffsetPagination, self).paginate_queryset(queryset, request, view) try: if self.max_count < self.count: self.max_count = self.count except AttributeError: self.max_count = self.count try: self.total += self.count except AttributeError: self.total = self.count return result
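The try/except bookkeeping in `paginate_queryset` above can be sketched standalone; `getattr` with a default is an equivalent way to handle the attributes that do not exist on the first call (illustrative class, not the original):

```python
class LimitOffsetTally:
    """Sketch of the bookkeeping above: track the largest table and the grand total."""

    def paginate(self, count):
        # getattr(..., default) replaces the try/except AttributeError dance.
        self.max_count = max(getattr(self, "max_count", 0), count)
        self.total = getattr(self, "total", 0) + count
        return count

tally = LimitOffsetTally()
for size in (120, 45, 300):  # three querysets of different sizes
    tally.paginate(size)
print(tally.max_count, tally.total)  # 300 465
```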
def stream(self, to=values.unset, from_=values.unset, date_sent_before=values.unset, date_sent=values.unset, date_sent_after=values.unset, limit=None, page_size=None): """ Streams MessageInstance records from the API as a generator stream. This operation lazily loads records as efficiently as possible until the limit is reached. The results are returned as a generator, so this operation is memory efficient. :param unicode to: Filter by messages sent to this number :param unicode from_: Filter by from number :param datetime date_sent_before: Filter by date sent :param datetime date_sent: Filter by date sent :param datetime date_sent_after: Filter by date sent :param int limit: Upper limit for the number of records to return. stream() guarantees to never return more than limit. Default is no limit :param int page_size: Number of records to fetch per request, when not set will use the default value of 50 records. If no page_size is defined but a limit is defined, stream() will attempt to read the limit with the most efficient page size, i.e. min(limit, 1000) :returns: Generator that will yield up to limit results :rtype: list[twilio.rest.api.v2010.account.message.MessageInstance] """ limits = self._version.read_limits(limit, page_size) page = self.page( to=to, from_=from_, date_sent_before=date_sent_before, date_sent=date_sent, date_sent_after=date_sent_after, page_size=limits['page_size'], ) return self._version.stream(page, limits['limit'], limits['page_limit'])
Streams MessageInstance records from the API as a generator stream. This operation lazily loads records as efficiently as possible until the limit is reached. The results are returned as a generator, so this operation is memory efficient. :param unicode to: Filter by messages sent to this number :param unicode from_: Filter by from number :param datetime date_sent_before: Filter by date sent :param datetime date_sent: Filter by date sent :param datetime date_sent_after: Filter by date sent :param int limit: Upper limit for the number of records to return. stream() guarantees to never return more than limit. Default is no limit :param int page_size: Number of records to fetch per request, when not set will use the default value of 50 records. If no page_size is defined but a limit is defined, stream() will attempt to read the limit with the most efficient page size, i.e. min(limit, 1000) :returns: Generator that will yield up to limit results :rtype: list[twilio.rest.api.v2010.account.message.MessageInstance]
Below is the instruction that describes the task: ### Input: Streams MessageInstance records from the API as a generator stream. This operation lazily loads records as efficiently as possible until the limit is reached. The results are returned as a generator, so this operation is memory efficient. :param unicode to: Filter by messages sent to this number :param unicode from_: Filter by from number :param datetime date_sent_before: Filter by date sent :param datetime date_sent: Filter by date sent :param datetime date_sent_after: Filter by date sent :param int limit: Upper limit for the number of records to return. stream() guarantees to never return more than limit. Default is no limit :param int page_size: Number of records to fetch per request, when not set will use the default value of 50 records. If no page_size is defined but a limit is defined, stream() will attempt to read the limit with the most efficient page size, i.e. min(limit, 1000) :returns: Generator that will yield up to limit results :rtype: list[twilio.rest.api.v2010.account.message.MessageInstance] ### Response: def stream(self, to=values.unset, from_=values.unset, date_sent_before=values.unset, date_sent=values.unset, date_sent_after=values.unset, limit=None, page_size=None): """ Streams MessageInstance records from the API as a generator stream. This operation lazily loads records as efficiently as possible until the limit is reached. The results are returned as a generator, so this operation is memory efficient. :param unicode to: Filter by messages sent to this number :param unicode from_: Filter by from number :param datetime date_sent_before: Filter by date sent :param datetime date_sent: Filter by date sent :param datetime date_sent_after: Filter by date sent :param int limit: Upper limit for the number of records to return. stream() guarantees to never return more than limit. Default is no limit :param int page_size: Number of records to fetch per request, when not set will use the default value of 50 records. If no page_size is defined but a limit is defined, stream() will attempt to read the limit with the most efficient page size, i.e. min(limit, 1000) :returns: Generator that will yield up to limit results :rtype: list[twilio.rest.api.v2010.account.message.MessageInstance] """ limits = self._version.read_limits(limit, page_size) page = self.page( to=to, from_=from_, date_sent_before=date_sent_before, date_sent=date_sent, date_sent_after=date_sent_after, page_size=limits['page_size'], ) return self._version.stream(page, limits['limit'], limits['page_limit'])
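The lazy, limit-respecting pagination that `stream` above delegates to `read_limits` and `page` can be sketched with an in-memory list (a toy model, not the Twilio implementation):

```python
def stream(records, limit=None, page_size=50):
    """Toy generator: fetch page-by-page, never yield more than limit items."""
    if limit is not None:
        # Read the limit with the most efficient page size, as the docstring says.
        page_size = min(limit, page_size)
    emitted = 0
    for start in range(0, len(records), page_size):
        for item in records[start:start + page_size]:  # one "page" per request
            if limit is not None and emitted >= limit:
                return
            yield item
            emitted += 1

out = list(stream(list(range(10)), limit=4, page_size=3))
print(out)  # [0, 1, 2, 3]
```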
def auth(view, **kwargs): """ This plugin allow user to login to application kwargs: - signin_view - signout_view - template_dir - menu: - name - group_name - ... @plugin(user.login, model=model.User) class MyAccount(Juice): pass """ endpoint_namespace = view.__name__ + ":%s" view_name = view.__name__ UserModel = kwargs.pop("model") User = UserModel.User login_view = endpoint_namespace % "login" on_signin_view = kwargs.get("signin_view", "Index:index") on_signout_view = kwargs.get("signout_view", "Index:index") template_dir = kwargs.get("template_dir", "Juice/Plugin/User/Account") template_page = template_dir + "/%s.html" login_manager = LoginManager() login_manager.login_view = login_view login_manager.login_message_category = "error" init_app(login_manager.init_app) menu_context = view _menu = kwargs.get("menu", {}) if _menu: @menu(**_menu) class UserAccountMenu(object): pass menu_context = UserAccountMenu @login_manager.user_loader def load_user(userid): return User.get(userid) View.g(__USER_AUTH_ENABLED__=True) class Auth(object): decorators = view.decorators + [login_required] SESSION_KEY_SET_EMAIL_DATA = "set_email_tmp_data" TEMP_DATA_KEY = "login_tmp_data" @property def tmp_data(self): return session[self.TEMP_DATA_KEY] @tmp_data.setter def tmp_data(self, data): session[self.TEMP_DATA_KEY] = data def _login_enabled(self): if self.get_config("USER_AUTH_ALLOW_LOGIN") is not True: abort("UserLoginDisabledError") def _signup_enabled(self): if self.get_config("USER_AUTH_ALLOW_SIGNUP") is not True: abort("UserSignupDisabledError") def _oauth_enabled(self): if self.get_config("USER_AUTH_ALLOW_OAUTH") is not True: abort("UserOAuthDisabledError") def _send_reset_password(self, user): delivery = self.get_config("USER_AUTH_PASSWORD_RESET_METHOD") token_reset_ttl = self.get_config("USER_AUTH_TOKEN_RESET_TTL", 60) new_password = None if delivery.upper() == "TOKEN": token = user.set_temp_login(token_reset_ttl) url = url_for(endpoint_namespace % "reset_password", 
token=token, _external=True) else: new_password = user.set_password(password=None, random=True) url = url_for(endpoint_namespace % "login", _external=True) mail.send(template="reset-password.txt", method_=delivery, to=user.email, name=user.email, url=url, new_password=new_password) @classmethod def login_user(cls, user): login_user(user) now = datetime.datetime.now() user.update(last_login=now, last_visited=now) @menu("Login", endpoint=endpoint_namespace % "login", visible_with_auth_user=False, extends=menu_context) @template(template_page % "login", endpoint_namespace=endpoint_namespace) @route("login/", methods=["GET", "POST"], endpoint=endpoint_namespace % "login") @no_login_required def login(self): """ Login page """ self._login_enabled() logout_user() self.tmp_data = None self.meta_tags(title="Login") if request.method == "POST": email = request.form.get("email").strip() password = request.form.get("password").strip() if not email or not password: flash("Email or Password is empty", "error") return redirect(url_for(login_view, next=request.form.get("next"))) user = User.get_by_email(email) if user and user.password_hash and user.password_matched(password): self.login_user(user) return redirect(request.form.get("next") or url_for(on_signin_view)) else: flash("Email or Password is invalid", "error") return redirect(url_for(login_view, next=request.form.get("next"))) return dict(login_url_next=request.args.get("next", ""), login_url_default=url_for(on_signin_view), signup_enabled=self.get_config("USER_AUTH_ALLOW_SIGNUP"), oauth_enabled=self.get_config("USER_AUTH_ALLOW_LOGIN")) @menu("Logout", endpoint=endpoint_namespace % "logout", visible_with_auth_user=True, order=100, extends=menu_context) @route("logout/", endpoint=endpoint_namespace % "logout") @no_login_required def logout(self): logout_user() return redirect(url_for(on_signout_view or login_view)) @menu("Signup", endpoint=endpoint_namespace % "signup", visible_with_auth_user=False, extends=menu_context) 
@template(template_page % "signup", endpoint_namespace=endpoint_namespace) @route("signup/", methods=["GET", "POST"], endpoint=endpoint_namespace % "signup") @no_login_required def signup(self): """ For Email Signup :return: """ self._login_enabled() self._signup_enabled() self.meta_tags(title="Signup") if request.method == "POST": # reCaptcha if not recaptcha.verify(): flash("Invalid Security code", "error") return redirect(url_for(endpoint_namespace % "signup", next=request.form.get("next"))) try: name = request.form.get("name") email = request.form.get("email") password = request.form.get("password") password2 = request.form.get("password2") profile_image_url = request.form.get("profile_image_url", None) if not name: raise UserError("Name is required") elif not utils.is_valid_email(email): raise UserError("Invalid email address '%s'" % email) elif not password.strip() or password.strip() != password2.strip(): raise UserError("Passwords don't match") elif not utils.is_valid_password(password): raise UserError("Invalid password") else: new_account = User.new(email=email, password=password.strip(), first_name=name, profile_image_url=profile_image_url, signup_method="email") self.login_user(new_account) return redirect(request.form.get("next") or url_for(on_signin_view)) except ApplicationError as ex: flash(ex.message, "error") return redirect(url_for(endpoint_namespace % "signup", next=request.form.get("next"))) logout_user() return dict(login_url_next=request.args.get("next", "")) @route("lost-password/", methods=["GET", "POST"], endpoint=endpoint_namespace % "lost_password") @template(template_page % "lost_password", endpoint_namespace=endpoint_namespace) @no_login_required def lost_password(self): self._login_enabled() logout_user() self.meta_tags(title="Lost Password") if request.method == "POST": email = request.form.get("email") user = User.get_by_email(email) if user: self._send_reset_password(user) flash("A new password has been sent to '%s'" % email, 
"success") else: flash("Invalid email address", "error") return redirect(url_for(login_view)) else: return {} @menu("Account Settings", endpoint=endpoint_namespace % "account_settings", order=99, visible_with_auth_user=True, extends=menu_context) @template(template_page % "account_settings", endpoint_namespace=endpoint_namespace) @route("account-settings", methods=["GET", "POST"], endpoint=endpoint_namespace % "account_settings") @fresh_login_required def account_settings(self): self.meta_tags(title="Account Settings") if request.method == "POST": action = request.form.get("action") try: action = action.lower() # if action == "info": first_name = request.form.get("first_name").strip() last_name = request.form.get("last_name", "").strip() data = { "first_name": first_name, "last_name": last_name } current_user.update(**data) flash("Account info updated successfully!", "success") # elif action == "login": confirm_password = request.form.get("confirm-password").strip() if current_user.password_matched(confirm_password): self.change_login_handler() flash("Login Info updated successfully!", "success") else: flash("Invalid password", "error") # elif action == "password": confirm_password = request.form.get("confirm-password").strip() if current_user.password_matched(confirm_password): self.change_password_handler() flash("Password updated successfully!", "success") else: flash("Invalid password", "error") elif action == "profile-photo": file = request.files.get("file") if file: prefix = "profile-photos/%s/" % current_user.id extensions = ["jpg", "jpeg", "png", "gif"] my_photo = storage.upload(file, prefix=prefix, allowed_extensions=extensions) if my_photo: url = my_photo.url current_user.update(profile_image_url=url) flash("Profile Image updated successfully!", "success") else: raise UserError("Invalid action") except Exception as e: flash(e.message, "error") return redirect(url_for(endpoint_namespace % "account_settings")) return {} @classmethod def 
change_login_handler(cls, user_context=None, email=None): if not user_context: user_context = current_user if not email: email = request.form.get("email").strip() if not utils.is_valid_email(email): raise UserWarning("Invalid email address '%s'" % email) else: if email != user_context.email and User.get_by_email(email): raise UserWarning("Email exists already '%s'" % email) elif email != user_context.email: user_context.update(email=email) return True return False @classmethod def change_password_handler(cls, user_context=None, password=None, password2=None): if not user_context: user_context = current_user if not password: password = request.form.get("password").strip() if not password2: password2 = request.form.get("password2").strip() if password: if password != password2: raise UserWarning("Password don't match") elif not utils.is_valid_password(password): raise UserWarning("Invalid password") else: user_context.set_password(password) return True else: raise UserWarning("Password is empty") # OAUTH Login @route("oauth-login/<provider>", methods=["GET", "POST"], endpoint=endpoint_namespace % "oauth_login") @template(template_page % "oauth_login", endpoint_namespace=endpoint_namespace) @no_login_required def oauth_login(self, provider): """ Login via oauth providers """ self._login_enabled() self._oauth_enabled() provider = provider.lower() result = oauth.login(provider) response = oauth.response popup_js_custom = { "action": "", "url": "" } if result: if result.error: pass elif result.user: result.user.update() oauth_user = result.user user = User.get_by_oauth(provider=provider, provider_user_id=oauth_user.id) if not user: if oauth_user.email and User.get_by_email(oauth_user.email): flash("Account already exists with this email '%s'. 
" "Try to login or retrieve your password " % oauth_user.email, "error") popup_js_custom.update({ "action": "redirect", "url": url_for(login_view, next=request.form.get("next")) }) else: tmp_data = { "is_oauth": True, "provider": provider, "id": oauth_user.id, "name": oauth_user.name, "picture": oauth_user.picture, "first_name": oauth_user.first_name, "last_name": oauth_user.last_name, "email": oauth_user.email, "link": oauth_user.link } if not oauth_user.email: self.tmp_data = tmp_data popup_js_custom.update({ "action": "redirect", "url": url_for(endpoint_namespace % "setup_login") }) else: try: picture = oauth_user.picture user = User.new(email=oauth_user.email, name=oauth_user.name, signup_method=provider, profile_image_url=picture ) user.add_oauth(provider, oauth_user.provider_id, name=oauth_user.name, email=oauth_user.email, profile_image_url=oauth_user.picture, link=oauth_user.link) except ModelError as e: flash(e.message, "error") popup_js_custom.update({ "action": "redirect", "url": url_for(endpoint_namespace % "login") }) if user: self.login_user(user) return dict(popup_js=result.popup_js(custom=popup_js_custom), template_=template_page % "oauth_login") return response @template(template_page % "setup_login", endpoint_namespace=endpoint_namespace) @route("setup-login/", methods=["GET", "POST"], endpoint=endpoint_namespace % "setup_login") def setup_login(self): """ Allows to setup a email password if it's not provided specially coming from oauth-login :return: """ self._login_enabled() self.meta_tags(title="Setup Login") # Only user without email can set email if current_user.is_authenticated() and current_user.email: return redirect(url_for(endpoint_namespace % "account_settings")) if self.tmp_data: if request.method == "POST": if not self.tmp_data["is_oauth"]: return redirect(endpoint_namespace % "login") try: email = request.form.get("email") password = request.form.get("password") password2 = request.form.get("password2") if not 
utils.is_valid_email(email): raise UserError("Invalid email address '%s'" % email) elif User.get_by_email(email): raise UserError("An account exists already with this email address '%s' " % email) elif not password.strip() or password.strip() != password2.strip(): raise UserError("Passwords don't match") elif not utils.is_valid_password(password): raise UserError("Invalid password") else: user = User.new(email=email, password=password.strip(), name=self.tmp_data["name"], profile_image_url=self.tmp_data["picture"], signup_method=self.tmp_data["provider"]) user.add_oauth(self.tmp_data["provider"], self.tmp_data["id"], name=self.tmp_data["name"], email=email, profile_image_url=self.tmp_data["picture"], link=self.tmp_data["link"]) self.login_user(user) self.tmp_data = None return redirect(request.form.get("next") or url_for(on_signin_view)) except ApplicationError as ex: flash(ex.message, "error") return redirect(url_for(endpoint_namespace % "login")) return dict(provider=self.tmp_data) else: return redirect(url_for(endpoint_namespace % "login")) @route("reset-password/<token>", methods=["GET", "POST"], endpoint=endpoint_namespace % "reset_password") @template(template_page % "reset_password", endpoint_namespace=endpoint_namespace) @no_login_required def reset_password(self, token): self._login_enabled() logout_user() self.meta_tags(title="Reset Password") user = User.get_by_temp_login(token) if user: if not user.has_temp_login: return redirect(url_for(on_signin_view)) if request.method == "POST": try: self.change_password_handler(user_context=user) user.clear_temp_login() flash("Password updated successfully!", "success") return redirect(url_for(on_signin_view)) except Exception as ex: flash("Error: %s" % ex.message, "error") return redirect(url_for(endpoint_namespace % "reset_password", token=token)) else: return dict(token=token) else: abort(404, "Invalid token") @route("oauth-connect", methods=["POST"], endpoint="%s:oauth_connect" % endpoint_namespace) def 
oauth_connect(self): """ To login via social """ email = request.form.get("email").strip() name = request.form.get("name").strip() provider = request.form.get("provider").strip() provider_user_id = request.form.get("provider_user_id").strip() image_url = request.form.get("image_url").strip() next = request.form.get("next", "") try: current_user.oauth_connect(provider=provider, provider_user_id=provider_user_id, email=email, name=name, image_url=image_url) except Exception as ex: flash("Unable to link your account", "error") return redirect(url_for(endpoint_namespace % "account_settings")) return Auth
This plugin allow user to login to application kwargs: - signin_view - signout_view - template_dir - menu: - name - group_name - ... @plugin(user.login, model=model.User) class MyAccount(Juice): pass
Below is the instruction that describes the task: ### Input: This plugin allow user to login to application kwargs: - signin_view - signout_view - template_dir - menu: - name - group_name - ... @plugin(user.login, model=model.User) class MyAccount(Juice): pass ### Response: def auth(view, **kwargs): """ This plugin allow user to login to application kwargs: - signin_view - signout_view - template_dir - menu: - name - group_name - ... @plugin(user.login, model=model.User) class MyAccount(Juice): pass """ endpoint_namespace = view.__name__ + ":%s" view_name = view.__name__ UserModel = kwargs.pop("model") User = UserModel.User login_view = endpoint_namespace % "login" on_signin_view = kwargs.get("signin_view", "Index:index") on_signout_view = kwargs.get("signout_view", "Index:index") template_dir = kwargs.get("template_dir", "Juice/Plugin/User/Account") template_page = template_dir + "/%s.html" login_manager = LoginManager() login_manager.login_view = login_view login_manager.login_message_category = "error" init_app(login_manager.init_app) menu_context = view _menu = kwargs.get("menu", {}) if _menu: @menu(**_menu) class UserAccountMenu(object): pass menu_context = UserAccountMenu @login_manager.user_loader def load_user(userid): return User.get(userid) View.g(__USER_AUTH_ENABLED__=True) class Auth(object): decorators = view.decorators + [login_required] SESSION_KEY_SET_EMAIL_DATA = "set_email_tmp_data" TEMP_DATA_KEY = "login_tmp_data" @property def tmp_data(self): return session[self.TEMP_DATA_KEY] @tmp_data.setter def tmp_data(self, data): session[self.TEMP_DATA_KEY] = data def _login_enabled(self): if self.get_config("USER_AUTH_ALLOW_LOGIN") is not True: abort("UserLoginDisabledError") def _signup_enabled(self): if self.get_config("USER_AUTH_ALLOW_SIGNUP") is not True: abort("UserSignupDisabledError") def _oauth_enabled(self): if self.get_config("USER_AUTH_ALLOW_OAUTH") is not True: abort("UserOAuthDisabledError") def _send_reset_password(self, user): delivery
= self.get_config("USER_AUTH_PASSWORD_RESET_METHOD") token_reset_ttl = self.get_config("USER_AUTH_TOKEN_RESET_TTL", 60) new_password = None if delivery.upper() == "TOKEN": token = user.set_temp_login(token_reset_ttl) url = url_for(endpoint_namespace % "reset_password", token=token, _external=True) else: new_password = user.set_password(password=None, random=True) url = url_for(endpoint_namespace % "login", _external=True) mail.send(template="reset-password.txt", method_=delivery, to=user.email, name=user.email, url=url, new_password=new_password) @classmethod def login_user(cls, user): login_user(user) now = datetime.datetime.now() user.update(last_login=now, last_visited=now) @menu("Login", endpoint=endpoint_namespace % "login", visible_with_auth_user=False, extends=menu_context) @template(template_page % "login", endpoint_namespace=endpoint_namespace) @route("login/", methods=["GET", "POST"], endpoint=endpoint_namespace % "login") @no_login_required def login(self): """ Login page """ self._login_enabled() logout_user() self.tmp_data = None self.meta_tags(title="Login") if request.method == "POST": email = request.form.get("email").strip() password = request.form.get("password").strip() if not email or not password: flash("Email or Password is empty", "error") return redirect(url_for(login_view, next=request.form.get("next"))) user = User.get_by_email(email) if user and user.password_hash and user.password_matched(password): self.login_user(user) return redirect(request.form.get("next") or url_for(on_signin_view)) else: flash("Email or Password is invalid", "error") return redirect(url_for(login_view, next=request.form.get("next"))) return dict(login_url_next=request.args.get("next", ""), login_url_default=url_for(on_signin_view), signup_enabled=self.get_config("USER_AUTH_ALLOW_SIGNUP"), oauth_enabled=self.get_config("USER_AUTH_ALLOW_LOGIN")) @menu("Logout", endpoint=endpoint_namespace % "logout", visible_with_auth_user=True, order=100, extends=menu_context) 
@route("logout/", endpoint=endpoint_namespace % "logout") @no_login_required def logout(self): logout_user() return redirect(url_for(on_signout_view or login_view)) @menu("Signup", endpoint=endpoint_namespace % "signup", visible_with_auth_user=False, extends=menu_context) @template(template_page % "signup", endpoint_namespace=endpoint_namespace) @route("signup/", methods=["GET", "POST"], endpoint=endpoint_namespace % "signup") @no_login_required def signup(self): """ For Email Signup :return: """ self._login_enabled() self._signup_enabled() self.meta_tags(title="Signup") if request.method == "POST": # reCaptcha if not recaptcha.verify(): flash("Invalid Security code", "error") return redirect(url_for(endpoint_namespace % "signup", next=request.form.get("next"))) try: name = request.form.get("name") email = request.form.get("email") password = request.form.get("password") password2 = request.form.get("password2") profile_image_url = request.form.get("profile_image_url", None) if not name: raise UserError("Name is required") elif not utils.is_valid_email(email): raise UserError("Invalid email address '%s'" % email) elif not password.strip() or password.strip() != password2.strip(): raise UserError("Passwords don't match") elif not utils.is_valid_password(password): raise UserError("Invalid password") else: new_account = User.new(email=email, password=password.strip(), first_name=name, profile_image_url=profile_image_url, signup_method="email") self.login_user(new_account) return redirect(request.form.get("next") or url_for(on_signin_view)) except ApplicationError as ex: flash(ex.message, "error") return redirect(url_for(endpoint_namespace % "signup", next=request.form.get("next"))) logout_user() return dict(login_url_next=request.args.get("next", "")) @route("lost-password/", methods=["GET", "POST"], endpoint=endpoint_namespace % "lost_password") @template(template_page % "lost_password", endpoint_namespace=endpoint_namespace) @no_login_required def 
lost_password(self): self._login_enabled() logout_user() self.meta_tags(title="Lost Password") if request.method == "POST": email = request.form.get("email") user = User.get_by_email(email) if user: self._send_reset_password(user) flash("A new password has been sent to '%s'" % email, "success") else: flash("Invalid email address", "error") return redirect(url_for(login_view)) else: return {} @menu("Account Settings", endpoint=endpoint_namespace % "account_settings", order=99, visible_with_auth_user=True, extends=menu_context) @template(template_page % "account_settings", endpoint_namespace=endpoint_namespace) @route("account-settings", methods=["GET", "POST"], endpoint=endpoint_namespace % "account_settings") @fresh_login_required def account_settings(self): self.meta_tags(title="Account Settings") if request.method == "POST": action = request.form.get("action") try: action = action.lower() # if action == "info": first_name = request.form.get("first_name").strip() last_name = request.form.get("last_name", "").strip() data = { "first_name": first_name, "last_name": last_name } current_user.update(**data) flash("Account info updated successfully!", "success") # elif action == "login": confirm_password = request.form.get("confirm-password").strip() if current_user.password_matched(confirm_password): self.change_login_handler() flash("Login Info updated successfully!", "success") else: flash("Invalid password", "error") # elif action == "password": confirm_password = request.form.get("confirm-password").strip() if current_user.password_matched(confirm_password): self.change_password_handler() flash("Password updated successfully!", "success") else: flash("Invalid password", "error") elif action == "profile-photo": file = request.files.get("file") if file: prefix = "profile-photos/%s/" % current_user.id extensions = ["jpg", "jpeg", "png", "gif"] my_photo = storage.upload(file, prefix=prefix, allowed_extensions=extensions) if my_photo: url = my_photo.url 
current_user.update(profile_image_url=url) flash("Profile Image updated successfully!", "success") else: raise UserError("Invalid action") except Exception as e: flash(e.message, "error") return redirect(url_for(endpoint_namespace % "account_settings")) return {} @classmethod def change_login_handler(cls, user_context=None, email=None): if not user_context: user_context = current_user if not email: email = request.form.get("email").strip() if not utils.is_valid_email(email): raise UserWarning("Invalid email address '%s'" % email) else: if email != user_context.email and User.get_by_email(email): raise UserWarning("Email exists already '%s'" % email) elif email != user_context.email: user_context.update(email=email) return True return False @classmethod def change_password_handler(cls, user_context=None, password=None, password2=None): if not user_context: user_context = current_user if not password: password = request.form.get("password").strip() if not password2: password2 = request.form.get("password2").strip() if password: if password != password2: raise UserWarning("Password don't match") elif not utils.is_valid_password(password): raise UserWarning("Invalid password") else: user_context.set_password(password) return True else: raise UserWarning("Password is empty") # OAUTH Login @route("oauth-login/<provider>", methods=["GET", "POST"], endpoint=endpoint_namespace % "oauth_login") @template(template_page % "oauth_login", endpoint_namespace=endpoint_namespace) @no_login_required def oauth_login(self, provider): """ Login via oauth providers """ self._login_enabled() self._oauth_enabled() provider = provider.lower() result = oauth.login(provider) response = oauth.response popup_js_custom = { "action": "", "url": "" } if result: if result.error: pass elif result.user: result.user.update() oauth_user = result.user user = User.get_by_oauth(provider=provider, provider_user_id=oauth_user.id) if not user: if oauth_user.email and User.get_by_email(oauth_user.email): 
flash("Account already exists with this email '%s'. " "Try to login or retrieve your password " % oauth_user.email, "error") popup_js_custom.update({ "action": "redirect", "url": url_for(login_view, next=request.form.get("next")) }) else: tmp_data = { "is_oauth": True, "provider": provider, "id": oauth_user.id, "name": oauth_user.name, "picture": oauth_user.picture, "first_name": oauth_user.first_name, "last_name": oauth_user.last_name, "email": oauth_user.email, "link": oauth_user.link } if not oauth_user.email: self.tmp_data = tmp_data popup_js_custom.update({ "action": "redirect", "url": url_for(endpoint_namespace % "setup_login") }) else: try: picture = oauth_user.picture user = User.new(email=oauth_user.email, name=oauth_user.name, signup_method=provider, profile_image_url=picture ) user.add_oauth(provider, oauth_user.provider_id, name=oauth_user.name, email=oauth_user.email, profile_image_url=oauth_user.picture, link=oauth_user.link) except ModelError as e: flash(e.message, "error") popup_js_custom.update({ "action": "redirect", "url": url_for(endpoint_namespace % "login") }) if user: self.login_user(user) return dict(popup_js=result.popup_js(custom=popup_js_custom), template_=template_page % "oauth_login") return response @template(template_page % "setup_login", endpoint_namespace=endpoint_namespace) @route("setup-login/", methods=["GET", "POST"], endpoint=endpoint_namespace % "setup_login") def setup_login(self): """ Allows to setup a email password if it's not provided specially coming from oauth-login :return: """ self._login_enabled() self.meta_tags(title="Setup Login") # Only user without email can set email if current_user.is_authenticated() and current_user.email: return redirect(url_for(endpoint_namespace % "account_settings")) if self.tmp_data: if request.method == "POST": if not self.tmp_data["is_oauth"]: return redirect(endpoint_namespace % "login") try: email = request.form.get("email") password = request.form.get("password") password2 = 
request.form.get("password2") if not utils.is_valid_email(email): raise UserError("Invalid email address '%s'" % email) elif User.get_by_email(email): raise UserError("An account exists already with this email address '%s' " % email) elif not password.strip() or password.strip() != password2.strip(): raise UserError("Passwords don't match") elif not utils.is_valid_password(password): raise UserError("Invalid password") else: user = User.new(email=email, password=password.strip(), name=self.tmp_data["name"], profile_image_url=self.tmp_data["picture"], signup_method=self.tmp_data["provider"]) user.add_oauth(self.tmp_data["provider"], self.tmp_data["id"], name=self.tmp_data["name"], email=email, profile_image_url=self.tmp_data["picture"], link=self.tmp_data["link"]) self.login_user(user) self.tmp_data = None return redirect(request.form.get("next") or url_for(on_signin_view)) except ApplicationError as ex: flash(ex.message, "error") return redirect(url_for(endpoint_namespace % "login")) return dict(provider=self.tmp_data) else: return redirect(url_for(endpoint_namespace % "login")) @route("reset-password/<token>", methods=["GET", "POST"], endpoint=endpoint_namespace % "reset_password") @template(template_page % "reset_password", endpoint_namespace=endpoint_namespace) @no_login_required def reset_password(self, token): self._login_enabled() logout_user() self.meta_tags(title="Reset Password") user = User.get_by_temp_login(token) if user: if not user.has_temp_login: return redirect(url_for(on_signin_view)) if request.method == "POST": try: self.change_password_handler(user_context=user) user.clear_temp_login() flash("Password updated successfully!", "success") return redirect(url_for(on_signin_view)) except Exception as ex: flash("Error: %s" % ex.message, "error") return redirect(url_for(endpoint_namespace % "reset_password", token=token)) else: return dict(token=token) else: abort(404, "Invalid token") @route("oauth-connect", methods=["POST"], 
endpoint="%s:oauth_connect" % endpoint_namespace) def oauth_connect(self): """ To login via social """ email = request.form.get("email").strip() name = request.form.get("name").strip() provider = request.form.get("provider").strip() provider_user_id = request.form.get("provider_user_id").strip() image_url = request.form.get("image_url").strip() next = request.form.get("next", "") try: current_user.oauth_connect(provider=provider, provider_user_id=provider_user_id, email=email, name=name, image_url=image_url) except Exception as ex: flash("Unable to link your account", "error") return redirect(url_for(endpoint_namespace % "account_settings")) return Auth
def ajax_required(func): # taken from djangosnippets.org """ AJAX request required decorator use it in your views: @ajax_required def my_view(request): .... """ def wrap(request, *args, **kwargs): if not request.is_ajax(): return HttpResponseBadRequest() return func(request, *args, **kwargs) wrap.__doc__ = func.__doc__ wrap.__name__ = func.__name__ return wrap
AJAX request required decorator use it in your views: @ajax_required def my_view(request): ....
Below is the instruction that describes the task: ### Input: AJAX request required decorator use it in your views: @ajax_required def my_view(request): .... ### Response: def ajax_required(func): # taken from djangosnippets.org """ AJAX request required decorator use it in your views: @ajax_required def my_view(request): .... """ def wrap(request, *args, **kwargs): if not request.is_ajax(): return HttpResponseBadRequest() return func(request, *args, **kwargs) wrap.__doc__ = func.__doc__ wrap.__name__ = func.__name__ return wrap
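Editorial aside (not part of the dataset record): the `ajax_required` decorator above depends on Django's request and response objects, but its logic can be exercised standalone by substituting illustrative stand-ins. `FakeRequest` and `BadRequest` below are hypothetical test doubles, not real Django classes, and `functools.wraps` replaces the manual `__doc__`/`__name__` copying in the record.

```python
import functools

class BadRequest:
    """Stand-in for Django's HttpResponseBadRequest (illustrative only)."""
    status_code = 400

def ajax_required(func):
    """Reject any request whose is_ajax() check fails with a 400-style response."""
    @functools.wraps(func)  # copies __name__, __doc__, etc. onto the wrapper
    def wrap(request, *args, **kwargs):
        if not request.is_ajax():
            return BadRequest()
        return func(request, *args, **kwargs)
    return wrap

class FakeRequest:
    """Minimal request double exposing only is_ajax()."""
    def __init__(self, ajax):
        self._ajax = ajax
    def is_ajax(self):
        return self._ajax

@ajax_required
def my_view(request):
    return "ok"

print(my_view(FakeRequest(ajax=True)))                # ok
print(my_view(FakeRequest(ajax=False)).status_code)   # 400
```

Note that the record returns the response class without calling it; the instantiated form shown here is what the view machinery actually needs.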
def _choose_pool(self, protocol=None): """ Selects a connection pool according to the default protocol and the passed one. :param protocol: the protocol to use :type protocol: string :rtype: Pool """ if not protocol: protocol = self.protocol if protocol == 'http': pool = self._http_pool elif protocol == 'tcp' or protocol == 'pbc': pool = self._tcp_pool else: raise ValueError("invalid protocol %s" % protocol) if pool is None or self._closed: # NB: GH-500, this can happen if client is closed raise RuntimeError("Client is closed.") return pool
Selects a connection pool according to the default protocol and the passed one. :param protocol: the protocol to use :type protocol: string :rtype: Pool
Below is the instruction that describes the task: ### Input: Selects a connection pool according to the default protocol and the passed one. :param protocol: the protocol to use :type protocol: string :rtype: Pool ### Response: def _choose_pool(self, protocol=None): """ Selects a connection pool according to the default protocol and the passed one. :param protocol: the protocol to use :type protocol: string :rtype: Pool """ if not protocol: protocol = self.protocol if protocol == 'http': pool = self._http_pool elif protocol == 'tcp' or protocol == 'pbc': pool = self._tcp_pool else: raise ValueError("invalid protocol %s" % protocol) if pool is None or self._closed: # NB: GH-500, this can happen if client is closed raise RuntimeError("Client is closed.") return pool
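Editorial aside (not part of the dataset record): the pool-selection logic above can be sketched standalone. The `Client` class below is an illustrative stand-in for the real client, with plain strings in place of actual pool objects.

```python
class Client:
    """Illustrative stand-in client; pool objects are plain strings here."""
    def __init__(self, protocol='http'):
        self.protocol = protocol        # default protocol
        self._http_pool = 'http-pool'
        self._tcp_pool = 'tcp-pool'
        self._closed = False

    def _choose_pool(self, protocol=None):
        # Fall back to the client's default protocol when none is passed.
        if not protocol:
            protocol = self.protocol
        if protocol == 'http':
            pool = self._http_pool
        elif protocol in ('tcp', 'pbc'):
            pool = self._tcp_pool       # pbc shares the tcp pool
        else:
            raise ValueError("invalid protocol %s" % protocol)
        # A missing pool or a closed client both mean the client is unusable.
        if pool is None or self._closed:
            raise RuntimeError("Client is closed.")
        return pool

c = Client()
print(c._choose_pool())        # http-pool (default protocol)
print(c._choose_pool('pbc'))   # tcp-pool
```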
def stdlib_list(version=None): """ Given a ``version``, return a ``list`` of names of the Python Standard Libraries for that version. These names are obtained from the Sphinx inventory file (used in :py:mod:`sphinx.ext.intersphinx`). :param str|None version: The version (as a string) whose list of libraries you want (one of ``"2.6"``, ``"2.7"``, ``"3.2"``, ``"3.3"``, ``"3.4"``, or ``"3.5"``). If not specified, the current version of Python will be used. :return: A list of standard libraries from the specified version of Python :rtype: list """ version = get_canonical_version(version) if version is not None else '.'.join( str(x) for x in sys.version_info[:2]) module_list_file = os.path.join(list_dir, "{}.txt".format(version)) with open(module_list_file) as f: result = [y for y in [x.strip() for x in f.readlines()] if y] return result
Given a ``version``, return a ``list`` of names of the Python Standard Libraries for that version. These names are obtained from the Sphinx inventory file (used in :py:mod:`sphinx.ext.intersphinx`). :param str|None version: The version (as a string) whose list of libraries you want (one of ``"2.6"``, ``"2.7"``, ``"3.2"``, ``"3.3"``, ``"3.4"``, or ``"3.5"``). If not specified, the current version of Python will be used. :return: A list of standard libraries from the specified version of Python :rtype: list
Below is the instruction that describes the task: ### Input: Given a ``version``, return a ``list`` of names of the Python Standard Libraries for that version. These names are obtained from the Sphinx inventory file (used in :py:mod:`sphinx.ext.intersphinx`). :param str|None version: The version (as a string) whose list of libraries you want (one of ``"2.6"``, ``"2.7"``, ``"3.2"``, ``"3.3"``, ``"3.4"``, or ``"3.5"``). If not specified, the current version of Python will be used. :return: A list of standard libraries from the specified version of Python :rtype: list ### Response: def stdlib_list(version=None): """ Given a ``version``, return a ``list`` of names of the Python Standard Libraries for that version. These names are obtained from the Sphinx inventory file (used in :py:mod:`sphinx.ext.intersphinx`). :param str|None version: The version (as a string) whose list of libraries you want (one of ``"2.6"``, ``"2.7"``, ``"3.2"``, ``"3.3"``, ``"3.4"``, or ``"3.5"``). If not specified, the current version of Python will be used. :return: A list of standard libraries from the specified version of Python :rtype: list """ version = get_canonical_version(version) if version is not None else '.'.join( str(x) for x in sys.version_info[:2]) module_list_file = os.path.join(list_dir, "{}.txt".format(version)) with open(module_list_file) as f: result = [y for y in [x.strip() for x in f.readlines()] if y] return result
def factorial(n): """ Returns the factorial of n. """ f = 1 while (n > 0): f = f * n n = n - 1 return f
Returns the factorial of n.
Below is the instruction that describes the task: ### Input: Returns the factorial of n. ### Response: def factorial(n): """ Returns the factorial of n. """ f = 1 while (n > 0): f = f * n n = n - 1 return f
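Editorial aside (not part of the dataset record): the iterative factorial above is small enough to reproduce and exercise directly; this is the record's algorithm restated, not an alternative implementation.

```python
def factorial(n):
    """Iterative factorial, as in the record above."""
    f = 1
    while n > 0:
        f = f * n
        n = n - 1
    return f

print(factorial(5))   # 120
print(factorial(0))   # 1 (the empty product)
```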
def memoize(func): """ simple memoization decorator References: https://wiki.python.org/moin/PythonDecoratorLibrary#Memoize Args: func (function): live python function Returns: func: CommandLine: python -m utool.util_decor memoize Example: >>> # ENABLE_DOCTEST >>> from utool.util_decor import * # NOQA >>> import utool as ut >>> closure = {'a': 'b', 'c': 'd'} >>> incr = [0] >>> def foo(key): >>> value = closure[key] >>> incr[0] += 1 >>> return value >>> foo_memo = memoize(foo) >>> assert foo('a') == 'b' and foo('c') == 'd' >>> assert incr[0] == 2 >>> print('Call memoized version') >>> assert foo_memo('a') == 'b' and foo_memo('c') == 'd' >>> assert incr[0] == 4 >>> assert foo_memo('a') == 'b' and foo_memo('c') == 'd' >>> print('Counter should no longer increase') >>> assert incr[0] == 4 >>> print('Closure changes result without memoization') >>> closure = {'a': 0, 'c': 1} >>> assert foo('a') == 0 and foo('c') == 1 >>> assert incr[0] == 6 >>> assert foo_memo('a') == 'b' and foo_memo('c') == 'd' """ cache = func._util_decor_memoize_cache = {} # @functools.wraps(func) def memoizer(*args, **kwargs): key = str(args) + str(kwargs) if key not in cache: cache[key] = func(*args, **kwargs) return cache[key] memoizer = preserve_sig(memoizer, func) memoizer.cache = cache return memoizer
simple memoization decorator References: https://wiki.python.org/moin/PythonDecoratorLibrary#Memoize Args: func (function): live python function Returns: func: CommandLine: python -m utool.util_decor memoize Example: >>> # ENABLE_DOCTEST >>> from utool.util_decor import * # NOQA >>> import utool as ut >>> closure = {'a': 'b', 'c': 'd'} >>> incr = [0] >>> def foo(key): >>> value = closure[key] >>> incr[0] += 1 >>> return value >>> foo_memo = memoize(foo) >>> assert foo('a') == 'b' and foo('c') == 'd' >>> assert incr[0] == 2 >>> print('Call memoized version') >>> assert foo_memo('a') == 'b' and foo_memo('c') == 'd' >>> assert incr[0] == 4 >>> assert foo_memo('a') == 'b' and foo_memo('c') == 'd' >>> print('Counter should no longer increase') >>> assert incr[0] == 4 >>> print('Closure changes result without memoization') >>> closure = {'a': 0, 'c': 1} >>> assert foo('a') == 0 and foo('c') == 1 >>> assert incr[0] == 6 >>> assert foo_memo('a') == 'b' and foo_memo('c') == 'd'
Below is the instruction that describes the task: ### Input: simple memoization decorator References: https://wiki.python.org/moin/PythonDecoratorLibrary#Memoize Args: func (function): live python function Returns: func: CommandLine: python -m utool.util_decor memoize Example: >>> # ENABLE_DOCTEST >>> from utool.util_decor import * # NOQA >>> import utool as ut >>> closure = {'a': 'b', 'c': 'd'} >>> incr = [0] >>> def foo(key): >>> value = closure[key] >>> incr[0] += 1 >>> return value >>> foo_memo = memoize(foo) >>> assert foo('a') == 'b' and foo('c') == 'd' >>> assert incr[0] == 2 >>> print('Call memoized version') >>> assert foo_memo('a') == 'b' and foo_memo('c') == 'd' >>> assert incr[0] == 4 >>> assert foo_memo('a') == 'b' and foo_memo('c') == 'd' >>> print('Counter should no longer increase') >>> assert incr[0] == 4 >>> print('Closure changes result without memoization') >>> closure = {'a': 0, 'c': 1} >>> assert foo('a') == 0 and foo('c') == 1 >>> assert incr[0] == 6 >>> assert foo_memo('a') == 'b' and foo_memo('c') == 'd' ### Response: def memoize(func): """ simple memoization decorator References: https://wiki.python.org/moin/PythonDecoratorLibrary#Memoize Args: func (function): live python function Returns: func: CommandLine: python -m utool.util_decor memoize Example: >>> # ENABLE_DOCTEST >>> from utool.util_decor import * # NOQA >>> import utool as ut >>> closure = {'a': 'b', 'c': 'd'} >>> incr = [0] >>> def foo(key): >>> value = closure[key] >>> incr[0] += 1 >>> return value >>> foo_memo = memoize(foo) >>> assert foo('a') == 'b' and foo('c') == 'd' >>> assert incr[0] == 2 >>> print('Call memoized version') >>> assert foo_memo('a') == 'b' and foo_memo('c') == 'd' >>> assert incr[0] == 4 >>> assert foo_memo('a') == 'b' and foo_memo('c') == 'd' >>> print('Counter should no longer increase') >>> assert incr[0] == 4 >>> print('Closure changes result without memoization') >>> closure = {'a': 0, 'c': 1} >>> assert foo('a') == 0 and foo('c') == 1 >>> assert
incr[0] == 6 >>> assert foo_memo('a') == 'b' and foo_memo('c') == 'd' """ cache = func._util_decor_memoize_cache = {} # @functools.wraps(func) def memoizer(*args, **kwargs): key = str(args) + str(kwargs) if key not in cache: cache[key] = func(*args, **kwargs) return cache[key] memoizer = preserve_sig(memoizer, func) memoizer.cache = cache return memoizer
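The record above pairs its docstring with a dict-backed memoize decorator. As a quick standalone sketch of the same pattern — dropping the utool-specific `preserve_sig` wrapper, which is assumed to live only in `utool.util_decor` — the caching behavior can be exercised like this:

```python
def memoize(func):
    """Minimal memoizer mirroring the record above (without preserve_sig)."""
    # Cache lives on the wrapped function and is shared with the wrapper.
    cache = func._util_decor_memoize_cache = {}

    def memoizer(*args, **kwargs):
        # Key on the stringified positional and keyword arguments.
        key = str(args) + str(kwargs)
        if key not in cache:
            cache[key] = func(*args, **kwargs)
        return cache[key]

    memoizer.cache = cache
    return memoizer


calls = [0]


@memoize
def square(x):
    calls[0] += 1  # counts real (non-cached) invocations
    return x * x


results = [square(3), square(3), square(4)]
print(results, calls[0])
```

The repeated `square(3)` call is served from the cache, so the underlying function runs only twice; as the record's own doctest warns, this keying scheme does not notice changes to closed-over state, so cached results can go stale.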
def iteritems(self):
    r"""
    Iterator over (column name, Series) pairs.

    Iterates over the DataFrame columns, returning a tuple with
    the column name and the content as a Series.

    Yields
    ------
    label : object
        The column names for the DataFrame being iterated over.
    content : Series
        The column entries belonging to each label, as a Series.

    See Also
    --------
    DataFrame.iterrows : Iterate over DataFrame rows as
        (index, Series) pairs.
    DataFrame.itertuples : Iterate over DataFrame rows as namedtuples
        of the values.

    Examples
    --------
    >>> df = pd.DataFrame({'species': ['bear', 'bear', 'marsupial'],
    ...                    'population': [1864, 22000, 80000]},
    ...                   index=['panda', 'polar', 'koala'])
    >>> df
             species  population
    panda       bear        1864
    polar       bear       22000
    koala  marsupial       80000
    >>> for label, content in df.iteritems():
    ...     print('label:', label)
    ...     print('content:', content, sep='\n')
    ...
    label: species
    content:
    panda         bear
    polar         bear
    koala    marsupial
    Name: species, dtype: object
    label: population
    content:
    panda     1864
    polar    22000
    koala    80000
    Name: population, dtype: int64
    """
    if self.columns.is_unique and hasattr(self, '_item_cache'):
        for k in self.columns:
            yield k, self._get_item_cache(k)
    else:
        for i, k in enumerate(self.columns):
            yield k, self._ixs(i, axis=1)
r"""
Iterator over (column name, Series) pairs.

Iterates over the DataFrame columns, returning a tuple with
the column name and the content as a Series.

Yields
------
label : object
    The column names for the DataFrame being iterated over.
content : Series
    The column entries belonging to each label, as a Series.

See Also
--------
DataFrame.iterrows : Iterate over DataFrame rows as
    (index, Series) pairs.
DataFrame.itertuples : Iterate over DataFrame rows as namedtuples
    of the values.

Examples
--------
>>> df = pd.DataFrame({'species': ['bear', 'bear', 'marsupial'],
...                    'population': [1864, 22000, 80000]},
...                   index=['panda', 'polar', 'koala'])
>>> df
         species  population
panda       bear        1864
polar       bear       22000
koala  marsupial       80000
>>> for label, content in df.iteritems():
...     print('label:', label)
...     print('content:', content, sep='\n')
...
label: species
content:
panda         bear
polar         bear
koala    marsupial
Name: species, dtype: object
label: population
content:
panda     1864
polar    22000
koala    80000
Name: population, dtype: int64
"""
Below is the instruction that describes the task:

### Input:
r"""
Iterator over (column name, Series) pairs.

Iterates over the DataFrame columns, returning a tuple with
the column name and the content as a Series.

Yields
------
label : object
    The column names for the DataFrame being iterated over.
content : Series
    The column entries belonging to each label, as a Series.

See Also
--------
DataFrame.iterrows : Iterate over DataFrame rows as
    (index, Series) pairs.
DataFrame.itertuples : Iterate over DataFrame rows as namedtuples
    of the values.

Examples
--------
>>> df = pd.DataFrame({'species': ['bear', 'bear', 'marsupial'],
...                    'population': [1864, 22000, 80000]},
...                   index=['panda', 'polar', 'koala'])
>>> df
         species  population
panda       bear        1864
polar       bear       22000
koala  marsupial       80000
>>> for label, content in df.iteritems():
...     print('label:', label)
...     print('content:', content, sep='\n')
...
label: species
content:
panda         bear
polar         bear
koala    marsupial
Name: species, dtype: object
label: population
content:
panda     1864
polar    22000
koala    80000
Name: population, dtype: int64
"""

### Response:
def iteritems(self):
    r"""
    Iterator over (column name, Series) pairs.

    Iterates over the DataFrame columns, returning a tuple with
    the column name and the content as a Series.

    Yields
    ------
    label : object
        The column names for the DataFrame being iterated over.
    content : Series
        The column entries belonging to each label, as a Series.

    See Also
    --------
    DataFrame.iterrows : Iterate over DataFrame rows as
        (index, Series) pairs.
    DataFrame.itertuples : Iterate over DataFrame rows as namedtuples
        of the values.

    Examples
    --------
    >>> df = pd.DataFrame({'species': ['bear', 'bear', 'marsupial'],
    ...                    'population': [1864, 22000, 80000]},
    ...                   index=['panda', 'polar', 'koala'])
    >>> df
             species  population
    panda       bear        1864
    polar       bear       22000
    koala  marsupial       80000
    >>> for label, content in df.iteritems():
    ...     print('label:', label)
    ...     print('content:', content, sep='\n')
    ...
    label: species
    content:
    panda         bear
    polar         bear
    koala    marsupial
    Name: species, dtype: object
    label: population
    content:
    panda     1864
    polar    22000
    koala    80000
    Name: population, dtype: int64
    """
    if self.columns.is_unique and hasattr(self, '_item_cache'):
        for k in self.columns:
            yield k, self._get_item_cache(k)
    else:
        for i, k in enumerate(self.columns):
            yield k, self._ixs(i, axis=1)
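The `iteritems` record above yields `(column name, Series)` pairs from a DataFrame. Without pulling in pandas, the same column-wise iteration pattern can be sketched over a plain dict-of-lists "frame" — the frame layout and helper name here are illustrative, not part of the pandas API:

```python
def iteritems(frame):
    """Yield (label, column) pairs from a dict-of-lists 'frame'."""
    # Python 3.7+ dicts preserve insertion order, so columns come
    # back in the order they were defined, as in the pandas example.
    for label in frame:
        yield label, frame[label]


frame = {
    "species": ["bear", "bear", "marsupial"],
    "population": [1864, 22000, 80000],
}

for label, content in iteritems(frame):
    print("label:", label)
    print("content:", content)
```

Each iteration hands back the column name together with its full contents, mirroring the `(label, content)` tuples in the doctest above.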