Columns: code (string, 75 to 104k chars), docstring (string, 1 to 46.9k chars), text (string, 164 to 112k chars)
from functools import wraps  # needed for @wraps below

def kwarg_decorator(func):
    """
    Turns a function that accepts a single arg and some kwargs into a
    decorator that can optionally be called with kwargs:

    .. code-block:: python

        @kwarg_decorator
        def my_decorator(func, bar=True, baz=None):
            ...

        @my_decorator
        def my_func():
            pass

        @my_decorator(bar=False)
        def my_other_func():
            pass
    """
    @wraps(func)
    def decorator(arg=None, **kwargs):
        if arg is None:
            # called with kwargs only: return a decorator awaiting the arg
            return lambda arg: decorator(arg, **kwargs)
        return func(arg, **kwargs)
    return decorator
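As a quick check of the pattern, here is a self-contained sketch: the decorator body is copied verbatim from above, while `shout`, `greet`, and `ask` are made-up names for illustration.

```python
from functools import wraps

def kwarg_decorator(func):
    # copied from the function above so this snippet runs standalone
    @wraps(func)
    def decorator(arg=None, **kwargs):
        if arg is None:
            return lambda arg: decorator(arg, **kwargs)
        return func(arg, **kwargs)
    return decorator

@kwarg_decorator
def shout(func, suffix="!"):
    @wraps(func)
    def wrapper():
        return func().upper() + suffix
    return wrapper

@shout                 # bare form: shout(greet) is called directly
def greet():
    return "hi"

@shout(suffix="?")     # kwargs form: shout(suffix="?") returns a decorator
def ask():
    return "what"

print(greet())  # HI!
print(ask())    # WHAT?
```

Both decoration styles work because the first positional argument is `None` exactly when the decorator was invoked with kwargs only.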
def write_reference(self, ref_index):
    """
    Writes a reference

    :param ref_index: Local index (0-based) to the reference
    """
    self._writeStruct(
        ">BL", 1, (self.TC_REFERENCE, ref_index + self.BASE_REFERENCE_IDX)
    )
def get_dataset(lcc_server, dataset_id, strformat=False, page=1):
    '''This downloads a JSON form of a dataset from the specified lcc_server.

    If the dataset contains more than 1000 rows, it will be paginated, so you
    must use the `page` kwarg to get the page you want. The dataset JSON will
    contain the keys 'npages', 'currpage', and 'rows_per_page' to help with
    this.

    The 'rows' key contains the actual data rows as a list of tuples. The JSON
    contains metadata about the query that produced the dataset, information
    about the data table's columns, and links to download the dataset's
    products including the light curve ZIP and the dataset CSV.

    Parameters
    ----------

    lcc_server : str
        This is the base URL of the LCC-Server to talk to.

    dataset_id : str
        This is the unique setid of the dataset you want to get. In the
        results from the `*_search` functions above, this is the value of the
        `infodict['result']['setid']` key in the first item (the infodict) in
        the returned tuple.

    strformat : bool
        This sets if you want the returned data rows to be formatted in their
        string representations already. This can be useful if you're piping
        the returned JSON straight into some sort of UI and you don't want to
        deal with formatting floats, etc. To do this manually when strformat
        is set to False, look at the `coldesc` item in the returned dict,
        which gives the Python and Numpy string format specifiers for each
        column in the data table.

    page : int
        This sets which page of the dataset should be retrieved.

    Returns
    -------

    dict
        This returns the dataset JSON loaded into a dict.

    '''

    urlparams = {'strformat': 1 if strformat else 0,
                 'page': page,
                 'json': 1}
    urlqs = urlencode(urlparams)

    dataset_url = '%s/set/%s?%s' % (lcc_server, dataset_id, urlqs)

    LOGINFO('retrieving dataset %s from %s, using URL: %s ...'
            % (dataset_id, lcc_server, dataset_url))

    try:

        # check if we have an API key already
        have_apikey, apikey, expires = check_existing_apikey(lcc_server)

        # if not, get a new one
        if not have_apikey:
            apikey, expires = get_new_apikey(lcc_server)

        # if apikey is not None, add it in as an Authorization: Bearer [apikey]
        # header
        if apikey:
            headers = {'Authorization': 'Bearer: %s' % apikey}
        else:
            headers = {}

        # hit the server
        req = Request(dataset_url, data=None, headers=headers)
        resp = urlopen(req)
        dataset = json.loads(resp.read())
        return dataset

    except Exception as e:
        LOGEXCEPTION('could not retrieve the dataset JSON!')
        return None
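The URL construction in `get_dataset` is plain stdlib. A minimal sketch, where the server URL and setid are hypothetical values and Python 3's `urllib.parse` stands in for Python 2's `urllib`:

```python
from urllib.parse import urlencode  # `from urllib import urlencode` on Python 2

lcc_server = 'https://lcc.example.org'   # hypothetical base URL
dataset_id = 'abc123'                    # hypothetical setid
urlparams = {'strformat': 1, 'page': 2, 'json': 1}

# urlencode turns the dict into a query string; insertion order is preserved
dataset_url = '%s/set/%s?%s' % (lcc_server, dataset_id, urlencode(urlparams))
print(dataset_url)  # https://lcc.example.org/set/abc123?strformat=1&page=2&json=1
```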
def availableRoles(self):
    '''
    Returns the set of roles for this event. Since roles are not always
    custom specified for an event, this looks for the set of available
    roles in multiple places. If no roles are found, then the method
    returns an empty list, in which case it can be assumed that the
    event's registration is not role-specific.
    '''
    eventRoles = self.eventrole_set.filter(capacity__gt=0)
    if eventRoles.count() > 0:
        return [x.role for x in eventRoles]
    elif isinstance(self, Series):
        return self.classDescription.danceTypeLevel.danceType.roles.all()
    return []
def get_matching_blocks(self):
    """Return list of triples describing matching subsequences.

    Each triple is of the form (i, j, n), and means that
    a[i:i+n] == b[j:j+n].  The triples are monotonically increasing in
    i and in j.  New in Python 2.5, it's also guaranteed that if
    (i, j, n) and (i', j', n') are adjacent triples in the list, and
    the second is not the last triple in the list, then i+n != i' or
    j+n != j'.  IOW, adjacent triples never describe adjacent equal
    blocks.

    The last triple is a dummy, (len(a), len(b), 0), and is the only
    triple with n==0.

    >>> s = SequenceMatcher(None, "abxcd", "abcd")
    >>> s.get_matching_blocks()
    [Match(a=0, b=0, size=2), Match(a=3, b=2, size=2), Match(a=5, b=4, size=0)]
    """

    if self.matching_blocks is not None:
        return self.matching_blocks
    la, lb = len(self.a), len(self.b)

    # This is most naturally expressed as a recursive algorithm, but
    # at least one user bumped into extreme use cases that exceeded
    # the recursion limit on their box.  So, now we maintain a list
    # (`queue`) of blocks we still need to look at, and append partial
    # results to `matching_blocks` in a loop; the matches are sorted
    # at the end.
    queue = [(0, la, 0, lb)]
    matching_blocks = []
    while queue:
        alo, ahi, blo, bhi = queue.pop()
        i, j, k = x = self.find_longest_match(alo, ahi, blo, bhi)
        # a[alo:i] vs b[blo:j] unknown
        # a[i:i+k] same as b[j:j+k]
        # a[i+k:ahi] vs b[j+k:bhi] unknown
        if k:   # if k is 0, there was no matching block
            matching_blocks.append(x)
            if alo < i and blo < j:
                queue.append((alo, i, blo, j))
            if i+k < ahi and j+k < bhi:
                queue.append((i+k, ahi, j+k, bhi))
    matching_blocks.sort()

    # It's possible that we have adjacent equal blocks in the
    # matching_blocks list now.  Starting with 2.5, this code was added
    # to collapse them.
    i1 = j1 = k1 = 0
    non_adjacent = []
    for i2, j2, k2 in matching_blocks:
        # Is this block adjacent to i1, j1, k1?
        if i1 + k1 == i2 and j1 + k1 == j2:
            # Yes, so collapse them -- this just increases the length of
            # the first block by the length of the second, and the first
            # block so lengthened remains the block to compare against.
            k1 += k2
        else:
            # Not adjacent.  Remember the first block (k1==0 means it's
            # the dummy we started with), and make the second block the
            # new block to compare against.
            if k1:
                non_adjacent.append((i1, j1, k1))
            i1, j1, k1 = i2, j2, k2
    if k1:
        non_adjacent.append((i1, j1, k1))

    non_adjacent.append( (la, lb, 0) )

    self.matching_blocks = map(Match._make, non_adjacent)
    return self.matching_blocks
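The doctest above can be exercised directly against the stdlib `difflib`, which ships this same algorithm:

```python
from difflib import SequenceMatcher

s = SequenceMatcher(None, "abxcd", "abcd")
blocks = s.get_matching_blocks()

# "ab" matches at (0, 0), "cd" at (3, 2), plus the (5, 4, 0) dummy terminator
print([tuple(b) for b in blocks])  # [(0, 0, 2), (3, 2, 2), (5, 4, 0)]
```

Note the result is cached on the matcher, so a second call returns the stored blocks without recomputing.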
def sql_program_name_func(command):
    """
    Extract program name from `command`.

    >>> sql_program_name_func('ls')
    'ls'
    >>> sql_program_name_func('git status')
    'git'
    >>> sql_program_name_func('EMACS=emacs make')
    'make'

    :type command: str
    """
    args = command.split(' ')
    for prog in args:
        if '=' not in prog:
            return prog
    return args[0]
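A standalone sanity check of the extraction logic; the function body is copied from above so the snippet runs on its own:

```python
def sql_program_name_func(command):
    # the first whitespace-separated token without '=' is the program name;
    # tokens like EMACS=emacs are env-var assignments and get skipped
    args = command.split(' ')
    for prog in args:
        if '=' not in prog:
            return prog
    return args[0]

print(sql_program_name_func('git status'))       # git
print(sql_program_name_func('EMACS=emacs make')) # make
print(sql_program_name_func('FOO=1'))            # FOO=1 (falls back to args[0])
```

The final `return args[0]` only fires when every token contains `=`, i.e. the command line is nothing but assignments.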
def send_sms(request, to_number, body, callback_urlname="sms_status_callback"):
    """
    Create :class:`OutgoingSMS` object and send SMS using Twilio.
    """
    client = TwilioRestClient(settings.TWILIO_ACCOUNT_SID,
                              settings.TWILIO_AUTH_TOKEN)
    from_number = settings.TWILIO_PHONE_NUMBER
    message = OutgoingSMS.objects.create(
        from_number=from_number,
        to_number=to_number,
        body=body,
    )
    status_callback = None
    if callback_urlname:
        status_callback = build_callback_url(request, callback_urlname, message)
    logger.debug("Sending SMS message to %s with callback url %s: %s.",
                 to_number, status_callback, body)
    if not getattr(settings, "TWILIO_DRY_MODE", False):
        sent = client.sms.messages.create(
            to=to_number,
            from_=from_number,
            body=body,
            status_callback=status_callback
        )
        logger.debug("SMS message sent: %s", sent.__dict__)
        message.sms_sid = sent.sid
        message.account_sid = sent.account_sid
        message.status = sent.status
        message.to_parsed = sent.to
        if sent.price:
            message.price = Decimal(force_text(sent.price))
            message.price_unit = sent.price_unit
        message.sent_at = sent.date_created
        message.save(update_fields=[
            "sms_sid", "account_sid", "status", "to_parsed",
            "price", "price_unit", "sent_at"
        ])
    else:
        logger.info("SMS: from %s to %s: %s", from_number, to_number, body)
    return message
def create_snapshot(kwargs=None, call=None):
    '''
    Create a new disk snapshot. Must specify `name` and `disk_name`.

    CLI Example:

    .. code-block:: bash

        salt-cloud -f create_snapshot gce name=snap1 disk_name=pd
    '''
    if call != 'function':
        raise SaltCloudSystemExit(
            'The create_snapshot function must be called with -f or --function.'
        )

    if not kwargs or 'name' not in kwargs:
        log.error(
            'A name must be specified when creating a snapshot.'
        )
        return False

    if 'disk_name' not in kwargs:
        log.error(
            'A disk_name must be specified when creating a snapshot.'
        )
        return False

    conn = get_conn()
    name = kwargs.get('name')
    disk_name = kwargs.get('disk_name')

    try:
        disk = conn.ex_get_volume(disk_name)
    except ResourceNotFoundError as exc:
        log.error(
            'Disk %s was not found. Exception was: %s',
            disk_name, exc,
            exc_info_on_loglevel=logging.DEBUG
        )
        return False

    __utils__['cloud.fire_event'](
        'event',
        'create snapshot',
        'salt/cloud/snapshot/creating',
        args={
            'name': name,
            'disk_name': disk_name,
        },
        sock_dir=__opts__['sock_dir'],
        transport=__opts__['transport']
    )

    snapshot = conn.create_volume_snapshot(disk, name)

    __utils__['cloud.fire_event'](
        'event',
        'created snapshot',
        'salt/cloud/snapshot/created',
        args={
            'name': name,
            'disk_name': disk_name,
        },
        sock_dir=__opts__['sock_dir'],
        transport=__opts__['transport']
    )

    return _expand_item(snapshot)
def PrepareMergeTaskStorage(self, task):
    """Prepares a task storage for merging.

    Moves the task storage file from the processed directory to the merge
    directory.

    Args:
      task (Task): task.

    Raises:
      IOError: if the storage type is not supported or
          if the storage file cannot be renamed.
      OSError: if the storage type is not supported or
          if the storage file cannot be renamed.
    """
    if self._storage_type != definitions.STORAGE_TYPE_SESSION:
      raise IOError('Unsupported storage type.')

    merge_storage_file_path = self._GetMergeTaskStorageFilePath(task)
    processed_storage_file_path = self._GetProcessedStorageFilePath(task)

    task.storage_file_size = os.path.getsize(processed_storage_file_path)

    try:
      os.rename(processed_storage_file_path, merge_storage_file_path)
    except OSError as exception:
      raise IOError((
          'Unable to rename task storage file: {0:s} with error: '
          '{1!s}').format(processed_storage_file_path, exception))
def check_is_uuid(self, uuid_str: str):
    """Check if it's an Isogeo UUID handling specific form.

    :param str uuid_str: UUID string to check
    """
    # check uuid type
    if not isinstance(uuid_str, str):
        raise TypeError("'uuid_str' expected a str value.")
    else:
        pass
    # handle Isogeo specific UUID in XML exports
    if "isogeo:metadata" in uuid_str:
        uuid_str = "urn:uuid:{}".format(uuid_str.split(":")[-1])
    else:
        pass
    # test it
    try:
        uid = UUID(uuid_str)
        return uid.hex == uuid_str.replace("-", "").replace("urn:uuid:", "")
    except ValueError as e:
        logging.error(
            "uuid ValueError. {} ({}) -- {}".format(type(uuid_str), uuid_str, e)
        )
        return False
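A module-level sketch of the same check, for illustration; the error logging is dropped for brevity, and the example UUID values are made up:

```python
from uuid import UUID

def check_is_uuid(uuid_str):
    if not isinstance(uuid_str, str):
        raise TypeError("'uuid_str' expected a str value.")
    # normalize Isogeo's XML-export form to a urn:uuid: string
    if "isogeo:metadata" in uuid_str:
        uuid_str = "urn:uuid:{}".format(uuid_str.split(":")[-1])
    try:
        # UUID() accepts bare, hyphenated, and urn:uuid: forms
        uid = UUID(uuid_str)
        return uid.hex == uuid_str.replace("-", "").replace("urn:uuid:", "")
    except ValueError:
        return False

print(check_is_uuid("0a1b2c3d-4e5f-6789-abcd-ef0123456789"))  # True
print(check_is_uuid(
    "urn:isogeo:metadata:uuid:0a1b2c3d-4e5f-6789-abcd-ef0123456789"))  # True
print(check_is_uuid("not-a-uuid"))  # False
```

The final comparison against `uid.hex` rejects strings that `UUID()` tolerates but that are not in one of the canonical forms.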
def start_reporter(redis_address, stdout_file=None, stderr_file=None,
                   redis_password=None):
    """Start a reporter process.

    Args:
        redis_address (str): The address of the Redis instance.
        stdout_file: A file handle opened for writing to redirect stdout to.
            If no redirection should happen, then this should be None.
        stderr_file: A file handle opened for writing to redirect stderr to.
            If no redirection should happen, then this should be None.
        redis_password (str): The password of the redis server.

    Returns:
        ProcessInfo for the process that was started.
    """
    reporter_filepath = os.path.join(
        os.path.dirname(os.path.abspath(__file__)), "reporter.py")
    command = [
        sys.executable, "-u", reporter_filepath,
        "--redis-address={}".format(redis_address)
    ]
    if redis_password:
        command += ["--redis-password", redis_password]

    try:
        import psutil  # noqa: F401
    except ImportError:
        logger.warning("Failed to start the reporter. The reporter requires "
                       "'pip install psutil'.")
        return None

    process_info = start_ray_process(
        command,
        ray_constants.PROCESS_TYPE_REPORTER,
        stdout_file=stdout_file,
        stderr_file=stderr_file)
    return process_info
def register(self, service, name=''): """ Exposes a given service to this API. """ try: is_model = issubclass(service, orb.Model) except StandardError: is_model = False # expose an ORB table dynamically as a service if is_model: self.services[service.schema().dbname()] = (ModelService, service) else: super(OrbApiFactory, self).register(service, name=name)
Exposes a given service to this API.
Below is the the instruction that describes the task: ### Input: Exposes a given service to this API. ### Response: def register(self, service, name=''): """ Exposes a given service to this API. """ try: is_model = issubclass(service, orb.Model) except StandardError: is_model = False # expose an ORB table dynamically as a service if is_model: self.services[service.schema().dbname()] = (ModelService, service) else: super(OrbApiFactory, self).register(service, name=name)
def _is_same_type_as_root(self, obj): """ Testing if we try to collect an object of the same type as root. This is not really a good sign, because it means that we are going to collect a whole new tree, that will maybe collect a new tree, that will... """ if not self.ALLOWS_SAME_TYPE_AS_ROOT_COLLECT: obj_model = get_model_from_instance(obj) obj_key = get_key_from_instance(obj) is_same_type_as_root = obj_model == self.root_obj_model and obj_key != self.root_obj_key if is_same_type_as_root: self.emit_event(type='same_type_as_root', obj=obj) return is_same_type_as_root else: return False
Testing if we try to collect an object of the same type as root. This is not really a good sign, because it means that we are going to collect a whole new tree, that will maybe collect a new tree, that will...
Below is the the instruction that describes the task: ### Input: Testing if we try to collect an object of the same type as root. This is not really a good sign, because it means that we are going to collect a whole new tree, that will maybe collect a new tree, that will... ### Response: def _is_same_type_as_root(self, obj): """ Testing if we try to collect an object of the same type as root. This is not really a good sign, because it means that we are going to collect a whole new tree, that will maybe collect a new tree, that will... """ if not self.ALLOWS_SAME_TYPE_AS_ROOT_COLLECT: obj_model = get_model_from_instance(obj) obj_key = get_key_from_instance(obj) is_same_type_as_root = obj_model == self.root_obj_model and obj_key != self.root_obj_key if is_same_type_as_root: self.emit_event(type='same_type_as_root', obj=obj) return is_same_type_as_root else: return False
def transform(self, Y): """Transform input data `Y` to reduced data space defined by `self.data` Takes data in the same ambient space as `self.data` and transforms it to be in the same reduced space as `self.data_nu`. Parameters ---------- Y : array-like, shape=[n_samples_y, n_features] n_features must be the same as `self.data`. Returns ------- Transformed data, shape=[n_samples_y, n_pca] Raises ------ ValueError : if Y.shape[1] != self.data.shape[1] """ try: # try PCA first return self.data_pca.transform(Y) except AttributeError: # no pca, try to return data try: if Y.shape[1] != self.data.shape[1]: # shape is wrong raise ValueError return Y except IndexError: # len(Y.shape) < 2 raise ValueError except ValueError: # more informative error raise ValueError("data of shape {} cannot be transformed" " to graph built on data of shape {}".format( Y.shape, self.data.shape))
Transform input data `Y` to reduced data space defined by `self.data` Takes data in the same ambient space as `self.data` and transforms it to be in the same reduced space as `self.data_nu`. Parameters ---------- Y : array-like, shape=[n_samples_y, n_features] n_features must be the same as `self.data`. Returns ------- Transformed data, shape=[n_samples_y, n_pca] Raises ------ ValueError : if Y.shape[1] != self.data.shape[1]
Below is the the instruction that describes the task: ### Input: Transform input data `Y` to reduced data space defined by `self.data` Takes data in the same ambient space as `self.data` and transforms it to be in the same reduced space as `self.data_nu`. Parameters ---------- Y : array-like, shape=[n_samples_y, n_features] n_features must be the same as `self.data`. Returns ------- Transformed data, shape=[n_samples_y, n_pca] Raises ------ ValueError : if Y.shape[1] != self.data.shape[1] ### Response: def transform(self, Y): """Transform input data `Y` to reduced data space defined by `self.data` Takes data in the same ambient space as `self.data` and transforms it to be in the same reduced space as `self.data_nu`. Parameters ---------- Y : array-like, shape=[n_samples_y, n_features] n_features must be the same as `self.data`. Returns ------- Transformed data, shape=[n_samples_y, n_pca] Raises ------ ValueError : if Y.shape[1] != self.data.shape[1] """ try: # try PCA first return self.data_pca.transform(Y) except AttributeError: # no pca, try to return data try: if Y.shape[1] != self.data.shape[1]: # shape is wrong raise ValueError return Y except IndexError: # len(Y.shape) < 2 raise ValueError except ValueError: # more informative error raise ValueError("data of shape {} cannot be transformed" " to graph built on data of shape {}".format( Y.shape, self.data.shape))
def login_oauth2_user(valid, oauth): """Log in a user after having been verified.""" if valid: oauth.user.login_via_oauth2 = True _request_ctx_stack.top.user = oauth.user identity_changed.send(current_app._get_current_object(), identity=Identity(oauth.user.id)) return valid, oauth
Log in a user after having been verified.
Below is the the instruction that describes the task: ### Input: Log in a user after having been verified. ### Response: def login_oauth2_user(valid, oauth): """Log in a user after having been verified.""" if valid: oauth.user.login_via_oauth2 = True _request_ctx_stack.top.user = oauth.user identity_changed.send(current_app._get_current_object(), identity=Identity(oauth.user.id)) return valid, oauth
def write_library_constants(): """Write libtcod constants into the tcod.constants module.""" from tcod._libtcod import lib, ffi import tcod.color with open("tcod/constants.py", "w") as f: all_names = [] f.write(CONSTANT_MODULE_HEADER) for name in dir(lib): value = getattr(lib, name) if name[:5] == "TCOD_": if name.isupper(): # const names f.write("%s = %r\n" % (name[5:], value)) all_names.append(name[5:]) elif name.startswith("FOV"): # fov const names f.write("%s = %r\n" % (name, value)) all_names.append(name) elif name[:6] == "TCODK_": # key name f.write("KEY_%s = %r\n" % (name[6:], value)) all_names.append("KEY_%s" % name[6:]) f.write("\n# --- colors ---\n") for name in dir(lib): if name[:5] != "TCOD_": continue value = getattr(lib, name) if not isinstance(value, ffi.CData): continue if ffi.typeof(value) != ffi.typeof("TCOD_color_t"): continue color = tcod.color.Color._new_from_cdata(value) f.write("%s = %r\n" % (name[5:], color)) all_names.append(name[5:]) all_names = ",\n ".join('"%s"' % name for name in all_names) f.write("\n__all__ = [\n %s,\n]\n" % (all_names,)) with open("tcod/event_constants.py", "w") as f: all_names = [] f.write(EVENT_CONSTANT_MODULE_HEADER) f.write("# --- SDL scancodes ---\n") f.write( "%s\n_REVERSE_SCANCODE_TABLE = %s\n" % parse_sdl_attrs("SDL_SCANCODE", all_names) ) f.write("\n# --- SDL keyboard symbols ---\n") f.write( "%s\n_REVERSE_SYM_TABLE = %s\n" % parse_sdl_attrs("SDLK", all_names) ) f.write("\n# --- SDL keyboard modifiers ---\n") f.write( "%s\n_REVERSE_MOD_TABLE = %s\n" % parse_sdl_attrs("KMOD", all_names) ) f.write("\n# --- SDL wheel ---\n") f.write( "%s\n_REVERSE_WHEEL_TABLE = %s\n" % parse_sdl_attrs("SDL_MOUSEWHEEL", all_names) ) all_names = ",\n ".join('"%s"' % name for name in all_names) f.write("\n__all__ = [\n %s,\n]\n" % (all_names,))
Write libtcod constants into the tcod.constants module.
Below is the the instruction that describes the task: ### Input: Write libtcod constants into the tcod.constants module. ### Response: def write_library_constants(): """Write libtcod constants into the tcod.constants module.""" from tcod._libtcod import lib, ffi import tcod.color with open("tcod/constants.py", "w") as f: all_names = [] f.write(CONSTANT_MODULE_HEADER) for name in dir(lib): value = getattr(lib, name) if name[:5] == "TCOD_": if name.isupper(): # const names f.write("%s = %r\n" % (name[5:], value)) all_names.append(name[5:]) elif name.startswith("FOV"): # fov const names f.write("%s = %r\n" % (name, value)) all_names.append(name) elif name[:6] == "TCODK_": # key name f.write("KEY_%s = %r\n" % (name[6:], value)) all_names.append("KEY_%s" % name[6:]) f.write("\n# --- colors ---\n") for name in dir(lib): if name[:5] != "TCOD_": continue value = getattr(lib, name) if not isinstance(value, ffi.CData): continue if ffi.typeof(value) != ffi.typeof("TCOD_color_t"): continue color = tcod.color.Color._new_from_cdata(value) f.write("%s = %r\n" % (name[5:], color)) all_names.append(name[5:]) all_names = ",\n ".join('"%s"' % name for name in all_names) f.write("\n__all__ = [\n %s,\n]\n" % (all_names,)) with open("tcod/event_constants.py", "w") as f: all_names = [] f.write(EVENT_CONSTANT_MODULE_HEADER) f.write("# --- SDL scancodes ---\n") f.write( "%s\n_REVERSE_SCANCODE_TABLE = %s\n" % parse_sdl_attrs("SDL_SCANCODE", all_names) ) f.write("\n# --- SDL keyboard symbols ---\n") f.write( "%s\n_REVERSE_SYM_TABLE = %s\n" % parse_sdl_attrs("SDLK", all_names) ) f.write("\n# --- SDL keyboard modifiers ---\n") f.write( "%s\n_REVERSE_MOD_TABLE = %s\n" % parse_sdl_attrs("KMOD", all_names) ) f.write("\n# --- SDL wheel ---\n") f.write( "%s\n_REVERSE_WHEEL_TABLE = %s\n" % parse_sdl_attrs("SDL_MOUSEWHEEL", all_names) ) all_names = ",\n ".join('"%s"' % name for name in all_names) f.write("\n__all__ = [\n %s,\n]\n" % (all_names,))
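The prefix-scanning idea this entry relies on (walking `dir(lib)` and filtering attribute names by prefix before writing them out) can be shown in isolation against any module. This is an illustrative sketch only; `constants_with_prefix` is a hypothetical helper and is not part of the tcod build script above:

```python
import errno

def constants_with_prefix(obj, prefix):
    # Map stripped name -> value for every uppercase attribute whose
    # name starts with `prefix` (the same filtering idea used above for
    # the TCOD_/TCODK_/SDLK name families).
    return {name[len(prefix):]: getattr(obj, name)
            for name in dir(obj)
            if name.startswith(prefix) and name.isupper()}

# Works against any module that exposes C-style constant names:
errno_constants = constants_with_prefix(errno, 'E')
```

The real script additionally dispatches on value type (e.g. `TCOD_color_t` cdata) and writes the results to generated modules; the sketch covers only the name filtering.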
def send_message(self, subject=None, text=None, markdown=None, message_dict=None): """ Helper function to send a message to a group """ message = FiestaMessage(self.api, self, subject, text, markdown, message_dict) return message.send()
Helper function to send a message to a group
Below is the the instruction that describes the task: ### Input: Helper function to send a message to a group ### Response: def send_message(self, subject=None, text=None, markdown=None, message_dict=None): """ Helper function to send a message to a group """ message = FiestaMessage(self.api, self, subject, text, markdown, message_dict) return message.send()
def get_external_paths(self): """Returns a list of the external paths listed in the combobox.""" return [to_text_string(self.itemText(i)) for i in range(EXTERNAL_PATHS, self.count())]
Returns a list of the external paths listed in the combobox.
Below is the the instruction that describes the task: ### Input: Returns a list of the external paths listed in the combobox. ### Response: def get_external_paths(self): """Returns a list of the external paths listed in the combobox.""" return [to_text_string(self.itemText(i)) for i in range(EXTERNAL_PATHS, self.count())]
def main(): """ Usage: pre-sugartex [OPTIONS] Reads from stdin and writes to stdout. When no options: only replace U+02CE Modifier Letter Low Grave Accent (that looks like low '`') with $ Options: --all Full SugarTeX replace with regexp, --kiwi Same as above but with kiwi flavor, --help Show this message and exit. """ if len(sys.argv) > 1: arg1 = sys.argv[1] if arg1 == '--all' or arg1 == '--kiwi': if arg1 == '--kiwi': sugartex.mjx_hack() # sugartex.subscripts['ᵩ'] = 'ψ' # Consolas font specific # sugartex.superscripts['ᵠ'] = 'ψ' # Consolas font specific sugartex.ready() sys.stdout.write(sugartex_replace_all(sys.stdin.read())) elif arg1.lower() == '--help': print(str(main.__doc__).replace('\n ', '\n')) else: raise Exception("Invalid first argument: " + arg1) else: sys.stdout.write(sugartex_preprocess(sys.stdin.read()))
Usage: pre-sugartex [OPTIONS] Reads from stdin and writes to stdout. When no options: only replace U+02CE Modifier Letter Low Grave Accent (that looks like low '`') with $ Options: --all Full SugarTeX replace with regexp, --kiwi Same as above but with kiwi flavor, --help Show this message and exit.
Below is the the instruction that describes the task: ### Input: Usage: pre-sugartex [OPTIONS] Reads from stdin and writes to stdout. When no options: only replace U+02CE Modifier Letter Low Grave Accent (that looks like low '`') with $ Options: --all Full SugarTeX replace with regexp, --kiwi Same as above but with kiwi flavor, --help Show this message and exit. ### Response: def main(): """ Usage: pre-sugartex [OPTIONS] Reads from stdin and writes to stdout. When no options: only replace U+02CE Modifier Letter Low Grave Accent (that looks like low '`') with $ Options: --all Full SugarTeX replace with regexp, --kiwi Same as above but with kiwi flavor, --help Show this message and exit. """ if len(sys.argv) > 1: arg1 = sys.argv[1] if arg1 == '--all' or arg1 == '--kiwi': if arg1 == '--kiwi': sugartex.mjx_hack() # sugartex.subscripts['ᵩ'] = 'ψ' # Consolas font specific # sugartex.superscripts['ᵠ'] = 'ψ' # Consolas font specific sugartex.ready() sys.stdout.write(sugartex_replace_all(sys.stdin.read())) elif arg1.lower() == '--help': print(str(main.__doc__).replace('\n ', '\n')) else: raise Exception("Invalid first argument: " + arg1) else: sys.stdout.write(sugartex_preprocess(sys.stdin.read()))
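The no-option behavior the docstring describes (only replacing U+02CE Modifier Letter Low Grave Accent with `$`) can be sketched as a one-line filter. `preprocess` here is a hypothetical stand-in for `sugartex_preprocess`, whose actual implementation is not shown in this entry:

```python
def preprocess(text):
    # Replace U+02CE MODIFIER LETTER LOW GRAVE ACCENT (the low '`') with '$',
    # the default no-option behavior described in the docstring above.
    return text.replace('\u02ce', '$')

print(preprocess('\u02ceE = mc^2\u02ce'))  # → '$E = mc^2$'
```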
def clean_conf_folder(self, locale): """Remove the configuration directory for `locale`""" dirname = self.configuration.get_messages_dir(locale) dirname.removedirs_p()
Remove the configuration directory for `locale`
Below is the the instruction that describes the task: ### Input: Remove the configuration directory for `locale` ### Response: def clean_conf_folder(self, locale): """Remove the configuration directory for `locale`""" dirname = self.configuration.get_messages_dir(locale) dirname.removedirs_p()
async def stop(self, _signal): """ Finish all running tasks, cancel remaining tasks, then stop loop. :param _signal: :return: """ self.logger.info(f'Stopping spider: {self.name}') await self._cancel_tasks() self.loop.stop()
Finish all running tasks, cancel remaining tasks, then stop loop. :param _signal: :return:
Below is the the instruction that describes the task: ### Input: Finish all running tasks, cancel remaining tasks, then stop loop. :param _signal: :return: ### Response: async def stop(self, _signal): """ Finish all running tasks, cancel remaining tasks, then stop loop. :param _signal: :return: """ self.logger.info(f'Stopping spider: {self.name}') await self._cancel_tasks() self.loop.stop()
def _get_object_from_python_path(python_path): """Method that will fetch a Marshmallow schema from a path to it. Args: python_path (str): The string path to the Marshmallow schema. Returns: marshmallow.Schema: The schema matching the provided path. Raises: TypeError: This is raised if the specified object isn't a Marshmallow schema. """ # Dissect the path python_path = python_path.split('.') module_path = python_path[:-1] object_class = python_path[-1] if isinstance(module_path, list): module_path = '.'.join(module_path) # Grab the object module = import_module(module_path) schema = getattr(module, object_class) if isclass(schema): schema = schema() return schema
Method that will fetch a Marshmallow schema from a path to it. Args: python_path (str): The string path to the Marshmallow schema. Returns: marshmallow.Schema: The schema matching the provided path. Raises: TypeError: This is raised if the specified object isn't a Marshmallow schema.
Below is the the instruction that describes the task: ### Input: Method that will fetch a Marshmallow schema from a path to it. Args: python_path (str): The string path to the Marshmallow schema. Returns: marshmallow.Schema: The schema matching the provided path. Raises: TypeError: This is raised if the specified object isn't a Marshmallow schema. ### Response: def _get_object_from_python_path(python_path): """Method that will fetch a Marshmallow schema from a path to it. Args: python_path (str): The string path to the Marshmallow schema. Returns: marshmallow.Schema: The schema matching the provided path. Raises: TypeError: This is raised if the specified object isn't a Marshmallow schema. """ # Dissect the path python_path = python_path.split('.') module_path = python_path[:-1] object_class = python_path[-1] if isinstance(module_path, list): module_path = '.'.join(module_path) # Grab the object module = import_module(module_path) schema = getattr(module, object_class) if isclass(schema): schema = schema() return schema
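The split-then-import pattern above can be exercised with a small self-contained variant. `load_object` is a simplified stand-in (it uses `rpartition` instead of `split`/`join`, but resolves dotted paths the same way and likewise instantiates classes):

```python
from importlib import import_module
from inspect import isclass

def load_object(python_path):
    # "pkg.module.Name" -> import pkg.module, fetch Name,
    # and instantiate it if it turns out to be a class.
    module_path, _, object_name = python_path.rpartition('.')
    obj = getattr(import_module(module_path), object_name)
    return obj() if isclass(obj) else obj

decoder = load_object('json.JSONDecoder')  # an instantiated json.JSONDecoder
```

Unlike the original, this sketch omits the documented `TypeError` check that the resolved object is actually a Marshmallow schema.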
def get_hosts(self, pattern="all"): """ find all host names matching a pattern string, taking into account any inventory restrictions or applied subsets. """ # process patterns if isinstance(pattern, list): pattern = ';'.join(pattern) patterns = pattern.replace(";",":").split(":") hosts = self._get_hosts(patterns) # exclude hosts not in a subset, if defined if self._subset: subset = self._get_hosts(self._subset) hosts.intersection_update(subset) # exclude hosts mentioned in any restriction (ex: failed hosts) if self._restriction is not None: hosts = [ h for h in hosts if h.name in self._restriction ] if self._also_restriction is not None: hosts = [ h for h in hosts if h.name in self._also_restriction ] return sorted(hosts, key=lambda x: x.name)
find all host names matching a pattern string, taking into account any inventory restrictions or applied subsets.
Below is the the instruction that describes the task: ### Input: find all host names matching a pattern string, taking into account any inventory restrictions or applied subsets. ### Response: def get_hosts(self, pattern="all"): """ find all host names matching a pattern string, taking into account any inventory restrictions or applied subsets. """ # process patterns if isinstance(pattern, list): pattern = ';'.join(pattern) patterns = pattern.replace(";",":").split(":") hosts = self._get_hosts(patterns) # exclude hosts not in a subset, if defined if self._subset: subset = self._get_hosts(self._subset) hosts.intersection_update(subset) # exclude hosts mentioned in any restriction (ex: failed hosts) if self._restriction is not None: hosts = [ h for h in hosts if h.name in self._restriction ] if self._also_restriction is not None: hosts = [ h for h in hosts if h.name in self._also_restriction ] return sorted(hosts, key=lambda x: x.name)
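The pattern handling at the top of `get_hosts` (joining list input with `;`, then treating `;` and `:` interchangeably as separators) can be isolated as a tiny helper for illustration; `split_patterns` is a hypothetical name:

```python
def split_patterns(pattern):
    # Accept either a list or a string; ';' and ':' both act as separators,
    # mirroring the normalization step in get_hosts above.
    if isinstance(pattern, list):
        pattern = ';'.join(pattern)
    return pattern.replace(';', ':').split(':')

print(split_patterns('webservers;dbservers:!staging'))
# → ['webservers', 'dbservers', '!staging']
```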
def jenkins_request_with_headers(jenkins_server, req): """ We need to get the headers in addition to the body answer to get the location from them This function uses jenkins_request method from python-jenkins library with just the return call changed :param jenkins_server: The server to query :param req: The request to execute :return: Dict containing the response body (key body) and the headers coming along (headers) """ try: response = jenkins_server.jenkins_request(req) response_body = response.content response_headers = response.headers if response_body is None: raise jenkins.EmptyResponseException( "Error communicating with server[%s]: " "empty response" % jenkins_server.server) return {'body': response_body.decode('utf-8'), 'headers': response_headers} except HTTPError as e: # Jenkins's funky authentication means its nigh impossible to # distinguish errors. if e.code in [401, 403, 500]: # six.moves.urllib.error.HTTPError provides a 'reason' # attribute for all python version except for ver 2.6 # Falling back to HTTPError.msg since it contains the # same info as reason raise JenkinsException( 'Error in request. ' + 'Possibly authentication failed [%s]: %s' % ( e.code, e.msg) ) elif e.code == 404: raise jenkins.NotFoundException('Requested item could not be found') else: raise except socket.timeout as e: raise jenkins.TimeoutException('Error in request: %s' % e) except URLError as e: # python 2.6 compatibility to ensure same exception raised # since URLError wraps a socket timeout on python 2.6. if str(e.reason) == "timed out": raise jenkins.TimeoutException('Error in request: %s' % e.reason) raise JenkinsException('Error in request: %s' % e.reason)
We need to get the headers in addition to the body answer to get the location from them This function uses jenkins_request method from python-jenkins library with just the return call changed :param jenkins_server: The server to query :param req: The request to execute :return: Dict containing the response body (key body) and the headers coming along (headers)
Below is the the instruction that describes the task: ### Input: We need to get the headers in addition to the body answer to get the location from them This function uses jenkins_request method from python-jenkins library with just the return call changed :param jenkins_server: The server to query :param req: The request to execute :return: Dict containing the response body (key body) and the headers coming along (headers) ### Response: def jenkins_request_with_headers(jenkins_server, req): """ We need to get the headers in addition to the body answer to get the location from them This function uses jenkins_request method from python-jenkins library with just the return call changed :param jenkins_server: The server to query :param req: The request to execute :return: Dict containing the response body (key body) and the headers coming along (headers) """ try: response = jenkins_server.jenkins_request(req) response_body = response.content response_headers = response.headers if response_body is None: raise jenkins.EmptyResponseException( "Error communicating with server[%s]: " "empty response" % jenkins_server.server) return {'body': response_body.decode('utf-8'), 'headers': response_headers} except HTTPError as e: # Jenkins's funky authentication means its nigh impossible to # distinguish errors. if e.code in [401, 403, 500]: # six.moves.urllib.error.HTTPError provides a 'reason' # attribute for all python version except for ver 2.6 # Falling back to HTTPError.msg since it contains the # same info as reason raise JenkinsException( 'Error in request. ' + 'Possibly authentication failed [%s]: %s' % ( e.code, e.msg) ) elif e.code == 404: raise jenkins.NotFoundException('Requested item could not be found') else: raise except socket.timeout as e: raise jenkins.TimeoutException('Error in request: %s' % e) except URLError as e: # python 2.6 compatibility to ensure same exception raised # since URLError wraps a socket timeout on python 2.6. 
if str(e.reason) == "timed out": raise jenkins.TimeoutException('Error in request: %s' % e.reason) raise JenkinsException('Error in request: %s' % e.reason)
def _resources(self, custom_indicators=False): """Initialize the resource module. This method will make a request to the ThreatConnect API to dynamically build classes to support custom Indicators. All other resources are available via this class. .. Note:: Resource Classes can be accessed using ``tcex.resources.<Class>`` or using tcex.resource('<resource name>'). """ from importlib import import_module # create resource object self.resources = import_module('tcex.tcex_resources') if custom_indicators: self.log.info('Loading custom indicator types.') # Retrieve all indicator types from the API r = self.session.get('/v2/types/indicatorTypes') # check for bad status code and response that is not JSON if not r.ok or 'application/json' not in r.headers.get('content-type', ''): warn = u'Custom Indicators are not supported ({}).'.format(r.text) self.log.warning(warn) return response = r.json() if response.get('status') != 'Success': warn = u'Bad Status: Custom Indicators are not supported ({}).'.format(r.text) self.log.warning(warn) return try: # Dynamically create custom indicator class data = response.get('data', {}).get('indicatorType', []) for entry in data: name = self.safe_rt(entry.get('name')) # temp fix for API issue where boolean are returned as strings entry['custom'] = self.utils.to_bool(entry.get('custom')) entry['parsable'] = self.utils.to_bool(entry.get('parsable')) self._indicator_types.append(u'{}'.format(entry.get('name'))) self._indicator_types_data[entry.get('name')] = entry if not entry['custom']: continue # Custom Indicator have 3 values. Only add the value if it is set. 
value_fields = [] if entry.get('value1Label'): value_fields.append(entry.get('value1Label')) if entry.get('value2Label'): value_fields.append(entry.get('value2Label')) if entry.get('value3Label'): value_fields.append(entry.get('value3Label')) # get instance of Indicator Class i = self.resources.Indicator(self) custom = { '_api_branch': entry['apiBranch'], '_api_entity': entry['apiEntity'], '_api_uri': '{}/{}'.format(i.api_branch, entry['apiBranch']), '_case_preference': entry['casePreference'], '_custom': entry['custom'], '_name': name, '_parsable': entry['parsable'], '_request_entity': entry['apiEntity'], '_request_uri': '{}/{}'.format(i.api_branch, entry['apiBranch']), '_status_codes': { 'DELETE': [200], 'GET': [200], 'POST': [200, 201], 'PUT': [200], }, '_value_fields': value_fields, } # Call custom indicator class factory setattr( self.resources, name, self.resources.class_factory(name, self.resources.Indicator, custom), ) except Exception as e: self.handle_error(220, [e])
Initialize the resource module. This method will make a request to the ThreatConnect API to dynamically build classes to support custom Indicators. All other resources are available via this class. .. Note:: Resource Classes can be accessed using ``tcex.resources.<Class>`` or using tcex.resource('<resource name>').
Below is the the instruction that describes the task: ### Input: Initialize the resource module. This method will make a request to the ThreatConnect API to dynamically build classes to support custom Indicators. All other resources are available via this class. .. Note:: Resource Classes can be accessed using ``tcex.resources.<Class>`` or using tcex.resource('<resource name>'). ### Response: def _resources(self, custom_indicators=False): """Initialize the resource module. This method will make a request to the ThreatConnect API to dynamically build classes to support custom Indicators. All other resources are available via this class. .. Note:: Resource Classes can be accessed using ``tcex.resources.<Class>`` or using tcex.resource('<resource name>'). """ from importlib import import_module # create resource object self.resources = import_module('tcex.tcex_resources') if custom_indicators: self.log.info('Loading custom indicator types.') # Retrieve all indicator types from the API r = self.session.get('/v2/types/indicatorTypes') # check for bad status code and response that is not JSON if not r.ok or 'application/json' not in r.headers.get('content-type', ''): warn = u'Custom Indicators are not supported ({}).'.format(r.text) self.log.warning(warn) return response = r.json() if response.get('status') != 'Success': warn = u'Bad Status: Custom Indicators are not supported ({}).'.format(r.text) self.log.warning(warn) return try: # Dynamically create custom indicator class data = response.get('data', {}).get('indicatorType', []) for entry in data: name = self.safe_rt(entry.get('name')) # temp fix for API issue where boolean are returned as strings entry['custom'] = self.utils.to_bool(entry.get('custom')) entry['parsable'] = self.utils.to_bool(entry.get('parsable')) self._indicator_types.append(u'{}'.format(entry.get('name'))) self._indicator_types_data[entry.get('name')] = entry if not entry['custom']: continue # Custom Indicator have 3 values. Only add the value if it is set. value_fields = [] if entry.get('value1Label'): value_fields.append(entry.get('value1Label')) if entry.get('value2Label'): value_fields.append(entry.get('value2Label')) if entry.get('value3Label'): value_fields.append(entry.get('value3Label')) # get instance of Indicator Class i = self.resources.Indicator(self) custom = { '_api_branch': entry['apiBranch'], '_api_entity': entry['apiEntity'], '_api_uri': '{}/{}'.format(i.api_branch, entry['apiBranch']), '_case_preference': entry['casePreference'], '_custom': entry['custom'], '_name': name, '_parsable': entry['parsable'], '_request_entity': entry['apiEntity'], '_request_uri': '{}/{}'.format(i.api_branch, entry['apiBranch']), '_status_codes': { 'DELETE': [200], 'GET': [200], 'POST': [200, 201], 'PUT': [200], }, '_value_fields': value_fields, } # Call custom indicator class factory setattr( self.resources, name, self.resources.class_factory(name, self.resources.Indicator, custom), ) except Exception as e: self.handle_error(220, [e])
def handle_request(self, connection, msg): """Dispatch a request message to the appropriate method. Parameters ---------- connection : ClientConnection object The client connection the message was from. msg : Message object The request message to process. Returns ------- done_future : Future or None Returns Future for async request handlers that will resolve when done, or None for sync request handlers once they have completed. """ send_reply = True # TODO Should check presence of Message-ids against protocol flags and # raise an error as needed. if msg.name in self._request_handlers: req_conn = ClientRequestConnection(connection, msg) handler = self._request_handlers[msg.name] try: reply = handler(self, req_conn, msg) # If we get a future, assume this is an async message handler # that will resolve the future with the reply message when it # is complete. Attach a message-sending callback to the future, # and return the future. if gen.is_future(reply): concurrent = getattr(handler, '_concurrent_reply', False) concurrent_str = ' CONCURRENT' if concurrent else '' done_future = Future() def async_reply(f): try: connection.reply(f.result(), msg) self._logger.debug("%s FUTURE%s replied", msg.name, concurrent_str) except FailReply, e: reason = str(e) self._logger.error("Request %s FUTURE%s FAIL: %s", msg.name, concurrent_str, reason) reply = Message.reply(msg.name, "fail", reason) connection.reply(reply, msg) except AsyncReply: self._logger.debug("%s FUTURE ASYNC OK" % (msg.name,)) except Exception: error_reply = self.create_exception_reply_and_log( msg, sys.exc_info()) connection.reply(error_reply, msg) finally: done_future.set_result(None) # TODO When using the return_reply() decorator the future # returned is not currently threadsafe, must either deal # with it here, or in kattypes.py. Would be nice if we don't # have to always fall back to adding a callback, or wrapping # a thread-safe future. Supporting sync-with-thread and # async futures is turning out to be a pain in the ass ;) self.ioloop.add_callback(reply.add_done_callback, async_reply) # reply.add_done_callback(async_reply) if concurrent: # Return immediately if this is a concurrent handler self._logger.debug("%s FUTURE CONCURRENT OK", msg.name) return else: self._logger.debug("%s FUTURE OK", msg.name) return done_future else: assert (reply.mtype == Message.REPLY) assert (reply.name == msg.name) self._logger.debug("%s OK" % (msg.name,)) except AsyncReply, e: self._logger.debug("%s ASYNC OK" % (msg.name,)) send_reply = False except FailReply, e: reason = str(e) self._logger.error("Request %s FAIL: %s" % (msg.name, reason)) reply = Message.reply(msg.name, "fail", reason) except Exception: reply = self.create_exception_reply_and_log(msg, sys.exc_info()) else: self._logger.error("%s INVALID: Unknown request." % (msg.name,)) reply = Message.reply(msg.name, "invalid", "Unknown request.") if send_reply: connection.reply(reply, msg)
Dispatch a request message to the appropriate method. Parameters ---------- connection : ClientConnection object The client connection the message was from. msg : Message object The request message to process. Returns ------- done_future : Future or None Returns Future for async request handlers that will resolve when done, or None for sync request handlers once they have completed.
Below is the the instruction that describes the task: ### Input: Dispatch a request message to the appropriate method. Parameters ---------- connection : ClientConnection object The client connection the message was from. msg : Message object The request message to process. Returns ------- done_future : Future or None Returns Future for async request handlers that will resolve when done, or None for sync request handlers once they have completed. ### Response: def handle_request(self, connection, msg): """Dispatch a request message to the appropriate method. Parameters ---------- connection : ClientConnection object The client connection the message was from. msg : Message object The request message to process. Returns ------- done_future : Future or None Returns Future for async request handlers that will resolve when done, or None for sync request handlers once they have completed. """ send_reply = True # TODO Should check presence of Message-ids against protocol flags and # raise an error as needed. if msg.name in self._request_handlers: req_conn = ClientRequestConnection(connection, msg) handler = self._request_handlers[msg.name] try: reply = handler(self, req_conn, msg) # If we get a future, assume this is an async message handler # that will resolve the future with the reply message when it # is complete. Attach a message-sending callback to the future, # and return the future. 
if gen.is_future(reply): concurrent = getattr(handler, '_concurrent_reply', False) concurrent_str = ' CONCURRENT' if concurrent else '' done_future = Future() def async_reply(f): try: connection.reply(f.result(), msg) self._logger.debug("%s FUTURE%s replied", msg.name, concurrent_str) except FailReply, e: reason = str(e) self._logger.error("Request %s FUTURE%s FAIL: %s", msg.name, concurrent_str, reason) reply = Message.reply(msg.name, "fail", reason) connection.reply(reply, msg) except AsyncReply: self._logger.debug("%s FUTURE ASYNC OK" % (msg.name,)) except Exception: error_reply = self.create_exception_reply_and_log( msg, sys.exc_info()) connection.reply(error_reply, msg) finally: done_future.set_result(None) # TODO When using the return_reply() decorator the future # returned is not currently threadsafe, must either deal # with it here, or in kattypes.py. Would be nice if we don't # have to always fall back to adding a callback, or wrapping # a thread-safe future. Supporting sync-with-thread and # async futures is turning out to be a pain in the ass ;) self.ioloop.add_callback(reply.add_done_callback, async_reply) # reply.add_done_callback(async_reply) if concurrent: # Return immediately if this is a concurrent handler self._logger.debug("%s FUTURE CONCURRENT OK", msg.name) return else: self._logger.debug("%s FUTURE OK", msg.name) return done_future else: assert (reply.mtype == Message.REPLY) assert (reply.name == msg.name) self._logger.debug("%s OK" % (msg.name,)) except AsyncReply, e: self._logger.debug("%s ASYNC OK" % (msg.name,)) send_reply = False except FailReply, e: reason = str(e) self._logger.error("Request %s FAIL: %s" % (msg.name, reason)) reply = Message.reply(msg.name, "fail", reason) except Exception: reply = self.create_exception_reply_and_log(msg, sys.exc_info()) else: self._logger.error("%s INVALID: Unknown request." % (msg.name,)) reply = Message.reply(msg.name, "invalid", "Unknown request.") if send_reply: connection.reply(reply, msg)
def sort_cards(cards, ranks=None):
    """
    Sorts a given list of cards, either by poker ranks, or big two ranks.

    :arg cards:
        The cards to sort.
    :arg dict ranks:
        The rank dict to reference for sorting. If ``None``, it will
        default to ``DEFAULT_RANKS``.

    :returns:
        The sorted cards.
    """
    ranks = ranks or DEFAULT_RANKS
    if ranks.get("suits"):
        cards = sorted(
            cards,
            key=lambda x: ranks["suits"][x.suit] if x.suit is not None else 0
        )
    if ranks.get("values"):
        cards = sorted(
            cards,
            key=lambda x: ranks["values"][x.value]
        )
    return cards

Sorts a given list of cards, either by poker ranks, or big two ranks.

:arg cards:
    The cards to sort.
:arg dict ranks:
    The rank dict to reference for sorting. If ``None``, it will
    default to ``DEFAULT_RANKS``.

:returns:
    The sorted cards.

Below is the instruction that describes the task:
### Input:
Sorts a given list of cards, either by poker ranks, or big two ranks.

:arg cards:
    The cards to sort.
:arg dict ranks:
    The rank dict to reference for sorting. If ``None``, it will
    default to ``DEFAULT_RANKS``.

:returns:
    The sorted cards.
### Response:
def sort_cards(cards, ranks=None):
    """
    Sorts a given list of cards, either by poker ranks, or big two ranks.

    :arg cards:
        The cards to sort.
    :arg dict ranks:
        The rank dict to reference for sorting. If ``None``, it will
        default to ``DEFAULT_RANKS``.

    :returns:
        The sorted cards.
    """
    ranks = ranks or DEFAULT_RANKS
    if ranks.get("suits"):
        cards = sorted(
            cards,
            key=lambda x: ranks["suits"][x.suit] if x.suit is not None else 0
        )
    if ranks.get("values"):
        cards = sorted(
            cards,
            key=lambda x: ranks["values"][x.value]
        )
    return cards
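An aside on the record above: the two passes rely on Python's stable `sorted`, so sorting by suit first and value second yields value-major, suit-minor order. A minimal self-contained sketch (hypothetical `Card` tuple and rank dict, not the real pydealer objects):

```python
from collections import namedtuple

# Hypothetical stand-ins for the library's Card and DEFAULT_RANKS.
Card = namedtuple("Card", ["value", "suit"])

RANKS = {
    "suits": {"Clubs": 1, "Diamonds": 2, "Hearts": 3, "Spades": 4},
    "values": {"2": 2, "King": 13, "Ace": 14},
}

def sort_cards(cards, ranks=None):
    ranks = ranks or RANKS
    # First pass: order by suit (cards with no suit sort first).
    if ranks.get("suits"):
        cards = sorted(
            cards,
            key=lambda c: ranks["suits"][c.suit] if c.suit is not None else 0
        )
    # Second pass: stable sort by value keeps the suit order within ties.
    if ranks.get("values"):
        cards = sorted(cards, key=lambda c: ranks["values"][c.value])
    return cards

hand = [Card("Ace", "Spades"), Card("2", "Hearts"), Card("Ace", "Clubs")]
print([(c.value, c.suit) for c in sort_cards(hand)])
# [('2', 'Hearts'), ('Ace', 'Clubs'), ('Ace', 'Spades')]
```

Because the second sort is stable, the two Aces keep their suit ordering from the first pass.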
def add_finder_patterns(matrix, is_micro):
    """\
    Adds the finder pattern(s) to the matrix.

    QR Codes get three finder patterns, Micro QR Codes have just one
    finder pattern.

    ISO/IEC 18004:2015(E) -- 6.3.3 Finder pattern (page 16)
    ISO/IEC 18004:2015(E) -- 6.3.4 Separator (page 17)

    :param matrix: The matrix.
    :param bool is_micro: Indicates if the matrix represents a Micro QR Code.
    """
    add_finder_pattern(matrix, 0, 0)  # Upper left corner
    if not is_micro:
        add_finder_pattern(matrix, 0, -7)  # Upper right corner
        add_finder_pattern(matrix, -7, 0)

\
Adds the finder pattern(s) to the matrix.

QR Codes get three finder patterns, Micro QR Codes have just one
finder pattern.

ISO/IEC 18004:2015(E) -- 6.3.3 Finder pattern (page 16)
ISO/IEC 18004:2015(E) -- 6.3.4 Separator (page 17)

:param matrix: The matrix.
:param bool is_micro: Indicates if the matrix represents a Micro QR Code.

Below is the instruction that describes the task:
### Input:
\
Adds the finder pattern(s) to the matrix.

QR Codes get three finder patterns, Micro QR Codes have just one
finder pattern.

ISO/IEC 18004:2015(E) -- 6.3.3 Finder pattern (page 16)
ISO/IEC 18004:2015(E) -- 6.3.4 Separator (page 17)

:param matrix: The matrix.
:param bool is_micro: Indicates if the matrix represents a Micro QR Code.
### Response:
def add_finder_patterns(matrix, is_micro):
    """\
    Adds the finder pattern(s) to the matrix.

    QR Codes get three finder patterns, Micro QR Codes have just one
    finder pattern.

    ISO/IEC 18004:2015(E) -- 6.3.3 Finder pattern (page 16)
    ISO/IEC 18004:2015(E) -- 6.3.4 Separator (page 17)

    :param matrix: The matrix.
    :param bool is_micro: Indicates if the matrix represents a Micro QR Code.
    """
    add_finder_pattern(matrix, 0, 0)  # Upper left corner
    if not is_micro:
        add_finder_pattern(matrix, 0, -7)  # Upper right corner
        add_finder_pattern(matrix, -7, 0)
def get_gradebook_columns_by_genus_type(self, gradebook_column_genus_type):
    """Gets a ``GradebookColumnList`` corresponding to the given
    gradebook column genus ``Type`` which does not include gradebook
    columns of genus types derived from the specified ``Type``.

    In plenary mode, the returned list contains all known gradebook
    columns or an error results. Otherwise, the returned list may
    contain only those gradebook columns that are accessible through
    this session.

    arg:    gradebook_column_genus_type (osid.type.Type): a gradebook
            column genus type
    return: (osid.grading.GradebookColumnList) - the returned
            ``GradebookColumn`` list
    raise:  NullArgument - ``gradebook_column_genus_type`` is ``null``
    raise:  OperationFailed - unable to complete request
    raise:  PermissionDenied - authorization failure
    *compliance: mandatory -- This method must be implemented.*

    """
    # Implemented from template for
    # osid.resource.ResourceLookupSession.get_resources_by_genus_type
    # NOTE: This implementation currently ignores plenary view
    collection = JSONClientValidated('grading',
                                     collection='GradebookColumn',
                                     runtime=self._runtime)
    result = collection.find(
        dict({'genusTypeId': str(gradebook_column_genus_type)},
             **self._view_filter())).sort('_id', DESCENDING)
    return objects.GradebookColumnList(result, runtime=self._runtime, proxy=self._proxy)

Gets a ``GradebookColumnList`` corresponding to the given gradebook
column genus ``Type`` which does not include gradebook columns of
genus types derived from the specified ``Type``.

In plenary mode, the returned list contains all known gradebook
columns or an error results. Otherwise, the returned list may contain
only those gradebook columns that are accessible through this session.

arg:    gradebook_column_genus_type (osid.type.Type): a gradebook
        column genus type
return: (osid.grading.GradebookColumnList) - the returned
        ``GradebookColumn`` list
raise:  NullArgument - ``gradebook_column_genus_type`` is ``null``
raise:  OperationFailed - unable to complete request
raise:  PermissionDenied - authorization failure
*compliance: mandatory -- This method must be implemented.*

Below is the instruction that describes the task:
### Input:
Gets a ``GradebookColumnList`` corresponding to the given gradebook
column genus ``Type`` which does not include gradebook columns of
genus types derived from the specified ``Type``.

In plenary mode, the returned list contains all known gradebook
columns or an error results. Otherwise, the returned list may contain
only those gradebook columns that are accessible through this session.

arg:    gradebook_column_genus_type (osid.type.Type): a gradebook
        column genus type
return: (osid.grading.GradebookColumnList) - the returned
        ``GradebookColumn`` list
raise:  NullArgument - ``gradebook_column_genus_type`` is ``null``
raise:  OperationFailed - unable to complete request
raise:  PermissionDenied - authorization failure
*compliance: mandatory -- This method must be implemented.*
### Response:
def get_gradebook_columns_by_genus_type(self, gradebook_column_genus_type):
    """Gets a ``GradebookColumnList`` corresponding to the given
    gradebook column genus ``Type`` which does not include gradebook
    columns of genus types derived from the specified ``Type``.

    In plenary mode, the returned list contains all known gradebook
    columns or an error results. Otherwise, the returned list may
    contain only those gradebook columns that are accessible through
    this session.

    arg:    gradebook_column_genus_type (osid.type.Type): a gradebook
            column genus type
    return: (osid.grading.GradebookColumnList) - the returned
            ``GradebookColumn`` list
    raise:  NullArgument - ``gradebook_column_genus_type`` is ``null``
    raise:  OperationFailed - unable to complete request
    raise:  PermissionDenied - authorization failure
    *compliance: mandatory -- This method must be implemented.*

    """
    # Implemented from template for
    # osid.resource.ResourceLookupSession.get_resources_by_genus_type
    # NOTE: This implementation currently ignores plenary view
    collection = JSONClientValidated('grading',
                                     collection='GradebookColumn',
                                     runtime=self._runtime)
    result = collection.find(
        dict({'genusTypeId': str(gradebook_column_genus_type)},
             **self._view_filter())).sort('_id', DESCENDING)
    return objects.GradebookColumnList(result, runtime=self._runtime, proxy=self._proxy)
def Initialize(self):
    """Open the delegate object."""
    if "r" in self.mode:
        delegate = self.Get(self.Schema.DELEGATE)
        if delegate:
            self.delegate = aff4.FACTORY.Open(
                delegate, mode=self.mode, token=self.token, age=self.age_policy)

Open the delegate object.

Below is the instruction that describes the task:
### Input:
Open the delegate object.
### Response:
def Initialize(self):
    """Open the delegate object."""
    if "r" in self.mode:
        delegate = self.Get(self.Schema.DELEGATE)
        if delegate:
            self.delegate = aff4.FACTORY.Open(
                delegate, mode=self.mode, token=self.token, age=self.age_policy)
def make_input(self,
               seq_idx: int,
               word_vec_prev: mx.sym.Symbol,
               decoder_state: mx.sym.Symbol) -> AttentionInput:
    """
    Returns AttentionInput to be fed into the attend callable returned by the on() method.

    :param seq_idx: Decoder time step.
    :param word_vec_prev: Embedding of previously predicted word
    :param decoder_state: Current decoder state
    :return: Attention input.
    """
    query = decoder_state
    if self._input_previous_word:
        # (batch_size, num_target_embed + rnn_num_hidden)
        query = mx.sym.concat(word_vec_prev, decoder_state,
                              dim=1, name='%sconcat_prev_word_%d' % (self.prefix, seq_idx))
    return AttentionInput(seq_idx=seq_idx, query=query)

Returns AttentionInput to be fed into the attend callable returned by the on() method.

:param seq_idx: Decoder time step.
:param word_vec_prev: Embedding of previously predicted word
:param decoder_state: Current decoder state
:return: Attention input.

Below is the instruction that describes the task:
### Input:
Returns AttentionInput to be fed into the attend callable returned by the on() method.

:param seq_idx: Decoder time step.
:param word_vec_prev: Embedding of previously predicted word
:param decoder_state: Current decoder state
:return: Attention input.
### Response:
def make_input(self,
               seq_idx: int,
               word_vec_prev: mx.sym.Symbol,
               decoder_state: mx.sym.Symbol) -> AttentionInput:
    """
    Returns AttentionInput to be fed into the attend callable returned by the on() method.

    :param seq_idx: Decoder time step.
    :param word_vec_prev: Embedding of previously predicted word
    :param decoder_state: Current decoder state
    :return: Attention input.
    """
    query = decoder_state
    if self._input_previous_word:
        # (batch_size, num_target_embed + rnn_num_hidden)
        query = mx.sym.concat(word_vec_prev, decoder_state,
                              dim=1, name='%sconcat_prev_word_%d' % (self.prefix, seq_idx))
    return AttentionInput(seq_idx=seq_idx, query=query)
def create(cls, name, protocol_number, protocol_agent=None, comment=None):
    """
    Create the IP Service

    :param str name: name of ip-service
    :param int protocol_number: ip proto number for this service
    :param str,ProtocolAgent protocol_agent: optional protocol agent for
        this service
    :param str comment: optional comment
    :raises CreateElementFailed: failure creating element with reason
    :return: instance with meta
    :rtype: IPService
    """
    json = {'name': name,
            'protocol_number': protocol_number,
            'protocol_agent_ref': element_resolver(protocol_agent) or None,
            'comment': comment}

    return ElementCreator(cls, json)

Create the IP Service

:param str name: name of ip-service
:param int protocol_number: ip proto number for this service
:param str,ProtocolAgent protocol_agent: optional protocol agent for
    this service
:param str comment: optional comment
:raises CreateElementFailed: failure creating element with reason
:return: instance with meta
:rtype: IPService

Below is the instruction that describes the task:
### Input:
Create the IP Service

:param str name: name of ip-service
:param int protocol_number: ip proto number for this service
:param str,ProtocolAgent protocol_agent: optional protocol agent for
    this service
:param str comment: optional comment
:raises CreateElementFailed: failure creating element with reason
:return: instance with meta
:rtype: IPService
### Response:
def create(cls, name, protocol_number, protocol_agent=None, comment=None):
    """
    Create the IP Service

    :param str name: name of ip-service
    :param int protocol_number: ip proto number for this service
    :param str,ProtocolAgent protocol_agent: optional protocol agent for
        this service
    :param str comment: optional comment
    :raises CreateElementFailed: failure creating element with reason
    :return: instance with meta
    :rtype: IPService
    """
    json = {'name': name,
            'protocol_number': protocol_number,
            'protocol_agent_ref': element_resolver(protocol_agent) or None,
            'comment': comment}

    return ElementCreator(cls, json)
def names(self):
    """
    Returns a list of queues available, ``None`` if no such queues found.
    Remember this will only show queues with at least one item enqueued.
    """
    data = None
    if not self.connected:
        raise ConnectionError('Queue is not connected')
    try:
        data = self.rdb.keys("retaskqueue-*")
    except redis.exceptions.ConnectionError as err:
        raise ConnectionError(str(err))
    return [name[12:] for name in data]

Returns a list of queues available, ``None`` if no such queues found.
Remember this will only show queues with at least one item enqueued.

Below is the instruction that describes the task:
### Input:
Returns a list of queues available, ``None`` if no such queues found.
Remember this will only show queues with at least one item enqueued.
### Response:
def names(self):
    """
    Returns a list of queues available, ``None`` if no such queues found.
    Remember this will only show queues with at least one item enqueued.
    """
    data = None
    if not self.connected:
        raise ConnectionError('Queue is not connected')
    try:
        data = self.rdb.keys("retaskqueue-*")
    except redis.exceptions.ConnectionError as err:
        raise ConnectionError(str(err))
    return [name[12:] for name in data]
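A side note on the `name[12:]` slice in the record above: 12 is exactly `len("retaskqueue-")`, the key prefix being stripped. A stand-alone sketch with a plain list in place of a live Redis connection:

```python
PREFIX = "retaskqueue-"

def queue_names(keys):
    # Equivalent to the method's name[12:]: len(PREFIX) == 12,
    # so this drops the "retaskqueue-" prefix from each key.
    return [key[len(PREFIX):] for key in keys]

print(queue_names(["retaskqueue-emails", "retaskqueue-thumbnails"]))
# ['emails', 'thumbnails']
```

Using `len(PREFIX)` instead of the hard-coded 12 keeps the slice correct if the prefix ever changes.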
def i18n(ctx, update=False): '''Extract translatable strings''' header('Extract translatable strings') info('Extract Python strings') lrun('python setup.py extract_messages') # Fix crowdin requiring Language with `2-digit` iso code in potfile # to produce 2-digit iso code pofile # Opening the catalog also allows to set extra metadata potfile = join(ROOT, 'udata', 'translations', '{}.pot'.format(I18N_DOMAIN)) with open(potfile, 'rb') as infile: catalog = read_po(infile, 'en') catalog.copyright_holder = 'Open Data Team' catalog.msgid_bugs_address = 'i18n@opendata.team' catalog.language_team = 'Open Data Team <i18n@opendata.team>' catalog.last_translator = 'Open Data Team <i18n@opendata.team>' catalog.revision_date = datetime.now(LOCALTZ) with open(potfile, 'wb') as outfile: write_po(outfile, catalog, width=80) if update: lrun('python setup.py update_catalog') info('Extract JavaScript strings') keys = set() catalog = {} catalog_filename = join(ROOT, 'js', 'locales', '{}.en.json'.format(I18N_DOMAIN)) if exists(catalog_filename): with codecs.open(catalog_filename, encoding='utf8') as f: catalog = json.load(f) globs = '*.js', '*.vue', '*.hbs' regexps = [ re.compile(r'(?:|\.|\s|\{)_\(\s*(?:"|\')(.*?)(?:"|\')\s*(?:\)|,)'), # JS _('trad') re.compile(r'v-i18n="(.*?)"'), # Vue.js directive v-i18n="trad" re.compile(r'"\{\{\{?\s*\'(.*?)\'\s*\|\s*i18n\}\}\}?"'), # Vue.js filter {{ 'trad'|i18n }} re.compile(r'{{_\s*"(.*?)"\s*}}'), # Handlebars {{_ "trad" }} re.compile(r'{{_\s*\'(.*?)\'\s*}}'), # Handlebars {{_ 'trad' }} re.compile(r'\:[a-z0-9_\-]+="\s*_\(\'(.*?)\'\)\s*"'), # Vue.js binding :prop="_('trad')" ] for directory, _, _ in os.walk(join(ROOT, 'js')): glob_patterns = (iglob(join(directory, g)) for g in globs) for filename in itertools.chain(*glob_patterns): print('Extracting messages from {0}'.format(green(filename))) content = codecs.open(filename, encoding='utf8').read() for regexp in regexps: for match in regexp.finditer(content): key = match.group(1) key = 
key.replace('\\n', '\n') keys.add(key) if key not in catalog: catalog[key] = key # Remove old/not found translations for key in catalog.keys(): if key not in keys: del catalog[key] with codecs.open(catalog_filename, 'w', encoding='utf8') as f: json.dump(catalog, f, sort_keys=True, indent=4, ensure_ascii=False, encoding='utf8', separators=(',', ': '))
Extract translatable strings
Below is the the instruction that describes the task: ### Input: Extract translatable strings ### Response: def i18n(ctx, update=False): '''Extract translatable strings''' header('Extract translatable strings') info('Extract Python strings') lrun('python setup.py extract_messages') # Fix crowdin requiring Language with `2-digit` iso code in potfile # to produce 2-digit iso code pofile # Opening the catalog also allows to set extra metadata potfile = join(ROOT, 'udata', 'translations', '{}.pot'.format(I18N_DOMAIN)) with open(potfile, 'rb') as infile: catalog = read_po(infile, 'en') catalog.copyright_holder = 'Open Data Team' catalog.msgid_bugs_address = 'i18n@opendata.team' catalog.language_team = 'Open Data Team <i18n@opendata.team>' catalog.last_translator = 'Open Data Team <i18n@opendata.team>' catalog.revision_date = datetime.now(LOCALTZ) with open(potfile, 'wb') as outfile: write_po(outfile, catalog, width=80) if update: lrun('python setup.py update_catalog') info('Extract JavaScript strings') keys = set() catalog = {} catalog_filename = join(ROOT, 'js', 'locales', '{}.en.json'.format(I18N_DOMAIN)) if exists(catalog_filename): with codecs.open(catalog_filename, encoding='utf8') as f: catalog = json.load(f) globs = '*.js', '*.vue', '*.hbs' regexps = [ re.compile(r'(?:|\.|\s|\{)_\(\s*(?:"|\')(.*?)(?:"|\')\s*(?:\)|,)'), # JS _('trad') re.compile(r'v-i18n="(.*?)"'), # Vue.js directive v-i18n="trad" re.compile(r'"\{\{\{?\s*\'(.*?)\'\s*\|\s*i18n\}\}\}?"'), # Vue.js filter {{ 'trad'|i18n }} re.compile(r'{{_\s*"(.*?)"\s*}}'), # Handlebars {{_ "trad" }} re.compile(r'{{_\s*\'(.*?)\'\s*}}'), # Handlebars {{_ 'trad' }} re.compile(r'\:[a-z0-9_\-]+="\s*_\(\'(.*?)\'\)\s*"'), # Vue.js binding :prop="_('trad')" ] for directory, _, _ in os.walk(join(ROOT, 'js')): glob_patterns = (iglob(join(directory, g)) for g in globs) for filename in itertools.chain(*glob_patterns): print('Extracting messages from {0}'.format(green(filename))) content = codecs.open(filename, 
encoding='utf8').read() for regexp in regexps: for match in regexp.finditer(content): key = match.group(1) key = key.replace('\\n', '\n') keys.add(key) if key not in catalog: catalog[key] = key # Remove old/not found translations for key in catalog.keys(): if key not in keys: del catalog[key] with codecs.open(catalog_filename, 'w', encoding='utf8') as f: json.dump(catalog, f, sort_keys=True, indent=4, ensure_ascii=False, encoding='utf8', separators=(',', ': '))
def set_thresh(thresh, p=False, hostname=None):
    '''Sets the level of the threshold slider. If ``p==True`` it will be interpreted as a _p_-value'''
    driver_send("SET_THRESHNEW %s *%s" % (str(thresh), "p" if p else ""), hostname=hostname)

Sets the level of the threshold slider. If ``p==True`` it will be interpreted as a _p_-value

Below is the instruction that describes the task:
### Input:
Sets the level of the threshold slider. If ``p==True`` it will be interpreted as a _p_-value
### Response:
def set_thresh(thresh, p=False, hostname=None):
    '''Sets the level of the threshold slider. If ``p==True`` it will be interpreted as a _p_-value'''
    driver_send("SET_THRESHNEW %s *%s" % (str(thresh), "p" if p else ""), hostname=hostname)
def add_instruction(self, reil_instruction):
    """Add an instruction for analysis.
    """
    for expr in self._translator.translate(reil_instruction):
        self._solver.add(expr)

Add an instruction for analysis.

Below is the instruction that describes the task:
### Input:
Add an instruction for analysis.
### Response:
def add_instruction(self, reil_instruction):
    """Add an instruction for analysis.
    """
    for expr in self._translator.translate(reil_instruction):
        self._solver.add(expr)
def add_to_rc(self, content):
    """ add content to the rc script.
    """
    if not self.rewrite_config:
        raise DirectoryException("Error! Directory was not initialized w/ rewrite_config.")
    if not self.rc_file:
        self.rc_path, self.rc_file = self.__get_rc_handle(self.root_dir)
    self.rc_file.write(content + '\n')

add content to the rc script.

Below is the instruction that describes the task:
### Input:
add content to the rc script.
### Response:
def add_to_rc(self, content):
    """ add content to the rc script.
    """
    if not self.rewrite_config:
        raise DirectoryException("Error! Directory was not initialized w/ rewrite_config.")
    if not self.rc_file:
        self.rc_path, self.rc_file = self.__get_rc_handle(self.root_dir)
    self.rc_file.write(content + '\n')
def _marginal_hidden_probs(self):
    """Compute marginal pdf for each individual observable."""

    initial_log_probs = tf.broadcast_to(self._log_init,
                                        tf.concat([self.batch_shape_tensor(),
                                                   [self._num_states]],
                                                  axis=0))
    # initial_log_probs :: batch_shape num_states

    if self._num_steps > 1:
        transition_log_probs = self._log_trans

        def forward_step(log_probs, _):
            return _log_vector_matrix(log_probs, transition_log_probs)

        dummy_index = tf.zeros(self._num_steps - 1, dtype=tf.float32)

        forward_log_probs = tf.scan(forward_step, dummy_index,
                                    initializer=initial_log_probs,
                                    name="forward_log_probs")

        forward_log_probs = tf.concat([[initial_log_probs],
                                       forward_log_probs], axis=0)
    else:
        forward_log_probs = initial_log_probs[tf.newaxis, ...]

    # returns :: num_steps batch_shape num_states

    return tf.exp(forward_log_probs)

Compute marginal pdf for each individual observable.

Below is the instruction that describes the task:
### Input:
Compute marginal pdf for each individual observable.
### Response:
def _marginal_hidden_probs(self):
    """Compute marginal pdf for each individual observable."""

    initial_log_probs = tf.broadcast_to(self._log_init,
                                        tf.concat([self.batch_shape_tensor(),
                                                   [self._num_states]],
                                                  axis=0))
    # initial_log_probs :: batch_shape num_states

    if self._num_steps > 1:
        transition_log_probs = self._log_trans

        def forward_step(log_probs, _):
            return _log_vector_matrix(log_probs, transition_log_probs)

        dummy_index = tf.zeros(self._num_steps - 1, dtype=tf.float32)

        forward_log_probs = tf.scan(forward_step, dummy_index,
                                    initializer=initial_log_probs,
                                    name="forward_log_probs")

        forward_log_probs = tf.concat([[initial_log_probs],
                                       forward_log_probs], axis=0)
    else:
        forward_log_probs = initial_log_probs[tf.newaxis, ...]

    # returns :: num_steps batch_shape num_states

    return tf.exp(forward_log_probs)
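The `tf.scan` in the record above is just repeated vector-matrix products pushing the state distribution forward one step at a time. A pure-Python sketch of the same marginal computation, in ordinary probabilities rather than log space, for a hypothetical 2-state chain:

```python
def push_forward(dist, trans):
    # One step of the chain: new_j = sum_i dist_i * trans[i][j]
    n = len(trans[0])
    return [sum(dist[i] * trans[i][j] for i in range(len(dist))) for j in range(n)]

init = [0.8, 0.2]                   # initial state distribution
trans = [[0.9, 0.1], [0.3, 0.7]]    # row-stochastic transition matrix
num_steps = 4

# marginals[t] is the marginal state distribution at step t.
marginals = [init]
for _ in range(num_steps - 1):
    marginals.append(push_forward(marginals[-1], trans))

print([round(p, 6) for p in marginals[1]])  # [0.78, 0.22]
```

Each row of `marginals` stays a probability distribution (sums to 1), mirroring the `forward_log_probs` accumulated by the scan.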
def windowed_run_count(da, window, dim='time'):
    """Return the number of consecutive true values in array for runs at least as long as given duration.

    Parameters
    ----------
    da: N-dimensional Xarray data array (boolean)
      Input data array
    window : int
      Minimum run length.
    dim : Xarray dimension (default = 'time')
      Dimension along which to calculate consecutive run

    Returns
    -------
    out : N-dimensional xarray data array (int)
      Total number of true values part of consecutive runs of at least `window` long.
    """
    d = rle(da, dim=dim)
    out = d.where(d >= window, 0).sum(dim=dim)
    return out

Return the number of consecutive true values in array for runs at least as long as given duration.

Parameters
----------
da: N-dimensional Xarray data array (boolean)
  Input data array
window : int
  Minimum run length.
dim : Xarray dimension (default = 'time')
  Dimension along which to calculate consecutive run

Returns
-------
out : N-dimensional xarray data array (int)
  Total number of true values part of consecutive runs of at least `window` long.

Below is the instruction that describes the task:
### Input:
Return the number of consecutive true values in array for runs at least as long as given duration.

Parameters
----------
da: N-dimensional Xarray data array (boolean)
  Input data array
window : int
  Minimum run length.
dim : Xarray dimension (default = 'time')
  Dimension along which to calculate consecutive run

Returns
-------
out : N-dimensional xarray data array (int)
  Total number of true values part of consecutive runs of at least `window` long.
### Response:
def windowed_run_count(da, window, dim='time'):
    """Return the number of consecutive true values in array for runs at least as long as given duration.

    Parameters
    ----------
    da: N-dimensional Xarray data array (boolean)
      Input data array
    window : int
      Minimum run length.
    dim : Xarray dimension (default = 'time')
      Dimension along which to calculate consecutive run

    Returns
    -------
    out : N-dimensional xarray data array (int)
      Total number of true values part of consecutive runs of at least `window` long.
    """
    d = rle(da, dim=dim)
    out = d.where(d >= window, 0).sum(dim=dim)
    return out
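The run-length idea behind the record above can be shown without xarray: count every True that belongs to a run of at least `window` consecutive Trues. A pure-Python sketch (not the xclim implementation, which works on labelled N-d arrays):

```python
from itertools import groupby

def windowed_run_count_1d(values, window):
    # Sum the lengths of True-runs that reach the minimum length.
    total = 0
    for is_true, run in groupby(values):
        length = len(list(run))
        if is_true and length >= window:
            total += length
    return total

# Runs: [1,1,1] (counted), [1,1] (counted), trailing [1] (too short).
print(windowed_run_count_1d([1, 1, 1, 0, 1, 1, 0, 1], window=2))  # 5
```

This matches the `rle` + `where(d >= window, 0).sum()` pipeline: keep only qualifying run lengths, then sum them.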
def open_dut(self, port=None):
    """
    Open connection to dut.

    :param port: com port to use.
    :return:
    """
    if port is not None:
        self.comport = port
    try:
        self.open_connection()
    except (DutConnectionError, ValueError) as err:
        self.close_dut(use_prepare=False)
        raise DutConnectionError(str(err))
    except KeyboardInterrupt:
        self.close_dut(use_prepare=False)
        self.close_connection()
        raise

Open connection to dut.

:param port: com port to use.
:return:

Below is the instruction that describes the task:
### Input:
Open connection to dut.

:param port: com port to use.
:return:
### Response:
def open_dut(self, port=None):
    """
    Open connection to dut.

    :param port: com port to use.
    :return:
    """
    if port is not None:
        self.comport = port
    try:
        self.open_connection()
    except (DutConnectionError, ValueError) as err:
        self.close_dut(use_prepare=False)
        raise DutConnectionError(str(err))
    except KeyboardInterrupt:
        self.close_dut(use_prepare=False)
        self.close_connection()
        raise
def block_layer(inputs, filters, block_fn, blocks, strides, is_training, name, data_format="channels_first", use_td=False, targeting_rate=None, keep_prob=None): """Creates one layer of blocks for the ResNet model. Args: inputs: `Tensor` of size `[batch, channels, height, width]`. filters: `int` number of filters for the first convolution of the layer. block_fn: `function` for the block to use within the model blocks: `int` number of blocks contained in the layer. strides: `int` stride to use for the first convolution of the layer. If greater than 1, this layer will downsample the input. is_training: `bool` for whether the model is training. name: `str`name for the Tensor output of the block layer. data_format: `str` either "channels_first" for `[batch, channels, height, width]` or "channels_last for `[batch, height, width, channels]`. use_td: `str` one of "weight" or "unit". Set to False or "" to disable targeted dropout. targeting_rate: `float` proportion of weights to target with targeted dropout. keep_prob: `float` keep probability for targeted dropout. Returns: The output `Tensor` of the block layer. 
""" # Bottleneck blocks end with 4x the number of filters as they start with filters_out = 4 * filters if block_fn is bottleneck_block else filters def projection_shortcut(inputs): """Project identity branch.""" inputs = conv2d_fixed_padding( inputs=inputs, filters=filters_out, kernel_size=1, strides=strides, data_format=data_format, use_td=use_td, targeting_rate=targeting_rate, keep_prob=keep_prob, is_training=is_training) return batch_norm_relu( inputs, is_training, relu=False, data_format=data_format) # Only the first block per block_layer uses projection_shortcut and strides inputs = block_fn( inputs, filters, is_training, projection_shortcut, strides, False, data_format, use_td=use_td, targeting_rate=targeting_rate, keep_prob=keep_prob) for i in range(1, blocks): inputs = block_fn( inputs, filters, is_training, None, 1, (i + 1 == blocks), data_format, use_td=use_td, targeting_rate=targeting_rate, keep_prob=keep_prob) return tf.identity(inputs, name)
Creates one layer of blocks for the ResNet model.

Args:
  inputs: `Tensor` of size `[batch, channels, height, width]`.
  filters: `int` number of filters for the first convolution of the layer.
  block_fn: `function` for the block to use within the model
  blocks: `int` number of blocks contained in the layer.
  strides: `int` stride to use for the first convolution of the layer. If
      greater than 1, this layer will downsample the input.
  is_training: `bool` for whether the model is training.
  name: `str` name for the Tensor output of the block layer.
  data_format: `str` either "channels_first" for `[batch, channels, height,
      width]` or "channels_last" for `[batch, height, width, channels]`.
  use_td: `str` one of "weight" or "unit". Set to False or "" to disable
      targeted dropout.
  targeting_rate: `float` proportion of weights to target with targeted
      dropout.
  keep_prob: `float` keep probability for targeted dropout.

Returns:
  The output `Tensor` of the block layer.
Below is the the instruction that describes the task: ### Input: Creates one layer of blocks for the ResNet model. Args: inputs: `Tensor` of size `[batch, channels, height, width]`. filters: `int` number of filters for the first convolution of the layer. block_fn: `function` for the block to use within the model blocks: `int` number of blocks contained in the layer. strides: `int` stride to use for the first convolution of the layer. If greater than 1, this layer will downsample the input. is_training: `bool` for whether the model is training. name: `str`name for the Tensor output of the block layer. data_format: `str` either "channels_first" for `[batch, channels, height, width]` or "channels_last for `[batch, height, width, channels]`. use_td: `str` one of "weight" or "unit". Set to False or "" to disable targeted dropout. targeting_rate: `float` proportion of weights to target with targeted dropout. keep_prob: `float` keep probability for targeted dropout. Returns: The output `Tensor` of the block layer. ### Response: def block_layer(inputs, filters, block_fn, blocks, strides, is_training, name, data_format="channels_first", use_td=False, targeting_rate=None, keep_prob=None): """Creates one layer of blocks for the ResNet model. Args: inputs: `Tensor` of size `[batch, channels, height, width]`. filters: `int` number of filters for the first convolution of the layer. block_fn: `function` for the block to use within the model blocks: `int` number of blocks contained in the layer. strides: `int` stride to use for the first convolution of the layer. If greater than 1, this layer will downsample the input. is_training: `bool` for whether the model is training. name: `str`name for the Tensor output of the block layer. data_format: `str` either "channels_first" for `[batch, channels, height, width]` or "channels_last for `[batch, height, width, channels]`. use_td: `str` one of "weight" or "unit". Set to False or "" to disable targeted dropout. 
targeting_rate: `float` proportion of weights to target with targeted dropout. keep_prob: `float` keep probability for targeted dropout. Returns: The output `Tensor` of the block layer. """ # Bottleneck blocks end with 4x the number of filters as they start with filters_out = 4 * filters if block_fn is bottleneck_block else filters def projection_shortcut(inputs): """Project identity branch.""" inputs = conv2d_fixed_padding( inputs=inputs, filters=filters_out, kernel_size=1, strides=strides, data_format=data_format, use_td=use_td, targeting_rate=targeting_rate, keep_prob=keep_prob, is_training=is_training) return batch_norm_relu( inputs, is_training, relu=False, data_format=data_format) # Only the first block per block_layer uses projection_shortcut and strides inputs = block_fn( inputs, filters, is_training, projection_shortcut, strides, False, data_format, use_td=use_td, targeting_rate=targeting_rate, keep_prob=keep_prob) for i in range(1, blocks): inputs = block_fn( inputs, filters, is_training, None, 1, (i + 1 == blocks), data_format, use_td=use_td, targeting_rate=targeting_rate, keep_prob=keep_prob) return tf.identity(inputs, name)
def _system_handler(self, data, ts): """Distributes system messages to the appropriate handler. System messages include everything that arrives as a dict, or a list containing a heartbeat. :param data: :param ts: :return: """ self.log.debug("_system_handler(): Received a system message: %s", data) # Unpack the data event = data.pop('event') if event == 'pong': self.log.debug("_system_handler(): Distributing %s to _pong_handler..", data) self._pong_handler() elif event == 'info': self.log.debug("_system_handler(): Distributing %s to _info_handler..", data) self._info_handler(data) elif event == 'error': self.log.debug("_system_handler(): Distributing %s to _error_handler..", data) self._error_handler(data) elif event in ('subscribed', 'unsubscribed', 'conf', 'auth', 'unauth'): self.log.debug("_system_handler(): Distributing %s to " "_response_handler..", data) self._response_handler(event, data, ts) else: self.log.error("Unhandled event: %s, data: %s", event, data)
Distributes system messages to the appropriate handler. System messages include everything that arrives as a dict, or a list containing a heartbeat. :param data: :param ts: :return:
Below is the instruction that describes the task: ### Input: Distributes system messages to the appropriate handler. System messages include everything that arrives as a dict, or a list containing a heartbeat. :param data: :param ts: :return: ### Response: def _system_handler(self, data, ts): """Distributes system messages to the appropriate handler. System messages include everything that arrives as a dict, or a list containing a heartbeat. :param data: :param ts: :return: """ self.log.debug("_system_handler(): Received a system message: %s", data) # Unpack the data event = data.pop('event') if event == 'pong': self.log.debug("_system_handler(): Distributing %s to _pong_handler..", data) self._pong_handler() elif event == 'info': self.log.debug("_system_handler(): Distributing %s to _info_handler..", data) self._info_handler(data) elif event == 'error': self.log.debug("_system_handler(): Distributing %s to _error_handler..", data) self._error_handler(data) elif event in ('subscribed', 'unsubscribed', 'conf', 'auth', 'unauth'): self.log.debug("_system_handler(): Distributing %s to " "_response_handler..", data) self._response_handler(event, data, ts) else: self.log.error("Unhandled event: %s, data: %s", event, data)
def cache_s3(self, url, named): # pragma: no cover """Basically this is not to deploy automatically this is to be run once all is properly defined to catch the xsd in your own S3 instance and avoid third party (like government servers) failure, trying to manage the cache transparently to the user. This method was created due to SAT is too unstable to serve the files related with xsd and invoices, and this probably is the same in other governmental institutions. :param url: Element path to be cached. :type url: str :param named: Local path with the file already downloaded. :type named: str :return: """ # **Technical Notes:** # # Even if the tries are looking nested, this is perfect valid for # this case: # https://docs.python.org/3/glossary.html#term-eafp. # # The Coverage was excluded in order to avoid decrease it because an # unused tool. url_parsed = urlparse(url) if self.domain == urlparse(url).netloc: # If I am asking for the same domain it is probably because it # exists BTW return url # Get the service resource session = boto3.Session(profile_name=self.profile_name) s3 = session.resource('s3') client = session.client('s3') element = url_parsed.path[1:] # Get the bucket object (I am not connected yet this is setting a # local object in memory only) bucket = s3.Bucket(self.domain) # Dear future me there MUST be a more elegant way to do this. new_url = 'http://%s/%s' % (self.domain, element) if self.check_s3(bucket.name, element): return new_url # Look a way this code is not tested in coverage because this MUST run # locally due to s3 credentials. 
try: # pragma: no cover # No coverage code unreachable on travis 'intentionally' transfer = S3Transfer(client) # Making the object public transfer.upload_file(named, bucket.name, element, extra_args={'ACL': 'public-read'}) except ClientError as inst: # pragma: no cover # No coverage code unreachable on travis 'intentionally' print(inst.message) return url else: # pragma: no cover # No coverage code unreachable on travis 'intentionally' return new_url
Basically this is not to deploy automatically this is to be run once all is properly defined to catch the xsd in your own S3 instance and avoid third party (like government servers) failure, trying to manage the cache transparently to the user. This method was created due to SAT is too unstable to serve the files related with xsd and invoices, and this probably is the same in other governmental institutions. :param url: Element path to be cached. :type url: str :param named: Local path with the file already downloaded. :type named: str :return:
Below is the instruction that describes the task: ### Input: Basically this is not to deploy automatically this is to be run once all is properly defined to catch the xsd in your own S3 instance and avoid third party (like government servers) failure, trying to manage the cache transparently to the user. This method was created due to SAT is too unstable to serve the files related with xsd and invoices, and this probably is the same in other governmental institutions. :param url: Element path to be cached. :type url: str :param named: Local path with the file already downloaded. :type named: str :return: ### Response: def cache_s3(self, url, named): # pragma: no cover """Basically this is not to deploy automatically this is to be run once all is properly defined to catch the xsd in your own S3 instance and avoid third party (like government servers) failure, trying to manage the cache transparently to the user. This method was created due to SAT is too unstable to serve the files related with xsd and invoices, and this probably is the same in other governmental institutions. :param url: Element path to be cached. :type url: str :param named: Local path with the file already downloaded. :type named: str :return: """ # **Technical Notes:** # # Even if the tries are looking nested, this is perfect valid for # this case: # https://docs.python.org/3/glossary.html#term-eafp. # # The Coverage was excluded in order to avoid decrease it because an # unused tool. url_parsed = urlparse(url) if self.domain == urlparse(url).netloc: # If I am asking for the same domain it is probably because it # exists BTW return url # Get the service resource session = boto3.Session(profile_name=self.profile_name) s3 = session.resource('s3') client = session.client('s3') element = url_parsed.path[1:] # Get the bucket object (I am not connected yet this is setting a # local object in memory only) bucket = s3.Bucket(self.domain) # Dear future me there MUST be a more elegant way to do this. 
new_url = 'http://%s/%s' % (self.domain, element) if self.check_s3(bucket.name, element): return new_url # Look a way this code is not tested in coverage because this MUST run # locally due to s3 credentials. try: # pragma: no cover # No coverage code unreachable on travis 'intentionally' transfer = S3Transfer(client) # Making the object public transfer.upload_file(named, bucket.name, element, extra_args={'ACL': 'public-read'}) except ClientError as inst: # pragma: no cover # No coverage code unreachable on travis 'intentionally' print(inst.message) return url else: # pragma: no cover # No coverage code unreachable on travis 'intentionally' return new_url
def of_pyobj(self, pyobj): """Use default hash method to return hash value of a piece of Python picklable object. """ m = self.hash_algo() m.update(pickle.dumps(pyobj, protocol=self.pk_protocol)) if self.return_int: return int(m.hexdigest(), 16) else: return m.hexdigest()
Use default hash method to return hash value of a piece of Python picklable object.
Below is the instruction that describes the task: ### Input: Use default hash method to return hash value of a piece of Python picklable object. ### Response: def of_pyobj(self, pyobj): """Use default hash method to return hash value of a piece of Python picklable object. """ m = self.hash_algo() m.update(pickle.dumps(pyobj, protocol=self.pk_protocol)) if self.return_int: return int(m.hexdigest(), 16) else: return m.hexdigest()
async def _send_text(self, request: Request, stack: Stack): """ Send text layers to the user. Each layer will go in its own bubble. Also, Facebook limits messages to 320 chars, so if any message is longer than that it will be split into as many messages as needed to be accepted by Facebook. """ parts = [] for layer in stack.layers: if isinstance(layer, lyr.MultiText): lines = await render(layer.text, request, multi_line=True) for line in lines: for part in wrap(line, 320): parts.append(part) elif isinstance(layer, (lyr.Text, lyr.RawText)): text = await render(layer.text, request) for part in wrap(text, 320): parts.append(part) for part in parts[:-1]: await self._send(request, { 'text': part, }, stack) part = parts[-1] msg = { 'text': part, } await self._add_qr(stack, msg, request) await self._send(request, msg, stack)
Send text layers to the user. Each layer will go in its own bubble. Also, Facebook limits messages to 320 chars, so if any message is longer than that it will be split into as many messages as needed to be accepted by Facebook.
Below is the instruction that describes the task: ### Input: Send text layers to the user. Each layer will go in its own bubble. Also, Facebook limits messages to 320 chars, so if any message is longer than that it will be split into as many messages as needed to be accepted by Facebook. ### Response: async def _send_text(self, request: Request, stack: Stack): """ Send text layers to the user. Each layer will go in its own bubble. Also, Facebook limits messages to 320 chars, so if any message is longer than that it will be split into as many messages as needed to be accepted by Facebook. """ parts = [] for layer in stack.layers: if isinstance(layer, lyr.MultiText): lines = await render(layer.text, request, multi_line=True) for line in lines: for part in wrap(line, 320): parts.append(part) elif isinstance(layer, (lyr.Text, lyr.RawText)): text = await render(layer.text, request) for part in wrap(text, 320): parts.append(part) for part in parts[:-1]: await self._send(request, { 'text': part, }, stack) part = parts[-1] msg = { 'text': part, } await self._add_qr(stack, msg, request) await self._send(request, msg, stack)
def level(self): """ Returns the user's profile level, note that this runs a separate request because the profile level data isn't in the standard player summary output even though it should be. Which is also why it's not implemented as a separate class. You won't need this output and not the profile output """ level_key = "player_level" if level_key in self._api["response"]: return self._api["response"][level_key] try: lvl = api.interface("IPlayerService").GetSteamLevel(steamid=self.id64)["response"][level_key] self._api["response"][level_key] = lvl return lvl except: return -1
Returns the user's profile level, note that this runs a separate request because the profile level data isn't in the standard player summary output even though it should be. Which is also why it's not implemented as a separate class. You won't need this output and not the profile output
Below is the instruction that describes the task: ### Input: Returns the user's profile level, note that this runs a separate request because the profile level data isn't in the standard player summary output even though it should be. Which is also why it's not implemented as a separate class. You won't need this output and not the profile output ### Response: def level(self): """ Returns the user's profile level, note that this runs a separate request because the profile level data isn't in the standard player summary output even though it should be. Which is also why it's not implemented as a separate class. You won't need this output and not the profile output """ level_key = "player_level" if level_key in self._api["response"]: return self._api["response"][level_key] try: lvl = api.interface("IPlayerService").GetSteamLevel(steamid=self.id64)["response"][level_key] self._api["response"][level_key] = lvl return lvl except: return -1
def get_date(context, value): """Tries to return a DateTime.DateTime object """ if not value: return None if isinstance(value, DateTime): return value if isinstance(value, datetime): return dt2DT(value) if not isinstance(value, basestring): return None def try_parse(date_string, format): if not format: return None try: struct_time = strptime(date_string, format) return datetime(*struct_time[:6]) except ValueError: pass return None def get_locale_format(key, context): format = context.translate(key, domain="senaite.core", mapping={}) # TODO: Is this replacement below strictly necessary? return format.replace(r"${", '%').replace('}', '') # Try with prioritized formats formats = [get_locale_format("date_format_long", context), get_locale_format("date_format_short", context), "%Y-%m-%d %H:%M", "%Y-%m-%d", "%Y-%m-%d %H:%M:%S"] for pri_format in formats: val = try_parse(value, pri_format) if not val: continue val = dt2DT(val) if val.timezoneNaive(): # Use local timezone for tz naive strings # see http://dev.plone.org/plone/ticket/10141 zone = val.localZone(safelocaltime(val.timeTime())) parts = val.parts()[:-1] + (zone,) val = DateTime(*parts) return val logger.warn("Unable to convert {} to datetime".format(value)) return None
Tries to return a DateTime.DateTime object
Below is the instruction that describes the task: ### Input: Tries to return a DateTime.DateTime object ### Response: def get_date(context, value): """Tries to return a DateTime.DateTime object """ if not value: return None if isinstance(value, DateTime): return value if isinstance(value, datetime): return dt2DT(value) if not isinstance(value, basestring): return None def try_parse(date_string, format): if not format: return None try: struct_time = strptime(date_string, format) return datetime(*struct_time[:6]) except ValueError: pass return None def get_locale_format(key, context): format = context.translate(key, domain="senaite.core", mapping={}) # TODO: Is this replacement below strictly necessary? return format.replace(r"${", '%').replace('}', '') # Try with prioritized formats formats = [get_locale_format("date_format_long", context), get_locale_format("date_format_short", context), "%Y-%m-%d %H:%M", "%Y-%m-%d", "%Y-%m-%d %H:%M:%S"] for pri_format in formats: val = try_parse(value, pri_format) if not val: continue val = dt2DT(val) if val.timezoneNaive(): # Use local timezone for tz naive strings # see http://dev.plone.org/plone/ticket/10141 zone = val.localZone(safelocaltime(val.timeTime())) parts = val.parts()[:-1] + (zone,) val = DateTime(*parts) return val logger.warn("Unable to convert {} to datetime".format(value)) return None
def _handle_autoreload(self,cfg_file,*args,**options): """Command 'supervisor autoreload' watches for code changes. This command provides a simulation of the Django dev server's auto-reloading mechanism that will restart all supervised processes. It's not quite as accurate as Django's autoreloader because it runs in a separate process, so it doesn't know the precise set of modules that have been loaded. Instead, it tries to watch all python files that are "nearby" the files loaded at startup by Django. """ if args: raise CommandError("supervisor autoreload takes no arguments") live_dirs = self._find_live_code_dirs() reload_progs = self._get_autoreload_programs(cfg_file) def autoreloader(): """ Forks a subprocess to make the restart call. Otherwise supervisord might kill us and cancel the restart! """ if os.fork() == 0: sys.exit(self.handle("restart", *reload_progs, **options)) # Call the autoreloader callback whenever a .py file changes. # To prevent thrashing, limit callbacks to one per second. handler = CallbackModifiedHandler(callback=autoreloader, repeat_delay=1, patterns=AUTORELOAD_PATTERNS, ignore_patterns=AUTORELOAD_IGNORE, ignore_directories=True) # Try to add watches using the platform-specific observer. # If this fails, print a warning and fall back to the PollingObserver. # This will avoid errors with e.g. too many inotify watches. from watchdog.observers import Observer from watchdog.observers.polling import PollingObserver observer = None for ObserverCls in (Observer, PollingObserver): observer = ObserverCls() try: for live_dir in set(live_dirs): observer.schedule(handler, live_dir, True) break except Exception: print>>sys.stderr, "COULD NOT WATCH FILESYSTEM USING" print>>sys.stderr, "OBSERVER CLASS: ", ObserverCls traceback.print_exc() observer.start() observer.stop() # Fail out if none of the observers worked. if observer is None: print>>sys.stderr, "COULD NOT WATCH FILESYSTEM" return 1 # Poll if we have an observer. # TODO: Is this sleep necessary? 
Or will it suffice # to block indefinitely on something and wait to be killed? observer.start() try: while True: time.sleep(1) except KeyboardInterrupt: observer.stop() observer.join() return 0
Command 'supervisor autoreload' watches for code changes. This command provides a simulation of the Django dev server's auto-reloading mechanism that will restart all supervised processes. It's not quite as accurate as Django's autoreloader because it runs in a separate process, so it doesn't know the precise set of modules that have been loaded. Instead, it tries to watch all python files that are "nearby" the files loaded at startup by Django.
Below is the instruction that describes the task: ### Input: Command 'supervisor autoreload' watches for code changes. This command provides a simulation of the Django dev server's auto-reloading mechanism that will restart all supervised processes. It's not quite as accurate as Django's autoreloader because it runs in a separate process, so it doesn't know the precise set of modules that have been loaded. Instead, it tries to watch all python files that are "nearby" the files loaded at startup by Django. ### Response: def _handle_autoreload(self,cfg_file,*args,**options): """Command 'supervisor autoreload' watches for code changes. This command provides a simulation of the Django dev server's auto-reloading mechanism that will restart all supervised processes. It's not quite as accurate as Django's autoreloader because it runs in a separate process, so it doesn't know the precise set of modules that have been loaded. Instead, it tries to watch all python files that are "nearby" the files loaded at startup by Django. """ if args: raise CommandError("supervisor autoreload takes no arguments") live_dirs = self._find_live_code_dirs() reload_progs = self._get_autoreload_programs(cfg_file) def autoreloader(): """ Forks a subprocess to make the restart call. Otherwise supervisord might kill us and cancel the restart! """ if os.fork() == 0: sys.exit(self.handle("restart", *reload_progs, **options)) # Call the autoreloader callback whenever a .py file changes. # To prevent thrashing, limit callbacks to one per second. handler = CallbackModifiedHandler(callback=autoreloader, repeat_delay=1, patterns=AUTORELOAD_PATTERNS, ignore_patterns=AUTORELOAD_IGNORE, ignore_directories=True) # Try to add watches using the platform-specific observer. # If this fails, print a warning and fall back to the PollingObserver. # This will avoid errors with e.g. too many inotify watches. 
from watchdog.observers import Observer from watchdog.observers.polling import PollingObserver observer = None for ObserverCls in (Observer, PollingObserver): observer = ObserverCls() try: for live_dir in set(live_dirs): observer.schedule(handler, live_dir, True) break except Exception: print>>sys.stderr, "COULD NOT WATCH FILESYSTEM USING" print>>sys.stderr, "OBSERVER CLASS: ", ObserverCls traceback.print_exc() observer.start() observer.stop() # Fail out if none of the observers worked. if observer is None: print>>sys.stderr, "COULD NOT WATCH FILESYSTEM" return 1 # Poll if we have an observer. # TODO: Is this sleep necessary? Or will it suffice # to block indefinitely on something and wait to be killed? observer.start() try: while True: time.sleep(1) except KeyboardInterrupt: observer.stop() observer.join() return 0
def buffer(self, buf): """ Returns a textual description of the contents of the argument passed as a buffer or None if an error occurred and the MAGIC_ERROR flag is set. A call to errno() will return the numeric error code. """ try: # attempt python3 approach first return str(_buffer(self._magic_t, buf, len(buf)), 'utf-8') except: return _buffer(self._magic_t, buf, len(buf))
Returns a textual description of the contents of the argument passed as a buffer or None if an error occurred and the MAGIC_ERROR flag is set. A call to errno() will return the numeric error code.
Below is the instruction that describes the task: ### Input: Returns a textual description of the contents of the argument passed as a buffer or None if an error occurred and the MAGIC_ERROR flag is set. A call to errno() will return the numeric error code. ### Response: def buffer(self, buf): """ Returns a textual description of the contents of the argument passed as a buffer or None if an error occurred and the MAGIC_ERROR flag is set. A call to errno() will return the numeric error code. """ try: # attempt python3 approach first return str(_buffer(self._magic_t, buf, len(buf)), 'utf-8') except: return _buffer(self._magic_t, buf, len(buf))
def getEstimatedNodeCounts(self, queuedJobShapes, currentNodeCounts): """ Given the resource requirements of queued jobs and the current size of the cluster, returns a dict mapping from nodeShape to the number of nodes we want in the cluster right now. """ nodesToRunQueuedJobs = binPacking(jobShapes=queuedJobShapes, nodeShapes=self.nodeShapes, goalTime=self.targetTime) estimatedNodeCounts = {} for nodeShape in self.nodeShapes: nodeType = self.nodeShapeToType[nodeShape] logger.debug("Nodes of type %s to run queued jobs = " "%s" % (nodeType, nodesToRunQueuedJobs[nodeShape])) # Actual calculation of the estimated number of nodes required estimatedNodeCount = 0 if nodesToRunQueuedJobs[nodeShape] == 0 \ else max(1, self._round(nodesToRunQueuedJobs[nodeShape])) logger.debug("Estimating %i nodes of shape %s" % (estimatedNodeCount, nodeShape)) # Use inertia parameter to smooth out fluctuations according to an exponentially # weighted moving average. estimatedNodeCount = self.smoothEstimate(nodeShape, estimatedNodeCount) # If we're scaling a non-preemptable node type, we need to see if we have a # deficit of preemptable nodes of this type that we should compensate for. if not nodeShape.preemptable: compensation = self.config.preemptableCompensation assert 0.0 <= compensation <= 1.0 # The number of nodes we provision as compensation for missing preemptable # nodes is the product of the deficit (the number of preemptable nodes we did # _not_ allocate) and configuration preference. 
compensationNodes = self._round(self.preemptableNodeDeficit[nodeType] * compensation) if compensationNodes > 0: logger.debug('Adding %d non-preemptable nodes of type %s to compensate for a ' 'deficit of %d preemptable ones.', compensationNodes, nodeType, self.preemptableNodeDeficit[nodeType]) estimatedNodeCount += compensationNodes logger.debug("Currently %i nodes of type %s in cluster" % (currentNodeCounts[nodeShape], nodeType)) if self.leader.toilMetrics: self.leader.toilMetrics.logClusterSize(nodeType=nodeType, currentSize=currentNodeCounts[nodeShape], desiredSize=estimatedNodeCount) # Bound number using the max and min node parameters if estimatedNodeCount > self.maxNodes[nodeShape]: logger.debug('Limiting the estimated number of necessary %s (%s) to the ' 'configured maximum (%s).', nodeType, estimatedNodeCount, self.maxNodes[nodeShape]) estimatedNodeCount = self.maxNodes[nodeShape] elif estimatedNodeCount < self.minNodes[nodeShape]: logger.debug('Raising the estimated number of necessary %s (%s) to the ' 'configured minimum (%s).', nodeType, estimatedNodeCount, self.minNodes[nodeShape]) estimatedNodeCount = self.minNodes[nodeShape] estimatedNodeCounts[nodeShape] = estimatedNodeCount return estimatedNodeCounts
Given the resource requirements of queued jobs and the current size of the cluster, returns a dict mapping from nodeShape to the number of nodes we want in the cluster right now.
Below is the instruction that describes the task: ### Input: Given the resource requirements of queued jobs and the current size of the cluster, returns a dict mapping from nodeShape to the number of nodes we want in the cluster right now. ### Response: def getEstimatedNodeCounts(self, queuedJobShapes, currentNodeCounts): """ Given the resource requirements of queued jobs and the current size of the cluster, returns a dict mapping from nodeShape to the number of nodes we want in the cluster right now. """ nodesToRunQueuedJobs = binPacking(jobShapes=queuedJobShapes, nodeShapes=self.nodeShapes, goalTime=self.targetTime) estimatedNodeCounts = {} for nodeShape in self.nodeShapes: nodeType = self.nodeShapeToType[nodeShape] logger.debug("Nodes of type %s to run queued jobs = " "%s" % (nodeType, nodesToRunQueuedJobs[nodeShape])) # Actual calculation of the estimated number of nodes required estimatedNodeCount = 0 if nodesToRunQueuedJobs[nodeShape] == 0 \ else max(1, self._round(nodesToRunQueuedJobs[nodeShape])) logger.debug("Estimating %i nodes of shape %s" % (estimatedNodeCount, nodeShape)) # Use inertia parameter to smooth out fluctuations according to an exponentially # weighted moving average. estimatedNodeCount = self.smoothEstimate(nodeShape, estimatedNodeCount) # If we're scaling a non-preemptable node type, we need to see if we have a # deficit of preemptable nodes of this type that we should compensate for. if not nodeShape.preemptable: compensation = self.config.preemptableCompensation assert 0.0 <= compensation <= 1.0 # The number of nodes we provision as compensation for missing preemptable # nodes is the product of the deficit (the number of preemptable nodes we did # _not_ allocate) and configuration preference. 
compensationNodes = self._round(self.preemptableNodeDeficit[nodeType] * compensation) if compensationNodes > 0: logger.debug('Adding %d non-preemptable nodes of type %s to compensate for a ' 'deficit of %d preemptable ones.', compensationNodes, nodeType, self.preemptableNodeDeficit[nodeType]) estimatedNodeCount += compensationNodes logger.debug("Currently %i nodes of type %s in cluster" % (currentNodeCounts[nodeShape], nodeType)) if self.leader.toilMetrics: self.leader.toilMetrics.logClusterSize(nodeType=nodeType, currentSize=currentNodeCounts[nodeShape], desiredSize=estimatedNodeCount) # Bound number using the max and min node parameters if estimatedNodeCount > self.maxNodes[nodeShape]: logger.debug('Limiting the estimated number of necessary %s (%s) to the ' 'configured maximum (%s).', nodeType, estimatedNodeCount, self.maxNodes[nodeShape]) estimatedNodeCount = self.maxNodes[nodeShape] elif estimatedNodeCount < self.minNodes[nodeShape]: logger.debug('Raising the estimated number of necessary %s (%s) to the ' 'configured minimum (%s).', nodeType, estimatedNodeCount, self.minNodes[nodeShape]) estimatedNodeCount = self.minNodes[nodeShape] estimatedNodeCounts[nodeShape] = estimatedNodeCount return estimatedNodeCounts
def get_select_title_url(self, attach=None): """ Get the merchant's exclusive invoicing link The merchant calls this interface to obtain the link. When the user scans the code, they can choose an invoice title to send to the merchant. The link can be converted into a QR code and placed at the checkout counter. For details, see https://mp.weixin.qq.com/wiki?id=mp1496554912_vfWU0 :param attach: Extra field, sent to the merchant when the user submits an invoice :return: The merchant's exclusive invoicing link """ return self._post( 'biz/getselecttitleurl', data={ 'attach': attach, }, result_processor=lambda x: x['url'], )
Get the merchant's exclusive invoicing link The merchant calls this interface to obtain the link. When the user scans the code, they can choose an invoice title to send to the merchant. The link can be converted into a QR code and placed at the checkout counter. For details, see https://mp.weixin.qq.com/wiki?id=mp1496554912_vfWU0 :param attach: Extra field, sent to the merchant when the user submits an invoice :return: The merchant's exclusive invoicing link
Below is the instruction that describes the task: ### Input: Get the merchant's exclusive invoicing link The merchant calls this interface to obtain the link. When the user scans the code, they can choose an invoice title to send to the merchant. The link can be converted into a QR code and placed at the checkout counter. For details, see https://mp.weixin.qq.com/wiki?id=mp1496554912_vfWU0 :param attach: Extra field, sent to the merchant when the user submits an invoice :return: The merchant's exclusive invoicing link ### Response: def get_select_title_url(self, attach=None): """ Get the merchant's exclusive invoicing link The merchant calls this interface to obtain the link. When the user scans the code, they can choose an invoice title to send to the merchant. The link can be converted into a QR code and placed at the checkout counter. For details, see https://mp.weixin.qq.com/wiki?id=mp1496554912_vfWU0 :param attach: Extra field, sent to the merchant when the user submits an invoice :return: The merchant's exclusive invoicing link """ return self._post( 'biz/getselecttitleurl', data={ 'attach': attach, }, result_processor=lambda x: x['url'], )
def envGet(self, name, default=None, conv=None): """Return value for environment variable or None. @param name: Name of environment variable. @param default: Default value if variable is undefined. @param conv: Function for converting value to desired type. @return: Value of environment variable. """ if self._env.has_key(name): if conv is not None: return conv(self._env.get(name)) else: return self._env.get(name) else: return default
Return value for environment variable or None. @param name: Name of environment variable. @param default: Default value if variable is undefined. @param conv: Function for converting value to desired type. @return: Value of environment variable.
Below is the instruction that describes the task: ### Input: Return value for environment variable or None. @param name: Name of environment variable. @param default: Default value if variable is undefined. @param conv: Function for converting value to desired type. @return: Value of environment variable. ### Response: def envGet(self, name, default=None, conv=None): """Return value for environment variable or None. @param name: Name of environment variable. @param default: Default value if variable is undefined. @param conv: Function for converting value to desired type. @return: Value of environment variable. """ if self._env.has_key(name): if conv is not None: return conv(self._env.get(name)) else: return self._env.get(name) else: return default
def format_duration(secs): """ Format a duration in seconds as minutes and seconds. """ secs = int(secs) if abs(secs) > 60: mins = abs(secs) / 60 secs = abs(secs) - (mins * 60) return '%s%im %02is' % ('-' if secs < 0 else '', mins, secs) return '%is' % secs
Format a duration in seconds as minutes and seconds.
Below is the instruction that describes the task: ### Input: Format a duration in seconds as minutes and seconds. ### Response: def format_duration(secs): """ Format a duration in seconds as minutes and seconds. """ secs = int(secs) if abs(secs) > 60: mins = abs(secs) / 60 secs = abs(secs) - (mins * 60) return '%s%im %02is' % ('-' if secs < 0 else '', mins, secs) return '%is' % secs
def set_master(self, url): """Set the master url that this object works with.""" m = urlparse(url) self.master_scheme = m.scheme self.master_netloc = m.netloc self.master_path = os.path.dirname(m.path)
Set the master url that this object works with.
Below is the instruction that describes the task:
### Input:
Set the master url that this object works with.
### Response:
def set_master(self, url):
    """Set the master url that this object works with."""
    m = urlparse(url)
    self.master_scheme = m.scheme
    self.master_netloc = m.netloc
    self.master_path = os.path.dirname(m.path)
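`urlparse` splits the URL into its components; a quick sketch of what each attribute holds. `posixpath.dirname` is used here instead of `os.path.dirname` because URL paths are always slash-separated (the example URL is made up):

```python
from urllib.parse import urlparse  # Python 3 home of urlparse
import posixpath

m = urlparse('http://example.com/builds/latest/build.json')
print(m.scheme)                   # http
print(m.netloc)                   # example.com
print(posixpath.dirname(m.path))  # /builds/latest
```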
def http_basic(r, username, password):
    """Attaches HTTP Basic Authentication to the given Request object.
    Arguments should be considered non-positional.
    """
    username = str(username)
    password = str(password)
    auth_s = b64encode(('%s:%s' % (username, password)).encode('utf-8')).decode('ascii')
    r.headers['Authorization'] = ('Basic %s' % auth_s)
    return r
Attaches HTTP Basic Authentication to the given Request object. Arguments should be considered non-positional.
Below is the instruction that describes the task:
### Input:
Attaches HTTP Basic Authentication to the given Request object. Arguments should be considered non-positional.
### Response:
def http_basic(r, username, password):
    """Attaches HTTP Basic Authentication to the given Request object.
    Arguments should be considered non-positional.
    """
    username = str(username)
    password = str(password)
    auth_s = b64encode(('%s:%s' % (username, password)).encode('utf-8')).decode('ascii')
    r.headers['Authorization'] = ('Basic %s' % auth_s)
    return r
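The header value is just `base64('user:pass')`. A Python 3 sketch — where `b64encode` requires bytes in and returns bytes out — with a plain dict standing in for the request's headers:

```python
from base64 import b64encode

def basic_auth_header(username, password):
    # RFC 7617: credentials are "user:pass", base64-encoded.
    token = b64encode(('%s:%s' % (username, password)).encode('utf-8'))
    return 'Basic %s' % token.decode('ascii')

headers = {}
headers['Authorization'] = basic_auth_header('user', 'pass')
print(headers['Authorization'])  # Basic dXNlcjpwYXNz
```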
def get_ipaths(self): """ Returns generator of paths from nodes marked as added, changed or removed. """ for node in itertools.chain(self.added, self.changed, self.removed): yield node.path
Returns generator of paths from nodes marked as added, changed or removed.
Below is the instruction that describes the task:
### Input:
Returns generator of paths from nodes marked as added, changed or removed.
### Response:
def get_ipaths(self):
    """
    Returns generator of paths from nodes marked as added, changed or removed.
    """
    for node in itertools.chain(self.added, self.changed, self.removed):
        yield node.path
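`itertools.chain` walks the three node lists as one flat sequence; a sketch with a namedtuple standing in for the node objects (the `Node` type and paths here are hypothetical):

```python
import itertools
from collections import namedtuple

Node = namedtuple('Node', 'path')  # minimal stand-in for a tree node

added = [Node('a.txt')]
changed = [Node('b/c.txt')]
removed = []

paths = [node.path for node in itertools.chain(added, changed, removed)]
print(paths)  # ['a.txt', 'b/c.txt']
```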
def request_writes(self, offset, data):
    """Request any available writes given new incoming data.

    You call this method by providing new data along with the offset
    associated with the data. If that new data unlocks any contiguous
    writes that can now be submitted, this method will return all
    applicable writes.

    This is done with 1 method call so you don't have to make two
    method calls (put(), get()) which acquires a lock each method call.
    """
    if offset < self._next_offset:
        # This is a request for a write that we've already
        # seen. This can happen in the event of a retry
        # where if we retry at offset N/2, we'll requeue
        # offsets 0-N/2 again.
        return []
    writes = []
    if offset in self._pending_offsets:
        # We've already queued this offset so this request is
        # a duplicate. In this case we should ignore
        # this request and prefer what's already queued.
        return []
    heapq.heappush(self._writes, (offset, data))
    self._pending_offsets.add(offset)
    while self._writes and self._writes[0][0] == self._next_offset:
        next_write = heapq.heappop(self._writes)
        writes.append({'offset': next_write[0], 'data': next_write[1]})
        self._pending_offsets.remove(next_write[0])
        self._next_offset += len(next_write[1])
    return writes
Request any available writes given new incoming data. You call this method by providing new data along with the offset associated with the data. If that new data unlocks any contiguous writes that can now be submitted, this method will return all applicable writes. This is done with 1 method call so you don't have to make two method calls (put(), get()) which acquires a lock each method call.
Below is the instruction that describes the task:
### Input:
Request any available writes given new incoming data.

You call this method by providing new data along with the offset
associated with the data. If that new data unlocks any contiguous
writes that can now be submitted, this method will return all
applicable writes.

This is done with 1 method call so you don't have to make two
method calls (put(), get()) which acquires a lock each method call.
### Response:
def request_writes(self, offset, data):
    """Request any available writes given new incoming data.

    You call this method by providing new data along with the offset
    associated with the data. If that new data unlocks any contiguous
    writes that can now be submitted, this method will return all
    applicable writes.

    This is done with 1 method call so you don't have to make two
    method calls (put(), get()) which acquires a lock each method call.
    """
    if offset < self._next_offset:
        # This is a request for a write that we've already
        # seen. This can happen in the event of a retry
        # where if we retry at offset N/2, we'll requeue
        # offsets 0-N/2 again.
        return []
    writes = []
    if offset in self._pending_offsets:
        # We've already queued this offset so this request is
        # a duplicate. In this case we should ignore
        # this request and prefer what's already queued.
        return []
    heapq.heappush(self._writes, (offset, data))
    self._pending_offsets.add(offset)
    while self._writes and self._writes[0][0] == self._next_offset:
        next_write = heapq.heappop(self._writes)
        writes.append({'offset': next_write[0], 'data': next_write[1]})
        self._pending_offsets.remove(next_write[0])
        self._next_offset += len(next_write[1])
    return writes
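The heap keeps out-of-order chunks parked until the gap before them is filled. A condensed standalone sketch of the same bookkeeping (the class name and its simplified state are mine, not from the original library):

```python
import heapq

class OrderedWriteBuffer:
    """Buffers (offset, data) chunks, releasing them only once contiguous."""
    def __init__(self):
        self._writes = []       # min-heap of (offset, data)
        self._pending = set()   # offsets currently queued
        self._next_offset = 0   # next byte we are allowed to emit

    def request_writes(self, offset, data):
        if offset < self._next_offset or offset in self._pending:
            return []  # already flushed, or a duplicate of a queued chunk
        heapq.heappush(self._writes, (offset, data))
        self._pending.add(offset)
        ready = []
        # Drain every chunk that is now contiguous with what was emitted.
        while self._writes and self._writes[0][0] == self._next_offset:
            off, chunk = heapq.heappop(self._writes)
            self._pending.remove(off)
            ready.append({'offset': off, 'data': chunk})
            self._next_offset += len(chunk)
        return ready

buf = OrderedWriteBuffer()
print(buf.request_writes(4, b'5678'))  # [] -- parked, gap at offset 0
print(buf.request_writes(0, b'0123'))  # both chunks released, in order
```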
def next_stop(self): """Return the next stop for this bus.""" p = self.api.predictions(vid=self.vid)['prd'] pobj = Prediction.fromapi(self.api, p[0]) pobj._busobj = self return pobj
Return the next stop for this bus.
Below is the instruction that describes the task:
### Input:
Return the next stop for this bus.
### Response:
def next_stop(self):
    """Return the next stop for this bus."""
    p = self.api.predictions(vid=self.vid)['prd']
    pobj = Prediction.fromapi(self.api, p[0])
    pobj._busobj = self
    return pobj
def strtime(at=None, fmt=PERFECT_TIME_FORMAT): """Returns formatted utcnow.""" if not at: at = utcnow() return at.strftime(fmt)
Returns formatted utcnow.
Below is the instruction that describes the task:
### Input:
Returns formatted utcnow.
### Response:
def strtime(at=None, fmt=PERFECT_TIME_FORMAT):
    """Returns formatted utcnow."""
    if not at:
        at = utcnow()
    return at.strftime(fmt)
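The format string drives `strftime` directly. `PERFECT_TIME_FORMAT` is not shown in the record, so an ISO-like pattern is assumed in this sketch:

```python
import datetime

PERFECT_TIME_FORMAT = '%Y-%m-%dT%H:%M:%S'  # assumed; the real constant may differ

def strtime(at=None, fmt=PERFECT_TIME_FORMAT):
    # Default to the current UTC time when no datetime is supplied.
    if not at:
        at = datetime.datetime.utcnow()
    return at.strftime(fmt)

print(strtime(datetime.datetime(2020, 1, 2, 3, 4, 5)))  # 2020-01-02T03:04:05
```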
def readpipe(self, chunk=None):
    """
    Return iterator that iterates over STDIN line by line

    If ``chunk`` is set to a positive non-zero integer value, then the
    reads are performed in chunks of that many lines, and returned as a
    list. Otherwise the lines are returned one by one.
    """
    read = []
    while True:
        l = sys.stdin.readline()
        if not l:
            if read:
                yield read
            return
        if not chunk:
            yield l
        else:
            read.append(l)
            if len(read) == chunk:
                yield read
                read = []
Return iterator that iterates over STDIN line by line If ``chunk`` is set to a positive non-zero integer value, then the reads are performed in chunks of that many lines, and returned as a list. Otherwise the lines are returned one by one.
Below is the instruction that describes the task:
### Input:
Return iterator that iterates over STDIN line by line

If ``chunk`` is set to a positive non-zero integer value, then the
reads are performed in chunks of that many lines, and returned as a
list. Otherwise the lines are returned one by one.
### Response:
def readpipe(self, chunk=None):
    """
    Return iterator that iterates over STDIN line by line

    If ``chunk`` is set to a positive non-zero integer value, then the
    reads are performed in chunks of that many lines, and returned as a
    list. Otherwise the lines are returned one by one.
    """
    read = []
    while True:
        l = sys.stdin.readline()
        if not l:
            if read:
                yield read
            return
        if not chunk:
            yield l
        else:
            read.append(l)
            if len(read) == chunk:
                yield read
                read = []
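The same chunking logic, generalized to any iterable of lines so it can be exercised without STDIN, and with the accumulator reset after each full chunk (without that reset, every chunk after the first would keep growing):

```python
import io

def iter_lines(stream, chunk=None):
    # Yield lines one by one, or grouped into lists of `chunk` lines.
    read = []
    for line in stream:
        if not chunk:
            yield line
        else:
            read.append(line)
            if len(read) == chunk:
                yield read
                read = []
    if chunk and read:
        yield read  # trailing partial chunk

print(list(iter_lines(io.StringIO('a\nb\nc\n'), chunk=2)))
# [['a\n', 'b\n'], ['c\n']]
```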
def plot_job_history(jobs, interval='year'): """Plots the job history of the user from the given list of jobs. Args: jobs (list): A list of jobs with type IBMQjob. interval (str): Interval over which to examine. Returns: fig: A Matplotlib figure instance. """ def get_date(job): """Returns a datetime object from a IBMQJob instance. Args: job (IBMQJob): A job. Returns: dt: A datetime object. """ return datetime.datetime.strptime(job.creation_date(), '%Y-%m-%dT%H:%M:%S.%fZ') current_time = datetime.datetime.now() if interval == 'year': bins = [(current_time - datetime.timedelta(days=k*365/12)) for k in range(12)] elif interval == 'month': bins = [(current_time - datetime.timedelta(days=k)) for k in range(30)] elif interval == 'week': bins = [(current_time - datetime.timedelta(days=k)) for k in range(7)] binned_jobs = [0]*len(bins) if interval == 'year': for job in jobs: for ind, dat in enumerate(bins): date = get_date(job) if date.month == dat.month: binned_jobs[ind] += 1 break else: continue else: for job in jobs: for ind, dat in enumerate(bins): date = get_date(job) if date.day == dat.day and date.month == dat.month: binned_jobs[ind] += 1 break else: continue nz_bins = [] nz_idx = [] for ind, val in enumerate(binned_jobs): if val != 0: nz_idx.append(ind) nz_bins.append(val) total_jobs = sum(binned_jobs) colors = ['#003f5c', '#ffa600', '#374c80', '#ff764a', '#7a5195', '#ef5675', '#bc5090'] if interval == 'year': labels = ['{}-{}'.format(str(bins[b].year)[2:], bins[b].month) for b in nz_idx] else: labels = ['{}-{}'.format(bins[b].month, bins[b].day) for b in nz_idx] fig, ax = plt.subplots(1, 1, figsize=(5, 5)) # pylint: disable=invalid-name ax.pie(nz_bins[::-1], labels=labels, colors=colors, textprops={'fontsize': 14}, rotatelabels=True, counterclock=False) ax.add_artist(Circle((0, 0), 0.7, color='white', zorder=1)) ax.text(0, 0, total_jobs, horizontalalignment='center', verticalalignment='center', fontsize=26) fig.tight_layout() return fig
Plots the job history of the user from the given list of jobs. Args: jobs (list): A list of jobs with type IBMQjob. interval (str): Interval over which to examine. Returns: fig: A Matplotlib figure instance.
Below is the the instruction that describes the task: ### Input: Plots the job history of the user from the given list of jobs. Args: jobs (list): A list of jobs with type IBMQjob. interval (str): Interval over which to examine. Returns: fig: A Matplotlib figure instance. ### Response: def plot_job_history(jobs, interval='year'): """Plots the job history of the user from the given list of jobs. Args: jobs (list): A list of jobs with type IBMQjob. interval (str): Interval over which to examine. Returns: fig: A Matplotlib figure instance. """ def get_date(job): """Returns a datetime object from a IBMQJob instance. Args: job (IBMQJob): A job. Returns: dt: A datetime object. """ return datetime.datetime.strptime(job.creation_date(), '%Y-%m-%dT%H:%M:%S.%fZ') current_time = datetime.datetime.now() if interval == 'year': bins = [(current_time - datetime.timedelta(days=k*365/12)) for k in range(12)] elif interval == 'month': bins = [(current_time - datetime.timedelta(days=k)) for k in range(30)] elif interval == 'week': bins = [(current_time - datetime.timedelta(days=k)) for k in range(7)] binned_jobs = [0]*len(bins) if interval == 'year': for job in jobs: for ind, dat in enumerate(bins): date = get_date(job) if date.month == dat.month: binned_jobs[ind] += 1 break else: continue else: for job in jobs: for ind, dat in enumerate(bins): date = get_date(job) if date.day == dat.day and date.month == dat.month: binned_jobs[ind] += 1 break else: continue nz_bins = [] nz_idx = [] for ind, val in enumerate(binned_jobs): if val != 0: nz_idx.append(ind) nz_bins.append(val) total_jobs = sum(binned_jobs) colors = ['#003f5c', '#ffa600', '#374c80', '#ff764a', '#7a5195', '#ef5675', '#bc5090'] if interval == 'year': labels = ['{}-{}'.format(str(bins[b].year)[2:], bins[b].month) for b in nz_idx] else: labels = ['{}-{}'.format(bins[b].month, bins[b].day) for b in nz_idx] fig, ax = plt.subplots(1, 1, figsize=(5, 5)) # pylint: disable=invalid-name ax.pie(nz_bins[::-1], labels=labels, 
colors=colors, textprops={'fontsize': 14}, rotatelabels=True, counterclock=False) ax.add_artist(Circle((0, 0), 0.7, color='white', zorder=1)) ax.text(0, 0, total_jobs, horizontalalignment='center', verticalalignment='center', fontsize=26) fig.tight_layout() return fig
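The creation-date parsing and the month-wide bin edges used above can be checked in isolation (the timestamp and reference date here are made up):

```python
import datetime

# Parse an IBMQ-style creation date (format string taken from the code above).
d = datetime.datetime.strptime('2020-01-02T03:04:05.123456Z',
                               '%Y-%m-%dT%H:%M:%S.%fZ')
print((d.year, d.month, d.day))  # (2020, 1, 2)

# Twelve bin edges, each roughly one month (365/12 days) apart.
now = datetime.datetime(2020, 6, 15)
bins = [now - datetime.timedelta(days=k * 365 / 12) for k in range(12)]
print(len(bins), bins[0] == now)  # 12 True
```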
def read(self, address, size): """Read arbitrary size content from memory. """ value = 0x0 for i in range(0, size): value |= self._read_byte(address + i) << (i * 8) return value
Read arbitrary size content from memory.
Below is the the instruction that describes the task: ### Input: Read arbitrary size content from memory. ### Response: def read(self, address, size): """Read arbitrary size content from memory. """ value = 0x0 for i in range(0, size): value |= self._read_byte(address + i) << (i * 8) return value
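Each byte lands `i * 8` bits higher, so memory is assembled little-endian (least significant byte first). A sketch with a dict standing in for the byte store (`_read_byte` in the original; addresses and contents here are made up):

```python
def read_le(mem, address, size):
    # Assemble `size` bytes starting at `address`, least significant first.
    value = 0
    for i in range(size):
        value |= mem[address + i] << (i * 8)
    return value

mem = {0x10: 0x78, 0x11: 0x56, 0x12: 0x34, 0x13: 0x12}
print(hex(read_le(mem, 0x10, 4)))  # 0x12345678
```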
def play(self): """Starts an animation playing.""" if self.state == PygAnimation.PLAYING: pass # nothing to do elif self.state == PygAnimation.STOPPED: # restart from beginning of animation self.index = 0 # first image in list self.elapsed = 0 self.playingStartTime = time.time() self.elapsedStopTime = self.endTimesList[-1] # end of last animation image time self.nextElapsedThreshold = self.endTimesList[0] self.nIterationsLeft = self.nTimes # typically 1 elif self.state == PygAnimation.PAUSED: # restart where we left off self.playingStartTime = time.time() - self.elapsedAtPause # recalc start time self.elapsed = self.elapsedAtPause self.elapsedStopTime = self.endTimesList[-1] # end of last animation image time self.nextElapsedThreshold = self.endTimesList[self.index] self.state = PygAnimation.PLAYING
Starts an animation playing.
Below is the the instruction that describes the task: ### Input: Starts an animation playing. ### Response: def play(self): """Starts an animation playing.""" if self.state == PygAnimation.PLAYING: pass # nothing to do elif self.state == PygAnimation.STOPPED: # restart from beginning of animation self.index = 0 # first image in list self.elapsed = 0 self.playingStartTime = time.time() self.elapsedStopTime = self.endTimesList[-1] # end of last animation image time self.nextElapsedThreshold = self.endTimesList[0] self.nIterationsLeft = self.nTimes # typically 1 elif self.state == PygAnimation.PAUSED: # restart where we left off self.playingStartTime = time.time() - self.elapsedAtPause # recalc start time self.elapsed = self.elapsedAtPause self.elapsedStopTime = self.endTimesList[-1] # end of last animation image time self.nextElapsedThreshold = self.endTimesList[self.index] self.state = PygAnimation.PLAYING
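The pause/resume trick in the PAUSED branch is to shift the virtual start time backwards by the elapsed-at-pause amount, so the elapsed clock picks up exactly where it stopped. The time arithmetic in isolation (the 2.5-second figure is arbitrary):

```python
import time

elapsed_at_pause = 2.5                          # seconds played before pausing
playing_start = time.time() - elapsed_at_pause  # recalculated start time
elapsed_now = time.time() - playing_start       # resumes at ~2.5 seconds

print(round(elapsed_now, 1))
```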
def update_batch(self, **kwargs):
    """
    Simplistic batch update operation implemented in terms of `replace()`.

    Assumes that:
    - Request and response schemas contain lists of items.
    - Request items define a primary key identifier
    - The entire batch succeeds or fails together.
    """
    items = kwargs.pop("items")

    def transform(item):
        """
        Transform the dictionary expected for replace (which uses the URI
        path's id) into the resource expected from individual resources
        (which uses plain id).
        """
        item[self.identifier_key] = item.pop("id")
        return item

    return dict(
        items=[
            self.replace(**transform(item))
            for item in items
        ],
    )
Simplistic batch update operation implemented in terms of `replace()`.

Assumes that:
- Request and response schemas contain lists of items.
- Request items define a primary key identifier
- The entire batch succeeds or fails together.
Below is the instruction that describes the task:
### Input:
Simplistic batch update operation implemented in terms of `replace()`.

Assumes that:
- Request and response schemas contain lists of items.
- Request items define a primary key identifier
- The entire batch succeeds or fails together.
### Response:
def update_batch(self, **kwargs):
    """
    Simplistic batch update operation implemented in terms of `replace()`.

    Assumes that:
    - Request and response schemas contain lists of items.
    - Request items define a primary key identifier
    - The entire batch succeeds or fails together.
    """
    items = kwargs.pop("items")

    def transform(item):
        """
        Transform the dictionary expected for replace (which uses the URI
        path's id) into the resource expected from individual resources
        (which uses plain id).
        """
        item[self.identifier_key] = item.pop("id")
        return item

    return dict(
        items=[
            self.replace(**transform(item))
            for item in items
        ],
    )
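The key rename at the heart of `transform` is easy to check on its own; `user_id` below is a made-up stand-in for `self.identifier_key`:

```python
identifier_key = 'user_id'  # hypothetical value of self.identifier_key

def transform(item):
    # Move the generic "id" into the resource-specific identifier slot.
    item[identifier_key] = item.pop('id')
    return item

print(transform({'id': 7, 'name': 'alice'}))  # {'name': 'alice', 'user_id': 7}
```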
def Cpg(self): r'''Gas-phase heat capacity of the chemical at its current temperature, in units of [J/kg/K]. For calculation of this property at other temperatures, or specifying manually the method used to calculate it, and more - see the object oriented interface :obj:`thermo.heat_capacity.HeatCapacityGas`; each Chemical instance creates one to actually perform the calculations. Note that that interface provides output in molar units. Examples -------- >>> w = Chemical('water', T=520) >>> w.Cpg 1967.6698314620658 ''' Cpgm = self.HeatCapacityGas(self.T) if Cpgm: return property_molar_to_mass(Cpgm, self.MW) return None
r'''Gas-phase heat capacity of the chemical at its current temperature, in units of [J/kg/K]. For calculation of this property at other temperatures, or specifying manually the method used to calculate it, and more - see the object oriented interface :obj:`thermo.heat_capacity.HeatCapacityGas`; each Chemical instance creates one to actually perform the calculations. Note that that interface provides output in molar units. Examples -------- >>> w = Chemical('water', T=520) >>> w.Cpg 1967.6698314620658
Below is the instruction that describes the task:
### Input:
r'''Gas-phase heat capacity of the chemical at its current temperature,
in units of [J/kg/K]. For calculation of this property at other
temperatures, or specifying manually the method used to calculate it,
and more - see the object oriented interface
:obj:`thermo.heat_capacity.HeatCapacityGas`; each Chemical instance
creates one to actually perform the calculations. Note that that
interface provides output in molar units.

Examples
--------
>>> w = Chemical('water', T=520)
>>> w.Cpg
1967.6698314620658
### Response:
def Cpg(self):
    r'''Gas-phase heat capacity of the chemical at its current temperature,
    in units of [J/kg/K]. For calculation of this property at other
    temperatures, or specifying manually the method used to calculate it,
    and more - see the object oriented interface
    :obj:`thermo.heat_capacity.HeatCapacityGas`; each Chemical instance
    creates one to actually perform the calculations. Note that that
    interface provides output in molar units.

    Examples
    --------
    >>> w = Chemical('water', T=520)
    >>> w.Cpg
    1967.6698314620658
    '''
    Cpgm = self.HeatCapacityGas(self.T)
    if Cpgm:
        return property_molar_to_mass(Cpgm, self.MW)
    return None
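The molar-to-mass conversion is a unit change: J/mol/K divided by the molar weight in g/mol, times 1000 g/kg. A sketch of that helper (the real `thermo` utility may differ in details; the 35.45 J/mol/K figure is an approximate molar heat capacity for water vapor near 520 K):

```python
def property_molar_to_mass(A_molar, MW):
    # J/mol/K -> J/kg/K for a molar weight MW given in g/mol.
    return A_molar * 1000.0 / MW

# Roughly recovers the ~1967.67 J/kg/K shown in the docstring above.
print(round(property_molar_to_mass(35.45, 18.015), 1))
```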
def _dictToAlignments(self, diamondDict, read): """ Take a dict (made by DiamondTabularFormatReader.records) and convert it to a list of alignments. @param diamondDict: A C{dict}, from records(). @param read: A C{Read} instance, containing the read that DIAMOND used to create this record. @return: A C{list} of L{dark.alignment.Alignment} instances. """ alignments = [] getScore = itemgetter('bits' if self._hspClass is HSP else 'expect') for diamondAlignment in diamondDict['alignments']: alignment = Alignment(diamondAlignment['length'], diamondAlignment['title']) alignments.append(alignment) for diamondHsp in diamondAlignment['hsps']: score = getScore(diamondHsp) normalized = normalizeHSP(diamondHsp, len(read), self.diamondTask) hsp = self._hspClass( score, readStart=normalized['readStart'], readEnd=normalized['readEnd'], readStartInSubject=normalized['readStartInSubject'], readEndInSubject=normalized['readEndInSubject'], readFrame=diamondHsp['frame'], subjectStart=normalized['subjectStart'], subjectEnd=normalized['subjectEnd'], readMatchedSequence=diamondHsp['query'], subjectMatchedSequence=diamondHsp['sbjct'], # Use blastHsp.get on identicalCount and positiveCount # because they were added in version 2.0.3 and will not # be present in any of our JSON output generated before # that. Those values will be None for those JSON files, # but that's much better than no longer being able to # read all that data. identicalCount=diamondHsp.get('identicalCount'), positiveCount=diamondHsp.get('positiveCount')) alignment.addHsp(hsp) return alignments
Take a dict (made by DiamondTabularFormatReader.records) and convert it to a list of alignments. @param diamondDict: A C{dict}, from records(). @param read: A C{Read} instance, containing the read that DIAMOND used to create this record. @return: A C{list} of L{dark.alignment.Alignment} instances.
Below is the instruction that describes the task:
### Input:
Take a dict (made by DiamondTabularFormatReader.records) and convert
it to a list of alignments.

@param diamondDict: A C{dict}, from records().
@param read: A C{Read} instance, containing the read that DIAMOND used
    to create this record.
@return: A C{list} of L{dark.alignment.Alignment} instances.
### Response:
def _dictToAlignments(self, diamondDict, read):
    """
    Take a dict (made by DiamondTabularFormatReader.records) and
    convert it to a list of alignments.

    @param diamondDict: A C{dict}, from records().
    @param read: A C{Read} instance, containing the read that DIAMOND
        used to create this record.
    @return: A C{list} of L{dark.alignment.Alignment} instances.
    """
    alignments = []
    getScore = itemgetter('bits' if self._hspClass is HSP else 'expect')
    for diamondAlignment in diamondDict['alignments']:
        alignment = Alignment(diamondAlignment['length'],
                              diamondAlignment['title'])
        alignments.append(alignment)
        for diamondHsp in diamondAlignment['hsps']:
            score = getScore(diamondHsp)
            normalized = normalizeHSP(diamondHsp, len(read),
                                      self.diamondTask)
            hsp = self._hspClass(
                score,
                readStart=normalized['readStart'],
                readEnd=normalized['readEnd'],
                readStartInSubject=normalized['readStartInSubject'],
                readEndInSubject=normalized['readEndInSubject'],
                readFrame=diamondHsp['frame'],
                subjectStart=normalized['subjectStart'],
                subjectEnd=normalized['subjectEnd'],
                readMatchedSequence=diamondHsp['query'],
                subjectMatchedSequence=diamondHsp['sbjct'],
                # Use blastHsp.get on identicalCount and positiveCount
                # because they were added in version 2.0.3 and will not
                # be present in any of our JSON output generated before
                # that. Those values will be None for those JSON files,
                # but that's much better than no longer being able to
                # read all that data.
                identicalCount=diamondHsp.get('identicalCount'),
                positiveCount=diamondHsp.get('positiveCount'))
            alignment.addHsp(hsp)
    return alignments
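The score selector is just `operator.itemgetter` keyed on whether bit scores or e-values are wanted (the HSP values below are made up):

```python
from operator import itemgetter

hsp = {'bits': 42.1, 'expect': 1e-5}

get_bits = itemgetter('bits')      # used when the HSP class scores by bits
get_evalue = itemgetter('expect')  # otherwise score by e-value

print(get_bits(hsp), get_evalue(hsp))  # 42.1 1e-05
```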
def get_account_info(self): """ Gets account info. @return: AccountInfo """ attrs = {sconstant.A_BY: sconstant.V_NAME} account = SOAPpy.Types.stringType(data=self.auth_token.account_name, attrs=attrs) params = {sconstant.E_ACCOUNT: account} res = self.invoke(zconstant.NS_ZIMBRA_ACC_URL, sconstant.GetAccountInfoRequest, params) info = AccountInfo() info.parse(res) return info
Gets account info. @return: AccountInfo
Below is the instruction that describes the task:
### Input:
Gets account info.
@return: AccountInfo
### Response:
def get_account_info(self):
    """
    Gets account info.
    @return: AccountInfo
    """
    attrs = {sconstant.A_BY: sconstant.V_NAME}
    account = SOAPpy.Types.stringType(data=self.auth_token.account_name,
                                      attrs=attrs)
    params = {sconstant.E_ACCOUNT: account}
    res = self.invoke(zconstant.NS_ZIMBRA_ACC_URL,
                      sconstant.GetAccountInfoRequest, params)
    info = AccountInfo()
    info.parse(res)
    return info
def get_create_base_agent(self, agent): """Return BaseAgent from an Agent, creating it if needed. Parameters ---------- agent : indra.statements.Agent Returns ------- base_agent : indra.mechlinker.BaseAgent """ try: base_agent = self.agents[agent.name] except KeyError: base_agent = BaseAgent(agent.name) self.agents[agent.name] = base_agent return base_agent
Return BaseAgent from an Agent, creating it if needed. Parameters ---------- agent : indra.statements.Agent Returns ------- base_agent : indra.mechlinker.BaseAgent
Below is the instruction that describes the task:
### Input:
Return BaseAgent from an Agent, creating it if needed.

Parameters
----------
agent : indra.statements.Agent

Returns
-------
base_agent : indra.mechlinker.BaseAgent
### Response:
def get_create_base_agent(self, agent):
    """Return BaseAgent from an Agent, creating it if needed.

    Parameters
    ----------
    agent : indra.statements.Agent

    Returns
    -------
    base_agent : indra.mechlinker.BaseAgent
    """
    try:
        base_agent = self.agents[agent.name]
    except KeyError:
        base_agent = BaseAgent(agent.name)
        self.agents[agent.name] = base_agent

    return base_agent
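The try/except on the registry makes this a memoized factory: the same name always maps back to the same BaseAgent instance. A stripped-down sketch with a module-level dict standing in for `self.agents` (the agent name is arbitrary):

```python
class BaseAgent:
    def __init__(self, name):
        self.name = name

agents = {}  # stand-in for self.agents

def get_create_base_agent(name):
    # Return the cached BaseAgent for `name`, creating it on first use.
    try:
        return agents[name]
    except KeyError:
        agents[name] = BaseAgent(name)
        return agents[name]

a = get_create_base_agent('MAPK1')
b = get_create_base_agent('MAPK1')
print(a is b)  # True -- one shared BaseAgent per name
```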
def imrphenomc_tmplt(**kwds): """ Return an IMRPhenomC waveform using CUDA to generate the phase and amplitude Main Paper: arXiv:1005.3306 """ # Pull out the input arguments f_min = float128(kwds['f_lower']) f_max = float128(kwds['f_final']) delta_f = float128(kwds['delta_f']) distance = float128(kwds['distance']) mass1 = float128(kwds['mass1']) mass2 = float128(kwds['mass2']) spin1z = float128(kwds['spin1z']) spin2z = float128(kwds['spin2z']) if 'out' in kwds: out = kwds['out'] else: out = None # Calculate binary parameters M = mass1 + mass2 eta = mass1 * mass2 / (M * M) Xi = (mass1 * spin1z / M) + (mass2 * spin2z / M) Xisum = 2.*Xi Xiprod = Xi*Xi Xi2 = Xi*Xi m_sec = M * lal.MTSUN_SI; piM = lal.PI * m_sec; ## The units of distance given as input is taken to pe Mpc. Converting to SI distance *= (1.0e6 * lal.PC_SI / (2. * sqrt(5. / (64.*lal.PI)) * M * lal.MRSUN_SI * M * lal.MTSUN_SI)) # Check if the value of f_max is correctly given, else replace with the fCut # used in the PhenomB code in lalsimulation. The various coefficients come # from Eq.(4.18) of http://arxiv.org/pdf/0710.2335 and # Table I of http://arxiv.org/pdf/0712.0343 if not f_max: f_max = (1.7086 * eta * eta - 0.26592 * eta + 0.28236) / piM # Transform the eta, chi to Lambda parameters, using Eq 5.14, Table II of Main # paper. 
z101 = -2.417e-03 z102 = -1.093e-03 z111 = -1.917e-02 z110 = 7.267e-02 z120 = -2.504e-01 z201 = 5.962e-01 z202 = -5.600e-02 z211 = 1.520e-01 z210 = -2.970e+00 z220 = 1.312e+01 z301 = -3.283e+01 z302 = 8.859e+00 z311 = 2.931e+01 z310 = 7.954e+01 z320 = -4.349e+02 z401 = 1.619e+02 z402 = -4.702e+01 z411 = -1.751e+02 z410 = -3.225e+02 z420 = 1.587e+03 z501 = -6.320e+02 z502 = 2.463e+02 z511 = 1.048e+03 z510 = 3.355e+02 z520 = -5.115e+03 z601 = -4.809e+01 z602 = -3.643e+02 z611 = -5.215e+02 z610 = 1.870e+03 z620 = 7.354e+02 z701 = 4.149e+00 z702 = -4.070e+00 z711 = -8.752e+01 z710 = -4.897e+01 z720 = 6.665e+02 z801 = -5.472e-02 z802 = 2.094e-02 z811 = 3.554e-01 z810 = 1.151e-01 z820 = 9.640e-01 z901 = -1.235e+00 z902 = 3.423e-01 z911 = 6.062e+00 z910 = 5.949e+00 z920 = -1.069e+01 eta2 = eta*eta Xi2 = Xiprod # Calculate alphas, gamma, deltas from Table II and Eq 5.14 of Main paper a1 = z101 * Xi + z102 * Xi2 + z111 * eta * Xi + z110 * eta + z120 * eta2 a2 = z201 * Xi + z202 * Xi2 + z211 * eta * Xi + z210 * eta + z220 * eta2 a3 = z301 * Xi + z302 * Xi2 + z311 * eta * Xi + z310 * eta + z320 * eta2 a4 = z401 * Xi + z402 * Xi2 + z411 * eta * Xi + z410 * eta + z420 * eta2 a5 = z501 * Xi + z502 * Xi2 + z511 * eta * Xi + z510 * eta + z520 * eta2 a6 = z601 * Xi + z602 * Xi2 + z611 * eta * Xi + z610 * eta + z620 * eta2 g1 = z701 * Xi + z702 * Xi2 + z711 * eta * Xi + z710 * eta + z720 * eta2 del1 = z801 * Xi + z802 * Xi2 + z811 * eta * Xi + z810 * eta + z820 * eta2 del2 = z901 * Xi + z902 * Xi2 + z911 * eta * Xi + z910 * eta + z920 * eta2 # Get the spin of the final BH afin = FinalSpin( Xi, eta ) Q = Qa( abs(afin) ) # Get the fRD frd = fRD( abs(afin), M) Mfrd = frd * m_sec # Define the frequencies where SPA->PM->RD f1 = 0.1 * frd Mf1 = m_sec * f1 f2 = frd Mf2 = m_sec * f2 d1 = 0.005 d2 = 0.005 f0 = 0.98 * frd Mf0 = m_sec * f0 d0 = 0.015 # Now use this frequency for calculation of betas # calculate beta1 and beta2, that appear in Eq 5.7 in the main paper. 
b2 = ((-5./3.)* a1 * pow(Mfrd,(-8./3.)) - a2/(Mfrd*Mfrd) - \ (a3/3.)*pow(Mfrd,(-4./3.)) + (2./3.)* a5 * pow(Mfrd,(-1./3.)) + a6)/eta psiPMrd = (a1 * pow(Mfrd,(-5./3.)) + a2/Mfrd + a3 * pow(Mfrd,(-1./3.)) + \ a4 + a5 * pow(Mfrd,(2./3.)) + a6 * Mfrd)/eta b1 = psiPMrd - (b2 * Mfrd) ### Calculate the PN coefficients, Eq A3 - A5 of main paper ### pfaN = 3.0/(128.0 * eta) pfa2 = (3715./756.) + (55.*eta/9.0) pfa3 = -16.0*lal.PI + (113./3.)*Xi - 38.*eta*Xisum/3. pfa4 = (152.93365/5.08032) - 50.*Xi2 + eta*(271.45/5.04 + 1.25*Xiprod) + \ 3085.*eta2/72. pfa5 = lal.PI*(386.45/7.56 - 65.*eta/9.) - \ Xi*(735.505/2.268 + 130.*eta/9.) + Xisum*(1285.0*eta/8.1 + 170.*eta2/9.) - \ 10.*Xi2*Xi/3. + 10.*eta*Xi*Xiprod pfa6 = 11583.231236531/4.694215680 - 640.0*lal.PI*lal.PI/3. - \ 6848.0*lal.GAMMA/21. - 684.8*log(64.)/6.3 + \ eta*(2255.*lal.PI*lal.PI/12. - 15737.765635/3.048192) + \ 76.055*eta2/1.728 - (127.825*eta2*eta/1.296) + \ 2920.*lal.PI*Xi/3. - (175. - 1490.*eta)*Xi2/3. - \ (1120.*lal.PI/3. - 1085.*Xi/3.)*eta*Xisum + \ (269.45*eta/3.36 - 2365.*eta2/6.)*Xiprod pfa6log = -6848./63. pfa7 = lal.PI*(770.96675/2.54016 + 378.515*eta/1.512 - 740.45*eta2/7.56) - \ Xi*(20373.952415/3.048192 + 1509.35*eta/2.24 - 5786.95*eta2/4.32) + \ Xisum*(4862.041225*eta/1.524096 + 1189.775*eta2/1.008 - 717.05*eta2*eta/2.16 - 830.*eta*Xi2/3. + 35.*eta2*Xiprod/3.) - \ 560.*lal.PI*Xi2 + 20.*lal.PI*eta*Xiprod + \ Xi2*Xi*(945.55/1.68 - 85.*eta) + Xi*Xiprod*(396.65*eta/1.68 + 255.*eta2) xdotaN = 64.*eta/5. xdota2 = -7.43/3.36 - 11.*eta/4. xdota3 = 4.*lal.PI - 11.3*Xi/1.2 + 19.*eta*Xisum/6. xdota4 = 3.4103/1.8144 + 5*Xi2 + eta*(13.661/2.016 - Xiprod/8.) + 5.9*eta2/1.8 xdota5 = -lal.PI*(41.59/6.72 + 189.*eta/8.) - Xi*(31.571/1.008 - 116.5*eta/2.4) + \ Xisum*(21.863*eta/1.008 - 79.*eta2/6.) - 3*Xi*Xi2/4. + \ 9.*eta*Xi*Xiprod/4. 
xdota6 = 164.47322263/1.39708800 - 17.12*lal.GAMMA/1.05 + \ 16.*lal.PI*lal.PI/3 - 8.56*log(16.)/1.05 + \ eta*(45.1*lal.PI*lal.PI/4.8 - 561.98689/2.17728) + \ 5.41*eta2/8.96 - 5.605*eta*eta2/2.592 - 80.*lal.PI*Xi/3. + \ eta*Xisum*(20.*lal.PI/3. - 113.5*Xi/3.6) + \ Xi2*(64.153/1.008 - 45.7*eta/3.6) - \ Xiprod*(7.87*eta/1.44 - 30.37*eta2/1.44) xdota6log = -856./105. xdota7 = -lal.PI*(4.415/4.032 - 358.675*eta/6.048 - 91.495*eta2/1.512) - \ Xi*(252.9407/2.7216 - 845.827*eta/6.048 + 415.51*eta2/8.64) + \ Xisum*(158.0239*eta/5.4432 - 451.597*eta2/6.048 + 20.45*eta2*eta/4.32 + 107.*eta*Xi2/6. - 5.*eta2*Xiprod/24.) + \ 12.*lal.PI*Xi2 - Xi2*Xi*(150.5/2.4 + eta/8.) + \ Xi*Xiprod*(10.1*eta/2.4 + 3.*eta2/8.) AN = 8.*eta*sqrt(lal.PI/5.) A2 = (-107. + 55.*eta)/42. A3 = 2.*lal.PI - 4.*Xi/3. + 2.*eta*Xisum/3. A4 = -2.173/1.512 - eta*(10.69/2.16 - 2.*Xiprod) + 2.047*eta2/1.512 A5 = -10.7*lal.PI/2.1 + eta*(3.4*lal.PI/2.1) A5imag = -24.*eta A6 = 270.27409/6.46800 - 8.56*lal.GAMMA/1.05 + \ 2.*lal.PI*lal.PI/3. + \ eta*(4.1*lal.PI*lal.PI/9.6 - 27.8185/3.3264) - \ 20.261*eta2/2.772 + 11.4635*eta*eta2/9.9792 - \ 4.28*log(16.)/1.05 A6log = -428./105. 
A6imag = 4.28*lal.PI/1.05 ### Define other parameters needed by waveform generation ### kmin = int(f_min / delta_f) kmax = int(f_max / delta_f) n = kmax + 1; if not out: htilde = FrequencySeries(zeros(n,dtype=numpy.complex128), delta_f=delta_f, copy=False) else: if type(out) is not Array: raise TypeError("Output must be an instance of Array") if len(out) < kmax: raise TypeError("Output array is too small") if out.dtype != complex64: raise TypeError("Output array is the wrong dtype") htilde = FrequencySeries(out, delta_f=delta_f, copy=False) phenomC_kernel(htilde.data[kmin:kmax], kmin, delta_f, eta, Xi, distance, m_sec, piM, Mfrd, pfaN, pfa2, pfa3, pfa4, pfa5, pfa6, pfa6log, pfa7, a1, a2, a3, a4, a5, a6, b1, b2, Mf1, Mf2, Mf0, d1, d2, d0, xdota2, xdota3, xdota4, xdota5, xdota6, xdota6log, xdota7, xdotaN, AN, A2, A3, A4, A5, A5imag, A6, A6log, A6imag, g1, del1, del2, Q ) hp = htilde hc = htilde * 1j return hp, hc
Return an IMRPhenomC waveform using CUDA to generate the phase and amplitude Main Paper: arXiv:1005.3306
Below is the the instruction that describes the task: ### Input: Return an IMRPhenomC waveform using CUDA to generate the phase and amplitude Main Paper: arXiv:1005.3306 ### Response: def imrphenomc_tmplt(**kwds): """ Return an IMRPhenomC waveform using CUDA to generate the phase and amplitude Main Paper: arXiv:1005.3306 """ # Pull out the input arguments f_min = float128(kwds['f_lower']) f_max = float128(kwds['f_final']) delta_f = float128(kwds['delta_f']) distance = float128(kwds['distance']) mass1 = float128(kwds['mass1']) mass2 = float128(kwds['mass2']) spin1z = float128(kwds['spin1z']) spin2z = float128(kwds['spin2z']) if 'out' in kwds: out = kwds['out'] else: out = None # Calculate binary parameters M = mass1 + mass2 eta = mass1 * mass2 / (M * M) Xi = (mass1 * spin1z / M) + (mass2 * spin2z / M) Xisum = 2.*Xi Xiprod = Xi*Xi Xi2 = Xi*Xi m_sec = M * lal.MTSUN_SI; piM = lal.PI * m_sec; ## The units of distance given as input is taken to pe Mpc. Converting to SI distance *= (1.0e6 * lal.PC_SI / (2. * sqrt(5. / (64.*lal.PI)) * M * lal.MRSUN_SI * M * lal.MTSUN_SI)) # Check if the value of f_max is correctly given, else replace with the fCut # used in the PhenomB code in lalsimulation. The various coefficients come # from Eq.(4.18) of http://arxiv.org/pdf/0710.2335 and # Table I of http://arxiv.org/pdf/0712.0343 if not f_max: f_max = (1.7086 * eta * eta - 0.26592 * eta + 0.28236) / piM # Transform the eta, chi to Lambda parameters, using Eq 5.14, Table II of Main # paper. 
z101 = -2.417e-03 z102 = -1.093e-03 z111 = -1.917e-02 z110 = 7.267e-02 z120 = -2.504e-01 z201 = 5.962e-01 z202 = -5.600e-02 z211 = 1.520e-01 z210 = -2.970e+00 z220 = 1.312e+01 z301 = -3.283e+01 z302 = 8.859e+00 z311 = 2.931e+01 z310 = 7.954e+01 z320 = -4.349e+02 z401 = 1.619e+02 z402 = -4.702e+01 z411 = -1.751e+02 z410 = -3.225e+02 z420 = 1.587e+03 z501 = -6.320e+02 z502 = 2.463e+02 z511 = 1.048e+03 z510 = 3.355e+02 z520 = -5.115e+03 z601 = -4.809e+01 z602 = -3.643e+02 z611 = -5.215e+02 z610 = 1.870e+03 z620 = 7.354e+02 z701 = 4.149e+00 z702 = -4.070e+00 z711 = -8.752e+01 z710 = -4.897e+01 z720 = 6.665e+02 z801 = -5.472e-02 z802 = 2.094e-02 z811 = 3.554e-01 z810 = 1.151e-01 z820 = 9.640e-01 z901 = -1.235e+00 z902 = 3.423e-01 z911 = 6.062e+00 z910 = 5.949e+00 z920 = -1.069e+01 eta2 = eta*eta Xi2 = Xiprod # Calculate alphas, gamma, deltas from Table II and Eq 5.14 of Main paper a1 = z101 * Xi + z102 * Xi2 + z111 * eta * Xi + z110 * eta + z120 * eta2 a2 = z201 * Xi + z202 * Xi2 + z211 * eta * Xi + z210 * eta + z220 * eta2 a3 = z301 * Xi + z302 * Xi2 + z311 * eta * Xi + z310 * eta + z320 * eta2 a4 = z401 * Xi + z402 * Xi2 + z411 * eta * Xi + z410 * eta + z420 * eta2 a5 = z501 * Xi + z502 * Xi2 + z511 * eta * Xi + z510 * eta + z520 * eta2 a6 = z601 * Xi + z602 * Xi2 + z611 * eta * Xi + z610 * eta + z620 * eta2 g1 = z701 * Xi + z702 * Xi2 + z711 * eta * Xi + z710 * eta + z720 * eta2 del1 = z801 * Xi + z802 * Xi2 + z811 * eta * Xi + z810 * eta + z820 * eta2 del2 = z901 * Xi + z902 * Xi2 + z911 * eta * Xi + z910 * eta + z920 * eta2 # Get the spin of the final BH afin = FinalSpin( Xi, eta ) Q = Qa( abs(afin) ) # Get the fRD frd = fRD( abs(afin), M) Mfrd = frd * m_sec # Define the frequencies where SPA->PM->RD f1 = 0.1 * frd Mf1 = m_sec * f1 f2 = frd Mf2 = m_sec * f2 d1 = 0.005 d2 = 0.005 f0 = 0.98 * frd Mf0 = m_sec * f0 d0 = 0.015 # Now use this frequency for calculation of betas # calculate beta1 and beta2, that appear in Eq 5.7 in the main paper. 
b2 = ((-5./3.)* a1 * pow(Mfrd,(-8./3.)) - a2/(Mfrd*Mfrd) - \ (a3/3.)*pow(Mfrd,(-4./3.)) + (2./3.)* a5 * pow(Mfrd,(-1./3.)) + a6)/eta psiPMrd = (a1 * pow(Mfrd,(-5./3.)) + a2/Mfrd + a3 * pow(Mfrd,(-1./3.)) + \ a4 + a5 * pow(Mfrd,(2./3.)) + a6 * Mfrd)/eta b1 = psiPMrd - (b2 * Mfrd) ### Calculate the PN coefficients, Eq A3 - A5 of main paper ### pfaN = 3.0/(128.0 * eta) pfa2 = (3715./756.) + (55.*eta/9.0) pfa3 = -16.0*lal.PI + (113./3.)*Xi - 38.*eta*Xisum/3. pfa4 = (152.93365/5.08032) - 50.*Xi2 + eta*(271.45/5.04 + 1.25*Xiprod) + \ 3085.*eta2/72. pfa5 = lal.PI*(386.45/7.56 - 65.*eta/9.) - \ Xi*(735.505/2.268 + 130.*eta/9.) + Xisum*(1285.0*eta/8.1 + 170.*eta2/9.) - \ 10.*Xi2*Xi/3. + 10.*eta*Xi*Xiprod pfa6 = 11583.231236531/4.694215680 - 640.0*lal.PI*lal.PI/3. - \ 6848.0*lal.GAMMA/21. - 684.8*log(64.)/6.3 + \ eta*(2255.*lal.PI*lal.PI/12. - 15737.765635/3.048192) + \ 76.055*eta2/1.728 - (127.825*eta2*eta/1.296) + \ 2920.*lal.PI*Xi/3. - (175. - 1490.*eta)*Xi2/3. - \ (1120.*lal.PI/3. - 1085.*Xi/3.)*eta*Xisum + \ (269.45*eta/3.36 - 2365.*eta2/6.)*Xiprod pfa6log = -6848./63. pfa7 = lal.PI*(770.96675/2.54016 + 378.515*eta/1.512 - 740.45*eta2/7.56) - \ Xi*(20373.952415/3.048192 + 1509.35*eta/2.24 - 5786.95*eta2/4.32) + \ Xisum*(4862.041225*eta/1.524096 + 1189.775*eta2/1.008 - 717.05*eta2*eta/2.16 - 830.*eta*Xi2/3. + 35.*eta2*Xiprod/3.) - \ 560.*lal.PI*Xi2 + 20.*lal.PI*eta*Xiprod + \ Xi2*Xi*(945.55/1.68 - 85.*eta) + Xi*Xiprod*(396.65*eta/1.68 + 255.*eta2) xdotaN = 64.*eta/5. xdota2 = -7.43/3.36 - 11.*eta/4. xdota3 = 4.*lal.PI - 11.3*Xi/1.2 + 19.*eta*Xisum/6. xdota4 = 3.4103/1.8144 + 5*Xi2 + eta*(13.661/2.016 - Xiprod/8.) + 5.9*eta2/1.8 xdota5 = -lal.PI*(41.59/6.72 + 189.*eta/8.) - Xi*(31.571/1.008 - 116.5*eta/2.4) + \ Xisum*(21.863*eta/1.008 - 79.*eta2/6.) - 3*Xi*Xi2/4. + \ 9.*eta*Xi*Xiprod/4. 
xdota6 = 164.47322263/1.39708800 - 17.12*lal.GAMMA/1.05 + \ 16.*lal.PI*lal.PI/3 - 8.56*log(16.)/1.05 + \ eta*(45.1*lal.PI*lal.PI/4.8 - 561.98689/2.17728) + \ 5.41*eta2/8.96 - 5.605*eta*eta2/2.592 - 80.*lal.PI*Xi/3. + \ eta*Xisum*(20.*lal.PI/3. - 113.5*Xi/3.6) + \ Xi2*(64.153/1.008 - 45.7*eta/3.6) - \ Xiprod*(7.87*eta/1.44 - 30.37*eta2/1.44) xdota6log = -856./105. xdota7 = -lal.PI*(4.415/4.032 - 358.675*eta/6.048 - 91.495*eta2/1.512) - \ Xi*(252.9407/2.7216 - 845.827*eta/6.048 + 415.51*eta2/8.64) + \ Xisum*(158.0239*eta/5.4432 - 451.597*eta2/6.048 + 20.45*eta2*eta/4.32 + 107.*eta*Xi2/6. - 5.*eta2*Xiprod/24.) + \ 12.*lal.PI*Xi2 - Xi2*Xi*(150.5/2.4 + eta/8.) + \ Xi*Xiprod*(10.1*eta/2.4 + 3.*eta2/8.) AN = 8.*eta*sqrt(lal.PI/5.) A2 = (-107. + 55.*eta)/42. A3 = 2.*lal.PI - 4.*Xi/3. + 2.*eta*Xisum/3. A4 = -2.173/1.512 - eta*(10.69/2.16 - 2.*Xiprod) + 2.047*eta2/1.512 A5 = -10.7*lal.PI/2.1 + eta*(3.4*lal.PI/2.1) A5imag = -24.*eta A6 = 270.27409/6.46800 - 8.56*lal.GAMMA/1.05 + \ 2.*lal.PI*lal.PI/3. + \ eta*(4.1*lal.PI*lal.PI/9.6 - 27.8185/3.3264) - \ 20.261*eta2/2.772 + 11.4635*eta*eta2/9.9792 - \ 4.28*log(16.)/1.05 A6log = -428./105. 
A6imag = 4.28*lal.PI/1.05 ### Define other parameters needed by waveform generation ### kmin = int(f_min / delta_f) kmax = int(f_max / delta_f) n = kmax + 1; if not out: htilde = FrequencySeries(zeros(n,dtype=numpy.complex128), delta_f=delta_f, copy=False) else: if type(out) is not Array: raise TypeError("Output must be an instance of Array") if len(out) < kmax: raise TypeError("Output array is too small") if out.dtype != complex64: raise TypeError("Output array is the wrong dtype") htilde = FrequencySeries(out, delta_f=delta_f, copy=False) phenomC_kernel(htilde.data[kmin:kmax], kmin, delta_f, eta, Xi, distance, m_sec, piM, Mfrd, pfaN, pfa2, pfa3, pfa4, pfa5, pfa6, pfa6log, pfa7, a1, a2, a3, a4, a5, a6, b1, b2, Mf1, Mf2, Mf0, d1, d2, d0, xdota2, xdota3, xdota4, xdota5, xdota6, xdota6log, xdota7, xdotaN, AN, A2, A3, A4, A5, A5imag, A6, A6log, A6imag, g1, del1, del2, Q ) hp = htilde hc = htilde * 1j return hp, hc
def deleteGenome(species, name) : """Removes a genome from the database""" printf('deleting genome (%s, %s)...' % (species, name)) conf.db.beginTransaction() objs = [] allGood = True try : genome = Genome_Raba(name = name, species = species.lower()) objs.append(genome) pBar = ProgressBar(label = 'preparing') for typ in (Chromosome_Raba, Gene_Raba, Transcript_Raba, Exon_Raba, Protein_Raba) : pBar.update() f = RabaQuery(typ, namespace = genome._raba_namespace) f.addFilter({'genome' : genome}) for e in f.iterRun() : objs.append(e) pBar.close() pBar = ProgressBar(nbEpochs = len(objs), label = 'deleting objects') for e in objs : pBar.update() e.delete() pBar.close() except KeyError as e : #~ printf("\tWARNING, couldn't remove genome from db, maybe it's not there: ", e) raise KeyError("\tWARNING, couldn't remove genome from db, maybe it's not there: ", e) allGood = False printf('\tdeleting folder') try : shutil.rmtree(conf.getGenomeSequencePath(species, name)) except OSError as e: #~ printf('\tWARNING, Unable to delete folder: ', e) OSError('\tWARNING, Unable to delete folder: ', e) allGood = False conf.db.endTransaction() return allGood
Removes a genome from the database
Below is the instruction that describes the task: ### Input: Removes a genome from the database ### Response: def deleteGenome(species, name) : """Removes a genome from the database""" printf('deleting genome (%s, %s)...' % (species, name)) conf.db.beginTransaction() objs = [] allGood = True try : genome = Genome_Raba(name = name, species = species.lower()) objs.append(genome) pBar = ProgressBar(label = 'preparing') for typ in (Chromosome_Raba, Gene_Raba, Transcript_Raba, Exon_Raba, Protein_Raba) : pBar.update() f = RabaQuery(typ, namespace = genome._raba_namespace) f.addFilter({'genome' : genome}) for e in f.iterRun() : objs.append(e) pBar.close() pBar = ProgressBar(nbEpochs = len(objs), label = 'deleting objects') for e in objs : pBar.update() e.delete() pBar.close() except KeyError as e : #~ printf("\tWARNING, couldn't remove genome from db, maybe it's not there: ", e) raise KeyError("\tWARNING, couldn't remove genome from db, maybe it's not there: ", e) allGood = False printf('\tdeleting folder') try : shutil.rmtree(conf.getGenomeSequencePath(species, name)) except OSError as e: #~ printf('\tWARNING, Unable to delete folder: ', e) OSError('\tWARNING, Unable to delete folder: ', e) allGood = False conf.db.endTransaction() return allGood
def start_agent(agent, recp, desc, allocation_id=None, *args, **kwargs): ''' Tells remote host agent to start agent identified by desc. The result value of the fiber is IRecipient. ''' f = fiber.Fiber() f.add_callback(agent.initiate_protocol, IRecipient(recp), desc, allocation_id, *args, **kwargs) f.add_callback(StartAgentRequester.notify_finish) f.succeed(StartAgentRequester) return f
Tells remote host agent to start agent identified by desc. The result value of the fiber is IRecipient.
Below is the instruction that describes the task: ### Input: Tells remote host agent to start agent identified by desc. The result value of the fiber is IRecipient. ### Response: def start_agent(agent, recp, desc, allocation_id=None, *args, **kwargs): ''' Tells remote host agent to start agent identified by desc. The result value of the fiber is IRecipient. ''' f = fiber.Fiber() f.add_callback(agent.initiate_protocol, IRecipient(recp), desc, allocation_id, *args, **kwargs) f.add_callback(StartAgentRequester.notify_finish) f.succeed(StartAgentRequester) return f
def _write_str(self, data): """ Converts the given data then writes it :param data: Data to be written :return: The result of ``self.output.write()`` """ with self.__lock: self.output.write( to_str(data, self.encoding) .encode() .decode(self.out_encoding, errors="replace") )
Converts the given data then writes it :param data: Data to be written :return: The result of ``self.output.write()``
Below is the instruction that describes the task: ### Input: Converts the given data then writes it :param data: Data to be written :return: The result of ``self.output.write()`` ### Response: def _write_str(self, data): """ Converts the given data then writes it :param data: Data to be written :return: The result of ``self.output.write()`` """ with self.__lock: self.output.write( to_str(data, self.encoding) .encode() .decode(self.out_encoding, errors="replace") )
def build_info(name, path=None, module=None): """Return the build info tuple.""" verlist = get_version_list(path, module) verlist[0] = name return tuple(verlist)
Return the build info tuple.
Below is the instruction that describes the task: ### Input: Return the build info tuple. ### Response: def build_info(name, path=None, module=None): """Return the build info tuple.""" verlist = get_version_list(path, module) verlist[0] = name return tuple(verlist)
def clean_response_type(self): """ :rfc:`3.1.1` Lists of values are space delimited. """ response_type = self.cleaned_data.get('response_type') if not response_type: raise OAuthValidationError({'error': 'invalid_request', 'error_description': "No 'response_type' supplied."}) types = response_type.split(" ") for type in types: if type not in RESPONSE_TYPE_CHOICES: raise OAuthValidationError({ 'error': 'unsupported_response_type', 'error_description': u"'%s' is not a supported response " "type." % type}) return response_type
:rfc:`3.1.1` Lists of values are space delimited.
Below is the instruction that describes the task: ### Input: :rfc:`3.1.1` Lists of values are space delimited. ### Response: def clean_response_type(self): """ :rfc:`3.1.1` Lists of values are space delimited. """ response_type = self.cleaned_data.get('response_type') if not response_type: raise OAuthValidationError({'error': 'invalid_request', 'error_description': "No 'response_type' supplied."}) types = response_type.split(" ") for type in types: if type not in RESPONSE_TYPE_CHOICES: raise OAuthValidationError({ 'error': 'unsupported_response_type', 'error_description': u"'%s' is not a supported response " "type." % type}) return response_type
def get_vc_dir_from_vs_dir(self): """ Get Visual C++ directory from Visual Studio directory. """ vc_dir = os.path.join(self.vs_dir, 'vc') if os.path.isdir(vc_dir): logging.info(_('using vc: %s'), vc_dir) return vc_dir logging.debug(_('vc not found: %s'), vc_dir) return ''
Get Visual C++ directory from Visual Studio directory.
Below is the instruction that describes the task: ### Input: Get Visual C++ directory from Visual Studio directory. ### Response: def get_vc_dir_from_vs_dir(self): """ Get Visual C++ directory from Visual Studio directory. """ vc_dir = os.path.join(self.vs_dir, 'vc') if os.path.isdir(vc_dir): logging.info(_('using vc: %s'), vc_dir) return vc_dir logging.debug(_('vc not found: %s'), vc_dir) return ''
def reset_hba(self): """Remove all records from pg_hba.conf.""" status = self.get_status() if status == 'not-initialized': raise ClusterError( 'cannot modify HBA records: cluster is not initialized') pg_hba = os.path.join(self._data_dir, 'pg_hba.conf') try: with open(pg_hba, 'w'): pass except IOError as e: raise ClusterError( 'cannot modify HBA records: {}'.format(e)) from e
Remove all records from pg_hba.conf.
Below is the instruction that describes the task: ### Input: Remove all records from pg_hba.conf. ### Response: def reset_hba(self): """Remove all records from pg_hba.conf.""" status = self.get_status() if status == 'not-initialized': raise ClusterError( 'cannot modify HBA records: cluster is not initialized') pg_hba = os.path.join(self._data_dir, 'pg_hba.conf') try: with open(pg_hba, 'w'): pass except IOError as e: raise ClusterError( 'cannot modify HBA records: {}'.format(e)) from e
def unique_iter(seq): """ See http://www.peterbe.com/plog/uniqifiers-benchmark Originally f8 written by Dave Kirby """ seen = set() return [x for x in seq if x not in seen and not seen.add(x)]
See http://www.peterbe.com/plog/uniqifiers-benchmark Originally f8 written by Dave Kirby
Below is the instruction that describes the task: ### Input: See http://www.peterbe.com/plog/uniqifiers-benchmark Originally f8 written by Dave Kirby ### Response: def unique_iter(seq): """ See http://www.peterbe.com/plog/uniqifiers-benchmark Originally f8 written by Dave Kirby """ seen = set() return [x for x in seq if x not in seen and not seen.add(x)]
def _update_camera(self, camera_center): """Update the camera transform based on the new camera center.""" self._world_tl_to_world_camera_rel.offset = ( -self._world_to_world_tl.fwd_pt(camera_center) * self._world_tl_to_world_camera_rel.scale)
Update the camera transform based on the new camera center.
Below is the instruction that describes the task: ### Input: Update the camera transform based on the new camera center. ### Response: def _update_camera(self, camera_center): """Update the camera transform based on the new camera center.""" self._world_tl_to_world_camera_rel.offset = ( -self._world_to_world_tl.fwd_pt(camera_center) * self._world_tl_to_world_camera_rel.scale)
def update(self, identity_id, service, total_allowance=None, analyze_queries=None): """ Update the limit :param identity_id: The ID of the identity to retrieve :param service: The service that the token is linked to :param total_allowance: The total allowance for this token's limit :param analyze_queries: The number of analyze calls :return: dict of REST API output with headers attached :rtype: :class:`~datasift.request.DictResponse` :raises: :class:`~datasift.exceptions.DataSiftApiException`, :class:`requests.exceptions.HTTPError` """ params = {'service': service} if total_allowance is not None: params['total_allowance'] = total_allowance if analyze_queries is not None: params['analyze_queries'] = analyze_queries return self.request.put(str(identity_id) + '/limit/' + service, params)
Update the limit :param identity_id: The ID of the identity to retrieve :param service: The service that the token is linked to :param total_allowance: The total allowance for this token's limit :param analyze_queries: The number of analyze calls :return: dict of REST API output with headers attached :rtype: :class:`~datasift.request.DictResponse` :raises: :class:`~datasift.exceptions.DataSiftApiException`, :class:`requests.exceptions.HTTPError`
Below is the instruction that describes the task: ### Input: Update the limit :param identity_id: The ID of the identity to retrieve :param service: The service that the token is linked to :param total_allowance: The total allowance for this token's limit :param analyze_queries: The number of analyze calls :return: dict of REST API output with headers attached :rtype: :class:`~datasift.request.DictResponse` :raises: :class:`~datasift.exceptions.DataSiftApiException`, :class:`requests.exceptions.HTTPError` ### Response: def update(self, identity_id, service, total_allowance=None, analyze_queries=None): """ Update the limit :param identity_id: The ID of the identity to retrieve :param service: The service that the token is linked to :param total_allowance: The total allowance for this token's limit :param analyze_queries: The number of analyze calls :return: dict of REST API output with headers attached :rtype: :class:`~datasift.request.DictResponse` :raises: :class:`~datasift.exceptions.DataSiftApiException`, :class:`requests.exceptions.HTTPError` """ params = {'service': service} if total_allowance is not None: params['total_allowance'] = total_allowance if analyze_queries is not None: params['analyze_queries'] = analyze_queries return self.request.put(str(identity_id) + '/limit/' + service, params)
def GetHostMemSwappedMB(self): '''Undocumented.''' counter = c_uint() ret = vmGuestLib.VMGuestLib_GetHostMemSwappedMB(self.handle.value, byref(counter)) if ret != VMGUESTLIB_ERROR_SUCCESS: raise VMGuestLibException(ret) return counter.value
Undocumented.
Below is the instruction that describes the task: ### Input: Undocumented. ### Response: def GetHostMemSwappedMB(self): '''Undocumented.''' counter = c_uint() ret = vmGuestLib.VMGuestLib_GetHostMemSwappedMB(self.handle.value, byref(counter)) if ret != VMGUESTLIB_ERROR_SUCCESS: raise VMGuestLibException(ret) return counter.value
def Private(input_dim, num_outputs, kernel, output, kappa=None,name='X'): """ Builds a kernel for an Intrinsic Coregionalization Model :input_dim: Input dimensionality :num_outputs: Number of outputs :param kernel: kernel that will be multiplied by the coregionalize kernel (matrix B). :type kernel: a GPy kernel :param W_rank: number tuples of the coregionalization parameters 'W' :type W_rank: integer """ K = ICM(input_dim,num_outputs,kernel,W_rank=1,kappa=kappa,name=name) K.B.W.fix(0) _range = list(range(num_outputs)) _range.pop(output) for j in _range: K.B.kappa[j] = 0 K.B.kappa[j].fix() return K
Builds a kernel for an Intrinsic Coregionalization Model :input_dim: Input dimensionality :num_outputs: Number of outputs :param kernel: kernel that will be multiplied by the coregionalize kernel (matrix B). :type kernel: a GPy kernel :param W_rank: number tuples of the coregionalization parameters 'W' :type W_rank: integer
Below is the instruction that describes the task: ### Input: Builds a kernel for an Intrinsic Coregionalization Model :input_dim: Input dimensionality :num_outputs: Number of outputs :param kernel: kernel that will be multiplied by the coregionalize kernel (matrix B). :type kernel: a GPy kernel :param W_rank: number tuples of the coregionalization parameters 'W' :type W_rank: integer ### Response: def Private(input_dim, num_outputs, kernel, output, kappa=None,name='X'): """ Builds a kernel for an Intrinsic Coregionalization Model :input_dim: Input dimensionality :num_outputs: Number of outputs :param kernel: kernel that will be multiplied by the coregionalize kernel (matrix B). :type kernel: a GPy kernel :param W_rank: number tuples of the coregionalization parameters 'W' :type W_rank: integer """ K = ICM(input_dim,num_outputs,kernel,W_rank=1,kappa=kappa,name=name) K.B.W.fix(0) _range = list(range(num_outputs)) _range.pop(output) for j in _range: K.B.kappa[j] = 0 K.B.kappa[j].fix() return K
def primary_container_name(names, default=None, strip_trailing_slash=True): """ From the list of names, finds the primary name of the container. Returns the defined default value (e.g. the container id or ``None``) in case it cannot find any. :param names: List with name and aliases of the container. :type names: list[unicode | str] :param default: Default value. :param strip_trailing_slash: As read directly from the Docker service, every container name includes a trailing slash. Set this to ``False`` if it is already removed. :type strip_trailing_slash: bool :return: Primary name of the container. :rtype: unicode | str """ if strip_trailing_slash: ex_names = [name[1:] for name in names if name.find('/', 2) == -1] else: ex_names = [name for name in names if name.find('/', 2) == -1] if ex_names: return ex_names[0] return default
From the list of names, finds the primary name of the container. Returns the defined default value (e.g. the container id or ``None``) in case it cannot find any. :param names: List with name and aliases of the container. :type names: list[unicode | str] :param default: Default value. :param strip_trailing_slash: As read directly from the Docker service, every container name includes a trailing slash. Set this to ``False`` if it is already removed. :type strip_trailing_slash: bool :return: Primary name of the container. :rtype: unicode | str
Below is the instruction that describes the task: ### Input: From the list of names, finds the primary name of the container. Returns the defined default value (e.g. the container id or ``None``) in case it cannot find any. :param names: List with name and aliases of the container. :type names: list[unicode | str] :param default: Default value. :param strip_trailing_slash: As read directly from the Docker service, every container name includes a trailing slash. Set this to ``False`` if it is already removed. :type strip_trailing_slash: bool :return: Primary name of the container. :rtype: unicode | str ### Response: def primary_container_name(names, default=None, strip_trailing_slash=True): """ From the list of names, finds the primary name of the container. Returns the defined default value (e.g. the container id or ``None``) in case it cannot find any. :param names: List with name and aliases of the container. :type names: list[unicode | str] :param default: Default value. :param strip_trailing_slash: As read directly from the Docker service, every container name includes a trailing slash. Set this to ``False`` if it is already removed. :type strip_trailing_slash: bool :return: Primary name of the container. :rtype: unicode | str """ if strip_trailing_slash: ex_names = [name[1:] for name in names if name.find('/', 2) == -1] else: ex_names = [name for name in names if name.find('/', 2) == -1] if ex_names: return ex_names[0] return default
def getShocks(self): ''' Gets permanent and transitory income shocks for this period as well as preference shocks. Parameters ---------- None Returns ------- None ''' IndShockConsumerType.getShocks(self) # Get permanent and transitory income shocks PrefShkNow = np.zeros(self.AgentCount) # Initialize shock array for t in range(self.T_cycle): these = t == self.t_cycle N = np.sum(these) if N > 0: PrefShkNow[these] = self.RNG.permutation(approxMeanOneLognormal(N,sigma=self.PrefShkStd[t])[1]) self.PrefShkNow = PrefShkNow
Gets permanent and transitory income shocks for this period as well as preference shocks. Parameters ---------- None Returns ------- None
Below is the instruction that describes the task: ### Input: Gets permanent and transitory income shocks for this period as well as preference shocks. Parameters ---------- None Returns ------- None ### Response: def getShocks(self): ''' Gets permanent and transitory income shocks for this period as well as preference shocks. Parameters ---------- None Returns ------- None ''' IndShockConsumerType.getShocks(self) # Get permanent and transitory income shocks PrefShkNow = np.zeros(self.AgentCount) # Initialize shock array for t in range(self.T_cycle): these = t == self.t_cycle N = np.sum(these) if N > 0: PrefShkNow[these] = self.RNG.permutation(approxMeanOneLognormal(N,sigma=self.PrefShkStd[t])[1]) self.PrefShkNow = PrefShkNow
def parse_rest_response(self, records, rowcount, row_type=list): """Parse the REST API response to DB API cursor flat response""" if self.is_plain_count: # result of "SELECT COUNT() FROM ... WHERE ..." assert list(records) == [] yield rowcount # originally [resp.json()['totalSize']] else: while True: for row_deep in records: assert self.is_aggregation == (row_deep['attributes']['type'] == 'AggregateResult') row_flat = self._make_flat(row_deep, path=(), subroots=self.subroots) # TODO Will be the expression "or x['done']" really correct also for long subrequests? assert all(not isinstance(x, dict) or x['done'] for x in row_flat) if issubclass(row_type, dict): yield {k: fix_data_type(row_flat[k.lower()]) for k in self.aliases} else: yield [fix_data_type(row_flat[k.lower()]) for k in self.aliases] # if not resp['done']: # if not cursor: # raise ProgrammingError("Must get a cursor") # resp = cursor.query_more(resp['nextRecordsUrl']).json() # else: # break break
Parse the REST API response to DB API cursor flat response
Below is the instruction that describes the task: ### Input: Parse the REST API response to DB API cursor flat response ### Response: def parse_rest_response(self, records, rowcount, row_type=list): """Parse the REST API response to DB API cursor flat response""" if self.is_plain_count: # result of "SELECT COUNT() FROM ... WHERE ..." assert list(records) == [] yield rowcount # originally [resp.json()['totalSize']] else: while True: for row_deep in records: assert self.is_aggregation == (row_deep['attributes']['type'] == 'AggregateResult') row_flat = self._make_flat(row_deep, path=(), subroots=self.subroots) # TODO Will be the expression "or x['done']" really correct also for long subrequests? assert all(not isinstance(x, dict) or x['done'] for x in row_flat) if issubclass(row_type, dict): yield {k: fix_data_type(row_flat[k.lower()]) for k in self.aliases} else: yield [fix_data_type(row_flat[k.lower()]) for k in self.aliases] # if not resp['done']: # if not cursor: # raise ProgrammingError("Must get a cursor") # resp = cursor.query_more(resp['nextRecordsUrl']).json() # else: # break break
def RenderWidget(self): """Returns a QWidget subclass instance. Exact class depends on self.type""" t = self.type if t == int: ret = QSpinBox() ret.setMaximum(999999999) ret.setValue(self.value) elif t == float: ret = QLineEdit() ret.setText(str(self.value)) elif t == bool: ret = QCheckBox() ret.setChecked(self.value) else: # str, list left ret = QLineEdit() ret.setText(str(self.value)) if self.toolTip is not None: ret.setToolTip(self.toolTip) self.widget = ret return ret
Returns a QWidget subclass instance. Exact class depends on self.type
Below is the instruction that describes the task: ### Input: Returns a QWidget subclass instance. Exact class depends on self.type ### Response: def RenderWidget(self): """Returns a QWidget subclass instance. Exact class depends on self.type""" t = self.type if t == int: ret = QSpinBox() ret.setMaximum(999999999) ret.setValue(self.value) elif t == float: ret = QLineEdit() ret.setText(str(self.value)) elif t == bool: ret = QCheckBox() ret.setChecked(self.value) else: # str, list left ret = QLineEdit() ret.setText(str(self.value)) if self.toolTip is not None: ret.setToolTip(self.toolTip) self.widget = ret return ret
def serializable_value(self, obj): ''' Produce the value as it should be serialized. Sometimes it is desirable for the serialized value to differ from the ``__get__`` in order for the ``__get__`` value to appear simpler for user or developer convenience. Args: obj (HasProps) : the object to get the serialized attribute for Returns: JSON-like ''' value = self.__get__(obj, obj.__class__) return self.property.serialize_value(value)
Produce the value as it should be serialized. Sometimes it is desirable for the serialized value to differ from the ``__get__`` in order for the ``__get__`` value to appear simpler for user or developer convenience. Args: obj (HasProps) : the object to get the serialized attribute for Returns: JSON-like
Below is the instruction that describes the task: ### Input: Produce the value as it should be serialized. Sometimes it is desirable for the serialized value to differ from the ``__get__`` in order for the ``__get__`` value to appear simpler for user or developer convenience. Args: obj (HasProps) : the object to get the serialized attribute for Returns: JSON-like ### Response: def serializable_value(self, obj): ''' Produce the value as it should be serialized. Sometimes it is desirable for the serialized value to differ from the ``__get__`` in order for the ``__get__`` value to appear simpler for user or developer convenience. Args: obj (HasProps) : the object to get the serialized attribute for Returns: JSON-like ''' value = self.__get__(obj, obj.__class__) return self.property.serialize_value(value)
def edge_coord_in_direction(tile_id, direction): """ Returns the edge coordinate in the given direction at the given tile identifier. :param tile_id: tile identifier, int :param direction: direction, str :return: edge coord, int """ tile_coord = tile_id_to_coord(tile_id) for edge_coord in edges_touching_tile(tile_id): if tile_edge_offset_to_direction(edge_coord - tile_coord) == direction: return edge_coord raise ValueError('No edge found in direction={} at tile_id={}'.format( direction, tile_id ))
Returns the edge coordinate in the given direction at the given tile identifier. :param tile_id: tile identifier, int :param direction: direction, str :return: edge coord, int
Below is the instruction that describes the task: ### Input: Returns the edge coordinate in the given direction at the given tile identifier. :param tile_id: tile identifier, int :param direction: direction, str :return: edge coord, int ### Response: def edge_coord_in_direction(tile_id, direction): """ Returns the edge coordinate in the given direction at the given tile identifier. :param tile_id: tile identifier, int :param direction: direction, str :return: edge coord, int """ tile_coord = tile_id_to_coord(tile_id) for edge_coord in edges_touching_tile(tile_id): if tile_edge_offset_to_direction(edge_coord - tile_coord) == direction: return edge_coord raise ValueError('No edge found in direction={} at tile_id={}'.format( direction, tile_id ))
def list_fpgas(self): """Return a list with all the supported FPGAs""" # Print table click.echo('\nSupported FPGAs:\n') FPGALIST_TPL = ('{fpga:30} {type:<5} {size:<5} {pack:<10}') terminal_width, _ = click.get_terminal_size() click.echo('-' * terminal_width) click.echo(FPGALIST_TPL.format( fpga=click.style('FPGA', fg='cyan'), type='Type', size='Size', pack='Pack')) click.echo('-' * terminal_width) for fpga in self.fpgas: click.echo(FPGALIST_TPL.format( fpga=click.style(fpga, fg='cyan'), type=self.fpgas.get(fpga).get('type'), size=self.fpgas.get(fpga).get('size'), pack=self.fpgas.get(fpga).get('pack')))
Return a list with all the supported FPGAs
Below is the instruction that describes the task: ### Input: Return a list with all the supported FPGAs ### Response: def list_fpgas(self): """Return a list with all the supported FPGAs""" # Print table click.echo('\nSupported FPGAs:\n') FPGALIST_TPL = ('{fpga:30} {type:<5} {size:<5} {pack:<10}') terminal_width, _ = click.get_terminal_size() click.echo('-' * terminal_width) click.echo(FPGALIST_TPL.format( fpga=click.style('FPGA', fg='cyan'), type='Type', size='Size', pack='Pack')) click.echo('-' * terminal_width) for fpga in self.fpgas: click.echo(FPGALIST_TPL.format( fpga=click.style(fpga, fg='cyan'), type=self.fpgas.get(fpga).get('type'), size=self.fpgas.get(fpga).get('size'), pack=self.fpgas.get(fpga).get('pack')))
def default_security_rules_list(security_group, resource_group, **kwargs): ''' .. versionadded:: 2019.2.0 List default security rules within a security group. :param security_group: The network security group to query. :param resource_group: The resource group name assigned to the network security group. CLI Example: .. code-block:: bash salt-call azurearm_network.default_security_rules_list testnsg testgroup ''' result = {} secgroup = network_security_group_get( security_group=security_group, resource_group=resource_group, **kwargs ) if 'error' in secgroup: return secgroup try: result = secgroup['default_security_rules'] except KeyError as exc: log.error('No default security rules found for %s!', security_group) result = {'error': str(exc)} return result
.. versionadded:: 2019.2.0 List default security rules within a security group. :param security_group: The network security group to query. :param resource_group: The resource group name assigned to the network security group. CLI Example: .. code-block:: bash salt-call azurearm_network.default_security_rules_list testnsg testgroup
Below is the instruction that describes the task: ### Input: .. versionadded:: 2019.2.0 List default security rules within a security group. :param security_group: The network security group to query. :param resource_group: The resource group name assigned to the network security group. CLI Example: .. code-block:: bash salt-call azurearm_network.default_security_rules_list testnsg testgroup ### Response: def default_security_rules_list(security_group, resource_group, **kwargs): ''' .. versionadded:: 2019.2.0 List default security rules within a security group. :param security_group: The network security group to query. :param resource_group: The resource group name assigned to the network security group. CLI Example: .. code-block:: bash salt-call azurearm_network.default_security_rules_list testnsg testgroup ''' result = {} secgroup = network_security_group_get( security_group=security_group, resource_group=resource_group, **kwargs ) if 'error' in secgroup: return secgroup try: result = secgroup['default_security_rules'] except KeyError as exc: log.error('No default security rules found for %s!', security_group) result = {'error': str(exc)} return result
def validated_formatter(self, url_format): """validate visualization url format""" # We try to create a string by substituting all known # parameters. If an unknown parameter is present, an error # will be thrown valid_parameters = { "${CLUSTER}": "cluster", "${ENVIRON}": "environ", "${TOPOLOGY}": "topology", "${ROLE}": "role", "${USER}": "user", } dummy_formatted_url = url_format for key, value in valid_parameters.items(): dummy_formatted_url = dummy_formatted_url.replace(key, value) # All $ signs must have been replaced if '$' in dummy_formatted_url: raise Exception("Invalid viz.url.format: %s" % (url_format)) # No error is thrown, so the format is valid. return url_format
validate visualization url format
Below is the instruction that describes the task: ### Input: validate visualization url format ### Response: def validated_formatter(self, url_format): """validate visualization url format""" # We try to create a string by substituting all known # parameters. If an unknown parameter is present, an error # will be thrown valid_parameters = { "${CLUSTER}": "cluster", "${ENVIRON}": "environ", "${TOPOLOGY}": "topology", "${ROLE}": "role", "${USER}": "user", } dummy_formatted_url = url_format for key, value in valid_parameters.items(): dummy_formatted_url = dummy_formatted_url.replace(key, value) # All $ signs must have been replaced if '$' in dummy_formatted_url: raise Exception("Invalid viz.url.format: %s" % (url_format)) # No error is thrown, so the format is valid. return url_format
def dRV(self, dt, band='g'): """Returns dRV of star A, if A is brighter than B+C, or of star B if B+C is brighter """ return (self.orbpop.dRV_1(dt)*self.A_brighter(band) + self.orbpop.dRV_2(dt)*self.BC_brighter(band))
Returns dRV of star A, if A is brighter than B+C, or of star B if B+C is brighter
Below is the instruction that describes the task: ### Input: Returns dRV of star A, if A is brighter than B+C, or of star B if B+C is brighter ### Response: def dRV(self, dt, band='g'): """Returns dRV of star A, if A is brighter than B+C, or of star B if B+C is brighter """ return (self.orbpop.dRV_1(dt)*self.A_brighter(band) + self.orbpop.dRV_2(dt)*self.BC_brighter(band))
def mask(self, table): """ Use the current Query object to count the number of entries in `table` that satisfy `queries`. Parameters ---------- table : NumPy structured array, astropy Table, etc. Returns ------- mask : numpy bool array """ if self._operator is None: if self._operands is None: return np.ones(self._get_table_len(table), dtype=np.bool) else: return self._create_mask(table, self._operands) if self._operator == 'NOT': return ~self._operands.mask(table) if self._operator == 'AND': op_func = np.logical_and elif self._operator == 'OR': op_func = np.logical_or elif self._operator == 'XOR': op_func = np.logical_xor mask_this = self._operands[0].mask(table) for op in self._operands[1:]: mask_this = op_func(mask_this, op.mask(table), out=mask_this) return mask_this
Use the current Query object to count the number of entries in `table` that satisfy `queries`. Parameters ---------- table : NumPy structured array, astropy Table, etc. Returns ------- mask : numpy bool array
Below is the instruction that describes the task: ### Input: Use the current Query object to count the number of entries in `table` that satisfy `queries`. Parameters ---------- table : NumPy structured array, astropy Table, etc. Returns ------- mask : numpy bool array ### Response: def mask(self, table): """ Use the current Query object to count the number of entries in `table` that satisfy `queries`. Parameters ---------- table : NumPy structured array, astropy Table, etc. Returns ------- mask : numpy bool array """ if self._operator is None: if self._operands is None: return np.ones(self._get_table_len(table), dtype=np.bool) else: return self._create_mask(table, self._operands) if self._operator == 'NOT': return ~self._operands.mask(table) if self._operator == 'AND': op_func = np.logical_and elif self._operator == 'OR': op_func = np.logical_or elif self._operator == 'XOR': op_func = np.logical_xor mask_this = self._operands[0].mask(table) for op in self._operands[1:]: mask_this = op_func(mask_this, op.mask(table), out=mask_this) return mask_this
def open(self, mode='a', **kwargs): """ Open the file in the specified mode Parameters ---------- mode : {'a', 'w', 'r', 'r+'}, default 'a' See HDFStore docstring or tables.open_file for info about modes """ tables = _tables() if self._mode != mode: # if we are changing a write mode to read, ok if self._mode in ['a', 'w'] and mode in ['r', 'r+']: pass elif mode in ['w']: # this would truncate, raise here if self.is_open: raise PossibleDataLossError( "Re-opening the file [{0}] with mode [{1}] " "will delete the current file!" .format(self._path, self._mode) ) self._mode = mode # close and reopen the handle if self.is_open: self.close() if self._complevel and self._complevel > 0: self._filters = _tables().Filters(self._complevel, self._complib, fletcher32=self._fletcher32) try: self._handle = tables.open_file(self._path, self._mode, **kwargs) except (IOError) as e: # pragma: no cover if 'can not be written' in str(e): print( 'Opening {path} in read-only mode'.format(path=self._path)) self._handle = tables.open_file(self._path, 'r', **kwargs) else: raise except (ValueError) as e: # trap PyTables >= 3.1 FILE_OPEN_POLICY exception # to provide an updated message if 'FILE_OPEN_POLICY' in str(e): e = ValueError( "PyTables [{version}] no longer supports opening multiple " "files\n" "even in read-only mode on this HDF5 version " "[{hdf_version}]. You can accept this\n" "and not open the same file multiple times at once,\n" "upgrade the HDF5 version, or downgrade to PyTables 3.0.0 " "which allows\n" "files to be opened multiple times at once\n" .format(version=tables.__version__, hdf_version=tables.get_hdf5_version())) raise e except (Exception) as e: # trying to read from a non-existent file causes an error which # is not part of IOError, make it one if self._mode == 'r' and 'Unable to open/create file' in str(e): raise IOError(str(e)) raise
Open the file in the specified mode Parameters ---------- mode : {'a', 'w', 'r', 'r+'}, default 'a' See HDFStore docstring or tables.open_file for info about modes
Below is the instruction that describes the task: ### Input: Open the file in the specified mode Parameters ---------- mode : {'a', 'w', 'r', 'r+'}, default 'a' See HDFStore docstring or tables.open_file for info about modes ### Response: def open(self, mode='a', **kwargs): """ Open the file in the specified mode Parameters ---------- mode : {'a', 'w', 'r', 'r+'}, default 'a' See HDFStore docstring or tables.open_file for info about modes """ tables = _tables() if self._mode != mode: # if we are changing a write mode to read, ok if self._mode in ['a', 'w'] and mode in ['r', 'r+']: pass elif mode in ['w']: # this would truncate, raise here if self.is_open: raise PossibleDataLossError( "Re-opening the file [{0}] with mode [{1}] " "will delete the current file!" .format(self._path, self._mode) ) self._mode = mode # close and reopen the handle if self.is_open: self.close() if self._complevel and self._complevel > 0: self._filters = _tables().Filters(self._complevel, self._complib, fletcher32=self._fletcher32) try: self._handle = tables.open_file(self._path, self._mode, **kwargs) except (IOError) as e: # pragma: no cover if 'can not be written' in str(e): print( 'Opening {path} in read-only mode'.format(path=self._path)) self._handle = tables.open_file(self._path, 'r', **kwargs) else: raise except (ValueError) as e: # trap PyTables >= 3.1 FILE_OPEN_POLICY exception # to provide an updated message if 'FILE_OPEN_POLICY' in str(e): e = ValueError( "PyTables [{version}] no longer supports opening multiple " "files\n" "even in read-only mode on this HDF5 version " "[{hdf_version}]. You can accept this\n" "and not open the same file multiple times at once,\n" "upgrade the HDF5 version, or downgrade to PyTables 3.0.0 " "which allows\n" "files to be opened multiple times at once\n" .format(version=tables.__version__, hdf_version=tables.get_hdf5_version())) raise e except (Exception) as e: # trying to read from a non-existent file causes an error which # is not part of IOError, make it one if self._mode == 'r' and 'Unable to open/create file' in str(e): raise IOError(str(e)) raise
def roll_up_down_capture(returns, factor_returns, window=10, **kwargs): """ Computes the up/down capture measure over a rolling window. see documentation for :func:`~empyrical.stats.up_down_capture`. (pass all args, kwargs required) Parameters ---------- returns : pd.Series or np.ndarray Daily returns of the strategy, noncumulative. - See full explanation in :func:`~empyrical.stats.cum_returns`. factor_returns : pd.Series or np.ndarray Noncumulative returns of the factor to which beta is computed. Usually a benchmark such as the market. - This is in the same style as returns. window : int, required Size of the rolling window in terms of the periodicity of the data. - eg window = 60, periodicity=DAILY, represents a rolling 60 day window """ return roll(returns, factor_returns, window=window, function=up_down_capture, **kwargs)
Computes the up/down capture measure over a rolling window. see documentation for :func:`~empyrical.stats.up_down_capture`. (pass all args, kwargs required) Parameters ---------- returns : pd.Series or np.ndarray Daily returns of the strategy, noncumulative. - See full explanation in :func:`~empyrical.stats.cum_returns`. factor_returns : pd.Series or np.ndarray Noncumulative returns of the factor to which beta is computed. Usually a benchmark such as the market. - This is in the same style as returns. window : int, required Size of the rolling window in terms of the periodicity of the data. - eg window = 60, periodicity=DAILY, represents a rolling 60 day window
Below is the instruction that describes the task: ### Input: Computes the up/down capture measure over a rolling window. see documentation for :func:`~empyrical.stats.up_down_capture`. (pass all args, kwargs required) Parameters ---------- returns : pd.Series or np.ndarray Daily returns of the strategy, noncumulative. - See full explanation in :func:`~empyrical.stats.cum_returns`. factor_returns : pd.Series or np.ndarray Noncumulative returns of the factor to which beta is computed. Usually a benchmark such as the market. - This is in the same style as returns. window : int, required Size of the rolling window in terms of the periodicity of the data. - eg window = 60, periodicity=DAILY, represents a rolling 60 day window ### Response: def roll_up_down_capture(returns, factor_returns, window=10, **kwargs): """ Computes the up/down capture measure over a rolling window. see documentation for :func:`~empyrical.stats.up_down_capture`. (pass all args, kwargs required) Parameters ---------- returns : pd.Series or np.ndarray Daily returns of the strategy, noncumulative. - See full explanation in :func:`~empyrical.stats.cum_returns`. factor_returns : pd.Series or np.ndarray Noncumulative returns of the factor to which beta is computed. Usually a benchmark such as the market. - This is in the same style as returns. window : int, required Size of the rolling window in terms of the periodicity of the data. - eg window = 60, periodicity=DAILY, represents a rolling 60 day window """ return roll(returns, factor_returns, window=window, function=up_down_capture, **kwargs)
def read_PIA0_B_control(self, cpu_cycles, op_address, address): """ read from 0xff03 -> PIA 0 B side Control reg. """ value = self.pia_0_B_control.value log.error( "%04x| read $%04x (PIA 0 B side Control reg.) send $%02x (%s) back.\t|%s", op_address, address, value, byte2bit_string(value), self.cfg.mem_info.get_shortest(op_address) ) return value
read from 0xff03 -> PIA 0 B side Control reg.
Below is the instruction that describes the task: ### Input: read from 0xff03 -> PIA 0 B side Control reg. ### Response: def read_PIA0_B_control(self, cpu_cycles, op_address, address): """ read from 0xff03 -> PIA 0 B side Control reg. """ value = self.pia_0_B_control.value log.error( "%04x| read $%04x (PIA 0 B side Control reg.) send $%02x (%s) back.\t|%s", op_address, address, value, byte2bit_string(value), self.cfg.mem_info.get_shortest(op_address) ) return value
def train_hmm(self,observation_list, iterations, quantities): """ Runs the Baum Welch Algorithm and finds the new model parameters **Arguments**: :param observation_list: A nested list, or a list of lists :type observation_list: Contains a list of multiple observation sequences. :param iterations: Maximum number of iterations for the algorithm :type iterations: An integer :param quantities: Number of times, each corresponding item in 'observation_list' occurs. :type quantities: A list of integers :return: Returns the emission, transition and start probabilities as numpy matrices :rtype: Three numpy matrices **Features**: Scaling applied here. This ensures that no underflow error occurs. **Example**: >>> states = ('s', 't') >>> possible_observation = ('A','B' ) >>> # Numpy arrays of the data >>> start_probability = np.matrix( '0.5 0.5 ') >>> transition_probability = np.matrix('0.6 0.4 ; 0.3 0.7 ') >>> emission_probability = np.matrix( '0.3 0.7 ; 0.4 0.6 ' ) >>> # Initialize class object >>> test = hmm(states,possible_observation,start_probability,transition_probability,emission_probability) >>> >>> observations = ('A', 'B','B','A') >>> obs4 = ('B', 'A','B') >>> observation_tuple = [] >>> observation_tuple.extend( [observations,obs4] ) >>> quantities_observations = [10, 20] >>> num_iter=1000 >>> e,t,s = test.train_hmm(observation_tuple,num_iter,quantities_observations) >>> # e,t,s contain new emission transition and start probabilities """ obs_size = len(observation_list) prob = float('inf') q = quantities # Train the model 'iteration' number of times # store em_prob and trans_prob copies since you should use same values for one loop for i in range(iterations): emProbNew = np.asmatrix(np.zeros((self.em_prob.shape))) transProbNew = np.asmatrix(np.zeros((self.trans_prob.shape))) startProbNew = np.asmatrix(np.zeros((self.start_prob.shape))) for j in range(obs_size): # re-assign values based on weight emProbNew= emProbNew + q[j] * self._train_emission(observation_list[j]) transProbNew = transProbNew + q[j] * self._train_transition(observation_list[j]) startProbNew = startProbNew + q[j] * self._train_start_prob(observation_list[j]) # Normalizing em_norm = emProbNew.sum(axis = 1) trans_norm = transProbNew.sum(axis = 1) start_norm = startProbNew.sum(axis = 1) emProbNew = emProbNew/ em_norm.transpose() startProbNew = startProbNew/ start_norm.transpose() transProbNew = transProbNew/ trans_norm.transpose() self.em_prob,self.trans_prob = emProbNew,transProbNew self.start_prob = startProbNew if prob - self.log_prob(observation_list,quantities)>0.0000001: prob = self.log_prob(observation_list,quantities) else: return self.em_prob, self.trans_prob , self.start_prob return self.em_prob, self.trans_prob , self.start_prob
Runs the Baum Welch Algorithm and finds the new model parameters **Arguments**: :param observation_list: A nested list, or a list of lists :type observation_list: Contains a list of multiple observation sequences. :param iterations: Maximum number of iterations for the algorithm :type iterations: An integer :param quantities: Number of times, each corresponding item in 'observation_list' occurs. :type quantities: A list of integers :return: Returns the emission, transition and start probabilities as numpy matrices :rtype: Three numpy matrices **Features**: Scaling applied here. This ensures that no underflow error occurs. **Example**: >>> states = ('s', 't') >>> possible_observation = ('A','B' ) >>> # Numpy arrays of the data >>> start_probability = np.matrix( '0.5 0.5 ') >>> transition_probability = np.matrix('0.6 0.4 ; 0.3 0.7 ') >>> emission_probability = np.matrix( '0.3 0.7 ; 0.4 0.6 ' ) >>> # Initialize class object >>> test = hmm(states,possible_observation,start_probability,transition_probability,emission_probability) >>> >>> observations = ('A', 'B','B','A') >>> obs4 = ('B', 'A','B') >>> observation_tuple = [] >>> observation_tuple.extend( [observations,obs4] ) >>> quantities_observations = [10, 20] >>> num_iter=1000 >>> e,t,s = test.train_hmm(observation_tuple,num_iter,quantities_observations) >>> # e,t,s contain new emission transition and start probabilities
Below is the instruction that describes the task: ### Input: Runs the Baum Welch Algorithm and finds the new model parameters **Arguments**: :param observation_list: A nested list, or a list of lists :type observation_list: Contains a list of multiple observation sequences. :param iterations: Maximum number of iterations for the algorithm :type iterations: An integer :param quantities: Number of times, each corresponding item in 'observation_list' occurs. :type quantities: A list of integers :return: Returns the emission, transition and start probabilities as numpy matrices :rtype: Three numpy matrices **Features**: Scaling applied here. This ensures that no underflow error occurs. **Example**: >>> states = ('s', 't') >>> possible_observation = ('A','B' ) >>> # Numpy arrays of the data >>> start_probability = np.matrix( '0.5 0.5 ') >>> transition_probability = np.matrix('0.6 0.4 ; 0.3 0.7 ') >>> emission_probability = np.matrix( '0.3 0.7 ; 0.4 0.6 ' ) >>> # Initialize class object >>> test = hmm(states,possible_observation,start_probability,transition_probability,emission_probability) >>> >>> observations = ('A', 'B','B','A') >>> obs4 = ('B', 'A','B') >>> observation_tuple = [] >>> observation_tuple.extend( [observations,obs4] ) >>> quantities_observations = [10, 20] >>> num_iter=1000 >>> e,t,s = test.train_hmm(observation_tuple,num_iter,quantities_observations) >>> # e,t,s contain new emission transition and start probabilities ### Response: def train_hmm(self,observation_list, iterations, quantities): """ Runs the Baum Welch Algorithm and finds the new model parameters **Arguments**: :param observation_list: A nested list, or a list of lists :type observation_list: Contains a list of multiple observation sequences. :param iterations: Maximum number of iterations for the algorithm :type iterations: An integer :param quantities: Number of times, each corresponding item in 'observation_list' occurs. :type quantities: A list of integers :return: Returns the emission, transition and start probabilities as numpy matrices :rtype: Three numpy matrices **Features**: Scaling applied here. This ensures that no underflow error occurs. **Example**: >>> states = ('s', 't') >>> possible_observation = ('A','B' ) >>> # Numpy arrays of the data >>> start_probability = np.matrix( '0.5 0.5 ') >>> transition_probability = np.matrix('0.6 0.4 ; 0.3 0.7 ') >>> emission_probability = np.matrix( '0.3 0.7 ; 0.4 0.6 ' ) >>> # Initialize class object >>> test = hmm(states,possible_observation,start_probability,transition_probability,emission_probability) >>> >>> observations = ('A', 'B','B','A') >>> obs4 = ('B', 'A','B') >>> observation_tuple = [] >>> observation_tuple.extend( [observations,obs4] ) >>> quantities_observations = [10, 20] >>> num_iter=1000 >>> e,t,s = test.train_hmm(observation_tuple,num_iter,quantities_observations) >>> # e,t,s contain new emission transition and start probabilities """ obs_size = len(observation_list) prob = float('inf') q = quantities # Train the model 'iteration' number of times # store em_prob and trans_prob copies since you should use same values for one loop for i in range(iterations): emProbNew = np.asmatrix(np.zeros((self.em_prob.shape))) transProbNew = np.asmatrix(np.zeros((self.trans_prob.shape))) startProbNew = np.asmatrix(np.zeros((self.start_prob.shape))) for j in range(obs_size): # re-assign values based on weight emProbNew= emProbNew + q[j] * self._train_emission(observation_list[j]) transProbNew = transProbNew + q[j] * self._train_transition(observation_list[j]) startProbNew = startProbNew + q[j] * self._train_start_prob(observation_list[j]) # Normalizing em_norm = emProbNew.sum(axis = 1) trans_norm = transProbNew.sum(axis = 1) start_norm = startProbNew.sum(axis = 1) emProbNew = emProbNew/ em_norm.transpose() startProbNew = startProbNew/ start_norm.transpose() transProbNew = transProbNew/ trans_norm.transpose() self.em_prob,self.trans_prob = emProbNew,transProbNew self.start_prob = startProbNew if prob - self.log_prob(observation_list,quantities)>0.0000001: prob = self.log_prob(observation_list,quantities) else: return self.em_prob, self.trans_prob , self.start_prob return self.em_prob, self.trans_prob , self.start_prob
def delete(self): ''' Deletes this model from the database, calling delete in each field to properly delete special cases ''' redis = type(self).get_redis() for fieldname, field in self.proxy: field.delete(redis) redis.delete(self.key()) redis.srem(type(self).members_key(), self.id) if isinstance(self, PermissionHolder): redis.delete(self.allow_key()) if self.notify: data = json.dumps({ 'event': 'delete', 'data': self.to_json(), }) redis.publish(type(self).cls_key(), data) redis.publish(self.key(), data) return self
Deletes this model from the database, calling delete in each field to properly delete special cases
Below is the instruction that describes the task: ### Input: Deletes this model from the database, calling delete in each field to properly delete special cases ### Response: def delete(self): ''' Deletes this model from the database, calling delete in each field to properly delete special cases ''' redis = type(self).get_redis() for fieldname, field in self.proxy: field.delete(redis) redis.delete(self.key()) redis.srem(type(self).members_key(), self.id) if isinstance(self, PermissionHolder): redis.delete(self.allow_key()) if self.notify: data = json.dumps({ 'event': 'delete', 'data': self.to_json(), }) redis.publish(type(self).cls_key(), data) redis.publish(self.key(), data) return self
def XCHG(cpu, dest, src): """ Exchanges register/memory with register. Exchanges the contents of the destination (first) and source (second) operands. The operands can be two general-purpose registers or a register and a memory location. If a memory operand is referenced, the processor's locking protocol is automatically implemented for the duration of the exchange operation, regardless of the presence or absence of the LOCK prefix or of the value of the IOPL. This instruction is useful for implementing semaphores or similar data structures for process synchronization. The XCHG instruction can also be used instead of the BSWAP instruction for 16-bit operands:: TEMP = DEST DEST = SRC SRC = TEMP :param cpu: current CPU. :param dest: destination operand. :param src: source operand. """ temp = dest.read() dest.write(src.read()) src.write(temp)
Exchanges register/memory with register. Exchanges the contents of the destination (first) and source (second) operands. The operands can be two general-purpose registers or a register and a memory location. If a memory operand is referenced, the processor's locking protocol is automatically implemented for the duration of the exchange operation, regardless of the presence or absence of the LOCK prefix or of the value of the IOPL. This instruction is useful for implementing semaphores or similar data structures for process synchronization. The XCHG instruction can also be used instead of the BSWAP instruction for 16-bit operands:: TEMP = DEST DEST = SRC SRC = TEMP :param cpu: current CPU. :param dest: destination operand. :param src: source operand.
Below is the instruction that describes the task: ### Input: Exchanges register/memory with register. Exchanges the contents of the destination (first) and source (second) operands. The operands can be two general-purpose registers or a register and a memory location. If a memory operand is referenced, the processor's locking protocol is automatically implemented for the duration of the exchange operation, regardless of the presence or absence of the LOCK prefix or of the value of the IOPL. This instruction is useful for implementing semaphores or similar data structures for process synchronization. The XCHG instruction can also be used instead of the BSWAP instruction for 16-bit operands:: TEMP = DEST DEST = SRC SRC = TEMP :param cpu: current CPU. :param dest: destination operand. :param src: source operand. ### Response: def XCHG(cpu, dest, src): """ Exchanges register/memory with register. Exchanges the contents of the destination (first) and source (second) operands. The operands can be two general-purpose registers or a register and a memory location. If a memory operand is referenced, the processor's locking protocol is automatically implemented for the duration of the exchange operation, regardless of the presence or absence of the LOCK prefix or of the value of the IOPL. This instruction is useful for implementing semaphores or similar data structures for process synchronization. The XCHG instruction can also be used instead of the BSWAP instruction for 16-bit operands:: TEMP = DEST DEST = SRC SRC = TEMP :param cpu: current CPU. :param dest: destination operand. :param src: source operand. """ temp = dest.read() dest.write(src.read()) src.write(temp)
def _get_distance_scaling_term(self, C, mag, rrup): """ Returns the magnitude dependent distance scaling term """ if mag < 6.75: mag_factor = -(C["b1_lo"] + C["b2_lo"] * mag) else: mag_factor = -(C["b1_hi"] + C["b2_hi"] * mag) return mag_factor * np.log(rrup + 10.0) + (C["gamma"] * rrup)
Returns the magnitude dependent distance scaling term
Below is the instruction that describes the task: ### Input: Returns the magnitude dependent distance scaling term ### Response: def _get_distance_scaling_term(self, C, mag, rrup): """ Returns the magnitude dependent distance scaling term """ if mag < 6.75: mag_factor = -(C["b1_lo"] + C["b2_lo"] * mag) else: mag_factor = -(C["b1_hi"] + C["b2_hi"] * mag) return mag_factor * np.log(rrup + 10.0) + (C["gamma"] * rrup)
def search_show_top_unite(self, category, genre=None, area=None, year=None, orderby=None, headnum=1, tailnum=1, onesiteflag=None, page=1, count=20): """doc: http://open.youku.com/docs/doc?id=86 """ url = 'https://openapi.youku.com/v2/searches/show/top_unite.json' params = { 'client_id': self.client_id, 'category': category, 'genre': genre, 'area': area, 'year': year, 'orderby': orderby, 'headnum': headnum, 'tailnum': tailnum, 'onesiteflag': onesiteflag, 'page': page, 'count': count } params = remove_none_value(params) r = requests.get(url, params=params) check_error(r) return r.json()
doc: http://open.youku.com/docs/doc?id=86
Below is the instruction that describes the task: ### Input: doc: http://open.youku.com/docs/doc?id=86 ### Response: def search_show_top_unite(self, category, genre=None, area=None, year=None, orderby=None, headnum=1, tailnum=1, onesiteflag=None, page=1, count=20): """doc: http://open.youku.com/docs/doc?id=86 """ url = 'https://openapi.youku.com/v2/searches/show/top_unite.json' params = { 'client_id': self.client_id, 'category': category, 'genre': genre, 'area': area, 'year': year, 'orderby': orderby, 'headnum': headnum, 'tailnum': tailnum, 'onesiteflag': onesiteflag, 'page': page, 'count': count } params = remove_none_value(params) r = requests.get(url, params=params) check_error(r) return r.json()