def update(self, ipv4s):
    """Update IPv4 addresses.

    :param ipv4s: list of IPv4 dicts to update
    :return: the response from the PUT request
    """
    data = {'ips': ipv4s}
    ipv4s_ids = [str(ipv4.get('id')) for ipv4 in ipv4s]
    return super(ApiIPv4, self).put(
        'api/v3/ipv4/%s/' % ';'.join(ipv4s_ids), data)
def first_return_times(dts, c=None, d=0.0):
    """For an ensemble of time series, return the set of all time intervals
    between successive returns to value c for all instances in the ensemble.

    If c is not given, the default is the mean across all times and across
    all time series in the ensemble.

    Args:
      dts (DistTimeseries)
      c (float): Optional target value (default is the ensemble mean value)
      d (float): Optional min distance from c to be attained between returns

    Returns:
      array of time intervals (Can take the mean of these to estimate the
      expected first return time for the whole ensemble)
    """
    if c is None:
        c = dts.mean()
    vmrt = distob.vectorize(analyses1.first_return_times)
    all_intervals = vmrt(dts, c, d)
    if hasattr(type(all_intervals), '__array_interface__'):
        return np.ravel(all_intervals)
    else:
        return np.hstack([distob.gather(ilist) for ilist in all_intervals])
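first_return_times above gathers intervals across a distributed ensemble; a minimal single-series sketch of the underlying idea (plain NumPy arrays instead of a DistTimeseries, upward crossings only, and ignoring the `d` threshold) might look like:

```python
import numpy as np

def upward_crossing_intervals(t, x, c):
    """Time intervals between successive upward crossings of level c."""
    above = x >= c
    # an upward crossing happens where the previous sample is below c
    # and the current sample is at or above it
    crossing_times = t[1:][~above[:-1] & above[1:]]
    return np.diff(crossing_times)

t = np.linspace(0.0, 14.0, 100000)
x = np.sin(t)
intervals = upward_crossing_intervals(t, x, 0.0)
# successive upward zero-crossings of a sine wave are one period apart
print(intervals)  # approximately [6.2832] (2*pi)
```

Taking the mean of such intervals over many series gives the ensemble estimate the real function describes.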
def cursor_event(self, x, y, dx, dy):
    """The standard mouse movement event method. Can be overridden to add
    new functionality. By default this feeds the system camera with new
    values.

    Args:
        x: The current mouse x position
        y: The current mouse y position
        dx: Delta x position (x position difference from the previous event)
        dy: Delta y position (y position difference from the previous event)
    """
    self.sys_camera.rot_state(x, y)
def get_output_content(job_id, max_size=1024, conn=None):
    """Return the content buffer for a job_id if that job output exists.

    :param job_id: <str> id for the job
    :param max_size: <int> truncate after [max_size] bytes
    :param conn: (optional) <connection> to run on
    :return: <str> or <bytes>
    """
    content = None
    if RBO.index_list().contains(IDX_OUTPUT_JOB_ID).run(conn):
        # NEW: use the secondary index when it exists
        check_status = RBO.get_all(job_id, index=IDX_OUTPUT_JOB_ID).run(conn)
    else:
        # OLD: fall back to filtering on the job id field
        check_status = RBO.filter(
            {OUTPUTJOB_FIELD: {ID_FIELD: job_id}}).run(conn)
    for status_item in check_status:
        content = _truncate_output_content_if_required(status_item, max_size)
    return content
def user_activity_stats(self, username, format=None):
    """Retrieve the activity stats about a specific user over the last year.

    Params:
        username (string): filters the username of the user whose activity
            you are interested in.
        format (string): Allows changing the format of the date/time
            returned from ISO format to a Unix timestamp.
            Can be: timestamp or isoformat

    Returns:
        dict: A dictionary of activities done by a given user for all the
        projects for a given Pagure instance.
    """
    request_url = "{}/api/0/user/{}/activity/stats".format(
        self.instance, username)

    payload = {}
    if username is not None:
        payload['username'] = username
    if format is not None:
        payload['format'] = format

    return_value = self._call_api(request_url, params=payload)
    return return_value
def identity(ctx, variant_id):
    """Check how well SVs are working in the database."""
    if not variant_id:
        LOG.warning("Please provide a variant id")
        ctx.abort()

    adapter = ctx.obj['adapter']
    version = ctx.obj['version']

    LOG.info("Search variants {0}".format(adapter))
    result = adapter.get_clusters(variant_id)
    if result.count() == 0:
        LOG.info("No hits for variant %s", variant_id)
        return

    for res in result:
        click.echo(res)
def handle(self):
    """Executes the command."""
    database = self.option("database")

    repository = DatabaseMigrationRepository(self.resolver, "migrations")
    repository.set_source(database)
    repository.create_repository()

    self.info("Migration table created successfully")
def service_status(self, short_name):
    """Get the current status of a service.

    Returns information about the service such as the time since the last
    heartbeat, any status messages that have been posted about the service
    and whether the heartbeat should be considered out of the ordinary.

    Args:
        short_name (string): The short name of the service to query

    Returns:
        dict: A dictionary with the status of the service
    """
    if short_name not in self.services:
        raise ArgumentError("Unknown service name", short_name=short_name)

    info = {}
    service = self.services[short_name]['state']
    info['heartbeat_age'] = monotonic() - service.last_heartbeat
    info['numeric_status'] = service.state
    info['string_status'] = service.string_state

    return info
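The heartbeat-age bookkeeping can be sketched in isolation; the `ServiceState` class below is a hypothetical stand-in, not the library's, but it shows why `monotonic()` is used:

```python
from time import monotonic

class ServiceState:
    """Hypothetical stand-in for a tracked service."""

    def __init__(self):
        self.last_heartbeat = monotonic()

    def heartbeat(self):
        self.last_heartbeat = monotonic()

    def heartbeat_age(self):
        # monotonic() never jumps backwards, so this age stays valid
        # even if the system wall clock is adjusted (unlike time.time())
        return monotonic() - self.last_heartbeat

svc = ServiceState()
svc.heartbeat()
print(svc.heartbeat_age() >= 0.0)  # True
```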
def get_all_allowed_internal_commands(self):
    """Returns an iterable which includes all allowed commands.

    This does not mean that a specific command from the result is
    executable right now in this session state (or that it can be executed
    at all in this connection). Please note that the returned values are
    /internal/ commands, not SMTP commands (use
    get_all_allowed_smtp_commands for that) so there will be 'MAIL FROM'
    instead of 'MAIL'.
    """
    states = set()
    for command_name in self._get_all_commands(including_quit=True):
        if command_name not in ['GREET', 'MSGDATA']:
            states.add(command_name)
    return states
def _syllabify_simplex(word):
    '''Syllabify the given word.'''
    word, rules = apply_T1(word)

    if re.search(r'[^ieAyOauo]*([ieAyOauo]{2})[^ieAyOauo]*', word):
        word, T2 = apply_T2(word)
        word, T8 = apply_T8(word)
        word, T9 = apply_T9(word)
        rules += T2 + T8 + T9

        # T4 produces variation
        syllabifications = list(apply_T4(word))
    else:
        syllabifications = [(word, ''), ]

    for word, rule in syllabifications:
        RULES = rules + rule

        if re.search(r'[ieAyOauo]{3}', word):
            word, T6 = apply_T6(word)
            word, T5 = apply_T5(word)
            word, T10 = apply_T10(word)
            word, T7 = apply_T7(word)
            word, T2 = apply_T2(word)
            RULES += T5 + T6 + T10 + T7 + T2

        RULES = RULES or ' T0'  # T0 means no rules have applied

        yield word, RULES
def match_field(field_class):
    """Iterate the MRO of field_class and return the first match."""
    for cls in field_class.mro():
        if cls in LOOKUP_TABLE:
            return cls

    # could not match the field class
    raise Exception('{0} None Found'.format(field_class))
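Because match_field walks the method resolution order, subclasses resolve through their nearest registered base class. A self-contained sketch of that pattern (the LOOKUP_TABLE here is hypothetical; the real table maps field classes used by the library):

```python
# hypothetical lookup table mapping base classes to handler names
LOOKUP_TABLE = {int: 'IntegerField', str: 'CharField'}

def match_field(field_class):
    """Return the first class in field_class's MRO found in LOOKUP_TABLE."""
    for cls in field_class.mro():
        if cls in LOOKUP_TABLE:
            return cls
    raise LookupError('{0} None Found'.format(field_class))

class TaggedInt(int):
    """No entry of its own; matches via its base class."""

print(match_field(TaggedInt) is int)  # True
print(match_field(bool) is int)       # True: bool subclasses int
```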
def get_node(self, path):
    '''Get a child node of this node, or this node, based on a path.

    @param path A list of path elements pointing to a node in the tree.
                For example, ['/', 'localhost', 'dir.host']. The first
                element in this path should be this node's name.
    @return The node pointed to by @ref path, or None if the path does not
            point to a node in the tree below this node.

    Example:
    >>> c1 = TreeNode(name='c1')
    >>> c2 = TreeNode(name='c2')
    >>> p = TreeNode(name='p', children={'c1':c1, 'c2':c2})
    >>> c1._parent = p
    >>> c2._parent = p
    >>> p.get_node(['p', 'c1']) == c1
    True
    >>> p.get_node(['p', 'c2']) == c2
    True
    '''
    with self._mutex:
        if path[0] == self._name:
            if len(path) == 1:
                return self
            elif path[1] in self._children:
                return self._children[path[1]].get_node(path[1:])
            else:
                return None
        else:
            return None
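The recursive path lookup can be exercised with a minimal stand-in class (no mutex; the `Node` class here is a hypothetical sketch, not the real TreeNode):

```python
class Node:
    """Minimal sketch of TreeNode's recursive path lookup."""

    def __init__(self, name, children=None):
        self._name = name
        self._children = children or {}

    def get_node(self, path):
        # the first path element must match this node's own name
        if path[0] != self._name:
            return None
        if len(path) == 1:
            return self
        child = self._children.get(path[1])
        return child.get_node(path[1:]) if child else None

c1 = Node('c1')
p = Node('p', {'c1': c1})
print(p.get_node(['p', 'c1']) is c1)  # True
print(p.get_node(['p', 'missing']))   # None
```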
def get_href(self):
    """Convert path to a URL that can be passed to XML responses.

    Byte string, UTF-8 encoded, quoted.

    See http://www.webdav.org/specs/rfc4918.html#rfc.section.8.3
    We are using the path-absolute option, i.e. starting with '/'.
    See section 3.2.1 of [RFC2068].
    """
    # Nautilus chokes if href encodes '(' as '%28',
    # so we don't encode 'extra' and 'safe' characters (see RFC 2068, 3.2.1)
    safe = "/" + "!*'()," + "$-_|."
    return compat.quote(
        self.provider.mount_path
        + self.provider.share_path
        + self.get_preferred_path(),
        safe=safe,
    )
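The effect of the extended safe set can be checked directly with the standard library's urllib.parse.quote (which compat.quote presumably wraps here):

```python
from urllib.parse import quote

# same safe set as above: keep '(' , ')' and friends unencoded,
# while spaces and other unsafe characters still become %XX escapes
safe = "/" + "!*'()," + "$-_|."

href = quote("/share/My Docs/report (v2).txt", safe=safe)
print(href)  # /share/My%20Docs/report%20(v2).txt
```

Note that the parentheses survive unescaped, which is the behavior the Nautilus comment is guarding.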
def to_bag_of_genomes(self, clustering_object):
    """Create a bag-of-genomes representation for data mining purposes.

    Each document (genome) in the representation is a set of metadata key
    and value pairs belonging to the same cluster. The bag of genomes is
    saved under the ./bag_of_genomes/ directory.

    :param clustering_object: The clustering object
    """
    meta_files = Parser._get_files('meta', self._path)
    meta_dict = {}
    for f in meta_files:
        meta_dict[Parser.get_sample_id(f)] = f

    clusters = []
    if isinstance(clustering_object, Clustering):
        if Clustering.is_pyclustering_instance(clustering_object.model):
            no_clusters = len(clustering_object.model.get_clusters())
        else:
            no_clusters = clustering_object.model.n_clusters
        for c in range(0, no_clusters):
            clusters.append(
                clustering_object.retrieve_cluster(self.data, c)
                .index.get_level_values(-1).values)
    elif isinstance(clustering_object, Biclustering):
        no_clusters = clustering_object.model.n_clusters[0]  # 0 for the rows
        no_col_clusters = clustering_object.model.n_clusters[1]  # 1 for the columns
        for c in range(0, no_clusters):
            clusters.append(
                clustering_object.retrieve_bicluster(
                    self.data, c * no_col_clusters, 0)
                .index.get_level_values(-1).values)  # sample names

    # create the bag-of-genomes files
    print("creating the bag_of_genomes...")
    for c in tqdm(range(0, no_clusters)):
        document = open('./bag_of_genomes/document' + str(c) + '.bag_of_genome', 'w')
        for sample in clusters[c]:
            f = open(meta_dict[sample], 'r')
            for line in f:
                line = line.replace(' ', '_')
                splitted = line.split('\t')
                document.write(splitted[0] + '=' + splitted[1])
            f.close()
        document.close()
def pick_up_tip(self,
                location: Union[types.Location, Well] = None,
                presses: int = 3,
                increment: float = 1.0) -> 'InstrumentContext':
    """Pick up a tip for the pipette to run liquid-handling commands with.

    If no location is passed, the pipette will pick up the next available
    tip in its :py:attr:`InstrumentContext.tip_racks` list.

    The tip to pick up can be manually specified with the `location`
    argument. The `location` argument can be specified in several ways:

    * If the only thing to specify is which well from which to pick up
      a tip, `location` can be a :py:class:`.Well`. For instance, if you
      have a tip rack in a variable called `tiprack`, you can pick up a
      specific tip from it with
      ``instr.pick_up_tip(tiprack.wells()[0])``. This style of call can
      be used to make the robot pick up a tip from a tip rack that was
      not specified when creating the :py:class:`.InstrumentContext`.

    * If the position to move to in the well needs to be specified, for
      instance to tell the robot to run its pick up tip routine starting
      closer to or farther from the top of the tip, `location` can be a
      :py:class:`.types.Location`; for instance, you can call
      ``instr.pick_up_tip(tiprack.wells()[0].top())``.

    :param location: The location from which to pick up a tip.
    :type location: :py:class:`.types.Location` or :py:class:`.Well` to
                    pick up a tip from.
    :param presses: The number of times to lower and then raise the
                    pipette when picking up a tip, to ensure a good seal
                    (0 [zero] will result in the pipette hovering over
                    the tip but not picking it up--generally not
                    desirable, but could be used for a dry run).
    :type presses: int
    :param increment: The additional distance to travel on each
                      successive press (e.g.: if `presses=3` and
                      `increment=1.0`, then the first press will travel
                      down into the tip by 3.5mm, the second by 4.5mm,
                      and the third by 5.5mm).
    :type increment: float
    :returns: This instance
    """
    num_channels = self.channels

    def _select_tiprack_from_list(tip_racks) -> Tuple[Labware, Well]:
        try:
            tr = tip_racks[0]
        except IndexError:
            raise OutOfTipsError
        next_tip = tr.next_tip(num_channels)
        if next_tip:
            return tr, next_tip
        else:
            return _select_tiprack_from_list(tip_racks[1:])

    if location and isinstance(location, types.Location):
        if isinstance(location.labware, Labware):
            tiprack = location.labware
            target: Well = tiprack.next_tip(num_channels)  # type: ignore
            if not target:
                raise OutOfTipsError
        elif isinstance(location.labware, Well):
            tiprack = location.labware.parent
            target = location.labware
    elif location and isinstance(location, Well):
        tiprack = location.parent
        target = location
    elif not location:
        tiprack, target = _select_tiprack_from_list(self.tip_racks)
    else:
        raise TypeError(
            "If specified, location should be an instance of "
            "types.Location (e.g. the return value from "
            "tiprack.wells()[0].top()) or a Well (e.g. tiprack.wells()[0])."
            " However, it is a {}".format(location))

    assert tiprack.is_tiprack, "{} is not a tiprack".format(str(tiprack))
    self.move_to(target.top())

    self._hw_manager.hardware.pick_up_tip(
        self._mount, tiprack.tip_length, presses, increment)
    # Note that the hardware API pick_up_tip action includes homing z after
    tiprack.use_tips(target, num_channels)
    self._last_tip_picked_up_from = target

    return self
def get_tree_from_branch(self, ref): ''' Return a pygit2.Tree object matching a head ref fetched into refs/remotes/origin/ ''' try: return self.peel(self.repo.lookup_reference( 'refs/remotes/origin/{0}'.format(ref))).tree except KeyError: return None
Return a pygit2.Tree object matching a head ref fetched into refs/remotes/origin/
Below is the instruction that describes the task: ### Input: Return a pygit2.Tree object matching a head ref fetched into refs/remotes/origin/ ### Response: def get_tree_from_branch(self, ref): ''' Return a pygit2.Tree object matching a head ref fetched into refs/remotes/origin/ ''' try: return self.peel(self.repo.lookup_reference( 'refs/remotes/origin/{0}'.format(ref))).tree except KeyError: return None
def destinations(stop): """Get destination information.""" from pyruter.api import Departures async def get_destinations(): """Get departure information.""" async with aiohttp.ClientSession() as session: data = Departures(LOOP, stop, session=session) result = await data.get_final_destination() print(json.dumps(result, indent=4, sort_keys=True, ensure_ascii=False)) LOOP.run_until_complete(get_destinations())
Get destination information.
Below is the instruction that describes the task: ### Input: Get destination information. ### Response: def destinations(stop): """Get destination information.""" from pyruter.api import Departures async def get_destinations(): """Get departure information.""" async with aiohttp.ClientSession() as session: data = Departures(LOOP, stop, session=session) result = await data.get_final_destination() print(json.dumps(result, indent=4, sort_keys=True, ensure_ascii=False)) LOOP.run_until_complete(get_destinations())
def add_column(self, table, name='ID', data_type='int(11)', after_col=None, null=False, primary_key=False): """Add a column to an existing table.""" location = 'AFTER {0}'.format(after_col) if after_col else 'FIRST' null_ = 'NULL' if null else 'NOT NULL' comment = "COMMENT 'Column auto created by mysql-toolkit'" pk = 'AUTO_INCREMENT PRIMARY KEY {0}'.format(comment) if primary_key else '' query = 'ALTER TABLE {0} ADD COLUMN {1} {2} {3} {4} {5}'.format(wrap(table), name, data_type, null_, pk, location) self.execute(query) self._printer("\tAdded column '{0}' to '{1}' {2}".format(name, table, '(Primary Key)' if primary_key else '')) return name
Add a column to an existing table.
Below is the instruction that describes the task: ### Input: Add a column to an existing table. ### Response: def add_column(self, table, name='ID', data_type='int(11)', after_col=None, null=False, primary_key=False): """Add a column to an existing table.""" location = 'AFTER {0}'.format(after_col) if after_col else 'FIRST' null_ = 'NULL' if null else 'NOT NULL' comment = "COMMENT 'Column auto created by mysql-toolkit'" pk = 'AUTO_INCREMENT PRIMARY KEY {0}'.format(comment) if primary_key else '' query = 'ALTER TABLE {0} ADD COLUMN {1} {2} {3} {4} {5}'.format(wrap(table), name, data_type, null_, pk, location) self.execute(query) self._printer("\tAdded column '{0}' to '{1}' {2}".format(name, table, '(Primary Key)' if primary_key else '')) return name
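The `add_column` record above builds its SQL by pure string formatting. A minimal standalone sketch of the same query construction, runnable without a database connection; the `wrap` helper here is a hypothetical stand-in for mysql-toolkit's identifier-quoting utility:

```python
def wrap(name):
    # Hypothetical stand-in for mysql-toolkit's identifier quoting helper.
    return '`{0}`'.format(name)

def build_add_column(table, name='ID', data_type='int(11)', after_col=None,
                     null=False, primary_key=False):
    """Build the ALTER TABLE statement the same way add_column() does."""
    location = 'AFTER {0}'.format(after_col) if after_col else 'FIRST'
    null_ = 'NULL' if null else 'NOT NULL'
    comment = "COMMENT 'Column auto created by mysql-toolkit'"
    pk = 'AUTO_INCREMENT PRIMARY KEY {0}'.format(comment) if primary_key else ''
    return 'ALTER TABLE {0} ADD COLUMN {1} {2} {3} {4} {5}'.format(
        wrap(table), name, data_type, null_, pk, location)
```

For example, `build_add_column('users', name='age', after_col='name')` yields an `ALTER TABLE \`users\` ADD COLUMN age int(11) NOT NULL … AFTER name` statement.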
def ASSIGN(self, node): """This is a custom implementation of ASSIGN derived from handleChildren() in pyflakes 1.3.0. The point here is that on module level, there's type aliases that we want to bind eagerly, but defer computation of the values of the assignments (the type aliases might have forward references). """ if not isinstance(self.scope, ModuleScope): return super().ASSIGN(node) for target in node.targets: self.handleNode(target, node) self.deferHandleNode(node.value, node)
This is a custom implementation of ASSIGN derived from handleChildren() in pyflakes 1.3.0. The point here is that on module level, there's type aliases that we want to bind eagerly, but defer computation of the values of the assignments (the type aliases might have forward references).
Below is the instruction that describes the task: ### Input: This is a custom implementation of ASSIGN derived from handleChildren() in pyflakes 1.3.0. The point here is that on module level, there's type aliases that we want to bind eagerly, but defer computation of the values of the assignments (the type aliases might have forward references). ### Response: def ASSIGN(self, node): """This is a custom implementation of ASSIGN derived from handleChildren() in pyflakes 1.3.0. The point here is that on module level, there's type aliases that we want to bind eagerly, but defer computation of the values of the assignments (the type aliases might have forward references). """ if not isinstance(self.scope, ModuleScope): return super().ASSIGN(node) for target in node.targets: self.handleNode(target, node) self.deferHandleNode(node.value, node)
def table(self, header, body): """Rendering table element. Wrap header and body in it. :param header: header part of the table. :param body: body part of the table. """ table = '\n.. list-table::\n' if header and not header.isspace(): table = (table + self.indent + ':header-rows: 1\n\n' + self._indent_block(header) + '\n') else: table = table + '\n' table = table + self._indent_block(body) + '\n\n' return table
Rendering table element. Wrap header and body in it. :param header: header part of the table. :param body: body part of the table.
Below is the instruction that describes the task: ### Input: Rendering table element. Wrap header and body in it. :param header: header part of the table. :param body: body part of the table. ### Response: def table(self, header, body): """Rendering table element. Wrap header and body in it. :param header: header part of the table. :param body: body part of the table. """ table = '\n.. list-table::\n' if header and not header.isspace(): table = (table + self.indent + ':header-rows: 1\n\n' + self._indent_block(header) + '\n') else: table = table + '\n' table = table + self._indent_block(body) + '\n\n' return table
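The `table` renderer above only concatenates pre-rendered row strings into an rST `list-table` directive. A self-contained sketch of the same wrapping logic; the three-space indent is an assumption about what `self.indent` and `self._indent_block` produce:

```python
INDENT = '   '  # assumed value of self.indent (three spaces, standard rST)

def indent_block(text):
    """Indent every line of a pre-rendered block, like self._indent_block()."""
    return '\n'.join(INDENT + line for line in text.splitlines())

def list_table(header, body):
    """Wrap pre-rendered header/body rows in an rST list-table, mirroring table()."""
    out = '\n.. list-table::\n'
    if header and not header.isspace():
        out += INDENT + ':header-rows: 1\n\n' + indent_block(header) + '\n'
    else:
        out += '\n'
    return out + indent_block(body) + '\n\n'
```

With an empty header the `:header-rows:` option is omitted, exactly as in the original branch.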
def help_center_category_translations_missing(self, category_id, **kwargs): "https://developer.zendesk.com/rest_api/docs/help_center/translations#list-missing-translations" api_path = "/api/v2/help_center/categories/{category_id}/translations/missing.json" api_path = api_path.format(category_id=category_id) return self.call(api_path, **kwargs)
https://developer.zendesk.com/rest_api/docs/help_center/translations#list-missing-translations
Below is the instruction that describes the task: ### Input: https://developer.zendesk.com/rest_api/docs/help_center/translations#list-missing-translations ### Response: def help_center_category_translations_missing(self, category_id, **kwargs): "https://developer.zendesk.com/rest_api/docs/help_center/translations#list-missing-translations" api_path = "/api/v2/help_center/categories/{category_id}/translations/missing.json" api_path = api_path.format(category_id=category_id) return self.call(api_path, **kwargs)
def transform(self, node, results): u""" a,b,c,d,e,f,*g,h,i = range(100) changes to _3to2list = list(range(100)) a,b,c,d,e,f,g,h,i, = _3to2list[:6] + [_3to2list[6:-2]] + _3to2list[-2:] and for a,b,*c,d,e in iter_of_iters: do_stuff changes to for _3to2iter in iter_of_iters: _3to2list = list(_3to2iter) a,b,c,d,e, = _3to2list[:2] + [_3to2list[2:-2]] + _3to2list[-2:] do_stuff """ self.LISTNAME = self.new_name(u"_3to2list") self.ITERNAME = self.new_name(u"_3to2iter") expl, impl = results.get(u"expl"), results.get(u"impl") if expl is not None: setup_line, power_line = self.fix_explicit_context(node, results) setup_line.prefix = expl.prefix power_line.prefix = indentation(expl.parent) setup_line.append_child(Newline()) parent = node.parent i = node.remove() parent.insert_child(i, power_line) parent.insert_child(i, setup_line) elif impl is not None: setup_line, power_line = self.fix_implicit_context(node, results) suitify(node) suite = [k for k in node.children if k.type == syms.suite][0] setup_line.prefix = u"" power_line.prefix = suite.children[1].value suite.children[2].prefix = indentation(suite.children[2]) suite.insert_child(2, Newline()) suite.insert_child(2, power_line) suite.insert_child(2, Newline()) suite.insert_child(2, setup_line) results.get(u"lst").replace(Name(self.ITERNAME, prefix=u" "))
u""" a,b,c,d,e,f,*g,h,i = range(100) changes to _3to2list = list(range(100)) a,b,c,d,e,f,g,h,i, = _3to2list[:6] + [_3to2list[6:-2]] + _3to2list[-2:] and for a,b,*c,d,e in iter_of_iters: do_stuff changes to for _3to2iter in iter_of_iters: _3to2list = list(_3to2iter) a,b,c,d,e, = _3to2list[:2] + [_3to2list[2:-2]] + _3to2list[-2:] do_stuff
Below is the the instruction that describes the task: ### Input: u""" a,b,c,d,e,f,*g,h,i = range(100) changes to _3to2list = list(range(100)) a,b,c,d,e,f,g,h,i, = _3to2list[:6] + [_3to2list[6:-2]] + _3to2list[-2:] and for a,b,*c,d,e in iter_of_iters: do_stuff changes to for _3to2iter in iter_of_iters: _3to2list = list(_3to2iter) a,b,c,d,e, = _3to2list[:2] + [_3to2list[2:-2]] + _3to2list[-2:] do_stuff ### Response: def transform(self, node, results): u""" a,b,c,d,e,f,*g,h,i = range(100) changes to _3to2list = list(range(100)) a,b,c,d,e,f,g,h,i, = _3to2list[:6] + [_3to2list[6:-2]] + _3to2list[-2:] and for a,b,*c,d,e in iter_of_iters: do_stuff changes to for _3to2iter in iter_of_iters: _3to2list = list(_3to2iter) a,b,c,d,e, = _3to2list[:2] + [_3to2list[2:-2]] + _3to2list[-2:] do_stuff """ self.LISTNAME = self.new_name(u"_3to2list") self.ITERNAME = self.new_name(u"_3to2iter") expl, impl = results.get(u"expl"), results.get(u"impl") if expl is not None: setup_line, power_line = self.fix_explicit_context(node, results) setup_line.prefix = expl.prefix power_line.prefix = indentation(expl.parent) setup_line.append_child(Newline()) parent = node.parent i = node.remove() parent.insert_child(i, power_line) parent.insert_child(i, setup_line) elif impl is not None: setup_line, power_line = self.fix_implicit_context(node, results) suitify(node) suite = [k for k in node.children if k.type == syms.suite][0] setup_line.prefix = u"" power_line.prefix = suite.children[1].value suite.children[2].prefix = indentation(suite.children[2]) suite.insert_child(2, Newline()) suite.insert_child(2, power_line) suite.insert_child(2, Newline()) suite.insert_child(2, setup_line) results.get(u"lst").replace(Name(self.ITERNAME, prefix=u" "))
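The slicing rewrite this fixer emits can be sanity-checked against Python 3's native star-unpacking, which it is meant to emulate for Python 2 targets:

```python
# a, b, *c, d, e = lst  is what the fixer rewrites into slice arithmetic:
lst = list(range(10))
a, b, *c, d, e = lst
rewritten = lst[:2] + [lst[2:-2]] + lst[-2:]
# Both forms yield 0, 1, [2, 3, 4, 5, 6, 7], 8, 9
assert rewritten == [a, b, c, d, e]
```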
def from_google(cls, google_x, google_y, zoom): """Creates a tile from Google format X Y and zoom""" max_tile = (2 ** zoom) - 1 assert 0 <= google_x <= max_tile, 'Google X needs to be a value between 0 and (2^zoom) -1.' assert 0 <= google_y <= max_tile, 'Google Y needs to be a value between 0 and (2^zoom) -1.' return cls(tms_x=google_x, tms_y=(2 ** zoom - 1) - google_y, zoom=zoom)
Creates a tile from Google format X Y and zoom
Below is the instruction that describes the task: ### Input: Creates a tile from Google format X Y and zoom ### Response: def from_google(cls, google_x, google_y, zoom): """Creates a tile from Google format X Y and zoom""" max_tile = (2 ** zoom) - 1 assert 0 <= google_x <= max_tile, 'Google X needs to be a value between 0 and (2^zoom) -1.' assert 0 <= google_y <= max_tile, 'Google Y needs to be a value between 0 and (2^zoom) -1.' return cls(tms_x=google_x, tms_y=(2 ** zoom - 1) - google_y, zoom=zoom)
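The Google-to-TMS conversion above is just a vertical flip of the tile row: Google/XYZ counts rows from the top of the map, TMS from the bottom. A minimal sketch of the Y-axis flip in isolation (function name hypothetical):

```python
def google_to_tms_y(google_y, zoom):
    """Flip the Y index between Google/XYZ and TMS tile schemes."""
    max_tile = (2 ** zoom) - 1
    assert 0 <= google_y <= max_tile, 'Y must lie in [0, 2^zoom - 1]'
    return max_tile - google_y

# At zoom 3 there are 8 rows (0-7), so Google row 2 is TMS row 5.
```

The flip is its own inverse, so the same function converts in either direction.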
def after_unassign(reference_analysis): """Removes the reference analysis from the system """ analysis_events.after_unassign(reference_analysis) ref_sample = reference_analysis.aq_parent ref_sample.manage_delObjects([reference_analysis.getId()])
Removes the reference analysis from the system
Below is the instruction that describes the task: ### Input: Removes the reference analysis from the system ### Response: def after_unassign(reference_analysis): """Removes the reference analysis from the system """ analysis_events.after_unassign(reference_analysis) ref_sample = reference_analysis.aq_parent ref_sample.manage_delObjects([reference_analysis.getId()])
def advertiseBrokerWorkerDown(exctype, value, traceback): """Hook advertising the broker if an impromptu shutdown is occurring.""" if not scoop.SHUTDOWN_REQUESTED: execQueue.shutdown() sys.__excepthook__(exctype, value, traceback)
Hook advertising the broker if an impromptu shutdown is occurring.
Below is the instruction that describes the task: ### Input: Hook advertising the broker if an impromptu shutdown is occurring. ### Response: def advertiseBrokerWorkerDown(exctype, value, traceback): """Hook advertising the broker if an impromptu shutdown is occurring.""" if not scoop.SHUTDOWN_REQUESTED: execQueue.shutdown() sys.__excepthook__(exctype, value, traceback)
def extend(self, iterable, new_name=False): """ Add further arrays from an iterable to this list Parameters ---------- iterable Any iterable that contains :class:`InteractiveBase` instances %(ArrayList.rename.parameters.new_name)s Raises ------ %(ArrayList.rename.raises)s See Also -------- list.extend, append, rename""" # extend those arrays that aren't already in the list super(ArrayList, self).extend(t[0] for t in filter( lambda t: t[1] is not None, ( self.rename(arr, new_name) for arr in iterable)))
Add further arrays from an iterable to this list Parameters ---------- iterable Any iterable that contains :class:`InteractiveBase` instances %(ArrayList.rename.parameters.new_name)s Raises ------ %(ArrayList.rename.raises)s See Also -------- list.extend, append, rename
Below is the instruction that describes the task: ### Input: Add further arrays from an iterable to this list Parameters ---------- iterable Any iterable that contains :class:`InteractiveBase` instances %(ArrayList.rename.parameters.new_name)s Raises ------ %(ArrayList.rename.raises)s See Also -------- list.extend, append, rename ### Response: def extend(self, iterable, new_name=False): """ Add further arrays from an iterable to this list Parameters ---------- iterable Any iterable that contains :class:`InteractiveBase` instances %(ArrayList.rename.parameters.new_name)s Raises ------ %(ArrayList.rename.raises)s See Also -------- list.extend, append, rename""" # extend those arrays that aren't already in the list super(ArrayList, self).extend(t[0] for t in filter( lambda t: t[1] is not None, ( self.rename(arr, new_name) for arr in iterable)))
def apply_vnc_actions(self, vnc_actions): """ Play a list of vnc_actions forward over the current keysyms state NOTE: Since we are squashing a set of diffs into a single keyboard state, some information may be lost. For example if the Z key is down, then we receive [(Z-up), (Z-down)], the output will not reflect any change in Z You can make each frame shorter to offset this effect. """ for event in vnc_actions: if isinstance(event, spaces.KeyEvent): if event.down: self._down_keysyms.add(event.key) else: self._down_keysyms.discard(event.key) logger.debug("AtariKeyState._down_keysyms: {}".format(self._down_keysyms))
Play a list of vnc_actions forward over the current keysyms state NOTE: Since we are squashing a set of diffs into a single keyboard state, some information may be lost. For example if the Z key is down, then we receive [(Z-up), (Z-down)], the output will not reflect any change in Z You can make each frame shorter to offset this effect.
Below is the instruction that describes the task: ### Input: Play a list of vnc_actions forward over the current keysyms state NOTE: Since we are squashing a set of diffs into a single keyboard state, some information may be lost. For example if the Z key is down, then we receive [(Z-up), (Z-down)], the output will not reflect any change in Z You can make each frame shorter to offset this effect. ### Response: def apply_vnc_actions(self, vnc_actions): """ Play a list of vnc_actions forward over the current keysyms state NOTE: Since we are squashing a set of diffs into a single keyboard state, some information may be lost. For example if the Z key is down, then we receive [(Z-up), (Z-down)], the output will not reflect any change in Z You can make each frame shorter to offset this effect. """ for event in vnc_actions: if isinstance(event, spaces.KeyEvent): if event.down: self._down_keysyms.add(event.key) else: self._down_keysyms.discard(event.key) logger.debug("AtariKeyState._down_keysyms: {}".format(self._down_keysyms))
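The squashing behaviour described in the note is easy to see in a stripped-down sketch that tracks only a set of pressed keys; the `(key, down)` tuples stand in for the `KeyEvent` objects:

```python
def apply_key_events(down_keysyms, events):
    """Squash a list of (key, down) events into the current pressed-key set.

    Intermediate transitions are lost: only the final up/down state of each
    key survives, which is exactly the information loss the note warns about.
    """
    for key, down in events:
        if down:
            down_keysyms.add(key)
        else:
            down_keysyms.discard(key)  # discard() tolerates keys not present
    return down_keysyms
```

Replaying `[(Z-down), (Z-up), (Z-down)]` leaves Z simply "down": the brief release in the middle is invisible in the squashed state.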
def save_default_values(dsp, path): """ Write Dispatcher default values in Python pickle format. Pickles are a serialized byte stream of a Python object. This format will preserve Python objects used as nodes or edges. :param dsp: A dispatcher that identifies the model adopted. :type dsp: schedula.Dispatcher :param path: File or filename to write. File names ending in .gz or .bz2 will be compressed. :type path: str, file .. testsetup:: >>> from tempfile import mkstemp >>> file_name = mkstemp()[1] Example:: >>> from schedula import Dispatcher >>> dsp = Dispatcher() >>> dsp.add_data('a', default_value=1) 'a' >>> dsp.add_function(function=max, inputs=['a', 'b'], outputs=['c']) 'max' >>> save_default_values(dsp, file_name) """ import dill with open(path, 'wb') as f: dill.dump(dsp.default_values, f)
Write Dispatcher default values in Python pickle format. Pickles are a serialized byte stream of a Python object. This format will preserve Python objects used as nodes or edges. :param dsp: A dispatcher that identifies the model adopted. :type dsp: schedula.Dispatcher :param path: File or filename to write. File names ending in .gz or .bz2 will be compressed. :type path: str, file .. testsetup:: >>> from tempfile import mkstemp >>> file_name = mkstemp()[1] Example:: >>> from schedula import Dispatcher >>> dsp = Dispatcher() >>> dsp.add_data('a', default_value=1) 'a' >>> dsp.add_function(function=max, inputs=['a', 'b'], outputs=['c']) 'max' >>> save_default_values(dsp, file_name)
Below is the instruction that describes the task: ### Input: Write Dispatcher default values in Python pickle format. Pickles are a serialized byte stream of a Python object. This format will preserve Python objects used as nodes or edges. :param dsp: A dispatcher that identifies the model adopted. :type dsp: schedula.Dispatcher :param path: File or filename to write. File names ending in .gz or .bz2 will be compressed. :type path: str, file .. testsetup:: >>> from tempfile import mkstemp >>> file_name = mkstemp()[1] Example:: >>> from schedula import Dispatcher >>> dsp = Dispatcher() >>> dsp.add_data('a', default_value=1) 'a' >>> dsp.add_function(function=max, inputs=['a', 'b'], outputs=['c']) 'max' >>> save_default_values(dsp, file_name) ### Response: def save_default_values(dsp, path): """ Write Dispatcher default values in Python pickle format. Pickles are a serialized byte stream of a Python object. This format will preserve Python objects used as nodes or edges. :param dsp: A dispatcher that identifies the model adopted. :type dsp: schedula.Dispatcher :param path: File or filename to write. File names ending in .gz or .bz2 will be compressed. :type path: str, file .. testsetup:: >>> from tempfile import mkstemp >>> file_name = mkstemp()[1] Example:: >>> from schedula import Dispatcher >>> dsp = Dispatcher() >>> dsp.add_data('a', default_value=1) 'a' >>> dsp.add_function(function=max, inputs=['a', 'b'], outputs=['c']) 'max' >>> save_default_values(dsp, file_name) """ import dill with open(path, 'wb') as f: dill.dump(dsp.default_values, f)
def mixed(self): """ Returns a dictionary where the values are either single values, or a list of values when a key/value appears more than once in this dictionary. This is similar to the kind of dictionary often used to represent the variables in a web request. """ result = {} multi = {} for key, value in self._items: if key in result: # We do this to not clobber any lists that are # *actual* values in this dictionary: if key in multi: result[key].append(value) else: result[key] = [result[key], value] multi[key] = None else: result[key] = value return result
Returns a dictionary where the values are either single values, or a list of values when a key/value appears more than once in this dictionary. This is similar to the kind of dictionary often used to represent the variables in a web request.
Below is the instruction that describes the task: ### Input: Returns a dictionary where the values are either single values, or a list of values when a key/value appears more than once in this dictionary. This is similar to the kind of dictionary often used to represent the variables in a web request. ### Response: def mixed(self): """ Returns a dictionary where the values are either single values, or a list of values when a key/value appears more than once in this dictionary. This is similar to the kind of dictionary often used to represent the variables in a web request. """ result = {} multi = {} for key, value in self._items: if key in result: # We do this to not clobber any lists that are # *actual* values in this dictionary: if key in multi: result[key].append(value) else: result[key] = [result[key], value] multi[key] = None else: result[key] = value return result
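The same collapsing logic, lifted out of the class as a free function over `(key, value)` pairs. The `multi` marker dictionary is what keeps a list that is an *actual single value* from being confused with a list built from repeated keys:

```python
def mixed(items):
    """Collapse (key, value) pairs: single values stay scalar, repeats become lists."""
    result, multi = {}, {}
    for key, value in items:
        if key in result:
            if key in multi:
                result[key].append(value)   # key already promoted to a list
            else:
                result[key] = [result[key], value]  # first repeat: promote
                multi[key] = None
        else:
            result[key] = value
    return result
```

A key seen once keeps its value untouched, even when that value is itself a list.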
def next(self): """Next point in iteration """ if self.probability == 1: x, y = next(self.scan) else: while True: x, y = next(self.scan) if random.random() <= self.probability: break return x, y
Next point in iteration
Below is the instruction that describes the task: ### Input: Next point in iteration ### Response: def next(self): """Next point in iteration """ if self.probability == 1: x, y = next(self.scan) else: while True: x, y = next(self.scan) if random.random() <= self.probability: break return x, y
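The retry loop above is a form of probabilistic subsampling: each point from the underlying scan is kept with the given probability, and `probability == 1` short-circuits the random draw entirely. A generator-based sketch of the same idea (names hypothetical):

```python
import random

def filtered_points(scan, probability):
    """Yield points from `scan`, keeping each with the given probability.

    probability == 1 skips the random draw, as in the class above, so the
    output is then deterministic and identical to the input scan.
    """
    for point in scan:
        if probability == 1 or random.random() <= probability:
            yield point
```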
def set_headers(self, headers): """ Set the headers of our csv writer :param list headers: list of dict with label and name key (label is mandatory : used for the export) """ self.headers = [] if 'order' in self.options: for element in self.options['order']: for header in headers: if header['key'] == element: self.headers.append(header) break else: self.headers = headers
Set the headers of our csv writer :param list headers: list of dict with label and name key (label is mandatory : used for the export)
Below is the instruction that describes the task: ### Input: Set the headers of our csv writer :param list headers: list of dict with label and name key (label is mandatory : used for the export) ### Response: def set_headers(self, headers): """ Set the headers of our csv writer :param list headers: list of dict with label and name key (label is mandatory : used for the export) """ self.headers = [] if 'order' in self.options: for element in self.options['order']: for header in headers: if header['key'] == element: self.headers.append(header) break else: self.headers = headers
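Note that when an `order` option is present, `set_headers` both reorders the headers *and* drops any header not named in the option. A standalone sketch that makes this filter-and-reorder behaviour explicit (function name hypothetical):

```python
def order_headers(headers, options):
    """Reproduce set_headers(): with options['order'], keep only the named
    headers, in that order; without it, keep everything unchanged."""
    if 'order' not in options:
        return list(headers)
    ordered = []
    for key in options['order']:
        for header in headers:
            if header['key'] == key:
                ordered.append(header)
                break  # first match wins, like the original inner loop
    return ordered
```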
def _get_media(self): """ Construct Media as a dynamic property. .. Note:: For more information visit https://docs.djangoproject.com/en/stable/topics/forms/media/#media-as-a-dynamic-property """ lang = get_language() select2_js = (settings.SELECT2_JS,) if settings.SELECT2_JS else () select2_css = (settings.SELECT2_CSS,) if settings.SELECT2_CSS else () i18n_name = SELECT2_TRANSLATIONS.get(lang) if i18n_name not in settings.SELECT2_I18N_AVAILABLE_LANGUAGES: i18n_name = None i18n_file = ('%s/%s.js' % (settings.SELECT2_I18N_PATH, i18n_name),) if i18n_name else () return forms.Media( js=select2_js + i18n_file + ('django_select2/django_select2.js',), css={'screen': select2_css} )
Construct Media as a dynamic property. .. Note:: For more information visit https://docs.djangoproject.com/en/stable/topics/forms/media/#media-as-a-dynamic-property
Below is the instruction that describes the task: ### Input: Construct Media as a dynamic property. .. Note:: For more information visit https://docs.djangoproject.com/en/stable/topics/forms/media/#media-as-a-dynamic-property ### Response: def _get_media(self): """ Construct Media as a dynamic property. .. Note:: For more information visit https://docs.djangoproject.com/en/stable/topics/forms/media/#media-as-a-dynamic-property """ lang = get_language() select2_js = (settings.SELECT2_JS,) if settings.SELECT2_JS else () select2_css = (settings.SELECT2_CSS,) if settings.SELECT2_CSS else () i18n_name = SELECT2_TRANSLATIONS.get(lang) if i18n_name not in settings.SELECT2_I18N_AVAILABLE_LANGUAGES: i18n_name = None i18n_file = ('%s/%s.js' % (settings.SELECT2_I18N_PATH, i18n_name),) if i18n_name else () return forms.Media( js=select2_js + i18n_file + ('django_select2/django_select2.js',), css={'screen': select2_css} )
def get_rich_events(self, item): """ In the events there are some common fields with the task. The name of the field must be the same in the task and in the event so we can filer using it in task and event at the same time. * Fields that don't change: the field does not change with the events in a task so the value is always the same in the events of a task. * Fields that change: the value of teh field changes with events """ # To get values from the task eitem = self.get_rich_item(item) # Fields that don't change never task_fields_nochange = ['author_userName', 'creation_date', 'url', 'id', 'bug_id'] # Follow changes in this fields task_fields_change = ['priority_value', 'status', 'assigned_to_userName', 'tags_custom_analyzed'] task_change = {} for f in task_fields_change: task_change[f] = None task_change['status'] = TASK_OPEN_STATUS task_change['tags_custom_analyzed'] = eitem['tags_custom_analyzed'] # Events are in transactions field (changes in fields) transactions = item['data']['transactions'] if not transactions: return [] for t in transactions: event = {} # Needed for incremental updates from the item event['metadata__updated_on'] = item['metadata__updated_on'] event['origin'] = item['origin'] # Real event data event['transactionID'] = t['transactionID'] event['type'] = t['transactionType'] event['username'] = None if 'authorData' in t and 'userName' in t['authorData']: event['event_author_name'] = t['authorData']['userName'] event['update_date'] = unixtime_to_datetime(float(t['dateCreated'])).isoformat() event['oldValue'] = '' event['newValue'] = '' if event['type'] == 'core:edge': for val in t['oldValue']: if val in self.phab_ids_names: val = self.phab_ids_names[val] event['oldValue'] += "," + val event['oldValue'] = event['oldValue'][1:] # remove first comma for val in t['newValue']: if val in self.phab_ids_names: val = self.phab_ids_names[val] event['newValue'] += "," + val event['newValue'] = event['newValue'][1:] # remove first comma elif 
event['type'] in ['status', 'description', 'priority', 'reassign', 'title', 'space', 'core:create', 'parent']: # Convert to str so the field is always a string event['oldValue'] = str(t['oldValue']) if event['oldValue'] in self.phab_ids_names: event['oldValue'] = self.phab_ids_names[event['oldValue']] event['newValue'] = str(t['newValue']) if event['newValue'] in self.phab_ids_names: event['newValue'] = self.phab_ids_names[event['newValue']] elif event['type'] == 'core:comment': event['newValue'] = t['comments'] elif event['type'] == 'core:subscribers': event['newValue'] = ",".join(t['newValue']) else: # logger.debug("Event type %s old to new value not supported", t['transactionType']) pass for f in task_fields_nochange: # The field name must be the same than in task for filtering event[f] = eitem[f] # To track history of some fields if event['type'] in ['status']: task_change['status'] = event['newValue'] elif event['type'] == 'priority': task_change['priority'] = event['newValue'] elif event['type'] == 'core:edge': task_change['tags_custom_analyzed'] = [event['newValue']] if event['type'] in ['reassign']: # Try to get the userName and not the user id if event['newValue'] in self.phab_ids_names: task_change['assigned_to_userName'] = self.phab_ids_names[event['newValue']] event['newValue'] = task_change['assigned_to_userName'] else: task_change['assigned_to_userName'] = event['newValue'] if event['oldValue'] in self.phab_ids_names: # Try to get the userName and not the user id event['oldValue'] = self.phab_ids_names[event['oldValue']] for f in task_change: event[f] = task_change[f] yield event
In the events there are some common fields with the task. The name of the field must be the same in the task and in the event so we can filter using it in task and event at the same time. * Fields that don't change: the field does not change with the events in a task so the value is always the same in the events of a task. * Fields that change: the value of the field changes with events
Below is the the instruction that describes the task: ### Input: In the events there are some common fields with the task. The name of the field must be the same in the task and in the event so we can filer using it in task and event at the same time. * Fields that don't change: the field does not change with the events in a task so the value is always the same in the events of a task. * Fields that change: the value of teh field changes with events ### Response: def get_rich_events(self, item): """ In the events there are some common fields with the task. The name of the field must be the same in the task and in the event so we can filer using it in task and event at the same time. * Fields that don't change: the field does not change with the events in a task so the value is always the same in the events of a task. * Fields that change: the value of teh field changes with events """ # To get values from the task eitem = self.get_rich_item(item) # Fields that don't change never task_fields_nochange = ['author_userName', 'creation_date', 'url', 'id', 'bug_id'] # Follow changes in this fields task_fields_change = ['priority_value', 'status', 'assigned_to_userName', 'tags_custom_analyzed'] task_change = {} for f in task_fields_change: task_change[f] = None task_change['status'] = TASK_OPEN_STATUS task_change['tags_custom_analyzed'] = eitem['tags_custom_analyzed'] # Events are in transactions field (changes in fields) transactions = item['data']['transactions'] if not transactions: return [] for t in transactions: event = {} # Needed for incremental updates from the item event['metadata__updated_on'] = item['metadata__updated_on'] event['origin'] = item['origin'] # Real event data event['transactionID'] = t['transactionID'] event['type'] = t['transactionType'] event['username'] = None if 'authorData' in t and 'userName' in t['authorData']: event['event_author_name'] = t['authorData']['userName'] event['update_date'] = 
unixtime_to_datetime(float(t['dateCreated'])).isoformat() event['oldValue'] = '' event['newValue'] = '' if event['type'] == 'core:edge': for val in t['oldValue']: if val in self.phab_ids_names: val = self.phab_ids_names[val] event['oldValue'] += "," + val event['oldValue'] = event['oldValue'][1:] # remove first comma for val in t['newValue']: if val in self.phab_ids_names: val = self.phab_ids_names[val] event['newValue'] += "," + val event['newValue'] = event['newValue'][1:] # remove first comma elif event['type'] in ['status', 'description', 'priority', 'reassign', 'title', 'space', 'core:create', 'parent']: # Convert to str so the field is always a string event['oldValue'] = str(t['oldValue']) if event['oldValue'] in self.phab_ids_names: event['oldValue'] = self.phab_ids_names[event['oldValue']] event['newValue'] = str(t['newValue']) if event['newValue'] in self.phab_ids_names: event['newValue'] = self.phab_ids_names[event['newValue']] elif event['type'] == 'core:comment': event['newValue'] = t['comments'] elif event['type'] == 'core:subscribers': event['newValue'] = ",".join(t['newValue']) else: # logger.debug("Event type %s old to new value not supported", t['transactionType']) pass for f in task_fields_nochange: # The field name must be the same than in task for filtering event[f] = eitem[f] # To track history of some fields if event['type'] in ['status']: task_change['status'] = event['newValue'] elif event['type'] == 'priority': task_change['priority'] = event['newValue'] elif event['type'] == 'core:edge': task_change['tags_custom_analyzed'] = [event['newValue']] if event['type'] in ['reassign']: # Try to get the userName and not the user id if event['newValue'] in self.phab_ids_names: task_change['assigned_to_userName'] = self.phab_ids_names[event['newValue']] event['newValue'] = task_change['assigned_to_userName'] else: task_change['assigned_to_userName'] = event['newValue'] if event['oldValue'] in self.phab_ids_names: # Try to get the userName and not the 
user id event['oldValue'] = self.phab_ids_names[event['oldValue']] for f in task_change: event[f] = task_change[f] yield event
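The `core:edge` branch above builds a comma-joined string of values while mapping Phabricator ids to user names (prepending a comma and stripping the first one afterwards). A condensed, testable sketch of that step — the function name is hypothetical, and a plain dict stands in for `self.phab_ids_names`:

```python
def resolve_edge_values(values, ids_names):
    """Map each PHID to a known user name (falling back to the raw id)
    and comma-join the results, as get_rich_events does for 'core:edge'
    transactions."""
    return ",".join(ids_names.get(v, v) for v in values)

names = {"PHID-USER-1": "alice", "PHID-USER-2": "bob"}
print(resolve_edge_values(["PHID-USER-1", "PHID-TASK-9"], names))  # → alice,PHID-TASK-9
```

Using `",".join(...)` is equivalent to the snippet's prepend-and-strip loop, without the off-by-one bookkeeping.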
def spawn_worker(params): """ This method has to be a module level function :type params: Params """ setup_logging(params) log.info("Adding worker: idx=%s\tconcurrency=%s\tresults=%s", params.worker_index, params.concurrency, params.report) worker = Worker(params) worker.start() worker.join()
This method has to be a module level function :type params: Params
Below is the instruction that describes the task: ### Input: This method has to be a module level function :type params: Params ### Response: def spawn_worker(params): """ This method has to be a module level function :type params: Params """ setup_logging(params) log.info("Adding worker: idx=%s\tconcurrency=%s\tresults=%s", params.worker_index, params.concurrency, params.report) worker = Worker(params) worker.start() worker.join()
def __load_config(self): '''Find and load .scuba.yml ''' # top_path is where .scuba.yml is found, and becomes the top of our bind mount. # top_rel is the relative path from top_path to the current working directory, # and is where we'll set the working directory in the container (relative to # the bind mount point). try: top_path, top_rel = find_config() self.config = load_config(os.path.join(top_path, SCUBA_YML)) except ConfigNotFoundError as cfgerr: # SCUBA_YML can be missing if --image was given. # In this case, we assume a default config if not self.image_override: raise ScubaError(str(cfgerr)) top_path, top_rel = os.getcwd(), '' self.config = ScubaConfig(image=None) except ConfigError as cfgerr: raise ScubaError(str(cfgerr)) # Mount scuba root directory at the same path in the container... self.add_volume(top_path, top_path) # ...and set the working dir relative to it self.set_workdir(os.path.join(top_path, top_rel)) self.add_env('SCUBA_ROOT', top_path)
Find and load .scuba.yml
Below is the instruction that describes the task: ### Input: Find and load .scuba.yml ### Response: def __load_config(self): '''Find and load .scuba.yml ''' # top_path is where .scuba.yml is found, and becomes the top of our bind mount. # top_rel is the relative path from top_path to the current working directory, # and is where we'll set the working directory in the container (relative to # the bind mount point). try: top_path, top_rel = find_config() self.config = load_config(os.path.join(top_path, SCUBA_YML)) except ConfigNotFoundError as cfgerr: # SCUBA_YML can be missing if --image was given. # In this case, we assume a default config if not self.image_override: raise ScubaError(str(cfgerr)) top_path, top_rel = os.getcwd(), '' self.config = ScubaConfig(image=None) except ConfigError as cfgerr: raise ScubaError(str(cfgerr)) # Mount scuba root directory at the same path in the container... self.add_volume(top_path, top_path) # ...and set the working dir relative to it self.set_workdir(os.path.join(top_path, top_rel)) self.add_env('SCUBA_ROOT', top_path)
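`find_config()` above returns both the directory holding `.scuba.yml` and the relative path back down to the working directory. A testable sketch of that upward walk — the real helper checks the filesystem and its exact signature may differ; here the containment check is injected as a predicate, and `posixpath` keeps the sketch platform-independent:

```python
import posixpath

def find_config_upward(start, has_file):
    """Walk parents of `start` until has_file(dir) is true; return
    (top_path, relative_path_from_top), like scuba's find_config()."""
    path, rel = start, ''
    while True:
        if has_file(path):
            return path, rel
        parent = posixpath.dirname(path)
        if parent == path:  # hit the filesystem root without a match
            raise LookupError('config not found above %s' % start)
        # accumulate the relative path from the (eventual) top back down
        rel = posixpath.join(posixpath.basename(path), rel) if rel \
            else posixpath.basename(path)
        path = parent

top, rel = find_config_upward('/repo/src/app', lambda d: d == '/repo')
print(top, rel)  # → /repo src/app
```

The `(top_path, top_rel)` pair is exactly what `__load_config` needs: mount `top_path`, then set the container workdir to `join(top_path, top_rel)`.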
def recv(self, message, ref, pos): """ Called when a message is received with decoded JSON content """ # Get action action = message.get('action', None) # Show the message we got if action != 'pingdog': self.debug("{} - Receive: {} (ref:{}) - {}".format(pos.name.encode('ascii', 'ignore'), message, ref, pos.uuid), color="cyan") # Check the action that it is requesting if action == 'get_config': # Get all the hardware connected to this POS answer = {} answer['action'] = 'config' answer['commit'] = pos.commit answer['hardware'] = [] for hw in pos.hardwares.filter(enable=True): # Prepare to send back the config answer['hardware'].append({'kind': hw.kind, 'config': hw.get_config(), 'uuid': hw.uuid.hex}) self.debug(u"{} - Send: {} - {}".format(pos.name.encode('ascii', 'ignore'), answer, pos.uuid), color='green') self.send(answer, ref, pos) elif action == 'subscribe': # Get UUID uid = message.get('uuid', None) # Check if we got a valid UUID if uid: uid = uuid.UUID(uid) poshw = POSHardware.objects.filter(uuid=uid).first() if poshw: if poshw.enable: # Suscribe this websocket to group self.debug("{} - Subscribed to '{}' - {}".format(pos.name.encode('ascii', 'ignore'), uid.hex, pos.uuid), color="purple") Group(uid.hex).add(self.message.reply_channel) self.send({'action': 'subscribed', 'uuid': uid.hex, 'key': poshw.key}, ref, pos) else: self.send_error("You cannot subscribe to a disabled Hardware!", ref, pos) else: self.send_error("You cannot subscribe to a Hardware that is not available, UUID not found!", ref, pos) else: self.send_error("You have tried to subscribe to a UUID but didn't specify any or is invalid", ref, pos) elif action == 'msg': uid = message.get('uuid', None) msg = message.get('msg', None) if uid: origin = POSHardware.objects.filter(uuid=uuid.UUID(uid)).first() if origin: self.debug("{} - Got a message from {}: {} (ref:{}) - {}".format(pos.name.encode('ascii', 'ignore'), origin.uuid, msg, ref, pos.uuid), color='purple') origin.recv(msg) else: self.debug("{} - 
Got a message from UNKNOWN {}: {} (ref:{}) - {}".format(pos.name.encode('ascii', 'ignore'), uid, msg, ref, pos.uuid), color='purple') else: self.debug("{} - Got a message from NO-UUID: {} (ref:{}) - {}".format(pos.name.encode('ascii', 'ignore'), msg, ref, pos.uuid), color='purple') elif action == 'ping': super(POSConsumer, self).send({'message': json.dumps({'action': 'pong', 'ref': ref})}) elif action == 'pong': self.debug("{} - Got PONG {} (ref:{}) - {}".format(pos.name.encode('ascii', 'ignore'), message.get('ref', '-'), ref, pos.uuid), color='white') elif action == 'pingdog': super(POSConsumer, self).send({'message': json.dumps({'action': 'pongdog', 'ref': ref})}) elif action == 'error': uid = message.get('uuid', None) msg = message.get('error', 'No error') if uid: self.error("{} - Got an error from {}: {} (UUID:{}) (ref:{}) - {}".format(pos.name.encode('ascii', 'ignore'), pos.uuid, msg, uid, ref, pos.uuid)) else: self.error("{} - Got an error from {}: {}) (ref:{}) - {}".format(pos.name.encode('ascii', 'ignore'), pos.uuid, msg, ref, pos.uuid)) log = POSLog() log.pos = pos if uid: poshw = POSHardware.objects.filter(uuid=uid).first() if poshw: log.poshw = poshw log.uuid = poshw.uuid else: log.uuid = pos.uuid log.log = message.get('error', None) log.save() else: # Unknown action self.send_error("Unknown action '{}'".format(action), ref, pos)
Called when a message is received with decoded JSON content
Below is the instruction that describes the task: ### Input: Called when a message is received with decoded JSON content ### Response: def recv(self, message, ref, pos): """ Called when a message is received with decoded JSON content """ # Get action action = message.get('action', None) # Show the message we got if action != 'pingdog': self.debug("{} - Receive: {} (ref:{}) - {}".format(pos.name.encode('ascii', 'ignore'), message, ref, pos.uuid), color="cyan") # Check the action that it is requesting if action == 'get_config': # Get all the hardware connected to this POS answer = {} answer['action'] = 'config' answer['commit'] = pos.commit answer['hardware'] = [] for hw in pos.hardwares.filter(enable=True): # Prepare to send back the config answer['hardware'].append({'kind': hw.kind, 'config': hw.get_config(), 'uuid': hw.uuid.hex}) self.debug(u"{} - Send: {} - {}".format(pos.name.encode('ascii', 'ignore'), answer, pos.uuid), color='green') self.send(answer, ref, pos) elif action == 'subscribe': # Get UUID uid = message.get('uuid', None) # Check if we got a valid UUID if uid: uid = uuid.UUID(uid) poshw = POSHardware.objects.filter(uuid=uid).first() if poshw: if poshw.enable: # Suscribe this websocket to group self.debug("{} - Subscribed to '{}' - {}".format(pos.name.encode('ascii', 'ignore'), uid.hex, pos.uuid), color="purple") Group(uid.hex).add(self.message.reply_channel) self.send({'action': 'subscribed', 'uuid': uid.hex, 'key': poshw.key}, ref, pos) else: self.send_error("You cannot subscribe to a disabled Hardware!", ref, pos) else: self.send_error("You cannot subscribe to a Hardware that is not available, UUID not found!", ref, pos) else: self.send_error("You have tried to subscribe to a UUID but didn't specify any or is invalid", ref, pos) elif action == 'msg': uid = message.get('uuid', None) msg = message.get('msg', None) if uid: origin = POSHardware.objects.filter(uuid=uuid.UUID(uid)).first() if origin: self.debug("{} - Got a message from {}: {} (ref:{})
- {}".format(pos.name.encode('ascii', 'ignore'), origin.uuid, msg, ref, pos.uuid), color='purple') origin.recv(msg) else: self.debug("{} - Got a message from UNKNOWN {}: {} (ref:{}) - {}".format(pos.name.encode('ascii', 'ignore'), uid, msg, ref, pos.uuid), color='purple') else: self.debug("{} - Got a message from NO-UUID: {} (ref:{}) - {}".format(pos.name.encode('ascii', 'ignore'), msg, ref, pos.uuid), color='purple') elif action == 'ping': super(POSConsumer, self).send({'message': json.dumps({'action': 'pong', 'ref': ref})}) elif action == 'pong': self.debug("{} - Got PONG {} (ref:{}) - {}".format(pos.name.encode('ascii', 'ignore'), message.get('ref', '-'), ref, pos.uuid), color='white') elif action == 'pingdog': super(POSConsumer, self).send({'message': json.dumps({'action': 'pongdog', 'ref': ref})}) elif action == 'error': uid = message.get('uuid', None) msg = message.get('error', 'No error') if uid: self.error("{} - Got an error from {}: {} (UUID:{}) (ref:{}) - {}".format(pos.name.encode('ascii', 'ignore'), pos.uuid, msg, uid, ref, pos.uuid)) else: self.error("{} - Got an error from {}: {}) (ref:{}) - {}".format(pos.name.encode('ascii', 'ignore'), pos.uuid, msg, ref, pos.uuid)) log = POSLog() log.pos = pos if uid: poshw = POSHardware.objects.filter(uuid=uid).first() if poshw: log.poshw = poshw log.uuid = poshw.uuid else: log.uuid = pos.uuid log.log = message.get('error', None) log.save() else: # Unknown action self.send_error("Unknown action '{}'".format(action), ref, pos)
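The long if/elif chain in `recv()` is essentially an action-keyed dispatch; a minimal table-driven sketch of that shape (the handler table and return values are hypothetical — the real consumer replies over a websocket instead of returning):

```python
def dispatch(message, handlers):
    """Route a decoded message by its 'action' field, mirroring the
    if/elif chain in recv() above; unknown actions yield an error string."""
    action = message.get('action', None)
    handler = handlers.get(action)
    if handler is None:
        return "Unknown action '{}'".format(action)
    return handler(message)

handlers = {
    'ping': lambda msg: {'action': 'pong'},
}
print(dispatch({'action': 'ping'}, handlers))  # → {'action': 'pong'}
print(dispatch({'action': 'nope'}, handlers))  # → Unknown action 'nope'
```

A dict of handlers keeps each action testable in isolation, where the inline chain forces every branch through one method.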
def absolute_position(self, x, y): """return the absolute position of x,y in user space w.r.t. default user space""" (a, b, c, d, e, f) = self._currentMatrix xp = a * x + c * y + e yp = b * x + d * y + f return xp, yp
return the absolute position of x,y in user space w.r.t. default user space
Below is the instruction that describes the task: ### Input: return the absolute position of x,y in user space w.r.t. default user space ### Response: def absolute_position(self, x, y): """return the absolute position of x,y in user space w.r.t. default user space""" (a, b, c, d, e, f) = self._currentMatrix xp = a * x + c * y + e yp = b * x + d * y + f return xp, yp
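The method applies the standard PDF-style affine transform x' = a·x + c·y + e, y' = b·x + d·y + f. A standalone sketch, with the matrix passed explicitly rather than read from `self._currentMatrix`:

```python
def absolute_position(matrix, x, y):
    """Apply an affine matrix (a, b, c, d, e, f) to the point (x, y),
    as the method above does with its current transformation matrix."""
    a, b, c, d, e, f = matrix
    return a * x + c * y + e, b * x + d * y + f

print(absolute_position((1, 0, 0, 1, 0, 0), 3, 4))    # identity → (3, 4)
print(absolute_position((1, 0, 0, 1, 10, 20), 3, 4))  # translation → (13, 24)
```

`(e, f)` is the translation part; `(a, b, c, d)` covers rotation, scale, and shear.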
def noteon(self, chan, key, vel): """Play a note.""" if key < 0 or key > 128: return False if chan < 0: return False if vel < 0 or vel > 128: return False return fluid_synth_noteon(self.synth, chan, key, vel)
Play a note.
Below is the instruction that describes the task: ### Input: Play a note. ### Response: def noteon(self, chan, key, vel): """Play a note.""" if key < 0 or key > 128: return False if chan < 0: return False if vel < 0 or vel > 128: return False return fluid_synth_noteon(self.synth, chan, key, vel)
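The guards above reject out-of-range MIDI values before calling into fluidsynth. The same checks isolated as a predicate — the helper name is hypothetical, and note the snippet accepts 128 even though MIDI data bytes top out at 127:

```python
def valid_noteon_args(chan, key, vel):
    """Replicate the range checks noteon() performs before
    fluid_synth_noteon would be called."""
    if key < 0 or key > 128:
        return False
    if chan < 0:
        return False
    if vel < 0 or vel > 128:
        return False
    return True

print(valid_noteon_args(0, 60, 100))   # → True
print(valid_noteon_args(0, 200, 100))  # → False
```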
def is_addable(self, device, automount=True): """Check if device can be added with ``auto_add``.""" if not self.is_automount(device, automount): return False if device.is_filesystem: return not device.is_mounted if device.is_crypto: return self._prompt and not device.is_unlocked if device.is_partition_table: return any(self.is_addable(dev) for dev in self.get_all_handleable() if dev.partition_slave == device) return False
Check if device can be added with ``auto_add``.
Below is the instruction that describes the task: ### Input: Check if device can be added with ``auto_add``. ### Response: def is_addable(self, device, automount=True): """Check if device can be added with ``auto_add``.""" if not self.is_automount(device, automount): return False if device.is_filesystem: return not device.is_mounted if device.is_crypto: return self._prompt and not device.is_unlocked if device.is_partition_table: return any(self.is_addable(dev) for dev in self.get_all_handleable() if dev.partition_slave == device) return False
def RIBNextHopLimitExceeded_originator_switch_info_switchIdentifier(self, **kwargs): """Auto Generated Code """ config = ET.Element("config") RIBNextHopLimitExceeded = ET.SubElement(config, "RIBNextHopLimitExceeded", xmlns="http://brocade.com/ns/brocade-notification-stream") originator_switch_info = ET.SubElement(RIBNextHopLimitExceeded, "originator-switch-info") switchIdentifier = ET.SubElement(originator_switch_info, "switchIdentifier") switchIdentifier.text = kwargs.pop('switchIdentifier') callback = kwargs.pop('callback', self._callback) return callback(config)
Auto Generated Code
Below is the instruction that describes the task: ### Input: Auto Generated Code ### Response: def RIBNextHopLimitExceeded_originator_switch_info_switchIdentifier(self, **kwargs): """Auto Generated Code """ config = ET.Element("config") RIBNextHopLimitExceeded = ET.SubElement(config, "RIBNextHopLimitExceeded", xmlns="http://brocade.com/ns/brocade-notification-stream") originator_switch_info = ET.SubElement(RIBNextHopLimitExceeded, "originator-switch-info") switchIdentifier = ET.SubElement(originator_switch_info, "switchIdentifier") switchIdentifier.text = kwargs.pop('switchIdentifier') callback = kwargs.pop('callback', self._callback) return callback(config)
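The generated method above follows a simple ElementTree nesting pattern: build the fixed hierarchy, fill in one leaf's text. A reduced, serializable sketch of the same pattern — a standalone function without the namespace attribute or the callback threading the real method does:

```python
import xml.etree.ElementTree as ET

def build_switch_config(switch_id):
    """Nest config > RIBNextHopLimitExceeded > originator-switch-info >
    switchIdentifier and set the leaf text, as the generated code does."""
    config = ET.Element("config")
    event = ET.SubElement(config, "RIBNextHopLimitExceeded")
    info = ET.SubElement(event, "originator-switch-info")
    ident = ET.SubElement(info, "switchIdentifier")
    ident.text = switch_id
    return config

print(ET.tostring(build_switch_config("sw0"), encoding="unicode"))
```

Each `ET.SubElement` call both creates the child and attaches it to its parent, so the tree is assembled top-down in four lines.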
def fade_speed(self, value): """ Setter for **self.__fade_speed** attribute. :param value: Attribute value. :type value: float """ if value is not None: assert type(value) is float, "'{0}' attribute: '{1}' type is not 'float'!".format("fade_speed", value) assert value >= 0, "'{0}' attribute: '{1}' need to be exactly positive!".format("fade_speed", value) self.__fade_speed = value
Setter for **self.__fade_speed** attribute. :param value: Attribute value. :type value: float
Below is the instruction that describes the task: ### Input: Setter for **self.__fade_speed** attribute. :param value: Attribute value. :type value: float ### Response: def fade_speed(self, value): """ Setter for **self.__fade_speed** attribute. :param value: Attribute value. :type value: float """ if value is not None: assert type(value) is float, "'{0}' attribute: '{1}' type is not 'float'!".format("fade_speed", value) assert value >= 0, "'{0}' attribute: '{1}' need to be exactly positive!".format("fade_speed", value) self.__fade_speed = value
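The setter pattern above (type assertion, range assertion, then assignment) works the same on a plain `@property`. A minimal sketch — the class name is hypothetical, and whether a `None` value clears the attribute in the original depends on indentation the flattened snippet leaves ambiguous; here `None` is stored as-is:

```python
class Fader(object):
    """Minimal holder demonstrating the guarded setter above."""

    def __init__(self):
        self.__fade_speed = 0.15

    @property
    def fade_speed(self):
        return self.__fade_speed

    @fade_speed.setter
    def fade_speed(self, value):
        if value is not None:
            assert type(value) is float, \
                "'fade_speed' attribute: '{0}' type is not 'float'!".format(value)
            assert value >= 0, \
                "'fade_speed' attribute: '{0}' must be positive!".format(value)
        self.__fade_speed = value

f = Fader()
f.fade_speed = 0.5
print(f.fade_speed)  # → 0.5
```

Note that `assert`-based validation disappears under `python -O`; raising `TypeError`/`ValueError` would survive optimization, but the sketch keeps the original's style.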
def main(argv): """ Main program. @return: none """ global g_script_name g_script_name = os.path.basename(argv[0]) parse_config_file() parse_args(argv) url = 'https://0xdata.atlassian.net/rest/api/2/search?jql=sprint="' + urllib.quote(g_sprint) + '"&maxResults=1000' r = requests.get(url, auth=(g_user, g_pass)) if (r.status_code != 200): print("ERROR: status code is " + str(r.status_code)) sys.exit(1) j = r.json() issues = j[u'issues'] pm = PeopleManager() for issue in issues: pm.add(issue) pm.emit() print("")
Main program. @return: none
Below is the instruction that describes the task: ### Input: Main program. @return: none ### Response: def main(argv): """ Main program. @return: none """ global g_script_name g_script_name = os.path.basename(argv[0]) parse_config_file() parse_args(argv) url = 'https://0xdata.atlassian.net/rest/api/2/search?jql=sprint="' + urllib.quote(g_sprint) + '"&maxResults=1000' r = requests.get(url, auth=(g_user, g_pass)) if (r.status_code != 200): print("ERROR: status code is " + str(r.status_code)) sys.exit(1) j = r.json() issues = j[u'issues'] pm = PeopleManager() for issue in issues: pm.add(issue) pm.emit() print("")
def register(self, prefix, viewset, basename, factory=None, permission=None): """ Factory and permission are likely only going to exist until I have enough time to write a permissions module for PRF. :param prefix: the uri route prefix. :param viewset: The ViewSet class to route. :param basename: Used to name the route in pyramid. :param factory: Optional, root factory to be used as the context to the route. :param permission: Optional, permission to assign the route. """ lookup = self.get_lookup(viewset) routes = self.get_routes(viewset) for route in routes: # Only actions which actually exist on the viewset will be bound mapping = self.get_method_map(viewset, route.mapping) if not mapping: continue # empty viewset url = route.url.format( prefix=prefix, lookup=lookup, trailing_slash=self.trailing_slash ) view = viewset.as_view(mapping, **route.initkwargs) name = route.name.format(basename=basename) if factory: self.configurator.add_route(name, url, factory=factory) else: self.configurator.add_route(name, url) self.configurator.add_view(view, route_name=name, permission=permission)
Factory and permission are likely only going to exist until I have enough time to write a permissions module for PRF. :param prefix: the uri route prefix. :param viewset: The ViewSet class to route. :param basename: Used to name the route in pyramid. :param factory: Optional, root factory to be used as the context to the route. :param permission: Optional, permission to assign the route.
Below is the instruction that describes the task: ### Input: Factory and permission are likely only going to exist until I have enough time to write a permissions module for PRF. :param prefix: the uri route prefix. :param viewset: The ViewSet class to route. :param basename: Used to name the route in pyramid. :param factory: Optional, root factory to be used as the context to the route. :param permission: Optional, permission to assign the route. ### Response: def register(self, prefix, viewset, basename, factory=None, permission=None): """ Factory and permission are likely only going to exist until I have enough time to write a permissions module for PRF. :param prefix: the uri route prefix. :param viewset: The ViewSet class to route. :param basename: Used to name the route in pyramid. :param factory: Optional, root factory to be used as the context to the route. :param permission: Optional, permission to assign the route. """ lookup = self.get_lookup(viewset) routes = self.get_routes(viewset) for route in routes: # Only actions which actually exist on the viewset will be bound mapping = self.get_method_map(viewset, route.mapping) if not mapping: continue # empty viewset url = route.url.format( prefix=prefix, lookup=lookup, trailing_slash=self.trailing_slash ) view = viewset.as_view(mapping, **route.initkwargs) name = route.name.format(basename=basename) if factory: self.configurator.add_route(name, url, factory=factory) else: self.configurator.add_route(name, url) self.configurator.add_view(view, route_name=name, permission=permission)
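Each route URL above comes from a DRF-style template expanded with the prefix, lookup pattern, and trailing-slash setting. That expansion isolated — the template strings and helper name are illustrative, since the real `Route` objects come from the router's `get_routes()`:

```python
def format_route_url(template, prefix, lookup, trailing_slash=True):
    """Expand a route URL template the way register() does when it
    builds each pyramid route."""
    return template.format(prefix=prefix, lookup=lookup,
                           trailing_slash='/' if trailing_slash else '')

print(format_route_url('/{prefix}{trailing_slash}', 'users', '{id}'))           # → /users/
print(format_route_url('/{prefix}/{lookup}{trailing_slash}', 'users', '{id}'))  # → /users/{id}/
```

The lookup value can itself be a pyramid pattern like `{id}`; `str.format` substitutes it verbatim without re-scanning, so the braces survive into the route definition.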
def unlocked(self): """ Is the store unlocked so that I can decrypt the content? """ if self.password is not None: return bool(self.password) else: if ( "UNLOCK" in os.environ and os.environ["UNLOCK"] and self.config_key in self.config and self.config[self.config_key] ): log.debug("Trying to use environmental " "variable to unlock wallet") self.unlock(os.environ.get("UNLOCK")) return bool(self.password) return False
Is the store unlocked so that I can decrypt the content?
Below is the instruction that describes the task: ### Input: Is the store unlocked so that I can decrypt the content? ### Response: def unlocked(self): """ Is the store unlocked so that I can decrypt the content? """ if self.password is not None: return bool(self.password) else: if ( "UNLOCK" in os.environ and os.environ["UNLOCK"] and self.config_key in self.config and self.config[self.config_key] ): log.debug("Trying to use environmental " "variable to unlock wallet") self.unlock(os.environ.get("UNLOCK")) return bool(self.password) return False
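The fallback path above unlocks from an `UNLOCK` environment variable when one is set and non-empty. The lookup isolated — the helper name is hypothetical, and the real code additionally requires the store's config key to be present before attempting the unlock:

```python
import os

def passphrase_from_env(var_name="UNLOCK"):
    """Return a non-empty passphrase from the environment, or None —
    the same truthiness test `unlocked` applies to os.environ."""
    value = os.environ.get(var_name)
    return value if value else None

os.environ["UNLOCK"] = "hunter2"  # simulate an externally provided secret
print(passphrase_from_env())  # → hunter2
```

Treating an empty string the same as an unset variable matches the snippet's `os.environ["UNLOCK"] and ...` guard.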
def rollback(*args, **kwargs): ''' Restore previous revision of Conda environment according to most recent action in :attr:`MICRODROP_CONDA_ACTIONS`. .. versionchanged:: 0.18 Add support for action revision files compressed using ``bz2``. .. versionchanged:: 0.24 Remove channels argument. Use Conda channels as configured in Conda environment. Note that channels can still be explicitly set through :data:`*args`. Parameters ---------- *args Extra arguments to pass to Conda ``install`` roll-back command. Returns ------- int, dict Revision after roll back and Conda installation log object (from JSON Conda install output). See also -------- `wheeler-microfluidics/microdrop#200 <https://github.com/wheeler-microfluidics/microdrop/issues/200>` ''' action_files = MICRODROP_CONDA_ACTIONS.files() if not action_files: # No action files, return current revision. logger.debug('No rollback actions have been recorded.') revisions_js = ch.conda_exec('list', '--revisions', '--json', verbose=False) revisions = json.loads(revisions_js) return revisions[-1]['rev'] # Get file associated with most recent action. cre_rev = re.compile(r'rev(?P<rev>\d+)') action_file = sorted([(int(cre_rev.match(file_i.namebase).group('rev')), file_i) for file_i in action_files if cre_rev.match(file_i.namebase)], reverse=True)[0] # Do rollback (i.e., install state of previous revision). if action_file.ext.lower() == '.bz2': # Assume file is compressed using bz2. with bz2.BZ2File(action_file, mode='r') as input_: action = json.load(input_) else: # Assume it is raw JSON. with action_file.open('r') as input_: action = json.load(input_) rollback_revision = action['revisions'][-2] conda_args = (['install', '--json'] + list(args) + ['--revision', str(rollback_revision)]) install_log_js = ch.conda_exec(*conda_args, verbose=False) install_log = json.loads(install_log_js.split('\x00')[-1]) logger.debug('Rolled back to revision %s', rollback_revision) return rollback_revision, install_log
Restore previous revision of Conda environment according to most recent action in :attr:`MICRODROP_CONDA_ACTIONS`. .. versionchanged:: 0.18 Add support for action revision files compressed using ``bz2``. .. versionchanged:: 0.24 Remove channels argument. Use Conda channels as configured in Conda environment. Note that channels can still be explicitly set through :data:`*args`. Parameters ---------- *args Extra arguments to pass to Conda ``install`` roll-back command. Returns ------- int, dict Revision after roll back and Conda installation log object (from JSON Conda install output). See also -------- `wheeler-microfluidics/microdrop#200 <https://github.com/wheeler-microfluidics/microdrop/issues/200>`
Below is the instruction that describes the task: ### Input: Restore previous revision of Conda environment according to most recent action in :attr:`MICRODROP_CONDA_ACTIONS`. .. versionchanged:: 0.18 Add support for action revision files compressed using ``bz2``. .. versionchanged:: 0.24 Remove channels argument. Use Conda channels as configured in Conda environment. Note that channels can still be explicitly set through :data:`*args`. Parameters ---------- *args Extra arguments to pass to Conda ``install`` roll-back command. Returns ------- int, dict Revision after roll back and Conda installation log object (from JSON Conda install output). See also -------- `wheeler-microfluidics/microdrop#200 <https://github.com/wheeler-microfluidics/microdrop/issues/200>` ### Response: def rollback(*args, **kwargs): ''' Restore previous revision of Conda environment according to most recent action in :attr:`MICRODROP_CONDA_ACTIONS`. .. versionchanged:: 0.18 Add support for action revision files compressed using ``bz2``. .. versionchanged:: 0.24 Remove channels argument. Use Conda channels as configured in Conda environment. Note that channels can still be explicitly set through :data:`*args`. Parameters ---------- *args Extra arguments to pass to Conda ``install`` roll-back command. Returns ------- int, dict Revision after roll back and Conda installation log object (from JSON Conda install output). See also -------- `wheeler-microfluidics/microdrop#200 <https://github.com/wheeler-microfluidics/microdrop/issues/200>` ''' action_files = MICRODROP_CONDA_ACTIONS.files() if not action_files: # No action files, return current revision. logger.debug('No rollback actions have been recorded.') revisions_js = ch.conda_exec('list', '--revisions', '--json', verbose=False) revisions = json.loads(revisions_js) return revisions[-1]['rev'] # Get file associated with most recent action.
cre_rev = re.compile(r'rev(?P<rev>\d+)') action_file = sorted([(int(cre_rev.match(file_i.namebase).group('rev')), file_i) for file_i in action_files if cre_rev.match(file_i.namebase)], reverse=True)[0] # Do rollback (i.e., install state of previous revision). if action_file.ext.lower() == '.bz2': # Assume file is compressed using bz2. with bz2.BZ2File(action_file, mode='r') as input_: action = json.load(input_) else: # Assume it is raw JSON. with action_file.open('r') as input_: action = json.load(input_) rollback_revision = action['revisions'][-2] conda_args = (['install', '--json'] + list(args) + ['--revision', str(rollback_revision)]) install_log_js = ch.conda_exec(*conda_args, verbose=False) install_log = json.loads(install_log_js.split('\x00')[-1]) logger.debug('Rolled back to revision %s', rollback_revision) return rollback_revision, install_log
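Selecting the action file in `rollback` boils down to parsing `revN` names and taking the largest N. That step isolated on plain strings — the real code sorts path objects by their `namebase` and then reads the chosen file:

```python
import re

def latest_revision_name(names):
    """Pick the highest-numbered 'revN' entry, as rollback() does when
    choosing the most recent action file; returns None if none match."""
    cre_rev = re.compile(r'rev(?P<rev>\d+)$')
    matched = [(int(m.group('rev')), name)
               for name in names
               for m in [cre_rev.match(name)] if m]
    return max(matched)[1] if matched else None

print(latest_revision_name(['rev2', 'rev10', 'rev3', 'notes']))  # → rev10
```

Parsing the number out and comparing as `int` is what makes `rev10` beat `rev3`; a plain lexical sort on the names would get that backwards.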
def join(self, column_label, other, other_label=None): """Creates a new table with the columns of self and other, containing rows for all values of a column that appear in both tables. Args: ``column_label`` (``str``): label of column in self that is used to join rows of ``other``. ``other``: Table object to join with self on matching values of ``column_label``. Kwargs: ``other_label`` (``str``): default None, assumes ``column_label``. Otherwise in ``other`` used to join rows. Returns: New table self joined with ``other`` by matching values in ``column_label`` and ``other_label``. If the resulting join is empty, returns None. >>> table = Table().with_columns('a', make_array(9, 3, 3, 1), ... 'b', make_array(1, 2, 2, 10), ... 'c', make_array(3, 4, 5, 6)) >>> table a | b | c 9 | 1 | 3 3 | 2 | 4 3 | 2 | 5 1 | 10 | 6 >>> table2 = Table().with_columns( 'a', make_array(9, 1, 1, 1), ... 'd', make_array(1, 2, 2, 10), ... 'e', make_array(3, 4, 5, 6)) >>> table2 a | d | e 9 | 1 | 3 1 | 2 | 4 1 | 2 | 5 1 | 10 | 6 >>> table.join('a', table2) a | b | c | d | e 1 | 10 | 6 | 2 | 4 1 | 10 | 6 | 2 | 5 1 | 10 | 6 | 10 | 6 9 | 1 | 3 | 1 | 3 >>> table.join('a', table2, 'a') # Equivalent to previous join a | b | c | d | e 1 | 10 | 6 | 2 | 4 1 | 10 | 6 | 2 | 5 1 | 10 | 6 | 10 | 6 9 | 1 | 3 | 1 | 3 >>> table.join('a', table2, 'd') # Repeat column labels relabeled a | b | c | a_2 | e 1 | 10 | 6 | 9 | 3 >>> table2 #table2 has three rows with a = 1 a | d | e 9 | 1 | 3 1 | 2 | 4 1 | 2 | 5 1 | 10 | 6 >>> table #table has only one row with a = 1 a | b | c 9 | 1 | 3 3 | 2 | 4 3 | 2 | 5 1 | 10 | 6 """ if self.num_rows == 0 or other.num_rows == 0: return None if not other_label: other_label = column_label self_rows = self.index_by(column_label) other_rows = other.index_by(other_label) # Gather joined rows from self_rows that have join values in other_rows joined_rows = [] for v, rows in self_rows.items(): if v in other_rows: joined_rows += [row + o for row in rows for o in other_rows[v]] if not 
joined_rows: return None # Build joined table self_labels = list(self.labels) other_labels = [self._unused_label(s) for s in other.labels] other_labels_map = dict(zip(other.labels, other_labels)) joined = type(self)(self_labels + other_labels).with_rows(joined_rows) # Copy formats from both tables joined._formats.update(self._formats) for label in other._formats: joined._formats[other_labels_map[label]] = other._formats[label] # Remove redundant column, but perhaps save its formatting del joined[other_labels_map[other_label]] if column_label not in self._formats and other_label in other._formats: joined._formats[column_label] = other._formats[other_label] return joined.move_to_start(column_label).sort(column_label)
Creates a new table with the columns of self and other, containing rows for all values of a column that appear in both tables. Args: ``column_label`` (``str``): label of column in self that is used to join rows of ``other``. ``other``: Table object to join with self on matching values of ``column_label``. Kwargs: ``other_label`` (``str``): default None, assumes ``column_label``. Otherwise in ``other`` used to join rows. Returns: New table self joined with ``other`` by matching values in ``column_label`` and ``other_label``. If the resulting join is empty, returns None. >>> table = Table().with_columns('a', make_array(9, 3, 3, 1), ... 'b', make_array(1, 2, 2, 10), ... 'c', make_array(3, 4, 5, 6)) >>> table a | b | c 9 | 1 | 3 3 | 2 | 4 3 | 2 | 5 1 | 10 | 6 >>> table2 = Table().with_columns( 'a', make_array(9, 1, 1, 1), ... 'd', make_array(1, 2, 2, 10), ... 'e', make_array(3, 4, 5, 6)) >>> table2 a | d | e 9 | 1 | 3 1 | 2 | 4 1 | 2 | 5 1 | 10 | 6 >>> table.join('a', table2) a | b | c | d | e 1 | 10 | 6 | 2 | 4 1 | 10 | 6 | 2 | 5 1 | 10 | 6 | 10 | 6 9 | 1 | 3 | 1 | 3 >>> table.join('a', table2, 'a') # Equivalent to previous join a | b | c | d | e 1 | 10 | 6 | 2 | 4 1 | 10 | 6 | 2 | 5 1 | 10 | 6 | 10 | 6 9 | 1 | 3 | 1 | 3 >>> table.join('a', table2, 'd') # Repeat column labels relabeled a | b | c | a_2 | e 1 | 10 | 6 | 9 | 3 >>> table2 #table2 has three rows with a = 1 a | d | e 9 | 1 | 3 1 | 2 | 4 1 | 2 | 5 1 | 10 | 6 >>> table #table has only one row with a = 1 a | b | c 9 | 1 | 3 3 | 2 | 4 3 | 2 | 5 1 | 10 | 6
Below is the instruction that describes the task: ### Input: Creates a new table with the columns of self and other, containing rows for all values of a column that appear in both tables. Args: ``column_label`` (``str``): label of column in self that is used to join rows of ``other``. ``other``: Table object to join with self on matching values of ``column_label``. Kwargs: ``other_label`` (``str``): default None, assumes ``column_label``. Otherwise in ``other`` used to join rows. Returns: New table self joined with ``other`` by matching values in ``column_label`` and ``other_label``. If the resulting join is empty, returns None. >>> table = Table().with_columns('a', make_array(9, 3, 3, 1), ... 'b', make_array(1, 2, 2, 10), ... 'c', make_array(3, 4, 5, 6)) >>> table a | b | c 9 | 1 | 3 3 | 2 | 4 3 | 2 | 5 1 | 10 | 6 >>> table2 = Table().with_columns( 'a', make_array(9, 1, 1, 1), ... 'd', make_array(1, 2, 2, 10), ... 'e', make_array(3, 4, 5, 6)) >>> table2 a | d | e 9 | 1 | 3 1 | 2 | 4 1 | 2 | 5 1 | 10 | 6 >>> table.join('a', table2) a | b | c | d | e 1 | 10 | 6 | 2 | 4 1 | 10 | 6 | 2 | 5 1 | 10 | 6 | 10 | 6 9 | 1 | 3 | 1 | 3 >>> table.join('a', table2, 'a') # Equivalent to previous join a | b | c | d | e 1 | 10 | 6 | 2 | 4 1 | 10 | 6 | 2 | 5 1 | 10 | 6 | 10 | 6 9 | 1 | 3 | 1 | 3 >>> table.join('a', table2, 'd') # Repeat column labels relabeled a | b | c | a_2 | e 1 | 10 | 6 | 9 | 3 >>> table2 #table2 has three rows with a = 1 a | d | e 9 | 1 | 3 1 | 2 | 4 1 | 2 | 5 1 | 10 | 6 >>> table #table has only one row with a = 1 a | b | c 9 | 1 | 3 3 | 2 | 4 3 | 2 | 5 1 | 10 | 6 ### Response: def join(self, column_label, other, other_label=None): """Creates a new table with the columns of self and other, containing rows for all values of a column that appear in both tables. Args: ``column_label`` (``str``): label of column in self that is used to join rows of ``other``. ``other``: Table object to join with self on matching values of ``column_label``.
Kwargs: ``other_label`` (``str``): default None, assumes ``column_label``. Otherwise in ``other`` used to join rows. Returns: New table self joined with ``other`` by matching values in ``column_label`` and ``other_label``. If the resulting join is empty, returns None. >>> table = Table().with_columns('a', make_array(9, 3, 3, 1), ... 'b', make_array(1, 2, 2, 10), ... 'c', make_array(3, 4, 5, 6)) >>> table a | b | c 9 | 1 | 3 3 | 2 | 4 3 | 2 | 5 1 | 10 | 6 >>> table2 = Table().with_columns( 'a', make_array(9, 1, 1, 1), ... 'd', make_array(1, 2, 2, 10), ... 'e', make_array(3, 4, 5, 6)) >>> table2 a | d | e 9 | 1 | 3 1 | 2 | 4 1 | 2 | 5 1 | 10 | 6 >>> table.join('a', table2) a | b | c | d | e 1 | 10 | 6 | 2 | 4 1 | 10 | 6 | 2 | 5 1 | 10 | 6 | 10 | 6 9 | 1 | 3 | 1 | 3 >>> table.join('a', table2, 'a') # Equivalent to previous join a | b | c | d | e 1 | 10 | 6 | 2 | 4 1 | 10 | 6 | 2 | 5 1 | 10 | 6 | 10 | 6 9 | 1 | 3 | 1 | 3 >>> table.join('a', table2, 'd') # Repeat column labels relabeled a | b | c | a_2 | e 1 | 10 | 6 | 9 | 3 >>> table2 #table2 has three rows with a = 1 a | d | e 9 | 1 | 3 1 | 2 | 4 1 | 2 | 5 1 | 10 | 6 >>> table #table has only one row with a = 1 a | b | c 9 | 1 | 3 3 | 2 | 4 3 | 2 | 5 1 | 10 | 6 """ if self.num_rows == 0 or other.num_rows == 0: return None if not other_label: other_label = column_label self_rows = self.index_by(column_label) other_rows = other.index_by(other_label) # Gather joined rows from self_rows that have join values in other_rows joined_rows = [] for v, rows in self_rows.items(): if v in other_rows: joined_rows += [row + o for row in rows for o in other_rows[v]] if not joined_rows: return None # Build joined table self_labels = list(self.labels) other_labels = [self._unused_label(s) for s in other.labels] other_labels_map = dict(zip(other.labels, other_labels)) joined = type(self)(self_labels + other_labels).with_rows(joined_rows) # Copy formats from both tables joined._formats.update(self._formats) for label in other._formats: 
joined._formats[other_labels_map[label]] = other._formats[label] # Remove redundant column, but perhaps save its formatting del joined[other_labels_map[other_label]] if column_label not in self._formats and other_label in other._formats: joined._formats[column_label] = other._formats[other_label] return joined.move_to_start(column_label).sort(column_label)
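The core of `join` above is a hash join: index both tables by the join column, then pair every matching row group. On plain lists of dicts the same idea can be sketched as follows (`hash_join` is an illustrative helper, not part of the Table API, and it omits the sorting and column-relabeling the real method does):

```python
def hash_join(left, right, key):
    """Inner join two lists of dicts on `key`, pairing every matching row."""
    index = {}
    for row in right:
        index.setdefault(row[key], []).append(row)
    joined = []
    for row in left:
        for match in index.get(row[key], []):
            merged = dict(match)   # right-hand columns...
            merged.update(row)     # ...overridden by left-hand ones on clashes
            joined.append(merged)
    return joined

left = [{'a': 9, 'b': 1}, {'a': 1, 'b': 10}]
right = [{'a': 9, 'e': 3}, {'a': 1, 'e': 4}, {'a': 1, 'e': 5}]
print(hash_join(left, right, 'a'))
# [{'a': 9, 'e': 3, 'b': 1}, {'a': 1, 'e': 4, 'b': 10}, {'a': 1, 'e': 5, 'b': 10}]
```

As in the method above, rows whose key value appears in only one side are dropped, which is why an empty intersection yields an empty result.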
def create_table(self, tablename, columns, primary_key=None, force_recreate=False): """ :Parameters: - tablename: string - columns: list or tuples, with each element be a string like 'id INT NOT NULL UNIQUE' - primary_key: list or tuples, with elements be the column names - force_recreate: When table of the same name already exists, if this is True, drop that table; if False, raise exception :Return: Nothing """ if self.is_table_existed(tablename): if force_recreate: self.drop_table(tablename) else: raise MonSQLException('TABLE ALREADY EXISTS') columns_specs = ','.join(columns) if primary_key is not None: if len(primary_key) == 0: raise MonSQLException('PRIMARY KEY MUST AT LEAST CONTAINS ONE COLUMN') columns_specs += ',PRIMARY KEY(%s)' %(','.join(primary_key)) sql = 'CREATE TABLE %s(%s)' %(tablename, columns_specs) self.__cursor.execute(sql) self.__db.commit()
:Parameters: - tablename: string - columns: list or tuples, with each element being a string like 'id INT NOT NULL UNIQUE' - primary_key: list or tuples, with elements being the column names - force_recreate: When table of the same name already exists, if this is True, drop that table; if False, raise exception :Return: Nothing
Below is the instruction that describes the task: ### Input: :Parameters: - tablename: string - columns: list or tuples, with each element being a string like 'id INT NOT NULL UNIQUE' - primary_key: list or tuples, with elements being the column names - force_recreate: When table of the same name already exists, if this is True, drop that table; if False, raise exception :Return: Nothing ### Response: def create_table(self, tablename, columns, primary_key=None, force_recreate=False): """ :Parameters: - tablename: string - columns: list or tuples, with each element being a string like 'id INT NOT NULL UNIQUE' - primary_key: list or tuples, with elements being the column names - force_recreate: When table of the same name already exists, if this is True, drop that table; if False, raise exception :Return: Nothing """ if self.is_table_existed(tablename): if force_recreate: self.drop_table(tablename) else: raise MonSQLException('TABLE ALREADY EXISTS') columns_specs = ','.join(columns) if primary_key is not None: if len(primary_key) == 0: raise MonSQLException('PRIMARY KEY MUST AT LEAST CONTAIN ONE COLUMN') columns_specs += ',PRIMARY KEY(%s)' %(','.join(primary_key)) sql = 'CREATE TABLE %s(%s)' %(tablename, columns_specs) self.__cursor.execute(sql) self.__db.commit()
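The SQL text that `create_table` executes is assembled by plain string joins; a sketch of just that assembly step, pulled out as a standalone helper (`build_create_sql` is illustrative, not part of MonSQL, and raises `ValueError` where the original raises `MonSQLException`):

```python
def build_create_sql(tablename, columns, primary_key=None):
    """Mirror create_table's string assembly: column specs, optional PRIMARY KEY."""
    columns_specs = ','.join(columns)
    if primary_key is not None:
        if len(primary_key) == 0:
            raise ValueError('PRIMARY KEY MUST AT LEAST CONTAIN ONE COLUMN')
        columns_specs += ',PRIMARY KEY(%s)' % (','.join(primary_key))
    return 'CREATE TABLE %s(%s)' % (tablename, columns_specs)

print(build_create_sql('users',
                       ['id INT NOT NULL UNIQUE', 'name VARCHAR(64)'],
                       primary_key=['id']))
# CREATE TABLE users(id INT NOT NULL UNIQUE,name VARCHAR(64),PRIMARY KEY(id))
```

Note that identifiers are interpolated without quoting or escaping, so this pattern (like the method above) should only ever see trusted table and column names.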
def football_data(season='1314', data_set='football_data'): """Football data from English games since 1993. This downloads data from football-data.co.uk for the given season. """ def league2num(string): league_dict = {'E0':0, 'E1':1, 'E2': 2, 'E3': 3, 'EC':4} return league_dict[string] def football2num(string): if string in football_dict: return football_dict[string] else: football_dict[string] = len(football_dict)+1 return len(football_dict)+1 data_set_season = data_set + '_' + season data_resources[data_set_season] = copy.deepcopy(data_resources[data_set]) data_resources[data_set_season]['urls'][0]+=season + '/' start_year = int(season[0:2]) end_year = int(season[2:4]) files = ['E0.csv', 'E1.csv', 'E2.csv', 'E3.csv'] if start_year>4 and start_year < 93: files += ['EC.csv'] data_resources[data_set_season]['files'] = [files] if not data_available(data_set_season): download_data(data_set_season) from matplotlib import pyplot as pb for file in reversed(files): filename = os.path.join(data_path, data_set_season, file) # rewrite files removing blank rows. writename = os.path.join(data_path, data_set_season, 'temp.csv') input = open(filename, 'rb') output = open(writename, 'wb') writer = csv.writer(output) for row in csv.reader(input): if any(field.strip() for field in row): writer.writerow(row) input.close() output.close() table = np.loadtxt(writename,skiprows=1, usecols=(0, 1, 2, 3, 4, 5), converters = {0: league2num, 1: pb.datestr2num, 2:football2num, 3:football2num}, delimiter=',') X = table[:, :4] Y = table[:, 4:] return data_details_return({'X': X, 'Y': Y}, data_set)
Football data from English games since 1993. This downloads data from football-data.co.uk for the given season.
Below is the instruction that describes the task: ### Input: Football data from English games since 1993. This downloads data from football-data.co.uk for the given season. ### Response: def football_data(season='1314', data_set='football_data'): """Football data from English games since 1993. This downloads data from football-data.co.uk for the given season. """ def league2num(string): league_dict = {'E0':0, 'E1':1, 'E2': 2, 'E3': 3, 'EC':4} return league_dict[string] def football2num(string): if string in football_dict: return football_dict[string] else: football_dict[string] = len(football_dict)+1 return len(football_dict)+1 data_set_season = data_set + '_' + season data_resources[data_set_season] = copy.deepcopy(data_resources[data_set]) data_resources[data_set_season]['urls'][0]+=season + '/' start_year = int(season[0:2]) end_year = int(season[2:4]) files = ['E0.csv', 'E1.csv', 'E2.csv', 'E3.csv'] if start_year>4 and start_year < 93: files += ['EC.csv'] data_resources[data_set_season]['files'] = [files] if not data_available(data_set_season): download_data(data_set_season) from matplotlib import pyplot as pb for file in reversed(files): filename = os.path.join(data_path, data_set_season, file) # rewrite files removing blank rows. writename = os.path.join(data_path, data_set_season, 'temp.csv') input = open(filename, 'rb') output = open(writename, 'wb') writer = csv.writer(output) for row in csv.reader(input): if any(field.strip() for field in row): writer.writerow(row) input.close() output.close() table = np.loadtxt(writename,skiprows=1, usecols=(0, 1, 2, 3, 4, 5), converters = {0: league2num, 1: pb.datestr2num, 2:football2num, 3:football2num}, delimiter=',') X = table[:, :4] Y = table[:, 4:] return data_details_return({'X': X, 'Y': Y}, data_set)
def render_compressed(self, package, package_name, package_type): """Render HTML for the package. If ``PIPELINE_ENABLED`` is ``True``, this will render the package's output file (using :py:meth:`render_compressed_output`). Otherwise, this will render the package's source files (using :py:meth:`render_compressed_sources`). Subclasses can override this method to provide custom behavior for determining what to render. """ if settings.PIPELINE_ENABLED: return self.render_compressed_output(package, package_name, package_type) else: return self.render_compressed_sources(package, package_name, package_type)
Render HTML for the package. If ``PIPELINE_ENABLED`` is ``True``, this will render the package's output file (using :py:meth:`render_compressed_output`). Otherwise, this will render the package's source files (using :py:meth:`render_compressed_sources`). Subclasses can override this method to provide custom behavior for determining what to render.
Below is the instruction that describes the task: ### Input: Render HTML for the package. If ``PIPELINE_ENABLED`` is ``True``, this will render the package's output file (using :py:meth:`render_compressed_output`). Otherwise, this will render the package's source files (using :py:meth:`render_compressed_sources`). Subclasses can override this method to provide custom behavior for determining what to render. ### Response: def render_compressed(self, package, package_name, package_type): """Render HTML for the package. If ``PIPELINE_ENABLED`` is ``True``, this will render the package's output file (using :py:meth:`render_compressed_output`). Otherwise, this will render the package's source files (using :py:meth:`render_compressed_sources`). Subclasses can override this method to provide custom behavior for determining what to render. """ if settings.PIPELINE_ENABLED: return self.render_compressed_output(package, package_name, package_type) else: return self.render_compressed_sources(package, package_name, package_type)
def configure(self, options, conf): """Configure plugin. """ if not self.can_configure: return self.enabled = options.detailedErrors self.conf = conf
Configure plugin.
Below is the instruction that describes the task: ### Input: Configure plugin. ### Response: def configure(self, options, conf): """Configure plugin. """ if not self.can_configure: return self.enabled = options.detailedErrors self.conf = conf
def _allocated_entries_bitmap(self): '''Creates a generator that returns all allocated entries in the bitmap. Yields: int: The bit index of the allocated entries. ''' for entry_number in range(len(self._bitmap) * 8): if self.entry_allocated(entry_number): yield entry_number
Creates a generator that returns all allocated entries in the bitmap. Yields: int: The bit index of the allocated entries.
Below is the instruction that describes the task: ### Input: Creates a generator that returns all allocated entries in the bitmap. Yields: int: The bit index of the allocated entries. ### Response: def _allocated_entries_bitmap(self): '''Creates a generator that returns all allocated entries in the bitmap. Yields: int: The bit index of the allocated entries. ''' for entry_number in range(len(self._bitmap) * 8): if self.entry_allocated(entry_number): yield entry_number
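The generator above delegates to an `entry_allocated` method that is not shown; a self-contained sketch of the same scan, assuming LSB-first bit order within each byte (the real class may use the opposite convention):

```python
def allocated_entries(bitmap):
    """Yield indices of set bits, LSB-first within each byte (assumed order)."""
    for entry_number in range(len(bitmap) * 8):
        byte_index, bit = divmod(entry_number, 8)
        if bitmap[byte_index] & (1 << bit):
            yield entry_number

# 0x05 = 0b00000101 -> bits 0 and 2 are set
print(list(allocated_entries(b"\x05")))  # [0, 2]
```

Because the loop bound is `len(bitmap) * 8`, trailing padding bits in the last byte are reported too if they happen to be set, just as in the method above.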
def detect_Concordia(dat_orig, s_freq, time, opts): """Spindle detection, experimental Concordia method. Similar to Moelle 2011 and Nir2011. Parameters ---------- dat_orig : ndarray (dtype='float') vector with the data for one channel s_freq : float sampling frequency opts : instance of 'DetectSpindle' 'det_butter' : dict parameters for 'butter', 'moving_rms' : dict parameters for 'moving_rms' 'smooth' : dict parameters for 'smooth' 'det_thresh' : float low detection threshold 'det_thresh_hi' : float high detection threshold 'sel_thresh' : float selection threshold 'duration' : tuple of float min and max duration of spindles Returns ------- list of dict list of detected spindles dict 'det_value_lo', 'det_value_hi' with detection values, 'sel_value' with selection value float spindle density, per 30-s epoch """ dat_det = transform_signal(dat_orig, s_freq, 'butter', opts.det_butter) dat_det = transform_signal(dat_det, s_freq, 'moving_rms', opts.moving_rms) dat_det = transform_signal(dat_det, s_freq, 'smooth', opts.smooth) det_value_lo = define_threshold(dat_det, s_freq, 'mean+std', opts.det_thresh) det_value_hi = define_threshold(dat_det, s_freq, 'mean+std', opts.det_thresh_hi) sel_value = define_threshold(dat_det, s_freq, 'mean+std', opts.sel_thresh) events = detect_events(dat_det, 'between_thresh', value=(det_value_lo, det_value_hi)) if events is not None: events = _merge_close(dat_det, events, time, opts.tolerance) events = select_events(dat_det, events, 'above_thresh', sel_value) events = within_duration(events, time, opts.duration) events = _merge_close(dat_det, events, time, opts.min_interval) events = remove_straddlers(events, time, s_freq) power_peaks = peak_in_power(events, dat_orig, s_freq, opts.power_peaks) powers = power_in_band(events, dat_orig, s_freq, opts.frequency) sp_in_chan = make_spindles(events, power_peaks, powers, dat_det, dat_orig, time, s_freq) else: lg.info('No spindle found') sp_in_chan = [] values = {'det_value_lo': det_value_lo, 
'sel_value': sel_value} density = len(sp_in_chan) * s_freq * 30 / len(dat_orig) return sp_in_chan, values, density
Spindle detection, experimental Concordia method. Similar to Moelle 2011 and Nir2011. Parameters ---------- dat_orig : ndarray (dtype='float') vector with the data for one channel s_freq : float sampling frequency opts : instance of 'DetectSpindle' 'det_butter' : dict parameters for 'butter', 'moving_rms' : dict parameters for 'moving_rms' 'smooth' : dict parameters for 'smooth' 'det_thresh' : float low detection threshold 'det_thresh_hi' : float high detection threshold 'sel_thresh' : float selection threshold 'duration' : tuple of float min and max duration of spindles Returns ------- list of dict list of detected spindles dict 'det_value_lo', 'det_value_hi' with detection values, 'sel_value' with selection value float spindle density, per 30-s epoch
Below is the instruction that describes the task: ### Input: Spindle detection, experimental Concordia method. Similar to Moelle 2011 and Nir2011. Parameters ---------- dat_orig : ndarray (dtype='float') vector with the data for one channel s_freq : float sampling frequency opts : instance of 'DetectSpindle' 'det_butter' : dict parameters for 'butter', 'moving_rms' : dict parameters for 'moving_rms' 'smooth' : dict parameters for 'smooth' 'det_thresh' : float low detection threshold 'det_thresh_hi' : float high detection threshold 'sel_thresh' : float selection threshold 'duration' : tuple of float min and max duration of spindles Returns ------- list of dict list of detected spindles dict 'det_value_lo', 'det_value_hi' with detection values, 'sel_value' with selection value float spindle density, per 30-s epoch ### Response: def detect_Concordia(dat_orig, s_freq, time, opts): """Spindle detection, experimental Concordia method. Similar to Moelle 2011 and Nir2011. Parameters ---------- dat_orig : ndarray (dtype='float') vector with the data for one channel s_freq : float sampling frequency opts : instance of 'DetectSpindle' 'det_butter' : dict parameters for 'butter', 'moving_rms' : dict parameters for 'moving_rms' 'smooth' : dict parameters for 'smooth' 'det_thresh' : float low detection threshold 'det_thresh_hi' : float high detection threshold 'sel_thresh' : float selection threshold 'duration' : tuple of float min and max duration of spindles Returns ------- list of dict list of detected spindles dict 'det_value_lo', 'det_value_hi' with detection values, 'sel_value' with selection value float spindle density, per 30-s epoch """ dat_det = transform_signal(dat_orig, s_freq, 'butter', opts.det_butter) dat_det = transform_signal(dat_det, s_freq, 'moving_rms', opts.moving_rms) dat_det = transform_signal(dat_det, s_freq, 'smooth', opts.smooth) det_value_lo = define_threshold(dat_det, s_freq, 'mean+std', opts.det_thresh) det_value_hi = define_threshold(dat_det,
s_freq, 'mean+std', opts.det_thresh_hi) sel_value = define_threshold(dat_det, s_freq, 'mean+std', opts.sel_thresh) events = detect_events(dat_det, 'between_thresh', value=(det_value_lo, det_value_hi)) if events is not None: events = _merge_close(dat_det, events, time, opts.tolerance) events = select_events(dat_det, events, 'above_thresh', sel_value) events = within_duration(events, time, opts.duration) events = _merge_close(dat_det, events, time, opts.min_interval) events = remove_straddlers(events, time, s_freq) power_peaks = peak_in_power(events, dat_orig, s_freq, opts.power_peaks) powers = power_in_band(events, dat_orig, s_freq, opts.frequency) sp_in_chan = make_spindles(events, power_peaks, powers, dat_det, dat_orig, time, s_freq) else: lg.info('No spindle found') sp_in_chan = [] values = {'det_value_lo': det_value_lo, 'sel_value': sel_value} density = len(sp_in_chan) * s_freq * 30 / len(dat_orig) return sp_in_chan, values, density
def OnTableListToggle(self, event): """Table list toggle event handler""" table_list_panel_info = \ self.main_window._mgr.GetPane("table_list_panel") self._toggle_pane(table_list_panel_info) event.Skip()
Table list toggle event handler
Below is the instruction that describes the task: ### Input: Table list toggle event handler ### Response: def OnTableListToggle(self, event): """Table list toggle event handler""" table_list_panel_info = \ self.main_window._mgr.GetPane("table_list_panel") self._toggle_pane(table_list_panel_info) event.Skip()
def select_file_ids(db, user_id): """ Get all file ids for a user. """ return list( db.execute( select([files.c.id]) .where(files.c.user_id == user_id) ) )
Get all file ids for a user.
Below is the instruction that describes the task: ### Input: Get all file ids for a user. ### Response: def select_file_ids(db, user_id): """ Get all file ids for a user. """ return list( db.execute( select([files.c.id]) .where(files.c.user_id == user_id) ) )
def load(steps, reload=False): """ safely load steps in place, excluding those that fail Args: steps: the steps to load """ # work on collections by default for fewer isinstance() calls per call to load() if reload: _STEP_CACHE.clear() if callable(steps): steps = steps() if not isinstance(steps, collections.Iterable): return load([steps])[0] loaded = [] for s in steps: digest = s._digest if digest in _STEP_CACHE: loaded.append(_STEP_CACHE[digest]) else: try: s.load() _STEP_CACHE[digest] = s loaded.append(s) except(Exception): logging.warn('Error during step load:\n%s' % util.indent(traceback.format_exc())) return loaded
safely load steps in place, excluding those that fail Args: steps: the steps to load
Below is the instruction that describes the task: ### Input: safely load steps in place, excluding those that fail Args: steps: the steps to load ### Response: def load(steps, reload=False): """ safely load steps in place, excluding those that fail Args: steps: the steps to load """ # work on collections by default for fewer isinstance() calls per call to load() if reload: _STEP_CACHE.clear() if callable(steps): steps = steps() if not isinstance(steps, collections.Iterable): return load([steps])[0] loaded = [] for s in steps: digest = s._digest if digest in _STEP_CACHE: loaded.append(_STEP_CACHE[digest]) else: try: s.load() _STEP_CACHE[digest] = s loaded.append(s) except(Exception): logging.warn('Error during step load:\n%s' % util.indent(traceback.format_exc())) return loaded
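The caching behaviour of `load` — reuse a step whose digest is already cached, silently drop any step whose `load()` raises — can be exercised with a stub step class (`FakeStep` and `load_steps` below are purely illustrative, and omit the module-level cache and logging):

```python
class FakeStep:
    def __init__(self, digest, fails=False):
        self._digest = digest
        self.fails = fails
        self.loaded = False

    def load(self):
        if self.fails:
            raise RuntimeError("load failed")
        self.loaded = True

def load_steps(steps, cache):
    """Simplified version of load(): cache by digest, exclude failures."""
    loaded = []
    for s in steps:
        if s._digest in cache:
            loaded.append(cache[s._digest])
        else:
            try:
                s.load()
            except Exception:
                continue  # failing steps are excluded, not fatal
            cache[s._digest] = s
            loaded.append(s)
    return loaded

cache = {}
good, bad = FakeStep("a"), FakeStep("b", fails=True)
print([s._digest for s in load_steps([good, bad, FakeStep("a")], cache)])
# ['a', 'a']  -- the failing step is dropped, the duplicate digest reuses the cache
```

The second `"a"` step is never loaded at all: the cached instance is returned instead, which is exactly why the original caches by digest rather than by object identity.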
def transfer_header(self, fw=sys.stdout): """ transfer_header() copies header to a new file. print_header() creates a new header. """ print("\n".join(self.header), file=fw)
transfer_header() copies header to a new file. print_header() creates a new header.
Below is the instruction that describes the task: ### Input: transfer_header() copies header to a new file. print_header() creates a new header. ### Response: def transfer_header(self, fw=sys.stdout): """ transfer_header() copies header to a new file. print_header() creates a new header. """ print("\n".join(self.header), file=fw)
def load(self): """ Load the object defined by the plugin entry point """ print("[DEBUG] Loading plugin {} from {}".format(self.name, self.source)) import pydoc path, attr = self.source.split(":") module = pydoc.locate(path) return getattr(module, attr)
Load the object defined by the plugin entry point
Below is the instruction that describes the task: ### Input: Load the object defined by the plugin entry point ### Response: def load(self): """ Load the object defined by the plugin entry point """ print("[DEBUG] Loading plugin {} from {}".format(self.name, self.source)) import pydoc path, attr = self.source.split(":") module = pydoc.locate(path) return getattr(module, attr)
def update(self, params, ignore_set=False, overwrite=False): """Set instance values from dictionary. :param dict params: Click context params. :param bool ignore_set: Skip already-set values instead of raising AttributeError. :param bool overwrite: Allow overwriting already-set values. """ log = logging.getLogger(__name__) valid = {i[0] for i in self} for key, value in params.items(): if not hasattr(self, key): raise AttributeError("'{}' object has no attribute '{}'".format(self.__class__.__name__, key)) if key not in valid: message = "'{}' object does not support item assignment on '{}'" raise AttributeError(message.format(self.__class__.__name__, key)) if key in self._already_set: if ignore_set: log.debug('%s already set in config, skipping.', key) continue if not overwrite: message = "'{}' object does not support item re-assignment on '{}'" raise AttributeError(message.format(self.__class__.__name__, key)) setattr(self, key, value) self._already_set.add(key)
Set instance values from dictionary. :param dict params: Click context params. :param bool ignore_set: Skip already-set values instead of raising AttributeError. :param bool overwrite: Allow overwriting already-set values.
Below is the instruction that describes the task: ### Input: Set instance values from dictionary. :param dict params: Click context params. :param bool ignore_set: Skip already-set values instead of raising AttributeError. :param bool overwrite: Allow overwriting already-set values. ### Response: def update(self, params, ignore_set=False, overwrite=False): """Set instance values from dictionary. :param dict params: Click context params. :param bool ignore_set: Skip already-set values instead of raising AttributeError. :param bool overwrite: Allow overwriting already-set values. """ log = logging.getLogger(__name__) valid = {i[0] for i in self} for key, value in params.items(): if not hasattr(self, key): raise AttributeError("'{}' object has no attribute '{}'".format(self.__class__.__name__, key)) if key not in valid: message = "'{}' object does not support item assignment on '{}'" raise AttributeError(message.format(self.__class__.__name__, key)) if key in self._already_set: if ignore_set: log.debug('%s already set in config, skipping.', key) continue if not overwrite: message = "'{}' object does not support item re-assignment on '{}'" raise AttributeError(message.format(self.__class__.__name__, key)) setattr(self, key, value) self._already_set.add(key)
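The guard logic in `update` — unknown keys raise, already-set keys are skipped with `ignore_set` or rejected unless `overwrite` — can be shown in isolation with a hypothetical minimal config class (`MiniConfig` below is illustrative and omits the iterator-based valid-keys check and logging):

```python
class MiniConfig:
    """Illustrative stand-in for the Click config object above."""
    def __init__(self, **defaults):
        self._already_set = set()
        for key, value in defaults.items():
            setattr(self, key, value)

    def update(self, params, ignore_set=False, overwrite=False):
        for key, value in params.items():
            if not hasattr(self, key):
                raise AttributeError('no attribute {!r}'.format(key))
            if key in self._already_set:
                if ignore_set:
                    continue  # silently keep the first value
                if not overwrite:
                    raise AttributeError('re-assignment of {!r}'.format(key))
            setattr(self, key, value)
            self._already_set.add(key)

cfg = MiniConfig(verbose=False)
cfg.update({'verbose': True})                    # first assignment: allowed
cfg.update({'verbose': False}, ignore_set=True)  # already set: skipped
print(cfg.verbose)  # True
```

Defaults set in `__init__` are deliberately not recorded in `_already_set`, so the first `update` call may always override them — the same behaviour the original relies on for Click parameters.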
def log_to_file(log_path, log_urllib=False, limit=None): """ Add file_handler to logger""" log_path = log_path file_handler = logging.FileHandler(log_path) if limit: file_handler = RotatingFileHandler( log_path, mode='a', maxBytes=limit * 1024 * 1024, backupCount=2, encoding=None, delay=0) fmt = '[%(asctime)s %(filename)18s] %(levelname)-7s - %(message)7s' date_fmt = '%Y-%m-%d %H:%M:%S' formatter = logging.Formatter(fmt, datefmt=date_fmt) file_handler.setFormatter(formatter) logger.addHandler(file_handler) if log_urllib: urllib_logger.addHandler(file_handler) urllib_logger.setLevel(logging.DEBUG)
Add file_handler to logger
Below is the instruction that describes the task: ### Input: Add file_handler to logger ### Response: def log_to_file(log_path, log_urllib=False, limit=None): """ Add file_handler to logger""" log_path = log_path file_handler = logging.FileHandler(log_path) if limit: file_handler = RotatingFileHandler( log_path, mode='a', maxBytes=limit * 1024 * 1024, backupCount=2, encoding=None, delay=0) fmt = '[%(asctime)s %(filename)18s] %(levelname)-7s - %(message)7s' date_fmt = '%Y-%m-%d %H:%M:%S' formatter = logging.Formatter(fmt, datefmt=date_fmt) file_handler.setFormatter(formatter) logger.addHandler(file_handler) if log_urllib: urllib_logger.addHandler(file_handler) urllib_logger.setLevel(logging.DEBUG)
def deserialize(self, buffer=bytes(), index=Index(), **options): """ De-serializes the `Pointer` field from the byte *buffer* starting at the begin of the *buffer* or with the given *index* by mapping the bytes to the :attr:`value` of the `Pointer` field in accordance with the decoding *byte order* for the de-serialization and the decoding :attr:`byte_order` of the `Pointer` field. The specific decoding :attr:`byte_order` of the `Pointer` field overrules the decoding *byte order* for the de-serialization. Returns the :class:`Index` of the *buffer* after the `Pointer` field. Optional the de-serialization of the referenced :attr:`data` object of the `Pointer` field can be enabled. :param bytes buffer: byte stream. :param Index index: current read :class:`Index` within the *buffer*. :keyword byte_order: decoding byte order for the de-serialization. :type byte_order: :class:`Byteorder`, :class:`str` :keyword bool nested: if ``True`` a `Pointer` field de-serialize its referenced :attr:`data` object as well (chained method call). Each :class:`Pointer` field uses for the de-serialization of its referenced :attr:`data` object its own :attr:`bytestream`. """ # Field index = super().deserialize(buffer, index, **options) # Data Object if self._data and get_nested(options): options[Option.byte_order.value] = self.data_byte_order self._data.deserialize(self._data_stream, Index(0, 0, self.address, self.base_address, False), **options) return index
De-serializes the `Pointer` field from the byte *buffer* starting at the begin of the *buffer* or with the given *index* by mapping the bytes to the :attr:`value` of the `Pointer` field in accordance with the decoding *byte order* for the de-serialization and the decoding :attr:`byte_order` of the `Pointer` field. The specific decoding :attr:`byte_order` of the `Pointer` field overrules the decoding *byte order* for the de-serialization. Returns the :class:`Index` of the *buffer* after the `Pointer` field. Optional the de-serialization of the referenced :attr:`data` object of the `Pointer` field can be enabled. :param bytes buffer: byte stream. :param Index index: current read :class:`Index` within the *buffer*. :keyword byte_order: decoding byte order for the de-serialization. :type byte_order: :class:`Byteorder`, :class:`str` :keyword bool nested: if ``True`` a `Pointer` field de-serialize its referenced :attr:`data` object as well (chained method call). Each :class:`Pointer` field uses for the de-serialization of its referenced :attr:`data` object its own :attr:`bytestream`.
Below is the instruction that describes the task: ### Input: De-serializes the `Pointer` field from the byte *buffer* starting at the begin of the *buffer* or with the given *index* by mapping the bytes to the :attr:`value` of the `Pointer` field in accordance with the decoding *byte order* for the de-serialization and the decoding :attr:`byte_order` of the `Pointer` field. The specific decoding :attr:`byte_order` of the `Pointer` field overrules the decoding *byte order* for the de-serialization. Returns the :class:`Index` of the *buffer* after the `Pointer` field. Optional the de-serialization of the referenced :attr:`data` object of the `Pointer` field can be enabled. :param bytes buffer: byte stream. :param Index index: current read :class:`Index` within the *buffer*. :keyword byte_order: decoding byte order for the de-serialization. :type byte_order: :class:`Byteorder`, :class:`str` :keyword bool nested: if ``True`` a `Pointer` field de-serialize its referenced :attr:`data` object as well (chained method call). Each :class:`Pointer` field uses for the de-serialization of its referenced :attr:`data` object its own :attr:`bytestream`. ### Response: def deserialize(self, buffer=bytes(), index=Index(), **options): """ De-serializes the `Pointer` field from the byte *buffer* starting at the begin of the *buffer* or with the given *index* by mapping the bytes to the :attr:`value` of the `Pointer` field in accordance with the decoding *byte order* for the de-serialization and the decoding :attr:`byte_order` of the `Pointer` field. The specific decoding :attr:`byte_order` of the `Pointer` field overrules the decoding *byte order* for the de-serialization. Returns the :class:`Index` of the *buffer* after the `Pointer` field. Optional the de-serialization of the referenced :attr:`data` object of the `Pointer` field can be enabled. :param bytes buffer: byte stream. :param Index index: current read :class:`Index` within the *buffer*.
:keyword byte_order: decoding byte order for the de-serialization. :type byte_order: :class:`Byteorder`, :class:`str` :keyword bool nested: if ``True`` a `Pointer` field de-serialize its referenced :attr:`data` object as well (chained method call). Each :class:`Pointer` field uses for the de-serialization of its referenced :attr:`data` object its own :attr:`bytestream`. """ # Field index = super().deserialize(buffer, index, **options) # Data Object if self._data and get_nested(options): options[Option.byte_order.value] = self.data_byte_order self._data.deserialize(self._data_stream, Index(0, 0, self.address, self.base_address, False), **options) return index
def to_json_type(self): """ Return a dictionary for the document with values converted to JSON safe types. """ document_dict = self._json_safe(self._document) self._remove_keys(document_dict, self._private_fields) return document_dict
Return a dictionary for the document with values converted to JSON safe types.
Below is the instruction that describes the task: ### Input: Return a dictionary for the document with values converted to JSON safe types. ### Response: def to_json_type(self): """ Return a dictionary for the document with values converted to JSON safe types. """ document_dict = self._json_safe(self._document) self._remove_keys(document_dict, self._private_fields) return document_dict
def guess_peb_size(path): """Determine the most likely block size Arguments: Str:path -- Path to file. Returns: Int -- PEB size. Searches file for Magic Number, picks most common length between them. """ file_offset = 0 offsets = [] f = open(path, 'rb') f.seek(0,2) file_size = f.tell()+1 f.seek(0) for _ in range(0, file_size, FILE_CHUNK_SZ): buf = f.read(FILE_CHUNK_SZ) for m in re.finditer(UBI_EC_HDR_MAGIC, buf): start = m.start() if not file_offset: file_offset = start idx = start else: idx = start+file_offset offsets.append(idx) file_offset += FILE_CHUNK_SZ f.close() occurances = {} for i in range(0, len(offsets)): try: diff = offsets[i] - offsets[i-1] except: diff = offsets[i] if diff not in occurances: occurances[diff] = 0 occurances[diff] += 1 most_frequent = 0 block_size = None for offset in occurances: if occurances[offset] > most_frequent: most_frequent = occurances[offset] block_size = offset return block_size
Determine the most likely block size Arguments: Str:path -- Path to file. Returns: Int -- PEB size. Searches file for Magic Number, picks most common length between them.
Below is the instruction that describes the task: ### Input: Determine the most likely block size Arguments: Str:path -- Path to file. Returns: Int -- PEB size. Searches file for Magic Number, picks most common length between them. ### Response: def guess_peb_size(path): """Determine the most likely block size Arguments: Str:path -- Path to file. Returns: Int -- PEB size. Searches file for Magic Number, picks most common length between them. """ file_offset = 0 offsets = [] f = open(path, 'rb') f.seek(0,2) file_size = f.tell()+1 f.seek(0) for _ in range(0, file_size, FILE_CHUNK_SZ): buf = f.read(FILE_CHUNK_SZ) for m in re.finditer(UBI_EC_HDR_MAGIC, buf): start = m.start() if not file_offset: file_offset = start idx = start else: idx = start+file_offset offsets.append(idx) file_offset += FILE_CHUNK_SZ f.close() occurances = {} for i in range(0, len(offsets)): try: diff = offsets[i] - offsets[i-1] except: diff = offsets[i] if diff not in occurances: occurances[diff] = 0 occurances[diff] += 1 most_frequent = 0 block_size = None for offset in occurances: if occurances[offset] > most_frequent: most_frequent = occurances[offset] block_size = offset return block_size
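The heart of the heuristic above is taking the most common difference between consecutive magic-number offsets. That step can be exercised in isolation on synthetic offsets (the helper name and sample values below are illustrative, not part of the original module):

```python
from collections import Counter

def most_common_gap(offsets):
    """Most frequent difference between consecutive offsets."""
    gaps = [b - a for a, b in zip(offsets, offsets[1:])]
    if not gaps:
        return None
    return Counter(gaps).most_common(1)[0][0]

# Erase-block headers every 0x20000 bytes, with one irregular gap mixed in.
offsets = [0x0, 0x20000, 0x40000, 0x70000, 0x90000, 0xB0000]
print(hex(most_common_gap(offsets)))  # 0x20000
```

Unlike the dict-counting loop in `guess_peb_size`, this sketch diffs only consecutive offsets; note that the original's `i = 0` iteration evaluates `offsets[0] - offsets[-1]` without raising, so it counts one spurious (negative) gap.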
def run(self): """Train the model on randomly generated batches""" _, test_data = self.data.load(train=False, test=True) try: self.model.fit_generator( self.samples_to_batches(self.generate_samples(), self.args.batch_size), steps_per_epoch=self.args.steps_per_epoch, epochs=self.epoch + self.args.epochs, validation_data=test_data, callbacks=self.callbacks, initial_epoch=self.epoch ) finally: self.model.save(self.args.model) save_params(self.args.model)
Train the model on randomly generated batches
Below is the instruction that describes the task: ### Input: Train the model on randomly generated batches ### Response: def run(self): """Train the model on randomly generated batches""" _, test_data = self.data.load(train=False, test=True) try: self.model.fit_generator( self.samples_to_batches(self.generate_samples(), self.args.batch_size), steps_per_epoch=self.args.steps_per_epoch, epochs=self.epoch + self.args.epochs, validation_data=test_data, callbacks=self.callbacks, initial_epoch=self.epoch ) finally: self.model.save(self.args.model) save_params(self.args.model)
def load_vcoef(filename): """Loads a set of vector coefficients that were saved in MATLAB. The third number on the first line is the directivity calculated within the MATLAB code.""" with open(filename) as f: lines = f.readlines() lst = lines[0].split(',') nmax = int(lst[0]) mmax = int(lst[1]) directivity = float(lst[2]) L = (nmax + 1) + mmax * (2 * nmax - mmax + 1); vec1 = np.zeros(L, dtype=np.complex128) vec2 = np.zeros(L, dtype=np.complex128) lines.pop(0) n = 0 in_vec2 = False for line in lines: if line.strip() == 'break': n = 0 in_vec2 = True; else: lst = line.split(',') re = float(lst[0]) im = float(lst[1]) if in_vec2: vec2[n] = re + 1j * im else: vec1[n] = re + 1j * im n += 1 return (sp.VectorCoefs(vec1, vec2, nmax, mmax), directivity)
Loads a set of vector coefficients that were saved in MATLAB. The third number on the first line is the directivity calculated within the MATLAB code.
Below is the instruction that describes the task: ### Input: Loads a set of vector coefficients that were saved in MATLAB. The third number on the first line is the directivity calculated within the MATLAB code. ### Response: def load_vcoef(filename): """Loads a set of vector coefficients that were saved in MATLAB. The third number on the first line is the directivity calculated within the MATLAB code.""" with open(filename) as f: lines = f.readlines() lst = lines[0].split(',') nmax = int(lst[0]) mmax = int(lst[1]) directivity = float(lst[2]) L = (nmax + 1) + mmax * (2 * nmax - mmax + 1); vec1 = np.zeros(L, dtype=np.complex128) vec2 = np.zeros(L, dtype=np.complex128) lines.pop(0) n = 0 in_vec2 = False for line in lines: if line.strip() == 'break': n = 0 in_vec2 = True; else: lst = line.split(',') re = float(lst[0]) im = float(lst[1]) if in_vec2: vec2[n] = re + 1j * im else: vec1[n] = re + 1j * im n += 1 return (sp.VectorCoefs(vec1, vec2, nmax, mmax), directivity)
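The file layout `load_vcoef` expects — a header line `nmax,mmax,directivity`, then `re,im` pairs, with a literal `break` line separating the two coefficient vectors — can be exercised on an in-memory sample. The helper below is a simplified sketch that returns plain lists instead of preallocated arrays wrapped in `sp.VectorCoefs`:

```python
def parse_vcoef_lines(lines):
    """Parse MATLAB-exported coefficient lines into (vec1, vec2, nmax, mmax, directivity)."""
    header = lines[0].split(',')
    nmax, mmax, directivity = int(header[0]), int(header[1]), float(header[2])
    vec1, vec2 = [], []
    target = vec1
    for line in lines[1:]:
        if line.strip() == 'break':
            target = vec2  # switch to the second coefficient vector
        else:
            re_part, im_part = line.split(',')[:2]
            target.append(float(re_part) + 1j * float(im_part))
    return vec1, vec2, nmax, mmax, directivity

sample = ["2,1,3.5", "1.0,0.5", "0.0,-1.0", "break", "2.0,0.0"]
v1, v2, nmax, mmax, d = parse_vcoef_lines(sample)
print(v1, v2, d)  # [(1+0.5j), -1j] [(2+0j)] 3.5
```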
def get_closest_dt_padded_idx(dt, dt_list, pad=timedelta(days=30)): """Get indices of dt_list entries within +/- pad days of input dt """ #If pad is in decimal days if not isinstance(pad, timedelta): pad = timedelta(days=pad) from pygeotools.lib import malib dt_list = malib.checkma(dt_list, fix=False) dt_diff = np.abs(dt - dt_list) valid_idx = (dt_diff.data < pad).nonzero()[0] return valid_idx
Get indices of dt_list entries within +/- pad days of input dt
Below is the instruction that describes the task: ### Input: Get indices of dt_list entries within +/- pad days of input dt ### Response: def get_closest_dt_padded_idx(dt, dt_list, pad=timedelta(days=30)): """Get indices of dt_list entries within +/- pad days of input dt """ #If pad is in decimal days if not isinstance(pad, timedelta): pad = timedelta(days=pad) from pygeotools.lib import malib dt_list = malib.checkma(dt_list, fix=False) dt_diff = np.abs(dt - dt_list) valid_idx = (dt_diff.data < pad).nonzero()[0] return valid_idx
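Stripped of the masked-array handling from `pygeotools.lib.malib`, the selection logic is just an absolute-difference threshold. A plain-Python sketch of the same idea (the function name here is illustrative):

```python
from datetime import datetime, timedelta

def closest_padded_idx(dt, dt_list, pad=timedelta(days=30)):
    """Indices of dt_list entries within +/- pad of dt."""
    if not isinstance(pad, timedelta):  # pad given in decimal days
        pad = timedelta(days=pad)
    return [i for i, d in enumerate(dt_list) if abs(d - dt) < pad]

dts = [datetime(2020, 1, 1), datetime(2020, 1, 10),
       datetime(2020, 1, 20), datetime(2020, 6, 1)]
print(closest_padded_idx(datetime(2020, 1, 15), dts))  # [0, 1, 2]
```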
def utc2et(utcstr): """ Convert an input time from Calendar or Julian Date format, UTC, to ephemeris seconds past J2000. http://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/utc2et_c.html :param utcstr: Input time string, UTC. :type utcstr: str :return: Output epoch, ephemeris seconds past J2000. :rtype: float """ utcstr = stypes.stringToCharP(utcstr) et = ctypes.c_double() libspice.utc2et_c(utcstr, ctypes.byref(et)) return et.value
Convert an input time from Calendar or Julian Date format, UTC, to ephemeris seconds past J2000. http://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/utc2et_c.html :param utcstr: Input time string, UTC. :type utcstr: str :return: Output epoch, ephemeris seconds past J2000. :rtype: float
Below is the instruction that describes the task: ### Input: Convert an input time from Calendar or Julian Date format, UTC, to ephemeris seconds past J2000. http://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/utc2et_c.html :param utcstr: Input time string, UTC. :type utcstr: str :return: Output epoch, ephemeris seconds past J2000. :rtype: float ### Response: def utc2et(utcstr): """ Convert an input time from Calendar or Julian Date format, UTC, to ephemeris seconds past J2000. http://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/utc2et_c.html :param utcstr: Input time string, UTC. :type utcstr: str :return: Output epoch, ephemeris seconds past J2000. :rtype: float """ utcstr = stypes.stringToCharP(utcstr) et = ctypes.c_double() libspice.utc2et_c(utcstr, ctypes.byref(et)) return et.value
def _set_alarm_owner(self, v, load=False): """ Setter method for alarm_owner, mapped from YANG variable /rmon/alarm_entry/alarm_owner (owner-string) If this variable is read-only (config: false) in the source YANG file, then _set_alarm_owner is considered as a private method. Backends looking to populate this variable should do so via calling thisObj._set_alarm_owner() directly. """ if hasattr(v, "_utype"): v = v._utype(v) try: t = YANGDynClass(v,base=RestrictedClassType(base_type=unicode, restriction_dict={'pattern': u'[a-zA-Z]{1}([-a-zA-Z0-9\\.\\\\\\\\@#\\+\\*\\(\\)=\\{~\\}%<>=$_\\[\\]\\|]{0,14})', 'length': [u'1 .. 15']}), is_leaf=True, yang_name="alarm-owner", rest_name="owner", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Owner identity', u'alt-name': u'owner'}}, namespace='urn:brocade.com:mgmt:brocade-rmon', defining_module='brocade-rmon', yang_type='owner-string', is_config=True) except (TypeError, ValueError): raise ValueError({ 'error-string': """alarm_owner must be of a type compatible with owner-string""", 'defined-type': "brocade-rmon:owner-string", 'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_dict={'pattern': u'[a-zA-Z]{1}([-a-zA-Z0-9\\.\\\\\\\\@#\\+\\*\\(\\)=\\{~\\}%<>=$_\\[\\]\\|]{0,14})', 'length': [u'1 .. 15']}), is_leaf=True, yang_name="alarm-owner", rest_name="owner", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Owner identity', u'alt-name': u'owner'}}, namespace='urn:brocade.com:mgmt:brocade-rmon', defining_module='brocade-rmon', yang_type='owner-string', is_config=True)""", }) self.__alarm_owner = t if hasattr(self, '_set'): self._set()
Setter method for alarm_owner, mapped from YANG variable /rmon/alarm_entry/alarm_owner (owner-string) If this variable is read-only (config: false) in the source YANG file, then _set_alarm_owner is considered as a private method. Backends looking to populate this variable should do so via calling thisObj._set_alarm_owner() directly.
Below is the the instruction that describes the task: ### Input: Setter method for alarm_owner, mapped from YANG variable /rmon/alarm_entry/alarm_owner (owner-string) If this variable is read-only (config: false) in the source YANG file, then _set_alarm_owner is considered as a private method. Backends looking to populate this variable should do so via calling thisObj._set_alarm_owner() directly. ### Response: def _set_alarm_owner(self, v, load=False): """ Setter method for alarm_owner, mapped from YANG variable /rmon/alarm_entry/alarm_owner (owner-string) If this variable is read-only (config: false) in the source YANG file, then _set_alarm_owner is considered as a private method. Backends looking to populate this variable should do so via calling thisObj._set_alarm_owner() directly. """ if hasattr(v, "_utype"): v = v._utype(v) try: t = YANGDynClass(v,base=RestrictedClassType(base_type=unicode, restriction_dict={'pattern': u'[a-zA-Z]{1}([-a-zA-Z0-9\\.\\\\\\\\@#\\+\\*\\(\\)=\\{~\\}%<>=$_\\[\\]\\|]{0,14})', 'length': [u'1 .. 15']}), is_leaf=True, yang_name="alarm-owner", rest_name="owner", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Owner identity', u'alt-name': u'owner'}}, namespace='urn:brocade.com:mgmt:brocade-rmon', defining_module='brocade-rmon', yang_type='owner-string', is_config=True) except (TypeError, ValueError): raise ValueError({ 'error-string': """alarm_owner must be of a type compatible with owner-string""", 'defined-type': "brocade-rmon:owner-string", 'generated-type': """YANGDynClass(base=RestrictedClassType(base_type=unicode, restriction_dict={'pattern': u'[a-zA-Z]{1}([-a-zA-Z0-9\\.\\\\\\\\@#\\+\\*\\(\\)=\\{~\\}%<>=$_\\[\\]\\|]{0,14})', 'length': [u'1 .. 
15']}), is_leaf=True, yang_name="alarm-owner", rest_name="owner", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u'tailf-common': {u'info': u'Owner identity', u'alt-name': u'owner'}}, namespace='urn:brocade.com:mgmt:brocade-rmon', defining_module='brocade-rmon', yang_type='owner-string', is_config=True)""", }) self.__alarm_owner = t if hasattr(self, '_set'): self._set()
def cmd(): """Implementation of time subcommand. Other Parameters: conf.time conf.core """ sdat = StagyyData(conf.core.path) if sdat.tseries is None: return if conf.time.fraction is not None: if not 0 < conf.time.fraction <= 1: raise InvalidTimeFractionError(conf.time.fraction) conf.time.tend = None t_0 = sdat.tseries.iloc[0].loc['t'] t_f = sdat.tseries.iloc[-1].loc['t'] conf.time.tstart = (t_0 * conf.time.fraction + t_f * (1 - conf.time.fraction)) lovs = misc.list_of_vars(conf.time.plot) if lovs: plot_time_series(sdat, lovs) if conf.time.compstat: compstat(sdat, conf.time.tstart, conf.time.tend)
Implementation of time subcommand. Other Parameters: conf.time conf.core
Below is the instruction that describes the task: ### Input: Implementation of time subcommand. Other Parameters: conf.time conf.core ### Response: def cmd(): """Implementation of time subcommand. Other Parameters: conf.time conf.core """ sdat = StagyyData(conf.core.path) if sdat.tseries is None: return if conf.time.fraction is not None: if not 0 < conf.time.fraction <= 1: raise InvalidTimeFractionError(conf.time.fraction) conf.time.tend = None t_0 = sdat.tseries.iloc[0].loc['t'] t_f = sdat.tseries.iloc[-1].loc['t'] conf.time.tstart = (t_0 * conf.time.fraction + t_f * (1 - conf.time.fraction)) lovs = misc.list_of_vars(conf.time.plot) if lovs: plot_time_series(sdat, lovs) if conf.time.compstat: compstat(sdat, conf.time.tstart, conf.time.tend)
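The fraction handling in `cmd` keeps only the final part of the run: with fraction `f`, the start time is `t_start = f*t_0 + (1-f)*t_f`, so `f = 1` keeps the whole series and `f → 0` keeps only the end. A standalone sketch of that interpolation (the function name is illustrative):

```python
def tstart_from_fraction(t_0, t_f, fraction):
    """Start time that keeps the final `fraction` of the interval [t_0, t_f]."""
    if not 0 < fraction <= 1:
        raise ValueError(f"invalid time fraction: {fraction}")
    return t_0 * fraction + t_f * (1 - fraction)

print(tstart_from_fraction(0.0, 100.0, 1.0))   # 0.0: full series kept
print(tstart_from_fraction(0.0, 100.0, 0.25))  # 75.0: last quarter kept
```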
def get_statistics_by_account(self, account_id, term_id): """ Returns statistics for the given account_id and term_id. https://canvas.instructure.com/doc/api/analytics.html#method.analytics_api.department_statistics """ url = ("/api/v1/accounts/sis_account_id:%s/analytics/" "terms/sis_term_id:%s/statistics.json") % (account_id, term_id) return self._get_resource(url)
Returns statistics for the given account_id and term_id. https://canvas.instructure.com/doc/api/analytics.html#method.analytics_api.department_statistics
Below is the instruction that describes the task: ### Input: Returns statistics for the given account_id and term_id. https://canvas.instructure.com/doc/api/analytics.html#method.analytics_api.department_statistics ### Response: def get_statistics_by_account(self, account_id, term_id): """ Returns statistics for the given account_id and term_id. https://canvas.instructure.com/doc/api/analytics.html#method.analytics_api.department_statistics """ url = ("/api/v1/accounts/sis_account_id:%s/analytics/" "terms/sis_term_id:%s/statistics.json") % (account_id, term_id) return self._get_resource(url)
def detach(self, sync=True): """ Detach this result from its parent session by fetching the remainder of this result from the network into the buffer. :returns: number of records fetched """ if self.attached(): return self._session.detach(self, sync=sync) else: return 0
Detach this result from its parent session by fetching the remainder of this result from the network into the buffer. :returns: number of records fetched
Below is the instruction that describes the task: ### Input: Detach this result from its parent session by fetching the remainder of this result from the network into the buffer. :returns: number of records fetched ### Response: def detach(self, sync=True): """ Detach this result from its parent session by fetching the remainder of this result from the network into the buffer. :returns: number of records fetched """ if self.attached(): return self._session.detach(self, sync=sync) else: return 0
def from_prefix(cls, container, prefix): """Create from prefix object.""" if cls._is_gs_folder(prefix): name, suffix, extra = prefix.name.partition(cls._gs_folder_suffix) if (suffix, extra) == (cls._gs_folder_suffix, ''): # Patch GS specific folder to remove suffix. prefix.name = name return super(GsObject, cls).from_prefix(container, prefix)
Create from prefix object.
Below is the instruction that describes the task: ### Input: Create from prefix object. ### Response: def from_prefix(cls, container, prefix): """Create from prefix object.""" if cls._is_gs_folder(prefix): name, suffix, extra = prefix.name.partition(cls._gs_folder_suffix) if (suffix, extra) == (cls._gs_folder_suffix, ''): # Patch GS specific folder to remove suffix. prefix.name = name return super(GsObject, cls).from_prefix(container, prefix)
def download_url(self, project, file_name, run=None, entity=None): """Generate download urls Args: project (str): The project to download file_name (str): The name of the file to download run (str, optional): The run to upload to entity (str, optional): The entity to scope this project to. Defaults to wandb models Returns: A dict of extensions and urls { "url": "https://weights.url", "updatedAt": '2013-04-26T22:22:23.832Z', 'md5': 'mZFLkyvTelC5g8XnyQrpOw==' } """ query = gql(''' query Model($name: String!, $fileName: String!, $entity: String!, $run: String!) { model(name: $name, entityName: $entity) { bucket(name: $run) { files(names: [$fileName]) { edges { node { name url md5 updatedAt } } } } } } ''') query_result = self.gql(query, variable_values={ 'name': project, 'run': run or self.settings('run'), 'fileName': file_name, 'entity': entity or self.settings('entity')}) files = self._flatten_edges(query_result['model']['bucket']['files']) return files[0] if len(files) > 0 and files[0].get('updatedAt') else None
Generate download urls Args: project (str): The project to download file_name (str): The name of the file to download run (str, optional): The run to upload to entity (str, optional): The entity to scope this project to. Defaults to wandb models Returns: A dict of extensions and urls { "url": "https://weights.url", "updatedAt": '2013-04-26T22:22:23.832Z', 'md5': 'mZFLkyvTelC5g8XnyQrpOw==' }
Below is the instruction that describes the task: ### Input: Generate download urls Args: project (str): The project to download file_name (str): The name of the file to download run (str, optional): The run to upload to entity (str, optional): The entity to scope this project to. Defaults to wandb models Returns: A dict of extensions and urls { "url": "https://weights.url", "updatedAt": '2013-04-26T22:22:23.832Z', 'md5': 'mZFLkyvTelC5g8XnyQrpOw==' } ### Response: def download_url(self, project, file_name, run=None, entity=None): """Generate download urls Args: project (str): The project to download file_name (str): The name of the file to download run (str, optional): The run to upload to entity (str, optional): The entity to scope this project to. Defaults to wandb models Returns: A dict of extensions and urls { "url": "https://weights.url", "updatedAt": '2013-04-26T22:22:23.832Z', 'md5': 'mZFLkyvTelC5g8XnyQrpOw==' } """ query = gql(''' query Model($name: String!, $fileName: String!, $entity: String!, $run: String!) { model(name: $name, entityName: $entity) { bucket(name: $run) { files(names: [$fileName]) { edges { node { name url md5 updatedAt } } } } } } ''') query_result = self.gql(query, variable_values={ 'name': project, 'run': run or self.settings('run'), 'fileName': file_name, 'entity': entity or self.settings('entity')}) files = self._flatten_edges(query_result['model']['bucket']['files']) return files[0] if len(files) > 0 and files[0].get('updatedAt') else None
def put(cont, path=None, local_file=None, profile=None): ''' Create a new container, or upload an object to a container. CLI Example to create a container: .. code-block:: bash salt myminion swift.put mycontainer CLI Example to upload an object to a container: .. code-block:: bash salt myminion swift.put mycontainer remotepath local_file=/path/to/file ''' swift_conn = _auth(profile) if path is None: return swift_conn.put_container(cont) elif local_file is not None: return swift_conn.put_object(cont, path, local_file) else: return False
Create a new container, or upload an object to a container. CLI Example to create a container: .. code-block:: bash salt myminion swift.put mycontainer CLI Example to upload an object to a container: .. code-block:: bash salt myminion swift.put mycontainer remotepath local_file=/path/to/file
Below is the instruction that describes the task: ### Input: Create a new container, or upload an object to a container. CLI Example to create a container: .. code-block:: bash salt myminion swift.put mycontainer CLI Example to upload an object to a container: .. code-block:: bash salt myminion swift.put mycontainer remotepath local_file=/path/to/file ### Response: def put(cont, path=None, local_file=None, profile=None): ''' Create a new container, or upload an object to a container. CLI Example to create a container: .. code-block:: bash salt myminion swift.put mycontainer CLI Example to upload an object to a container: .. code-block:: bash salt myminion swift.put mycontainer remotepath local_file=/path/to/file ''' swift_conn = _auth(profile) if path is None: return swift_conn.put_container(cont) elif local_file is not None: return swift_conn.put_object(cont, path, local_file) else: return False
def get_project_export(self, project_id): """ Get project info for export """ try: result = self._request('/getprojectexport/', {'projectid': project_id}) return TildaProject(**result) except NetworkError: return []
Get project info for export
Below is the instruction that describes the task: ### Input: Get project info for export ### Response: def get_project_export(self, project_id): """ Get project info for export """ try: result = self._request('/getprojectexport/', {'projectid': project_id}) return TildaProject(**result) except NetworkError: return []
def remove_fields(layer, fields_to_remove): """Remove fields from a vector layer. :param layer: The vector layer. :type layer: QgsVectorLayer :param fields_to_remove: List of fields to remove. :type fields_to_remove: list """ index_to_remove = [] data_provider = layer.dataProvider() for field in fields_to_remove: index = layer.fields().lookupField(field) if index != -1: index_to_remove.append(index) data_provider.deleteAttributes(index_to_remove) layer.updateFields()
Remove fields from a vector layer. :param layer: The vector layer. :type layer: QgsVectorLayer :param fields_to_remove: List of fields to remove. :type fields_to_remove: list
Below is the instruction that describes the task: ### Input: Remove fields from a vector layer. :param layer: The vector layer. :type layer: QgsVectorLayer :param fields_to_remove: List of fields to remove. :type fields_to_remove: list ### Response: def remove_fields(layer, fields_to_remove): """Remove fields from a vector layer. :param layer: The vector layer. :type layer: QgsVectorLayer :param fields_to_remove: List of fields to remove. :type fields_to_remove: list """ index_to_remove = [] data_provider = layer.dataProvider() for field in fields_to_remove: index = layer.fields().lookupField(field) if index != -1: index_to_remove.append(index) data_provider.deleteAttributes(index_to_remove) layer.updateFields()
def rgb2gray(rgb): """Convert an RGB image (or images) to grayscale. Parameters ---------- rgb : ndarray RGB image as Nr x Nc x 3 or Nr x Nc x 3 x K array Returns ------- gry : ndarray Grayscale image as Nr x Nc or Nr x Nc x K array """ w = sla.atleast_nd(rgb.ndim, np.array([0.299, 0.587, 0.144], dtype=rgb.dtype, ndmin=3)) return np.sum(w * rgb, axis=2)
Convert an RGB image (or images) to grayscale. Parameters ---------- rgb : ndarray RGB image as Nr x Nc x 3 or Nr x Nc x 3 x K array Returns ------- gry : ndarray Grayscale image as Nr x Nc or Nr x Nc x K array
Below is the instruction that describes the task: ### Input: Convert an RGB image (or images) to grayscale. Parameters ---------- rgb : ndarray RGB image as Nr x Nc x 3 or Nr x Nc x 3 x K array Returns ------- gry : ndarray Grayscale image as Nr x Nc or Nr x Nc x K array ### Response: def rgb2gray(rgb): """Convert an RGB image (or images) to grayscale. Parameters ---------- rgb : ndarray RGB image as Nr x Nc x 3 or Nr x Nc x 3 x K array Returns ------- gry : ndarray Grayscale image as Nr x Nc or Nr x Nc x K array """ w = sla.atleast_nd(rgb.ndim, np.array([0.299, 0.587, 0.144], dtype=rgb.dtype, ndmin=3)) return np.sum(w * rgb, axis=2)
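A self-contained version of the same weighted-sum conversion, using plain NumPy broadcasting in place of `sla.atleast_nd`. Note the standard ITU-R BT.601 luma weights are 0.299, 0.587, 0.114 — the 0.144 blue weight in the listing above appears to be a transposition, so the sketch uses 0.114:

```python
import numpy as np

def rgb_to_gray(rgb):
    """Weighted channel sum over axis 2; works for HxWx3 and HxWx3xK arrays."""
    w = np.array([0.299, 0.587, 0.114])
    # Reshape so the weights broadcast along any trailing axes (e.g. a K stack).
    w = w.reshape((1, 1, 3) + (1,) * (rgb.ndim - 3))
    return np.sum(w * rgb, axis=2)

img = np.ones((4, 4, 3))            # pure white image
gray = rgb_to_gray(img)
print(gray.shape)                   # (4, 4)
print(round(float(gray[0, 0]), 6))  # 1.0 -- BT.601 weights sum to 1
```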
def wait_for_complete(queue_name, job_list=None, job_name_prefix=None, poll_interval=10, idle_log_timeout=None, kill_on_log_timeout=False, stash_log_method=None, tag_instances=False, result_record=None): """Return when all jobs in the given list finished. If no job list is given, return when all jobs in queue finished. Parameters ---------- queue_name : str The name of the queue to wait for completion. job_list : Optional[list(dict)] A list of jobID-s in a dict, as returned by the submit function. Example: [{'jobId': 'e6b00f24-a466-4a72-b735-d205e29117b4'}, ...] If not given, this function will return if all jobs completed. job_name_prefix : Optional[str] A prefix for the name of the jobs to wait for. This is useful if the explicit job list is not available but filtering is needed. poll_interval : Optional[int] The time delay between API calls to check the job statuses. idle_log_timeout : Optional[int] or None If not None, then track the logs of the active jobs, and if new output is not produced after `idle_log_timeout` seconds, a warning is printed. If `kill_on_log_timeout` is set to True, the job will also be terminated. kill_on_log_timeout : Optional[bool] If True, and if `idle_log_timeout` is set, jobs will be terminated after timeout. This has no effect if `idle_log_timeout` is None. Default is False. stash_log_method : Optional[str] Select a method to store the job logs, either 's3' or 'local'. If no method is specified, the logs will not be loaded off of AWS. If 's3' is specified, then `job_name_prefix` must also be given, as this will indicate where on s3 to store the logs. tag_instances : bool Default is False. If True, apply tags to the instances. This is today typically done by each job, so in most cases this should not be needed. result_record : dict A dict which will be modified in place to record the results of the job. 
""" if stash_log_method == 's3' and job_name_prefix is None: raise Exception('A job_name_prefix is required to post logs on s3.') start_time = datetime.now() if job_list is None: job_id_list = [] else: job_id_list = [job['jobId'] for job in job_list] if result_record is None: result_record = {} def get_jobs_by_status(status, job_id_filter=None, job_name_prefix=None): res = batch_client.list_jobs(jobQueue=queue_name, jobStatus=status, maxResults=10000) jobs = res['jobSummaryList'] if job_name_prefix: jobs = [job for job in jobs if job['jobName'].startswith(job_name_prefix)] if job_id_filter: jobs = [job_def for job_def in jobs if job_def['jobId'] in job_id_filter] return jobs job_log_dict = {} def check_logs(job_defs): """Updates teh job_log_dict.""" stalled_jobs = set() # Check the status of all the jobs we're tracking. for job_def in job_defs: try: # Get the logs for this job. log_lines = get_job_log(job_def, write_file=False) # Get the job id. jid = job_def['jobId'] now = datetime.now() if jid not in job_log_dict.keys(): # If the job is new... logger.info("Adding job %s to the log tracker at %s." % (jid, now)) job_log_dict[jid] = {'log': log_lines, 'last change time': now} elif len(job_log_dict[jid]['log']) == len(log_lines): # If the job log hasn't changed, announce as such, and check # to see if it has been the same for longer than stall time. check_dt = now - job_log_dict[jid]['last change time'] logger.warning(('Job \'%s\' has not produced output for ' '%d seconds.') % (job_def['jobName'], check_dt.seconds)) if check_dt.seconds > idle_log_timeout: logger.warning("Job \'%s\' has stalled." % job_def['jobName']) stalled_jobs.add(jid) else: # If the job is known, and the logs have changed, update the # "last change time". old_log = job_log_dict[jid]['log'] old_log += log_lines[len(old_log):] job_log_dict[jid]['last change time'] = now except Exception as e: # Sometimes due to sync et al. issues, a part of this will fail. 
# Such things are usually transitory issues so we keep trying. logger.error("Failed to check log for: %s" % str(job_def)) logger.exception(e) # Pass up the set of job id's for stalled jobs. return stalled_jobs # Don't start watching jobs added after this command was initialized. observed_job_def_dict = {} def get_dict_of_job_tuples(job_defs): return {jdef['jobId']: [(k, jdef[k]) for k in ['jobName', 'jobId']] for jdef in job_defs} batch_client = boto3.client('batch') if tag_instances: ecs_cluster_name = get_ecs_cluster_for_queue(queue_name, batch_client) terminate_msg = 'Job log has stalled for at least %f minutes.' terminated_jobs = set() stashed_id_set = set() while True: pre_run = [] for status in ('SUBMITTED', 'PENDING', 'RUNNABLE', 'STARTING'): pre_run += get_jobs_by_status(status, job_id_list, job_name_prefix) running = get_jobs_by_status('RUNNING', job_id_list, job_name_prefix) failed = get_jobs_by_status('FAILED', job_id_list, job_name_prefix) done = get_jobs_by_status('SUCCEEDED', job_id_list, job_name_prefix) observed_job_def_dict.update(get_dict_of_job_tuples(pre_run + running)) logger.info('(%d s)=(pre: %d, running: %d, failed: %d, done: %d)' % ((datetime.now() - start_time).seconds, len(pre_run), len(running), len(failed), len(done))) # Check the logs for new output, and possibly terminate some jobs. stalled_jobs = check_logs(running) if idle_log_timeout is not None: if kill_on_log_timeout: # Keep track of terminated jobs so we don't send a terminate # message twice. for jid in stalled_jobs - terminated_jobs: batch_client.terminate_job( jobId=jid, reason=terminate_msg % (idle_log_timeout/60.0) ) logger.info('Terminating %s.' % jid) terminated_jobs.add(jid) if job_id_list: if (len(failed) + len(done)) == len(job_id_list): ret = 0 break else: if (len(failed) + len(done) > 0) and \ (len(pre_run) + len(running) == 0): ret = 0 break if tag_instances: tag_instances_on_cluster(ecs_cluster_name) # Stash the logs of things that have finished so far. 
Note that jobs # terminated in this round will not be picked up until the next round. if stash_log_method: stash_logs(observed_job_def_dict, done, failed, queue_name, stash_log_method, job_name_prefix, start_time.strftime('%Y%m%d_%H%M%S'), ids_stashed=stashed_id_set) sleep(poll_interval) # Pick up any stragglers if stash_log_method: stash_logs(observed_job_def_dict, done, failed, queue_name, stash_log_method, job_name_prefix, start_time.strftime('%Y%m%d_%H%M%S'), ids_stashed=stashed_id_set) result_record['terminated'] = terminated_jobs result_record['failed'] = failed result_record['succeeded'] = done return ret
Return when all jobs in the given list finished. If no job list is given, return when all jobs in queue finished. Parameters ---------- queue_name : str The name of the queue to wait for completion. job_list : Optional[list(dict)] A list of jobID-s in a dict, as returned by the submit function. Example: [{'jobId': 'e6b00f24-a466-4a72-b735-d205e29117b4'}, ...] If not given, this function will return if all jobs completed. job_name_prefix : Optional[str] A prefix for the name of the jobs to wait for. This is useful if the explicit job list is not available but filtering is needed. poll_interval : Optional[int] The time delay between API calls to check the job statuses. idle_log_timeout : Optional[int] or None If not None, then track the logs of the active jobs, and if new output is not produced after `idle_log_timeout` seconds, a warning is printed. If `kill_on_log_timeout` is set to True, the job will also be terminated. kill_on_log_timeout : Optional[bool] If True, and if `idle_log_timeout` is set, jobs will be terminated after timeout. This has no effect if `idle_log_timeout` is None. Default is False. stash_log_method : Optional[str] Select a method to store the job logs, either 's3' or 'local'. If no method is specified, the logs will not be loaded off of AWS. If 's3' is specified, then `job_name_prefix` must also be given, as this will indicate where on s3 to store the logs. tag_instances : bool Default is False. If True, apply tags to the instances. This is today typically done by each job, so in most cases this should not be needed. result_record : dict A dict which will be modified in place to record the results of the job.
Below is the the instruction that describes the task: ### Input: Return when all jobs in the given list finished. If not job list is given, return when all jobs in queue finished. Parameters ---------- queue_name : str The name of the queue to wait for completion. job_list : Optional[list(dict)] A list of jobID-s in a dict, as returned by the submit function. Example: [{'jobId': 'e6b00f24-a466-4a72-b735-d205e29117b4'}, ...] If not given, this function will return if all jobs completed. job_name_prefix : Optional[str] A prefix for the name of the jobs to wait for. This is useful if the explicit job list is not available but filtering is needed. poll_interval : Optional[int] The time delay between API calls to check the job statuses. idle_log_timeout : Optional[int] or None If not None, then track the logs of the active jobs, and if new output is not produced after `idle_log_timeout` seconds, a warning is printed. If `kill_on_log_timeout` is set to True, the job will also be terminated. kill_on_log_timeout : Optional[bool] If True, and if `idle_log_timeout` is set, jobs will be terminated after timeout. This has no effect if `idle_log_timeout` is None. Default is False. stash_log_method : Optional[str] Select a method to store the job logs, either 's3' or 'local'. If no method is specified, the logs will not be loaded off of AWS. If 's3' is specified, then `job_name_prefix` must also be given, as this will indicate where on s3 to store the logs. tag_instances : bool Default is False. If True, apply tags to the instances. This is toady typically done by each job, so in most cases this should not be needed. result_record : dict A dict which will be modified in place to record the results of the job. ### Response: def wait_for_complete(queue_name, job_list=None, job_name_prefix=None, poll_interval=10, idle_log_timeout=None, kill_on_log_timeout=False, stash_log_method=None, tag_instances=False, result_record=None): """Return when all jobs in the given list finished. 
If no job list is given, return when all jobs in the queue have finished. Parameters ---------- queue_name : str The name of the queue to wait for completion. job_list : Optional[list(dict)] A list of jobID-s in a dict, as returned by the submit function. Example: [{'jobId': 'e6b00f24-a466-4a72-b735-d205e29117b4'}, ...] If not given, this function will return when all jobs have completed. job_name_prefix : Optional[str] A prefix for the name of the jobs to wait for. This is useful if the explicit job list is not available but filtering is needed. poll_interval : Optional[int] The time delay between API calls to check the job statuses. idle_log_timeout : Optional[int] or None If not None, then track the logs of the active jobs, and if new output is not produced after `idle_log_timeout` seconds, a warning is printed. If `kill_on_log_timeout` is set to True, the job will also be terminated. kill_on_log_timeout : Optional[bool] If True, and if `idle_log_timeout` is set, jobs will be terminated after timeout. This has no effect if `idle_log_timeout` is None. Default is False. stash_log_method : Optional[str] Select a method to store the job logs, either 's3' or 'local'. If no method is specified, the logs will not be loaded off of AWS. If 's3' is specified, then `job_name_prefix` must also be given, as this will indicate where on s3 to store the logs. tag_instances : bool Default is False. If True, apply tags to the instances. This is today typically done by each job, so in most cases this should not be needed. result_record : dict A dict which will be modified in place to record the results of the job. 
""" if stash_log_method == 's3' and job_name_prefix is None: raise Exception('A job_name_prefix is required to post logs on s3.') start_time = datetime.now() if job_list is None: job_id_list = [] else: job_id_list = [job['jobId'] for job in job_list] if result_record is None: result_record = {} def get_jobs_by_status(status, job_id_filter=None, job_name_prefix=None): res = batch_client.list_jobs(jobQueue=queue_name, jobStatus=status, maxResults=10000) jobs = res['jobSummaryList'] if job_name_prefix: jobs = [job for job in jobs if job['jobName'].startswith(job_name_prefix)] if job_id_filter: jobs = [job_def for job_def in jobs if job_def['jobId'] in job_id_filter] return jobs job_log_dict = {} def check_logs(job_defs): """Updates teh job_log_dict.""" stalled_jobs = set() # Check the status of all the jobs we're tracking. for job_def in job_defs: try: # Get the logs for this job. log_lines = get_job_log(job_def, write_file=False) # Get the job id. jid = job_def['jobId'] now = datetime.now() if jid not in job_log_dict.keys(): # If the job is new... logger.info("Adding job %s to the log tracker at %s." % (jid, now)) job_log_dict[jid] = {'log': log_lines, 'last change time': now} elif len(job_log_dict[jid]['log']) == len(log_lines): # If the job log hasn't changed, announce as such, and check # to see if it has been the same for longer than stall time. check_dt = now - job_log_dict[jid]['last change time'] logger.warning(('Job \'%s\' has not produced output for ' '%d seconds.') % (job_def['jobName'], check_dt.seconds)) if check_dt.seconds > idle_log_timeout: logger.warning("Job \'%s\' has stalled." % job_def['jobName']) stalled_jobs.add(jid) else: # If the job is known, and the logs have changed, update the # "last change time". old_log = job_log_dict[jid]['log'] old_log += log_lines[len(old_log):] job_log_dict[jid]['last change time'] = now except Exception as e: # Sometimes due to sync et al. issues, a part of this will fail. 
# Such things are usually transitory issues so we keep trying. logger.error("Failed to check log for: %s" % str(job_def)) logger.exception(e) # Pass up the set of job id's for stalled jobs. return stalled_jobs # Don't start watching jobs added after this command was initialized. observed_job_def_dict = {} def get_dict_of_job_tuples(job_defs): return {jdef['jobId']: [(k, jdef[k]) for k in ['jobName', 'jobId']] for jdef in job_defs} batch_client = boto3.client('batch') if tag_instances: ecs_cluster_name = get_ecs_cluster_for_queue(queue_name, batch_client) terminate_msg = 'Job log has stalled for at least %f minutes.' terminated_jobs = set() stashed_id_set = set() while True: pre_run = [] for status in ('SUBMITTED', 'PENDING', 'RUNNABLE', 'STARTING'): pre_run += get_jobs_by_status(status, job_id_list, job_name_prefix) running = get_jobs_by_status('RUNNING', job_id_list, job_name_prefix) failed = get_jobs_by_status('FAILED', job_id_list, job_name_prefix) done = get_jobs_by_status('SUCCEEDED', job_id_list, job_name_prefix) observed_job_def_dict.update(get_dict_of_job_tuples(pre_run + running)) logger.info('(%d s)=(pre: %d, running: %d, failed: %d, done: %d)' % ((datetime.now() - start_time).seconds, len(pre_run), len(running), len(failed), len(done))) # Check the logs for new output, and possibly terminate some jobs. stalled_jobs = check_logs(running) if idle_log_timeout is not None: if kill_on_log_timeout: # Keep track of terminated jobs so we don't send a terminate # message twice. for jid in stalled_jobs - terminated_jobs: batch_client.terminate_job( jobId=jid, reason=terminate_msg % (idle_log_timeout/60.0) ) logger.info('Terminating %s.' % jid) terminated_jobs.add(jid) if job_id_list: if (len(failed) + len(done)) == len(job_id_list): ret = 0 break else: if (len(failed) + len(done) > 0) and \ (len(pre_run) + len(running) == 0): ret = 0 break if tag_instances: tag_instances_on_cluster(ecs_cluster_name) # Stash the logs of things that have finished so far. 
Note that jobs # terminated in this round will not be picked up until the next round. if stash_log_method: stash_logs(observed_job_def_dict, done, failed, queue_name, stash_log_method, job_name_prefix, start_time.strftime('%Y%m%d_%H%M%S'), ids_stashed=stashed_id_set) sleep(poll_interval) # Pick up any stragglers if stash_log_method: stash_logs(observed_job_def_dict, done, failed, queue_name, stash_log_method, job_name_prefix, start_time.strftime('%Y%m%d_%H%M%S'), ids_stashed=stashed_id_set) result_record['terminated'] = terminated_jobs result_record['failed'] = failed result_record['succeeded'] = done return ret
def _check_update_fw(self, tenant_id, drvr_name): """Update the Firewall config by calling the driver. This function calls the device manager routine to update the device with modified FW cfg. """ if self.fwid_attr[tenant_id].is_fw_complete(): fw_dict = self.fwid_attr[tenant_id].get_fw_dict() self.modify_fw_device(tenant_id, fw_dict.get('fw_id'), fw_dict)
Update the Firewall config by calling the driver. This function calls the device manager routine to update the device with modified FW cfg.
Below is the instruction that describes the task: ### Input: Update the Firewall config by calling the driver. This function calls the device manager routine to update the device with modified FW cfg. ### Response: def _check_update_fw(self, tenant_id, drvr_name): """Update the Firewall config by calling the driver. This function calls the device manager routine to update the device with modified FW cfg. """ if self.fwid_attr[tenant_id].is_fw_complete(): fw_dict = self.fwid_attr[tenant_id].get_fw_dict() self.modify_fw_device(tenant_id, fw_dict.get('fw_id'), fw_dict)
def run(self): """ For each incoming message, decode the JSON, evaluate expression, encode as JSON and write that to the output file. """ for line in self.incoming: message = loads(line) result = self._evaluate(message) if result is self._SKIP: continue self.output.write(dumps(result, cls=_DatetimeJSONEncoder) + b"\n")
For each incoming message, decode the JSON, evaluate expression, encode as JSON and write that to the output file.
Below is the instruction that describes the task: ### Input: For each incoming message, decode the JSON, evaluate expression, encode as JSON and write that to the output file. ### Response: def run(self): """ For each incoming message, decode the JSON, evaluate expression, encode as JSON and write that to the output file. """ for line in self.incoming: message = loads(line) result = self._evaluate(message) if result is self._SKIP: continue self.output.write(dumps(result, cls=_DatetimeJSONEncoder) + b"\n")
def get_tags(self): """ Get all tags and post count of each tag. :return: dict_item(tag_name, Pair(count_all, count_published)) """ posts = self.get_posts(include_draft=True) result = {} for post in posts: for tag_name in set(post.tags): result[tag_name] = result.setdefault( tag_name, Pair(0, 0)) + Pair(1, 0 if post.is_draft else 1) return list(result.items())
Get all tags and post count of each tag. :return: dict_item(tag_name, Pair(count_all, count_published))
Below is the instruction that describes the task: ### Input: Get all tags and post count of each tag. :return: dict_item(tag_name, Pair(count_all, count_published)) ### Response: def get_tags(self): """ Get all tags and post count of each tag. :return: dict_item(tag_name, Pair(count_all, count_published)) """ posts = self.get_posts(include_draft=True) result = {} for post in posts: for tag_name in set(post.tags): result[tag_name] = result.setdefault( tag_name, Pair(0, 0)) + Pair(1, 0 if post.is_draft else 1) return list(result.items())
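The tag-counting fold in the record above relies on a `Pair` type that supports element-wise `+`. A minimal standalone sketch of that logic, with a hypothetical `Pair` stand-in (the real `Pair` in the source project may be defined differently) and plain `(tags, is_draft)` tuples standing in for post objects:

```python
from collections import namedtuple

class Pair(namedtuple('Pair', ['first', 'second'])):
    """Hypothetical stand-in: a 2-tuple (count_all, count_published)
    with element-wise addition, as get_tags assumes."""
    def __add__(self, other):
        return Pair(self.first + other.first, self.second + other.second)

# Minimal stand-ins for Post objects: (tags, is_draft).
posts = [(['python'], True), (['python', 'flask'], False)]

result = {}
for tags, is_draft in posts:
    for tag_name in set(tags):
        # count_all always increments; count_published only for non-drafts
        result[tag_name] = result.setdefault(
            tag_name, Pair(0, 0)) + Pair(1, 0 if is_draft else 1)

print(sorted(result.items()))
# [('flask', Pair(first=1, second=1)), ('python', Pair(first=2, second=1))]
```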
def project_interval_forward(self, c_interval): """ project c_interval on the source transcript to the destination transcript :param c_interval: an :class:`hgvs.interval.Interval` object on the source transcript :returns: c_interval: an :class:`hgvs.interval.Interval` object on the destination transcript """ return self.dst_tm.g_to_c(self.src_tm.c_to_g(c_interval))
project c_interval on the source transcript to the destination transcript :param c_interval: an :class:`hgvs.interval.Interval` object on the source transcript :returns: c_interval: an :class:`hgvs.interval.Interval` object on the destination transcript
Below is the instruction that describes the task: ### Input: project c_interval on the source transcript to the destination transcript :param c_interval: an :class:`hgvs.interval.Interval` object on the source transcript :returns: c_interval: an :class:`hgvs.interval.Interval` object on the destination transcript ### Response: def project_interval_forward(self, c_interval): """ project c_interval on the source transcript to the destination transcript :param c_interval: an :class:`hgvs.interval.Interval` object on the source transcript :returns: c_interval: an :class:`hgvs.interval.Interval` object on the destination transcript """ return self.dst_tm.g_to_c(self.src_tm.c_to_g(c_interval))
def content(): """Helper method that returns just the content. This method was added so that the text could be reused in the dock_help module. .. versionadded:: 3.2.3 :returns: A message object without brand element. :rtype: safe.messaging.message.Message """ message = m.Message() paragraph = m.Paragraph( m.Image( 'file:///%s/img/screenshots/' 'petabencana-screenshot.png' % resources_path()), style_class='text-center' ) message.add(paragraph) link = m.Link('https://petabencana.id', 'PetaBencana.id') body = m.Paragraph(tr( 'This tool will fetch current flood data for Jakarta from '), link) tips = m.BulletedList() tips.add(tr( 'Check the output directory is correct. Note that the saved ' 'dataset will be called jakarta_flood.shp (and associated files).' )) tips.add(tr( 'If you wish you can specify a prefix to ' 'add in front of this default name. For example using a prefix ' 'of \'foo-\' will cause the downloaded files to be saved as e.g. ' '\'foo-rw-jakarta-flood.shp\'. Note that the only allowed prefix ' 'characters are A-Z, a-z, 0-9 and the characters \'-\' and \'_\'. ' 'You can leave this blank if you prefer.' )) tips.add(tr( 'If a dataset already exists in the output directory it will be ' 'overwritten if the "overwrite existing files" checkbox is ticked.' )) tips.add(tr( 'If the "include date/time in output filename" option is ticked, ' 'the filename will be prefixed with a time stamp e.g. ' '\'foo-22-Mar-2015-08-01-2015-rw-jakarta-flood.shp\' where the date ' 'timestamp is in the form DD-MMM-YYYY.' )) tips.add(tr( 'This tool requires a working internet connection and fetching ' 'data will consume your bandwidth.')) tips.add(m.Link( production_api['help_url'], text=tr( 'Downloaded data is copyright the PetaBencana contributors' ' (click for more info).') )) message.add(body) message.add(tips) return message
Helper method that returns just the content. This method was added so that the text could be reused in the dock_help module. .. versionadded:: 3.2.3 :returns: A message object without brand element. :rtype: safe.messaging.message.Message
Below is the instruction that describes the task: ### Input: Helper method that returns just the content. This method was added so that the text could be reused in the dock_help module. .. versionadded:: 3.2.3 :returns: A message object without brand element. :rtype: safe.messaging.message.Message ### Response: def content(): """Helper method that returns just the content. This method was added so that the text could be reused in the dock_help module. .. versionadded:: 3.2.3 :returns: A message object without brand element. :rtype: safe.messaging.message.Message """ message = m.Message() paragraph = m.Paragraph( m.Image( 'file:///%s/img/screenshots/' 'petabencana-screenshot.png' % resources_path()), style_class='text-center' ) message.add(paragraph) link = m.Link('https://petabencana.id', 'PetaBencana.id') body = m.Paragraph(tr( 'This tool will fetch current flood data for Jakarta from '), link) tips = m.BulletedList() tips.add(tr( 'Check the output directory is correct. Note that the saved ' 'dataset will be called jakarta_flood.shp (and associated files).' )) tips.add(tr( 'If you wish you can specify a prefix to ' 'add in front of this default name. For example using a prefix ' 'of \'foo-\' will cause the downloaded files to be saved as e.g. ' '\'foo-rw-jakarta-flood.shp\'. Note that the only allowed prefix ' 'characters are A-Z, a-z, 0-9 and the characters \'-\' and \'_\'. ' 'You can leave this blank if you prefer.' )) tips.add(tr( 'If a dataset already exists in the output directory it will be ' 'overwritten if the "overwrite existing files" checkbox is ticked.' )) tips.add(tr( 'If the "include date/time in output filename" option is ticked, ' 'the filename will be prefixed with a time stamp e.g. ' '\'foo-22-Mar-2015-08-01-2015-rw-jakarta-flood.shp\' where the date ' 'timestamp is in the form DD-MMM-YYYY.' 
)) tips.add(tr( 'This tool requires a working internet connection and fetching ' 'data will consume your bandwidth.')) tips.add(m.Link( production_api['help_url'], text=tr( 'Downloaded data is copyright the PetaBencana contributors' ' (click for more info).') )) message.add(body) message.add(tips) return message
def run(self, args): """ Gives user permission based on auth_role arg and sends email to that user. :param args Namespace arguments parsed from the command line """ email = args.email # email of person to send email to username = args.username # username of person to send email to, will be None if email is specified force_send = args.resend # is this a resend so we should force sending auth_role = args.auth_role # authorization role (project permissions) to give to the user msg_file = args.msg_file # message file whose contents will be sent with the share message = read_argument_file_contents(msg_file) print("Sharing project.") to_user = self.remote_store.lookup_or_register_user_by_email_or_username(email, username) try: project = self.fetch_project(args, must_exist=True, include_children=False) dest_email = self.service.share(project, to_user, force_send, auth_role, message) print("Share email message sent to " + dest_email) except D4S2Error as ex: if ex.warning: print(ex.message) else: raise
Gives user permission based on auth_role arg and sends email to that user. :param args Namespace arguments parsed from the command line
Below is the instruction that describes the task: ### Input: Gives user permission based on auth_role arg and sends email to that user. :param args Namespace arguments parsed from the command line ### Response: def run(self, args): """ Gives user permission based on auth_role arg and sends email to that user. :param args Namespace arguments parsed from the command line """ email = args.email # email of person to send email to username = args.username # username of person to send email to, will be None if email is specified force_send = args.resend # is this a resend so we should force sending auth_role = args.auth_role # authorization role (project permissions) to give to the user msg_file = args.msg_file # message file whose contents will be sent with the share message = read_argument_file_contents(msg_file) print("Sharing project.") to_user = self.remote_store.lookup_or_register_user_by_email_or_username(email, username) try: project = self.fetch_project(args, must_exist=True, include_children=False) dest_email = self.service.share(project, to_user, force_send, auth_role, message) print("Share email message sent to " + dest_email) except D4S2Error as ex: if ex.warning: print(ex.message) else: raise
def show_ntp_input_rbridge_id(self, **kwargs): """Auto Generated Code """ config = ET.Element("config") show_ntp = ET.Element("show_ntp") config = show_ntp input = ET.SubElement(show_ntp, "input") rbridge_id = ET.SubElement(input, "rbridge-id") rbridge_id.text = kwargs.pop('rbridge_id') callback = kwargs.pop('callback', self._callback) return callback(config)
Auto Generated Code
Below is the instruction that describes the task: ### Input: Auto Generated Code ### Response: def show_ntp_input_rbridge_id(self, **kwargs): """Auto Generated Code """ config = ET.Element("config") show_ntp = ET.Element("show_ntp") config = show_ntp input = ET.SubElement(show_ntp, "input") rbridge_id = ET.SubElement(input, "rbridge-id") rbridge_id.text = kwargs.pop('rbridge_id') callback = kwargs.pop('callback', self._callback) return callback(config)
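One detail worth noting in the record above: `config = show_ntp` rebinds the name, so the initial `<config>` element is discarded and the tree handed to the callback is rooted at `<show_ntp>`. A standalone trace of the element construction with a sample rbridge-id of "1" (a hypothetical value for illustration):

```python
import xml.etree.ElementTree as ET

# Rebuild the same tree the record produces after the rebinding.
show_ntp = ET.Element("show_ntp")
input_el = ET.SubElement(show_ntp, "input")  # named `input` above, shadowing the builtin
rbridge_id = ET.SubElement(input_el, "rbridge-id")
rbridge_id.text = "1"

payload = ET.tostring(show_ntp).decode()
print(payload)  # <show_ntp><input><rbridge-id>1</rbridge-id></input></show_ntp>
```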
def ser_iuwt_recomposition(in1, scale_adjust, smoothed_array): """ This function calls the a trous algorithm code to recompose the input into a single array. This is the implementation of the isotropic undecimated wavelet transform recomposition for a single CPU core. INPUTS: in1 (no default): Array containing wavelet coefficients. scale_adjust (no default): Indicates the number of truncated array pages. smoothed_array (default=None): For a complete inverse transform, this must be the smoothest approximation. OUTPUTS: recomposition Array containing the reconstructed image. """ wavelet_filter = (1./16)*np.array([1,4,6,4,1]) # Filter-bank for use in the a trous algorithm. # Determines scale with adjustment and creates a zero array to store the output, unless smoothed_array is given. max_scale = in1.shape[0] + scale_adjust if smoothed_array is None: recomposition = np.zeros([in1.shape[1], in1.shape[2]]) else: recomposition = smoothed_array # The following loops call the a trous algorithm code to recompose the input. The first loop assumes that there are # non-zero wavelet coefficients at scales above scale_adjust, while the second loop completes the recomposition # on the scales less than scale_adjust. for i in range(max_scale-1, scale_adjust-1, -1): recomposition = ser_a_trous(recomposition, wavelet_filter, i) + in1[i-scale_adjust,:,:] if scale_adjust>0: for i in range(scale_adjust-1, -1, -1): recomposition = ser_a_trous(recomposition, wavelet_filter, i) return recomposition
This function calls the a trous algorithm code to recompose the input into a single array. This is the implementation of the isotropic undecimated wavelet transform recomposition for a single CPU core. INPUTS: in1 (no default): Array containing wavelet coefficients. scale_adjust (no default): Indicates the number of truncated array pages. smoothed_array (default=None): For a complete inverse transform, this must be the smoothest approximation. OUTPUTS: recomposition Array containing the reconstructed image.
Below is the instruction that describes the task: ### Input: This function calls the a trous algorithm code to recompose the input into a single array. This is the implementation of the isotropic undecimated wavelet transform recomposition for a single CPU core. INPUTS: in1 (no default): Array containing wavelet coefficients. scale_adjust (no default): Indicates the number of truncated array pages. smoothed_array (default=None): For a complete inverse transform, this must be the smoothest approximation. OUTPUTS: recomposition Array containing the reconstructed image. ### Response: def ser_iuwt_recomposition(in1, scale_adjust, smoothed_array): """ This function calls the a trous algorithm code to recompose the input into a single array. This is the implementation of the isotropic undecimated wavelet transform recomposition for a single CPU core. INPUTS: in1 (no default): Array containing wavelet coefficients. scale_adjust (no default): Indicates the number of truncated array pages. smoothed_array (default=None): For a complete inverse transform, this must be the smoothest approximation. OUTPUTS: recomposition Array containing the reconstructed image. """ wavelet_filter = (1./16)*np.array([1,4,6,4,1]) # Filter-bank for use in the a trous algorithm. # Determines scale with adjustment and creates a zero array to store the output, unless smoothed_array is given. max_scale = in1.shape[0] + scale_adjust if smoothed_array is None: recomposition = np.zeros([in1.shape[1], in1.shape[2]]) else: recomposition = smoothed_array # The following loops call the a trous algorithm code to recompose the input. The first loop assumes that there are # non-zero wavelet coefficients at scales above scale_adjust, while the second loop completes the recomposition # on the scales less than scale_adjust. 
for i in range(max_scale-1, scale_adjust-1, -1): recomposition = ser_a_trous(recomposition, wavelet_filter, i) + in1[i-scale_adjust,:,:] if scale_adjust>0: for i in range(scale_adjust-1, -1, -1): recomposition = ser_a_trous(recomposition, wavelet_filter, i) return recomposition
def ustr(text): """ Convert a string to Python 2 unicode or Python 3 string as appropriate to the version of Python in use. """ if text is not None: if sys.version_info >= (3, 0): return str(text) else: return unicode(text) else: return text
Convert a string to Python 2 unicode or Python 3 string as appropriate to the version of Python in use.
Below is the instruction that describes the task: ### Input: Convert a string to Python 2 unicode or Python 3 string as appropriate to the version of Python in use. ### Response: def ustr(text): """ Convert a string to Python 2 unicode or Python 3 string as appropriate to the version of Python in use. """ if text is not None: if sys.version_info >= (3, 0): return str(text) else: return unicode(text) else: return text
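A quick sanity check of `ustr` (standalone copy of the record above): under Python 3 it reduces to `str`, and `None` passes through untouched.

```python
import sys

def ustr(text):
    """Return `text` as unicode (Py2) / str (Py3); pass None through."""
    if text is not None:
        if sys.version_info >= (3, 0):
            return str(text)
        else:
            return unicode(text)  # noqa: F821 -- only reached on Python 2
    else:
        return text

print(ustr(42), ustr(None))  # 42 None
```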
def is_period(arr): """ Check whether an array-like is a periodical index. .. deprecated:: 0.24.0 Parameters ---------- arr : array-like The array-like to check. Returns ------- boolean Whether or not the array-like is a periodical index. Examples -------- >>> is_period([1, 2, 3]) False >>> is_period(pd.Index([1, 2, 3])) False >>> is_period(pd.PeriodIndex(["2017-01-01"], freq="D")) True """ warnings.warn("'is_period' is deprecated and will be removed in a future " "version. Use 'is_period_dtype' or 'is_period_arraylike' " "instead.", FutureWarning, stacklevel=2) return isinstance(arr, ABCPeriodIndex) or is_period_arraylike(arr)
Check whether an array-like is a periodical index. .. deprecated:: 0.24.0 Parameters ---------- arr : array-like The array-like to check. Returns ------- boolean Whether or not the array-like is a periodical index. Examples -------- >>> is_period([1, 2, 3]) False >>> is_period(pd.Index([1, 2, 3])) False >>> is_period(pd.PeriodIndex(["2017-01-01"], freq="D")) True
Below is the instruction that describes the task: ### Input: Check whether an array-like is a periodical index. .. deprecated:: 0.24.0 Parameters ---------- arr : array-like The array-like to check. Returns ------- boolean Whether or not the array-like is a periodical index. Examples -------- >>> is_period([1, 2, 3]) False >>> is_period(pd.Index([1, 2, 3])) False >>> is_period(pd.PeriodIndex(["2017-01-01"], freq="D")) True ### Response: def is_period(arr): """ Check whether an array-like is a periodical index. .. deprecated:: 0.24.0 Parameters ---------- arr : array-like The array-like to check. Returns ------- boolean Whether or not the array-like is a periodical index. Examples -------- >>> is_period([1, 2, 3]) False >>> is_period(pd.Index([1, 2, 3])) False >>> is_period(pd.PeriodIndex(["2017-01-01"], freq="D")) True """ warnings.warn("'is_period' is deprecated and will be removed in a future " "version. Use 'is_period_dtype' or 'is_period_arraylike' " "instead.", FutureWarning, stacklevel=2) return isinstance(arr, ABCPeriodIndex) or is_period_arraylike(arr)
def Sum(a, axis, keep_dims): """ Sum reduction op. """ return np.sum(a, axis=axis if not isinstance(axis, np.ndarray) else tuple(axis), keepdims=keep_dims),
Sum reduction op.
Below is the instruction that describes the task: ### Input: Sum reduction op. ### Response: def Sum(a, axis, keep_dims): """ Sum reduction op. """ return np.sum(a, axis=axis if not isinstance(axis, np.ndarray) else tuple(axis), keepdims=keep_dims),
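Note the trailing comma in the record above: `Sum` returns a 1-tuple, presumably to match a multi-output op convention, and an `np.ndarray` axis is converted to a plain tuple before being passed to `np.sum`. A standalone check with sample inputs:

```python
import numpy as np

def Sum(a, axis, keep_dims):
    """Sum reduction op (standalone copy of the record above)."""
    return np.sum(a, axis=axis if not isinstance(axis, np.ndarray)
                  else tuple(axis), keepdims=keep_dims),

# Unpack the 1-tuple; keepdims=True preserves the reduced axis.
out, = Sum(np.array([[1, 2], [3, 4]]), np.array([0]), True)
print(out.tolist())  # [[4, 6]] -- shape (1, 2)
```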
def get_zone_temperature(self, zone_name): """ Get the temperature for a zone """ zone = self.get_zone(zone_name) if zone is None: raise RuntimeError("Unknown zone") return zone['currentTemperature']
Get the temperature for a zone
Below is the instruction that describes the task: ### Input: Get the temperature for a zone ### Response: def get_zone_temperature(self, zone_name): """ Get the temperature for a zone """ zone = self.get_zone(zone_name) if zone is None: raise RuntimeError("Unknown zone") return zone['currentTemperature']
def follow_redirections(self, request): """Follow all redirections of http response.""" log.debug(LOG_CHECK, "follow all redirections") if self.is_redirect(): # run connection plugins for old connection self.aggregate.plugin_manager.run_connection_plugins(self) response = None for response in self.get_redirects(request): newurl = response.url log.debug(LOG_CHECK, "Redirected to %r", newurl) self.aliases.append(newurl) # XXX on redirect errors this is not printed self.add_info(_("Redirected to `%(url)s'.") % {'url': newurl}) # Reset extern and recalculate self.extern = None self.set_extern(newurl) self.urlparts = strformat.url_unicode_split(newurl) self.build_url_parts() self.url_connection = response self.headers = response.headers self.url = urlutil.urlunsplit(self.urlparts) self.scheme = self.urlparts[0].lower() self._add_ssl_info() self._add_response_info() if self.is_redirect(): # run connection plugins for old connection self.aggregate.plugin_manager.run_connection_plugins(self)
Follow all redirections of http response.
Below is the instruction that describes the task: ### Input: Follow all redirections of http response. ### Response: def follow_redirections(self, request): """Follow all redirections of http response.""" log.debug(LOG_CHECK, "follow all redirections") if self.is_redirect(): # run connection plugins for old connection self.aggregate.plugin_manager.run_connection_plugins(self) response = None for response in self.get_redirects(request): newurl = response.url log.debug(LOG_CHECK, "Redirected to %r", newurl) self.aliases.append(newurl) # XXX on redirect errors this is not printed self.add_info(_("Redirected to `%(url)s'.") % {'url': newurl}) # Reset extern and recalculate self.extern = None self.set_extern(newurl) self.urlparts = strformat.url_unicode_split(newurl) self.build_url_parts() self.url_connection = response self.headers = response.headers self.url = urlutil.urlunsplit(self.urlparts) self.scheme = self.urlparts[0].lower() self._add_ssl_info() self._add_response_info() if self.is_redirect(): # run connection plugins for old connection self.aggregate.plugin_manager.run_connection_plugins(self)
def check_cuda_devices(): """Output some information on CUDA-enabled devices on your computer, including current memory usage. Modified to only get number of devices. It's a port of https://gist.github.com/f0k/0d6431e3faa60bffc788f8b4daa029b1 from C to Python with ctypes, so it can run without compiling anything. Note that this is a direct translation with no attempt to make the code Pythonic. It's meant as a general demonstration on how to obtain CUDA device information from Python without resorting to nvidia-smi or a compiled Python extension. .. note:: Author: Jan Schlüter, https://gist.github.com/63a664160d016a491b2cbea15913d549.git """ import ctypes # Some constants taken from cuda.h CUDA_SUCCESS = 0 libnames = ('libcuda.so', 'libcuda.dylib', 'cuda.dll') for libname in libnames: try: cuda = ctypes.CDLL(libname) except OSError: continue else: break else: # raise OSError("could not load any of: " + ' '.join(libnames)) return 0 nGpus = ctypes.c_int() error_str = ctypes.c_char_p() result = cuda.cuInit(0) if result != CUDA_SUCCESS: cuda.cuGetErrorString(result, ctypes.byref(error_str)) # print("cuInit failed with error code %d: %s" % (result, error_str.value.decode())) return 0 result = cuda.cuDeviceGetCount(ctypes.byref(nGpus)) if result != CUDA_SUCCESS: cuda.cuGetErrorString(result, ctypes.byref(error_str)) # print("cuDeviceGetCount failed with error code %d: %s" % (result, error_str.value.decode())) return 0 # print("Found %d device(s)." % nGpus.value) return nGpus.value
Output some information on CUDA-enabled devices on your computer, including current memory usage. Modified to only get number of devices. It's a port of https://gist.github.com/f0k/0d6431e3faa60bffc788f8b4daa029b1 from C to Python with ctypes, so it can run without compiling anything. Note that this is a direct translation with no attempt to make the code Pythonic. It's meant as a general demonstration on how to obtain CUDA device information from Python without resorting to nvidia-smi or a compiled Python extension. .. note:: Author: Jan Schlüter, https://gist.github.com/63a664160d016a491b2cbea15913d549.git
Below is the instruction that describes the task: ### Input: Output some information on CUDA-enabled devices on your computer, including current memory usage. Modified to only get number of devices. It's a port of https://gist.github.com/f0k/0d6431e3faa60bffc788f8b4daa029b1 from C to Python with ctypes, so it can run without compiling anything. Note that this is a direct translation with no attempt to make the code Pythonic. It's meant as a general demonstration on how to obtain CUDA device information from Python without resorting to nvidia-smi or a compiled Python extension. .. note:: Author: Jan Schlüter, https://gist.github.com/63a664160d016a491b2cbea15913d549.git ### Response: def check_cuda_devices(): """Output some information on CUDA-enabled devices on your computer, including current memory usage. Modified to only get number of devices. It's a port of https://gist.github.com/f0k/0d6431e3faa60bffc788f8b4daa029b1 from C to Python with ctypes, so it can run without compiling anything. Note that this is a direct translation with no attempt to make the code Pythonic. It's meant as a general demonstration on how to obtain CUDA device information from Python without resorting to nvidia-smi or a compiled Python extension. .. 
note:: Author: Jan Schlüter, https://gist.github.com/63a664160d016a491b2cbea15913d549.git """ import ctypes # Some constants taken from cuda.h CUDA_SUCCESS = 0 libnames = ('libcuda.so', 'libcuda.dylib', 'cuda.dll') for libname in libnames: try: cuda = ctypes.CDLL(libname) except OSError: continue else: break else: # raise OSError("could not load any of: " + ' '.join(libnames)) return 0 nGpus = ctypes.c_int() error_str = ctypes.c_char_p() result = cuda.cuInit(0) if result != CUDA_SUCCESS: cuda.cuGetErrorString(result, ctypes.byref(error_str)) # print("cuInit failed with error code %d: %s" % (result, error_str.value.decode())) return 0 result = cuda.cuDeviceGetCount(ctypes.byref(nGpus)) if result != CUDA_SUCCESS: cuda.cuGetErrorString(result, ctypes.byref(error_str)) # print("cuDeviceGetCount failed with error code %d: %s" % (result, error_str.value.decode())) return 0 # print("Found %d device(s)." % nGpus.value) return nGpus.value
def version_from_branch(branch):
    """ return version information from a git branch name """
    try:
        return tuple(
            map(
                int,
                re.match(r"^.*(?P<version>\d+(\.\d+)+).*$", branch)
                .groupdict()["version"]
                .split("."),
            )
        )
    except AttributeError as attr_err:
        raise ValueError(
            f"Branch {branch} seems to not have a version in its name."
        ) from attr_err
return version information from a git branch name
Below is the instruction that describes the task:
### Input:
return version information from a git branch name
### Response:
def version_from_branch(branch):
    """ return version information from a git branch name """
    try:
        return tuple(
            map(
                int,
                re.match(r"^.*(?P<version>\d+(\.\d+)+).*$", branch)
                .groupdict()["version"]
                .split("."),
            )
        )
    except AttributeError as attr_err:
        raise ValueError(
            f"Branch {branch} seems to not have a version in its name."
        ) from attr_err
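A usage sketch of the function above. One subtlety worth noting: the leading `.*` in the pattern is greedy, so with a three-component version such as `release-1.2.3` the named group only captures the last two components; two-component versions parse as expected, and a branch with no version raises `ValueError`.

```python
import re

def version_from_branch(branch):
    """Extract a numeric version tuple from a branch name (same code as above)."""
    try:
        return tuple(
            map(
                int,
                re.match(r"^.*(?P<version>\d+(\.\d+)+).*$", branch)
                .groupdict()["version"]
                .split("."),
            )
        )
    except AttributeError as attr_err:
        raise ValueError(
            f"Branch {branch} seems to not have a version in its name."
        ) from attr_err

print(version_from_branch("release/4.2"))    # (4, 2)
# Greedy ".*" swallows the leading "1." here, so only (2, 3) is captured:
print(version_from_branch("release-1.2.3"))  # (2, 3)
```

Anchoring the version group differently (e.g. `(?P<version>\d+(?:\.\d+)+)` with a non-greedy prefix) would avoid the truncation, at the cost of changing the function's behavior.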
def set_hostname(self, value=None, default=False, disable=False):
    """Configures the global system hostname setting

    EosVersion:
        4.13.7M

    Args:
        value (str): The hostname value
        default (bool): Controls use of the default keyword
        disable (bool): Controls the use of the no keyword

    Returns:
        bool: True if the commands are completed successfully
    """
    cmd = self.command_builder('hostname', value=value, default=default,
                               disable=disable)
    return self.configure(cmd)
Configures the global system hostname setting

EosVersion:
    4.13.7M

Args:
    value (str): The hostname value
    default (bool): Controls use of the default keyword
    disable (bool): Controls the use of the no keyword

Returns:
    bool: True if the commands are completed successfully
Below is the instruction that describes the task:
### Input:
Configures the global system hostname setting

EosVersion:
    4.13.7M

Args:
    value (str): The hostname value
    default (bool): Controls use of the default keyword
    disable (bool): Controls the use of the no keyword

Returns:
    bool: True if the commands are completed successfully
### Response:
def set_hostname(self, value=None, default=False, disable=False):
    """Configures the global system hostname setting

    EosVersion:
        4.13.7M

    Args:
        value (str): The hostname value
        default (bool): Controls use of the default keyword
        disable (bool): Controls the use of the no keyword

    Returns:
        bool: True if the commands are completed successfully
    """
    cmd = self.command_builder('hostname', value=value, default=default,
                               disable=disable)
    return self.configure(cmd)
def _step5(self, word):
    """step5() removes a final -e if m() > 1, and changes -ll to -l
    if m() > 1.
    """
    if word[-1] == 'e':
        a = self._m(word, len(word)-1)
        if a > 1 or (a == 1 and not self._cvc(word, len(word)-2)):
            word = word[:-1]
    if word.endswith('ll') and self._m(word, len(word)-1) > 1:
        word = word[:-1]
    return word
step5() removes a final -e if m() > 1, and changes -ll to -l if m() > 1.
Below is the instruction that describes the task:
### Input:
step5() removes a final -e if m() > 1, and changes -ll to -l if m() > 1.
### Response:
def _step5(self, word):
    """step5() removes a final -e if m() > 1, and changes -ll to -l
    if m() > 1.
    """
    if word[-1] == 'e':
        a = self._m(word, len(word)-1)
        if a > 1 or (a == 1 and not self._cvc(word, len(word)-2)):
            word = word[:-1]
    if word.endswith('ll') and self._m(word, len(word)-1) > 1:
        word = word[:-1]
    return word
def _get_constrained_labels(self, remove_dimensions=False, **kwargs):
    """ returns labels which have updated minima and maxima,
    depending on the kwargs supplied to this
    """
    new_labels = []
    for label_no, label in enumerate(self.labels):
        new_label = LabelDimension(label)
        remove = False
        for k in kwargs:
            if k == label.name:
                new_label.max = kwargs[k]
                new_label.min = kwargs[k]
                remove = True
            if k == label.name+'__lt':
                if new_label.units == '1':
                    new_label.max = np.min([new_label.max, kwargs[k]-1])  # is this right?
                else:
                    new_label.max = np.min([new_label.max, kwargs[k]])
                #remove = True
            if k == label.name+'__lte':
                new_label.max = np.min([new_label.max, kwargs[k]])
                #remove = True
            if k == label.name+'__gt':
                if new_label.units == '1':
                    new_label.min = np.max([new_label.min, kwargs[k]+1])
                else:
                    new_label.min = np.max([new_label.min, kwargs[k]])
                #remove = True
            if k == label.name+'__gte':
                new_label.min = np.max([new_label.min, kwargs[k]])
                #remove = True
            if k == label.name+'__evals':
                remove = True
        if remove_dimensions:
            if remove:
                # skipping removed labels
                continue
        new_labels.append(new_label)
    return new_labels
returns labels which have updated minima and maxima, depending on the kwargs supplied to this
Below is the instruction that describes the task:
### Input:
returns labels which have updated minima and maxima, depending on the kwargs supplied to this
### Response:
def _get_constrained_labels(self, remove_dimensions=False, **kwargs):
    """ returns labels which have updated minima and maxima,
    depending on the kwargs supplied to this
    """
    new_labels = []
    for label_no, label in enumerate(self.labels):
        new_label = LabelDimension(label)
        remove = False
        for k in kwargs:
            if k == label.name:
                new_label.max = kwargs[k]
                new_label.min = kwargs[k]
                remove = True
            if k == label.name+'__lt':
                if new_label.units == '1':
                    new_label.max = np.min([new_label.max, kwargs[k]-1])  # is this right?
                else:
                    new_label.max = np.min([new_label.max, kwargs[k]])
                #remove = True
            if k == label.name+'__lte':
                new_label.max = np.min([new_label.max, kwargs[k]])
                #remove = True
            if k == label.name+'__gt':
                if new_label.units == '1':
                    new_label.min = np.max([new_label.min, kwargs[k]+1])
                else:
                    new_label.min = np.max([new_label.min, kwargs[k]])
                #remove = True
            if k == label.name+'__gte':
                new_label.min = np.max([new_label.min, kwargs[k]])
                #remove = True
            if k == label.name+'__evals':
                remove = True
        if remove_dimensions:
            if remove:
                # skipping removed labels
                continue
        new_labels.append(new_label)
    return new_labels
def get_cds_time(days, msecs):
    """Get the datetime object of the time since epoch given in days and
    milliseconds of day
    """
    return datetime(1958, 1, 1) + timedelta(days=float(days),
                                            milliseconds=float(msecs))
Get the datetime object of the time since epoch given in days and milliseconds of day
Below is the instruction that describes the task:
### Input:
Get the datetime object of the time since epoch given in days and milliseconds of day
### Response:
def get_cds_time(days, msecs):
    """Get the datetime object of the time since epoch given in days and
    milliseconds of day
    """
    return datetime(1958, 1, 1) + timedelta(days=float(days),
                                            milliseconds=float(msecs))
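The conversion above is a plain epoch offset from 1958-01-01 (the CCSDS day-segmented time epoch), so it can be sketched and checked directly:

```python
from datetime import datetime, timedelta

def get_cds_time(days, msecs):
    """Convert CDS time (days + milliseconds since 1958-01-01) to datetime,
    same code as the record above."""
    return datetime(1958, 1, 1) + timedelta(days=float(days),
                                            milliseconds=float(msecs))

# Day 0, millisecond 0 is the epoch itself:
print(get_cds_time(0, 0))              # 1958-01-01 00:00:00
# 365 days (1958 is not a leap year) plus 43,200,000 ms = 12 hours:
print(get_cds_time(365, 43_200_000))   # 1959-01-01 12:00:00
```

Casting both arguments through `float` also lets fractional days or milliseconds pass through without a type error.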
def authentication_required(function):
    """Annotation for methods that require auth."""
    def wrapped(self, *args, **kwargs):
        if not (self.token or self.apiKey):
            msg = "You must be authenticated to use this method"
            raise AuthenticationError(msg)
        else:
            return function(self, *args, **kwargs)
    return wrapped
Annotation for methods that require auth.
Below is the instruction that describes the task:
### Input:
Annotation for methods that require auth.
### Response:
def authentication_required(function):
    """Annotation for methods that require auth."""
    def wrapped(self, *args, **kwargs):
        if not (self.token or self.apiKey):
            msg = "You must be authenticated to use this method"
            raise AuthenticationError(msg)
        else:
            return function(self, *args, **kwargs)
    return wrapped
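To see the decorator above in action, here is a self-contained sketch; the `AuthenticationError` class and the `Client` class are stand-ins I supply for illustration, since the record only shows the decorator itself:

```python
class AuthenticationError(Exception):
    """Stand-in for the library's own exception type."""

def authentication_required(function):
    """Same decorator as above: reject calls unless self.token or self.apiKey is set."""
    def wrapped(self, *args, **kwargs):
        if not (self.token or self.apiKey):
            raise AuthenticationError("You must be authenticated to use this method")
        return function(self, *args, **kwargs)
    return wrapped

class Client:
    """Hypothetical API client demonstrating the decorator."""
    def __init__(self, token=None, apiKey=None):
        self.token = token
        self.apiKey = apiKey

    @authentication_required
    def whoami(self):
        return "authenticated"

print(Client(token="abc").whoami())  # authenticated
```

Calling `Client().whoami()` with neither credential set raises `AuthenticationError` before the wrapped method body ever runs.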
def notify(msg, msg_type=0, t=None):
    "Show system notification with duration t (ms)"
    if platform.system() == 'Darwin':
        command = notify_command_osx(msg, msg_type, t)
    else:
        command = notify_command_linux(msg, t)
    os.system(command.encode('utf-8'))
Show system notification with duration t (ms)
Below is the instruction that describes the task:
### Input:
Show system notification with duration t (ms)
### Response:
def notify(msg, msg_type=0, t=None):
    "Show system notification with duration t (ms)"
    if platform.system() == 'Darwin':
        command = notify_command_osx(msg, msg_type, t)
    else:
        command = notify_command_linux(msg, t)
    os.system(command.encode('utf-8'))
def transitingPlanets(self):
    """ Returns a list of transiting planet objects """
    transitingPlanets = []
    for planet in self.planets:
        try:
            if planet.isTransiting:
                transitingPlanets.append(planet)
        except KeyError:
            # No 'discoverymethod' tag - this also filters Solar System planets
            pass
    return transitingPlanets
Returns a list of transiting planet objects
Below is the instruction that describes the task:
### Input:
Returns a list of transiting planet objects
### Response:
def transitingPlanets(self):
    """ Returns a list of transiting planet objects """
    transitingPlanets = []
    for planet in self.planets:
        try:
            if planet.isTransiting:
                transitingPlanets.append(planet)
        except KeyError:
            # No 'discoverymethod' tag - this also filters Solar System planets
            pass
    return transitingPlanets
def conjugate(self):
    """
    Obtain the conjugate of the quaternion.

    This is simply the same quaternion but with the sign of the
    imaginary (vector) parts reversed.
    """
    new = self.copy()
    new.x *= -1
    new.y *= -1
    new.z *= -1
    return new
Obtain the conjugate of the quaternion. This is simply the same quaternion but with the sign of the imaginary (vector) parts reversed.
Below is the instruction that describes the task:
### Input:
Obtain the conjugate of the quaternion. This is simply the same quaternion but with the sign of the imaginary (vector) parts reversed.
### Response:
def conjugate(self):
    """
    Obtain the conjugate of the quaternion.

    This is simply the same quaternion but with the sign of the
    imaginary (vector) parts reversed.
    """
    new = self.copy()
    new.x *= -1
    new.y *= -1
    new.z *= -1
    return new
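The method above negates x, y, z but leaves the scalar part untouched, and returns a fresh object rather than mutating in place. A toy `Quaternion` class (my own minimal stand-in with a `w, x, y, z` layout and a `copy` method, since the record shows only the method) makes both properties visible:

```python
class Quaternion:
    """Toy quaternion (w + x*i + y*j + z*k) to illustrate conjugation."""
    def __init__(self, w, x, y, z):
        self.w, self.x, self.y, self.z = w, x, y, z

    def copy(self):
        return Quaternion(self.w, self.x, self.y, self.z)

    def conjugate(self):
        # Same method as above: negate only the vector (imaginary) part.
        new = self.copy()
        new.x *= -1
        new.y *= -1
        new.z *= -1
        return new

q = Quaternion(1, 2, 3, 4)
c = q.conjugate()
print(c.w, c.x, c.y, c.z)  # 1 -2 -3 -4
print(q.w, q.x, q.y, q.z)  # 1 2 3 4  (original is unchanged)
```

Building the result via `copy()` is what makes the operation non-destructive: the caller's quaternion keeps its original components.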