code: string (lengths 4–4.48k)
docstring: string (lengths 1–6.45k)
_id: string (length 24)
def list_deployment_for_all_namespaces(self, **kwargs): <NEW_LINE> <INDENT> kwargs['_return_http_data_only'] = True <NEW_LINE> if kwargs.get('callback'): <NEW_LINE> <INDENT> return self.list_deployment_for_all_namespaces_with_http_info(**kwargs) <NEW_LINE> <DEDENT> else: <NEW_LINE> <INDENT> (data) = self.list_deployment_for_all_namespaces_with_http_info(**kwargs) <NEW_LINE> return data
list or watch objects of kind Deployment This method makes a synchronous HTTP request by default. To make an asynchronous HTTP request, please define a `callback` function to be invoked when receiving the response. >>> def callback_function(response): >>> pprint(response) >>> >>> thread = api.list_deployment_for_all_namespaces(callback=callback_function) :param callback function: The callback function for asynchronous request. (optional) :param str pretty: If 'true', then the output is pretty printed. :param str label_selector: A selector to restrict the list of returned objects by their labels. Defaults to everything. :param str field_selector: A selector to restrict the list of returned objects by their fields. Defaults to everything. :param bool watch: Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. :param str resource_version: When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history. When specified for list: - if unset, then the result is returned from remote storage based on quorum-read flag; - if it's 0, then we simply return what we currently have in cache, no guarantee; - if set to non zero, then the result is at least as fresh as given rv. :param int timeout_seconds: Timeout for the list/watch call. :return: V1beta1DeploymentList If the method is called asynchronously, returns the request thread.
625941bfeab8aa0e5d26daa2
def generate_BW(self, iperf): <NEW_LINE> <INDENT> idx=[] <NEW_LINE> value=[] <NEW_LINE> duration = iperf.get('start').get('test_start').get('duration') <NEW_LINE> for i in iperf.get('intervals'): <NEW_LINE> <INDENT> for ii in i.get('streams'): <NEW_LINE> <INDENT> if (round(float(ii.get('start')), 0)) <= duration: <NEW_LINE> <INDENT> idx.append(round(float(ii.get('start')), 0)) <NEW_LINE> value.append(round(float(ii.get('bits_per_second')) / (10*1024*1024), 3)) <NEW_LINE> <DEDENT> <DEDENT> <DEDENT> return pd.Series(value, index=idx)
Do the actual formatting.
625941bf5fc7496912cc38c9
def obtain_solution(self, init_mean, init_var): <NEW_LINE> <INDENT> mean, var, t = tf.cast(init_mean, dtype=tf.float32), tf.cast(init_var, dtype=tf.float32), 0 <NEW_LINE> tf_dist = tfd.TruncatedNormal(low=-2, high=2, loc=tf.zeros_like(mean), scale=tf.ones_like(var)) <NEW_LINE> def _cond(mean, var, t): <NEW_LINE> <INDENT> bool1 = tf.less(t, self.max_iters) <NEW_LINE> bool2 = tf.less(self.epsilon, tf.reduce_max(var)) <NEW_LINE> return tf.math.logical_and(x=bool1, y=bool2) <NEW_LINE> <DEDENT> def _body(mean, var, t): <NEW_LINE> <INDENT> lb_dist, ub_dist = mean - self.lb, self.ub - mean <NEW_LINE> constrained_var = tf.math.minimum( tf.math.minimum(tf.math.square(lb_dist / 2), tf.math.square(ub_dist / 2)), var) <NEW_LINE> constrained_var = tf.cast(constrained_var, dtype=tf.float32) <NEW_LINE> samples = tf_dist.sample(sample_shape=self.popsize) <NEW_LINE> samples = tf.cast(samples, tf.float32) * tf.math.sqrt(constrained_var) + mean <NEW_LINE> costs = self.cost_function(samples) <NEW_LINE> elites = tf.gather(samples, tf.argsort(costs))[:self.num_elites] <NEW_LINE> new_mean = tf.math.reduce_mean(elites, axis=0) <NEW_LINE> new_var = tf.math.reduce_variance(elites, axis=0) <NEW_LINE> mean = self.alpha * mean + (1 - self.alpha) * new_mean <NEW_LINE> var = self.alpha * var + (1 - self.alpha) * new_var <NEW_LINE> t += 1 <NEW_LINE> return mean, var, t <NEW_LINE> <DEDENT> mean, var, t = tf.while_loop(cond=_cond, body=_body, loop_vars=[mean, var, t]) <NEW_LINE> return mean
[Tensorflow Eager compatible] Optimizes the cost function using the provided initial candidate distribution Arguments: init_mean (np.ndarray): The mean of the initial candidate distribution. init_var (np.ndarray): The variance of the initial candidate distribution.
625941bf2ae34c7f2600d07c
def set_iam_policy(self, policy, client=None, timeout=_DEFAULT_TIMEOUT): <NEW_LINE> <INDENT> client = self._require_client(client) <NEW_LINE> query_params = {} <NEW_LINE> if self.user_project is not None: <NEW_LINE> <INDENT> query_params["userProject"] = self.user_project <NEW_LINE> <DEDENT> resource = policy.to_api_repr() <NEW_LINE> resource["resourceId"] = self.path <NEW_LINE> info = client._connection.api_request( method="PUT", path="%s/iam" % (self.path,), query_params=query_params, data=resource, _target_object=None, timeout=timeout, ) <NEW_LINE> return Policy.from_api_repr(info)
Update the IAM policy for the bucket. .. note:: Blob- / object-level IAM support does not yet exist and methods currently call an internal ACL backend not providing any utility beyond the blob's :attr:`acl` at this time. The API may be enhanced in the future and is currently undocumented. Use :attr:`acl` for managing object access control. If :attr:`user_project` is set on the bucket, bills the API request to that project. :type policy: :class:`google.api_core.iam.Policy` :param policy: policy instance used to update bucket's IAM policy. :type client: :class:`~google.cloud.storage.client.Client` or ``NoneType`` :param client: (Optional) The client to use. If not passed, falls back to the ``client`` stored on the current bucket. :type timeout: float or tuple :param timeout: (Optional) The amount of time, in seconds, to wait for the server response. Can also be passed as a tuple (connect_timeout, read_timeout). See :meth:`requests.Session.request` documentation for details. :rtype: :class:`google.api_core.iam.Policy` :returns: the policy instance, based on the resource returned from the ``setIamPolicy`` API request.
625941bf0a366e3fb873e763
def replace(self, new_minion): <NEW_LINE> <INDENT> self._remove_auras_and_effects() <NEW_LINE> new_minion.index = self.index <NEW_LINE> new_minion.player = self.player <NEW_LINE> new_minion.game = self.game <NEW_LINE> new_minion.active = True <NEW_LINE> new_minion.exhausted = True <NEW_LINE> self.game.minion_counter += 1 <NEW_LINE> new_minion.born = self.game.minion_counter <NEW_LINE> if self.index >= len(self.player.minions): <NEW_LINE> <INDENT> print("".join([str(minion) for minion in self.player.minions])) <NEW_LINE> raise ValueError("What is even happening?") <NEW_LINE> <DEDENT> self.player.minions[self.index] = new_minion <NEW_LINE> for effect in new_minion._effects_to_add: <NEW_LINE> <INDENT> new_minion.add_effect(effect) <NEW_LINE> <DEDENT> for aura in self.player.minion_auras: <NEW_LINE> <INDENT> if aura.match(new_minion): <NEW_LINE> <INDENT> aura.status.act(self, new_minion) <NEW_LINE> <DEDENT> <DEDENT> for aura in new_minion._auras_to_add: <NEW_LINE> <INDENT> new_minion.add_aura(aura) <NEW_LINE> <DEDENT> new_minion.health += new_minion.calculate_max_health() - new_minion.base_health
Replaces this minion with another one :param hearthbreaker.game_objects.Minion new_minion: The minion to replace this minion with
625941bfab23a570cc2500cb
def _log(self, msg, v=1): <NEW_LINE> <INDENT> if self.verbosity >= v: <NEW_LINE> <INDENT> self.stdout.write('%s\n' % msg)
Log messages to standard out, depending on the verbosity.
625941bf45492302aab5e20c
def testDDA(self, **kw): <NEW_LINE> <INDENT> DDAkey = (kw["Directory"], kw.get("WantReadDirectory", False), kw.get("WantWriteDirectory", False)) <NEW_LINE> try: <NEW_LINE> <INDENT> return self.testedDDA[DDAkey] <NEW_LINE> <DEDENT> except KeyError: <NEW_LINE> <INDENT> pass <NEW_LINE> <DEDENT> try: <NEW_LINE> <INDENT> requestResult = self._submitCmd("__global", "TestDDARequest", **kw) <NEW_LINE> <DEDENT> except FCPProtocolError as e: <NEW_LINE> <INDENT> self._log(DETAIL, str(e)) <NEW_LINE> return False <NEW_LINE> <DEDENT> writeFilename = None <NEW_LINE> kw = {} <NEW_LINE> kw['Directory'] = requestResult['Directory'] <NEW_LINE> if 'ReadFilename' in requestResult: <NEW_LINE> <INDENT> readFilename = requestResult['ReadFilename'] <NEW_LINE> try: <NEW_LINE> <INDENT> readFile = open(readFilename, 'rb') <NEW_LINE> readFileContents = readFile.read().decode('utf-8') <NEW_LINE> readFile.close() <NEW_LINE> <DEDENT> except FileNotFoundError: <NEW_LINE> <INDENT> readFileContents = '' <NEW_LINE> <DEDENT> kw['ReadFilename'] = readFilename <NEW_LINE> kw['ReadContent'] = readFileContents <NEW_LINE> <DEDENT> if 'WriteFilename' in requestResult and 'ContentToWrite' in requestResult: <NEW_LINE> <INDENT> writeFilename = requestResult['WriteFilename'] <NEW_LINE> contentToWrite = requestResult['ContentToWrite'].encode('utf-8') <NEW_LINE> try: <NEW_LINE> <INDENT> writeFile = open(writeFilename, "w+b") <NEW_LINE> writeFile.write(contentToWrite) <NEW_LINE> writeFile.close() <NEW_LINE> writeFileStatObject = os.stat(writeFilename) <NEW_LINE> writeFileMode = writeFileStatObject.st_mode <NEW_LINE> os.chmod(writeFilename, writeFileMode | stat.S_IREAD | stat.S_IRUSR | stat.S_IRGRP | stat.S_IROTH) <NEW_LINE> <DEDENT> except FileNotFoundError: <NEW_LINE> <INDENT> pass <NEW_LINE> <DEDENT> <DEDENT> responseResult = self._submitCmd("__global", "TestDDAResponse", **kw) <NEW_LINE> if writeFilename is not None: <NEW_LINE> <INDENT> try: <NEW_LINE> <INDENT> os.remove(writeFilename) <NEW_LINE> <DEDENT> except OSError: <NEW_LINE> <INDENT> pass <NEW_LINE> <DEDENT> <DEDENT> self.testedDDA[DDAkey] = responseResult <NEW_LINE> return responseResult
Test for Direct Disk Access capability on a directory (can the node and the FCP client both access the same directory?) Keywords: - callback - if given, this should be a callable which accepts 2 arguments: - status - will be one of 'successful', 'failed' or 'pending' - value - depends on status: - if status is 'successful', this will contain the value returned from the command - if status is 'failed' or 'pending', this will contain a dict containing the response from node - Directory - directory to test - WantReadDirectory - default False - if True, want node to read from directory for a put operation - WantWriteDirectory - default False - if True, want node to write to directory for a get operation
625941bf7d43ff24873a2be9
def _setup_workspace(workspace_path): <NEW_LINE> <INDENT> workspace_path.mkdir(parents=True, exist_ok=True) <NEW_LINE> Path(workspace_path.joinpath("build")).mkdir(parents=True, exist_ok=True)
Setup a barebones workspace with the correct directory structure and files.
625941bf07d97122c41787d1
def find_element_raw(array): <NEW_LINE> <INDENT> return array
Searches for a sequence of elements
625941bfbd1bec0571d90579
def offset_datetime(exif_dict, t_sec): <NEW_LINE> <INDENT> curr_dt = exif_dict['Exif'][piexif.ExifIFD.DateTimeOriginal] <NEW_LINE> dt_obj = datetime.strptime(curr_dt, '%Y:%m:%d %H:%M:%S') <NEW_LINE> new_dt_obj = dt_obj + timedelta(seconds=t_sec) <NEW_LINE> new_dt = new_dt_obj.strftime('%Y:%m:%d %H:%M:%S') <NEW_LINE> exif_dict['Exif'][piexif.ExifIFD.DateTimeOriginal] = new_dt <NEW_LINE> exif_dict['Exif'][piexif.ExifIFD.DateTimeDigitized] = new_dt <NEW_LINE> exif_dict['0th'][piexif.ImageIFD.DateTime] = new_dt <NEW_LINE> new_dt_str = new_dt_obj.strftime('%Y%m%d_%H%M%S') <NEW_LINE> return (exif_dict, new_dt_str)
Offset datetime by t seconds, based on DateTimeOriginal. All other time stamps are over-written. t can be positive or negative. piexif.ImageIFD.DateTime piexif.ExifIFD.DateTimeDigitized piexif.ExifIFD.DateTimeOriginal
625941bfbd1bec0571d9057a
def isCompleted(self): <NEW_LINE> <INDENT> return self.pollTask()
Wrapper around IProgress.completed.
625941bfb7558d58953c4e64
def add_task(self, task, priority=0): <NEW_LINE> <INDENT> if task in self.entry_finder: <NEW_LINE> <INDENT> self.remove_task(task) <NEW_LINE> <DEDENT> count = next(self.counter) <NEW_LINE> entry = [priority, count, task] <NEW_LINE> self.entry_finder[task] = entry <NEW_LINE> heapq.heappush(self.pqueue, entry)
Add a new task or update the priority of an existing task
625941bfac7a0e7691ed401c
def invert(chord): <NEW_LINE> <INDENT> return chord[1:] + [chord[0]]
Inverts a given chord one time
625941bf0383005118ecf52f
def do_get(self, line): <NEW_LINE> <INDENT> parts = line.split() <NEW_LINE> if len(parts) != 2: <NEW_LINE> <INDENT> raise DfsException( "Wrong amount of arguments. Expect: 2, actual: {0}".format(len(parts))) <NEW_LINE> <DEDENT> src_dfs_path = self._to_dfs_abs_path(parts[0]) <NEW_LINE> file = parts[1] <NEW_LINE> dst_local_path = self._to_local_abs_path(file) <NEW_LINE> if path.exists(dst_local_path): <NEW_LINE> <INDENT> answer = input( "local file '%s' exists. Try to Overwrite [y/N]? " % (dst_local_path,)) <NEW_LINE> if answer.lower() != 'y': <NEW_LINE> <INDENT> return <NEW_LINE> <DEDENT> <DEDENT> self.cmds.get(src_dfs_path, dst_local_path)
syntax: get SRC_DFS_PATH DST_LOCAL_PATH Download file SRC_DFS_PATH from nodes to the local DST_LOCAL_PATH
625941bf090684286d50ec2e
def error_response(errors, **kwargs): <NEW_LINE> <INDENT> error_dict = {'errors': errors} <NEW_LINE> if 'message' in kwargs: <NEW_LINE> <INDENT> error_dict['message'] = kwargs['message'] <NEW_LINE> <DEDENT> return json_response( error_dict, indent=2, status=kwargs.get('status', 400) )
Returns an error response for v3 Forge API.
625941bfaad79263cf390989
def test_remover_imovel(self): <NEW_LINE> <INDENT> self.lado.imoveis.get(ordem=7).delete() <NEW_LINE> self.lado.imoveis.get(ordem=12).delete() <NEW_LINE> self.lado.imoveis.get(ordem=1).delete() <NEW_LINE> ordem = [i.ordem for i in self.lado.imoveis.all()] <NEW_LINE> total = self.lado.imoveis.count() <NEW_LINE> self.assertEqual(ordem, list(range(1, total+1)))
The order of imoveis (properties) is maintained after removal
625941bf63b5f9789fde7030
def get_fixed_number_of_bottom_neurons(probe, num_bottom_neurons, class_to_idx): <NEW_LINE> <INDENT> ordering, _ = get_neuron_ordering(probe, class_to_idx) <NEW_LINE> return ordering[-num_bottom_neurons:]
Get global bottom neurons. This method returns a fixed number of bottom neurons from the global ordering computed using ``interpretation.linear_probe.get_neuron_ordering``. .. note:: Absolute weight values are used for selection, instead of raw signed values Parameters ---------- probe : interpretation.linear_probe.LinearProbe Trained probe model num_bottom_neurons : int Number of bottom neurons for selection class_to_idx : dict Class to class index mapping. Usually returned by ``interpretation.utils.create_tensors``. Returns ------- global_bottom_neurons : numpy.ndarray Numpy array of size ``num_bottom_neurons`` with bottom neurons using the global ordering
625941bf4527f215b584c3a5
def quicksort(xs): <NEW_LINE> <INDENT> _quicksort(xs, 0, len(xs)-1) <NEW_LINE> return xs
n2 - Recursively: pick a pivot, reorder around the pivot.
625941bf23849d37ff7b2fdb
def render_static_lightbuffer_object(self,obj): <NEW_LINE> <INDENT> obj.set_alt_camera(self.static_lightmap.camera) <NEW_LINE> obj.render( True ) <NEW_LINE> obj.reset_alt_camera()
Render an object in 'lightmap' space, with appropriate overrides for lights
625941bf21bff66bcd6848a0
def column_to_list(data, index): <NEW_LINE> <INDENT> column_list = [] <NEW_LINE> for item in data: <NEW_LINE> <INDENT> column_list.append(item[index]) <NEW_LINE> <DEDENT> return column_list
Takes a column from data_list and turns it into a list. INPUT: data: list. The data itself, as a list index: int. The index of the column you want to turn into a list OUTPUT: column_list: list. The list of the selected column
625941bf596a897236089a0e
def _set_shape (self, newshape): <NEW_LINE> <INDENT> self._data.shape = newshape <NEW_LINE> if self._mask is not nomask: <NEW_LINE> <INDENT> self._mask = self._mask.copy() <NEW_LINE> self._mask.shape = newshape
Set the array's shape.
625941bfa17c0f6771cbdf9e
def unique_file_name(prefix, time_t, ext): <NEW_LINE> <INDENT> name_base = prefix + "_" + format_date(time_t) <NEW_LINE> name = name_base + "." + ext <NEW_LINE> if uos.exists(name): <NEW_LINE> <INDENT> for i in range(1, 9999): <NEW_LINE> <INDENT> name = "{}_x{}.{}".format(name_base, i, ext) <NEW_LINE> if not uos.exists(name): <NEW_LINE> <INDENT> break <NEW_LINE> <DEDENT> <DEDENT> <DEDENT> return name
Creates a unique file name using this pattern: prefix_YYYY_MM_DD_HH_MM_SS.ext if such a file already exists, appends _xN to file name above, where N is an incrementing number, until a unique file name is found :return: unique file name :rtype: str
625941bf56ac1b37e626411f
def p_array_operator_insides_bounded(p): <NEW_LINE> <INDENT> p[0] = ('sublist-stepped', {'start':p[1], 'end':p[3], 'step':KS_Null()})
array_operator_insides_bounded : expression COLON expression
625941bf97e22403b379cee4
def tokens(fn, opts): <NEW_LINE> <INDENT> with open(fn) as f: <NEW_LINE> <INDENT> for x in runac.lex(f.read()): <NEW_LINE> <INDENT> print(x.name, x.value, (x.source_pos.lineno, x.source_pos.colno))
Print a list of tokens and location info
625941bfd18da76e2353241e
def extract(self): <NEW_LINE> <INDENT> pass
Extract the content from the HTML document or from the URL
625941bf7d43ff24873a2bea
def get(self, spell_id: str): <NEW_LINE> <INDENT> return self.spells[spell_id]
This function retrieves a spell by its id :param spell_id: -> Spell identifier :return: -> The spell
625941bf44b2445a33931fe2
def __repr__(self): <NEW_LINE> <INDENT> return("{}('{}, {}, {}, {}, {}')".format(self.__class__.__name__, self.filename, self.enterbox, self.msgbox, self.re, self.json))
Returns representation of the object
625941bf498bea3a759b99fb
def logging_wrapper(original_function): <NEW_LINE> <INDENT> @wraps(original_function) <NEW_LINE> def wrapper(*args, **kwargs): <NEW_LINE> <INDENT> logging.basicConfig(filename='{}.log'.format(original_function.__name__), format='%(asctime)s %(levelname)-8s %(message)s', datefmt='%a, %d %b %Y %H:%M:%S', level=logging.INFO) <NEW_LINE> logging.info( 'Ran with args: {}, and kwargs: {}'.format(args, kwargs)) <NEW_LINE> return original_function(*args, **kwargs) <NEW_LINE> <DEDENT> return wrapper
:type original_function: object :rtype: object
625941bf10dbd63aa1bd2af1
def get_oligo_min_dist(self): <NEW_LINE> <INDENT> return self.config.getint("OLIGOS", "min_dist")
Reads consecutive oligo minimum distance from Database .config
625941bf23849d37ff7b2fdc
def rob(self, root): <NEW_LINE> <INDENT> def loot(root, hm): <NEW_LINE> <INDENT> left, right, total = 0, 0, 0 <NEW_LINE> if not root: <NEW_LINE> <INDENT> return 0 <NEW_LINE> <DEDENT> elif root in hm: <NEW_LINE> <INDENT> return hm[root] <NEW_LINE> <DEDENT> robbed = loot(root.left, hm) + loot(root.right, hm) <NEW_LINE> if root.left: <NEW_LINE> <INDENT> left = loot(root.left.left, hm) + loot(root.left.right, hm) <NEW_LINE> <DEDENT> if root.right: <NEW_LINE> <INDENT> right = loot(root.right.left, hm) + loot(root.right.right, hm) <NEW_LINE> <DEDENT> notrobbed = left + right + root.val <NEW_LINE> total = max(robbed, notrobbed) <NEW_LINE> hm[root] = total <NEW_LINE> return total <NEW_LINE> <DEDENT> return loot(root, {})
:type root: TreeNode :rtype: int
625941bf566aa707497f44b8
def visualize_grid(Xs, ubound=255.0, padding=1): <NEW_LINE> <INDENT> pixel_sz = 2 <NEW_LINE> (H, W, C, N) = Xs.shape <NEW_LINE> Xs_resize = np.zeros((H*pixel_sz, W*pixel_sz, C, N)) <NEW_LINE> Xs = (ubound*(Xs-np.min(Xs))/(np.max(Xs)-np.min(Xs))).astype('uint8') <NEW_LINE> for c in range(C): <NEW_LINE> <INDENT> for n in range(N): <NEW_LINE> <INDENT> Xs_resize[:,:,c,n] = imresize(Xs[:,:,c,n], 200, interp='nearest') <NEW_LINE> <DEDENT> <DEDENT> Xs = Xs_resize <NEW_LINE> (H, W, C, N) = Xs.shape <NEW_LINE> low, high = np.min(Xs), np.max(Xs) <NEW_LINE> if C==1 or C==3: <NEW_LINE> <INDENT> grid_size_H = int(ceil(sqrt(N))) <NEW_LINE> grid_size_W = int(ceil(sqrt(N))) <NEW_LINE> <DEDENT> else: <NEW_LINE> <INDENT> grid_size_H = N <NEW_LINE> grid_size_W = C <NEW_LINE> <DEDENT> count = 0 <NEW_LINE> grid_height = H * grid_size_H + padding * (grid_size_H-1) <NEW_LINE> grid_width = W * grid_size_W + padding * (grid_size_W-1) <NEW_LINE> grid = np.zeros((grid_height, grid_width, C)) <NEW_LINE> y0, y1 = 0, H <NEW_LINE> for y in range(grid_size_H): <NEW_LINE> <INDENT> x0, x1 = 0, W <NEW_LINE> for x in range(grid_size_W): <NEW_LINE> <INDENT> if C==1 or C==3: <NEW_LINE> <INDENT> img = Xs[:,:,:,count] <NEW_LINE> count += 1 <NEW_LINE> <DEDENT> else: <NEW_LINE> <INDENT> img = np.expand_dims(Xs[:,:,x,y], axis=-1) <NEW_LINE> <DEDENT> grid[y0:y1, x0:x1, :] = ubound * (img - low) / (high - low) <NEW_LINE> x0 += W + padding <NEW_LINE> x1 += W + padding <NEW_LINE> <DEDENT> y0 += H + padding <NEW_LINE> y1 += H + padding <NEW_LINE> <DEDENT> if C!=3: <NEW_LINE> <INDENT> grid = grid[:,:,0] <NEW_LINE> <DEDENT> return grid
Reshape a 4D tensor of image data to a grid for easy visualization. Inputs: - Xs: Data of shape (H, W, C, N) - ubound: Output grid will have values scaled to the range [0, ubound] - padding: The number of blank pixels between elements of the grid
625941bf07d97122c41787d2
def assertContainsIgnoreWhitespace(self, response, text, count=None, status_code=200, msg_prefix=''): <NEW_LINE> <INDENT> if msg_prefix: <NEW_LINE> <INDENT> msg_prefix += ": " <NEW_LINE> <DEDENT> self.assertEqual(response.status_code, status_code, msg_prefix + "Couldn't retrieve content: Response code was %d" " (expected %d)" % (response.status_code, status_code)) <NEW_LINE> text = smart_str(text, response._charset) <NEW_LINE> real_count = WHITESPACE_RE.sub("", response.content).count(WHITESPACE_RE.sub("", text)) <NEW_LINE> if count is not None: <NEW_LINE> <INDENT> self.assertEqual(real_count, count, msg_prefix + "Found %d instances of '%s' in response" " (expected %d)" % (real_count, text, count)) <NEW_LINE> <DEDENT> else: <NEW_LINE> <INDENT> self.assertTrue(real_count != 0, msg_prefix + "Couldn't find '%s' in response" % text)
Asserts that a response indicates that some content was retrieved successfully, (i.e., the HTTP status code was as expected), and that ``text`` occurs ``count`` times in the content of the response. If ``count`` is None, the count doesn't matter - the assertion is true if the text occurs at least once in the response.
625941bf656771135c3eb7b8
def update(self, params, hide=False): <NEW_LINE> <INDENT> for key,value in params.iteritems(): <NEW_LINE> <INDENT> if isinstance(value, Exception): <NEW_LINE> <INDENT> self.__errors.append( unicode(value) ) <NEW_LINE> <DEDENT> elif isinstance(key, Parameter): <NEW_LINE> <INDENT> self[key.alias] = value <NEW_LINE> self.validated[key.name] = value if is_serializable(value) else unicode(value) <NEW_LINE> <DEDENT> else: <NEW_LINE> <INDENT> self[key] = value <NEW_LINE> if not hide: <NEW_LINE> <INDENT> self.validated[key] = value if is_serializable(value) else unicode(value)
Updates parameters
625941bfbde94217f3682d3f
def update(self): <NEW_LINE> <INDENT> if not self.started:return <NEW_LINE> if time.time()-self.lastupdate>=self.animation_delay/1000.0: <NEW_LINE> <INDENT> self.lastupdate=time.time() <NEW_LINE> self.next()
Update animation state
625941bf4c3428357757c276
def internal_callback(name): <NEW_LINE> <INDENT> mdns = listener.services[name] <NEW_LINE> _discover_chromecast(hass, ChromecastInfo(*mdns))
Called when zeroconf has discovered a new chromecast.
625941bf26068e7796caec26
def list_branches(cwd, remote=False, user=None, ignore_retcode=False): <NEW_LINE> <INDENT> cwd = _expand_path(cwd, user) <NEW_LINE> command = ['git', 'for-each-ref', '--format', '%(refname:short)', 'refs/{0}/'.format('heads' if not remote else 'remotes')] <NEW_LINE> return _git_run(command, cwd=cwd, runas=user, ignore_retcode=ignore_retcode)['stdout'].splitlines()
.. versionadded:: 2015.8.0 Return a list of branches cwd The path to the git checkout remote : False If ``True``, list remote branches. Otherwise, local branches will be listed. .. warning:: This option will only return remote branches of which the local checkout is aware, use :py:func:`git.fetch <salt.modules.git.fetch>` to update remotes. user User under which to run the git command. By default, the command is run by the user under which the minion is running. ignore_retcode : False If ``True``, do not log an error to the minion log if the git command returns a nonzero exit status. .. versionadded:: 2015.8.0 CLI Examples: .. code-block:: bash salt myminion git.list_branches /path/to/repo salt myminion git.list_branches /path/to/repo remote=True
625941bfde87d2750b85fcdc
def htmlescape(text): <NEW_LINE> <INDENT> return text.replace('&','&amp;').replace('<','&lt;').replace('>','&gt;')
Escape special HTML characters, namely &, <, >.
625941bf2eb69b55b151c7f8
def get_node_uuid(self): <NEW_LINE> <INDENT> if self.node is None or 'uuid' not in self.node: <NEW_LINE> <INDENT> raise errors.UnknownNodeError() <NEW_LINE> <DEDENT> return self.node['uuid']
Get UUID for Ironic node. If the agent has not yet heartbeated to Ironic, it will not have the UUID and this will raise an exception. :returns: A string containing the UUID for the Ironic node. :raises: UnknownNodeError if UUID is unknown.
625941bf9c8ee82313fbb6c0
def merge(self, nums1: List[int], m: int, nums2: List[int], n: int) -> None: <NEW_LINE> <INDENT> i = len(nums1)-1 <NEW_LINE> while m and n: <NEW_LINE> <INDENT> if nums2[n-1] > nums1[m-1]: <NEW_LINE> <INDENT> nums1[i] = nums2[n-1] <NEW_LINE> n -= 1 <NEW_LINE> i -= 1 <NEW_LINE> <DEDENT> else: <NEW_LINE> <INDENT> nums1[i] = nums1[m-1] <NEW_LINE> m -= 1 <NEW_LINE> i -= 1 <NEW_LINE> <DEDENT> <DEDENT> if n > 0: <NEW_LINE> <INDENT> nums1[:i+1] = nums2[:n]
Do not return anything, modify nums1 in-place instead.
625941bff7d966606f6a9f4d
def class_score(self, sents, labels): <NEW_LINE> <INDENT> self.C.train(mode=False) <NEW_LINE> self.Emb.train(mode=False) <NEW_LINE> with torch.no_grad(): <NEW_LINE> <INDENT> _size = 0 <NEW_LINE> _batch = [] <NEW_LINE> preds = [] <NEW_LINE> for sent in sents: <NEW_LINE> <INDENT> _size += 1 <NEW_LINE> l = len(sent) <NEW_LINE> if l > self.max_len: <NEW_LINE> <INDENT> sent = sent[:self.max_len] <NEW_LINE> <DEDENT> sent_id = [self.vocab.word2id[w] for w in sent] <NEW_LINE> padding = [self.vocab.word2id['<pad>']] * (self.max_len - l) <NEW_LINE> bare = gpu_wrapper(torch.LongTensor(sent_id + padding)) <NEW_LINE> _batch.append(bare) <NEW_LINE> if _size == self.batch_size: <NEW_LINE> <INDENT> _size = 0 <NEW_LINE> batch = torch.stack(_batch, dim=0) <NEW_LINE> emb = self.Emb(batch) <NEW_LINE> cls = self.C(emb).squeeze(1) <NEW_LINE> pred = (cls > 0.5).float() <NEW_LINE> preds.append(pred) <NEW_LINE> _batch = [] <NEW_LINE> <DEDENT> <DEDENT> if _size != 0: <NEW_LINE> <INDENT> batch = torch.stack(_batch, dim=0) <NEW_LINE> emb = self.Emb(batch) <NEW_LINE> cls = self.C(emb).squeeze(1) <NEW_LINE> pred = (cls > 0.5).float() <NEW_LINE> preds.append(pred) <NEW_LINE> <DEDENT> preds = torch.cat(preds, dim=0) <NEW_LINE> labels = gpu_wrapper(torch.tensor(np.array(labels, dtype=np.float32))) <NEW_LINE> assert preds.shape[0] == labels.shape[0] <NEW_LINE> n_wrong = torch.abs(preds - labels).sum().item() <NEW_LINE> n_all = preds.shape[0] <NEW_LINE> <DEDENT> self.C.train(mode=True) <NEW_LINE> self.Emb.train(mode=True) <NEW_LINE> return (n_all - n_wrong) / n_all
:param sents: [[str x T] x N] :param labels: [int x N] :return: float, accuracy of classification.
625941bf4a966d76dd550f59
def findComplement(self, num): <NEW_LINE> <INDENT> ret = 0 <NEW_LINE> i = 0 <NEW_LINE> while num > 0: <NEW_LINE> <INDENT> if num & 1 == 0: <NEW_LINE> <INDENT> ret |= 1<<i <NEW_LINE> <DEDENT> num = num >> 1 <NEW_LINE> i += 1 <NEW_LINE> <DEDENT> return ret
:type num: int :rtype: int
625941bf004d5f362079a281
def arm_and_takeoff(vehicle, aTargetAltitude): <NEW_LINE> <INDENT> print("Basic pre-arm checks") <NEW_LINE> while not vehicle.is_armable: <NEW_LINE> <INDENT> print(" Waiting for vehicle to initialise...") <NEW_LINE> time.sleep(1) <NEW_LINE> <DEDENT> print("Arming motors") <NEW_LINE> vehicle.mode = VehicleMode("GUIDED") <NEW_LINE> while not vehicle.armed: <NEW_LINE> <INDENT> vehicle.armed = True <NEW_LINE> print(" Waiting for arming...") <NEW_LINE> time.sleep(1) <NEW_LINE> <DEDENT> print("Taking off!") <NEW_LINE> vehicle.simple_takeoff(aTargetAltitude) <NEW_LINE> while True: <NEW_LINE> <INDENT> print(" Altitude: ", vehicle.location.global_relative_frame.alt) <NEW_LINE> if vehicle.location.global_relative_frame.alt >= aTargetAltitude * 0.95: <NEW_LINE> <INDENT> print("Reached target altitude") <NEW_LINE> break <NEW_LINE> <DEDENT> time.sleep(1)
Arms vehicle and fly to aTargetAltitude.
625941bf0383005118ecf530
def extract(self, raw_str, label_names): <NEW_LINE> <INDENT> return self._extract(raw_str, label_names)
@param raw_str: extract label name from this str @return label name extracted from this string or None if no such label in this string
625941bfa8370b77170527ec
def get_queryset(self): <NEW_LINE> <INDENT> process_class = self.flow_class.process_class <NEW_LINE> queryset = process_class.objects.filter(flow_class=self.flow_class) <NEW_LINE> ordering = self.get_ordering() <NEW_LINE> if ordering: <NEW_LINE> <INDENT> if isinstance(ordering, six.string_types): <NEW_LINE> <INDENT> ordering = (ordering,) <NEW_LINE> <DEDENT> queryset = queryset.order_by(*ordering) <NEW_LINE> <DEDENT> return queryset
Filtered process list.
625941bf57b8e32f524833e5
def add_to_cat_lovers_list(user, cat_lovers): <NEW_LINE> <INDENT> if user not in cat_lovers: <NEW_LINE> <INDENT> cat_lovers.append(user)
Adds the user to the list of cat lovers who will be provided interesting facts about cats
625941bf76e4537e8c3515bd
def mean(a, m): <NEW_LINE> <INDENT> if np.shape(m)!=np.shape(a): <NEW_LINE> <INDENT> raise RuntimeError('The image and mask do not have the same shape') <NEW_LINE> <DEDENT> return(np.mean(a[np.where(m > 0)]))
masked mean Parameters ---------- a : ndarray a grayscale image m : ndarray a binary image Returns ------- mean of the grayscale image where the binary image is greater than 0 : float Raises ------ RuntimeError if the image and mask do not have the same shape
625941bf07f4c71912b113cc
def tex_from_skew_array(array, with_lines=False, align='b'): <NEW_LINE> <INDENT> if with_lines: <NEW_LINE> <INDENT> nones=[1 if not None in row else 1+len(row)-row[::-1].index(None) for row in array] <NEW_LINE> def end_line(r): <NEW_LINE> <INDENT> if r==0: <NEW_LINE> <INDENT> return r'\cline{%s-%s}'%(nones[0],len(array[0])) <NEW_LINE> <DEDENT> elif r==len(array): <NEW_LINE> <INDENT> start=nones[r-1] <NEW_LINE> finish=len(array[r-1]) <NEW_LINE> <DEDENT> else: <NEW_LINE> <INDENT> start=min(nones[r], nones[r-1]) <NEW_LINE> finish=max(len(array[r]), len(array[r-1])) <NEW_LINE> <DEDENT> return r'\\' if start>finish else r'\\\cline{%s-%s}'%(start, finish) <NEW_LINE> <DEDENT> <DEDENT> else: <NEW_LINE> <INDENT> end_line=lambda r: r'\\' <NEW_LINE> <DEDENT> tex=r'\raisebox{-.6ex}{$\begin{array}[%s]{*{%s}c}'%(align,max(map(len,array))) <NEW_LINE> tex+=end_line(0)+'\n' <NEW_LINE> for r in range(len(array)): <NEW_LINE> <INDENT> tex+='&'.join('' if c is None else r'\lr{%s}'%c for c in array[r]) <NEW_LINE> tex+=end_line(r+1)+'\n' <NEW_LINE> <DEDENT> return tex+r'\end{array}$}'
This function creates latex code for a "skew composition" ``array``. That is, for a two dimensional array in which each row can begin with an arbitrary number ``None``'s and the remaining entries could, in principle, be anything but probably should be strings or integers of similar width. A row consisting completely of ``None``'s is allowed. INPUT: - ``array`` -- The array - ``with_lines`` -- (Default: ``False``) If ``True`` lines are drawn, if ``False`` they are not - ``align`` -- (Default: ``'b'``) Determines the alignment on the latex array environments EXAMPLES:: sage: array=[[None, 2,3,4],[None,None],[5,6,7,8]] sage: print(sage.combinat.output.tex_from_skew_array(array)) \raisebox{-.6ex}{$\begin{array}[b]{*{4}c}\\ &\lr{2}&\lr{3}&\lr{4}\\ &\\ \lr{5}&\lr{6}&\lr{7}&\lr{8}\\ \end{array}$}
625941bf55399d3f055885ff
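A minimal standalone sketch of the routine above (only the `with_lines=False` path, no Sage imports; the `\lr` macro is assumed to be defined in the surrounding LaTeX preamble), reproducing the docstring example:

```python
def tex_from_skew_array(array, align='b'):
    """Render a skew array (rows may start with None padding) as a LaTeX array.

    Sketch of the with_lines=False path only; assumes a \\lr macro exists
    in the surrounding LaTeX document.
    """
    ncols = max(map(len, array))
    tex = r'\raisebox{-.6ex}{$\begin{array}[%s]{*{%s}c}' % (align, ncols)
    tex += r'\\' + '\n'
    for row in array:
        # None cells render as empty columns; real cells are wrapped in \lr{}
        tex += '&'.join('' if c is None else r'\lr{%s}' % c for c in row)
        tex += r'\\' + '\n'
    return tex + r'\end{array}$}'

out = tex_from_skew_array([[None, 2, 3, 4], [None, None], [5, 6, 7, 8]])
```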
def refresh(self): <NEW_LINE> <INDENT> self._prefix_cbx.setChecked(self._model.prefix_check) <NEW_LINE> self._prefix_line.setText(self._model.prefix) <NEW_LINE> self._remove_first_cbx.setChecked(self._model.remove_first) <NEW_LINE> self._remove_first_spn.setValue(self._model.remove_first_value) <NEW_LINE> self._suffix_cbx.setChecked(self._model.suffix_check) <NEW_LINE> self._suffix_line.setText(self._model.suffix) <NEW_LINE> self._remove_last_cbx.setChecked(self._model.remove_last) <NEW_LINE> self._remove_last_spn.setValue(self._model.remove_last_value) <NEW_LINE> suffixes = self._model.suffixes <NEW_LINE> self._prefix_combo.clear() <NEW_LINE> self._suffix_combo.clear() <NEW_LINE> self._prefix_combo.setVisible(bool(suffixes)) <NEW_LINE> self._suffix_combo.setVisible(bool(suffixes)) <NEW_LINE> if suffixes: <NEW_LINE> <INDENT> self._prefix_combo.addItem('Select prefix ...') <NEW_LINE> self._suffix_combo.addItem('Select suffix ...') <NEW_LINE> format_items = ['{}: "{}"'.format(list(suffix.keys())[0], list(suffix.values())[0]) for suffix in suffixes] <NEW_LINE> for i, item in enumerate(format_items): <NEW_LINE> <INDENT> item_index = i + 1 <NEW_LINE> self._prefix_combo.addItem(item) <NEW_LINE> self._suffix_combo.addItem(item) <NEW_LINE> self._prefix_combo.setItemData(item_index, list(suffixes[i].values())[0]) <NEW_LINE> self._suffix_combo.setItemData(item_index, list(suffixes[i].values())[0])
Syncs view to the current state of its model
625941bf3539df3088e2e297
def pass_strength_checker(text): <NEW_LINE> <INDENT> if pass_length_regex.search(text) is None: <NEW_LINE> <INDENT> return False <NEW_LINE> <DEDENT> if pass_upper_regex.search(text) is None: <NEW_LINE> <INDENT> return False <NEW_LINE> <DEDENT> if pass_lower_regex.search(text) is None: <NEW_LINE> <INDENT> return False <NEW_LINE> <DEDENT> if pass_digit_regex.search(text) is None: <NEW_LINE> <INDENT> return False <NEW_LINE> <DEDENT> else: <NEW_LINE> <INDENT> return True
Check if a password is strong.
625941bfbe8e80087fb20b92
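The checker above relies on four module-level regexes that are not shown in the record; a plausible set of definitions (the names are taken from usage, the exact patterns are an assumption) makes it runnable:

```python
import re

# Plausible definitions for the module-level patterns the checker uses;
# the originals are not shown in the record, so these are assumptions.
pass_length_regex = re.compile(r'.{8,}')   # at least 8 characters
pass_upper_regex = re.compile(r'[A-Z]')    # at least one uppercase letter
pass_lower_regex = re.compile(r'[a-z]')    # at least one lowercase letter
pass_digit_regex = re.compile(r'\d')       # at least one digit

def pass_strength_checker(text):
    """Return True only if every pattern matches somewhere in text."""
    for regex in (pass_length_regex, pass_upper_regex,
                  pass_lower_regex, pass_digit_regex):
        if regex.search(text) is None:
            return False
    return True
```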
def book_button_press(self, obj, event): <NEW_LINE> <INDENT> if event.type == Gdk.EventType._2BUTTON_PRESS and event.button == 1: <NEW_LINE> <INDENT> self.on_setup_clicked(obj) <NEW_LINE> <DEDENT> elif is_right_click(event): <NEW_LINE> <INDENT> self.build_book_context_menu(event)
Double-click on the current book selection is the same as setup. Right click evokes the context menu.
625941bf26238365f5f0edb7
def __click_init_execute(self,dep="",arr=""): <NEW_LINE> <INDENT> self.init_exe_han(dep,arr,self.tabs,self.__click_add,self.__click_del)
OK was pressed on the departure and arrival selection screen.
625941bf8c3a873295158303
def make_dirs(self): <NEW_LINE> <INDENT> self.base_dir = "data" <NEW_LINE> self.base_dir += "/"+self.cal_tel.lower() <NEW_LINE> self.base_dir += "/"+self.cal_inst.lower() <NEW_LINE> self.bcf_dir = self.base_dir+"/bcf" <NEW_LINE> if len(self.path) > 0: <NEW_LINE> <INDENT> self.base_path = self.path+"/"+self.base_dir <NEW_LINE> self.bcf_path = self.path+"/"+self.bcf_dir <NEW_LINE> <DEDENT> else: <NEW_LINE> <INDENT> self.base_path = self.base_dir <NEW_LINE> self.bcf_path = self.bcf_dir <NEW_LINE> <DEDENT> if not os.path.isdir(self.bcf_path): <NEW_LINE> <INDENT> os.makedirs(self.bcf_path) <NEW_LINE> <DEDENT> return
Generate CALDB directory structure for one observation identifier. The structure is given by data/<tel>/<inst>/bcf/ where <tel> is "cta" and <inst> is the instrument specified in the CALDB constructor (the instrument may be used for different array configurations). Parameters: None Keywords: None
625941bf60cbc95b062c648e
def read_args() -> Dict[str, str]: <NEW_LINE> <INDENT> if len(sys.argv) != 4: <NEW_LINE> <INDENT> print("Usage: python -m projects.pj01.weather [FILE] [COLUMN] [OPERATION]") <NEW_LINE> exit() <NEW_LINE> <DEDENT> return { "file": sys.argv[1], "column": sys.argv[2], "operation": sys.argv[3] }
Check for valid CLI arguments and return them in a dictionary.
625941bf4f88993c3716bfb6
def get_value(self, address): <NEW_LINE> <INDENT> if self.valid_address(address): <NEW_LINE> <INDENT> return self.segment[address] <NEW_LINE> <DEDENT> else: <NEW_LINE> <INDENT> print("The address you requested a value is not valid") <NEW_LINE> return None
Return the value stored at the given address, or None if the address is invalid.
625941bf26068e7796caec27
def proximity_matrix(rf, X, normalize=True): <NEW_LINE> <INDENT> leaves = rf.apply(X) <NEW_LINE> n_trees = leaves.shape[1] <NEW_LINE> prox_mat = np.zeros((leaves.shape[0], leaves.shape[0])) <NEW_LINE> for i in range(n_trees): <NEW_LINE> <INDENT> a = leaves[:, i] <NEW_LINE> prox_mat += 1 * np.equal.outer(a, a) <NEW_LINE> <DEDENT> if normalize: <NEW_LINE> <INDENT> prox_mat = prox_mat / n_trees <NEW_LINE> <DEDENT> return prox_mat
Calculate the random-forest proximity matrix for the samples in X. :param rf: fitted forest exposing an ``apply`` method :param X: samples to compare, shape (n_samples, n_features) :param normalize: if True, divide shared-leaf counts by the number of trees :return: proximity matrix of shape (n_samples, n_samples)
625941bf73bcbd0ca4b2bfc3
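The core of the function above is the shared-leaf count; it can be checked without a fitted forest by feeding a precomputed leaf-index matrix (the shape `rf.apply(X)` would return) straight into the same loop:

```python
import numpy as np

def proximity_from_leaves(leaves, normalize=True):
    """Proximity from a precomputed (n_samples, n_trees) leaf-index matrix,
    mirroring the loop above without needing a fitted forest."""
    n_samples, n_trees = leaves.shape
    prox = np.zeros((n_samples, n_samples))
    for i in range(n_trees):
        a = leaves[:, i]
        prox += np.equal.outer(a, a)  # 1 where two samples share a leaf
    return prox / n_trees if normalize else prox

# 3 samples, 2 trees: samples 0 and 1 share a leaf in tree 0,
# samples 1 and 2 share a leaf in tree 1
leaves = np.array([[1, 1], [1, 2], [2, 2]])
prox = proximity_from_leaves(leaves)
```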
def test_page_9(self): <NEW_LINE> <INDENT> file_name = 'tests/samples/01060115.pdf' <NEW_LINE> self._run_test(file_name, [8]) <NEW_LINE> self.assertEqual(13, len(self.device.titles)) <NEW_LINE> self.assertEqual(42, len(self.device.paragraphs)) <NEW_LINE> self.assertEqual(self.device.as_html().split('\n'), self.get_expected(file_name+'.9').split('\n'))
This document is pre-2006; it is a different version of DRE.
625941bf99cbb53fe6792b33
def _add_seq2seq(self): <NEW_LINE> <INDENT> hps = self._hps <NEW_LINE> vsize = self._vocab.size() <NEW_LINE> with tf.variable_scope('seq2seq'): <NEW_LINE> <INDENT> self.rand_unif_init = tf.random_uniform_initializer(-hps.rand_unif_init_mag, hps.rand_unif_init_mag, seed=123) <NEW_LINE> self.trunc_norm_init = tf.truncated_normal_initializer(stddev=hps.trunc_norm_init_std) <NEW_LINE> with tf.variable_scope('embedding'): <NEW_LINE> <INDENT> if FLAGS.embedding: <NEW_LINE> <INDENT> embedding = tf.Variable(self.embedding_place) <NEW_LINE> <DEDENT> else: <NEW_LINE> <INDENT> embedding = tf.get_variable('embedding', [vsize, hps.emb_dim], dtype=tf.float32, initializer=self.trunc_norm_init) <NEW_LINE> <DEDENT> if hps.mode=="train": self._add_emb_vis(embedding) <NEW_LINE> emb_enc_inputs = tf.nn.embedding_lookup(embedding, self._enc_batch) <NEW_LINE> emb_dec_inputs = [tf.nn.embedding_lookup(embedding, x) for x in tf.unstack(self._dec_batch, axis=1)] <NEW_LINE> <DEDENT> enc_outputs, fw_st, bw_st = self._add_encoder(emb_enc_inputs, self._enc_lens) <NEW_LINE> self._enc_states = enc_outputs <NEW_LINE> self._dec_in_state = self._reduce_states(fw_st, bw_st) <NEW_LINE> with tf.variable_scope('decoder'): <NEW_LINE> <INDENT> (self.decoder_outputs, self._dec_out_state, self.attn_dists, self.p_gens, self.coverage, self.vocab_scores, self.final_dists, self.samples, self.greedy_search_samples, self.temporal_es, self.sampling_rewards, self.greedy_rewards) = self._add_decoder(emb_dec_inputs, embedding) <NEW_LINE> <DEDENT> if FLAGS.use_discounted_rewards and hps.rl_training and hps.mode in ['train', 'eval']: <NEW_LINE> <INDENT> self.sampling_discounted_rewards = tf.stack(self.discount_rewards(tf.unstack(self.sampling_rewards))) <NEW_LINE> self.greedy_discounted_rewards = tf.stack(self.discount_rewards(tf.unstack(self.greedy_rewards))) <NEW_LINE> <DEDENT> elif FLAGS.use_intermediate_rewards and hps.rl_training and hps.mode in ['train', 'eval']: <NEW_LINE> <INDENT> 
self.sampling_discounted_rewards = tf.stack(self.intermediate_rewards(tf.unstack(self.sampling_rewards))) <NEW_LINE> self.greedy_discounted_rewards = tf.stack(self.intermediate_rewards(tf.unstack(self.greedy_rewards))) <NEW_LINE> <DEDENT> elif hps.ac_training and hps.mode in ['train', 'eval']: <NEW_LINE> <INDENT> self.sampled_sentences = tf.transpose(tf.stack(self.samples), perm=[1,2,0]) <NEW_LINE> self.greedy_search_sentences = tf.transpose(tf.stack(self.greedy_search_samples), perm=[1,2,0]) <NEW_LINE> <DEDENT> <DEDENT> if hps.mode == "decode": <NEW_LINE> <INDENT> assert len(self.final_dists)==1 <NEW_LINE> self.final_dists = self.final_dists[0] <NEW_LINE> topk_probs, self._topk_ids = tf.nn.top_k(self.final_dists, hps.batch_size*2) <NEW_LINE> self._topk_log_probs = tf.log(topk_probs)
Add the whole sequence-to-sequence model to the graph.
625941bf5166f23b2e1a50a5
def _compile_subroutineBody(self, tokens, subroutine_name, subroutine_type, subroutine_kind): <NEW_LINE> <INDENT> cmds = '' <NEW_LINE> token = self._checkToken(tokens, "missing '{' after the subroutine declaration", idx1=1, neq='{') <NEW_LINE> self._compile_varDec(tokens) <NEW_LINE> fct_name = self._class_name+'.'+subroutine_name <NEW_LINE> nVars = self._symbol_table.varCount('local') <NEW_LINE> cmds += self._vm_encoder.encodeFunction(fct_name, nVars) <NEW_LINE> if subroutine_kind == 'constructor': <NEW_LINE> <INDENT> number = self._symbol_table.varCount('field') <NEW_LINE> cmds += self._vm_encoder.encodePush('const', number) <NEW_LINE> cmds += self._vm_encoder.encodeCall('Memory.alloc', 1) <NEW_LINE> cmds += self._vm_encoder.encodePop('pointer', 0) <NEW_LINE> <DEDENT> elif subroutine_kind == 'method': <NEW_LINE> <INDENT> cmds += self._vm_encoder.encodePush('arg', 0) <NEW_LINE> cmds += self._vm_encoder.encodePop('pointer', 0) <NEW_LINE> <DEDENT> cmds += self._compile_statements(tokens) <NEW_LINE> token = self._checkToken(tokens, "missing '}' at the end of subroutine", idx1=1, neq='}') <NEW_LINE> return cmds
Compiles the body a subroutine. args: tokens: list of sets, the format of a set is ( tokens_kind, token, file_line ) subroutine_name: (str) identifier subroutine_type: (str) int, char, boolean, void or identifier subroutine_kind: (str) constructor, function or method ret: string of VM commands representing the subroutine's body
625941bf94891a1f4081b9f4
def serve_page(self, view, url, params): <NEW_LINE> <INDENT> domain, path = self.get_domain_and_path(url) <NEW_LINE> if domain == "new": <NEW_LINE> <INDENT> window_title = self.get_page_title(path) <NEW_LINE> compwin = ComposeWindow(window_title or "Murmeli") <NEW_LINE> compwin.set_page_server(self) <NEW_LINE> compwin.show_page("<html></html>") <NEW_LINE> compwin.navigate_to(path, params) <NEW_LINE> self.extra_windows.add(compwin) <NEW_LINE> self.extra_windows = set(win for win in self.extra_windows if win.isVisible()) <NEW_LINE> return <NEW_LINE> <DEDENT> page_set = self.page_sets.get(domain) <NEW_LINE> if not page_set: <NEW_LINE> <INDENT> page_set = self.page_sets.get("") <NEW_LINE> <DEDENT> if page_set: <NEW_LINE> <INDENT> page_set.serve_page(view, path, params)
Serve the page associated with the given url and parameters
625941bf60cbc95b062c648f
def encode_auth_token(self, user_id): <NEW_LINE> <INDENT> try: <NEW_LINE> <INDENT> payload = { 'exp': datetime.datetime.utcnow() + datetime.timedelta(days=1, seconds=0), 'iat': datetime.datetime.utcnow(), 'sub': user_id } <NEW_LINE> return jwt.encode( payload, DevelopmentConfig().SECRET_KEY, algorithm='HS256' ) <NEW_LINE> <DEDENT> except Exception as token_exception: <NEW_LINE> <INDENT> return token_exception
Generates the Auth Token :return: string
625941bf3d592f4c4ed1cfc0
def __init__(self): <NEW_LINE> <INDENT> self.db = init_db()
Initialize the user model class
625941bf851cf427c661a45e
def _repackage_scan(self, entropy_server): <NEW_LINE> <INDENT> packages = set() <NEW_LINE> spm = entropy_server.Spm() <NEW_LINE> for dep in self._repackage: <NEW_LINE> <INDENT> package_id, repository_id = entropy_server.atom_match(dep) <NEW_LINE> if package_id == -1: <NEW_LINE> <INDENT> entropy_server.output( "%s: %s" % ( darkred(_("Cannot find package")), bold(dep), ), header=darkred(" !!! "), importance=1, level="warning") <NEW_LINE> continue <NEW_LINE> <DEDENT> repo = entropy_server.open_repository(repository_id) <NEW_LINE> try: <NEW_LINE> <INDENT> spm_uid = spm.resolve_package_uid(repo, package_id) <NEW_LINE> <DEDENT> except spm.Error as err: <NEW_LINE> <INDENT> entropy_server.output( "%s: %s, %s" % ( darkred(_("Cannot find package")), bold(dep), err, ), header=darkred(" !!! "), importance=1, level="warning") <NEW_LINE> continue <NEW_LINE> <DEDENT> spm_name = spm.convert_from_entropy_package_name( repo.retrieveAtom(package_id)) <NEW_LINE> packages.add(spm_name) <NEW_LINE> <DEDENT> return packages
If in repackage mode (self._repackage not empty), scan for packages to re-package and return them.
625941bf8c3a873295158304
def histogram_plot(dataset, label): <NEW_LINE> <INDENT> hist, bins = np.histogram(dataset, bins=n_classes) <NEW_LINE> width = 0.7 * (bins[1] - bins[0]) <NEW_LINE> center = (bins[:-1] + bins[1:]) / 2 <NEW_LINE> plt.bar(center, hist, align='center', width=width) <NEW_LINE> plt.xlabel(label) <NEW_LINE> plt.ylabel("Image count") <NEW_LINE> plt.show()
Plots a histogram of the input data. Parameters: dataset: Input data to be plotted as a histogram. label: A string to be used as a label for the histogram.
625941bfa8ecb033257d301a
def rand_fliplr(*imagez): <NEW_LINE> <INDENT> return rand_apply_onebatch(np.fliplr, imagez)
Flip all the given images left-right together.
625941bf462c4b4f79d1d61c
def move_horizontally(rainbow_list): <NEW_LINE> <INDENT> start_time = time.time() <NEW_LINE> time.clock() <NEW_LINE> seconds_elapsed = 0 <NEW_LINE> while seconds_elapsed < 10: <NEW_LINE> <INDENT> for rainbow in rainbow_list: <NEW_LINE> <INDENT> seconds_elapsed = time.time() - start_time <NEW_LINE> unicornhat.set_pixels(rainbow) <NEW_LINE> unicornhat.show() <NEW_LINE> time.sleep(0.5)
Cycles through 4 rainbows to make them appear to move horizontally Parameters: rainbow_list: a list containing 4 horizontal rainbows that will ripple Programs that use this function: - Moving Horizontal Rainbow 1: passes in mh_rainbows_1 - Moving Horizontal Rainbow 2: passes in mh_rainbows_2
625941bfbf627c535bc1311a
def state(self, timer): <NEW_LINE> <INDENT> if self.ptr == None: <NEW_LINE> <INDENT> raise Exception("self is disposed") <NEW_LINE> <DEDENT> if timer.ptr == None: <NEW_LINE> <INDENT> raise Exception("timer is disposed") <NEW_LINE> <DEDENT> result = TotalPlaytimeComponentState(livesplit_core_native.TotalPlaytimeComponent_state(self.ptr, timer.ptr)) <NEW_LINE> return result
Calculates the component's state based on the timer provided.
625941bf442bda511e8be368
def set_wm_strut_partial(window, left, right, top, bottom, left_start_y, left_end_y, right_start_y, right_end_y, top_start_x, top_end_x, bottom_start_x, bottom_end_x): <NEW_LINE> <INDENT> packed = struct.pack('I' * 12, left, right, top, bottom, left_start_y, left_end_y, right_start_y, right_end_y, top_start_x, top_end_x, bottom_start_x, bottom_end_x) <NEW_LINE> return c.core.ChangeProperty(xcb.xproto.PropMode.Replace, window, atom('_NET_WM_STRUT_PARTIAL'), CARDINAL, 32, 12, packed)
Sets the partial struts for a window. :param window: A window identifier. :param left: Width of area at left side of screen. :type left: CARDINAL/32 :param right: Width of area at right side of screen. :type right: CARDINAL/32 :param top: Height of area at top side of screen. :type top: CARDINAL/32 :param bottom: Height of area at bottom side of screen. :type bottom: CARDINAL/32 :param left_start_y: Starting y coordinate of the left strut. :param left_end_y: Ending y coordinate of the left strut. :param right_start_y: Starting y coordinate of the right strut. :param right_end_y: Ending y coordinate of the right strut. :param top_start_x: Starting x coordinate of the top strut. :param top_end_x: Ending x coordinate of the top strut. :param bottom_start_x: Starting x coordinate of the bottom strut. :param bottom_end_x: Ending x coordinate of the bottom strut. :rtype: xcb.VoidCookie
625941bfb545ff76a8913d62
def handle_accesslevel(self, args): <NEW_LINE> <INDENT> if len(self.components) or len(self.current_sections): <NEW_LINE> <INDENT> self.error('Access levels need to be specified before any parameters or sections are defined') <NEW_LINE> <DEDENT> self.accesslevels.append({ 'name': args['name'], 'label': args['label'], }) <NEW_LINE> self.accesslevel_names.append(args['name']) <NEW_LINE> self.printv('Added accesslevel \'' + args['name'] + '\', labeled \'' + args['label'] + '\'')
Add an access level. Access levels must be specified in order from easiest to most complex, before any sections or parameters are defined.
625941bfcad5886f8bd26f26
@with_setup(setup, teardown) <NEW_LINE> def test_value(): <NEW_LINE> <INDENT> count(111, 1, 'A', 'B') <NEW_LINE> count(222, 1, 'A', 'B') <NEW_LINE> assert count.value('Subject', 111) == 1 <NEW_LINE> assert count.value('Subject', 222) == 1 <NEW_LINE> assert count.value('Session', 1) == 2 <NEW_LINE> assert count.value('Alpha', 'A') == 2 <NEW_LINE> assert count.value('Beta', 'B') == 2
Test value method
625941bf5e10d32532c5ee73
def test_commentability(self): <NEW_LINE> <INDENT> self.assertIsNotNone(self.comment1, "this comment should be created") <NEW_LINE> self.assertIsNotNone(self.comment2, "this comment should be created") <NEW_LINE> self.assertIsNotNone(self.comment3, "this comment should be created") <NEW_LINE> self.assertIsNotNone(self.comment4, "this comment should be created") <NEW_LINE> self.assertIsNotNone(self.comment6, "this comment should be created") <NEW_LINE> self.assertIsNone(self.comment5, "this comment should be none") <NEW_LINE> self.assertIsNone(self.comment7, "this comment should be none")
Test whether the comments were created or not, as expected.
625941bf4428ac0f6e5ba73e
def c3_hbar(self): <NEW_LINE> <INDENT> from random import randint <NEW_LINE> from zoom.vis.c3 import hbar <NEW_LINE> page_title = 'C3 Horizontal Bar Chart' <NEW_LINE> xaxis_label = 'Month' <NEW_LINE> legend = 'North', 'South' <NEW_LINE> labels = ( 'Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec' ) <NEW_LINE> data = [(m, randint(1, 100), randint(1, 100)) for m in labels] <NEW_LINE> visualization = hbar(data, legend=legend, title='Page Hits by Month') <NEW_LINE> return locals()
c3 Horizontal Bar Example showing how to generate a horizontal bar chart using c3 module.
625941bf66673b3332b91fdd
def test_run_hooks_and_send_no_hooks(self): <NEW_LINE> <INDENT> m_client = Mock() <NEW_LINE> tracer = SynchronousTracer(m_client) <NEW_LINE> m_span = Mock() <NEW_LINE> with patch('beeline.trace._should_sample') as m_sample_fn: <NEW_LINE> <INDENT> m_sample_fn.return_value = True <NEW_LINE> tracer._run_hooks_and_send(m_span) <NEW_LINE> <DEDENT> m_span.event.send.assert_not_called() <NEW_LINE> m_span.event.send_presampled.assert_called_once_with()
Ensure the presampled send path is used and plain send is skipped when no hooks are defined.
625941bf9f2886367277a7db
def kernel_smooth(axis, mus, params, aggr=NP.add, kernel=gaussian_pdf): <NEW_LINE> <INDENT> vec = NP.zeros_like(axis) <NEW_LINE> for x in mus: <NEW_LINE> <INDENT> aggr(kernel(axis, x, params), vec, vec) <NEW_LINE> <DEDENT> return vec
Accumulate a kernel distribution centred at each mu. Arguments: mus -- a set of mus params -- the parameters for kernels, e.g., sigma for Gaussian axis -- a vector of x coordinates each element of which is a sample point aggr -- the aggregation method to be used for the set of Gaussian distributions. Default=NP.add kernel -- name of kernel distribution to use for smoothing
625941bfb545ff76a8913d63
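The `aggr(kernel(...), vec, vec)` call above uses the ufunc `out` argument to accumulate in place. A runnable sketch with a Gaussian kernel (the `gaussian_pdf` definition is not in the record and is an assumption, matching the standard normal density):

```python
import numpy as np

def gaussian_pdf(axis, mu, sigma):
    """Gaussian density on axis, centred at mu; sigma plays the role of params."""
    return np.exp(-0.5 * ((axis - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def kernel_smooth(axis, mus, params, aggr=np.add, kernel=gaussian_pdf):
    vec = np.zeros_like(axis)
    for x in mus:
        aggr(kernel(axis, x, params), vec, vec)  # in-place: aggr(a, vec, out=vec)
    return vec

axis = np.linspace(-5.0, 5.0, 1001)
smoothed = kernel_smooth(axis, [-1.0, 1.0], 0.5)
```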
def original_url(open_file): <NEW_LINE> <INDENT> match = re.search('<link rel="original" href="([^"]+)">', open_file.read()) <NEW_LINE> if not match: <NEW_LINE> <INDENT> return '' <NEW_LINE> <DEDENT> return match.group(1)
Return the original URL that FathomFox embedded in a given sample.
625941bf8e7ae83300e4af18
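A condensed restatement of `original_url` for a quick check, using `io.StringIO` in place of a file handle (the sample URL is illustrative):

```python
import io
import re

def original_url(open_file):
    """Return the URL from the first <link rel="original"> tag, or ''."""
    match = re.search('<link rel="original" href="([^"]+)">', open_file.read())
    return match.group(1) if match else ''

page = io.StringIO('<head><link rel="original" href="https://example.com/page"></head>')
```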
def get_yesterdays_date(): <NEW_LINE> <INDENT> return datetime.strftime(datetime.now() - timedelta(1), '%Y%m%d')
Generates a date string to get info for yesterdays NBA games using python datetime library. :return: Returns string in format: YYYYMMDD to be used in HTTP Requests - Brandan Quinn 2/2/19 11:36am
625941bfac7a0e7691ed401d
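The date arithmetic above in runnable form; `timedelta(1)` subtracts one day and `'%Y%m%d'` yields the eight-digit stamp the NBA API call expects:

```python
from datetime import datetime, timedelta

def get_yesterdays_date():
    """Yesterday's date as a YYYYMMDD string (local time)."""
    return datetime.strftime(datetime.now() - timedelta(1), '%Y%m%d')

stamp = get_yesterdays_date()
```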
def erdosRenyi(V, p, s, d): <NEW_LINE> <INDENT> G = [[] for n in range(V)] <NEW_LINE> GA = [] <NEW_LINE> rdm = random <NEW_LINE> rdm.seed(s) <NEW_LINE> for u in range(V): <NEW_LINE> <INDENT> for v in range(u, V): <NEW_LINE> <INDENT> if u != v: <NEW_LINE> <INDENT> odds = rdm.random() <NEW_LINE> if odds <= p: <NEW_LINE> <INDENT> G[u].append(v) <NEW_LINE> GA.append((u,v)) <NEW_LINE> if d: <NEW_LINE> <INDENT> G[v].append(u) <NEW_LINE> GA.append((v,u)) <NEW_LINE> <DEDENT> <DEDENT> <DEDENT> <DEDENT> <DEDENT> return G, GA
V: # of nodes p: probability of generating an edge s: seed d: Undirected if True else Directed
625941bf091ae35668666eaf
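A tidied sketch of the generator above (the inner loop starts at `u + 1`, which makes the explicit self-loop check unnecessary; `random.Random(s)` replaces seeding the module). With `d=True` the original stores each edge in both directions, which the test below checks:

```python
import random

def erdos_renyi(V, p, s, undirected):
    """G(V, p) sampler mirroring the function above: returns adjacency
    lists G and an edge list GA; undirected=True stores both directions."""
    G = [[] for _ in range(V)]
    GA = []
    rng = random.Random(s)
    for u in range(V):
        for v in range(u + 1, V):
            if rng.random() <= p:
                G[u].append(v)
                GA.append((u, v))
                if undirected:
                    G[v].append(u)
                    GA.append((v, u))
    return G, GA

G, GA = erdos_renyi(12, 0.4, 1, True)
```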
def test_code_cb_61(gb): <NEW_LINE> <INDENT> gb.cpu.register.C = 0b00010000 <NEW_LINE> cycles = op.code_cb_61(gb) <NEW_LINE> assert cycles == 8 <NEW_LINE> assert_registers(gb, C=0b00010000, F=0b00100000) <NEW_LINE> gb.cpu.register.C = 0b11101111 <NEW_LINE> cycles = op.code_cb_61(gb) <NEW_LINE> assert cycles == 8 <NEW_LINE> assert_registers(gb, C=0b11101111, F=0b10100000) <NEW_LINE> assert_memory(gb)
BIT 4,C - Test what is the value of bit 4
625941bf7cff6e4e811178d2
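The flag values asserted in the test above follow the usual Game Boy `BIT b,r` semantics: Z is set when the tested bit is 0, N is reset, H is set, and C is preserved. A small sketch of that flag computation (the flag bit positions assume the standard Z/N/H/C layout in the high nibble of F):

```python
Z_FLAG, N_FLAG, H_FLAG, C_FLAG = 0x80, 0x40, 0x20, 0x10

def bit_test_flags(value, bit, old_f=0x00):
    """F register after BIT bit,value: Z if the bit is clear,
    N reset, H set, C carried over from the previous flags."""
    f = H_FLAG | (old_f & C_FLAG)
    if not (value >> bit) & 1:
        f |= Z_FLAG
    return f
```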
def __init__( self, nart, pstate ): <NEW_LINE> <INDENT> self.adv = pstate.adv <NEW_LINE> self.rank = pstate.rank or self.rank <NEW_LINE> self.elements = pstate.elements.copy() <NEW_LINE> self.subplots = dict() <NEW_LINE> self.memo = None <NEW_LINE> self.__class__._used += 1 <NEW_LINE> self.move_records = list() <NEW_LINE> self.indie_plots = list() <NEW_LINE> self._temp_scenes = list() <NEW_LINE> self.locked_elements = set() <NEW_LINE> self.call_on_end = list() <NEW_LINE> self._rumor_memo_delivered = False <NEW_LINE> allok = self.custom_init( nart ) <NEW_LINE> if not allok: <NEW_LINE> <INDENT> self.fail(nart) <NEW_LINE> <DEDENT> elif self.UNIQUE: <NEW_LINE> <INDENT> nart.camp.uniques.add(self.__class__)
Initialize + install this plot, or raise PlotError
625941bff548e778e58cd4c9
def test_bin_data(): <NEW_LINE> <INDENT> data = [ (1.1, 2), (1.2, 3), (1.3, 4), (1.4, 6), (1.5, 8), (1.6, 9), ] <NEW_LINE> result = bin_data(data, 'mean', 3) <NEW_LINE> assert [2.5, 5.0, 8.5] == [v[1] for v in result]
Test binning of sensor data.
625941bf507cdc57c6306c22
def save_log_entry(self, desired_antenna=None, desired_chipid=None, message=None, message_timestamp=None): <NEW_LINE> <INDENT> self.logger.info('Log entry received at %s for station:%s, Ant#:%s, chipid=%s: %s' % (message_timestamp, self.station_id, desired_antenna, desired_chipid, message)) <NEW_LINE> return True
Dummy function to write a log entry for the given antenna, chipid, and station. In reality, these log entries would be in a database. :param desired_antenna: integer: Specifies a single physical antenna (1-256), or 0/None :param desired_chipid: bytes: Specifies a single physical SMARTbox or FNDH unique serial number, or None. :param message: string: Message text :param message_timestamp: integer: Unix timestamp :return: True for success, False for failure
625941bfc4546d3d9de7297e
def get_object_subset(self, ctx, sub_object_ids): <NEW_LINE> <INDENT> ws = workspaceService(self.workspaceURL, token=ctx["token"]) <NEW_LINE> data = ws.get_object_subset(sub_object_ids) <NEW_LINE> if not isinstance(data, list): <NEW_LINE> <INDENT> raise ValueError('Method get_object_subset return value ' + 'data is not type list as required.') <NEW_LINE> <DEDENT> return [data]
:param sub_object_ids: instance of list of type "SubObjectIdentity" -> structure: parameter "ref" of String, parameter "included" of list of String :returns: instance of list of type "ObjectData" -> structure: parameter "data" of unspecified object
625941bf24f1403a92600ab5
def train(self, data_iterable, epochs=None, total_examples=None, total_words=None, queue_factor=2, report_delay=1.0, callbacks=(), **kwargs): <NEW_LINE> <INDENT> self._set_train_params(**kwargs) <NEW_LINE> if callbacks: <NEW_LINE> <INDENT> self.callbacks = callbacks <NEW_LINE> <DEDENT> self.epochs = epochs <NEW_LINE> self._check_training_sanity( epochs=epochs, total_examples=total_examples, total_words=total_words, **kwargs) <NEW_LINE> for callback in self.callbacks: <NEW_LINE> <INDENT> callback.on_train_begin(self) <NEW_LINE> <DEDENT> trained_word_count = 0 <NEW_LINE> raw_word_count = 0 <NEW_LINE> start = default_timer() - 0.00001 <NEW_LINE> job_tally = 0 <NEW_LINE> for cur_epoch in range(self.epochs): <NEW_LINE> <INDENT> for callback in self.callbacks: <NEW_LINE> <INDENT> callback.on_epoch_begin(self) <NEW_LINE> <DEDENT> trained_word_count_epoch, raw_word_count_epoch, job_tally_epoch = self._train_epoch( data_iterable, cur_epoch=cur_epoch, total_examples=total_examples, total_words=total_words, queue_factor=queue_factor, report_delay=report_delay) <NEW_LINE> trained_word_count += trained_word_count_epoch <NEW_LINE> raw_word_count += raw_word_count_epoch <NEW_LINE> job_tally += job_tally_epoch <NEW_LINE> for callback in self.callbacks: <NEW_LINE> <INDENT> callback.on_epoch_end(self) <NEW_LINE> <DEDENT> <DEDENT> total_elapsed = default_timer() - start <NEW_LINE> self._log_train_end(raw_word_count, trained_word_count, total_elapsed, job_tally) <NEW_LINE> self.train_count += 1 <NEW_LINE> self._clear_post_train() <NEW_LINE> for callback in self.callbacks: <NEW_LINE> <INDENT> callback.on_train_end(self) <NEW_LINE> <DEDENT> return trained_word_count, raw_word_count
Handle multi-worker training.
625941bf9f2886367277a7dc
def __init__(self, id=None, name=None, created_at=None): <NEW_LINE> <INDENT> self._id = None <NEW_LINE> self._name = None <NEW_LINE> self._created_at = None <NEW_LINE> self.discriminator = None <NEW_LINE> self.id = id <NEW_LINE> self.name = name <NEW_LINE> self.created_at = created_at
GroupMember - a model defined in Swagger
625941bf6e29344779a62561
def mask_around_lonlat(image_in,y_lonlat_in): <NEW_LINE> <INDENT> batch_size,timesteps,features=np.shape(image_in) <NEW_LINE> mask=[]; <NEW_LINE> image_input=np.copy(image_in) <NEW_LINE> for i in range(int(FLAGS.batch_size)): <NEW_LINE> <INDENT> mask_t=[]; <NEW_LINE> for t in range(int(timesteps)): <NEW_LINE> <INDENT> image=image_input[i,t,:] <NEW_LINE> image=np.reshape(image, [h,w,channels]) <NEW_LINE> lon,lat=y_lonlat_in[i,t,:] <NEW_LINE> lon_index=int(lon*w) <NEW_LINE> lat_index=int(lat*h) <NEW_LINE> lat_lb=lat_index-10 <NEW_LINE> lat_up=lat_index+10 <NEW_LINE> lon_lb=lon_index-10 <NEW_LINE> lon_up=lon_index+10 <NEW_LINE> if float(lat_index-10)<0.0 : <NEW_LINE> <INDENT> lat_lb=0 <NEW_LINE> lat_up=lat_lb+20; <NEW_LINE> <DEDENT> if float(lat_index+10)>(h-1): <NEW_LINE> <INDENT> lat_up=h <NEW_LINE> lat_lb=h-20 <NEW_LINE> <DEDENT> if float(lon_index-10)<0.0: <NEW_LINE> <INDENT> lon_lb=0 <NEW_LINE> lon_up=lon_lb+20 <NEW_LINE> <DEDENT> if float(lon_index+10)>(w-1): <NEW_LINE> <INDENT> lon_up=w <NEW_LINE> lon_lb=w-20 <NEW_LINE> <DEDENT> image[ 0:lat_lb, :, :]=0 <NEW_LINE> image[ lat_up:h, :, :]=0 <NEW_LINE> image[ :,0:lon_lb, :]=0 <NEW_LINE> image[ :,lon_up:w,:]=0 <NEW_LINE> mask_t.append(image) <NEW_LINE> <DEDENT> mask.append([mask_t]) <NEW_LINE> <DEDENT> mask=np.asarray(np.concatenate(mask,axis=0)) <NEW_LINE> mask=np.reshape(mask,[FLAGS.batch_size,timesteps,h*w*channels]) <NEW_LINE> return mask
From the large input image, zero out everything outside a 20x20 sub-region (±10 pixels) centred on (lon, lat). Args: image_in: X - [batch_size, timesteps, feature_size(h*w)*channels]: (batch_size, timesteps, 22188) y_lonlat_in: [batch_size, timesteps, 2]: (batch_size, timesteps, 2) Returns: masked_image: (batch_size, timesteps, 86(h)*129(w)*2(channels)): window size is 20x20
625941bf3539df3088e2e298
def set_external_logger(self, extlogger): <NEW_LINE> <INDENT> if extlogger: <NEW_LINE> <INDENT> self._logger = extlogger
Let the consumer pass an external logger
625941bfeab8aa0e5d26daa4
def search(session, **kwargs): <NEW_LINE> <INDENT> try: <NEW_LINE> <INDENT> url = DB_API + '/search?' <NEW_LINE> for param, value in kwargs.items(): <NEW_LINE> <INDENT> url += f'{param}={value}&' <NEW_LINE> <DEDENT> res = session.get(url) <NEW_LINE> data = res.json() <NEW_LINE> if res.status_code != 200 or 'results' not in data.keys(): <NEW_LINE> <INDENT> raise Exception(f'Unexpected error when querying Discogs API ({res.status_code})') <NEW_LINE> <DEDENT> if not data['results']: <NEW_LINE> <INDENT> raise Exception('No results found') <NEW_LINE> <DEDENT> return data['results'][0] <NEW_LINE> <DEDENT> except Exception as err: <NEW_LINE> <INDENT> print(f'Failed to find release for search {kwargs} in Discogs database: {err}') <NEW_LINE> raise
Searches the Discogs API for a release object Arguments: session (requests.Session) - API session object **kwargs (dict) - All kwargs are added as query parameters in the search call Returns: dict - The first result returned in the search Raises: Exception if release cannot be found
625941bfd8ef3951e324348a
def print_lyrics(song_name): <NEW_LINE> <INDENT> lyrics = get_lyrics(song_name) <NEW_LINE> if lyrics is None: <NEW_LINE> <INDENT> return <NEW_LINE> <DEDENT> lyrics = strip_lyrics(lyrics) <NEW_LINE> print(lyrics)
Given a song title, prints the lyrics.
625941bf956e5f7376d70dbb
def __invert__(self): <NEW_LINE> <INDENT> return self.operate(inv)
Implement the ``~`` operator. When used with SQL expressions, results in a NOT operation, equivalent to :func:`~.expression.not_`, that is:: ~a is equivalent to:: from sqlalchemy import not_ not_(a)
625941bf046cf37aa974cc97
def simplify_givens(net, var, givens): <NEW_LINE> <INDENT> parents = net.get_parents(var) <NEW_LINE> descendants = get_descendants(net, var) <NEW_LINE> new_givens = {} <NEW_LINE> for g in givens: <NEW_LINE> <INDENT> if g in parents: <NEW_LINE> <INDENT> new_givens[g] = givens[g] <NEW_LINE> <DEDENT> if g in descendants: <NEW_LINE> <INDENT> return givens <NEW_LINE> <DEDENT> <DEDENT> if set(parents) != set(new_givens.keys()): <NEW_LINE> <INDENT> return givens <NEW_LINE> <DEDENT> return new_givens
If givens include every parent of var and no descendants, returns a simplified list of givens, keeping only parents. Does not modify original givens. Otherwise, if not all parents are given, or if a descendant is given, returns original givens.
625941bf1f037a2d8b94614b
def __getattr__(self, attr): <NEW_LINE> <INDENT> if attr in self.meta.keys(): <NEW_LINE> <INDENT> return self.meta[attr] <NEW_LINE> <DEDENT> else: <NEW_LINE> <INDENT> raise AttributeError("'%s' not in metadata" % attr)
Dispatch attribute access to the metadata.
625941bfdc8b845886cb5481
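A minimal host class for the `__getattr__` above, showing the dispatch in action; `__getattr__` is only consulted when normal attribute lookup fails, so `self.meta` itself resolves normally:

```python
class MetaBacked:
    """Minimal host for the __getattr__ above: unknown attribute reads
    fall through to the meta dictionary."""
    def __init__(self, meta):
        self.meta = meta

    def __getattr__(self, attr):
        if attr in self.meta.keys():
            return self.meta[attr]
        raise AttributeError("'%s' not in metadata" % attr)

obj = MetaBacked({'author': 'alice', 'version': 2})
```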
def respond(journal_id, answer=True): <NEW_LINE> <INDENT> user = session["user_id"] <NEW_LINE> if answer: <NEW_LINE> <INDENT> Member.create({'journal_id': journal_id, 'user_id': user}) <NEW_LINE> <DEDENT> Invite.delete({'journal_id': journal_id, 'user_id': user}) <NEW_LINE> return render_index()
Accept or decline an invite; the invite record is deleted either way, and a membership is created on acceptance.
625941bf7cff6e4e811178d3
def test_underage_follows(self): <NEW_LINE> <INDENT> result = set(users.underage_follows(USERS)) <NEW_LINE> self.assertSetEqual(self.answer, result)
Test that underage_follows returns the expected set.
625941bf4f88993c3716bfb7
def md5_sum(path): <NEW_LINE> <INDENT> rex = re.compile(r"^([a-f0-9]{32})\s+(\S+)$") <NEW_LINE> for line in remote_popen(f"md5sum {path}").split("\n"): <NEW_LINE> <INDENT> m = rex.match(line.strip()) <NEW_LINE> if m: <NEW_LINE> <INDENT> return m.group(1) <NEW_LINE> <DEDENT> <DEDENT> return None
Get the checksum of the passed path or None if non-existent
625941bfb830903b967e985a
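The parsing half of `md5_sum` in isolation, taking the `md5sum` command output as a string instead of running it over `remote_popen` (the sample line below is illustrative; the hash is the md5 of an empty input):

```python
import re

MD5_LINE = re.compile(r"^([a-f0-9]{32})\s+(\S+)$")

def parse_md5sum_output(output):
    """Extract the checksum from `md5sum <path>` output, as above,
    or None when no line matches (e.g. the file does not exist)."""
    for line in output.split("\n"):
        m = MD5_LINE.match(line.strip())
        if m:
            return m.group(1)
    return None

sample = "d41d8cd98f00b204e9800998ecf8427e  /tmp/empty.bin\n"
```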
def transp(self): <NEW_LINE> <INDENT> self.grid2matrix(mat_ids = [0]) <NEW_LINE> self.matrix_res = ~self.matrices[0] <NEW_LINE> self.create_cells(self.frame_res, self.matrix_res.n_rows, self.matrix_res.n_cols, self.matrix_res.items, "~A")
Performs matrix transposition, writes result to results matrix, and creates output cells in results frame. :return: None.
625941bf090684286d50ec30
def add_items_to_message(msg, log_dict): <NEW_LINE> <INDENT> out = msg <NEW_LINE> for key, value in log_dict.items(): <NEW_LINE> <INDENT> out += " {}={}".format(key, value) <NEW_LINE> <DEDENT> return out
Utility function to add dictionary items to a log message.
625941bfa8ecb033257d301b
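The utility above in runnable form; note it relies on insertion-ordered dicts (Python 3.7+) for a stable key order:

```python
def add_items_to_message(msg, log_dict):
    """Append each key=value pair from log_dict to the base log message."""
    out = msg
    for key, value in log_dict.items():
        out += " {}={}".format(key, value)
    return out

line = add_items_to_message("request served", {"status": 200, "ms": 12})
```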
def __init__(self, code, reason): <NEW_LINE> <INDENT> self._code = code <NEW_LINE> try: <NEW_LINE> <INDENT> self._name = wsframeproto.CloseReason(code).name <NEW_LINE> <DEDENT> except ValueError: <NEW_LINE> <INDENT> if 1000 <= code <= 2999: <NEW_LINE> <INDENT> self._name = 'RFC_RESERVED' <NEW_LINE> <DEDENT> elif 3000 <= code <= 3999: <NEW_LINE> <INDENT> self._name = 'IANA_RESERVED' <NEW_LINE> <DEDENT> elif 4000 <= code <= 4999: <NEW_LINE> <INDENT> self._name = 'PRIVATE_RESERVED' <NEW_LINE> <DEDENT> else: <NEW_LINE> <INDENT> self._name = 'INVALID_CODE' <NEW_LINE> <DEDENT> <DEDENT> self._reason = reason
Constructor. :param int code: :param str reason:
625941bf8a349b6b435e80c1
def test_optional_picklable(): <NEW_LINE> <INDENT> ctx = Rules(std.atoms, std.floats) <NEW_LINE> actions = [ std.optional(verb=verb, typ=typ, ctx=ctx) for verb in [JSON2PY, PY2JSON] for typ in [Optional[str], Optional[float], Optional[int], Optional[bool]] ] <NEW_LINE> assert None not in actions <NEW_LINE> dumps(actions)
Test that actions generated by the optional rule can be pickled.
625941bf76d4e153a657ea7d
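The test above checks that generated converter actions survive `pickle.dumps`. The general principle it exercises can be shown without the library: actions built as `functools.partial` over a module-level function pickle cleanly, whereas closures or lambdas would raise `PicklingError`. `convert_optional` is a toy converter invented here, not part of the library under test:

```python
import pickle
from functools import partial

def convert_optional(value, inner):
    """Toy Optional[T] converter: pass None through, otherwise apply the inner converter."""
    return None if value is None else inner(value)

# One action per wrapped type; partials over a module-level function are picklable.
actions = [partial(convert_optional, inner=t) for t in (str, float, int, bool)]
payload = pickle.dumps(actions)
```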
def speed_to_cadences(bicycle, speed, digits=None): <NEW_LINE> <INDENT> b = bicycle <NEW_LINE> attrs = ['front_cogs', 'rear_cogs', 'crank_length', 'rear_wheel'] <NEW_LINE> check_attrs(b, *attrs) <NEW_LINE> check_attrs(b.rear_wheel, 'diameter') <NEW_LINE> gr = gain_ratios(b) <NEW_LINE> result = {} <NEW_LINE> for (k, g) in gr.items(): <NEW_LINE> <INDENT> result[k] = speed/(2*pi*b.crank_length*g*(3600/1e6)) <NEW_LINE> <DEDENT> if digits is not None: <NEW_LINE> <INDENT> result = {k: round(v, digits) for k, v in result.items()} <NEW_LINE> <DEDENT> return result
Return cadences in hertz (revolutions per second). Speed is measured in kilometers per hour. Assume the following bicycle attributes are non-null and non-empty: - front_cogs - rear_cogs - crank_length - rear_wheel Raise a ``ValueError``, if that is not the case. EXAMPLES:: >>> w = Wheel(diameter=600) >>> b = Bicycle(front_cogs=[40], rear_cogs=[20, 30], crank_length=100, rear_wheel=w) >>> speed_to_cadences(b, 18.1, digits=1) {(40, 30): 2.0, (40, 20): 1.3}
625941bf38b623060ff0ad3b
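The formula in the entry above can be checked against its own doctest. Assuming the conventional gain-ratio definition (wheel radius over crank length, times front over rear cog teeth), distance per crank revolution is `2*pi*crank_length*gain_ratio` in millimetres, and the `3600/1e6` factor converts km/h into mm/s:

```python
from math import pi

def cadence_hz(speed_kmh, crank_length_mm, gain_ratio):
    """Cadence in Hz for a given road speed, crank length, and gain ratio."""
    return speed_kmh / (2 * pi * crank_length_mm * gain_ratio * (3600 / 1e6))

# Reproduce the docstring example: wheel diameter 600 mm, crank 100 mm.
g_40_20 = (600 / 2 / 100) * (40 / 20)   # gain ratio 6.0
g_40_30 = (600 / 2 / 100) * (40 / 30)   # gain ratio 4.0
```

With these values, `cadence_hz(18.1, 100, g_40_20)` rounds to 1.3 Hz and `cadence_hz(18.1, 100, g_40_30)` to 2.0 Hz, matching the documented output.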
def evaluate(self, x_test, y_test): <NEW_LINE> <INDENT> prediction = self.predict(x_test) <NEW_LINE> loss_val = self.loss(prediction, y_test) <NEW_LINE> eval_str = f'{self.loss_function}: {format(loss_val, ".4f")}' <NEW_LINE> if self.metric_function is not None: <NEW_LINE> <INDENT> metric_val = self.metric(prediction, y_test) <NEW_LINE> eval_str += f' - {self.metric_function}: {format(metric_val, ".4f")}' <NEW_LINE> <DEDENT> return eval_str
Return a formatted string reporting the configured loss (and metric, if any) of the model prediction on the test set Parameters ---------- x_test : np.array x_test is assumed to be a list of all the inputs to be forward propagated; in particular, its first axis is assumed to index the individual inputs y_test : np.array y_test is the list of target outputs corresponding to the inputs in x_test Returns ------- str The formatted evaluation result, e.g. ``mse: 0.2500``
625941bf442bda511e8be369
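A free-standing sketch of the report format produced by `evaluate` above, using mean squared error as a concrete loss; the names `mse` and `evaluation_string` are introduced here and the loss choice is an assumption, since the original dispatches on `self.loss_function`:

```python
import numpy as np

def mse(prediction, y_true):
    """Mean squared error between predictions and targets."""
    return float(np.mean((prediction - y_true) ** 2))

def evaluation_string(prediction, y_true, loss_name="mse", metric=None, metric_name=None):
    """Format an evaluation report like 'mse: 0.2500', optionally appending a metric."""
    out = f'{loss_name}: {mse(prediction, y_true):.4f}'
    if metric is not None:
        out += f' - {metric_name}: {metric(prediction, y_true):.4f}'
    return out
```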
def run(self, commit=False): <NEW_LINE> <INDENT> sign = KojiSignRPMs(self.env["koji_profile"], self.rpmsign_class, log_level=logging.DEBUG) <NEW_LINE> for i in self.details(commit=commit): <NEW_LINE> <INDENT> sign.logger.info(i) <NEW_LINE> <DEDENT> msg = "Reading RPM information from koji" <NEW_LINE> sign.logger.info(msg) <NEW_LINE> rpm_info_list = sign.get_latest_tagged_rpms(self.koji_tag) <NEW_LINE> if self.packages: <NEW_LINE> <INDENT> rpm_info_list = sign.filter_rpm_info_list_by_packages(rpm_info_list, self.packages) <NEW_LINE> <DEDENT> sign.sign(rpm_info_list, self.sigkeys, just_sign=self.just_sign, just_write=self.just_write, commit=commit)
Print command details, get command and run it. :param commit: Disable dry-run, apply changes for real. :type commit: bool=False
625941bf50812a4eaa59c271