<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def sampen(data, emb_dim=2, tolerance=None, dist=rowwise_chebyshev, debug_plot=False, debug_data=False, plot_file=None): """ Computes the sample entropy of the given data. Explanation of the sample entropy: The sample entropy of a time series is defined as the negative natural logarithm of the conditional probability that two sequences similar for emb_dim points remain similar at the next point, excluding self-matches. A lower value for the sample entropy therefore corresponds to a higher probability indicating more self-similarity. Explanation of the algorithm: The algorithm constructs all subsequences of length emb_dim and counts the pairs (s_i, s_j) with i != j where dist(s_i, s_j) < tolerance. The same process is repeated for all subsequences of length emb_dim + 1. The sum of similar sequence pairs with length emb_dim + 1 is divided by the sum of similar sequence pairs with length emb_dim. The result of the algorithm is the negative logarithm of this ratio/probability. References: .. [se_1] J. S. Richman and J. R. Moorman, “Physiological time-series analysis using approximate entropy and sample entropy,” American Journal of Physiology-Heart and Circulatory Physiology, vol. 278, no. 6, pp. H2039–H2049, 2000. Reference code: .. [se_a] "sample_entropy" function in R-package "pracma", url: https://cran.r-project.org/web/packages/pracma/pracma.pdf Args: data (array-like of float): input data Kwargs: emb_dim (int): the embedding dimension (length of vectors to compare) tolerance (float): distance threshold for two template vectors to be considered equal (default: 0.2 * std(data)) dist (function (2d-array, 1d-array) -> 1d-array): distance function used to calculate the distance between template vectors. Sampen is defined using ``rowwise_chebyshev``. You should only use something else if you are sure that you need it. debug_plot (boolean): if True, a histogram of the individual distances for m and m+1 will be plotted debug_data (boolean): if True, debugging data will be returned alongside the result plot_file (str): if debug_plot is True and plot_file is not None, the plot will be saved under the given file name instead of directly showing it through ``plt.show()`` Returns: float: the sample entropy of the data (negative logarithm of ratio between similar template vectors of length emb_dim + 1 and emb_dim) [float list, float list]: Lists of lists of the form ``[dists_m, dists_m1]`` containing the distances between template vectors for m (dists_m) and for m + 1 (dists_m1). """
data = np.asarray(data)
if tolerance is None:
    tolerance = 0.2 * np.std(data)
n = len(data)

# build matrix of "template vectors"
# (all consecutive subsequences of length m)
# x0 x1 x2 x3 ... xm-1
# x1 x2 x3 x4 ... xm
# x2 x3 x4 x5 ... xm+1
# ...
# x_n-m-1     ... xn-1

# since we need two of these matrices for m = emb_dim and m = emb_dim + 1,
# we build one that is large enough => shape (emb_dim+1, n-emb_dim)

# note that we ignore the last possible template vector with length emb_dim,
# because this vector has no corresponding vector of length m+1 and thus
# does not count towards the conditional probability
# (otherwise first dimension would be n-emb_dim+1 and not n-emb_dim)
tVecs = delay_embedding(np.asarray(data), emb_dim + 1, lag=1)
plot_data = []
counts = []
for m in [emb_dim, emb_dim + 1]:
    counts.append(0)
    plot_data.append([])
    # get the matrix that we need for the current m
    tVecsM = tVecs[:n - m + 1, :m]
    # successively calculate distances between each pair of template vectors
    for i in range(len(tVecsM) - 1):
        dsts = dist(tVecsM[i + 1:], tVecsM[i])
        if debug_plot:
            plot_data[-1].extend(dsts)
        # count how many distances are smaller than the tolerance
        counts[-1] += np.sum(dsts < tolerance)
if counts[1] == 0:
    # log would be infinite => cannot determine saen
    saen = np.inf
else:
    saen = -np.log(1.0 * counts[1] / counts[0])
if debug_plot:
    plot_dists(plot_data, tolerance, m,
               title="sampEn = {:.3f}".format(saen), fname=plot_file)
if debug_data:
    return (saen, plot_data)
else:
    return saen
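As a quick sanity check on the definition above, a hypothetical usage sketch (not part of the original dataset row); it assumes sampen() and its helpers (delay_embedding, rowwise_chebyshev) are importable from this module. An irregular signal should score higher than a regular one.

import numpy as np

# Hypothetical usage sketch: compares the sample entropy of irregular white
# noise against a highly regular sine wave.
np.random.seed(0)
noise = np.random.normal(size=2000)              # irregular signal
sine = np.sin(np.linspace(0, 40 * np.pi, 2000))  # regular signal
print(sampen(noise, emb_dim=2))  # expected: relatively high (around 2)
print(sampen(sine, emb_dim=2))   # expected: noticeably lower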
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def binary_n(total_N, min_n=50): """ Creates a list of values by successively halving the total length total_N until the resulting value is less than min_n. Non-integer results are rounded down. Args: total_N (int): total length Kwargs: min_n (int): minimal length after division Returns: list of integers: """
max_exp = np.log2(1.0 * total_N / min_n)
max_exp = int(np.floor(max_exp))
return [int(np.floor(1.0 * total_N / (2**i))) for i in range(1, max_exp + 1)]
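To make the halving behaviour concrete, a small worked example (the values follow directly from the formula above):

# For total_N=1000 and min_n=50:
#   max_exp = floor(log2(1000/50)) = floor(4.32) = 4
# so the halving sequence is i=1 -> 500, i=2 -> 250, i=3 -> 125, i=4 -> 62.
assert binary_n(1000, min_n=50) == [500, 250, 125, 62]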
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def expected_h(nvals, fit="RANSAC"): """ Uses expected_rs to calculate the expected value for the Hurst exponent h based on the values of n used for the calculation. Args: nvals (iterable of int): the values of n used to calculate the individual (R/S)_n KWargs: fit (str): the fitting method to use for the line fit, either 'poly' for normal least squares polynomial fitting or 'RANSAC' for RANSAC-fitting which is more robust to outliers Returns: float: expected h for white noise """
rsvals = [expected_rs(n) for n in nvals]
poly = poly_fit(np.log(nvals), np.log(rsvals), 1, fit=fit)
return poly[0]
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def corr_dim(data, emb_dim, rvals=None, dist=rowwise_euclidean, fit="RANSAC", debug_plot=False, debug_data=False, plot_file=None): """ Calculates the correlation dimension with the Grassberger-Procaccia algorithm Explanation of correlation dimension: The correlation dimension is a characteristic measure that can be used to describe the geometry of chaotic attractors. It is defined using the correlation sum C(r) which is the fraction of pairs of points X_i in the phase space whose distance is smaller than r. If the relation between C(r) and r can be described by the power law C(r) ~ r^D then D is called the correlation dimension of the system. In a d-dimensional system, the maximum value for D is d. This value is obtained for systems that expand uniformly in each dimension with time. The lowest possible value is 0 for a system with constant C(r) (i.e. a system that visits just one point in the phase space). Generally if D is lower than d and the system has an attractor, this attractor is called "strange" and D is a measure of this "strangeness". Explanation of the algorithm: The Grassberger-Procaccia algorithm calculates C(r) for a range of different r and then fits a straight line into the plot of log(C(r)) versus log(r). This version of the algorithm is created for one-dimensional (scalar) time series. Therefore, before calculating C(r), a delay embedding of the time series is performed to yield emb_dim dimensional vectors. A higher value for emb_dim allows one to reconstruct higher dimensional dynamics and avoids "systematic errors due to corrections to scaling". References: .. [cd_1] P. Grassberger and I. Procaccia, “Characterization of strange attractors,” Physical review letters, vol. 50, no. 5, p. 346, 1983. .. [cd_2] P. Grassberger and I. Procaccia, “Measuring the strangeness of strange attractors,” Physica D: Nonlinear Phenomena, vol. 9, no. 1, pp. 189–208, 1983. .. [cd_3] P. Grassberger, “Grassberger-Procaccia algorithm,” Scholarpedia, vol. 2, no. 5, p. 3043. url: http://www.scholarpedia.org/article/Grassberger-Procaccia_algorithm Reference Code: .. [cd_a] "corrDim" function in R package "fractal", url: https://cran.r-project.org/web/packages/fractal/fractal.pdf .. [cd_b] Peng Yuehua, "Correlation dimension", url: http://de.mathworks.com/matlabcentral/fileexchange/24089-correlation-dimension Args: data (array-like of float): time series of data points emb_dim (int): embedding dimension Kwargs: rvals (iterable of float): list of values to use for r (default: logarithmic_r(0.1 * std, 0.5 * std, 1.03)) dist (function (2d-array, 1d-array) -> 1d-array): row-wise difference function fit (str): the fitting method to use for the line fit, either 'poly' for normal least squares polynomial fitting or 'RANSAC' for RANSAC-fitting which is more robust to outliers debug_plot (boolean): if True, a simple plot of the final line-fitting step will be shown debug_data (boolean): if True, debugging data will be returned alongside the result plot_file (str): if debug_plot is True and plot_file is not None, the plot will be saved under the given file name instead of directly showing it through ``plt.show()`` Returns: float: correlation dimension as slope of the line fitted to log(r) vs log(C(r)) (1d-vector, 1d-vector, list): only present if debug_data is True: debug data of the form ``(rvals, csums, poly)`` where ``rvals`` are the values used for log(r), ``csums`` are the corresponding log(C(r)) and ``poly`` are the line coefficients (``[slope, intercept]``) """
data = np.asarray(data)

# TODO what are good values for r?
# TODO do this for multiple values of emb_dim?
if rvals is None:
    sd = np.std(data)
    rvals = logarithmic_r(0.1 * sd, 0.5 * sd, 1.03)
n = len(data)
orbit = delay_embedding(data, emb_dim, lag=1)
dists = np.array([dist(orbit, orbit[i]) for i in range(len(orbit))])
csums = []
for r in rvals:
    s = 1.0 / (n * (n - 1)) * np.sum(dists < r)
    csums.append(s)
csums = np.array(csums)
# filter zeros from csums
nonzero = np.where(csums != 0)
rvals = np.array(rvals)[nonzero]
csums = csums[nonzero]
if len(csums) == 0:
    # all sums are zero => we cannot fit a line
    poly = [np.nan, np.nan]
else:
    # pass the documented fit method through to the line fit
    poly = poly_fit(np.log(rvals), np.log(csums), 1, fit=fit)
if debug_plot:
    plot_reg(np.log(rvals), np.log(csums), poly, "log(r)", "log(C(r))",
             fname=plot_file)
if debug_data:
    return (poly[0], (np.log(rvals), np.log(csums), poly))
else:
    return poly[0]
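A hypothetical usage sketch (again assuming corr_dim() and its helpers are importable): the fully chaotic logistic map is a standard test signal whose correlation dimension is expected to come out close to 1; the exact estimate depends on the data length and rvals.

import numpy as np

# Hypothetical usage sketch: logistic map at r = 3.9999 (fully chaotic).
x = np.zeros(5000)
x[0] = 0.1
for i in range(1, len(x)):
    x[i] = 3.9999 * x[i - 1] * (1 - x[i - 1])
print(corr_dim(x[1000:], emb_dim=4))  # expected to be roughly 1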
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def check_status(status, expected, path, headers=None, resp_headers=None, body=None, extras=None): """Check HTTP response status is expected. Args: status: HTTP response status. int. expected: a list of expected statuses. A list of ints. path: filename or a path prefix. headers: HTTP request headers. resp_headers: HTTP response headers. body: HTTP response body. extras: extra info to be logged verbatim if error occurs. Raises: AuthorizationError: if authorization failed. NotFoundError: if an object that's expected to exist doesn't. TimeoutError: if HTTP request timed out. ServerError: if server experienced some errors. FatalError: if any other unexpected errors occurred. """
if status in expected:
    return

msg = ('Expect status %r from Google Storage. But got status %d.\n'
       'Path: %r.\n'
       'Request headers: %r.\n'
       'Response headers: %r.\n'
       'Body: %r.\n'
       'Extra info: %r.\n' %
       (expected, status, path, headers, resp_headers, body, extras))

if status == httplib.UNAUTHORIZED:
    raise AuthorizationError(msg)
elif status == httplib.FORBIDDEN:
    raise ForbiddenError(msg)
elif status == httplib.NOT_FOUND:
    raise NotFoundError(msg)
elif status == httplib.REQUEST_TIMEOUT:
    raise TimeoutError(msg)
elif status == httplib.REQUESTED_RANGE_NOT_SATISFIABLE:
    raise InvalidRange(msg)
elif (status == httplib.OK and 308 in expected and
      httplib.OK not in expected):
    raise FileClosedError(msg)
elif status >= 500:
    raise ServerError(msg)
else:
    raise FatalError(msg)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _get_default_retry_params(): """Get default RetryParams for current request and current thread. Returns: A new instance of the default RetryParams. """
default = getattr(_thread_local_settings, 'default_retry_params', None)
if default is None or not default.belong_to_current_request():
    return RetryParams()
else:
    return copy.copy(default)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _should_retry(resp): """Given a urlfetch response, decide whether to retry that request."""
return (resp.status_code == httplib.REQUEST_TIMEOUT or (resp.status_code >= 500 and resp.status_code < 600))
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _eager_tasklet(tasklet): """Decorator to make a tasklet run eagerly."""
@utils.wrapping(tasklet)
def eager_wrapper(*args, **kwds):
    fut = tasklet(*args, **kwds)
    _run_until_rpc()
    return fut

return eager_wrapper
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def run(self, tasklet, **kwds): """Run a tasklet with retry. The retry should be transparent to the caller: if no results are successful, the exception or result from the last retry is returned to the caller. Args: tasklet: the tasklet to run. **kwds: keywords arguments to run the tasklet. Raises: The exception from running the tasklet. Returns: The result from running the tasklet. """
start_time = time.time()
n = 1

while True:
    e = None
    result = None
    got_result = False

    try:
        result = yield tasklet(**kwds)
        got_result = True
        if not self.should_retry(result):
            raise ndb.Return(result)
    except runtime.DeadlineExceededError:
        logging.debug(
            'Tasklet has exceeded request deadline after %s seconds total',
            time.time() - start_time)
        raise
    except self.retriable_exceptions as e:
        pass

    if n == 1:
        logging.debug('Tasklet is %r', tasklet)

    delay = self.retry_params.delay(n, start_time)

    if delay <= 0:
        logging.debug(
            'Tasklet failed after %s attempts and %s seconds in total',
            n, time.time() - start_time)
        if got_result:
            raise ndb.Return(result)
        elif e is not None:
            raise e
        else:
            assert False, 'Should never reach here.'

    if got_result:
        logging.debug('Got result %r from tasklet.', result)
    else:
        logging.debug('Got exception "%r" from tasklet.', e)
    logging.debug('Retry in %s seconds.', delay)
    n += 1
    yield tasklets.sleep(delay)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _check(cls, name, val, can_be_zero=False, val_type=float): """Check init arguments. Args: name: name of the argument. For logging purposes. val: value. Value has to be a non-negative number. can_be_zero: whether value can be zero. val_type: Python type of the value. Returns: The value. Raises: ValueError: when invalid value is passed in. TypeError: when invalid value type is passed in. """
valid_types = [val_type]
if val_type is float:
    valid_types.append(int)
if type(val) not in valid_types:
    raise TypeError(
        'Expect type %s for parameter %s' % (val_type.__name__, name))
if val < 0:
    raise ValueError(
        'Value for parameter %s has to be greater than 0' % name)
if not can_be_zero and val == 0:
    raise ValueError(
        'Value for parameter %s can not be 0' % name)
return val
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def delay(self, n, start_time): """Calculate delay before the next retry. Args: n: the number of the current attempt. The first attempt should be 1. start_time: the time when retry started in unix time. Returns: Number of seconds to wait before next retry. -1 if retry should give up. """
if (n > self.max_retries or
    (n > self.min_retries and
     time.time() - start_time > self.max_retry_period)):
    return -1
return min(
    math.pow(self.backoff_factor, n - 1) * self.initial_delay,
    self.max_delay)
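A worked illustration of the exponential backoff. The parameter values are hypothetical; this assumes the RetryParams constructor accepts these keywords, as the gcs.RetryParams(backoff_factor=1.1) call later in this section suggests.

import time

# Hypothetical backoff illustration: delays double from initial_delay up to
# max_delay, until max_retries is exceeded or (past min_retries) the elapsed
# time exceeds max_retry_period, at which point delay() returns -1.
params = RetryParams(initial_delay=0.1, backoff_factor=2, max_delay=10,
                     max_retries=8)
start = time.time()
print([params.delay(n, start) for n in range(1, 6)])
# expected roughly: [0.1, 0.2, 0.4, 0.8, 1.6]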
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def open(filename, mode='r', content_type=None, options=None, read_buffer_size=storage_api.ReadBuffer.DEFAULT_BUFFER_SIZE, retry_params=None, _account_id=None, offset=0): """Opens a Google Cloud Storage file and returns it as a File-like object. Args: filename: A Google Cloud Storage filename of form '/bucket/filename'. mode: 'r' for reading mode. 'w' for writing mode. In reading mode, the file must exist. In writing mode, a file will be created or overwritten. content_type: The MIME type of the file. str. Only valid in writing mode. options: A str->basestring dict to specify additional headers to pass to GCS e.g. {'x-goog-acl': 'private', 'x-goog-meta-foo': 'foo'}. Supported options are x-goog-acl, x-goog-meta-, cache-control, content-disposition, and content-encoding. Only valid in writing mode. See https://developers.google.com/storage/docs/reference-headers for details. read_buffer_size: The buffer size for read. Read keeps a buffer and prefetches another one. To minimize blocking for large files, always read by buffer size. To minimize number of RPC requests for small files, set a large buffer size. Max is 30MB. retry_params: An instance of api_utils.RetryParams for subsequent calls to GCS from this file handle. If None, the default one is used. _account_id: Internal-use only. offset: Number of bytes to skip at the start of the file. If None, 0 is used. Returns: A reading or writing buffer that supports File-like interface. Buffer must be closed after operations are done. Raises: errors.AuthorizationError: if authorization failed. errors.NotFoundError: if an object that's expected to exist doesn't. ValueError: invalid open mode or if content_type or options are specified in reading mode. """
common.validate_file_path(filename)
api = storage_api._get_storage_api(retry_params=retry_params,
                                   account_id=_account_id)
filename = api_utils._quote_filename(filename)

if mode == 'w':
    common.validate_options(options)
    return storage_api.StreamingBuffer(api, filename, content_type, options)
elif mode == 'r':
    if content_type or options:
        raise ValueError('Options and content_type can only be specified '
                         'for writing mode.')
    return storage_api.ReadBuffer(api,
                                  filename,
                                  buffer_size=read_buffer_size,
                                  offset=offset)
else:
    raise ValueError('Invalid mode %s.' % mode)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def delete(filename, retry_params=None, _account_id=None): """Delete a Google Cloud Storage file. Args: filename: A Google Cloud Storage filename of form '/bucket/filename'. retry_params: An api_utils.RetryParams for this call to GCS. If None, the default one is used. _account_id: Internal-use only. Raises: errors.NotFoundError: if the file doesn't exist prior to deletion. """
api = storage_api._get_storage_api(retry_params=retry_params,
                                   account_id=_account_id)
common.validate_file_path(filename)
filename = api_utils._quote_filename(filename)
status, resp_headers, content = api.delete_object(filename)
errors.check_status(status, [204], filename, resp_headers=resp_headers,
                    body=content)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def get_location(bucket, retry_params=None, _account_id=None): """Returns the location for the given bucket. https://cloud.google.com/storage/docs/bucket-locations Args: bucket: A Google Cloud Storage bucket of form '/bucket'. retry_params: An api_utils.RetryParams for this call to GCS. If None, the default one is used. _account_id: Internal-use only. Returns: The location as a string. Raises: errors.AuthorizationError: if authorization failed. errors.NotFoundError: if the bucket does not exist. """
return _get_bucket_attribute(bucket, 'location', 'LocationConstraint', retry_params=retry_params, _account_id=_account_id)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def get_storage_class(bucket, retry_params=None, _account_id=None): """Returns the storage class for the given bucket. https://cloud.google.com/storage/docs/storage-classes Args: bucket: A Google Cloud Storage bucket of form '/bucket'. retry_params: An api_utils.RetryParams for this call to GCS. If None, the default one is used. _account_id: Internal-use only. Returns: The storage class as a string. Raises: errors.AuthorizationError: if authorization failed. errors.NotFoundError: if the bucket does not exist. """
return _get_bucket_attribute(bucket, 'storageClass', 'StorageClass', retry_params=retry_params, _account_id=_account_id)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _get_bucket_attribute(bucket, query_param, xml_response_tag, retry_params=None, _account_id=None): """Helper method to request a bucket parameter and parse the response. Args: bucket: A Google Cloud Storage bucket of form '/bucket'. query_param: The query parameter to include in the get bucket request. xml_response_tag: The expected tag in the xml response. retry_params: An api_utils.RetryParams for this call to GCS. If None, the default one is used. _account_id: Internal-use only. Returns: The xml value as a string. None if the returned xml does not match expected format. Raises: errors.AuthorizationError: if authorization failed. errors.NotFoundError: if the bucket does not exist. """
api = storage_api._get_storage_api(retry_params=retry_params,
                                   account_id=_account_id)
common.validate_bucket_path(bucket)
status, headers, content = api.get_bucket('%s?%s' % (bucket, query_param))

errors.check_status(status, [200], bucket, resp_headers=headers,
                    body=content)

root = ET.fromstring(content)
if root.tag == xml_response_tag and root.text:
    return root.text
return None
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def stat(filename, retry_params=None, _account_id=None): """Get GCSFileStat of a Google Cloud storage file. Args: filename: A Google Cloud Storage filename of form '/bucket/filename'. retry_params: An api_utils.RetryParams for this call to GCS. If None, the default one is used. _account_id: Internal-use only. Returns: a GCSFileStat object containing info about this file. Raises: errors.AuthorizationError: if authorization failed. errors.NotFoundError: if an object that's expected to exist doesn't. """
common.validate_file_path(filename)
api = storage_api._get_storage_api(retry_params=retry_params,
                                   account_id=_account_id)
status, headers, content = api.head_object(
    api_utils._quote_filename(filename))
errors.check_status(status, [200], filename, resp_headers=headers,
                    body=content)
file_stat = common.GCSFileStat(
    filename=filename,
    st_size=common.get_stored_content_length(headers),
    st_ctime=common.http_time_to_posix(headers.get('last-modified')),
    etag=headers.get('etag'),
    content_type=headers.get('content-type'),
    metadata=common.get_metadata(headers))

return file_stat
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def copy2(src, dst, metadata=None, retry_params=None): """Copy the file content from src to dst. Args: src: /bucket/filename dst: /bucket/filename metadata: a dict of metadata for this copy. If None, old metadata is copied. For example, {'x-goog-meta-foo': 'bar'}. retry_params: An api_utils.RetryParams for this call to GCS. If None, the default one is used. Raises: errors.AuthorizationError: if authorization failed. errors.NotFoundError: if an object that's expected to exist doesn't. """
common.validate_file_path(src)
common.validate_file_path(dst)

if metadata is None:
    metadata = {}
    copy_meta = 'COPY'
else:
    copy_meta = 'REPLACE'
metadata.update({'x-goog-copy-source': src,
                 'x-goog-metadata-directive': copy_meta})

api = storage_api._get_storage_api(retry_params=retry_params)
status, resp_headers, content = api.put_object(
    api_utils._quote_filename(dst), headers=metadata)
errors.check_status(status, [200], src, metadata, resp_headers,
                    body=content)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def listbucket(path_prefix, marker=None, prefix=None, max_keys=None, delimiter=None, retry_params=None, _account_id=None): """Returns a GCSFileStat iterator over a bucket. Optional arguments can limit the result to a subset of files under bucket. This function has two modes: 1. List bucket mode: Lists all files in the bucket without any concept of hierarchy. GCS doesn't have real directory hierarchies. 2. Directory emulation mode: If you specify the 'delimiter' argument, it is used as a path separator to emulate a hierarchy of directories. In this mode, the "path_prefix" argument should end in the delimiter specified (thus designates a logical directory). The logical directory's contents, both files and subdirectories, are listed. The names of subdirectories returned will end with the delimiter. So listbucket can be called with the subdirectory name to list the subdirectory's contents. Args: path_prefix: A Google Cloud Storage path of format "/bucket" or "/bucket/prefix". Only objects whose fullpath starts with the path_prefix will be returned. marker: Another path prefix. Only objects whose fullpath starts lexicographically after marker will be returned (exclusive). prefix: Deprecated. Use path_prefix. max_keys: The limit on the number of objects to return. int. For best performance, specify max_keys only if you know how many objects you want. Otherwise, this method requests large batches and handles pagination for you. delimiter: Use to turn on directory mode. str of one or multiple chars that your bucket uses as its directory separator. retry_params: An api_utils.RetryParams for this call to GCS. If None, the default one is used. _account_id: Internal-use only. Examples: For files "/bucket/a", "/bucket/bar/1" "/bucket/foo", "/bucket/foo/1", "/bucket/foo/2/1", "/bucket/foo/3/1", Regular mode: listbucket("/bucket/f", marker="/bucket/foo/1") will match "/bucket/foo/2/1", "/bucket/foo/3/1". Directory mode: listbucket("/bucket/", delimiter="/") will match "/bucket/a, "/bucket/bar/" "/bucket/foo", "/bucket/foo/". listbucket("/bucket/foo/", delimiter="/") will match "/bucket/foo/1", "/bucket/foo/2/", "/bucket/foo/3/" Returns: Regular mode: A GCSFileStat iterator over matched files ordered by filename. The iterator returns GCSFileStat objects. filename, etag, st_size, st_ctime, and is_dir are set. Directory emulation mode: A GCSFileStat iterator over matched files and directories ordered by name. The iterator returns GCSFileStat objects. For directories, only the filename and is_dir fields are set. The last name yielded can be used as next call's marker. """
if prefix:
    common.validate_bucket_path(path_prefix)
    bucket = path_prefix
else:
    bucket, prefix = common._process_path_prefix(path_prefix)

if marker and marker.startswith(bucket):
    marker = marker[len(bucket) + 1:]

api = storage_api._get_storage_api(retry_params=retry_params,
                                   account_id=_account_id)
options = {}
if marker:
    options['marker'] = marker
if max_keys:
    options['max-keys'] = max_keys
if prefix:
    options['prefix'] = prefix
if delimiter:
    options['delimiter'] = delimiter

return _Bucket(api, bucket, options)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def compose(list_of_files, destination_file, files_metadata=None, content_type=None, retry_params=None, _account_id=None): """Runs the GCS Compose on the given files. Merges between 2 and 32 files into one file. Composite files may even be built from other existing composites, provided that the total component count does not exceed 1024. See here for details: https://cloud.google.com/storage/docs/composite-objects Args: list_of_files: List of file name strings with no leading slashes or bucket. destination_file: Path to the output file. Must have the bucket in the path. files_metadata: Optional, file metadata, order must match list_of_files, see link for available options: https://cloud.google.com/storage/docs/composite-objects#_Xml content_type: Optional, used to specify the content header of the output file. retry_params: Optional, an api_utils.RetryParams for this call to GCS. If None, the default one is used. _account_id: Internal-use only. Raises: ValueError: If the number of files is outside the range of 2-32. """
api = storage_api._get_storage_api(retry_params=retry_params,
                                   account_id=_account_id)
if os.getenv('SERVER_SOFTWARE').startswith('Dev'):

    def _temp_func(file_list, destination_file, content_type):
        bucket = '/' + destination_file.split('/')[1] + '/'
        with open(destination_file, 'w',
                  content_type=content_type) as gcs_merge:
            for source_file in file_list:
                with open(bucket + source_file['Name'], 'r') as gcs_source:
                    gcs_merge.write(gcs_source.read())

    compose_object = _temp_func
else:
    compose_object = api.compose_object
file_list, _ = _validate_compose_list(destination_file,
                                      list_of_files,
                                      files_metadata, 32)
compose_object(file_list, destination_file, content_type)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _validate_compose_list(destination_file, file_list, files_metadata=None, number_of_files=32): """Validates the file_list and merges the file_list, files_metadata. Args: destination_file: Path to the file (i.e. /destination_bucket/destination_file). file_list: List of files to compose, see compose for details. files_metadata: Meta details for each file in the file_list. number_of_files: Maximum number of files allowed in the list. Returns: A tuple (list_of_files, bucket): list_of_files: Ready to use dict version of the list. bucket: bucket name extracted from the file paths. """
common.validate_file_path(destination_file)
bucket = destination_file[0:(destination_file.index('/', 1) + 1)]
try:
    if isinstance(file_list, types.StringTypes):
        raise TypeError
    list_len = len(file_list)
except TypeError:
    raise TypeError('file_list must be a list')

if list_len > number_of_files:
    raise ValueError(
        'Compose attempted to create composite with too many'
        '(%i) components; limit is (%i).' % (list_len, number_of_files))
if list_len <= 0:
    raise ValueError('Compose operation requires at'
                     ' least one component; 0 provided.')

if files_metadata is None:
    files_metadata = []
elif len(files_metadata) > list_len:
    raise ValueError('files_metadata contains more entries(%i)'
                     ' than file_list(%i)'
                     % (len(files_metadata), list_len))
list_of_files = []
for source_file, meta_data in itertools.izip_longest(file_list,
                                                     files_metadata):
    if not isinstance(source_file, str):
        raise TypeError('Each item of file_list must be a string')
    if source_file.startswith('/'):
        logging.warn('Detected a "/" at the start of the file, '
                     'Unless the file name contains a "/" it '
                     ' may cause files to be misread')
    if source_file.startswith(bucket):
        logging.warn('Detected bucket name at the start of the file, '
                     'must not specify the bucket when listing file_names.'
                     ' May cause files to be misread')
    common.validate_file_path(bucket + source_file)

    list_entry = {}

    if meta_data is not None:
        list_entry.update(meta_data)
    list_entry['Name'] = source_file
    list_of_files.append(list_entry)

return list_of_files, bucket
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _next_file_gen(self, root): """Generator for next file element in the document. Args: root: root element of the XML tree. Yields: GCSFileStat for the next file. """
for e in root.getiterator(common._T_CONTENTS):
    st_ctime, size, etag, key = None, None, None, None
    for child in e.getiterator('*'):
        if child.tag == common._T_LAST_MODIFIED:
            st_ctime = common.dt_str_to_posix(child.text)
        elif child.tag == common._T_ETAG:
            etag = child.text
        elif child.tag == common._T_SIZE:
            size = child.text
        elif child.tag == common._T_KEY:
            key = child.text
    yield common.GCSFileStat(self._path + '/' + key,
                             size, etag, st_ctime)
    e.clear()
yield None
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _next_dir_gen(self, root): """Generator for next directory element in the document. Args: root: root element in the XML tree. Yields: GCSFileStat for the next directory. """
for e in root.getiterator(common._T_COMMON_PREFIXES):
    yield common.GCSFileStat(
        self._path + '/' + e.find(common._T_PREFIX).text,
        st_size=None, etag=None, st_ctime=None, is_dir=True)
    e.clear()
yield None
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _should_get_another_batch(self, content): """Whether to issue another GET bucket call. Args: content: response XML. Returns: True if should, also update self._options for the next request. False otherwise. """
if ('max-keys' in self._options and
        self._options['max-keys'] <= common._MAX_GET_BUCKET_RESULT):
    return False

elements = self._find_elements(
    content, set([common._T_IS_TRUNCATED, common._T_NEXT_MARKER]))
if elements.get(common._T_IS_TRUNCATED, 'false').lower() != 'true':
    return False

next_marker = elements.get(common._T_NEXT_MARKER)
if next_marker is None:
    self._options.pop('marker', None)
    return False
self._options['marker'] = next_marker
return True
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _find_elements(self, result, elements): """Find interesting elements from XML. This function tries to only look for specified elements without parsing the entire XML. The specified elements are best located near the beginning. Args: result: response XML. elements: a set of interesting element tags. Returns: A dict from element tag to element value. """
element_mapping = {}
result = StringIO.StringIO(result)
for _, e in ET.iterparse(result, events=('end',)):
    if not elements:
        break
    if e.tag in elements:
        element_mapping[e.tag] = e.text
        elements.remove(e.tag)
return element_mapping
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def create_file(self, filename): """Create a file. The retry_params specified in the open call will override the default retry params for this particular file handle. Args: filename: filename. """
self.response.write('Creating file %s\n' % filename)

write_retry_params = gcs.RetryParams(backoff_factor=1.1)
gcs_file = gcs.open(filename,
                    'w',
                    content_type='text/plain',
                    options={'x-goog-meta-foo': 'foo',
                             'x-goog-meta-bar': 'bar'},
                    retry_params=write_retry_params)
gcs_file.write('abcde\n')
gcs_file.write('f' * 1024 * 4 + '\n')
gcs_file.close()
self.tmp_filenames_to_clean_up.append(filename)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def list_bucket(self, bucket): """Create several files and paginate through them. Production apps should set page_size to a practical value. Args: bucket: bucket. """
self.response.write('Listbucket result:\n')

page_size = 1
stats = gcs.listbucket(bucket + '/foo', max_keys=page_size)
while True:
    count = 0
    for stat in stats:
        count += 1
        self.response.write(repr(stat))
        self.response.write('\n')

    if count != page_size or count == 0:
        break
    stats = gcs.listbucket(bucket + '/foo', max_keys=page_size,
                           marker=stat.filename)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def CreateFile(filename): """Create a GCS file with GCS client lib. Args: filename: GCS filename. Returns: The corresponding string blobkey for this GCS file. """
with gcs.open(filename, 'w') as f:
    f.write('abcde\n')

blobstore_filename = '/gs' + filename
return blobstore.create_gs_key(blobstore_filename)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _make_token_async(scopes, service_account_id): """Get a fresh authentication token. Args: scopes: A list of scopes. service_account_id: Internal-use only. Raises: An ndb.Return with a tuple (token, expiration_time) where expiration_time is seconds since the epoch. """
rpc = app_identity.create_rpc()
app_identity.make_get_access_token_call(rpc, scopes, service_account_id)
token, expires_at = yield rpc
raise ndb.Return((token, expires_at))
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _make_sync_method(name): """Helper to synthesize a synchronous method from an async method name. Used by the @add_sync_methods class decorator below. Args: name: The name of the synchronous method. Returns: A method (with first argument 'self') that retrieves and calls self.<name>, passing its own arguments, expects it to return a Future, and then waits for and returns that Future's result. """
def sync_wrapper(self, *args, **kwds):
    method = getattr(self, name)
    future = method(*args, **kwds)
    return future.get_result()

return sync_wrapper
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def add_sync_methods(cls): """Class decorator to add synchronous methods corresponding to async methods. This modifies the class in place, adding additional methods to it. If a synchronous method of a given name already exists it is not replaced. Args: cls: A class. Returns: The same class, modified in place. """
for name in cls.__dict__.keys():
    if name.endswith('_async'):
        sync_name = name[:-6]
        if not hasattr(cls, sync_name):
            setattr(cls, sync_name, _make_sync_method(name))
return cls
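A minimal hypothetical sketch of what the decorator synthesizes (the Example class is invented for illustration and assumes ndb is importable as elsewhere in this module):

from google.appengine.ext import ndb

# Hypothetical demo class, invented for illustration.
@add_sync_methods
class Example(object):

    @ndb.tasklet
    def value_async(self):
        # A trivial tasklet whose Future resolves to 42.
        raise ndb.Return(42)

e = Example()
# The decorator synthesized Example.value(), which is equivalent to
# e.value_async().get_result().
assert e.value() == 42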
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def do_request_async(self, url, method='GET', headers=None, payload=None, deadline=None, callback=None): """Issue one HTTP request. It performs async retries using tasklets. Args: url: the url to fetch. method: the method in which to fetch. headers: the http headers. payload: the data to submit in the fetch. deadline: the deadline in which to make the call. callback: the call to make once completed. Yields: The async fetch of the url. """
retry_wrapper = api_utils._RetryWrapper(
    self.retry_params,
    retriable_exceptions=api_utils._RETRIABLE_EXCEPTIONS,
    should_retry=api_utils._should_retry)
resp = yield retry_wrapper.run(
    self.urlfetch_async,
    url=url,
    method=method,
    headers=headers,
    payload=payload,
    deadline=deadline,
    callback=callback,
    follow_redirects=False)
raise ndb.Return((resp.status_code, resp.headers, resp.content))
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def get_metadata(headers): """Get user defined options from HTTP response headers."""
return dict((k, v) for k, v in headers.iteritems() if any(k.lower().startswith(valid) for valid in _GCS_METADATA))
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _process_path_prefix(path_prefix): """Validate and process a Google Cloud Storage path prefix. Args: path_prefix: a Google Cloud Storage path prefix of format '/bucket/prefix' or '/bucket/' or '/bucket'. Raises: ValueError: if path is invalid. Returns: a tuple of /bucket and prefix. prefix can be None. """
_validate_path(path_prefix)
if not _GCS_PATH_PREFIX_REGEX.match(path_prefix):
    raise ValueError('Path prefix should have format /bucket, /bucket/, '
                     'or /bucket/prefix but got %s.' % path_prefix)
bucket_name_end = path_prefix.find('/', 1)
bucket = path_prefix
prefix = None
if bucket_name_end != -1:
    bucket = path_prefix[:bucket_name_end]
    prefix = path_prefix[bucket_name_end + 1:] or None
return bucket, prefix
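A few illustrative input/output pairs, traced by hand through the splitting logic above (they additionally assume _GCS_PATH_PREFIX_REGEX accepts these paths):

# Illustrative checks of the splitting behaviour.
assert _process_path_prefix('/bucket') == ('/bucket', None)
assert _process_path_prefix('/bucket/') == ('/bucket', None)  # empty prefix -> None
assert _process_path_prefix('/bucket/dir/') == ('/bucket', 'dir/')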
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _validate_path(path): """Basic validation of Google Storage paths. Args: path: a Google Storage path. It should have form '/bucket/filename' or '/bucket'. Raises: ValueError: if path is invalid. TypeError: if path is not of type basestring. """
if not path:
    raise ValueError('Path is empty')
if not isinstance(path, basestring):
    raise TypeError('Path should be a string but is %s (%s).' %
                    (path.__class__, path))
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def validate_options(options): """Validate Google Cloud Storage options. Args: options: a str->basestring dict of options to pass to Google Cloud Storage. Raises: ValueError: if option is not supported. TypeError: if option is not of type str or value of an option is not of type basestring. """
if not options:
    return

for k, v in options.iteritems():
    if not isinstance(k, str):
        raise TypeError('option %r should be a str.' % k)
    if not any(k.lower().startswith(valid) for valid in _GCS_OPTIONS):
        raise ValueError('option %s is not supported.' % k)
    if not isinstance(v, basestring):
        raise TypeError('value %r for option %s should be of type '
                        'basestring.' % (v, k))
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def dt_str_to_posix(dt_str): """format str to posix. datetime str is of format %Y-%m-%dT%H:%M:%S.%fZ, e.g. 2013-04-12T00:22:27.978Z. According to ISO 8601, T is a separator between date and time when they are on the same line. Z indicates UTC (zero meridian). A pointer: http://www.cl.cam.ac.uk/~mgk25/iso-time.html This is used to parse LastModified node from GCS's GET bucket XML response. Args: dt_str: A datetime str. Returns: A float of secs from unix epoch. By posix definition, epoch is midnight 1970/1/1 UTC. """
parsable, _ = dt_str.split('.')
dt = datetime.datetime.strptime(parsable, _DT_FORMAT)
return calendar.timegm(dt.utctimetuple())
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def posix_to_dt_str(posix): """Reverse of str_to_datetime. This is used by GCS stub to generate GET bucket XML response. Args: posix: A float of secs from unix epoch. Returns: A datetime str. """
dt = datetime.datetime.utcfromtimestamp(posix)
dt_str = dt.strftime(_DT_FORMAT)
return dt_str + '.000Z'
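A round-trip sketch of the two helpers above: they invert each other up to sub-second precision, since dt_str_to_posix drops the fractional part and posix_to_dt_str always appends '.000Z'. This assumes _DT_FORMAT is '%Y-%m-%dT%H:%M:%S' as the docstrings suggest.

ts = dt_str_to_posix('2013-04-12T00:22:27.978Z')  # -> 1365726147 (UTC secs)
assert posix_to_dt_str(ts) == '2013-04-12T00:22:27.000Z'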
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def local_run(): """Whether we should hit GCS dev appserver stub."""
server_software = os.environ.get('SERVER_SOFTWARE')
if server_software is None:
    return True
if 'remote_api' in server_software:
    return False
if server_software.startswith(('Development', 'testutil')):
    return True
return False
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def memory_usage(method): """Log memory usage before and after a method."""
def wrapper(*args, **kwargs):
    logging.info('Memory before method %s is %s.',
                 method.__name__, runtime.memory_usage().current())
    result = method(*args, **kwargs)
    logging.info('Memory after method %s is %s',
                 method.__name__, runtime.memory_usage().current())
    return result
return wrapper
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _get_storage_api(retry_params, account_id=None): """Returns storage_api instance for API methods. Args: retry_params: An instance of api_utils.RetryParams. If none, thread's default will be used. account_id: Internal-use only. Returns: A storage_api instance to handle urlfetch work to GCS. On dev appserver, this instance will talk to a local stub by default. However, if you pass the arguments --appidentity_email_address and --appidentity_private_key_path to dev_appserver.py it will attempt to use the real GCS with these credentials. Alternatively, you can set a specific access token with common.set_access_token. You can also pass --default_gcs_bucket_name to set the default bucket. """
api = _StorageApi(_StorageApi.full_control_scope,
                  service_account_id=account_id,
                  retry_params=retry_params)
# when running local unit tests, the service account is test@localhost
# from google.appengine.api.app_identity.app_identity_stub.
#     APP_SERVICE_ACCOUNT_NAME
service_account = app_identity.get_service_account_name()
if (common.local_run() and not common.get_access_token() and
        (not service_account or service_account.endswith('@localhost'))):
    api.api_url = common.local_api_url()
if common.get_access_token():
    api.token = common.get_access_token()
return api
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def post_object_async(self, path, **kwds): """POST to an object."""
return self.do_request_async(self.api_url + path, 'POST', **kwds)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def put_object_async(self, path, **kwds): """PUT an object."""
return self.do_request_async(self.api_url + path, 'PUT', **kwds)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def get_object_async(self, path, **kwds): """GET an object. Note: No payload argument is supported. """
return self.do_request_async(self.api_url + path, 'GET', **kwds)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def delete_object_async(self, path, **kwds): """DELETE an object. Note: No payload argument is supported. """
return self.do_request_async(self.api_url + path, 'DELETE', **kwds)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def head_object_async(self, path, **kwds): """HEAD an object. Depending on request headers, HEAD returns various object properties, e.g. Content-Length, Last-Modified, and ETag. Note: No payload argument is supported. """
return self.do_request_async(self.api_url + path, 'HEAD', **kwds)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def get_bucket_async(self, path, **kwds): """GET a bucket."""
return self.do_request_async(self.api_url + path, 'GET', **kwds)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def compose_object(self, file_list, destination_file, content_type): """COMPOSE multiple objects together. Using the given list of files, calls the put object with the compose flag. This call merges all the files into the destination file. Args: file_list: list of dicts with the file name. destination_file: Path to the destination file. content_type: Content type for the destination file. """
xml_setting_list = ['<ComposeRequest>']

for meta_data in file_list:
    xml_setting_list.append('<Component>')
    for key, val in meta_data.iteritems():
        xml_setting_list.append('<%s>%s</%s>' % (key, val, key))
    xml_setting_list.append('</Component>')
xml_setting_list.append('</ComposeRequest>')
xml = ''.join(xml_setting_list)

if content_type is not None:
    headers = {'Content-Type': content_type}
else:
    headers = None
status, resp_headers, content = self.put_object(
    api_utils._quote_filename(destination_file) + '?compose',
    payload=xml,
    headers=headers)
errors.check_status(status, [200], destination_file, resp_headers,
                    body=content)
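To show the request body this produces, a standalone mirror of the XML assembly (illustration only, not library code):

def _compose_xml(file_list):
    # Standalone mirror of the assembly loop above, for illustration only.
    parts = ['<ComposeRequest>']
    for meta_data in file_list:
        parts.append('<Component>')
        for key, val in meta_data.iteritems():
            parts.append('<%s>%s</%s>' % (key, val, key))
        parts.append('</Component>')
    parts.append('</ComposeRequest>')
    return ''.join(parts)

print(_compose_xml([{'Name': 'part-1'}, {'Name': 'part-2'}]))
# <ComposeRequest><Component><Name>part-1</Name></Component>
# <Component><Name>part-2</Name></Component></ComposeRequest>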
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def readline(self, size=-1): """Read one line delimited by '\n' from the file. A trailing newline character is kept in the string. It may be absent when a file ends with an incomplete line. If the size argument is non-negative, it specifies the maximum string size (counting the newline) to return. A negative size is the same as unspecified. Empty string is returned only when EOF is encountered immediately. Args: size: Maximum number of bytes to read. If not specified, readline stops only on '\n' or EOF. Returns: The data read as a string. Raises: IOError: When this buffer is closed. """
self._check_open()
if size == 0 or not self._remaining():
    return ''

data_list = []
newline_offset = self._buffer.find_newline(size)
while newline_offset < 0:
    data = self._buffer.read(size)
    size -= len(data)
    self._offset += len(data)
    data_list.append(data)
    if size == 0 or not self._remaining():
        return ''.join(data_list)
    self._buffer.reset(self._buffer_future.get_result())
    self._request_next_buffer()
    newline_offset = self._buffer.find_newline(size)

data = self._buffer.read_to_offset(newline_offset + 1)
self._offset += len(data)
data_list.append(data)

return ''.join(data_list)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def read(self, size=-1): """Read data from RAW file. Args: size: Number of bytes to read as integer. Actual number of bytes read is always equal to size unless EOF is reached. If size is negative or unspecified, read the entire file. Returns: data read as str. Raises: IOError: When this buffer is closed. """
self._check_open()
if not self._remaining():
    return ''

data_list = []
while True:
    remaining = self._buffer.remaining()
    if size >= 0 and size < remaining:
        data_list.append(self._buffer.read(size))
        self._offset += size
        break
    else:
        size -= remaining
        self._offset += remaining
        data_list.append(self._buffer.read())

        if self._buffer_future is None:
            if size < 0 or size >= self._remaining():
                needs = self._remaining()
            else:
                needs = size
            data_list.extend(self._get_segments(self._offset, needs))
            self._offset += needs
            break

        if self._buffer_future:
            self._buffer.reset(self._buffer_future.get_result())
            self._buffer_future = None

if self._buffer_future is None:
    self._request_next_buffer()
return ''.join(data_list)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _request_next_buffer(self): """Request next buffer. Requires self._offset and self._buffer are in consistent state. """
self._buffer_future = None
next_offset = self._offset + self._buffer.remaining()
if next_offset != self._file_size:
    self._buffer_future = self._get_segment(next_offset,
                                            self._buffer_size)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _get_segments(self, start, request_size): """Get segments of the file from Google Storage as a list. A large request is broken into segments to avoid hitting urlfetch response size limit. Each segment is returned from a separate urlfetch. Args: start: start offset to request. Inclusive. Have to be within the range of the file. request_size: number of bytes to request. Returns: A list of file segments in order """
if not request_size:
    return []

end = start + request_size
futures = []

while request_size > self._max_request_size:
    futures.append(self._get_segment(start, self._max_request_size))
    request_size -= self._max_request_size
    start += self._max_request_size
if start < end:
    futures.append(self._get_segment(start, end - start))
return [fut.get_result() for fut in futures]
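A standalone mirror of the segmentation arithmetic (illustration only), showing how a 2.5MB read is split under a hypothetical 1MB urlfetch cap:

def _split(start, request_size, max_request_size):
    # Standalone mirror of the loop above, returning (start, size) pairs.
    end = start + request_size
    segs = []
    while request_size > max_request_size:
        segs.append((start, max_request_size))
        request_size -= max_request_size
        start += max_request_size
    if start < end:
        segs.append((start, end - start))
    return segs

assert _split(0, 2621440, 1048576) == [
    (0, 1048576), (1048576, 1048576), (2097152, 524288)]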
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _get_segment(self, start, request_size, check_response=True): """Get a segment of the file from Google Storage. Args: start: start offset of the segment. Inclusive. Has to be within the range of the file. request_size: number of bytes to request. Has to be small enough for a single urlfetch request. May go over the logical range of the file. check_response: True to check the validity of GCS response automatically before the future returns. False otherwise. See Yields section. Yields: If check_response is True, the segment [start, start + request_size) of the file. Otherwise, a tuple. The first element is the unverified file segment. The second element is a closure that checks response. Caller should first invoke the closure before consuming the file segment. Raises: ValueError: if the file has changed while reading. """
end = start + request_size - 1
content_range = '%d-%d' % (start, end)
headers = {'Range': 'bytes=' + content_range}
status, resp_headers, content = yield self._api.get_object_async(
    self._path, headers=headers)

def _checker():
    errors.check_status(status, [200, 206], self._path, headers,
                        resp_headers, body=content)
    self._check_etag(resp_headers.get('etag'))

if check_response:
    _checker()
    raise ndb.Return(content)
raise ndb.Return(content, _checker)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _check_etag(self, etag): """Check if etag is the same across requests to GCS. If self._etag is None, set it. If etag is set, check that the new etag equals the old one. In the __init__ method, we fire one HEAD and one GET request using ndb tasklet. One of them would return first and set the first value. Args: etag: etag from a GCS HTTP response. None if etag is not part of the response header. It could be None for example in the case of GCS composite file. Raises: ValueError: if two etags are not equal. """
if etag is None:
    return
elif self._etag is None:
    self._etag = etag
elif self._etag != etag:
    raise ValueError('File on GCS has changed while reading.')
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def seek(self, offset, whence=os.SEEK_SET): """Set the file's current offset. Note if the new offset is out of bound, it is adjusted to either 0 or EOF. Args: offset: seek offset as number. whence: seek mode. Supported modes are os.SEEK_SET (absolute seek), os.SEEK_CUR (seek relative to the current position), and os.SEEK_END (seek relative to the end, offset should be negative). Raises: IOError: When this buffer is closed. ValueError: When whence is invalid. """
self._check_open()

self._buffer.reset()
self._buffer_future = None

if whence == os.SEEK_SET:
    self._offset = offset
elif whence == os.SEEK_CUR:
    self._offset += offset
elif whence == os.SEEK_END:
    self._offset = self._file_size + offset
else:
    raise ValueError('Whence mode %s is invalid.' % str(whence))

self._offset = min(self._offset, self._file_size)
self._offset = max(self._offset, 0)
if self._remaining():
    self._request_next_buffer()
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def find_newline(self, size=-1): """Search for newline char in buffer starting from current offset. Args: size: number of bytes to search. -1 means all. Returns: offset of newline char in buffer. -1 if doesn't exist. """
if size < 0:
    return self._buffer.find('\n', self._offset)
return self._buffer.find('\n', self._offset, self._offset + size)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def write(self, data): """Write some bytes. Args: data: data to write. str. Raises: TypeError: if data is not of type str. """
self._check_open() if not isinstance(data, str): raise TypeError('Expected str but got %s.' % type(data)) if not data: return self._buffer.append(data) self._buffered += len(data) self._offset += len(data) if self._buffered >= self._flushsize: self._flush()
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def close(self): """Flush the buffer and finalize the file. When this returns, the new file is available for reading. """
if not self.closed: self.closed = True self._flush(finish=True) self._buffer = None
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _flush(self, finish=False): """Internal API to flush. Buffer is flushed to GCS only when the total amount of buffered data is at least self._blocksize, or to flush the final (incomplete) block of the file with finish=True. """
while ((finish and self._buffered >= 0) or (not finish and self._buffered >= self._blocksize)): tmp_buffer = [] tmp_buffer_len = 0 excess = 0 while self._buffer: buf = self._buffer.popleft() size = len(buf) self._buffered -= size tmp_buffer.append(buf) tmp_buffer_len += size if tmp_buffer_len >= self._maxrequestsize: excess = tmp_buffer_len - self._maxrequestsize break if not finish and ( tmp_buffer_len % self._blocksize + self._buffered < self._blocksize): excess = tmp_buffer_len % self._blocksize break if excess: over = tmp_buffer.pop() size = len(over) assert size >= excess tmp_buffer_len -= size head, tail = over[:-excess], over[-excess:] self._buffer.appendleft(tail) self._buffered += len(tail) if head: tmp_buffer.append(head) tmp_buffer_len += len(head) data = ''.join(tmp_buffer) file_len = '*' if finish and not self._buffered: file_len = self._written + len(data) self._send_data(data, self._written, file_len) self._written += len(data) if file_len != '*': break
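A worked example of the excess-splitting step above: a 5-byte chunk with 2 excess bytes is split so the tail returns to the buffer and only block-aligned data is sent. This is a minimal sketch; the sizes here are illustrative, not real GCS block sizes.

over, excess = b'abcde', 2
head, tail = over[:-excess], over[-excess:]
assert (head, tail) == (b'abc', b'de')  # b'abc' is sent, b'de' is re-buffered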
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _send_data(self, data, start_offset, file_len): """Send the block to the storage service. This is a utility method that does not modify self. Args: data: data to send in str. start_offset: start offset of the data in relation to the file. file_len: an int if this is the last data to append to the file. Otherwise '*'. """
headers = {} end_offset = start_offset + len(data) - 1 if data: headers['content-range'] = ('bytes %d-%d/%s' % (start_offset, end_offset, file_len)) else: headers['content-range'] = ('bytes */%s' % file_len) status, response_headers, content = self._api.put_object( self._path_with_token, payload=data, headers=headers) if file_len == '*': expected = 308 else: expected = 200 errors.check_status(status, [expected], self._path, headers, response_headers, content, {'upload_path': self._path_with_token})
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _get_offset_from_gcs(self): """Get the last offset that has been written to GCS. This is a utility method that does not modify self. Returns: an int of the last offset written to GCS by this upload, inclusive. -1 means nothing has been written. """
headers = {'content-range': 'bytes */*'} status, response_headers, content = self._api.put_object( self._path_with_token, headers=headers) errors.check_status(status, [308], self._path, headers, response_headers, content, {'upload_path': self._path_with_token}) val = response_headers.get('range') if val is None: return -1 _, offset = val.rsplit('-', 1) return int(offset)
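For context on the rsplit above: GCS reports resumable-upload progress in the 'range' response header as an inclusive byte range. A minimal sketch of the extraction, using a made-up header value:

val = 'bytes=0-42'  # hypothetical response header value
_, offset = val.rsplit('-', 1)
assert int(offset) == 42  # last byte written, inclusive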
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _force_close(self, file_length=None): """Close this buffer at file_length. Finalize this upload immediately at file_length. Contents that are still in memory will not be uploaded. This is a utility method that does not modify self. Args: file_length: file length. Must match what has been uploaded. If None, it will be queried from GCS. """
if file_length is None: file_length = self._get_offset_from_gcs() + 1 self._send_data('', 0, file_length)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def receive_data(self, data): """Add data to our internal receive buffer. This does not actually do any processing on the data, just stores it. To trigger processing, you have to call :meth:`next_event`. Args: data (:term:`bytes-like object`): The new data that was just received. Special case: If *data* is an empty byte-string like ``b""``, then this indicates that the remote side has closed the connection (end of file). Normally this is convenient, because standard Python APIs like :meth:`file.read` or :meth:`socket.recv` use ``b""`` to indicate end-of-file, while other failures to read are indicated using other mechanisms like raising :exc:`TimeoutError`. When using such an API you can just blindly pass through whatever you get from ``read`` to :meth:`receive_data`, and everything will work. But, if you have an API where reading an empty string is a valid non-EOF condition, then you need to be aware of this and make sure to check for such strings and avoid passing them to :meth:`receive_data`. Returns: Nothing, but after calling this you should call :meth:`next_event` to parse the newly received data. Raises: RuntimeError: Raised if you pass an empty *data*, indicating EOF, and then pass a non-empty *data*, indicating more data that somehow arrived after the EOF. (Calling ``receive_data(b"")`` multiple times is fine, and equivalent to calling it once.) """
if data: if self._receive_buffer_closed: raise RuntimeError( "received close, then received more data?") self._receive_buffer += data else: self._receive_buffer_closed = True
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def next_event(self): """Parse the next event out of our receive buffer, update our internal state, and return it. This is a mutating operation -- think of it like calling :func:`next` on an iterator. Returns: : One of three things: 1) An event object -- see :ref:`events`. 2) The special constant :data:`NEED_DATA`, which indicates that you need to read more data from your socket and pass it to :meth:`receive_data` before this method will be able to return any more events. 3) The special constant :data:`PAUSED`, which indicates that we are not in a state where we can process incoming data (usually because the peer has finished their part of the current request/response cycle, and you have not yet called :meth:`start_next_cycle`). See :ref:`flow-control` for details. Raises: RemoteProtocolError: The peer has misbehaved. You should close the connection (possibly after sending some kind of 4xx response). Once this method returns :class:`ConnectionClosed` once, then all subsequent calls will also return :class:`ConnectionClosed`. If this method raises any exception besides :exc:`RemoteProtocolError` then that's a bug -- if it happens please file a bug report! If this method raises any exception then it also sets :attr:`Connection.their_state` to :data:`ERROR` -- see :ref:`error-handling` for discussion. """
if self.their_state is ERROR: raise RemoteProtocolError( "Can't receive data when peer state is ERROR") try: event = self._extract_next_receive_event() if event not in [NEED_DATA, PAUSED]: self._process_event(self.their_role, event) self._receive_buffer.compress() if event is NEED_DATA: if len(self._receive_buffer) > self._max_incomplete_event_size: # 431 is "Request header fields too large" which is pretty # much the only situation where we can get here raise RemoteProtocolError("Receive buffer too long", error_status_hint=431) if self._receive_buffer_closed: # We're still trying to complete some event, but that's # never going to happen because no more data is coming raise RemoteProtocolError( "peer unexpectedly closed connection") return event except BaseException as exc: self._process_error(self.their_role) if isinstance(exc, LocalProtocolError): exc._reraise_as_remote_protocol_error() else: raise
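A minimal sketch of the read loop these two methods are designed for, assuming the h11 package and a placeholder blocking socket `sock`; PAUSED handling and error responses are omitted for brevity:

import h11

conn = h11.Connection(our_role=h11.SERVER)

def read_events(sock):
    # Drive receive_data/next_event until the peer closes.
    while True:
        event = conn.next_event()
        if event is h11.NEED_DATA:
            conn.receive_data(sock.recv(4096))  # b"" signals EOF
            continue
        if isinstance(event, h11.ConnectionClosed):
            return
        yield event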
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def send(self, event): """Convert a high-level event into bytes that can be sent to the peer, while updating our internal state machine. Args: event: The :ref:`event <events>` to send. Returns: If ``type(event) is ConnectionClosed``, then returns ``None``. Otherwise, returns a :term:`bytes-like object`. Raises: LocalProtocolError: Sending this event at this time would violate our understanding of the HTTP/1.1 protocol. If this method raises any exception then it also sets :attr:`Connection.our_state` to :data:`ERROR` -- see :ref:`error-handling` for discussion. """
data_list = self.send_with_data_passthrough(event) if data_list is None: return None else: return b"".join(data_list)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def adam7_generate(width, height): """ Generate the coordinates for the reduced scanlines of an Adam7 interlaced image of size `width` by `height` pixels. Yields a generator for each pass, and each pass generator yields a series of (x, y, xstep) triples, each one identifying a reduced scanline consisting of pixels starting at (x, y) and taking every xstep pixel to the right. """
for xstart, ystart, xstep, ystep in adam7: if xstart >= width: continue yield ((xstart, y, xstep) for y in range(ystart, height, ystep))
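A self-contained sketch of the pass structure. The `adam7` table below is the standard (xstart, ystart, xstep, ystep) offsets from the PNG specification, restated here only so the snippet reads on its own; in the real module it is already defined at module level:

adam7 = ((0, 0, 8, 8), (4, 0, 8, 8), (0, 4, 4, 8), (2, 0, 4, 4),
         (0, 2, 2, 4), (1, 0, 2, 2), (0, 1, 1, 2))

# First pass of a 16x16 image: every 8th pixel of rows 0 and 8.
first_pass = next(adam7_generate(16, 16))
assert list(first_pass) == [(0, 0, 8), (0, 8, 8)]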
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def write_chunk(outfile, tag, data=b''): """ Write a PNG chunk to the output file, including length and checksum. """
data = bytes(data) # http://www.w3.org/TR/PNG/#5Chunk-layout outfile.write(struct.pack("!I", len(data))) outfile.write(tag) outfile.write(data) checksum = zlib.crc32(tag) checksum = zlib.crc32(data, checksum) checksum &= 2 ** 32 - 1 outfile.write(struct.pack("!I", checksum))
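Usage sketch (Python 3): writing the zero-length IEND chunk into an in-memory file yields the well-known 12-byte PNG trailer of length, tag, and CRC:

import io

out = io.BytesIO()
write_chunk(out, b'IEND')
assert out.getvalue() == bytes.fromhex('0000000049454e44ae426082')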
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def pack_rows(rows, bitdepth): """Yield packed rows, each one a byte array. Each byte is packed with the values from several pixels. """
assert bitdepth < 8 assert 8 % bitdepth == 0 # samples per byte spb = int(8 / bitdepth) def make_byte(block): """Take a block of (2, 4, or 8) values, and pack them into a single byte. """ res = 0 for v in block: res = (res << bitdepth) + v return res for row in rows: a = bytearray(row) # Adding padding bytes so we can group into a whole # number of spb-tuples. n = float(len(a)) extra = math.ceil(n / spb) * spb - n a.extend([0] * int(extra)) # Pack into bytes. # Each block is the samples for one byte. blocks = group(a, spb) yield bytearray(make_byte(block) for block in blocks)
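pack_rows relies on a module-level group(seq, n) helper that chunks a sequence into n-tuples (a common pypng utility); an equivalent definition is shown below only as an assumption so the check stands alone. One 1-bit row then packs like this:

def group(seq, n):
    # Assumed pypng-style helper: chunk seq into tuples of length n.
    return list(zip(*[iter(seq)] * n))

row = [1, 0, 1, 1, 0, 0, 1, 0]  # eight 1-bit samples
assert next(pack_rows([row], 1)) == bytearray([0b10110010])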
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def unpack_rows(rows): """Unpack each row from 16 bits per value to a sequence of bytes. """
for row in rows: fmt = '!%dH' % len(row) yield bytearray(struct.pack(fmt, *row))
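Behaviour sketch: each 16-bit value becomes two big-endian bytes, so the values 1 and 256 unpack as follows:

assert next(unpack_rows([[1, 256]])) == bytearray([0, 1, 1, 0])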
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def is_natural(x): """A non-negative integer."""
try: is_integer = int(x) == x except (TypeError, ValueError): return False return is_integer and x >= 0
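Behaviour sketch: integral floats count as natural numbers; negatives and non-numbers do not:

assert is_natural(0) and is_natural(7) and is_natural(7.0)
assert not is_natural(-1) and not is_natural(2.5) and not is_natural('7')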
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def binary_stdout(): """ A sys.stdout that accepts bytes. """
# First there is a Python3 issue. try: stdout = sys.stdout.buffer except AttributeError: # Probably Python 2, where bytes are strings. stdout = sys.stdout # On Windows the C runtime file orientation needs changing. if sys.platform == "win32": import msvcrt import os msvcrt.setmode(sys.stdout.fileno(), os.O_BINARY) return stdout
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def write_packed(self, outfile, rows): """ Write PNG file to `outfile`. `rows` should be an iterator that yields each packed row; a packed row being a sequence of packed bytes. The rows have a filter byte prefixed and are then compressed into one or more IDAT chunks. They are not processed any further, so if bitdepth is other than 1, 2, 4, 8, 16, the pixel values should have been scaled before passing them to this method. This method does work for interlaced images but it is best avoided. For interlaced images, the rows should be presented in the order that they appear in the file. """
self.write_preamble(outfile) # http://www.w3.org/TR/PNG/#11IDAT if self.compression is not None: compressor = zlib.compressobj(self.compression) else: compressor = zlib.compressobj() # data accumulates bytes to be compressed for the IDAT chunk; # it's compressed when sufficiently large. data = bytearray() for i, row in enumerate(rows): # Add "None" filter type. # Currently, it's essential that this filter type be used # for every scanline as # we do not mark the first row of a reduced pass image; # that means we could accidentally compute # the wrong filtered scanline if we used # "up", "average", or "paeth" on such a line. data.append(0) data.extend(row) if len(data) > self.chunk_limit: # :todo: bytes() only necessary in Python 2 compressed = compressor.compress(bytes(data)) if len(compressed): write_chunk(outfile, b'IDAT', compressed) data = bytearray() compressed = compressor.compress(bytes(data)) flushed = compressor.flush() if len(compressed) or len(flushed): write_chunk(outfile, b'IDAT', compressed + flushed) # http://www.w3.org/TR/PNG/#11IEND write_chunk(outfile, b'IEND') return i + 1
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def array_scanlines_interlace(self, pixels): """ Generator for interlaced scanlines from an array. `pixels` is the full source image as a single array of values. The generator yields each scanline of the reduced passes in turn, each scanline being a sequence of values. """
# http://www.w3.org/TR/PNG/#8InterlaceMethods # Array type. fmt = 'BH'[self.bitdepth > 8] # Value per row vpr = self.width * self.planes # Each iteration generates a scanline starting at (x, y) # and consisting of every xstep pixels. for lines in adam7_generate(self.width, self.height): for x, y, xstep in lines: # Pixels per row (of reduced image) ppr = int(math.ceil((self.width - x) / float(xstep))) # Values per row (of reduced image) reduced_row_len = ppr * self.planes if xstep == 1: # Easy case: line is a simple slice. offset = y * vpr yield pixels[offset: offset + vpr] continue # We have to step by xstep, # which we can do one plane at a time # using the step in Python slices. row = array(fmt) # There's no easier way to set the length of an array row.extend(pixels[0:reduced_row_len]) offset = y * vpr + x * self.planes end_offset = (y + 1) * vpr skip = self.planes * xstep for i in range(self.planes): row[i::self.planes] = \ pixels[offset + i: end_offset: skip] yield row
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def write(self, file): """Write the image to the open file object. See `.save()` if you have a filename. In general, you can only call this method once; after it has been called the first time the PNG image is written, the source data will have been streamed, and cannot be streamed again. """
w = Writer(**self.info) w.write(file, self.rows)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _deinterlace(self, raw): """ Read raw pixel data, undo filters, deinterlace, and flatten. Return a single array of values. """
# Values per row (of the target image) vpr = self.width * self.planes # Values per image vpi = vpr * self.height # Interleaving writes to the output array randomly # (well, not quite), so the entire output array must be in memory. # Make a result array, and make it big enough. if self.bitdepth > 8: a = array('H', [0] * vpi) else: a = bytearray([0] * vpi) source_offset = 0 for lines in adam7_generate(self.width, self.height): # The previous (reconstructed) scanline. # `None` at the beginning of a pass # to indicate that there is no previous line. recon = None for x, y, xstep in lines: # Pixels per row (reduced pass image) ppr = int(math.ceil((self.width - x) / float(xstep))) # Row size in bytes for this pass. row_size = int(math.ceil(self.psize * ppr)) filter_type = raw[source_offset] source_offset += 1 scanline = raw[source_offset: source_offset + row_size] source_offset += row_size recon = self.undo_filter(filter_type, scanline, recon) # Convert so that there is one element per pixel value flat = self._bytes_to_values(recon, width=ppr) if xstep == 1: assert x == 0 offset = y * vpr a[offset: offset + vpr] = flat else: offset = y * vpr + x * self.planes end_offset = (y + 1) * vpr skip = self.planes * xstep for i in range(self.planes): a[offset + i: end_offset: skip] = \ flat[i:: self.planes] return a
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _bytes_to_values(self, bs, width=None): """Convert a packed row of bytes into a row of values. Result will be a freshly allocated object, not shared with the argument. """
if self.bitdepth == 8: return bytearray(bs) if self.bitdepth == 16: return array('H', struct.unpack('!%dH' % (len(bs) // 2), bs)) assert self.bitdepth < 8 if width is None: width = self.width # Samples per byte spb = 8 // self.bitdepth out = bytearray() mask = 2**self.bitdepth - 1 shifts = [self.bitdepth * i for i in reversed(list(range(spb)))] for o in bs: out.extend([mask & (o >> i) for i in shifts]) return out[:width]
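A standalone walk-through of the sub-byte branch above for bitdepth 2: the byte 0b00011011 holds the four samples 0, 1, 2, 3 from the high bits down. This sketch inlines the same shift-and-mask logic so it runs without a Reader instance:

bitdepth, bs = 2, bytes([0b00011011])
spb = 8 // bitdepth                     # 4 samples per byte
mask = 2 ** bitdepth - 1
shifts = [bitdepth * i for i in reversed(range(spb))]
out = bytearray(mask & (b >> i) for b in bs for i in shifts)
assert list(out) == [0, 1, 2, 3]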
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _iter_straight_packed(self, byte_blocks): """Iterator that undoes the effect of filtering; yields each row as a sequence of packed bytes. Assumes input is straightlaced. `byte_blocks` should be an iterable that yields the raw bytes in blocks of arbitrary size. """
# length of row, in bytes rb = self.row_bytes a = bytearray() # The previous (reconstructed) scanline. # None indicates first line of image. recon = None for some_bytes in byte_blocks: a.extend(some_bytes) while len(a) >= rb + 1: filter_type = a[0] scanline = a[1: rb + 1] del a[: rb + 1] recon = self.undo_filter(filter_type, scanline, recon) yield recon if len(a) != 0: # :file:format We get here with a file format error: # when the available bytes (after decompressing) do not # pack into exact rows. raise FormatError('Wrong size for decompressed IDAT chunk.') assert len(a) == 0
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def preamble(self, lenient=False): """ Extract the image metadata by reading the initial part of the PNG file up to the start of the ``IDAT`` chunk. All the chunks that precede the ``IDAT`` chunk are read and either processed for metadata or discarded. If the optional `lenient` argument evaluates to `True`, checksum failures will raise warnings rather than exceptions. """
self.validate_signature() while True: if not self.atchunk: self.atchunk = self._chunk_len_type() if self.atchunk is None: raise FormatError('This PNG file has no IDAT chunks.') if self.atchunk[1] == b'IDAT': return self.process_chunk(lenient=lenient)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _dehex(s): """Liberally convert from hex string to binary string."""
import re import binascii # Remove all non-hexadecimal digits s = re.sub(br'[^a-fA-F\d]', b'', s) # binascii.unhexlify works in Python 2 and Python 3 (unlike # thing.decode('hex')). return binascii.unhexlify(s)
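Usage sketch: anything that is not a hex digit is stripped before decoding:

assert _dehex(b'48 65:6c-6c 6f') == b'Hello'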
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def s15f16l(s): """Convert sequence of ICC s15Fixed16 to list of float."""
# Note: As long as float has at least 32 bits of mantissa, all # values are preserved. n = len(s) // 4 t = struct.unpack('>%dl' % n, s) # Return a list, as the docstring promises (map is lazy on Python 3). return [v * 2 ** -16 for v in t]
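Worked example: the s15Fixed16 encodings 0x00010000 and -0x00008000 decode to 1.0 and -0.5:

import struct

raw = struct.pack('>2l', 0x10000, -0x8000)
assert list(s15f16l(raw)) == [1.0, -0.5]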
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def RDcurv(s): """Convert ICC curveType."""
# See [ICC 2001] 6.5.3 assert s[0:4] == 'curv' count, = struct.unpack('>L', s[8:12]) if count == 0: return dict(gamma=1) table = struct.unpack('>%dH' % count, s[12:]) if count == 1: return dict(gamma=table[0] * 2 ** -8) return table
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def RDvcgt(s): """Convert Apple CMVideoCardGammaType."""
# See # http://developer.apple.com/documentation/GraphicsImaging/Reference/ColorSync_Manager/Reference/reference.html#//apple_ref/c/tdef/CMVideoCardGammaType assert s[0:4] == 'vcgt' tagtype, = struct.unpack('>L', s[8:12]) if tagtype != 0: return s[8:] # Table. channels, count, size = struct.unpack('>3H', s[12:18]) if size == 1: fmt = 'B' elif size == 2: fmt = 'H' else: return s[8:] n = len(s[18:]) // size t = struct.unpack('>%d%s' % (n, fmt), s[18:]) t = group(t, count) return size, t
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def greyInput(self): """Adjust ``self.d`` dictionary for greyscale input device. ``profileclass`` is 'scnr', ``colourspace`` is 'GRAY', ``pcs`` is 'XYZ '. """
self.d.update(dict(profileclass='scnr', colourspace='GRAY', pcs='XYZ ')) return self
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def write(self, out): """Write ICC Profile to the file."""
if not self.rawtagtable: self.rawtagtable = self.rawtagdict.items() tags = tagblock(self.rawtagtable) self.writeHeader(out, 128 + len(tags)) out.write(tags) out.flush() return self
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def writeHeader(self, out, size=999): """Add default values to the instance's `d` dictionary, then write a header out onto the file stream. The size of the profile must be specified using the `size` argument. """
def defaultkey(d, key, value): """Add ``[key]==value`` to the dictionary `d`, but only if it does not have that key already. """ if key in d: return d[key] = value z = '\x00' * 4 defaults = dict(preferredCMM=z, version='02000000', profileclass=z, colourspace=z, pcs='XYZ ', created=writeICCdatetime(), acsp='acsp', platform=z, flag=0, manufacturer=z, model=0, deviceattributes=0, intent=0, pcsilluminant=encodefuns()['XYZ'](*D50()), creator=z, ) for k, v in defaults.items(): defaultkey(self.d, k, v) # list() so we can patch element 1 below (map is lazy on Python 3). hl = list(map(self.d.__getitem__, ['preferredCMM', 'version', 'profileclass', 'colourspace', 'pcs', 'created', 'acsp', 'platform', 'flag', 'manufacturer', 'model', 'deviceattributes', 'intent', 'pcsilluminant', 'creator'])) # Convert to struct.pack input hl[1] = int(hl[1], 16) out.write(struct.pack('>L4sL4s4s4s12s4s4sL4sLQL12s4s', size, *hl)) out.write('\x00' * 44) return self
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def convert(f, output=sys.stdout): """Convert Plan 9 file to PNG format. Works with either uncompressed or compressed files. """
r = f.read(11) if r == 'compressed\n': png(output, *decompress(f)) else: png(output, *glue(f, r))
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def bitdepthof(pixel): """Return the bitdepth for a Plan9 pixel format string."""
maxd = 0 for c in re.findall(r'[a-z]\d*', pixel): if c[0] != 'x': maxd = max(maxd, int(c[1:])) return maxd
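Behaviour sketch with Plan 9 channel strings; 'x' (ignore) channels do not count towards the depth:

assert bitdepthof('r8g8b8') == 8
assert bitdepthof('x4k4') == 4  # the x4 padding channel is skipped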
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def decompress(f): """Decompress a Plan 9 image file. Assumes f is already cued past the initial 'compressed\n' string. """
r = meta(f.read(60)) return r, decomprest(f, r[4])
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def decomprest(f, rows): """Iterator that decompresses the rest of a file once the metadata have been consumed."""
row = 0 while row < rows: row, o = deblock(f) yield o
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def prepareList(self, listFile=False, noSample=False): """ Load and filter the server list for only the servers we care about """
logging.debug("Loading resolver file") listFileLocation = self.listLocal if not listFile else listFile # Resolve the user part of the path listLocal = os.path.expanduser(listFileLocation) # Check local file location exists and is writable assert os.path.isdir(os.path.dirname(listLocal)),\ "{0} is not a directory!".format(os.path.dirname(listLocal)) assert os.access(os.path.dirname(listLocal), os.W_OK),\ "{0} is not writable!".format(os.path.dirname(listLocal)) # Open and yaml parse the resolver list with open(listLocal) as ll: raw = ll.read() # Use safe_load, just to be safe. serverList = yaml.safe_load(raw) # Remove all but the specified countries from the server list if self.country is not None: logging.debug("Filtering serverList for country {0}" .format(self.country)) serverList = [d for d in serverList if d['country'] == self.country] if len(serverList) == 0: raise ValueError("There are no servers avaliable " "with the country code {0}" .format(self.country)) # Get selected number of servers if self.maxServers == 'ALL' or noSample: # Set servers to the number of servers we have self.maxServers = len(serverList) elif self.maxServers > len(serverList): # We were asked for more servers than exist in the list logging.warning( "You asked me to query {0} servers, but I only have " "{1} servers in my serverlist".format( self.maxServers, len(serverList) ) ) # Fallback to setting it to all self.maxServers = len(serverList) # Get a random selection of the specified number # of servers from the list self.serverList = random.sample(serverList, self.maxServers) return self.serverList
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def query(self, domain, recordType, progress=True): """ Run the query Query spins out multiple thread workers to query each server @param domain: Domain to query @param recordType: Type of record to query for @param progress: Write progress to stdout @type domain: str @type recordType: str """
# Ignore domain validation, if someone wants to look up an invalid # domain let them, just ensure it's a string assert type(domain) == str, "Domain must be a string" # Ensure record type is valid, and in our list of allowed records recordType = recordType.upper() assert recordType in self.lookupRecordTypes, \ "Record type is not in valid list of record types {0}". \ format(', '.join(self.lookupRecordTypes)) self.domain = domain self.recordType = recordType self.resultsColated = [] self.results = [] if len(self.serverList) == 0: logging.warning("Server list is empty. Attempting " "to populate with prepareList") self.prepareList() logging.debug("Starting query against {0} servers".format( len(self.serverList))) workers = [] startTime = datetime.utcnow() serverCounter = 0 # Run continuously while waiting for results while len(self.results) < len(self.serverList): # Count the workers still running runningWorkers = len([w for w in workers if w.result is None]) # Collect the results of any finished workers; iterate over a # copy so removing entries from the worker list is safe for w in list(workers): if w.result: self.results.append(w.result) workers.remove(w) # Output progress if progress: # Output progress on one line that updates if terminal # supports it sys.stdout.write( "\r\x1b[KStatus: Queried {0} of {1} servers, duration: {2}" .format(len(self.results), len(self.serverList), (datetime.utcnow() - startTime)) ) # Make sure the stdout updates sys.stdout.flush() # Start more workers if needed if runningWorkers < self.maxWorkers: logging.debug("Starting {0} workers".format( self.maxWorkers - runningWorkers)) # Start however many workers we need # based on max workers - running workers for i in range(0, self.maxWorkers - runningWorkers): if serverCounter < len(self.serverList): # Create a new thread with all the details wt = QueryWorker() wt.server = self.serverList[serverCounter] wt.domain = domain wt.recType = recordType wt.daemon = True # Add it to the worker tracker workers.append(wt) # Start it wt.start() serverCounter += 1 # Pause a little bit time.sleep(0.1) # Now collate the results # Group by number of servers with the same response for r in self.results: # Result already in collation if r['results'] in [rs['results'] for rs in self.resultsColated]: cid = [ i for i, rs in enumerate(self.resultsColated) if r['results'] == rs['results'] ][0] self.resultsColated[cid]['servers'].append(r['server']) else: self.resultsColated.append( { 'servers': [ r['server'] ], 'results': r['results'], 'success': r['success'] } ) if progress: sys.stdout.write("\n\n") logging.debug("There are {0} unique results".format( len(self.resultsColated)))
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def outputSimple(self): """ Simple output mode """
out = [] errors = [] successfulResponses = \ len([True for rsp in self.results if rsp['success']]) out.append("INFO QUERIED {0}".format( len(self.serverList))) out.append("INFO SUCCESS {0}".format( successfulResponses)) out.append("INFO ERROR {0}".format( len(self.serverList) - successfulResponses)) for rsp in self.resultsColated: if rsp['success']: out.append("RESULT {0} {1}".format( len(rsp['servers']), "|".join(rsp['results']) )) else: errors.append("ERROR {0} {1}".format( len(rsp['servers']), "|".join(rsp['results']) )) out += errors sys.stdout.write("\n".join(out)) sys.stdout.write("\n")
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def run(self): """ Do a single DNS query against a server """
logging.debug("Querying server {0}".format(self.server['ip'])) try: # Create a DNS resolver query rsvr = dns.resolver.Resolver() rsvr.nameservers = [self.server['ip']] rsvr.lifetime = 5 rsvr.timeout = 5 qry = rsvr.query(self.domain, self.recType) # Get the results, sort for consistancy results = sorted([r.to_text() for r in qry]) success = True # Handle all the various exceptions except dns.resolver.NXDOMAIN: success = False results = ['NXDOMAIN'] except dns.resolver.NoNameservers: success = False results = ['No Nameservers'] except dns.resolver.NoAnswer: success = False results = ['No Answer'] except dns.resolver.Timeout: success = False results = ['Server Timeout'] # Save the results self.result = { 'server': self.server, 'results': results, 'success': success }
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def aws_to_unix_id(aws_key_id): """Converts an AWS Key ID into a UID"""
uid_bytes = hashlib.sha256(aws_key_id.encode()).digest()[-2:] if USING_PYTHON2: return 2000 + int(from_bytes(uid_bytes) // 2) else: return 2000 + (int.from_bytes(uid_bytes, byteorder=sys.byteorder) // 2)
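Determinism sketch: the mapping is stable for a given key ID and always lands in [2000, 34768), since only the last two digest bytes are used. The key ID below is the placeholder from the AWS documentation, not a real credential:

uid = aws_to_unix_id('AKIAIOSFODNN7EXAMPLE')
assert uid == aws_to_unix_id('AKIAIOSFODNN7EXAMPLE')
assert 2000 <= uid < 2000 + 2 ** 15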
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _gorg(cls): """This function exists for compatibility with old typing versions."""
assert isinstance(cls, GenericMeta) if hasattr(cls, '_gorg'): return cls._gorg while cls.__origin__ is not None: cls = cls.__origin__ return cls
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _eval_args(args): """Internal helper for get_args."""
res = [] for arg in args: if not isinstance(arg, tuple): res.append(arg) elif is_callable_type(arg[0]): callable_args = _eval_args(arg[1:]) if len(arg) == 2: res.append(Callable[[], callable_args[0]]) elif arg[1] is Ellipsis: res.append(Callable[..., callable_args[1]]) else: res.append(Callable[list(callable_args[:-1]), callable_args[-1]]) else: res.append(type(arg[0]).__getitem__(arg[0], _eval_args(arg[1:]))) return tuple(res)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def on(self, eventName, cb): """ Subscribe to an igv.js event. :param Name of the event. Currently only "locuschange" is supported. :type str :param cb - callback function taking a single argument. For the locuschange event this argument will contain a dictionary of the form {chr, start, end} :type function """
self.eventHandlers[eventName] = cb return self._send({ "id": self.igv_id, "command": "on", "eventName": eventName })
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def log_request(request: str, trim_log_values: bool = False, **kwargs: Any) -> None: """Log a request"""
return log_(request, request_logger, logging.INFO, trim=trim_log_values, **kwargs)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def log_response(response: str, trim_log_values: bool = False, **kwargs: Any) -> None: """Log a response"""
return log_(response, response_logger, logging.INFO, trim=trim_log_values, **kwargs)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def validate(request: Union[Dict, List], schema: dict) -> Union[Dict, List]: """ Wraps jsonschema.validate, returning the same object passed in. Args: request: The deserialized-from-json request. schema: The jsonschema schema to validate against. Raises: jsonschema.ValidationError """
jsonschema_validate(request, schema) return request
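Usage sketch with the jsonschema package; the schema here is a minimal made-up shape for illustration, not the library's bundled JSON-RPC schema:

from jsonschema import validate as jsonschema_validate  # dependency

schema = {'type': 'object', 'required': ['jsonrpc', 'method']}
request = {'jsonrpc': '2.0', 'method': 'ping'}
# The same object passes through unchanged, so calls can be chained.
assert validate(request, schema) is request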