| text_prompt | code_prompt |
|---|---|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def upload(self, sys_id, file_path, name=None, multipart=False):
"""Attaches a new file to the provided record :param sys_id: the sys_id of the record to attach the file to :param file_path: local absolute path of the file to upload :param name: custom name for the uploaded file (instead of basename) :param multipart: whether or not to use multipart :return: the inserted record """ |
if not isinstance(multipart, bool):
raise InvalidUsage('Multipart must be of type bool')
resource = self.resource
if name is None:
name = os.path.basename(file_path)
resource.parameters.add_custom({
'table_name': self.table_name,
'table_sys_id': sys_id,
'file_name': name
})
with open(file_path, 'rb') as f:
    data = f.read()
headers = {}
if multipart:
headers["Content-Type"] = "multipart/form-data"
path_append = '/upload'
else:
headers["Content-Type"] = "text/plain"
path_append = '/file'
return resource.request(method='POST', data=data, headers=headers, path_append=path_append) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def validate_path(path):
"""Validates the provided path :param path: path to validate (string) :raise: :InvalidUsage: If validation fails. """ |
if not isinstance(path, six.string_types) or not re.match('^/(?:[._a-zA-Z0-9-]/?)+[^/]$', path):
raise InvalidUsage(
"Path validation failed - Expected: '/<component>[/component], got: %s" % path
)
return True |
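The pattern requires a leading slash, path components built from the allowed character class, and a final character that is not a slash (so trailing slashes are rejected). A standalone sketch of the same check, with `InvalidUsage` swapped for the built-in `ValueError` for illustration:

```python
import re

# Same pattern as validate_path above: leading slash, allowed characters,
# and a final non-slash character (no trailing "/").
PATH_PATTERN = re.compile(r'^/(?:[._a-zA-Z0-9-]/?)+[^/]$')

def validate_path(path):
    """Raise ValueError unless path looks like /component[/component]."""
    if not isinstance(path, str) or not PATH_PATTERN.match(path):
        raise ValueError("Path validation failed - got: %s" % path)
    return True
```

With this, `/api/now/table` validates, while `table` (no leading slash) and `/table/` (trailing slash) are rejected.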
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_base_url(use_ssl, instance=None, host=None):
"""Formats the base URL either `host` or `instance` :return: Base URL string """ |
if instance is not None:
host = ("%s.service-now.com" % instance).rstrip('/')
if use_ssl is True:
return "https://%s" % host
return "http://%s" % host |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_oauth_session(self):
"""Creates a new OAuth session :return: - OAuth2Session object """ |
return self._get_session(
OAuth2Session(
client_id=self.client_id,
token=self.token,
token_updater=self.token_updater,
auto_refresh_url=self.token_url,
auto_refresh_kwargs={
"client_id": self.client_id,
"client_secret": self.client_secret
}
)
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set_token(self, token):
"""Validate and set token :param token: the token (dict) to set """ |
if not token:
self.token = None
return
expected_keys = ['token_type', 'refresh_token', 'access_token', 'scope', 'expires_in', 'expires_at']
if not isinstance(token, dict) or not set(token) >= set(expected_keys):
raise InvalidUsage("Expected a token dictionary containing the following keys: {0}"
.format(expected_keys))
# Set sanitized token
self.token = dict((k, v) for k, v in token.items() if k in expected_keys) |
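The validation above combines a superset test (`set(token) >= set(expected_keys)` is True only when every expected key is present) with a sanitizing pass that drops unknown keys. A self-contained sketch of the same idea:

```python
EXPECTED_KEYS = {'token_type', 'refresh_token', 'access_token',
                 'scope', 'expires_in', 'expires_at'}

def sanitize_token(token):
    # set(token) >= EXPECTED_KEYS is a superset test: every expected key
    # must be present; any extra keys are then stripped out.
    if not isinstance(token, dict) or not set(token) >= EXPECTED_KEYS:
        raise ValueError("Expected a token dict with keys: %s"
                         % sorted(EXPECTED_KEYS))
    return {k: v for k, v in token.items() if k in EXPECTED_KEYS}
```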
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def generate_token(self, user, password):
"""Takes user and password credentials and generates a new token :param user: user :param password: password :return: - dictionary containing token data :raises: - TokenCreateError: If there was an error generating the new token """ |
logger.debug('(TOKEN_CREATE) :: User: %s' % user)
session = OAuth2Session(client=LegacyApplicationClient(client_id=self.client_id))
try:
return dict(session.fetch_token(token_url=self.token_url,
username=user,
password=password,
client_id=self.client_id,
client_secret=self.client_secret))
except OAuth2Error as exception:
raise TokenCreateError('Error creating user token', exception.description, exception.status_code) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def order_descending(self):
"""Sets ordering of field descending""" |
self._query.append('ORDERBYDESC{0}'.format(self.current_field))
self.c_oper = inspect.currentframe().f_back.f_code.co_name
return self |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def equals(self, data):
"""Adds new `IN` or `=` condition depending on if a list or string was provided :param data: string or list of values :raise: - QueryTypeError: if `data` is of an unexpected type """ |
if isinstance(data, six.string_types):
return self._add_condition('=', data, types=[int, str])
elif isinstance(data, list):
return self._add_condition('IN', ",".join(map(str, data)), types=[str])
raise QueryTypeError('Expected value of type `str` or `list`, not %s' % type(data)) |
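The resulting query fragments can be sketched without the surrounding builder class; a single value produces an `=` condition while a list is comma-joined into an `IN` condition (helper name is made up for illustration):

```python
def equals_fragment(field, data):
    # '=' for a single value, 'IN' with comma-joined values for a list
    if isinstance(data, str):
        return '%s=%s' % (field, data)
    if isinstance(data, list):
        return '%sIN%s' % (field, ','.join(map(str, data)))
    raise TypeError('Expected str or list, not %s' % type(data))
```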
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def greater_than(self, greater_than):
"""Adds new `>` condition :param greater_than: str or datetime compatible object (naive UTC datetime or tz-aware datetime) :raise: - QueryTypeError: if `greater_than` is of an unexpected type """ |
if hasattr(greater_than, 'strftime'):
greater_than = datetime_as_utc(greater_than).strftime('%Y-%m-%d %H:%M:%S')
elif isinstance(greater_than, six.string_types):
raise QueryTypeError('Expected value of type `int` or instance of `datetime`, not %s' % type(greater_than))
return self._add_condition('>', greater_than, types=[int, str]) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def less_than(self, less_than):
"""Adds new `<` condition :param less_than: str or datetime compatible object (naive UTC datetime or tz-aware datetime) :raise: - QueryTypeError: if `less_than` is of an unexpected type """ |
if hasattr(less_than, 'strftime'):
less_than = datetime_as_utc(less_than).strftime('%Y-%m-%d %H:%M:%S')
elif isinstance(less_than, six.string_types):
raise QueryTypeError('Expected value of type `int` or instance of `datetime`, not %s' % type(less_than))
return self._add_condition('<', less_than, types=[int, str]) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def between(self, start, end):
"""Adds new `BETWEEN` condition :param start: int or datetime compatible object (in SNOW user's timezone) :param end: int or datetime compatible object (in SNOW user's timezone) :raise: - QueryTypeError: if start or end arguments is of an invalid type """ |
if hasattr(start, 'strftime') and hasattr(end, 'strftime'):
dt_between = (
'javascript:gs.dateGenerate("%(start)s")'
"@"
'javascript:gs.dateGenerate("%(end)s")'
) % {
'start': start.strftime('%Y-%m-%d %H:%M:%S'),
'end': end.strftime('%Y-%m-%d %H:%M:%S')
}
elif isinstance(start, int) and isinstance(end, int):
dt_between = '%d@%d' % (start, end)
else:
raise QueryTypeError("Expected `start` and `end` of type `int` "
"or instance of `datetime`, not %s and %s" % (type(start), type(end)))
return self._add_condition('BETWEEN', dt_between, types=[str]) |
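The operand string built above can be reproduced standalone; datetimes become a pair of `javascript:gs.dateGenerate(...)` calls joined by `@`, while plain ints are joined directly (helper name is made up for illustration):

```python
from datetime import datetime

def between_operand(start, end):
    # datetimes become javascript:gs.dateGenerate(...) pairs joined by '@';
    # plain ints are joined as '<start>@<end>'
    fmt = '%Y-%m-%d %H:%M:%S'
    if hasattr(start, 'strftime') and hasattr(end, 'strftime'):
        return ('javascript:gs.dateGenerate("%s")@javascript:gs.dateGenerate("%s")'
                % (start.strftime(fmt), end.strftime(fmt)))
    return '%d@%d' % (start, end)
```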
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _add_condition(self, operator, operand, types):
"""Appends condition to self._query after performing validation :param operator: operator (str) :param operand: operand :param types: allowed types :raise: - QueryMissingField: if a field hasn't been set - QueryMultipleExpressions: if a condition already has been set - QueryTypeError: if the value is of an unexpected type """ |
if not self.current_field:
raise QueryMissingField("Conditions require a field()")
elif not type(operand) in types:
caller = inspect.currentframe().f_back.f_code.co_name
raise QueryTypeError("Invalid type passed to %s() , expected: %s" % (caller, types))
elif self.c_oper:
raise QueryMultipleExpressions("Expected logical operator after expression")
self.c_oper = inspect.currentframe().f_back.f_code.co_name
self._query.append("%(current_field)s%(operator)s%(operand)s" % {
'current_field': self.current_field,
'operator': operator,
'operand': operand
})
return self |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _add_logical_operator(self, operator):
"""Adds a logical operator in query :param operator: logical operator (str) :raise: - QueryExpressionError: if a expression hasn't been set """ |
if not self.c_oper:
raise QueryExpressionError("Logical operators must be preceded by an expression")
self.current_field = None
self.c_oper = None
self.l_oper = inspect.currentframe().f_back.f_code.co_name
self._query.append(operator)
return self |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_custom(self, params):
"""Adds new custom parameter after making sure it's of type dict. :param params: Dictionary containing one or more parameters """ |
if isinstance(params, dict) is False:
raise InvalidUsage("custom parameters must be of type `dict`")
self._custom_params.update(params) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def offset(self, offset):
"""Sets `sysparm_offset`, usually used to accomplish pagination :param offset: Number of records to skip before fetching records :raise: :InvalidUsage: if offset is of an unexpected type """ |
if not isinstance(offset, int) or isinstance(offset, bool):
raise InvalidUsage('Offset must be an integer')
self._sysparms['sysparm_offset'] = offset |
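The extra `isinstance(offset, bool)` test above is needed because `bool` is a subclass of `int` in Python, so `isinstance(True, int)` passes; a minimal sketch of the combined check:

```python
# bool is a subclass of int, so a plain isinstance(offset, int) check
# would silently accept True/False; the explicit bool test rejects them.
def valid_offset(offset):
    return isinstance(offset, int) and not isinstance(offset, bool)
```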
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def fields(self, fields):
"""Sets `sysparm_fields` after joining the given list of `fields` :param fields: List of fields to include in the response :raise: :InvalidUsage: if fields is of an unexpected type """ |
if not isinstance(fields, list):
raise InvalidUsage('fields must be of type `list`')
self._sysparms['sysparm_fields'] = ",".join(fields) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def exclude_reference_link(self, exclude):
"""Sets `sysparm_exclude_reference_link` to a bool value :param exclude: bool """ |
if not isinstance(exclude, bool):
raise InvalidUsage('exclude_reference_link must be of type bool')
self._sysparms['sysparm_exclude_reference_link'] = exclude |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def suppress_pagination_header(self, suppress):
"""Enables or disables pagination header by setting `sysparm_suppress_pagination_header` :param suppress: bool """ |
if not isinstance(suppress, bool):
raise InvalidUsage('suppress_pagination_header must be of type bool')
self._sysparms['sysparm_suppress_pagination_header'] = suppress |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def quit(self):
""" This could be called from another thread, so let's do this via alarm """ |
def q(*args):
raise urwid.ExitMainLoop()
self.worker.shutdown(wait=False)
self.ui_worker.shutdown(wait=False)
self.loop.set_alarm_in(0, q) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _set_main_widget(self, widget, redraw):
""" add provided widget to widget list and display it :param widget: :return: """ |
self.set_body(widget)
self.reload_footer()
if redraw:
logger.debug("redraw main widget")
self.refresh() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def display_buffer(self, buffer, redraw=True):
""" display provided buffer :param buffer: Buffer :return: """ |
logger.debug("display buffer %r", buffer)
self.buffer_movement_history.append(buffer)
self.current_buffer = buffer
self._set_main_widget(buffer.widget, redraw=redraw) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_and_display_buffer(self, buffer, redraw=True):
""" add provided buffer to buffer list and display it :param buffer: :return: """ |
# FIXME: some buffers have arguments, do a proper comparison -- override __eq__
if buffer not in self.buffers:
logger.debug("adding new buffer {!r}".format(buffer))
self.buffers.append(buffer)
self.display_buffer(buffer, redraw=redraw) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def pick_and_display_buffer(self, i):
""" pick i-th buffer from list and display it :param i: int :return: None """ |
if len(self.buffers) == 1:
# we don't need to display anything
# listing is already displayed
return
else:
try:
self.display_buffer(self.buffers[i])
except IndexError:
# i > len
self.display_buffer(self.buffers[0]) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def refresh(self):
""" explicitely refresh user interface; useful when changing widgets dynamically """ |
logger.debug("refresh user interface")
try:
with self.refresh_lock:
self.draw_screen()
except AssertionError:
logger.warning("application is not running")
pass |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def strip_from_ansi_esc_sequences(text):
""" find ANSI escape sequences in text and remove them :param text: str :return: list, should be passed to ListBox """ |
# esc[ + values + control character
# h, l, p commands are complicated, let's ignore them
seq_regex = r"\x1b\[[0-9;]*[mKJusDCBAfH]"
regex = re.compile(seq_regex)
start = 0
response = ""
for match in regex.finditer(text):
end = match.start()
response += text[start:end]
start = match.end()
response += text[start:len(text)]
return response |
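The same result can be had more compactly with `re.sub`, which replaces every match with the empty string; a self-contained sketch using the pattern from the function above:

```python
import re

# esc[ + parameter bytes + final command character, as above;
# h, l, p commands are intentionally not handled
ANSI_SEQ = re.compile(r"\x1b\[[0-9;]*[mKJusDCBAfH]")

def strip_ansi(text):
    # drop every matched escape sequence, keep everything in between
    return ANSI_SEQ.sub("", text)
```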
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def realtime_updates(self):
""" fetch realtime events from docker and pass them to buffers :return: None """ |
# TODO: make this available for every buffer
logger.info("starting receiving events from docker")
it = self.d.realtime_updates()
while True:
try:
event = next(it)
except NotifyError as ex:
self.ui.notify_message("error when receiving realtime events from docker: %s" % ex,
level="error")
return
# FIXME: we should pass events to all buffers
# ATM the buffers can't be rendered since they are not displayed
# and hence traceback like this: ListBoxError("Listbox contents too short! ...
logger.debug("pass event to current buffer %s", self.ui.current_buffer)
try:
self.ui.current_buffer.process_realtime_event(event)
except Exception as ex:
# swallow any exc
logger.error("error while processing runtime event: %r", ex) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def setup_dirs():
"""Make required directories to hold logfile. :returns: str """ |
try:
top_dir = os.path.abspath(os.path.expanduser(os.environ["XDG_CACHE_HOME"]))
except KeyError:
top_dir = os.path.abspath(os.path.expanduser("~/.cache"))
our_cache_dir = os.path.join(top_dir, PROJECT_NAME)
os.makedirs(our_cache_dir, mode=0o775, exist_ok=True)
return our_cache_dir |
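The path resolution (without the directory creation side effect) can be sketched standalone; `XDG_CACHE_HOME` wins when set, otherwise `~/.cache` is used. The `env` parameter is an illustration-only addition so the lookup is testable:

```python
import os

def cache_dir(project_name, env=None):
    # XDG_CACHE_HOME takes precedence when set; fall back to ~/.cache
    env = os.environ if env is None else env
    try:
        top = os.path.abspath(os.path.expanduser(env["XDG_CACHE_HOME"]))
    except KeyError:
        top = os.path.abspath(os.path.expanduser("~/.cache"))
    return os.path.join(top, project_name)
```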
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def humanize_bytes(bytesize, precision=2):
""" Humanize byte size figures https://gist.github.com/moird/3684595 """ |
abbrevs = (
(1 << 50, 'PB'),
(1 << 40, 'TB'),
(1 << 30, 'GB'),
(1 << 20, 'MB'),
(1 << 10, 'kB'),
(1, 'bytes')
)
if bytesize == 1:
return '1 byte'
for factor, suffix in abbrevs:
if bytesize >= factor:
break
if factor == 1:
precision = 0
return '%.*f %s' % (precision, bytesize / float(factor), suffix) |
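The thresholds and special cases are easy to verify; below is a self-contained copy of the helper so the boundary behavior (the `1 byte` case and the zero-precision `bytes` case) can be checked directly:

```python
def humanize_bytes(bytesize, precision=2):
    # bit-shifted factors: 1 << 10 is 1024 (kB), 1 << 20 is 1024**2 (MB), ...
    abbrevs = ((1 << 50, 'PB'), (1 << 40, 'TB'), (1 << 30, 'GB'),
               (1 << 20, 'MB'), (1 << 10, 'kB'), (1, 'bytes'))
    if bytesize == 1:
        return '1 byte'
    for factor, suffix in abbrevs:
        if bytesize >= factor:
            break
    if factor == 1:
        precision = 0  # whole numbers for plain byte counts
    return '%.*f %s' % (precision, bytesize / float(factor), suffix)
```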
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def metadata_get(self, path, cached=True):
""" get metadata from inspect, specified by path :param path: list of str :param cached: bool, use cached version of inspect if available """ |
try:
value = graceful_chain_get(self.inspect(cached=cached).response, *path)
except docker.errors.NotFound:
logger.warning("object %s is not available anymore", self)
raise NotAvailableAnymore()
return value |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def unique_size(self):
""" Size of ONLY this particular layer :return: int or None """ |
self._virtual_size = self._virtual_size or \
graceful_chain_get(self.data, "VirtualSize", default=0)
try:
return self._virtual_size - self._shared_size
except TypeError:
return 0 |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def image_id(self):
|
try:
# docker >= 1.9
image_id = self.data["ImageID"]
except KeyError:
# docker <= 1.8
image_id = self.metadata_get(["Image"])
return image_id |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def net(self):
""" get ACTIVE port mappings of a container :return: dict: { "host_port": "container_port" } """ |
try:
return NetData(self.inspect(cached=True).response)
except docker.errors.NotFound:
raise NotAvailableAnymore() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def top(self):
""" list of processes in a running container :return: None or list of dicts """ |
# let's get resources from .stats()
ps_args = "-eo pid,ppid,wchan,args"
# returns {"Processes": [values], "Titles": [values]}
# it's easier to play with list of dicts: [{"pid": 1, "ppid": 0}]
try:
response = self.d.top(self.container_id, ps_args=ps_args)
except docker.errors.APIError as ex:
logger.warning("error getting processes: %r", ex)
return []
# TODO: sort?
logger.debug(json.dumps(response, indent=2))
return [dict(zip(response["Titles"], process))
for process in response["Processes"] or []] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def filter(self, containers=True, images=True, stopped=True, cached=False, sort_by_created=True):
""" since django is so awesome, let's use their ORM API :return: """ |
content = []
containers_o = None
images_o = None
# also fetch containers when containers=False but only running ones were requested
if containers or not stopped:
containers_o = self.get_containers(cached=cached, stopped=stopped)
content += containers_o.response
if images:
images_o = self.get_images(cached=cached)
content += images_o.response
if sort_by_created:
content.sort(key=attrgetter("natural_sort_value"), reverse=True)
return content, containers_o, images_o |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def column_widths(self, size, focus=False):
""" Return a list of column widths. 0 values in the list mean hide corresponding column completely """ |
maxcol = size[0]
self._cache_maxcol = maxcol
widths = [width for w, (t, width, b) in self.contents]
self._cache_column_widths = widths
return widths |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def query(self, query_string=""):
""" query and display, also apply filters :param query_string: str :return: None """ |
def query_notify(operation):
w = get_operation_notify_widget(operation, display_always=False)
if w:
self.ui.notify_widget(w)
if query_string is not None:
self.filter_query = query_string.strip()
# FIXME: this could be part of filter command since it's command line
backend_query = {
"cached": False,
"containers": True,
"images": True,
}
def containers():
backend_query["containers"] = True
backend_query["images"] = not backend_query["images"]
backend_query["cached"] = True
def images():
backend_query["containers"] = not backend_query["containers"]
backend_query["images"] = True
backend_query["cached"] = True
def running():
backend_query["stopped"] = False
backend_query["cached"] = True
backend_query["images"] = False
query_conf = [
{
"query_keys": ["t", "type"],
"query_values": ["c", "container", "containers"],
"callback": containers
}, {
"query_keys": ["t", "type"],
"query_values": ["i", "images", "images"],
"callback": images
}, {
"query_keys": ["s", "state"],
"query_values": ["r", "running"],
"callback": running
},
]
query_list = re.split(r"[\s,]", self.filter_query)
unprocessed = []
for query_str in query_list:
if not query_str:
continue
# process here x=y queries and pass rest to parent filter()
try:
query_key, query_value = query_str.split("=", 1)
except ValueError:
unprocessed.append(query_str)
else:
logger.debug("looking up query key %r and query value %r", query_key, query_value)
for c in query_conf:
if query_key in c["query_keys"] and query_value in c["query_values"]:
c["callback"]()
break
else:
raise NotifyError("Invalid query string: %r", query_str)
widgets = []
logger.debug("doing query %s", backend_query)
query, c_op, i_op = self.d.filter(**backend_query)
for o in query:
try:
line = MainLineWidget(o)
except NotAvailableAnymore:
continue
widgets.append(line)
if unprocessed:
new_query = " ".join(unprocessed)
logger.debug("doing parent query for unprocessed string: %r", new_query)
super().filter(new_query, widgets_to_filter=widgets)
else:
self.set_body(widgets)
self.ro_content = widgets
query_notify(i_op)
query_notify(c_op) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_command(self, command_input, docker_object=None, buffer=None, size=None):
""" return command instance which is the actual command to be executed :param command_input: str, command name and its args: "command arg arg2=val opt" :param docker_object: :param buffer: :return: instance of Command """ |
logger.debug("get command for command input %r", command_input)
if not command_input:
# noop, don't do anything
return
if command_input[0] in ["/"]: # we could add here !, @, ...
command_name = command_input[0]
unparsed_command_args = shlex.split(command_input[1:])
else:
command_input_list = shlex.split(command_input)
command_name = command_input_list[0]
unparsed_command_args = command_input_list[1:]
try:
CommandClass = commands_mapping[command_name]
except KeyError:
logger.info("no such command: %r", command_name)
raise NoSuchCommand("There is no such command: %s" % command_name)
else:
cmd = CommandClass(ui=self.ui, docker_backend=self.docker_backend,
docker_object=docker_object, buffer=buffer, size=size)
cmd.process_args(unparsed_command_args)
return cmd |
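The tokenizing step above relies on `shlex.split`, which honors shell-style quoting, so a quoted argument survives as a single token; the sample command line below is made up:

```python
import shlex

# shlex honors quoting: "a b" stays a single token, unlike str.split
tokens = shlex.split('run image arg="a b" opt')
command_name, args = tokens[0], tokens[1:]
```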
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def read(self):
""" Read pin value @rtype: int @return: I{0} when LOW, I{1} when HIGH """ |
val = self._fd.read()
self._fd.seek(0)
return int(val) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getVersion():  # @NoSelf
"""Shows the code version.""" |
print('epochs version:', str(CDFepoch.version) + '.' +
str(CDFepoch.release) + '.'+str(CDFepoch.increment)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getLeapSecondLastUpdated():  # @NoSelf
"""Shows the latest date a leap second was added to the leap second table.""" |
print('Leap second last updated:', str(CDFepoch.LTS[-1][0]) + '-' +
str(CDFepoch.LTS[-1][1]) + '-' + str(CDFepoch.LTS[-1][2])) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def close(self):
'''
Closes the CDF Class.
1. If compression was set, this is where the compressed file is
written.
2. If a checksum is needed, this will place the checksum at the end
of the file.
'''
if self.compressed_file is None:
with self.path.open('rb+') as f:
f.seek(0, 2)
eof = f.tell()
self._update_offset_value(f, self.gdr_head+36, 8, eof)
if self.checksum:
f.write(self._md5_compute(f))
return
# %%
with self.path.open('rb+') as f:
f.seek(0, 2)
eof = f.tell()
self._update_offset_value(f, self.gdr_head+36, 8, eof)
with self.compressed_file.open('wb+') as g:
g.write(bytearray.fromhex(CDF.V3magicNUMBER_1))
g.write(bytearray.fromhex(CDF.V3magicNUMBER_2c))
self._write_ccr(f, g, self.compression)
if self.checksum:
g.seek(0, 2)
g.write(self._md5_compute(g))
self.path.unlink() # NOTE: for Windows this is necessary
self.compressed_file.rename(self.path) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _write_var_data_sparse(self, f, zVar, var, dataType, numElems, recVary,
oneblock):
'''
Writes a VVR and a VXR for this block of sparse data
Parameters:
f : file
The open CDF file
zVar : bool
True if this is for a z variable
var : int
The variable number
dataType : int
The CDF data type of this variable
numElems : int
The number of elements in each record
recVary : bool
True if the value varies across records
oneblock: list
A list of data in the form [startrec, endrec, [data]]
Returns:
recend : int
Just the "endrec" value input by the user in "oneblock"
'''
rec_start = oneblock[0]
rec_end = oneblock[1]
indata = oneblock[2]
numValues = self._num_values(zVar, var)
# Convert oneblock[2] into a byte stream
_, data = self._convert_data(dataType, numElems, numValues, indata)
# Gather dimension information
if zVar:
vdr_offset = self.zvarsinfo[var][1]
else:
vdr_offset = self.rvarsinfo[var][1]
# Write one VVR
offset = self._write_vvr(f, data)
f.seek(vdr_offset+28, 0)
# Get first VXR
vxrOne = int.from_bytes(f.read(8), 'big', signed=True)
foundSpot = 0
usedEntries = 0
currentVXR = 0
# Search through VXRs to find an open one
while foundSpot == 0 and vxrOne > 0:
# have a VXR
f.seek(vxrOne, 0)
currentVXR = f.tell()
f.seek(vxrOne+12, 0)
vxrNext = int.from_bytes(f.read(8), 'big', signed=True)
nEntries = int.from_bytes(f.read(4), 'big', signed=True)
usedEntries = int.from_bytes(f.read(4), 'big', signed=True)
if (usedEntries == nEntries):
# all entries are used -- check the next vxr in link
vxrOne = vxrNext
else:
# found a vxr with an available entry spot
foundSpot = 1
# vxrOne == 0 from the VDR's vxrhead; vxrOne == -1 from a VXR's vxrnext
if (vxrOne == 0 or vxrOne == -1):
# no available vxr... create a new one
currentVXR = self._create_vxr(f, rec_start, rec_end, vdr_offset,
currentVXR, offset)
else:
self._use_vxrentry(f, currentVXR, rec_start, rec_end, offset)
# Modify the VDR's MaxRec if needed
f.seek(vdr_offset+24, 0)
recNumc = int.from_bytes(f.read(4), 'big', signed=True)
if (rec_end > recNumc):
self._update_offset_value(f, vdr_offset+24, 4, rec_end)
return rec_end |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _create_vxr(self, f, recStart, recEnd, currentVDR, priorVXR, vvrOffset):
'''
Create a VXR AND use a VXR
Parameters:
f : file
The open CDF file
recStart : int
The start record of this block
recEnd : int
The ending record of this block
currentVDR : int
The byte location of the variables VDR
priorVXR : int
The byte location of the previous VXR
vvrOffset : int
The byte location of the VVR
Returns:
vxroffset : int
The byte location of the created vxr
'''
# add a VXR, use an entry, and link it to the prior VXR if it exists
vxroffset = self._write_vxr(f)
self._use_vxrentry(f, vxroffset, recStart, recEnd, vvrOffset)
if (priorVXR == 0):
# VDR's VXRhead
self._update_offset_value(f, currentVDR+28, 8, vxroffset)
else:
# VXR's next
self._update_offset_value(f, priorVXR+12, 8, vxroffset)
# VDR's VXRtail
self._update_offset_value(f, currentVDR+36, 8, vxroffset)
return vxroffset |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _use_vxrentry(self, f, VXRoffset, recStart, recEnd, offset):
'''
Adds a VVR pointer to a VXR
'''
# Select the next unused entry in a VXR for a VVR/CVVR
f.seek(VXRoffset+20)
# num entries
numEntries = int.from_bytes(f.read(4), 'big', signed=True)
# used entries
usedEntries = int.from_bytes(f.read(4), 'big', signed=True)
# VXR's First
self._update_offset_value(f, VXRoffset+28+4*usedEntries, 4, recStart)
# VXR's Last
self._update_offset_value(f, VXRoffset+28+4*numEntries+4*usedEntries,
4, recEnd)
# VXR's Offset
self._update_offset_value(f, VXRoffset+28+2*4*numEntries+8*usedEntries,
8, offset)
# VXR's NusedEntries
usedEntries += 1
self._update_offset_value(f, VXRoffset+24, 4, usedEntries)
return usedEntries |
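The offset arithmetic above encodes the VXR layout: after a 28-byte header come three parallel arrays, First (4 bytes per entry), Last (4 bytes per entry), and Offset (8 bytes per entry). A sketch of just that arithmetic (helper name made up for illustration):

```python
def vxr_entry_offsets(vxr_offset, num_entries, used_entries):
    # header ends 28 bytes in; First[i] is 4 bytes, Last[i] is 4 bytes,
    # Offset[i] is 8 bytes, stored as three back-to-back arrays
    first = vxr_offset + 28 + 4 * used_entries
    last = vxr_offset + 28 + 4 * num_entries + 4 * used_entries
    offset = vxr_offset + 28 + 2 * 4 * num_entries + 8 * used_entries
    return first, last, offset
```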
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _add_vxr_levels_r(self, f, vxrhead, numVXRs):
'''
Build a new level of VXRs... make VXRs more tree-like
From:
VXR1 -> VXR2 -> VXR3 -> VXR4 -> ... -> VXRn
To:
new VXR1
/ | \
VXR2 VXR3 VXR4
/ | \
...
VXR5 .......... VXRn
Parameters:
f : file
The open CDF file
vxrhead : int
The byte location of the first VXR for a variable
numVXRs : int
The total number of VXRs
Returns:
newVXRhead : int
The byte location of the newest VXR head
newvxroff : int
The byte location of the last VXR head
'''
newNumVXRs = int(numVXRs / CDF.NUM_VXRlvl_ENTRIES)
remaining = int(numVXRs % CDF.NUM_VXRlvl_ENTRIES)
vxroff = vxrhead
prevxroff = -1
if (remaining != 0):
newNumVXRs += 1
CDF.level += 1
for x in range(0, newNumVXRs):
newvxroff = self._write_vxr(f, numEntries=CDF.NUM_VXRlvl_ENTRIES)
if (x > 0):
self._update_offset_value(f, prevxroff+12, 8, newvxroff)
else:
newvxrhead = newvxroff
prevxroff = newvxroff
if (x == (newNumVXRs - 1)):
if (remaining == 0):
endEntry = CDF.NUM_VXRlvl_ENTRIES
else:
endEntry = remaining
else:
endEntry = CDF.NUM_VXRlvl_ENTRIES
for _ in range(0, endEntry):
recFirst, recLast = self._get_recrange(f, vxroff)
self._use_vxrentry(f, newvxroff, recFirst, recLast, vxroff)
vxroff = self._read_offset_value(f, vxroff+12, 8)
vxroff = vxrhead
# Break the horizontal links
for x in range(0, numVXRs):
nvxroff = self._read_offset_value(f, vxroff+12, 8)
self._update_offset_value(f, vxroff+12, 8, 0)
vxroff = nvxroff
# Iterate this process if we're over NUM_VXRlvl_ENTRIES
if (newNumVXRs > CDF.NUM_VXRlvl_ENTRIES):
return self._add_vxr_levels_r(f, newvxrhead, newNumVXRs)
else:
return newvxrhead, newvxroff |
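The sizing arithmetic at the top of `_add_vxr_levels_r` (a ceiling division of the VXR count by the entries per level) can be sketched in isolation; `entries_per_vxr` here stands in for `CDF.NUM_VXRlvl_ENTRIES`:

```python
def next_level_count(num_vxrs, entries_per_vxr):
    # One new VXR per full group of entries, plus one more for any remainder
    count, remaining = divmod(num_vxrs, entries_per_vxr)
    return count + (1 if remaining else 0)
```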
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _update_vdr_vxrheadtail(self, f, vdr_offset, VXRoffset):
'''
This sets a VXR to be the first and last VXR in the VDR
'''
# VDR's VXRhead
self._update_offset_value(f, vdr_offset+28, 8, VXRoffset)
# VDR's VXRtail
self._update_offset_value(f, vdr_offset+36, 8, VXRoffset) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _get_recrange(self, f, VXRoffset):
'''
Finds the first and last record numbers pointed by the VXR
Assumes the VXRs are in order
'''
f.seek(VXRoffset+20)
# Num entries
numEntries = int.from_bytes(f.read(4), 'big', signed=True)
# used entries
usedEntries = int.from_bytes(f.read(4), 'big', signed=True)
# VXR's First record
firstRec = int.from_bytes(f.read(4), 'big', signed=True)
# VXR's Last record
f.seek(VXRoffset+28+(4*numEntries+4*(usedEntries-1)))
lastRec = int.from_bytes(f.read(4), 'big', signed=True)
return firstRec, lastRec |
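The offset arithmetic in `_get_recrange` can be checked against a hand-built VXR byte block (field offsets as assumed from the writer code in this file: NumEntries at +20, NusedEntries at +24, the First array at +28, and the Last array at +28+4*NumEntries):

```python
import io
import struct

n_entries, used = 3, 2
vxr = bytearray(28 + (4 + 4 + 8) * n_entries)
vxr[20:24] = struct.pack('>i', n_entries)
vxr[24:28] = struct.pack('>i', used)
vxr[28:40] = struct.pack('>3i', 0, 10, -1)   # First record of each entry
vxr[40:52] = struct.pack('>3i', 9, 19, -1)   # Last record of each entry

f = io.BytesIO(bytes(vxr))
f.seek(20)
num = int.from_bytes(f.read(4), 'big', signed=True)
nused = int.from_bytes(f.read(4), 'big', signed=True)
first = int.from_bytes(f.read(4), 'big', signed=True)   # first entry's First
f.seek(28 + 4 * num + 4 * (nused - 1))
last = int.from_bytes(f.read(4), 'big', signed=True)    # last used entry's Last
```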
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _datatype_size(datatype, numElms): # @NoSelf
'''
Gets datatype size
Parameters:
datatype : int
CDF variable data type
numElms : int
number of elements
Returns:
numBytes : int
The number of bytes for the data
'''
sizes = {1: 1,
2: 2,
4: 4,
8: 8,
11: 1,
12: 2,
14: 4,
21: 4,
22: 8,
31: 8,
32: 16,
33: 8,
41: 1,
44: 4,
45: 8,
51: 1,
52: 1}
try:
if (isinstance(datatype, int)):
if (datatype == 51 or datatype == 52):
return numElms
else:
return sizes[datatype]
else:
datatype = datatype.upper()
if (datatype == 'CDF_INT1' or datatype == 'CDF_UINT1' or
datatype == 'CDF_BYTE'):
return 1
elif (datatype == 'CDF_INT2' or datatype == 'CDF_UINT2'):
return 2
elif (datatype == 'CDF_INT4' or datatype == 'CDF_UINT4'):
return 4
elif (datatype == 'CDF_INT8' or datatype == 'CDF_TIME_TT2000'):
return 8
elif (datatype == 'CDF_REAL4' or datatype == 'CDF_FLOAT'):
return 4
elif (datatype == 'CDF_REAL8' or datatype == 'CDF_DOUBLE' or
datatype == 'CDF_EPOCH'):
return 8
elif (datatype == 'CDF_EPOCH16'):
return 16
elif (datatype == 'CDF_CHAR' or datatype == 'CDF_UCHAR'):
return numElms
else:
return -1
except Exception:
return -1 |
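The lookup above can be mirrored as a plain dictionary (a sketch; the numeric codes follow the CDF data-type constants used throughout this file):

```python
CDF_TYPE_SIZES = {1: 1, 2: 2, 4: 4, 8: 8, 11: 1, 12: 2, 14: 4,
                  21: 4, 22: 8, 31: 8, 32: 16, 33: 8, 41: 1, 44: 4, 45: 8}

def datatype_size(datatype, num_elems):
    # CDF_CHAR (51) / CDF_UCHAR (52) occupy one byte per element
    if datatype in (51, 52):
        return num_elems
    return CDF_TYPE_SIZES.get(datatype, -1)
```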
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _write_adr(self, f, gORv, name):
'''
Writes an ADR to the end of the file.
Additionally, it will update the offset values to either the previous ADR
or the ADRhead field in the GDR.
Parameters:
f : file
The open CDF file
gORv : bool
True if a global attribute, False if variable attribute
name : str
name of the attribute
Returns:
num : int
The attribute number
byte_loc : int
The current location in file f
'''
f.seek(0, 2)
byte_loc = f.tell()
block_size = CDF.ADR_BASE_SIZE64
section_type = CDF.ADR_
nextADR = 0
headAgrEDR = 0
if (gORv == True):
scope = 1
else:
scope = 2
num = len(self.attrs)
ngrEntries = 0
maxgrEntry = -1
rfuA = 0
headAzEDR = 0
nzEntries = 0
maxzEntry = -1
rfuE = -1
adr = bytearray(block_size)
adr[0:8] = struct.pack('>q', block_size)
adr[8:12] = struct.pack('>i', section_type)
adr[12:20] = struct.pack('>q', nextADR)
adr[20:28] = struct.pack('>q', headAgrEDR)
adr[28:32] = struct.pack('>i', scope)
adr[32:36] = struct.pack('>i', num)
adr[36:40] = struct.pack('>i', ngrEntries)
adr[40:44] = struct.pack('>i', maxgrEntry)
adr[44:48] = struct.pack('>i', rfuA)
adr[48:56] = struct.pack('>q', headAzEDR)
adr[56:60] = struct.pack('>i', nzEntries)
adr[60:64] = struct.pack('>i', maxzEntry)
adr[64:68] = struct.pack('>i', rfuE)
tofill = 256 - len(name)
adr[68:324] = (name+'\0'*tofill).encode()
f.write(adr)
info = []
info.append(name)
info.append(scope)
info.append(byte_loc)
self.attrsinfo[num] = info
if (scope == 1):
self.gattrs.append(name)
else:
self.vattrs.append(name)
self.attrs.append(name)
if (num > 0):
# ADR's ADRnext
self._update_offset_value(f, self.attrsinfo[num-1][2]+12, 8,
byte_loc)
else:
# GDR's ADRhead
self._update_offset_value(f, self.gdr_head+28, 8, byte_loc)
# GDR's NumAttr
self._update_offset_value(f, self.gdr_head+48, 4, num+1)
return num, byte_loc |
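The fixed-width Name field written at offset 68 relies on null-padding to 256 bytes; that packing step can be exercised on its own:

```python
def pack_adr_name(name):
    # Null-pad the attribute name to the fixed 256-byte ADR field width
    return (name + '\0' * (256 - len(name))).encode()
```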
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _write_aedr(self, f, gORz, attrNum, entryNum, value, pdataType,
pnumElems, zVar):
'''
Writes an aedr into the end of the file.
Parameters:
f : file
The current open CDF file
gORz : bool
True if this entry is for a global or z variable, False if r variable
attrNum : int
Number of the attribute this aedr belongs to.
entryNum : int
Number of the entry
value :
The value of this entry
pdataType : int
The CDF data type of the value
pnumElems : int
Number of elements in the value.
zVar : bool
True if this entry belongs to a z variable
Returns:
byte_loc : int
The byte location of the start of the newly written AEDR.
'''
f.seek(0, 2)
byte_loc = f.tell()
if (gORz == True or zVar != True):
section_type = CDF.AgrEDR_
else:
section_type = CDF.AzEDR_
nextAEDR = 0
if pdataType is None:
# Figure out Data Type if not supplied
if isinstance(value, (list, tuple)):
avalue = value[0]
else:
avalue = value
if (isinstance(avalue, int)):
pdataType = CDF.CDF_INT8
elif (isinstance(avalue, float)):
pdataType = CDF.CDF_FLOAT
elif (isinstance(avalue, complex)):
pdataType = CDF.CDF_EPOCH16
else:
# assume a boolean
pdataType = CDF.CDF_INT1
if pnumElems is None:
# Figure out number of elements if not supplied
if isinstance(value, str):
pdataType = CDF.CDF_CHAR
pnumElems = len(value)
else:
if isinstance(value, (list, tuple)):
pnumElems = len(value)
else:
pnumElems = 1
dataType = pdataType
numElems = pnumElems
rfuB = 0
rfuC = 0
rfuD = -1
rfuE = -1
if gORz:
numStrings = 0
else:
if (isinstance(value, str)):
numStrings = value.count('\\N ') + 1
else:
numStrings = 0
recs, cdata = self._convert_data(dataType, numElems, 1, value)
if (dataType == 51):
numElems = len(cdata)
block_size = len(cdata) + 56
aedr = bytearray(block_size)
aedr[0:8] = struct.pack('>q', block_size)
aedr[8:12] = struct.pack('>i', section_type)
aedr[12:20] = struct.pack('>q', nextAEDR)
aedr[20:24] = struct.pack('>i', attrNum)
aedr[24:28] = struct.pack('>i', dataType)
aedr[28:32] = struct.pack('>i', entryNum)
aedr[32:36] = struct.pack('>i', numElems)
aedr[36:40] = struct.pack('>i', numStrings)
aedr[40:44] = struct.pack('>i', rfuB)
aedr[44:48] = struct.pack('>i', rfuC)
aedr[48:52] = struct.pack('>i', rfuD)
aedr[52:56] = struct.pack('>i', rfuE)
aedr[56:block_size] = cdata
f.write(aedr)
return byte_loc |
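The fallback type inference used when `pdataType` is None can be sketched on its own (the `CDF_*` names stand in for the integer constants on the CDF class and are placeholders here):

```python
def infer_entry_type(value):
    # Mirrors the fallback: lists/tuples are typed by their first element
    sample = value[0] if isinstance(value, (list, tuple)) else value
    if isinstance(sample, int):        # note: bool is an int subclass, so it lands here
        return 'CDF_INT8'
    if isinstance(sample, float):
        return 'CDF_FLOAT'
    if isinstance(sample, complex):
        return 'CDF_EPOCH16'
    return 'CDF_INT1'
```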
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _write_vxr(self, f, numEntries=None):
'''
Creates a VXR at the end of the file.
Returns byte location of the VXR
The First, Last, and Offset fields will need to be filled in later
'''
f.seek(0, 2)
byte_loc = f.tell()
section_type = CDF.VXR_
nextVXR = 0
if (numEntries == None):
nEntries = CDF.NUM_VXR_ENTRIES
else:
nEntries = int(numEntries)
block_size = CDF.VXR_BASE_SIZE64 + (4 + 4 + 8) * nEntries
nUsedEntries = 0
firsts = [-1] * nEntries
lasts = [-1] * nEntries
offsets = [-1] * nEntries
vxr = bytearray(block_size)
vxr[0:8] = struct.pack('>q', block_size)
vxr[8:12] = struct.pack('>i', section_type)
vxr[12:20] = struct.pack('>q', nextVXR)
vxr[20:24] = struct.pack('>i', nEntries)
vxr[24:28] = struct.pack('>i', nUsedEntries)
estart = 28 + 4*nEntries
vxr[28:estart] = struct.pack('>%si' % nEntries, *firsts)
eend = estart + 4*nEntries
vxr[estart:eend] = struct.pack('>%si' % nEntries, *lasts)
vxr[eend:block_size] = struct.pack('>%sq' % nEntries, *offsets)
f.write(vxr)
return byte_loc |
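The `'>%si' % nEntries` idiom used above packs a whole array of entries in one call; its behavior is easy to verify:

```python
import struct

firsts = [-1, -1, -1]
packed = struct.pack('>%si' % len(firsts), *firsts)   # 3 big-endian int32s
```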
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _write_vvr(self, f, data):
'''
Writes a vvr to the end of file "f" with the byte stream "data".
'''
f.seek(0, 2)
byte_loc = f.tell()
block_size = CDF.VVR_BASE_SIZE64 + len(data)
section_type = CDF.VVR_
vvr1 = bytearray(12)
vvr1[0:8] = struct.pack('>q', block_size)
vvr1[8:12] = struct.pack('>i', section_type)
f.write(vvr1)
f.write(data)
return byte_loc |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _write_cpr(self, f, cType, parameter) -> int:
'''
Write compression info to the end of the file in a CPR.
'''
f.seek(0, 2)
byte_loc = f.tell()
block_size = CDF.CPR_BASE_SIZE64 + 4
section_type = CDF.CPR_
rfuA = 0
pCount = 1
cpr = bytearray(block_size)
cpr[0:8] = struct.pack('>q', block_size)
cpr[8:12] = struct.pack('>i', section_type)
cpr[12:16] = struct.pack('>i', cType)
cpr[16:20] = struct.pack('>i', rfuA)
cpr[20:24] = struct.pack('>i', pCount)
cpr[24:28] = struct.pack('>i', parameter)
f.write(cpr)
return byte_loc |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _write_cvvr(self, f, data):
'''
Write compressed "data" variable to the end of the file in a CVVR
'''
f.seek(0, 2)
byte_loc = f.tell()
cSize = len(data)
block_size = CDF.CVVR_BASE_SIZE64 + cSize
section_type = CDF.CVVR_
rfuA = 0
cvvr1 = bytearray(24)
cvvr1[0:8] = struct.pack('>q', block_size)
cvvr1[8:12] = struct.pack('>i', section_type)
cvvr1[12:16] = struct.pack('>i', rfuA)
cvvr1[16:24] = struct.pack('>q', cSize)
f.write(cvvr1)
f.write(data)
return byte_loc |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _write_ccr(self, f, g, level: int):
'''
Write a CCR to file "g" from file "f" with level "level".
Currently, only handles gzip compression.
Parameters:
f : file
Uncompressed file to read from
g : file
File to write the compressed data into
level : int
The level of the compression from 0 to 9
Returns: None
'''
f.seek(8)
data = f.read()
uSize = len(data)
section_type = CDF.CCR_
rfuA = 0
cData = gzip.compress(data, level)
block_size = CDF.CCR_BASE_SIZE64 + len(cData)
cprOffset = 0
ccr1 = bytearray(32)
#ccr1[0:4] = binascii.unhexlify(CDF.V3magicNUMBER_1)
#ccr1[4:8] = binascii.unhexlify(CDF.V3magicNUMBER_2c)
ccr1[0:8] = struct.pack('>q', block_size)
ccr1[8:12] = struct.pack('>i', section_type)
ccr1[12:20] = struct.pack('>q', cprOffset)
ccr1[20:28] = struct.pack('>q', uSize)
ccr1[28:32] = struct.pack('>i', rfuA)
g.seek(0, 2)
g.write(ccr1)
g.write(cData)
cprOffset = self._write_cpr(g, CDF.GZIP_COMPRESSION, level)
self._update_offset_value(g, 20, 8, cprOffset) |
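The CCR body is simply the gzip stream of everything after the first 8 bytes of the uncompressed file; the compression round trip is easy to sanity-check:

```python
import gzip

payload = b'internal CDF records ' * 64
level = 6                              # compression level 0-9, as in _write_ccr
compressed = gzip.compress(payload, level)
restored = gzip.decompress(compressed)
```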
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _convert_type(data_type): # @NoSelf
'''
Converts CDF data types into python struct format characters
'''
if data_type in (1, 41):
dt_string = 'b'
elif data_type == 2:
dt_string = 'h'
elif data_type == 4:
dt_string = 'i'
elif data_type in (8, 33):
dt_string = 'q'
elif data_type == 11:
dt_string = 'B'
elif data_type == 12:
dt_string = 'H'
elif data_type == 14:
dt_string = 'I'
elif data_type in (21, 44):
dt_string = 'f'
elif data_type in (22, 45, 31):
dt_string = 'd'
elif data_type == 32:
dt_string = 'd'
elif data_type in (51, 52):
dt_string = 's'
else:
dt_string = ''
return dt_string |
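The struct format characters returned above can be cross-checked against `struct.calcsize`, confirming they match the byte widths in the size table earlier in this file:

```python
import struct

CDF_TO_STRUCT = {1: 'b', 41: 'b', 2: 'h', 4: 'i', 8: 'q', 33: 'q',
                 11: 'B', 12: 'H', 14: 'I', 21: 'f', 44: 'f',
                 22: 'd', 45: 'd', 31: 'd', 32: 'd'}

# e.g. CDF_TIME_TT2000 (33) maps to 'q', an 8-byte big-endian integer
tt2000_size = struct.calcsize('>' + CDF_TO_STRUCT[33])
```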
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _convert_nptype(data_type, data): # @NoSelf
'''
Converts "data" of CDF type "data_type" into a numpy array
'''
if data_type in (1, 41):
return np.int8(data).tobytes()
elif data_type == 2:
return np.int16(data).tobytes()
elif data_type == 4:
return np.int32(data).tobytes()
elif (data_type == 8) or (data_type == 33):
return np.int64(data).tobytes()
elif data_type == 11:
return np.uint8(data).tobytes()
elif data_type == 12:
return np.uint16(data).tobytes()
elif data_type == 14:
return np.uint32(data).tobytes()
elif (data_type == 21) or (data_type == 44):
return np.float32(data).tobytes()
elif (data_type == 22) or (data_type == 45) or (data_type == 31):
return np.float64(data).tobytes()
elif (data_type == 32):
return np.complex128(data).tobytes()
else:
return data |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _default_pad(self, data_type, numElems):
'''
Determines the default pad data for a "data_type"
'''
order = self._convert_option()
if (data_type == 1) or (data_type == 41):
pad_value = struct.pack(order+'b', -127)
elif data_type == 2:
pad_value = struct.pack(order+'h', -32767)
elif data_type == 4:
pad_value = struct.pack(order+'i', -2147483647)
elif (data_type == 8) or (data_type == 33):
pad_value = struct.pack(order+'q', -9223372036854775807)
elif data_type == 11:
pad_value = struct.pack(order+'B', 254)
elif data_type == 12:
pad_value = struct.pack(order+'H', 65534)
elif data_type == 14:
pad_value = struct.pack(order+'I', 4294967294)
elif (data_type == 21) or (data_type == 44):
pad_value = struct.pack(order+'f', -1.0E30)
elif (data_type == 22) or (data_type == 45):
pad_value = struct.pack(order+'d', -1.0E30)
elif (data_type == 31):
pad_value = struct.pack(order+'d', 0.0)
elif (data_type == 32):
pad_value = struct.pack(order+'2d', *[0.0, 0.0])
elif (data_type == 51) or (data_type == 52):
tmpPad = str(' '*numElems).encode()
form = str(numElems)
pad_value = struct.pack(form+'b', *tmpPad)
return pad_value |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _num_values(self, zVar, varNum):
'''
Determines the number of values in a record.
Set zVar=True if this is a zvariable.
'''
values = 1
if (zVar == True):
numDims = self.zvarsinfo[varNum][2]
dimSizes = self.zvarsinfo[varNum][3]
dimVary = self.zvarsinfo[varNum][4]
else:
numDims = self.rvarsinfo[varNum][2]
dimSizes = self.rvarsinfo[varNum][3]
dimVary = self.rvarsinfo[varNum][4]
if (numDims < 1):
return values
else:
for x in range(0, numDims):
if (zVar == True):
values = values * dimSizes[x]
else:
if (dimVary[x] != 0):
values = values * dimSizes[x]
return values |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _read_offset_value(self, f, offset, size):
'''
Reads an integer value from file "f" at location "offset".
'''
f.seek(offset, 0)
if (size == 8):
return int.from_bytes(f.read(8), 'big', signed=True)
else:
return int.from_bytes(f.read(4), 'big', signed=True) |
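Reading an offset back is just a big-endian signed `int.from_bytes`, the inverse of the `struct.pack('>q', ...)` / `struct.pack('>i', ...)` calls used by the writer:

```python
import struct

value = -9223372036854775807
raw = struct.pack('>q', value)                    # 8-byte field
raw4 = struct.pack('>i', -2147483647)             # 4-byte field
```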
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _update_offset_value(self, f, offset, size, value):
'''
Writes "value" into location "offset" in file "f".
'''
f.seek(offset, 0)
if (size == 8):
f.write(struct.pack('>q', value))
else:
f.write(struct.pack('>i', value)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _update_aedr_link(self, f, attrNum, zVar, varNum, offset):
'''
Updates variable aedr links
Parameters:
f : file
The open CDF file
attrNum : int
The number of the attribute to change
zVar : bool
True if we are updating a z variable attribute
varNum : int
The variable number associated with this aedr
offset : int
The offset in the file to the AEDR
Returns: None
'''
# The offset to this AEDR's ADR
adr_offset = self.attrsinfo[attrNum][2]
# Get the number of entries
if zVar:
f.seek(adr_offset+56, 0)
# ADR's NzEntries
entries = int.from_bytes(f.read(4), 'big', signed=True)
# ADR's MAXzEntry
maxEntry = int.from_bytes(f.read(4), 'big', signed=True)
else:
f.seek(adr_offset+36, 0)
# ADR's NgrEntries
entries = int.from_bytes(f.read(4), 'big', signed=True)
# ADR's MAXgrEntry
maxEntry = int.from_bytes(f.read(4), 'big', signed=True)
if (entries == 0):
# If this is the first entry, update the ADR to reflect
if zVar:
# AzEDRhead
self._update_offset_value(f, adr_offset+48, 8, offset)
# NzEntries
self._update_offset_value(f, adr_offset+56, 4, 1)
# MaxzEntry
self._update_offset_value(f, adr_offset+60, 4, varNum)
else:
# AgrEDRhead
self._update_offset_value(f, adr_offset+20, 8, offset)
# NgrEntries
self._update_offset_value(f, adr_offset+36, 4, 1)
# MaxgrEntry
self._update_offset_value(f, adr_offset+40, 4, varNum)
else:
if zVar:
f.seek(adr_offset+48, 0)
head = int.from_bytes(f.read(8), 'big', signed=True)
else:
f.seek(adr_offset+20, 0)
head = int.from_bytes(f.read(8), 'big', signed=True)
aedr = head
previous_aedr = head
done = False
# For each entry, re-adjust file offsets if needed
for _ in range(0, entries):
f.seek(aedr+28, 0)
# Get variable number for entry
num = int.from_bytes(f.read(4), 'big', signed=True)
if (num > varNum):
# insert an aedr to the chain
# AEDRnext
self._update_offset_value(f, previous_aedr+12, 8, offset)
# AEDRnext
self._update_offset_value(f, offset+12, 8, aedr)
done = True
break
else:
# move to the next aedr in chain
f.seek(aedr+12, 0)
previous_aedr = aedr
aedr = int.from_bytes(f.read(8), 'big', signed=True)
# If no link was made, update the last found aedr
if not done:
self._update_offset_value(f, previous_aedr+12, 8, offset)
if zVar:
self._update_offset_value(f, adr_offset+56, 4, entries+1)
if (maxEntry < varNum):
self._update_offset_value(f, adr_offset+60, 4, varNum)
else:
self._update_offset_value(f, adr_offset+36, 4, entries+1)
if (maxEntry < varNum):
self._update_offset_value(f, adr_offset+40, 4, varNum) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _md5_compute(self, f):
'''
Computes the checksum of the file
'''
md5 = hashlib.md5()
block_size = 16384
f.seek(0, 2)
remaining = f.tell()
f.seek(0)
while (remaining > block_size):
data = f.read(block_size)
remaining = remaining - block_size
md5.update(data)
if remaining > 0:
data = f.read(remaining)
md5.update(data)
return md5.digest() |
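The chunked checksum above is equivalent to hashing the whole stream at once; a quick equivalence check with an in-memory file:

```python
import hashlib
import io

data = b'x' * 40000                    # larger than one 16384-byte block
f = io.BytesIO(data)
md5 = hashlib.md5()
for chunk in iter(lambda: f.read(16384), b''):
    md5.update(chunk)
```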
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def cdf_info(self):
""" Returns a dictionary that shows the basic CDF information. This information includes | ['CDF'] | the name of the CDF | | ['Version'] | the version of the CDF | | ['Encoding'] | the endianness of the CDF | | ['Majority'] | the row/column majority | | ['zVariables']| the dictionary for zVariable numbers and their corresponding names | | ['rVariables']| the dictionary for rVariable numbers and their corresponding names | | ['Attributes']| the dictionary for attribute numbers and their corresponding names and scopes | | ['Checksum'] | the checksum indicator | | ['Num_rdim'] | the number of dimensions, applicable only to rVariables | | ['rDim_sizes'] | the dimensional sizes, applicable only to rVariables | | ['Compressed']| CDF is compressed at the file-level | | ['LeapSecondUpdated']| The last updated for the leap second table, if applicable | """ |
mycdf_info = {}
mycdf_info['CDF'] = self.file
mycdf_info['Version'] = self._version
mycdf_info['Encoding'] = self._encoding
mycdf_info['Majority'] = self._majority
mycdf_info['rVariables'], mycdf_info['zVariables'] = self._get_varnames()
mycdf_info['Attributes'] = self._get_attnames()
mycdf_info['Copyright'] = self._copyright
mycdf_info['Checksum'] = self._md5
mycdf_info['Num_rdim'] = self._num_rdim
mycdf_info['rDim_sizes'] = self._rdim_sizes
mycdf_info['Compressed'] = self._compressed
if (self.cdfversion > 2):
mycdf_info['LeapSecondUpdated'] = self._leap_second_updated
return mycdf_info |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def varinq(self, variable):
""" Returns a dictionary that shows the basic variable information. This information includes | ['Variable'] | the name of the variable | | ['Num'] | the variable number | | ['Var_Type'] | the variable type: zVariable or rVariable | | ['Data_Type'] | the variable's CDF data type | | ['Num_Elements']| the number of elements of the variable | | ['Num_Dims'] | the dimensionality of the variable record | | ['Dim_Sizes'] | the shape of the variable record | | ['Sparse'] | the variable's record sparseness | | ['Last_Rec'] | the maximum written record number (0-based) | | ['Dim_Vary'] | the dimensional variance(s) | | ['Rec_Vary'] | the record variance | | ['Pad'] | the padded value if set | | ['Compress'] | the GZIP compression level, 0 to 9. 0 if not compressed | | ['Block_Factor']| the blocking factor if the variable is compressed | Parameters variable : """ |
vdr_info = self.varget(variable=variable, inq=True)
if vdr_info is None:
raise KeyError("Variable {} not found.".format(variable))
var = {}
var['Variable'] = vdr_info['name']
var['Num'] = vdr_info['variable_number']
var['Var_Type'] = CDF._variable_token(vdr_info['section_type'])
var['Data_Type'] = vdr_info['data_type']
var['Data_Type_Description'] = CDF._datatype_token(vdr_info['data_type'])
var['Num_Elements'] = vdr_info['num_elements']
var['Num_Dims'] = vdr_info['num_dims']
var['Dim_Sizes'] = vdr_info['dim_sizes']
var['Sparse'] = CDF._sparse_token(vdr_info['sparse'])
var['Last_Rec'] = vdr_info['max_records']
var['Rec_Vary'] = vdr_info['record_vary']
var['Dim_Vary'] = vdr_info['dim_vary']
if ('pad' in vdr_info):
var['Pad'] = vdr_info['pad']
var['Compress'] = vdr_info['compression_level']
if ('blocking_factor' in vdr_info):
var['Block_Factor'] = vdr_info['blocking_factor']
return var |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def attinq(self, attribute=None):
""" Get attribute information. Returns ------- dict Dictionary of attribution infromation. """ |
position = self._first_adr
if isinstance(attribute, str):
for _ in range(0, self._num_att):
name, next_adr = self._read_adr_fast(position)
if name.strip().lower() == attribute.strip().lower():
return self._read_adr(position)
position = next_adr
raise KeyError('No attribute {}'.format(attribute))
elif isinstance(attribute, int):
if (attribute < 0 or attribute >= self._num_att):
raise KeyError('No attribute {}'.format(attribute))
for _ in range(0, attribute):
name, next_adr = self._read_adr_fast(position)
position = next_adr
return self._read_adr(position)
else:
print('Please set attribute keyword equal to the name or ',
'number of an attribute')
attrs = self._get_attnames()
print(attrs)
for x in range(0, self._num_att):
name = list(attrs[x].keys())[0]
print('NAME: ' + name + ', NUMBER: ' + str(x) + ', SCOPE: ' + attrs[x][name])
return attrs |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def epochrange(self, epoch=None, starttime=None, endtime=None):
""" Get epoch range. Returns a list of the record numbers, representing the corresponding starting and ending records within the time range from the epoch data. A None is returned if there is no data either written or found in the time range. """ |
return self.varget(variable=epoch, starttime=starttime,
endtime=endtime, record_range_only=True) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def globalattsget(self, expand=False, to_np=True):
""" Gets all global attributes. This function returns all of the global attribute entries, in a dictionary (in the form of 'attribute': {entry: value} pair) from a CDF. If there is no entry found, None is returned. If expand is entered with non-False, then each entry's data type is also returned in a list form as [entry, 'CDF_xxxx']. For attributes without any entries, they will also return with None value. """ |
byte_loc = self._first_adr
return_dict = {}
for _ in range(0, self._num_att):
adr_info = self._read_adr(byte_loc)
if (adr_info['scope'] != 1):
byte_loc = adr_info['next_adr_location']
continue
if (adr_info['num_gr_entry'] == 0):
if (expand is not False):
return_dict[adr_info['name']] = None
byte_loc = adr_info['next_adr_location']
continue
if (expand is False):
entries = []
else:
entries = {}
aedr_byte_loc = adr_info['first_gr_entry']
for _ in range(0, adr_info['num_gr_entry']):
if (self.cdfversion == 3):
aedr_info = self._read_aedr(aedr_byte_loc, to_np=to_np)
else:
aedr_info = self._read_aedr2(aedr_byte_loc, to_np=to_np)
entryData = aedr_info['entry']
if (expand is False):
entries.append(entryData)
else:
entryWithType = []
if (isinstance(entryData, str)):
entryWithType.append(entryData)
else:
dataType = aedr_info['data_type']
if (len(entryData.tolist()) == 1):
if (dataType != 31 and dataType != 32 and dataType != 33):
entryWithType.append(entryData.tolist()[0])
else:
if (dataType != 33):
entryWithType.append(epoch.CDFepoch.encode(entryData.tolist()[0],
iso_8601=False))
else:
entryWithType.append(epoch.CDFepoch.encode(entryData.tolist()[0]))
else:
if (dataType != 31 and dataType != 32 and dataType != 33):
entryWithType.append(entryData.tolist())
else:
if (dataType != 33):
entryWithType.append(epoch.CDFepoch.encode(entryData.tolist(),
iso_8601=False))
else:
entryWithType.append(epoch.CDFepoch.encode(entryData.tolist()))
entryWithType.append(CDF._datatype_token(aedr_info['data_type']))
entries[aedr_info['entry_num']] = entryWithType
aedr_byte_loc = aedr_info['next_aedr']
if (len(entries) != 0):
if (expand is False):
if (len(entries) == 1):
return_dict[adr_info['name']] = entries[0]
else:
return_dict[adr_info['name']] = entries
else:
return_dict[adr_info['name']] = entries
byte_loc = adr_info['next_adr_location']
return return_dict |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def varattsget(self, variable=None, expand=False, to_np=True):
""" Gets all variable attributes. Unlike attget, which returns a single attribute entry value, this function returns all of the variable attribute entries, in a dictionary (in the form of 'attribute': value pair) for a variable. If there is no entry found, None is returned. If no variable name is provided, a list of variables are printed. If expand is entered with non-False, then each entry's data type is also returned in a list form as [entry, 'CDF_xxxx']. For attributes without any entries, they will also return with None value. """ |
if (isinstance(variable, int) and self._num_zvariable > 0 and self._num_rvariable > 0):
print('This CDF has both r and z variables. Use variable name')
return None
if isinstance(variable, str):
position = self._first_zvariable
num_variables = self._num_zvariable
for zVar in [1, 0]:
for _ in range(0, num_variables):
if (self.cdfversion == 3):
name, vdr_next = self._read_vdr_fast(position)
else:
name, vdr_next = self._read_vdr_fast2(position)
if name.strip().lower() == variable.strip().lower():
if (self.cdfversion == 3):
vdr_info = self._read_vdr(position)
else:
vdr_info = self._read_vdr2(position)
return self._read_varatts(vdr_info['variable_number'], zVar, expand, to_np=to_np)
position = vdr_next
position = self._first_rvariable
num_variables = self._num_rvariable
print('No variable by this name:', variable)
return None
elif isinstance(variable, int):
if self._num_zvariable > 0:
num_variable = self._num_zvariable
zVar = True
else:
num_variable = self._num_rvariable
zVar = False
if (variable < 0 or variable >= num_variable):
print('No variable by this number:', variable)
return None
return self._read_varatts(variable, zVar, expand, to_np=to_np)
else:
print('Please set variable keyword equal to the name or ',
      'number of a variable')
rvars, zvars = self._get_varnames()
print("RVARIABLES: ")
for x in rvars:
print("NAME: " + str(x))
print("ZVARIABLES: ")
for x in zvars:
print("NAME: " + str(x))
return |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _uncompress_file(self, path):
'''
Decompresses the gzip-compressed file into a new temporary .cdf file
and returns its path. Returns None if the file is not gzip-compressed.
'''
with self.file.open('rb') as f:
if (self.cdfversion == 3):
data_start, data_size, cType, _ = self._read_ccr(8)
else:
data_start, data_size, cType, _ = self._read_ccr2(8)
if cType != 5:
return
f.seek(data_start)
decompressed_data = gzip.decompress(f.read(data_size))
newpath = pathlib.Path(tempfile.NamedTemporaryFile(suffix='.cdf').name)
with newpath.open('wb') as g:
g.write(bytearray.fromhex('cdf30001'))
g.write(bytearray.fromhex('0000ffff'))
g.write(decompressed_data)
return newpath |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _convert_option(self):
'''
Determines how to convert CDF byte ordering to the system
byte ordering.
'''
if sys.byteorder == 'little' and self._endian() == 'big-endian':
# big->little
order = '>'
elif sys.byteorder == 'big' and self._endian() == 'little-endian':
# little->big
order = '<'
else:
# no conversion
order = '='
return order |
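The decision table in `_convert_option` can be expressed as a free function (`cdf_endian` stands in for the value `self._endian()` returns):

```python
import sys

def convert_option(cdf_endian):
    # '>' / '<' force a byte swap in struct formats; '=' means no conversion
    if sys.byteorder == 'little' and cdf_endian == 'big-endian':
        return '>'
    if sys.byteorder == 'big' and cdf_endian == 'little-endian':
        return '<'
    return '='
```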
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _num_values(self, vdr_dict):
'''
Returns the number of values in a record, using a given VDR
dictionary. Multiplies the dimension sizes of each dimension,
if it is varying.
'''
values = 1
for x in range(0, vdr_dict['num_dims']):
if (vdr_dict['dim_vary'][x] != 0):
values = values * vdr_dict['dim_sizes'][x]
return values |
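The dimension product above, where only varying dimensions contribute, can be written as a standalone helper (CDF dimension-variance flags are typically -1 for varying and 0 for non-varying):

```python
def num_values(dim_sizes, dim_vary):
    # Only varying dimensions contribute to the per-record value count
    values = 1
    for size, vary in zip(dim_sizes, dim_vary):
        if vary != 0:
            values *= size
    return values
```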
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _convert_type(self, data_type):
'''
CDF data types to python struct data types
'''
if (data_type == 1) or (data_type == 41):
dt_string = 'b'
elif data_type == 2:
dt_string = 'h'
elif data_type == 4:
dt_string = 'i'
elif (data_type == 8) or (data_type == 33):
dt_string = 'q'
elif data_type == 11:
dt_string = 'B'
elif data_type == 12:
dt_string = 'H'
elif data_type == 14:
dt_string = 'I'
elif (data_type == 21) or (data_type == 44):
dt_string = 'f'
elif (data_type == 22) or (data_type == 45) or (data_type == 31):
dt_string = 'd'
elif (data_type == 32):
dt_string = 'd'
elif (data_type == 51) or (data_type == 52):
dt_string = 's'
else:
dt_string = ''
return dt_string |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _default_pad(self, data_type, num_elms):
'''
The default pad values by CDF data type
'''
order = self._convert_option()
if (data_type == 51 or data_type == 52):
return str(' '*num_elms)
if (data_type == 1) or (data_type == 41):
pad_value = struct.pack(order+'b', -127)
dt_string = 'i1'
elif data_type == 2:
pad_value = struct.pack(order+'h', -32767)
dt_string = 'i2'
elif data_type == 4:
pad_value = struct.pack(order+'i', -2147483647)
dt_string = 'i4'
elif (data_type == 8) or (data_type == 33):
pad_value = struct.pack(order+'q', -9223372036854775807)
dt_string = 'i8'
elif data_type == 11:
pad_value = struct.pack(order+'B', 254)
dt_string = 'u1'
elif data_type == 12:
pad_value = struct.pack(order+'H', 65534)
dt_string = 'u2'
elif data_type == 14:
pad_value = struct.pack(order+'I', 4294967294)
dt_string = 'u4'
elif (data_type == 21) or (data_type == 44):
pad_value = struct.pack(order+'f', -1.0E30)
dt_string = 'f'
elif (data_type == 22) or (data_type == 45) or (data_type == 31):
pad_value = struct.pack(order+'d', -1.0E30)
dt_string = 'd'
else:
# (data_type == 32):
pad_value = struct.pack(order+'2d', *[-1.0E30, -1.0E30])
dt_string = 'c16'
dt = np.dtype(dt_string)
# frombuffer over the immutable bytes returns a read-only array, so
# copy() is used to obtain a writeable one; the original call
# ret.setflags('WRITEABLE') is invalid -- setflags expects keyword
# arguments such as write=True.
ret = np.frombuffer(pad_value, dtype=dt, count=1).copy()
return ret |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _convert_np_data(data, data_type, num_elems): # @NoSelf
'''
Converts a single np data into byte stream.
'''
if (data_type == 51 or data_type == 52):
if (data == ''):
return ('\x00'*num_elems).encode()
else:
return data.ljust(num_elems, '\x00').encode('utf-8')
elif (data_type == 32):
data_stream = data.real.tobytes()
data_stream += data.imag.tobytes()
return data_stream
else:
return data.tobytes() |
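The string branch above (types 51/52, CDF_CHAR/CDF_UCHAR) pads or NUL-fills to a fixed byte width; a minimal standalone sketch of just that branch, under the hypothetical name `encode_cdf_string`:

```python
def encode_cdf_string(value, num_elems):
    """Pad/encode a CDF_CHAR value to exactly num_elems bytes,
    mirroring the string branch of _convert_np_data above.
    Note: with multi-byte UTF-8 characters the encoded length can
    exceed num_elems; the original code has the same caveat."""
    if value == '':
        return ('\x00' * num_elems).encode()
    return value.ljust(num_elems, '\x00').encode('utf-8')
```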
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _read_vvr_block(self, offset):
'''
Returns a VVR or decompressed CVVR block
'''
with self.file.open('rb') as f:
f.seek(offset, 0)
block_size = int.from_bytes(f.read(8), 'big')
block = f.read(block_size-8)
section_type = int.from_bytes(block[0:4], 'big')
if section_type == 13:
# a CVVR
compressed_size = int.from_bytes(block[12:16], 'big')
return gzip.decompress(block[16:16+compressed_size])
elif section_type == 7:
# a VVR
return block[4:] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _find_block(starts, ends, cur_block, rec_num): # @NoSelf
'''
Finds the block that rec_num is in if it is found. Otherwise it returns -1.
It also returns the block that has the physical data either at or
preceding the rec_num.
It could be -1 if the preceding block does not exist.
'''
total = len(starts)
if (cur_block == -1):
cur_block = 0
x = cur_block  # keep x defined if the loop body never runs
for x in range(cur_block, total):
if (starts[x] <= rec_num and ends[x] >= rec_num):
return x, x
if (starts[x] > rec_num):
break
return -1, x-1 |
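A self-contained copy of the search above makes the two return shapes easy to exercise; `find_block` is a hypothetical standalone name for the static method:

```python
def find_block(starts, ends, cur_block, rec_num):
    """Locate the sparse-record block containing rec_num.

    Returns (block, block) on a hit, or (-1, index of the block
    preceding rec_num) on a miss, as in _find_block above.
    """
    if cur_block == -1:
        cur_block = 0
    x = cur_block  # keep x defined if the loop never runs
    for x in range(cur_block, len(starts)):
        if starts[x] <= rec_num <= ends[x]:
            return x, x
        if starts[x] > rec_num:
            break
    return -1, x - 1
```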
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _convert_data(self, data, data_type, num_recs, num_values, num_elems):
'''
Converts data to the appropriate type using the struct.unpack method,
rather than using numpy.
'''
if (data_type == 51 or data_type == 52):
return [data[i:i+num_elems].decode('utf-8') for i in
range(0, num_recs*num_values*num_elems, num_elems)]
else:
tofrom = self._convert_option()
dt_string = self._convert_type(data_type)
form = tofrom + str(num_recs*num_values*num_elems) + dt_string
value_len = CDF._type_size(data_type, num_elems)
return list(struct.unpack_from(form,
data[0:num_recs*num_values*value_len])) |
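The CDF_CHAR branch above slices a flat byte buffer into fixed-width strings; a minimal sketch of that slicing under the hypothetical name `split_cdf_strings`:

```python
def split_cdf_strings(data, num_strings, num_elems):
    """Slice a flat byte buffer into fixed-width strings of
    num_elems bytes each, as the CDF_CHAR branch above does."""
    return [data[i:i + num_elems].decode('utf-8')
            for i in range(0, num_strings * num_elems, num_elems)]
```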
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getVersion():
# @NoSelf """ Shows the code version and last modified date. """ |
print('CDFread version:', str(CDF.version) + '.' + str(CDF.release) +
'.' + str(CDF.increment))
print('Date: 2018/01/11') |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def get_access_token(self):
'''
Returns an access token for the specified subscription.
This method uses a cache to limit the number of requests to the token service.
A fresh token can be re-used during its lifetime of 10 minutes. After a successful
request to the token service, this method caches the access token. Subsequent
invocations of the method return the cached token for the next 5 minutes. After
5 minutes, a new token is fetched from the token service and the cache is updated.
'''
if (self.token is None) or (datetime.utcnow() > self.reuse_token_until):
headers = {'Ocp-Apim-Subscription-Key': self.client_secret}
response = requests.post(self.base_url, headers=headers)
response.raise_for_status()
self.token = response.content
self.reuse_token_until = datetime.utcnow() + timedelta(minutes=5)
return self.token.decode('utf-8') |
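The caching rule above (reuse for 5 minutes, refetch afterwards) can be isolated from the HTTP call. `TokenCache` is a hypothetical sketch where `fetch` stands in for the real POST to the token service:

```python
from datetime import datetime, timedelta

class TokenCache:
    """Reuse a fetched token for 5 minutes, as get_access_token does."""

    def __init__(self, fetch):
        self._fetch = fetch          # callable performing the real request
        self.token = None
        self.reuse_token_until = None

    def get(self):
        if self.token is None or datetime.utcnow() > self.reuse_token_until:
            self.token = self._fetch()
            self.reuse_token_until = datetime.utcnow() + timedelta(minutes=5)
        return self.token
```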
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def main(from_lang, to_lang, provider, secret_access_key, output_only, text):
""" Python command line tool to make on line translations \b Example: \b \t $ translate-cli -t zh the book is on the table \t 碗是在桌子上。 \b Available languages: \b \t https://en.wikipedia.org/wiki/ISO_639-1 """ |
text = ' '.join(text)
kwargs = dict(from_lang=from_lang, to_lang=to_lang, provider=provider)
if provider != DEFAULT_PROVIDER:
kwargs['secret_access_key'] = secret_access_key
translator = Translator(**kwargs)
translation = translator.translate(text)
if sys.version_info.major == 2:
translation = translation.encode(locale.getpreferredencoding())
if output_only:
click.echo(translation)
return translation
click.echo('\nTranslation: {}'.format(translation))
click.echo('-' * 25)
click.echo('Translated by: {}'.format(translator.provider.name))
return translation |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_organization_id(server_config, label):
"""Return the ID of the organization with label ``label``. :param server_config: A dict of information about the server being talked to. The dict should include the keys "url", "auth" and "verify". :param label: A string label that will be used when searching. Every organization should have a unique label. :returns: An organization ID. (Typically an integer.) """ |
response = requests.get(
server_config['url'] + '/katello/api/v2/organizations',
data=json.dumps({'search': 'label={}'.format(label)}),
auth=server_config['auth'],
headers={'content-type': 'application/json'},
verify=server_config['verify'],
)
response.raise_for_status()
decoded = response.json()
if decoded['subtotal'] != 1:
print(
'Expected to find one organization, but instead found {0}. Search '
'results: {1}'.format(decoded['subtotal'], decoded['results'])
)
exit(1)
return decoded['results'][0]['id'] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _make_entity_from_id(entity_cls, entity_obj_or_id, server_config):
"""Given an entity object or an ID, return an entity object. If the value passed in is an object that is a subclass of :class:`Entity`, return that value. Otherwise, create an object of the type that ``field`` references, give that object an ID of ``field_value``, and return that object. :param entity_cls: An :class:`Entity` subclass. :param entity_obj_or_id: Either a :class:`nailgun.entity_mixins.Entity` object or an entity ID. :returns: An ``entity_cls`` object. :rtype: nailgun.entity_mixins.Entity """ |
if isinstance(entity_obj_or_id, entity_cls):
return entity_obj_or_id
return entity_cls(server_config, id=entity_obj_or_id) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_entity_id(field_name, attrs):
"""Find the ID for a one to one relationship. The server may return JSON data in the following forms for a :class:`nailgun.entity_fields.OneToOneField`:: 'user': None 'user': {'name': 'Alice Hayes', 'login': 'ahayes', 'id': 1} 'user_id': 1 'user_id': None Search ``attrs`` for a one to one ``field_name`` and return its ID. :param field_name: A string. The name of a field. :param attrs: A dict. A JSON payload as returned from a server. :returns: Either an entity ID or None. """ |
field_name_id = field_name + '_id'
if field_name in attrs:
if attrs[field_name] is None:
return None
elif 'id' in attrs[field_name]:
return attrs[field_name]['id']
if field_name_id in attrs:
return attrs[field_name_id]
else:
raise MissingValueError(
'Cannot find a value for the "{0}" field. Searched for keys named '
'{1}, but available keys are {2}.'
.format(field_name, (field_name, field_name_id), attrs.keys())
) |
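The three JSON shapes listed in the docstring are easy to exercise with a standalone copy of the lookup; `get_entity_id` is a hypothetical name for the module-level helper, with `KeyError` standing in for `MissingValueError`:

```python
def get_entity_id(field_name, attrs):
    """Return the ID for a one-to-one field from a server payload,
    handling 'user': None, 'user': {'id': 1, ...} and 'user_id': 1."""
    field_name_id = field_name + '_id'
    if field_name in attrs:
        if attrs[field_name] is None:
            return None
        elif 'id' in attrs[field_name]:
            return attrs[field_name]['id']
    if field_name_id in attrs:
        return attrs[field_name_id]
    raise KeyError(field_name)  # stands in for MissingValueError
```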
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_entity_ids(field_name, attrs):
"""Find the IDs for a one to many relationship. The server may return JSON data in the following forms for a :class:`nailgun.entity_fields.OneToManyField`:: 'user': [{'id': 1, …}, {'id': 42, …}] 'users': [{'id': 1, …}, {'id': 42, …}] 'user_ids': [1, 42] Search ``attrs`` for a one to many ``field_name`` and return its ID. :param field_name: A string. The name of a field. :param attrs: A dict. A JSON payload as returned from a server. :returns: An iterable of entity IDs. """ |
field_name_ids = field_name + '_ids'
plural_field_name = pluralize(field_name)
if field_name_ids in attrs:
return attrs[field_name_ids]
elif field_name in attrs:
return [entity['id'] for entity in attrs[field_name]]
elif plural_field_name in attrs:
return [entity['id'] for entity in attrs[plural_field_name]]
else:
raise MissingValueError(
'Cannot find a value for the "{0}" field. Searched for keys named '
'{1}, but available keys are {2}.'
.format(
field_name,
(field_name_ids, field_name, plural_field_name),
attrs.keys()
)
) |
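The one-to-many counterpart follows the same pattern; in this sketch a naive `+ 's'` stands in for the inflector's `pluralize()`, and `KeyError` for `MissingValueError`:

```python
def get_entity_ids(field_name, attrs):
    """Return the IDs for a one-to-many field from a server payload,
    handling 'user_ids', 'user' and 'users' shapes."""
    plural = field_name + 's'  # simplification of pluralize()
    if field_name + '_ids' in attrs:
        return attrs[field_name + '_ids']
    if field_name in attrs:
        return [entity['id'] for entity in attrs[field_name]]
    if plural in attrs:
        return [entity['id'] for entity in attrs[plural]]
    raise KeyError(field_name)  # stands in for MissingValueError
```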
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def to_json_serializable(obj):
""" Transforms obj into a json serializable object. :param obj: entity or any json serializable object :return: serializable object """ |
if isinstance(obj, Entity):
return obj.to_json_dict()
if isinstance(obj, dict):
return {k: to_json_serializable(v) for k, v in obj.items()}
elif isinstance(obj, (list, tuple)):
return [to_json_serializable(v) for v in obj]
elif isinstance(obj, datetime):
return obj.strftime('%Y-%m-%d %H:%M:%S')
elif isinstance(obj, date):
return obj.strftime('%Y-%m-%d')
return obj |
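The recursion above can be tried without the surrounding class hierarchy; this sketch drops the `Entity` branch (which needs `to_json_dict`) and is renamed `to_json_ready` to mark it as a simplification:

```python
from datetime import datetime, date

def to_json_ready(obj):
    """Recursively convert containers and dates to JSON-safe values.
    datetime must be checked before date, since datetime is a
    subclass of date."""
    if isinstance(obj, dict):
        return {k: to_json_ready(v) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return [to_json_ready(v) for v in obj]
    if isinstance(obj, datetime):
        return obj.strftime('%Y-%m-%d %H:%M:%S')
    if isinstance(obj, date):
        return obj.strftime('%Y-%m-%d')
    return obj
```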
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def path(self, which=None):
"""Return the path to the current entity. Return the path to base entities of this entity's type if: * ``which`` is ``'base'``, or * ``which`` is ``None`` and instance attribute ``id`` is unset. Return the path to this exact entity if instance attribute ``id`` is set and: * ``which`` is ``'self'``, or * ``which`` is ``None``. Raise :class:`NoSuchPathError` otherwise. Child classes may choose to extend this method, especially if a child entity offers more than the two URLs supported by default. If extended, then the extending class should check for custom parameters before calling ``super``:: def path(self, which):
if which == 'custom': return urljoin(…) super(ChildEntity, self).__init__(which) This will allow the extending method to accept a custom parameter without accidentally raising a :class:`NoSuchPathError`. :param which: A string. Optional. Valid arguments are 'self' and 'base'. :return: A string. A fully qualified URL. :raises nailgun.entity_mixins.NoSuchPathError: If no path can be built. """ |
# It is OK that member ``self._meta`` is not found. Subclasses are
# required to set that attribute if they wish to use this method.
#
# Beware of leading and trailing slashes:
#
# urljoin('example.com', 'foo') => 'foo'
# urljoin('example.com/', 'foo') => 'example.com/foo'
# urljoin('example.com', '/foo') => '/foo'
# urljoin('example.com/', '/foo') => '/foo'
#
base = urljoin(
self._server_config.url + '/',
self._meta['api_path'] # pylint:disable=no-member
)
if which == 'base' or (which is None and not hasattr(self, 'id')):
return base
elif (which == 'self' or which is None) and hasattr(self, 'id'):
return urljoin(base + '/', str(self.id)) # pylint:disable=E1101
raise NoSuchPathError |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_values(self):
"""Return a copy of field values on the current object. This method is almost identical to ``vars(self).copy()``. However, only instance attributes that correspond to a field are included in the returned dict. :return: A dict mapping field names to user-provided values. """ |
attrs = vars(self).copy()
attrs.pop('_server_config')
attrs.pop('_fields')
attrs.pop('_meta')
if '_path_fields' in attrs:
attrs.pop('_path_fields')
return attrs |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def to_json_dict(self, filter_fcn=None):
"""Create a dict with Entity properties for json encoding. It can be overridden by subclasses for each standard serialization doesn't work. By default it call _to_json_dict on OneToOne fields and build a list calling the same method on each OneToMany object's fields. Fields can be filtered accordingly to 'filter_fcn'. This callable receives field's name as first parameter and fields itself as second parameter. It must return True if field's value should be included on dict and False otherwise. If not provided field will not be filtered. :type filter_fcn: callable :return: dct """ |
fields, values = self.get_fields(), self.get_values()
filtered_fields = fields.items()
if filter_fcn is not None:
filtered_fields = (
tpl for tpl in filtered_fields if filter_fcn(tpl[0], tpl[1])
)
json_dct = {}
for field_name, field in filtered_fields:
if field_name in values:
value = values[field_name]
if value is None:
json_dct[field_name] = None
# This condition is needed because a OneToOneField sometimes
# holds None, which would break the condition below, e.g., by
# calling value.to_json_dict() when value is None
elif isinstance(field, OneToOneField):
json_dct[field_name] = value.to_json_dict()
elif isinstance(field, OneToManyField):
json_dct[field_name] = [
entity.to_json_dict() for entity in value
]
else:
json_dct[field_name] = to_json_serializable(value)
return json_dct |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def compare(self, other, filter_fcn=None):
"""Returns True if properties can be compared in terms of eq. Entity's Fields can be filtered accordingly to 'filter_fcn'. This callable receives field's name as first parameter and field itself as second parameter. It must return True if field's value should be included on comparison and False otherwise. If not provided field's marked as unique will not be compared by default. 'id' and 'name' are examples of unique fields commonly ignored. Check Entities fields for fields marked with 'unique=True' :param other: entity to compare :param filter_fcn: callable :return: boolean """ |
if not isinstance(other, type(self)):
return False
if filter_fcn is None:
def filter_unique(_, field):
"""Filter function for unique fields"""
return not field.unique
filter_fcn = filter_unique
return self.to_json_dict(filter_fcn) == other.to_json_dict(filter_fcn) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_missing(self):
"""Automagically populate all required instance attributes. Iterate through the set of all required class :class:`nailgun.entity_fields.Field` defined on ``type(self)`` and create a corresponding instance attribute if none exists. Subclasses should override this method if there is some relationship between two required fields. :return: Nothing. This method relies on side-effects. """ |
for field_name, field in self.get_fields().items():
if field.required and not hasattr(self, field_name):
# Most `gen_value` methods return a value such as an integer,
# string or dictionary, but OneTo{One,Many}Field.gen_value
# returns the referenced class.
if hasattr(field, 'default'):
value = field.default
elif hasattr(field, 'choices'):
value = gen_choice(field.choices)
elif isinstance(field, OneToOneField):
value = field.gen_value()(self._server_config).create(True)
elif isinstance(field, OneToManyField):
value = [
field.gen_value()(self._server_config).create(True)
]
else:
value = field.gen_value()
setattr(self, field_name, value) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def update_payload(self, fields=None):
"""Create a payload of values that can be sent to the server. By default, this method behaves just like :func:`_payload`. However, one can also specify a certain set of fields that should be returned. For more information, see :meth:`update`. """ |
values = self.get_values()
if fields is not None:
values = {field: values[field] for field in fields}
return _payload(self.get_fields(), values) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def search_payload(self, fields=None, query=None):
"""Create a search query. Do the following: 1. Generate a search query. By default, all values returned by :meth:`nailgun.entity_mixins.Entity.get_values` are used. If ``fields`` is specified, only the named values are used. 2. Merge ``query`` in to the generated search query. 3. Return the result. The rules for generating a search query can be illustrated by example. Let's say that we have an entity with an :class:`nailgun.entity_fields.IntegerField`, a :class:`nailgun.entity_fields.OneToOneField` and a :class:`nailgun.entity_fields.OneToManyField`:: True True True This method appends "_id" and "_ids" on to the names of each ``OneToOneField`` and ``OneToManyField``, respectively:: {'id': 1, 'one_id': 2, 'many_ids': [3, 4]} By default, all fields are used. But you can specify a set of field names to use:: {'id': 1} {'one_id': 2} {'id': 1, 'one_id': 2} If a ``query`` is specified, it is merged in to the generated query:: {'id': 5, 'one_id': 2, 'many_ids': [3, 4]} {'id': 1, 'one_id': 2, 'many_ids': [3, 4], 'per_page': 1000} .. WARNING:: This method currently generates an extremely naive search query that will be wrong in many cases. In addition, Satellite currently accepts invalid search queries without complaint. Make sure to check the API documentation for your version of Satellite against what this method produces. :param fields: See :meth:`search`. :param query: See :meth:`search`. :returns: A dict that can be encoded as JSON and used in a search. """ |
if fields is None:
fields = set(self.get_values().keys())
if query is None:
query = {}
payload = {}
fields_dict = self.get_fields()
for field in fields:
value = getattr(self, field)
if isinstance(fields_dict[field], OneToOneField):
payload[field + '_id'] = value.id
elif isinstance(fields_dict[field], OneToManyField):
payload[field + '_ids'] = [entity.id for entity in value]
else:
payload[field] = value
payload.update(query)
return payload |
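The renaming rule above ('_id' for one-to-one fields, '_ids' for one-to-many) can be shown without real entities. In this sketch the field classes are empty stand-ins for `nailgun.entity_fields`, and values are plain IDs rather than entity objects:

```python
class OneToOneField:
    """Stand-in for nailgun.entity_fields.OneToOneField."""

class OneToManyField:
    """Stand-in for nailgun.entity_fields.OneToManyField."""

def build_search_payload(values, fields, query=None):
    """Append '_id'/'_ids' to relational field names and merge query,
    mirroring search_payload above."""
    payload = {}
    for name, value in values.items():
        if isinstance(fields[name], OneToOneField):
            payload[name + '_id'] = value
        elif isinstance(fields[name], OneToManyField):
            payload[name + '_ids'] = value
        else:
            payload[name] = value
    payload.update(query or {})
    return payload
```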
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def search_normalize(self, results):
"""Normalize search results so they can be used to create new entities. See :meth:`search` for an example of how to use this method. Here's a simplified example:: results = self.search_json() results = self.search_normalize(results) entity = SomeEntity(some_cfg, **results[0]) At this time, it is possible to parse all search results without knowing what search query was sent to the server. However, it is possible that certain responses can only be parsed if the search query is known. If that is the case, this method will be given a new ``payload`` argument, where ``payload`` is the query sent to the server. As a precaution, the following is higly recommended: * :meth:`search` may alter ``fields`` and ``query`` at will. * :meth:`search_payload` may alter ``fields`` and ``query`` in an idempotent manner. * No other method should alter ``fields`` or ``query``. :param results: A list of dicts, where each dict is a set of attributes for one entity. The contents of these dicts are as is returned from the server. :returns: A list of dicts, where each dict is a set of attributes for one entity. The contents of these dicts have been normalized and can be used to instantiate entities. """ |
fields = self.get_fields()
normalized = []
for result in results:
# For each field that we know about, copy the corresponding field
# from the server's search result. If any extra attributes are
# copied over, Entity.__init__ will raise a NoSuchFieldError.
# Examples of problematic results from server:
#
# * organization_id (denormalized OneToOne. see above)
# * organizations, organization_ids (denormalized OneToMany. above)
# * updated_at, created_at (these may be handled in the future)
# * sp_subnet (Host.sp_subnet is an undocumented field)
#
attrs = {}
for field_name, field in fields.items():
if isinstance(field, OneToOneField):
try:
attrs[field_name] = _get_entity_id(field_name, result)
except MissingValueError:
pass
elif isinstance(field, OneToManyField):
try:
attrs[field_name] = _get_entity_ids(field_name, result)
except MissingValueError:
pass
else:
try:
attrs[field_name] = result[field_name]
except KeyError:
pass
normalized.append(attrs)
return normalized |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def search_filter(entities, filters):
"""Read all ``entities`` and locally filter them. This method can be used like so:: entities = EntitySearchMixin(entities, {'name': 'foo'}) In this example, only entities where ``entity.name == 'foo'`` holds true are returned. An arbitrary number of field names and values may be provided as filters. .. NOTE:: This method calls :meth:`EntityReadMixin.read`. As a result, this method only works when called on a class that also inherits from :class:`EntityReadMixin`. :param entities: A list of :class:`Entity` objects. All list items should be of the same type. :param filters: A dict in the form ``{field_name: field_value, …}``. :raises nailgun.entity_mixins.NoSuchFieldError: If any of the fields named in ``filters`` do not exist on the entities being filtered. :raises: ``NotImplementedError`` If any of the fields named in ``filters`` are a :class:`nailgun.entity_fields.OneToOneField` or :class:`nailgun.entity_fields.OneToManyField`. """ |
# Check to make sure all arguments are sane.
if len(entities) == 0:
return entities
fields = entities[0].get_fields() # assume all entities are identical
if not set(filters).issubset(fields):
raise NoSuchFieldError(
'Valid filters are {0}, but received {1} instead.'
.format(fields.keys(), filters.keys())
)
for field_name in filters:
if isinstance(fields[field_name], (OneToOneField, OneToManyField)):
raise NotImplementedError(
'Search results cannot (yet?) be locally filtered by '
'`OneToOneField`s and `OneToManyField`s. {0} is a {1}.'
.format(field_name, type(fields[field_name]).__name__)
)
# The arguments are sane. Filter away!
filtered = [entity.read() for entity in entities] # don't alter inputs
for field_name, field_value in filters.items():
filtered = [
entity for entity in filtered
if getattr(entity, field_name) == field_value
]
return filtered |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_config_file_path(xdg_config_dir, xdg_config_file):
"""Search ``XDG_CONFIG_DIRS`` for a config file and return the first found. Search each of the standard XDG configuration directories for a configuration file. Return as soon as a configuration file is found. Beware that by the time client code attempts to open the file, it may be gone or otherwise inaccessible. :param xdg_config_dir: A string. The name of the directory that is suffixed to the end of each of the ``XDG_CONFIG_DIRS`` paths. :param xdg_config_file: A string. The name of the configuration file that is being searched for. :returns: A ``str`` path to a configuration file. :raises nailgun.config.ConfigFileError: When no configuration file can be found. """ |
for config_dir in BaseDirectory.load_config_paths(xdg_config_dir):
path = join(config_dir, xdg_config_file)
if isfile(path):
return path
raise ConfigFileError(
'No configuration files could be located after searching for a file '
'named "{0}" in the standard XDG configuration paths, such as '
'"~/.config/{1}/".'.format(xdg_config_file, xdg_config_dir)
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def delete(cls, label='default', path=None):
"""Delete a server configuration. This method is thread safe. :param label: A string. The configuration identified by ``label`` is deleted. :param path: A string. The configuration file to be manipulated. Defaults to what is returned by :func:`nailgun.config._get_config_file_path`. :returns: ``None`` """ |
if path is None:
path = _get_config_file_path(
cls._xdg_config_dir,
cls._xdg_config_file
)
cls._file_lock.acquire()
try:
with open(path) as config_file:
config = json.load(config_file)
del config[label]
with open(path, 'w') as config_file:
json.dump(config, config_file)
finally:
cls._file_lock.release() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_labels(cls, path=None):
"""Get all server configuration labels. :param path: A string. The configuration file to be manipulated. Defaults to what is returned by :func:`nailgun.config._get_config_file_path`. :returns: Server configuration labels, where each label is a string. """ |
if path is None:
path = _get_config_file_path(
cls._xdg_config_dir,
cls._xdg_config_file
)
with open(path) as config_file:
# keys() returns a list in Python 2 and a view in Python 3.
return tuple(json.load(config_file).keys()) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def save(self, label='default', path=None):
"""Save the current connection configuration to a file. This method is thread safe. :param label: A string. An identifier for the current configuration. This allows multiple configurations with unique labels to be saved in a single file. If a configuration identified by ``label`` already exists in the destination configuration file, it is replaced. :param path: A string. The configuration file to be manipulated. By default, an XDG-compliant configuration file is used. A configuration file is created if one does not exist already. :returns: ``None`` """ |
# What will we write out?
cfg = vars(self)
if 'version' in cfg: # pragma: no cover
cfg['version'] = str(cfg['version'])
# Where is the file we're writing to?
if path is None:
path = join(
BaseDirectory.save_config_path(self._xdg_config_dir),
self._xdg_config_file
)
self._file_lock.acquire()
try:
# Either read an existing config or make an empty one. Then update
# the config and write it out.
try:
with open(path) as config_file:
config = json.load(config_file)
except IOError: # pragma: no cover
config = {}
config[label] = cfg
with open(path, 'w') as config_file:
json.dump(config, config_file)
finally:
self._file_lock.release() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _log_request(method, url, kwargs, data=None, params=None):
"""Log out information about the arguments given. The arguments provided to this function correspond to the arguments that one can pass to ``requests.request``. :return: Nothing is returned. """ |
logger.debug(
'Making HTTP %s request to %s with %s, %s and %s.',
method,
url,
'options {0}'.format(kwargs) if len(kwargs) > 0 else 'no options',
'params {0}'.format(params) if params else 'no params',
'data {0}'.format(data) if data is not None else 'no data',
) |