| text_prompt | code_prompt |
|---|---|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def SetVoltage(self, v):
"""Set the output voltage, 0 to disable. """ |
if v == 0:
self._SendStruct("BBB", 0x01, 0x01, 0x00)
else:
self._SendStruct("BBB", 0x01, 0x01, int((v - 2.0) * 100)) |
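The byte sent above counts hundredths of a volt above a 2.0 V floor, with 0 reserved for "output disabled". A minimal sketch of just the encoding step (the helper name `encode_voltage` is hypothetical, not part of the Monsoon API):

```python
def encode_voltage(v):
    """Encode a target output voltage as the protocol byte.

    0 disables the output; any other value is sent as hundredths of
    a volt above the 2.0 V minimum, i.e. int((v - 2.0) * 100).
    """
    if v == 0:
        return 0x00
    return int((v - 2.0) * 100)
```

For a typical 4.2 V phone-battery setting this yields 220.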
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def SetMaxCurrent(self, i):
"""Set the max output current. """ |
if i < 0 or i > 8:
raise MonsoonError(("Target max current %sA, is out of acceptable "
"range [0, 8].") % i)
val = 1023 - int((i / 8.0) * 1023)  # float division; on Python 2, i / 8 truncates for int i
self._SendStruct("BBB", 0x01, 0x0a, val & 0xff)
self._SendStruct("BBB", 0x01, 0x0b, val >> 8) |
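The limit is encoded as an inverted 10-bit DAC value split into a low byte (register `0x0a`) and a high byte (register `0x0b`). A standalone sketch of that arithmetic (the helper name `encode_current_limit` is mine, not Monsoon API):

```python
def encode_current_limit(i):
    """Map a current limit in amps [0, 8] onto the inverted 10-bit
    value the Monsoon expects, returned as (low_byte, high_byte)."""
    if i < 0 or i > 8:
        raise ValueError("limit %sA outside [0, 8]" % i)
    val = 1023 - int((i / 8.0) * 1023)
    return val & 0xff, val >> 8
```

8 A maps to (0, 0); 0 A maps to the full-scale 1023 = 0x3ff, split as (255, 3).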
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def SetMaxPowerUpCurrent(self, i):
"""Set the max power up current. """ |
if i < 0 or i > 8:
raise MonsoonError(("Target max current %sA, is out of acceptable "
"range [0, 8].") % i)
val = 1023 - int((i / 8.0) * 1023)  # float division; on Python 2, i / 8 truncates for int i
self._SendStruct("BBB", 0x01, 0x08, val & 0xff)
self._SendStruct("BBB", 0x01, 0x09, val >> 8) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _FlushInput(self):
""" Flush all read data until no more available. """ |
self.ser.flush()
flushed = 0
while True:
ready_r, ready_w, ready_x = select.select([self.ser], [],
[self.ser], 0)
if len(ready_x) > 0:
logging.error("Exception from serial port.")
return None
elif len(ready_r) > 0:
flushed += 1
self.ser.read(1) # This may cause underlying buffering.
self.ser.flush() # Flush the underlying buffer too.
else:
break |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def average_current(self):
"""Average current in the unit of mA. """ |
len_data_pt = len(self.data_points)
if len_data_pt == 0:
return 0
cur = sum(self.data_points) * 1000 / len_data_pt
return round(cur, self.sr) |
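The property converts amp samples to mA and rounds to `self.sr` digits. Isolated as a pure function, under the assumption that samples are in amps:

```python
def average_current_ma(data_points, sig_digits=2):
    """Average of current samples (in amps), reported in mA.

    Returns 0 for an empty sample list, as the property above does.
    """
    if not data_points:
        return 0
    return round(sum(data_points) * 1000 / len(data_points), sig_digits)
```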
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def total_charge(self):
"""Total charge used, in the unit of mAh. """ |
charge = (sum(self.data_points) / self.hz) * 1000 / 3600
return round(charge, self.sr) |
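The one-liner packs two unit conversions together: `sum / hz` gives amp-seconds (coulombs), and `* 1000 / 3600` converts coulombs to milliamp-hours. Spelled out as a sketch:

```python
def total_charge_mah(data_points, hz, sig_digits=2):
    """Charge in mAh from current samples (amps) taken at `hz`.

    Each sample covers 1/hz seconds, so sum(samples)/hz is
    amp-seconds; multiplying by 1000/3600 yields milliamp-hours.
    """
    return round((sum(data_points) / hz) * 1000 / 3600, sig_digits)
```

One hour of 1 A samples taken at 1 Hz comes out to exactly 1000 mAh.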
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def total_power(self):
"""Total power used. """ |
power = self.average_current * self.voltage
return round(power, self.sr) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def save_to_text_file(monsoon_data, file_path):
"""Save multiple MonsoonData objects to a text file. Args: monsoon_data: A list of MonsoonData objects to write to a text file. file_path: The full path of the file to save to, including the file name. """ |
if not monsoon_data:
raise MonsoonError("Attempting to write empty Monsoon data to "
"file, abort")
utils.create_dir(os.path.dirname(file_path))
with io.open(file_path, 'w', encoding='utf-8') as f:
for md in monsoon_data:
f.write(str(md))
f.write(MonsoonData.delimiter) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def from_text_file(file_path):
"""Load MonsoonData objects from a text file generated by MonsoonData.save_to_text_file. Args: file_path: The full path of the file load from, including the file name. Returns: A list of MonsoonData objects. """ |
results = []
with io.open(file_path, 'r', encoding='utf-8') as f:
data_strs = f.read().split(MonsoonData.delimiter)
for data_str in data_strs:
results.append(MonsoonData.from_string(data_str))
return results |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _validate_data(self):
"""Verifies that the data points contained in the class are valid. """ |
if len(self._data_points) != len(self._timestamps):
    msg = "Error! Expected {} timestamps, found {}.".format(
        len(self._data_points), len(self._timestamps))
    raise MonsoonError(msg) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def update_offset(self, new_offset):
"""Updates how many data points to skip in calculations. Always use this function to update offset instead of directly setting self.offset. Args: new_offset: The new offset. """ |
self.offset = new_offset
self.data_points = self._data_points[self.offset:]
self.timestamps = self._timestamps[self.offset:] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_data_with_timestamps(self):
"""Returns the data points with timestamps. Returns: A list of tuples in the format of (timestamp, data) """ |
result = []
for t, d in zip(self.timestamps, self.data_points):
    # Append as a (timestamp, value) tuple; list.append takes one argument.
    result.append((t, round(d, self.lr)))
return result |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_average_record(self, n):
"""Returns a list of average current numbers, each representing the average over the last n data points. Args: n: Number of data points to average over. Returns: A list of average current values. """ |
history_deque = collections.deque()
averages = []
for d in self.data_points:
history_deque.appendleft(d)
if len(history_deque) > n:
history_deque.pop()
avg = sum(history_deque) / len(history_deque)
averages.append(round(avg, self.lr))
return averages |
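The deque-based rolling average above can be isolated from the class. A sketch under the assumption that `self.lr` is just a rounding precision:

```python
import collections

def rolling_averages(data_points, n, sig_digits=2):
    """Average over a sliding window of the n most recent points.

    appendleft/pop keeps the newest sample on the left and evicts
    the oldest from the right once the window exceeds n.
    """
    window = collections.deque()
    averages = []
    for d in data_points:
        window.appendleft(d)
        if len(window) > n:
            window.pop()
        averages.append(round(sum(window) / len(window), sig_digits))
    return averages
```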
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set_voltage(self, volt, ramp=False):
"""Sets the output voltage of monsoon. Args: volt: Voltage to set the output to. ramp: If true, the output voltage will be increased gradually to prevent tripping Monsoon overvoltage. """ |
if ramp:
self.mon.RampVoltage(self.mon.start_voltage, volt)
else:
self.mon.SetVoltage(volt) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def take_samples(self, sample_hz, sample_num, sample_offset=0, live=False):
"""Take samples of the current value supplied by monsoon. This is the actual measurement for power consumption. This function blocks until the number of samples requested has been fulfilled. Args: sample_hz: Number of points to take for every second. sample_num: Number of samples to take. sample_offset: The number of initial data points to discard in MonsoonData calculations. sample_num is extended by sample_offset to compensate. live: Print each sample in console as measurement goes on. Returns: A MonsoonData object representing the data obtained in this sampling. None if sampling is unsuccessful. """ |
sys.stdout.flush()
voltage = self.mon.GetVoltage()
self.log.info("Taking samples at %dhz for %ds, voltage %.2fv.",
sample_hz, (sample_num / sample_hz), voltage)
sample_num += sample_offset
# Make sure state is normal
self.mon.StopDataCollection()
status = self.mon.GetStatus()
native_hz = status["sampleRate"] * 1000
# Collect and average samples as specified
self.mon.StartDataCollection()
# In case sample_hz doesn't divide native_hz exactly, use this
# invariant: 'offset' = (consumed samples) * sample_hz -
# (emitted samples) * native_hz
# This is the error accumulator in a variation of Bresenham's
# algorithm.
emitted = offset = 0
collected = []
# past n samples for rolling average
history_deque = collections.deque()
current_values = []
timestamps = []
try:
last_flush = time.time()
while emitted < sample_num or sample_num == -1:
# The number of raw samples to consume before emitting the next
# output
need = int((native_hz - offset + sample_hz - 1) / sample_hz)
if need > len(collected): # still need more input samples
samples = self.mon.CollectData()
if not samples:
break
collected.extend(samples)
else:
# Have enough data, generate output samples.
# Adjust for consuming 'need' input samples.
offset += need * sample_hz
# maybe multiple, if sample_hz > native_hz
while offset >= native_hz:
# TODO(angli): Optimize "collected" operations.
this_sample = sum(collected[:need]) / need
this_time = int(time.time())
timestamps.append(this_time)
if live:
self.log.info("%s %s", this_time, this_sample)
current_values.append(this_sample)
sys.stdout.flush()
offset -= native_hz
emitted += 1 # adjust for emitting 1 output sample
collected = collected[need:]
now = time.time()
if now - last_flush >= 0.99: # flush every second
sys.stdout.flush()
last_flush = now
except Exception as e:
    # Surface the failure instead of swallowing it silently.
    self.log.warning("Data collection stopped early: %s", e)
self.mon.StopDataCollection()
try:
return MonsoonData(
current_values,
timestamps,
sample_hz,
voltage,
offset=sample_offset)
except Exception:
    return None |
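The offset bookkeeping in the loop is a Bresenham-style error accumulator: it maintains offset = consumed * sample_hz - emitted * native_hz, so averaging-window sizes self-correct when sample_hz does not divide native_hz. A pure-function sketch of just that resampling logic (names are mine):

```python
def resample(samples, native_hz, sample_hz):
    """Downsample `samples` captured at native_hz to sample_hz by
    averaging variable-size windows chosen with an error accumulator
    (invariant: offset == consumed * sample_hz - emitted * native_hz).
    """
    offset = 0
    out = []
    buf = list(samples)
    while True:
        # Raw samples to consume before emitting the next output.
        need = (native_hz - offset + sample_hz - 1) // sample_hz
        if need > len(buf):
            break  # out of input
        offset += need * sample_hz
        while offset >= native_hz:  # maybe several, if sample_hz > native_hz
            out.append(sum(buf[:need]) / need)
            offset -= native_hz
        buf = buf[need:]
    return out
```

With native_hz=3 and sample_hz=2, windows alternate between two and one raw samples, keeping the output rate exact over time.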
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def usb(self, state):
"""Sets the monsoon's USB passthrough mode. This is specific to the USB port in front of the monsoon box which connects to the powered device, NOT the USB that is used to talk to the monsoon itself. "Off" means USB always off. "On" means USB always on. "Auto" means USB is automatically turned off when sampling is going on, and turned back on when sampling finishes. Args: state: The state to set the USB passthrough to. Returns: True if the state is legal and set. False otherwise. """ |
state_lookup = {"off": 0, "on": 1, "auto": 2}
state = state.lower()
if state in state_lookup:
current_state = self.mon.GetUsbPassthrough()
while (current_state != state_lookup[state]):
self.mon.SetUsbPassthrough(state_lookup[state])
time.sleep(1)
current_state = self.mon.GetUsbPassthrough()
return True
return False |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def measure_power(self, hz, duration, tag, offset=30):
"""Measure power consumption of the attached device. Because it takes some time for the device to calm down after the usb connection is cut, an offset is set for each measurement. The default is 30s. The total time taken to measure will be (duration + offset). Args: hz: Number of samples to take per second. duration: Number of seconds to take samples for in each step. tag: A string that's the name of the collected data group. offset: The number of seconds of initial data to discard. Returns: A MonsoonData object with the measured power data. """ |
num = duration * hz
oset = offset * hz
data = None
self.usb("auto")
time.sleep(1)
with self.dut.handle_usb_disconnect():
time.sleep(1)
try:
data = self.take_samples(hz, num, sample_offset=oset)
if not data:
raise MonsoonError(
"No data was collected in measurement %s." % tag)
data.tag = tag
self.dut.log.info("Measurement summary: %s", repr(data))
return data
finally:
self.mon.StopDataCollection()
self.log.info("Finished taking samples, reconnecting to dut.")
self.usb("on")
self.dut.adb.wait_for_device(timeout=DEFAULT_TIMEOUT_USB_ON)
# Wait for device to come back online.
time.sleep(10)
self.dut.log.info("Dut reconnected.") |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _set_details(self, content):
"""Sets the `details` field. Args: content: the content to extract details from. """ |
try:
self.details = str(content)
except UnicodeEncodeError:
if sys.version_info < (3, 0):
# If Py2 threw encode error, convert to unicode.
self.details = unicode(content)
else:
# We should never hit this in Py3, if this happens, record
# an encoded version of the content for users to handle.
logging.error(
'Unable to decode "%s" in Py3, encoding in utf-8.',
content)
self.details = content.encode('utf-8') |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _load_config_file(path):
"""Loads a test config file. The test config file has to be in YAML format. Args: path: A string that is the full path to the config file, including the file name. Returns: A dict that represents info in the config file. """ |
with io.open(utils.abs_path(path), 'r', encoding='utf-8') as f:
conf = yaml.safe_load(f)  # safe_load: config files need no arbitrary Python objects
return conf |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_snippet_client(self, name, package):
"""Adds a snippet client to the management. Args: name: string, the attribute name to which to attach the snippet client. E.g. `name='maps'` attaches the snippet client to `ad.maps`. package: string, the package name of the snippet apk to connect to. Raises: Error, if a duplicated name or package is passed in. """ |
# Should not load snippet with the same name more than once.
if name in self._snippet_clients:
raise Error(
self,
'Name "%s" is already registered with package "%s", it cannot '
'be used again.' %
(name, self._snippet_clients[name].client.package))
# Should not load the same snippet package more than once.
for snippet_name, client in self._snippet_clients.items():
if package == client.package:
raise Error(
self,
'Snippet package "%s" has already been loaded under name'
' "%s".' % (package, snippet_name))
client = snippet_client.SnippetClient(package=package, ad=self._device)
client.start_app_and_connect()
self._snippet_clients[name] = client |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def remove_snippet_client(self, name):
"""Removes a snippet client from management. Args: name: string, the name of the snippet client to remove. Raises: Error: if no snippet client is managed under the specified name. """ |
if name not in self._snippet_clients:
raise Error(self._device, MISSING_SNIPPET_CLIENT_MSG % name)
client = self._snippet_clients.pop(name)
client.stop_app() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def start(self):
"""Starts all the snippet clients under management.""" |
for client in self._snippet_clients.values():
if not client.is_alive:
self._device.log.debug('Starting SnippetClient<%s>.',
client.package)
client.start_app_and_connect()
else:
self._device.log.debug(
'Not starting SnippetClient<%s> because it is already alive.',
client.package) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def stop(self):
"""Stops all the snippet clients under management.""" |
for client in self._snippet_clients.values():
if client.is_alive:
self._device.log.debug('Stopping SnippetClient<%s>.',
client.package)
client.stop_app()
else:
self._device.log.debug(
'Not stopping SnippetClient<%s> because it is not alive.',
client.package) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def pause(self):
"""Pauses all the snippet clients under management. This clears the host port of a client because a new port will be allocated in `resume`. """ |
for client in self._snippet_clients.values():
self._device.log.debug(
'Clearing host port %d of SnippetClient<%s>.',
client.host_port, client.package)
client.clear_host_port() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def resume(self):
"""Resumes all paused snippet clients.""" |
for client in self._snippet_clients.values():
# Resume is only applicable if a client is alive and does not have
# a host port.
if client.is_alive and client.host_port is None:
self._device.log.debug('Resuming SnippetClient<%s>.',
client.package)
client.restore_app_connection()
else:
self._device.log.debug('Not resuming SnippetClient<%s>.',
client.package) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def poll_events(self):
"""Continuously polls all types of events from sl4a. Events are sorted by name and stored in separate queues. If there are registered handlers, the handlers will be called with the corresponding event immediately upon event discovery, and the event won't be stored. If exceptions occur, stop the dispatcher and return. """ |
while self.started:
event_obj = None
event_name = None
try:
event_obj = self._sl4a.eventWait(50000)
except:
if self.started:
print("Exception happened during polling.")
print(traceback.format_exc())
raise
if not event_obj:
continue
elif 'name' not in event_obj:
print("Received malformed event {}".format(event_obj))
continue
else:
event_name = event_obj['name']
# if handler registered, process event
if event_name in self.handlers:
self.handle_subscribed_event(event_obj, event_name)
if event_name == "EventDispatcherShutdown":
self._sl4a.closeSl4aSession()
break
else:
self.lock.acquire()
if event_name in self.event_dict: # otherwise, cache event
self.event_dict[event_name].put(event_obj)
else:
q = queue.Queue()
q.put(event_obj)
self.event_dict[event_name] = q
self.lock.release() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def register_handler(self, handler, event_name, args):
"""Registers an event handler. One type of event can only have one event handler associated with it. Args: handler: The event handler function to be registered. event_name: Name of the event the handler is for. args: User arguments to be passed to the handler when it's called. Raises: IllegalStateError: Raised if attempts to register a handler after the dispatcher starts running. DuplicateError: Raised if attempts to register more than one handler for one type of event. """ |
if self.started:
raise IllegalStateError(("Can't register service after polling is"
" started"))
self.lock.acquire()
try:
if event_name in self.handlers:
raise DuplicateError('A handler for {} already exists'.format(
event_name))
self.handlers[event_name] = (handler, args)
finally:
self.lock.release() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def start(self):
"""Starts the event dispatcher. Initiates executor and start polling events. Raises: IllegalStateError: Can't start a dispatcher again when it's already running. """ |
if not self.started:
self.started = True
self.executor = ThreadPoolExecutor(max_workers=32)
self.poller = self.executor.submit(self.poll_events)
else:
raise IllegalStateError("Dispatcher is already started.") |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def clean_up(self):
"""Clean up and release resources after the event dispatcher polling loop has been broken. The following things happen: 1. Clear all events and flags. 2. Close the sl4a client the event_dispatcher object holds. 3. Shut down executor without waiting. """ |
if not self.started:
return
self.started = False
self.clear_all_events()
# At this point, the sl4a apk is destroyed and nothing is listening on
# the socket. Avoid sending any sl4a commands; just clean up the socket
# and return.
self._sl4a.disconnect()
self.poller.set_result("Done")
# The polling thread is guaranteed to finish after a max of 60 seconds,
# so we don't wait here.
self.executor.shutdown(wait=False) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def pop_event(self, event_name, timeout=DEFAULT_TIMEOUT):
"""Pop an event from its queue. Return and remove the oldest entry of an event. Block until an event of specified name is available or times out if timeout is set. Args: event_name: Name of the event to be popped. timeout: Number of seconds to wait when event is not present. Never times out if None. Returns: The oldest entry of the specified event. None if timed out. Raises: IllegalStateError: Raised if pop is called before the dispatcher starts polling. """ |
if not self.started:
raise IllegalStateError(
"Dispatcher needs to be started before popping.")
e_queue = self.get_event_q(event_name)
if not e_queue:
raise TypeError("Failed to get an event queue for {}".format(
event_name))
try:
# Block for timeout
if timeout:
return e_queue.get(True, timeout)
# Non-blocking poll for event
elif timeout == 0:
return e_queue.get(False)
else:
# Block forever on event wait
return e_queue.get(True)
except queue.Empty:
raise queue.Empty('Timeout after {}s waiting for event: {}'.format(
timeout, event_name)) |
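The three branches map directly onto `queue.Queue.get` modes. Isolated as a sketch on a plain queue (the helper name is hypothetical):

```python
import queue

def pop_with_timeout(q, timeout):
    """Mirror pop_event's timeout semantics: a positive timeout
    blocks up to that many seconds, 0 polls without blocking, and
    None blocks until an item arrives. queue.Empty propagates if
    nothing shows up in time.
    """
    if timeout:
        return q.get(True, timeout)
    elif timeout == 0:
        return q.get(False)
    else:
        return q.get(True)
```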
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def wait_for_event(self, event_name, predicate, timeout=DEFAULT_TIMEOUT, *args, **kwargs):
"""Wait for an event that satisfies a predicate to appear. Continuously pop events of a particular name and check against the predicate until an event that satisfies the predicate is popped or timed out. Note this will remove all the events of the same name that do not satisfy the predicate in the process. Args: event_name: Name of the event to be popped. predicate: A function that takes an event and returns True if the predicate is satisfied, False otherwise. timeout: Number of seconds to wait. *args: Optional positional args passed to predicate(). **kwargs: Optional keyword args passed to predicate(). Returns: The event that satisfies the predicate. Raises: queue.Empty: Raised if no event that satisfies the predicate was found before time out. """ |
deadline = time.time() + timeout
while True:
event = None
try:
event = self.pop_event(event_name, 1)
except queue.Empty:
pass
if event and predicate(event, *args, **kwargs):
return event
if time.time() > deadline:
raise queue.Empty(
'Timeout after {}s waiting for event: {}'.format(
timeout, event_name)) |
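The wait loop discards non-matching events and converts the short per-pop timeout into an overall deadline. A self-contained sketch where `pop` is any callable that raises `queue.Empty` when nothing is available (names are mine):

```python
import time
import queue

def wait_for_match(pop, predicate, timeout):
    """Pop events until one satisfies `predicate` or the deadline
    passes; non-matching events are dropped, as in wait_for_event."""
    deadline = time.time() + timeout
    while True:
        event = None
        try:
            event = pop()
        except queue.Empty:
            pass
        if event is not None and predicate(event):
            return event
        if time.time() > deadline:
            raise queue.Empty(
                "Timeout after %ss waiting for a matching event" % timeout)
```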
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def pop_events(self, regex_pattern, timeout):
"""Pop events whose names match a regex pattern. If such event(s) exist, pop one event from each event queue that satisfies the condition. Otherwise, wait for an event that satisfies the condition to occur, with timeout. Results are sorted by timestamp in ascending order. Args: regex_pattern: The regular expression pattern that an event name should match in order to be popped. timeout: Number of seconds to wait for events in case no event matching the condition exits when the function is called. Returns: Events whose names match a regex pattern. Empty if none exist and the wait timed out. Raises: IllegalStateError: Raised if pop is called before the dispatcher starts polling. queue.Empty: Raised if no event was found before time out. """ |
if not self.started:
raise IllegalStateError(
"Dispatcher needs to be started before popping.")
deadline = time.time() + timeout
while True:
#TODO: fix the sleep loop
results = self._match_and_pop(regex_pattern)
if len(results) != 0 or time.time() > deadline:
break
time.sleep(1)
if len(results) == 0:
raise queue.Empty('Timeout after {}s waiting for event: {}'.format(
timeout, regex_pattern))
return sorted(results, key=lambda event: event['time']) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_event_q(self, event_name):
"""Obtain the queue storing events of the specified name. If no event of this name has been polled, wait for one to. Returns: A queue storing all the events of the specified name. None if timed out. Raises: queue.Empty: Raised if the queue does not exist and timeout has passed. """ |
self.lock.acquire()
if event_name not in self.event_dict or self.event_dict[
        event_name] is None:
self.event_dict[event_name] = queue.Queue()
self.lock.release()
event_queue = self.event_dict[event_name]
return event_queue |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def handle_subscribed_event(self, event_obj, event_name):
"""Execute the registered handler of an event. Retrieve the handler and its arguments, and execute the handler in a new thread. Args: event_obj: Json object of the event. event_name: Name of the event to call handler for. """ |
handler, args = self.handlers[event_name]
self.executor.submit(handler, event_obj, *args) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _handle(self, event_handler, event_name, user_args, event_timeout, cond, cond_timeout):
"""Pop an event of specified type and calls its handler on it. If condition is not None, block until condition is met or timeout. """ |
if cond:
cond.wait(cond_timeout)
event = self.pop_event(event_name, event_timeout)
return event_handler(event, *user_args) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def handle_event(self, event_handler, event_name, user_args, event_timeout=None, cond=None, cond_timeout=None):
"""Handle events that don't have registered handlers In a new thread, poll one event of specified type from its queue and execute its handler. If no such event exists, the thread waits until one appears. Args: event_handler: Handler for the event, which should take at least one argument - the event json object. event_name: Name of the event to be handled. user_args: User arguments for the handler; to be passed in after the event json. event_timeout: Number of seconds to wait for the event to come. cond: A condition to wait on before executing the handler. Should be a threading.Event object. cond_timeout: Number of seconds to wait before the condition times out. Never times out if None. Returns: A concurrent.Future object associated with the handler. If blocking call worker.result() is triggered, the handler needs to return something to unblock. """ |
worker = self.executor.submit(self._handle, event_handler, event_name,
user_args, event_timeout, cond,
cond_timeout)
return worker |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def pop_all(self, event_name):
"""Return and remove all stored events of a specified name. Pops all events from their queue. May miss the latest ones. If no event is available, return immediately. Args: event_name: Name of the events to be popped. Returns: List of the desired events. Raises: IllegalStateError: Raised if pop is called before the dispatcher starts polling. """ |
if not self.started:
raise IllegalStateError(("Dispatcher needs to be started before "
"popping."))
results = []
try:
self.lock.acquire()
while True:
e = self.event_dict[event_name].get(block=False)
results.append(e)
except (queue.Empty, KeyError):
return results
finally:
self.lock.release() |
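The get-until-Empty drain pattern used above, on a bare queue:

```python
import queue

def drain(q):
    """Remove and return every currently queued item without blocking.

    Items put concurrently after the drain starts may be missed,
    the same caveat pop_all documents.
    """
    items = []
    try:
        while True:
            items.append(q.get(block=False))
    except queue.Empty:
        return items
```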
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def clear_events(self, event_name):
"""Clear all events of a particular name. Args: event_name: Name of the events to be popped. """ |
self.lock.acquire()
try:
q = self.get_event_q(event_name)
q.queue.clear()
except queue.Empty:
return
finally:
self.lock.release() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def clear_all_events(self):
"""Clear all event queues and their cached events.""" |
self.lock.acquire()
self.event_dict.clear()
self.lock.release() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def from_dict(event_dict):
"""Create a SnippetEvent object from a dictionary. Args: event_dict: a dictionary representing an event. Returns: A SnippetEvent object. """ |
return SnippetEvent(
callback_id=event_dict['callbackId'],
name=event_dict['name'],
creation_time=event_dict['time'],
data=event_dict['data']) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_dir(path):
"""Creates a directory if it does not exist already. Args: path: The path of the directory to create. """ |
full_path = abs_path(path)
if not os.path.exists(full_path):
try:
os.makedirs(full_path)
except OSError as e:
# ignore the error for dir already exist.
if e.errno != errno.EEXIST:  # needs 'import errno'; os.errno was removed in Python 3.10
raise |
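The EEXIST check is what makes the mkdir race-free: attempting creation and ignoring only the "already exists" error avoids the window between an existence check and the create. A self-contained sketch; on Python 3, the whole body collapses to `os.makedirs(path, exist_ok=True)`:

```python
import errno
import os

def ensure_dir(path):
    """Create `path` (and missing parents); tolerate a concurrent
    creation by ignoring EEXIST and re-raising everything else."""
    try:
        os.makedirs(path)
    except OSError as e:
        if e.errno != errno.EEXIST:
            raise
```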
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_alias(target_path, alias_path):
"""Creates an alias at 'alias_path' pointing to the file 'target_path'. On Unix, this is implemented via symlink. On Windows, this is done by creating a Windows shortcut file. Args: target_path: Destination path that the alias should point to. alias_path: Path at which to create the new alias. """ |
if platform.system() == 'Windows' and not alias_path.endswith('.lnk'):
alias_path += '.lnk'
if os.path.lexists(alias_path):
os.remove(alias_path)
if platform.system() == 'Windows':
from win32com import client
shell = client.Dispatch('WScript.Shell')
shortcut = shell.CreateShortCut(alias_path)
shortcut.Targetpath = target_path
shortcut.save()
else:
os.symlink(target_path, alias_path) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def epoch_to_human_time(epoch_time):
"""Converts an epoch timestamp to human readable time. This essentially converts an output of get_current_epoch_time to an output of get_current_human_time Args: epoch_time: An integer representing an epoch timestamp in milliseconds. Returns: A time string representing the input time. None if input param is invalid. """ |
if isinstance(epoch_time, int):
try:
d = datetime.datetime.fromtimestamp(epoch_time / 1000)
return d.strftime("%m-%d-%Y %H:%M:%S ")
except ValueError:
return None |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def find_files(paths, file_predicate):
"""Locate files whose names and extensions match the given predicate in the specified directories. Args: paths: A list of directory paths where to find the files. file_predicate: A function that returns True if the file name and extension are desired. Returns: A list of files that match the predicate. """ |
file_list = []
for path in paths:
p = abs_path(path)
for dirPath, _, fileList in os.walk(p):
for fname in fileList:
name, ext = os.path.splitext(fname)
if file_predicate(name, ext):
file_list.append((dirPath, name, ext))
return file_list |
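The helper above depends on a module-level `abs_path`; the sketch below substitutes `os.path.abspath` (an assumption) and exercises the predicate against a throwaway directory tree:

```python
import os
import tempfile

def find_files(paths, file_predicate):
    # Walk each directory, collecting (dir, name, ext) for matching files.
    file_list = []
    for path in paths:
        p = os.path.abspath(path)  # stands in for the module's abs_path
        for dir_path, _, file_names in os.walk(p):
            for fname in file_names:
                name, ext = os.path.splitext(fname)
                if file_predicate(name, ext):
                    file_list.append((dir_path, name, ext))
    return file_list

with tempfile.TemporaryDirectory() as tmp:
    open(os.path.join(tmp, 'a.txt'), 'w').close()
    open(os.path.join(tmp, 'b.log'), 'w').close()
    hits = find_files([tmp], lambda name, ext: ext == '.txt')
    print([name for _, name, _ in hits])  # → ['a']
```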
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def load_file_to_base64_str(f_path):
"""Loads the content of a file into a base64 string. Args: f_path: full path to the file including the file name. Returns: A base64 string representing the content of the file in utf-8 encoding. """ |
path = abs_path(f_path)
with io.open(path, 'rb') as f:
f_bytes = f.read()
base64_str = base64.b64encode(f_bytes).decode("utf-8")
return base64_str |
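Again substituting `os.path.abspath` for the module's `abs_path` helper (an assumption), the function can be tried end to end on a temporary file:

```python
import base64
import io
import os
import tempfile

def load_file_to_base64_str(f_path):
    # Read the file as bytes and return its base64 text representation.
    path = os.path.abspath(f_path)
    with io.open(path, 'rb') as f:
        return base64.b64encode(f.read()).decode('utf-8')

with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b'hello')
    tmp_name = tmp.name
print(load_file_to_base64_str(tmp_name))  # → aGVsbG8=
os.remove(tmp_name)
```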
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def find_field(item_list, cond, comparator, target_field):
"""Finds the value of a field in a dict object that satisfies certain conditions. Args: item_list: A list of dict objects. cond: A param that defines the condition. comparator: A function that checks if an dict satisfies the condition. target_field: Name of the field whose value to be returned if an item satisfies the condition. Returns: Target value or None if no item satisfies the condition. """ |
for item in item_list:
if comparator(item, cond) and target_field in item:
return item[target_field]
return None |
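For example, picking a field out of a list of device records (the record shapes and comparator here are purely illustrative):

```python
def find_field(item_list, cond, comparator, target_field):
    # Return target_field of the first item satisfying the condition.
    for item in item_list:
        if comparator(item, cond) and target_field in item:
            return item[target_field]
    return None

devices = [{'serial': 'A1', 'model': 'pixel'},
           {'serial': 'B2', 'model': 'nexus'}]

def match_model(item, cond):
    return item.get('model') == cond

print(find_field(devices, 'nexus', match_model, 'serial'))   # → B2
print(find_field(devices, 'iphone', match_model, 'serial'))  # → None
```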
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def rand_ascii_str(length):
"""Generates a random string of specified length, composed of ascii letters and digits. Args: length: The number of characters in the string. Returns: The random string generated. """ |
letters = [random.choice(ascii_letters_and_digits) for _ in range(length)]
return ''.join(letters) |
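The snippet relies on a module-level `ascii_letters_and_digits`; a standalone sketch defines it from the stdlib `string` module (assumed to match the original's intent):

```python
import random
import string

ascii_letters_and_digits = string.ascii_letters + string.digits

def rand_ascii_str(length):
    # Build a random string of ascii letters and digits.
    letters = [random.choice(ascii_letters_and_digits) for _ in range(length)]
    return ''.join(letters)

s = rand_ascii_str(12)
print(len(s))  # → 12
```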
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def concurrent_exec(func, param_list):
"""Executes a function with different parameters pseudo-concurrently. This is basically a map function. Each element (should be an iterable) in the param_list is unpacked and passed into the function. Due to Python's GIL, there's no true concurrency. This is suited for IO-bound tasks. Args: func: The function that parforms a task. param_list: A list of iterables, each being a set of params to be passed into the function. Returns: A list of return values from each function execution. If an execution caused an exception, the exception object will be the corresponding result. """ |
with concurrent.futures.ThreadPoolExecutor(max_workers=30) as executor:
# Start the load operations and mark each future with its params
future_to_params = {executor.submit(func, *p): p for p in param_list}
return_vals = []
for future in concurrent.futures.as_completed(future_to_params):
params = future_to_params[future]
try:
return_vals.append(future.result())
except Exception as exc:
logging.exception("{} generated an exception: {}".format(
params, traceback.format_exc()))
return_vals.append(exc)
return return_vals |
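A trimmed sketch of the same pattern with the logging removed; note that results arrive in completion order, not submission order, and that a raised exception is returned as a value:

```python
import concurrent.futures

def concurrent_exec(func, param_list):
    # Fan each param tuple out to a thread pool and gather the results;
    # an exception from a call becomes its result instead of propagating.
    with concurrent.futures.ThreadPoolExecutor(max_workers=30) as executor:
        future_to_params = {executor.submit(func, *p): p for p in param_list}
        return_vals = []
        for future in concurrent.futures.as_completed(future_to_params):
            try:
                return_vals.append(future.result())
            except Exception as exc:
                return_vals.append(exc)
        return return_vals

results = concurrent_exec(lambda a, b: a + b, [(1, 2), (3, 4), ('x', 1)])
# Two int sums plus one TypeError from 'x' + 1, in completion order.
```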
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def run_command(cmd, stdout=None, stderr=None, shell=False, timeout=None, cwd=None, env=None):
"""Runs a command in a subprocess. This function is very similar to subprocess.check_output. The main difference is that it returns the return code and std error output as well as supporting a timeout parameter. Args: cmd: string or list of strings, the command to run. See subprocess.Popen() documentation. stdout: file handle, the file handle to write std out to. If None is given, then subprocess.PIPE is used. See subprocess.Popen() documentation. stdee: file handle, the file handle to write std err to. If None is given, then subprocess.PIPE is used. See subprocess.Popen() documentation. shell: bool, True to run this command through the system shell, False to invoke it directly. See subprocess.Popen() docs. timeout: float, the number of seconds to wait before timing out. If not specified, no timeout takes effect. cwd: string, the path to change the child's current directory to before it is executed. Note that this directory is not considered when searching the executable, so you can't specify the program's path relative to cwd. env: dict, a mapping that defines the environment variables for the new process. Default behavior is inheriting the current process' environment. Returns: A 3-tuple of the consisting of the return code, the std output, and the std error. Raises: psutil.TimeoutExpired: The command timed out. """ |
# Only import psutil when actually needed.
# psutil may cause import error in certain env. This way the utils module
# doesn't crash upon import.
import psutil
if stdout is None:
stdout = subprocess.PIPE
if stderr is None:
stderr = subprocess.PIPE
process = psutil.Popen(
cmd, stdout=stdout, stderr=stderr, shell=shell, cwd=cwd, env=env)
timer = None
timer_triggered = threading.Event()
if timeout and timeout > 0:
# The wait method on process will hang when used with PIPEs with large
# outputs, so use a timer thread instead.
def timeout_expired():
timer_triggered.set()
process.terminate()
timer = threading.Timer(timeout, timeout_expired)
timer.start()
# If the command takes longer than the timeout, then the timer thread
# will kill the subprocess, which will make it terminate.
(out, err) = process.communicate()
if timer is not None:
timer.cancel()
if timer_triggered.is_set():
raise psutil.TimeoutExpired(timeout, pid=process.pid)
return (process.returncode, out, err) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def start_standing_subprocess(cmd, shell=False, env=None):
"""Starts a long-running subprocess. This is not a blocking call and the subprocess started by it should be explicitly terminated with stop_standing_subprocess. For short-running commands, you should use subprocess.check_call, which blocks. Args: cmd: string, the command to start the subprocess with. shell: bool, True to run this command through the system shell, False to invoke it directly. See subprocess.Proc() docs. env: dict, a custom environment to run the standing subprocess. If not specified, inherits the current environment. See subprocess.Popen() docs. Returns: The subprocess that was started. """ |
logging.debug('Starting standing subprocess with: %s', cmd)
proc = subprocess.Popen(
cmd,
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
shell=shell,
env=env)
# Leaving stdin open causes problems for input, e.g. breaking the
# code.inspect() shell (http://stackoverflow.com/a/25512460/1612937), so
# explicitly close it assuming it is not needed for standing subprocesses.
proc.stdin.close()
proc.stdin = None
logging.debug('Started standing subprocess %d', proc.pid)
return proc |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def stop_standing_subprocess(proc):
"""Stops a subprocess started by start_standing_subprocess. Before killing the process, we check if the process is running, if it has terminated, Error is raised. Catches and ignores the PermissionError which only happens on Macs. Args: proc: Subprocess to terminate. Raises: Error: if the subprocess could not be stopped. """ |
# Only import psutil when actually needed.
# psutil may cause import error in certain env. This way the utils module
# doesn't crash upon import.
import psutil
pid = proc.pid
logging.debug('Stopping standing subprocess %d', pid)
process = psutil.Process(pid)
failed = []
try:
children = process.children(recursive=True)
except AttributeError:
# Handle versions <3.0.0 of psutil.
children = process.get_children(recursive=True)
for child in children:
try:
child.kill()
child.wait(timeout=10)
except psutil.NoSuchProcess:
# Ignore if the child process has already terminated.
pass
except:
failed.append(child.pid)
logging.exception('Failed to kill standing subprocess %d',
child.pid)
try:
process.kill()
process.wait(timeout=10)
except psutil.NoSuchProcess:
# Ignore if the process has already terminated.
pass
except:
failed.append(pid)
logging.exception('Failed to kill standing subprocess %d', pid)
if failed:
raise Error('Failed to kill standing subprocesses: %s' % failed)
# Call wait and close pipes on the original Python object so we don't get
# runtime warnings.
if proc.stdout:
proc.stdout.close()
if proc.stderr:
proc.stderr.close()
proc.wait()
logging.debug('Stopped standing subprocess %d', pid) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_available_host_port():
"""Gets a host port number available for adb forward. Returns: An integer representing a port number on the host available for adb forward. Raises: Error: when no port is found after MAX_PORT_ALLOCATION_RETRY times. """ |
# Only import adb module if needed.
from mobly.controllers.android_device_lib import adb
for _ in range(MAX_PORT_ALLOCATION_RETRY):
port = portpicker.PickUnusedPort()
# Make sure adb is not using this port so we don't accidentally
# interrupt ongoing runs by trying to bind to the port.
if port not in adb.list_occupied_adb_ports():
return port
raise Error('Failed to find available port after {} retries'.format(
MAX_PORT_ALLOCATION_RETRY)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def grep(regex, output):
"""Similar to linux's `grep`, this returns the line in an output stream that matches a given regex pattern. It does not rely on the `grep` binary and is not sensitive to line endings, so it can be used cross-platform. Args: regex: string, a regex that matches the expected pattern. output: byte string, the raw output of the adb cmd. Returns: A list of strings, all of which are output lines that matches the regex pattern. """ |
lines = output.decode('utf-8').strip().splitlines()
results = []
for line in lines:
if re.search(regex, line):
results.append(line.strip())
return results |
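Because the function operates on raw bytes and uses `splitlines`, Windows-style `\r\n` endings are handled transparently; for instance:

```python
import re

def grep(regex, output):
    # Decode the byte output, split into lines, keep those matching regex.
    lines = output.decode('utf-8').strip().splitlines()
    return [line.strip() for line in lines if re.search(regex, line)]

raw = b'tcp:8080 ok\r\nudp:53 ok\r\ntcp:443 ok\r\n'
print(grep(r'^tcp:', raw))  # → ['tcp:8080 ok', 'tcp:443 ok']
```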
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def cli_cmd_to_string(args):
"""Converts a cmd arg list to string. Args: args: list of strings, the arguments of a command. Returns: String representation of the command. """ |
if isinstance(args, basestring):
# Return directly if it's already a string.
return args
return ' '.join([pipes.quote(arg) for arg in args]) |
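The code above targets Python 2 (`basestring`, `pipes.quote`); a Python 3 equivalent — an adaptation, not the original — swaps in `str` and `shlex.quote`:

```python
import shlex

def cli_cmd_to_string(args):
    # str replaces basestring; shlex.quote replaces the deprecated pipes.quote.
    if isinstance(args, str):
        # Return directly if it's already a string.
        return args
    return ' '.join(shlex.quote(arg) for arg in args)

print(cli_cmd_to_string(['echo', 'hello world']))  # → echo 'hello world'
```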
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def exe_cmd(*cmds):
"""Executes commands in a new shell. Directing stderr to PIPE. This is fastboot's own exe_cmd because of its peculiar way of writing non-error info to stderr. Args: cmds: A sequence of commands and arguments. Returns: The output of the command run. Raises: Exception: An error occurred during the command execution. """ |
cmd = ' '.join(cmds)
proc = Popen(cmd, stdout=PIPE, stderr=PIPE, shell=True)
(out, err) = proc.communicate()
if not err:
return out
return err |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def verify_controller_module(module):
"""Verifies a module object follows the required interface for controllers. The interface is explained in the docstring of `base_test.BaseTestClass.register_controller`. Args: module: An object that is a controller module. This is usually imported with import statements or loaded by importlib. Raises: ControllerError: if the module does not match the Mobly controller interface, or one of the required members is null. """ |
required_attributes = ('create', 'destroy', 'MOBLY_CONTROLLER_CONFIG_NAME')
for attr in required_attributes:
if not hasattr(module, attr):
raise signals.ControllerError(
'Module %s missing required controller module attribute'
' %s.' % (module.__name__, attr))
if not getattr(module, attr):
raise signals.ControllerError(
'Controller interface %s in %s cannot be null.' %
(attr, module.__name__)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def register_controller(self, module, required=True, min_number=1):
"""Loads a controller module and returns its loaded devices. This is to be used in a mobly test class. Args: module: A module that follows the controller module interface. required: A bool. If True, failing to register the specified controller module raises exceptions. If False, the objects failed to instantiate will be skipped. min_number: An integer that is the minimum number of controller objects to be created. Default is one, since you should not register a controller module without expecting at least one object. Returns: A list of controller objects instantiated from controller_module, or None if no config existed for this controller and it was not a required controller. Raises: ControllerError: * The controller module has already been registered. * The actual number of objects instantiated is less than the * `min_number`. * `required` is True and no corresponding config can be found. * Any other error occurred in the registration process. """ |
verify_controller_module(module)
# Use the module's name as the ref name
module_ref_name = module.__name__.split('.')[-1]
if module_ref_name in self._controller_objects:
raise signals.ControllerError(
'Controller module %s has already been registered. It cannot '
'be registered again.' % module_ref_name)
# Create controller objects.
module_config_name = module.MOBLY_CONTROLLER_CONFIG_NAME
if module_config_name not in self.controller_configs:
if required:
raise signals.ControllerError(
'No corresponding config found for %s' %
module_config_name)
logging.warning(
'No corresponding config found for optional controller %s',
module_config_name)
return None
try:
# Make a deep copy of the config to pass to the controller module,
# in case the controller module modifies the config internally.
original_config = self.controller_configs[module_config_name]
controller_config = copy.deepcopy(original_config)
objects = module.create(controller_config)
except:
logging.exception(
'Failed to initialize objects for controller %s, abort!',
module_config_name)
raise
if not isinstance(objects, list):
raise signals.ControllerError(
'Controller module %s did not return a list of objects, abort.'
% module_ref_name)
# Check we got enough controller objects to continue.
actual_number = len(objects)
if actual_number < min_number:
module.destroy(objects)
raise signals.ControllerError(
'Expected to get at least %d controller objects, got %d.' %
(min_number, actual_number))
# Save a shallow copy of the list for internal usage, so tests can't
# affect internal registry by manipulating the object list.
self._controller_objects[module_ref_name] = copy.copy(objects)
logging.debug('Found %d objects for controller %s', len(objects),
module_config_name)
self._controller_modules[module_ref_name] = module
return objects |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def unregister_controllers(self):
"""Destroy controller objects and clear internal registry. This will be called after each test class. """ |
# TODO(xpconanfan): actually record these errors instead of just
# logging them.
for name, module in self._controller_modules.items():
logging.debug('Destroying %s.', name)
with expects.expect_no_raises(
'Exception occurred destroying %s.' % name):
module.destroy(self._controller_objects[name])
self._controller_objects = collections.OrderedDict()
self._controller_modules = {} |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _create_controller_info_record(self, controller_module_name):
"""Creates controller info record for a particular controller type. Info is retrieved from all the controller objects spawned from the specified module, using the controller module's `get_info` function. Args: controller_module_name: string, the name of the controller module to retrieve info from. Returns: A records.ControllerInfoRecord object. """ |
module = self._controller_modules[controller_module_name]
controller_info = None
try:
controller_info = module.get_info(
copy.copy(self._controller_objects[controller_module_name]))
except AttributeError:
logging.warning('No optional debug info found for controller '
'%s. To provide it, implement `get_info`.',
controller_module_name)
try:
yaml.dump(controller_info)
except TypeError:
logging.warning('The info of controller %s in class "%s" is not '
'YAML serializable! Coercing it to string.',
controller_module_name, self._class_name)
controller_info = str(controller_info)
return records.ControllerInfoRecord(
self._class_name, module.MOBLY_CONTROLLER_CONFIG_NAME,
controller_info) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_controller_info_records(self):
"""Get the info records for all the controller objects in the manager. New info records for each controller object are created for every call so the latest info is included. Returns: List of records.ControllerInfoRecord objects. Each opject conatins the info of a type of controller """ |
info_records = []
for controller_module_name in self._controller_objects.keys():
with expects.expect_no_raises(
'Failed to collect controller info from %s' %
controller_module_name):
record = self._create_controller_info_record(
controller_module_name)
if record:
info_records.append(record)
return info_records |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def dict_to_op(d, index_name, doc_type, op_type='index'):
""" Create a bulk-indexing operation from the given dictionary. """ |
if d is None:
return d
op_types = ('create', 'delete', 'index', 'update')
if op_type not in op_types:
msg = 'Unknown operation type "{}", must be one of: {}'
raise Exception(msg.format(op_type, ', '.join(op_types)))
if 'id' not in d:
raise Exception('"id" key not found')
operation = {
'_op_type': op_type,
'_index': index_name,
'_type': doc_type,
'_id': d.pop('id'),
}
operation.update(d)
return operation |
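The function is pure dict manipulation, so it can be exercised without an Elasticsearch connection; note that it pops `id` out of the input dict:

```python
def dict_to_op(d, index_name, doc_type, op_type='index'):
    # Build a bulk-indexing action dict; mutates d by popping 'id'.
    if d is None:
        return d
    op_types = ('create', 'delete', 'index', 'update')
    if op_type not in op_types:
        raise Exception('Unknown operation type "{}", must be one of: {}'
                        .format(op_type, ', '.join(op_types)))
    if 'id' not in d:
        raise Exception('"id" key not found')
    operation = {
        '_op_type': op_type,
        '_index': index_name,
        '_type': doc_type,
        '_id': d.pop('id'),
    }
    operation.update(d)
    return operation

op = dict_to_op({'id': 7, 'test': 'a'}, 'failure-lines', 'failure-line')
print(op['_id'], op['test'])  # → 7 a
```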
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def to_dict(obj):
""" Create a filtered dict from the given object. Note: This function is currently specific to the FailureLine model. """ |
if not isinstance(obj.test, str):
# TODO: can we handle this in the DB?
# Reftests used to use tuple indicies, which we can't support.
# This is fixed upstream, but we also need to handle it here to allow
# for older branches.
return
keys = [
'id',
'job_guid',
'test',
'subtest',
'status',
'expected',
'message',
'best_classification',
'best_is_verified',
]
all_fields = obj.to_dict()
return {k: v for k, v in all_fields.items() if k in keys} |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def start_step(self, lineno, name="Unnamed step", timestamp=None):
"""Create a new step and update the state to reflect we're now in the middle of a step.""" |
self.state = self.STATES['step_in_progress']
self.stepnum += 1
self.steps.append({
"name": name,
"started": timestamp,
"started_linenumber": lineno,
"errors": [],
}) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def end_step(self, lineno, timestamp=None, result_code=None):
"""Fill in the current step's summary and update the state to show the current step has ended.""" |
self.state = self.STATES['step_finished']
step_errors = self.sub_parser.get_artifact()
step_error_count = len(step_errors)
if step_error_count > settings.PARSER_MAX_STEP_ERROR_LINES:
step_errors = step_errors[:settings.PARSER_MAX_STEP_ERROR_LINES]
self.artifact["errors_truncated"] = True
self.current_step.update({
"finished": timestamp,
"finished_linenumber": lineno,
# Whilst the result code is present on both the start and end buildbot-style step
# markers, for Taskcluster logs the start marker line lies about the result, since
# the log output is unbuffered, so Taskcluster does not know the real result at
# that point. As such, we only set the result when ending a step.
"result": self.RESULT_DICT.get(result_code, "unknown"),
"errors": step_errors
})
# reset the sub_parser for the next step
self.sub_parser.clear() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def parse_line(self, line, lineno):
"""Parse a single line of the log""" |
match = self.RE_TINDERBOXPRINT.match(line) if line else None
if match:
line = match.group('line')
for regexp_item in self.TINDERBOX_REGEXP_TUPLE:
match = regexp_item['re'].match(line)
if match:
artifact = match.groupdict()
# handle duplicate fields
for to_field, from_field in regexp_item['duplicates_fields'].items():
# if to_field is not present or is None, copy from from_field
if to_field not in artifact or artifact[to_field] is None:
artifact[to_field] = artifact[from_field]
artifact.update(regexp_item['base_dict'])
self.artifact.append(artifact)
return
# default case: consider it html content
# try to detect title/value splitting on <br/>
artifact = {"content_type": "raw_html", }
if "<br/>" in line:
title, value = line.split("<br/>", 1)
artifact["title"] = title
artifact["value"] = value
# or similar long lines if they contain a url
elif "href" in line and "title" in line:
def parse_url_line(line_data):
class TpLineParser(HTMLParser):
def handle_starttag(self, tag, attrs):
d = dict(attrs)
artifact["url"] = d['href']
artifact["title"] = d['title']
def handle_data(self, data):
artifact["value"] = data
p = TpLineParser()
p.feed(line_data)
p.close()
# strip ^M returns on windows lines otherwise
# handle_data will yield no data 'value'
parse_url_line(line.replace('\r', ''))
else:
artifact["value"] = line
self.artifact.append(artifact) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def parse_line(self, line, lineno):
"""Check a single line for an error. Keeps track of the linenumber""" |
# TaskCluster logs are a bit wonky.
#
# TaskCluster logs begin with output coming from TaskCluster itself,
# before it has transitioned control of the task to the configured
# process. These "internal" logs look like the following:
#
# [taskcluster 2016-09-09 17:41:43.544Z] Worker Group: us-west-2b
#
# If an error occurs during this "setup" phase, TaskCluster may emit
# lines beginning with ``[taskcluster:error]``.
#
# Once control has transitioned from TaskCluster to the configured
# task process, lines can be whatever the configured process emits.
# The popular ``run-task`` wrapper prefixes output to emulate
# TaskCluster's "internal" logs. e.g.
#
# [vcs 2016-09-09T17:45:02.842230Z] adding changesets
#
# This prefixing can confuse error parsing. So, we strip it.
#
# Because regular expression matching and string manipulation can be
# expensive when performed on every line, we only strip the TaskCluster
# log prefix if we know we're in a TaskCluster log.
# First line of TaskCluster logs almost certainly has this.
if line.startswith('[taskcluster '):
self.is_taskcluster = True
# For performance reasons, only do this if we have identified as
# a TC task.
if self.is_taskcluster:
line = re.sub(self.RE_TASKCLUSTER_NORMAL_PREFIX, "", line)
if self.is_error_line(line):
self.add(line, lineno) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def retrieve(self, request, project, pk=None):
""" Returns a job_log_url object given its ID """ |
log = JobLog.objects.get(id=pk)
return Response(self._log_as_dict(log)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def linear_weights(i, n):
"""A window function that falls off arithmetically. This is used to calculate a weighted moving average (WMA) that gives higher weight to changes near the point being analyzed, and smooth out changes at the opposite edge of the moving window. See bug 879903 for details. """ |
if i >= n:
return 0.0
return float(n - i) / float(n) |
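For a window of size n = 4 the weights fall off arithmetically from 1.0 toward 0, which is what lets the weighted moving average favor recent points:

```python
def linear_weights(i, n):
    # Weight is 1.0 at i == 0 and decreases linearly to 0 at i >= n.
    if i >= n:
        return 0.0
    return float(n - i) / float(n)

print([linear_weights(i, 4) for i in range(5)])
# → [1.0, 0.75, 0.5, 0.25, 0.0]
```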
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def calc_t(w1, w2, weight_fn=None):
"""Perform a Students t-test on the two sets of revision data. See the analyze() function for a description of the `weight_fn` argument. """ |
if not w1 or not w2:
return 0
s1 = analyze(w1, weight_fn)
s2 = analyze(w2, weight_fn)
delta_s = s2['avg'] - s1['avg']
if delta_s == 0:
return 0
if s1['variance'] == 0 and s2['variance'] == 0:
return float('inf')
return delta_s / (((s1['variance'] / s1['n']) + (s2['variance'] / s2['n'])) ** 0.5) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _remove_existing_jobs(data):
""" Remove jobs from data where we already have them in the same state. 1. split the incoming jobs into pending, running and complete. 2. fetch the ``job_guids`` from the db that are in the same state as they are in ``data``. 3. build a new list of jobs in ``new_data`` that are not already in the db and pass that back. It could end up empty at that point. """ |
new_data = []
guids = [datum['job']['job_guid'] for datum in data]
state_map = {
guid: state for (guid, state) in Job.objects.filter(
guid__in=guids).values_list('guid', 'state')
}
for datum in data:
job = datum['job']
if not state_map.get(job['job_guid']):
new_data.append(datum)
else:
# should not transition from running to pending,
# or completed to any other state
current_state = state_map[job['job_guid']]
if current_state == 'completed' or (
job['state'] == 'pending' and
current_state == 'running'):
continue
new_data.append(datum)
return new_data |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _schedule_log_parsing(job, job_logs, result):
"""Kick off the initial task that parses the log data. log_data is a list of job log objects and the result for that job """ |
# importing here to avoid an import loop
from treeherder.log_parser.tasks import parse_logs
task_types = {
"errorsummary_json",
"buildbot_text",
"builds-4h"
}
job_log_ids = []
for job_log in job_logs:
# a log can be submitted already parsed. So only schedule
# a parsing task if it's ``pending``
# the submitter is then responsible for submitting the
# text_log_summary artifact
if job_log.status != JobLog.PENDING:
continue
# if this is not a known type of log, abort parse
if job_log.name not in task_types:
continue
job_log_ids.append(job_log.id)
# TODO: Replace the use of different queues for failures vs not with the
# RabbitMQ priority feature (since the idea behind separate queues was
# only to ensure failures are dealt with first if there is a backlog).
if result != 'success':
queue = 'log_parser_fail'
priority = 'failures'
else:
queue = 'log_parser'
priority = "normal"
parse_logs.apply_async(queue=queue,
args=[job.id, job_log_ids, priority]) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def store_job_data(repository, data):
""" Store job data instances into jobs db Example: [ { "revision": "24fd64b8251fac5cf60b54a915bffa7e51f636b5", "job": { "job_guid": "d19375ce775f0dc166de01daa5d2e8a73a8e8ebf", "name": "xpcshell", "desc": "foo", "job_symbol": "XP", "group_name": "Shelliness", "group_symbol": "XPC", "product_name": "firefox", "state": "TODO", "result": 0, "reason": "scheduler", "who": "sendchange-unittest", "submit_timestamp": 1365732271, "start_timestamp": "20130411165317", "end_timestamp": "1365733932" "machine": "tst-linux64-ec2-314", "build_platform": { "platform": "Ubuntu VM 12.04", "os_name": "linux", "architecture": "x86_64" }, "machine_platform": { "platform": "Ubuntu VM 12.04", "os_name": "linux", "architecture": "x86_64" }, "option_collection": { "opt": true }, "log_references": [ { "name": "unittest" } ], artifacts:[{ name:"", log_urls:[ ] blob:"" }], }, "superseded": [] }, ] """ |
# Ensure that we have job data to process
if not data:
return
# remove any existing jobs that already have the same state
data = _remove_existing_jobs(data)
if not data:
return
superseded_job_guid_placeholders = []
# TODO: Refactor this now that store_job_data() is only ever called with one job at a time.
for datum in data:
try:
# TODO: this might be a good place to check the datum against
# a JSON schema to ensure all the fields are valid. Then
# the exception we caught would be much more informative. That
# being said, if/when we transition to only using the pulse
# job consumer, then the data will always be vetted with a
# JSON schema before we get to this point.
job = datum['job']
revision = datum['revision']
superseded = datum.get('superseded', [])
revision_field = 'revision__startswith' if len(revision) < 40 else 'revision'
filter_kwargs = {'repository': repository, revision_field: revision}
push_id = Push.objects.values_list('id', flat=True).get(**filter_kwargs)
# load job
job_guid = _load_job(repository, job, push_id)
for superseded_guid in superseded:
superseded_job_guid_placeholders.append(
# superseded by guid, superseded guid
[job_guid, superseded_guid]
)
except Exception as e:
# Surface the error immediately unless running in production, where we'd
# rather report it on New Relic and not block storing the remaining jobs.
# TODO: Once buildbot support is removed, remove this as part of
# refactoring this method to process just one job at a time.
if 'DYNO' not in os.environ:
raise
logger.exception(e)
# make more fields visible in new relic for the job
# where we encountered the error
datum.update(datum.get("job", {}))
newrelic.agent.record_exception(params=datum)
# skip any jobs that hit errors in these stages.
continue
# Update the result/state of any jobs that were superseded by those ingested above.
if superseded_job_guid_placeholders:
for (job_guid, superseded_by_guid) in superseded_job_guid_placeholders:
Job.objects.filter(guid=superseded_by_guid).update(
result='superseded',
state='completed') |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_exchange(connection, name, create=False):
""" Get a Kombu Exchange object using the passed in name. Can create an Exchange but this is typically not wanted in production-like environments and only useful for testing. """ |
exchange = Exchange(name, type="topic", passive=not create)
# bind the exchange to our connection so operations can be performed on it
bound_exchange = exchange(connection)
# ensure the exchange exists. Throw an error if it was created with
# passive=True and it doesn't exist.
bound_exchange.declare()
return bound_exchange |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _process(self, project, build_system, job_priorities):
'''Return list of ref_data_name for job_priorities'''
jobs = []
# we cache the reference data names in order to reduce API calls
cache_key = '{}-{}-ref_data_names_cache'.format(project, build_system)
ref_data_names_map = cache.get(cache_key)
if not ref_data_names_map:
# cache expired so re-build the reference data names map; the map
# contains the ref_data_name of every treeherder *test* job for this project
ref_data_names_map = self._build_ref_data_names(project, build_system)
# update the cache
cache.set(cache_key, ref_data_names_map, SETA_REF_DATA_NAMES_CACHE_TIMEOUT)
# now check the JobPriority table against the list of valid runnable
for jp in job_priorities:
# if this JobPriority entry is no longer supported in SETA then ignore it
if not valid_platform(jp.platform):
continue
if is_job_blacklisted(jp.testtype):
continue
key = jp.unique_identifier()
if key in ref_data_names_map:
# e.g. desktop-test-linux64-pgo/opt-reftest-13 or builder name
jobs.append(ref_data_names_map[key])
else:
logger.warning('Job priority (%s) not found in accepted jobs list', jp)
return jobs |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _build_ref_data_names(self, project, build_system):
'''
We want all reference data names for every task that runs on a specific project.
For example:
* Buildbot - "Windows 8 64-bit mozilla-inbound debug test web-platform-tests-1"
* TaskCluster = "test-linux64/opt-mochitest-webgl-e10s-1"
'''
ignored_jobs = []
ref_data_names = {}
runnable_jobs = list_runnable_jobs(project)
for job in runnable_jobs:
# get testtype e.g. web-platform-tests-4
testtype = parse_testtype(
build_system_type=job['build_system_type'],
job_type_name=job['job_type_name'],
platform_option=job['platform_option'],
ref_data_name=job['ref_data_name']
)
if not valid_platform(job['platform']):
continue
if is_job_blacklisted(testtype):
ignored_jobs.append(job['ref_data_name'])
continue
key = unique_key(testtype=testtype,
buildtype=job['platform_option'],
platform=job['platform'])
if build_system == '*':
ref_data_names[key] = job['ref_data_name']
elif job['build_system_type'] == build_system:
ref_data_names[key] = job['ref_data_name']
for ref_data_name in sorted(ignored_jobs):
logger.info('Ignoring %s', ref_data_name)
return ref_data_names |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def collect_fields(node):
""" Get all the unique field names that are eligible for optimization Requested a function like this be added to the ``info`` object upstream in graphene_django: https://github.com/graphql-python/graphene-django/issues/230 """ |
fields = set()
for leaf in node:
if leaf.get('kind', None) == "Field":
fields.add(leaf["name"]["value"])
if leaf.get("selection_set", None):
fields = fields.union(collect_fields(leaf["selection_set"]["selections"]))
return fields |
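A quick standalone run of the function above against a hand-built AST fragment (the dict shape is assumed from graphene/graphql-core's AST, invented here for illustration):

```python
# collect_fields walks a GraphQL selection list and gathers every Field
# name, recursing into nested selection sets.
def collect_fields(node):
    fields = set()
    for leaf in node:
        if leaf.get('kind', None) == "Field":
            fields.add(leaf["name"]["value"])
        if leaf.get("selection_set", None):
            fields = fields.union(collect_fields(leaf["selection_set"]["selections"]))
    return fields

# Hypothetical selections: a `job` field with nested `result` and `state`.
selections = [
    {"kind": "Field", "name": {"value": "job"},
     "selection_set": {"selections": [
         {"kind": "Field", "name": {"value": "result"}},
         {"kind": "Field", "name": {"value": "state"}},
     ]}},
]
print(sorted(collect_fields(selections)))  # → ['job', 'result', 'state']
```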
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def optimize(qs, info_dict, field_map):
"""Add either select_related or prefetch_related to fields of the qs""" |
fields = collect_fields(info_dict)
for field in fields:
if field in field_map:
field_name, opt = field_map[field]
qs = (qs.prefetch_related(field_name)
if opt == "prefetch" else qs.select_related(field_name))
return qs |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def build_connection(url):
""" Build an Elasticsearch connection with the given url Elastic.co's Heroku addon doesn't create credientials with access to the cluster by default so they aren't exposed in the URL they provide either. This function works around the situation by grabbing our credentials from the environment via Django settings and building a connection with them. """ |
username = os.environ.get('ELASTICSEARCH_USERNAME')
password = os.environ.get('ELASTICSEARCH_PASSWORD')
if username and password:
return Elasticsearch(url, http_auth=(username, password))
return Elasticsearch(url) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_artifact(self):
"""Return the job artifact built by the parser.""" |
self.artifact[self.parser.name] = self.parser.get_artifact()
return self.artifact |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def precise_matcher(text_log_error):
"""Query for TextLogErrorMatches identical to matches of the given TextLogError.""" |
failure_line = text_log_error.metadata.failure_line
logger.debug("Looking for test match in failure %d", failure_line.id)
if failure_line.action != "test_result" or failure_line.message is None:
return
f = {
'text_log_error___metadata__failure_line__action': 'test_result',
'text_log_error___metadata__failure_line__test': failure_line.test,
'text_log_error___metadata__failure_line__subtest': failure_line.subtest,
'text_log_error___metadata__failure_line__status': failure_line.status,
'text_log_error___metadata__failure_line__expected': failure_line.expected,
'text_log_error___metadata__failure_line__message': failure_line.message
}
qwargs = (
Q(text_log_error___metadata__best_classification=None)
& (Q(text_log_error___metadata__best_is_verified=True)
| Q(text_log_error__step__job=text_log_error.step.job))
)
qs = (TextLogErrorMatch.objects.filter(**f)
.exclude(qwargs)
.order_by('-score', '-classified_failure'))
if not qs:
return
# chunk through the QuerySet because it could potentially be very large
# time bound each call to the scoring function to avoid job timeouts
# returns an iterable of (score, classified_failure_id) tuples
chunks = chunked_qs_reverse(qs, chunk_size=20000)
return chain.from_iterable(time_boxed(score_matches, chunks, time_budget=500)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def elasticsearch_matcher(text_log_error):
""" Query Elasticsearch and score the results. Uses a filtered search checking test, status, expected, and the message as a phrase query with non-alphabet tokens removed. """ |
# Note: Elasticsearch is currently disabled in all environments (see bug 1527868).
if not settings.ELASTICSEARCH_URL:
return []
failure_line = text_log_error.metadata.failure_line
if failure_line.action != "test_result" or not failure_line.message:
logger.debug("Skipped elasticsearch matching")
return
filters = [
{'term': {'test': failure_line.test}},
{'term': {'status': failure_line.status}},
{'term': {'expected': failure_line.expected}},
{'exists': {'field': 'best_classification'}}
]
if failure_line.subtest:
        filters.append({'term': {'subtest': failure_line.subtest}})
query = {
'query': {
'bool': {
'filter': filters,
'must': [{
'match_phrase': {
'message': failure_line.message[:1024],
},
}],
},
},
}
try:
results = search(query)
except Exception:
logger.error("Elasticsearch lookup failed: %s %s %s %s %s",
failure_line.test, failure_line.subtest, failure_line.status,
failure_line.expected, failure_line.message)
raise
if len(results) > 1:
args = (
text_log_error.id,
failure_line.id,
len(results),
)
        logger.info('text_log_error=%i failure_line=%i Elasticsearch produced %i results', *args)
newrelic.agent.record_custom_event('es_matches', {
'num_results': len(results),
'text_log_error_id': text_log_error.id,
'failure_line_id': failure_line.id,
})
scorer = MatchScorer(failure_line.message)
matches = [(item, item['message']) for item in results]
best_match = scorer.best_match(matches)
if not best_match:
return
score, es_result = best_match
# TODO: score all results and return
# TODO: just return results with score above cut off?
return [(score, es_result['best_classification'])] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def crash_signature_matcher(text_log_error):
""" Query for TextLogErrorMatches with the same crash signature. Produces two queries, first checking if the same test produces matches and secondly checking without the same test but lowering the produced scores. """ |
failure_line = text_log_error.metadata.failure_line
if (failure_line.action != "crash" or
failure_line.signature is None or
failure_line.signature == "None"):
return
f = {
'text_log_error___metadata__failure_line__action': 'crash',
'text_log_error___metadata__failure_line__signature': failure_line.signature,
}
qwargs = (
Q(text_log_error___metadata__best_classification=None)
& (Q(text_log_error___metadata__best_is_verified=True)
| Q(text_log_error__step__job=text_log_error.step.job))
)
qs = (TextLogErrorMatch.objects.filter(**f)
.exclude(qwargs)
.select_related('text_log_error', 'text_log_error___metadata')
.order_by('-score', '-classified_failure'))
size = 20000
time_budget = 500
# See if we can get any matches when filtering by the same test
first_attempt = qs.filter(text_log_error___metadata__failure_line__test=failure_line.test)
chunks = chunked_qs_reverse(first_attempt, chunk_size=size)
    scored_matches = list(chain.from_iterable(time_boxed(score_matches, chunks, time_budget)))
    if scored_matches:
        return scored_matches
    # try again without filtering to the test but applying a .8 score multiplier
chunks = chunked_qs_reverse(qs, chunk_size=size)
scored_matches = chain.from_iterable(time_boxed(
score_matches,
chunks,
time_budget,
score_multiplier=(8, 10),
))
return scored_matches |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def best_match(self, matches):
""" Find the most similar string to self.target. Given a list of candidate strings find the closest match to self.target, returning the best match with a score indicating closeness of match. :param matches: A list of candidate matches :returns: A tuple of (score, best_match) """ |
best_match = None
for match, message in matches:
self.matcher.set_seq1(message)
ratio = self.matcher.quick_ratio()
if best_match is None or ratio >= best_match[0]:
new_ratio = self.matcher.ratio()
if best_match is None or new_ratio > best_match[0]:
best_match = (new_ratio, match)
return best_match |
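The two-pass scoring above (cheap upper bound first, exact ratio only when it might win) can be sketched standalone, assuming the class wraps `difflib.SequenceMatcher` with the target string as its second sequence:

```python
from difflib import SequenceMatcher

class MatchScorer:
    def __init__(self, target):
        # quick_ratio() is an upper bound on ratio(), so it can gate the
        # slower exact computation without missing a better match.
        self.matcher = SequenceMatcher(lambda x: x == " ", b=target)

    def best_match(self, matches):
        best_match = None
        for match, message in matches:
            self.matcher.set_seq1(message)
            ratio = self.matcher.quick_ratio()  # cheap upper bound
            if best_match is None or ratio >= best_match[0]:
                new_ratio = self.matcher.ratio()  # exact (slower) ratio
                if best_match is None or new_ratio > best_match[0]:
                    best_match = (new_ratio, match)
        return best_match

scorer = MatchScorer("intermittent failure in test_foo")
score, best = scorer.best_match([
    ("a", "unrelated message"),
    ("b", "intermittent failure in test_foo"),
])
print(best, score)  # → b 1.0
```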
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def store_push_data(repository, pushes):
""" Stores push data in the treeherder database pushes = [ { "revision": "8afdb7debc82a8b6e0d56449dfdf916c77a7bf80", "push_timestamp": 1378293517, "author": "some-sheriff@mozilla.com", "revisions": [ { "comment": "Bug 911954 - Add forward declaration of JSScript to TraceLogging.h, r=h4writer", "author": "John Doe <jdoe@mozilla.com>", "revision": "8afdb7debc82a8b6e0d56449dfdf916c77a7bf80" }, ] }, ] returns = { } """ |
if not pushes:
logger.info("No new pushes to store")
return
for push in pushes:
store_push(repository, push) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def cycle_data(self, repository, cycle_interval, chunk_size, sleep_time):
"""Delete data older than cycle_interval, splitting the target data into chunks of chunk_size size.""" |
max_timestamp = datetime.datetime.now() - cycle_interval
    # separate datums into chunks
while True:
perf_datums_to_cycle = list(self.filter(
repository=repository,
push_timestamp__lt=max_timestamp).values_list('id', flat=True)[:chunk_size])
if not perf_datums_to_cycle:
# we're done!
break
self.filter(id__in=perf_datums_to_cycle).delete()
if sleep_time:
# Allow some time for other queries to get through
time.sleep(sleep_time)
# also remove any signatures which are (no longer) associated with
# a job
for signature in PerformanceSignature.objects.filter(
repository=repository):
if not self.filter(signature=signature).exists():
signature.delete() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def to_representation(self, failure_line):
""" Manually add matches our wrapper of the TLEMetadata -> TLE relation. I could not work out how to do this multiple relation jump with DRF (or even if it was possible) so using this manual method instead. """ |
try:
matches = failure_line.error.matches.all()
except AttributeError: # failure_line.error can return None
matches = []
tle_serializer = TextLogErrorMatchSerializer(matches, many=True)
classified_failures = models.ClassifiedFailure.objects.filter(error_matches__in=matches)
cf_serializer = ClassifiedFailureSerializer(classified_failures, many=True)
response = super().to_representation(failure_line)
response['matches'] = tle_serializer.data
response['classified_failures'] = cf_serializer.data
return response |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def check_whiteboard_status(self, whiteboard):
"""Extracts stockwell text from a bug's whiteboard status to determine whether it matches specified stockwell text; returns a boolean.""" |
stockwell_text = re.search(r'\[stockwell (.+?)\]', whiteboard)
if stockwell_text is not None:
text = stockwell_text.group(1).split(':')[0]
        if text in ('fixed', 'disable-recommended', 'infra', 'disabled'):
return True
return False |
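A standalone copy of the check, exercised with sample whiteboard strings (inputs invented for illustration):

```python
import re

def check_whiteboard_status(whiteboard):
    # Pull the text inside "[stockwell ...]" and keep only the part
    # before any ":" qualifier, e.g. "disabled:needswork" -> "disabled".
    stockwell_text = re.search(r'\[stockwell (.+?)\]', whiteboard)
    if stockwell_text is not None:
        text = stockwell_text.group(1).split(':')[0]
        if text in ('fixed', 'disable-recommended', 'infra', 'disabled'):
            return True
    return False

print(check_whiteboard_status('[stockwell disabled:needswork]'))  # → True
print(check_whiteboard_status('[stockwell needswork]'))           # → False
```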
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def fetch_bug_details(self, bug_ids):
"""Fetches bug metadata from bugzilla and returns an encoded dict if successful, otherwise returns None.""" |
params = {'include_fields': 'product, component, priority, whiteboard, id'}
params['id'] = bug_ids
try:
response = self.session.get(settings.BZ_API_URL + '/rest/bug', headers=self.session.headers,
params=params, timeout=30)
response.raise_for_status()
except RequestException as e:
logger.warning('error fetching bugzilla metadata for bugs due to {}'.format(e))
return None
if response.headers['Content-Type'] == 'text/html; charset=UTF-8':
return None
data = response.json()
if 'bugs' not in data:
return None
return data['bugs'] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_alt_date_bug_totals(self, startday, endday, bug_ids):
"""use previously fetched bug_ids to check for total failures exceeding 150 in 21 days""" |
bugs = (BugJobMap.failures.by_date(startday, endday)
.filter(bug_id__in=bug_ids)
.values('bug_id')
.annotate(total=Count('id'))
.values('bug_id', 'total'))
return {bug['bug_id']: bug['total'] for bug in bugs if bug['total'] >= 150} |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def transform(testtype):
'''
    A lot of these transformations date from tasks before task labels; some apply only if we
    grab data directly from the Treeherder jobs endpoint instead of the runnable jobs API.
'''
# XXX: Evaluate which of these transformations are still valid
if testtype.startswith('[funsize'):
return None
testtype = testtype.split('/opt-')[-1]
testtype = testtype.split('/debug-')[-1]
# this is plain-reftests for android
testtype = testtype.replace('plain-', '')
testtype = testtype.strip()
# https://bugzilla.mozilla.org/show_bug.cgi?id=1313844
testtype = testtype.replace('browser-chrome-e10s', 'e10s-browser-chrome')
testtype = testtype.replace('devtools-chrome-e10s', 'e10s-devtools-chrome')
testtype = testtype.replace('[TC] Android 4.3 API15+ ', '')
# mochitest-gl-1 <-- Android 4.3 armv7 API 15+ mozilla-inbound opt test mochitest-gl-1
# mochitest-webgl-9 <-- test-android-4.3-arm7-api-15/opt-mochitest-webgl-9
testtype = testtype.replace('webgl-', 'gl-')
return testtype |
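Run against a TaskCluster-style name like the one in the comments, the normalization behaves like this (a condensed standalone copy for illustration):

```python
def transform(testtype):
    # Drop funsize tasks entirely, strip the platform/build-type prefix,
    # then apply the per-suite renames from the function above.
    if testtype.startswith('[funsize'):
        return None
    testtype = testtype.split('/opt-')[-1]
    testtype = testtype.split('/debug-')[-1]
    testtype = testtype.replace('plain-', '')
    testtype = testtype.strip()
    testtype = testtype.replace('browser-chrome-e10s', 'e10s-browser-chrome')
    testtype = testtype.replace('devtools-chrome-e10s', 'e10s-devtools-chrome')
    testtype = testtype.replace('[TC] Android 4.3 API15+ ', '')
    testtype = testtype.replace('webgl-', 'gl-')
    return testtype

print(transform('test-android-4.3-arm7-api-15/opt-mochitest-webgl-9'))
# → mochitest-gl-9
```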
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_username_from_userinfo(self, user_info):
""" Get the user's username from the jwt sub property """ |
subject = user_info['sub']
email = user_info['email']
if "Mozilla-LDAP" in subject:
return "mozilla-ldap/" + email
elif "email" in subject:
return "email/" + email
elif "github" in subject:
return "github/" + email
elif "google" in subject:
return "google/" + email
# Firefox account
elif "oauth2" in subject:
return "oauth2/" + email
else:
raise AuthenticationFailed("Unrecognized identity") |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_user_info(self, access_token, id_token):
""" Extracts the user info payload from the Id Token. Example return value: { "at_hash": "<HASH>", "aud": "<HASH>", "email_verified": true, "email": "fsurname@mozilla.com", "exp": 1551259495, "family_name": "Surname", "given_name": "Firstname", "https://sso.mozilla.com/claim/groups": [ "all_scm_level_1", "all_scm_level_2", "all_scm_level_3", ], "iat": 1550654695, "iss": "https://auth.mozilla.auth0.com/", "name": "Firstname Surname", "nickname": "Firstname Surname", "nonce": "<HASH>", "picture": "<GRAVATAR_URL>", "sub": "ad|Mozilla-LDAP|fsurname", "updated_at": "2019-02-20T09:24:55.449Z", } """ |
# JWT Validator
# Per https://auth0.com/docs/quickstart/backend/python/01-authorization#create-the-jwt-validation-decorator
try:
unverified_header = jwt.get_unverified_header(id_token)
except jwt.JWTError:
raise AuthError('Unable to decode the Id token header')
if 'kid' not in unverified_header:
raise AuthError('Id token header missing RSA key ID')
rsa_key = None
for key in jwks["keys"]:
if key["kid"] == unverified_header["kid"]:
rsa_key = {
"kty": key["kty"],
"kid": key["kid"],
"use": key["use"],
"n": key["n"],
"e": key["e"]
}
break
if not rsa_key:
raise AuthError('Id token using unrecognised RSA key ID')
try:
# https://python-jose.readthedocs.io/en/latest/jwt/api.html#jose.jwt.decode
user_info = jwt.decode(
id_token,
rsa_key,
algorithms=['RS256'],
audience=AUTH0_CLIENTID,
access_token=access_token,
issuer="https://"+AUTH0_DOMAIN+"/"
)
return user_info
except jwt.ExpiredSignatureError:
raise AuthError('Id token is expired')
except jwt.JWTClaimsError:
raise AuthError("Incorrect claims: please check the audience and issuer")
except jwt.JWTError:
raise AuthError("Invalid header: Unable to parse authentication") |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _calculate_session_expiry(self, request, user_info):
"""Returns the number of seconds after which the Django session should expire.""" |
access_token_expiry_timestamp = self._get_access_token_expiry(request)
id_token_expiry_timestamp = self._get_id_token_expiry(user_info)
now_in_seconds = int(time.time())
# The session length is set to match whichever token expiration time is closer.
earliest_expiration_timestamp = min(access_token_expiry_timestamp, id_token_expiry_timestamp)
seconds_until_expiry = earliest_expiration_timestamp - now_in_seconds
if seconds_until_expiry <= 0:
raise AuthError('Session expiry time has already passed!')
return seconds_until_expiry |
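The rule reduces to taking the earlier of the two token expiry timestamps; a simplified standalone sketch (helper name and epoch-second inputs invented):

```python
import time

def seconds_until_session_expiry(access_token_expiry, id_token_expiry, now=None):
    # Expire the session with whichever token (access or id) runs out first.
    now = int(time.time()) if now is None else now
    seconds = min(access_token_expiry, id_token_expiry) - now
    if seconds <= 0:
        raise ValueError('Session expiry time has already passed!')
    return seconds

print(seconds_until_session_expiry(1000, 900, now=600))  # → 300
```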
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _unique_key(job):
"""Return a key to query our uniqueness mapping system. This makes sure that we use a consistent key between our code and selecting jobs from the table. """ |
return unique_key(testtype=str(job['testtype']),
buildtype=str(job['platform_option']),
platform=str(job['platform'])) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _sanitize_data(runnable_jobs_data):
"""We receive data from runnable jobs api and return the sanitized data that meets our needs. This is a loop to remove duplicates (including buildsystem -> * transformations if needed) By doing this, it allows us to have a single database query It returns sanitized_list which will contain a subset which excludes: * jobs that don't specify the platform * jobs that don't specify the testtype * if the job appears again, we replace build_system_type with '*'. By doing so, if a job appears under both 'buildbot' and 'taskcluster', its build_system_type will be '*' """ |
job_build_system_type = {}
sanitized_list = []
for job in runnable_jobs_data:
if not valid_platform(job['platform']):
logger.info('Invalid platform %s', job['platform'])
continue
testtype = parse_testtype(
build_system_type=job['build_system_type'],
job_type_name=job['job_type_name'],
platform_option=job['platform_option'],
ref_data_name=job['ref_data_name']
)
if not testtype:
continue
# NOTE: This is *all* the data we need from the runnable API
new_job = {
'build_system_type': job['build_system_type'], # e.g. {buildbot,taskcluster,*}
'platform': job['platform'], # e.g. windows8-64
'platform_option': job['platform_option'], # e.g. {opt,debug}
'testtype': testtype, # e.g. web-platform-tests-1
}
key = _unique_key(new_job)
# Let's build a map of all the jobs and if duplicated change the build_system_type to *
if key not in job_build_system_type:
job_build_system_type[key] = job['build_system_type']
sanitized_list.append(new_job)
elif new_job['build_system_type'] != job_build_system_type[key]:
new_job['build_system_type'] = job_build_system_type[key]
# This will *replace* the previous build system type with '*'
# This guarantees that we don't have duplicates
sanitized_list[sanitized_list.index(new_job)]['build_system_type'] = '*'
return sanitized_list |
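A toy illustration of the duplicate-collapsing rule above: the same (testtype, buildtype, platform) seen under two build systems ends up recorded once with build_system_type '*' (platform validation and testtype parsing omitted; sample data invented):

```python
def sanitize(jobs):
    seen = {}
    out = []
    for job in jobs:
        key = (job['testtype'], job['platform_option'], job['platform'])
        new_job = dict(job)
        if key not in seen:
            seen[key] = job['build_system_type']
            out.append(new_job)
        elif new_job['build_system_type'] != seen[key]:
            # Make new_job equal to the earlier entry so list.index finds
            # it, then overwrite that entry's build system with '*'.
            new_job['build_system_type'] = seen[key]
            out[out.index(new_job)]['build_system_type'] = '*'
    return out

jobs = [
    {'testtype': 'reftest-1', 'platform_option': 'opt',
     'platform': 'windows8-64', 'build_system_type': 'buildbot'},
    {'testtype': 'reftest-1', 'platform_option': 'opt',
     'platform': 'windows8-64', 'build_system_type': 'taskcluster'},
]
print(sanitize(jobs)[0]['build_system_type'])  # → *
```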
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _update_table(data):
"""Add new jobs to the priority table and update the build system if required. data - it is a list of dictionaries that describe a job type returns the number of new, failed and updated jobs """ |
jp_index, priority, expiration_date = _initialize_values()
total_jobs = len(data)
new_jobs, failed_changes, updated_jobs = 0, 0, 0
# Loop through sanitized jobs, add new jobs and update the build system if needed
for job in data:
key = _unique_key(job)
if key in jp_index:
# We already know about this job, we might need to update the build system
# We're seeing the job again with another build system (e.g. buildbot vs
# taskcluster). We need to change it to '*'
if jp_index[key]['build_system_type'] != '*' and jp_index[key]['build_system_type'] != job["build_system_type"]:
db_job = JobPriority.objects.get(pk=jp_index[key]['pk'])
db_job.buildsystem = '*'
db_job.save()
logger.info('Updated %s/%s from %s to %s',
db_job.testtype, db_job.buildtype,
job['build_system_type'], db_job.buildsystem)
updated_jobs += 1
else:
# We have a new job from runnablejobs to add to our master list
try:
jobpriority = JobPriority(
testtype=str(job["testtype"]),
buildtype=str(job["platform_option"]),
platform=str(job["platform"]),
priority=priority,
expiration_date=expiration_date,
buildsystem=job["build_system_type"]
)
jobpriority.save()
logger.info('New job was found (%s,%s,%s,%s)',
job['testtype'], job['platform_option'], job['platform'],
job["build_system_type"])
new_jobs += 1
except Exception as error:
logger.warning(str(error))
failed_changes += 1
logger.info('We have %s new jobs and %s updated jobs out of %s total jobs processed.',
new_jobs, updated_jobs, total_jobs)
if failed_changes != 0:
logger.warning('We have failed %s changes out of %s total jobs processed.',
failed_changes, total_jobs)
return new_jobs, failed_changes, updated_jobs |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def load_preseed():
""" Update JobPriority information from preseed.json The preseed data has these fields: buildtype, testtype, platform, priority, expiration_date The expiration_date field defaults to 2 weeks when inserted in the table The expiration_date field has the format "YYYY-MM-DD", however, it can have "*" to indicate to never expire The default priority is 1, however, if we want to force coalescing we can do that The fields buildtype, testtype and platform can have * which makes ut match all flavors of the * field. For example: (linux64, pgo, *) matches all Linux 64 pgo tests """ |
if not JobPriority.objects.exists():
return
preseed = preseed_data()
for job in preseed:
queryset = JobPriority.objects.all()
for field in ('testtype', 'buildtype', 'platform'):
if job[field] != '*':
queryset = queryset.filter(**{field: job[field]})
# Deal with the case where we have a new entry in preseed
if not queryset:
create_new_entry(job)
else:
# We can have wildcards, so loop on all returned values in data
for jp in queryset:
process_job_priority(jp, job) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def all_valid_time_intervals():
'''
Helper method to return all possible valid time intervals for data
stored by Perfherder
'''
return [PerformanceTimeInterval.DAY,
PerformanceTimeInterval.WEEK,
PerformanceTimeInterval.TWO_WEEKS,
PerformanceTimeInterval.SIXTY_DAYS,
PerformanceTimeInterval.NINETY_DAYS,
PerformanceTimeInterval.ONE_YEAR] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def get_property_names(self):
'''
Returns all property names in this collection of signatures
'''
property_names = set()
for signature_value in self.values():
for property_name in signature_value.keys():
property_names.add(property_name)
return property_names |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def get_property_values(self, property_name):
'''
Returns all property values for a particular property name in this collection
'''
property_values = set()
for signature_value in self.values():
if signature_value.get(property_name):
property_values.add(signature_value[property_name])
return property_values |
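Both helpers iterate over `self.values()`, which suggests the collection is a dict subclass mapping signature hashes to property dicts (an assumption; the sample data is invented). A standalone sketch:

```python
class SignatureCollection(dict):
    def get_property_names(self):
        # Union of every property key across all signatures.
        property_names = set()
        for signature_value in self.values():
            property_names.update(signature_value.keys())
        return property_names

    def get_property_values(self, property_name):
        # All truthy values recorded for one property across signatures.
        property_values = set()
        for signature_value in self.values():
            if signature_value.get(property_name):
                property_values.add(signature_value[property_name])
        return property_values

signatures = SignatureCollection({
    'abc123': {'suite': 'tp5', 'machine_platform': 'linux64'},
    'def456': {'suite': 'talos', 'test': 'ts_paint'},
})
print(sorted(signatures.get_property_names()))
# → ['machine_platform', 'suite', 'test']
print(sorted(signatures.get_property_values('suite')))  # → ['talos', 'tp5']
```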