<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def register_on_machine_data_changed(self, callback):
"""Set the callback function to consume on machine data changed events. Callback receives a IMachineDataChangedEvent object. Returns the callback_id """ |
event_type = library.VBoxEventType.on_machine_data_changed
return self.event_source.register_callback(callback, event_type) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def register_on_machine_registered(self, callback):
"""Set the callback function to consume on machine registered events. Callback receives a IMachineRegisteredEvent object. Returns the callback_id """ |
event_type = library.VBoxEventType.on_machine_registered
return self.event_source.register_callback(callback, event_type) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def register_on_snapshot_deleted(self, callback):
"""Set the callback function to consume on snapshot deleted events. Callback receives a ISnapshotDeletedEvent object. Returns the callback_id """ |
event_type = library.VBoxEventType.on_snapshot_deleted
return self.event_source.register_callback(callback, event_type) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def register_on_snapshot_taken(self, callback):
"""Set the callback function to consume on snapshot taken events. Callback receives a ISnapshotTakenEvent object. Returns the callback_id """ |
event_type = library.VBoxEventType.on_snapshot_taken
return self.event_source.register_callback(callback, event_type) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def register_on_snapshot_changed(self, callback):
"""Set the callback function to consume on snapshot changed events which occur when snapshot properties have been changed. Callback receives a ISnapshotChangedEvent object. Returns the callback_id """ |
event_type = library.VBoxEventType.on_snapshot_changed
return self.event_source.register_callback(callback, event_type) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def register_on_guest_property_changed(self, callback):
"""Set the callback function to consume on guest property changed events. Callback receives a IGuestPropertyChangedEvent object. Returns the callback_id """ |
event_type = library.VBoxEventType.on_guest_property_changed
return self.event_source.register_callback(callback, event_type) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def register_on_session_state_changed(self, callback):
"""Set the callback function to consume on session state changed events. Callback receives a ISessionStateChangedEvent object. Returns the callback_id """ |
event_type = library.VBoxEventType.on_session_state_changed
return self.event_source.register_callback(callback, event_type) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def type_to_interface(event_type):
"""Return the event interface object that corresponds to the event type enumeration""" |
global _lookup
if not isinstance(event_type, library.VBoxEventType):
raise TypeError("event_type was not of VBoxEventType")
if not _lookup:
for attr in dir(library):
event_interface = getattr(library, attr)
if not inspect.isclass(event_interface):
continue
if not issubclass(event_interface, library.Interface):
continue
et = getattr(event_interface, 'id', None)
if et is None:
continue
if not isinstance(et, library.VBoxEventType):
continue
_lookup[int(et)] = event_interface
return _lookup[int(event_type)] |
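The function above lazily builds a global cache mapping `int(event_type)` to its interface class on the first call, then reuses it. A minimal, self-contained sketch of the same lazy-cache pattern, using hypothetical `EventType`/`Interface` stand-ins instead of the real `library` module:

```python
import enum

class EventType(enum.IntEnum):
    # Hypothetical stand-ins for library.VBoxEventType members.
    ON_STARTED = 1
    ON_STOPPED = 2

class Interface:
    """Stand-in for library.Interface."""

class IStartedEvent(Interface):
    id = EventType.ON_STARTED

class IStoppedEvent(Interface):
    id = EventType.ON_STOPPED

_lookup = {}

def type_to_interface(event_type, candidates):
    """Return the interface class for event_type, building the
    int(enum) -> class table lazily on the first call."""
    if not isinstance(event_type, EventType):
        raise TypeError("event_type was not of EventType")
    if not _lookup:
        for cls in candidates:
            et = getattr(cls, 'id', None)
            if isinstance(et, EventType):
                _lookup[int(et)] = cls
    return _lookup[int(event_type)]
```

Once populated, the cache is reused: a second call can pass an empty candidate list and still resolve.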
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def register_callback(callback, event_source, event_type):
"""register a callback function against an event_source for a given event_type. Arguments: callback - function to call when the event occurs event_source - the source to monitor events in event_type - the type of event we're monitoring for returns the registration id (callback_id) """ |
global _callbacks
event_interface = type_to_interface(event_type)
listener = event_source.create_listener()
event_source.register_listener(listener, [event_type], False)
quit = threading.Event()
t = threading.Thread(target=_event_monitor, args=(callback,
event_source,
listener,
event_interface,
quit))
t.daemon = True
t.start()
# Thread.ident is guaranteed to be set once start() returns; the previous
# spin-wait on is_alive() could loop forever if the monitor thread
# exited immediately.
_callbacks[t.ident] = (t, quit)
return t.ident |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def unregister_callback(callback_id):
"""unregister a callback registration""" |
global _callbacks
obj = _callbacks.pop(callback_id, None)
threads = []
if obj is not None:
t, quit = obj
quit.set()
threads.append(t)
for t in threads:
t.join() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def remove(self, delete=True):
"""Unregister and optionally delete associated config Options: delete - remove all elements of this VM from the system Return the IMedia from unregistered VM """ |
if self.state >= library.MachineState.running:
session = virtualbox.Session()
self.lock_machine(session, library.LockType.shared)
try:
progress = session.console.power_down()
progress.wait_for_completion(-1)
except Exception:
print("Error powering off machine", file=sys.stderr)
session.unlock_machine()
time.sleep(0.5) # TODO figure out how to ensure session is really unlocked...
settings_dir = os.path.dirname(self.settings_file_path)
if delete:
option = library.CleanupMode.detach_all_return_hard_disks_only
else:
option = library.CleanupMode.detach_all_return_none
media = self.unregister(option)
if delete:
progress = self.delete_config(media)
progress.wait_for_completion(-1)
media = []
# if delete - At some point in time virtualbox didn't do a full cleanup
# of this dir. Let's double check it has been cleaned up.
if delete and os.path.exists(settings_dir):
shutil.rmtree(settings_dir)
return media |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def clone(self, snapshot_name_or_id=None, mode=library.CloneMode.machine_state, options=None, name=None, uuid=None, groups=None, basefolder='', register=True):
"""Clone this Machine Options: snapshot_name_or_id - value can be either ISnapshot, name, or id mode - set the CloneMode value options - define the CloneOptions options name - define a name of the new VM uuid - set the uuid of the new VM groups - specify which groups the new VM will exist under basefolder - specify which folder to set the VM up under register - register this VM with the server Note: Default values create a linked clone from the current machine state Return a IMachine object for the newly cloned vm """ |
if options is None:
options = [library.CloneOptions.link]
if groups is None:
groups = []
vbox = virtualbox.VirtualBox()
if snapshot_name_or_id is not None:
if isinstance(snapshot_name_or_id, basestring):
snapshot = self.find_snapshot(snapshot_name_or_id)
else:
snapshot = snapshot_name_or_id
vm = snapshot.machine
else:
# linked clone can only be created from a snapshot...
# try grabbing the current_snapshot
if library.CloneOptions.link in options:
vm = self.current_snapshot.machine
else:
vm = self
if name is None:
name = "%s Clone" % vm.name
# Build the settings file
create_flags = ''
if uuid is not None:
create_flags = "UUID=%s" % uuid
primary_group = ''
if groups:
primary_group = groups[0]
# Make sure this settings file does not already exist
test_name = name
settings_file = ''
for i in range(1, 1000):
settings_file = vbox.compose_machine_filename(test_name,
primary_group,
create_flags,
basefolder)
if not os.path.exists(os.path.dirname(settings_file)):
break
test_name = "%s (%s)" % (name, i)
name = test_name
# Create the new machine and clone it!
vm_clone = vbox.create_machine(settings_file, name, groups, '', create_flags)
progress = vm.clone_to(vm_clone, mode, options)
progress.wait_for_completion(-1)
if register:
vbox.register_machine(vm_clone)
return vm_clone |
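The loop in `clone()` that probes `name`, then `name (1)`, `name (2)`, ... until the composed settings directory is unused can be isolated as a small pure function. A sketch of that collision-avoidance logic, with a plain set of existing names standing in for the `compose_machine_filename`/`os.path.exists` check:

```python
def unique_machine_name(name, existing, limit=1000):
    """Return `name` or the first 'name (i)' variant not present in
    `existing` (a set of names already in use)."""
    test_name = name
    for i in range(1, limit):
        if test_name not in existing:
            break
        test_name = "%s (%s)" % (name, i)
    return test_name
```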
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_session(self, lock_type=library.LockType.shared, session=None):
"""Lock this machine Arguments: lock_type - see IMachine.lock_machine for details session - optionally define a session object to lock this machine against. If not defined, a new ISession object is created to lock against return an ISession object """ |
if session is None:
session = library.ISession()
# NOTE: The following hack handles the issue of unknown machine state.
# This occurs most frequently when a machine is powered off and
# in spite waiting for the completion event to end, the state of
# machine still raises the following Error:
# virtualbox.library.VBoxErrorVmError: 0x80bb0003 (Failed to \
# get a console object from the direct session (Unknown \
# Status 0x80BB0002))
error = None
for _ in range(10):
try:
self.lock_machine(session, lock_type)
except Exception as exc:
error = exc
time.sleep(1)
continue
else:
break
else:
if error is not None:
raise Exception("Failed to create clone - %s" % error)
return session |
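The retry loop in `create_session()` keeps calling `lock_machine` for up to ten attempts, remembering the last error and raising it only after all attempts fail. The same pattern, reduced to a generic helper (names here are illustrative, not from the original):

```python
import time

def retry(func, attempts=10, delay=0):
    """Call func() until it succeeds; after `attempts` failures, raise
    an Exception carrying the last error seen."""
    error = None
    for _ in range(attempts):
        try:
            return func()
        except Exception as exc:
            error = exc
            if delay:
                time.sleep(delay)
    raise Exception("all %d attempts failed - %s" % (attempts, error))
```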
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set_final_value(self, description_type, value):
"""Set the value for the given description type. in description_type type :class:`VirtualSystemDescriptionType` in value type str """ |
types, _, _, vbox_values, extra_config = self.get_description()
# find offset to description type
for offset, t in enumerate(types):
if t == description_type:
break
else:
raise Exception("Failed to find type for %s" % description_type)
enabled = [True] * len(types)
vbox_values = list(vbox_values)
extra_config = list(extra_config)
if isinstance(value, basestring):
final_value = value
elif isinstance(value, Enum):
final_value = str(value._value)
elif isinstance(value, int):
final_value = str(value)
else:
raise ValueError("Incorrect value type.")
vbox_values[offset] = final_value
self.set_final_values(enabled, vbox_values, extra_config) |
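`set_final_value` relies on Python's `for`/`else`: the `else` branch runs only if the loop never hit `break`, i.e. the description type was not found. A self-contained sketch of that offset-search-and-replace step, with plain lists standing in for the description arrays:

```python
def set_value_at_type(types, values, target_type, value):
    """Replace the entry in `values` at the offset where `types`
    contains target_type; raise if it is absent."""
    for offset, t in enumerate(types):
        if t == target_type:
            break
    else:
        raise Exception("Failed to find type for %s" % target_type)
    values = list(values)
    values[offset] = str(value)
    return values
```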
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def chain_return_value(future, loop, return_value):
"""Compatible way to return a value in all Pythons. PEP 479, raise StopIteration(value) from a coroutine won't work forever, but "return value" doesn't work in Python 2. Instead, Motor methods that return values resolve a Future with it, and are implemented with callbacks rather than a coroutine internally. """ |
chained = asyncio.Future(loop=loop)
def copy(_future):
if _future.exception() is not None:
chained.set_exception(_future.exception())
else:
chained.set_result(return_value)
future._future.add_done_callback(functools.partial(loop.add_callback, copy))
return chained |
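The chaining above creates a second future that resolves with a fixed value once the first one completes, propagating any exception. A runnable asyncio-only sketch of the same idea (without the Tornado/Motor internals such as `future._future` or `loop.add_callback`):

```python
import asyncio

def chain_value(future, loop, return_value):
    """Resolve a new future with return_value once `future` completes,
    propagating any exception from the original future."""
    chained = loop.create_future()

    def copy(done):
        if done.exception() is not None:
            chained.set_exception(done.exception())
        else:
            chained.set_result(return_value)

    future.add_done_callback(copy)
    return chained

async def demo():
    loop = asyncio.get_running_loop()
    inner = loop.create_future()
    chained = chain_value(inner, loop, 42)
    inner.set_result("ignored")  # inner's own result is discarded
    return await chained
```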
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def pymongo_class_wrapper(f, pymongo_class):
"""Executes the coroutine f and wraps its result in a Motor class. See WrapAsync. """ |
@functools.wraps(f)
@coroutine
def _wrapper(self, *args, **kwargs):
result = yield f(self, *args, **kwargs)
# Don't call isinstance(), not checking subclasses.
if result.__class__ == pymongo_class:
# Delegate to the current object to wrap the result.
raise gen.Return(self.wrap(result))
else:
raise gen.Return(result)
return _wrapper |
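The key detail in `pymongo_class_wrapper` is the exact-class comparison (`result.__class__ == pymongo_class`), which deliberately skips subclasses. A synchronous sketch of that wrap-the-result decorator, with hypothetical stand-in classes:

```python
import functools

class RawResult(dict):
    """Hypothetical stand-in for a PyMongo return type."""

class WrappedResult:
    """Hypothetical stand-in for the equivalent Motor class."""
    def __init__(self, raw):
        self.raw = raw

def class_wrapper(f, raw_class, wrapper_class):
    """Wrap f's return value only when its class is exactly raw_class
    (subclasses deliberately excluded, as in the snippet above)."""
    @functools.wraps(f)
    def _wrapper(*args, **kwargs):
        result = f(*args, **kwargs)
        if result.__class__ == raw_class:
            return wrapper_class(result)
        return result
    return _wrapper
```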
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def aggregate(self, pipeline, **kwargs):
"""Execute an aggregation pipeline on this collection. The aggregation can be run on a secondary if the client is connected to a replica set and its ``read_preference`` is not :attr:`PRIMARY`. :Parameters: - `pipeline`: a single command or list of aggregation commands - `session` (optional):
a :class:`~pymongo.client_session.ClientSession`, created with :meth:`~MotorClient.start_session`. - `**kwargs`: send arbitrary parameters to the aggregate command Returns a :class:`MotorCommandCursor` that can be iterated like a cursor from :meth:`find`:: pipeline = [{'$project': {'name': {'$toUpper': '$name'}}}] cursor = collection.aggregate(pipeline) while (yield cursor.fetch_next):
doc = cursor.next_object() print(doc) In Python 3.5 and newer, aggregation cursors can be iterated elegantly in native coroutines with `async for`:: async def f():
async for doc in collection.aggregate(pipeline):
print(doc) :class:`MotorCommandCursor` does not allow the ``explain`` option. To explain MongoDB's query plan for the aggregation, use :meth:`MotorDatabase.command`:: async def f():
plan = await db.command( 'aggregate', 'COLLECTION-NAME', pipeline=[{'$project': {'x': 1}}], explain=True) print(plan) .. versionchanged:: 1.0 :meth:`aggregate` now **always** returns a cursor. .. versionchanged:: 0.5 :meth:`aggregate` now returns a cursor by default, and the cursor is returned immediately without a ``yield``. See :ref:`aggregation changes in Motor 0.5 <aggregate_changes_0_5>`. .. versionchanged:: 0.2 Added cursor support. .. _aggregate command: http://docs.mongodb.org/manual/applications/aggregation """ |
cursor_class = create_class_with_framework(
AgnosticLatentCommandCursor, self._framework, self.__module__)
# Latent cursor that will send initial command on first "async for".
return cursor_class(self, self._async_aggregate, pipeline,
**unwrap_kwargs_session(kwargs)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_more(self):
"""Initial query or getMore. Returns a Future.""" |
if not self.alive:
raise pymongo.errors.InvalidOperation(
"Can't call get_more() on a MotorCursor that has been"
" exhausted or killed.")
self.started = True
return self._refresh() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def to_list(self, length):
"""Get a list of documents. .. testsetup:: to_list MongoClient().test.test_collection.delete_many({}) MongoClient().test.test_collection.insert_many([{'_id': i} for i in range(4)]) from tornado import ioloop .. doctest:: to_list [{'_id': 0}, {'_id': 1}] [{'_id': 2}, {'_id': 3}] done :Parameters: - `length`: maximum number of documents to return for this call, or None Returns a Future. .. versionchanged:: 2.0 No longer accepts a callback argument. .. versionchanged:: 0.2 `callback` must be passed as a keyword argument, like ``to_list(10, callback=callback)``, and the `length` parameter is no longer optional. """ |
if length is not None:
if not isinstance(length, int):
raise TypeError('length must be an int, not %r' % length)
elif length < 0:
raise ValueError('length must be non-negative')
if self._query_flags() & _QUERY_OPTIONS['tailable_cursor']:
raise pymongo.errors.InvalidOperation(
"Can't call to_list on tailable cursor")
future = self._framework.get_future(self.get_io_loop())
if not self.alive:
future.set_result([])
else:
the_list = []
self._framework.add_future(
self.get_io_loop(),
self._get_more(),
self._to_list, length, the_list, future)
return future |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def close(self):
"""Close this change stream. Stops any "async for" loops using this change stream. """ |
if self.delegate:
return self._close()
# Never started.
future = self._framework.get_future(self.get_io_loop())
future.set_result(None)
return future |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def asynchronize(framework, sync_method, doc=None, wrap_class=None, unwrap_class=None):
"""Decorate `sync_method` so it returns a Future.

The method runs on a thread and resolves the Future when it completes.

:Parameters:
  - `framework`: An asynchronous framework
  - `sync_method`: Unbound method of pymongo Collection, Database,
    MongoClient, etc.
  - `doc`: Optionally override sync_method's docstring
  - `wrap_class`: Optional PyMongo class, wrap a returned object of this
    PyMongo class in the equivalent Motor class
  - `unwrap_class`: Optional Motor class name, unwrap an argument with
    this Motor class name and pass the wrapped PyMongo object instead
""" |
@functools.wraps(sync_method)
def method(self, *args, **kwargs):
if unwrap_class is not None:
# Don't call isinstance(), not checking subclasses.
unwrapped_args = [
obj.delegate
if obj.__class__.__name__.endswith(
(unwrap_class, 'MotorClientSession'))
else obj
for obj in args]
unwrapped_kwargs = {
key: (obj.delegate
if obj.__class__.__name__.endswith(
(unwrap_class, 'MotorClientSession'))
else obj)
for key, obj in kwargs.items()}
else:
# For speed, don't call unwrap_args_session/unwrap_kwargs_session.
unwrapped_args = [
obj.delegate
if obj.__class__.__name__.endswith('MotorClientSession')
else obj
for obj in args]
unwrapped_kwargs = {
key: (obj.delegate
if obj.__class__.__name__.endswith('MotorClientSession')
else obj)
for key, obj in kwargs.items()}
loop = self.get_io_loop()
return framework.run_on_executor(loop,
sync_method,
self.delegate,
*unwrapped_args,
**unwrapped_kwargs)
if wrap_class is not None:
method = framework.pymongo_class_wrapper(method, wrap_class)
method.is_wrap_method = True # For Synchro.
# This is for the benefit of motor_extensions.py, which needs this info to
# generate documentation with Sphinx.
method.is_async_method = True
name = sync_method.__name__
method.pymongo_method_name = name
if doc is not None:
method.__doc__ = doc
return method |
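At its core, `asynchronize` pushes a blocking call onto an executor and hands back something awaitable. A minimal asyncio-only sketch of that mechanism, without the Motor argument-unwrapping or delegate plumbing (`slow_add` is an invented example function):

```python
import asyncio
import functools
import time

def asynchronize(sync_method):
    """Wrap a blocking function so calls run in the default thread-pool
    executor and return an awaitable."""
    @functools.wraps(sync_method)
    async def method(*args, **kwargs):
        loop = asyncio.get_running_loop()
        return await loop.run_in_executor(
            None, functools.partial(sync_method, *args, **kwargs))
    return method

def slow_add(a, b):
    time.sleep(0.01)  # pretend to block on I/O
    return a + b

async_add = asynchronize(slow_add)
```

`functools.wraps` preserves the sync method's name and docstring, which is what lets the real implementation attach documentation metadata afterwards.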
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def wrap_synchro(fn):
"""If decorated Synchro function returns a Motor object, wrap in a Synchro object. """ |
@functools.wraps(fn)
def _wrap_synchro(*args, **kwargs):
motor_obj = fn(*args, **kwargs)
# Not all Motor classes appear here, only those we need to return
# from methods like map_reduce() or create_collection()
if isinstance(motor_obj, motor.MotorCollection):
client = MongoClient(delegate=motor_obj.database.client)
database = Database(client, motor_obj.database.name)
return Collection(database, motor_obj.name, delegate=motor_obj)
if isinstance(motor_obj, motor.motor_tornado.MotorClientSession):
return ClientSession(delegate=motor_obj)
if isinstance(motor_obj, _MotorTransactionContext):
return _SynchroTransactionContext(motor_obj)
if isinstance(motor_obj, motor.MotorDatabase):
client = MongoClient(delegate=motor_obj.client)
return Database(client, motor_obj.name, delegate=motor_obj)
if isinstance(motor_obj, motor.motor_tornado.MotorChangeStream):
return ChangeStream(motor_obj)
if isinstance(motor_obj, motor.motor_tornado.MotorLatentCommandCursor):
return CommandCursor(motor_obj)
if isinstance(motor_obj, motor.motor_tornado.MotorCommandCursor):
return CommandCursor(motor_obj)
if isinstance(motor_obj, _MotorRawBatchCommandCursor):
return CommandCursor(motor_obj)
if isinstance(motor_obj, motor.motor_tornado.MotorCursor):
return Cursor(motor_obj)
if isinstance(motor_obj, _MotorRawBatchCursor):
return Cursor(motor_obj)
if isinstance(motor_obj, motor.MotorGridIn):
return GridIn(None, delegate=motor_obj)
if isinstance(motor_obj, motor.MotorGridOut):
return GridOut(None, delegate=motor_obj)
if isinstance(motor_obj, motor.motor_tornado.MotorGridOutCursor):
return GridOutCursor(motor_obj)
else:
return motor_obj
return _wrap_synchro |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def unwrap_synchro(fn):
"""Unwrap Synchro objects passed to a method and pass Motor objects instead. """ |
@functools.wraps(fn)
def _unwrap_synchro(*args, **kwargs):
def _unwrap_obj(obj):
if isinstance(obj, Synchro):
return obj.delegate
else:
return obj
args = [_unwrap_obj(arg) for arg in args]
kwargs = dict([
(key, _unwrap_obj(value)) for key, value in kwargs.items()])
return fn(*args, **kwargs)
return _unwrap_synchro |
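The unwrap decorator above is a small, reusable shape: swap wrapper objects for their delegates in both positional and keyword arguments before delegating the call. A self-contained sketch, with a hypothetical `Synchro` wrapper class standing in for the real one:

```python
import functools

class Synchro:
    """Hypothetical wrapper holding a delegate object."""
    def __init__(self, delegate):
        self.delegate = delegate

def unwrap_synchro(fn):
    """Replace Synchro arguments with their delegates before calling fn."""
    @functools.wraps(fn)
    def _unwrap_synchro(*args, **kwargs):
        def _unwrap_obj(obj):
            return obj.delegate if isinstance(obj, Synchro) else obj
        args = [_unwrap_obj(arg) for arg in args]
        kwargs = {key: _unwrap_obj(value) for key, value in kwargs.items()}
        return fn(*args, **kwargs)
    return _unwrap_synchro
```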
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def open(self):
"""Retrieve this file's attributes from the server. Returns a Future. .. versionchanged:: 2.0 No longer accepts a callback argument. .. versionchanged:: 0.2 :class:`~motor.MotorGridOut` now opens itself on demand, calling ``open`` explicitly is rarely needed. """ |
return self._framework.chain_return_value(self._ensure_file(),
self.get_io_loop(),
self) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def find(self, *args, **kwargs):
"""Find and return the files collection documents that match ``filter``. Returns a cursor that iterates across files matching arbitrary queries on the files collection. Can be combined with other modifiers for additional control. For example:: cursor = bucket.find({"filename": "lisa.txt"}, no_cursor_timeout=True) while (yield cursor.fetch_next):
grid_out = cursor.next_object() data = yield grid_out.read() This iterates through all versions of "lisa.txt" stored in GridFS. Note that setting no_cursor_timeout to True may be important to prevent the cursor from timing out during long multi-file processing work. As another example, the call:: most_recent_three = fs.find().sort("uploadDate", -1).limit(3) would return a cursor to the three most recently uploaded files in GridFS. Follows a similar interface to :meth:`~motor.MotorCollection.find` in :class:`~motor.MotorCollection`. :Parameters: - `filter`: Search query. - `batch_size` (optional):
The number of documents to return per batch. - `limit` (optional):
The maximum number of documents to return. - `no_cursor_timeout` (optional):
The server normally times out idle cursors after an inactivity period (10 minutes) to prevent excess memory use. Set this option to True prevent that. - `skip` (optional):
The number of documents to skip before returning. - `sort` (optional):
The order by which to sort results. Defaults to None. - `session` (optional):
a :class:`~pymongo.client_session.ClientSession`, created with :meth:`~MotorClient.start_session`. If a :class:`~pymongo.client_session.ClientSession` is passed to :meth:`find`, all returned :class:`MotorGridOut` instances are associated with that session. .. versionchanged:: 1.2 Added session parameter. """ |
cursor = self.delegate.find(*args, **kwargs)
grid_out_cursor = create_class_with_framework(
AgnosticGridOutCursor, self._framework, self.__module__)
return grid_out_cursor(cursor, self.collection) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_motor_attr(motor_class, name, *defargs):
"""If any Motor attributes can't be accessed, grab the equivalent PyMongo attribute. While we're at it, store some info about each attribute in the global motor_info dict. """ |
attr = safe_getattr(motor_class, name)
# Store some info for process_motor_nodes()
full_name = '%s.%s.%s' % (
motor_class.__module__, motor_class.__name__, name)
full_name_legacy = 'motor.%s.%s.%s' % (
motor_class.__module__, motor_class.__name__, name)
# These sub-attributes are set in motor.asynchronize()
has_coroutine_annotation = getattr(attr, 'coroutine_annotation', False)
is_async_method = getattr(attr, 'is_async_method', False)
is_cursor_method = getattr(attr, 'is_motorcursor_chaining_method', False)
if is_async_method or is_cursor_method:
pymongo_method = getattr(
motor_class.__delegate_class__, attr.pymongo_method_name)
else:
pymongo_method = None
# attr.doc is set by statement like 'error = AsyncRead(doc="OBSOLETE")'.
is_pymongo_doc = pymongo_method and attr.__doc__ == pymongo_method.__doc__
motor_info[full_name] = motor_info[full_name_legacy] = {
'is_async_method': is_async_method or has_coroutine_annotation,
'is_pymongo_docstring': is_pymongo_doc,
'pymongo_method': pymongo_method,
}
return attr |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_attached_instruments(
        self, expected: Dict[Mount, str]) -> Dict[Mount, Dict[str, Optional[str]]]:
""" Find the instruments attached to our mounts.

:param expected: A dict that may contain a mapping from mount to strings
    that should prefix instrument model names. When instruments are
    scanned, they are matched against the expectation (if present) and a
    :py:attr:`RuntimeError` is raised if there is no match.

:raises RuntimeError: If an instrument is expected but not found.

:returns: A dict with mounts as the top-level keys. Each mount value is
    a dict with keys 'model' (containing an instrument model name or
    `None`) and 'id' (containing the serial number of the pipette
    attached to that mount, or `None`).
""" |
to_return: Dict[Mount, Dict[str, Optional[str]]] = {}
for mount in Mount:
found_model = self._smoothie_driver.read_pipette_model(
mount.name.lower())
found_id = self._smoothie_driver.read_pipette_id(
mount.name.lower())
expected_instr = expected.get(mount, None)
if expected_instr and\
(not found_model or not found_model.startswith(expected_instr)):
raise RuntimeError(
'mount {}: instrument {} was requested but {} is present'
.format(mount.name, expected_instr, found_model))
to_return[mount] = {
'model': found_model,
'id': found_id}
return to_return |
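The expectation check above boils down to: for each mount with an expected prefix, the scanned model must exist and start with that prefix, otherwise raise. A hardware-free sketch of that matching logic (plain strings stand in for the `Mount` enum and driver reads):

```python
def match_instruments(found, expected):
    """Check scanned models against expected prefixes per mount and
    raise RuntimeError on a mismatch."""
    for mount, prefix in expected.items():
        model = found.get(mount)
        if not model or not model.startswith(prefix):
            raise RuntimeError(
                'mount {}: instrument {} was requested but {} is present'
                .format(mount, prefix, model))
    return found
```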
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def probe(self, axis: str, distance: float) -> Dict[str, float]:
""" Run a probe and return the new position dict. """ |
return self._smoothie_driver.probe_axis(axis, distance) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
async def delay(self, duration_s: int):
""" Pause and sleep """ |
self.pause()
await asyncio.sleep(duration_s)
self.resume() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def init_pipette():
""" Finds pipettes attached to the robot currently and chooses the correct one to add to the session. :return: The pipette type and mount chosen for deck calibration """ |
global session
pipette_info = set_current_mount(session.adapter, session)
pipette = pipette_info['pipette']
res = {}
if pipette:
session.current_model = pipette_info['model']
if not feature_flags.use_protocol_api_v2():
mount = pipette.mount
session.current_mount = mount
else:
mount = pipette.get('mount')
session.current_mount = mount_by_name[mount]
session.pipettes[mount] = pipette
res = {'mount': mount, 'model': pipette_info['model']}
log.info("Pipette info {}".format(session.pipettes))
return res |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set_current_mount(hardware, session):
""" Choose the pipette in which to execute commands. If there is no pipette, or it is uncommissioned, the pipette is not mounted. :attached_pipettes attached_pipettes: Information obtained from the current pipettes attached to the robot. This looks like the following: :dict with keys 'left' and 'right' and a model string for each mount, or 'uncommissioned' if no model string available :return: The selected pipette """ |
pipette = None
right_channel = None
left_channel = None
right_pipette, left_pipette = get_pipettes(hardware)
if right_pipette:
if not feature_flags.use_protocol_api_v2():
right_channel = right_pipette.channels
else:
right_channel = right_pipette.get('channels')
right_pipette['mount'] = 'right'
if left_pipette:
if not feature_flags.use_protocol_api_v2():
left_channel = left_pipette.channels
else:
left_channel = left_pipette.get('channels')
left_pipette['mount'] = 'left'
if right_channel == 1:
pipette = right_pipette
elif left_channel == 1:
pipette = left_pipette
elif right_pipette:
pipette = right_pipette
session.cp = CriticalPoint.FRONT_NOZZLE
elif left_pipette:
pipette = left_pipette
session.cp = CriticalPoint.FRONT_NOZZLE
model, id = _get_model_name(pipette, hardware)
session.pipette_id = id
return {'pipette': pipette, 'model': model} |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
async def attach_tip(data):
""" Attach a tip to the current pipette :param data: Information obtained from a POST request. The content type is application/json. The correct packet form should be as follows: { 'token': UUID token from current session start 'command': 'attach tip' 'tipLength': a float representing how much the length of a pipette increases when a tip is added } """ |
global session
tip_length = data.get('tipLength')
if not tip_length:
message = 'Error: "tipLength" must be specified in request'
status = 400
else:
if not feature_flags.use_protocol_api_v2():
pipette = session.pipettes[session.current_mount]
if pipette.tip_attached:
log.warning('attach tip called while tip already attached')
pipette._remove_tip(pipette._tip_length)
pipette._add_tip(tip_length)
else:
session.adapter.add_tip(session.current_mount, tip_length)
if session.cp:
session.cp = CriticalPoint.FRONT_NOZZLE
session.tip_length = tip_length
message = "Tip length set: {}".format(tip_length)
status = 200
return web.json_response({'message': message}, status=status) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
async def detach_tip(data):
""" Detach the tip from the current pipette :param data: Information obtained from a POST request. The content type is application/json. The correct packet form should be as follows: { 'token': UUID token from current session start 'command': 'detach tip' } """ |
global session
if not feature_flags.use_protocol_api_v2():
pipette = session.pipettes[session.current_mount]
if not pipette.tip_attached:
log.warning('detach tip called with no tip')
pipette._remove_tip(session.tip_length)
else:
session.adapter.remove_tip(session.current_mount)
if session.cp:
session.cp = CriticalPoint.NOZZLE
session.tip_length = None
return web.json_response({'message': "Tip removed"}, status=200) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
async def run_jog(data):
""" Allow the user to jog the selected pipette around the deck map :param data: Information obtained from a POST request. The content type is application/json The correct packet form should be as follows: { 'token': UUID token from current session start 'command': 'jog' 'axis': The current axis you wish to move 'direction': The direction you wish to move (+ or -) 'step': The increment you wish to move } :return: The position you are moving to based on axis, direction, step given by the user. """ |
axis = data.get('axis')
direction = data.get('direction')
step = data.get('step')
if axis not in ('x', 'y', 'z'):
message = '"axis" must be "x", "y", or "z"'
status = 400
elif direction not in (-1, 1):
message = '"direction" must be -1 or 1'
status = 400
elif step is None:
message = '"step" must be specified'
status = 400
else:
position = jog(
axis,
direction,
step,
session.adapter,
session.current_mount,
session.cp)
message = 'Jogged to {}'.format(position)
status = 200
return web.json_response({'message': message}, status=status) |
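The effect of a jog on a cached position is simple vector arithmetic. A sketch under the assumption that positions are plain (x, y, z) tuples (this helper is illustrative, not the real `jog` implementation):

```python
def apply_jog(position, axis, direction, step):
    # Nudge one axis of an (x, y, z) tuple by direction * step.
    idx = {'x': 0, 'y': 1, 'z': 2}[axis]
    moved = list(position)
    moved[idx] += direction * step
    return tuple(moved)
```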
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
async def move(data):
""" Allow the user to move the selected pipette to a specific point :param data: Information obtained from a POST request. The content type is application/json The correct packet form should be as follows: { 'token': UUID token from current session start 'command': 'move' 'point': The name of the point to move to. Must be one of ["1", "2", "3", "safeZ", "attachTip"] } :return: The position you are moving to """ |
global session
point_name = data.get('point')
point = safe_points().get(point_name)
if point and len(point) == 3:
if not feature_flags.use_protocol_api_v2():
pipette = session.pipettes[session.current_mount]
channels = pipette.channels
# For multichannel pipettes in the V1 session, we use the tip closest
# to the front of the robot rather than the back (this is the tip that
# would go into well H1 of a plate when pipetting from the first row of
# a 96 well plate, for instance). Since moves are issued for the A1 tip
# we have to adjust the target point by 2 * Y_OFFSET_MULTI (where the
# offset value is the distance from the axial center of the pipette to
            # the A1 tip). By sending the A1 tip to the adjusted target, the H1
            # tip should go to the desired point. Y_OFFSET_MULTI must then be backed
# out of xy positions saved in the `save_xy` handler
# (not 2 * Y_OFFSET_MULTI, because the axial center of the pipette
            # will only be off by 1 * Y_OFFSET_MULTI).
            if channels != 1:
x = point[0]
y = point[1] + pipette_config.Y_OFFSET_MULTI * 2
z = point[2]
point = (x, y, z)
pipette.move_to((session.adapter.deck, point), strategy='arc')
else:
            if point_name != 'attachTip':
intermediate_pos = position(
session.current_mount, session.adapter, session.cp)
session.adapter.move_to(
session.current_mount,
Point(
x=intermediate_pos[0],
y=intermediate_pos[1],
z=session.tip_length),
critical_point=session.cp)
session.adapter.move_to(
session.current_mount,
Point(x=point[0], y=point[1], z=session.tip_length),
critical_point=session.cp)
session.adapter.move_to(
session.current_mount,
Point(x=point[0], y=point[1], z=point[2]),
critical_point=session.cp)
else:
if session.cp:
session.cp = CriticalPoint.NOZZLE
session.adapter.move_to(
session.current_mount,
Point(x=point[0], y=point[1], z=point[2]),
critical_point=session.cp)
message = 'Moved to {}'.format(point)
status = 200
else:
message = '"point" must be one of "1", "2", "3", "safeZ", "attachTip"'
status = 400
return web.json_response({'message': message}, status=status) |
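The multichannel correction in the V1 branch can be isolated as a pure function. A sketch assuming a placeholder value for `Y_OFFSET_MULTI` (the real constant lives in `pipette_config`):

```python
Y_OFFSET_MULTI = 2.0  # assumed placeholder; the real value comes from pipette_config

def adjust_target_for_multi(point, channels):
    # Moves address the A1 (back) tip, but V1 calibration uses the H1
    # (front) tip, so shift the commanded y by 2 * Y_OFFSET_MULTI for
    # multichannel pipettes; single-channel targets pass through unchanged.
    if channels == 1:
        return point
    x, y, z = point
    return (x, y + Y_OFFSET_MULTI * 2, z)
```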
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
async def save_xy(data):
""" Save the current XY values for the calibration data :param data: Information obtained from a POST request. The content type is application/json. The correct packet form should be as follows: { 'token': UUID token from current session start 'command': 'save xy' 'point': a string ID ['1', '2', or '3'] of the calibration point to save } """ |
global session
valid_points = list(session.points.keys())
point = data.get('point')
if point not in valid_points:
message = 'point must be one of {}'.format(valid_points)
status = 400
elif not session.current_mount:
message = "Mount must be set before calibrating"
status = 400
else:
if not feature_flags.use_protocol_api_v2():
mount = 'Z' if session.current_mount == 'left' else 'A'
x, y, _ = position(mount, session.adapter)
if session.pipettes[session.current_mount].channels != 1:
# See note in `move`
y = y - pipette_config.Y_OFFSET_MULTI
if session.current_mount == 'left':
dx, dy, _ = session.adapter.config.mount_offset
x = x + dx
y = y + dy
else:
x, y, _ = position(
session.current_mount, session.adapter, session.cp)
session.points[point] = (x, y)
message = "Saved point {} value: {}".format(
point, session.points[point])
status = 200
return web.json_response({'message': message}, status=status) |
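The V1 branch's coordinate corrections (backing out one `Y_OFFSET_MULTI` for multichannel pipettes, then applying the left-mount offset) can be sketched as a pure function. Both constants below are assumed placeholders, not the real config values:

```python
Y_OFFSET_MULTI = 2.0              # assumed placeholder
LEFT_MOUNT_OFFSET = (-34.0, 0.0)  # assumed placeholder for config.mount_offset (dx, dy)

def corrected_xy(x, y, channels, mount):
    if channels != 1:
        # Back out 1 * Y_OFFSET_MULTI (see the note in `move`: the axial
        # center is only off by one offset, not two).
        y = y - Y_OFFSET_MULTI
    if mount == 'left':
        dx, dy = LEFT_MOUNT_OFFSET
        x, y = x + dx, y + dy
    return (x, y)
```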
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
async def save_z(data):
""" Save the current Z height value for the calibration data :param data: Information obtained from a POST request. The content type is application/json. The correct packet form should be as follows: { 'token': UUID token from current session start 'command': 'save z' } """ |
if not session.tip_length:
message = "Tip length must be set before calibrating"
status = 400
else:
if not feature_flags.use_protocol_api_v2():
mount = 'Z' if session.current_mount == 'left' else 'A'
actual_z = position(
mount, session.adapter)[-1]
length_offset = pipette_config.load(
session.current_model, session.pipette_id).model_offset[-1]
session.z_value = actual_z - session.tip_length + length_offset
else:
session.z_value = position(
session.current_mount, session.adapter, session.cp)[-1]
message = "Saved z: {}".format(session.z_value)
status = 200
return web.json_response({'message': message}, status=status) |
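The V1 z calculation is a small formula: the measured gantry height, minus the attached tip's length, plus the pipette model's z offset. A sketch with illustrative numbers:

```python
def calibrated_z(actual_z, tip_length, model_offset_z):
    # Remove the tip's contribution and fold in the model's z offset so
    # the saved value describes the pipette, not the tip end.
    return actual_z - tip_length + model_offset_z
```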
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
async def release(data):
""" Release a session :param data: Information obtained from a POST request. The content type is application/json. The correct packet form should be as follows: { 'token': UUID token from current session start 'command': 'release' } """ |
global session
if not feature_flags.use_protocol_api_v2():
session.adapter.remove_instrument('left')
session.adapter.remove_instrument('right')
else:
session.adapter.cache_instruments()
session = None
return web.json_response({"message": "calibration session released"}) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
async def dispatch(request):
""" Routes commands to subhandlers based on the command field in the body. """ |
if session:
message = ''
data = await request.json()
try:
log.info("Dispatching {}".format(data))
_id = data.get('token')
if not _id:
message = '"token" field required for calibration requests'
raise AssertionError
command = data.get('command')
if not command:
message = '"command" field required for calibration requests'
raise AssertionError
if _id == session.id:
res = await router[command](data)
else:
res = web.json_response(
{'message': 'Invalid token: {}'.format(_id)}, status=403)
except AssertionError:
res = web.json_response({'message': message}, status=400)
except Exception as e:
res = web.json_response(
{'message': 'Exception {} raised by dispatch of {}: {}'.format(
type(e), data, e)},
status=500)
else:
res = web.json_response(
{'message': 'Session must be started before issuing commands'},
status=418)
return res |
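The dispatch flow (token check, command check, token match, then handler lookup) can be sketched synchronously with plain tuples; names here are illustrative, and the real handler is async and returns aiohttp responses:

```python
def dispatch_sync(data, session_id, router):
    token = data.get('token')
    if not token:
        return {'message': '"token" field required for calibration requests'}, 400
    command = data.get('command')
    if not command:
        return {'message': '"command" field required for calibration requests'}, 400
    if token != session_id:
        return {'message': 'Invalid token: {}'.format(token)}, 403
    # An unknown command would raise KeyError here; the real handler maps
    # any such exception to a 500 response.
    return router[command](data), 200
```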
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _serial_poller(self):
""" Priority-sorted list of checks Highest priority is the 'halt' channel, which is used to kill the thread and the serial communication channel and allow everything to be cleaned up. Second is the lid-open interrupt, which should trigger a callback (typically to halt the robot). Third is an enqueued command to send to the Thermocycler. Fourth (if no other work is available) is to query the Thermocycler for its current temp, target temp, and time remaining in its current cycle. """ |
while True:
_next = dict(self._poller.poll(POLLING_FREQUENCY_MS))
if self._halt_read_file.fileno() in _next:
log.debug("Poller [{}]: halt".format(hash(self)))
self._halt_read_file.read()
# Note: this is discarded because we send a set message to halt
# the thread--don't currently need to parse it
break
elif self._connection.fileno() in _next:
# Lid-open interrupt
log.debug("Poller [{}]: interrupt".format(hash(self)))
res = self._connection.read_until(SERIAL_ACK)
self._interrupt_callback(res)
elif self._send_read_file.fileno() in _next:
self._send_read_file.read(1)
command, callback = self._command_queue.get()
log.debug("Poller [{}]: send {}".format(hash(self), command))
res = self._send_command(command)
callback(res)
else:
# Nothing else to do--update device status
log.debug("Poller [{}]: updating temp".format(hash(self)))
res = self._send_command(GCODES['GET_PLATE_TEMP'])
self._status_callback(res)
log.info("Exiting TC poller loop [{}]".format(hash(self))) |
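The priority ordering described in the docstring reduces to a first-match check over the ready file descriptors. A sketch (the fd names are assumptions standing in for the poller's files):

```python
def next_action(ready_fds, halt_fd, interrupt_fd, send_fd):
    # Highest priority first: halt, then lid-open interrupt, then a queued
    # command; otherwise fall through to a status poll.
    if halt_fd in ready_fds:
        return 'halt'
    if interrupt_fd in ready_fds:
        return 'interrupt'
    if send_fd in ready_fds:
        return 'send'
    return 'poll_status'
```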
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def disconnect(self):
'''
Disconnect from the serial port
'''
if self._poll_stop_event:
self._poll_stop_event.set()
if self._driver:
if self.status != 'idle':
self.deactivate()
self._driver.disconnect() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def temp_connect(self, hardware: hc.API):
""" Connect temporarily to the specified hardware controller. This should be used as a context manager: .. code-block :: python with ctx.temp_connect(hw):
# do some tasks ctx.home() # after the with block, the context is connected to the same # hardware control API it was connected to before, even if # an error occurred in the code inside the with block """ |
old_hw = self._hw_manager.hardware
try:
self._hw_manager.set_hw(hardware)
yield self
finally:
self._hw_manager.set_hw(old_hw) |
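The try/finally swap above is the standard pattern for temporarily replacing a resource: the restore runs even if the with-block body raises. A generic sketch (the dict holder is a stand-in for the hardware manager):

```python
import contextlib

@contextlib.contextmanager
def temp_swap(holder, key, new_value):
    # Swap holder[key] for the duration of the with-block, restoring the
    # old value on exit even when the body raises.
    old = holder[key]
    holder[key] = new_value
    try:
        yield holder
    finally:
        holder[key] = old
```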
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def connect(self, hardware: hc.API):
""" Connect to a running hardware API. This can be either a simulator or a full hardware controller. Note that there is no true disconnected state for a :py:class:`.ProtocolContext`; :py:meth:`disconnect` simply creates a new simulator and replaces the current hardware with it. """ |
self._hw_manager.set_hw(hardware)
self._hw_manager.hardware.cache_instruments() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def load_labware( self, labware_obj: Labware, location: types.DeckLocation) -> Labware: """ Specify the presence of a piece of labware on the OT2 deck. This function loads the labware specified by `labware` (previously loaded from a configuration file) to the location specified by `location`. :param Labware labware: The labware object to load :param location: The slot into which to load the labware such as 1 or '1' :type location: int or str """ |
self._deck_layout[location] = labware_obj
return labware_obj |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def load_labware_by_name( self, labware_name: str, location: types.DeckLocation, label: str = None) -> Labware: """ A convenience function to specify a piece of labware by name. For labware already defined by Opentrons, this is a convenient way to collapse the two stages of labware initialization (creating the labware and adding it to the protocol) into one. This function returns the created and initialized labware for use later in the protocol. :param str labware_name: The name of the labware to load :param location: The slot into which to load the labware such as 1 or '1' :type location: int or str :param str label: An optional special name to give the labware. If specified, this is the name the labware will appear as in the run log and the calibration view in the Opentrons app. """ |
labware = load(labware_name,
self._deck_layout.position_for(location),
label)
return self.load_labware(labware, location) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def load_instrument( self, instrument_name: str, mount: Union[types.Mount, str], tip_racks: List[Labware] = None, replace: bool = False) -> 'InstrumentContext': """ Load a specific instrument required by the protocol. This value will actually be checked when the protocol runs, to ensure that the correct instrument is attached in the specified location. :param str instrument_name: The name of the instrument model, or a prefix. For instance, 'p10_single' may be used to request a P10 single regardless of the version. :param mount: The mount in which this instrument should be attached. This can either be an instance of the enum type :py:class:`.types.Mount` or one of the strings `'left'` and `'right'`. :type mount: types.Mount or str :param tip_racks: A list of tip racks from which to pick tips if :py:meth:`.InstrumentContext.pick_up_tip` is called without arguments. :type tip_racks: List[:py:class:`.Labware`] :param bool replace: Indicate that the currently-loaded instrument in `mount` (if such an instrument exists) should be replaced by `instrument_name`. """ |
if isinstance(mount, str):
try:
checked_mount = types.Mount[mount.upper()]
except KeyError:
raise ValueError(
"If mount is specified as a string, it should be either"
"'left' or 'right' (ignoring capitalization, which the"
" system strips), not {}".format(mount))
elif isinstance(mount, types.Mount):
checked_mount = mount
else:
raise TypeError(
"mount should be either an instance of opentrons.types.Mount"
" or a string, but is {}.".format(mount))
self._log.info("Trying to load {} on {} mount"
.format(instrument_name, checked_mount.name.lower()))
instr = self._instruments[checked_mount]
if instr and not replace:
raise RuntimeError("Instrument already present in {} mount: {}"
.format(checked_mount.name.lower(),
instr.name))
attached = {att_mount: instr.get('name', None)
for att_mount, instr
in self._hw_manager.hardware.attached_instruments.items()}
attached[checked_mount] = instrument_name
self._log.debug("cache instruments expectation: {}"
.format(attached))
self._hw_manager.hardware.cache_instruments(attached)
# If the cache call didn’t raise, the instrument is attached
new_instr = InstrumentContext(
ctx=self,
hardware_mgr=self._hw_manager,
mount=checked_mount,
tip_racks=tip_racks,
log_parent=self._log)
self._instruments[checked_mount] = new_instr
self._log.info("Instrument {} loaded".format(new_instr))
return new_instr |
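The mount-normalization logic at the top of `load_instrument` accepts either an enum member or a string. A self-contained sketch with a stand-in enum (the real one is `opentrons.types.Mount`):

```python
import enum

class Mount(enum.Enum):  # stand-in for opentrons.types.Mount
    LEFT = enum.auto()
    RIGHT = enum.auto()

def check_mount(mount):
    # Normalize a 'left'/'right' string (any capitalization) or a Mount
    # member to a Mount member, rejecting anything else.
    if isinstance(mount, str):
        try:
            return Mount[mount.upper()]
        except KeyError:
            raise ValueError(
                "If mount is a string it should be 'left' or 'right', "
                "not {}".format(mount))
    if isinstance(mount, Mount):
        return mount
    raise TypeError("mount should be a Mount or str, but is {}".format(mount))
```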
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def loaded_instruments(self) -> Dict[str, Optional['InstrumentContext']]: """ Get the instruments that have been loaded into the protocol. :returns: A dict mapping mount names in lowercase to the instrument in that mount, or `None` if no instrument is present. """ |
return {mount.name.lower(): instr for mount, instr
in self._instruments.items()} |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def delay(self, seconds=0, minutes=0, msg=None):
""" Delay protocol execution for a specific amount of time. :param float seconds: A time to delay in seconds :param float minutes: A time to delay in minutes If both `seconds` and `minutes` are specified, they will be added. """ |
delay_time = seconds + minutes * 60
self._hw_manager.hardware.delay(delay_time) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def home(self):
""" Homes the robot. """ |
self._log.debug("home")
self._location_cache = None
self._hw_manager.hardware.home() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def blow_out(self, location: Union[types.Location, Well] = None ) -> 'InstrumentContext': """ Blow liquid out of the tip. If :py:meth:`dispense` is used to completely empty a pipette, usually a small amount of liquid will remain in the tip. This method moves the plunger past its usual stops to fully remove any remaining liquid from the tip. Regardless of how much liquid was in the tip when this function is called, after it is done the tip will be empty. :param location: The location to blow out into. If not specified, defaults to the current location of the pipette :type location: :py:class:`.Well` or :py:class:`.Location` or None :raises RuntimeError: If no location is specified and location cache is None. This should happen if `blow_out` is called without first calling a method that takes a location (eg, :py:meth:`.aspirate`, :py:meth:`dispense`) :returns: This instance """ |
if location is None:
if not self._ctx.location_cache:
raise RuntimeError('No valid current location cache present')
else:
location = self._ctx.location_cache.labware # type: ignore
# type checked below
if isinstance(location, Well):
if location.parent.is_tiprack:
self._log.warning('Blow_out being performed on a tiprack. '
'Please re-check your code')
target = location.top()
elif isinstance(location, types.Location) and not \
isinstance(location.labware, Well):
raise TypeError(
'location should be a Well or None, but it is {}'
.format(location))
else:
raise TypeError(
'location should be a Well or None, but it is {}'
.format(location))
self.move_to(target)
self._hw_manager.hardware.blow_out(self._mount)
return self |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def touch_tip(self, location: Well = None, radius: float = 1.0, v_offset: float = -1.0, speed: float = 60.0) -> 'InstrumentContext': """ Touch the pipette tip to the sides of a well, with the intent of removing left-over droplets :param location: If no location is passed, pipette will touch tip at current well's edges :type location: :py:class:`.Well` or None :param radius: Describes the proportion of the target well's radius. When `radius=1.0`, the pipette tip will move to the edge of the target well; when `radius=0.5`, it will move to 50% of the well's radius. Default: 1.0 (100%) :type radius: float :param v_offset: The offset in mm from the top of the well to touch tip. A positive offset moves the tip higher above the well, while a negative offset moves it lower into the well. Default: -1.0 mm :type v_offset: float :param speed: The speed for touch tip motion, in mm/s. Default: 60.0 mm/s, Max: 80.0 mm/s, Min: 20.0 mm/s :type speed: float :raises NoTipAttachedError: if no tip is attached to the pipette :raises RuntimeError: If no location is specified and location cache is None. This should happen if `touch_tip` is called without first calling a method that takes a location (eg, :py:meth:`.aspirate`, :py:meth:`dispense`) :returns: This instance .. note:: This is a behavior change from the legacy API (which accepts any :py:class:`.Placeable` as the ``location`` parameter) """ |
if not self.hw_pipette['has_tip']:
raise hc.NoTipAttachedError('Pipette has no tip to touch_tip()')
if speed > 80.0:
self._log.warning('Touch tip speed above limit. Setting to 80mm/s')
speed = 80.0
elif speed < 20.0:
self._log.warning('Touch tip speed below min. Setting to 20mm/s')
speed = 20.0
# If location is a valid well, move to the well first
if location is None:
if not self._ctx.location_cache:
raise RuntimeError('No valid current location cache present')
else:
location = self._ctx.location_cache.labware # type: ignore
# type checked below
if isinstance(location, Well):
if location.parent.is_tiprack:
self._log.warning('Touch_tip being performed on a tiprack. '
'Please re-check your code')
self.move_to(location.top())
else:
raise TypeError(
'location should be a Well, but it is {}'.format(location))
# Determine the touch_tip edges/points
offset_pt = types.Point(0, 0, v_offset)
well_edges = [
# right edge
location._from_center_cartesian(x=radius, y=0, z=1) + offset_pt,
# left edge
location._from_center_cartesian(x=-radius, y=0, z=1) + offset_pt,
# back edge
location._from_center_cartesian(x=0, y=radius, z=1) + offset_pt,
# front edge
location._from_center_cartesian(x=0, y=-radius, z=1) + offset_pt
]
for edge in well_edges:
self._hw_manager.hardware.move_to(self._mount, edge, speed)
return self |
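The four edge points visit the right, left, back, and front of the well at a height of (well top + v_offset). A geometric sketch, assuming the well's half-widths stand in for what `_from_center_cartesian` computes:

```python
def touch_tip_edges(center_xy, half_x, half_y, top_z, radius=1.0, v_offset=-1.0):
    # Returns [right, left, back, front] points, scaled toward the wall by
    # `radius` and offset vertically by `v_offset` from the well top.
    cx, cy = center_xy
    z = top_z + v_offset
    return [
        (cx + half_x * radius, cy, z),  # right edge
        (cx - half_x * radius, cy, z),  # left edge
        (cx, cy + half_y * radius, z),  # back edge
        (cx, cy - half_y * radius, z),  # front edge
    ]
```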
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def air_gap(self, volume: float = None, height: float = None) -> 'InstrumentContext': """ Pull air into the pipette's current tip at the current location :param volume: The amount in uL of air to aspirate into the tip. (Default will use all remaining volume in tip) :type volume: float :param height: The number of millimeters to move above the current Well to air-gap aspirate. (Default: 5mm above current Well) :type height: float :raises NoTipAttachedError: If no tip is attached to the pipette :raises RuntimeError: If location cache is None. This should happen if `air_gap` is called without first calling a method that takes a location (eg, :py:meth:`.aspirate`, :py:meth:`dispense`) :returns: This instance """ |
if not self.hw_pipette['has_tip']:
raise hc.NoTipAttachedError('Pipette has no tip. Aborting air_gap')
if height is None:
height = 5
loc = self._ctx.location_cache
if not loc or not isinstance(loc.labware, Well):
raise RuntimeError('No previous Well cached to perform air gap')
target = loc.labware.top(height)
self.move_to(target)
self.aspirate(volume)
return self |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def return_tip(self) -> 'InstrumentContext': """ If a tip is currently attached to the pipette, then it will return the tip to its location in the tiprack. It will not reset tip tracking so the well flag will remain False. :returns: This instance """ |
if not self.hw_pipette['has_tip']:
self._log.warning('Pipette has no tip to return')
loc = self._last_tip_picked_up_from
if not isinstance(loc, Well):
raise TypeError('Last tip location should be a Well but it is: '
'{}'.format(loc))
bot = loc.bottom()
bot = bot._replace(point=bot.point._replace(z=bot.point.z + 10))
self.drop_tip(bot)
return self |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def pick_up_tip(self, location: Union[types.Location, Well] = None, presses: int = 3, increment: float = 1.0) -> 'InstrumentContext': """ Pick up a tip for the pipette to run liquid-handling commands with. If no location is passed, the Pipette will pick up the next available tip in its :py:attr:`InstrumentContext.tip_racks` list. The tip to pick up can be manually specified with the `location` argument. The `location` argument can be specified in several ways: * If the only thing to specify is which well from which to pick up a tip, `location` can be a :py:class:`.Well`. For instance, if you have a tip rack in a variable called `tiprack`, you can pick up a specific tip from it with ``instr.pick_up_tip(tiprack.wells()[0])``. This style of call can be used to make the robot pick up a tip from a tip rack that was not specified when creating the :py:class:`.InstrumentContext`. * If the position to move to in the well needs to be specified, for instance to tell the robot to run its pick up tip routine starting closer to or farther from the top of the tip, `location` can be a :py:class:`.types.Location`; for instance, you can call ``instr.pick_up_tip(tiprack.wells()[0].top())``. :param location: The location from which to pick up a tip. :type location: :py:class:`.types.Location` or :py:class:`.Well` to pick up a tip from. :param presses: The number of times to lower and then raise the pipette when picking up a tip, to ensure a good seal (0 [zero] will result in the pipette hovering over the tip but not picking it up--generally not desirable, but could be used for dry-run). :type presses: int :param increment: The additional distance to travel on each successive press (e.g.: if `presses=3` and `increment=1.0`, then the first press will travel down into the tip by 3.5mm, the second by 4.5mm, and the third by 5.5mm). :type increment: float :returns: This instance """ |
num_channels = self.channels
def _select_tiprack_from_list(tip_racks) -> Tuple[Labware, Well]:
try:
tr = tip_racks[0]
except IndexError:
raise OutOfTipsError
next_tip = tr.next_tip(num_channels)
if next_tip:
return tr, next_tip
else:
return _select_tiprack_from_list(tip_racks[1:])
if location and isinstance(location, types.Location):
if isinstance(location.labware, Labware):
tiprack = location.labware
target: Well = tiprack.next_tip(num_channels) # type: ignore
if not target:
raise OutOfTipsError
elif isinstance(location.labware, Well):
tiprack = location.labware.parent
target = location.labware
elif location and isinstance(location, Well):
tiprack = location.parent
target = location
elif not location:
tiprack, target = _select_tiprack_from_list(self.tip_racks)
else:
raise TypeError(
"If specified, location should be an instance of "
"types.Location (e.g. the return value from "
"tiprack.wells()[0].top()) or a Well (e.g. tiprack.wells()[0]."
" However, it is a {}".format(location))
assert tiprack.is_tiprack, "{} is not a tiprack".format(str(tiprack))
self.move_to(target.top())
self._hw_manager.hardware.pick_up_tip(
self._mount, tiprack.tip_length, presses, increment)
# Note that the hardware API pick_up_tip action includes homing z after
tiprack.use_tips(target, num_channels)
self._last_tip_picked_up_from = target
return self |
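The recursive `_select_tiprack_from_list` walk can be demonstrated with stand-in racks; `FakeRack` and this `OutOfTipsError` are illustrative, not Opentrons classes:

```python
class OutOfTipsError(Exception):
    pass

class FakeRack:
    # Illustrative stand-in: next_tip returns the first remaining tip
    # name, or None when the rack is exhausted.
    def __init__(self, tips):
        self.tips = list(tips)

    def next_tip(self, num_channels):
        return self.tips[0] if self.tips else None

def select_tiprack(tip_racks, num_channels=1):
    # Walk the rack list recursively: first rack with an available tip
    # wins; an empty list means every rack is exhausted.
    try:
        rack = tip_racks[0]
    except IndexError:
        raise OutOfTipsError
    tip = rack.next_tip(num_channels)
    if tip:
        return rack, tip
    return select_tiprack(tip_racks[1:], num_channels)
```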
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def drop_tip( self, location: Union[types.Location, Well] = None)\ -> 'InstrumentContext': """ Drop the current tip. If no location is passed, the Pipette will drop the tip into its :py:attr:`trash_container`, which if not specified defaults to the fixed trash in slot 12. The location in which to drop the tip can be manually specified with the `location` argument. The `location` argument can be specified in several ways: - If the only thing to specify is which well into which to drop a tip, `location` can be a :py:class:`.Well`. For instance, if you have a tip rack in a variable called `tiprack`, you can drop a tip into a specific well on that tiprack with the call `instr.drop_tip(tiprack.wells()[0])`. This style of call can be used to make the robot drop a tip into arbitrary labware. - If the position to drop the tip from as well as the :py:class:`.Well` to drop the tip into needs to be specified, for instance to tell the robot to drop a tip from an unusually large height above the tiprack, `location` can be a :py:class:`.types.Location`; for instance, you can call `instr.drop_tip(tiprack.wells()[0].top())`. .. note:: OT1 required homing the plunger after dropping tips, so the prior version of `drop_tip` automatically homed the plunger. This is no longer needed in OT2. If you need to home the plunger, use :py:meth:`home_plunger`. :param location: The location to drop the tip :type location: :py:class:`.types.Location` or :py:class:`.Well` or None :returns: This instance """ |
if location and isinstance(location, types.Location):
if isinstance(location.labware, Well):
target = location
else:
raise TypeError(
"If a location is specified as a types.Location (for "
"instance, as the result of a call to "
"tiprack.wells()[0].top()) it must be a location "
"relative to a well, since that is where a tip is "
"dropped. The passed location, however, is in "
"reference to {}".format(location.labware))
elif location and isinstance(location, Well):
if 'fixedTrash' in quirks_from_any_parent(location):
target = location.top()
else:
bot = location.bottom()
target = bot._replace(
point=bot.point._replace(z=bot.point.z + 10))
elif not location:
target = self.trash_container.wells()[0].top()
else:
raise TypeError(
"If specified, location should be an instance of "
"types.Location (e.g. the return value from "
"tiprack.wells()[0].top()) or a Well (e.g. tiprack.wells()[0]."
" However, it is a {}".format(location))
self.move_to(target)
self._hw_manager.hardware.drop_tip(self._mount)
self._last_tip_picked_up_from = None
return self |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def home(self) -> 'InstrumentContext': """ Home the robot. :returns: This instance. """ |
        def home_dummy(mount):
            pass
cmds.do_publish(self.broker, cmds.home, home_dummy,
'before', None, None, self._mount.name.lower())
self._hw_manager.hardware.home_z(self._mount)
self._hw_manager.hardware.home_plunger(self._mount)
cmds.do_publish(self.broker, cmds.home, home_dummy,
'after', self, None, self._mount.name.lower())
return self |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def distribute(self, volume: float, source: Well, dest: List[Well], *args, **kwargs) -> 'InstrumentContext': """ Move a volume of liquid from one source to multiple destinations. :param volume: The amount of volume to distribute to each destination well. :param source: A single well from where liquid will be aspirated. :param dest: List of Wells where liquid will be dispensed to. :param kwargs: See :py:meth:`transfer`. :returns: This instance """ |
self._log.debug("Distributing {} from {} to {}"
.format(volume, source, dest))
kwargs['mode'] = 'distribute'
kwargs['disposal_volume'] = kwargs.get('disposal_vol', self.min_volume)
return self.transfer(volume, source, dest, **kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def move_to(self, location: types.Location, force_direct: bool = False, minimum_z_height: float = None ) -> 'InstrumentContext': """ Move the instrument. :param location: The location to move to. :type location: :py:class:`.types.Location` :param force_direct: If set to true, move directly to destination without arc motion. :param minimum_z_height: When specified, this Z margin is able to raise (but never lower) the mid-arc height. """ |
if self._ctx.location_cache:
from_lw = self._ctx.location_cache.labware
else:
from_lw = None
from_center = 'centerMultichannelOnWells'\
in quirks_from_any_parent(from_lw)
cp_override = CriticalPoint.XY_CENTER if from_center else None
from_loc = types.Location(
self._hw_manager.hardware.gantry_position(
self._mount, critical_point=cp_override),
from_lw)
moves = geometry.plan_moves(from_loc, location, self._ctx.deck,
force_direct=force_direct,
minimum_z_height=minimum_z_height)
self._log.debug("move_to: {}->{} via:\n\t{}"
.format(from_loc, location, moves))
try:
for move in moves:
self._hw_manager.hardware.move_to(
self._mount, move[0], critical_point=move[1])
except Exception:
self._ctx.location_cache = None
raise
else:
self._ctx.location_cache = location
return self |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def type(self) -> str: """ One of `'single'` or `'multi'`. """ |
model = self.name
if 'single' in model:
return 'single'
elif 'multi' in model:
return 'multi'
else:
raise RuntimeError("Unrecognized pipette model name: {}".format(model)) |
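A minimal standalone sketch of the classification above; the model names used here are illustrative examples, not guaranteed to be real pipette model strings:

```python
def pipette_type(model: str) -> str:
    # Classify a pipette by substring in its model name,
    # mirroring the property above
    if 'single' in model:
        return 'single'
    elif 'multi' in model:
        return 'multi'
    raise RuntimeError("Bad pipette model name: {}".format(model))

print(pipette_type('p300_single_v2.0'))  # single
print(pipette_type('p10_multi_v1.5'))    # multi
```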
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def hw_pipette(self) -> Dict[str, Any]: """ View the information returned by the hardware API directly. :raises: a :py:class:`.types.PipetteNotAttachedError` if the pipette is no longer attached (should not happen). """ |
pipette = self._hw_manager.hardware.attached_instruments[self._mount]
if pipette is None:
raise types.PipetteNotAttachedError
return pipette |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def load_labware(self, labware: Labware) -> Labware: """ Load labware onto a Magnetic Module, checking if it is compatible """ |
if labware.magdeck_engage_height is None:
MODULE_LOG.warning(
"This labware ({}) is not explicitly compatible with the"
" Magnetic Module. You will have to specify a height when"
" calling engage().".format(labware))
return super().load_labware(labware) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def engage(self, height: float = None, offset: float = None):
""" Raise the Magnetic Module's magnets. The destination of the magnets can be specified in several different ways, based on internally stored default heights for labware: - If neither `height` nor `offset` is specified, the magnets will raise to a reasonable default height based on the specified labware. - If `height` is specified, it should be a distance in mm from the home position of the magnets. - If `offset` is specified, it should be an offset in mm from the default position. A positive number moves the magnets higher and a negative number moves the magnets lower. Only certain labwares have defined engage heights for the Magnetic Module. If a labware that does not have a defined engage height is loaded on the Magnetic Module (or if no labware is loaded), then `height` must be specified. :param height: The height to raise the magnets to, in mm from home. :param offset: An offset relative to the default height for the labware in mm """ |
if height is not None:
dist = height
elif self.labware and self.labware.magdeck_engage_height is not None:
dist = self.labware.magdeck_engage_height
if offset:
dist += offset
else:
raise ValueError(
"Currently loaded labware {} does not have a known engage "
"height; please specify explicitly with the height param"
.format(self.labware))
self._module.engage(dist) |
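The height-resolution branch above can be sketched as a pure function; this is a sketch with illustrative defaults, written with `is not None` so an explicit height of 0 is honored:

```python
def resolve_engage_height(height=None, default_height=None, offset=None):
    # An explicit height wins; otherwise use the labware default
    # plus any offset; otherwise there is nothing to engage to
    if height is not None:
        return height
    if default_height is not None:
        return default_height + (offset or 0)
    raise ValueError(
        "no known engage height; please specify explicitly with height")

print(resolve_engage_height(height=12))                     # 12
print(resolve_engage_height(default_height=10, offset=-2))  # 8
```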
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def open(self):
""" Opens the lid""" |
self._geometry.lid_status = self._module.open()
self._ctx.deck.recalculate_high_z()
return self._geometry.lid_status |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def close(self):
""" Closes the lid""" |
self._geometry.lid_status = self._module.close()
self._ctx.deck.recalculate_high_z()
return self._geometry.lid_status |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set_temperature(self, temp: float, hold_time: float = None, ramp_rate: float = None):
""" Set the target temperature, in C. Valid operational range yet to be determined. :param temp: The target temperature, in degrees C. :param hold_time: The time to hold after reaching temperature. If ``hold_time`` is not specified, the Thermocycler will hold this temperature indefinitely (requiring manual intervention to end the cycle). :param ramp_rate: The target rate of temperature change, in degC/sec. If ``ramp_rate`` is not specified, it will default to the maximum ramp rate as defined in the device configuration. """ |
return self._module.set_temperature(
temp=temp, hold_time=hold_time, ramp_rate=ramp_rate) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
async def _port_poll(is_old_bootloader, ports_before_switch=None):
""" Checks for the bootloader port """ |
new_port = ''
while not new_port:
if is_old_bootloader:
new_port = await _port_on_mode_switch(ports_before_switch)
else:
ports = await _discover_ports()
if ports:
discovered_ports = list(filter(
lambda x: x.endswith('bootloader'), ports))
if len(discovered_ports) == 1:
new_port = '/dev/modules/{}'.format(discovered_ports[0])
await asyncio.sleep(0.05)
return new_port |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def critical_point(self, cp_override: CriticalPoint = None) -> Point: """ The vector from the pipette's origin to its critical point. The critical point for a pipette is the end of the nozzle if no tip is attached, or the end of the tip if a tip is attached. If `cp_override` is specified and valid - i.e. it is either :py:attr:`CriticalPoint.NOZZLE` or :py:attr:`CriticalPoint.TIP` when we have a tip, or :py:attr:`CriticalPoint.XY_CENTER` - the specified critical point will be used. """ |
if not self.has_tip or cp_override == CriticalPoint.NOZZLE:
cp_type = CriticalPoint.NOZZLE
tip_length = 0.0
else:
cp_type = CriticalPoint.TIP
tip_length = self.current_tip_length
if cp_override == CriticalPoint.XY_CENTER:
mod_offset_xy = [0, 0, self.config.model_offset[2]]
cp_type = CriticalPoint.XY_CENTER
elif cp_override == CriticalPoint.FRONT_NOZZLE:
mod_offset_xy = [
0, -self.config.model_offset[1], self.config.model_offset[2]]
cp_type = CriticalPoint.FRONT_NOZZLE
else:
mod_offset_xy = self.config.model_offset
mod_and_tip = Point(mod_offset_xy[0],
mod_offset_xy[1],
mod_offset_xy[2] - tip_length)
cp = mod_and_tip + self._instrument_offset._replace(z=0)
if self._log.isEnabledFor(logging.DEBUG):
mo = 'model offset: {} + '.format(self.config.model_offset)\
if cp_type != CriticalPoint.XY_CENTER else ''
info_str = 'cp: {}{}: {}=({}instr offset xy: {}'\
.format(cp_type, '(from override)' if cp_override else '',
cp, mo,
self._instrument_offset._replace(z=0))
if cp_type == CriticalPoint.TIP:
info_str += '- current_tip_length: {}=(true tip length: {}'\
' - inst z: {}) (z only)'.format(
self.current_tip_length, self._current_tip_length,
self._instrument_offset.z)
info_str += ')'
self._log.debug(info_str)
return cp |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def connect_to_robot():
'''
Connect over Serial to the Smoothieware motor driver
'''
print()
import optparse
from opentrons import robot
print('Connecting to robot...')
parser = optparse.OptionParser(usage='usage: %prog [options] ')
parser.add_option(
"-p", "--p", dest="port", default='',
type='str', help='serial port of the smoothie'
)
options, _ = parser.parse_args(args=None, values=None)
if options.port:
robot.connect(options.port)
else:
robot.connect()
return robot |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def write_identifiers(robot, mount, new_id, new_model):
'''
Send a bytearray to the specified mount, so that Smoothieware can
save the bytes to the pipette's memory
'''
robot._driver.write_pipette_id(mount, new_id)
read_id = robot._driver.read_pipette_id(mount)
_assert_the_same(new_id, read_id)
robot._driver.write_pipette_model(mount, new_model)
read_model = robot._driver.read_pipette_model(mount)
_assert_the_same(new_model, read_model) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _user_submitted_barcode(max_length):
'''
User can enter a serial number as a string of HEX values
Length of the entered string must not exceed `max_length`
'''
barcode = input('BUTTON + SCAN: ').strip()
if len(barcode) > max_length:
raise Exception(BAD_BARCODE_MESSAGE.format(barcode))
# remove all characters before the letter P
# for example, remove ASCII selector code "\x1b(B" on chinese keyboards
barcode = barcode[barcode.index('P'):]
barcode = barcode.split('\r')[0].split('\n')[0] # remove any newlines
return barcode |
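The same trimming logic, isolated for testing; the sample barcode and the `max_length` default below are illustrative assumptions:

```python
def clean_barcode(raw: str, max_length: int = 32) -> str:
    # Strip whitespace, enforce a maximum length, drop anything
    # before the first 'P', and cut at the first CR/LF
    barcode = raw.strip()
    if len(barcode) > max_length:
        raise ValueError('barcode too long: {}'.format(barcode))
    barcode = barcode[barcode.index('P'):]
    return barcode.split('\r')[0].split('\n')[0]

print(clean_barcode('\x1b(BP123456\r\n'))  # P123456
```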
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set_mounted(unitname: str, mounted: bool, swallow_exc: bool = False):
""" Mount or unmount a unit. Worker for the mount context managers. :param unitname: The systemd unit for the mount to affect. This probably should be one of :py:attr:`_SYSROOT_INACTIVE_UNIT` or :py:attr:`_BOOT_UNIT`, but it could be anything :param mounted: ``True`` to start the mount unit, ``False`` to stop it :param swallow_exc: ``True`` to capture all exceptions, ``False`` to pass them upwards. This is useful when you do not care much about the success of the mount, such as when trying to restore the system after writing the boot partition """ |
try:
if mounted:
LOG.info(f"Starting {unitname}")
interface().StartUnit(unitname, 'replace')
LOG.info(f"Started {unitname}")
else:
LOG.info(f"Stopping {unitname}")
interface().StopUnit(unitname, 'replace')
LOG.info(f"Stopped {unitname}")
except Exception:
LOG.info(
f"Exception {'starting' if mounted else 'stopping'} {unitname}")
if not swallow_exc:
raise |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def require_session(handler):
""" Decorator to ensure a session is properly in the request """ |
@functools.wraps(handler)
async def decorated(request: web.Request) -> web.Response:
request_session_token = request.match_info['session']
session = session_from_request(request)
if not session or request_session_token != session.token:
LOG.warning(f"request for invalid session {request_session_token}")
return web.json_response(
data={'error': 'bad-token',
'message': f'No such session {request_session_token}'},
status=404)
return await handler(request, session)
return decorated |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_version(path) -> str: """ Reads the version field from a package file :param path: the path to a valid package.json file :return: the version string, or "not available" if it cannot be read """ |
if path and os.path.exists(path):
with open(path) as pkg:
package_dict = json.load(pkg)
version = package_dict.get('version')
else:
version = 'not available'
return version |
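A self-contained check of the read path above, using a temporary package.json; the package name and version are made up for the demo:

```python
import json
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, 'package.json')
    with open(path, 'w') as f:
        json.dump({'name': 'demo', 'version': '1.2.3'}, f)
    # Same read as get_version's happy path
    with open(path) as pkg:
        version = json.load(pkg).get('version')

print(version)  # 1.2.3
```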
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def calculate_tip_probe_hotspots( tip_length: float, tip_probe_settings: tip_probe_config)\ -> List[HotSpot]: """ Generate a list of tuples describing motions for doing the xy part of tip probe based on the config's description of the tip probe box. """ |
# probe_dimensions is the external bounding box of the probe unit
size_x, size_y, size_z = tip_probe_settings.dimensions
rel_x_start = (size_x / 2) + tip_probe_settings.switch_clearance
rel_y_start = (size_y / 2) + tip_probe_settings.switch_clearance
# Ensure that the nozzle will clear the probe unit and tip will clear deck
nozzle_safe_z = round((size_z - tip_length)
+ tip_probe_settings.z_clearance.normal, 3)
z_start = max(tip_probe_settings.z_clearance.deck, nozzle_safe_z)
switch_offset = tip_probe_settings.switch_offset
# Each list item defines axis we are probing for, starting position vector
# and travel distance
neg_x = HotSpot('x',
-rel_x_start,
switch_offset[0],
z_start,
size_x)
pos_x = HotSpot('x',
rel_x_start,
switch_offset[0],
z_start,
-size_x)
neg_y = HotSpot('y',
switch_offset[1],
-rel_y_start,
z_start,
size_y)
pos_y = HotSpot('y',
switch_offset[1],
rel_y_start,
z_start,
-size_y)
z = HotSpot(
'z',
0,
switch_offset[2],
tip_probe_settings.center[2] + tip_probe_settings.z_clearance.start,
-size_z)
return [
neg_x,
pos_x,
neg_y,
pos_y,
z
] |
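To see how the starting offsets fall out, here is the x-axis computation run with illustrative numbers; the HotSpot field names and all dimensions below are assumptions for the sketch, not values from a real config:

```python
from collections import namedtuple

HotSpot = namedtuple('HotSpot',
                     ['axis', 'x_start_offs', 'y_start_offs',
                      'z_start_abs', 'probe_distance'])

# Illustrative probe box: 30 x 30 x 25 mm, 5 mm switch clearance
size_x, size_y, size_z = 30.0, 30.0, 25.0
switch_clearance = 5.0
tip_length = 50.0
z_clearance_normal, z_clearance_deck = 5.0, 20.0

# Start outside the box edge by the switch clearance
rel_x_start = (size_x / 2) + switch_clearance
# With a long tip, the deck clearance dominates the z start
nozzle_safe_z = round((size_z - tip_length) + z_clearance_normal, 3)
z_start = max(z_clearance_deck, nozzle_safe_z)

neg_x = HotSpot('x', -rel_x_start, 0.0, z_start, size_x)
print(neg_x)
```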
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def init(loop=None, hardware: 'HardwareAPILike' = None):
""" Builds an application and sets up RPC and HTTP servers with it. :param loop: A specific aiohttp event loop to use. If not specified, the server will use the default event loop. :param hardware: The hardware manager or hardware adapter to connect to. If not specified, the server will use :py:attr:`opentrons.hardware` """ |
app = web.Application(loop=loop, middlewares=[error_middleware])
if hardware:
checked_hardware = hardware
else:
checked_hardware = opentrons.hardware
app['com.opentrons.hardware'] = checked_hardware
app['com.opentrons.rpc'] = RPCServer(app, MainRouter(checked_hardware))
app['com.opentrons.http'] = HTTPServer(app, CONFIG['log_dir'])
return app |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def run(hostname=None, port=None, path=None, loop=None):
""" Run the server application. Exactly one of ``path`` or ``hostname``+``port`` must be specified; if ``path`` is given, it takes precedence and hostname/port are ignored. """ |
if path:
log.debug("Starting Opentrons server application on {}".format(
path))
hostname, port = None, None
else:
log.debug("Starting Opentrons server application on {}:{}".format(
hostname, port))
path = None
web.run_app(init(loop), host=hostname, port=port, path=path) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def discover() -> List[Tuple[str, str]]: """ Scan for connected modules and instantiate handler classes """ |
if IS_ROBOT and os.path.isdir('/dev/modules'):
devices = os.listdir('/dev/modules')
else:
devices = []
discovered_modules = []
module_port_regex = re.compile('|'.join(MODULE_TYPES.keys()), re.I)
for port in devices:
match = module_port_regex.search(port)
if match:
name = match.group().lower()
if name not in MODULE_TYPES:
log.warning("Unexpected module connected: {} on {}"
.format(name, port))
continue
absolute_port = '/dev/modules/{}'.format(port)
discovered_modules.append((absolute_port, name))
log.info('Discovered modules: {}'.format(discovered_modules))
return discovered_modules |
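The port-matching step can be exercised without hardware; the module-type keys and port strings below are illustrative, not the real `MODULE_TYPES` mapping:

```python
import re

MODULE_TYPES = {'tempdeck': None, 'magdeck': None, 'thermocycler': None}
module_port_regex = re.compile('|'.join(MODULE_TYPES.keys()), re.I)

ports = ['tty_MagDeck0', 'tty_TempDeck1', 'ttyUSB0']
found = []
for port in ports:
    match = module_port_regex.search(port)
    if match:
        # Case-insensitive match, normalized to the lowercase key
        found.append(('/dev/modules/{}'.format(port),
                      match.group().lower()))

print(found)
```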
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
async def update_firmware( module: AbstractModule, firmware_file: str, loop: Optional[asyncio.AbstractEventLoop]) -> AbstractModule: """ Update a module. If the update succeeds, an Module instance will be returned. Otherwise, raises an UpdateError with the reason for the failure. """ |
simulating = module.is_simulated
cls = type(module)
old_port = module.port
flash_port = await module.prep_for_update()
callback = module.interrupt_callback
del module
after_port, results = await update.update_firmware(flash_port,
firmware_file,
loop)
await asyncio.sleep(1.0)
new_port = after_port or old_port
if not results[0]:
raise UpdateError(results[1])
return await cls.build(
port=new_port,
interrupt_callback=callback,
simulating=simulating) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def move(self, pose_tree, x=None, y=None, z=None, home_flagged_axes=True):
""" Dispatch move command to the driver changing base of x, y and z from source coordinate system to destination. Value must be set for each axis that is mapped. home_flagged_axes: (default=True) This kwarg is passed to the driver. This ensures that any axes within this Mover's axis_mapping is homed before moving, if it has not yet done so. See driver docstring for details """ |
def defaults(_x, _y, _z):
_x = _x if x is not None else 0
_y = _y if y is not None else 0
_z = _z if z is not None else 0
return _x, _y, _z
dst_x, dst_y, dst_z = change_base(
pose_tree,
src=self._src,
dst=self._dst,
point=Point(*defaults(x, y, z)))
driver_target = {}
if 'x' in self._axis_mapping:
assert x is not None, "Value must be set for each axis mapped"
driver_target[self._axis_mapping['x']] = dst_x
if 'y' in self._axis_mapping:
assert y is not None, "Value must be set for each axis mapped"
driver_target[self._axis_mapping['y']] = dst_y
if 'z' in self._axis_mapping:
assert z is not None, "Value must be set for each axis mapped"
driver_target[self._axis_mapping['z']] = dst_z
self._driver.move(driver_target, home_flagged_axes=home_flagged_axes)
# Update pose with the new value. Since stepper motors are open loop
# there is no need to query the driver for position
return update(pose_tree, self, Point(*defaults(dst_x, dst_y, dst_z))) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def validate_update( filepath: str, progress_callback: Callable[[float], None]) -> Tuple[str, str]: """ Like otupdate.buildroot.file_actions.validate_update, but stricter: checks for the rootfs, rootfs hash, bootfs, and bootfs hash. Returns the path to the rootfs and the path to the bootfs """ |
filenames = [ROOTFS_NAME, ROOTFS_HASH_NAME, BOOT_NAME, BOOT_HASH_NAME]
def zip_callback(progress):
progress_callback(progress/3.0)
files, sizes = unzip_update(filepath, zip_callback, filenames, filenames)
def rootfs_hash_callback(progress):
progress_callback(progress/3.0 + 0.33)
rootfs = files.get(ROOTFS_NAME)
assert rootfs
rootfs_calc_hash = hash_file(rootfs, rootfs_hash_callback,
file_size=sizes[ROOTFS_NAME])
rootfs_hashfile = files.get(ROOTFS_HASH_NAME)
assert rootfs_hashfile
rootfs_packaged_hash = open(rootfs_hashfile, 'rb').read().strip()
if rootfs_calc_hash != rootfs_packaged_hash:
msg = f"Hash mismatch (rootfs): calculated {rootfs_calc_hash} != "\
f"packaged {rootfs_packaged_hash}"
LOG.error(msg)
raise HashMismatch(msg)
def bootfs_hash_callback(progress):
progress_callback(progress/3.0 + 0.66)
bootfs = files.get(BOOT_NAME)
assert bootfs
bootfs_calc_hash = hash_file(bootfs, bootfs_hash_callback,
file_size=sizes[BOOT_NAME])
bootfs_hashfile = files.get(BOOT_HASH_NAME)
assert bootfs_hashfile
bootfs_packaged_hash = open(bootfs_hashfile, 'rb').read().strip()
if bootfs_calc_hash != bootfs_packaged_hash:
msg = f"Hash mismatch (bootfs): calculated {bootfs_calc_hash} != "\
f"packaged {bootfs_packaged_hash}"
LOG.error(msg)
raise HashMismatch(msg)
return rootfs, bootfs |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def patch_connection_file_paths(connection: str) -> str: """ Patch any paths in a connection file to remove the balena host paths. Undoes the changes applied by :py:meth:`opentrons.system.nmcli._rewrite_key_path_to_host_path` :param connection: The contents of a NetworkManager connection file :return: The patched contents, suitable for writing somewhere """ |
new_conn_lines = []
for line in connection.split('\n'):
if '=' in line:
parts = line.split('=')
path_matches = re.search(
'/mnt/data/resin-data/[0-9]+/(.*)', parts[1])
if path_matches:
new_path = f'/data/{path_matches.group(1)}'
new_conn_lines.append(
'='.join([parts[0], new_path]))
LOG.info(
f"migrate_connection_file: {parts[0]}: "
f"{parts[1]}->{new_path}")
continue
new_conn_lines.append(line)
return '\n'.join(new_conn_lines) |
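The per-line rewrite can be sketched standalone; this sketch uses `split('=', 1)` so values containing '=' survive, a small hardening over the code above, and the file paths shown are invented examples:

```python
import re

def patch_line(line: str) -> str:
    # Rewrite a balena host path in a key=value line to its /data form
    if '=' not in line:
        return line
    key, val = line.split('=', 1)
    m = re.search('/mnt/data/resin-data/[0-9]+/(.*)', val)
    if m:
        return '{}=/data/{}'.format(key, m.group(1))
    return line

print(patch_line('ca-cert=/mnt/data/resin-data/1234/ca.crt'))
# ca-cert=/data/ca.crt
```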
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def migrate(ignore: Sequence[str], name: str):
""" Copy everything in the app data to the root of the new partition :param ignore: Files to ignore in the root. This should be populated with the names (with no directory elements) of the migration update zipfile and everything unzipped from it. :param name: The name of the robot """ |
try:
with mount_data_partition() as datamount:
migrate_data(ignore, datamount, DATA_DIR_NAME)
migrate_connections(datamount)
migrate_hostname(datamount, name)
except Exception:
LOG.exception("Exception during data migration")
raise |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def migrate_data(ignore: Sequence[str], new_data_path: str, old_data_path: str):
""" Copy everything in the app data to the root of the main data part :param ignore: A list of files that should be ignored in the root of /data :param new_data_path: Where the new data partition is mounted :param old_data_path: Where the old date files are """ |
# the new 'data' path is actually /var and /data is in /var/data
dest_data = os.path.join(new_data_path, 'data')
LOG.info(f"migrate_data: copying {old_data_path} to {dest_data}")
os.makedirs(dest_data, exist_ok=True)
with os.scandir(old_data_path) as scanner:
for entry in scanner:
if entry.name in ignore:
LOG.info(f"migrate_data: ignoring {entry.name}")
continue
src = os.path.join(old_data_path, entry.name)
dest = os.path.join(dest_data, entry.name)
if os.path.exists(dest):
LOG.info(f"migrate_data: removing dest tree {dest}")
shutil.rmtree(dest, ignore_errors=True)
if entry.is_dir():
LOG.info(f"migrate_data: copying tree {src}->{dest}")
shutil.copytree(src, dest, symlinks=True,
ignore=migrate_files_to_ignore)
else:
LOG.info(f"migrate_data: copying file {src}->{dest}")
shutil.copy2(src, dest) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def migrate_system_connections(src_sc: str, dest_sc: str) -> bool: """ Migrate the contents of a system-connections dir :param dest_sc: The system-connections to copy to. Will be created if it does not exist :param src_sc: The system-connections to copy from :return: True if anything was moved """ |
found = False
LOG.info(f"migrate_system_connections: checking {dest_sc}")
os.makedirs(dest_sc, exist_ok=True)
with os.scandir(src_sc) as scanner:
for entry in scanner:
# ignore readme and sample
if entry.name.endswith('.ignore'):
continue
# ignore the hardwired connection added by api server
if entry.name == 'static-eth0':
continue
# ignore weird remnants of boot partition connections
if entry.name.startswith('._'):
continue
patched = patch_connection_file_paths(
open(os.path.join(src_sc, entry.name), 'r').read())
open(os.path.join(dest_sc, entry.name), 'w').write(patched)
LOG.info(f"migrate_connections: migrated {entry.name}")
found = True
return found |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def migrate_connections(new_data_path: str):
""" Migrate wifi connection files to new locations and patch them :param new_data_path: The path to where the new data partition is mounted """ |
dest_connections = os.path.join(
new_data_path, 'lib', 'NetworkManager', 'system-connections')
os.makedirs(dest_connections, exist_ok=True)
with mount_state_partition() as state_path:
src_connections = os.path.join(
state_path, 'root-overlay', 'etc', 'NetworkManager',
'system-connections')
LOG.info(f"migrate_connections: moving nmcli connections from"
f" {src_connections} to {dest_connections}")
found = migrate_system_connections(src_connections, dest_connections)
if found:
return
LOG.info(
"migrate_connections: No connections found in state, checking boot")
with mount_boot_partition() as boot_path:
src_connections = os.path.join(
boot_path, 'system-connections')
LOG.info(f"migrate_connections: moving nmcli connections from"
f" {src_connections} to {dest_connections}")
found = migrate_system_connections(src_connections, dest_connections)
if not found:
LOG.info("migrate_connections: No connections found in boot") |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def migrate_hostname(dest_data: str, name: str):
""" Write the machine name to a couple different places :param dest_data: The path to the root of ``/var`` in buildroot :param name: The name The hostname gets written to: - dest_path/hostname (bind mounted to /etc/hostname) (https://www.freedesktop.org/software/systemd/man/hostname.html#) - dest_path/machine-info as the PRETTY_HOSTNAME (bind mounted to /etc/machine-info) (https://www.freedesktop.org/software/systemd/man/machine-info.html#) - dest_path/serial since we assume the resin name is the serial number We also create some basic defaults for the machine-info. """ |
if name.startswith('opentrons-'):
name = name[len('opentrons-'):]
LOG.info(
f"migrate_hostname: writing name {name} to {dest_data}/hostname,"
f" {dest_data}/machine-info, {dest_data}/serial")
with open(os.path.join(dest_data, 'hostname'), 'w') as hn:
hn.write(name + "\n")
with open(os.path.join(dest_data, 'machine-info'), 'w') as mi:
mi.write(f'PRETTY_HOSTNAME={name}\nDEPLOYMENT=production\n')
with open(os.path.join(dest_data, 'serial'), 'w') as ser:
ser.write(name) |
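The prefix trimming is the only pure part of the function and is easy to verify in isolation; the robot names here are made up:

```python
def strip_resin_prefix(name: str) -> str:
    # Drop the 'opentrons-' prefix the resin naming scheme adds
    if name.startswith('opentrons-'):
        name = name[len('opentrons-'):]
    return name

print(strip_resin_prefix('opentrons-fuzzy-wombat'))  # fuzzy-wombat
print(strip_resin_prefix('fuzzy-wombat'))            # fuzzy-wombat
```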
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def infer_config_base_dir() -> Path: """ Return the directory to store data in. Defaults are ~/.opentrons if not on a pi; OT_API_CONFIG_DIR is respected here. When this module is imported, this function is called automatically and the result stored in :py:attr:`APP_DATA_DIR`. This directory may not exist when the module is imported. Even if it does exist, it may not contain data, or may require data to be moved to it. :return pathlib.Path: The path to the desired root settings dir. """ |
if 'OT_API_CONFIG_DIR' in os.environ:
return Path(os.environ['OT_API_CONFIG_DIR'])
elif IS_ROBOT:
return Path('/data')
else:
search = (Path.cwd(),
Path.home()/'.opentrons')
for path in search:
if (path/_CONFIG_FILENAME).exists():
return path
else:
return search[-1] |
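The precedence order (env var, then robot default, then search path) can be sketched as a pure function; `index.json` stands in for `_CONFIG_FILENAME` here, which is an assumption of the sketch:

```python
from pathlib import Path

def config_base_dir(environ, is_robot=False,
                    cwd=Path.cwd(), home=Path.home()):
    # Same precedence as above: env override, /data on a robot,
    # else the first candidate dir that already holds a config file
    if 'OT_API_CONFIG_DIR' in environ:
        return Path(environ['OT_API_CONFIG_DIR'])
    if is_robot:
        return Path('/data')
    search = (cwd, home / '.opentrons')
    for path in search:
        if (path / 'index.json').exists():
            return path
    return search[-1]

print(config_base_dir({'OT_API_CONFIG_DIR': '/tmp/ot'}))
```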
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def load_and_migrate() -> Dict[str, Path]: """ Ensure the settings directory tree is properly configured. This function does most of its work on the actual robot. It will move all settings files from wherever they happen to be to the proper place. On non-robots, this mostly just loads. In addition, it writes a default config and makes sure all directories required exist (though the files in them may not). """ |
if IS_ROBOT:
_migrate_robot()
base = infer_config_base_dir()
base.mkdir(parents=True, exist_ok=True)
index = _load_with_overrides(base)
return _ensure_paths_and_types(index) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _load_with_overrides(base) -> Dict[str, str]: """ Load a config or write its defaults """ |
should_write = False
overrides = _get_environ_overrides()
try:
index = json.load((base/_CONFIG_FILENAME).open())
except (OSError, json.JSONDecodeError) as e:
sys.stderr.write("Error loading config from {}: {}\nRewriting...\n"
.format(str(base), e))
should_write = True
index = generate_config_index(overrides)
for key in CONFIG_ELEMENTS:
if key.name not in index:
sys.stderr.write(
f"New config index key {key.name}={key.default}"
"\nRewriting...\n")
if key.kind in (ConfigElementType.DIR, ConfigElementType.FILE):
index[key.name] = base/key.default
else:
index[key.name] = key.default
should_write = True
if should_write:
try:
write_config(index, path=base)
except Exception as e:
sys.stderr.write(
"Error writing config to {}: {}\nProceeding memory-only\n"
.format(str(base), e))
index.update(overrides)
return index |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _ensure_paths_and_types(index: Dict[str, str]) -> Dict[str, Path]: """ Take the direct results of loading the config and make sure the filesystem reflects them. """ |
configs_by_name = {ce.name: ce for ce in CONFIG_ELEMENTS}
correct_types: Dict[str, Path] = {}
for key, item in index.items():
if key not in configs_by_name: # old config, ignore
continue
if configs_by_name[key].kind == ConfigElementType.FILE:
it = Path(item)
it.parent.mkdir(parents=True, exist_ok=True)
correct_types[key] = it
elif configs_by_name[key].kind == ConfigElementType.DIR:
it = Path(item)
it.mkdir(parents=True, exist_ok=True)
correct_types[key] = it
else:
raise RuntimeError(
f"unhandled kind in ConfigElements: {key}: "
f"{configs_by_name[key].kind}")
return correct_types |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _legacy_index() -> Union[None, Dict[str, str]]: """ Try and load an index file from the various places it might exist. If the legacy file cannot be found or cannot be parsed, return None. This method should only be called on a robot. """ |
for index in _LEGACY_INDICES:
if index.exists():
try:
return json.load(open(index))
except (OSError, json.JSONDecodeError):
return None
return None |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _find_most_recent_backup(normal_path: Optional[str]) -> Optional[str]: """ Find the most recent old settings to migrate. The input is the path to an unqualified settings file - e.g. /mnt/usbdrive/config/robotSettings.json This will return - None if the input is None (to support chaining from dict.get()) - The input if it exists, or - The file named normal_path-TIMESTAMP.json with the highest timestamp if one can be found, or - None """ |
if normal_path is None:
return None
if os.path.exists(normal_path):
return normal_path
dirname, basename = os.path.split(normal_path)
root, ext = os.path.splitext(basename)
backups = [fi for fi in os.listdir(dirname)
if fi.startswith(root) and fi.endswith(ext)]
ts_re = re.compile(r'.*-([0-9]+)' + ext + '$')
def ts_compare(filename):
match = ts_re.match(filename)
if not match:
return -1
else:
return int(match.group(1))
backups_sorted = sorted(backups, key=ts_compare)
if not backups_sorted:
return None
return os.path.join(dirname, backups_sorted[-1]) |
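The timestamp ordering can be checked against in-memory filenames (these names are examples only; note the unescaped '.' in the extension is carried over from the code above):

```python
import re

ext = '.json'
ts_re = re.compile(r'.*-([0-9]+)' + ext + '$')

def ts_compare(filename):
    # Files without a trailing -TIMESTAMP sort first
    match = ts_re.match(filename)
    return int(match.group(1)) if match else -1

backups = ['robotSettings-90.json', 'robotSettings-100.json',
           'robotSettings.json.bak']
latest = sorted(backups, key=ts_compare)[-1]
print(latest)  # robotSettings-100.json
```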
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def generate_config_index(defaults: Dict[str, str], base_dir=None) -> Dict[str, Path]: """ Determines where existing info can be found in the system, and creates a corresponding data dict that can be written to index.json in the baseDataDir. The information in the files defined by the config index is information required by the API itself and nothing else - labware definitions, feature flags, robot configurations. It does not include configuration files that relate to the rest of the system, such as network description file definitions. :param defaults: A dict of defaults to write, useful for specifying part (but not all) of the index succinctly. This is used both when loading a configuration file from disk and when generating a new one. :param base_dir: If specified, a base path used if this function has to generate defaults. If not specified, falls back to :py:attr:`CONFIG_BASE_DIR` :returns: The config object """ |
base = Path(base_dir) if base_dir else infer_config_base_dir()
def parse_or_default(
ce: ConfigElement, val: Optional[str]) -> Path:
if not val:
return base / Path(ce.default)
else:
return Path(val)
return {
ce.name: parse_or_default(ce,
defaults.get(ce.name))
for ce in CONFIG_ELEMENTS
} |
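The per-element defaulting can be shown with a small stand-in for `CONFIG_ELEMENTS` (the element names and default paths below are hypothetical):

```python
from pathlib import Path
from typing import Dict, List, NamedTuple, Optional

class ConfigElement(NamedTuple):
    name: str
    default: str  # path relative to the base directory

# Hypothetical elements, standing in for the real CONFIG_ELEMENTS
CONFIG_ELEMENTS: List[ConfigElement] = [
    ConfigElement('labware_database_file', 'opentrons.db'),
    ConfigElement('feature_flags_file', 'flags.json'),
]

def generate_index(defaults: Dict[str, str], base: Path) -> Dict[str, Path]:
    """Use the supplied value for each element if present,
    otherwise fall back to base / element default."""
    def parse_or_default(ce: ConfigElement, val: Optional[str]) -> Path:
        return Path(val) if val else base / ce.default
    return {ce.name: parse_or_default(ce, defaults.get(ce.name))
            for ce in CONFIG_ELEMENTS}
```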
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def write_config(config_data: Dict[str, Path], path: Path = None):
""" Save the config file. :param config_data: The index to save :param base_dir: The place to save the file. If ``None``, :py:meth:`infer_config_base_dir()` will be used Only keys that are in the config elements will be saved. """ |
path = Path(path) if path else infer_config_base_dir()
valid_names = [ce.name for ce in CONFIG_ELEMENTS]
try:
os.makedirs(path, exist_ok=True)
with (path/_CONFIG_FILENAME).open('w') as base_f:
json.dump({k: str(v) for k, v in config_data.items()
if k in valid_names},
base_f, indent=2)
except OSError as e:
sys.stderr.write("Config index write to {} failed: {}\n"
.format(path/_CONFIG_FILENAME, e)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def main():
""" A CLI application for performing factory calibration of an Opentrons robot Instructions: - Robot must be set up with two 300ul or 50ul single-channel pipettes installed on the right-hand and left-hand mount. - Put a GEB 300ul tip onto the pipette. - Use the arrow keys to jog the robot over slot 5 in an open space that is not an engraving or a hole. - Use the 'q' and 'a' keys to jog the pipette up and down respectively until the tip is just touching the deck surface, then press 'z'. This will save the 'Z' height. - Press '1' to automatically go to the expected location of the first calibration point. Jog the robot until the tip is actually at the point, then press 'enter'. - Repeat with '2' and '3'. - After calibrating all three points, press the space bar to save the configuration. - Optionally, press 4,5,6 or 7 to validate the new configuration. - Press 'p' to perform tip probe. Press the space bar to save again. - Press 'm' to perform mount calibration. Press enter and then space bar to save again. - Press 'esc' to exit the program. """ |
prompt = input(
">>> Warning! Running this tool backup and clear any previous "
"calibration data. Proceed (y/[n])? ")
if prompt.strip().lower() not in ('y', 'yes'):
print('Exiting--prior configuration data not changed')
sys.exit()
# Notes:
# - 200ul tip is 51.7mm long when attached to a pipette
# - For xyz coordinates, (0, 0, 0) is the lower-left corner of the robot
cli = CLITool(
point_set=get_calibration_points(),
tip_length=51.7)
hardware = cli.hardware
backup_configuration_and_reload(hardware)
if not feature_flags.use_protocol_api_v2():
hardware.connect()
hardware.turn_on_rail_lights()
atexit.register(hardware.turn_off_rail_lights)
else:
hardware.set_lights(rails=True)
cli.home()
# lights help the script user to see the points on the deck
cli.ui_loop.run()
if feature_flags.use_protocol_api_v2():
hardware.set_lights(rails=False)
print('Robot config: \n', cli._config) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def increase_step(self) -> str: """ Increase the jog resolution without overrunning the list of values """ |
if self._steps_index < len(self._steps) - 1:
self._steps_index = self._steps_index + 1
return 'step: {}'.format(self.current_step()) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def decrease_step(self) -> str: """ Decrease the jog resolution without overrunning the list of values """ |
if self._steps_index > 0:
self._steps_index = self._steps_index - 1
return 'step: {}'.format(self.current_step()) |
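Together the two methods form a clamped cursor over a fixed list of jog resolutions; stripped of the CLI plumbing, the pattern looks like this (the step values are illustrative, not the tool's actual list):

```python
class StepCursor:
    """A cursor over jog resolutions that never runs off either end."""

    def __init__(self, steps=(0.25, 0.5, 1, 5, 10, 25)):
        self._steps = list(steps)
        self._steps_index = 0

    def current_step(self):
        return self._steps[self._steps_index]

    def increase_step(self) -> str:
        # Clamp at the top of the list instead of raising IndexError
        if self._steps_index < len(self._steps) - 1:
            self._steps_index += 1
        return 'step: {}'.format(self.current_step())

    def decrease_step(self) -> str:
        # Clamp at the bottom of the list
        if self._steps_index > 0:
            self._steps_index -= 1
        return 'step: {}'.format(self.current_step())
```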
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _jog(self, axis, direction, step):
""" Move the pipette on `axis` in `direction` by `step` and update the position tracker """ |
jog(axis, direction, step, self.hardware, self._current_mount)
self.current_position = self._position()
return 'Jog: {}'.format([axis, str(direction), str(step)]) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def home(self) -> str: """ Return the robot to the home position and update the position tracker """ |
self.hardware.home()
self.current_position = self._position()
return 'Homed' |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def save_point(self) -> str: """ When 'Enter' is pressed, save the current position into the 'actual points' dict, keyed by the current calibration point. While a mount is still being calibrated, save the mount offset and switch to the other mount instead. """ |
if self._current_mount is left:
msg = self.save_mount_offset()
self._current_mount = right
elif self._current_mount is types.Mount.LEFT:
msg = self.save_mount_offset()
self._current_mount = types.Mount.RIGHT
else:
pos = self._position()[:-1]
self.actual_points[self._current_point] = pos
log.debug("Saving {} for point {}".format(
pos, self._current_point))
msg = 'saved #{}: {}'.format(
self._current_point, self.actual_points[self._current_point])
return msg |