| text_prompt | code_prompt |
|---|---|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _effectinit_raise_line_padding_on_focus(self, name, **kwargs):
"""Init the effect for the empty space around the focused entry. Keyword arguments can contain enlarge_time and padding. """ |
self._effects[name] = kwargs
if "enlarge_time" not in kwargs:
kwargs['enlarge_time'] = 0.5
if "padding" not in kwargs:
kwargs['padding'] = 10
kwargs['padding_pps'] = kwargs['padding'] / kwargs['enlarge_time']
# Now, every menu entry needs additional info
for o in self.options:
o['padding_line'] = 0.0 |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _effectupdate_raise_line_padding_on_focus(self, time_passed):
"""Gradually enlarge the padding of the focused line.""" |
data = self._effects['raise-line-padding-on-focus']
pps = data['padding_pps']
for i, option in enumerate(self.options):
if i == self.option:
# Raise me
if option['padding_line'] < data['padding']:
option['padding_line'] += pps * time_passed
elif option['padding_line'] > data['padding']:
option['padding_line'] = data['padding']
elif option['padding_line']:
if option['padding_line'] > 0:
option['padding_line'] -= pps * time_passed
elif option['padding_line'] < 0:
option['padding_line'] = 0 |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _effectinit_raise_col_padding_on_focus(self, name, **kwargs):
"""Init the column padding on focus effect. Keyword arguments can contain enlarge_time and padding. """ |
self._effects[name] = kwargs
if "enlarge_time" not in kwargs:
kwargs['enlarge_time'] = 0.5
if "padding" not in kwargs:
kwargs['padding'] = 10
kwargs['padding_pps'] = kwargs['padding'] / kwargs['enlarge_time']
for option in self.options:
option['padding_col'] = 0.0 |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def make_choices_tuple(choices, get_display_name):
""" Make a tuple for the choices parameter for a data model field. :param choices: sequence of valid values for the model field :param get_display_name: callable that returns the human-readable name for a choice :return: A tuple of 2-tuples (choice, display_name) suitable for the choices parameter """ |
assert callable(get_display_name)
return tuple((x, get_display_name(x)) for x in choices) |
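The helper above pairs each raw choice with its display name. A minimal sketch of typical usage (the `str.title` display callable and the `STATUS_CHOICES` name are illustrative assumptions, not part of the original):

```python
def make_choices_tuple(choices, get_display_name):
    """Pair each choice with a human-readable display name."""
    assert callable(get_display_name)
    return tuple((x, get_display_name(x)) for x in choices)

# Hypothetical usage, e.g. for a model field's `choices` parameter
STATUS_CHOICES = make_choices_tuple(
    ['draft', 'published'], lambda value: value.title())
```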
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def edit(self):
""" Edit the file """ |
self.changed = False
with self:
editor = self.get_editor()
cmd = [editor, self.name]
try:
res = subprocess.call(cmd)
except Exception as e:
print("Error launching editor %(editor)s" % locals())
print(e)
return
if res != 0:
msg = '%(editor)s returned error status %(res)d' % locals()
raise EditProcessException(msg)
new_data = self.read()
if new_data != self.data:
self.changed = self._save_diff(self.data, new_data)
self.data = new_data |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _search_env(keys):
""" Search the environment for the supplied keys, returning the first one found or None if none was found. """ |
matches = (os.environ[key] for key in keys if key in os.environ)
return next(matches, None) |
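The `next(generator, default)` idiom above returns the first environment hit lazily, or `None` when nothing matches. A self-contained sketch (the `DEMO_EDITOR` variable name is made up for the demo):

```python
import os

def _search_env(keys):
    """Return the value of the first key present in the environment, else None."""
    matches = (os.environ[key] for key in keys if key in os.environ)
    return next(matches, None)

os.environ['DEMO_EDITOR'] = 'vim'  # hypothetical variable set just for the demo
found = _search_env(['NO_SUCH_VAR_0', 'DEMO_EDITOR'])
missing = _search_env(['NO_SUCH_VAR_1', 'NO_SUCH_VAR_2'])
```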
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_editor(self):
""" Give preference to an XML_EDITOR or EDITOR defined in the environment. Otherwise use a default editor based on platform. """ |
env_search = ['EDITOR']
if 'xml' in self.content_type:
env_search.insert(0, 'XML_EDITOR')
default_editor = self.platform_default_editors[sys.platform]
return self._search_env(env_search) or default_editor |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def from_json(cls, json_obj):
"""Build an Event from JSON. :param json_obj: JSON data representing a Cube Event :type json_obj: `String` or `json` :throws: `InvalidEventError` when any of time field is not present in json_obj. """ |
if isinstance(json_obj, str):
json_obj = json.loads(json_obj)
type = None
time = None
data = None
if cls.TYPE_FIELD_NAME in json_obj:
type = json_obj[cls.TYPE_FIELD_NAME]
if cls.TIME_FIELD_NAME in json_obj:
time = json_obj[cls.TIME_FIELD_NAME]
else:
raise InvalidEventError("{field} must be present!".format(
field=cls.TIME_FIELD_NAME))
if cls.DATA_FIELD_NAME in json_obj:
data = json_obj[cls.DATA_FIELD_NAME]
return cls(type, time, data) |
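A self-contained sketch of the parsing flow above, with minimal stand-ins for the `Event` class and its field-name constants (the exact class layout and field names are assumptions):

```python
import json

class InvalidEventError(Exception):
    """Raised when a required field is missing."""

class Event:
    TYPE_FIELD_NAME = 'type'
    TIME_FIELD_NAME = 'time'
    DATA_FIELD_NAME = 'data'

    def __init__(self, type, time, data):
        self.type, self.time, self.data = type, time, data

    @classmethod
    def from_json(cls, json_obj):
        # Accept either a JSON string or an already-parsed dict
        if isinstance(json_obj, str):
            json_obj = json.loads(json_obj)
        if cls.TIME_FIELD_NAME not in json_obj:
            raise InvalidEventError(
                "{field} must be present!".format(field=cls.TIME_FIELD_NAME))
        return cls(json_obj.get(cls.TYPE_FIELD_NAME),
                   json_obj[cls.TIME_FIELD_NAME],
                   json_obj.get(cls.DATA_FIELD_NAME))

event = Event.from_json('{"type": "request", "time": "2020-01-01T00:00:00Z"}')
```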
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def autodiscover():
"""Autodiscover for urls.py""" |
# Get permissions based on urlpatterns from urls.py
url_conf = getattr(settings, 'ROOT_URLCONF', ())
resolver = urlresolvers.get_resolver(url_conf)
urlpatterns = resolver.url_patterns
permissions = generate_permissions(urlpatterns)
# Refresh permissions
refresh_permissions(permissions) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_driver_blacklist(driver):
# noqa: E501 """Retrieve the blacklist in the driver Retrieve the blacklist in the driver # noqa: E501 :param driver: The driver to use for the request. ie. github :type driver: str :rtype: Response """ |
response = errorIfUnauthorized(role='admin')
if response:
return response
else:
response = ApitaxResponse()
driver: Driver = LoadedDrivers.getDriver(driver)
response.body.add({'blacklist': driver.getDriverBlacklist()})
return Response(status=200, body=response.getResponseBody()) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_driver_config(driver):
# noqa: E501 """Retrieve the config of a loaded driver Retrieve the config of a loaded driver # noqa: E501 :param driver: The driver to use for the request. ie. github :type driver: str :rtype: Response """ |
response = errorIfUnauthorized(role='admin')
if response:
return response
else:
response = ApitaxResponse()
# TODO: This needs an implementation, but likely requires a change to configs in apitaxcore
return Response(status=200, body=response.getResponseBody()) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_driver_list():
# noqa: E501 """Retrieve the catalog of drivers Retrieve the catalog of drivers # noqa: E501 :rtype: Response """ |
response = errorIfUnauthorized(role='admin')
if response:
return response
else:
response = ApitaxResponse()
response.body.add({'drivers': LoadedDrivers.drivers})
return Response(status=200, body=response.getResponseBody()) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_driver_whitelist(driver):
# noqa: E501 """Retrieve the whitelist in the driver Retrieve the whitelist in the driver # noqa: E501 :param driver: The driver to use for the request. ie. github :type driver: str :rtype: Response """ |
response = errorIfUnauthorized(role='admin')
if response:
return response
else:
response = ApitaxResponse()
driver: Driver = LoadedDrivers.getDriver(driver)
response.body.add({'whitelist': driver.getDriverWhitelist()})
return Response(status=200, body=response.getResponseBody()) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def tag_post_save_handler(sender, **kwargs):
""" Makes sure that a translation is created when a tag is saved. Also ensures that the original tag name gets updated when the english translation is updated. TODO: This will create two tags when a tag is saved through the admin """ |
instance = kwargs.get('instance')
try:
translation = instance.tagtitle_set.get(language='en')
except TagTitle.DoesNotExist:
translation = TagTitle.objects.create(
trans_name=instance.name, tag=instance, language='en')
if translation.trans_name != instance.name:
instance.name = translation.trans_name
instance.save_base(raw=True) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def require_editable(f):
""" Makes sure the registry key is editable before trying to edit it. """ |
def wrapper(self, *args, **kwargs):
if not self._edit:
raise RegistryKeyNotEditable("The key is not set as editable.")
return f(self, *args, **kwargs)
return wrapper |
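The decorator wraps a method so it refuses to run unless the instance's `_edit` flag is set. A sketch with a hypothetical `Key` class standing in for the real registry key:

```python
class RegistryKeyNotEditable(Exception):
    """Raised when a non-editable key is modified."""

def require_editable(f):
    """Guard a method behind the instance's _edit flag."""
    def wrapper(self, *args, **kwargs):
        if not self._edit:
            raise RegistryKeyNotEditable("The key is not set as editable.")
        return f(self, *args, **kwargs)
    return wrapper

class Key:  # hypothetical class for the demo
    def __init__(self, editable):
        self._edit = editable

    @require_editable
    def set_value(self, value):
        return value

result = Key(editable=True).set_value(42)
try:
    Key(editable=False).set_value(1)
    raised = False
except RegistryKeyNotEditable:
    raised = True
```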
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def createHiddenFolder(self) -> 'File': """ Create Hidden Folder Create a hidden folder. Raise exception if auto delete isn't True. @return: Created folder. """ |
if not self._autoDelete:
raise Exception("Hidden folders can only be created within"
" an autoDelete directory")
return tempfile.mkdtemp(dir=self._path, prefix=".") |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _listFilesWin(self) -> ['File']: """ List Files for Windows OS Search and list the files and folders in the current directory for the Windows file system. @return: List of directory files and folders. """ |
output = []
for dirname, dirnames, filenames in os.walk(self._path):
for subdirname in dirnames:
output.append(os.path.join(dirname, subdirname))
for filename in filenames:
output.append(os.path.join(dirname, filename))
return output |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _listFilesPosix(self) -> ['File']: """ List Files for POSIX Search and list the files and folders in the current directory for the POSIX file system. @return: List of directory files and folders. """ |
find = "find %s -type f" % self._path
output = check_output(args=find.split()).strip().decode().split(
'\n')
return output |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def pathName(self, pathName: str):
""" Path Name Setter Set path name with passed in variable, create new directory and move previous directory contents to new path name. @param pathName: New path name string. @type pathName: String """ |
if self.pathName == pathName:
return
pathName = self.sanitise(pathName)
before = self.realPath
after = self._realPath(pathName)
assert (not os.path.exists(after))
newRealDir = os.path.dirname(after)
if not os.path.exists(newRealDir):
os.makedirs(newRealDir, DirSettings.defaultDirChmod)
shutil.move(before, after)
oldPathName = self._pathName
self._pathName = pathName
self._directory()._fileMoved(oldPathName, self) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def namedTempFileReader(self) -> NamedTempFileReader: """ Named Temporary File Reader This provides an object compatible with NamedTemporaryFile, used for reading this files contents. This will still delete after the object falls out of scope. This solves the problem on windows where a NamedTemporaryFile can not be read while it's being written to """ |
# Get the weak ref
directory = self._directory()
assert isinstance(directory, Directory), (
"Expected Directory, receieved %s" % directory)
# Return the object
return NamedTempFileReader(directory, self) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _realPath(self, newPathName: str = None) -> str: """ Private Real Path Get path name. @param newPathName: variable for new path name if passed argument. @type newPathName: String @return: Path Name as string. """ |
directory = self._directory()
assert directory
return os.path.join(directory.path,
newPathName if newPathName else self._pathName) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def import_file(package: str, fname: str) -> ModuleType: """Import file directly. This is a hack to import files from packages without importing <package>/__init__.py, its purpose is to allow import without requiring all the dependencies at this point. Args: package: Package to import from fname: File to import Returns: Imported module """ |
mod_name = fname[:-3] if fname.endswith('.py') else fname
spec = spec_from_file_location(mod_name, '{}/{}'.format(package, fname))
module = module_from_spec(spec)
spec.loader.exec_module(module)
return module |
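One pitfall worth noting when deriving the module name: `str.rstrip('.py')` strips any trailing run of the characters `.`, `p`, `y` rather than the `.py` suffix, so suffix removal needs an explicit `endswith` check. A quick demonstration:

```python
def module_name(fname):
    """Strip a '.py' suffix safely; str.rstrip removes characters, not a suffix."""
    return fname[:-3] if fname.endswith('.py') else fname

buggy = 'happy.py'.rstrip('.py')  # character-set stripping eats too much
fixed = module_name('happy.py')
```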
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def t_FLOAT(tok): # pylint: disable=locally-disabled,invalid-name
r'\d+\.\d+'
tok.value = (tok.type, float(tok.value))
return tok |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def trigger_arbitrary_job(repo_name, builder, revision, auth, files=None, dry_run=False, extra_properties=None):
""" Request buildapi to trigger a job for us. We return the request or None if dry_run is True. Raises BuildapiAuthError if credentials are invalid. """ |
assert len(revision) == 40, \
'We do not accept revisions shorter than 40 chars'
url = _builders_api_url(repo_name, builder, revision)
payload = _payload(repo_name, revision, files, extra_properties)
if dry_run:
LOG.info("Dry-run: We were going to request a job for '{}'".format(builder))
LOG.info(" with this payload: {}".format(str(payload)))
LOG.info(" with these files: {}".format(files))
return None
# NOTE: A good response returns json with request_id as one of the keys
req = requests.post(
url,
headers={'Accept': 'application/json'},
data=payload,
auth=auth,
timeout=TCP_TIMEOUT,
)
if req.status_code == 401:
raise BuildapiAuthError("Your credentials were invalid. Please try again.")
elif req.status_code == 503:
raise BuildapiDown("Please file a bug {}".format(url))
try:
req.json()
return req
except ValueError:
LOG.info('repo: {}, builder: {}, revision: {}'.format(repo_name, builder, revision))
LOG.error("We did not get info from %s (status code: %s)" % (url, req.status_code))
return None |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def make_retrigger_request(repo_name, request_id, auth, count=DEFAULT_COUNT_NUM, priority=DEFAULT_PRIORITY, dry_run=True):
""" Retrigger a request using buildapi self-serve. Returns a request. Buildapi documentation: POST /self-serve/{branch}/request Rebuild `request_id`, which must be passed in as a POST parameter. `priority` and `count` are also accepted as optional parameters. `count` defaults to 1, and represents the number of times this build will be rebuilt. """ |
url = '{}/{}/request'.format(SELF_SERVE, repo_name)
payload = {'request_id': request_id}
if count != DEFAULT_COUNT_NUM or priority != DEFAULT_PRIORITY:
payload.update({'count': count,
'priority': priority})
if dry_run:
LOG.info('We would make a POST request to %s with the payload: %s' % (url, str(payload)))
return None
LOG.info("We're going to re-trigger an existing completed job with request_id: %s %i time(s)."
% (request_id, count))
req = requests.post(
url,
headers={'Accept': 'application/json'},
data=payload,
auth=auth,
timeout=TCP_TIMEOUT,
)
# TODO: add debug message with job_id URL.
return req |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def make_cancel_request(repo_name, request_id, auth, dry_run=True):
""" Cancel a request using buildapi self-serve. Returns a request. Buildapi documentation: DELETE /self-serve/{branch}/request/{request_id} Cancel the given request """ |
url = '{}/{}/request/{}'.format(SELF_SERVE, repo_name, request_id)
if dry_run:
LOG.info('We would make a DELETE request to %s.' % url)
return None
LOG.info("We're going to cancel the job at %s" % url)
req = requests.delete(url, auth=auth, timeout=TCP_TIMEOUT)
# TODO: add debug message with the canceled job_id URL. Find a way
# to do that without doing an additional request.
return req |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def query_jobs_schedule(repo_name, revision, auth):
""" Query Buildapi for jobs. """ |
url = "%s/%s/rev/%s?format=json" % (SELF_SERVE, repo_name, revision)
LOG.debug("About to fetch %s" % url)
req = requests.get(url, auth=auth, timeout=TCP_TIMEOUT)
# If the revision doesn't exist on buildapi, that means there are
# no buildapi jobs for this revision
if req.status_code not in [200]:
return []
return req.json() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def query_pending_jobs(auth, repo_name=None, return_raw=False):
"""Return pending jobs""" |
url = '%s/pending?format=json' % HOST_ROOT
LOG.debug('About to fetch %s' % url)
req = requests.get(url, auth=auth, timeout=TCP_TIMEOUT)
# If the revision doesn't exist on buildapi, that means there are
# no buildapi jobs for this revision
if req.status_code not in [200]:
return []
raw = req.json()
# If we don't want the data structure to be reduced
if return_raw:
return raw
# If we only want pending jobs of a specific repo
if repo_name and repo_name in raw['pending']:
repo_list = [repo_name]
else:
repo_list = list(raw['pending'])
# Data structure to return
data = {}
for repo in repo_list:
data[repo] = {}
repo_jobs = raw['pending'][repo]
for revision, jobs in repo_jobs.items():
data[repo][revision] = jobs
return data |
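Setting the network call aside, the interesting part of `query_pending_jobs` is reshaping buildapi's raw payload into a per-repo mapping. An offline sketch using a made-up payload in the same shape (repo names, revisions, and job dicts are illustrative):

```python
raw = {'pending': {
    'mozilla-central': {'rev1': [{'id': 1}, {'id': 2}]},
    'try': {'rev2': [{'id': 3}]},
}}

def reshape_pending(raw, repo_name=None):
    """Reduce the raw payload to {repo: {revision: jobs}}, optionally one repo."""
    if repo_name and repo_name in raw['pending']:
        repo_list = [repo_name]
    else:
        repo_list = list(raw['pending'])
    return {repo: dict(raw['pending'][repo]) for repo in repo_list}

only_try = reshape_pending(raw, 'try')
everything = reshape_pending(raw)
```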
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set_object(self, object):
""" Set object for rendering component and set object to all components :param object: :return: """ |
if self.object is False:
self.object = object
# Pass object along to child components for rendering
for component in self.components:
component.set_object(object) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_fields(self):
"""Get all fields""" |
if not hasattr(self, '__fields'):
self.__fields = [
self.parse_field(field, index)
for index, field in enumerate(getattr(self, 'fields', []))
]
return self.__fields |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def parse_field(self, field_data, index=0):
"""Parse field and add missing options""" |
field = {
'__index__': index,
}
if isinstance(field_data, str):
field.update(self.parse_string_field(field_data))
elif isinstance(field_data, dict):
field.update(field_data)
else:
raise TypeError('Expected a str or dict, got {}'.format(type(field_data)))
if 'field' not in field:
field['field'] = None
if 'label' not in field and field['field']:
try:
field['label'] = self.object._meta.get_field(field['field']).verbose_name.capitalize()
except Exception:
field['label'] = field['field'].replace('_', ' ').capitalize()
elif 'label' not in field:
field['label'] = ''
if 'format' not in field:
field['format'] = '{0}'
# Set default options
for name, options in self.fields_options.items():
if 'default' in options and name not in field:
field[name] = options['default']
return field |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def render_field(self, field, data):
"""Render field for given data""" |
from trionyx.renderer import renderer
if 'value' in field:
value = field['value']
elif isinstance(data, object) and hasattr(data, field['field']):
value = getattr(data, field['field'])
if 'renderer' not in field:
value = renderer.render_field(data, field['field'], **field)
elif isinstance(data, dict) and field['field'] in data:
value = data.get(field['field'])
elif isinstance(data, list) and field['__index__'] < len(data):
value = data[field['__index__']]
else:
return ''
options = {key: value for key, value in field.items() if key not in ['value', 'data_object']}
if 'renderer' in field:
value = field['renderer'](value, data_object=data, **options)
else:
value = renderer.render_value(value, data_object=data, **options)
return field['format'].format(value) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_attr_text(self):
"""Get html attr text to render in template""" |
return ' '.join([
'{}="{}"'.format(key, value)
for key, value in self.attr.items()
]) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def mk_dropdown_tree(cls, model, root_node, for_node=None):
'''
Override of ``treebeard`` method to enforce the same root.
'''
options = []
# The difference is that we only generate the subtree for the current root.
logger.debug("Using root node pk of %s" % root_node.pk)
cls.add_subtree(for_node, root_node, options)
return options[1:] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def scale_v2(vec, amount):
"""Return a new Vec2 with x and y from vec and multiplied by amount.""" |
return Vec2(vec.x * amount, vec.y * amount) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def dot_v2(vec1, vec2):
"""Return the dot product of two vectors""" |
return vec1.x * vec2.x + vec1.y * vec2.y |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def cross_v2(vec1, vec2):
"""Return the crossproduct of the two vectors as a Vec2. Cross product doesn't really make sense in 2D, but return the Z component of the 3d result. """ |
return vec1.x * vec2.y - vec1.y * vec2.x |
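These free functions only need `.x`/`.y` attributes, so a namedtuple is enough to exercise them (the `Vec2` namedtuple here is a minimal stand-in, not the original class). The cross product uses the conventional z-component sign `x1*y2 - y1*x2`, matching the `cross` method defined further down:

```python
from collections import namedtuple

Vec2 = namedtuple('Vec2', 'x y')  # minimal stand-in for the real Vec2 class

def scale_v2(vec, amount):
    """Return a new Vec2 with x and y multiplied by amount."""
    return Vec2(vec.x * amount, vec.y * amount)

def dot_v2(vec1, vec2):
    """Return the dot product of two vectors."""
    return vec1.x * vec2.x + vec1.y * vec2.y

def cross_v2(vec1, vec2):
    """Return the Z component of the 3D cross product."""
    return vec1.x * vec2.y - vec1.y * vec2.x

doubled = scale_v2(Vec2(3, 4), 2)
d = dot_v2(Vec2(1, 0), Vec2(0, 1))   # perpendicular vectors
c = cross_v2(Vec2(1, 0), Vec2(0, 1))  # right-handed basis
```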
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def truncate(self, max_length):
"""Truncate this vector so it's length does not exceed max.""" |
if self.length() > max_length:
# If it's longer than the max_length, scale to the max_length.
self.scale(max_length / self.length()) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_scaled_v2(self, amount):
"""Return a new Vec2 with x and y multiplied by amount.""" |
return Vec2(self.x * amount, self.y * amount) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def dot(self, vec):
"""Return the dot product of self and another Vec2.""" |
return self.x * vec.x + self.y * vec.y |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def cross(self, vec):
"""Return the 2d cross product of self with another vector. Cross product doesn't make sense in 2D, but return the Z component of the 3d result. """ |
return self.x * vec.y - vec.x * self.y |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_action(self, action):
""" Retrieve a descriptor for the named action. Caches descriptors for efficiency. """ |
# If we don't have an action named that, bail out
if action not in self.wsgi_actions:
return None
# Generate an ActionDescriptor if necessary
if action not in self.wsgi_descriptors:
self.wsgi_descriptors[action] = actions.ActionDescriptor(
self.wsgi_actions[action],
self.wsgi_extensions.get(action, []),
self.wsgi_resp_type)
# OK, return the method descriptor
return self.wsgi_descriptors[action] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _route(self, action, method):
""" Given an action method, generates a route for it. """ |
# First thing, determine the path for the method
path = method._wsgi_path
methods = None
if path is None:
map_rule = self.wsgi_method_map.get(method.__name__)
if map_rule is None:
# Can't connect this method
LOG.warning("No path specified for action method %s() of "
"resource %s" % (method.__name__, self.wsgi_name))
return
# Compute the path and the method list
path = utils.norm_path(map_rule[0] % self.wsgi_name)
methods = map_rule[1]
# Compute route name
name = '%s_%s' % (self.wsgi_name, action)
# Set up path
path = getattr(self, 'wsgi_path_prefix', '') + path
# Build up the conditions
conditions = {}
if hasattr(method, '_wsgi_methods'):
conditions['method'] = methods if methods else method._wsgi_methods
if hasattr(method, '_wsgi_condition'):
conditions['function'] = method._wsgi_condition
# Create the route
self.wsgi_mapper.connect(name, path,
controller=self,
action=action,
conditions=conditions,
**getattr(method, '_wsgi_keywords', {})) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def on_message(self, data):
""" Parsing data, and try to call responding message """ |
# Trying to parse response
data = json.loads(data)
if not data["name"] is None:
logging.debug("%s: receiving message %s" % (data["name"], data["data"]))
fct = getattr(self, "on_" + data["name"])
try:
res = fct(Struct(data["data"]))
except Exception:
# Retry without the Struct wrapper (this can happen on transaction requests)
res = fct(data["data"])
if res is not None:
self.write_message(res)
else:
logging.error("SockJSDefaultHandler: data.name was null") |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def join(self, _id):
""" Join a room """ |
if (self._gcls() + _id) not in SockJSRoomHandler._room:
SockJSRoomHandler._room[self._gcls() + _id] = set()
SockJSRoomHandler._room[self._gcls() + _id].add(self) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def leave(self, _id):
""" Leave a room """ |
if (self._gcls() + _id) in SockJSRoomHandler._room:
SockJSRoomHandler._room[self._gcls() + _id].remove(self)
if len(SockJSRoomHandler._room[self._gcls() + _id]) == 0:
del SockJSRoomHandler._room[self._gcls() + _id] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getRoom(self, _id):
""" Retrieve a room from it's id """ |
if (self._gcls() + _id) in SockJSRoomHandler._room:
return SockJSRoomHandler._room[self._gcls() + _id]
return None |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def publishToRoom(self, roomId, name, data, userList=None):
""" Publish to given room data submitted """ |
if userList is None:
userList = self.getRoom(roomId)
# Publish data to all room users
logging.debug("%s: broadcasting (name: %s, data: %s, number of users: %s)" % (self._gcls(), name, data, len(userList)))
self.broadcast(userList, {
"name": name,
"data": SockJSRoomHandler._parser.encode(data)
}) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def publishToOther(self, roomId, name, data):
""" Publish to only other people than myself """ |
tmpList = self.getRoom(roomId)
# Select everybody except me
userList = [x for x in tmpList if x is not self]
self.publishToRoom(roomId, name, data, userList) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def publishToMyself(self, roomId, name, data):
""" Publish to only myself """ |
self.publishToRoom(roomId, name, data, [self]) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def isInRoom(self, _id):
""" Check a given user is in given room """ |
if (self._gcls() + _id) in SockJSRoomHandler._room:
if self in SockJSRoomHandler._room[self._gcls() + _id]:
return True
return False |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def readline(self, size=None):
""" Read a line from the stream, including the trailing new line character. If `size` is set, don't read more than `size` bytes, even if the result does not represent a complete line. The last line read may not include a trailing new line character if one was not present in the underlying stream. """ |
if self._pos >= self.length:
return ''
if size:
amount = min(size, (self.length - self._pos))
else:
amount = self.length - self._pos
out = self.stream.readline(amount)
self._pos += len(out)
return out |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def find_tag(__matcher: str = 'v[0-9]*', *, strict: bool = True, git_dir: str = '.') -> str: """Find closest tag for a git repository. Note: This defaults to `Semantic Version`_ tag matching. Args: __matcher: Glob-style tag pattern to match strict: Allow commit-ish, if no tag found git_dir: Repository to search Returns: Matching tag name .. _Semantic Version: http://semver.org/ """ |
command = 'git describe --abbrev=12 --dirty'.split()
with chdir(git_dir):
try:
stdout = check_output(command + ['--match={}'.format(__matcher), ])
except CalledProcessError:
if strict:
raise
stdout = check_output(command + ['--always', ])
stdout = stdout.decode('ascii', 'replace')
return stdout.strip() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def on_message(self, *args, accept_query=False, matcher=None, **kwargs):
""" Convenience wrapper of `Client.on_message` pre-bound with `channel=self.name`. """ |
if accept_query:
def new_matcher(msg: Message):
ret = True
if matcher:
ret = matcher(msg)
if ret is None or ret is False:
return ret
if msg.recipient is not self and not isinstance(msg.sender, User):
return False
return ret
else:
kwargs.setdefault("channel", self.name)
new_matcher = matcher
return self.client.on_message(*args, matcher=new_matcher, **kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def await_message(self, *args, **kwargs) -> 'asyncio.Future[Message]': """ Block until a message matches. See `on_message` """ |
fut = asyncio.Future()
@self.on_message(*args, **kwargs)
async def handler(message):
fut.set_result(message)
# remove handler when done or cancelled
fut.add_done_callback(lambda _: self.remove_message_handler(handler))
return fut |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def await_command(self, *args, **kwargs) -> 'asyncio.Future[IrcMessage]': """ Block until a command matches. See `on_command` """ |
fut = asyncio.Future()
@self.on_command(*args, **kwargs)
async def handler(msg):
fut.set_result(msg)
# remove handler when done or cancelled
fut.add_done_callback(lambda _: self.remove_command_handler(handler))
return fut |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
async def message(self, recipient: str, text: str, notice: bool=False) -> None: """ Lower level messaging function used by User and Channel """ |
await self._send(cc.PRIVMSG if not notice else cc.NOTICE, recipient, rest=text) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _bg(self, coro: coroutine) -> asyncio.Task: """Run coro in background, log errors""" |
async def runner():
try:
await coro
except Exception:
self._log.exception("async: Coroutine raised exception")
return asyncio.ensure_future(runner()) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _populate(self, client):
""" Populate module with the client when available """ |
self.client = client
for fn in self._buffered_calls:
self._log.debug("Executing buffered call {}".format(fn))
fn() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def datatype2schemacls( _datatype, _registry=None, _factory=None, _force=True, _besteffort=True, **kwargs ):
"""Get a schema class which has been associated to input data type by the registry or the factory in this order. :param type datatype: data type from where get associated schema. :param SchemaRegisgry _registry: registry from where call the getbydatatype . Default is the global registry. :param SchemaFactory _factory: factory from where call the getschemacls if getbydatatype returns None. Default is the global factory. :param bool _force: if true (default), force the building of schema class if no schema is associated to input data type. :param bool _besteffort: if True (default), try to resolve schema by inheritance. :param dict kwargs: factory builder kwargs. :rtype: type :return: Schema associated to input registry or factory. None if no association found. """ |
result = None
gdbt = getbydatatype if _registry is None else _registry.getbydatatype
result = gdbt(_datatype, besteffort=_besteffort)
if result is None:
gscls = getschemacls if _factory is None else _factory.getschemacls
result = gscls(_datatype, besteffort=_besteffort)
if result is None and _force:
_build = build if _factory is None else _factory.build
result = _build(_resource=_datatype, **kwargs)
return result |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def data2schema( _data=None, _force=False, _besteffort=True, _registry=None, _factory=None, _buildkwargs=None, **kwargs ):
"""Get the schema able to instanciate input data. The default value of schema will be data. Can be used such as a decorator: ..code-block:: python @data2schema def example():
pass # return a function schema @data2schema(_registry=myregistry) def example():
pass # return a function schema with specific registry ..warning:: returns a partially-applied function if _data is None. :param _data: data possibly generated by a schema. Required except when used as a decorator. :param bool _force: if True (False by default), create the data schema on the fly if it does not exist. :param bool _besteffort: if True (default), find a schema class able to validate the data class by inheritance. :param SchemaRegistry _registry: default registry to use. Global by default. :param SchemaFactory _factory: default factory to use. Global by default. :param dict _buildkwargs: factory builder kwargs. :param kwargs: schema class kwargs. :return: Schema. :rtype: Schema. """ |
if _data is None:
return lambda _data: data2schema(
_data, _force=_force, _besteffort=_besteffort, _registry=_registry,
_factory=_factory, _buildkwargs=_buildkwargs, **kwargs
)
result = None
fdata = _data() if isinstance(_data, DynamicValue) else _data
datatype = type(fdata)
content = getattr(fdata, '__dict__', {})
if _buildkwargs:
content.update(_buildkwargs)
schemacls = datatype2schemacls(
_datatype=datatype, _registry=_registry, _factory=_factory,
_force=_force, _besteffort=_besteffort, **content
)
if schemacls is not None:
result = schemacls(default=_data, **kwargs)
for attrname in dir(_data):
if not hasattr(schemacls, attrname):
attr = getattr(_data, attrname)
if attr is not None:
setattr(result, attrname, attr)
if result is None and _data is None:
result = AnySchema()
return result |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def data2schemacls(_data, **kwargs):
"""Convert a data to a schema cls. :param data: object or dictionary from where get a schema cls. :return: schema class. :rtype: type """ |
content = {}
for key in list(kwargs): # fill kwargs
kwargs[key] = data2schema(kwargs[key])
if isinstance(_data, dict):
datacontent = iteritems(_data)
else:
datacontent = getmembers(_data)
for name, value in datacontent:
if name[0] == '_':
continue
if isinstance(value, dict):
schema = data2schemacls(value)()
else:
schema = data2schema(value)
content[name] = schema
content.update(kwargs) # update content
result = type('GeneratedSchema', (Schema,), content)
return result |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def validate(schema, data, owner=None):
"""Validate input data with input schema. :param Schema schema: schema able to validate input data. :param data: data to validate. :param Schema owner: input schema parent schema. :raises: Exception if the data is not validated. """ |
schema._validate(data=data, owner=owner) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def dump(schema):
"""Get a serialized value of input schema. :param Schema schema: schema to serialize. :rtype: dict """ |
result = {}
for name, _ in iteritems(schema.getschemas()):
if hasattr(schema, name):
val = getattr(schema, name)
if isinstance(val, DynamicValue):
val = val()
if isinstance(val, Schema):
val = dump(val)
result[name] = val
return result |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def updatecontent(schemacls=None, updateparents=True, exclude=None):
"""Transform all schema class attributes to schemas. It can be used such as a decorator in order to ensure to update attributes with the decorated schema but take care to the limitation to use old style method call for overidden methods. .. example: @updatecontent # update content at the end of its definition. class Test(Schema):
this = ThisSchema() # instance of Test. def __init__(self, *args, **kwargs):
Test.__init__(self, *args, **kwargs) # old style method call. :param type schemacls: sub class of Schema. :param bool updateparents: if True (default), update parent content. :param list exclude: attribute names to exclude from updating. :return: schemacls. """ |
if schemacls is None:
return lambda schemacls: updatecontent(
schemacls=schemacls, updateparents=updateparents, exclude=exclude
)
if updateparents and hasattr(schemacls, 'mro'):
schemaclasses = reversed(list(schemacls.mro()))
else:
schemaclasses = [schemacls]
for schemaclass in schemaclasses:
for name, member in iteritems(getattr(schemaclass, '__dict__', {})):
# transform only public members
if name[0] != '_' and (exclude is None or name not in exclude):
toset = False # flag for setting schemas
fmember = member
if isinstance(fmember, DynamicValue):
fmember = fmember()
toset = True
if isinstance(fmember, Schema):
schema = fmember
if not schema.name:
schema.name = name
else:
toset = True
data = member
if name == 'default':
if isinstance(fmember, ThisSchema):
data = schemaclass(*fmember.args, **fmember.kwargs)
schema = RefSchema(default=data, name=name)
elif isinstance(fmember, ThisSchema):
schema = schemaclass(
name=name, *fmember.args, **fmember.kwargs
)
elif member is None:
schema = AnySchema(name=name)
else:
schema = data2schema(_data=data, name=name)
if isinstance(schema, Schema) and toset:
try:
setattr(schemaclass, name, schema)
except (AttributeError, TypeError):
break
return schemacls |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def verify_key(self, url):
"""For verifying your API key. Provide the URL of your site or blog you will be checking spam from. """ |
response = self._request('verify-key', {
'blog': url,
'key': self._key
})
if response.status == 200:
# Read response (trimmed of whitespace)
return response.read().strip() == "valid"
return False |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def comment_check(self, params):
"""For checking comments.""" |
# Check required params for comment-check
for required in ['blog', 'user_ip', 'user_agent']:
if required not in params:
raise MissingParams(required)
response = self._request('comment-check', params)
if response.status == 200:
# Read response (trimmed of whitespace)
return response.read().strip() == "true"
return False |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def submit_ham(self, params):
"""For submitting a ham comment to Akismet.""" |
# Check required params for submit-ham
for required in ['blog', 'user_ip', 'user_agent']:
if required not in params:
raise MissingParams(required)
response = self._request('submit-ham', params)
if response.status == 200:
return response.read() == "true"
return False |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _request(self, function, params, method='POST', headers=None):
"""Builds and sends a request, returning the HTTP response.""" |
# avoid a shared mutable default argument
if headers is None:
headers = {}
if method == 'POST':
params = urllib.parse.urlencode(params)
headers = { "Content-type": "application/x-www-form-urlencoded", "Accept": "text/plain" }
path = '/%s/%s' % (self._version, function)
self._conn.request(method, path, params, headers)
return self._conn.getresponse() |
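A note on the status and method checks in the handlers above: `response.status is 200` and `method is 'POST'` rely on CPython interning small integers and some strings, which is an implementation detail, not a language guarantee. Value comparison with `==` is the correct, portable form:

```python
def check_status(status):
    # Compare by value: `is` tests object identity, and `x is 200`
    # only appears to work because CPython caches small integers.
    return status == 200

def is_post(method):
    # Same pitfall for strings: interning is not guaranteed.
    return method == 'POST'
```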
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def refresh_token():
# noqa: E501 """Refreshes login token using refresh token Refreshes login token using refresh token # noqa: E501 :rtype: UserAuth """ |
current_user = get_jwt_identity()
if not current_user:
return ErrorResponse(status=401, message="Not logged in")
access_token = create_access_token(identity=current_user)
return AuthResponse(status=201, message='Refreshed Access Token', access_token=access_token, auth=UserAuth()) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def build(_resource, _cache=True, **kwargs):
"""Build a schema from input _resource. :param _resource: object from where get the right schema. :param bool _cache: use cache system. :rtype: Schema. """ |
return _SCHEMAFACTORY.build(_resource=_resource, _cache=_cache, **kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def registerbuilder(self, builder, name=None):
"""Register a schema builder with a key name. Can be used such as a decorator where the builder can be the name for a short use. :param SchemaBuilder builder: schema builder. :param str name: builder name. Default is builder name or generated. """ |
if name is None:
name = getattr(builder, 'name', None) or uuid()
self._builders[name] = builder
return builder |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def build(self, _resource, _cache=True, updatecontent=True, **kwargs):
"""Build a schema class from input _resource. :param _resource: object from where get the right schema. :param bool _cache: use _cache system. :param bool updatecontent: if True (default) update result. :rtype: Schema. """ |
result = None
if _cache and _resource in self._schemasbyresource:
result = self._schemasbyresource[_resource]
else:
for builder in self._builders.values():
try:
result = builder.build(_resource=_resource, **kwargs)
except Exception:
pass
else:
break
if result is None:
raise ValueError('No builder found for {0}'.format(_resource))
if _cache:
self._schemasbyresource[_resource] = result
if updatecontent:
from ..utils import updatecontent
updatecontent(result, updateparents=False)
return result |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def stop():
'''Stops lazarus, regardless of which mode it was started in.
For example:
>>> import lazarus
>>> lazarus.default()
>>> lazarus.stop()
'''
global _active
if not _active:
msg = 'lazarus is not active'
raise RuntimeWarning(msg)
_observer.stop()
_observer.join()
_deactivate() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _restart():
'''Schedule the restart; returning True if cancelled, False otherwise.'''
if _restart_cb:
# https://github.com/formwork-io/lazarus/issues/2
if _restart_cb() is not None:
# restart cancelled
return True
def down_watchdog():
_observer.stop()
_observer.join()
if _close_fds:
# close all fds...
_util.close_fds()
# declare a mulligan ;)
if _restart_func:
_restart_func()
_deactivate()
else:
_util.do_over()
_util.defer(down_watchdog)
return False |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def default(restart_cb=None, restart_func=None, close_fds=True):
'''Sets up lazarus in default mode.
See the :py:func:`custom` function for a more powerful mode of use.
The default mode of lazarus is to watch all modules rooted at
``PYTHONPATH`` for changes and restart when they take place.
Keyword arguments:
restart_cb -- Callback invoked prior to restarting the process; allows
for any cleanup to occur prior to restarting. Returning anything other
than *None* in the callback will cancel the restart.
restart_func -- Function invoked to restart the process. This supplants
the default behavior of using *sys.executable* and *sys.argv*.
close_fds -- Whether all file descriptors other than *stdin*, *stdout*,
and *stderr* should be closed
A simple example:
>>> import lazarus
>>> lazarus.default()
>>> lazarus.stop()
'''
if _active:
msg = 'lazarus is already active'
raise RuntimeWarning(msg)
_python_path = os.getenv('PYTHONPATH')
if not _python_path:
msg = 'PYTHONPATH is not set'
raise RuntimeError(msg)
if restart_cb and not callable(restart_cb):
msg = 'restart_cb keyword argument is not callable'
raise TypeError(msg)
if restart_func and not callable(restart_func):
msg = 'restart_func keyword argument is not callable'
raise TypeError(msg)
global _close_fds
_close_fds = close_fds
try:
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler
except ImportError as ie:
msg = 'no watchdog support (%s)' % str(ie)
raise RuntimeError(msg)
class _Handler(FileSystemEventHandler):
def __init__(self):
self.active = True
def dispatch(self, event):
if not self.active:
return
super(_Handler, self).dispatch(event)
def all_events(self, event):
if is_restart_event(event):
cancelled = _restart()
if not cancelled:
self.active = False
def on_created(self, event):
self.all_events(event)
def on_deleted(self, event):
self.all_events(event)
def on_modified(self, event):
self.all_events(event)
def on_moved(self, event):
self.all_events(event)
global _observer
_observer = Observer()
handler = _Handler()
_observer.schedule(handler, _python_path, recursive=True)
global _restart_cb
_restart_cb = restart_cb
global _restart_func
_restart_func = restart_func
_activate()
_observer.start() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def custom(srcpaths, event_cb=None, poll_interval=1, recurse=True,
restart_cb=None, restart_func=None, close_fds=True):
'''Sets up lazarus in custom mode.
See the :py:func:`default` function for a simpler mode of use.
The custom mode of lazarus is to watch all modules rooted at any of the
source paths provided for changes and restart when they take place.
Keyword arguments:
event_cb -- Callback invoked when a file rooted at a source path
changes. Without specifying an event callback, changes to any module
rooted at a source path will trigger a restart.
poll_interval -- Rate at which changes will be detected. The default
value of ``1`` means it may take up to one second to detect changes.
Decreasing this value may lead to unnecessary overhead.
recurse -- Whether to watch all subdirectories of every source path for
changes or only the paths provided.
restart_cb -- Callback invoked prior to restarting the process; allows
for any cleanup to occur prior to restarting. Returning anything other
than *None* in the callback will cancel the restart.
restart_func -- Function invoked to restart the process. This supplants
the default behavior of using *sys.executable* and *sys.argv*.
close_fds -- Whether all file descriptors other than *stdin*, *stdout*,
and *stderr* should be closed
An example of using a cleanup function prior to restarting:
>>> def cleanup():
... pass
>>> import lazarus
>>> lazarus.custom(os.curdir, restart_cb=cleanup)
>>> lazarus.stop()
An example of avoiding restarts when any ``__main__.py`` changes:
>>> def skip_main(event):
... if event.src_path == '__main__.py':
... return False
... return True
>>> import lazarus
>>> lazarus.custom(os.curdir, event_cb=skip_main)
>>> lazarus.stop()
'''
if _active:
msg = 'lazarus is already active'
raise RuntimeWarning(msg)
if restart_cb and not callable(restart_cb):
msg = 'restart_cb keyword argument is not callable'
raise TypeError(msg)
if restart_func and not callable(restart_func):
msg = 'restart_func keyword argument is not callable'
raise TypeError(msg)
global _close_fds
_close_fds = close_fds
try:
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler
except ImportError as ie:
msg = 'no watchdog support (%s)' % str(ie)
raise RuntimeError(msg)
class _Handler(FileSystemEventHandler):
def __init__(self):
self.active = True
def dispatch(self, event):
if not self.active:
return
super(_Handler, self).dispatch(event)
def all_events(self, event):
# if caller wants event_cb control, defer _restart logic to them
# (caller decides whether this is a restart event)
if event_cb:
if event_cb(event):
cancelled = _restart()
if not cancelled:
self.active = False
# use default is_restart_event logic
elif is_restart_event(event):
cancelled = _restart()
if not cancelled:
self.active = False
def on_created(self, event):
self.all_events(event)
def on_deleted(self, event):
self.all_events(event)
def on_modified(self, event):
self.all_events(event)
def on_moved(self, event):
self.all_events(event)
global _observer
kwargs = {'timeout': poll_interval}
_observer = Observer(**kwargs)
global _restart_cb
_restart_cb = restart_cb
handler = _Handler()
srcpaths = _as_list(srcpaths)
kwargs = {}
if recurse:
kwargs['recursive'] = True
for srcpath in srcpaths:
_observer.schedule(handler, srcpath, **kwargs)
_activate()
_observer.start() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def tty_stream(self):
""" Whether or not our stream is a tty """ |
return hasattr(self.options.stream, "isatty") \
and self.options.stream.isatty() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def color(self):
""" Whether or not color should be output """ |
return self.tty_stream if self.options.color is None \
else self.options.color |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def api_auth(func):
""" If the user is not logged in, this decorator looks for basic HTTP auth data in the request header. """ |
@wraps(func)
def _decorator(request, *args, **kwargs):
authentication = APIAuthentication(request)
if authentication.authenticate():
return func(request, *args, **kwargs)
raise Http404
return _decorator |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _status_change(id, new_status):
"""Update the status of a job The status associated with the id is updated, an update command is issued to the job's pubsub, and and the old status is returned. Parameters id : str The job ID new_status : str The status change Returns ------- str The old status """ |
job_info = json.loads(r_client.get(id))
old_status = job_info['status']
job_info['status'] = new_status
_deposit_payload(job_info)
return old_status |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _deposit_payload(to_deposit):
"""Store job info, and publish an update Parameters to_deposit : dict The job info """ |
pubsub = to_deposit['pubsub']
id = to_deposit['id']
with r_client.pipeline() as pipe:
pipe.set(id, json.dumps(to_deposit), ex=REDIS_KEY_TIMEOUT)
pipe.publish(pubsub, json.dumps({"update": [id]}))
pipe.execute() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _redis_wrap(job_info, func, *args, **kwargs):
"""Wrap something to compute The function that will have available, via kwargs['moi_update_status'], a method to modify the job status. This method can be used within the executing function by: old_status = kwargs['moi_update_status']('my new status') Parameters job_info : dict Redis job details func : function A function to execute. This function must accept ``**kwargs``, and will have ``moi_update_status``, ``moi_context`` and ``moi_parent_id`` available. Raises ------ Exception If the function called raises, that exception is propagated. Returns ------- Anything the function executed returns. """ |
status_changer = partial(_status_change, job_info['id'])
kwargs['moi_update_status'] = status_changer
kwargs['moi_context'] = job_info['context']
kwargs['moi_parent_id'] = job_info['parent']
job_info['status'] = 'Running'
job_info['date_start'] = str(datetime.now())
_deposit_payload(job_info)
caught = None
try:
result = func(*args, **kwargs)
job_info['status'] = 'Success'
except Exception as e:
result = traceback.format_exception(*sys.exc_info())
job_info['status'] = 'Failed'
caught = e
finally:
job_info['result'] = result
job_info['date_end'] = str(datetime.now())
_deposit_payload(job_info)
if caught is None:
return result
else:
raise caught |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def submit(ctx_name, parent_id, name, url, func, *args, **kwargs):
"""Submit through a context Parameters ctx_name : str The name of the context to submit through parent_id : str The ID of the group that the job is a part of. name : str The name of the job url : str The handler that can take the results (e.g., /beta_diversity/) func : function The function to execute. Any returns from this function will be serialized and deposited into Redis using the uuid for a key. This function should raise if the method fails. args : tuple or None Any args for ``func`` kwargs : dict or None Any kwargs for ``func`` Returns ------- tuple, (str, str, AsyncResult) The job ID, parent ID and the IPython's AsyncResult object of the job """ |
if isinstance(ctx_name, Context):
ctx = ctx_name
else:
ctx = ctxs.get(ctx_name, ctxs[ctx_default])
return _submit(ctx, parent_id, name, url, func, *args, **kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _submit(ctx, parent_id, name, url, func, *args, **kwargs):
"""Submit a function to a cluster Parameters parent_id : str The ID of the group that the job is a part of. name : str The name of the job url : str The handler that can take the results (e.g., /beta_diversity/) func : function The function to execute. Any returns from this function will be serialized and deposited into Redis using the uuid for a key. This function should raise if the method fails. args : tuple or None Any args for ``func`` kwargs : dict or None Any kwargs for ``func`` Returns ------- tuple, (str, str, AsyncResult) The job ID, parent ID and the IPython's AsyncResult object of the job """ |
parent_info = r_client.get(parent_id)
if parent_info is None:
parent_info = create_info('unnamed', 'group', id=parent_id)
parent_id = parent_info['id']
r_client.set(parent_id, json.dumps(parent_info))
parent_pubsub_key = parent_id + ':pubsub'
job_info = create_info(name, 'job', url=url, parent=parent_id,
context=ctx.name, store=True)
job_info['status'] = 'Queued'
job_id = job_info['id']
with r_client.pipeline() as pipe:
pipe.set(job_id, json.dumps(job_info))
pipe.publish(parent_pubsub_key, json.dumps({'add': [job_id]}))
pipe.execute()
ar = ctx.bv.apply_async(_redis_wrap, job_info, func, *args, **kwargs)
return job_id, parent_id, ar |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def trc(postfix: Optional[str] = None, *, depth=1) -> logging.Logger: """ Automatically generate a logger from the calling function :param postfix: append another logger name on top this :param depth: depth of the call stack at which to capture the caller name :return: instance of a logger with a correct path to a current caller """ |
x = inspect.stack()[depth]
code = x[0].f_code
func = [obj for obj in gc.get_referrers(code) if inspect.isfunction(obj)][0]
mod = inspect.getmodule(x.frame)
parts = (mod.__name__, func.__qualname__)
if postfix:
parts += (postfix,)
logger_name = '.'.join(parts)
return logging.getLogger(logger_name) |
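The frame-inspection trick can be exercised on its own; this sketch restates the derivation (adding a small guard, our assumption, for frames whose module `inspect.getmodule` cannot resolve) and shows the logger name it produces:

```python
import gc
import inspect
import logging

def trc(postfix=None, *, depth=1):
    """Illustrative rebuild of the logger-name derivation above."""
    frame_info = inspect.stack()[depth]
    code = frame_info.frame.f_code
    # The function object owning this code object is among its referrers.
    func = [o for o in gc.get_referrers(code) if inspect.isfunction(o)][0]
    mod = inspect.getmodule(frame_info.frame)
    modname = mod.__name__ if mod else frame_info.frame.f_globals.get('__name__', '?')
    parts = (modname, func.__qualname__)
    if postfix:
        parts += (postfix,)
    return logging.getLogger('.'.join(parts))

def worker():
    return trc('db')

log = worker()  # logger named "<module>.worker.db"
```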
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _recv_loop(self):
""" Waits for data forever and feeds the input queue. """ |
while True:
try:
data = self._socket.recv(4096)
self._ibuffer += data
while '\r\n' in self._ibuffer:
line, self._ibuffer = self._ibuffer.split('\r\n', 1)
self.iqueue.put(line)
except Exception:
break |
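The buffer-and-split idiom in `_recv_loop` is independent of the socket; a minimal sketch feeding arbitrary chunks shows that only complete CRLF-terminated lines are emitted while partial data stays buffered:

```python
def split_lines(chunks):
    """Mirror of _recv_loop's buffering idiom: accumulate arbitrary
    chunks and emit only complete CRLF-terminated lines."""
    buf = ''
    for data in chunks:
        buf += data
        while '\r\n' in buf:
            line, buf = buf.split('\r\n', 1)
            yield line

# Lines may arrive split across chunks; fragments wait in the buffer.
lines = list(split_lines(['PING :server', '\r\nNOTICE', ' hi\r\n']))
```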
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _send_loop(self):
""" Waits for data in the output queue to send. """ |
while True:
try:
line = self.oqueue.get().splitlines()[0][:500]
self._obuffer += line + '\r\n'
while self._obuffer:
sent = self._socket.send(self._obuffer)
self._obuffer = self._obuffer[sent:]
except Exception:
break |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _create_socket(self):
""" Creates a new SSL enabled socket and sets its timeout. """ |
log.warning('No certificate check is performed for SSL connections')
s = super(SSL, self)._create_socket()
return wrap_socket(s) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def load_config(filename):
"""Load the event definitions from yaml config file.""" |
logger.debug("Event Definitions configuration file: %s", filename)
with open(filename, 'r') as cf:
config = cf.read()
try:
events_config = yaml.safe_load(config)
except yaml.YAMLError as err:
if hasattr(err, 'problem_mark'):
mark = err.problem_mark
errmsg = ("Invalid YAML syntax in Event Definitions file "
"%(file)s at line: %(line)s, column: %(column)s."
% dict(file=filename,
line=mark.line + 1,
column=mark.column + 1))
else:
errmsg = ("YAML error reading Event Definitions file "
"%(file)s"
% dict(file=filename))
logger.error(errmsg)
raise
logger.info("Event Definitions: %s", events_config)
return events_config |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _extract_when(body):
"""Extract the generated datetime from the notification.""" |
# NOTE: I am keeping the logic the same as it was in openstack
# code, However, *ALL* notifications should have a 'timestamp'
# field, it's part of the notification envelope spec. If this was
# put here because some openstack project is generating notifications
# without a timestamp, then that needs to be filed as a bug with the
# offending project (mdragon)
when = body.get('timestamp', body.get('_context_timestamp'))
if when:
return Datatype.datetime.convert(when)
return utcnow() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def list_buckets(self, offset=0, limit=100):
"""Limit breaks above 100""" |
# TODO: If limit > 100, do multiple fetches
if limit > 100:
raise Exception("Zenobase can't handle limits over 100")
return self._get("/users/{}/buckets/?order=label&offset={}&limit={}".format(self.client_id, offset, limit)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _create_table(db, table_name, columns, overwrite=False):
""" Create's `table_name` in `db` if it does not already exist, and adds any missing columns. :param db: An active SQLite3 Connection. :param table_name: The (unicode) name of the table to setup. :param columns: An iterable of column names to ensure exist. :param overwrite: If ``True`` and the table already exists, overwrite it. """ |
with contextlib.closing(db.cursor()) as c:
table_exists = c.execute((
u'SELECT EXISTS(SELECT 1 FROM sqlite_master'
u' WHERE type="table" and name=?) as "exists"'
), (table_name,)).fetchone()
if table_exists['exists']:
if not overwrite:
raise TableExists()
c.execute(u'DROP TABLE IF EXISTS "{table_name}"'.format(
table_name=table_name
))
# Create the table if it doesn't already exist.
c.execute((
u'CREATE TABLE IF NOT EXISTS "{table_name}"'
u'(id INTEGER PRIMARY KEY AUTOINCREMENT);'
).format(table_name=table_name))
# Cache the columns that are already there so we create only
# those that are missing.
c.execute(u'PRAGMA table_info("{table_name}");'.format(
table_name=table_name
))
results = c.fetchall()
existing_columns = set(r['name'] for r in results)
for header in columns:
if header in existing_columns:
continue
# In SQLite3, new columns can only be appended.
c.execute((
u'ALTER TABLE "{table_name}"'
u' ADD COLUMN "{col}" TEXT;'
).format(
table_name=table_name,
col=header
))
# Typically, table modifications occur outside of a
# transaction so this is just a precaution.
db.commit() |
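The dict-style row access above (`row['exists']`, `r['name']`) implies the connection was opened with `sqlite3.Row` as its row factory. A minimal sketch of the append-only column behaviour the function relies on:

```python
import contextlib
import sqlite3

db = sqlite3.connect(":memory:")
# Needed for dict-style access such as row['name'] below.
db.row_factory = sqlite3.Row

with contextlib.closing(db.cursor()) as c:
    c.execute('CREATE TABLE IF NOT EXISTS "items"'
              '(id INTEGER PRIMARY KEY AUTOINCREMENT);')
    # In SQLite3, new columns can only be appended.
    c.execute('ALTER TABLE "items" ADD COLUMN "name" TEXT;')
    c.execute('PRAGMA table_info("items");')
    columns = [row['name'] for row in c.fetchall()]
    print(columns)  # ['id', 'name']
db.commit()
```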
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def update_colorbox():
"""Update Colorbox code from vendor tree""" |
base_name = os.path.dirname(__file__)
destination = os.path.join(base_name, "armstrong", "apps", "images", "static", "colorbox")
colorbox_source = os.path.join(base_name, "vendor", "colorbox")
colorbox_files = [
os.path.join(colorbox_source, "example1", "colorbox.css"),
os.path.join(colorbox_source, "example1", "images"),
os.path.join(colorbox_source, "colorbox", "jquery.colorbox-min.js"),
]
local("cp -R %s %s" % (" ".join(colorbox_files), destination))
# We're not supporting IE6, so we can drop the backfill
local("rm -rf %s" % (os.path.join(destination, "images", "ie6"))) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def environment_variables_for_task(task):
""" This will build a dict with all the environment variables that should be present when running a build or deployment. :param task: A dict of the json payload with information about the build task. :return: A dict of environment variables. """ |
env = {
'CI': 'frigg',
'FRIGG': 'true',
'FRIGG_CI': 'true',
'GH_TOKEN': task['gh_token'],
'FRIGG_BUILD_BRANCH': task['branch'],
'FRIGG_BUILD_COMMIT_HASH': task['sha'],
'FRIGG_BUILD_DIR': '~/builds/{0}'.format(task['id']),
'FRIGG_BUILD_ID': task['id'],
'FRIGG_DOCKER_IMAGE': task['image'],
'FRIGG_WORKER': socket.getfqdn(),
}
if 'pull_request_id' in task:
env['FRIGG_PULL_REQUEST_ID'] = task['pull_request_id']
if 'build_number' in task:
env['FRIGG_BUILD_NUMBER'] = task['build_number']
if 'secrets' in task:
env.update(task['secrets'])
if 'environment_variables' in task:
env.update(task['environment_variables'])
return env |
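The successive `update` calls mean later sources win on key collisions: `secrets` is applied before `environment_variables`, so the latter overrides. A sketch with a hypothetical payload:

```python
# Hypothetical task payload: API_KEY appears in both sources.
task = {
    'secrets': {'API_KEY': 'from-secrets', 'DB_URL': 'postgres://example'},
    'environment_variables': {'API_KEY': 'from-env-vars'},
}

env = {'CI': 'frigg'}
if 'secrets' in task:
    env.update(task['secrets'])
if 'environment_variables' in task:
    env.update(task['environment_variables'])

print(env['API_KEY'])  # from-env-vars
```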
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def hkdf(self, chaining_key, input_key_material, dhlen=64):
"""Hash-based key derivation function Takes a ``chaining_key'' byte sequence of len HASHLEN, and an ``input_key_material'' byte sequence with length either zero bytes, 32 bytes or dhlen bytes. Returns two byte sequences of length HASHLEN""" |
if len(chaining_key) != self.HASHLEN:
raise HashError("Incorrect chaining key length")
if len(input_key_material) not in (0, 32, dhlen):
raise HashError("Incorrect input key material length")
temp_key = self.hmac_hash(chaining_key, input_key_material)
output1 = self.hmac_hash(temp_key, b'\x01')
output2 = self.hmac_hash(temp_key, output1 + b'\x02')
return output1, output2 |
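A standalone sketch of the same two-output HKDF, assuming `HASHLEN` is 64 and `hmac_hash` is HMAC-SHA-512 (both are assumptions; the class's actual hash function is not shown in this entry):

```python
import hashlib
import hmac

HASHLEN = 64  # assuming SHA-512, whose digest is 64 bytes

def hmac_hash(key, data):
    return hmac.new(key, data, hashlib.sha512).digest()

def hkdf(chaining_key, input_key_material, dhlen=64):
    if len(chaining_key) != HASHLEN:
        raise ValueError("Incorrect chaining key length")
    if len(input_key_material) not in (0, 32, dhlen):
        raise ValueError("Incorrect input key material length")
    # Extract a temp key, then expand it into two HASHLEN outputs.
    temp_key = hmac_hash(chaining_key, input_key_material)
    output1 = hmac_hash(temp_key, b'\x01')
    output2 = hmac_hash(temp_key, output1 + b'\x02')
    return output1, output2

ck = b'\x00' * HASHLEN
o1, o2 = hkdf(ck, b'\x01' * 32)
print(len(o1), len(o2))  # 64 64
```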
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def append(self, val):
"""Append byte string val to buffer If the result exceeds the length of the buffer, behavior depends on whether instance was initialized as strict. In strict mode, a ValueError is raised. In non-strict mode, the buffer is extended as necessary. """ |
new_len = self.length + len(val)
to_add = new_len - len(self.bfr)
if self.strict and to_add > 0:
raise ValueError("Cannot resize buffer")
self.bfr[self.length:new_len] = val
self.length = new_len |
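A minimal sketch of both modes, using a hypothetical `Buffer` wrapper around a `bytearray` (the real class's constructor is not shown above, so its shape here is an assumption):

```python
class Buffer:
    def __init__(self, size, strict=False):
        self.bfr = bytearray(size)
        self.length = 0
        self.strict = strict

    def append(self, val):
        new_len = self.length + len(val)
        to_add = new_len - len(self.bfr)
        if self.strict and to_add > 0:
            raise ValueError("Cannot resize buffer")
        # Slice assignment grows the bytearray when val overruns the end.
        self.bfr[self.length:new_len] = val
        self.length = new_len

b = Buffer(4)
b.append(b'abcdef')        # non-strict: buffer grows as needed
print(b.length)            # 6

s = Buffer(4, strict=True)
try:
    s.append(b'abcdef')    # strict: raises instead of growing
except ValueError as err:
    print(err)
```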
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def addr_info(addr):
""" Interprets an address in standard tuple format to determine if it is valid, and, if so, which socket family it is. Returns the socket family. """ |
# If it's a string, it's in the UNIX family
if isinstance(addr, str):
    return socket.AF_UNIX
# Verify that addr is a tuple
if not isinstance(addr, collections.abc.Sequence):
raise ValueError("address is not a tuple")
# Make sure it has at least 2 fields
if len(addr) < 2:
raise ValueError("cannot understand address")
# Sanity-check the port number
if not (0 <= addr[1] < 65536):
raise ValueError("cannot understand port number")
# OK, first field should be an IP address; suck it out...
ipaddr = addr[0]
# Empty string means IPv4
if not ipaddr:
if len(addr) != 2:
raise ValueError("cannot understand address")
return socket.AF_INET
# See if it's valid...
if netaddr.valid_ipv6(ipaddr):
if len(addr) > 4:
raise ValueError("cannot understand address")
return socket.AF_INET6
elif netaddr.valid_ipv4(ipaddr):
if len(addr) != 2:
raise ValueError("cannot understand address")
return socket.AF_INET
raise ValueError("cannot understand address") |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def make_labels(X):
"""Helper function that generates a single 1D numpy.ndarray with labels which are good targets for stock logistic regression. Parameters: X (numpy.ndarray):
The input data matrix. This must be a numpy.ndarray with 3 dimensions or an iterable containing 2 numpy.ndarrays with 2 dimensions each. Each corresponds to the data for one of the two classes; every row corresponds to one example of the data set, every column, one different feature. Returns: numpy.ndarray: With a single dimension, containing suitable labels for all rows and for all classes defined in X (depth). """ |
return numpy.hstack([k*numpy.ones(len(X[k]), dtype=int) for k in range(len(X))]) |
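The same helper exercised on a tiny hypothetical two-class dataset, to show the label layout it produces:

```python
import numpy

def make_labels(X):
    # One label per row of each class block: 0 for X[0], 1 for X[1], ...
    return numpy.hstack([k * numpy.ones(len(X[k]), dtype=int)
                         for k in range(len(X))])

# Hypothetical dataset: 3 examples of class 0, 2 examples of class 1.
X = [numpy.zeros((3, 2)), numpy.ones((2, 2))]
print(make_labels(X))  # [0 0 0 1 1]
```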
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_bias(X):
"""Helper function to add a bias column to the input array X Parameters: X (numpy.ndarray):
The input data matrix. This must be a numpy.ndarray with 2 dimensions where every row corresponds to one example of the data set, every column, one different feature. Returns: numpy.ndarray: The same input matrix X with an added (prefix) column of ones. """ |
return numpy.hstack((numpy.ones((len(X),1), dtype=X.dtype), X)) |
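The bias helper exercised on a small hypothetical matrix: the prefixed column of ones lets the first weight of a linear model act as the bias term:

```python
import numpy

def add_bias(X):
    # Prefix a column of ones, matching X's dtype.
    return numpy.hstack((numpy.ones((len(X), 1), dtype=X.dtype), X))

X = numpy.array([[2.0, 3.0], [4.0, 5.0]])
print(add_bias(X))
# [[1. 2. 3.]
#  [1. 4. 5.]]
```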