text_prompt stringlengths 157 13.1k | code_prompt stringlengths 7 19.8k ⌀ |
|---|---|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def kill(options):
""" kill a specific job by id """ |
configuration = config.get_default()
app_url = configuration['app_url']
if options.deployment is not None:
deployment_name = options.deployment
else:
deployment_name = configuration['deployment_name']
client_id = configuration['client_id']
client_secret = configuration['client_secret']
token_manager = auth.TokenManager(client_id=client_id,
client_secret=client_secret,
app_url=app_url)
job_details = data_engine.get_job_details(options.job_id,
deployment_name,
token_manager=token_manager,
app_url=app_url)
options.format = 'table'
if options.yes:
decision = 'Y'
else:
_print_jobs([job_details], token_manager, app_url, options)
decision = prompt('Are you sure you want to delete the above job? (Y/N)')
if decision == 'Y':
data_engine.delete_job(options.job_id.strip(),
deployment_name,
token_manager=token_manager,
app_url=app_url)
else:
raise JutException('Unexpected option "%s"' % decision) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get(self, key):
""" Executes the callable registered at the specified key and returns its value. Subsequent queries are cached internally. `key` String key for a previously stored callable. """ |
if key not in self._actions:
return None
if key not in self._cache:
self._cache[key] = self._actions[key]()
return self._cache[key] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def register(self, key, value):
""" Registers a callable with the specified key. `key` String key to identify a callable. `value` Callable object. """ |
self._actions[key] = value
# invalidate cache of results for existing key
if key in self._cache:
del self._cache[key] |
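A minimal, self-contained sketch of the register/get caching pattern in the two records above (the class name and keys are illustrative): each key maps to a callable whose result is computed once and cached until the key is re-registered.

```python
class Registry(object):
    def __init__(self):
        self._actions = {}
        self._cache = {}

    def register(self, key, value):
        self._actions[key] = value
        # re-registering a key invalidates any cached result for it
        self._cache.pop(key, None)

    def get(self, key):
        if key not in self._actions:
            return None
        if key not in self._cache:
            self._cache[key] = self._actions[key]()
        return self._cache[key]

calls = []
reg = Registry()
reg.register("answer", lambda: calls.append(1) or 42)
assert reg.get("answer") == 42
assert reg.get("answer") == 42   # cached: the callable ran only once
assert len(calls) == 1
reg.register("answer", lambda: 7)  # invalidates the cache
assert reg.get("answer") == 7
```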
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get(self, key):
""" Executes the callable registered at the specified key and returns its value along with type info. Subsequent queries are cached internally. `key` String key for a previously stored callable. """ |
obj = super(ExtRegistry, self).get(key)
if obj is None:
return obj
return (obj, self._type_info.get(key)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def register(self, key, value, type_info):
""" Registers a callable with the specified key and type info. `key` String key to identify a callable. `value` Callable object. `type_info` Dictionary with type information about the value provided. """ |
# check for existing action
old_action = self._actions.get(key)
# update existing type info if value hasn't changed
if old_action == value and key in self._type_info:
self._type_info[key].update(type_info)
else:
self._type_info[key] = dict(type_info)
super(ExtRegistry, self).register(key, value) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def run_extractor(*args, **kwargs):
""" Initializes and runs an extractor """ |
# pdb.set_trace()
extractor = Extractor(*args, **kwargs)
return extractor.run(**kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def run(self, tag=None, output=None, **kwargs):
""" runs the extractor Args: ----- output: ['filepath', None] """ |
start = datetime.datetime.now()
count = 0
if tag:
tag = Uri(tag)
xml_generator = etree.iterparse(self.source,
#events=("start", "end"),
tag=tag.etree)
else:
xml_generator = etree.iterparse(self.source) #,
#events=("start", "end"))
i = 0
for event, element in xml_generator:
type_tags = element.findall(_RDF_TYPE_TAG)
rdf_types = [el.get(_RES_TAG)
for el in type_tags
if el.get(_RES_TAG)]
# print(rdf_types)
if str(self.filter_val) in rdf_types:
# print("%s - %s - %s - %s" % (event,
# element.tag,
# element.attrib,
# element.text))
count += 1
# if i == 100:
# break
i += 1
element.clear()
print("Found '{}' items in {}".format(count,
(datetime.datetime.now() - start))) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def withTracebackPrint(ErrorType, thrownError, _traceback):
'''returns an Exception object for the given ErrorType of the thrownError
and the _traceback
can be used like withTracebackPrint(*sys.exc_info())'''
file = StringIO.StringIO()
traceback.print_exception(ErrorType, thrownError, _traceback, file = file)
return _loadError(ErrorType, thrownError, file.getvalue()) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _newRemoteException(ErrorType):
'''create a new RemoteExceptionType from a given errortype'''
RemoteErrorBaseType = _RemoteExceptionMeta('', (ErrorType,), {})
class RemoteException(RemoteErrorBaseType):
BaseExceptionType = ErrorType
def __init__(self, thrownError, tracebackString):
self.thrownError = thrownError
self.tracebackString = tracebackString
RemoteErrorBaseType.__init__(self, *thrownError.args)
loadError = staticmethod(_loadError)
def __str__(self):
return '\n%s\n%s' % (self.tracebackString, self.thrownError)
def __reduce__(self):
args = (ErrorType, self.thrownError, self.tracebackString)
return self.loadError, args
RemoteException.__name__ = 'Remote' + ErrorType.__name__
return RemoteException |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _loadError(ErrorType, thrownError, tracebackString):
'''constructor of RemoteExceptions'''
RemoteException = asRemoteException(ErrorType)
return RemoteException(thrownError, tracebackString) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def handle_exception(self, e):
"""called by flask when an exception happens. p4rr0t007 always returns a 500.html response that must exist under the given ``template_folder`` constructor param. """ |
sys.stderr.write("p4rr0t007 handled an error:")
sys.stderr.write(traceback.format_exc())
sys.stderr.flush()
self.log.exception('failed to handle {} {}'.format(request.method, request.url))
try:
return self.template_response(self.error_template_name, code=500)
except TemplateError as e:
sys.stderr.write('failed to render the {}/{}'.format(self.template_folder, self.error_template_name))
sys.stderr.write(traceback.format_exc())
sys.stderr.flush()
return self.text_response('5ERV3R 3RR0R') |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def button_input(self, title, message, buttons, default, timeout=None, dimensions=None):
'''
Function to accept input in the form of a button click.
'''
# Create the dialog box
self.response = default
self.top = tkinter.Tk()
self.top.title(title)
# Use dimensions if passes
if dimensions is not None:
self.top.minsize(width=dimensions[0], height=dimensions[1])
self.top.maxsize(width=dimensions[0], height=dimensions[1])
# Display a message
labelString = tkinter.StringVar()
labelString.set(message)
label = tkinter.Label(self.top, textvariable=labelString, relief=tkinter.RAISED)
label.pack(ipadx=100, ipady=10)
# Populate dialog box with buttons
for key in buttons.keys():
button = tkinter.Button(self.top, text=buttons[key], command=lambda key=key: self.selected(key))
button.pack(fill='both', pady=5, padx=10)
# Destroy the dialog box if there has been no button click within the timeout period
if timeout is not None:
try:
self.top.after(timeout, lambda: self.top.destroy())
except tkinter.TclError:
pass
self.top.mainloop()
return self.response |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def rate_limited(max_per_second):
""" Sort of based off of an answer about rate limiting on Stack Overflow. Definitely **not** thread safe, so don't even think about it, buddy. """ |
import datetime
min_request_time = datetime.timedelta(seconds=1.0 / max_per_second)
last_time_called = [None]
def decorate(func):
def rate_limited_function(*args, **kwargs):
if last_time_called[0]:
delta = datetime.datetime.now() - last_time_called[0]
if delta < datetime.timedelta(0):
raise chrw.exceptions.TimeIsBackToFront("Call the Doc!")
elif delta < min_request_time:
msg = "Last request was {0}, should be at least {1}".format(
delta, min_request_time
)
raise chrw.exceptions.RequestRateTooHigh(msg)
ret = func(*args, **kwargs)
last_time_called[0] = datetime.datetime.now()
return ret
return functools.update_wrapper(rate_limited_function, func)
return decorate |
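The decorator above names its parameter `max_per_second` but uses it directly as a gap in seconds. A self-contained sketch with an explicit minimum interval (the exception class here is an illustrative stand-in for `chrw.exceptions.RequestRateTooHigh`, not the real one):

```python
import datetime
import functools

class RequestRateTooHigh(Exception):
    """Stand-in for chrw.exceptions.RequestRateTooHigh (illustrative)."""

def rate_limited(min_interval_seconds):
    """Reject calls arriving sooner than `min_interval_seconds` apart."""
    min_gap = datetime.timedelta(seconds=min_interval_seconds)
    last_called = [None]   # closed-over mutable cell, as in the original
    def decorate(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            now = datetime.datetime.now()
            if last_called[0] is not None and now - last_called[0] < min_gap:
                raise RequestRateTooHigh("calls arriving too fast")
            result = func(*args, **kwargs)
            last_called[0] = now
            return result
        return wrapper
    return decorate

@rate_limited(0.5)
def ping():
    return "pong"

assert ping() == "pong"
try:
    ping()   # immediate second call exceeds the rate
    raised = False
except RequestRateTooHigh:
    raised = True
assert raised
```

Like the original, this is not thread safe: `last_called` is read and written without any lock.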
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def shorten(self, url, custom=None, give_delete=True):
""" Sends a URL shorten request to the API. :param url: the URL to shrink :type url: str :param custom: a custom URL to request :type custom: str :param give_delete: would we like a deletion key to be returned? :type give_delete: bool :return: the API response JSON dict :rtype: dict """ |
data = self.fetch("/submit", {
"long": url,
"short": custom if custom else "",
"delete": "true" if give_delete else ""
})
return data |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def delete(self, url, code):
""" Request a URL be deleted. This will only work if you supply the valid deletion code. :param url: the shortened url to delete :type url: str :param code: the deletion code given to you on URL shorten :type code: str :return: the deletion request's reply dict :rtype: dict """ |
data = self.fetch("/delete", {
"short": url,
"delete": code
})
return data |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def fetch(self, url, pdata, store_to_self=True):
""" This does the bulk of the work for the wrapper. It will send a POST request, to the API URL, with all required data, as well as the api_key given, and will handle various replies, raising exceptions as required. :param url: the url segment to POST to (unbuilt url, e.g., /submit, /expand) :type url: str :param pdata: a dictionary of data to POST :type pdata: dict :param store_to_self: should we store the reply (if any) to self.reply? :type store_to_self: bool :return: the API reply data :rtype: dict :raises: chrw.exceptions.ApiDisabled, chrw.exceptions.InvalidApiKey, chrw.exceptions.PartialFormData, chrw.exceptions.NonZeroException """ |
url = self.schema + '://' + self.base + url
post = dict(pdata)
post["api_key"] = self.api_key
self.post = post
res = requests.post(url, post, headers={"User-Agent": self.user_agent})
if self.require_200 and res.status_code != requests.codes.ok:
raise chrw.exceptions.RequestFailed("Got HTTP reply {0}, needed {1}".format(res.status_code, requests.codes.ok))
if not res.json:
raise chrw.exceptions.InvalidDataReturned("Invalid JSON data was returned")
if store_to_self:
self.reply = res.json
if res.json["enum"] == chrw.codes.api_disabled:
raise chrw.exceptions.ApiDisabled(res.json["message"])
elif res.json["enum"] == chrw.codes.no_such_key:
raise chrw.exceptions.InvalidApiKey(res.json["message"])
elif res.json["enum"] == chrw.codes.partial_form_data:
raise chrw.exceptions.PartialFormData(res.json["message"])
elif res.json["enum"] != chrw.codes.success:
__ = "Non-zero reply {0}: {1}".format(res.json["enum"], res.json["message"])
raise chrw.exceptions.NonZeroReply, __
return res.json |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def watch(path, handler):
"""Watch a directory for events. - path should be the directory to watch - handler should a function which takes an event_type and src_path and does something interesting. event_type will be one of 'created', 'deleted', 'modified', or 'moved'. src_path will be the absolute path to the file that triggered the event. """ |
# let the user just deal with events
@functools.wraps(handler)
def wrapper(self, event):
if not event.is_directory:
return handler(event.event_type, event.src_path)
attrs = {'on_any_event': wrapper}
EventHandler = type("EventHandler", (FileSystemEventHandler,), attrs)
observer = Observer()
observer.schedule(EventHandler(), path=path, recursive=True)
observer.start()
try:
while True:
time.sleep(1)
except KeyboardInterrupt:
observer.stop()
observer.join() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def rotate(self, shift):
'''
Rotate 90 degrees clockwise `shift` times. If `shift` is negative,
rotate counter-clockwise.
'''
self.child_corners.values[:] = np.roll(self.child_corners.values, shift, axis=0)
self.update_transform() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def message_length(message):
'''
message_length returns visual length of message.
Ascii chars are counted as 1, non-asciis are 2.
:param str message: random unicode mixed text
:rtype: int
'''
length = 0
for width in map(east_asian_width, message):
if width in ('W', 'F'):
length += 2
else:
length += 1
return length |
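`east_asian_width` comes from the stdlib `unicodedata` module; besides 'W' and 'Na' it can also return 'F', 'H', 'N', and 'A'. A self-contained sketch that counts wide and fullwidth characters as two cells and everything else as one (a simplification of terminal-cell width):

```python
from unicodedata import east_asian_width

def message_length(message):
    # 'W' (wide) and 'F' (fullwidth) occupy two display cells;
    # all other categories ('Na', 'N', 'H', 'A') count as one here.
    return sum(2 if east_asian_width(ch) in ("W", "F") else 1
               for ch in message)

assert message_length("hello") == 5
assert message_length("こんにちは") == 10   # five wide characters
assert message_length("abcこん") == 7
```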
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def migration(resource, version, previous_version=''):
"""Register a migration function""" |
def decorator(func):
@wraps(func)
def wrapper(*args, **kwargs):
migrated = func(*args, **kwargs)
return migrated
m = Migration(wrapper, resource, version, previous_version)
m.register()
return m
return decorator |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create(resource_path, previous_version=None, package='perch.migrations'):
"""Create a new migration""" |
pkg, obj = resource_path.rsplit('.', 1)
module = importlib.import_module(pkg)
resource = getattr(module, obj)
version = uuid4().hex
target_module = importlib.import_module(package)
target_dir = os.path.dirname(target_module.__file__)
target_file = os.path.join(target_dir, resource.resource_type + '_' + version + '.py')
with open(target_file, 'w') as f:
f.write(MIGRATION_TEMPLATE.format(
resource_path=resource_path,
resource_type=resource.resource_type,
version=version,
previous_version=previous_version or '',
))
return target_file |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def collect(package='perch.migrations'):
""" Import all modules inside the perch.migrations package and return the registered migrations """ |
package = importlib.import_module(package)
for loader, name, is_pkg in pkgutil.walk_packages(package.__path__):
importlib.import_module(package.__name__ + '.' + name)
return _migrations |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def run_migrations(migrations):
""" Run migrations for a resource type :param: a dicitionary of migrations """ |
for resource, resource_migrations in migrations.items():
for version in resource_migrations:
to_migrate = yield resource_version.get(
key=[resource.resource_type, version],
include_docs=True)
for x in to_migrate['rows']:
instance = resource(**x['doc'])
instance = _migrate_resource(
instance,
resource_migrations,
version
)
yield instance._save() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _migrate_resource(instance, migrations, version=''):
""" Migrate a resource instance Subresources are migrated first, then the resource is recursively migrated :param instance: a perch.Document instance :param migrations: the migrations for a resource :param version: the current resource version to migrate """ |
if version not in migrations:
return instance
instance = _migrate_subresources(
instance,
migrations[version]['subresources']
)
for migration in migrations[version]['migrations']:
instance = migration(instance)
instance._resource['doc_version'] = unicode(migration.version)
instance = _migrate_resource(
instance,
migrations,
version=migration.version
)
return instance |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _migrate_subresources(parent, migrations):
""" Migrate a resource's subresources :param parent: the parent perch.Document instance :param migrations: the migrations for a resource """ |
for subresource, resource_migrations in migrations.items():
parent = _migrate_subresource(
subresource,
parent,
resource_migrations
)
return parent |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _migrate_subresource(subresource, parent, migrations):
""" Migrate a resource's subresource :param subresource: the perch.SubResource instance :param parent: the parent perch.Document instance :param migrations: the migrations for a resource """ |
for key, doc in getattr(parent, subresource.parent_key, {}).items():
for migration in migrations['migrations']:
instance = migration(subresource(id=key, **doc))
parent._resource['doc_version'] = unicode(migration.version)
instance = _migrate_subresources(
instance,
migrations['subresources']
)
doc = instance._resource
doc.pop('id', None)
doc.pop(instance.resource_type + '_id', None)
getattr(parent, subresource.parent_key)[key] = doc
return parent |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def pop_frame(self):
""" Remove and return the frame at the top of the stack. :returns: The top frame :rtype: Frame :raises Exception: If there are no frames on the stack """ |
if len(self.frames) == 0:
raise Exception("stack is exhausted")
return self.frames.pop(0) |
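Assuming the docstring's contract (raise when empty, otherwise remove and return the top frame), a minimal stack sketch with that behavior (the class names are illustrative, not from the source):

```python
class Frame(object):
    def __init__(self, name):
        self.name = name

class Stack(object):
    # pop_frame follows the docstring: raise if there are no frames,
    # otherwise remove and return the frame at the top of the stack.
    def __init__(self, frames):
        self.frames = list(frames)

    def pop_frame(self):
        if not self.frames:
            raise Exception("stack is exhausted")
        return self.frames.pop(0)

stack = Stack([Frame("top"), Frame("bottom")])
assert stack.pop_frame().name == "top"
assert stack.pop_frame().name == "bottom"
try:
    stack.pop_frame()
    raised = False
except Exception:
    raised = True
assert raised
```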
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def dict_to_querystring(dictionary):
"""Converts a dict to a querystring suitable to be appended to a URL.""" |
s = u""
for key in dictionary:
s += u"{0}={1}&".format(key, dictionary[key])
return s[:-1] |
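A sketch of the same idea using `"&".join`, which sidesteps the trailing-`&` trim; for real URLs the stdlib `urllib.parse.urlencode` is preferable because it also percent-encodes values:

```python
from urllib.parse import urlencode

def dict_to_querystring(d):
    # join() avoids building a trailing "&" that must be sliced off
    return "&".join("{0}={1}".format(k, v) for k, v in d.items())

assert dict_to_querystring({"page": 2, "q": "perch"}) == "page=2&q=perch"
# The stdlib equivalent percent-encodes as well:
assert urlencode({"q": "a b"}) == "q=a+b"
```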
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def visualise(seq, sort=lambda x: x[0]):
"""visualises as seq or dictionary""" |
frmt = "{:6} {:8,d} {}"
if isinstance(seq, dict):
seq = seq.items()
if sort:
seq = sorted(seq, key=sort)
mx, mn = max(i[1] for i in seq), min(i[1] for i in seq)
span = (mx - mn) or 1  # avoid ZeroDivisionError when all values are equal
for i in seq:
v = int((i[1] * 100) / span)
print(frmt.format(i[0], i[1], "*" * v)) |
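The bar-width arithmetic above divides by the value range, which is zero whenever all values are equal. A small sketch of just the scaling step with that case guarded (it keeps the original's raw-value scaling, so bars can exceed 100):

```python
def bar_widths(seq):
    # Scale each raw value against the value range, guarding the
    # zero-range case that would otherwise divide by zero.
    values = [v for _, v in seq]
    span = max(values) - min(values)
    if span == 0:
        return [100 for _ in values]
    return [int(v * 100 / span) for v in values]

assert bar_widths([("a", 1), ("b", 2), ("c", 3)]) == [50, 100, 150]
assert bar_widths([("a", 5), ("b", 5)]) == [100, 100]
```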
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def help(cls):
"""prints named colors""" |
print("for named colors use :")
for c in sorted(list(cls.colors.items())):
print("{:10} {}".format(*c)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def printc(cls, txt, color=colors.red):
"""Print in color.""" |
print(cls.color_txt(txt, color)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_all_cached_commit_times(root_folder):
""" Find the gitmit cached commit_times and return them if they are the right shape. This means the file is a list of dictionaries. If they aren't, issue a warning and return an empty list, it is just a cache after all! """ |
result = []
location = cache_location(root_folder)
if os.path.exists(location):
try:
result = json.load(open(location))
except (TypeError, ValueError) as error:
log.warning("Failed to open gitmit cached commit_times\tlocation=%s\terror=%s", location, error)
else:
if type(result) is not list or not all(type(item) is dict for item in result):
log.warning("Gitmit cached commit_times needs to be a list of dictionaries\tlocation=%s\tgot=%s", location, type(result))
result = []
return result |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_cached_commit_times(root_folder, parent_dir, sorted_relpaths):
""" Get the cached commit times for the combination of this parent_dir and relpaths Return the commit assigned to this combination and the actual times! """ |
result = get_all_cached_commit_times(root_folder)
for item in result:
if sorted(item.get("sorted_relpaths", [])) == sorted_relpaths and item.get("parent_dir") == parent_dir:
return item.get("commit"), item.get("commit_times")
return None, {} |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _attach_files(filepaths, email_):
"""Take a list of filepaths and attach the files to a MIMEMultipart. Args: filepaths (list(str)):
A list of filepaths. email_ (email.MIMEMultipart):
A MIMEMultipart email_. """ |
for filepath in filepaths:
base = os.path.basename(filepath)
with open(filepath, "rb") as file:
part = MIMEApplication(file.read(), Name=base)
part["Content-Disposition"] = 'attachment; filename="%s"' % base
email_.attach(part) |
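A runnable sketch of the attachment loop above, exercised against a temporary file (the fake PDF bytes and the standalone function name are illustrative):

```python
import os
import tempfile
from email.mime.application import MIMEApplication
from email.mime.multipart import MIMEMultipart

def attach_files(filepaths, email_):
    # Attach each file to the MIMEMultipart message as an application part.
    for filepath in filepaths:
        base = os.path.basename(filepath)
        with open(filepath, "rb") as f:
            part = MIMEApplication(f.read(), Name=base)
        part["Content-Disposition"] = 'attachment; filename="%s"' % base
        email_.attach(part)

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "report.pdf")
    with open(path, "wb") as f:
        f.write(b"%PDF-1.4 fake content")
    message = MIMEMultipart()
    attach_files([path], message)

parts = message.get_payload()
assert len(parts) == 1
assert parts[0].get_filename() == "report.pdf"
```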
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
async def send_files_preconf(filepaths, config_path=CONFIG_PATH):
"""Send files using the config.ini settings. Args: filepaths (list(str)):
A list of filepaths. """ |
config = read_config(config_path)
subject = "PDF files from pdfebc"
message = ""
await send_with_attachments(subject, message, filepaths, config) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def remove_link(self, obj, attr=None):
""" removes link from obj.attr """ |
name = repr(self)
if not name:
return self
links = self.__class__._get_links()
v = WeakAttrLink(None, obj) if attr is None else WeakAttrLink(obj, attr)
if name in links:
if v in links[name]:
links[name].remove(v)
if not links[name]:
links.pop(name)
return self |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def handle_import(self, options):
""" Gets posts from Blogger. """ |
blog_id = options.get("blog_id")
if blog_id is None:
raise CommandError("Usage is import_blogger %s" % self.args)
try:
from gdata import service
except ImportError:
raise CommandError("Could not import the gdata library.")
blogger = service.GDataService()
blogger.service = "blogger"
blogger.server = "www.blogger.com"
start_index = 1
processed_posts = []
new_posts = 1
while new_posts:
new_posts = 0
query = service.Query()
query.feed = "/feeds/%s/posts/full" % blog_id
query.max_results = 500
query.start_index = start_index
try:
feed = blogger.Get(query.ToUri())
except service.RequestError as err:
message = "There was a service error. The response was: " \
"%(status)s %(reason)s - %(body)s" % err.message
raise CommandError(message, blogger.server + query.feed,
err.message["status"])
for (i, entry) in enumerate(feed.entry):
# this basically gets the unique post ID from the URL to itself
# and pulls the ID off the end.
post_id = entry.GetSelfLink().href.split("/")[-1]
# Skip duplicate posts. Important for the last query.
if post_id in processed_posts:
continue
title = entry.title.text
content = entry.content.text
# this strips off the time zone info off the end as we want UTC
clean_date = entry.published.text[:re.search(r"\.\d{3}",
entry.published.text).end()]
published_date = self.parse_datetime(clean_date)
# TODO - issues with content not generating correct <P> tags
tags = [tag.term for tag in entry.category]
post = self.add_post(title=title, content=content,
pub_date=published_date, tags=tags)
# get the comments from the post feed and then add them to
# the post details
comment_url = "/feeds/%s/%s/comments/full?max-results=1000"
comments = blogger.Get(comment_url % (blog_id, post_id))
for comment in comments.entry:
email = comment.author[0].email.text
author_name = comment.author[0].name.text
# Strip off the time zone info off the end as we want UTC
clean_date = comment.published.text[:re.search(r"\.\d{3}",
comment.published.text).end()]
comment_date = self.parse_datetime(clean_date)
website = ""
if comment.author[0].uri:
website = comment.author[0].uri.text
body = comment.content.text
# add the comment as a dict to the end of the comments list
self.add_comment(post=post, name=author_name, email=email,
body=body, website=website,
pub_date=comment_date)
processed_posts.append(post_id)
new_posts += 1
start_index += 500 |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def recipients(preferences, message, valid_paths, config):
""" The main API function. Accepts a fedmsg message as an argument. Returns a dict mapping context names to lists of recipients. """ |
rule_cache = dict()
results = defaultdict(list)
notified = set()
for preference in preferences:
user = preference['user']
context = preference['context']
if (user['openid'], context['name']) in notified:
continue
for filter in preference['filters']:
if matches(filter, message, valid_paths, rule_cache, config):
for detail_value in preference['detail_values']:
results[context['name']].append({
'user': user['openid'],
context['detail_name']: detail_value,
'filter_name': filter['name'],
'filter_id': filter['id'],
'filter_oneshot': filter['oneshot'],
'markup_messages': preference['markup_messages'],
'triggered_by_links': preference['triggered_by_links'],
'shorten_links': preference['shorten_links'],
'verbose': preference['verbose'],
})
notified.add((user['openid'], context['name']))
break
return results |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def matches(filter, message, valid_paths, rule_cache, config):
""" Returns True if the given filter matches the given message. """ |
if not filter['rules']:
return False
for rule in filter['rules']:
fn = rule['fn']
negated = rule['negated']
arguments = rule['arguments']
rule_cache_key = rule['cache_key']
try:
if rule_cache_key not in rule_cache:
value = fn(config, message, **arguments)
if negated:
value = not value
rule_cache[rule_cache_key] = value
if not rule_cache[rule_cache_key]:
return False
except Exception as e:
log.exception(e)
# If something throws an exception then we do *not* have a match.
return False
# Then all rules matched on this filter..
return True |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def load_preferences(session, config, valid_paths, cull_disabled=False, openid=None, cull_backends=None):
""" Every rule for every filter for every context for every user. Any preferences in the DB that are for contexts that are disabled in the config are omitted here. If the `openid` argument is None, then this is an expensive query that loads, practically, the whole database. However, if an openid string is submitted, then only the preferences of that user are returned (and this is less expensive). """ |
cull_backends = cull_backends or []
query = session.query(fmn.lib.models.Preference)
if openid:
query = query.filter(fmn.lib.models.Preference.openid==openid)
preferences = query.all()
return [
preference.__json__(reify=True)
for preference in preferences
if (
preference.context.name in config['fmn.backends']
and preference.context.name not in cull_backends
and (not cull_disabled or preference.enabled)
)
] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def load_rules(root='fmn.rules'):
""" Load the big list of allowed callable rules. """ |
module = __import__(root, fromlist=[root.split('.')[0]])
hinting_helpers = fmn.lib.hinting.__dict__.values()
rules = {}
for name in dir(module):
obj = getattr(module, name)
# Ignore non-callables.
if not callable(obj):
continue
# Ignore our decorator and its friends
if obj in hinting_helpers:
continue
doc = inspect.getdoc(obj)
# It's crazy, but inspect (stdlib!) doesn't return unicode objs on py2.
if doc and hasattr(doc, 'decode'):
doc = doc.decode('utf-8')
if doc:
# If we have a docstring, then mark it up beautifully for display
# in the web app.
# FWIW, this should probably be moved into fmn.web since nowhere
# else are we going to want HTML... we'll still want raw .rst.
title, doc_as_rst = doc.split('\n', 1)
doc = docutils.examples.html_parts(doc_as_rst)['body']
soup = bs4.BeautifulSoup(doc, 'html5lib')
doc_no_links = ''.join(map(six.text_type, strip_anchor_tags(soup)))
doc = markupsafe.Markup(doc)
doc_no_links = markupsafe.Markup(doc_no_links)
else:
title = "UNDOCUMENTED"
doc = "No docs for %s:%s %r" % (root, name, obj)
doc_no_links = doc
rules[name] = {
'func': obj,
'submodule': obj.__module__.split('.')[-1],
'title': title.strip(),
'doc': doc.strip(),
'doc-no-links': doc_no_links.strip(),
'args': inspect.getargspec(obj)[0],
'datanommer-hints': getattr(obj, 'hints', {}),
'hints-invertible': getattr(obj, 'hinting_invertible', True),
'hints-callable': getattr(obj, 'hinting_callable', None),
}
rules = OrderedDict(
sorted(rules.items(), key=lambda x: x[1]['title'])
)
return {root: rules} |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def use_with(data, fn, *attrs):
"""Apply a function on the attributes of the data :param data: an object :param fn: a function :param attrs: some attributes of the object :returns: an object Let's create some data first:: Usage:: 'Alice,30,F' """ |
args = [getattr(data, x) for x in attrs]
return fn(*args) |
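The stripped doctest in the description hints at output like `'Alice,30,F'`; a self-contained reconstruction of that usage (the `Person` type is illustrative, not from the source):

```python
from collections import namedtuple

def use_with(data, fn, *attrs):
    # Pull the named attributes off `data` and apply `fn` to them.
    return fn(*(getattr(data, a) for a in attrs))

Person = namedtuple("Person", ["name", "age", "gender"])
alice = Person("Alice", 30, "F")

line = use_with(alice, lambda n, a, g: "{0},{1},{2}".format(n, a, g),
                "name", "age", "gender")
assert line == "Alice,30,F"
```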
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def rmap(fn, coll, is_iterable=None):
"""A recursive map :param fn: a function :param coll: a list :param isiterable: a predicate function determining whether a value is iterable. :returns: a list [2, 4, [6, 8]] """ |
if is_iterable is None:
    is_iterable = isiterable
result = []
for x in coll:
    if is_iterable(x):
        # propagate the predicate into the recursive call
        y = rmap(fn, x, is_iterable)
    else:
        y = fn(x)
    result.append(y)
return result |
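A runnable sketch of `rmap`, with the module's `isiterable` helper replaced by an inline predicate (an assumption, since that helper isn't shown here), reproduces the `[2, 4, [6, 8]]` output from the docstring:

```python
def rmap(fn, coll, is_iterable=None):
    # Stand-in for the module's `isiterable` helper so the sketch runs alone.
    if is_iterable is None:
        is_iterable = lambda x: isinstance(x, (list, tuple))
    result = []
    for x in coll:
        if is_iterable(x):
            # recurse into nested collections, keeping the same predicate
            result.append(rmap(fn, x, is_iterable))
        else:
            result.append(fn(x))
    return result

doubled = rmap(lambda x: x * 2, [1, 2, [3, 4]])
# doubled == [2, 4, [6, 8]]
```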
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def compose(*fns):
"""Return the function composed with the given functions :param fns: functions :returns: a function 8 .. note:: compose(fn1, fn2, fn3) is the same as fn1(fn2(fn3)) which means that the last function provided is the first to be applied. """ |
def compose2(f, g):
return lambda x: f(g(x))
return reduce(compose2, fns) |
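Because the last function passed is applied first, `compose(double, inc)(3)` evaluates `double(inc(3))`. A minimal sketch, written py3-compatible (`reduce` lives in `functools` there):

```python
from functools import reduce

def compose(*fns):
    def compose2(f, g):
        return lambda x: f(g(x))
    return reduce(compose2, fns)

double = lambda x: x * 2
inc = lambda x: x + 1
f = compose(double, inc)  # double(inc(x)): last function applied first
# f(3) == double(4) == 8
```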
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def groupby(fn, coll):
"""Group elements in sub-collections by fn :param fn: a function :param coll: a collection :returns: a dictionary {4: ['John', 'Eric'], 5: ['Terry'], 6: ['Graham'], 7: ['Mickael']} """ |
d = collections.defaultdict(list)
for item in coll:
key = fn(item)
d[key].append(item)
return dict(d) |
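The docstring output groups names by their length; a self-contained reconstruction (the name list is inferred from the expected output) is:

```python
import collections

def groupby(fn, coll):
    # Collect items in lists keyed by fn(item).
    d = collections.defaultdict(list)
    for item in coll:
        d[fn(item)].append(item)
    return dict(d)

names = ['John', 'Terry', 'Eric', 'Graham', 'Mickael']
by_len = groupby(len, names)
# by_len == {4: ['John', 'Eric'], 5: ['Terry'], 6: ['Graham'], 7: ['Mickael']}
```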
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def reductions(fn, seq, acc=None):
"""Return the intermediate values of a reduction :param fn: a function :param seq: a sequence :param acc: the accumulator :returns: a list [1, 3, 6] [11, 13, 16] """ |
indexes = xrange(len(seq))
# Test against None so a falsy accumulator such as 0 still counts as provided.
if acc is not None:
    return map(lambda i: reduce(fn, seq[:i+1], acc), indexes)
else:
    return map(lambda i: reduce(fn, seq[:i+1]), indexes) |
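A sketch matching the docstring outputs `[1, 3, 6]` and `[11, 13, 16]`, written with list comprehensions and `functools.reduce` so it also runs on py3; the `acc is not None` guard keeps a falsy seed such as `0` usable:

```python
import operator
from functools import reduce

def reductions(fn, seq, acc=None):
    indexes = range(len(seq))
    if acc is not None:
        return [reduce(fn, seq[:i + 1], acc) for i in indexes]
    return [reduce(fn, seq[:i + 1]) for i in indexes]

sums = reductions(operator.add, [1, 2, 3])
# sums == [1, 3, 6]
seeded = reductions(operator.add, [1, 2, 3], 10)
# seeded == [11, 13, 16]
```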
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def split(coll, factor):
"""Split a collection by using a factor :param coll: a collection :param factor: a collection of factors :returns: a dictionary {'classic': ['Debussy', 'Bach'], 'rock': ['Led Zeppelin', 'Metallica', 'Iron Maiden']} """ |
groups = groupby(lambda x: x[0], itertools.izip(factor, coll))
return dmap(lambda x: [y[1] for y in x], groups) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def assoc(_d, key, value):
"""Associate a key with a value in a dictionary :param _d: a dictionary :param key: a key in the dictionary :param value: a value for the key :returns: a new dictionary {'name': 'Holy Grail'} {} .. note:: the original dictionary is not modified """ |
d = deepcopy(_d)
d[key] = value
return d |
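Since `assoc` deep-copies its input, the original dictionary stays untouched, which is exactly what the docstring's `{'name': 'Holy Grail'}` / `{}` pair shows:

```python
from copy import deepcopy

def assoc(_d, key, value):
    d = deepcopy(_d)  # never mutate the caller's dictionary
    d[key] = value
    return d

movies = {}
updated = assoc(movies, 'name', 'Holy Grail')
# updated == {'name': 'Holy Grail'} while movies is still {}
```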
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def pipe(data, *fns):
"""Apply functions recursively on your data :param data: the data :param fns: functions :returns: an object '43' """ |
return reduce(lambda acc, f: f(acc), fns, data) |
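The docstring's `'43'` output suggests threading a number through an increment and a string conversion; a py3-compatible sketch:

```python
from functools import reduce

def pipe(data, *fns):
    # Feed the running value through each function, left to right.
    return reduce(lambda acc, f: f(acc), fns, data)

result = pipe(42, lambda x: x + 1, str)
# result == '43'
```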
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def update(records, column, values):
"""Update the column of records :param records: a list of dictionaries :param column: a string :param values: an iterable or a function :returns: new records with the columns updated [800000.0, 8000000.0, 18000000.0] [40, 400, 900] """ |
new_records = deepcopy(records)
    if callable(values):
for row in new_records:
row[column] = values(row[column])
elif isiterable(values):
for i, row in enumerate(new_records):
row[column] = values[i]
else:
msg = "You must provide a function or an iterable."
raise ValueError(msg)
return new_records |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def duplicates(coll):
"""Return the duplicated items in the given collection :param coll: a collection :returns: a list of the duplicated items in the collection [1, 3] """ |
return list(set(x for x in coll if coll.count(x) > 1)) |
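Because the result goes through a `set`, its order is unspecified; sorting makes the `[1, 3]` docstring output reproducible. A sketch with invented sample data:

```python
def duplicates(coll):
    # The set removes repeats of the duplicated values themselves.
    return list(set(x for x in coll if coll.count(x) > 1))

dups = sorted(duplicates([1, 2, 1, 3, 3, 4]))
# dups == [1, 3]
```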
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def pluck(record, *keys, **kwargs):
"""Return the record with the selected keys :param record: a list of dictionaries :param keys: some keys from the record :param kwargs: keywords determining how to deal with the keys {'color': 'blue', 'name': 'Lancelot'} The keyword 'default' allows to replace a ``None`` value:: {'movie': 'Bilbo', 'nb_aliens': 0, 'year': 2014} """ |
default = kwargs.get('default', None)
return reduce(lambda a, x: assoc(a, x, record.get(x, default)), keys, {}) |
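`pluck` builds its result by folding `assoc` over the requested keys, so the sketch below inlines `assoc` to stay self-contained; the knight and movie records are invented to match the docstring outputs:

```python
from copy import deepcopy
from functools import reduce

def assoc(_d, key, value):
    d = deepcopy(_d)
    d[key] = value
    return d

def pluck(record, *keys, **kwargs):
    default = kwargs.get('default', None)
    return reduce(lambda a, x: assoc(a, x, record.get(x, default)), keys, {})

knight = {'name': 'Lancelot', 'color': 'blue', 'quest': 'Grail'}
selected = pluck(knight, 'name', 'color')
# selected == {'name': 'Lancelot', 'color': 'blue'}

movie = {'movie': 'Bilbo', 'year': 2014}
padded = pluck(movie, 'movie', 'year', 'nb_aliens', default=0)
# padded == {'movie': 'Bilbo', 'year': 2014, 'nb_aliens': 0}
```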
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def pluck_each(records, columns):
"""Return the records with the selected columns :param records: a list of dictionaries :param columns: a list or a tuple :returns: a list of dictionaries with the selected columns [{'year': 1975, 'title': 'The Holy Grail'}, {'year': 1979, 'title': 'Life of Brian'}, {'year': 1983, 'title': 'The Meaning of Life'}] """ |
return [pluck(records[i], *columns) for i, _ in enumerate(records)] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def use(data, attrs):
"""Return the values of the attributes for the given data :param data: the data :param attrs: strings :returns: a list With a dict:: ['Metallica', None, 'James Hetfield'] With a non dict data structure:: ['Alice', 'F'] """ |
if isinstance(data, dict):
if not isiterable(attrs):
attrs = [attrs]
coll = map(data.get, attrs)
else:
coll = map(lambda x: getattr(data, x), attrs)
return coll |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_in(record, *keys, **kwargs):
"""Return the value corresponding to the keys in a nested record :param record: a dictionary :param keys: strings :param kwargs: keywords :returns: the value for the keys 'Lancelot' '?' """ |
default = kwargs.get('default', None)
return reduce(lambda a, x: a.get(x, default), keys, record) |
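A sketch of `get_in` over an invented nested record, matching the `'Lancelot'` and `'?'` outputs:

```python
from functools import reduce

def get_in(record, *keys, **kwargs):
    default = kwargs.get('default', None)
    return reduce(lambda a, x: a.get(x, default), keys, record)

data = {'knight': {'name': 'Lancelot'}}
name = get_in(data, 'knight', 'name')
# name == 'Lancelot'
color = get_in(data, 'knight', 'color', default='?')
# color == '?'
```

Note the fold only tolerates a miss on the *last* key: if an intermediate key is absent, the accumulator becomes the default and the next `.get` call fails.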
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def valueof(records, key):
"""Extract the value corresponding to the given key in all the dictionaries ['Robert Plant', 'James Hetfield'] """ |
if isinstance(records, dict):
records = [records]
return map(operator.itemgetter(key), records) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def find(fn, record):
"""Apply a function on the record and return the corresponding new record :param fn: a function :param record: a dictionary :returns: a dictionary {'Graham': 35} """ |
values_result = fn(record.values())
keys_result = [k for k, v in record.items() if v == values_result]
return {keys_result[0]: values_result} |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def remove(coll, value):
"""Remove all the occurrences of a given value :param coll: a collection :param value: the value to remove :returns: a list (0, 1, 1, 2, 3, 5) """ |
coll_class = coll.__class__
return coll_class(x for x in coll if x != value) |
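Rebuilding through `coll.__class__` preserves the input type for lists, tuples and sets; a sketch reproducing the `(0, 1, 1, 2, 3, 5)` docstring output:

```python
def remove(coll, value):
    # Rebuild a collection of the same type, minus every occurrence of value.
    return coll.__class__(x for x in coll if x != value)

cleaned = remove((0, 1, 1, 2, 3, 4, 5), 4)
# cleaned == (0, 1, 1, 2, 3, 5)
```

One caveat: the class trick does not work for strings, since `str` applied to a generator yields its repr rather than a joined string.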
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def are_in(items, collection):
"""Return True for each item in the collection :param items: a sub-collection :param collection: a collection :returns: a list of booleans [True, False] """ |
if not isinstance(items, (list, tuple)):
items = (items, )
return map(lambda x: x in collection, items) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def monotony(seq):
"""Determine the monotony of a sequence :param seq: a sequence :returns: 1 if the sequence is sorted (increasing) :returns: 0 if it is not sorted :returns: -1 if it is sorted in reverse order (decreasing) 1 0 -1 """ |
s = sorted(seq)
if list(seq) == s:
    return 1
elif list(seq) == s[::-1]:
    return -1
else:
    return 0 |
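A runnable sketch (computing `sorted` once and comparing against it and its reverse) reproduces the docstring's `1`, `0`, `-1` outputs; the sample sequences are assumptions:

```python
def monotony(seq):
    s = sorted(seq)
    if list(seq) == s:
        return 1          # non-decreasing
    elif list(seq) == s[::-1]:
        return -1         # non-increasing
    return 0              # neither

increasing = monotony([1, 2, 3])   # 1
mixed = monotony([2, 1, 3])        # 0
decreasing = monotony([3, 2, 1])   # -1
```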
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def attributes(data):
"""Return all the non callable and non special attributes of the input data :param data: an object :returns: a list ['cols', 'name', 'rows'] """ |
return [x for x in dir(data)
        if not callable(getattr(data, x)) and not x.startswith('__')] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def dfilter(fn, record):
"""filter for a directory :param fn: A predicate function :param record: a dict :returns: a dict {'John': 27, 'Graham': 35} """ |
return dict([(k, v) for k, v in record.items() if fn(v)]) |
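A sketch with an invented ages dictionary that matches the `{'John': 27, 'Graham': 35}` docstring output:

```python
def dfilter(fn, record):
    # Keep only the entries whose *value* satisfies the predicate.
    return dict((k, v) for k, v in record.items() if fn(v))

ages = {'John': 27, 'Terry': 17, 'Graham': 35}
adults = dfilter(lambda age: age > 20, ages)
# adults == {'John': 27, 'Graham': 35}
```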
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _filter_occurrences(count, relat_op):
"""Filter the occurrences with respect to the selected relational operators""" |
# Filter the occurrences equal (or not equal) to a given value
if "eq" in relat_op:
count = dfilter(lambda x: x == relat_op["eq"], count)
elif "ne" in relat_op:
count = dfilter(lambda x: x != relat_op["ne"], count)
# Filter the occurrences lower (or equal) than a given value
if "lt" in relat_op:
count = dfilter(lambda x: x < relat_op["lt"], count)
elif "le" in relat_op:
count = dfilter(lambda x: x <= relat_op["le"], count)
# Filter the occurrences greater (or equal) than a given value
if "gt" in relat_op:
count = dfilter(lambda x: x > relat_op["gt"], count)
elif "ge" in relat_op:
count = dfilter(lambda x: x >= relat_op["ge"], count)
return count |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def occurrences(coll, value=None, **options):
"""Return the occurrences of the elements in the collection :param coll: a collection :param value: a value in the collection :param options: an optional keyword used as a criterion to filter the values in the collection :returns: the frequency of the values in the collection as a dictionary {1: 2, 2: 1, 3: 1} 2 Filter the values of the occurrences that are <, <=, >, >=, == or != than a given number:: {1: 2, 2: 1, 3: 1} {1: 2} {1: 2} """ |
count = {}
for element in coll:
count[element] = count.get(element, 0) + 1
if options:
count = _filter_occurrences(count, options)
if value is not None:
count = count.get(value, 0)
return count |
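A condensed sketch of `occurrences` that reproduces the docstring outputs; only the `gt` filter is implemented here (the real `_filter_occurrences` helper also supports `eq`/`ne`/`lt`/`le`/`ge`), and the sample collection is an assumption:

```python
def occurrences(coll, value=None, **options):
    count = {}
    for element in coll:
        count[element] = count.get(element, 0) + 1
    # Sketching only the 'gt' relational filter.
    if 'gt' in options:
        count = dict((k, v) for k, v in count.items() if v > options['gt'])
    if value is not None:
        return count.get(value, 0)
    return count

freq = occurrences([1, 1, 2, 3])       # {1: 2, 2: 1, 3: 1}
ones = occurrences([1, 1, 2, 3], 1)    # 2
frequent = occurrences([1, 1, 2, 3], gt=1)  # {1: 2}
```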
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def indexof(coll, item, start=0, default=None):
"""Return the index of the item in the collection :param coll: iterable :param item: scalar :param start: (optional) The start index :default: The default value of the index if the item is not in the collection :returns: idx -- The index of the item in the collection 2 3 True """ |
if item in coll[start:]:
return list(coll).index(item, start)
else:
return default |
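A sketch of `indexof` on a string (the sample inputs are inferred from the `2 3 True` outputs in the docstring):

```python
def indexof(coll, item, start=0, default=None):
    # Only search from `start` onwards; fall back to `default` when absent.
    if item in coll[start:]:
        return list(coll).index(item, start)
    return default

first = indexof('abcb', 'c')      # 2
later = indexof('abcb', 'b', 2)   # 3
absent = indexof('abc', 'z')      # None (the default)
```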
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def indexesof(coll, item):
"""Return all the indexes of the item in the collection :param coll: the collection :param item: a value :returns: a list of indexes [2, 3] """ |
return [i for i, x in enumerate(coll) if x == item] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def count(fn, coll):
"""Return the count of True values returned by the predicate function applied to the collection :param fn: a predicate function :param coll: a collection :returns: an integer 2 """ |
return len([x for x in coll if fn(x) is True]) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def icanhazascii(client, channel, nick, message, found):
""" A plugin for generating showing ascii artz """ |
global FLOOD_RATE, LAST_USED
now = time.time()
if channel in LAST_USED and (now - LAST_USED[channel]) < FLOOD_RATE:
return
LAST_USED[channel] = now
return found |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_application(self, environ):
""" Retrieve an application for a wsgi environ :param environ: The environ object sent by wsgi to an application """ |
host = self._get_host(environ)
subdomain = self._extract_subdomain(host)
return self._get_application(subdomain) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_application(self, subdomain):
""" Return a WSGI application for subdomain. The subdomain is passed to the create_application constructor as a keyword argument. :param subdomain: Subdomain to get or create an application with """ |
with self.lock:
app = self.instances.get(subdomain)
if app is None:
app = self.create_application(subdomain=subdomain)
self.instances[subdomain] = app
return app |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _extract_subdomain(host):
""" Returns a subdomain from a host. This host is typically the HTTP_HOST request envvar. If the host is an IP address, `None` is returned :param host: Request's target host """ |
host = host.split(':')[0]
# If the host is an IP address, there is no subdomain to extract
try:
# Check if the host is an ip address
socket.inet_aton(host)
except socket.error:
# It isn't an IP address, return the subdomain
return '.'.join(host.split('.')[:-2]) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def are_labels_overlapping(label1, label2):
""" non-overlap cases |---------| lbl1 |--------| lbl2 |---------| lbl1 |---------| lbl2 :param label1: :param label2: :return: """ |
# Two labels overlap unless one ends before the other starts
# (assumes start_seconds <= end_seconds for each label).
if (label2.start_seconds > label1.end_seconds or
        label2.end_seconds < label1.start_seconds):
    return False
else:
    return True |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def multiple_directory_files_loader(*args):
""" Loads all the files in each directory as values in a dict with the key being the relative file path of the directory. Updates the value if subsequent file paths are the same. """ |
d = dict()
def load_files(root):
    # os.walk already descends into sub-directories, so no explicit
    # recursion over dirnames is needed (the original re-walked each
    # sub-folder, processing files multiple times).
    for (dirpath, dirnames, filenames) in os.walk(root):
        for filename in filenames:
            filepath = os.path.join(dirpath, filename)
            with open(filepath, 'r') as handle:
                key = os.path.relpath(filepath, root)
                d[key] = handle.read()
for root in args:
    load_files(root)
return d |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def send_root_file(self, filename):
""" Function used to send static files from the root of the domain. """ |
cache_timeout = self.get_send_file_max_age(filename)
return send_from_directory(self.config['ROOT_FOLDER'], filename,
cache_timeout=cache_timeout) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def send_media_file(self, filename):
""" Function used to send media files from the media folder to the browser. """ |
cache_timeout = self.get_send_file_max_age(filename)
return send_from_directory(self.config['MEDIA_FOLDER'], filename,
cache_timeout=cache_timeout) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def send_theme_file(self, filename):
""" Function used to send static theme files from the theme folder to the browser. """ |
cache_timeout = self.get_send_file_max_age(filename)
return send_from_directory(self.config['THEME_STATIC_FOLDER'], filename,
cache_timeout=cache_timeout) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def info(message, *args, **kwargs):
""" write a message to stdout """ |
if 'end' in kwargs:
end = kwargs['end']
else:
end = '\n'
if len(args) == 0:
sys.stdout.write(message)
else:
sys.stdout.write(message % args)
sys.stdout.write(end)
sys.stdout.flush() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def error(message, *args, **kwargs):
""" write a message to stderr """ |
if 'end' in kwargs:
end = kwargs['end']
else:
end = '\n'
if len(args) == 0:
sys.stderr.write(message)
else:
sys.stderr.write(message % args)
sys.stderr.write(end)
sys.stderr.flush() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def debug(message, *args, **kwargs):
""" debug output goes to stderr so you can still redirect the stdout to a file or another program. Controlled by the JUT_DEBUG environment variable being present """ |
if 'end' in kwargs:
end = kwargs['end']
else:
end = '\n'
if DEBUG:
if len(args) == 0:
sys.stderr.write(message)
else:
sys.stderr.write(message % args)
sys.stderr.write(end)
sys.stderr.flush() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add(self, item):
""" Add an item to the work queue. :param item: The work item to add. An item may be of any type; however, if it is not hashable, then the work queue must either be initialized with ``unique`` set to ``False``, or a ``key`` callable must have been provided. """ |
# Are we to uniquify work items?
if self._unique:
key = self._key(item) if self._key else item
# If it already has been added to the queue, do nothing
if key in self._seen:
return
self._seen.add(key)
# Add the item to the queue
self._work.append(item)
# We'll keep a count of the number of items that have been
# through the queue
self._count += 1 |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def has_value(cls, value: int) -> bool: """True if specified value exists in int enum; otherwise, False.""" |
return any(value == item.value for item in cls) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def parse_name(cls, name: str, default: T = None) -> T: """Parse specified name for IntEnum; return default if not found.""" |
if not name:
return default
name = name.lower()
return next((item for item in cls if name == item.name.lower()), default) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def parse_value(cls, value: int, default: T = None) -> T: """Parse specified value for IntEnum; return default if not found.""" |
return next((item for item in cls if value == item.value), default) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def parse_names(cls, names: List[str]) -> T: """Parse specified names for IntEnum; return default if not found.""" |
value = 0
iterable = cls # type: Iterable
for name in names:
name = name.lower()
flag = next((item for item in iterable if name == item.name.lower()), None)
if not flag:
raise ValueError("{} is not a member of {}".format(
name, cls.__name__))
value = value | int(flag)
return cls(value) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _extract_options(line):
r'''Given a line as it would appear in the authorized_keys file,
return an OrderedDict of options, and the remainder of a line as a
string.
>>> Key._extract_options(r'no-pty,command="sh" ssh-rsa AAAAB3NzaC1yc2EAAA...OFy5Lwc8Lo+Jk=')
(OrderedDict([('no-pty', True), ('command', 'sh')]), 'ssh-rsa AAAAB3NzaC1yc2EAAA...OFy5Lwc8Lo+Jk=')
>>> Key._extract_options(r'ssh-rsa AAAAB3NzaC1yc...Lwc8OFy5Lo+kU=')
(OrderedDict(), 'ssh-rsa AAAAB3NzaC1yc...Lwc8OFy5Lo+kU=')
'''
options = OrderedDict({})
quoted = False
escaped = False
option_name = ''
option_val = None
key_without_options = ''
in_options = True
in_option_name = True
for letter in line.strip():
if in_options:
if quoted:
if letter == "\\":
escaped = True
elif letter == '"':
if escaped:
option_val += letter
escaped = False
else:
quoted = False
else:
if escaped:
option_val += "\\"
escaped = False
option_val += letter
else: # not quoted
if letter == ' ':
# end of options
in_options = False
if (option_name in ['ssh-rsa', 'ssh-dss'] or
option_name.startswith('ecdsa-')):
# what we thought was an option name was really the
# key type, and there are no options
key_without_options = option_name + " "
option_name = ''
else:
if option_val is None:
options[option_name] = True
else:
options[option_name] = option_val
elif letter == '"':
quoted = True
elif letter == '=':
# '=' separated option name from value
in_option_name = False
if option_val is None:
option_val = ''
elif letter == ',':
# next option_name
if option_val is None:
options[option_name] = True
else:
options[option_name] = option_val
in_option_name = True
option_name = ''
option_val = None
else: # general unquoted letter
if in_option_name:
option_name += letter
else:
option_val += letter
else:
key_without_options += letter
if key_without_options == '':
# certain mal-formed keys (e.g. a line not containing any spaces)
# will be completely swallowed up by the above parser. It's
        # better to follow the principle of least surprise and return the
# original line, allowing the error to be handled later.
return OrderedDict({}), line.strip()
else:
return options, key_without_options |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def from_pubkey_line(cls, line):
"""Generate Key instance from a a string. Raise ValueError if string is malformed""" |
options, key_without_options = cls._extract_options(line)
if key_without_options == '':
raise ValueError("Empty key")
# the key (with options stripped out) should consist of the fields
# "type", "data", and optionally "comment", separated by a space.
# The comment field may contain additional spaces
fields = key_without_options.strip().split(None, 2) # maxsplit=2
if len(fields) == 3:
type_str, data64, comment = fields
elif len(fields) == 2:
type_str, data64 = fields
comment = None
else: # len(fields) <= 1
raise ValueError("Key has insufficient number of fields")
try:
data = b64decode(data64)
except (binascii.Error, TypeError):
raise ValueError("Key contains invalid data")
key_type = next(iter_prefixed(data))
if key_type == b'ssh-rsa':
key_class = RSAKey
elif key_type == b'ssh-dss':
key_class = DSAKey
elif key_type.startswith(b'ecdsa-'):
key_class = ECDSAKey
else:
raise ValueError('Unknown key type {}'.format(key_type))
return key_class(data, comment, options=options) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def from_pubkey_file(cls, file):
"""Generate a Key instance from a file. Raise ValueError is key is malformed""" |
if hasattr(file, 'read'):
return cls.from_pubkey_line(file.read())
return cls.from_pubkey_line(open(file).read()) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _init():
"""Dynamically import engines that initialize successfully.""" |
import importlib
import os
import re
filenames = os.listdir(os.path.dirname(__file__))
module_names = set()
for filename in filenames:
match = re.match(r'^(?P<name>[A-Z_a-z]\w*)\.py[co]?$', filename)
if match:
module_names.add(match.group('name'))
for module_name in module_names:
try:
module = importlib.import_module('.' + module_name, __name__)
except ImportError:
continue
for name, member in module.__dict__.items():
if not isinstance(member, type):
# skip non-new-style classes
continue
if not issubclass(member, Engine):
# skip non-subclasses of Engine
continue
if member is Engine:
# skip "abstract" class Engine
continue
try:
handle = member.handle
except AttributeError:
continue
engines[handle] = member |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def str(self, value, tolerant=False, limit=1000, seen=frozenset()):
"""Transform value into a representation suitable for substitution.""" |
if value is None:
if tolerant:
return ""
raise ValueError("value is None")
if isinstance(value, (bool, numbers.Number, basestring)):
return str(value)
if not isinstance(value, collections.Iterable):
if not tolerant:
raise ValueError("unknown value type")
try:
name = value.name
except AttributeError:
try:
name = value.__name__
except AttributeError:
try:
name = value.__class__.__name__
except AttributeError:
return "<?>"
return "<%s>" % (name,)
is_mapping = isinstance(value, collections.Mapping)
if not seen:
wrap = "%s"
elif is_mapping:
wrap = "{%s}"
else:
wrap = "[%s]"
id_ = id(value)
if id_ in seen:
if tolerant:
return wrap % ("...",)
raise ValueError("recursive representation")
seen = seen.union((id_,))
if is_mapping:
items = [(self.str(n, tolerant=tolerant, limit=limit, seen=seen),
self.str(v, tolerant=tolerant, limit=limit, seen=seen))
for n, v in value.items()]
items.sort()
items = ("%s=%s" for n, v in items)
else:
it = iter(value)
items = [self.str(item, tolerant=tolerant, limit=limit, seen=seen)
for item in itertools.islice(
it,
len(value)
if isinstance(value, collections.Sized)
else limit)]
items.sort()
try:
next(it)
except StopIteration:
pass
else:
if not tolerant:
raise ValueError("iterable too long")
items.append("...")
return wrap % (", ".join(items),) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _match(self, doc, where):
"""Return True if 'doc' matches the 'where' condition.""" |
assert isinstance(where, dict), "where is not a dictionary"
assert isinstance(doc, dict), "doc is not a dictionary"
try:
return all([doc[k] == v for k, v in where.items()])
except KeyError:
return False |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def import_module(module_name):
""" Imports a module. A single point of truth for importing modules to be documented by `pydoc`. In particular, it makes sure that the top module in `module_name` can be imported by using only the paths in `pydoc.import_path`. If a module has already been imported, then its corresponding entry in `sys.modules` is returned. This means that modules that have changed on disk cannot be re-imported in the same process and have its documentation updated. """ |
if import_path != sys.path:
# Such a kludge. Only restrict imports if the `import_path` has
# been changed. We don't want to always restrict imports, since
# providing a path to `imp.find_module` stops it from searching
# in special locations for built ins or frozen modules.
#
# The problem here is that this relies on the `sys.path` not being
# independently changed since the initialization of this module.
# If it is changed, then some packages may fail.
#
# Any other options available?
# Raises an exception if the parent module cannot be imported.
# This hopefully ensures that we only explicitly import modules
# contained in `pydoc.import_path`.
imp.find_module(module_name.split('.')[0], import_path)
if module_name in sys.modules:
return sys.modules[module_name]
else:
__import__(module_name)
return sys.modules[module_name] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _safe_import(module_name):
""" A function for safely importing `module_name`, where errors are suppressed and `stdout` and `stderr` are redirected to a null device. The obligation is on the caller to close `stdin` in order to avoid impolite modules from blocking on `stdin` when imported. """ |
class _Null (object):
    def write(self, *_):
        pass
sout, serr = sys.stdout, sys.stderr
sys.stdout, sys.stderr = _Null(), _Null()
try:
    m = import_module(module_name)
except BaseException:
    m = None
finally:
    # Always restore the real streams, even if import_module raises
    # something unexpected.
    sys.stdout, sys.stderr = sout, serr
return m |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def mro(self, cls):
""" Returns a method resolution list of documentation objects for `cls`, which must be a documentation object. The list will contain objects belonging to `pydoc.Class` or `pydoc.External`. Objects belonging to the former are exported classes either in this module or in one of its sub-modules. """ |
ups = inspect.getmro(cls.cls)
return [self.find_class(c) for c in ups] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def descendents(self, cls):
""" Returns a descendent list of documentation objects for `cls`, which must be a documentation object. The list will contain objects belonging to `pydoc.Class` or `pydoc.External`. Objects belonging to the former are exported classes either in this module or in one of its sub-modules. """ |
if cls.cls == type or not hasattr(cls.cls, '__subclasses__'):
# Is this right?
return []
downs = cls.cls.__subclasses__()
return [self.find_class(c) for c in downs] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def find_class(self, cls):
""" Given a Python `cls` object, try to find it in this module or in any of the exported identifiers of the submodules. """ |
for doc_cls in self.classes():
if cls is doc_cls.cls:
return doc_cls
for module in self.submodules():
doc_cls = module.find_class(cls)
if not isinstance(doc_cls, External):
return doc_cls
return External('%s.%s' % (cls.__module__, cls.__name__)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def variables(self):
""" Returns all documented module level variables in the module sorted alphabetically as a list of `pydoc.Variable`. """ |
p = lambda o: isinstance(o, Variable) and self._docfilter(o)
return sorted(filter(p, self.doc.values())) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def classes(self):
""" Returns all documented module level classes in the module sorted alphabetically as a list of `pydoc.Class`. """ |
p = lambda o: isinstance(o, Class) and self._docfilter(o)
return sorted(filter(p, self.doc.values())) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def functions(self):
""" Returns all documented module level functions in the module sorted alphabetically as a list of `pydoc.Function`. """ |
p = lambda o: isinstance(o, Function) and self._docfilter(o)
return sorted(filter(p, self.doc.values())) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def submodules(self):
""" Returns all documented sub-modules in the module sorted alphabetically as a list of `pydoc.Module`. """ |
p = lambda o: isinstance(o, Module) and self._docfilter(o)
return sorted(filter(p, self.doc.values())) |
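The four accessors above (`variables`, `classes`, `functions`, `submodules`) all share one filter-then-sort shape: keep the documentation objects of one kind that pass the filter, then sort them. A standalone sketch with stand-in classes (names are illustrative, not the library's):

```python
class Doc:
    """Minimal stand-in for a documentation object, ordered by name."""
    def __init__(self, name):
        self.name = name
    def __lt__(self, other):
        return self.name < other.name

class Var(Doc): ...
class Func(Doc): ...

docs = {"b": Var("b"), "f": Func("f"), "a": Var("a")}
# Same shape as variables(): keep one kind, sort the survivors.
variables = sorted(d for d in docs.values() if isinstance(d, Var))
names = [d.name for d in variables]
```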
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def __is_exported(self, name, module):
""" Returns `True` if and only if `pydoc` considers `name` to be a public identifier for this module where `name` was defined in the Python module `module`. If this module has an `__all__` attribute, then `name` is considered to be exported if and only if it is a member of this module's `__all__` list. If `__all__` is not set, then whether `name` is exported or not is heuristically determined. Firstly, if `name` starts with an underscore, it will not be considered exported. Secondly, if `name` was defined in a module other than this one, it will not be considered exported. In all other cases, `name` will be considered exported. """ |
if hasattr(self.module, '__all__'):
return name in self.module.__all__
if not _is_exported(name):
return False
if module is None:
return False
if self.module.__name__ != module.__name__:
return name in self._declared_variables
return True |
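The export heuristic above can be condensed into a small sketch: `__all__` wins when present, otherwise a leading underscore hides the name. `is_exported` here is an illustrative name covering only the first two branches, not the full per-module bookkeeping.

```python
import types

def is_exported(module, name):
    # An explicit __all__ is authoritative when the module defines one.
    if hasattr(module, "__all__"):
        return name in module.__all__
    # Otherwise fall back to the underscore convention.
    return not name.startswith("_")

mod = types.ModuleType("demo")
mod.__all__ = ["public_thing"]
```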