| text_prompt | code_prompt |
|---|---|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _upload_media(self, directory, files=None, resize_request=None, movealbum_request=None, changetitle_request=None):
"""Uploads media file to FB, returns True if uploaded successfully, Will replace if already uploaded, If megapixels > 0, will scale photos before upload If no filename given, will go through all files in DB""" |
# Connect if we aren't already
if not self._connectToFB():
logger.error("%s - Couldn't connect to fb", directory)
return False
_megapixels=self._load_megapixels(directory)
# Get an album ID (create album if not exists)
_album_id,_album_name=self._get_album(directory)
if not _megapixels:
mpstring="original"
else:
mpstring=("%0.1f MP"%(_megapixels))
# If no files given, use files from DB in dir
if not files:
db=self._loadDB(directory)
files=db.keys()
#If only one file given, make it a list
if isinstance(files, str):
files=[files]
files.sort()
for filename in files:
# Get title here if any
title=self._get_title(directory,filename)
if title:
print("%s - Uploading to fb, album[%s] size=%s title=%s"\
%(filename,_album_name,mpstring,title))
else:
print("%s - Uploading to fb, album[%s] size=%s"\
%(filename,_album_name,mpstring))
status=self._upload_or_replace_fb(directory,filename, \
_album_id, _megapixels,resize_request,movealbum_request,\
changetitle_request,title)
if not status:
return False
return True |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _title_uptodate(self,fullfile,pid,_title):
"""Check fb photo title against provided title, returns true if they match""" |
i=self.fb.get_object(pid)
if 'name' in i:
if _title == i['name']:
return True
return False |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _already_in_album(self,fullfile,pid,album_id):
"""Check to see if photo with given pid is already in the album_id, returns true if this is the case """ |
logger.debug("fb: Checking if pid %s in album %s",pid,album_id)
pid_in_album=[]
# Get all photos in album
photos = self.fb.get_connections(str(album_id),"photos")['data']
# Get all pids in fb album
for photo in photos:
pid_in_album.append(photo['id'])
logger.debug("fb: album %s contains these photos: %s",album_id,pid_in_album)
# Check if our pid matches
if pid in pid_in_album:
return True
return False |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def PrintSets(self):
"""Prints set name and number of photos in set""" |
sets=self._getphotosets()
for setname in sets:
print("%s [%d]"%(setname,sets[setname]['number_photos'])) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def chdir(__path: str) -> ContextManager: """Context handler to temporarily switch directories. Args: __path: Directory to change to Yields: Execution context in ``path`` """ |
old = os.getcwd()
try:
os.chdir(__path)
yield
finally:
os.chdir(old) |
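The generator body above implies a ``contextlib.contextmanager`` decorator on ``chdir``; a minimal, self-contained sketch under that assumption:

```python
import contextlib
import os
import tempfile

@contextlib.contextmanager
def chdir(path):
    """Temporarily switch directories, restoring the original on exit."""
    old = os.getcwd()
    try:
        os.chdir(path)
        yield
    finally:
        os.chdir(old)

with tempfile.TemporaryDirectory() as tmp:
    before = os.getcwd()
    with chdir(tmp):
        # Inside the block, the working directory is ``tmp``.
        inside = os.path.samefile(os.getcwd(), tmp)
    # On exit, the original working directory is restored.
    restored = os.getcwd() == before
```

The ``finally`` clause guarantees restoration even if the body raises.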
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def env(**kwargs: Union[Dict[str, str], None]) -> ContextManager: """Context handler to temporarily alter environment. If you supply a value of ``None``, then the associated key will be deleted from the environment. Args: kwargs: Environment variables to override Yields: Execution context with modified environment """ |
old = os.environ.copy()
try:
os.environ.clear()
# This apparent duplication is because ``putenv`` doesn’t update
# ``os.environ``, and ``os.environ`` changes aren’t propagated to
# subprocesses.
for key, value in old.items():
os.environ[key] = value # NOQA: B003
os.putenv(key, value)
for key, value in kwargs.items():
if value is None:
del os.environ[key]
else:
os.environ[key] = value # NOQA: B003
os.putenv(key, value)
yield
finally:
os.environ.clear()
for key, value in old.items():
os.environ[key] = value # NOQA: B003
os.putenv(key, value) |
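A simplified usage sketch of the ``env`` handler (it skips the ``putenv`` mirroring the original performs for subprocess propagation, and the variable names are illustrative):

```python
import contextlib
import os

@contextlib.contextmanager
def env(**kwargs):
    """Temporarily override environment variables; ``None`` deletes a key."""
    old = os.environ.copy()
    try:
        for key, value in kwargs.items():
            if value is None:
                os.environ.pop(key, None)
            else:
                os.environ[key] = value
        yield
    finally:
        os.environ.clear()
        os.environ.update(old)

os.environ['DEMO_KEEP'] = 'original'
with env(DEMO_KEEP='patched', DEMO_GONE=None):
    inside = os.environ['DEMO_KEEP']
    gone = 'DEMO_GONE' not in os.environ
after = os.environ['DEMO_KEEP']
```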
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def distance_to(self, address, measure="Miles", httpclient=None):
"""Distance to another address """ |
if isinstance(address, Address) and self.latlng and address.latlng:
lat1, lon1 = map(float, self.latlng)
lat2, lon2 = map(float, address.latlng)
elif self.latlng and type(address) is tuple:
lat1, lon1 = map(float, self.latlng)
lat2, lon2 = address
else:
raise ValueError(":address must be type tuple or Address")
radius = 6371 # km
dlat = math.radians(lat2 - lat1)
dlon = math.radians(lon2 - lon1)
a = math.sin(dlat / 2) * math.sin(dlat / 2) + math.cos(math.radians(lat1)) \
* math.cos(math.radians(lat2)) * math.sin(dlon / 2) * math.sin(dlon / 2)
c = 2 * math.atan2(math.sqrt(a), math.sqrt(1 - a))
d = radius * c
# d is in kilometers
if measure == self.KILOMETERS:
return d
elif measure == self.METERS:
return d * 1000
elif measure == self.MILES:
return d * .621371
else:
return d |
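The haversine computation in ``distance_to`` can be checked on its own; this standalone sketch uses the same 6371 km mean Earth radius (the city coordinates are illustrative):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km on a sphere of radius 6371 km."""
    radius = 6371.0
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = (math.sin(dlat / 2) ** 2
         + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
         * math.sin(dlon / 2) ** 2)
    return radius * 2 * math.atan2(math.sqrt(a), math.sqrt(1 - a))

# One degree of longitude at the equator is roughly 111.2 km.
equator_degree = haversine_km(0, 0, 0, 1)
# Paris to London (approximate coordinates) is roughly 343 km.
paris_london = haversine_km(48.8566, 2.3522, 51.5074, -0.1278)
```

From kilometers, meters are ``d * 1000`` and miles ``d * 0.621371``.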
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def json_dump_hook(cfg, text: bool=False):
""" Dumps all the data into a JSON file. """ |
data = cfg.config.dump()
if not text:
json.dump(data, cfg.fd)
else:
return json.dumps(data) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def load(self, env=None):
""" Load a section values of given environment. If nothing to specified, use environmental variable. If unknown environment was specified, warn it on logger. :param env: environment key to load in a coercive manner :type env: string :rtype: dict """ |
self._load()
e = env or \
os.environ.get(RUNNING_MODE_ENVKEY, DEFAULT_RUNNING_MODE)
if e in self.config:
return self.config[e]
logging.warning("Environment '%s' was not found.", e) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def main(show_details:['-l']=False, cols:['-w', '--width']='', *files):
'''
List information about a particular file or set of files
:param show_details: Whether to show detailed info about files
:param cols: specify screen width
'''
print(files)
print(show_details)
print(cols) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def random_date():
'''Return a valid random date.'''
d = datetime.datetime.now().date()
d = d - datetime.timedelta(random.randint(20,2001))
return d |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def random_md_page():
'''Generate random markdown page content..
If the parameters are zero, instead of a fixed number of elements
it uses a random number.
'''
# headers #, ##
# blockquote >
# lists *
# codeblock (indent 4 spaces)
# hrule, 3 or more - in a line
# emphasis: word surrounded by one * or _
lines = []
lines.append("\n# " + random_title(False) + "\n") # add title
lines.append("\n" + random_text(1) + "\n") #and 1 paragraphs
for h in range(1,random.randint(2,5)):
lines.append("\n## " + random_title(False) + "\n") # add header
lines.append("\n" + random_paragraphs(random.randint(1,5)) + "\n") #and some paragraphs
for sh in range(1,random.randint(1,4)):
lines.append("\n### " + random_title(False) +"\n") # add subheader
lines.append("\n" + random_paragraphs(random.randint(4,13)) + "\n") #and some paragraphs
txt = "\n".join(lines)
return txt |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def all(self, **args):
""" Return all Lists. """ |
limit = args.get('limit', 20)
offset = args.get('offset', 0)
r = requests.get(
"https://kippt.com/api/lists?limit=%s&offset=%s" % (limit, offset),
headers=self.kippt.header
)
return (r.json()) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create(self, title, **args):
""" Create a new Kippt List. Parameters: - title (Required) - args Dictionary of other fields Accepted fields can be found here: https://github.com/kippt/api-documentation/blob/master/objects/list.md """ |
# Merge our title as a parameter and JSONify it.
data = json.dumps(dict({'title': title}, **args))
r = requests.post(
"https://kippt.com/api/lists",
headers=self.kippt.header,
data=data
)
return (r.json()) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def cli(config_path, verbose):
"""Record-Recommender command line version.""" |
global config, store
if not config_path:
config_path = '/etc/record_recommender.yml'
config = get_config(config_path)
setup_logging(config)
store = FileStore(config) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def fetch(weeks, force):
"""Fetch newest PageViews and Downloads.""" |
weeks = get_last_weeks(weeks)
print(weeks)
recommender = RecordRecommender(config)
recommender.fetch_weeks(weeks, overwrite=force) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def update_recommender(ctx, weeks, processes):
""" Download and build the recommendations. - Fetch new statistics from the current week. - Generate recommendations. - Update the recommendations. """ |
weeks = get_last_weeks(weeks)
recommender = RecordRecommender(config)
# Redownload incomplete weeks
first_weeks = weeks[:2]
recommender.fetch_weeks(first_weeks, overwrite=True)
# Download missing weeks
recommender.fetch_weeks(weeks, overwrite=False)
print("Build Profiles")
ctx.invoke(profiles, weeks=weeks)
print("Generate Recommendations")
ctx.invoke(build, processes=processes) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def profiles(weeks):
""" Number of weeks to build. Starting with the current week. """ |
profiles = Profiles(store)
weeks = get_last_weeks(weeks) if isinstance(weeks, int) else weeks
print(weeks)
profiles.create(weeks) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def build(processes):
""" Calculate all recommendations using the number of specified processes. The recommendations are calculated from the generated Profiles file. """ |
recommender = RecordRecommender(config)
recommender.create_all_recommendations(processes, ip_views=True) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _bind(self, _descriptor):
""" Bind a ResponseObject to a given action descriptor. This updates the default HTTP response code and selects the appropriate content type and serializer for the response. """ |
# If the method has a default code, use it
self._defcode = getattr(_descriptor.method, '_wsgi_code', 200)
# Set up content type and serializer
self.content_type, self.serializer = _descriptor.serializer(self.req) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _serialize(self):
""" Serialize the ResponseObject. Returns a webob `Response` object. """ |
# Do something appropriate if the response object is unbound
if self._defcode is None:
raise exceptions.UnboundResponse()
# Build the response
resp = self.response_class(request=self.req, status=self.code,
headerlist=list(self._headers.items()))
# Do we have a body?
if self.result:
resp.content_type = self.content_type
resp.body = self.serializer(self.result)
# Return the response
return resp |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def code(self):
""" The HTTP response code associated with this ResponseObject. If instantiated directly without overriding the code, returns 200 even if the default for the method is some other value. Can be set or deleted; in the latter case, the default will be restored. """ |
if self._code is not None:
return self._code
elif self._defcode is not None:
return self._defcode
return 200 |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def report_message(report):
"""Report message.""" |
body = 'Error: return code != 0\n\n'
body += 'Archive: {}\n\n'.format(report['archive'])
body += 'Docker image: {}\n\n'.format(report['image'])
body += 'Docker container: {}\n\n'.format(report['container_id'])
return body |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def task_failure_message(task_report):
"""Task failure message.""" |
trace_list = traceback.format_tb(task_report['traceback'])
body = 'Error: task failure\n\n'
body += 'Task ID: {}\n\n'.format(task_report['task_id'])
body += 'Archive: {}\n\n'.format(task_report['archive'])
body += 'Docker image: {}\n\n'.format(task_report['image'])
body += 'Exception: {}\n\n'.format(task_report['exception'])
body += 'Traceback:\n {} {}'.format(
''.join(trace_list[:-1]), trace_list[-1])
return body |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def send_email(self, message):
"""Initiate a SMTP session and send an email.""" |
msg = MIMEMultipart()
msg['From'] = self.from_address
msg['To'] = self.to_address
msg['Subject'] = self.title
msg.attach(MIMEText('<pre>' + html.escape(message) + '</pre>', 'html'))  # html.escape replaces the removed cgi.escape
smtp = smtplib.SMTP(self.server, self.port,
timeout=self.timeout)
if self.tls_auth:
smtp.starttls()
smtp.login(self.user, self.password)
smtp.sendmail(self.from_address, self.to_address, msg.as_string())
smtp.quit() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def jpath_parse(jpath):
""" Parse given JPath into chunks. Returns list of dictionaries describing all of the JPath chunks. :param str jpath: JPath to be parsed into chunks :return: JPath chunks as list of dicts :rtype: :py:class:`list` :raises JPathException: in case of invalid JPath syntax """ |
result = []
breadcrumbs = []
# Split JPath into chunks based on '.' character.
chunks = jpath.split('.')
for chnk in chunks:
match = RE_JPATH_CHUNK.match(chnk)
if match:
res = {}
# Record whole match.
res['m'] = chnk
# Record breadcrumb path.
breadcrumbs.append(chnk)
res['p'] = '.'.join(breadcrumbs)
# Handle node name.
res['n'] = match.group(1)
# Handle node index (optional, may be omitted).
if match.group(2):
res['i'] = match.group(3)
if str(res['i']) == '#':
res['i'] = -1
elif str(res['i']) == '*':
pass
else:
res['i'] = int(res['i']) - 1
result.append(res)
else:
raise JPathException("Invalid JPath chunk '{}'".format(chnk))
return result |
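``RE_JPATH_CHUNK`` is not shown in the snippet, so the pattern below is an assumption about its shape; the index handling ('#' for last, '*' for all, 1-based integers) follows the body above:

```python
import re

# Hypothetical chunk pattern: a node name, optionally followed by an
# index in square brackets. The real RE_JPATH_CHUNK may differ.
RE_CHUNK = re.compile(r'^([A-Za-z0-9_]+)(\[(#|\*|\d+)\])?$')

def parse_chunk(chunk):
    match = RE_CHUNK.match(chunk)
    if not match:
        raise ValueError("Invalid JPath chunk '{}'".format(chunk))
    res = {'n': match.group(1)}
    if match.group(2):
        idx = match.group(3)
        if idx == '#':
            res['i'] = -1            # '#' selects the last element
        elif idx == '*':
            res['i'] = '*'           # '*' selects every element
        else:
            res['i'] = int(idx) - 1  # JPath indices are 1-based
    return res

chunks = [parse_chunk(c) for c in 'root.items[2].name'.split('.')]
```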
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def jpath_values(structure, jpath):
""" Return all values at given JPath within given data structure. For performance reasons this method is intentionally not written as recursive. :param str structure: data structure to be searched :param str jpath: JPath to be evaluated :return: found values as a list :rtype: :py:class:`list` """ |
# Current working node set.
nodes_a = [structure]
# Next iteration working node set.
nodes_b = []
# Process sequentially all JPath chunks.
chunks = jpath_parse_c(jpath)
for chnk in chunks:
# Process all currently active nodes.
for node in nodes_a:
key = chnk['n']
if not isinstance(node, (dict, collections.abc.Mapping)):
continue
# Process indexed nodes.
if 'i' in chnk:
idx = chnk['i']
# Skip the node, if the key does not exist, the value is not
# a list-like object or the list is empty.
if key not in node or not isinstance(node[key], (list, collections.abc.MutableSequence)) or not node[key]:
continue
try:
# Handle '*' special index - append all nodes.
if str(idx) == '*':
nodes_b.extend(node[key])
# Append only node at particular index.
else:
nodes_b.append(node[key][idx])
except (IndexError, TypeError):
pass
# Process unindexed nodes.
else:
# Skip the node, if the key does not exist.
if key not in node:
continue
# Handle list values - expand them.
if isinstance(node[key], (list, collections.abc.MutableSequence)):
for i in node[key]:
nodes_b.append(i)
# Handle scalar values.
else:
nodes_b.append(node[key])
nodes_a = nodes_b
nodes_b = []
return nodes_a |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def jpath_exists(structure, jpath):
""" Check if node at given JPath within given data structure does exist. :param str structure: data structure to be searched :param str jpath: JPath to be evaluated :return: True or False :rtype: bool """ |
result = jpath_value(structure, jpath)
if result is not None:
return True
return False |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def jpath_set(structure, jpath, value, overwrite = True, unique = False):
""" Set given JPath to given value within given structure. For performance reasons this method is intentionally not written as recursive. :param str structure: data structure to be searched :param str jpath: JPath to be evaluated :param any value: value of any type to be set at given path :param bool overwrite: enable/disable overwriting of already existing value :param bool unique: ensure uniqueness of value, works only for lists :return: numerical return code, one of the (:py:data:`RC_VALUE_SET`, :py:data:`RC_VALUE_EXISTS`, :py:data:`RC_VALUE_DUPLICATE`) :rtype: int """ |
chunks = jpath_parse_c(jpath)
size = len(chunks) - 1
current = structure
# Process chunks in order, enumeration is used for detection of the last JPath chunk.
for i, chnk in enumerate(chunks):
key = chnk['n']
if not isinstance(current, (dict, collections.abc.Mapping)):
raise JPathException("Expected dict-like structure to attach node '{}'".format(chnk['p']))
# Process indexed nodes.
if 'i' in chnk:
idx = chnk['i']
# Automatically create nodes for non-existent keys.
if key not in current:
current[key] = []
if not isinstance(current[key], (list, collections.abc.MutableSequence)):
raise JPathException("Expected list-like object under structure key '{}'".format(key))
# Detection of the last JPath chunk - node somewhere in the middle.
if i != size:
# Attempt to access node at given index.
try:
current = current[key][idx]
# IndexError: list index out of range
# Node at given index does not exist, append new one. Using insert()
# does not work, item is appended to the end of the list anyway.
# TypeError: list indices must be integers or slices, not str
# In the case list index was '*', we are appending to the end of
# list.
except (IndexError, TypeError):
current[key].append({})
current = current[key][-1]
# Detection of the last JPath chunk - node at the end.
else:
# Attempt to insert value at given index.
try:
if overwrite or not current[key][idx]:
current[key][idx] = value
else:
return RC_VALUE_EXISTS
# IndexError: list index out of range
# Node at given index does not exist, append new one. Using insert()
# does not work, item is appended to the end of the list anyway.
# TypeError: list indices must be integers or slices, not str
# In the case list index was '*', we are appending to the end of
# list.
except (IndexError, TypeError):
# At this point only deal with unique, overwrite does not make
# sense, because we would not be here otherwise.
if not unique or value not in current[key]:
current[key].append(value)
else:
return RC_VALUE_DUPLICATE
# Process unindexed nodes.
else:
# Detection of the last JPath chunk - node somewhere in the middle.
if i != size:
# Automatically create nodes for non-existent keys.
if key not in current:
current[key] = {}
if not isinstance(current[key], (dict, collections.abc.Mapping)):
raise JPathException("Expected dict-like object under structure key '{}'".format(key))
current = current[key]
# Detection of the last JPath chunk - node at the end.
else:
if overwrite or key not in current:
current[key] = value
else:
return RC_VALUE_EXISTS
return RC_VALUE_SET |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def queue_ramp_dicts(ramp_dict_list, server_ip_and_port):
"""Simple utility function to queue up a list of dictionaries.""" |
client = server.ClientForServer(server.BECServer, server_ip_and_port)
for dct in ramp_dict_list:
client.queue_ramp(dct)
client.start({}) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def flatten_dict(dct, separator='-->', allowed_types=[int, float, bool]):
"""Returns a list of string identifiers for each element in dct. Recursively scans through dct and finds every element whose type is in allowed_types and adds a string indentifier for it. eg: dct = { 'a': 'a string', 'b': { 'c': 1.0, 'd': True } } flatten_dict(dct) would return ['a', 'b-->c', 'b-->d'] """ |
flat_list = []
for key in sorted(dct):
if key[:2] == '__':
continue
key_type = type(dct[key])
if key_type in allowed_types:
flat_list.append(str(key))
elif key_type is dict:
sub_list = flatten_dict(dct[key], separator, allowed_types)
sub_list = [str(key) + separator + sl for sl in sub_list]
flat_list += sub_list
return flat_list |
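A self-contained sketch of the same flattening logic, using a tuple default to avoid the mutable-default pitfall and forwarding ``separator``/``allowed_types`` into the recursive call:

```python
def flatten_dict(dct, separator='-->', allowed_types=(int, float, bool)):
    """Return flat string identifiers for every allowed scalar in dct."""
    flat = []
    for key in sorted(dct):
        if str(key)[:2] == '__':
            continue  # skip private keys
        value = dct[key]
        if type(value) in allowed_types:
            flat.append(str(key))
        elif isinstance(value, dict):
            flat.extend(str(key) + separator + sub
                        for sub in flatten_dict(value, separator, allowed_types))
    return flat

dct = {'a': 1, 'b': {'c': 1.0, 'd': True}}
flat = flatten_dict(dct)  # ['a', 'b-->c', 'b-->d']
```

Note that a ``str`` value would be skipped, since ``str`` is not in ``allowed_types``.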
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set_dict_item(dct, name_string, set_to):
"""Sets dictionary item identified by name_string to set_to. name_string is the indentifier generated using flatten_dict. Maintains the type of the orginal object in dct and tries to convert set_to to that type. """ |
key_strings = str(name_string).split('-->')
d = dct
for ks in key_strings[:-1]:
d = d[ks]
item_type = type(d[key_strings[-1]])
d[key_strings[-1]] = item_type(set_to) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_name(obj, setting_name='LONG_NAME_FORMAT'):
""" Returns the correct order of the name according to the current language. """ |
nickname = obj.get_nickname()
romanized_first_name = obj.get_romanized_first_name()
romanized_last_name = obj.get_romanized_last_name()
non_romanized_first_name = obj.get_non_romanized_first_name()
non_romanized_last_name = obj.get_non_romanized_last_name()
non_translated_title = obj.get_title()
non_translated_gender = obj.get_gender()
# when the title is blank, gettext returns weird header text. So if this
# occurs, we will pass it on blank without gettext
if non_translated_title:
title = gettext(non_translated_title)
else:
title = non_translated_title
if non_translated_gender:
gender = gettext(non_translated_gender)
else:
gender = non_translated_gender
format_string = u'{}'.format(get_format(setting_name))
format_kwargs = {}
if '{n}' in format_string:
format_kwargs.update({'n': nickname})
if '{N}' in format_string:
format_kwargs.update({'N': nickname.upper()})
if '{f}' in format_string:
format_kwargs.update({'f': romanized_first_name})
if '{F}' in format_string:
format_kwargs.update({'F': romanized_first_name.upper()})
if '{l}' in format_string:
format_kwargs.update({'l': romanized_last_name})
if '{L}' in format_string:
format_kwargs.update({'L': romanized_last_name.upper()})
if '{a}' in format_string:
format_kwargs.update({'a': non_romanized_first_name})
if '{A}' in format_string:
format_kwargs.update({'A': non_romanized_first_name.upper()})
if '{x}' in format_string:
format_kwargs.update({'x': non_romanized_last_name})
if '{X}' in format_string:
format_kwargs.update({'X': non_romanized_last_name.upper()})
if '{t}' in format_string:
format_kwargs.update({'t': title})
if '{T}' in format_string:
format_kwargs.update({'T': title.upper()})
if '{g}' in format_string:
format_kwargs.update({'g': gender})
if '{G}' in format_string:
format_kwargs.update({'G': gender.upper()})
return format_string.format(**format_kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def timethis(func):
"""A wrapper use for timeit.""" |
func_module, func_name = func.__module__, func.__name__
@functools.wraps(func)
def wrapper(*args, **kwargs):
start = _time_perf_counter()
r = func(*args, **kwargs)
end = _time_perf_counter()
print('timethis : <{}.{}> : {}'.format(func_module, func_name, end - start))
return r
return wrapper |
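``_time_perf_counter`` is presumably an alias for ``time.perf_counter``; under that assumption, a runnable sketch with a usage example:

```python
import functools
import time

def timethis(func):
    """Print the wrapped function's qualified name and elapsed wall time."""
    func_module, func_name = func.__module__, func.__name__
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        end = time.perf_counter()
        print('timethis : <{}.{}> : {}'.format(func_module, func_name,
                                               end - start))
        return result
    return wrapper

@timethis
def square(x):
    return x * x

value = square(7)  # prints a timing line, returns 49
```

``functools.wraps`` preserves the wrapped function's name and docstring.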
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_info(name, info_type, url=None, parent=None, id=None, context=ctx_default, store=False):
"""Return a group object""" |
id = str(uuid4()) if id is None else id
pubsub = _pubsub_key(id)
info = {'id': id,
'type': info_type,
'pubsub': pubsub,
'url': url,
'parent': parent,
'context': context,
'name': name,
'status': 'Queued' if info_type == 'job' else None,
'date_start': None,
'date_end': None,
'date_created': str(datetime.now()),
'result': None}
if store:
r_client.set(id, json_encode(info))
if parent is not None:
r_client.sadd(_children_key(parent), id)
return info |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_id_from_user(user):
"""Get an ID from a user, creates if necessary""" |
id = r_client.hget('user-id-map', user)
if id is None:
id = str(uuid4())
r_client.hset('user-id-map', user, id)
r_client.hset('user-id-map', id, user)
return id |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def traverse(self, id_=None):
"""Traverse groups and yield info dicts for jobs""" |
if id_ is None:
id_ = self.group
nodes = r_client.smembers(_children_key(id_))
while nodes:
current_id = nodes.pop()
details = r_client.get(current_id)
if details is None:
# child has expired or been deleted, remove from :children
r_client.srem(_children_key(id_), current_id)
continue
details = self._decode(details)
if details['type'] == 'group':
children = r_client.smembers(_children_key(details['id']))
if children is not None:
nodes.update(children)
yield details |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def close(self):
"""Unsubscribe the group and all jobs being listened too""" |
for channel in self._listening_to:
self.toredis.unsubscribe(channel)
self.toredis.unsubscribe(self.group_pubsub) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def listen_for_updates(self):
"""Attach a callback on the group pubsub""" |
self.toredis.subscribe(self.group_pubsub, callback=self.callback) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def listen_to_node(self, id_):
"""Attach a callback on the job pubsub if it exists""" |
if r_client.get(id_) is None:
return
else:
self.toredis.subscribe(_pubsub_key(id_), callback=self.callback)
self._listening_to[_pubsub_key(id_)] = id_
return id_ |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def unlisten_to_node(self, id_):
"""Stop listening to a job Parameters id_ : str An ID to remove Returns -------- str or None The ID removed or None if the ID was not removed """ |
id_pubsub = _pubsub_key(id_)
if id_pubsub in self._listening_to:
del self._listening_to[id_pubsub]
self.toredis.unsubscribe(id_pubsub)
parent = json_decode(r_client.get(id_)).get('parent', None)
if parent is not None:
r_client.srem(_children_key(parent), id_)
r_client.srem(self.group_children, id_)
return id_ |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def action(self, verb, args):
"""Process the described action Parameters verb : str, {'add', 'remove', 'get'} The specific action to perform args : {list, set, tuple} Any relevant arguments for the action. Raises ------ TypeError If args is an unrecognized type ValueError If the action specified is unrecognized Returns ------- list Elements dependent on the action """ |
if not isinstance(args, (list, set, tuple)):
raise TypeError("args is unknown type: %s" % type(args))
if verb == 'add':
response = ({'add': i} for i in self._action_add(args))
elif verb == 'remove':
response = ({'remove': i} for i in self._action_remove(args))
elif verb == 'get':
response = ({'get': i} for i in self._action_get(args))
else:
raise ValueError("Unknown action: %s" % verb)
self.forwarder(response) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _action_add(self, ids):
"""Add IDs to the group Parameters ids : {list, set, tuple, generator} of str The IDs to add Returns ------- list of dict The details of the added jobs """ |
return self._action_get((self.listen_to_node(id_) for id_ in ids)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _action_remove(self, ids):
"""Remove IDs from the group Parameters ids : {list, set, tuple, generator} of str The IDs to remove Returns ------- list of dict The details of the removed jobs """ |
return self._action_get((self.unlisten_to_node(id_) for id_ in ids)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _action_get(self, ids):
"""Get the details for ids Parameters ids : {list, set, tuple, generator} of str The IDs to get Notes ----- If ids is empty, then all IDs are returned. Returns ------- list of dict The details of the jobs """ |
if not ids:
ids = self.jobs
result = []
ids = set(ids)
while ids:
id_ = ids.pop()
if id_ is None:
continue
try:
payload = r_client.get(id_)
except ResponseError:
# wrong key type
continue
try:
payload = self._decode(payload)
except ValueError:
# unable to decode or data doesn't exist in redis
continue
else:
result.append(payload)
if payload['type'] == 'group':
for obj in self.traverse(id_):
ids.add(obj['id'])
return result |
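The worklist traversal in `_action_get`, with the Redis store replaced by a plain dict. The `children` key is an illustrative stand-in for the real `traverse()` call; the decode/error handling is dropped to show just the expansion loop.

```python
# Pop ids from a pending set, collect their payloads, and expand any
# 'group' payload into further ids to visit.
def collect(store, ids):
    result, pending = [], set(ids)
    while pending:
        id_ = pending.pop()
        payload = store.get(id_)
        if payload is None:
            continue
        result.append(payload)
        if payload.get('type') == 'group':
            pending.update(payload.get('children', ()))
    return result
```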
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def main():
"""Main method that runs the build""" |
data = Common.open_file(F_INFO)
config = Common.open_file(F_CONFIG)
file_full_path = ""
env = load_jinja2_env(config['p_template'])
    for index, page in data.items():
logging.info('Creating ' + index + ' page:')
template = env.get_template(page['f_template'] + \
config['f_template_ext'])
for lang, content in page['content'].items():
if lang == "NaL":
if page['f_directory'] != '':
Common.make_dir(config['p_build'] + page['f_directory'])
file_full_path = config['p_build'] + page['f_directory'] + \
                page['f_name'] + page['f_endtype']
else:
if page['f_directory'] != '':
Common.make_dir(config['p_build'] + lang + '/' + \
page['f_directory'])
file_full_path = config['p_build'] + lang + '/' + \
                page['f_directory'] + page['f_name'] + page['f_endtype']
with open(file_full_path, 'w') as target_file:
target_file.write(template.render(content))
logging.info('Page ' + index + ' created.') |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def configure(self, *, hwm: int=None, rcvtimeo: int=None, sndtimeo: int=None, linger: int=None) -> 'Socket': """ Configures some common socket options, returning the socket itself to allow method chaining """ |
if hwm is not None:
self.set_hwm(hwm)
if rcvtimeo is not None:
self.setsockopt(zmq.RCVTIMEO, rcvtimeo)
if sndtimeo is not None:
self.setsockopt(zmq.SNDTIMEO, sndtimeo)
if linger is not None:
self.setsockopt(zmq.LINGER, linger)
return self |
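The method-chaining idiom `configure` relies on, shown without a real zmq socket: each call mutates state and returns `self`, so calls compose in one expression. `ChainableConfig` is a toy stand-in, not part of the library.

```python
class ChainableConfig:
    # Collected options live in a dict instead of real socket options.
    def __init__(self):
        self.options = {}

    def configure(self, *, hwm=None, rcvtimeo=None, sndtimeo=None, linger=None):
        for name, value in (('hwm', hwm), ('rcvtimeo', rcvtimeo),
                            ('sndtimeo', sndtimeo), ('linger', linger)):
            if value is not None:
                self.options[name] = value
        return self  # returning self is what makes chaining possible
```

For example, `ChainableConfig().configure(hwm=1000).configure(linger=0)` sets both options on the same object.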
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
""" Waits for the next multipart message and asserts that it contains the given data. """ |
expect_all(await self.recv_multipart(), data) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def generate_payment(redirect_url, client_name, client_email, payment_owner, validate_address_cmd, *validate_item_cmds):
""" Function used to generate a payment on pagseguro See facade_tests to undestand the steps before generating a payment @param redirect_url: the url where payment status change must be sent @param client_name: client's name @param client_email: client's email @param payment_owner: owner of payment. Her payments can be listed with search_payments function @param validate_address_cmd: cmd generated with validate_address_cmd function @param validate_item_cmds: list of cmds generated with validate_item_cmd function @return: A command that generate the payment when executed """ |
return GeneratePayment(redirect_url, client_name, client_email, payment_owner, validate_address_cmd,
*validate_item_cmds) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def validate_address_cmd(street, number, quarter, postalcode, town, state, complement="Sem Complemento"):
""" Build an address form to be used with payment function """ |
return ValidateAddressCmd(street=street, number=number, quarter=quarter, postalcode=postalcode, town=town,
state=state, complement=complement) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def validate_item_cmd(description, price, quantity, reference=None):
""" Create a commando to save items from the order. A list of items or commands must be created to save a order @param description: Item's description @param price: Item's price @param quantity: Item's quantity @param reference: a product reference for the item. Must be a Node @return: A Command that validate and save a item """ |
return ValidateItemCmd(description=description, price=price, quantity=quantity, reference=reference) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def search_all_payments(payment_status=None, page_size=20, start_cursor=None, offset=0, use_cache=True, cache_begin=True, relations=None):
""" Returns a command to search all payments ordered by creation desc @param payment_status: The payment status. If None is going to return results independent from status @param page_size: number of payments per page @param start_cursor: cursor to continue the search @param offset: offset number of payment on search @param use_cache: indicates with should use cache or not for results @param cache_begin: indicates with should use cache on beginning or not for results @param relations: list of relations to bring with payment objects. possible values on list: logs, pay_items, owner @return: Returns a command to search all payments ordered by creation desc """ |
if payment_status:
return PaymentsByStatusSearch(payment_status, page_size, start_cursor, offset, use_cache,
cache_begin, relations)
return AllPaymentsSearch(page_size, start_cursor, offset, use_cache, cache_begin, relations) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_config(name=__name__):
""" Get a configuration parser for a given TAPP name. Reads config.ini files only, not in-database configuration records. :param name: The tapp name to get a configuration for. :rtype: ConfigParser :return: A config parser matching the given name """ |
cfg = ConfigParser()
path = os.environ.get('%s_CONFIG_FILE' % name.upper())
if path is None or path == "":
fname = '/etc/tapp/%s.ini' % name
if isfile(fname):
path = fname
elif isfile('cfg.ini'):
path = 'cfg.ini'
else:
raise ValueError("Unable to get configuration for tapp %s" % name)
cfg.read(path)
return cfg |
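The lookup order `get_config` implements, isolated into a path resolver with injectable `environ` and `isfile` so it can be exercised without touching `/etc`. The `/etc/tapp` path mirrors the original; the resolver itself is an illustrative refactor, not part of the codebase.

```python
import os

# Resolution order: NAME_CONFIG_FILE env var, then /etc/tapp/<name>.ini,
# then a local cfg.ini; an empty env value counts as unset, as in get_config.
def resolve_config_path(name, environ=os.environ, isfile=os.path.isfile):
    path = environ.get('%s_CONFIG_FILE' % name.upper())
    if path:
        return path
    fname = '/etc/tapp/%s.ini' % name
    if isfile(fname):
        return fname
    if isfile('cfg.ini'):
        return 'cfg.ini'
    raise ValueError("Unable to get configuration for tapp %s" % name)
```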
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def setup_logging(name, prefix="trademanager", cfg=None):
""" Create a logger, based on the given configuration. Accepts LOGFILE and LOGLEVEL settings. :param name: the name of the tapp to log :param cfg: The configuration object with logging info. :return: The session and the engine as a list (in that order) """ |
logname = "/var/log/%s/%s_tapp.log" % (prefix, name)
logfile = cfg.get('log', 'LOGFILE') if cfg is not None and \
cfg.get('log', 'LOGFILE') is not None and cfg.get('log', 'LOGFILE') != "" else logname
loglevel = cfg.get('log', 'LOGLEVEL') if cfg is not None and \
cfg.get('log', 'LOGLEVEL') is not None else logging.INFO
logging.basicConfig(filename=logfile, level=loglevel)
return logging.getLogger(name) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def destandardize_variables(self, tv, blin, bvar, errBeta, nonmissing):
"""Destandardize betas and other components.""" |
return self.test_variables.destandardize(tv, blin, bvar, errBeta, nonmissing) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def freeze_subjects(self):
"""Converts variable data into numpy arrays. This is required after all subjects have been added via the add_subject function, since we don't know ahead of time who is participating in the analysis due to various filtering possibilities. """ |
self.phenotype_data = numpy.array(self.phenotype_data)
self.covariate_data = numpy.array(self.covariate_data) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_subject(self, ind_id, sex=None, phenotype=None):
"""Add new subject to study, with optional sex and phenotype Throws MalformedInputFile if sex is can't be converted to int """ |
self.pedigree_data[ind_id] = len(self.phenotype_data[0])
        if phenotype is not None:
if type(self.phenotype_data) is list:
self.phenotype_data[0].append(phenotype)
else:
self.phenotype_data[-1, len(self.individual_mask)] = phenotype
self.individual_mask.append(0)
if PhenoCovar.sex_as_covariate:
try:
self.covariate_data[0].append(float(sex))
            except Exception:
raise MalformedInputFile("Invalid setting, %s, for sex in pedigree" % (sex)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def format_time(date_obj, time_obj=None, datebox=False, dt_type=None, classes=None):
""" Returns formatted HTML5 elements based on given datetime object. By default returns a time element, but will return a .datebox if requested. dt_type allows passing dt_start or dt_end for hcal formatting. link allows passing a url to the datebox. classes allows sending arbitrary classnames. Useful for properly microformatting elements. Usage:: {% format_time obj.pub_date %} {% format_time obj.start_date 'datebox' 'dtstart' %} {% format_time obj.end_date obj.end_time 'datebox' 'dt_end' %} """ |
if not time_obj:
time_obj = getattr(date_obj, 'time', None)
if dt_type:
classes = '{0} {1}'.format(classes, dt_type)
if datebox:
classes = '{0} {1}'.format(classes, datebox)
return {
'date_obj': date_obj,
'time_obj': time_obj,
'datebox': datebox,
'current_year': datetime.date.today().year,
'classes': classes
} |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def short_timesince(date):
""" A shorter version of Django's built-in timesince filter. Selects only the first part of the returned string, splitting on the comma. Falls back on default Django timesince if it fails. Example: 3 days, 20 hours becomes "3 days". """ |
try:
t = timesince(date).split(", ")[0]
except IndexError:
t = timesince(date)
return t |
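The filter reduces to a single split. Note that `str.split` always returns at least one element, so the `IndexError` fallback in `short_timesince` is purely defensive; both the multi-unit and single-unit cases flow through the same expression:

```python
# Keep only the most significant unit from a Django timesince string,
# e.g. "3 days, 20 hours" -> "3 days".
def first_timesince_part(text):
    return text.split(", ")[0]
```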
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def validate_values(func, dic):
""" Validate each value in ``dic`` by passing it through ``func``. Raise a ``ValueError`` if ``func`` does not return ``True``. """ |
for value_name, value in dic.items():
if not func(value):
raise ValueError('{} can not be {}'.format(value_name, value)) |
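`validate_values` in use: the predicate runs over every value, and the first failure names the offending key in the error message. The function is repeated here verbatim so the example is self-contained.

```python
def validate_values(func, dic):
    for value_name, value in dic.items():
        if not func(value):
            raise ValueError('{} can not be {}'.format(value_name, value))

# e.g. rejecting non-positive dimensions:
validate_values(lambda v: v > 0, {'width': 3, 'height': 2})  # all valid, no error
```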
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def find_input_documents(self):
"""Find all tex documents input by this root document. Returns ------- paths : list List of filepaths for input documents. Paths are relative to the document (i.e., as written in the latex document). """ |
paths = []
itr = chain(texutils.input_pattern.finditer(self.text),
texutils.input_ifexists_pattern.finditer(self.text))
for match in itr:
fname = match.group(1)
if not fname.endswith('.tex'):
full_fname = ".".join((fname, 'tex'))
else:
full_fname = fname
paths.append(full_fname)
return paths |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def sections(self):
"""List with tuples of section names and positions. Positions of section names are measured by cumulative word count. """ |
sections = []
for match in texutils.section_pattern.finditer(self.text):
textbefore = self.text[0:match.start()]
wordsbefore = nlputils.wordify(textbefore)
numwordsbefore = len(wordsbefore)
sections.append((numwordsbefore, match.group(1)))
self._sections = sections
return sections |
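The position bookkeeping re-created with a plain regex and whitespace splitting. The `\section` pattern and the word counting are assumptions about what `texutils.section_pattern` and `nlputils.wordify` do, so treat this as a sketch of the cumulative-count idea rather than the library's exact behavior.

```python
import re

SECTION = re.compile(r'\\section\{([^}]*)\}')

# Record how many whitespace-separated words precede each \section{...}.
def section_positions(text):
    positions = []
    for match in SECTION.finditer(text):
        words_before = len(text[:match.start()].split())
        positions.append((words_before, match.group(1)))
    return positions
```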
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def bib_path(self):
"""Absolute file path to the .bib bibliography document.""" |
bib_name = self.bib_name
# FIXME need to bake in search paths for tex documents in all platforms
osx_path = os.path.expanduser(
"~/Library/texmf/bibtex/bib/{0}".format(bib_name))
if self._file_exists(bib_name):
return bib_name # bib is in project directory
elif os.path.exists(osx_path):
return osx_path
else:
return None |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def write(self, path):
"""Write the document's text to a ``path`` on the filesystem.""" |
with codecs.open(path, 'w', encoding='utf-8') as f:
f.write(self.text) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def bibitems(self):
"""List of bibitem strings appearing in the document.""" |
bibitems = []
lines = self.text.split('\n')
for i, line in enumerate(lines):
if line.lstrip().startswith(u'\\bibitem'):
# accept this line
# check if next line is also part of bibitem
# FIXME ugh, re-write
j = 1
while True:
try:
if (lines[i + j].startswith(u'\\bibitem') is False) \
and (lines[i + j] != '\n'):
line += lines[i + j]
elif "\end{document}" in lines[i + j]:
break
else:
break
except IndexError:
break
                    else:
                        j += 1
bibitems.append(line)
return bibitems |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def inline_inputs(self):
"""Inline all input latex files references by this document. The inlining is accomplished recursively. The document is modified in place. """ |
self.text = texutils.inline(self.text,
os.path.dirname(self._filepath))
# Remove children
self._children = {} |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def clearLayout(layout):
"""Removes all widgets in the layout. Useful when opening a new file, want to clear everything.""" |
while layout.count():
child = layout.takeAt(0)
child.widget().deleteLater() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_dataset(request, dataset_id=None):
"""Handles creation of Dataset models from form POST information. Data Dictionary entry. If dataset_id is passed as an argument, this function edits a given dataset rather than adding a dataset. Otherwise, a new model is saved to the database. Adds article URLs and scrapes page for headline, saves tags to DB. Returns the same page if validation fails, otherwise returns a redirect to data dictionary creation/edit. """ |
# Save form to create dataset model
# Populate non-form fields
if dataset_id:
dataset_instance = Dataset.objects.get(pk=dataset_id)
metadata_form = DatasetUploadForm(
request.POST,
request.FILES,
instance=dataset_instance
)
# metadata_form.base_fields['dataset_file'].required = False
else:
metadata_form = DatasetUploadForm(
request.POST,
request.FILES
)
if metadata_form.is_valid():
dataset_metadata = metadata_form.save(commit=False)
dataset_metadata.uploaded_by = request.user.email
dataset_metadata.slug = slugify(dataset_metadata.title)
# Find vertical from hub
dataset_metadata.vertical_slug = metadata_form.get_vertical_from_hub(
dataset_metadata.hub_slug
)
dataset_metadata.source_slug = slugify(dataset_metadata.source)
# Save to database so that we can add Articles,
# DataDictionaries, other foreignkeyed/M2M'd models.
dataset_metadata.save()
# Create relationships
url_list = metadata_form.cleaned_data['appears_in'].split(', ')
tag_list = metadata_form.cleaned_data['tags'].split(', ')
# print(tag_list)
dictionary = DataDictionary()
dictionary.author = request.user.email
dictionary.save()
dataset_metadata.data_dictionary = dictionary
dataset_metadata.save()
for url in url_list:
url = url.strip()
if len(url) > 0:
article, created = Article.objects.get_or_create(url=url)
if created:
article_req = requests.get(url)
if article_req.status_code == 200:
# We good. Get the HTML.
page = article_req.content
soup = BeautifulSoup(page, 'html.parser')
# Looking for <meta ... property="og:title">
meta_title_tag = soup.find(
'meta',
attrs={'property': 'og:title'}
)
try:
# print "Trying og:title..."
# print meta_title_tag
title = meta_title_tag['content']
except (TypeError, KeyError):
# TypeError implies meta_title_tag is None;
# KeyError implies that meta_title_tag does not
# have a content property.
title_tag = soup.find('title')
try:
# print "Falling back to title..."
# print title_tag
title = title_tag.text
except (TypeError, KeyError):
description_tag = soup.find(
'meta',
attrs={'property': 'og:description'}
)
try:
# print "Falling back to description..."
# print description_tag
title = description_tag['content']
# Fallback value. Display is handled in models.
except (TypeError, KeyError):
title = None
article.title = title
article.save()
dataset_metadata.appears_in.add(article)
for tag in tag_list:
if tag:
cleanTag = tag.strip().lower()
tagToAdd, created = Tag.objects.get_or_create(
slug=slugify(cleanTag),
defaults={'word': cleanTag}
)
dataset_metadata.tags.add(tagToAdd)
return redirect(
'datafreezer_datadict_edit',
dataset_id=dataset_metadata.id
)
return render(
request,
'datafreezer/upload.html',
{
'fileUploadForm': metadata_form,
}
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def parse_csv_headers(dataset_id):
"""Return the first row of a CSV as a list of headers.""" |
data = Dataset.objects.get(pk=dataset_id)
with open(data.dataset_file.path, 'r') as datasetFile:
csvReader = reader(datasetFile, delimiter=',', quotechar='"')
headers = next(csvReader)
# print headers
return headers |
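The header-parsing step in isolation: pull only the first row out of CSV text, with quoting honored so embedded commas stay inside one header cell. This version reads from a string instead of the model's file path.

```python
import io
from csv import reader

# Same reader settings as parse_csv_headers; next() consumes only row one.
def first_row(csv_text):
    return next(reader(io.StringIO(csv_text), delimiter=',', quotechar='"'))
```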
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def grab_names_from_emails(email_list):
"""Return a dictionary mapping names to email addresses. Only gives a response if the email is found in the staff API/JSON. Expects an API of the format = [ { 'email': 'foo@bar.net', 'fullName': 'Frank Oo' }, ] """ |
all_staff = STAFF_LIST
emails_names = {}
for email in email_list:
for person in all_staff:
if email == person['email'] and email not in emails_names:
emails_names[email] = person['fullName']
# print emails_names[email]
for email in email_list:
matched = False
for assignment in emails_names:
if email == assignment:
matched = True
if not matched:
emails_names[email] = email
return emails_names |
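The matching logic separated from the global `STAFF_LIST` so it can be exercised with any roster; the fallback keeps the raw address when no staff entry matches, which is what the second loop above accomplishes.

```python
# staff is a list of {'email': ..., 'fullName': ...} dicts, matching the
# documented API shape.
def names_from_emails(email_list, staff):
    emails_names = {}
    for email in email_list:
        for person in staff:
            if email == person['email'] and email not in emails_names:
                emails_names[email] = person['fullName']
    for email in email_list:
        # unmatched addresses map to themselves
        emails_names.setdefault(email, email)
    return emails_names
```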
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def tag_lookup(request):
"""JSON endpoint that returns a list of potential tags. Used for upload template autocomplete. """ |
tag = request.GET['tag']
tagSlug = slugify(tag.strip())
tagCandidates = Tag.objects.values('word').filter(slug__startswith=tagSlug)
tags = json.dumps([candidate['word'] for candidate in tagCandidates])
return HttpResponse(tags, content_type='application/json') |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def source_lookup(request):
"""JSON endpoint that returns a list of potential sources. Used for upload template autocomplete. """ |
source = request.GET['source']
source_slug = slugify(source.strip())
source_candidates = Dataset.objects.values('source').filter(
source_slug__startswith=source_slug
)
sources = json.dumps([cand['source'] for cand in source_candidates])
return HttpResponse(sources, content_type='application/json') |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def download_data_dictionary(request, dataset_id):
"""Generates and returns compiled data dictionary from database. Returned as a CSV response. """ |
dataset = Dataset.objects.get(pk=dataset_id)
dataDict = dataset.data_dictionary
fields = DataDictionaryField.objects.filter(
parent_dict=dataDict
).order_by('columnIndex')
response = HttpResponse(content_type='text/csv')
csvName = slugify(dataset.title + ' data dict') + '.csv'
response['Content-Disposition'] = 'attachment; filename=%s' % (csvName)
csvWriter = writer(response)
metaHeader = [
'Data Dictionary for {0} prepared by {1}'.format(
dataset.title,
dataset.uploaded_by
)
]
csvWriter.writerow(metaHeader)
trueHeader = ['Column Index', 'Heading', 'Description', 'Data Type']
csvWriter.writerow(trueHeader)
for field in fields:
mappedIndex = field.COLUMN_INDEX_CHOICES[field.columnIndex-1][1]
csvWriter.writerow(
[mappedIndex, field.heading, field.description, field.dataType]
)
return response |
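The CSV assembly from the view, writing to an in-memory buffer instead of an `HttpResponse`; the meta and header rows follow the original layout, while the field rows are passed in pre-mapped so Django models stay out of the sketch.

```python
import io
from csv import writer

def build_data_dict_csv(title, uploaded_by, rows):
    buf = io.StringIO()
    w = writer(buf)
    # One-cell meta header, then the true column header, then field rows.
    w.writerow(['Data Dictionary for {0} prepared by {1}'.format(title, uploaded_by)])
    w.writerow(['Column Index', 'Heading', 'Description', 'Data Type'])
    for row in rows:
        w.writerow(row)
    return buf.getvalue()
```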
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def home(request):
"""Renders Datafreezer homepage. Includes recent uploads.""" |
recent_uploads = Dataset.objects.order_by('-date_uploaded')[:11]
email_list = [upload.uploaded_by.strip() for upload in recent_uploads]
# print all_staff
emails_names = grab_names_from_emails(email_list)
# print emails_names
for upload in recent_uploads:
for item in emails_names:
if upload.uploaded_by == item:
upload.fullName = emails_names[item]
for upload in recent_uploads:
if not hasattr(upload, 'fullName'):
upload.fullName = upload.uploaded_by
return render(
request,
'datafreezer/home.html',
{
'recent_uploads': recent_uploads,
'heading': 'Most Recent Uploads'
}
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def edit_dataset_metadata(request, dataset_id=None):
"""Renders a template to upload or edit a Dataset. """ |
if request.method == 'POST':
return add_dataset(request, dataset_id)
elif request.method == 'GET':
# create a blank form
# Edit
if dataset_id:
metadata_form = DatasetUploadForm(
instance=get_object_or_404(Dataset, pk=dataset_id)
)
# Upload
else:
metadata_form = DatasetUploadForm()
return render(
request,
'datafreezer/upload.html',
{
'fileUploadForm': metadata_form,
}
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def dataset_detail(request, dataset_id):
"""Renders individual dataset detail page.""" |
active_dataset = get_object_or_404(Dataset, pk=dataset_id)
datadict_id = active_dataset.data_dictionary_id
datadict = DataDictionaryField.objects.filter(
parent_dict=datadict_id
).order_by('columnIndex')
uploader_name = grab_names_from_emails([active_dataset.uploaded_by])
tags = Tag.objects.filter(dataset=dataset_id)
articles = Article.objects.filter(dataset=dataset_id)
for hub in HUBS_LIST:
if hub['slug'] == active_dataset.hub_slug:
active_dataset.hub = hub['name']
active_dataset.vertical = hub['vertical']['name']
if len(uploader_name) == 0:
uploader_name = active_dataset.uploaded_by
else:
uploader_name = uploader_name[active_dataset.uploaded_by]
return render(
request,
'datafreezer/dataset_details.html',
{
'dataset': active_dataset,
'datadict': datadict,
'uploader_name': uploader_name,
'tags': tags,
'articles': articles,
}
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get(self, request):
"""Handle HTTP GET request. Returns template and context from generate_page_title and generate_sections to populate template. """ |
sections_list = self.generate_sections()
p = Paginator(sections_list, 25)
page = request.GET.get('page')
try:
sections = p.page(page)
except PageNotAnInteger:
# If page is not an integer, deliver first page.
sections = p.page(1)
except EmptyPage:
# If page is out of range (e.g. 9999), return last page of results.
sections = p.page(p.num_pages)
context = {
'sections': sections,
'page_title': self.generate_page_title(),
'browse_type': self.browse_type
}
return render(
request,
self.template_path,
context
) |
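The `PageNotAnInteger`/`EmptyPage` fallbacks express a simple clamp: non-numeric input falls back to page 1, and out-of-range input to the nearest valid page. Outside Django the same logic can be written directly:

```python
# Coerce the requested page to an int (default 1) and clamp into [1, num_pages].
def clamp_page(page, num_pages):
    try:
        page = int(page)
    except (TypeError, ValueError):
        page = 1
    return max(1, min(page, num_pages))
```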
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get(self, request):
"""View for HTTP GET method. Returns template and context from generate_page_title and generate_sections to populate template. """ |
sections = self.generate_sections()
if self.paginated:
p = Paginator(sections, 25)
page = request.GET.get('page')
try:
sections = p.page(page)
except PageNotAnInteger:
# If page is not an integer, deliver first page.
sections = p.page(1)
except EmptyPage:
# If page is out of range (e.g. 9999), return last page
# of results.
sections = p.page(p.num_pages)
pageUpper = int(p.num_pages) / 2
try:
pageLower = int(page) / 2
except TypeError:
pageLower = -999
else:
pageUpper = None
pageLower = None
context = {
'sections': sections,
'page_title': self.generate_page_title(),
'browse_type': self.browse_type,
'pageUpper': pageUpper,
'pageLower': pageLower
}
return render(
request,
self.template_path,
context
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def generate_sections(self):
"""Return all hubs, slugs, and upload counts.""" |
datasets = Dataset.objects.values(
'hub_slug'
).annotate(
upload_count=Count(
'hub_slug'
)
).order_by('-upload_count')
return [
{
'count': dataset['upload_count'],
'name': get_hub_name_from_slug(dataset['hub_slug']),
'slug': dataset['hub_slug']
}
for dataset in datasets
] |
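What the `values()`/`annotate(Count(...))`/`order_by` chain computes, re-done in plain Python over a list of hub slugs; the database does the grouping in the real view, but the result shape is the same descending count order.

```python
from collections import Counter

# most_common() returns (slug, count) pairs sorted by count descending.
def upload_counts(hub_slugs):
    return Counter(hub_slugs).most_common()
```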
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def generate_sections(self):
"""Return all authors to which datasets have been attributed.""" |
authors = Dataset.objects.values(
'uploaded_by'
).annotate(
upload_count=Count(
'uploaded_by'
)
).order_by('-upload_count')
email_name_map = grab_names_from_emails(
[row['uploaded_by'] for row in authors]
)
for author in authors:
for emailKey in email_name_map:
if author['uploaded_by'] == emailKey:
author['name'] = email_name_map[emailKey]
return [
{
'slug': author['uploaded_by'],
'name': author['name'],
'count': author['upload_count']
}
for author in authors
] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def generate_sections(self):
"""Return dictionary of source section slugs, name, and counts.""" |
sources = Dataset.objects.values(
'source', 'source_slug'
).annotate(source_count=Count('source_slug'))
return sorted([
{
'slug': source['source_slug'],
'name': source['source'],
'count': source['source_count']
}
for source in sources
], key=lambda k: k['count'], reverse=True) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get(self, request, slug):
"""Basic functionality for GET request to view. """ |
matching_datasets = self.generate_matching_datasets(slug)
if matching_datasets is None:
raise Http404("Datasets meeting these criteria do not exist.")
base_context = {
'datasets': matching_datasets,
'num_datasets': matching_datasets.count(),
'page_title': self.generate_page_title(slug),
}
additional_context = self.generate_additional_context(
matching_datasets
)
base_context.update(additional_context)
context = base_context
return render(
request,
self.template_path,
context
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def generate_additional_context(self, matching_datasets):
"""Return additional information about matching datasets. Includes upload counts, related hubs, related tags. """ |
dataset_ids = [upload.id for upload in matching_datasets]
tags = Tag.objects.filter(
dataset__in=dataset_ids
).distinct().annotate(
Count('word')
).order_by('-word__count')[:5]
hubs = matching_datasets.values("hub_slug").annotate(
Count('hub_slug')
).order_by('-hub_slug__count')
if hubs:
most_used_hub = get_hub_name_from_slug(hubs[0]['hub_slug'])
hub_slug = hubs[0]['hub_slug']
else:
most_used_hub = None
hub_slug = None
return {
'tags': tags,
'hub': most_used_hub,
'hub_slug': hub_slug,
} |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def generate_additional_context(self, matching_datasets):
"""Return context about matching datasets.""" |
related_tags = Tag.objects.filter(
dataset__in=matching_datasets
).distinct().annotate(
Count('word')
).order_by('-word__count')[:5]
return {
'related_tags': related_tags
} |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def generate_matching_datasets(self, data_slug):
"""Return datasets that belong to a vertical by querying hubs. """ |
matching_hubs = VERTICAL_HUB_MAP[data_slug]['hubs']
return Dataset.objects.filter(hub_slug__in=matching_hubs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def generate_additional_context(self, matching_datasets):
"""Generate top tags and authors for a given Vertical.""" |
top_tags = Tag.objects.filter(
dataset__in=matching_datasets
).annotate(
tag_count=Count('word')
).order_by('-tag_count')[:3]
top_authors = Dataset.objects.filter(
id__in=matching_datasets
).values('uploaded_by').annotate(
author_count=Count('uploaded_by')
).order_by('-author_count')[:3]
for author in top_authors:
author['fullName'] = grab_names_from_emails([
author['uploaded_by']
])[author['uploaded_by']]
return {
'top_tags': top_tags,
'top_authors': top_authors
} |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def generate_additional_context(self, matching_datasets):
"""Return top tags for a source.""" |
top_tags = Tag.objects.filter(
dataset__in=matching_datasets
).annotate(
tag_count=Count('word')
).order_by('-tag_count')[:3]
return {
'top_tags': top_tags
} |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_fact_by_id(self, fact_id):
""" Obtains fact data by it's id. As the fact is unique, it returns a tuple like: (activity_id, start_time, end_time, description). If there is no fact with id == fact_id, a NoHamsterData exception will be raise """ |
columns = 'activity_id, start_time, end_time, description'
query = "SELECT %s FROM facts WHERE id = %s"
result = self._query(query % (columns, fact_id))
if result:
            return result[0] # there is only one fact with the id
else:
raise NoHamsterData('facts', fact_id) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_activity_by_id(self, activity_id):
""" Obtains activity data by it's id. As the activity is unique, it returns a tuple like: (name, category_id). If there is no activity with id == activity_id, a NoHamsterData exception will be raise """ |
columns = 'name, category_id'
query = "SELECT %s FROM activities WHERE id = %s"
result = self._query(query % (columns, activity_id))
if result:
return result[0] # there is only one activity with this id
else:
raise NoHamsterData('activities', activity_id) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_tags_by_fact_id(self, fact_id):
""" Obtains the tags associated by a fact_id. This function returns a list of the tags name associated to a fact, such as ['foo', 'bar', 'eggs']. If the fact has no tags, it will return a empty list. If there are no fact with id == fact_id, a NoHamsterData exception will be raise """ |
if not fact_id in self.all_facts_id:
raise NoHamsterData('facts', fact_id)
query = "SELECT tag_id FROM fact_tags WHERE fact_id = %s"
return [self.tags[row[0]] for row in self._query(query % fact_id)] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def log_user_in(app_id, token, ticket, response, cookie_name='user',
url_detail='https://pswdless.appspot.com/rest/detail'):
'''
Log the user in, setting the user data dictionary in a cookie.
Returns a command that execute the logic
'''
return LogUserIn(app_id, token, ticket, response, cookie_name, url_detail) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def fetch_user(app_id, token, ticket, url_detail='https://pswdless.appspot.com/rest/detail'):
'''
Fetch the user detail from Passwordless
'''
return FetchUserWithValidation(app_id, token, ticket, url_detail) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def send_login_email(app_id, token, hook, email=None, user_id=None, lang="en_US",
url_login='https://pswdless.appspot.com/rest/login'):
'''
Contact the Passwordless server to send the user an email containing the login link
'''
return SendLoginEmail(app_id, token, hook, email, user_id, lang, url_login) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def history_file(self, location=None):
"""Return history file location. """ |
if location:
# Hardcoded location passed from the config file.
if os.path.exists(location):
return location
else:
logger.warning("The specified history file %s doesn't exist",
location)
filenames = []
for base in ['CHANGES', 'HISTORY', 'CHANGELOG']:
filenames.append(base)
for extension in ['rst', 'txt', 'markdown']:
filenames.append('.'.join([base, extension]))
history = self.filefind(filenames)
if history:
return history |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def tag_exists(self, version):
"""Check if a tag has already been created with the name of the version. """ |
for tag in self.available_tags():
if tag == version:
return True
return False |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _update_version(self, version):
"""Find out where to change the version and change it. There are three places where the version can be defined. The first one is an explicitly defined Python file with a ``__version__`` attribute. The second one is some version.txt that gets read by setup.py. The third is directly in setup.py. """ |
if self.get_python_file_version():
setup_cfg = pypi.SetupConfig()
filename = setup_cfg.python_file_with_version()
lines = open(filename).read().split('\n')
for index, line in enumerate(lines):
match = UNDERSCORED_VERSION_PATTERN.search(line)
if match:
lines[index] = "__version__ = '%s'" % version
contents = '\n'.join(lines)
open(filename, 'w').write(contents)
logger.info("Set __version__ in %s to %r", filename, version)
return
versionfile = self.filefind(['version.txt', 'version'])
if versionfile:
# We have a version.txt file but does it match the setup.py
# version (if any)?
setup_version = self.get_setup_py_version()
if not setup_version or (setup_version ==
self.get_version_txt_version()):
open(versionfile, 'w').write(version + '\n')
logger.info("Changed %s to %r", versionfile, version)
return
good_version = "version = '%s'" % version
line_number = 0
setup_lines = open('setup.py').read().split('\n')
for line in setup_lines:
match = VERSION_PATTERN.search(line)
if match:
logger.debug("Matching version line found: %r", line)
if line.startswith(' '):
# oh, probably ' version = 1.0,' line.
indentation = line.split('version')[0]
# Note: no spaces around the '='.
good_version = indentation + "version='%s'," % version
setup_lines[line_number] = good_version
break
line_number += 1
contents = '\n'.join(setup_lines)
open('setup.py', 'w').write(contents)
logger.info("Set setup.py's version to %r", version) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def terminal_size():
""" Detect the current size of terminal window as a numer of rows and columns. """ |
try:
(rows, columns) = os.popen('stty size', 'r').read().split()
rows = int(rows)
columns = int(columns)
return (columns, rows)
except Exception:
# Ignore any errors and return reasonable default values. Errors may
# occur when the library is used in a non-terminal application, such
# as a daemon.
pass
return (80, 24) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_setup(self, content = None, **kwargs):
""" Get current setup by combining internal settings with the ones given. """ |
if content is None:
content = self.content
if content is None:
raise Exception("No content given to be displayed")
setup = {}
for setting in self.list_settings():
if setting[0] in kwargs:
setup[setting[0]] = kwargs.get(setting[0])
else:
setup[setting[0]] = self._settings.get(setting[0])
return (content, setup) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _es_data(settings):
""" Extract data formating related subset of widget settings. """ |
return {k: settings[k] for k in (ConsoleWidget.SETTING_DATA_FORMATING,
ConsoleWidget.SETTING_DATA_TYPE)} |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _es_content(settings):
""" Extract content formating related subset of widget settings. """ |
return {k: settings[k] for k in (ConsoleWidget.SETTING_WIDTH,
ConsoleWidget.SETTING_ALIGN,
ConsoleWidget.SETTING_PADDING,
ConsoleWidget.SETTING_PADDING_LEFT,
ConsoleWidget.SETTING_PADDING_RIGHT,
ConsoleWidget.SETTING_PADDING_CHAR)} |