<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def length(
cls, request,
vector: (Ptypes.body,
Vector('The vector to analyse.'))) -> [
(200, 'Ok', Float),
(400, 'Wrong vector format')]:
'''Return the modulus (length) of a vector.'''
log.info('Computing the length of vector {}'.format(vector))
try:
return Respond(200, sqrt(vector['x'] ** 2 +
vector['y'] ** 2 +
vector.get('z', 0) ** 2))
except ValueError:
return Respond(400) |
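The handler above depends on the framework's `Respond`, `Ptypes`, and `Vector` objects. Stripped of that machinery, the length computation itself can be sketched as a plain function (the name `vector_length` is illustrative, not from the source):

```python
from math import sqrt

def vector_length(vector):
    """Euclidean length of a 2D or 3D vector given as a dict."""
    # A missing 'z' component defaults to 0, so 2D vectors work unchanged.
    return sqrt(vector['x'] ** 2 +
                vector['y'] ** 2 +
                vector.get('z', 0) ** 2)

length_2d = vector_length({'x': 3, 'y': 4})          # 5.0
length_3d = vector_length({'x': 1, 'y': 2, 'z': 2})  # 3.0
```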
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def place(slot_name, dttime):
""" Set a timer to be published at the specified minute. """ |
dttime = datetime.strptime(dttime, '%Y-%m-%d %H:%M:%S')
dttime = dttime.replace(second=0, microsecond=0)
try:
area.context['timers'][dttime].add(slot_name)
except KeyError:
area.context['timers'][dttime] = {slot_name}
area.publish({'status': 'placed'}, slot=slot_name) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def has_table(table_name):
"""Return True if table exists, False otherwise.""" |
return db.engine.dialect.has_table(
db.engine.connect(),
table_name
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_migration_ctx(**kwargs):
"""Create an alembic migration context.""" |
env = EnvironmentContext(Config(), None)
env.configure(
connection=db.engine.connect(),
sqlalchemy_module_prefix='db.',
**kwargs
)
return env.get_context() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_operations(ctx=None, **kwargs):
"""Create an alembic operations object.""" |
if ctx is None:
ctx = create_migration_ctx(**kwargs)
operations = Operations(ctx)
operations.has_table = has_table
return operations |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def produce_upgrade_operations( ctx=None, metadata=None, include_symbol=None, include_object=None, **kwargs):
"""Produce a list of upgrade statements.""" |
if metadata is None:
# Note, all SQLAlchemy models must have been loaded to produce
# accurate results.
metadata = db.metadata
if ctx is None:
ctx = create_migration_ctx(target_metadata=metadata, **kwargs)
template_args = {}
imports = set()
_produce_migration_diffs(
ctx, template_args, imports,
include_object=include_object,
include_symbol=include_symbol,
**kwargs
)
return template_args |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def handleArgs(self, event):
"""Nose2 hook for the handling the command line args""" |
# settings resolution order:
# command line > cfg file > environ
if self.djsettings:
os.environ['DJANGO_SETTINGS_MODULE'] = self.djsettings
if self.djconfig:
os.environ['DJANGO_CONFIGURATION'] = self.djconfig
# test for django-configurations package
try:
from configurations import importer
importer.install()
except ImportError:
pass
from django.conf import settings
try:
from south.management.commands import patch_for_test_db_setup
patch_for_test_db_setup()
except ImportError:
pass |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def check_signature(signature, private_key, full_path, payload):
""" Checks signature received and verifies that we are able to re-create it from the private key, path, and payload given. :param signature: Signature received from request. :param private_key: Base 64, url encoded private key. :full_path: Full path of request, including GET query string (excluding host) :payload: The request.POST data if present. None if not. :returns: Boolean of whether signature matched or not. """ |
if isinstance(private_key, bytes):
private_key = private_key.decode("ascii")
if isinstance(payload, bytes):
payload = payload.decode()
url_to_check = _strip_signature_from_url(signature, full_path)
computed_signature = apysigner.get_signature(private_key, url_to_check, payload)
return constant_time_compare(signature, computed_signature) |
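The `constant_time_compare` call above serves the same purpose as the standard library's `hmac.compare_digest`: comparing signatures without leaking, via timing, how many leading characters match. A minimal sketch:

```python
import hmac

def signatures_match(expected, received):
    # compare_digest runs in time independent of where the strings differ,
    # defeating timing attacks that probe one character at a time
    return hmac.compare_digest(expected, received)

ok = signatures_match('abc123', 'abc123')   # True
bad = signatures_match('abc123', 'abc124')  # False
```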
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _parse_param(key):
""" Parse the query param looking for filters Determine the field to filter on & the operator to be used when filtering. :param key: The query parameter to the left of the equal sign :return: tuple of string field name & string operator """ |
regex = re.compile(r'filter\[([A-Za-z0-9_./]+)\]')
match = regex.match(key)
if match:
field_and_oper = match.groups()[0].split('__')
if len(field_and_oper) == 1:
return field_and_oper[0], 'eq'
elif len(field_and_oper) == 2:
return tuple(field_and_oper)
else:
raise InvalidQueryParams(**{
'detail': 'The filter query param of "%s" is not '
'supported. Multiple filter operators are '
'not allowed in a single expression.' % key,
'links': LINK,
'parameter': PARAM,
}) |
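The parsing above can be exercised in isolation. This sketch reuses the same regex and split logic, substituting a plain ValueError for the framework's `InvalidQueryParams`:

```python
import re

# Same pattern as _parse_param: matches e.g. filter[title] or filter[price__gt]
FILTER_RE = re.compile(r'filter\[([A-Za-z0-9_./]+)\]')

def parse_filter_key(key):
    """Split a filter query param into (field, operator); 'eq' is the default."""
    match = FILTER_RE.match(key)
    if not match:
        return None
    parts = match.groups()[0].split('__')
    if len(parts) == 1:
        return parts[0], 'eq'
    if len(parts) == 2:
        return tuple(parts)
    raise ValueError('multiple filter operators in %r' % key)

pair_default = parse_filter_key('filter[title]')    # ('title', 'eq')
pair_oper = parse_filter_key('filter[price__gt]')   # ('price', 'gt')
```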
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _validate_field(param, fields):
""" Ensure the field exists on the model """ |
if '/' not in param.field and param.field not in fields:
raise InvalidQueryParams(**{
'detail': 'The filter query param of "%s" is not possible. The '
'resource requested does not have a "%s" field. Please '
'modify your request & retry.' % (param, param.field),
'links': LINK,
'parameter': PARAM,
}) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _validate_rel(param, rels):
""" Validate relationship based filters We don't support nested filters currently. FIX: Ensure the relationship filter field exists on the relationships model! """ |
if param.field.count('/') > 1:
raise InvalidQueryParams(**{
'detail': 'The filter query param of "%s" is attempting to '
'filter on a nested relationship which is not '
'currently supported.' % param,
'links': LINK,
'parameter': PARAM,
})
elif '/' in param.field:
model_field = param.field.split('/')[0]
if model_field not in rels:
raise InvalidQueryParams(**{
'detail': 'The filter query param of "%s" is attempting to '
'filter on a relationship but the "%s" field is '
'NOT a relationship field.' % (param, model_field),
'links': LINK,
'parameter': PARAM,
}) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _validate_param(param):
# pylint: disable=too-many-branches """ Ensure the filter is cast properly according to the operator """ |
detail = None
if param.oper not in goldman.config.QUERY_FILTERS:
detail = 'The query filter {} is not a supported ' \
'operator. Please change {} & retry your ' \
'request'.format(param.oper, param)
elif param.oper in goldman.config.GEO_FILTERS:
try:
if not isinstance(param.val, list) or len(param.val) <= 2:
raise ValueError
else:
param.val = [float(i) for i in param.val]
except ValueError:
detail = 'The query filter {} requires a list ' \
'of floats for geo evaluation. Please ' \
'modify your request & retry'.format(param)
elif param.oper in goldman.config.ENUM_FILTERS:
if not isinstance(param.val, list):
param.val = [param.val]
param.val = tuple(param.val)
elif isinstance(param.val, list):
detail = 'The query filter {} should not be specified more ' \
'than once or have multiple values. Please modify ' \
'your request & retry'.format(param)
elif param.oper in goldman.config.BOOL_FILTERS:
try:
param.val = str_to_bool(param.val)
except ValueError:
detail = 'The query filter {} requires a boolean ' \
'for evaluation. Please modify your ' \
'request & retry'.format(param)
elif param.oper in goldman.config.DATE_FILTERS:
try:
param.val = str_to_dt(param.val)
except ValueError:
detail = 'The query filter {} supports only an ' \
'epoch or ISO 8601 timestamp. Please ' \
'modify your request & retry'.format(param)
elif param.oper in goldman.config.NUM_FILTERS:
try:
param.val = int(param.val)
except ValueError:
detail = 'The query filter {} requires a number ' \
'for evaluation. Please modify your ' \
'request & retry'.format(param)
if detail:
raise InvalidQueryParams(**{
'detail': detail,
'links': LINK,
'parameter': PARAM,
}) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def init(req, model):
""" Return an array of Filter objects. """ |
fields = model.all_fields
rels = model.relationships
params = []
for key, val in req.params.items():
try:
field, oper = _parse_param(key)
except (TypeError, ValueError):
continue
try:
local_field, foreign_filter = field.split('/')
field_type = getattr(model, local_field)
foreign_field = field_type.field
foreign_rtype = field_type.rtype
if hasattr(field_type, 'local_field'):
local_field = field_type.local_field
param = FilterRel(foreign_field, foreign_filter, foreign_rtype,
local_field, field, oper, val)
except AttributeError:
raise InvalidQueryParams(**{
'detail': 'The filter query param "%s" specified a filter '
'containing a "." indicating a relationship filter '
'but a relationship by that name does not exist '
'on the requested resource.' % key,
'links': LINK,
'parameter': PARAM,
})
except ValueError:
param = Filter(field, oper, val)
_validate_param(param)
_validate_rel(param, rels)
_validate_field(param, fields)
params.append(param)
return params |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def git_check():
"""Check for uncomitted changes""" |
git_status = subprocess.check_output(['git', 'status', '--porcelain'])
if len(git_status) == 0:
print(Fore.GREEN + 'All changes committed' + Style.RESET_ALL)
else:
exit(Fore.RED + 'Please commit all files to continue') |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def update_version_number(update_level='patch'):
"""Update version number Returns a semantic_version object""" |
"""Find current version"""
temp_file = version_file().parent / ("~" + version_file().name)
with open(str(temp_file), 'w') as g:
with open(str(version_file()), 'r') as f:
for line in f:
version_matches = bare_version_re.match(line)
if version_matches:
bare_version_str = version_matches.groups(0)[0]
if semantic_version.validate(bare_version_str):
current_version = Version(bare_version_str)
print("{}Current version is {}".format(" "*4, current_version))
else:
current_version = Version.coerce(bare_version_str)
if not text.query_yes_quit("{}I think the version is {}. Use it?".format(" "*4, current_version), default="yes"):
exit(Fore.RED + 'Please set an initial version number to continue')
"""Determine new version number"""
if update_level == 'major':
current_version = current_version.next_major()
elif update_level == 'minor':
current_version = current_version.next_minor()
elif update_level == 'patch':
current_version = current_version.next_patch()
elif update_level == 'prerelease':
if not current_version.prerelease:
current_version = current_version.next_patch()
current_version.prerelease = ('dev', )
else:
exit(Fore.RED + 'Cannot update version in {} mode'.format(update_level))
print("{}New version is {}".format(" "*4, current_version))
"""Update version number"""
line = '__version__ = "{}"'.format(current_version)
print(line, file=g, end="")
print('', file=g) # add a blank line at the end of the file
shutil.copyfile(str(temp_file), str(version_file()))
os.remove(str(temp_file))
return(current_version) |
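The bump semantics that the `semantic_version` library provides can be illustrated with a bare-bones sketch (prerelease handling omitted; `bump` is a hypothetical helper, not from the source):

```python
def bump(version, level='patch'):
    """Bump a bare MAJOR.MINOR.PATCH version string at the given level."""
    major, minor, patch = (int(part) for part in version.split('.'))
    if level == 'major':
        return '{}.0.0'.format(major + 1)   # minor and patch reset to 0
    if level == 'minor':
        return '{}.{}.0'.format(major, minor + 1)
    if level == 'patch':
        return '{}.{}.{}'.format(major, minor, patch + 1)
    raise ValueError('cannot bump version in {} mode'.format(level))

next_patch = bump('1.2.3')           # '1.2.4'
next_major = bump('1.2.3', 'major')  # '2.0.0'
```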
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_release_to_changelog(version):
"""Add release line at the top of the first list it finds Assumes your changelog in managed with `releases`""" |
temp_file = changelog_file().parent / ("~" + changelog_file().name)
now = datetime.today()
release_added = False
with open(str(temp_file), 'w') as g:
with open(str(changelog_file()), 'r') as f:
for line in f:
list_match = list_match_re.match(line)
if list_match and not release_added:
release_line = "{}{} :release:`{} <{}-{:02}-{:02}>`".format(
list_match.group("leading"),
list_match.group("mark"),
version, now.year, now.month, now.day)
print(release_line, file=g)
release_added = True
print(line, file=g, end="")
if not release_added:
release_line = "{}{} :release:`{} <{}-{:02}-{:02}>`".format(
" ", "-", version, now.year, now.month, now.day)
print(release_line, file=g)
print('', file=g) # add a blank line at the end of the file
shutil.copyfile(str(temp_file), str(changelog_file()))
os.remove(str(temp_file)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def run_sphinx():
"""Runs Sphinx via it's `make html` command""" |
old_dir = here_directory()
os.chdir(str(doc_directory()))
doc_status = subprocess.check_call(['make', 'html'], shell=True)
os.chdir(str(old_dir)) # go back to former working directory
if doc_status != 0:
exit(Fore.RED + 'Something broke generating your documentation...') |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def list_tasks(target=None):
"""Returns a list of all the projects and tasks available in the `acorn` database directory. Args: target (str):
directory to list the projects for. Defaults to the configured database directory. Returns: dict: keys are project names; values are lists of tasks associated with the project. """ |
from os import getcwd, chdir
from glob import glob
original = getcwd()
if target is None:# pragma: no cover
target = _dbdir()
chdir(target)
result = {}
for filename in glob("*.*.json"):
project, task = filename.split('.')[0:2]
if project not in result:
result[project] = []
result[project].append(task)
#Set the working directory back to what it was.
chdir(original)
return result |
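The grouping step, factored out of the filesystem walk, is easy to verify on plain filenames (this helper is illustrative, not part of the source module):

```python
def group_tasks(filenames):
    """Group '<project>.<task>.json' filenames into {project: [tasks]}."""
    result = {}
    for filename in filenames:
        # First two dot-separated pieces are the project and task names.
        project, task = filename.split('.')[0:2]
        result.setdefault(project, []).append(task)
    return result

grouped = group_tasks(['demo.analysis.json', 'demo.plots.json', 'other.run.json'])
# {'demo': ['analysis', 'plots'], 'other': ['run']}
```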
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set_task(project_, task_):
"""Sets the active project and task. All subsequent logging will be saved to the database with that project and task. Args: project_ (str):
active project name; a project can have multiple tasks. task_ (str):
active task name. Logging is separated at the project and task level. """ |
global project, task
project = project_
task = task_
msg.okay("Set project name to {}.{}".format(project, task), 2) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def cleanup():
"""Saves all the open databases to JSON so that the kernel can be shut down without losing in-memory collections. """ |
failed = {}
success = []
for dbname, db in dbs.items():
try:
#Force the database save, even if the time hasn't elapsed yet.
db.save(True)
success.append(dbname)
except: # pragma: no cover
import sys, traceback
xcls, xerr = sys.exc_info()[0:2]
failed[dbname] = traceback.format_tb(sys.exc_info()[2])
for sdb in success:
if writeable:
msg.okay("Project {0}.{1} saved successfully.".format(*sdb), 0)
for fdb, tb in failed.items(): # pragma: no cover
msg.err("Project {1}.{2} save failed:\n{0}".format(tb, *fdb),
prefix=False) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _dbdir():
"""Returns the path to the directory where acorn DBs are stored. """ |
global dbdir
from os import mkdir, path, getcwd, chdir
if dbdir is None:
from acorn.config import settings
config = settings("acorn")
if (config.has_section("database") and
config.has_option("database", "folder")):
dbdir = config.get("database", "folder")
else: # pragma: no cover
raise ValueError("The folder to save DBs in must be configured"
" in 'acorn.cfg'")
#It is possible to specify the database path relative to the repository
#root. path.abspath will map it correctly if we are in the root directory.
from acorn.utility import abspath
if not path.isabs(dbdir):
#We want absolute paths to make it easier to port this to other OS.
dbdir = abspath(dbdir)
if not path.isdir(dbdir): # pragma: no cover
mkdir(dbdir)
return dbdir |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _json_clean(d):
"""Cleans the specified python `dict` by converting any tuple keys to strings so that they can be serialized by JSON. Args: d (dict):
python dictionary to clean up. Returns: dict: cleaned-up dictionary. """ |
result = {}
compkeys = {}
for k, v in d.items():
if not isinstance(k, tuple):
result[k] = v
else:
#v is a list of entries for instance methods/constructors on the
#UUID of the key. Instead of using the composite tuple keys, we
#switch them for a string using the
key = "c.{}".format(id(k))
result[key] = v
compkeys[key] = k
return (result, compkeys) |
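The key substitution can be demonstrated end to end: `json.dumps` rejects tuple keys but accepts the cleaned dict. This sketch mirrors the logic above outside the class context:

```python
import json

def json_clean(d):
    """Replace tuple keys with string placeholders so the dict is JSON-safe."""
    result = {}
    compkeys = {}
    for k, v in d.items():
        if isinstance(k, tuple):
            key = 'c.{}'.format(id(k))  # placeholder; the real key survives in compkeys
            result[key] = v
            compkeys[key] = k
        else:
            result[k] = v
    return result, compkeys

cleaned, mapping = json_clean({('uuid', 'method'): [1, 2], 'plain': 3})
serialized = json.dumps(cleaned)  # would raise TypeError with the tuple key intact
```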
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def save_image(byteio, imgfmt):
"""Saves the specified image to disk. Args: byteio (bytes):
image bytes to save to disk. imgfmt (str):
used as the extension of the saved file. Returns: str: a uuid for the saved image that can be added to the database entry. """ |
from os import path, mkdir
ptdir = "{}.{}".format(project, task)
uuid = str(uuid4())
#Save the image within the project/task specific folder.
idir = path.join(dbdir, ptdir)
if not path.isdir(idir):
mkdir(idir)
ipath = path.join(idir, "{}.{}".format(uuid, imgfmt))
with open(ipath, 'wb') as f:
f.write(byteio)
return uuid |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def log_uuid(self, uuid):
"""Logs the object with the specified `uuid` to `self.uuids` if possible. Args: uuid (str):
string value of :meth:`uuid.uuid4` value for the object. """ |
#We only need to try and describe an object once; if it is already in
#our database, then just move along.
if uuid not in self.uuids and uuid in uuids:
self.uuids[uuid] = uuids[uuid].describe() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_option(option, default=None, cast=None):
"""Returns the option value for the specified acorn database option. """ |
from acorn.config import settings
config = settings("acorn")
if (config.has_section("database") and
config.has_option("database", option)):
result = config.get("database", option)
if cast is not None:
result = cast(result)
else:
result = default
return result |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def load(self):
"""Deserializes the database from disk. """ |
#We load the database even when it is not configured to be
#writable. After all, the user may decide part-way through a session to
#begin writing again, and then we would want a history up to that point
#to be valid.
from os import path
if path.isfile(self.dbpath):
import json
with open(self.dbpath) as f:
jdb = json.load(f)
self.entities = jdb["entities"]
self.uuids = jdb["uuids"] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def save(self, force=False):
"""Serializes the database file to disk. Args: force (bool):
when True, the elapsed time since last save is ignored and the database is saved anyway (subject to global :data:`writeable` setting). """ |
from time import time
# Since the DBs can get rather large, we don't want to save them every
# single time a method is called. Instead, we only save them at the
# frequency specified in the global settings file.
from datetime import datetime
savefreq = TaskDB.get_option("savefreq", 2, int)
if self.lastsave is not None:
delta = (datetime.fromtimestamp(time()) -
datetime.fromtimestamp(self.lastsave))
elapsed = int(delta.total_seconds()/60)
else:
elapsed = savefreq + 1
if elapsed > savefreq or force:
if not writeable:
#We still overwrite the lastsave value so that this message doesn't
#keep getting output for every :meth:`record` call.
self.lastsave = time()
msg.std("Skipping database write to disk by setting.", 2)
return
import json
try:
entities, compkeys = _json_clean(self.entities)
jdb = {"entities": entities,
"compkeys": compkeys,
"uuids": self.uuids}
with open(self.dbpath, 'w') as f:
json.dump(jdb, f)
except: # pragma: no cover
from acorn.msg import err
import sys
err("{}: {}".format(*sys.exc_info()[0:2]))
raise
self.lastsave = time() |
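The throttling condition can be isolated into a small predicate, which makes the minutes-based arithmetic testable on its own (`should_save` is a hypothetical helper, not from the source):

```python
from datetime import datetime
from time import time

def should_save(lastsave, savefreq_minutes, force=False):
    """True when enough minutes have elapsed since the last save, or when forced."""
    if lastsave is None:
        return True  # never saved yet
    delta = (datetime.fromtimestamp(time()) -
             datetime.fromtimestamp(lastsave))
    elapsed = int(delta.total_seconds() / 60)
    return force or elapsed > savefreq_minutes

first_save = should_save(None, 2)            # True: nothing saved yet
just_saved = should_save(time(), 2)          # False: zero minutes elapsed
forced = should_save(time(), 2, force=True)  # True: force wins
```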
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def describe(self):
"""Returns a dictionary describing the object based on its type. """ |
result = {}
#Because we created an Instance object, we already know that this object
#is not one of the regular built-in types (except, perhaps, for list,
#dict and set objects that can have their tracking turned on).
#For objects that are instantiated by the user in __main__, we will
#already have a paper trail that shows exactly how it was done; but for
#these, we have to rely on human-specified descriptions.
from acorn.logging.descriptors import describe
return describe(self.obj) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def load_module(name, filename):
'''Load a module into name given its filename'''
if sys.version_info < (3, 5):
import imp
import warnings
with warnings.catch_warnings(): # Required for Python 2.7
warnings.simplefilter("ignore", RuntimeWarning)
return imp.load_source(name, filename)
else:
from importlib.machinery import SourceFileLoader
loader = SourceFileLoader(name, filename)
return loader.load_module() |
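The `loader.load_module()` call above is deprecated in modern Python; since 3.5 the documented route is the `importlib.util` spec API. A sketch of the same behaviour:

```python
import importlib.util
import os
import sys
import tempfile

def load_module_from_path(name, filename):
    """Load a module under `name` from an explicit file path (Python 3.5+)."""
    spec = importlib.util.spec_from_file_location(name, filename)
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module  # register before exec, as a normal import would
    spec.loader.exec_module(module)
    return module

# Demonstrate with a throwaway module file.
with tempfile.NamedTemporaryFile('w', suffix='.py', delete=False) as tmp:
    tmp.write('VALUE = 42\n')
mod = load_module_from_path('throwaway_demo', tmp.name)
os.unlink(tmp.name)
```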
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def file_code(function_index=1, function_name=None):
""" This will return the code of the calling function function_index of 2 will give the parent of the caller function_name should not be used with function_index :param function_index: int of how many frames back the program should look :param function_name: str of what function to look for :return: str of the code from the target function """ |
info = function_info(function_index + 1, function_name)
with open(info['file'], 'r') as fn:
return fn.read() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def relevant_kwargs(function, exclude_keys='self', exclude_values=None, extra_values=None):
""" This will return a dictionary of local variables that are parameters to the function provided in the arg. Example: function(**relevant_kwargs(function)) :param function: function to select parameters for :param exclude_keys: str,list,func if not a function it will be converted into a funciton, defaults to excluding None :param exclude_values: obj,list,func if not a function it will be convereted into one, defaults to excluding 'self' :param extra_values: dict of other values to include with local :return: dict of local variables for the function """ |
args = function_args(function)
locals_values = function_kwargs(function_index=2, exclude_keys=exclude_keys)
if extra_values:
locals_values.update(extra_values)
return {k: v for k, v in locals_values.items() if k in args} |
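The core idea, filtering a namespace down to a function's parameters, can be sketched with the standard library's `inspect.signature` in place of the module's own `function_args`/`function_kwargs` helpers:

```python
import inspect

def select_kwargs(function, candidates):
    """Keep only the entries of `candidates` that name a parameter of `function`."""
    params = set(inspect.signature(function).parameters)
    return {k: v for k, v in candidates.items() if k in params}

def greet(name, excited=False):
    return name + ('!' if excited else '')

# 'unrelated' is silently dropped because greet has no such parameter.
local_vars = {'name': 'Ada', 'excited': True, 'unrelated': object()}
result = greet(**select_kwargs(greet, local_vars))  # 'Ada!'
```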
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def make_feature(fc):
'''Builds a new `StringCounter` from the many `StringCounters` in the
input `fc`. This StringCounter will define one of the targets for
the `MultinomialNB` classifier.
This crucial function decides the relative importance of features
extracted by the ETL pipeline. This is essentially a form of
domain fitting that allows us to tune the extraction to the fields
that are important to a domain. However, if the NER for a domain
is inadequate, then the primary purpose of these relative
weightings is to remove bogus NER extractions.
'''
feat = StringCounter()
rejects = set()
keepers = set()
#keepers_keys = ['GPE', 'PERSON', 'ORGANIZATION', 'usernames']
keepers_keys = ['phone', 'email'] #['usernames', 'phone', 'email', 'ORGANIZATION', 'PERSON']
rejects_keys = ['keywords', 'usernames', 'ORGANIZATION', 'PERSON']
# The features used to pull the keys for the classifier
for f, strength in [('keywords', 10**4), ('GPE', 1), ('bow', 1), ('bowNP_sip', 10**8),
('phone', 10**12), ('email', 10**12),
('bowNP', 10**3), ('PERSON', 10**8), ('ORGANIZATION', 10**6), ('usernames', 10**12)]:
if strength == 1:
feat += fc[f]
else:
feat += StringCounter({key: strength * count
for key, count in fc[f].items()})
if f in rejects_keys:
rejects.update(fc[f])
if f in keepers_keys:
keepers.update(fc[f])
if u'' in feat: feat.pop(u'')
return feat, rejects, keepers |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def rx_int_extra(rxmatch):
""" We didn't just match an int but the int is what we need. """ |
rxmatch = re.search("\d+", rxmatch.group(0))
return int(rxmatch.group(0)) |
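A quick usage demo: the outer pattern matches surrounding text as well, and the helper digs the integer back out of that broader match:

```python
import re

def rx_int_extra(rxmatch):
    """Pull the integer out of a broader regex match."""
    digits = re.search(r"\d+", rxmatch.group(0))
    return int(digits.group(0))

# The full match is 'page 12'; the helper extracts just the 12.
match = re.search(r"page \d+", "see page 12 for details")
page = rx_int_extra(match)  # 12
```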
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def prepare_filename_decorator(fn):
""" A decorator of `prepare_filename` method 1. It automatically assign `settings.ROUGHPAGES_INDEX_FILENAME` if the `normalized_url` is ''. 2. It automatically assign file extensions to the output list. """ |
@wraps(fn)
def inner(self, normalized_url, request):
ext = settings.ROUGHPAGES_TEMPLATE_FILE_EXT
if not normalized_url:
normalized_url = settings.ROUGHPAGES_INDEX_FILENAME
filenames = fn(self, normalized_url, request)
filenames = [x + ext for x in filenames if x]
return filenames
return inner |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def smart_query_string(parser, token):
""" Outputs current GET query string with additions appended. Additions are provided in token pairs. """ |
args = token.split_contents()
additions = args[1:]
addition_pairs = []
while additions:
addition_pairs.append(additions[0:2])
additions = additions[2:]
return SmartQueryStringNode(addition_pairs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def start_thread(self, target, args=(), kwargs=None, priority=0):
""" To make sure applications work with the old name """ |
return self.add_task(target, args, kwargs, priority) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def list_projects(folders, folder = None, user = None):
'''List all folders or all subfolders of a folder.
If folder is provided, this method will output a list of subfolders
contained by it. Otherwise, a list of all top-level folders is produced.
:param folders: reference to folder.Folders instance
:param folder: folder name or None
:param user: optional user name
'''
fid = None if folder is None else Folders.name_to_id(folder)
# List all folders if none provided.
if fid is None:
for f in folders.folders(user):
print(Folders.id_to_name(f))
return
# List subfolders of a specific folder
try:
for sid in folders.subfolders(fid, user):
print(Folders.id_to_name(sid))
except KeyError:
print("E: folder not found: %s" %folder, file=sys.stderr) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_supervisor(func: types.AnyFunction) -> types.Supervisor: """Get the appropriate supervisor to use and pre-apply the function. Args: func: A function. """ |
if not callable(func):
raise TypeError("func is not callable")
if asyncio.iscoroutinefunction(func):
supervisor = _async_supervisor
else:
supervisor = _sync_supervisor
return functools.partial(supervisor, func) |
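The dispatch pattern here, inspect the callable, pick a sync or async wrapper, pre-apply it with `functools.partial`, can be sketched with stand-in supervisors (`_sync_run` and `_async_run` are illustrative, not the module's own):

```python
import asyncio
import functools

def _sync_run(func, *args, **kwargs):
    return ('sync', func(*args, **kwargs))

async def _async_run(func, *args, **kwargs):
    return ('async', await func(*args, **kwargs))

def get_runner(func):
    """Pick the sync or async wrapper and pre-apply the target function."""
    if not callable(func):
        raise TypeError('func is not callable')
    runner = _async_run if asyncio.iscoroutinefunction(func) else _sync_run
    return functools.partial(runner, func)

sync_result = get_runner(lambda x: x + 1)(41)        # ('sync', 42)

async def add_one(x):
    return x + 1

async_result = asyncio.run(get_runner(add_one)(41))  # ('async', 42)
```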
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
async def _async_supervisor(func, animation_, step, *args, **kwargs):
"""Supervisor for running an animation with an asynchronous function. Args: func: A function to be run alongside an animation. animation_: An infinite generator that produces strings for the animation. step: Seconds between each animation frame. *args: Arguments for func. **kwargs: Keyword arguments for func. Returns: The result of func(*args, **kwargs) Raises: Any exception that is thrown when executing func. """ |
with ThreadPoolExecutor(max_workers=2) as pool:
with _terminating_event() as event:
pool.submit(animate_cli, animation_, step, event)
result = await func(*args, **kwargs)
return result |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def concatechain(*generators: types.FrameGenerator, separator: str = ''):
"""Return a generator that in each iteration takes one value from each of the supplied generators, joins them together with the specified separator and yields the result. Stops as soon as any iterator raises StopIteration and returns the value contained in it. Primarily created for chaining string generators, hence the name. Args: generators: Any number of generators that yield types that can be joined together with the separator string. separator: A separator to insert between each value yielded by the different generators. Returns: A generator that yields strings that are the concatenation of one value from each of the generators, joined together with the separator string. """ |
while True:
try:
next_ = [next(gen) for gen in generators]
yield separator.join(next_)
except StopIteration as exc:
return exc.value |
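Behaviour is easiest to see with two short string generators. The list comprehension matters here: a StopIteration escaping `next()` inside a list comprehension propagates normally, whereas inside a generator expression PEP 479 would turn it into a RuntimeError. A self-contained sketch:

```python
def concatechain(*generators, separator=''):
    """Yield the separator-joined next value of each generator until one is exhausted."""
    while True:
        try:
            # list comprehension (not a genexp): StopIteration from next()
            # reaches this try block instead of becoming a RuntimeError
            next_values = [next(gen) for gen in generators]
        except StopIteration:
            return
        yield separator.join(next_values)

joined = list(concatechain(iter(['a', 'b']), iter(['1', '2', '3']), separator='-'))
# stops when the shorter generator runs out: ['a-1', 'b-2']
```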
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def compare(self, control_result, experimental_result):
""" Return true if the results match. """ |
_compare = getattr(self, '_compare', lambda x, y: x == y)
return (
# Mismatch if only one of the results returned an error, or if
# different types of errors were returned.
type(control_result.error) is type(experimental_result.error) and
_compare(control_result.value, experimental_result.value)
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def fetch_inst_id(self):
""" Fetches the institute id of the RU """ |
try:
for d in msgpack.unpack(urllib2.urlopen(
"%s/list/institutes?format=msgpack" % self.url)):
if d['name'] == 'Radboud Universiteit Nijmegen':
return d['id']
except IOError as e:  # urllib2 exceptions are a subclass of IOError
raise RuusterError(e)
assert False |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def get_ip(source='aws'):
''' a method to get current public ip address of machine '''
if source == 'aws':
source_url = 'http://checkip.amazonaws.com/'
else:
raise Exception('get_ip currently only supports queries to aws')
import requests
try:
response = requests.get(url=source_url)
except Exception as err:
from labpack.handlers.requests import handle_requests
from requests import Request
request_object = Request(method='GET', url=source_url)
request_details = handle_requests(request_object)
raise Exception(request_details['error'])
current_ip = response.content.decode()
current_ip = current_ip.strip()
return current_ip |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_parameter(samples, sample_period):
"""Create a HTK Parameter object from an array of samples and a samples period :param samples (list of lists or array of floats):
The samples to write into the file. Usually feature vectors. :param sample_period (int):
Sample period in 100ns units. """ |
parm_kind_str = 'USER'
parm_kind = _htk_str_to_param(parm_kind_str)
parm_kind_base, parm_kind_opts = _htk_str_to_param(parm_kind_str)
meta = ParameterMeta(n_samples=len(samples),
samp_period=sample_period,
samp_size=len(samples[0]) * 4, # size in bytes
parm_kind_str=parm_kind_str,
parm_kind=parm_kind,
parm_kind_base=parm_kind_base,
parm_kind_opts=parm_kind_opts)
return Parameter(meta=meta,
samples=np.array(samples)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def load_mlf(filename, utf8_normalization=None):
"""Load an HTK Master Label File. :param filename: The filename of the MLF file. :param utf8_normalization: None """ |
with codecs.open(filename, 'r', 'string_escape') as f:
data = f.read().decode('utf8')
if utf8_normalization:
data = unicodedata.normalize(utf8_normalization, data)
mlfs = {}
for mlf_object in HTK_MLF_RE.finditer(data):
mlfs[mlf_object.group('file')] = [[Label(**mo.groupdict())
for mo
in HTK_HYPOTHESIS_RE.finditer(recognition_data)]
for recognition_data
in re.split(r'\n///\n', mlf_object.group('hypotheses'))]
return mlfs |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def save_mlf(mlf, output_filename):
"""Save an HTK Master Label File. :param mlf: MLF dictionary containing a mapping from file to list of annotations. :param output_filename: The file where to save the MLF """ |
with codecs.open(output_filename, 'w', 'utf-8') as f:
f.write(u'#!MLF!#\n')
for k, v in mlf.items():
f.write(u'"{}"\n'.format(k))
for labels in v:
for label in labels:
line = u'{start} {end} {symbol} ' \
u'{loglikelihood} {word}'.format(start=label.start or '',
end=label.end or '',
symbol=label.symbol or '',
loglikelihood=label.log_likelihood or '',
word=label.word or '')
f.write(u'{}\n'.format(line.strip()))
f.write(u'.\n') |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def main():
"""Test code called from commandline""" |
model = load_model('../data/hmmdefs')
hmm = model.hmms['r-We']
for state_name in hmm.state_names:
print(state_name)
state = model.states[state_name]
print(state.means_)
print(model)
model2 = load_model('../data/prior.hmm1mixSI.rate32')
print(model2) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def to_lower(cls):  # NOQA
""" Return a list of all the fields that should be lowercased. This is done on fields with `lower=True`. """ |
email = cls.get_fields_by_class(EmailType)
lower = [key for key, val in cls.get_fields_with_prop('lower') if val]
return list(set(email + lower))
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_fields_by_class(cls, field_class):
""" Return a list of field names matching a field class :param field_class: field class object :return: list """ |
ret = []
for key, val in getattr(cls, '_fields').items():
if isinstance(val, field_class):
ret.append(key)
return ret |
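The `isinstance` scan over `_fields` can be illustrated without schematics; the stand-in field classes and model below are hypothetical, not the real schematics types:

```python
class StringType:
    pass

class EmailType(StringType):
    pass

class FakeModel:
    # Stand-in for a schematics model's field registry.
    _fields = {'name': StringType(), 'email': EmailType(), 'age': object()}

def get_fields_by_class(cls, field_class):
    # Collect field names whose field object is an instance of field_class.
    return [key for key, val in cls._fields.items()
            if isinstance(val, field_class)]

assert get_fields_by_class(FakeModel, EmailType) == ['email']
# EmailType subclasses StringType, so it matches a StringType query too.
assert sorted(get_fields_by_class(FakeModel, StringType)) == ['email', 'name']
```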
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_fields_with_prop(cls, prop_key):
""" Return a list of fields with a prop key defined Each list item will be a tuple of field name containing the prop key & the value of that prop key. :param prop_key: key name :return: list of tuples """ |
ret = []
for key, val in getattr(cls, '_fields').items():
if hasattr(val, prop_key):
ret.append((key, getattr(val, prop_key)))
return ret |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def to_exceptions(cls, errors):
""" Convert the validation errors into ValidationFailure exc's Transform native schematics validation errors into a goldman ValidationFailure exception. :param errors: dict of errors in schematics format :return: list of ValidationFailure exception objects """ |
ret = []
for key, val in errors.items():
if key in cls.relationships:
attr = '/data/relationships/%s' % key
else:
attr = '/data/attributes/%s' % key
for error in val:
ret.append(ValidationFailure(attr, detail=error))
return ret |
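The pointer-building branch above can be sketched in isolation; the `relationships` set and field names here are made up for illustration:

```python
relationships = {'owner'}  # hypothetical relationship field names

def error_pointer(key):
    # JSON API source pointer: relationships live under a different path
    # than plain attributes.
    if key in relationships:
        return '/data/relationships/%s' % key
    return '/data/attributes/%s' % key

assert error_pointer('owner') == '/data/relationships/owner'
assert error_pointer('title') == '/data/attributes/title'
```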
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def dirty_fields(self):
""" Return an array of field names that are dirty Dirty means if a model was hydrated first from the store & then had field values changed they are now considered dirty. For new models all fields are considered dirty. :return: list """ |
dirty_fields = []
for field in self.all_fields:
if field not in self._original:
dirty_fields.append(field)
elif self._original[field] != getattr(self, field):
dirty_fields.append(field)
return dirty_fields |
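The snapshot-and-diff idea behind dirty tracking can be shown with a minimal stand-in class (not the real model):

```python
class Tracked:
    # Minimal sketch: snapshot field values on construction, diff on demand.
    def __init__(self, **fields):
        self.__dict__.update(fields)
        self._original = dict(fields)

    def dirty_fields(self):
        # A field is dirty when its current value differs from the snapshot.
        return [f for f in self._original
                if getattr(self, f) != self._original[f]]

m = Tracked(name='ada', age=36)
assert m.dirty_fields() == []
m.age = 37
assert m.dirty_fields() == ['age']
```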
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def merge(self, data, clean=False, validate=False):
""" Merge a dict with the model This is needed because schematics doesn't auto cast values when assigned. This method allows us to ensure incoming data & existing data on a model are always coerced properly. We create a temporary model instance with just the new data so all the features of schematics deserialization are still available. :param data: dict of potentially new different data to merge :param clean: set the dirty bit back to clean. This is useful when the merge is coming from the store where the data could have been mutated & the new merged in data is now the single source of truth. :param validate: run the schematics validate method :return: nothing.. it has mutation side effects """ |
try:
model = self.__class__(data)
except ConversionError as errors:
abort(self.to_exceptions(errors.messages))
for key, val in model.to_native().items():
if key in data:
setattr(self, key, val)
if validate:
try:
self.validate()
except ModelValidationError as errors:
abort(self.to_exceptions(errors.messages))
if clean:
self._original = self.to_native() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def to_primitive(self, load_rels=None, sparse_fields=None, *args, **kwargs):
""" Override the schematics native to_primitive method :param loads_rels: List of field names that are relationships that should be loaded for the serialization process. This needs to be run before the native schematics to_primitive is run so the proper data is serialized. :param sparse_fields: List of field names that can be provided which limits the serialization to ONLY those field names. A whitelist effectively. """ |
if load_rels:
for rel in load_rels:
getattr(self, rel).load()
data = super(Model, self).to_primitive(*args, **kwargs)
if sparse_fields:
for key in list(data.keys()):
if key not in sparse_fields:
del data[key]
return data |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def stash_split(fqdn, result, *argl, **argd):
"""Stashes the split between training and testing sets so that it can be used later for automatic scoring of the models in the log. """ |
global _splits
if fqdn == "sklearn.cross_validation.train_test_split":
key = id(result[1])
_splits[key] = result
#We don't actually want to return anything for the analysis; we are using it
#as a hook to save pointers to the dataset split so that we can easily
#analyze performance later on.
return None |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _machine_fqdn(machine):
"""Returns the FQDN of the given learning machine. """ |
from acorn.logging.decoration import _fqdn
if hasattr(machine, "__class__"):
return _fqdn(machine.__class__, False)
else: # pragma: no cover
#See what FQDN can get out of the class instance.
return _fqdn(machine) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def fit(fqdn, result, *argl, **argd):
"""Analyzes the result of a generic fit operation performed by `sklearn`. Args: fqdn (str):
full-qualified name of the method that was called. result: result of calling the method with `fqdn`. argl (tuple):
positional arguments passed to the method call. argd (dict):
keyword arguments passed to the method call. """ |
#Check the arguments to see what kind of data we are working with, then
#choose the appropriate function below to return the analysis dictionary.
#The first positional argument will be the instance of the machine that was
#used. Check its name against a list.
global _machines
out = None
if len(argl) > 0:
machine = argl[0]
#We save pointers to the machine that was just fit so that we can figure
#out later what training data was used for analysis purposes.
key = id(machine)
_machines[key] = (machine, argl[1], argl[2])
if isclassifier(machine):
out = classify_fit(fqdn, result, *argl, **argd)
elif isregressor(machine):
out = regress_fit(fqdn, result, *argl, **argd)
return out |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def predict(fqdn, result, *argl, **argd):
"""Analyzes the result of a generic predict operation performed by `sklearn`. Args: fqdn (str):
full-qualified name of the method that was called. result: result of calling the method with `fqdn`. argl (tuple):
positional arguments passed to the method call. argd (dict):
keyword arguments passed to the method call. """ |
#Check the arguments to see what kind of data we are working with, then
#choose the appropriate function below to return the analysis dictionary.
out = None
if len(argl) > 0:
machine = argl[0]
if isclassifier(machine):
out = classify_predict(fqdn, result, None, *argl, **argd)
elif isregressor(machine):
out = regress_predict(fqdn, result, None, *argl, **argd)
return out |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _do_auto_predict(machine, X, *args):
"""Performs an automatic prediction for the specified machine and returns the predicted values. """ |
if auto_predict and hasattr(machine, "predict"):
return machine.predict(X) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _generic_fit(fqdn, result, scorer, yP=None, *argl, **argd):
"""Performs the generic fit tests that are common to both classifier and regressor; uses `scorer` to score the predicted values given by the machine when tested against its training set. Args: scorer (function):
called on the result of `machine.predict(Xtrain, ytrain)`. """ |
out = None
if len(argl) > 0:
machine = argl[0]
out = {}
if hasattr(machine, "best_score_"):
out["score"] = machine.best_score_
#With fitting it is often useful to know how well the fitting set was
#matched (by trying to predict a score on it). We can do this
#automatically and show the result to the user.
yL = _do_auto_predict(*argl[0:2])
yscore = scorer(fqdn, yL, yP, *argl, **argd)
if yscore is not None:
out.update(yscore)
return out |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _percent_match(result, out, yP=None, *argl):
"""Returns the percent match for the specified prediction call; requires that the data was split before using an analyzed method. Args: out (dict):
output dictionary to save the result to. """ |
if len(argl) > 1:
if yP is None:
Xt = argl[1]
key = id(Xt)
if key in _splits:
yP = _splits[key][3]
if yP is not None:
out["%"] = round(1.-sum(abs(yP - result))/float(len(result)), 3) |
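The `%` score reduces to one minus the mean absolute difference between true and predicted labels; a plain-Python sketch of that arithmetic (no numpy) for 0/1 labels, where each `|true - pred|` counts one mismatch:

```python
def percent_match(y_true, y_pred):
    # Fraction of predictions matching, rounded to 3 places as above.
    mismatches = sum(abs(t - p) for t, p in zip(y_true, y_pred))
    return round(1.0 - mismatches / float(len(y_pred)), 3)

assert percent_match([1, 0, 1, 1], [1, 0, 0, 1]) == 0.75
```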
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def path(self):
"""Getter property for the URL path to this Task. :rtype: string :returns: The URL path to this task. """ |
if not self.id:
raise ValueError('Cannot determine path without a task id.')
return self.path_helper(self.taskqueue.path, self.id) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def delete(self, client=None):
"""Deletes a task from Task Queue. :type client: :class:`gcloud.taskqueue.client.Client` or ``NoneType`` :param client: Optional. The client to use. If not passed, falls back to the ``client`` stored on the task's taskqueue. :rtype: :class:`Task` :returns: The task that was just deleted. :raises: :class:`gcloud.exceptions.NotFound` (propagated from :meth:`gcloud.taskqueue.taskqueue.Taskqueue.delete_task`). """ |
return self.taskqueue.delete_task(self.id, client=client) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def update(self, new_lease_time, client=None):
"""Update the duration of a task lease :type new_lease_time: int :param new_lease_time: the new lease time in seconds. :type client: :class:`gcloud.taskqueue.client.Client` or ``NoneType`` :param client: Optional. The client to use. If not passed, falls back to the ``client`` stored on the task's taskqueue. :rtype: :class:`Task` :returns: The task that was just updated. :raises: :class:`gcloud.exceptions.NotFound` (propagated from :meth:`gcloud.taskqueue.taskqueue.Taskqueue.update_task`). """ |
return self.taskqueue.update_task(self.id, new_lease_time=new_lease_time, client=client) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def description(self):
"""The description for this task. See: https://cloud.google.com/appengine/docs/python/taskqueue/rest/tasks :rtype: string :returns: The description for this task. """ |
if self._description is None:
if 'payloadBase64' not in self._properties:
self._properties = self.taskqueue.get_task(id=self.id)._properties
self._description = base64.b64decode(self._properties.get('payloadBase64', b'')).decode("ascii")
return self._description |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def time_enqueued(self):
"""Retrieve the timestamp at which the task was enqueued. See: https://cloud.google.com/appengine/docs/python/taskqueue/rest/tasks :rtype: :class:`datetime.datetime` or ``NoneType`` :returns: Datetime object parsed from microsecond timestamp, or ``None`` if the property is not set locally. """ |
value = self._properties.get('enqueueTimestamp')
if value is not None:
return _datetime_from_microseconds(int(value)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def conv2d(self, x_in: Connection, w_in: Connection, receptive_field_size, filters_number, stride=1, padding=1, name=""):
""" Computes a 2-D convolution given 4-D input and filter tensors. """ |
x_cols = self.tensor_3d_to_cols(x_in, receptive_field_size, stride=stride, padding=padding)
mul = self.transpose(self.matrix_multiply(x_cols, w_in), 0, 2, 1)
#output_width = self.sum(self.div(self.sum(self.sum(self.shape(x_in, 2), self.constant(-1 * receptive_field_size)),
# self.constant(2 * padding)), self.constant(stride)), self.constant(1))
# output_height = (h - f + 2 * p) / s + 1
output = self.reshape(mul, (-1, filters_number, receptive_field_size, receptive_field_size))
output.name = name
return output |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def append(self, hcont, value, score = None):
""" If sort_field is specified, score must be None. If sort_field is not specified, score is mandatory. """ |
assert (score is None) != (self.field.sort_field is None)
if score is None:
score = getattr(value, self.field.sort_field.name)
ContainerFieldWriter.append(self, hcont, value, score) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def copy_file(aws_access_key_id, aws_secret_access_key, bucket_name, file, s3_folder):
""" copies file to bucket s3_folder """ |
# Connect to the bucket
bucket = s3_bucket(aws_access_key_id, aws_secret_access_key, bucket_name)
key = boto.s3.key.Key(bucket)
if s3_folder:
target_name = '%s/%s' % (s3_folder, os.path.basename(file))
else:
target_name = os.path.basename(file)
key.key = target_name
print('Uploading %s to %s' % (file, target_name))
key.set_contents_from_filename(file)
print('Upload %s FINISHED: %s' % (file, dt.now())) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def main():
""" We here demostrate the basic functionality of barrett. We use a global scan of scalar dark matter as an example. The details aren't really important. """ |
dataset = 'RD'
observables = ['log(<\sigma v>)', '\Omega_{\chi}h^2', 'log(\sigma_p^{SI})']
var = ['log(m_{\chi})']
var += ['log(C_1)', 'log(C_2)', 'log(C_3)', 'log(C_4)', 'log(C_5)', 'log(C_6)']
var += observables
plot_vs_mass(dataset, observables, 'mass_vs_observables.png')
plot_oneD(dataset, var, 'oneD.png')
pairplot(dataset, var, 'pairplot.png') |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def pairplot(dataset, vars, filename, bins=60):
""" Plot a matrix of the specified variables with all the 2D pdfs and 1D pdfs. """ |
n = len(vars)
fig, axes = plt.subplots(nrows=n, ncols=n)
plt.subplots_adjust(wspace=0.1, hspace=0.1)
for i, x in enumerate(vars):
for j, y in enumerate(vars):
print(((x, y), (i, j)))
ax = axes[j,i]
if j < i:
ax.axis('off')
continue
elif i == j:
P = posterior.oneD(dataset+'.h5', x, limits=limits(x), bins=bins)
P.plot(ax)
ax.set_xlim(limits(x))
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.xaxis.set_ticks_position('bottom')
ax.set_yticks([])
else:
P = posterior.twoD(dataset+'.h5', x, y,
xlimits=limits(x), ylimits=limits(y), xbins=bins, ybins=bins)
# apply some gaussian smoothing to make the contours slightly smoother
sigmas = (np.diff(P.ycenters)[0], np.diff(P.xcenters)[0])
P.pdf = gaussian_filter(P.pdf, sigmas, mode='nearest')
P.plot(ax, levels=np.linspace(0.9, 0.1, 9))
ax.set_xlim(limits(x))
ax.set_ylim(limits(y))
# now we clean up labels, ticks and such
leftmostcol = i == 0
bottomrow = j == n-1
ax.set_xlabel(labels(x) if bottomrow else '')
ax.set_ylabel(labels(y) if leftmostcol else '')
if not leftmostcol:
ax.set_yticklabels([])
if not bottomrow:
ax.set_xticklabels([])
fig.set_size_inches(n*4,n*4)
fig.savefig(filename, dpi=200, bbox_inches='tight')
plt.close(fig) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def plot_vs_mass(dataset, vars, filename, bins=60):
""" Plot 2D marginalised posteriors of the 'vars' vs the dark matter mass. We plot the one sigma and two sigma filled contours. More contours can be plotted, which produces something more akin to a heatmap. If more complicated plotting is required, it is recommended to write a custom plotting function by extending the default plot() method. """ |
n = len(vars)
fig, axes = plt.subplots(nrows=n,
ncols=1,
sharex='col',
sharey=False)
plt.subplots_adjust(wspace=0, hspace=0)
m = 'log(m_{\chi})'
for i, y in enumerate(vars):
ax = axes[i]
P = posterior.twoD(dataset+'.h5', m, y,
xlimits=limits(m), ylimits=limits(y), xbins=bins, ybins=bins)
# apply some gaussian smoothing to make the contours slightly smoother
sigmas = (np.diff(P.ycenters)[0], np.diff(P.xcenters)[0])
P.pdf = gaussian_filter(P.pdf, sigmas, mode='nearest')
P.plot(ax, levels=np.linspace(0.9, 0.1, 9))
ax.set_xlabel(labels('log(m_{\chi})'))
ax.set_ylabel(labels(y))
fig.set_size_inches(4,n*3)
fig.savefig(filename, dpi=200, bbox_inches='tight')
plt.close(fig) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def plot_oneD(dataset, vars, filename, bins=60):
""" Plot 1D marginalised posteriors for the 'vars' of interest.""" |
n = len(vars)
fig, axes = plt.subplots(nrows=n,
ncols=1,
sharex=False,
sharey=False)
for i, x in enumerate(vars):
ax = axes[i]
P = posterior.oneD(dataset+'.h5', x, limits=limits(x), bins=bins)
P.plot(ax)
ax.set_xlabel(labels(x))
ax.set_yticklabels([])
fig.set_size_inches(4, 4*n)
fig.savefig(filename, dpi=200, bbox_inches='tight')
plt.close(fig) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def start_logger(self):
""" Enables the root logger and configures extra loggers. """ |
level = self.real_level(self.level)
logging.basicConfig(level=level)
self.set_logger(self.name, self.level)
config.dictConfig(self.config)
self.logger = logging.getLogger(self.name) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set_logger(self, logger_name, level, handler=None):
""" Sets the level of a logger """ |
if 'loggers' not in self.config:
self.config['loggers'] = {}
real_level = self.real_level(level)
self.config['loggers'][logger_name] = {'level': real_level}
if handler:
self.config['loggers'][logger_name]['handlers'] = [handler] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def register_event(self, event_name, event_level, message):
""" Registers an event so that it can be logged later. """ |
self.events[event_name] = (event_level, message) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def to_iso8601(dt, tz=None):
""" Returns an ISO-8601 representation of a given datetime instance. '2014-10-01T23:21:33.718508Z' :param dt: a :class:`~datetime.datetime` instance :param tz: a :class:`~datetime.tzinfo` to use; if None - use a default one """ |
if tz is not None:
dt = dt.replace(tzinfo=tz)
iso8601 = dt.isoformat()
# Naive datetime objects usually don't have info about timezone.
# Let's assume it's UTC and add Z to the end.
if re.match(r'.*(Z|[+-]\d{2}:\d{2})$', iso8601) is None:
iso8601 += 'Z'
return iso8601 |
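Restated standalone, the trailing-`Z` behaviour for naive datetimes can be verified directly:

```python
import datetime
import re

def to_iso8601(dt, tz=None):
    # Naive datetimes carry no offset in isoformat(), so append 'Z' (UTC).
    if tz is not None:
        dt = dt.replace(tzinfo=tz)
    iso8601 = dt.isoformat()
    if re.match(r'.*(Z|[+-]\d{2}:\d{2})$', iso8601) is None:
        iso8601 += 'Z'
    return iso8601

naive = datetime.datetime(2014, 10, 1, 23, 21, 33)
assert to_iso8601(naive) == '2014-10-01T23:21:33Z'
# An aware datetime already ends with an offset, so no 'Z' is appended.
assert to_iso8601(naive, tz=datetime.timezone.utc).endswith('+00:00')
```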
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _is_dst(dt):
""" Returns True if a given datetime object represents a time with DST shift. """ |
# we can't use `dt.timestamp()` here since it requires a `utcoffset`
# and we don't want to get into a recursive loop
localtime = time.localtime(time.mktime((
dt.year,
dt.month,
dt.day,
dt.hour,
dt.minute,
dt.second,
dt.weekday(),
0, # day of the year
-1 # dst
)))
return localtime.tm_isdst > 0 |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def dst(self, dt):
""" Returns a difference in seconds between standard offset and dst offset. """ |
if not self._is_dst(dt):
return datetime.timedelta(0)
offset = time.timezone - time.altzone
return datetime.timedelta(seconds=-offset) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _to_camelcase(input_string):
''' a helper method to convert python to camelcase'''
camel_string = ''
for i in range(len(input_string)):
if input_string[i] == '_':
pass
elif not camel_string:
camel_string += input_string[i].upper()
elif input_string[i-1] == '_':
camel_string += input_string[i].upper()
else:
camel_string += input_string[i]
return camel_string |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _to_python(input_string):
''' a helper method to convert camelcase to python'''
python_string = ''
for i in range(len(input_string)):
if not python_string:
python_string += input_string[i].lower()
elif input_string[i].isupper():
python_string += '_%s' % input_string[i].lower()
else:
python_string += input_string[i]
return python_string |
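The two helpers round-trip each other; a compact sketch of both (note that `capitalize()` lowercases the rest of each part, a slight simplification of the character-by-character loop above):

```python
def to_camelcase(s):
    # snake_case -> CamelCase, skipping empty parts from doubled underscores.
    return ''.join(part.capitalize() for part in s.split('_') if part)

def to_python(s):
    # CamelCase -> snake_case: an uppercase letter after the first character
    # starts a new underscore-separated part.
    out = ''
    for ch in s:
        if ch.isupper() and out:
            out += '_' + ch.lower()
        else:
            out += ch.lower()
    return out

assert to_camelcase('request_details') == 'RequestDetails'
assert to_python('RequestDetails') == 'request_details'
```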
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create(self, callback_url):
"""Register a new Subscription on this collection's parent object. Args: callback_url (str):
URI of an active endpoint which can receive notifications. Returns: A round.Subscription object if successful. """ |
resource = self.resource.create({'subscribed_to': 'address',
'callback_url': callback_url})
subscription = self.wrap(resource)
self.add(subscription)
return subscription |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def match(self, seq, **kwargs):
'''If the schema matches seq, returns a list of the matched objects.
Otherwise, returns MatchFailure instance.
'''
strict = kwargs.get('strict', False)
top_level = kwargs.get('top_level', True)
match = kwargs.get('match', list())
if top_level:
kwargs['top_level'] = False
kwargs['match'] = match
try:
seq = IterableList(seq)
self.match(seq, **kwargs)
if strict:
if not seq.empty():
raise MatchFailed('Sequence is too long', seq)
except MatchFailed as e:
return e.failure()
return Match(*match)
for elem in self.elems:
elem.match(seq, **kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def downloads_per_year(collection, code, raw=False):
""" This method retrieve the total of downloads per year. arguments collection: SciELO 3 letters Acronym code: (Journal ISSN, Issue PID, Article PID) return [ ("2017", "20101"), ("2016", "11201"), ("2015", "12311"), ] """ |
tc = ThriftClient()
body = {"query": {"filtered": {}}}
fltr = {}
query = {
"query": {
"bool": {
"must": [
{
"match": {
"collection": collection
}
}
]
}
}
}
aggs = {
"aggs": {
"access_year": {
"terms": {
"field": "access_year",
"size": 0,
"order": {
"_term": "asc"
}
},
"aggs": {
"access_total": {
"sum": {
"field": "access_total"
}
}
}
}
}
}
body['query']['filtered'].update(fltr)
body['query']['filtered'].update(query)
body.update(aggs)
code_type = _code_type(code)
if code_type:
query["query"]["bool"]["must"].append({
"match": {
code_type: code
}
})
query_parameters = [
('size', '0')
]
query_result = tc.search(json.dumps(body), query_parameters)
return query_result if raw is True else _compute_downloads_per_year(query_result) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def word_wrap(self, text, width=1023):
""" A simple word wrapping greedy algorithm that puts as many words into a single string as possible. """ |
substrings = []
string = text
while len(string) > width:
index = width - 1
while not string[index].isspace():
index = index - 1
line = string[0:index]
substrings.append(line)
string = string[index + 1:]
substrings.append(string)
return substrings |
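Restated standalone, the greedy wrap can be checked on a short string; it assumes no single word exceeds the width (otherwise the inner backward scan runs off the start of the string):

```python
def word_wrap(text, width=1023):
    # Greedy wrap: cut each line at the last space before the width limit.
    substrings = []
    string = text
    while len(string) > width:
        index = width - 1
        while not string[index].isspace():
            index -= 1
        substrings.append(string[0:index])
        string = string[index + 1:]  # drop the space at the cut point
    substrings.append(string)
    return substrings

assert word_wrap('the quick brown fox', width=10) == ['the quick', 'brown fox']
```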
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def speak(self, text):
""" The main function to convert text into speech. """ |
if not self.is_valid_string(text):
raise Exception("%s is not ISO-8859-1 compatible." % (text))
# Maximum allowable 1023 characters per message
if len(text) > 1023:
lines = self.word_wrap(text, width=1023)
for line in lines:
self.queue.put("S%s" % (line))
else:
self.queue.put("S%s" % (text)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_site(self, url, headers, cookies, timeout, driver_args, driver_kwargs):
""" Try and return page content in the requested format using selenium """ |
try:
# **TODO**: Find what exception this will throw and catch it and call
# self.driver.execute_script("window.stop()")
# Then still try and get the source from the page
self.driver.set_page_load_timeout(timeout)
self.driver.get(url)
header_data = self.get_selenium_header()
status_code = header_data['status-code']
# Set data to access from script
self.status_code = status_code
self.url = self.driver.current_url
except TimeoutException:
logger.warning("Page timeout: {}".format(url))
try:
scraper_monitor.failed_url(url, 'Timeout')
except (NameError, AttributeError):
# Happens when scraper_monitor is not being used/setup
pass
except Exception:
logger.exception("Unknown problem with scraper_monitor sending a failed url")
except Exception as e:
raise e.with_traceback(sys.exc_info()[2])
else:
# If an exception was not thrown then check the http status code
if status_code < 400:
# If the http status code is not an error
return self.driver.page_source
else:
# If http status code is 400 or greater
raise SeleniumHTTPError("Status code >= 400", status_code=status_code) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def sync_one(self, aws_syncr, amazon, key):
"""Make sure this key is as defined""" |
key_info = amazon.kms.key_info(key.name, key.location)
if not key_info:
amazon.kms.create_key(key.name, key.description, key.location, key.grant, key.policy.document)
else:
amazon.kms.modify_key(key_info, key.name, key.description, key.location, key.grant, key.policy.document) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _walk(self, root_path=''):
''' an iterator method which walks the file structure of the dropbox collection '''
title = '%s._walk' % self.__class__.__name__
if root_path:
root_path = '/%s' % root_path
try:
response = self.dropbox.files_list_folder(path=root_path, recursive=True)
for record in response.entries:
if not isinstance(record, self.objects.FileMetadata):
continue
yield record.path_display[1:]
if response.has_more:
while response.has_more:
response = self.dropbox.files_list_folder_continue(response.cursor)
for record in response.entries:
if not isinstance(record, self.objects.FileMetadata):
continue
yield record.path_display[1:]
except Exception:
raise DropboxConnectionError(title) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _read_stdin():
""" Generator for reading from standard input in nonblocking mode. Other ways of reading from ``stdin`` in python waits, until the buffer is big enough, or until EOF character is sent. This functions yields immediately after each line. """ |
line = sys.stdin.readline()
while line:
yield line
line = sys.stdin.readline() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _parse_line(line):
""" Convert one line from the extended log to dict. Args: line (str):
Line which will be converted. Returns: dict: dict with ``timestamp``, ``command``, ``username`` and ``path`` \ keys. Note: Typical line looks like this:: /home/ftp/xex/asd bsd.dat, xex, STOR, 1398351777 Filename may contain ``,`` character, so I am ``rsplitting`` the line from the end to the beginning. """ |
line, timestamp = line.rsplit(",", 1)
line, command = line.rsplit(",", 1)
path, username = line.rsplit(",", 1)
return {
"timestamp": timestamp.strip(),
"command": command.strip(),
"username": username.strip(),
"path": path,
} |
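Running the parser on the sample line from the docstring shows how the right-to-left splitting keeps commas (and spaces) in the filename intact:

```python
def _parse_line(line):
    # rsplit from the right, because only the path may contain commas;
    # the last three fields are always username, command, timestamp.
    line, timestamp = line.rsplit(",", 1)
    line, command = line.rsplit(",", 1)
    path, username = line.rsplit(",", 1)
    return {
        "timestamp": timestamp.strip(),
        "command": command.strip(),
        "username": username.strip(),
        "path": path,
    }

record = _parse_line("/home/ftp/xex/asd bsd.dat, xex, STOR, 1398351777")
print(record["path"])
# → /home/ftp/xex/asd bsd.dat
```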
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def process_log(file_iterator):
""" Process the extended ProFTPD log. Args: file_iterator (file):
any file-like iterator for reading the log or stdin (see :func:`_read_stdin`). Yields: ImportRequest: with each import. """ |
for line in file_iterator:
if "," not in line:
continue
parsed = _parse_line(line)
if parsed["command"].upper() not in ("DELE", "DEL"):
continue
# don't react to anything else, than trigger in form of deleted
# "lock" file
if os.path.basename(parsed["path"]) != settings.LOCK_FILENAME:
continue
# react only to lock file in in home directory
dir_name = os.path.dirname(parsed["path"])
if settings.LOCK_ONLY_IN_HOME:
if dir_name != settings.DATA_PATH + parsed["username"]:
continue
# deleted user
if not os.path.exists(os.path.dirname(parsed["path"])):
continue
# old record, which doesn't need to be parsed again
if os.path.exists(parsed["path"]):
continue
logger.info(
"Request for processing from user '%s'." % parsed["username"]
)
yield process_import_request(
username=parsed["username"],
path=os.path.dirname(parsed["path"]),
timestamp=parsed["timestamp"],
logger_handler=logger
) |
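The in-memory part of the filter chain above (command check plus lock-file name check) can be sketched on its own. The filesystem and home-directory checks are omitted, and the `LOCK_FILENAME` value is an assumption standing in for the real `settings.LOCK_FILENAME`:

```python
import os

LOCK_FILENAME = "delete_me_to_import"  # assumed value; real one comes from settings

def is_import_trigger(parsed):
    # Only a DELE/DEL of the lock file counts as an import trigger;
    # everything else in the log is ignored.
    if parsed["command"].upper() not in ("DELE", "DEL"):
        return False
    return os.path.basename(parsed["path"]) == LOCK_FILENAME

print(is_import_trigger({"command": "DELE",
                         "path": "/home/ftp/xex/" + LOCK_FILENAME}))
# → True
print(is_import_trigger({"command": "STOR",
                         "path": "/home/ftp/xex/file.dat"}))
# → False
```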
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def main(filename):
""" Open `filename` and start processing it line by line. If `filename` is none, process lines from `stdin`. """ |
if filename:
if not os.path.exists(filename):
logger.error("'%s' doesn't exists!" % filename)
sys.stderr.write("'%s' doesn't exists!\n" % filename)
sys.exit(1)
logger.info("Processing '%s'" % filename)
for ir in process_log(sh.tail("-f", filename, _iter=True)):
print(ir)
else:
logger.info("Processing stdin.")
for ir in process_log(_read_stdin()):
print(ir)
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def serialize(self, data):
""" Invoke the serializer These are common things for all serializers. Mostly, stuff to do with managing headers. The data passed in may not be reliable for much of anything. Conditionally, set the Content-Type header unless it has already been set. """ |
if not self.resp.content_type:
self.resp.set_header('Content-Type', getattr(self, 'MIMETYPE')) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def printData(self, output = sys.stdout):
"""Output all the file data to be written to any writable output""" |
self.printDatum("Name : ", self.fileName, output)
self.printDatum("Author : ", self.author, output)
self.printDatum("Repository : ", self.repository, output)
self.printDatum("Category : ", self.category, output)
self.printDatum("Downloads : ", self.downloads, output)
self.printDatum("Date Uploaded : ", self.fileDate, output)
self.printDatum("File Size : ", self.fileSize, output)
self.printDatum("Documentation : ", self.documentation, output)
self.printDatum("Source Code : ", self.sourceCode, output)
self.printDatum("Description : ", self.description, output)
print("\n\n", file=output) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def raise_errors_on_nested_writes(method_name, serializer, validated_data):
""" Give explicit errors when users attempt to pass writable nested data. If we don't do this explicitly they'd get a less helpful error when calling `.save()` on the serializer. We don't *automatically* support these sorts of nested writes because there are too many ambiguities to define a default behavior. Eg. Suppose we have a `UserSerializer` with a nested profile. How should we handle the case of an update, where the `profile` relationship does not exist? Any of the following might be valid: * Raise an application error. * Silently ignore the nested part of the update. * Automatically create a profile instance. """ |
# Ensure we don't have a writable nested field. For example:
#
# class UserSerializer(ModelSerializer):
# ...
# profile = ProfileSerializer()
assert not any(
isinstance(field, BaseSerializer) and
(key in validated_data) and
isinstance(validated_data[key], (list, dict))
for key, field in serializer.fields.items()
), (
'The `.{method_name}()` method does not support writable nested '
'fields by default.\nWrite an explicit `.{method_name}()` method for '
'serializer `{module}.{class_name}`, or set `read_only=True` on '
'nested serializer fields.'.format(
method_name=method_name,
module=serializer.__class__.__module__,
class_name=serializer.__class__.__name__
)
)
# Ensure we don't have a writable dotted-source field. For example:
#
# class UserSerializer(ModelSerializer):
# ...
# address = serializer.CharField('profile.address')
assert not any(
'.' in field.source and
(key in validated_data) and
isinstance(validated_data[key], (list, dict))
for key, field in serializer.fields.items()
), (
'The `.{method_name}()` method does not support writable dotted-source '
'fields by default.\nWrite an explicit `.{method_name}()` method for '
'serializer `{module}.{class_name}`, or set `read_only=True` on '
'dotted-source serializer fields.'.format(
method_name=method_name,
module=serializer.__class__.__module__,
class_name=serializer.__class__.__name__
)
) |
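The core of both assertions is spotting list- or dict-valued entries in `validated_data`; a simplified standalone check (ignoring the serializer-field and dotted-source tests that the real code also performs) looks like:

```python
def find_writable_nested(validated_data):
    # A value that is itself a list or dict is the signature of a
    # writable nested field slipping through validation.
    return [key for key, value in validated_data.items()
            if isinstance(value, (list, dict))]

print(find_writable_nested({"name": "ann", "profile": {"address": "x"}}))
# → ['profile']
```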
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def many_init(cls, *args, **kwargs):
""" This method implements the creation of a `ListSerializer` parent class when `many=True` is used. You can customize it if you need to control which keyword arguments are passed to the parent, and which are passed to the child. Note that we're over-cautious in passing most arguments to both parent and child classes in order to try to cover the general case. If you're overriding this method you'll probably want something much simpler, eg: @classmethod def many_init(cls, *args, **kwargs):
kwargs['child'] = cls() return CustomListSerializer(*args, **kwargs) """ |
allow_empty = kwargs.pop('allow_empty', None)
child_serializer = cls(*args, **kwargs)
list_kwargs = {
'child': child_serializer,
}
if allow_empty is not None:
list_kwargs['allow_empty'] = allow_empty
list_kwargs.update({
key: value for key, value in kwargs.items()
if key in LIST_SERIALIZER_KWARGS
})
meta = getattr(cls, 'Meta', None)
list_serializer_class = getattr(meta, 'list_serializer_class', ListSerializer)
return list_serializer_class(*args, **list_kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_value(self, dictionary):
""" Given the input dictionary, return the field value. """ |
# We override the default field access in order to support
# lists in HTML forms.
if html.is_html_input(dictionary):
return html.parse_html_list(dictionary, prefix=self.field_name)
return dictionary.get(self.field_name, empty) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_field_names(self, declared_fields, info):
""" Returns the list of all field names that should be created when instantiating this serializer class. This is based on the default set of fields, but also takes into account the `Meta.fields` or `Meta.exclude` options if they have been specified. """ |
fields = getattr(self.Meta, 'fields', None)
exclude = getattr(self.Meta, 'exclude', None)
if fields and fields != ALL_FIELDS and not isinstance(fields, (list, tuple)):
raise TypeError(
'The `fields` option must be a list or tuple or "__all__". '
'Got %s.' % type(fields).__name__
)
if exclude and not isinstance(exclude, (list, tuple)):
raise TypeError(
'The `exclude` option must be a list or tuple. Got %s.' %
type(exclude).__name__
)
assert not (fields and exclude), (
"Cannot set both 'fields' and 'exclude' options on "
"serializer {serializer_class}.".format(
serializer_class=self.__class__.__name__
)
)
if fields is None and exclude is None:
warnings.warn(
"Creating a ModelSerializer without either the 'fields' "
"attribute or the 'exclude' attribute is pending deprecation "
"since 3.3.0. Add an explicit fields = '__all__' to the "
"{serializer_class} serializer.".format(
serializer_class=self.__class__.__name__
),
PendingDeprecationWarning
)
if fields == ALL_FIELDS:
fields = None
if fields is not None:
# Ensure that all declared fields have also been included in the
# `Meta.fields` option.
# Do not require any fields that are declared a parent class,
# in order to allow serializer subclasses to only include
# a subset of fields.
required_field_names = set(declared_fields)
for cls in self.__class__.__bases__:
required_field_names -= set(getattr(cls, '_declared_fields', []))
for field_name in required_field_names:
assert field_name in fields, (
"The field '{field_name}' was declared on serializer "
"{serializer_class}, but has not been included in the "
"'fields' option.".format(
field_name=field_name,
serializer_class=self.__class__.__name__
)
)
return fields
# Use the default set of field names if `Meta.fields` is not specified.
fields = self.get_default_field_names(declared_fields, info)
if exclude is not None:
# If `Meta.exclude` is included, then remove those fields.
for field_name in exclude:
assert field_name in fields, (
"The field '{field_name}' was included on serializer "
"{serializer_class} in the 'exclude' option, but does "
"not match any model field.".format(
field_name=field_name,
serializer_class=self.__class__.__name__
)
)
fields.remove(field_name)
return fields |
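The precedence logic above (`fields` wins, `'__all__'` falls back to the defaults, `exclude` subtracts from the defaults) can be condensed into a small standalone sketch, with validation reduced to the mutual-exclusion assert:

```python
ALL_FIELDS = '__all__'

def resolve_field_names(fields, exclude, default_fields):
    # Simplified precedence: explicit `fields` wins; '__all__' or no
    # option falls back to the defaults minus `exclude`.
    assert not (fields and exclude), "Cannot set both 'fields' and 'exclude'"
    if fields == ALL_FIELDS:
        fields = None
    if fields is not None:
        return list(fields)
    result = list(default_fields)
    for name in (exclude or []):
        result.remove(name)
    return result

print(resolve_field_names(None, ['password'], ['id', 'name', 'password']))
# → ['id', 'name']
```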
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def build_standard_field(self, field_name, model_field):
""" Create regular model fields. """ |
field_mapping = ClassLookupDict(self.serializer_field_mapping)
field_class = field_mapping[model_field]
field_kwargs = get_field_kwargs(field_name, model_field)
if 'choices' in field_kwargs:
# Fields with choices get coerced into `ChoiceField`
# instead of using their regular typed field.
field_class = self.serializer_choice_field
# Some model fields may introduce kwargs that would not be valid
# for the choice field. We need to strip these out.
# Eg. models.DecimalField(max_digits=3, decimal_places=1, choices=DECIMAL_CHOICES)
valid_kwargs = set((
'read_only', 'write_only',
'required', 'default', 'initial', 'source',
'label', 'help_text', 'style',
'error_messages', 'validators', 'allow_null', 'allow_blank',
'choices'
))
for key in list(field_kwargs.keys()):
if key not in valid_kwargs:
field_kwargs.pop(key)
if not issubclass(field_class, ModelField):
# `model_field` is only valid for the fallback case of
# `ModelField`, which is used when no other typed field
# matched to the model field.
field_kwargs.pop('model_field', None)
if not issubclass(field_class, CharField) and not issubclass(field_class, ChoiceField):
# `allow_blank` is only valid for textual fields.
field_kwargs.pop('allow_blank', None)
if postgres_fields and isinstance(model_field, postgres_fields.ArrayField):
# Populate the `child` argument on `ListField` instances generated
# for the PostgreSQL-specific `ArrayField`.
child_model_field = model_field.base_field
child_field_class, child_field_kwargs = self.build_standard_field(
'child', child_model_field
)
field_kwargs['child'] = child_field_class(**child_field_kwargs)
return field_class, field_kwargs |