| text_prompt | code_prompt |
|---|---|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def field_set(doc, field, value):
""" Sets or replaces a value for a field in a locally cached Document object. To remove the field set the ``value`` to None. :param Document doc: Locally cached Document object that can be a Document, DesignDocument or dict. :param str field: Name of the field to set. :param value: Value to set the field to. """ |
if value is None:
del doc[field]
else:
doc[field] = value |
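The helper's semantics can be exercised on a plain dict, since the locally cached Document behaves like one here. This is a minimal, self-contained sketch (not the library class itself), showing that passing ``value=None`` removes the field:

```python
# Stand-in for cloudant's field_set: set a field, or delete it when
# value is None.
def field_set(doc, field, value):
    if value is None:
        del doc[field]
    else:
        doc[field] = value

doc = {'_id': 'doc1'}
field_set(doc, 'name', 'julia')   # sets the field
field_set(doc, 'name', None)      # removes it again
```

Note that deleting a field that is not present raises ``KeyError``, just as ``del`` does on any dict.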
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _update_field(self, action, field, value, max_tries, tries=0):
""" Private update_field method. Wrapped by Document.update_field. Tracks a "tries" var to help limit recursion. """ |
# Refresh our view of the document.
self.fetch()
# Update the field.
action(self, field, value)
# Attempt to save, retrying conflicts up to max_tries.
try:
self.save()
except requests.HTTPError as ex:
if tries < max_tries and ex.response.status_code == 409:
self._update_field(
action, field, value, max_tries, tries=tries+1)
else:
raise |
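The fetch/apply/save retry loop above can be sketched standalone. ``ConflictError`` and ``FlakyDoc`` are hypothetical stand-ins, not library classes: they play the role of a 409 ``requests.HTTPError`` and a remote document whose first few saves conflict.

```python
class ConflictError(Exception):
    """Stand-in for an HTTP 409 conflict response."""

class FlakyDoc:
    def __init__(self, conflicts):
        self.data = {}
        self.conflicts = conflicts   # number of saves that will conflict
        self.saves = 0

    def fetch(self):
        pass  # would refresh the local copy from the server

    def save(self):
        if self.conflicts > 0:
            self.conflicts -= 1
            raise ConflictError()
        self.saves += 1

def update_field(doc, field, value, max_tries, tries=0):
    # Refresh, apply the change, and retry conflicts up to max_tries.
    doc.fetch()
    doc.data[field] = value
    try:
        doc.save()
    except ConflictError:
        if tries < max_tries:
            update_field(doc, field, value, max_tries, tries=tries + 1)
        else:
            raise

doc = FlakyDoc(conflicts=2)
update_field(doc, 'words', ['foo'], max_tries=10)
```

After two simulated conflicts, the third attempt saves successfully; with ``max_tries`` exhausted the conflict would propagate to the caller instead.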
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def update_field(self, action, field, value, max_tries=10):
    """
    Updates a field in the remote document.  If a conflict exists, the
    document is re-fetched from the remote database and the update is
    retried.  This is performed up to ``max_tries`` number of times.

    Use this method when you want to update a single field in a document,
    and don't want to risk clobbering other people's changes to the
    document in other fields, but also don't want the caller to implement
    logic to deal with conflicts.  For example:

    .. code-block:: python

        # Append the string 'foo' to the 'words' list of Document doc.
        doc.update_field(
            action=doc.list_field_append,
            field='words',
            value='foo'
        )

    :param callable action: A routine that takes a Document object,
        a field name, and a value.  The routine should attempt to update
        a field in the locally cached Document object with the given
        value, using whatever logic is appropriate.  Valid actions are
        :func:`~cloudant.document.Document.list_field_append`,
        :func:`~cloudant.document.Document.list_field_remove`,
        :func:`~cloudant.document.Document.field_set`
    :param str field: Name of the field to update
    :param value: Value to update the field with
    :param int max_tries: In the case of a conflict, the number of retries
        to attempt
    """ |
self._update_field(action, field, value, max_tries) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def delete(self):
""" Removes the document from the remote database and clears the content of the locally cached Document object with the exception of the ``_id`` field. In order to successfully remove a document from the remote database, a ``_rev`` value must exist in the locally cached Document object. """ |
if not self.get("_rev"):
raise CloudantDocumentException(103)
del_resp = self.r_session.delete(
self.document_url,
params={"rev": self["_rev"]},
)
del_resp.raise_for_status()
_id = self['_id']
self.clear()
self['_id'] = _id |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def delete_attachment(self, attachment, headers=None):
""" Removes an attachment from a remote document and refreshes the locally cached document object. :param str attachment: Attachment file name used to identify the attachment. :param dict headers: Optional, additional headers to be sent with request. :returns: Attachment deletion status in JSON format """ |
# need latest rev
self.fetch()
attachment_url = '/'.join((self.document_url, attachment))
if headers is None:
headers = {'If-Match': self['_rev']}
else:
headers['If-Match'] = self['_rev']
resp = self.r_session.delete(
attachment_url,
headers=headers
)
resp.raise_for_status()
super(Document, self).__setitem__('_rev', response_to_json_dict(resp)['rev'])
# Execute logic only if attachment metadata exists locally
if self.get('_attachments'):
# Remove the attachment metadata for the specified attachment
if self['_attachments'].get(attachment):
del self['_attachments'][attachment]
# Remove empty attachment metadata from the local dictionary
if not self['_attachments']:
super(Document, self).__delitem__('_attachments')
return response_to_json_dict(resp) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def put_attachment(self, attachment, content_type, data, headers=None):
""" Adds a new attachment, or updates an existing attachment, to the remote document and refreshes the locally cached Document object accordingly. :param attachment: Attachment file name used to identify the attachment. :param content_type: The http ``Content-Type`` of the attachment used as an additional header. :param data: Attachment data defining the attachment content. :param headers: Optional, additional headers to be sent with request. :returns: Attachment addition/update status in JSON format """ |
# need latest rev
self.fetch()
attachment_url = '/'.join((self.document_url, attachment))
if headers is None:
headers = {
'If-Match': self['_rev'],
'Content-Type': content_type
}
else:
headers['If-Match'] = self['_rev']
headers['Content-Type'] = content_type
resp = self.r_session.put(
attachment_url,
data=data,
headers=headers
)
resp.raise_for_status()
self.fetch()
return response_to_json_dict(resp) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def py_to_couch_validate(key, val):
""" Validates the individual parameter key and value. """ |
if key not in RESULT_ARG_TYPES:
raise CloudantArgumentError(116, key)
# pylint: disable=unidiomatic-typecheck
# Validate argument values and ensure that a boolean is not passed in
# if an integer is expected
if (not isinstance(val, RESULT_ARG_TYPES[key]) or
(type(val) is bool and int in RESULT_ARG_TYPES[key])):
raise CloudantArgumentError(117, key, RESULT_ARG_TYPES[key])
if key == 'keys':
for key_list_val in val:
if (not isinstance(key_list_val, RESULT_ARG_TYPES['key']) or
type(key_list_val) is bool):
raise CloudantArgumentError(134, RESULT_ARG_TYPES['key'])
if key == 'stale':
if val not in ('ok', 'update_after'):
raise CloudantArgumentError(135, val) |
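The explicit ``type(val) is bool`` guard exists because ``bool`` is a subclass of ``int`` in Python, so ``isinstance(True, int)`` passes and a plain ``isinstance`` check would accept ``True`` where an integer such as ``limit`` is expected. A minimal sketch, where ``RESULT_ARG_TYPES`` is a small stand-in for the library's real table:

```python
RESULT_ARG_TYPES = {'limit': (int,), 'descending': (bool,)}

def validate(key, val):
    # Reject wrong types, and reject bools where an int is expected,
    # since isinstance(True, int) is True.
    if not isinstance(val, RESULT_ARG_TYPES[key]) or (
            type(val) is bool and int in RESULT_ARG_TYPES[key]):
        raise ValueError(key)

validate('limit', 10)         # ok
validate('descending', True)  # ok

try:
    validate('limit', True)   # bool where int expected
    rejected = False
except ValueError:
    rejected = True
```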
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _py_to_couch_translate(key, val):
""" Performs the conversion of the Python parameter value to its CouchDB equivalent. """ |
try:
if key in ['keys', 'endkey_docid', 'startkey_docid', 'stale', 'update']:
return {key: val}
if val is None:
return {key: None}
arg_converter = TYPE_CONVERTERS.get(type(val))
return {key: arg_converter(val)}
except Exception as ex:
raise CloudantArgumentError(136, key, ex) |
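The converter lookup works because most CouchDB query parameters must travel JSON-encoded on the query string, while a handful (handled before the lookup above) pass through verbatim. ``TYPE_CONVERTERS`` below is a hedged stand-in for the library's real table, reduced to a few types:

```python
import json

# Hypothetical stand-in for the library's TYPE_CONVERTERS mapping.
TYPE_CONVERTERS = {
    str: json.dumps,                            # 'julia' -> '"julia"'
    list: json.dumps,
    int: str,
    bool: lambda v: 'true' if v else 'false',   # lowercase for CouchDB
}

def translate(key, val):
    if val is None:
        return {key: None}
    return {key: TYPE_CONVERTERS[type(val)](val)}
```

A ``startkey`` of ``'julia'`` therefore becomes the JSON string ``'"julia"'``, which is what the server expects for key parameters.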
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_docs(r_session, url, encoder=None, headers=None, **params):
""" Provides a helper for functions that require GET or POST requests with a JSON, text, or raw response containing documents. :param r_session: Authentication session from the client :param str url: URL containing the endpoint :param JSONEncoder encoder: Custom encoder from the client :param dict headers: Optional HTTP Headers to send with the request :returns: Raw response content from the specified endpoint """ |
keys_list = params.pop('keys', None)
keys = None
if keys_list is not None:
keys = json.dumps({'keys': keys_list}, cls=encoder)
f_params = python_to_couch(params)
resp = None
if keys is not None:
# If we're using POST we are sending JSON so add the header
if headers is None:
headers = {}
headers['Content-Type'] = 'application/json'
resp = r_session.post(url, headers=headers, params=f_params, data=keys)
else:
resp = r_session.get(url, headers=headers, params=f_params)
resp.raise_for_status()
return resp |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def response_to_json_dict(response, **kwargs):
""" Standard place to convert responses to JSON. :param response: requests response object :param **kwargs: arguments accepted by json.loads :returns: dict of JSON response """ |
if response.encoding is None:
response.encoding = 'utf-8'
return json.loads(response.text, **kwargs) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def cloudant_iam(account_name, api_key, **kwargs):
    """
    Provides a context manager to create a Cloudant session using IAM
    authentication and provide access to databases, docs etc.

    :param account_name: Cloudant account name.
    :param api_key: IAM authentication API key.

    For example:

    .. code-block:: python

        # cloudant context manager
        from cloudant import cloudant_iam

        with cloudant_iam(ACCOUNT_NAME, API_KEY) as client:
            # Context handles connect() and disconnect() for you.
            # Perform library operations within this context.  Such as:
            print(client.all_dbs())
    """ |
cloudant_session = Cloudant.iam(account_name, api_key, **kwargs)
cloudant_session.connect()
yield cloudant_session
cloudant_session.disconnect() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def couchdb(user, passwd, **kwargs):
    """
    Provides a context manager to create a CouchDB session and provide
    access to databases, docs etc.

    :param str user: Username used to connect to CouchDB.
    :param str passwd: Passcode used to connect to CouchDB.
    :param str url: URL for CouchDB server.
    :param str encoder: Optional json Encoder object used to encode
        documents for storage.  Defaults to json.JSONEncoder.

    For example:

    .. code-block:: python

        # couchdb context manager
        from cloudant import couchdb

        with couchdb(USERNAME, PASSWORD, url=COUCHDB_URL) as client:
            # Context handles connect() and disconnect() for you.
            # Perform library operations within this context.  Such as:
            print(client.all_dbs())
    """ |
couchdb_session = CouchDB(user, passwd, **kwargs)
couchdb_session.connect()
yield couchdb_session
couchdb_session.disconnect() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def couchdb_admin_party(**kwargs):
    """
    Provides a context manager to create a CouchDB session in Admin Party
    mode and provide access to databases, docs etc.

    :param str url: URL for CouchDB server.
    :param str encoder: Optional json Encoder object used to encode
        documents for storage.  Defaults to json.JSONEncoder.

    For example:

    .. code-block:: python

        # couchdb_admin_party context manager
        from cloudant import couchdb_admin_party

        with couchdb_admin_party(url=COUCHDB_URL) as client:
            # Context handles connect() and disconnect() for you.
            # Perform library operations within this context.  Such as:
            print(client.all_dbs())
    """ |
couchdb_session = CouchDB(None, None, True, **kwargs)
couchdb_session.connect()
yield couchdb_session
couchdb_session.disconnect() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _handle_result_by_index(self, idx):
""" Handle processing when the result argument provided is an integer. """ |
if idx < 0:
return None
opts = dict(self.options)
skip = opts.pop('skip', 0)
limit = opts.pop('limit', None)
py_to_couch_validate('skip', skip)
py_to_couch_validate('limit', limit)
if limit is not None and idx >= limit:
# Result is out of range
return dict()
return self._ref(skip=skip+idx, limit=1, **opts) |
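The translation above reduces to a small pure function: element ``idx`` of a result that already carries ``skip``/``limit`` options maps onto a one-row window. A sketch of just that mapping (the real method then passes the window to ``self._ref``):

```python
def index_to_window(idx, skip=0, limit=None):
    """Map result[idx] onto skip/limit options for a one-row fetch."""
    if idx < 0:
        return None            # negative indexing is not supported
    if limit is not None and idx >= limit:
        return {}              # out of range of the caller's limit
    return {'skip': skip + idx, 'limit': 1}
```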
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _handle_result_by_key(self, key):
""" Handle processing when the result argument provided is a document key. """ |
invalid_options = ('key', 'keys', 'startkey', 'endkey')
if any(x in invalid_options for x in self.options):
raise ResultException(102, invalid_options, self.options)
return self._ref(key=key, **self.options) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _handle_result_by_idx_slice(self, idx_slice):
""" Handle processing when the result argument provided is an index slice. """ |
opts = dict(self.options)
skip = opts.pop('skip', 0)
limit = opts.pop('limit', None)
py_to_couch_validate('skip', skip)
py_to_couch_validate('limit', limit)
start = idx_slice.start
stop = idx_slice.stop
data = None
# start and stop cannot be None, both must be non-negative,
# and start must be less than stop
if all(i is not None and i >= 0 for i in [start, stop]) and start < stop:
if limit is not None:
if start >= limit:
# Result is out of range
return dict()
if stop > limit:
# Ensure that slice does not extend past original limit
return self._ref(skip=skip+start, limit=limit-start, **opts)
data = self._ref(skip=skip+start, limit=stop-start, **opts)
elif start is not None and stop is None and start >= 0:
if limit is not None:
if start >= limit:
# Result is out of range
return dict()
# Ensure that slice does not extend past original limit
data = self._ref(skip=skip+start, limit=limit-start, **opts)
else:
data = self._ref(skip=skip+start, **opts)
elif start is None and stop is not None and stop >= 0:
if limit is not None and stop > limit:
# Ensure that slice does not extend past original limit
data = self._ref(skip=skip, limit=limit, **opts)
else:
data = self._ref(skip=skip, limit=stop, **opts)
return data |
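The three slice branches above, ``[start:stop]``, ``[start:]``, and ``[:stop]``, each translate to a skip/limit window clipped to any pre-existing ``limit``. A pure-function sketch of that translation (the real method passes the window to ``self._ref``):

```python
def slice_to_window(start, stop, skip=0, limit=None):
    """Translate a slice into skip/limit options, clipped to limit."""
    if start is not None and stop is not None and 0 <= start < stop:
        if limit is not None:
            if start >= limit:
                return {}                       # out of range
            stop = min(stop, limit)             # clip to original limit
        return {'skip': skip + start, 'limit': stop - start}
    if start is not None and stop is None and start >= 0:
        if limit is not None:
            if start >= limit:
                return {}                       # out of range
            return {'skip': skip + start, 'limit': limit - start}
        return {'skip': skip + start}
    if start is None and stop is not None and stop >= 0:
        if limit is not None:
            stop = min(stop, limit)             # clip to original limit
        return {'skip': skip, 'limit': stop}
    return None
```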
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _handle_result_by_key_slice(self, key_slice):
""" Handle processing when the result argument provided is a key slice. """ |
invalid_options = ('key', 'keys', 'startkey', 'endkey')
if any(x in invalid_options for x in self.options):
raise ResultException(102, invalid_options, self.options)
if isinstance(key_slice.start, ResultByKey):
start = key_slice.start()
else:
start = key_slice.start
if isinstance(key_slice.stop, ResultByKey):
stop = key_slice.stop()
else:
stop = key_slice.stop
if (start is not None and stop is not None and
isinstance(start, type(stop))):
data = self._ref(startkey=start, endkey=stop, **self.options)
elif start is not None and stop is None:
data = self._ref(startkey=start, **self.options)
elif start is None and stop is not None:
data = self._ref(endkey=stop, **self.options)
else:
data = None
return data |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _iterator(self, response):
'''
Iterate through view data.
'''
while True:
result = deque(self._parse_data(response))
del response
if result:
doc_count = len(result)
last = result.pop()
while result:
yield result.popleft()
# We request self._real_page_size (page_size + 1) results; if
# we received fewer than that, we are on the last page and need
# to return the last result as well.
if doc_count < self._real_page_size:
yield last
break
del result
# if we are in a view, keys could be duplicate so we
# need to start from the right docid
if last['id']:
response = self._call(startkey=last['key'],
startkey_docid=last['id'])
# reduce result keys are unique by definition
else:
response = self._call(startkey=last['key'])
else:
break |
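The paging trick above, fetching ``page_size + 1`` rows, yielding all but the last, and starting the next request at the held-back row, can be demonstrated standalone. ``fetch`` is a stand-in for the view call, paging by offset instead of by key for simplicity:

```python
from collections import deque

def paginate(rows, page_size):
    """Yield all rows, requesting page_size + 1 at a time."""
    def fetch(start=0):
        return rows[start:start + page_size + 1]

    start = 0
    while True:
        page = deque(fetch(start))
        if not page:
            break
        count = len(page)
        last = page.pop()             # hold back the extra row
        while page:
            yield page.popleft()
        if count < page_size + 1:     # short page: this was the last one
            yield last
            break
        start += page_size            # next request begins at `last`
```

Holding back the overlap row is what lets the real iterator restart at ``last['key']``/``last['id']`` without emitting a duplicate.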
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _iterator(self, response):
'''
Iterate through query data.
'''
while True:
result = self._parse_data(response)
bookmark = response.get('bookmark')
if result:
for row in result:
yield row
del result
if not bookmark:
break
response = self._call(bookmark=bookmark)
else:
break |
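Bookmark paging is simpler than key paging: each response carries an opaque bookmark that keys the next request, and an empty result page ends the iteration. A sketch with the server replaced by a dict from bookmark to ``(rows, next_bookmark)``:

```python
def iterate(pages):
    """Yield rows across bookmark-linked pages; None keys the first."""
    bookmark = None
    while True:
        rows, bookmark = pages[bookmark]
        if not rows:
            break                 # empty page: nothing left
        for row in rows:
            yield row
        if not bookmark:
            break                 # no bookmark: no next page

pages = {
    None: ([1, 2], 'a'),
    'a': ([3], 'b'),
    'b': ([], None),
}
result = list(iterate(pages))
```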
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def url(self):
""" Constructs and returns the View URL. :returns: View URL """ |
if self._partition_key:
base_url = self.design_doc.document_partition_url(
self._partition_key)
else:
base_url = self.design_doc.document_url
return '/'.join((
base_url,
'_view',
self.view_name
)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def as_a_dict(self):
""" Displays the index as a dictionary. This includes the design document id, index name, index type, and index definition. :returns: Dictionary representation of the index as a dictionary """ |
index_dict = {
'ddoc': self._ddoc_id,
'name': self._name,
'type': self._type,
'def': self._def
}
if self._partitioned:
index_dict['partitioned'] = True
return index_dict |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create(self):
""" Creates the current index in the remote database. """ |
payload = {'type': self._type}
if self._ddoc_id:
if isinstance(self._ddoc_id, STRTYPE):
if self._ddoc_id.startswith('_design/'):
payload['ddoc'] = self._ddoc_id[8:]
else:
payload['ddoc'] = self._ddoc_id
else:
raise CloudantArgumentError(122, self._ddoc_id)
if self._name:
if isinstance(self._name, STRTYPE):
payload['name'] = self._name
else:
raise CloudantArgumentError(123, self._name)
self._def_check()
payload['index'] = self._def
if self._partitioned:
payload['partitioned'] = True
headers = {'Content-Type': 'application/json'}
resp = self._r_session.post(
self.index_url,
data=json.dumps(payload, cls=self._database.client.encoder),
headers=headers
)
resp.raise_for_status()
resp_json = response_to_json_dict(resp)
self._ddoc_id = resp_json['id']
self._name = resp_json['name'] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def delete(self):
""" Removes the current index from the remote database. """ |
if not self._ddoc_id:
raise CloudantArgumentError(125)
if not self._name:
raise CloudantArgumentError(126)
ddoc_id = self._ddoc_id
if ddoc_id.startswith('_design/'):
ddoc_id = ddoc_id[8:]
url = '/'.join((self.index_url, ddoc_id, self._type, self._name))
resp = self._r_session.delete(url)
resp.raise_for_status() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _def_check(self):
""" Checks that the definition provided contains only valid arguments for a text index. """ |
if self._def != dict():
for key, val in iteritems_(self._def):
if key not in TEXT_INDEX_ARGS:
raise CloudantArgumentError(127, key)
if not isinstance(val, TEXT_INDEX_ARGS[key]):
raise CloudantArgumentError(128, key, TEXT_INDEX_ARGS[key]) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def fetch(self):
""" Retrieves the content of the current security document from the remote database and populates the locally cached SecurityDocument object with that content. A call to fetch will overwrite any dictionary content currently in the locally cached SecurityDocument object. """ |
resp = self.r_session.get(self.document_url)
resp.raise_for_status()
self.clear()
self.update(response_to_json_dict(resp)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def save(self):
""" Saves changes made to the locally cached SecurityDocument object's data structures to the remote database. """ |
resp = self.r_session.put(
self.document_url,
data=self.json(),
headers={'Content-Type': 'application/json'}
)
resp.raise_for_status() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def xpm(Pdb=Pdb):
""" To be used inside an except clause, enter a post-mortem pdb related to the just caught exception. """ |
info = sys.exc_info()
print(traceback.format_exc())
post_mortem(info[2], Pdb) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def complete(self, text, state):
"""Handle completions from fancycompleter and original pdb.""" |
if state == 0:
local._pdbpp_completing = True
mydict = self.curframe.f_globals.copy()
mydict.update(self.curframe_locals)
completer = Completer(mydict)
self._completions = self._get_all_completions(
completer.complete, text)
real_pdb = super(Pdb, self)
for x in self._get_all_completions(real_pdb.complete, text):
if x not in self._completions:
self._completions.append(x)
self._filter_completions(text)
del local._pdbpp_completing
# Remove "\t" from fancycompleter if there are pdb completions.
if len(self._completions) > 1 and self._completions[0] == "\t":
self._completions.pop(0)
try:
return self._completions[state]
except IndexError:
return None |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def do_edit(self, arg):
"Open an editor visiting the current file at the current line"
if arg == '':
filename, lineno = self._get_current_position()
else:
filename, lineno, _ = self._get_position_of_arg(arg)
if filename is None:
return
# this case handles code generated with py.code.Source()
# filename is something like '<0-codegen foo.py:18>'
match = re.match(r'.*<\d+-codegen (.*):(\d+)>', filename)
if match:
filename = match.group(1)
lineno = int(match.group(2))
try:
self._open_editor(self._get_editor_cmd(filename, lineno))
except Exception as exc:
self.error(exc) |
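The codegen filename format handled by the regex above can be exercised on its own: names like ``'<0-codegen foo.py:18>'`` encode the real file and line, and anything else falls through unchanged.

```python
import re

def parse_codegen_name(filename):
    """Extract (path, lineno) from a py.code.Source codegen filename."""
    match = re.match(r'.*<\d+-codegen (.*):(\d+)>', filename)
    if match:
        return match.group(1), int(match.group(2))
    return None
```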
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set_trace(self, frame=None):
"""Remember starting frame. This is used with pytest, which does not use pdb.set_trace(). """ |
if hasattr(local, '_pdbpp_completing'):
# Handle set_trace being called during completion, e.g. with
# fancycompleter's attr_matches.
return
if frame is None:
frame = sys._getframe().f_back
self._via_set_trace_frame = frame
return super(Pdb, self).set_trace(frame) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _remove_bdb_context(evalue):
"""Remove exception context from Pdb from the exception. E.g. "AttributeError: 'Pdb' object has no attribute 'do_foo'", when trying to look up commands (bpo-36494). """ |
removed_bdb_context = evalue
while removed_bdb_context.__context__:
ctx = removed_bdb_context.__context__
if (
isinstance(ctx, AttributeError)
and ctx.__traceback__.tb_frame.f_code.co_name == "onecmd"
):
removed_bdb_context.__context__ = None
break
removed_bdb_context = removed_bdb_context.__context__ |
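The mechanism can be demonstrated standalone: an exception raised inside an ``except`` block implicitly chains the original as ``__context__``, and clearing that attribute hides it from the traceback. This sketch matches on exception type rather than the ``onecmd`` frame name used above:

```python
def remove_context(evalue, ctx_type):
    """Walk the __context__ chain and drop the first ctx_type found."""
    cursor = evalue
    while cursor.__context__:
        if isinstance(cursor.__context__, ctx_type):
            cursor.__context__ = None
            break
        cursor = cursor.__context__

try:
    try:
        raise AttributeError("'Pdb' object has no attribute 'do_foo'")
    except AttributeError:
        # The AttributeError becomes the ValueError's implicit context.
        raise ValueError('user error')
except ValueError as exc:
    err = exc

remove_context(err, AttributeError)
```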
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def args_from_config(func):
"""Decorator that injects parameters from the configuration. """ |
func_args = signature(func).parameters
@wraps(func)
def wrapper(*args, **kwargs):
config = get_config()
for i, argname in enumerate(func_args):
if len(args) > i or argname in kwargs:
continue
elif argname in config:
kwargs[argname] = config[argname]
try:
getcallargs(func, *args, **kwargs)
except TypeError as exc:
msg = "{}\n{}".format(exc.args[0], PALLADIUM_CONFIG_ERROR)
exc.args = (msg,)
raise exc
return func(*args, **kwargs)
wrapper.__wrapped__ = func
return wrapper |
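The injection pattern above can be sketched without Palladium: any parameter the caller leaves unset, positionally and by keyword, is filled in from the configuration. ``CONFIG`` is a plain dict standing in for ``get_config()``, and the ``getcallargs`` error re-wrapping is omitted for brevity:

```python
from functools import wraps
from inspect import signature

CONFIG = {'threshold': 0.5}   # stand-in for get_config()

def args_from_config(func):
    """Fill unsupplied parameters from CONFIG."""
    params = list(signature(func).parameters)

    @wraps(func)
    def wrapper(*args, **kwargs):
        for i, name in enumerate(params):
            if len(args) > i or name in kwargs:
                continue          # caller supplied it
            if name in CONFIG:
                kwargs[name] = CONFIG[name]
        return func(*args, **kwargs)
    return wrapper

@args_from_config
def classify(score, threshold):
    return score >= threshold
```

Explicit arguments always win: ``classify(0.3, threshold=0.1)`` ignores the configured value.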
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def memory_usage_psutil():
""" Return the current process memory usage (RSS and VMS) in MB. """ |
process = psutil.Process(os.getpid())
mem_info = process.memory_info()
mem = mem_info.rss / float(2 ** 20)
mem_vms = mem_info.vms / float(2 ** 20)
return mem, mem_vms |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def version_cmd(argv=sys.argv[1:]):  # pragma: no cover
    """\
    Print the version number of Palladium.

    Usage:
      pld-version [options]

    Options:
      -h --help     Show this screen.
    """ |
docopt(version_cmd.__doc__, argv=argv)
print(__version__) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def upgrade_cmd(argv=sys.argv[1:]):  # pragma: no cover
    """\
    Upgrade the database to the latest version.

    Usage:
      pld-upgrade [options]

    Options:
      --from=<v>    Upgrade from a specific version, overriding the
                    version stored in the database.
      --to=<v>      Upgrade to a specific version instead of the latest
                    version.
      -h --help     Show this screen.
    """ |
arguments = docopt(upgrade_cmd.__doc__, argv=argv)
initialize_config(__mode__='fit')
upgrade(from_version=arguments['--from'], to_version=arguments['--to']) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def export_cmd(argv=sys.argv[1:]):  # pragma: no cover
    """\
    Export a model from one model persister to another.

    The model persister to export to is supposed to be available in the
    configuration file under the 'model_persister_export' key.

    Usage:
      pld-export [options]

    Options:
      --version=<v>    Export a specific version rather than the active
                       one.
      --no-activate    Don't activate the exported model with the
                       'model_persister_export'.
      -h --help        Show this screen.
    """ |
arguments = docopt(export_cmd.__doc__, argv=argv)
model_version = export(
model_version=arguments['--version'],
activate=not arguments['--no-activate'],
)
logger.info("Exported model. New version number: {}".format(model_version)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def Partial(func, **kwargs):
"""Allows the use of partially applied functions in the configuration. """ |
if isinstance(func, str):
func = resolve_dotted_name(func)
partial_func = partial(func, **kwargs)
update_wrapper(partial_func, func)
return partial_func |
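Resolving a dotted name and partially applying the result can be shown end to end. ``resolve_dotted_name`` is an assumed helper from the library, reimplemented minimally here with ``importlib``:

```python
from functools import partial, update_wrapper
from importlib import import_module

def resolve_dotted_name(name):
    """Minimal stand-in: 'json.dumps' -> the json.dumps callable."""
    module_name, attr = name.rsplit('.', 1)
    return getattr(import_module(module_name), attr)

def Partial(func, **kwargs):
    if isinstance(func, str):
        func = resolve_dotted_name(func)
    partial_func = partial(func, **kwargs)
    update_wrapper(partial_func, func)   # keep __name__, __doc__, etc.
    return partial_func

# Configuration can now name a callable as a string and pin keyword args.
compact_dumps = Partial('json.dumps', separators=(',', ':'))
```

``update_wrapper`` is what keeps the partial introspectable, so tools that read ``__name__`` or ``__doc__`` still see the wrapped function's metadata.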
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_predict_function( route, predict_service, decorator_list_name, config):
"""Creates a predict function and registers it to the Flask app using the route decorator. :param str route: Path of the entry point. :param palladium.interfaces.PredictService predict_service: The predict service to be registered to this entry point. :param str decorator_list_name: The decorator list to be used for this predict service. It is OK if there is no such entry in the active Palladium config. :return: A predict service function that will be used to process predict requests. """ |
model_persister = config.get('model_persister')
@app.route(route, methods=['GET', 'POST'], endpoint=route)
@PluggableDecorator(decorator_list_name)
def predict_func():
return predict(model_persister, predict_service)
return predict_func |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def devserver_cmd(argv=sys.argv[1:]):  # pragma: no cover
    """\
    Serve the web API for development.

    Usage:
      pld-devserver [options]

    Options:
      -h --help         Show this screen.
      --host=<host>     The host to use [default: 0.0.0.0].
      --port=<port>     The port to use [default: 5000].
      --debug=<debug>   Whether or not to use debug mode [default: 0].
    """ |
arguments = docopt(devserver_cmd.__doc__, argv=argv)
initialize_config()
app.run(
host=arguments['--host'],
port=int(arguments['--port']),
debug=int(arguments['--debug']),
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def stream_cmd(argv=sys.argv[1:]):  # pragma: no cover
    """\
    Start the streaming server, which listens to stdin, processes line
    by line, and returns predictions.

    The input should consist of a list of json objects, where each object
    will result in a prediction.  Each line is processed in a batch.

    Example input (must be on a single line):

      [{"sepal length": 1.0, "sepal width": 1.1, "petal length": 0.7, "petal width": 5}, {"sepal length": 1.0, "sepal width": 8.0, "petal length": 1.4, "petal width": 5}]

    Example output:

      ["Iris-virginica","Iris-setosa"]

    An input line with the word 'exit' will quit the streaming server.

    Usage:
      pld-stream [options]

    Options:
      -h --help     Show this screen.
    """ |
docopt(stream_cmd.__doc__, argv=argv)
initialize_config()
stream = PredictStream()
stream.listen(sys.stdin, sys.stdout, sys.stderr) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def listen(self, io_in, io_out, io_err):
"""Listens to provided io stream and writes predictions to output. In case of errors, the error stream will be used. """ |
for line in io_in:
if line.strip().lower() == 'exit':
break
try:
y_pred = self.process_line(line)
except Exception as e:
io_out.write('[]\n')
io_err.write(
"Error while processing input row: {}"
"{}: {}\n".format(line, type(e), e))
io_err.flush()
else:
io_out.write(ujson.dumps(y_pred.tolist()))
io_out.write('\n')
io_out.flush() |
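The loop above is easy to exercise with in-memory streams. In this hedged sketch the model call is replaced by a trivial stand-in that returns the field count of each row, and ``ujson``/``numpy`` are swapped for the stdlib:

```python
import io
import json

def listen(io_in, io_out, io_err):
    """Read JSON lines, write predictions; 'exit' stops the loop."""
    for line in io_in:
        if line.strip().lower() == 'exit':
            break
        try:
            rows = json.loads(line)
            y_pred = [len(row) for row in rows]   # stand-in prediction
        except Exception as e:
            io_out.write('[]\n')
            io_err.write('Error while processing input row: {} {}: {}\n'
                         .format(line, type(e), e))
        else:
            io_out.write(json.dumps(y_pred) + '\n')

io_in = io.StringIO('[{"a": 1}, {"a": 1, "b": 2}]\nnot json\nexit\n')
io_out, io_err = io.StringIO(), io.StringIO()
listen(io_in, io_out, io_err)
```

A malformed line yields an empty ``[]`` prediction on stdout and a diagnostic on stderr, so downstream consumers always see one output line per input line.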
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def list_cmd(argv=sys.argv[1:]):  # pragma: no cover
    """\
    List information about available models.

    Uses the 'model_persister' from the configuration to display a list
    of models and their metadata.

    Usage:
      pld-list [options]

    Options:
      -h --help     Show this screen.
    """ |
docopt(list_cmd.__doc__, argv=argv)
initialize_config(__mode__='fit')
list() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def fit_cmd(argv=sys.argv[1:]):  # pragma: no cover
    """\
    Fit a model and save to database.

    Will use 'dataset_loader_train', 'model', and 'model_persister' from
    the configuration file, to load a dataset to train a model with, and
    persist it.

    Usage:
      pld-fit [options]

    Options:
      -n --no-save               Don't persist the fitted model to disk.
      --no-activate              Don't activate the fitted model.
      --save-if-better-than=<k>  Persist only if test score better than
                                 given value.
      -e --evaluate              Evaluate fitted model on train and test
                                 set and print out results.
      -h --help                  Show this screen.
    """ |
arguments = docopt(fit_cmd.__doc__, argv=argv)
no_save = arguments['--no-save']
no_activate = arguments['--no-activate']
save_if_better_than = arguments['--save-if-better-than']
evaluate = arguments['--evaluate'] or bool(save_if_better_than)
if save_if_better_than is not None:
save_if_better_than = float(save_if_better_than)
initialize_config(__mode__='fit')
fit(
persist=not no_save,
activate=not no_activate,
evaluate=evaluate,
persist_if_better_than=save_if_better_than,
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def admin_cmd(argv=sys.argv[1:]):
# pragma: no cover """\ Activate or delete models. Models are usually made active right after fitting (see command pld-fit). The 'activate' command allows you to explicitly set the currently active model. Use 'pld-list' to get an overview of all available models along with their version identifiers. Deleting a model will simply remove it from the database. Usage: pld-admin activate <version> [options] pld-admin delete <version> [options] Options: -h --help Show this screen. """ |
arguments = docopt(admin_cmd.__doc__, argv=argv)
initialize_config(__mode__='fit')
if arguments['activate']:
activate(model_version=int(arguments['<version>']))
elif arguments['delete']:
delete(model_version=int(arguments['<version>'])) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def grid_search_cmd(argv=sys.argv[1:]):
# pragma: no cover """\ Grid search parameters for the model. Uses 'dataset_loader_train', 'model', and 'grid_search' from the configuration to load a training dataset, and run a grid search on the model using the grid of hyperparameters. Usage: pld-grid-search [options] Options: --save-results=<fname> Save results to CSV file --persist-best Persist the best model from grid search -h --help Show this screen. """ |
arguments = docopt(grid_search_cmd.__doc__, argv=argv)
initialize_config(__mode__='fit')
grid_search(
save_results=arguments['--save-results'],
persist_best=arguments['--persist-best'],
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def switch_fingerprint_method(self, old=False):
""" Switches main fingerprinting method. :param old: if True old fingerprinting method will be used. :return: """ |
if old:
self.has_fingerprint = self.has_fingerprint_moduli
else:
self.has_fingerprint = self.has_fingerprint_dlog |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _map_tril_1d_on_2d(indices, dims):
"""Map 1d indices on lower triangular matrix in 2d. """ |
N = (dims * dims - dims) / 2
m = np.ceil(np.sqrt(2 * N))
c = m - np.round(np.sqrt(2 * (N - indices))) - 1
r = np.mod(indices + (c + 1) * (c + 2) / 2 - 1, m) + 1
return np.array([r, c], dtype=np.int64) |
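The mapping can be sanity-checked on a small matrix. The sketch below re-implements the formula stand-alone (same `N`, `m`, `c`, `r` definitions) and maps all six flat indices of a 4x4 strict lower triangle:

```python
import numpy as np

def map_tril_1d_on_2d(indices, dims):
    # Map flat indices onto (row, col) positions in the strict lower
    # triangle of a dims x dims matrix.
    N = (dims * dims - dims) / 2
    m = np.ceil(np.sqrt(2 * N))
    c = m - np.round(np.sqrt(2 * (N - indices))) - 1
    r = np.mod(indices + (c + 1) * (c + 2) / 2 - 1, m) + 1
    return np.array([r, c], dtype=np.int64)

# For dims=4 there are 6 strict lower-triangular cells; every flat index
# should land on a unique (row > col) pair.
pairs = map_tril_1d_on_2d(np.arange(6), 4)
print(pairs.T.tolist())  # [[1, 0], [2, 0], [3, 0], [2, 1], [3, 1], [3, 2]]
```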
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _unique_rows_numpy(a):
"""return unique rows""" |
a = np.ascontiguousarray(a)
unique_a = np.unique(a.view([('', a.dtype)] * a.shape[1]))
return unique_a.view(a.dtype).reshape((unique_a.shape[0], a.shape[1])) |
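The structured-dtype view trick deserves a quick demonstration: viewing each row as one compound element lets `np.unique` compare, deduplicate, and sort whole rows at once.

```python
import numpy as np

def unique_rows(a):
    # View each row as a single structured element so np.unique can
    # compare and deduplicate whole rows at once.
    a = np.ascontiguousarray(a)
    unique_a = np.unique(a.view([('', a.dtype)] * a.shape[1]))
    return unique_a.view(a.dtype).reshape((unique_a.shape[0], a.shape[1]))

rows = np.array([[3, 4], [1, 2], [1, 2]])
print(unique_rows(rows).tolist())  # [[1, 2], [3, 4]]
```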
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def random_pairs_with_replacement(n, shape, random_state=None):
"""make random record pairs""" |
if not isinstance(random_state, np.random.RandomState):
random_state = np.random.RandomState(random_state)
n_max = max_pairs(shape)
if n_max <= 0:
raise ValueError('n_max must be larger than 0')
# make random pairs
indices = random_state.randint(0, n_max, n)
if len(shape) == 1:
return _map_tril_1d_on_2d(indices, shape[0])
else:
return np.unravel_index(indices, shape) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def random_pairs_without_replacement_large_frames( n, shape, random_state=None):
"""Make a sample of random pairs with replacement""" |
n_max = max_pairs(shape)
if not isinstance(random_state, np.random.RandomState):
random_state = np.random.RandomState(random_state)
sample = np.array([])
# Run as long as the number of pairs is less than the requested number
# of pairs n.
while len(sample) < n:
# The number of pairs to sample (sample twice as many record pairs
# because the duplicates are dropped).
n_sample_size = (n - len(sample)) * 2
sample_new = random_state.randint(n_max, size=n_sample_size)
# concatenate the new pairs with those sampled earlier and deduplicate
pairs_non_unique = np.append(sample, sample_new)
sample = _unique_rows_numpy(pairs_non_unique)
# return 2d indices
if len(shape) == 1:
return _map_tril_1d_on_2d(sample[0:n], shape[0])
else:
return np.unravel_index(sample[0:n], shape) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def clean(s, lowercase=True, replace_by_none=r'[^ \-\_A-Za-z0-9]+', replace_by_whitespace=r'[\-\_]', strip_accents=None, remove_brackets=True, encoding='utf-8', decode_error='strict'):
"""Clean string variables. Clean strings in the Series by removing unwanted tokens, whitespace and brackets. Parameters s : pandas.Series A Series to clean. lower : bool, optional Convert strings in the Series to lowercase. Default True. replace_by_none : str, optional The matches of this regular expression are replaced by ''. replace_by_whitespace : str, optional The matches of this regular expression are replaced by a whitespace. remove_brackets : bool, optional Remove all content between brackets and the bracket themselves. Default True. strip_accents : {'ascii', 'unicode', None}, optional Remove accents during the preprocessing step. 'ascii' is a fast method that only works on characters that have an direct ASCII mapping. 'unicode' is a slightly slower method that works on any characters. None (default) does nothing. encoding : str, optional If bytes are given, this encoding is used to decode. Default is 'utf-8'. decode_error : {'strict', 'ignore', 'replace'}, optional Instruction on what to do if a byte Series is given that contains characters not of the given `encoding`. By default, it is 'strict', meaning that a UnicodeDecodeError will be raised. Other values are 'ignore' and 'replace'. Example ------- 'Bob :)', 'Angel', 'Bob (alias Billy)', None] 0 mary ann 1 bob 2 angel 3 bob 4 NaN dtype: object Returns ------- pandas.Series: A cleaned Series of strings. """ |
if s.shape[0] == 0:
return s
# Lower s if lower is True
if lowercase is True:
s = s.str.lower()
# Accent stripping based on https://github.com/scikit-learn/
# scikit-learn/blob/412996f/sklearn/feature_extraction/text.py
# BSD license
if not strip_accents:
pass
elif callable(strip_accents):
strip_accents_fn = strip_accents
elif strip_accents == 'ascii':
strip_accents_fn = strip_accents_ascii
elif strip_accents == 'unicode':
strip_accents_fn = strip_accents_unicode
else:
raise ValueError(
"Invalid value for 'strip_accents': {}".format(strip_accents)
)
# Remove accents etc
if strip_accents:
def strip_accents_fn_wrapper(x):
if sys.version_info[0] >= 3:
if isinstance(x, str):
return strip_accents_fn(x)
else:
return x
else:
if isinstance(x, unicode): # noqa
return strip_accents_fn(x)
else:
return x
# encoding
s = s.apply(
lambda x: x.decode(encoding, decode_error) if
type(x) == bytes else x)
s = s.map(lambda x: strip_accents_fn_wrapper(x))
# Remove all content between brackets
if remove_brackets is True:
s = s.str.replace(r'(\[.*?\]|\(.*?\)|\{.*?\})', '')
# Remove the special characters
if replace_by_none:
s = s.str.replace(replace_by_none, '')
if replace_by_whitespace:
s = s.str.replace(replace_by_whitespace, ' ')
# Remove multiple whitespaces
s = s.str.replace(r'\s\s+', ' ')
# Strip s
s = s.str.lstrip().str.rstrip()
return s |
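The same pipeline can be sketched directly with pandas string methods (`regex=True` is passed explicitly for newer pandas); the input values here are illustrative:

```python
import pandas as pd

s = pd.Series(['Mary-ann', 'Bob :)', 'Angel', 'Bob (alias Billy)', None])
s = s.str.lower()
# remove bracketed content, then special characters, then map -/_ to spaces
s = s.str.replace(r'(\[.*?\]|\(.*?\)|\{.*?\})', '', regex=True)
s = s.str.replace(r'[^ \-\_A-Za-z0-9]+', '', regex=True)
s = s.str.replace(r'[\-\_]', ' ', regex=True)
s = s.str.replace(r'\s\s+', ' ', regex=True).str.strip()
print(s.tolist()[:4])  # ['mary ann', 'bob', 'angel', 'bob']
```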
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def value_occurence(s):
"""Count the number of times each value occurs. This function returns the counts for each row, in contrast with `pandas.value_counts <http://pandas.pydata.org/pandas- docs/stable/generated/pandas.Series.value_counts.html>`_. Returns ------- pandas.Series A Series with value counts. """ |
# https://github.com/pydata/pandas/issues/3729
value_count = s.fillna('NAN')
return value_count.groupby(by=value_count).transform('count') |
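A small illustration of the groupby/transform trick (missing values are mapped to the sentinel `'NAN'` first so they form their own group):

```python
import pandas as pd

s = pd.Series(['a', 'b', 'a', None, 'a'])
value_count = s.fillna('NAN')
counts = value_count.groupby(by=value_count).transform('count')
print(counts.tolist())  # [3, 1, 3, 1, 3]
```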
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def safe_sparse_dot(a, b, dense_output=False):
"""Dot product that handle the sparse matrix case correctly Uses BLAS GEMM as replacement for numpy.dot where possible to avoid unnecessary copies. Parameters a : array or sparse matrix b : array or sparse matrix dense_output : boolean, default False When False, either ``a`` or ``b`` being sparse will yield sparse output. When True, output will always be an array. Returns ------- dot_product : array or sparse matrix sparse if ``a`` or ``b`` is sparse and ``dense_output=False``. """ |
if issparse(a) or issparse(b):
ret = a * b
if dense_output and hasattr(ret, "toarray"):
ret = ret.toarray()
return ret
else:
return np.dot(a, b) |
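A quick check of both output modes, using scipy sparse matrices (assumed available):

```python
import numpy as np
from scipy.sparse import csr_matrix, issparse

def safe_sparse_dot(a, b, dense_output=False):
    if issparse(a) or issparse(b):
        ret = a * b
        if dense_output and hasattr(ret, "toarray"):
            ret = ret.toarray()
        return ret
    return np.dot(a, b)

a = csr_matrix([[1, 0], [0, 2]])
b = csr_matrix([[3, 0], [0, 4]])
assert issparse(safe_sparse_dot(a, b))           # sparse in, sparse out
dense = safe_sparse_dot(a, b, dense_output=True)  # forced dense array
print(dense.tolist())  # [[3, 0], [0, 8]]
```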
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _joint_log_likelihood(self, X):
"""Calculate the posterior log probability of the samples X""" |
check_is_fitted(self, "classes_")
X = check_array(X, accept_sparse='csr')
X_bin = self._transform_data(X)
n_classes, n_features = self.feature_log_prob_.shape
n_samples, n_features_X = X_bin.shape
if n_features_X != n_features:
raise ValueError(
"Expected input with %d features, got %d instead" %
(n_features, n_features_X))
# see chapter 4.1 of http://www.cs.columbia.edu/~mcollins/em.pdf
# implementation as in Formula 4.
jll = safe_sparse_dot(X_bin, self.feature_log_prob_.T)
jll += self.class_log_prior_
return jll |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def predict(self, X):
""" Perform classification on an array of test vectors X. Parameters X : array-like, shape = [n_samples, n_features] Returns ------- C : array, shape = [n_samples] Predicted target values for X """ |
jll = self._joint_log_likelihood(X)
return self.classes_[np.argmax(jll, axis=1)] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def predict_log_proba(self, X):
""" Return log-probability estimates for the test vector X. Parameters X : array-like, shape = [n_samples, n_features] Returns ------- C : array-like, shape = [n_samples, n_classes] Returns the log-probability of the samples for each class in the model. The columns correspond to the classes in sorted order, as they appear in the attribute `classes_`. """ |
jll = self._joint_log_likelihood(X)
# normalize by P(x) = P(f_1, ..., f_n)
log_prob_x = logsumexp(jll, axis=1) # log P(x), shape = (n_samples,)
return jll - np.atleast_2d(log_prob_x).T |
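The normalization by log P(x) amounts to dividing each row by its sum in probability space; a toy check with made-up likelihoods (using `logsumexp` from scipy):

```python
import numpy as np
from scipy.special import logsumexp

jll = np.log(np.array([[0.2, 0.6], [0.5, 0.5]]))  # unnormalized per-class
log_prob_x = logsumexp(jll, axis=1)                # log P(x) per sample
log_proba = jll - np.atleast_2d(log_prob_x).T
proba = np.exp(log_proba)
print(proba)  # rows now sum to 1, approximately [[0.25, 0.75], [0.5, 0.5]]
```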
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _count(self, X, Y):
"""Count and smooth feature occurrences.""" |
self.feature_count_ += safe_sparse_dot(Y.T, X)
self.class_count_ += Y.sum(axis=0) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _update_feature_log_prob(self, alpha):
"""Apply smoothing to raw counts and recompute log probabilities""" |
smoothed_fc = self.feature_count_ + alpha
smoothed_cc = self.class_count_ + alpha * 2
self.feature_log_prob_ = (np.log(smoothed_fc) -
np.log(smoothed_cc.reshape(-1, 1))) |
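This is additive (Laplace) smoothing with two outcomes per binarized feature, hence the `alpha * 2` in the denominator. A worked toy case with assumed counts:

```python
import numpy as np

alpha = 1.0
feature_count = np.array([[2., 0.], [1., 3.]])  # per class, per feature
class_count = np.array([2., 4.])
smoothed_fc = feature_count + alpha
smoothed_cc = class_count + alpha * 2
feature_log_prob = np.log(smoothed_fc) - np.log(smoothed_cc.reshape(-1, 1))
feature_prob = np.exp(feature_log_prob)
print(feature_prob)  # approximately [[0.75, 0.25], [0.333, 0.667]]
```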
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def fit(self, X, y, sample_weight=None):
"""Fit Naive Bayes classifier according to X, y Parameters X : {array-like, sparse matrix}, shape = [n_samples, n_features] Training vectors, where n_samples is the number of samples and n_features is the number of features. y : array-like, shape = [n_samples] Target values. sample_weight : array-like, shape = [n_samples], (default=None) Weights applied to individual samples (1. for unweighted). Returns ------- self : object """ |
X, y = check_X_y(X, y, 'csr')
# Transform data with a label binarizer. Each column will get
# transformed into N columns (one column for each distinct value). For
# a situation with 0 and 1 outcome values, the result gives two
# columns.
X_bin = self._fit_data(X)
_, n_features = X_bin.shape
# prepare Y
labelbin = LabelBinarizer()
Y = labelbin.fit_transform(y)
self.classes_ = labelbin.classes_
if Y.shape[1] == 1:
Y = np.concatenate((1 - Y, Y), axis=1)
# LabelBinarizer().fit_transform() returns arrays with dtype=np.int64.
# We convert it to np.float64 to support sample_weight consistently;
# this means we also don't have to cast X to floating point
Y = Y.astype(np.float64)
if sample_weight is not None:
sample_weight = np.atleast_2d(sample_weight)
Y *= check_array(sample_weight).T
class_prior = self.class_prior
# Count raw events from data before updating the class log prior
# and feature log probas
n_effective_classes = Y.shape[1]
self.class_count_ = np.zeros(n_effective_classes, dtype=np.float64)
self.feature_count_ = np.zeros((n_effective_classes, n_features),
dtype=np.float64)
self._count(X_bin, Y)
alpha = self._check_alpha()
self._update_feature_log_prob(alpha)
self._update_class_log_prior(class_prior=class_prior)
return self |
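The Y-preparation step for binary targets can be illustrated without sklearn; this assumes the label binarizer produced a single 0/1 column, which the code above expands to one column per class:

```python
import numpy as np

y = np.array([0, 1, 1, 0])
# stand-in for LabelBinarizer().fit_transform(y) on binary targets
Y = (y == 1).astype(np.float64).reshape(-1, 1)
if Y.shape[1] == 1:
    Y = np.concatenate((1 - Y, Y), axis=1)
print(Y.tolist())  # [[1.0, 0.0], [0.0, 1.0], [0.0, 1.0], [1.0, 0.0]]
```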
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def fit(self, X):
"""Fit ECM classifier according to X Parameters X : {array-like, sparse matrix}, shape = [n_samples, n_features] Training vectors, where n_samples is the number of samples and n_features is the number of features. Returns ------- self : object Returns self. """ |
X = check_array(X, accept_sparse='csr')
# count frequencies of elements in vector space
# based on https://stackoverflow.com/a/33235665
# faster than numpy.unique
X_unique, X_freq = np.unique(X, axis=0, return_counts=True)
X_freq = np.atleast_2d(X_freq)
# Transform data with a label binarizer. Each column will get
# transformed into N columns (one column for each distinct value). For
# a situation with 0 and 1 outcome values, the result gives two
# columns.
X_unique_bin = self._fit_data(X_unique)
_, n_features = X_unique_bin.shape
# initialise parameters
self.classes_ = np.array([0, 1])
if is_string_like(self.init) and self.init == 'random':
self.class_log_prior_, self.feature_log_prob_ = \
self._init_parameters_random(X_unique_bin)
elif is_string_like(self.init) and self.init == 'jaro':
self.class_log_prior_, self.feature_log_prob_ = \
self._init_parameters_jaro(X_unique_bin)
else:
raise ValueError("'{}' is not a valid value for "
"argument 'init'".format(self.init))
iteration = 0
stop_iteration = False
self._log_class_log_prior = np.atleast_2d(self.class_log_prior_)
self._log_feature_log_prob = np.atleast_3d(self.feature_log_prob_)
while iteration < self.max_iter and not stop_iteration:
# expectation step
g = self.predict_proba(X_unique)
g_freq = g * X_freq.T
g_freq_sum = g_freq.sum(axis=0)
# maximisation step
class_log_prior_ = np.log(g_freq_sum) - np.log(X.shape[0]) # p
feature_log_prob_ = np.log(safe_sparse_dot(g_freq.T, X_unique_bin))
feature_log_prob_ -= np.log(np.atleast_2d(g_freq_sum).T)
# Stop iterating when the class prior and feature probs are close
# to the values in the previous iteration (parameters starting
# with 'self').
class_log_prior_close = np.allclose(
class_log_prior_, self.class_log_prior_, atol=self.atol)
feature_log_prob_close = np.allclose(
feature_log_prob_, self.feature_log_prob_, atol=self.atol)
if (class_log_prior_close and feature_log_prob_close):
stop_iteration = True
if np.all(np.isnan(feature_log_prob_)):
stop_iteration = True
# Update the class prior and feature probs.
self.class_log_prior_ = class_log_prior_
self.feature_log_prob_ = feature_log_prob_
# create logs
self._log_class_log_prior = np.concatenate(
[self._log_class_log_prior,
np.atleast_2d(self.class_log_prior_)]
)
self._log_feature_log_prob = np.concatenate(
[self._log_feature_log_prob,
np.atleast_3d(self.feature_log_prob_)], axis=2
)
# Increment counter
iteration += 1
return self |
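The stopping rule compares consecutive parameter estimates with `np.allclose` under the configured `atol`; a sketch with made-up numbers shows when the EM loop would halt:

```python
import numpy as np

atol = 1e-4
prev = np.array([0.5, 0.5])
# change well within atol: the loop would stop iterating
assert np.allclose(np.array([0.50005, 0.49995]), prev, atol=atol)
# change larger than atol: the loop keeps going
assert not np.allclose(np.array([0.51, 0.49]), prev, atol=atol)
```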
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_sorting_key_values(self, array1, array2):
"""return the sorting key values as a series""" |
concat_arrays = numpy.concatenate([array1, array2])
unique_values = numpy.unique(concat_arrays)
return numpy.sort(unique_values) |
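A quick illustration: the sorting keys are the sorted unique values across both input arrays (`np.unique` already returns sorted output, so the extra sort is a no-op safeguard):

```python
import numpy as np

array1 = np.array(['bob', 'alice'])
array2 = np.array(['carol', 'alice'])
keys = np.sort(np.unique(np.concatenate([array1, array2])))
print(keys.tolist())  # ['alice', 'bob', 'carol']
```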
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def compute(self, links):
"""Return the connected components. Parameters links : pandas.MultiIndex The links to apply one-to-one matching on. Returns ------- list of pandas.MultiIndex A list with pandas.MultiIndex objects. Each MultiIndex object represents a set of connected record pairs. """ |
try:
import networkx as nx
except ImportError:
raise Exception("'networkx' module is needed for this operation")
G = nx.Graph()
G.add_edges_from(links.values)
connected_components = nx.connected_component_subgraphs(G)
links_result = [pd.MultiIndex.from_tuples(subgraph.edges())
for subgraph in connected_components]
return links_result |
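Note that `nx.connected_component_subgraphs` was removed in networkx 2.4; newer code uses `nx.connected_components` together with `G.subgraph`. For a dependency-free sketch, the same grouping of record pairs can be done with a small union-find (a hypothetical helper, not the library API):

```python
import pandas as pd

def connected_components(pairs):
    # Tiny union-find: records in the same component share a root.
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for a, b in pairs:
        parent[find(a)] = find(b)
    groups = {}
    for a, b in pairs:
        groups.setdefault(find(a), []).append((a, b))
    return [pd.MultiIndex.from_tuples(g) for g in groups.values()]

comps = connected_components([(1, 2), (2, 3), (4, 5)])
print(len(comps))  # 2 components: {(1,2),(2,3)} and {(4,5)}
```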
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _prob_match(self, features):
"""Compute match probabilities. Parameters features : numpy.ndarray The data to train the model on. Returns ------- numpy.ndarray The match probabilties. """ |
# compute the probabilities
probs = self.kernel.predict_proba(features)
# get the position of match probabilities
classes = list(self.kernel.classes_)
match_class_position = classes.index(1)
return probs[:, match_class_position] |
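The class-position lookup matters because sklearn orders `predict_proba` columns by `classes_`, which need not put the match class (1) in any fixed column. A sketch with a hypothetical fitted kernel:

```python
import numpy as np

class FakeKernel:
    # Hypothetical stand-in for a fitted sklearn classifier; classes_
    # gives the label order of predict_proba's columns.
    classes_ = np.array([0, 1])

    def predict_proba(self, features):
        p = features[:, 0]
        return np.column_stack([1 - p, p])

kernel = FakeKernel()
probs = kernel.predict_proba(np.array([[0.2], [0.9]]))
match_class_position = list(kernel.classes_).index(1)
p_match = probs[:, match_class_position]
print(p_match.tolist())  # [0.2, 0.9]
```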
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _predict(self, features):
"""Predict matches and non-matches. Parameters features : numpy.ndarray The data to predict the class of. Returns ------- numpy.ndarray The predicted classes. """ |
from sklearn.exceptions import NotFittedError
try:
prediction = self.kernel.predict_classes(features)[:, 0]
except NotFittedError:
raise NotFittedError(
"{} is not fitted yet. Call 'fit' with appropriate "
"arguments before using this method.".format(
type(self).__name__
)
)
return prediction |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _febrl_links(df):
"""Get the links of a FEBRL dataset.""" |
index = df.index.to_series()
keys = index.str.extract(r'rec-(\d+)', expand=True)[0]
index_int = numpy.arange(len(df))
df_helper = pandas.DataFrame({
'key': keys,
'index': index_int
})
# merge the two frame and make MultiIndex.
pairs_df = df_helper.merge(
df_helper, on='key'
)[['index_x', 'index_y']]
pairs_df = pairs_df[pairs_df['index_x'] > pairs_df['index_y']]
return pandas.MultiIndex(
levels=[df.index.values, df.index.values],
labels=[pairs_df['index_x'].values, pairs_df['index_y'].values],
names=[None, None],
verify_integrity=False
) |
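The self-merge trick is the core here: records sharing the extracted `rec-<n>` key are cross-joined, and the triangle filter keeps each pair once. A stand-alone sketch on a three-record index:

```python
import numpy as np
import pandas as pd

index = pd.Index(['rec-0-org', 'rec-0-dup-0', 'rec-1-org'])
keys = index.to_series().str.extract(r'rec-(\d+)', expand=True)[0]
df_helper = pd.DataFrame({'key': keys.values, 'index': np.arange(len(index))})
# self-merge on the key, keep the lower-triangular half
pairs = df_helper.merge(df_helper, on='key')[['index_x', 'index_y']]
pairs = pairs[pairs['index_x'] > pairs['index_y']]
print(pairs.values.tolist())  # [[1, 0]] -- the org/dup pair for rec-0
```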
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def load_febrl1(return_links=False):
"""Load the FEBRL 1 dataset. The Freely Extensible Biomedical Record Linkage (Febrl) package is distributed with a dataset generator and four datasets generated with the generator. This function returns the first Febrl dataset as a :class:`pandas.DataFrame`. *"This data set contains 1000 records (500 original and 500 duplicates, with exactly one duplicate per original record."* Parameters return_links: bool When True, the function returns also the true links. Returns ------- pandas.DataFrame A :class:`pandas.DataFrame` with Febrl dataset1.csv. When return_links is True, the function returns also the true links. The true links are all links in the lower triangular part of the matrix. """ |
df = _febrl_load_data('dataset1.csv')
if return_links:
links = _febrl_links(df)
return df, links
else:
return df |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def load_febrl2(return_links=False):
"""Load the FEBRL 2 dataset. The Freely Extensible Biomedical Record Linkage (Febrl) package is distributed with a dataset generator and four datasets generated with the generator. This function returns the second Febrl dataset as a :class:`pandas.DataFrame`. *"This data set contains 5000 records (4000 originals and 1000 duplicates), with a maximum of 5 duplicates based on one original record (and a poisson distribution of duplicate records). Distribution of duplicates: 19 originals records have 5 duplicate records 47 originals records have 4 duplicate records 107 originals records have 3 duplicate records 141 originals records have 2 duplicate records 114 originals records have 1 duplicate record 572 originals records have no duplicate record"* Parameters return_links: bool When True, the function returns also the true links. Returns ------- pandas.DataFrame A :class:`pandas.DataFrame` with Febrl dataset2.csv. When return_links is True, the function returns also the true links. The true links are all links in the lower triangular part of the matrix. """ |
df = _febrl_load_data('dataset2.csv')
if return_links:
links = _febrl_links(df)
return df, links
else:
return df |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def load_febrl3(return_links=False):
"""Load the FEBRL 3 dataset. The Freely Extensible Biomedical Record Linkage (Febrl) package is distributed with a dataset generator and four datasets generated with the generator. This function returns the third Febrl dataset as a :class:`pandas.DataFrame`. *"This data set contains 5000 records (2000 originals and 3000 duplicates), with a maximum of 5 duplicates based on one original record (and a Zipf distribution of duplicate records). Distribution of duplicates: 168 originals records have 5 duplicate records 161 originals records have 4 duplicate records 212 originals records have 3 duplicate records 256 originals records have 2 duplicate records 368 originals records have 1 duplicate record 1835 originals records have no duplicate record"* Parameters return_links: bool When True, the function returns also the true links. Returns ------- pandas.DataFrame A :class:`pandas.DataFrame` with Febrl dataset3.csv. When return_links is True, the function returns also the true links. The true links are all links in the lower triangular part of the matrix. """ |
df = _febrl_load_data('dataset3.csv')
if return_links:
links = _febrl_links(df)
return df, links
else:
return df |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def load_febrl4(return_links=False):
"""Load the FEBRL 4 datasets. The Freely Extensible Biomedical Record Linkage (Febrl) package is distributed with a dataset generator and four datasets generated with the generator. This function returns the fourth Febrl dataset as a :class:`pandas.DataFrame`. *"Generated as one data set with 10000 records (5000 originals and 5000 duplicates, with one duplicate per original), the originals have been split from the duplicates, into dataset4a.csv (containing the 5000 original records) and dataset4b.csv (containing the 5000 duplicate records) These two data sets can be used for testing linkage procedures."* Parameters return_links: bool When True, the function returns also the true links. Returns ------- (pandas.DataFrame, pandas.DataFrame) A :class:`pandas.DataFrame` with Febrl dataset4a.csv and a pandas dataframe with Febrl dataset4b.csv. When return_links is True, the function returns also the true links. """ |
df_a = _febrl_load_data('dataset4a.csv')
df_b = _febrl_load_data('dataset4b.csv')
if return_links:
links = pandas.MultiIndex.from_arrays([
["rec-{}-org".format(i) for i in range(0, 5000)],
["rec-{}-dup-0".format(i) for i in range(0, 5000)]]
)
return df_a, df_b, links
else:
return df_a, df_b |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def load_krebsregister(block=[1, 2, 3, 4, 5, 6, 7, 8, 9, 10], missing_values=None, shuffle=True):
"""Load the Krebsregister dataset. This dataset of comparison patterns was obtained in a epidemiological cancer study in Germany. The comparison patterns were created by the Institute for Medical Biostatistics, Epidemiology and Informatics (IMBEI) and the University Medical Center of Johannes Gutenberg University (Mainz, Germany). The dataset is available for research online. "The records represent individual data including first and family name, sex, date of birth and postal code, which were collected through iterative insertions in the course of several years. The comparison patterns in this data set are based on a sample of 100.000 records dating from 2005 to 2008. Data pairs were classified as "match" or "non-match" during an extensive manual review where several documentarists were involved. The resulting classification formed the basis for assessing the quality of the registry's own record linkage procedure. In order to limit the amount of patterns a blocking procedure was applied, which selects only record pairs that meet specific agreement conditions. The results of the following six blocking iterations were merged together: - Phonetic equality of first name and family name, equality of date of birth. - Phonetic equality of first name, equality of day of birth. - Phonetic equality of first name, equality of month of birth. - Phonetic equality of first name, equality of year of birth. - Equality of complete date of birth. - Phonetic equality of family name, equality of sex. This procedure resulted in 5.749.132 record pairs, of which 20.931 are matches. The data set is split into 10 blocks of (approximately) equal size and ratio of matches to non-matches." Parameters block : int, list An integer or a list with integers between 1 and 10. The blocks are the blocks explained in the description. missing_values : object, int, float The value of the missing values. Default NaN. shuffle : bool Shuffle the record pairs. Default True. 
Returns ------- (pandas.DataFrame, pandas.MultiIndex) A pandas.DataFrame with comparison vectors and a pandas.MultiIndex with the indices of the matches. """ |
# If the data is not found, download it.
for i in range(1, 11):
filepath = os.path.join(os.path.dirname(__file__),
'krebsregister', 'block_{}.zip'.format(i))
if not os.path.exists(filepath):
_download_krebsregister()
break
if isinstance(block, (list, tuple)):
data = pandas.concat([_krebsregister_block(bl) for bl in block])
else:
data = _krebsregister_block(block)
if shuffle:
data = data.sample(frac=1, random_state=535)
match_index = data.index[data['is_match']]
del data['is_match']
if pandas.notnull(missing_values):
data.fillna(missing_values, inplace=True)
return data, match_index |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def phonetic(s, method, concat=True, encoding='utf-8', decode_error='strict'):
"""Convert names or strings into phonetic codes. The implemented algorithms are `soundex <https://en.wikipedia.org/wiki/Soundex>`_, `nysiis <https://en.wikipedia.org/wiki/New_York_State_Identification_and_ Intelligence_System>`_, `metaphone <https://en.wikipedia.org/wiki/Metaphone>`_ or `match_rating <https://en.wikipedia.org/wiki/Match_rating_approach>`_. Parameters s : pandas.Series A pandas.Series with string values (often names) to encode. method: str The algorithm that is used to phonetically encode the values. The possible options are "soundex", "nysiis", "metaphone" or "match_rating". concat: bool, optional Remove whitespace before phonetic encoding. encoding: str, optional If bytes are given, this encoding is used to decode. Default is 'utf-8'. decode_error: {'strict', 'ignore', 'replace'}, optional Instruction on what to do if a byte Series is given that contains characters not of the given `encoding`. By default, it is 'strict', meaning that a UnicodeDecodeError will be raised. Other values are 'ignore' and 'replace'. Returns ------- pandas.Series A Series with phonetic encoded values. """ |
# encoding
if sys.version_info[0] == 2:
s = s.apply(
lambda x: x.decode(encoding, decode_error)
if type(x) == bytes else x)
if concat:
s = s.str.replace(r"[\-\_\s]", "")
for alg in _phonetic_algorithms:
if method in alg['argument_names']:
phonetic_callback = alg['callback']
break
else:
raise ValueError("The algorithm '{}' is not known.".format(method))
return s.str.upper().apply(
lambda x: phonetic_callback(x) if pandas.notnull(x) else np.nan
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def block(self, *args, **kwargs):
"""Add a block index. Shortcut of :class:`recordlinkage.index.Block`:: from recordlinkage.index import Block indexer = recordlinkage.Index() indexer.add(Block()) """ |
indexer = Block(*args, **kwargs)
self.add(indexer)
return self |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def sortedneighbourhood(self, *args, **kwargs):
"""Add a Sorted Neighbourhood Index. Shortcut of :class:`recordlinkage.index.SortedNeighbourhood`:: from recordlinkage.index import SortedNeighbourhood indexer = recordlinkage.Index() indexer.add(SortedNeighbourhood()) """ |
indexer = SortedNeighbourhood(*args, **kwargs)
self.add(indexer)
return self |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def random(self, *args, **kwargs):
"""Add a random index. Shortcut of :class:`recordlinkage.index.Random`:: from recordlinkage.index import Random indexer = recordlinkage.Index() indexer.add(Random()) """ |
indexer = Random(*args, **kwargs)
self.add(indexer)
return self |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def exact(self, *args, **kwargs):
"""Compare attributes of pairs exactly. Shortcut of :class:`recordlinkage.compare.Exact`:: from recordlinkage.compare import Exact indexer = recordlinkage.Compare() indexer.add(Exact()) """ |
compare = Exact(*args, **kwargs)
self.add(compare)
return self |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def string(self, *args, **kwargs):
"""Compare attributes of pairs with string algorithm. Shortcut of :class:`recordlinkage.compare.String`:: from recordlinkage.compare import String indexer = recordlinkage.Compare() indexer.add(String()) """ |
compare = String(*args, **kwargs)
self.add(compare)
return self |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def numeric(self, *args, **kwargs):
"""Compare attributes of pairs with numeric algorithm. Shortcut of :class:`recordlinkage.compare.Numeric`:: from recordlinkage.compare import Numeric indexer = recordlinkage.Compare() indexer.add(Numeric()) """ |
compare = Numeric(*args, **kwargs)
self.add(compare)
return self |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def geo(self, *args, **kwargs):
"""Compare attributes of pairs with geo algorithm. Shortcut of :class:`recordlinkage.compare.Geographic`:: from recordlinkage.compare import Geographic indexer = recordlinkage.Compare() indexer.add(Geographic()) """ |
compare = Geographic(*args, **kwargs)
self.add(compare)
return self |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def date(self, *args, **kwargs):
"""Compare attributes of pairs with date algorithm. Shortcut of :class:`recordlinkage.compare.Date`:: from recordlinkage.compare import Date indexer = recordlinkage.Compare() indexer.add(Date()) """ |
compare = Date(*args, **kwargs)
self.add(compare)
return self |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def reduction_ratio(links_pred, *total):
"""Compute the reduction ratio. The reduction ratio is 1 minus the ratio candidate matches and the maximum number of pairs possible. Parameters links_pred: int, pandas.MultiIndex The number of candidate record pairs or the pandas.MultiIndex with record pairs. *total: pandas.DataFrame object(s) The DataFrames are used to compute the full index size with the full_index_size function. Returns ------- float The reduction ratio. """ |
n_max = full_index_size(*total)
if isinstance(links_pred, pandas.MultiIndex):
links_pred = len(links_pred)
if links_pred > n_max:
raise ValueError("n has to be smaller of equal n_max")
return 1 - links_pred / n_max |
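As a self-contained sketch of the same computation for the deduplication case (independent of the `full_index_size` helper above):

```python
def reduction_ratio_sketch(n_candidates, n_records):
    # Full deduplication index of n records has n * (n - 1) / 2 pairs.
    n_max = n_records * (n_records - 1) // 2
    if n_candidates > n_max:
        raise ValueError("candidate pairs cannot exceed the full index size")
    return 1 - n_candidates / n_max

# Blocking reduced 100 records (4950 possible pairs) to 99 candidates.
rr = reduction_ratio_sketch(99, 100)  # 0.98
```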
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def full_index_size(*args):
"""Compute the number of records in a full index. Compute the number of records in a full index without building the index itself. The result is the maximum number of record pairs possible. This function is especially useful in measures like the `reduction_ratio`. Deduplication: Given a DataFrame A with length N, the full index size is N*(N-1)/2. Linking: Given a DataFrame A with length N and a DataFrame B with length M, the full index size is N*M. Parameters *args: int, pandas.MultiIndex, pandas.Series, pandas.DataFrame A pandas object or a int representing the length of a dataset to link. When there is one argument, it is assumed that the record linkage is a deduplication process. Examples -------- Use integers: or pandas objects """ |
# check if a list or tuple is passed as argument
if len(args) == 1 and isinstance(args[0], (list, tuple)):
args = tuple(args[0])
if len(args) == 1:
n = get_length(args[0])
size = int(n * (n - 1) / 2)
else:
size = numpy.prod([get_length(arg) for arg in args])
return size |
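The two formulas from the docstring can be checked with a minimal standalone version (working on plain integer lengths rather than pandas objects):

```python
import numpy as np

def full_index_size_sketch(*lengths):
    # One length: deduplication, n * (n - 1) / 2 pairs.
    # Several lengths: linkage, the product of the dataset sizes.
    if len(lengths) == 1:
        n = lengths[0]
        return n * (n - 1) // 2
    return int(np.prod(lengths))

dedup_size = full_index_size_sketch(1000)      # 499500
link_size = full_index_size_sketch(1000, 500)  # 500000
```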
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def true_positives(links_true, links_pred):
"""Count the number of True Positives. Returns the number of correctly predicted links, also called the number of True Positives (TP). Parameters links_true: pandas.MultiIndex, pandas.DataFrame, pandas.Series The true (or actual) links. links_pred: pandas.MultiIndex, pandas.DataFrame, pandas.Series The predicted links. Returns ------- int The number of correctly predicted links. """ |
links_true = _get_multiindex(links_true)
links_pred = _get_multiindex(links_pred)
return len(links_true.intersection(links_pred))
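The set semantics are easy to see on two small, made-up pair sets: the true positives are exactly the pairs present in both the true and the predicted `MultiIndex`.

```python
import pandas as pd

links_true = pd.MultiIndex.from_tuples([(1, 2), (3, 4), (5, 6)])
links_pred = pd.MultiIndex.from_tuples([(1, 2), (5, 6), (7, 8)])

# Pairs predicted as links that really are links.
tp = len(links_true.intersection(links_pred))  # 2
```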
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def true_negatives(links_true, links_pred, total):
"""Count the number of True Negatives. Returns the number of correctly predicted non-links, also called the number of True Negatives (TN). Parameters links_true: pandas.MultiIndex, pandas.DataFrame, pandas.Series The true (or actual) links. links_pred: pandas.MultiIndex, pandas.DataFrame, pandas.Series The predicted links. total: int, pandas.MultiIndex The count of all record pairs (both links and non-links). When the argument is a pandas.MultiIndex, the length of the index is used. Returns ------- int The number of correctly predicted non-links. """ |
links_true = _get_multiindex(links_true)
links_pred = _get_multiindex(links_pred)
if isinstance(total, pandas.MultiIndex):
total = len(total)
return int(total) - len(links_true.union(links_pred))
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def false_positives(links_true, links_pred):
"""Count the number of False Positives. Returns the number of incorrect predictions of true non-links. (true non- links, but predicted as links). This value is known as the number of False Positives (FP). Parameters links_true: pandas.MultiIndex, pandas.DataFrame, pandas.Series The true (or actual) links. links_pred: pandas.MultiIndex, pandas.DataFrame, pandas.Series The predicted links. Returns ------- int The number of false positives. """ |
links_true = _get_multiindex(links_true)
links_pred = _get_multiindex(links_pred)
return len(links_pred.difference(links_true)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def false_negatives(links_true, links_pred):
"""Count the number of False Negatives. Returns the number of incorrect predictions of true links. (true links, but predicted as non-links). This value is known as the number of False Negatives (FN). Parameters links_true: pandas.MultiIndex, pandas.DataFrame, pandas.Series The true (or actual) links. links_pred: pandas.MultiIndex, pandas.DataFrame, pandas.Series The predicted links. Returns ------- int The number of false negatives. """ |
links_true = _get_multiindex(links_true)
links_pred = _get_multiindex(links_pred)
return len(links_true.difference(links_pred)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def confusion_matrix(links_true, links_pred, total=None):
"""Compute the confusion matrix. The confusion matrix is of the following form: | | Predicted Positives | Predicted Negatives | +======================+=======================+======================+ | **True Positives** | True Positives (TP) | False Negatives (FN) | | **True Negatives** | False Positives (FP) | True Negatives (TN) | The confusion matrix is an informative way to analyse a prediction. The matrix can used to compute measures like precision and recall. The count of true prositives is [0,0], false negatives is [0,1], true negatives is [1,1] and false positives is [1,0]. Parameters links_true: pandas.MultiIndex, pandas.DataFrame, pandas.Series The true (or actual) links. links_pred: pandas.MultiIndex, pandas.DataFrame, pandas.Series The predicted links. total: int, pandas.MultiIndex The count of all record pairs (both links and non-links). When the argument is a pandas.MultiIndex, the length of the index is used. If the total is None, the number of True Negatives is not computed. Default None. Returns ------- numpy.array The confusion matrix with TP, TN, FN, FP values. Note ---- The number of True Negatives is computed based on the total argument. This argument is the number of record pairs of the entire matrix. """ |
links_true = _get_multiindex(links_true)
links_pred = _get_multiindex(links_pred)
tp = true_positives(links_true, links_pred)
fp = false_positives(links_true, links_pred)
fn = false_negatives(links_true, links_pred)
if total is None:
tn = numpy.nan
else:
tn = true_negatives(links_true, links_pred, total)
return numpy.array([[tp, fn], [fp, tn]]) |
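A worked example with two small, made-up pair sets shows how the four cells are filled (the `total` of 10 pairs is assumed for illustration):

```python
import numpy as np
import pandas as pd

links_true = pd.MultiIndex.from_tuples([(1, 2), (3, 4), (5, 6)])
links_pred = pd.MultiIndex.from_tuples([(1, 2), (7, 8)])
total = 10  # total record pairs under consideration (assumed)

tp = len(links_true.intersection(links_pred))   # in both sets
fp = len(links_pred.difference(links_true))     # predicted but not true
fn = len(links_true.difference(links_pred))     # true but not predicted
tn = total - len(links_true.union(links_pred))  # everything else

conf = np.array([[tp, fn], [fp, tn]])
```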
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def compute(self, pairs, x=None, x_link=None):
"""Return continuous random values for each record pair. Parameters pairs : pandas.MultiIndex A pandas MultiIndex with the record pairs to compare. The indices in the MultiIndex are indices of the DataFrame(s) to link. x : pandas.DataFrame The DataFrame to link. If `x_link` is given, the comparing is a linking problem. If `x_link` is not given, the problem is one of deduplication. x_link : pandas.DataFrame, optional The second DataFrame. Returns ------- pandas.Series, pandas.DataFrame, numpy.ndarray The result of comparing record pairs (the features). Can be a tuple with multiple pandas.Series, pandas.DataFrame, numpy.ndarray objects. """ |
df_empty = pd.DataFrame(index=pairs)
return self._compute(
tuple([df_empty]),
tuple([df_empty])
) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _parallel_compare_helper(class_obj, pairs, x, x_link=None):
"""Internal function to overcome pickling problem in python2.""" |
return class_obj._compute(pairs, x, x_link) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def chunk_pandas(frame_or_series, chunksize=None):
"""Chunk a frame into smaller, equal parts.""" |
if not isinstance(chunksize, int):
raise ValueError('argument chunksize needs to be integer type')
bins = np.arange(0, len(frame_or_series), step=chunksize)
for b in bins:
yield frame_or_series[b:b + chunksize] |
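A minimal standalone version of the same chunking (using positional `iloc` slicing, which is the explicit spelling of what the slice above does for a default integer index):

```python
import pandas as pd

def chunk_frame(frame, chunksize):
    # Yield consecutive slices of at most `chunksize` rows each.
    if not isinstance(chunksize, int):
        raise ValueError("chunksize must be an integer")
    for start in range(0, len(frame), chunksize):
        yield frame.iloc[start:start + chunksize]

df = pd.DataFrame({"a": range(10)})
sizes = [len(chunk) for chunk in chunk_frame(df, 4)]  # [4, 4, 2]
```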
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add(self, model):
"""Add a index method. This method is used to add index algorithms. If multiple algorithms are added, the union of the record pairs from the algorithm is taken. Parameters model : list, class A (list of) index algorithm(s) from :mod:`recordlinkage.index`. """ |
if isinstance(model, list):
self.algorithms = self.algorithms + model
else:
self.algorithms.append(model) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _dedup_index(self, df_a):
"""Build an index for deduplicating a dataset. Parameters df_a : (tuple of) pandas.Series The data of the DataFrame to build the index with. Returns ------- pandas.MultiIndex A pandas.MultiIndex with record pairs. Each record pair contains the index values of two records. The records are sampled from the lower triangular part of the matrix. """ |
pairs = self._link_index(df_a, df_a)
# Remove all pairs not in the lower triangular part of the matrix.
# This part could be improved by comparing the levels themselves
# instead of the level values.
pairs = pairs[pairs.codes[0] > pairs.codes[1]]
return pairs |
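The lower-triangular filter can be demonstrated in isolation (note that `.codes` is the modern pandas spelling of the deprecated `.labels` attribute used in the original source):

```python
import pandas as pd

# Full cross product of a 4-record index with itself: 16 pairs.
idx = pd.Index([0, 1, 2, 3])
pairs = pd.MultiIndex.from_product([idx, idx])

# Keep only pairs strictly below the diagonal, so each unordered pair
# of distinct records survives exactly once: 4 * 3 / 2 = 6 pairs.
lower = pairs[pairs.codes[0] > pairs.codes[1]]
```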
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _compute(self, left_on, right_on):
"""Compare the data on the left and right. :meth:`BaseCompareFeature._compute` and :meth:`BaseCompareFeature.compute` differ on the accepted arguments. `_compute` accepts indexed data while `compute` accepts the record pairs and the DataFrame's. Parameters left_on : (tuple of) pandas.Series Data to compare with `right_on` right_on : (tuple of) pandas.Series Data to compare with `left_on` Returns ------- pandas.Series, pandas.DataFrame, numpy.ndarray The result of comparing record pairs (the features). Can be a tuple with multiple pandas.Series, pandas.DataFrame, numpy.ndarray objects. """ |
result = self._compute_vectorized(*tuple(left_on + right_on))
return result |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def compare_vectorized(self, comp_func, labels_left, labels_right, *args, **kwargs):
"""Compute the similarity between values with a callable. This method initialises the comparing of values with a custom function/callable. The function/callable should accept numpy.ndarray's. Example ------- Parameters comp_func : function A comparison function. This function can be a built-in function or a user defined comparison function. The function should accept numpy.ndarray's as first two arguments. labels_left : label, pandas.Series, pandas.DataFrame The labels, Series or DataFrame to compare. labels_right : label, pandas.Series, pandas.DataFrame The labels, Series or DataFrame to compare. *args : Additional arguments to pass to callable comp_func. **kwargs : Additional keyword arguments to pass to callable comp_func. (keyword 'label' is reserved.) label : (list of) label(s) The name of the feature and the name of the column. IMPORTANT: This argument is a keyword argument and can not be part of the arguments of comp_func. """ |
label = kwargs.pop('label', None)
if isinstance(labels_left, tuple):
labels_left = list(labels_left)
if isinstance(labels_right, tuple):
labels_right = list(labels_right)
feature = BaseCompareFeature(
labels_left, labels_right, args, kwargs, label=label)
feature._f_compare_vectorized = comp_func
self.add(feature) |
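A custom comparison function of the kind `compare_vectorized` expects takes numpy arrays (or Series) as its first two arguments and returns one similarity value per pair. Here is a hypothetical example (the function name, scale parameter, and data are made up):

```python
import numpy as np
import pandas as pd

def abs_diff_sim(s1, s2, scale=10.0):
    # Illustrative comparison function: similarity decays linearly
    # with the absolute difference and is clipped to [0, 1].
    d = np.abs(np.asarray(s1, dtype=float) - np.asarray(s2, dtype=float))
    return np.clip(1.0 - d / scale, 0.0, 1.0)

left = pd.Series([30.0, 45.0, 50.0])
right = pd.Series([30.0, 40.0, 80.0])
sims = abs_diff_sim(left, right)  # [1.0, 0.5, 0.0]
```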
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_labels_left(self, validate=None):
"""Get all labels of the left dataframe.""" |
labels = []
for compare_func in self.features:
labels = labels + listify(compare_func.labels_left)
# check requested labels (for better error messages)
if not is_label_dataframe(labels, validate):
error_msg = "label is not found in the dataframe"
raise KeyError(error_msg)
return unique(labels) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _get_labels_right(self, validate=None):
"""Get all labels of the right dataframe.""" |
labels = []
for compare_func in self.features:
labels = labels + listify(compare_func.labels_right)
# check requested labels (for better error messages)
if not is_label_dataframe(labels, validate):
error_msg = "label is not found in the dataframe"
raise KeyError(error_msg)
return unique(labels) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _union(self, objs, index=None, column_i=0):
"""Make a union of the features. The term 'union' is based on the terminology of scikit-learn. """ |
feat_conc = []
for feat, label in objs:
# result is tuple of results
if isinstance(feat, tuple):
if label is None:
label = [None] * len(feat)
partial_result = self._union(
zip(feat, label), column_i=column_i)
feat_conc.append(partial_result)
column_i = column_i + partial_result.shape[1]
# result is pandas.Series.
elif isinstance(feat, pandas.Series):
feat.reset_index(drop=True, inplace=True)
if label is None:
label = column_i
feat.rename(label, inplace=True)
feat_conc.append(feat)
column_i = column_i + 1
# result is pandas.DataFrame
elif isinstance(feat, pandas.DataFrame):
feat.reset_index(drop=True, inplace=True)
if label is None:
label = np.arange(column_i, column_i + feat.shape[1])
feat.columns = label
feat_conc.append(feat)
column_i = column_i + feat.shape[1]
# result is numpy 1d array
elif is_numpy_like(feat) and len(feat.shape) == 1:
if label is None:
label = column_i
f = pandas.Series(feat, name=label, copy=False)
feat_conc.append(f)
column_i = column_i + 1
# result is numpy 2d array
elif is_numpy_like(feat) and len(feat.shape) == 2:
if label is None:
label = np.arange(column_i, column_i + feat.shape[1])
feat_df = pandas.DataFrame(feat, columns=label, copy=False)
feat_conc.append(feat_df)
column_i = column_i + feat.shape[1]
# other results are not (yet) supported
else:
raise ValueError("expected numpy.ndarray or "
"pandas object to be returned, "
"got '{}'".format(feat.__class__.__name__))
result = pandas.concat(feat_conc, axis=1, copy=False)
if index is not None:
result.set_index(index, inplace=True)
return result |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def predict(self, comparison_vectors):
"""Predict the class of the record pairs. Classify a set of record pairs based on their comparison vectors into matches, non-matches and possible matches. The classifier has to be trained to call this method. Parameters comparison_vectors : pandas.DataFrame Dataframe with comparison vectors. return_type : str Deprecated. Use recordlinkage.options instead. Use the option `recordlinkage.set_option('classification.return_type', 'index')` instead. Returns ------- pandas.Series A pandas Series with the labels 1 (for the matches) and 0 (for the non-matches). """ |
logging.info("Classification - predict matches and non-matches")
# make the prediction
prediction = self._predict(comparison_vectors.values)
self._post_predict(prediction)
# format and return the result
return self._return_result(prediction, comparison_vectors) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def prob(self, comparison_vectors, return_type=None):
"""Compute the probabilities for each record pair. For each pair of records, estimate the probability of being a match. Parameters comparison_vectors : pandas.DataFrame The dataframe with comparison vectors. return_type : str Deprecated. (default 'series') Returns ------- pandas.Series or numpy.ndarray The probability of being a match for each record pair. """ |
if return_type is not None:
warnings.warn("The argument 'return_type' is removed. "
"Default value is now 'series'.",
VisibleDeprecationWarning, stacklevel=2)
logging.info("Classification - compute probabilities")
prob_match = self._prob_match(comparison_vectors.values)
return pandas.Series(prob_match, index=comparison_vectors.index) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _return_result(self, result, comparison_vectors=None):
"""Return different formatted classification results. """ |
return_type = cf.get_option('classification.return_type')
if not isinstance(result, np.ndarray):
raise ValueError("numpy.ndarray expected.")
# return the pandas.MultiIndex
if return_type == 'index':
return comparison_vectors.index[result.astype(bool)]
# return a pandas.Series
elif return_type == 'series':
return pandas.Series(
result,
index=comparison_vectors.index,
name='classification')
# return a numpy.ndarray
elif return_type == 'array':
return result
# return_type not known
else:
raise ValueError(
"return_type {} unknown. Choose 'index', 'series' or "
"'array'".format(return_type)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def binary_vectors(n, n_match, m=[0.9] * 8, u=[0.1] * 8, random_state=None, return_links=False, dtype=np.int8):
"""Generate random binary comparison vectors. This function is used to generate random comparison vectors. The result of each comparison is a binary value (0 or 1). Parameters n : int The total number of comparison vectors. n_match : int The number of matching record pairs. m : list, default [0.9] * 8, optional A list of m probabilities of each partially identifying variable. The m probability is the probability that an identifier in matching record pairs agrees. u : list, default [0.9] * 8, optional A list of u probabilities of each partially identifying variable. The u probability is the probability that an identifier in non-matching record pairs agrees. random_state : int or numpy.random.RandomState, optional Seed for the random number generator with an integer or numpy RandomState object. return_links: bool When True, the function returns also the true links. dtype: numpy.dtype The dtype of each column in the returned DataFrame. Returns ------- pandas.DataFrame A dataframe with comparison vectors. """ |
if len(m) != len(u):
raise ValueError("the length of 'm' is not equal the length of 'u'")
if n_match > n or n_match < 0:
raise ValueError("the number of matches is bounded by [0, n]")
# set the random seed
np.random.seed(random_state)
matches = []
nonmatches = []
sample_set = np.array([0, 1], dtype=dtype)
for i, _ in enumerate(m):
p_mi = [1 - m[i], m[i]]
p_ui = [1 - u[i], u[i]]
comp_mi = np.random.choice(sample_set, (n_match, 1), p=p_mi)
comp_ui = np.random.choice(sample_set, (n - n_match, 1), p=p_ui)
nonmatches.append(comp_ui)
matches.append(comp_mi)
match_block = np.concatenate(matches, axis=1)
nonmatch_block = np.concatenate(nonmatches, axis=1)
data_np = np.concatenate((match_block, nonmatch_block), axis=0)
index_np = np.random.randint(1001, 1001 + n * 2, (n, 2))
data_col_names = ['c_%s' % (i + 1) for i in range(len(m))]
data_mi = pd.MultiIndex.from_arrays([index_np[:, 0], index_np[:, 1]])
data_df = pd.DataFrame(data_np, index=data_mi, columns=data_col_names)
features = data_df.sample(frac=1, random_state=random_state)
if return_links:
links = data_mi[:n_match]
return features, links
else:
return features |
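The core of the simulation, one column of a comparison vector, can be reproduced with `numpy.random.choice` alone (a single-column sketch, not the full function; the seed and sizes are chosen arbitrarily):

```python
import numpy as np
import pandas as pd

rng = np.random.RandomState(42)
n, n_match = 1000, 200
m, u = 0.9, 0.1  # agreement probability for matches / non-matches

# Matching pairs agree with probability m; non-matching pairs with u.
col_match = rng.choice([0, 1], size=n_match, p=[1 - m, m])
col_nonmatch = rng.choice([0, 1], size=n - n_match, p=[1 - u, u])
vectors = pd.DataFrame({"c_1": np.concatenate([col_match, col_nonmatch])})

match_rate = vectors["c_1"][:n_match].mean()     # close to m
nonmatch_rate = vectors["c_1"][n_match:].mean()  # close to u
```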