| text_prompt (string, 157-13.1k chars) | code_prompt (string, 7-19.8k chars, nullable ⌀) |
|---|---|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def featureName(self):
""" ID attribute from GFF3 or None if record doesn't have it. Called "Name" rather than "Id" within GA4GH, as there is no guarantee of either uniqueness or existence. """
|
featId = self.attributes.get("ID")
if featId is not None:
featId = featId[0]
return featId
|
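The lookup above can be sketched with a plain dict standing in for the record's attributes; `feature_name` is a hypothetical helper written for illustration, not part of the library:

```python
def feature_name(attributes):
    """Return the first "ID" value, or None if the record has none.

    `attributes` is a dict mapping GFF3 attribute names to lists of
    values, mirroring self.attributes in the method above.
    """
    vals = attributes.get("ID")
    return vals[0] if vals is not None else None

print(feature_name({"ID": ["gene0001"], "Name": ["EDEN"]}))  # gene0001
print(feature_name({"Name": ["EDEN"]}))  # None
```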
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _linkFeature(self, feature):
""" Link a feature with its parents. """
|
parentNames = feature.attributes.get("Parent")
if parentNames is None:
self.roots.add(feature)
else:
for parentName in parentNames:
self._linkToParent(feature, parentName)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _linkToParent(self, feature, parentName):
""" Link a feature with its children """
|
parentParts = self.byFeatureName.get(parentName)
if parentParts is None:
raise GFF3Exception(
"Parent feature does not exist: {}".format(parentName),
self.fileName)
# the parent feature may be discontiguous, stored as multiple parts
for parentPart in parentParts:
feature.parents.add(parentPart)
parentPart.children.add(feature)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def linkChildFeaturesToParents(self):
""" finish loading the set, constructing the tree """
|
# a feature may be discontiguous, stored as multiple parts
for featureParts in self.byFeatureName.itervalues():
for feature in featureParts:
self._linkFeature(feature)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _writeRec(self, fh, rec):
""" Writes a single record to a file provided by the filehandle fh. """
|
fh.write(str(rec) + "\n")
for child in sorted(rec.children, key=self._recSortKey):
self._writeRec(fh, child)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def write(self, fh):
""" Write set to a GFF3 format file. :param file fh: file handle for file to write to """
|
fh.write(GFF3_HEADER+"\n")
for root in sorted(self.roots, key=self._recSortKey):
self._writeRec(fh, root)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _open(self):
""" open input file, optionally with decompression """
|
if self.fileName.endswith(".gz"):
return gzip.open(self.fileName)
elif self.fileName.endswith(".bz2"):
return bz2.BZ2File(self.fileName)
else:
return open(self.fileName)
|
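The extension dispatch in `_open` can be written as a stand-alone Python 3 sketch; the function name is hypothetical, and text mode ("rt") is assumed so callers iterate over decoded lines:

```python
import bz2
import gzip

def open_maybe_compressed(file_name):
    """Open a file for reading, transparently decompressing by extension."""
    if file_name.endswith(".gz"):
        return gzip.open(file_name, "rt")   # gzip-compressed text
    elif file_name.endswith(".bz2"):
        return bz2.open(file_name, "rt")    # bzip2-compressed text
    else:
        return open(file_name)              # plain text
```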
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _parseAttrs(self, attrsStr):
""" Parse the attributes and values """
|
attributes = dict()
for attrStr in self.SPLIT_ATTR_COL_RE.split(attrsStr):
name, vals = self._parseAttrVal(attrStr)
if name in attributes:
raise GFF3Exception(
"duplicated attribute name: {}".format(name),
self.fileName, self.lineNumber)
attributes[name] = vals
return attributes
|
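A simplified, self-contained version of this attribute parsing (folding in the per-attribute split that `_parseAttrVal` performs) might look like the sketch below. Per the GFF3 spec, attributes are ";"-separated `name=value` pairs whose values are ","-separated and percent-encoded; `parse_attrs` is a hypothetical stand-in written for Python 3:

```python
from urllib.parse import unquote

def parse_attrs(attrs_str):
    """Parse a GFF3 column-9 string into a dict of name -> list of values."""
    attributes = {}
    for attr_str in attrs_str.split(";"):
        if not attr_str:
            continue  # tolerate a trailing ";"
        name, _, vals = attr_str.partition("=")
        if name in attributes:
            raise ValueError("duplicated attribute name: {}".format(name))
        # values are comma-separated; percent-decode after splitting
        attributes[name] = [unquote(v) for v in vals.split(",")]
    return attributes
```

Decoding after the comma split matters: a literal comma inside a value must arrive escaped as %2C, so splitting first is lossless.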
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _parseRecord(self, gff3Set, line):
""" Parse one record. """
|
row = line.split("\t")
if len(row) != self.GFF3_NUM_COLS:
raise GFF3Exception(
"Wrong number of columns, expected {}, got {}".format(
self.GFF3_NUM_COLS, len(row)),
self.fileName, self.lineNumber)
feature = Feature(
urllib.unquote(row[0]),
urllib.unquote(row[1]),
urllib.unquote(row[2]),
int(row[3]), int(row[4]),
row[5], row[6], row[7],
self._parseAttrs(row[8]))
gff3Set.add(feature)
|
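The column handling above can be sketched without the surrounding classes; `parse_record` here is hypothetical, uses Python 3's urllib.parse in place of urllib.unquote, and returns a tuple rather than constructing a Feature:

```python
from urllib.parse import unquote

# seqid, source, type, start, end, score, strand, phase, attributes
GFF3_NUM_COLS = 9

def parse_record(line):
    """Split one tab-separated GFF3 line into its nine fields."""
    row = line.split("\t")
    if len(row) != GFF3_NUM_COLS:
        raise ValueError("Wrong number of columns, expected {}, got {}".format(
            GFF3_NUM_COLS, len(row)))
    # the first three columns may be percent-encoded; coordinates are ints
    return (unquote(row[0]), unquote(row[1]), unquote(row[2]),
            int(row[3]), int(row[4]),
            row[5], row[6], row[7], row[8])
```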
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def parse(self):
""" Run the parse and return the resulting Gff3Set object. """
|
fh = self._open()
try:
gff3Set = Gff3Set(self.fileName)
for line in fh:
self.lineNumber += 1
self._parseLine(gff3Set, line[0:-1])
finally:
fh.close()
gff3Set.linkChildFeaturesToParents()
return gff3Set
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def addDataset(self, dataset):
""" Adds the specified dataset to this data repository. """
|
id_ = dataset.getId()
self._datasetIdMap[id_] = dataset
self._datasetNameMap[dataset.getLocalId()] = dataset
self._datasetIds.append(id_)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def addReferenceSet(self, referenceSet):
""" Adds the specified reference set to this data repository. """
|
id_ = referenceSet.getId()
self._referenceSetIdMap[id_] = referenceSet
self._referenceSetNameMap[referenceSet.getLocalId()] = referenceSet
self._referenceSetIds.append(id_)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def addOntology(self, ontology):
""" Add an ontology map to this data repository. """
|
self._ontologyNameMap[ontology.getName()] = ontology
self._ontologyIdMap[ontology.getId()] = ontology
self._ontologyIds.append(ontology.getId())
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getPeer(self, url):
""" Select the first peer in the datarepo with the given url simulating the behavior of selecting by URL. This is only used during testing. """
|
peers = filter(lambda x: x.getUrl() == url, self.getPeers())
if len(peers) == 0:
raise exceptions.PeerNotFoundException(url)
return peers[0]
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getDataset(self, id_):
""" Returns a dataset with the specified ID, or raises a DatasetNotFoundException if it does not exist. """
|
if id_ not in self._datasetIdMap:
raise exceptions.DatasetNotFoundException(id_)
return self._datasetIdMap[id_]
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getDatasetByName(self, name):
""" Returns the dataset with the specified name. """
|
if name not in self._datasetNameMap:
raise exceptions.DatasetNameNotFoundException(name)
return self._datasetNameMap[name]
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getOntology(self, id_):
""" Returns the ontology with the specified ID. """
|
if id_ not in self._ontologyIdMap:
raise exceptions.OntologyNotFoundException(id_)
return self._ontologyIdMap[id_]
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getOntologyByName(self, name):
""" Returns an ontology by name """
|
if name not in self._ontologyNameMap:
raise exceptions.OntologyNameNotFoundException(name)
return self._ontologyNameMap[name]
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getReferenceSet(self, id_):
""" Retuns the ReferenceSet with the specified ID, or raises a ReferenceSetNotFoundException if it does not exist. """
|
if id_ not in self._referenceSetIdMap:
raise exceptions.ReferenceSetNotFoundException(id_)
return self._referenceSetIdMap[id_]
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getReferenceSetByName(self, name):
""" Returns the reference set with the specified name. """
|
if name not in self._referenceSetNameMap:
raise exceptions.ReferenceSetNameNotFoundException(name)
return self._referenceSetNameMap[name]
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def allReadGroups(self):
""" Return an iterator over all read groups in the data repo """
|
for dataset in self.getDatasets():
for readGroupSet in dataset.getReadGroupSets():
for readGroup in readGroupSet.getReadGroups():
yield readGroup
|
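allReadGroups and the iterators that follow (allFeatures, allCallSets, and so on) all share one shape: nested for loops over a container hierarchy, lazily yielding the leaves. With plain lists standing in for the datamodel objects (the helper name is made up), the pattern is:

```python
def all_leaves(datasets):
    """Yield every leaf item from a three-level container hierarchy, lazily."""
    for dataset in datasets:
        for group in dataset:
            for item in group:
                yield item

repo = [[["rg1", "rg2"]], [["rg3"]]]
print(list(all_leaves(repo)))  # ['rg1', 'rg2', 'rg3']
```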
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def allFeatures(self):
""" Return an iterator over all features in the data repo """
|
for dataset in self.getDatasets():
for featureSet in dataset.getFeatureSets():
for feature in featureSet.getFeatures():
yield feature
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def allCallSets(self):
""" Return an iterator over all call sets in the data repo """
|
for dataset in self.getDatasets():
for variantSet in dataset.getVariantSets():
for callSet in variantSet.getCallSets():
yield callSet
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def allVariantAnnotationSets(self):
""" Return an iterator over all variant annotation sets in the data repo """
|
for dataset in self.getDatasets():
for variantSet in dataset.getVariantSets():
for vaSet in variantSet.getVariantAnnotationSets():
yield vaSet
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def allRnaQuantifications(self):
""" Return an iterator over all rna quantifications """
|
for dataset in self.getDatasets():
for rnaQuantificationSet in dataset.getRnaQuantificationSets():
for rnaQuantification in \
rnaQuantificationSet.getRnaQuantifications():
yield rnaQuantification
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def allExpressionLevels(self):
""" Return an iterator over all expression levels """
|
for dataset in self.getDatasets():
for rnaQuantificationSet in dataset.getRnaQuantificationSets():
for rnaQuantification in \
rnaQuantificationSet.getRnaQuantifications():
for expressionLevel in \
rnaQuantification.getExpressionLevels():
yield expressionLevel
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getPeer(self, url):
""" Finds a peer by URL and return the first peer record with that URL. """
|
peers = list(models.Peer.select().where(models.Peer.url == url))
if len(peers) == 0:
raise exceptions.PeerNotFoundException(url)
return peers[0]
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getPeers(self, offset=0, limit=1000):
""" Get the list of peers using an SQL offset and limit. Returns a list of peer datamodel objects in a list. """
|
select = models.Peer.select().order_by(
models.Peer.url).limit(limit).offset(offset)
return [peers.Peer(p.url, record=p) for p in select]
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def tableToTsv(self, model):
""" Takes a model class and attempts to create a table in TSV format that can be imported into a spreadsheet program. """
|
first = True
for item in model.select():
if first:
header = "".join(
["{}\t".format(x) for x in model._meta.fields.keys()])
print(header)
first = False
row = "".join(
["{}\t".format(
getattr(item, key)) for key in model._meta.fields.keys()])
print(row)
|
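Note that building each row from "{}\t"-formatted cells leaves a trailing tab on every line. A sketch of the same table dump with plain dicts in place of the peewee model (the names here are hypothetical) uses "\t".join instead:

```python
def rows_to_tsv(field_names, rows):
    """Render a header line plus one line per row as tab-separated text."""
    lines = ["\t".join(field_names)]          # header from the field names
    for row in rows:
        lines.append("\t".join(str(row[name]) for name in field_names))
    return "\n".join(lines)
```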
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def clearAnnouncements(self):
""" Flushes the announcement table. """
|
try:
q = models.Announcement.delete().where(
models.Announcement.id > 0)
q.execute()
except Exception as e:
raise exceptions.RepoManagerException(e)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def insertAnnouncement(self, announcement):
""" Adds an announcement to the registry for later analysis. """
|
url = announcement.get('url', None)
try:
peers.Peer(url)
except Exception:
raise exceptions.BadUrlException(url)
try:
# TODO get more details about the user agent
models.Announcement.create(
url=announcement.get('url'),
attributes=json.dumps(announcement.get('attributes', {})),
remote_addr=announcement.get('remote_addr', None),
user_agent=announcement.get('user_agent', None))
except Exception as e:
raise exceptions.RepoManagerException(e)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def open(self, mode=MODE_READ):
""" Opens this repo in the specified mode. TODO: figure out the correct semantics of this and document the intended future behaviour as well as the current transitional behaviour. """
|
if mode not in [MODE_READ, MODE_WRITE]:
error = "Open mode must be '{}' or '{}'".format(
MODE_READ, MODE_WRITE)
raise ValueError(error)
self._openMode = mode
if mode == MODE_READ:
self.assertExists()
if mode == MODE_READ:
# This is part of the transitional behaviour where
# we load the whole DB into memory to get access to
# the data model.
self.load()
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def insertOntology(self, ontology):
""" Inserts the specified ontology into this repository. """
|
try:
models.Ontology.create(
id=ontology.getName(),
name=ontology.getName(),
dataurl=ontology.getDataUrl(),
ontologyprefix=ontology.getOntologyPrefix())
except Exception:
raise exceptions.DuplicateNameException(
ontology.getName())
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def removeOntology(self, ontology):
""" Removes the specified ontology term map from this repository. """
|
q = models.Ontology.delete().where(
models.Ontology.id == ontology.getId())
q.execute()
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def insertReference(self, reference):
""" Inserts the specified reference into this repository. """
|
models.Reference.create(
id=reference.getId(),
referencesetid=reference.getParentContainer().getId(),
name=reference.getLocalId(),
length=reference.getLength(),
isderived=reference.getIsDerived(),
species=json.dumps(reference.getSpecies()),
md5checksum=reference.getMd5Checksum(),
sourceaccessions=json.dumps(reference.getSourceAccessions()),
sourceuri=reference.getSourceUri())
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def insertReferenceSet(self, referenceSet):
""" Inserts the specified referenceSet into this repository. """
|
try:
models.Referenceset.create(
id=referenceSet.getId(),
name=referenceSet.getLocalId(),
description=referenceSet.getDescription(),
assemblyid=referenceSet.getAssemblyId(),
isderived=referenceSet.getIsDerived(),
species=json.dumps(referenceSet.getSpecies()),
md5checksum=referenceSet.getMd5Checksum(),
sourceaccessions=json.dumps(
referenceSet.getSourceAccessions()),
sourceuri=referenceSet.getSourceUri(),
dataurl=referenceSet.getDataUrl())
for reference in referenceSet.getReferences():
self.insertReference(reference)
except Exception:
raise exceptions.DuplicateNameException(
referenceSet.getLocalId())
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def insertDataset(self, dataset):
""" Inserts the specified dataset into this repository. """
|
try:
models.Dataset.create(
id=dataset.getId(),
name=dataset.getLocalId(),
description=dataset.getDescription(),
attributes=json.dumps(dataset.getAttributes()))
except Exception:
raise exceptions.DuplicateNameException(
dataset.getLocalId())
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def removeDataset(self, dataset):
""" Removes the specified dataset from this repository. This performs a cascading removal of all items within this dataset. """
|
for datasetRecord in models.Dataset.select().where(
models.Dataset.id == dataset.getId()):
datasetRecord.delete_instance(recursive=True)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def removePhenotypeAssociationSet(self, phenotypeAssociationSet):
""" Remove a phenotype association set from the repo """
|
q = models.Phenotypeassociationset.delete().where(
models.Phenotypeassociationset.id ==
phenotypeAssociationSet.getId())
q.execute()
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def removeFeatureSet(self, featureSet):
""" Removes the specified featureSet from this repository. """
|
q = models.Featureset.delete().where(
models.Featureset.id == featureSet.getId())
q.execute()
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def removeContinuousSet(self, continuousSet):
""" Removes the specified continuousSet from this repository. """
|
q = models.ContinuousSet.delete().where(
models.ContinuousSet.id == continuousSet.getId())
q.execute()
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def insertReadGroup(self, readGroup):
""" Inserts the specified readGroup into the DB. """
|
statsJson = json.dumps(protocol.toJsonDict(readGroup.getStats()))
experimentJson = json.dumps(
protocol.toJsonDict(readGroup.getExperiment()))
try:
models.Readgroup.create(
id=readGroup.getId(),
readgroupsetid=readGroup.getParentContainer().getId(),
name=readGroup.getLocalId(),
predictedinsertedsize=readGroup.getPredictedInsertSize(),
samplename=readGroup.getSampleName(),
description=readGroup.getDescription(),
stats=statsJson,
experiment=experimentJson,
biosampleid=readGroup.getBiosampleId(),
attributes=json.dumps(readGroup.getAttributes()))
except Exception as e:
raise exceptions.RepoManagerException(e)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def removeReadGroupSet(self, readGroupSet):
""" Removes the specified readGroupSet from this repository. This performs a cascading removal of all items within this readGroupSet. """
|
for readGroupSetRecord in models.Readgroupset.select().where(
models.Readgroupset.id == readGroupSet.getId()):
readGroupSetRecord.delete_instance(recursive=True)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def removeVariantSet(self, variantSet):
""" Removes the specified variantSet from this repository. This performs a cascading removal of all items within this variantSet. """
|
for variantSetRecord in models.Variantset.select().where(
models.Variantset.id == variantSet.getId()):
variantSetRecord.delete_instance(recursive=True)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def removeBiosample(self, biosample):
""" Removes the specified biosample from this repository. """
|
q = models.Biosample.delete().where(
models.Biosample.id == biosample.getId())
q.execute()
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def removeIndividual(self, individual):
""" Removes the specified individual from this repository. """
|
q = models.Individual.delete().where(
models.Individual.id == individual.getId())
q.execute()
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def insertReadGroupSet(self, readGroupSet):
""" Inserts a the specified readGroupSet into this repository. """
|
programsJson = json.dumps(
[protocol.toJsonDict(program) for program in
readGroupSet.getPrograms()])
statsJson = json.dumps(protocol.toJsonDict(readGroupSet.getStats()))
try:
models.Readgroupset.create(
id=readGroupSet.getId(),
datasetid=readGroupSet.getParentContainer().getId(),
referencesetid=readGroupSet.getReferenceSet().getId(),
name=readGroupSet.getLocalId(),
programs=programsJson,
stats=statsJson,
dataurl=readGroupSet.getDataUrl(),
indexfile=readGroupSet.getIndexFile(),
attributes=json.dumps(readGroupSet.getAttributes()))
for readGroup in readGroupSet.getReadGroups():
self.insertReadGroup(readGroup)
except Exception as e:
raise exceptions.RepoManagerException(e)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def removeReferenceSet(self, referenceSet):
""" Removes the specified referenceSet from this repository. This performs a cascading removal of all references within this referenceSet. However, it does not remove any of the ReadGroupSets or items that refer to this ReferenceSet. These must be deleted before the referenceSet can be removed. """
|
try:
q = models.Reference.delete().where(
models.Reference.referencesetid == referenceSet.getId())
q.execute()
q = models.Referenceset.delete().where(
models.Referenceset.id == referenceSet.getId())
q.execute()
except Exception:
msg = ("Unable to delete reference set. "
"There are objects currently in the registry which are "
"aligned against it. Remove these objects before removing "
"the reference set.")
raise exceptions.RepoManagerException(msg)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def insertVariantAnnotationSet(self, variantAnnotationSet):
""" Inserts a the specified variantAnnotationSet into this repository. """
|
analysisJson = json.dumps(
protocol.toJsonDict(variantAnnotationSet.getAnalysis()))
try:
models.Variantannotationset.create(
id=variantAnnotationSet.getId(),
variantsetid=variantAnnotationSet.getParentContainer().getId(),
ontologyid=variantAnnotationSet.getOntology().getId(),
name=variantAnnotationSet.getLocalId(),
analysis=analysisJson,
annotationtype=variantAnnotationSet.getAnnotationType(),
created=variantAnnotationSet.getCreationTime(),
updated=variantAnnotationSet.getUpdatedTime(),
attributes=json.dumps(variantAnnotationSet.getAttributes()))
except Exception as e:
raise exceptions.RepoManagerException(e)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def insertCallSet(self, callSet):
""" Inserts a the specified callSet into this repository. """
|
try:
models.Callset.create(
id=callSet.getId(),
name=callSet.getLocalId(),
variantsetid=callSet.getParentContainer().getId(),
biosampleid=callSet.getBiosampleId(),
attributes=json.dumps(callSet.getAttributes()))
except Exception as e:
raise exceptions.RepoManagerException(e)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def insertVariantSet(self, variantSet):
""" Inserts a the specified variantSet into this repository. """
|
# We cheat a little here with the VariantSetMetadata, and encode these
# within the table as a JSON dump. These should really be stored in
# their own table
metadataJson = json.dumps(
[protocol.toJsonDict(metadata) for metadata in
variantSet.getMetadata()])
urlMapJson = json.dumps(variantSet.getReferenceToDataUrlIndexMap())
try:
models.Variantset.create(
id=variantSet.getId(),
datasetid=variantSet.getParentContainer().getId(),
referencesetid=variantSet.getReferenceSet().getId(),
name=variantSet.getLocalId(),
created=datetime.datetime.now(),
updated=datetime.datetime.now(),
metadata=metadataJson,
dataurlindexmap=urlMapJson,
attributes=json.dumps(variantSet.getAttributes()))
except Exception as e:
raise exceptions.RepoManagerException(e)
for callSet in variantSet.getCallSets():
self.insertCallSet(callSet)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def insertFeatureSet(self, featureSet):
""" Inserts a the specified featureSet into this repository. """
|
# TODO add support for info and sourceUri fields.
try:
models.Featureset.create(
id=featureSet.getId(),
datasetid=featureSet.getParentContainer().getId(),
referencesetid=featureSet.getReferenceSet().getId(),
ontologyid=featureSet.getOntology().getId(),
name=featureSet.getLocalId(),
dataurl=featureSet.getDataUrl(),
attributes=json.dumps(featureSet.getAttributes()))
except Exception as e:
raise exceptions.RepoManagerException(e)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def insertContinuousSet(self, continuousSet):
""" Inserts a the specified continuousSet into this repository. """
|
# TODO add support for info and sourceUri fields.
try:
models.ContinuousSet.create(
id=continuousSet.getId(),
datasetid=continuousSet.getParentContainer().getId(),
referencesetid=continuousSet.getReferenceSet().getId(),
name=continuousSet.getLocalId(),
dataurl=continuousSet.getDataUrl(),
attributes=json.dumps(continuousSet.getAttributes()))
except Exception as e:
raise exceptions.RepoManagerException(e)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def insertBiosample(self, biosample):
""" Inserts the specified Biosample into this repository. """
|
try:
models.Biosample.create(
id=biosample.getId(),
datasetid=biosample.getParentContainer().getId(),
name=biosample.getLocalId(),
description=biosample.getDescription(),
disease=json.dumps(biosample.getDisease()),
created=biosample.getCreated(),
updated=biosample.getUpdated(),
individualid=biosample.getIndividualId(),
attributes=json.dumps(biosample.getAttributes()),
individualAgeAtCollection=json.dumps(
biosample.getIndividualAgeAtCollection()))
except Exception:
raise exceptions.DuplicateNameException(
biosample.getLocalId(),
biosample.getParentContainer().getLocalId())
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def insertIndividual(self, individual):
""" Inserts the specified individual into this repository. """
|
try:
models.Individual.create(
id=individual.getId(),
datasetId=individual.getParentContainer().getId(),
name=individual.getLocalId(),
description=individual.getDescription(),
created=individual.getCreated(),
updated=individual.getUpdated(),
species=json.dumps(individual.getSpecies()),
sex=json.dumps(individual.getSex()),
attributes=json.dumps(individual.getAttributes()))
except Exception:
raise exceptions.DuplicateNameException(
individual.getLocalId(),
individual.getParentContainer().getLocalId())
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def insertPhenotypeAssociationSet(self, phenotypeAssociationSet):
""" Inserts the specified phenotype annotation set into this repository. """
|
datasetId = phenotypeAssociationSet.getParentContainer().getId()
attributes = json.dumps(phenotypeAssociationSet.getAttributes())
try:
models.Phenotypeassociationset.create(
id=phenotypeAssociationSet.getId(),
name=phenotypeAssociationSet.getLocalId(),
datasetid=datasetId,
dataurl=phenotypeAssociationSet._dataUrl,
attributes=attributes)
except Exception:
raise exceptions.DuplicateNameException(
phenotypeAssociationSet.getParentContainer().getId())
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def insertRnaQuantificationSet(self, rnaQuantificationSet):
""" Inserts a the specified rnaQuantificationSet into this repository. """
|
try:
models.Rnaquantificationset.create(
id=rnaQuantificationSet.getId(),
datasetid=rnaQuantificationSet.getParentContainer().getId(),
referencesetid=rnaQuantificationSet.getReferenceSet().getId(),
name=rnaQuantificationSet.getLocalId(),
dataurl=rnaQuantificationSet.getDataUrl(),
attributes=json.dumps(rnaQuantificationSet.getAttributes()))
except Exception:
raise exceptions.DuplicateNameException(
rnaQuantificationSet.getLocalId(),
rnaQuantificationSet.getParentContainer().getLocalId())
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def removeRnaQuantificationSet(self, rnaQuantificationSet):
""" Removes the specified rnaQuantificationSet from this repository. This performs a cascading removal of all items within this rnaQuantificationSet. """
|
q = models.Rnaquantificationset.delete().where(
models.Rnaquantificationset.id == rnaQuantificationSet.getId())
q.execute()
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def insertPeer(self, peer):
""" Accepts a peer datamodel object and adds it to the registry. """
|
try:
models.Peer.create(
url=peer.getUrl(),
attributes=json.dumps(peer.getAttributes()))
except Exception as e:
raise exceptions.RepoManagerException(e)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def removePeer(self, url):
""" Remove peers by URL. """
|
q = models.Peer.delete().where(
models.Peer.url == url)
q.execute()
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def initialise(self):
""" Initialise this data repository, creating any necessary directories and file paths. """
|
self._checkWriteMode()
self._createSystemTable()
self._createNetworkTables()
self._createOntologyTable()
self._createReferenceSetTable()
self._createReferenceTable()
self._createDatasetTable()
self._createReadGroupSetTable()
self._createReadGroupTable()
self._createCallSetTable()
self._createVariantSetTable()
self._createVariantAnnotationSetTable()
self._createFeatureSetTable()
self._createContinuousSetTable()
self._createBiosampleTable()
self._createIndividualTable()
self._createPhenotypeAssociationSetTable()
self._createRnaQuantificationSetTable()
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def load(self):
""" Loads this data repository into memory. """
|
self._readSystemTable()
self._readOntologyTable()
self._readReferenceSetTable()
self._readReferenceTable()
self._readDatasetTable()
self._readReadGroupSetTable()
self._readReadGroupTable()
self._readVariantSetTable()
self._readCallSetTable()
self._readVariantAnnotationSetTable()
self._readFeatureSetTable()
self._readContinuousSetTable()
self._readBiosampleTable()
self._readIndividualTable()
self._readPhenotypeAssociationSetTable()
self._readRnaQuantificationSetTable()
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getFeature(self, compoundId):
""" find a feature and return ga4gh representation, use compoundId as featureId """
|
feature = self._getFeatureById(compoundId.featureId)
feature.id = str(compoundId)
return feature
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _getFeatureById(self, featureId):
""" find a feature and return ga4gh representation, use 'native' id as featureId """
|
featureRef = rdflib.URIRef(featureId)
featureDetails = self._detailTuples([featureRef])
feature = {}
for detail in featureDetails:
feature[detail['predicate']] = []
for detail in featureDetails:
feature[detail['predicate']].append(detail['object'])
pbFeature = protocol.Feature()
term = protocol.OntologyTerm()
# Schema for feature only supports one type of `type`
# here we default to first OBO defined
for featureType in sorted(feature[TYPE]):
if "obolibrary" in featureType:
term.term = self._featureTypeLabel(featureType)
term.term_id = featureType
pbFeature.feature_type.MergeFrom(term)
break
pbFeature.id = featureId
# Schema for feature only supports one type of `name` `symbol`
# here we default to shortest for symbol and longest for name
feature[LABEL].sort(key=len)
pbFeature.gene_symbol = feature[LABEL][0]
pbFeature.name = feature[LABEL][-1]
pbFeature.attributes.MergeFrom(protocol.Attributes())
for key in feature:
for val in sorted(feature[key]):
pbFeature.attributes.attr[key].values.add().string_value = val
if featureId in self._locationMap:
location = self._locationMap[featureId]
pbFeature.reference_name = location["chromosome"]
pbFeature.start = location["begin"]
pbFeature.end = location["end"]
return pbFeature
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _filterSearchFeaturesRequest(self, reference_name, gene_symbol, name, start, end):
""" formulate a sparql query string based on parameters """
|
filters = []
query = self._baseQuery()
location = self._findLocation(reference_name, start, end)
if location:
filters.append("?feature = <{}>".format(location))
if gene_symbol:
filters.append(
    'regex(?feature_label, "{}")'.format(gene_symbol))
if name:
filters.append(
'regex(?feature_label, "{}")'.format(name))
# apply filters
filter = "FILTER ({})".format(' && '.join(filters))
if len(filters) == 0:
filter = ""
query = query.replace("#%FILTER%", filter)
return query
|
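The filter-assembly step above reduces to joining the collected conditions with `&&` and omitting the `FILTER` clause entirely when no conditions apply. A standalone sketch of that logic (hypothetical helper name and illustrative values, not the server's actual query template):

```python
def build_filter_clause(filters):
    # Join conditions with '&&'; an empty list yields no FILTER at all.
    if not filters:
        return ""
    return "FILTER ({})".format(" && ".join(filters))

# The query template carries a placeholder that is replaced in-place.
query_template = "SELECT ?feature WHERE { ?feature ?p ?o . #%FILTER% }"
clause = build_filter_clause(['regex(?feature_label, "BRCA1")'])
query = query_template.replace("#%FILTER%", clause)
```

Replacing the placeholder with an empty string when there are no filters keeps the template valid SPARQL in both cases.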
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _findLocation(self, reference_name, start, end):
""" return a location key form the locationMap """
|
try:
# TODO - sequence_annotations does not have build?
return self._locationMap['hg19'][reference_name][start][end]
except KeyError:
return None
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def addValue(self, protocolElement):
""" Appends the specified protocolElement to the value list for this response. """
|
self._numElements += 1
self._bufferSize += protocolElement.ByteSize()
attr = getattr(self._protoObject, self._valueListName)
obj = attr.add()
obj.CopyFrom(protocolElement)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getSerializedResponse(self):
""" Returns a string version of the SearchResponse that has been built by this SearchResponseBuilder. """
|
self._protoObject.next_page_token = pb.string(self._nextPageToken)
s = protocol.toJson(self._protoObject)
return s
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def populateFromRow(self, ontologyRecord):
""" Populates this Ontology using values in the specified DB row. """
|
self._id = ontologyRecord.id
self._dataUrl = ontologyRecord.dataurl
self._readFile()
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getGaTermByName(self, name):
""" Returns a GA4GH OntologyTerm object by name. :param name: name of the ontology term, ex. "gene". :return: GA4GH OntologyTerm object. """
|
# TODO what is the correct value when we have no mapping??
termIds = self.getTermIds(name)
if len(termIds) == 0:
termId = ""
# TODO add logging for missed term translation.
else:
# TODO what is the correct behaviour here when we have multiple
# IDs matching a given name?
termId = termIds[0]
term = protocol.OntologyTerm()
term.term = name
term.term_id = termId
return term
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getExceptionClass(errorCode):
""" Converts the specified error code into the corresponding class object. Raises a KeyError if the errorCode is not found. """
|
classMap = {}
for name, class_ in inspect.getmembers(sys.modules[__name__]):
if inspect.isclass(class_) and issubclass(class_, BaseServerException):
classMap[class_.getErrorCode()] = class_
return classMap[errorCode]
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def toProtocolElement(self):
""" Converts this exception into the GA4GH protocol type so that it can be communicated back to the client. """
|
error = protocol.GAException()
error.error_code = self.getErrorCode()
error.message = self.getMessage()
return error
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _init_goterm_ref(self, rec_curr, name, lnum):
"""Initialize new reference and perform checks."""
|
if rec_curr is None:
return GOTerm()
msg = "PREVIOUS {REC} WAS NOT TERMINATED AS EXPECTED".format(REC=name)
self._die(msg, lnum)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _init_typedef(self, typedef_curr, name, lnum):
"""Initialize new typedef and perform checks."""
|
if typedef_curr is None:
return TypeDef()
msg = "PREVIOUS {REC} WAS NOT TERMINATED AS EXPECTED".format(REC=name)
self._die(msg, lnum)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _add_nested(self, rec, name, value):
"""Adds a term's nested attributes."""
|
# Remove comments and split term into typedef / target term.
(typedef, target_term) = value.split('!')[0].rstrip().split(' ')
# Save the nested term.
getattr(rec, name)[typedef].append(target_term)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _die(self, msg, lnum):
"""Raise an Exception if file read is unexpected."""
|
raise Exception("**FATAL {FILE}({LNUM}): {MSG}\n".format(
FILE=self.obo_file, LNUM=lnum, MSG=msg))
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def write_hier_rec(self, gos_printed, out=sys.stdout, len_dash=1, max_depth=None, num_child=None, short_prt=False, include_only=None, go_marks=None, depth=1, dp="-"):
"""Write hierarchy for a GO Term record."""
|
# Added by DV Klopfenstein
GO_id = self.id
# Shortens hierarchy report by only printing the hierarchy
# for the sub-set of user-specified GO terms which are connected.
if include_only is not None and GO_id not in include_only:
return
nrp = short_prt and GO_id in gos_printed
if go_marks is not None:
out.write('{} '.format('>' if GO_id in go_marks else ' '))
if len_dash is not None:
# Default character indicating hierarchy level is '-'.
# '=' is used to indicate a hierarchical path printed in detail previously.
letter = '-' if not nrp or not self.children else '='
dp = ''.join([letter]*depth)
out.write('{DASHES:{N}} '.format(DASHES=dp, N=len_dash))
if num_child is not None:
out.write('{N:>5} '.format(N=len(self.get_all_children())))
out.write('{GO}\tL-{L:>02}\tD-{D:>02}\t{desc}\n'.format(
GO=self.id, L=self.level, D=self.depth, desc=self.name))
# Track GOs previously printed only if needed
if short_prt:
gos_printed.add(GO_id)
# Do not print hierarchy below this turn if it has already been printed
if nrp:
return
depth += 1
if max_depth is not None and depth > max_depth:
return
for p in self.children:
p.write_hier_rec(gos_printed, out, len_dash, max_depth, num_child, short_prt,
include_only, go_marks,
depth, dp)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def write_hier(self, GO_id, out=sys.stdout, len_dash=1, max_depth=None, num_child=None, short_prt=False, include_only=None, go_marks=None):
"""Write hierarchy for a GO Term."""
|
gos_printed = set()
self[GO_id].write_hier_rec(gos_printed, out, len_dash, max_depth, num_child,
short_prt, include_only, go_marks)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def paths_to_top(self, term):
""" Returns all possible paths to the root node Each path includes the term given. The order of the path is top -> bottom, i.e. it starts with the root and ends with the given term (inclusively). Parameters: - term: the id of the GO term, where the paths begin (i.e. the accession 'GO:0003682') Returns: -------- - a list of lists of GO Terms """
|
# error handling consistent with original authors
if term not in self:
print("Term %s not found!" % term, file=sys.stderr)
return
def _paths_to_top_recursive(rec):
if rec.level == 0:
return [[rec]]
paths = []
for parent in rec.parents:
top_paths = _paths_to_top_recursive(parent)
for top_path in top_paths:
top_path.append(rec)
paths.append(top_path)
return paths
go_term = self[term]
return _paths_to_top_recursive(go_term)
|
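The recursion above collects every parent chain up to the root and appends the current term on the way back down. A minimal sketch on a toy DAG (the `Term` class here is a hypothetical stand-in for `GOTerm`, keeping only the `parents` and `level` attributes the recursion relies on):

```python
class Term(object):
    def __init__(self, name, parents=()):
        self.name = name
        self.parents = list(parents)
        # level 0 marks the root, matching the recursion's base case
        self.level = 0 if not parents else 1 + max(p.level for p in parents)

def paths_to_top(rec):
    # Base case: the root term is its own single-element path.
    if rec.level == 0:
        return [[rec]]
    paths = []
    for parent in rec.parents:
        for top_path in paths_to_top(parent):
            top_path.append(rec)       # extend each parent path with rec
            paths.append(top_path)
    return paths

root = Term("root")
mid_a = Term("a", [root])
mid_b = Term("b", [root])
leaf = Term("leaf", [mid_a, mid_b])
paths = [[t.name for t in p] for p in paths_to_top(leaf)]
```

Each returned path runs top to bottom and includes the queried term, as the docstring describes.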
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def make_graph_pydot(self, recs, nodecolor, edgecolor, dpi, draw_parents=True, draw_children=True):
"""draw AMIGO style network, lineage containing one query record."""
|
import pydot
G = pydot.Dot(graph_type='digraph', dpi="{}".format(dpi)) # Directed Graph
edgeset = set()
usr_ids = [rec.id for rec in recs]
for rec in recs:
if draw_parents:
edgeset.update(rec.get_all_parent_edges())
if draw_children:
edgeset.update(rec.get_all_child_edges())
lw = self._label_wrap
rec_id_set = set([rec_id for endpts in edgeset for rec_id in endpts])
nodes = {str(ID):pydot.Node(
lw(ID).replace("GO:",""), # Node name
shape="box",
style="rounded, filled",
# Highlight query terms in plum:
fillcolor="beige" if ID not in usr_ids else "plum",
color=nodecolor)
for ID in rec_id_set}
# add nodes explicitly via add_node
for rec_id, node in nodes.items():
G.add_node(node)
for src, target in edgeset:
# default layout in graphviz is top->bottom, so we invert
# the direction and plot using dir="back"
G.add_edge(pydot.Edge(nodes[target], nodes[src],
shape="normal",
color=edgecolor,
label="is_a",
dir="back"))
return G
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def sqliteRowsToDicts(sqliteRows):
""" Unpacks sqlite rows as returned by fetchall into an array of simple dicts. :param sqliteRows: array of rows returned from fetchall DB call :return: array of dicts, keyed by the column names. """
|
return map(lambda r: dict(zip(r.keys(), r)), sqliteRows)
|
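The zip-into-dict trick above works because `sqlite3.Row` exposes column names via `keys()`. A self-contained sketch against an in-memory database (Python 3 list comprehension in place of the original's `map`):

```python
import sqlite3

def rows_to_dicts(rows):
    # Each sqlite3.Row supports keys(); zip column names with values.
    return [dict(zip(r.keys(), r)) for r in rows]

conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row      # fetchall now returns Row objects
conn.execute("CREATE TABLE t (id INTEGER, name TEXT)")
conn.execute("INSERT INTO t VALUES (1, 'a'), (2, 'b')")
dicts = rows_to_dicts(conn.execute("SELECT * FROM t ORDER BY id").fetchall())
```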
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def limitsSql(startIndex=0, maxResults=0):
""" Construct a SQL LIMIT clause """
|
if startIndex and maxResults:
return " LIMIT {}, {}".format(startIndex, maxResults)
elif startIndex:
raise Exception("startIndex was provided, but maxResults was not")
elif maxResults:
return " LIMIT {}".format(maxResults)
else:
return ""
|
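The three branches above correspond to SQLite's two LIMIT forms plus the empty case. A standalone sketch with the same structure (snake_case names for illustration):

```python
def limits_sql(start_index=0, max_results=0):
    # Both -> "LIMIT offset, count"; only max_results -> "LIMIT count";
    # neither -> empty string; offset alone is an error.
    if start_index and max_results:
        return " LIMIT {}, {}".format(start_index, max_results)
    elif start_index:
        raise ValueError("startIndex was provided, but maxResults was not")
    elif max_results:
        return " LIMIT {}".format(max_results)
    return ""

clause_both = limits_sql(100, 20)
clause_max = limits_sql(max_results=20)
clause_none = limits_sql()
```

Note that an offset without a row count is rejected because SQL has no `LIMIT offset` form on its own.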
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def iterativeFetch(query, batchSize=default_batch_size):
""" Returns rows of a sql fetch query on demand """
|
while True:
rows = query.fetchmany(batchSize)
if not rows:
break
rowDicts = sqliteRowsToDicts(rows)
for rowDict in rowDicts:
yield rowDict
|
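The generator above pulls rows in fixed-size batches so callers never hold the whole result set in memory. A runnable sketch against an in-memory SQLite cursor (batch size shrunk for illustration):

```python
import sqlite3

def iterative_fetch(cursor, batch_size=2):
    # Keep calling fetchmany until it returns an empty list.
    while True:
        rows = cursor.fetchmany(batch_size)
        if not rows:
            break
        for row in rows:
            yield row

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(5)])
cur = conn.execute("SELECT n FROM t ORDER BY n")
values = [n for (n,) in iterative_fetch(cur)]
```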
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _parsePageToken(pageToken, numValues):
""" Parses the specified pageToken and returns a list of the specified number of values. Page tokens are assumed to consist of a fixed number of integers seperated by colons. If the page token does not conform to this specification, raise a InvalidPageToken exception. """
|
tokens = pageToken.split(":")
if len(tokens) != numValues:
msg = "Invalid number of values in page token"
raise exceptions.BadPageTokenException(msg)
try:
values = map(int, tokens)
except ValueError:
msg = "Malformed integers in page token"
raise exceptions.BadPageTokenException(msg)
return values
|
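A page token like `"100:0:3"` therefore decodes to a fixed-length list of integers. A standalone sketch of the parse (plain `ValueError` standing in for the server's `BadPageTokenException`):

```python
def parse_page_token(token, num_values):
    # Tokens are colon-separated integers, e.g. "100:0:3".
    parts = token.split(":")
    if len(parts) != num_values:
        raise ValueError("Invalid number of values in page token")
    try:
        return [int(p) for p in parts]
    except ValueError:
        raise ValueError("Malformed integers in page token")

vals = parse_page_token("100:0:3", 3)
```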
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _parseIntegerArgument(args, key, defaultValue):
""" Attempts to parse the specified key in the specified argument dictionary into an integer. If the argument cannot be parsed, raises a BadRequestIntegerException. If the key is not present, return the specified default value. """
|
ret = defaultValue
try:
if key in args:
try:
ret = int(args[key])
except ValueError:
raise exceptions.BadRequestIntegerException(key, args[key])
except TypeError:
raise Exception((key, args))
return ret
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _initialiseIteration(self):
""" Starts a new iteration. """
|
self._searchIterator = self._search(
self._request.start,
self._request.end if self._request.end != 0 else None)
self._currentObject = next(self._searchIterator, None)
if self._currentObject is not None:
self._nextObject = next(self._searchIterator, None)
self._searchAnchor = self._request.start
self._distanceFromAnchor = 0
firstObjectStart = self._getStart(self._currentObject)
if firstObjectStart > self._request.start:
self._searchAnchor = firstObjectStart
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def filterVariantAnnotation(self, vann):
""" Returns true when an annotation should be included. """
|
# TODO reintroduce feature ID search
ret = False
if len(self._effects) != 0 and not vann.transcript_effects:
return False
elif len(self._effects) == 0:
return True
for teff in vann.transcript_effects:
if self.filterEffect(teff):
ret = True
return ret
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def filterEffect(self, teff):
""" Returns true when any of the transcript effects are present in the request. """
|
ret = False
for effect in teff.effects:
ret = self._matchAnyEffects(effect) or ret
return ret
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _checkIdEquality(self, requestedEffect, effect):
""" Tests whether a requested effect and an effect present in an annotation are equal. """
|
return self._idPresent(requestedEffect) and (
effect.term_id == requestedEffect.term_id)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def ga4ghImportGlue():
""" Call this method before importing a ga4gh module in the scripts dir. Otherwise, you will be using the installed package instead of the development package. Assumes a certain directory structure. """
|
path = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
sys.path.append(path)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _update(self, dataFile, handle):
""" Update the priority of the file handle. The element is first removed and then added to the left of the deque. """
|
self._cache.remove((dataFile, handle))
self._add(dataFile, handle)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _removeLru(self):
""" Remove the least recently used file handle from the cache. The pop method removes an element from the right of the deque. Returns the name of the file that has been removed. """
|
(dataFile, handle) = self._cache.pop()
handle.close()
return dataFile
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getFileHandle(self, dataFile, openMethod):
""" Returns handle associated to the filename. If the file is already opened, update its priority in the cache and return its handle. Otherwise, open the file using openMethod, store it in the cache and return the corresponding handle. """
|
if dataFile in self._memoTable:
handle = self._memoTable[dataFile]
self._update(dataFile, handle)
return handle
else:
try:
handle = openMethod(dataFile)
except ValueError:
raise exceptions.FileOpenFailedException(dataFile)
self._memoTable[dataFile] = handle
self._add(dataFile, handle)
if len(self._memoTable) > self._maxCacheSize:
dataFile = self._removeLru()
del self._memoTable[dataFile]
return handle
|
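Together, `_update`, `_removeLru`, and `getFileHandle` implement an LRU cache over a deque, with the most recently used handle at the left. A simplified, self-contained sketch (strings stand in for real file handles, so the close-on-evict step is omitted; names are illustrative):

```python
from collections import deque

class FileHandleCache(object):
    """Minimal LRU sketch: most recent entry sits at the left of the deque."""

    def __init__(self, max_size=2):
        self._max_size = max_size
        self._cache = deque()
        self._memo = {}

    def get(self, name, open_method):
        if name in self._memo:
            handle = self._memo[name]
            self._cache.remove((name, handle))       # re-prioritise
            self._cache.appendleft((name, handle))
            return handle
        handle = open_method(name)                   # cache miss: open it
        self._memo[name] = handle
        self._cache.appendleft((name, handle))
        if len(self._memo) > self._max_size:
            evicted, _old = self._cache.pop()        # least recently used
            del self._memo[evicted]
        return handle

cache = FileHandleCache(max_size=2)
cache.get("a", lambda n: "handle-" + n)
cache.get("b", lambda n: "handle-" + n)
cache.get("a", lambda n: "handle-" + n)              # refreshes "a"
cache.get("c", lambda n: "handle-" + n)              # evicts "b"
```

Because the refresh moved `"a"` back to the front, the later insert of `"c"` evicts `"b"` rather than `"a"`.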
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def join(cls, splits):
""" Join an array of ids into a compound id string """
|
segments = []
for split in splits:
segments.append('"{}",'.format(split))
if len(segments) > 0:
segments[-1] = segments[-1][:-1]
jsonString = '[{}]'.format(''.join(segments))
return jsonString
|
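The trailing-comma trimming above builds a JSON-style array by hand. A standalone sketch (hypothetical name, list comprehension for brevity):

```python
def join_splits(splits):
    # Quote each id with a trailing comma, then strip the last comma.
    segments = ['"{}",'.format(s) for s in splits]
    if segments:
        segments[-1] = segments[-1][:-1]
    return '[{}]'.format(''.join(segments))

joined = join_splits(["dataset1", "vs1"])
```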
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def parse(cls, compoundIdStr):
""" Parses the specified compoundId string and returns an instance of this CompoundId class. :raises: An ObjectWithIdNotFoundException if parsing fails. This is because this method is a client-facing method, and if a malformed identifier (under our internal rules) is provided, the response should be that the identifier does not exist. """
|
if not isinstance(compoundIdStr, basestring):
raise exceptions.BadIdentifierException(compoundIdStr)
try:
deobfuscated = cls.deobfuscate(compoundIdStr)
except TypeError:
# When a string that cannot be converted to base64 is passed
# as an argument, b64decode raises a TypeError. We must treat
# this as an ID not found error.
raise exceptions.ObjectWithIdNotFoundException(compoundIdStr)
try:
encodedSplits = cls.split(deobfuscated)
splits = [cls.decode(split) for split in encodedSplits]
except (UnicodeDecodeError, ValueError):
# Sometimes base64 decoding succeeds but we're left with
# unicode gibberish. This is also an IdNotFound.
raise exceptions.ObjectWithIdNotFoundException(compoundIdStr)
# pull the differentiator out of the splits before instantiating
# the class, if the differentiator exists
fieldsLength = len(cls.fields)
if cls.differentiator is not None:
differentiatorIndex = cls.fields.index(
cls.differentiatorFieldName)
if differentiatorIndex < len(splits):
del splits[differentiatorIndex]
else:
raise exceptions.ObjectWithIdNotFoundException(
compoundIdStr)
fieldsLength -= 1
if len(splits) != fieldsLength:
raise exceptions.ObjectWithIdNotFoundException(compoundIdStr)
return cls(None, *splits)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def obfuscate(cls, idStr):
""" Mildly obfuscates the specified ID string in an easily reversible fashion. This is not intended for security purposes, but rather to dissuade users from depending on our internal ID structures. """
|
return unicode(base64.urlsafe_b64encode(
idStr.encode('utf-8')).replace(b'=', b''))
|
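The obfuscation is plain URL-safe base64 with the `=` padding stripped, so reversing it only requires restoring the padding to a multiple of four. A Python 3 sketch of both directions (the original is Python 2 and returns `unicode`):

```python
import base64

def obfuscate(id_str):
    # URL-safe base64 with '=' padding stripped; easily reversible.
    encoded = base64.urlsafe_b64encode(id_str.encode("utf-8"))
    return encoded.replace(b"=", b"").decode("ascii")

def deobfuscate(data):
    # Restore padding to a multiple of 4 before decoding.
    padded = data + "=" * (-len(data) % 4)
    return base64.urlsafe_b64decode(padded.encode("ascii")).decode("utf-8")

token = obfuscate("dataset1:variantset1")
```

As the docstring notes, this is a deterrent against depending on internal ID structure, not a security measure.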
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def serializeAttributes(self, msg):
""" Sets the attrbutes of a message during serialization. """
|
attributes = self.getAttributes()
for key in attributes:
protocol.setAttribute(
msg.attributes.attr[key].values, attributes[key])
return msg
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _scanDataFiles(self, dataDir, patterns):
""" Scans the specified directory for files with the specified globbing pattern and calls self._addDataFile for each. Raises an EmptyDirException if no data files are found. """
|
numDataFiles = 0
for pattern in patterns:
scanPath = os.path.join(dataDir, pattern)
for filename in glob.glob(scanPath):
self._addDataFile(filename)
numDataFiles += 1
if numDataFiles == 0:
raise exceptions.EmptyDirException(dataDir, patterns)
|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def getInitialPeerList(filePath, logger=None):
""" Attempts to get a list of peers from a file specified in configuration. This file has one URL per line and can contain newlines and comments. # Main ga4gh node http://1kgenomes.ga4gh.org # Local intranet peer https://192.168.1.1 The server will attempt to add URLs in this file to its registry at startup and will log a warning if the file isn't found. """
|
ret = []
with open(filePath) as textFile:
ret = textFile.readlines()
if len(ret) == 0:
if logger:
logger.warn("Couldn't load the initial "
"peer list. Try adding a "
"file named 'initial_peers.txt' "
"to {}".format(os.getcwd()))
# Remove lines that start with a hash or are empty.
return filter(lambda x: x != "" and not x.startswith("#"),
              map(lambda x: x.strip(), ret))
|
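The comment-and-blank-line handling described in the docstring amounts to stripping each line and keeping only non-empty lines that are not `#` comments. A standalone sketch (hypothetical name, Python 3 list comprehension):

```python
def filter_peer_lines(lines):
    # Strip whitespace/newlines, then drop blanks and '#' comments.
    stripped = [line.strip() for line in lines]
    return [line for line in stripped if line and not line.startswith("#")]

peers = filter_peer_lines([
    "# Main ga4gh node\n",
    "http://1kgenomes.ga4gh.org\n",
    "\n",
    "# Local intranet peer\n",
    "https://192.168.1.1\n",
])
```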
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def insertInitialPeer(dataRepository, url, logger=None):
""" Takes the datarepository, a url, and an optional logger and attempts to add the peer into the repository. """
|
insertPeer = dataRepository.insertPeer
try:
peer = datamodel.peers.Peer(url)
insertPeer(peer)
except exceptions.RepoManagerException as exc:
if logger:
logger.debug(
"Peer already in registry {} {}".format(peer.getUrl(), exc))
except exceptions.BadUrlException as exc:
if logger:
logger.debug("A URL in the initial "
"peer list {} was malformed. {}".format(url), exc)
|