INSTRUCTION: string, lengths 1-8.43k
RESPONSE: string, lengths 75-104k
Maps `bytes` length to a sequence's offset. For example, if we do byte_offset(5) and our list of sequences is [(0, 2), (10, 11), (40, 45)], then the returned value will be 42. Note that `bytes` must be <= byte_length(). :returns: actual offset in one of the sequences in the range for the requested byte length. :rtype: int ...
def byte_offset(self, bytes): """ Maps `bytes` length to a sequence's offset. For example, if we do byte_offset(5) and our list of sequences is [(0, 2), (10, 11), (40, 45)] then the returned value will be 42. Note that `bytes` must be <= byte_length(). :returns: actual offset in ...
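The example's arithmetic works out if the sequences are half-open (start, end) ranges; here is a minimal standalone sketch under that assumption (the free-function form and the range convention are mine, not taken from the snippet):

```python
def byte_offset(sequences, nbytes):
    """Map a byte count to an absolute offset inside half-open (start, end) ranges."""
    for start, end in sequences:
        length = end - start
        if nbytes < length:
            # The requested byte falls inside this range.
            return start + nbytes
        nbytes -= length
    raise ValueError("nbytes exceeds total byte length")

# With [(0, 2), (10, 11), (40, 45)]: the first two ranges cover 2 + 1 = 3
# bytes, so byte 5 is the third byte of the last range: 40 + 2 = 42.
```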
Parse command line argument and output appropriate file type (csv or JSON)
def main(): """ Parse command line argument and output appropriate file type (csv or JSON) """ parser = ArgumentParser() parser.add_argument( "-c", "--clinvarfile", dest="clinvarfile", help="ClinVar VCF file (either this or -C must be specified)", metavar="CLINVARFILE") ...
Navigate an open ftplib.FTP to the appropriate directory for ClinVar VCF files.
def nav_to_vcf_dir(ftp, build): """ Navigate an open ftplib.FTP to appropriate directory for ClinVar VCF files. Args: ftp: (type: ftplib.FTP) an open connection to ftp.ncbi.nlm.nih.gov build: (type: string) genome build, either 'b37' or 'b38' """ if build == 'b37': ftp.cwd...
Return ClinVarAllele data as dict object.
def as_dict(self, *args, **kwargs): """Return ClinVarAllele data as dict object.""" self_as_dict = super(ClinVarAllele, self).as_dict(*args, **kwargs) self_as_dict['hgvs'] = self.hgvs self_as_dict['clnalleleid'] = self.clnalleleid self_as_dict['clnsig'] = self.clnsig self...
Parse frequency data in ClinVar VCF
def _parse_frequencies(self): """Parse frequency data in ClinVar VCF""" frequencies = OrderedDict([ ('EXAC', 'Unknown'), ('ESP', 'Unknown'), ('TGP', 'Unknown')]) pref_freq = 'Unknown' for source in frequencies.keys(): freq_key = 'AF_' + sou...
Parse alleles for ClinVar VCF; overrides parent method.
def _parse_allele_data(self): """Parse alleles for ClinVar VCF, overrides parent method.""" # Get allele frequencies if they exist. pref_freq, frequencies = self._parse_frequencies() info_clnvar_single_tags = ['ALLELEID', 'CLNSIG', 'CLNHGVS'] cln_data = {x.lower(): self.info[x]...
Returns a class decorator that enables registering Blox to this factory.
def add(self, *names): '''Returns back a class decorator that enables registering Blox to this factory''' def decorator(blok): for name in names or (blok.__name__, ): self[name] = blok return blok return decorator
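The registration pattern above is self-contained enough to run; here is a sketch with a plain dict-based registry standing in for the real Blox factory (the `Factory` class name is an assumption):

```python
class Factory(dict):
    """Dict-like registry; add() registers classes under one or more names."""

    def add(self, *names):
        def decorator(blok):
            # Fall back to the class's own name when none are given.
            for name in names or (blok.__name__,):
                self[name] = blok
            return blok
        return decorator

factory = Factory()

@factory.add('widget', 'w')
class Widget:
    pass

@factory.add()
class Panel:
    pass
```

After this runs, `factory['widget']` and `factory['w']` both resolve to `Widget`, and `Panel` is registered under its own class name.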
Decorator for warning the user of deprecated functions before use.
def depricated_name(newmethod): """ Decorator for warning user of depricated functions before use. Args: newmethod (str): Name of method to use instead. """ def decorator(func): @wraps(func) def wrapper(*args, **kwargs): warnings.simplefilter('always', Deprecatio...
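A runnable sketch of such a decorator, keeping the snippet's own spelling of `depricated_name` (the `old_add` function below is a hypothetical example of mine):

```python
import functools
import warnings

def depricated_name(newmethod):
    """Warn that the decorated function is deprecated in favor of `newmethod`."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # Force the warning to show every call, as the snippet does.
            warnings.simplefilter('always', DeprecationWarning)
            warnings.warn(
                'Deprecated, use {0} instead.'.format(newmethod),
                DeprecationWarning)
            return func(*args, **kwargs)
        return wrapper
    return decorator

@depricated_name('new_add')
def old_add(a, b):
    return a + b
```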
setDefaultRedisConnectionParams - Sets the default parameters used when connecting to Redis.
def setDefaultRedisConnectionParams( connectionParams ): ''' setDefaultRedisConnectionParams - Sets the default parameters used when connecting to Redis. This should be the args to redis.Redis in dict (kwargs) form. @param connectionParams <dict> - A dict of connection parameters. Common keys are: ...
clearRedisPools - Disconnect all managed connection pools and clear the connection_pool attribute on all stored managed connection pools.
def clearRedisPools(): ''' clearRedisPools - Disconnect all managed connection pools, and clear the connectiobn_pool attribute on all stored managed connection pools. A "managed" connection pool is one where REDIS_CONNECTION_PARAMS does not define the "connection_pool" attribute. If you define your ...
getRedisPool - Returns and possibly also creates a Redis connection pool based on the REDIS_CONNECTION_PARAMS passed in.
def getRedisPool(params): ''' getRedisPool - Returns and possibly also creates a Redis connection pool based on the REDIS_CONNECTION_PARAMS passed in. The goal of this method is to keep a small connection pool rolling to each unique Redis instance, otherwise during network issues etc python-redis will l...
toDict / asDict - Get a dictionary representation of this model.
def asDict(self, includeMeta=False, forStorage=False, strKeys=False): ''' toDict / asDict - Get a dictionary representation of this model. @param includeMeta - Include metadata in return. For now, this is only pk stored as "_id" @param convertValueTypes <bool> - default True. If False, fields with fieldVal...
pprint - Pretty-print a dict representation of this object.
def pprint(self, stream=None): ''' pprint - Pretty-print a dict representation of this object. @param stream <file/None> - Either a stream to output, or None to default to sys.stdout ''' pprint.pprint(self.asDict(includeMeta=True, forStorage=False, strKeys=True), stream=stream)
hasUnsavedChanges - Check if any unsaved changes are present in this model or if it has never been saved.
def hasUnsavedChanges(self, cascadeObjects=False): ''' hasUnsavedChanges - Check if any unsaved changes are present in this model, or if it has never been saved. @param cascadeObjects <bool> default False, if True will check if any foreign linked objects themselves have unsaved changes (recursively). Other...
getUpdatedFields - See changed fields. @param cascadeObjects <bool> default False, if True will check if any foreign linked objects themselves have unsaved changes (recursively). Otherwise, will just check if the pk has changed.
def getUpdatedFields(self, cascadeObjects=False): ''' getUpdatedFields - See changed fields. @param cascadeObjects <bool> default False, if True will check if any foreign linked objects themselves have unsaved changes (recursively). Otherwise, will just check if the pk has changed. @return - a dicti...
diff - Compare the field values on two IndexedRedisModels.
def diff(firstObj, otherObj, includeMeta=False): ''' diff - Compare the field values on two IndexedRedisModels. @param firstObj <IndexedRedisModel instance> - First object (or self) @param otherObj <IndexedRedisModel instance> - Second object @param includeMeta <bool> - If meta information (like pk) sh...
save - Save this object. Will perform an insert if this object had not been saved before, otherwise will update JUST the fields changed on THIS INSTANCE of the model.
def save(self, cascadeSave=True): ''' save - Save this object. Will perform an "insert" if this object had not been saved before, otherwise will update JUST the fields changed on THIS INSTANCE of the model. i.e. If you have two processes fetch the same object and change different fields, they wil...
reset - Remove all stored data associated with this model (i.e. all objects of this type), and then save all the provided objects in #newObjs, all in one atomic transaction.
def reset(cls, newObjs): ''' reset - Remove all stored data associated with this model (i.e. all objects of this type), and then save all the provided objects in #newObjs , all in one atomic transaction. Use this method to move from one complete set of objects to another, where any querying applications ...
hasSameValues - Check if this and another model have the same fields and values.
def hasSameValues(self, other, cascadeObject=True): ''' hasSameValues - Check if this and another model have the same fields and values. This does NOT include id, so the models can have the same values but be different objects in the database. @param other <IndexedRedisModel> - Another model @param cas...
copy - Copies this object.
def copy(self, copyPrimaryKey=False, copyValues=False): ''' copy - Copies this object. @param copyPrimaryKey <bool> default False - If True, any changes to the copy will save over-top the existing entry in Redis. If False, only the data is copied, and n...
saveToExternal - Saves this object to a different Redis than that specified by REDIS_CONNECTION_PARAMS on this model.
def saveToExternal(self, redisCon): ''' saveToExternal - Saves this object to a different Redis than that specified by REDIS_CONNECTION_PARAMS on this model. @param redisCon <dict/redis.Redis> - Either a dict of connection params, a la REDIS_CONNECTION_PARAMS, or an existing Redis connection. If you are do...
reload - Reload this object from the database, overriding any local changes and merging in any updates.
def reload(self, cascadeObjects=True): ''' reload - Reload this object from the database, overriding any local changes and merging in any updates. @param cascadeObjects <bool> Default True. If True, foreign-linked objects will be reloaded if their values have changed since last save/fe...
copyModel - Copy this model and return that copy.
def copyModel(mdl): ''' copyModel - Copy this model, and return that copy. The copied model will have all the same data, but will have a fresh instance of the FIELDS array and all members, and the INDEXED_FIELDS array. This is useful for converting, like changing field types or whatever, wh...
validateModel - Class method that validates a given model is implemented correctly. Will only be validated once on first model instantiation.
def validateModel(model): ''' validateModel - Class method that validates a given model is implemented correctly. Will only be validated once, on first model instantiation. @param model - Implicit of own class @return - True @raises - InvalidModelException if there is a problem with the model, and the ...
connectAlt - Create a class of this model which will use an alternate connection than the one specified by REDIS_CONNECTION_PARAMS on this model.
def connectAlt(cls, redisConnectionParams): ''' connectAlt - Create a class of this model which will use an alternate connection than the one specified by REDIS_CONNECTION_PARAMS on this model. @param redisConnectionParams <dict> - Dictionary of arguments to redis.Redis, same as REDIS_CONNECTION_PARAMS. @r...
_get_new_connection - Get a new connection (internal).
def _get_new_connection(self): ''' _get_new_connection - Get a new connection internal ''' pool = getRedisPool(self.mdl.REDIS_CONNECTION_PARAMS) return redis.Redis(connection_pool=pool)
_get_connection - Maybe get a new connection, or reuse if passed in. Will share a connection with a model (internal).
def _get_connection(self): ''' _get_connection - Maybe get a new connection, or reuse if passed in. Will share a connection with a model internal ''' if self._connection is None: self._connection = self._get_new_connection() return self._connection
_add_id_to_keys - Adds primary key to table (internal).
def _add_id_to_keys(self, pk, conn=None): ''' _add_id_to_keys - Adds primary key to table internal ''' if conn is None: conn = self._get_connection() conn.sadd(self._get_ids_key(), pk)
_rem_id_from_keys - Remove primary key from table (internal).
def _rem_id_from_keys(self, pk, conn=None): ''' _rem_id_from_keys - Remove primary key from table internal ''' if conn is None: conn = self._get_connection() conn.srem(self._get_ids_key(), pk)
_add_id_to_index - Adds an id to an index (internal).
def _add_id_to_index(self, indexedField, pk, val, conn=None): ''' _add_id_to_index - Adds an id to an index internal ''' if conn is None: conn = self._get_connection() conn.sadd(self._get_key_for_index(indexedField, val), pk)
_rem_id_from_index - Removes an id from an index (internal).
def _rem_id_from_index(self, indexedField, pk, val, conn=None): ''' _rem_id_from_index - Removes an id from an index internal ''' if conn is None: conn = self._get_connection() conn.srem(self._get_key_for_index(indexedField, val), pk)
_get_key_for_index - Returns the key name that would hold the indexes on a value. Internal - does not validate that indexedFields is actually indexed. Trusts you. Don't let it down.
def _get_key_for_index(self, indexedField, val): ''' _get_key_for_index - Returns the key name that would hold the indexes on a value Internal - does not validate that indexedFields is actually indexed. Trusts you. Don't let it down. @param indexedField - string of field name @param val - Value of field ...
_compat_get_str_key_for_index - Return the key name as a string, even if it is a hashed index field. This is used in converting unhashed fields to a hashed index (called by _compat_rem_str_id_from_index, which is called by compat_convertHashedIndexes).
def _compat_get_str_key_for_index(self, indexedField, val): ''' _compat_get_str_key_for_index - Return the key name as a string, even if it is a hashed index field. This is used in converting unhashed fields to a hashed index (called by _compat_rem_str_id_from_index which is called by compat_convertHashedInde...
_compat_rem_str_id_from_index - Used in compat_convertHashedIndexes to remove the old string repr of a field, in order to later add the hashed value.
def _compat_rem_str_id_from_index(self, indexedField, pk, val, conn=None): ''' _compat_rem_str_id_from_index - Used in compat_convertHashedIndexes to remove the old string repr of a field, in order to later add the hashed value, ''' if conn is None: conn = self._get_connection() conn.srem(self._compat...
_peekNextID - Look at, but don't increment, the primary key for this model. Internal.
def _peekNextID(self, conn=None): ''' _peekNextID - Look at, but don't increment the primary key for this model. Internal. @return int - next pk ''' if conn is None: conn = self._get_connection() return to_unicode(conn.get(self._get_next_id_key()) or 0)
_getNextID - Get (and increment) the next primary key for this model. If you don't want to increment, @see _peekNextID. Internal. This is done automatically on save. No need to call it.
def _getNextID(self, conn=None): ''' _getNextID - Get (and increment) the next primary key for this model. If you don't want to increment, @see _peekNextID . Internal. This is done automatically on save. No need to call it. @return int - next pk ''' if conn is None: conn = self._get_connecti...
filter - Add filters based on INDEXED_FIELDS having or not having a value. Note, no objects are actually fetched until .all() is called.
def filter(self, **kwargs): ''' filter - Add filters based on INDEXED_FIELDS having or not having a value. Note, no objects are actually fetched until .all() is called Use the field name [ model.objects.filter(some_field='value')] to filter on items containing that value. Use the field name suffxed w...
Internal for handling filters; the guts of .filter and .filterInline.
def _filter(filterObj, **kwargs): ''' Internal for handling filters; the guts of .filter and .filterInline ''' for key, value in kwargs.items(): if key.endswith('__ne'): notFilter = True key = key[:-4] else: notFilter = False if key not in filterObj.indexedFields: raise ValueError('Fie...
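The `__ne` suffix handling at the heart of `_filter` can be isolated into a small runnable sketch (the `split_filters` name and the plain-dict return shape are mine; the real method mutates a filter object instead):

```python
def split_filters(indexed_fields, **kwargs):
    """Split keyword filters into equality and not-equal groups, validating fields."""
    filters, not_filters = {}, {}
    for key, value in kwargs.items():
        target = filters
        if key.endswith('__ne'):
            key = key[:-4]          # strip the '__ne' suffix to get the field name
            target = not_filters
        if key not in indexed_fields:
            raise ValueError('Field "%s" is not in INDEXED_FIELDS' % key)
        target[key] = value
    return filters, not_filters
```

For example, `split_filters({'name', 'age'}, name='bob', age__ne=3)` yields `({'name': 'bob'}, {'age': 3})`, and filtering on a non-indexed field raises ValueError.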
count - gets the number of records matching the filter criteria
def count(self): ''' count - gets the number of records matching the filter criteria Example: theCount = Model.objects.filter(field1='value').count() ''' conn = self._get_connection() numFilters = len(self.filters) numNotFilters = len(self.notFilters) if numFilters + numNotFilters == 0: ret...
exists - Tests whether a record holding the given primary key exists.
def exists(self, pk): ''' exists - Tests whether a record holding the given primary key exists. @param pk - Primary key (see getPk method) Example usage: Waiting for an object to be deleted without fetching the object or running a filter. This is a very cheap operation. @return <bool> - True if ob...
getPrimaryKeys - Returns all primary keys matching current filterset.
def getPrimaryKeys(self, sortByAge=False): ''' getPrimaryKeys - Returns all primary keys matching current filterset. @param sortByAge <bool> - If False, return will be a set and may not be ordered. If True, return will be a list and is guarenteed to represent objects oldest->newest @return <set> - A se...
all - Get the underlying objects which match the filter criteria.
def all(self, cascadeFetch=False): ''' all - Get the underlying objects which match the filter criteria. Example: objs = Model.objects.filter(field1='value', field2='value2').all() @param cascadeFetch <bool> Default False, If True, all Foreign objects associated with this model will be fetched imme...
allByAge - Get the underlying objects which match the filter criteria, ordered oldest -> newest. If you are doing a queue or just need the head/tail, consider .first() and .last() instead.
def allByAge(self, cascadeFetch=False): ''' allByAge - Get the underlying objects which match the filter criteria, ordered oldest -> newest If you are doing a queue or just need the head/tail, consider .first() and .last() instead. @param cascadeFetch <bool> Default False, If True, all Foreign objects ass...
allOnlyFields - Get the objects which match the filter criteria, only fetching given fields.
def allOnlyFields(self, fields, cascadeFetch=False): ''' allOnlyFields - Get the objects which match the filter criteria, only fetching given fields. @param fields - List of fields to fetch @param cascadeFetch <bool> Default False, If True, all Foreign objects associated with this model will be fetch...
allOnlyIndexedFields - Get the objects which match the filter criteria, only fetching indexed fields.
def allOnlyIndexedFields(self): ''' allOnlyIndexedFields - Get the objects which match the filter criteria, only fetching indexed fields. @return - Partial objects with only the indexed fields fetched ''' matchedKeys = self.getPrimaryKeys() if matchedKeys: return self.getMultipleOnlyIndexedFields(matc...
first - Returns the oldest record (lowest primary key) with current filters. This makes an efficient queue, as it only fetches a single object.
def first(self, cascadeFetch=False): ''' First - Returns the oldest record (lowerst primary key) with current filters. This makes an efficient queue, as it only fetches a single object. @param cascadeFetch <bool> Default False, If True, all Foreign objects associated with this model will be fetche...
Random - Returns a random record in current filterset.
def random(self, cascadeFetch=False): ''' Random - Returns a random record in current filterset. @param cascadeFetch <bool> Default False, If True, all Foreign objects associated with this model will be fetched immediately. If False, foreign objects will be fetched on-access. @return - Instance of M...
delete - Deletes all entries matching the filter criteria
def delete(self): ''' delete - Deletes all entries matching the filter criteria ''' if self.filters or self.notFilters: return self.mdl.deleter.deleteMultiple(self.allOnlyIndexedFields()) return self.mdl.deleter.destroyModel()
get - Get a single value with the internal primary key.
def get(self, pk, cascadeFetch=False): ''' get - Get a single value with the internal primary key. @param cascadeFetch <bool> Default False, If True, all Foreign objects associated with this model will be fetched immediately. If False, foreign objects will be fetched on-access. @param pk - internal ...
_doCascadeFetch - Takes an object and performs a cascading fetch on all foreign links, and all theirs, and so on.
def _doCascadeFetch(obj): ''' _doCascadeFetch - Takes an object and performs a cascading fetch on all foreign links, and all theirs, and so on. @param obj <IndexedRedisModel> - A fetched model ''' obj.validateModel() if not obj.foreignFields: return # NOTE: Currently this fetches using one trans...
getMultiple - Gets multiple objects with a single atomic operation
def getMultiple(self, pks, cascadeFetch=False): ''' getMultiple - Gets multiple objects with a single atomic operation @param cascadeFetch <bool> Default False, If True, all Foreign objects associated with this model will be fetched immediately. If False, foreign objects will be fetched on-access. @...
getOnlyFields - Gets only certain fields from a particular primary key. For working on the entire filter set, see allOnlyFields.
def getOnlyFields(self, pk, fields, cascadeFetch=False): ''' getOnlyFields - Gets only certain fields from a paticular primary key. For working on entire filter set, see allOnlyFields @param pk <int> - Primary Key @param fields list<str> - List of fields @param cascadeFetch <bool> Default False, If Tru...
getMultipleOnlyFields - Gets only certain fields from a list of primary keys. For working on the entire filter set, see allOnlyFields.
def getMultipleOnlyFields(self, pks, fields, cascadeFetch=False): ''' getMultipleOnlyFields - Gets only certain fields from a list of primary keys. For working on entire filter set, see allOnlyFields @param pks list<str> - Primary Keys @param fields list<str> - List of fields @param cascadeFetch <boo...
reindex - Reindexes the objects matching current filterset. Use this if you add/remove a field to INDEXED_FIELDS.
def reindex(self): ''' reindex - Reindexes the objects matching current filterset. Use this if you add/remove a field to INDEXED_FIELDS. NOTE - This will NOT remove entries from the old index if you change index type, or change decimalPlaces on a IRFixedPointField. To correct these indexes, you'll ne...
compat_convertHashedIndexes - Reindex fields, used for when you change the property "hashIndex" on one or more fields.
def compat_convertHashedIndexes(self, fetchAll=True): ''' compat_convertHashedIndexes - Reindex fields, used for when you change the propery "hashIndex" on one or more fields. For each field, this will delete both the hash and unhashed keys to an object, and then save a hashed or unhashed value, dependin...
save - Save an object / objects associated with this model. You probably want to just do object.save() instead of this, but to save multiple objects at once in a single transaction, you can use: MyModel.saver.save(myObjs)
def save(self, obj, usePipeline=True, forceID=False, cascadeSave=True, conn=None): ''' save - Save an object / objects associated with this model. You probably want to just do object.save() instead of this, but to save multiple objects at once in a single transaction, you can use: MyModel.s...
_doSave - Internal function to save a single object. Don't call this directly. Use "save" instead.
def _doSave(self, obj, isInsert, conn, pipeline=None): ''' _doSave - Internal function to save a single object. Don't call this directly. Use "save" instead. If a pipeline is provided, the operations (setting values, updating indexes, etc) will be queued into that pipeline. Otherw...
reindex - Reindexes a given list of objects. Probably you want to do Model.objects.reindex() instead of this directly.
def reindex(self, objs, conn=None): ''' reindex - Reindexes a given list of objects. Probably you want to do Model.objects.reindex() instead of this directly. @param objs list<IndexedRedisModel> - List of objects to reindex @param conn <redis.Redis or None> - Specific Redis connection or None to reuse '''...
compat_convertHashedIndexes - Reindex all fields for the provided objects, where the field value is hashed or not. If the field is unhashable, do not allow.
def compat_convertHashedIndexes(self, objs, conn=None): ''' compat_convertHashedIndexes - Reindex all fields for the provided objects, where the field value is hashed or not. If the field is unhashable, do not allow. NOTE: This works one object at a time. It is intended to be used while your application is ...
deleteOne - Delete one object
def deleteOne(self, obj, conn=None): ''' deleteOne - Delete one object @param obj - object to delete @param conn - Connection to reuse, or None @return - number of items deleted (0 or 1) ''' if not getattr(obj, '_id', None): return 0 if conn is None: conn = self._get_connection() pipelin...
deleteByPk - Delete object associated with given primary key
def deleteByPk(self, pk): ''' deleteByPk - Delete object associated with given primary key ''' obj = self.mdl.objects.getOnlyIndexedFields(pk) if not obj: return 0 return self.deleteOne(obj)
deleteMultiple - Delete multiple objects
def deleteMultiple(self, objs): ''' deleteMultiple - Delete multiple objects @param objs - List of objects @return - Number of objects deleted ''' conn = self._get_connection() pipeline = conn.pipeline() numDeleted = 0 for obj in objs: numDeleted += self.deleteOne(obj, pipeline) pipeline....
deleteMultipleByPks - Delete multiple objects given their primary keys
def deleteMultipleByPks(self, pks): ''' deleteMultipleByPks - Delete multiple objects given their primary keys @param pks - List of primary keys @return - Number of objects deleted ''' if type(pks) == set: pks = list(pks) if len(pks) == 1: return self.deleteByPk(pks[0]) objs = self.mdl.obje...
destroyModel - Destroy everything related to this model in one swoop.
def destroyModel(self): ''' destroyModel - Destroy everything related to this model in one swoop. Same effect as Model.reset([]) - Except slightly more efficient. This function is called if you do Model.objects.delete() with no filters set. @return - Number of keys deleted. Note, this is NOT nu...
Returns a blox template from an html string
def string(html, start_on=None, ignore=(), use_short=True, **queries): '''Returns a blox template from an html string''' if use_short: html = grow_short(html) return _to_template(fromstring(html), start_on=start_on, ignore=ignore, **queries)
Returns a blox template from a file stream object
def file(file_object, start_on=None, ignore=(), use_short=True, **queries): '''Returns a blox template from a file stream object''' return string(file_object.read(), start_on=start_on, ignore=ignore, use_short=use_short, **queries)
Returns a blox template from a valid file path
def filename(file_name, start_on=None, ignore=(), use_short=True, **queries): '''Returns a blox template from a valid file path''' with open(file_name) as template_file: return file(template_file, start_on=start_on, ignore=ignore, use_short=use_short, **queries)
Deserializes a new instance from a string. This is a convenience method that creates a StringIO object and calls create_instance_from_stream().
def create_from_string(self, string, context=EMPTY_CONTEXT, *args, **kwargs): """ Deserializes a new instance from a string. This is a convenience method that creates a StringIO object and calls create_instance_from_stream(). """ if not PY2 and not isinstance(string, bytes): ...
Decorator for managing chained dependencies of different class properties. The @require decorator allows developers to specify that a function call must be operated on before another property or function call is accessed, so that data and processing for an entire class can be evaluated in a lazy way (i.e. not all upon...
def require(method): """ Decorator for managing chained dependencies of different class properties. The @require decorator allows developers to specify that a function call must be operated on before another property or function call is accessed, so that data and processing for an entire class c...
Wrap function/method with specific exception if any exception occurs during function execution. Args: exception (Exception): Exception to re-cast error as.
def exception(exception): """ Wrap function/method with specific exception if any exception occurs during function execution. Args: exception (Exception): Exception to re-cast error as. Examples: >>> from gems import exception >>> >>> class MyCustomException(Except...
Accumulate all dictionary and named arguments as keyword argument dictionary. This is generally useful for functions that try to automatically resolve inputs.
def keywords(func): """ Accumulate all dictionary and named arguments as keyword argument dictionary. This is generally useful for functions that try to automatically resolve inputs. Examples: >>> @keywords >>> def test(*args, **kwargs): >>> return kwargs >>> ...
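One plausible reading of that behavior, sketched as a decorator that folds positional dict arguments into the keyword dict (the exact semantics of the gems version are an assumption here, as is the `collect` example function):

```python
import functools

def keywords(func):
    """Fold positional dict arguments into **kwargs before calling func."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        merged = {}
        for arg in args:
            if isinstance(arg, dict):
                merged.update(arg)
        merged.update(kwargs)       # explicit keywords win over dict entries
        return func(**merged)
    return wrapper

@keywords
def collect(**kwargs):
    return kwargs
```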
getCompressMod - Return the module used for compression on this field
def getCompressMod(self): ''' getCompressMod - Return the module used for compression on this field @return <module> - The module for compression ''' if self.compressMode == COMPRESS_MODE_ZLIB: return zlib if self.compressMode == COMPRESS_MODE_BZ2: return bz2 if self.compressMode == COMPRESS_MODE...
Outputs the set text
def output(self, to=None, *args, **kwargs): '''Outputs the set text''' to.write(str(self._value))
toBytes - Convert a value to bytes using the encoding specified on this field
def toBytes(self, value): ''' toBytes - Convert a value to bytes using the encoding specified on this field @param value <str> - The field to convert to bytes @return <bytes> - The object encoded using the codec specified on this field. NOTE: This method may go away. ''' if type(value) == bytes: ...
deprecatedMessage - Print a deprecated message (unless they are toggled off). Will print a message only once (based on "key").
def deprecatedMessage(msg, key=None, printStack=False): ''' deprecatedMessage - Print a deprecated messsage (unless they are toggled off). Will print a message only once (based on "key") @param msg <str> - Deprecated message to possibly print @param key <anything> - A key that is specific to this message. ...
This macro can be used in several methods:
def ConstField(name, value, marshal=None): """ This macro can be used in several methods: >>> ConstField("foo", 5, UBInt8) This created a constant field called ``foo`` with a value of 5 and is serialized/deserialized using UBInt8. >>> ConstField("foo", MyStruct(my_field=1, my_other_field=2)) ...
Like functools.partial, but instead of using the new kwargs, keeps the old ones.
def keep_kwargs_partial(func, *args, **keywords): """Like functools.partial but instead of using the new kwargs, keeps the old ones.""" def newfunc(*fargs, **fkeywords): newkeywords = fkeywords.copy() newkeywords.update(keywords) return func(*(args + fargs), **newkeywords) newfunc.fu...
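The snippet is nearly complete; a runnable version, plus a call showing that the partial-time kwargs take precedence over later ones (the `pair` function is an illustrative stand-in):

```python
def keep_kwargs_partial(func, *args, **keywords):
    """Like functools.partial, but the original keywords override new ones."""
    def newfunc(*fargs, **fkeywords):
        newkeywords = fkeywords.copy()
        newkeywords.update(keywords)   # old kwargs take precedence
        return func(*(args + fargs), **newkeywords)
    newfunc.func = func
    newfunc.args = args
    newfunc.keywords = keywords
    return newfunc

def pair(a, b=0):
    return (a, b)

# b=5 is bound at partial time and wins over b=9 supplied at call time.
bound = keep_kwargs_partial(pair, b=5)
```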
Serializes this NoteItem to a byte-stream and writes it to the file-like object `stream`. `contentType` and `version` must be one of the supported content-types, and if not specified, will default to text/plain.
def dump(self, stream, contentType=None, version=None): ''' Serializes this NoteItem to a byte-stream and writes it to the file-like object `stream`. `contentType` and `version` must be one of the supported content-types, and if not specified, will default to ``text/plain``. ''' if contentTy...
Reverses the effects of the :meth:`dump` method, creating a NoteItem from the specified file-like `stream` object.
def load(cls, stream, contentType=None, version=None): ''' Reverses the effects of the :meth:`dump` method, creating a NoteItem from the specified file-like `stream` object. ''' if contentType is None or contentType == constants.TYPE_TEXT_PLAIN: data = stream.read() name = data.split('\n...
Callable to configure Bokeh's show method when a proxy must be configured.
def remote_jupyter_proxy_url(port): """ Callable to configure Bokeh's show method when a proxy must be configured. If port is None we're asking about the URL for the origin header. """ base_url = os.environ['EXTERNAL_URL'] host = urllib.parse.urlparse(base_url).netloc # If port is ...
Called at the start of notebook execution to setup the environment.
def setup_notebook(debug=False): """Called at the start of notebook execution to setup the environment. This will configure bokeh, and setup the logging library to be reasonable.""" output_notebook(INLINE, hide_banner=True) if debug: _setup_logging(logging.DEBUG) logging.debug('Runn...
Creates an overview of the hosts per range.
def overview(): """ Creates a overview of the hosts per range. """ range_search = RangeSearch() ranges = range_search.get_ranges() if ranges: formatted_ranges = [] tags_lookup = {} for r in ranges: formatted_ranges.append({'mask': r.range}) tag...
Split a resource into a (filename, directory) tuple, taking care of external resources.
def resource_qualifier(resource): """ Split a resource in (filename, directory) tuple with taking care of external resources :param resource: A file path or a URI :return: (Filename, Directory) for files, (URI, None) for URI """ if resource.startswith("//") or resource.startswith("http"): r...
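The splitting logic is simple enough to complete as a standalone sketch (the `os.path.split` return-order handling is mine, inferred from the documented (Filename, Directory) contract):

```python
import os.path

def resource_qualifier(resource):
    """Return (filename, directory) for file paths, (URI, None) for external resources."""
    if resource.startswith("//") or resource.startswith("http"):
        return resource, None
    # os.path.split returns (directory, filename); the contract wants the reverse.
    directory, filename = os.path.split(resource)
    return filename, directory
```

For example, a local path like `"data/epigrams.xml"` splits into `("epigrams.xml", "data")`, while a URI passes through unchanged with a None directory.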
Create an OrderedDict
def create_hierarchy(hierarchy, level): """Create an OrderedDict :param hierarchy: a dictionary :param level: single key :return: deeper dictionary """ if level not in hierarchy: hierarchy[level] = OrderedDict() return hierarchy[level]
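Because `create_hierarchy` returns the nested dictionary it creates (or finds), chained calls build a deep path lazily. The function body below is taken from the snippet above; the `tree`/`'book1'` usage is an illustrative example:

```python
from collections import OrderedDict

def create_hierarchy(hierarchy, level):
    # Return the sub-dictionary for `level`, creating it on first access.
    if level not in hierarchy:
        hierarchy[level] = OrderedDict()
    return hierarchy[level]

tree = OrderedDict()
# Each call descends one level, so nesting the calls builds a nested path.
create_hierarchy(create_hierarchy(tree, 'book1'), 'chapter1')
```
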
This is the default chunker, which resolves references given a callback (getreffs) and a text object with its metadata.
def default_chunker(text, getreffs): """ This is the default chunker which will resolve the reference giving a callback (getreffs) and a text object with its metadata :param text: Text Object representing either an edition or a translation :type text: MyCapytains.resources.inventory.Text :param getreff...
This is the scheme chunker, which resolves references given a callback (getreffs) and a text object with its metadata.
def scheme_chunker(text, getreffs): """ This is the scheme chunker which will resolve the reference giving a callback (getreffs) and a text object with its metadata :param text: Text Object representing either an edition or a translation :type text: MyCapytains.resources.inventory.Text :param getreffs:...
Groups line references together.
def line_chunker(text, getreffs, lines=30): """ Groups line reference together :param text: Text object :type text: MyCapytains.resources.text.api :param getreffs: Callback function to retrieve text :type getreffs: function(level) :param lines: Number of lines to use by group :type lines: i...
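The core of such a chunker is grouping a flat list of line references into fixed-size windows, each labeled by its first and last reference. A sketch of that grouping step only (the name `chunk_lines` and the label format are assumptions; the real function retrieves the references through the `getreffs` callback first):

```python
def chunk_lines(references, lines=30):
    """Group flat line references into chunks of `lines`, labeled 'first-last'."""
    chunks = []
    for i in range(0, len(references), lines):
        group = references[i:i + lines]
        # Label each chunk with its first and last reference for display.
        chunks.append(('{0}-{1}'.format(group[0], group[-1]), group))
    return chunks
```
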
Chunk a text at the passage level
def level_chunker(text, getreffs, level=1): """ Chunk a text at the passage level :param text: Text object :type text: MyCapytains.resources.text.api :param getreffs: Callback function to retrieve text :type getreffs: function(level) :return: List of urn references with their human readable ver...
Alternative to level_chunker: groups levels together at the latest level
def level_grouper(text, getreffs, level=None, groupby=20): """ Alternative to level_chunker: groups levels together at the latest level :param text: Text object :param getreffs: GetValidReff query callback :param level: Level of citation to retrieve :param groupby: Number of level to groupby :r...
Calculate Teff for main-sequence stars ranging from 3500 K to 8000 K. Use [Fe/H] of the cluster, if available.
def teff(cluster): """ Calculate Teff for main sequence stars ranging from Teff 3500K - 8000K. Use [Fe/H] of the cluster, if available. Returns a list of Teff values. """ b_vs, _ = cluster.stars() teffs = [] for b_v in b_vs: b_v -= cluster.eb_v if b_v > -0.04: ...
Conventional color descriptions of stars. Source: https://en.wikipedia.org/wiki/Stellar_classification
def color(teffs): """ Conventional color descriptions of stars. Source: https://en.wikipedia.org/wiki/Stellar_classification """ colors = [] for t in teffs: if t >= 7500: colors.append('blue_white') # RGB:CAE1FF elif t >= 6000: colors.append('white') # R...
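The visible cases map temperature thresholds to color names along the usual spectral-class boundaries. A single-star sketch of the same cascade (only the `blue_white` and `white` branches appear in the truncated snippet; the `yellow`/`orange`/`red` thresholds below are filled in from the cited Wikipedia classification table, not from the source):

```python
def star_color(teff):
    """Map an effective temperature in kelvin to a conventional color name."""
    if teff >= 7500:
        return 'blue_white'  # roughly class A and hotter
    if teff >= 6000:
        return 'white'       # roughly class F
    if teff >= 5200:
        return 'yellow'      # roughly class G (assumption: threshold from Wikipedia)
    if teff >= 3700:
        return 'orange'      # roughly class K (assumption)
    return 'red'             # roughly class M (assumption)
```
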
Create a numpy.ndarray with all observed fields and computed teff and luminosity values.
def table(cluster): """ Create a numpy.ndarray with all observed fields and computed teff and luminosity values. """ teffs = teff(cluster) lums = luminosity(cluster) arr = cluster.to_array() i = 0 for row in arr: row['lum'][0] = np.array([lums[i]], dtype='f') row['tem...
Return the numpy array with rounded teff and luminosity columns.
def round_arr_teff_luminosity(arr): """ Return the numpy array with rounded teff and luminosity columns. """ arr['temp'] = np.around(arr['temp'], -1) arr['lum'] = np.around(arr['lum'], 3) return arr
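`np.around(x, -1)` rounds to the nearest 10; a plain-Python sketch of the same rounding rule, without numpy (the name `round_teff_lum` is illustrative). One caveat worth knowing: Python's built-in `round` uses round-half-to-even, so exact midpoints like 25 round to 20, not 30:

```python
def round_teff_lum(temps, lums):
    """Round temperatures to the nearest 10 K and luminosities to 3 decimals."""
    # round(x, -1) rounds to the nearest multiple of 10, matching np.around(x, -1).
    return [round(t, -1) for t in temps], [round(l, 3) for l in lums]
```
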
Bruteforce function; tries all the credentials at the same time, splitting each given credential at a ':'.
def brutefore_passwords(ip, url, credentials, service): """ Bruteforce function, will try all the credentials at the same time, splits the given credentials at a ':'. """ auth_requests = [] for credential in credentials: split = credential.strip().split(':') username = split[0] ...
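The per-credential parsing step is the part that can be shown self-contained: each line is stripped and split at the first `:` into username and password. A sketch of just that step (the name `parse_credentials` is an assumption; the real function also builds and fires the HTTP auth requests concurrently):

```python
def parse_credentials(credentials):
    """Split 'username:password' lines at the first ':' into pairs."""
    pairs = []
    for credential in credentials:
        # partition keeps any further ':' characters inside the password.
        username, _, password = credential.strip().partition(':')
        pairs.append((username, password))
    return pairs
```

Using `partition` rather than `split(':')` keeps passwords that themselves contain colons intact.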
Checks the arguments to the bruteforcer and spawns greenlets to perform the bruteforcing.
def main(): """ Checks the arguments to brutefore and spawns greenlets to perform the bruteforcing. """ services = ServiceSearch() argparse = services.argparser argparse.add_argument('-f', '--file', type=str, help="File") arguments = argparse.parse_args() if not arguments.file: ...
Use a :class:`~bokeh.plotting.figure.Figure` and x and y collections to create an H-R diagram.
def _diagram(plot_figure, source=None, color='black', line_color='#444444', xaxis_label='B-V [mag]', yaxis_label='V [mag]', name=None): """Use a :class:`~bokeh.plotting.figure.Figure` and x and y collections to create an H-R diagram. """ plot_figure.circle(x='x', y='y', source=source, ...
Create a :class:`~bokeh.plotting.figure.Figure` to create an H-R diagram using the cluster_name; then show it.
def cc_diagram(cluster_name): """Create a :class:`~bokeh.plotting.figure.Figure` to create an H-R diagram using the cluster_name; then show it. """ x, y = get_hr_data(cluster_name) y_range = [max(y) + 0.5, min(y) - 0.25] pf = figure(y_range=y_range, title=cluster_name) _diagram(x, y, pf) ...
Create a :class:`~bokeh.plotting.figure.Figure` to create an H-R diagram using the cluster_name; then show it.
def hr_diagram(cluster_name, output=None): """Create a :class:`~bokeh.plotting.figure.Figure` to create an H-R diagram using the cluster_name; then show it. """ cluster = get_hr_data(cluster_name) pf = hr_diagram_figure(cluster) show_with_bokeh_server(pf)
Given a cluster, create a Bokeh plot figure using the cluster's image.
def skyimage_figure(cluster): """ Given a cluster create a Bokeh plot figure using the cluster's image. """ pf_image = figure(x_range=(0, 1), y_range=(0, 1), title='Image of {0}'.format(cluster.name)) pf_image.image_url(url=[cluster.image_path], x=0, ...
Returns rounded teff and luminosity lists.
def round_teff_luminosity(cluster): """ Returns rounded teff and luminosity lists. """ temps = [round(t, -1) for t in teff(cluster)] lums = [round(l, 3) for l in luminosity(cluster)] return temps, lums