Normalize a path. This function simplifies a path by collapsing back-references and removing duplicated separators. Arguments: path (str): Path to normalize. Returns: str: A valid FS path. Example: >>> normpath("/foo//bar/frob/../baz") '/foo/bar/baz' >>> n...
def normpath(path): # type: (Text) -> Text """Normalize a path. This function simplifies a path by collapsing back-references and removing duplicated separators. Arguments: path (str): Path to normalize. Returns: str: A valid FS path. Example: >>> normpath("/foo//...
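The collapsing behaviour described above can be sketched in a few lines. This is a simplified illustration, not the library's implementation — the real `normpath` also rejects back-references that would escape the root, which this sketch silently drops:

```python
def normpath(path):
    """Collapse ".." back-references and duplicate "/" separators."""
    if path in ("", "/"):
        return path
    prefix = "/" if path.startswith("/") else ""
    components = []
    for component in path.split("/"):
        if component in ("", "."):
            continue  # "//" produces empty parts; "." is a no-op
        if component == "..":
            if components:
                components.pop()  # step back over the previous component
        else:
            components.append(component)
    return prefix + "/".join(components)

print(normpath("/foo//bar/frob/../baz"))  # -> /foo/bar/baz
```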
Get intermediate paths from the root to the given path. Arguments: path (str): A PyFilesystem path reverse (bool): Reverses the order of the paths (default `False`). Returns: list: A list of paths. Example: >>> recursepath('a/b/c') ['/', '/a', '/a/b', '...
def recursepath(path, reverse=False): # type: (Text, bool) -> List[Text] """Get intermediate paths from the root to the given path. Arguments: path (str): A PyFilesystem path reverse (bool): Reverses the order of the paths (default `False`). Returns: list: A list of...
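A minimal sketch of `recursepath`, assuming the input is already normalized (the real function normalizes first):

```python
def recursepath(path, reverse=False):
    """List "/" plus every intermediate path down to ``path``."""
    names = [name for name in path.split("/") if name]
    paths = ["/"] + ["/" + "/".join(names[: i + 1]) for i in range(len(names))]
    return paths[::-1] if reverse else paths
```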
Join any number of paths together. Arguments: *paths (str): Paths to join, given as positional arguments. Returns: str: The joined path. Example: >>> join('foo', 'bar', 'baz') 'foo/bar/baz' >>> join('foo/bar', '../baz') 'foo/baz' >>> join('foo/bar',...
def join(*paths): # type: (*Text) -> Text """Join any number of paths together. Arguments: *paths (str): Paths to join, given as positional arguments. Returns: str: The joined path. Example: >>> join('foo', 'bar', 'baz') 'foo/bar/baz' >>> join('foo/bar', '....
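A hedged sketch of the join-then-normalize behaviour the examples show (not the library's implementation): an absolute segment restarts the result, then back-references are collapsed.

```python
def join(*paths):
    """Join path segments; an absolute segment restarts the result."""
    absolute = False
    segments = []
    for p in paths:
        if not p:
            continue
        if p.startswith("/"):
            segments = []    # absolute segment: discard what came before
            absolute = True
        segments.append(p)
    stack = []
    for part in "/".join(segments).split("/"):
        if part in ("", "."):
            continue
        if part == "..":
            if stack:
                stack.pop()  # collapse a back-reference
        else:
            stack.append(part)
    return ("/" if absolute else "") + "/".join(stack)
```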
Join two paths together. This is faster than :func:`~fs.path.join`, but only works when the second path is relative, and there are no back references in either path. Arguments: path1 (str): A PyFilesystem path. path2 (str): A PyFilesystem path. Returns: str: The joined path. ...
def combine(path1, path2): # type: (Text, Text) -> Text """Join two paths together. This is faster than :func:`~fs.path.join`, but only works when the second path is relative, and there are no back references in either path. Arguments: path1 (str): A PyFilesystem path. path2 (st...
Split a path into its component parts. Arguments: path (str): Path to split into parts. Returns: list: List of components. Example: >>> parts('/foo/bar/baz') ['/', 'foo', 'bar', 'baz']
def parts(path): # type: (Text) -> List[Text] """Split a path into its component parts. Arguments: path (str): Path to split into parts. Returns: list: List of components. Example: >>> parts('/foo/bar/baz') ['/', 'foo', 'bar', 'baz'] """ _path = normpath(...
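A sketch of `parts` for normalized paths. The documented example only covers absolute paths; marking relative paths with `'./'` is an assumption here:

```python
def parts(path):
    """Split a normalized path into a root marker plus each component."""
    root = "/" if path.startswith("/") else "./"
    return [root] + [component for component in path.split("/") if component]
```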
Split the extension from the path. Arguments: path (str): A path to split. Returns: (str, str): A tuple containing the path and the extension. Example: >>> splitext('baz.txt') ('baz', '.txt') >>> splitext('foo/bar/baz.txt') ('foo/bar/baz', '.txt') >...
def splitext(path): # type: (Text) -> Tuple[Text, Text] """Split the extension from the path. Arguments: path (str): A path to split. Returns: (str, str): A tuple containing the path and the extension. Example: >>> splitext('baz.txt') ('baz', '.txt') >>> sp...
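A sketch of the extension split. Treating a lone leading dot (a dotfile like `.hidden`) as having no extension is an assumption based on common `splitext` behaviour:

```python
def splitext(path):
    """Split the extension off the final component of ``path``."""
    base, sep, filename = path.rpartition("/")
    if "." not in filename or filename.rindex(".") == 0:
        return path, ""  # no extension, or a dotfile like ".hidden"
    name, _, ext = filename.rpartition(".")
    return base + sep + name, "." + ext
```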
Check if ``path1`` is a base of ``path2``. Arguments: path1 (str): A PyFilesystem path. path2 (str): A PyFilesystem path. Returns: bool: `True` if ``path2`` starts with ``path1`` Example: >>> isbase('foo/bar', 'foo/bar/baz/egg.txt') True
def isbase(path1, path2): # type: (Text, Text) -> bool """Check if ``path1`` is a base of ``path2``. Arguments: path1 (str): A PyFilesystem path. path2 (str): A PyFilesystem path. Returns: bool: `True` if ``path2`` starts with ``path1`` Example: >>> isbase('foo/bar',...
Check if ``path1`` is a parent directory of ``path2``. Arguments: path1 (str): A PyFilesystem path. path2 (str): A PyFilesystem path. Returns: bool: `True` if ``path1`` is a parent directory of ``path2`` Example: >>> isparent("foo/bar", "foo/bar/spam.txt") True
def isparent(path1, path2): # type: (Text, Text) -> bool """Check if ``path1`` is a parent directory of ``path2``. Arguments: path1 (str): A PyFilesystem path. path2 (str): A PyFilesystem path. Returns: bool: `True` if ``path1`` is a parent directory of ``path2`` Example: ...
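The base test above reduces to a prefix check at a `/` boundary. A sketch (the real `isbase`/`isparent` differ slightly in how they treat trailing slashes):

```python
def isbase(path1, path2):
    """True if ``path2`` starts with ``path1`` at a "/" boundary."""
    def with_slash(p):
        return p if p.endswith("/") else p + "/"
    return with_slash(path2).startswith(with_slash(path1))
```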
Get the final path of ``path2`` that isn't in ``path1``. Arguments: path1 (str): A PyFilesystem path. path2 (str): A PyFilesystem path. Returns: str: the final part of ``path2``. Example: >>> frombase('foo/bar/', 'foo/bar/baz/egg') 'baz/egg'
def frombase(path1, path2): # type: (Text, Text) -> Text """Get the final path of ``path2`` that isn't in ``path1``. Arguments: path1 (str): A PyFilesystem path. path2 (str): A PyFilesystem path. Returns: str: the final part of ``path2``. Example: >>> frombase('foo/b...
Return a path relative from a given base path. Insert backrefs as appropriate to reach the path from the base. Arguments: base (str): Path to a directory. path (str): Path to make relative. Returns: str: the path to ``base`` from ``path``. >>> relativefrom("foo/bar", "baz/ind...
def relativefrom(base, path): # type: (Text, Text) -> Text """Return a path relative from a given base path. Insert backrefs as appropriate to reach the path from the base. Arguments: base (str): Path to a directory. path (str): Path to make relative. Returns: str: the pat...
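The back-reference insertion can be sketched as: find the common prefix of components, then climb out of the remainder of `base` with `..` entries:

```python
def relativefrom(base, path):
    """Express ``path`` relative to the directory ``base``."""
    base_parts = [p for p in base.split("/") if p]
    path_parts = [p for p in path.split("/") if p]
    common = 0
    for a, b in zip(base_parts, path_parts):
        if a != b:
            break
        common += 1
    backrefs = [".."] * (len(base_parts) - common)  # climb out of base
    return "/".join(backrefs + path_parts[common:])
```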
Get a context to map OS errors to their `fs.errors` counterpart. The context will re-write the paths in resource exceptions to be in the same context as the wrapped filesystem. The only parameter may be the path from the parent, if only one path is to be unwrapped. Or it may be a dictionary that maps ...
def unwrap_errors(path_replace): # type: (Union[Text, Mapping[Text, Text]]) -> Iterator[None] """Get a context to map OS errors to their `fs.errors` counterpart. The context will re-write the paths in resource exceptions to be in the same context as the wrapped filesystem. The only parameter may b...
Decodes a Windows NT FTP LIST line like these two: `11-02-18 02:12PM <DIR> images` `11-02-18 03:33PM 9276 logo.gif`
def decode_windowsnt(line, match): """ Decodes a Windows NT FTP LIST line like these two: `11-02-18 02:12PM <DIR> images` `11-02-18 03:33PM 9276 logo.gif` """ is_dir = match.group("size") == "<DIR>" raw_info = { "basic": { "name": match....
Test whether a name matches a wildcard pattern. Arguments: pattern (str): A wildcard pattern, e.g. ``"*.py"``. name (str): A filename. Returns: bool: `True` if the filename matches the pattern.
def match(pattern, name): # type: (Text, Text) -> bool """Test whether a name matches a wildcard pattern. Arguments: pattern (str): A wildcard pattern, e.g. ``"*.py"``. name (str): A filename. Returns: bool: `True` if the filename matches the pattern. """ try: ...
Test whether a name matches a wildcard pattern (case insensitive). Arguments: pattern (str): A wildcard pattern, e.g. ``"*.py"``. name (str): A filename. Returns: bool: `True` if the filename matches the pattern.
def imatch(pattern, name): # type: (Text, Text) -> bool """Test whether a name matches a wildcard pattern (case insensitive). Arguments: pattern (str): A wildcard pattern, e.g. ``"*.py"``. name (str): A filename. Returns: bool: `True` if the filename matches the pattern. ...
Test if a name matches any of a list of patterns. Will return `True` if ``patterns`` is an empty list. Arguments: patterns (list): A list of wildcard patterns, e.g. ``["*.py", "*.pyc"]`` name (str): A filename. Returns: bool: `True` if the name matches at least one of th...
def match_any(patterns, name): # type: (Iterable[Text], Text) -> bool """Test if a name matches any of a list of patterns. Will return `True` if ``patterns`` is an empty list. Arguments: patterns (list): A list of wildcard patterns, e.g. ``["*.py", "*.pyc"]`` name (str): A fi...
Test if a name matches any of a list of patterns (case insensitive). Will return `True` if ``patterns`` is an empty list. Arguments: patterns (list): A list of wildcard patterns, e.g. ``["*.py", "*.pyc"]`` name (str): A filename. Returns: bool: `True` if the name matches...
def imatch_any(patterns, name): # type: (Iterable[Text], Text) -> bool """Test if a name matches any of a list of patterns (case insensitive). Will return `True` if ``patterns`` is an empty list. Arguments: patterns (list): A list of wildcard patterns, e.g. ``["*.py", "*.pyc"]`` ...
Get a callable that matches names against the given patterns. Arguments: patterns (list): A list of wildcard patterns, e.g. ``["*.py", "*.pyc"]`` case_sensitive (bool): If ``True``, then the callable will be case sensitive, otherwise it will be case insensitive. Returns:...
def get_matcher(patterns, case_sensitive): # type: (Iterable[Text], bool) -> Callable[[Text], bool] """Get a callable that matches names against the given patterns. Arguments: patterns (list): A list of wildcard patterns, e.g. ``["*.py", "*.pyc"]`` case_sensitive (bool): If ``Tru...
Translate a wildcard pattern to a regular expression. There is no way to quote meta-characters. Arguments: pattern (str): A wildcard pattern. case_sensitive (bool): Set to `False` to use a case insensitive regex (default `True`). Returns: str: A regex equivalent to the...
def _translate(pattern, case_sensitive=True): # type: (Text, bool) -> Text """Translate a wildcard pattern to a regular expression. There is no way to quote meta-characters. Arguments: pattern (str): A wildcard pattern. case_sensitive (bool): Set to `False` to use a case in...
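A sketch of the wildcard-to-regex translation. Following the docstring, there is no way to quote meta-characters; the choice of `[^/]*` for `*` (so a wildcard will not cross a path separator) is an assumption, not a documented guarantee:

```python
import re

def translate(pattern, case_sensitive=True):
    """Translate a wildcard pattern ("*", "?", "[seq]") to a regex string."""
    res = [] if case_sensitive else ["(?i)"]
    i = 0
    while i < len(pattern):
        char = pattern[i]
        i += 1
        if char == "*":
            res.append("[^/]*")  # assumption: "*" stops at a separator
        elif char == "?":
            res.append(".")
        elif char == "[":
            end = pattern.find("]", i)
            if end == -1:
                res.append(re.escape("["))  # unterminated set: literal "["
            else:
                seq = pattern[i:end]
                if seq.startswith("!"):
                    seq = "^" + seq[1:]  # "[!abc]" negates the set
                res.append("[" + seq + "]")
                i = end + 1
        else:
            res.append(re.escape(char))
    res.append(r"\Z")
    return "".join(res)
```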
Get the delegate FS for a given path. Arguments: path (str): A path. Returns: (FS, str): a tuple of ``(<fs>, <path>)`` for a mounted filesystem, or ``(None, None)`` if no filesystem is mounted on the given ``path``.
def _delegate(self, path): # type: (Text) -> Tuple[FS, Text] """Get the delegate FS for a given path. Arguments: path (str): A path. Returns: (FS, str): a tuple of ``(<fs>, <path>)`` for a mounted filesystem, or ``(None, None)`` if no filesystem is m...
Mount a host FS object on a given path. Arguments: path (str): A path within the MountFS. fs (FS or str): A filesystem (instance or URL) to mount.
def mount(self, path, fs): # type: (Text, Union[FS, Text]) -> None """Mount a host FS object on a given path. Arguments: path (str): A path within the MountFS. fs (FS or str): A filesystem (instance or URL) to mount. """ if isinstance(fs, text_type): ...
Start the workers.
def start(self): """Start the workers.""" if self.num_workers: self.queue = Queue(maxsize=self.num_workers) self.workers = [_Worker(self) for _ in range(self.num_workers)] for worker in self.workers: worker.start() self.running = True
Stop the workers (will block until they are finished).
def stop(self): """Stop the workers (will block until they are finished).""" if self.running and self.num_workers: for worker in self.workers: self.queue.put(None) for worker in self.workers: worker.join() # Free up references held by w...
Copy a file from one fs to another.
def copy(self, src_fs, src_path, dst_fs, dst_path): # type: (FS, Text, FS, Text) -> None """Copy a file from one fs to another.""" if self.queue is None: # This should be the most performant for a single-thread copy_file_internal(src_fs, src_path, dst_fs, dst_path) ...
Add a filesystem to the MultiFS. Arguments: name (str): A unique name to refer to the filesystem being added. fs (FS or str): The filesystem (instance or URL) to add. write (bool): If this value is True, then the ``fs`` will be used as the wri...
def add_fs(self, name, fs, write=False, priority=0): # type: (Text, FS, bool, int) -> None """Add a filesystem to the MultiFS. Arguments: name (str): A unique name to refer to the filesystem being added. fs (FS or str): The filesystem (instance or URL) to...
Get iterator that returns (name, fs) in priority order.
def iterate_fs(self): # type: () -> Iterator[Tuple[Text, FS]] """Get iterator that returns (name, fs) in priority order. """ if self._fs_sequence is None: self._fs_sequence = [ (name, fs) for name, (_order, fs) in sorted( se...
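The priority ordering can be sketched over a plain dict of `name -> (priority, fs)` entries (a hypothetical shape standing in for `MultiFS`'s internal bookkeeping); higher priority sorts first:

```python
def iterate_by_priority(entries):
    """Return (name, fs) pairs from a name -> (priority, fs) mapping,
    highest priority first; sorted() keeps a stable order for ties."""
    return [
        (name, fs)
        for name, (priority, fs) in sorted(
            entries.items(), key=lambda item: item[1][0], reverse=True
        )
    ]
```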
Get a filesystem which has a given path.
def _delegate(self, path): # type: (Text) -> Optional[FS] """Get a filesystem which has a given path. """ for _name, fs in self.iterate_fs(): if fs.exists(path): return fs return None
Check that there is a filesystem with the given ``path``.
def _delegate_required(self, path): # type: (Text) -> FS """Check that there is a filesystem with the given ``path``. """ fs = self._delegate(path) if fs is None: raise errors.ResourceNotFound(path) return fs
Check that ``path`` is writeable.
def _writable_required(self, path): # type: (Text) -> FS """Check that ``path`` is writeable. """ if self.write_fs is None: raise errors.ResourceReadOnly(path) return self.write_fs
Get a tuple of (name, fs) that the given path would map to. Arguments: path (str): A path on the filesystem. mode (str): An `io.open` mode.
def which(self, path, mode="r"): # type: (Text, Text) -> Tuple[Optional[Text], Optional[FS]] """Get a tuple of (name, fs) that the given path would map to. Arguments: path (str): A path on the filesystem. mode (str): An `io.open` mode. """ if check_writa...
Take a Python 2.x binary file and return an IO Stream.
def make_stream( name, # type: Text bin_file, # type: RawIOBase mode="r", # type: Text buffering=-1, # type: int encoding=None, # type: Optional[Text] errors=None, # type: Optional[Text] newline="", # type: Optional[Text] line_buffering=False, # type: bool **kwargs # type: A...
Iterate over the lines of a file. Implementation reads each char individually, which is not very efficient. Yields: str: a single line in the file.
def line_iterator(readable_file, size=None): # type: (IO[bytes], Optional[int]) -> Iterator[bytes] """Iterate over the lines of a file. Implementation reads each char individually, which is not very efficient. Yields: str: a single line in the file. """ read = readable_file.read ...
Check ``mode`` parameter of `~fs.base.FS.openbin` is valid. Arguments: mode (str): Mode parameter. Raises: `ValueError` if mode is not valid.
def validate_openbin_mode(mode, _valid_chars=frozenset("rwxab+")): # type: (Text, Union[Set[Text], FrozenSet[Text]]) -> None """Check ``mode`` parameter of `~fs.base.FS.openbin` is valid. Arguments: mode (str): Mode parameter. Raises: `ValueError` if mode is not valid. """ if ...
Render a directory structure in to a pretty tree. Arguments: fs (~fs.base.FS): A filesystem instance. path (str): The path of the directory to start rendering from (defaults to root folder, i.e. ``'/'``). file (io.IOBase): An open file-like object to render the tree,...
def render( fs, # type: FS path="/", # type: Text file=None, # type: Optional[TextIO] encoding=None, # type: Optional[Text] max_levels=5, # type: int with_color=None, # type: Optional[bool] dirs_first=True, # type: bool exclude=None, # type: Optional[List[Text]] filter=None, ...
Compare two `Info` objects to see if they should be copied. Returns: bool: `True` if the `Info` are different in size or mtime.
def _compare(info1, info2): # type: (Info, Info) -> bool """Compare two `Info` objects to see if they should be copied. Returns: bool: `True` if the `Info` are different in size or mtime. """ # Check filesize has changed if info1.size != info2.size: return True # Check modi...
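A self-contained sketch of the size/mtime comparison, using a plain namedtuple in place of `Info` and truncating mtimes to whole seconds, as the modified-time check suggests:

```python
from collections import namedtuple

Snapshot = namedtuple("Snapshot", "size modified")

def needs_copy(src, dst):
    """True when sizes differ or mtimes (compared to whole seconds) differ."""
    if src.size != dst.size:
        return True
    if src.modified is None or dst.modified is None:
        return True  # no mtime to compare: copy to be safe
    return int(src.modified) != int(dst.modified)
```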
Mirror files / directories from one filesystem to another. Mirroring a filesystem will create an exact copy of ``src_fs`` on ``dst_fs``, by removing any files / directories on the destination that aren't on the source, and copying files that aren't. Arguments: src_fs (FS or str): Source filesy...
def mirror( src_fs, # type: Union[FS, Text] dst_fs, # type: Union[FS, Text] walker=None, # type: Optional[Walker] copy_if_newer=True, # type: bool workers=0, # type: int ): # type: (...) -> None """Mirror files / directories from one filesystem to another. Mirroring a filesystem wi...
Parse a Filesystem URL and return a `ParseResult`. Arguments: fs_url (str): A filesystem URL. Returns: ~fs.opener.parse.ParseResult: a parse result instance. Raises: ~fs.errors.ParseError: if the FS URL is not valid.
def parse_fs_url(fs_url): # type: (Text) -> ParseResult """Parse a Filesystem URL and return a `ParseResult`. Arguments: fs_url (str): A filesystem URL. Returns: ~fs.opener.parse.ParseResult: a parse result instance. Raises: ~fs.errors.ParseError: if the FS URL is not vali...
Return a method with a deprecation warning.
def _new_name(method, old_name): """Return a method with a deprecation warning.""" # Looks suspiciously like a decorator, but isn't! @wraps(method) def _method(*args, **kwargs): warnings.warn( "method '{}' has been deprecated, please rename to '{}'".format( old_name,...
Change stream position. Change the stream position to the given byte offset. The offset is interpreted relative to the position indicated by ``whence``. Arguments: offset (int): the offset to the new position, in bytes. whence (int): the position reference. Poss...
def seek(self, offset, whence=Seek.set): # type: (int, SupportsInt) -> int """Change stream position. Change the stream position to the given byte offset. The offset is interpreted relative to the position indicated by ``whence``. Arguments: offset (int): th...
Get the walk generator.
def _iter_walk( self, fs, # type: FS path, # type: Text namespaces=None, # type: Optional[Collection[Text]] ): # type: (...) -> Iterator[Tuple[Text, Optional[Info]]] """Get the walk generator.""" if self.search == "breadth": return self._walk_br...
Check if a directory should be considered in the walk.
def _check_open_dir(self, fs, path, info): # type: (FS, Text, Info) -> bool """Check if a directory should be considered in the walk. """ if self.exclude_dirs is not None and fs.match(self.exclude_dirs, info.name): return False if self.filter_dirs is not None and not ...
Check if a directory's contents should be scanned.
def _check_scan_dir(self, fs, path, info, depth): # type: (FS, Text, Info, int) -> bool """Check if a directory's contents should be scanned.""" if self.max_depth is not None and depth >= self.max_depth: return False return self.check_scan_dir(fs, path, info)
Check if a filename should be included. Override to exclude files from the walk. Arguments: fs (FS): A filesystem instance. info (Info): A resource info object. Returns: bool: `True` if the file should be included.
def check_file(self, fs, info): # type: (FS, Info) -> bool """Check if a filename should be included. Override to exclude files from the walk. Arguments: fs (FS): A filesystem instance. info (Info): A resource info object. Returns: bool: `Tr...
Get an iterator of `Info` objects for a directory path. Arguments: fs (FS): A filesystem instance. dir_path (str): A path to a directory on the filesystem. namespaces (list): A list of additional namespaces to include in the `Info` objects. Returns: ...
def _scan( self, fs, # type: FS dir_path, # type: Text namespaces=None, # type: Optional[Collection[Text]] ): # type: (...) -> Iterator[Info] """Get an iterator of `Info` objects for a directory path. Arguments: fs (FS): A filesystem instance. ...
Walk the directory structure of a filesystem. Arguments: fs (FS): A filesystem instance. path (str): A path to a directory on the filesystem. namespaces (list, optional): A list of additional namespaces to add to the `Info` objects. Returns: ...
def walk( self, fs, # type: FS path="/", # type: Text namespaces=None, # type: Optional[Collection[Text]] ): # type: (...) -> Iterator[Step] """Walk the directory structure of a filesystem. Arguments: fs (FS): A filesystem instance. ...
Walk a filesystem, yielding absolute paths to files. Arguments: fs (FS): A filesystem instance. path (str): A path to a directory on the filesystem. Yields: str: absolute path to files on the filesystem found recursively within the given directory.
def files(self, fs, path="/"): # type: (FS, Text) -> Iterator[Text] """Walk a filesystem, yielding absolute paths to files. Arguments: fs (FS): A filesystem instance. path (str): A path to a directory on the filesystem. Yields: str: absolute path to ...
Walk a filesystem, yielding tuples of ``(<path>, <info>)``. Arguments: fs (FS): A filesystem instance. path (str): A path to a directory on the filesystem. namespaces (list, optional): A list of additional namespaces to add to the `Info` objects. Yie...
def info( self, fs, # type: FS path="/", # type: Text namespaces=None, # type: Optional[Collection[Text]] ): # type: (...) -> Iterator[Tuple[Text, Info]] """Walk a filesystem, yielding tuples of ``(<path>, <info>)``. Arguments: fs (FS): A files...
Walk files using a *breadth first* search.
def _walk_breadth( self, fs, # type: FS path, # type: Text namespaces=None, # type: Optional[Collection[Text]] ): # type: (...) -> Iterator[Tuple[Text, Optional[Info]]] """Walk files using a *breadth first* search. """ queue = deque([path]) ...
Walk files using a *depth first* search.
def _walk_depth( self, fs, # type: FS path, # type: Text namespaces=None, # type: Optional[Collection[Text]] ): # type: (...) -> Iterator[Tuple[Text, Optional[Info]]] """Walk files using a *depth first* search. """ # No recursion! _combine ...
Create a walker instance.
def _make_walker(self, *args, **kwargs): # type: (*Any, **Any) -> Walker """Create a walker instance. """ walker = self.walker_class(*args, **kwargs) return walker
Walk the directory structure of a filesystem. Arguments: path (str): namespaces (list, optional): A list of namespaces to include in the resource information, e.g. ``['basic', 'access']`` (defaults to ``['basic']``). Keyword Arguments: ...
def walk( self, path="/", # type: Text namespaces=None, # type: Optional[Collection[Text]] **kwargs # type: Any ): # type: (...) -> Iterator[Step] """Walk the directory structure of a filesystem. Arguments: path (str): namespaces (l...
Walk a filesystem, yielding absolute paths to files. Arguments: path (str): A path to a directory. Keyword Arguments: ignore_errors (bool): If `True`, any errors reading a directory will be ignored, otherwise exceptions will be raised. ...
def files(self, path="/", **kwargs): # type: (Text, **Any) -> Iterator[Text] """Walk a filesystem, yielding absolute paths to files. Arguments: path (str): A path to a directory. Keyword Arguments: ignore_errors (bool): If `True`, any errors reading a ...
Walk a filesystem, yielding absolute paths to directories. Arguments: path (str): A path to a directory. Keyword Arguments: ignore_errors (bool): If `True`, any errors reading a directory will be ignored, otherwise exceptions will be raised. ...
def dirs(self, path="/", **kwargs): # type: (Text, **Any) -> Iterator[Text] """Walk a filesystem, yielding absolute paths to directories. Arguments: path (str): A path to a directory. Keyword Arguments: ignore_errors (bool): If `True`, any errors reading a ...
Walk a filesystem, yielding path and `Info` of resources. Arguments: path (str): A path to a directory. namespaces (list, optional): A list of namespaces to include in the resource information, e.g. ``['basic', 'access']`` (defaults to ``['basic']``). ...
def info( self, path="/", # type: Text namespaces=None, # type: Optional[Collection[Text]] **kwargs # type: Any ): # type: (...) -> Iterator[Tuple[Text, Info]] """Walk a filesystem, yielding path and `Info` of resources. Arguments: path (str): ...
Remove all empty parents. Arguments: fs (FS): A filesystem instance. path (str): Path to a directory on the filesystem.
def remove_empty(fs, path): # type: (FS, Text) -> None """Remove all empty parents. Arguments: fs (FS): A filesystem instance. path (str): Path to a directory on the filesystem. """ path = abspath(normpath(path)) try: while path not in ("", "/"): fs.removedi...
Copy data from one file object to another. Arguments: src_file (io.IOBase): File open for reading. dst_file (io.IOBase): File open for writing. chunk_size (int): Number of bytes to copy at a time (or `None` to use sensible default).
def copy_file_data(src_file, dst_file, chunk_size=None): # type: (IO, IO, Optional[int]) -> None """Copy data from one file object to another. Arguments: src_file (io.IOBase): File open for reading. dst_file (io.IOBase): File open for writing. chunk_size (int): Number of bytes to co...
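A sketch of the chunked copy; the 64 KiB fallback chunk size is an assumption, not the library's documented default:

```python
import io

def copy_file_data(src_file, dst_file, chunk_size=None):
    """Copy bytes between two open file objects in fixed-size chunks."""
    chunk_size = chunk_size or 64 * 1024  # fallback size is an assumption
    write = dst_file.write
    # read() returns b"" at EOF, which stops the iterator
    for chunk in iter(lambda: src_file.read(chunk_size), b""):
        write(chunk)

src = io.BytesIO(b"hello world")
dst = io.BytesIO()
copy_file_data(src, dst, chunk_size=4)
```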
Get a list of non-existing intermediate directories. Arguments: fs (FS): A filesystem instance. dir_path (str): A path to a new directory on the filesystem. Returns: list: A list of non-existing paths. Raises: ~fs.errors.DirectoryExpected: If a path component r...
def get_intermediate_dirs(fs, dir_path): # type: (FS, Text) -> List[Text] """Get a list of non-existing intermediate directories. Arguments: fs (FS): A filesystem instance. dir_path (str): A path to a new directory on the filesystem. Returns: list: A list of non-existing paths....
Given a JSON string, return it as safely formatted HTML.
def prettify_json(json_string): """Given a JSON string, return it as safely formatted HTML.""" try: data = json.loads(json_string) html = '<pre>' + json.dumps(data, sort_keys=True, indent=4) + '</pre>' except (ValueError, TypeError): html = json_string return mark_safe(html)
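A stdlib-only sketch of the same idea; note it escapes the output with `html.escape` rather than trusting it via Django's `mark_safe` as the original does:

```python
import html
import json

def prettify_json(json_string):
    """Pretty-print a JSON string inside <pre> tags; fall back to the raw text."""
    try:
        data = json.loads(json_string)
    except ValueError:
        return html.escape(json_string, quote=False)
    pretty = json.dumps(data, sort_keys=True, indent=4)
    return "<pre>" + html.escape(pretty, quote=False) + "</pre>"
```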
Removes all objects in this table. This action first displays a confirmation page; next, it deletes all objects and redirects back to the change list.
def purge_objects(self, request): """ Removes all objects in this table. This action first displays a confirmation page; next, it deletes all objects and redirects back to the change list. """ def truncate_table(model): if settings.TRUNCATE_TABLE_SQL_STATEMEN...
Receives a list of strings with app_name.model_name format and turns them into classes. If an item is already a class it ignores it.
def get_model_list(class_list): """ Receives a list of strings with app_name.model_name format and turns them into classes. If an item is already a class it ignores it. """ for idx, item in enumerate(class_list): if isinstance(item, six.string_types): model_class = apps.get_m...
Gets the value of a given model instance field. :param obj: The model instance. :type obj: Model :param field: The field you want to find the value of. :type field: Any :return: The value of the field as a string. :rtype: str
def get_field_value(obj, field): """ Gets the value of a given model instance field. :param obj: The model instance. :type obj: Model :param field: The field you want to find the value of. :type field: Any :return: The value of the field as a string. :rtype: str """ if isinstance...
Provide the delta/difference between two models. :param old_model: The old state of the model instance. :type old_model: Model :param new_model: The new state of the model instance. :type new_model: Model :return: A dictionary with the names of the changed fields as keys and a two-tuple of the old and new field va...
def model_delta(old_model, new_model): """ Provide the delta/difference between two models. :param old_model: The old state of the model instance. :type old_model: Model :param new_model: The new state of the model instance. :type new_model: Model :return: A dictionary with the names of the changed fields as keys and a...
Returns True or False to indicate whether the instance should be audited or not, depending on the project settings.
def should_audit(instance): """Returns True or False to indicate whether the instance should be audited or not, depending on the project settings.""" # do not audit any model listed in UNREGISTERED_CLASSES for unregistered_class in UNREGISTERED_CLASSES: if isinstance(instance, unregistered_clas...
https://docs.djangoproject.com/es/1.10/ref/signals/#pre-save
def pre_save(sender, instance, raw, using, update_fields, **kwargs): """https://docs.djangoproject.com/es/1.10/ref/signals/#pre-save""" if raw: # Return if loading Fixtures return try: with transaction.atomic(): if not should_audit(instance): return Fals...
Gets the name of the reverse m2m accessor from `model1` to `model2` For example, if User has a ManyToManyField connected to Group, `_m2m_rev_field_name(Group, User)` retrieves the name of the field on Group that lists a group's Users. (By default, this field is called `user_set`, but the name can be ov...
def _m2m_rev_field_name(model1, model2): """Gets the name of the reverse m2m accessor from `model1` to `model2` For example, if User has a ManyToManyField connected to Group, `_m2m_rev_field_name(Group, User)` retrieves the name of the field on Group that lists a group's Users. (By default, this field ...
https://docs.djangoproject.com/es/1.10/ref/signals/#m2m-changed
def m2m_changed(sender, instance, action, reverse, model, pk_set, using, **kwargs): """https://docs.djangoproject.com/es/1.10/ref/signals/#m2m-changed""" try: with transaction.atomic(): if not should_audit(instance): return False if action not in ("post_add", "po...
https://docs.djangoproject.com/es/1.10/ref/signals/#post-delete
def post_delete(sender, instance, using, **kwargs): """https://docs.djangoproject.com/es/1.10/ref/signals/#post-delete""" try: with transaction.atomic(): if not should_audit(instance): return False object_json_repr = serializers.serialize("json", [instance]) ...
Query the information of all the GPUs on the local machine.
def new_query(): """Query the information of all the GPUs on the local machine.""" N.nvmlInit() def _decode(b): if isinstance(b, bytes): return b.decode() # for python3, to unicode return b def get_gpu_info(handle): """Get one GPU info...
Display the GPU query results to standard output.
def print_gpustat(json=False, debug=False, **kwargs): ''' Display the GPU query results to standard output. ''' try: gpu_stats = GPUStatCollection.new_query() except Exception as e: sys.stderr.write('Error on querying NVIDIA devices.' ' Use --debug flag for...
fetch instruments by ids
def fetch_list(cls, client, ids): """ fetch instruments by ids """ results = [] request_url = "https://api.robinhood.com/options/instruments/" for _ids in chunked_list(ids, 50): params = {"ids": ",".join(_ids)} data = client.get(request_url, para...
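`chunked_list` is used here to batch ids into groups of 50 per request; a plausible implementation (hypothetical — the actual helper is defined elsewhere in this client):

```python
def chunked_list(items, size):
    """Yield successive slices of ``items``, each at most ``size`` long."""
    for start in range(0, len(items), size):
        yield items[start:start + size]

batches = list(chunked_list(list(range(7)), 3))  # -> [[0, 1, 2], [3, 4, 5], [6]]
```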
fetch all option instruments in an options chain - expiration_dates = optionally scope
def in_chain(cls, client, chain_id, expiration_dates=[]): """ fetch all option instruments in an options chain - expiration_dates = optionally scope """ request_url = "https://api.robinhood.com/options/instruments/" params = { "chain_id": chain_id, ...
unroll option orders like this, https://github.com/joshfraser/robinhood-to-csv/blob/master/csv-options-export.py
def unroll_option_legs(cls, client, option_orders): ''' unroll option orders like this, https://github.com/joshfraser/robinhood-to-csv/blob/master/csv-options-export.py ''' # # @TODO write this with Python threads to make concurrent HTTP requests # resul...
params: - client - direction - legs - price - quantity - time_in_force - trigger - order_type - run_validations. default = True
def submit(cls, client, direction, legs, price, quantity, time_in_force, trigger, order_type, run_validations=True): ''' params: - client - direction - legs - price - quantity - time_in_force - trigger - order_type - ...
totally just playing around with ideas for the API. this IC sells - credit put spread - credit call spread the approach - set width for the wing spread (eg, 1, ie, 1 unit width spread) - set delta for inner leg of the put credit spread (eg, -0.2) - set delta for inne...
def generate_by_deltas(cls, options, width, put_inner_lte_delta, call_inner_lte_delta): """ totally just playing around with ideas for the API. this IC sells - credit put spread - credit call spread the approach - set width for the wing spr...
fetch option chain for instrument
def fetch(cls, client, _id, symbol): """ fetch option chain for instrument """ url = "https://api.robinhood.com/options/chains/" params = { "equity_instrument_ids": _id, "state": "active", "tradability": "tradable" } data = clie...
Authenticate using data in `options`
def authenticate(self): ''' Authenticate using data in `options` ''' if "username" in self.options and "password" in self.options: self.login_oauth2( self.options["username"], self.options["password"], self.options.get('mfa_code...
Execute HTTP GET
def get(self, url=None, params=None, retry=True): ''' Execute HTTP GET ''' headers = self._gen_headers(self.access_token, url) attempts = 1 while attempts <= HTTP_ATTEMPTS_MAX: try: res = requests.get(url, hea...
Generate headers, adding in the OAuth2 bearer token if present
def _gen_headers(self, bearer, url): ''' Generate headers, adding in the OAuth2 bearer token if present ''' headers = { "Accept": "*/*", "Accept-Encoding": "gzip, deflate", "Accept-Language": ("en;q=1, fr;q=0.9, de;q=0.8, ja;q=0.7, " + ...
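The conditional-bearer pattern above can be restated as a minimal standalone sketch. Only the header names visible in the snippet are used; everything else about the original dict is an assumption.

```python
def gen_headers(bearer=None):
    # sketch: build request headers, attaching an OAuth2 bearer token
    # only when one is present (header names mirror the snippet above;
    # the full original header set is not reproduced here)
    headers = {
        "Accept": "*/*",
        "Accept-Encoding": "gzip, deflate",
    }
    if bearer:
        headers["Authorization"] = "Bearer {}".format(bearer)
    return headers

print(gen_headers("abc123")["Authorization"])  # → Bearer abc123
```

Requests made before login simply omit the `Authorization` key.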
Login using username and password
def login_oauth2(self, username, password, mfa_code=None): ''' Login using username and password ''' data = { "grant_type": "password", "scope": "internal", "client_id": CLIENT_ID, "expires_in": 86400, "password": password, ...
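The password-grant body the login call posts can be sketched as a pure payload builder. Field names are taken from the snippet above; `client_id` is supplied by the caller here, and the `mfa_code` key name is an assumption for illustration.

```python
def build_password_grant_payload(username, password, client_id, mfa_code=None):
    # sketch of the OAuth2 password-grant form posted on login; field
    # names mirror the snippet above, and the 'mfa_code' key is a
    # hypothetical addition for the two-factor case
    data = {
        "grant_type": "password",
        "scope": "internal",
        "client_id": client_id,
        "expires_in": 86400,
        "password": password,
        "username": username,
    }
    if mfa_code:
        data["mfa_code"] = mfa_code
    return data
```

Keeping payload construction separate from the HTTP call makes the grant easy to inspect and test without touching the network.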
(Re)login using the OAuth2 refresh token
def relogin_oauth2(self): ''' (Re)login using the Oauth2 refresh token ''' url = "https://api.robinhood.com/oauth2/token/" data = { "grant_type": "refresh_token", "refresh_token": self.refresh_token, "scope": "internal", "client_id"...
Logout for the given OAuth2 bearer token
def logout_oauth2(self): ''' Logout for given Oauth2 bearer token ''' url = "https://api.robinhood.com/oauth2/revoke_token/" data = { "client_id": CLIENT_ID, "token": self.refresh_token, } res = self.post(url, payload=data) if res i...
fetch data for stock
def fetch(cls, client, symbol): """ fetch data for stock """ assert isinstance(symbol, str) url = ("https://api.robinhood.com/instruments/?symbol={0}". format(symbol)) data = client.get(url) return data["results"][0]
fetch data for multiple stocks
def all(cls, client, symbols): """ fetch data for multiple stocks """ params = {"symbol": ",".join(symbols)} request_url = "https://api.robinhood.com/instruments/" data = client.get(request_url, params=params) results = data["results"] while data["next"...
Generate Pandas Dataframe of Vertical :param options: python dict of options. :param width: offset for spread. Must be integer. :param spread_type: call or put. defaults to "call". :param spread_kind: buy or sell. defaults to "buy".
def gen_df(cls, options, width, spread_type="call", spread_kind="buy"): """ Generate Pandas Dataframe of Vertical :param options: python dict of options. :param width: offset for spread. Must be integer. :param spread_type: call or put. defaults to "call". :param spread_...
fetch all orders
def all(cls, client): """ fetch all orders """ url = "https://api.robinhood.com/orders/" data = client.get(url) results = data["results"] while data["next"]: data = client.get(data["next"]) results.extend(data["results"]) ...
Break lists into small lists for processing
def chunked_list(_list, _chunk_size=50): """ Break lists into small lists for processing """ for i in range(0, len(_list), _chunk_size): yield _list[i:i + _chunk_size]
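Since the generator above fits in a few lines, it is restated here standalone with a usage example; the final chunk is simply shorter when the list length is not a multiple of the chunk size.

```python
def chunked_list(_list, _chunk_size=50):
    # yield successive fixed-size slices; the final chunk may be shorter
    for i in range(0, len(_list), _chunk_size):
        yield _list[i:i + _chunk_size]

chunks = list(chunked_list(list(range(7)), 3))
print(chunks)  # → [[0, 1, 2], [3, 4, 5], [6]]
```

This is what lets the quote endpoints batch instrument IDs into requests of a bounded size.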
create instrument urls, fetch, return results
def quote_by_instruments(cls, client, ids): """ create instrument urls, fetch, return results """ base_url = "https://api.robinhood.com/instruments" id_urls = ["{}/{}/".format(base_url, _id) for _id in ids] return cls.quotes_by_instrument_urls(client, id_urls)
fetch and return results
def quotes_by_instrument_urls(cls, client, urls): """ fetch and return results """ instruments = ",".join(urls) params = {"instruments": instruments} url = "https://api.robinhood.com/marketdata/quotes/" data = client.get(url, params=params) results = data[...
fetch all option positions
def all(cls, client, **kwargs): """ fetch all option positions """ max_date = kwargs.get('max_date') max_fetches = kwargs.get('max_fetches') url = 'https://api.robinhood.com/options/positions/' ...
Fetch and merge in Marketdata for each option position
def mergein_marketdata_list(cls, client, option_positions): """ Fetch and merge in Marketdata for each option position """ ids = cls._extract_ids(option_positions) mds = OptionMarketdata.quotes_by_instrument_ids(client, ids) results = [] for op in option_position...
Evaluates raw Python string like `ast.literal_eval` does
def evaluateRawString(self, escaped): """Evaluates raw Python string like `ast.literal_eval` does""" unescaped = [] hexdigit = None escape = False for char in escaped: number = ord(char) if hexdigit is not None: if hexdigit: ...
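The manual escape-decoding loop above mimics what `ast.literal_eval` already does for string literals. A simpler alternative sketch, assuming the input is the bare escaped contents without surrounding quotes and does not end in a stray backslash:

```python
import ast

def evaluate_raw_string(escaped):
    # alternative sketch: wrap the escaped contents in quotes and let
    # ast.literal_eval perform the unescaping; assumes the input carries
    # no surrounding quotes of its own
    return ast.literal_eval('"%s"' % escaped.replace('"', '\\"'))

print(evaluate_raw_string(r"a\n\x41"))
```

The hand-rolled version in the original retains control over malformed hex escapes, which `literal_eval` would reject outright.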
Return a list corresponding to the lines of text in the `txt` list, indented by `indent`. The first line is instead prefixed with the string given in `prepend`. Note that if len(prepend) > len(indent), then `prepend` will be truncated (doing better is tricky!). This preserves a special '' ent...
def shift(txt, indent = '    ', prepend = ''): """Return a list corresponding to the lines of text in the `txt` list, indented by `indent`. The first line is instead prefixed with the string given in `prepend`. Note that if len(prepend) > len(indent), then `prepend` will be truncated (doing be...
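The behaviour described in that docstring can be sketched as follows. This is a hedged reconstruction, not the original implementation; in particular, how `''` markers and prepend-padding interact in the original is an assumption here.

```python
def shift(txt, indent="    ", prepend=""):
    # hedged sketch of the documented behaviour: indent every line, but
    # give the first line `prepend` (padded or truncated to the width of
    # `indent`) and pass '' marker entries through untouched
    if len(prepend) > len(indent):
        prepend = prepend[:len(indent)]
    else:
        prepend = prepend.ljust(len(indent))
    out = []
    for i, line in enumerate(txt):
        if line == "":
            out.append(line)            # preserve the special '' marker
        elif i == 0:
            out.append(prepend + line)  # first line gets the prefix
        else:
            out.append(indent + line)
    return out

print(shift(["first", "second"], indent="  ", prepend="- "))
```

Truncating `prepend` keeps the first line aligned with the continuation lines, which is exactly the "doing better is tricky" compromise the docstring admits to.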
Parse a given node. This function in turn calls the `parse_<nodeType>` functions which handle the respective nodes.
def parse(self, node): """Parse a given node. This function in turn calls the `parse_<nodeType>` functions which handle the respective nodes. """ pm = getattr(self, "parse_%s" % node.__class__.__name__) pm(node)
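The `getattr` dispatch above is a compact visitor pattern. A standalone illustration (with toy `parse_str`/`parse_int` handlers invented for the example):

```python
class MiniParser:
    # standalone illustration of the getattr dispatch used above:
    # parse() routes each node to a parse_<ClassName> handler
    def __init__(self):
        self.seen = []

    def parse(self, node):
        pm = getattr(self, "parse_%s" % node.__class__.__name__)
        pm(node)

    def parse_str(self, node):
        self.seen.append("str:" + node)

    def parse_int(self, node):
        self.seen.append("int:%d" % node)

p = MiniParser()
p.parse("hello")
p.parse(42)
print(p.seen)  # → ['str:hello', 'int:42']
```

A node class with no matching handler raises `AttributeError`, which makes unhandled node types fail loudly rather than silently.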
Parse the subnodes of a given node. Subnodes with tags in the `ignore` list are ignored. If pieces is given, use this as target for the parse results instead of self.pieces. Indent all lines by the amount given in `indent`. Note that the initial content in `pieces` is not indented. The f...
def subnode_parse(self, node, pieces=None, indent=0, ignore=[], restrict=None): """Parse the subnodes of a given node. Subnodes with tags in the `ignore` list are ignored. If pieces is given, use this as target for the parse results instead of self.pieces. Indent all lines by the amount ...
Parse the subnodes of a given node. Subnodes with tags in the `ignore` list are ignored. Prepend `pre_char` and append `post_char` to the output in self.pieces.
def surround_parse(self, node, pre_char, post_char): """Parse the subnodes of a given node. Subnodes with tags in the `ignore` list are ignored. Prepend `pre_char` and append `post_char` to the output in self.pieces.""" self.add_text(pre_char) self.subnode_parse(node) sel...
Given a node and a name, return a list of child `ELEMENT_NODE`s that have a `tagName` matching the `name`. Search recursively for `recursive` levels.
def get_specific_subnodes(self, node, name, recursive=0): """Given a node and a name, return a list of child `ELEMENT_NODE`s that have a `tagName` matching the `name`. Search recursively for `recursive` levels. """ children = [x for x in node.childNodes if x.nodeType == x.ELEMEN...
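A self-contained sketch of that search over a `xml.dom.minidom` tree, written to match the docstring (the original's exact result ordering for recursive matches is an assumption here):

```python
from xml.dom.minidom import parseString

def get_specific_subnodes(node, name, recursive=0):
    # sketch matching the docstring: element children whose tagName is
    # `name`, optionally descending `recursive` extra levels
    children = [x for x in node.childNodes if x.nodeType == x.ELEMENT_NODE]
    ret = [x for x in children if x.tagName == name]
    if recursive > 0:
        for child in children:
            ret.extend(get_specific_subnodes(child, name, recursive - 1))
    return ret

root = parseString("<a><b/><c><b/></c></a>").documentElement
print(len(get_specific_subnodes(root, "b")))     # direct children only
print(len(get_specific_subnodes(root, "b", 1)))  # one level deeper too
```

With `recursive=0` only direct children match; each extra level admits matches one layer deeper.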
Given a node and a sequence of strings in `names`, return a dictionary containing the names as keys and child `ELEMENT_NODE`s that have a `tagName` equal to the name.
def get_specific_nodes(self, node, names): """Given a node and a sequence of strings in `names`, return a dictionary containing the names as keys and child `ELEMENT_NODE`s that have a `tagName` equal to the name. """ nodes = [(x.tagName, x) for x in node.childNodes ...
Adds text corresponding to `value` into `self.pieces`.
def add_text(self, value): """Adds text corresponding to `value` into `self.pieces`.""" if isinstance(value, (list, tuple)): self.pieces.extend(value) else: self.pieces.append(value)
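Restated standalone with a usage example, since the method is short enough to show whole: lists and tuples extend the buffer, anything else is appended as one piece.

```python
class PieceBuffer:
    # standalone restatement of add_text: lists/tuples extend the buffer,
    # any other value is appended as a single piece
    def __init__(self):
        self.pieces = []

    def add_text(self, value):
        if isinstance(value, (list, tuple)):
            self.pieces.extend(value)
        else:
            self.pieces.append(value)

buf = PieceBuffer()
buf.add_text("a")
buf.add_text(["b", "c"])
print(buf.pieces)  # → ['a', 'b', 'c']
```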
Make sure to create an empty line. This is skipped if the previous text ends with the special marker ''; in that case, nothing is done.
def start_new_paragraph(self): """Make sure to create an empty line. This is skipped if the previous text ends with the special marker ''; in that case, nothing is done. """ if self.pieces[-1:] == ['']: # respect special marker return elif self.pieces == []: # fir...
Add line of text and wrap such that subsequent lines are indented by `indent` spaces.
def add_line_with_subsequent_indent(self, line, indent=4): """Add line of text and wrap such that subsequent lines are indented by `indent` spaces. """ if isinstance(line, (list, tuple)): line = ''.join(line) line = line.strip() width = self.textwidth-self.ind...
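The wrapping behaviour can be sketched with the stdlib's `textwrap`; the original derives the width from `self.textwidth` and the current indent, so the fixed `width` parameter here is a simplification.

```python
import textwrap

def wrap_with_subsequent_indent(line, indent=4, width=30):
    # sketch: wrap `line` so continuation lines are indented by `indent`
    # spaces (the original computes width from self.textwidth instead of
    # taking it as a parameter)
    return textwrap.wrap(line.strip(), width=width,
                         subsequent_indent=" " * indent)

lines = wrap_with_subsequent_indent("word " * 20, indent=4, width=30)
```

Every line after the first starts with the indent, which gives hanging-indent formatting for long parameter descriptions.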
Return the string representation of the node or list of nodes by parsing the subnodes, but returning the result as a string instead of adding it to `self.pieces`. Note that this allows extracting text even if the node is in the ignore list.
def extract_text(self, node): """Return the string representation of the node or list of nodes by parsing the subnodes, but returning the result as a string instead of adding it to `self.pieces`. Note that this allows extracting text even if the node is in the ignore list. """ if...