{"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncreates the Redis cache module and RedisIdHelper classes.", "response": "def configure(self, binder):\n # type: (Binder) -> None\n \"\"\"Initializer of the cache - creates the Redis cache module as the\n default cache infrastructure. The module is bound to `RedisCacheModule`\n and `CacheModule` keys. The initializer also creates `RedisIdHelper`\n and binds it to `RedisIdHelper` and `IdHelper` keys.\n\n :param Binder binder: The binder object holding the binding context, we\\\n add cache to the binder.\n \"\"\"\n redis_cache_module = RedisCacheModule()\n binder.bind(\n RedisCacheModule,\n to=redis_cache_module,\n scope=singleton\n )\n binder.bind(\n CacheModule,\n to=redis_cache_module,\n scope=singleton\n )\n\n redis_id_helper = RedisIdHelper()\n binder.bind(\n RedisIdHelper,\n to=redis_id_helper,\n scope=singleton\n )\n binder.bind(\n IdHelper,\n to=redis_id_helper,\n scope=singleton\n )\n\n logging.debug(\"Created RedisCache binding.\")"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef error_handler(f):\n\n @wraps(f)\n def error_handling_function(*args, **kwargs):\n @inject(error_config=ErrorConfiguration)\n def get_error_configuration(error_config):\n # type:(ErrorConfiguration)->ErrorConfiguration\n return error_config\n\n def should_skip_handling():\n use_flask_handler = get_error_configuration().use_flask_handler\n is_web_request = isinstance(JobContext.get_current_context(), WebJobContext)\n return use_flask_handler and is_web_request\n\n try:\n return f(*args, **kwargs)\n except Exception as ex:\n if should_skip_handling():\n raise\n\n for ep in _error_processors:\n ep.handle(ex)\n\n for eh in _error_handlers:\n if eh.can_handle(ex):\n return eh.handle(ex)\n\n return _DEFAULT_ERROR_HANDLER.handle(ex)\n\n return error_handling_function", "response": "Decorator for error handling."} {"SOURCE": "codesearchnet", 
"instruction": "Can you tell what is the following Python 3 function doing\ndef lazy_property(func):\n '''\n Wraps a property to provide lazy evaluation. Eliminates boilerplate.\n Also provides for setting and deleting the property.\n\n Use as you would use the @property decorator::\n\n # OLD:\n class MyClass():\n def __init__(self):\n self._compute = None\n\n @property\n def compute(self):\n if self._compute is None:\n # computationally intense stuff\n # ...\n # ...\n self._compute = result\n return self._compute\n\n @compute.setter\n def compute(self, value):\n self._compute = value\n\n\n # NEW:\n class MyClass():\n\n def __init__(self):\n pass\n\n @lazy_property\n def compute(self):\n # computationally intense stuff\n # ...\n # ...\n return result\n\n .. note::\n\n Properties wrapped with ``lazy_property`` are only evaluated once.\n If the instance state changes, lazy properties will not be automatically\n re-evaluated and the update must be explicitly called for::\n\n c = MyClass(data)\n prop = c.lazy_property\n\n # If you update some data that affects c.lazy_property\n c.data = new_data\n\n # c.lazy_property won't change\n prop == c.lazy_property # TRUE\n\n # If you want to update c.lazy_property, you can delete it, which will\n # force it to be recomputed (with the new data) the next time you use it\n del c.lazy_property\n new_prop = c.lazy_property\n new_prop == prop # FALSE\n '''\n attr_name = '_lazy_' + func.__name__\n\n @property\n def _lazy_property(self):\n if not hasattr(self, attr_name):\n setattr(self, attr_name, func(self))\n return getattr(self, attr_name)\n\n @_lazy_property.deleter\n def _lazy_property(self):\n if hasattr(self, attr_name):\n delattr(self, attr_name)\n\n @_lazy_property.setter\n def _lazy_property(self, value):\n setattr(self, attr_name, value)\n\n return _lazy_property", "response": "A property decorator whose value is computed once and then cached; setting and deleting the property are also supported."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 
function, write the documentation\ndef coroutine(func):\n '''\n Initializes a coroutine -- essentially it just takes a\n generator function and calls next() on the generator to get\n things going.\n '''\n def start(*args, **kwargs):\n cr = func(*args, **kwargs)\n next(cr)\n return cr\n return start", "response": "Decorator that primes a coroutine: it calls the generator function and advances the resulting generator to its first yield before returning it."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef fasta(self):\n '''\n str: Returns the sequence, as a FASTA-formatted string\n\n Note: The FASTA string is built using ``Sequence.id`` and ``Sequence.sequence``.\n '''\n if not self._fasta:\n self._fasta = '>{}\\n{}'.format(self.id, self.sequence)\n return self._fasta", "response": "Returns the sequence as a FASTA-formatted string"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef fastq(self):\n '''\n str: Returns the sequence, as a FASTQ-formatted string\n\n If ``Sequence.qual`` is ``None``, then ``None`` will be returned instead of a\n FASTQ string\n '''\n if self.qual is None:\n self._fastq = None\n else:\n if self._fastq is None:\n self._fastq = '@{}\\n{}\\n+\\n{}'.format(self.id, self.sequence, self.qual)\n return self._fastq", "response": "Returns the sequence as a FASTQ-formatted string"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef region(self, start=0, end=None):\n '''\n Returns a region of ``Sequence.sequence``, in FASTA format.\n\n If called without kwargs, the entire sequence will be returned.\n\n Args:\n\n start (int): Start position of the region to be returned. Default\n is 0.\n\n end (int): End position of the region to be returned. 
Negative values\n will function as they do when slicing strings.\n\n Returns:\n\n str: A region of ``Sequence.sequence``, in FASTA format\n '''\n if end is None:\n end = len(self.sequence)\n return '>{}\\n{}'.format(self.id, self.sequence[start:end])", "response": "Returns a string representation of the sequence in FASTA format."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ntransform string from underscore_string to camelCase.", "response": "def underscore_to_camelcase(value, first_upper=True):\n \"\"\"Transform string from underscore_string to camelCase.\n\n :param value: string with underscores\n :param first_upper: the result will have its first character in upper case\n :type value: str\n :return: string in CamelCase or camelCase according to the first_upper\n :rtype: str\n\n :Example:\n >>> underscore_to_camelcase('camel_case')\n 'CamelCase'\n >>> underscore_to_camelcase('camel_case', False)\n 'camelCase'\n \"\"\"\n value = str(value)\n camelized = \"\".join(x.title() if x else '_' for x in value.split(\"_\"))\n if not first_upper:\n camelized = camelized[0].lower() + camelized[1:]\n return camelized"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef et_node_to_string(et_node, default=''):\n\n return str(et_node.text).strip() if et_node is not None and et_node.text else default", "response": "Simple method to get stripped text from node or default string if None is given."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef generate_random_string(size=6, chars=string.ascii_uppercase + string.digits):\n return ''.join(random.choice(chars) for _ in range(size))", "response": "Generate random string.\n\n :param size: Length of the returned string. Default is 6.\n :param chars: List of the usable characters. 
Default is string.ascii_uppercase + string.digits.\n :type size: int\n :type chars: str\n :return: The random string.\n :rtype: str"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nadds slashes for given characters. Default is for \\ and '.", "response": "def addslashes(s, escaped_chars=None):\n \"\"\"Add slashes for given characters. Default is for ``\\`` and ``'``.\n\n :param s: string\n :param escaped_chars: list of characters to prefix with a slash ``\\``\n :return: string with slashed characters\n :rtype: str\n\n :Example:\n >>> addslashes(\"'\")\n \"\\\\'\"\n \"\"\"\n if escaped_chars is None:\n escaped_chars = [\"\\\\\", \"'\", ]\n\n # l = [\"\\\\\", '\"', \"'\", \"\\0\", ]\n for i in escaped_chars:\n if i in s:\n s = s.replace(i, '\\\\' + i)\n return s"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef join_list(values, delimiter=', ', transform=None):\n # type: (Union[List[str], str], str)->str\n if transform is None:\n transform = _identity\n\n if values is not None and not isinstance(values, (str, bytes)):\n values = delimiter.join(transform(x) for x in values)\n return values", "response": "Joins the given values using the given delimiter."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef mafft(sequences=None, alignment_file=None, fasta=None, fmt='fasta', threads=-1, as_file=False,\n reorder=True, print_stdout=False, print_stderr=False, mafft_bin=None):\n '''\n Performs multiple sequence alignment with MAFFT.\n\n Args:\n\n sequences (list): Sequences to be aligned. ``sequences`` can be one of four things:\n\n 1. a FASTA-formatted string\n\n 2. a list of BioPython ``SeqRecord`` objects\n\n 3. a list of AbTools ``Sequence`` objects\n\n 4. a list of lists/tuples, of the format ``[sequence_id, sequence]``\n\n alignment_file (str): Path for the output alignment file. 
If not supplied,\n a name will be generated using ``tempfile.NamedTemporaryFile()``.\n\n fasta (str): Path to a FASTA-formatted file of sequences. Used as an\n alternative to ``sequences`` when supplying a FASTA file.\n\n fmt (str): Format of the alignment. Options are 'fasta', 'phylip', and 'clustal'. Default\n is 'fasta'.\n\n threads (int): Number of threads for MAFFT to use. Default is ``-1``, which\n results in MAFFT using ``multiprocessing.cpu_count()`` threads.\n\n as_file (bool): If ``True``, returns a path to the alignment file. If ``False``,\n returns a BioPython ``MultipleSeqAlignment`` object (obtained by calling\n ``Bio.AlignIO.read()`` on the alignment file).\n \n print_stdout (bool): If ``True``, prints MAFFT's standard output. Default is ``False``.\n\n print_stderr (bool): If ``True``, prints MAFFT's standard error. Default is ``False``.\n\n mafft_bin (str): Path to MAFFT executable. ``abutils`` includes built-in MAFFT binaries\n for MacOS and Linux; however, a different MAFFT binary can be provided. 
Default is\n ``None``, which results in using the appropriate built-in MAFFT binary.\n\n Returns:\n\n Returns a BioPython ``MultipleSeqAlignment`` object, unless ``as_file`` is ``True``,\n in which case the path to the alignment file is returned.\n\n '''\n if sequences:\n fasta_string = _get_fasta_string(sequences)\n fasta_file = tempfile.NamedTemporaryFile(delete=False)\n fasta_file.close()\n ffile = fasta_file.name\n with open(ffile, 'w') as f:\n f.write(fasta_string)\n elif fasta:\n ffile = fasta\n if alignment_file is None:\n alignment_file = tempfile.NamedTemporaryFile(delete=False).name\n aln_format = ''\n if fmt.lower() == 'clustal':\n aln_format = '--clustalout '\n if fmt.lower() == 'phylip':\n aln_format = '--phylipout '\n if reorder:\n aln_format += '--reorder '\n if mafft_bin is None:\n mafft_bin = 'mafft'\n # mod_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\n # mafft_bin = os.path.join(BINARY_DIR, 'mafft_{}'.format(platform.system().lower()))\n mafft_cline = '{} --thread {} {}{} > {}'.format(mafft_bin, threads, aln_format, ffile, alignment_file)\n mafft = sp.Popen(str(mafft_cline),\n stdout=sp.PIPE,\n stderr=sp.PIPE,\n universal_newlines=True,\n shell=True)\n stdout, stderr = mafft.communicate()\n if print_stdout:\n print(mafft_cline)\n print(stdout)\n if print_stderr:\n print(stderr)\n os.unlink(ffile)\n if os.stat(alignment_file).st_size == 0:\n return None\n if as_file:\n return alignment_file\n aln = AlignIO.read(open(alignment_file), fmt)\n os.unlink(alignment_file)\n return aln", "response": "This function will align a list of sequences with MAFFT."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef muscle(sequences=None, alignment_file=None, fasta=None,\n fmt='fasta', as_file=False, maxiters=None, diags=False,\n gap_open=None, gap_extend=None, muscle_bin=None):\n '''\n Performs multiple sequence alignment with MUSCLE.\n\n Args:\n\n sequences (list): Sequences to 
be aligned. ``sequences`` can be one of four things:\n\n 1. a FASTA-formatted string\n\n 2. a list of BioPython ``SeqRecord`` objects\n\n 3. a list of AbTools ``Sequence`` objects\n\n 4. a list of lists/tuples, of the format ``[sequence_id, sequence]``\n\n alignment_file (str): Path for the output alignment file. If not supplied,\n a name will be generated using ``tempfile.NamedTemporaryFile()``.\n\n fasta (str): Path to a FASTA-formatted file of sequences. Used as an\n alternative to ``sequences`` when supplying a FASTA file.\n\n fmt (str): Format of the alignment. Options are 'fasta' and 'clustal'. Default\n is 'fasta'.\n\n as_file (bool): If ``True``, returns a path to the alignment file. If ``False``,\n returns a BioPython ``MultipleSeqAlignment`` object (obtained by calling\n ``Bio.AlignIO.read()`` on the alignment file).\n\n maxiters (int): Passed directly to MUSCLE using the ``-maxiters`` flag.\n\n diags (bool): Passed directly to MUSCLE using the ``-diags`` flag.\n\n gap_open (float): Passed directly to MUSCLE using the ``-gapopen`` flag. Ignored\n if ``gap_extend`` is not also provided.\n\n gap_extend (float): Passed directly to MUSCLE using the ``-gapextend`` flag. Ignored\n if ``gap_open`` is not also provided.\n\n muscle_bin (str): Path to MUSCLE executable. ``abutils`` includes built-in MUSCLE binaries\n for MacOS and Linux; however, a different MUSCLE binary can be provided. 
Default is\n ``None``, which results in using the appropriate built-in MUSCLE binary.\n\n Returns:\n\n Returns a BioPython ``MultipleSeqAlignment`` object, unless ``as_file`` is ``True``,\n in which case the path to the alignment file is returned.\n '''\n if sequences:\n fasta_string = _get_fasta_string(sequences)\n elif fasta:\n fasta_string = open(fasta, 'r').read()\n if muscle_bin is None:\n # mod_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\n muscle_bin = os.path.join(BINARY_DIR, 'muscle_{}'.format(platform.system().lower()))\n aln_format = ''\n if fmt == 'clustal':\n aln_format = ' -clwstrict'\n muscle_cline = '{}{} '.format(muscle_bin, aln_format)\n if maxiters is not None:\n muscle_cline += ' -maxiters {}'.format(maxiters)\n if diags:\n muscle_cline += ' -diags'\n if all([gap_open is not None, gap_extend is not None]):\n muscle_cline += ' -gapopen {} -gapextend {}'.format(gap_open, gap_extend)\n muscle = sp.Popen(str(muscle_cline),\n stdin=sp.PIPE,\n stdout=sp.PIPE,\n stderr=sp.PIPE,\n universal_newlines=True,\n shell=True)\n if sys.version_info[0] > 2:\n alignment = muscle.communicate(input=fasta_string)[0]\n else:\n alignment = unicode(muscle.communicate(input=fasta_string)[0], 'utf-8')\n aln = AlignIO.read(StringIO(alignment), fmt)\n if as_file:\n if not alignment_file:\n alignment_file = tempfile.NamedTemporaryFile().name\n AlignIO.write(aln, alignment_file, fmt)\n return alignment_file\n return aln", "response": "This function will align a list of sequences with MUSCLE using the MUSCLE command line tool."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef local_alignment(query, target=None, targets=None, match=3, mismatch=-2,\n gap_open=-5, gap_extend=-2, matrix=None, aa=False, gap_open_penalty=None, gap_extend_penalty=None):\n '''\n Striped Smith-Waterman local pairwise alignment.\n\n Args:\n\n query: Query sequence. ``query`` can be one of four things:\n\n 1. 
a nucleotide or amino acid sequence, as a string\n\n 2. a Biopython ``SeqRecord`` object\n\n 3. an AbTools ``Sequence`` object\n\n 4. a list/tuple of the format ``[seq_id, sequence]``\n\n target: A single target sequence. ``target`` can be anything that\n ``query`` accepts.\n\n targets (list): A list of target sequences, to be processed iteratively.\n Each element in the ``targets`` list can be anything accepted by\n ``query``.\n\n match (int): Match score. Should be a positive integer. Default is 3.\n\n mismatch (int): Mismatch score. Should be a negative integer. Default is -2.\n\n gap_open (int): Penalty for opening gaps. Should be a negative integer.\n Default is -5.\n\n gap_extend (int): Penalty for extending gaps. Should be a negative\n integer. Default is -2.\n\n matrix (str, dict): Alignment scoring matrix. Two options for passing the\n alignment matrix:\n\n - The name of a built-in matrix. Current options are ``blosum62`` and ``pam250``.\n\n - A nested dictionary, giving an alignment score for each residue pair. Should be formatted\n such that retrieving the alignment score for A and G is accomplished by::\n\n matrix['A']['G']\n\n aa (bool): Must be set to ``True`` if aligning amino acid sequences. Default\n is ``False``.\n\n Returns:\n\n If a single target sequence is provided (via ``target``), a single ``SSWAlignment``\n object will be returned. 
If multiple target sequences are supplied (via ``targets``),\n a list of ``SSWAlignment`` objects will be returned.\n '''\n if aa and not matrix:\n err = 'ERROR: You must supply a scoring matrix for amino acid alignments'\n raise RuntimeError(err)\n if not target and not targets:\n err = 'ERROR: You must supply a target sequence (or sequences).'\n raise RuntimeError(err)\n if target:\n targets = [target, ]\n # to maintain backward compatibility with earlier AbTools API\n if gap_open_penalty is not None:\n gap_open = -1 * gap_open_penalty\n if gap_extend_penalty is not None:\n gap_extend = -1 * gap_extend_penalty\n alignments = []\n for t in targets:\n try:\n alignment = SSWAlignment(query=query,\n target=t,\n match=match,\n mismatch=mismatch,\n matrix=matrix,\n gap_open=-1 * gap_open,\n gap_extend=-1 * gap_extend,\n aa=aa)\n alignments.append(alignment)\n except IndexError:\n continue\n if len(alignments) == 1:\n return alignments[0]\n return alignments", "response": "Striped Smith-Waterman local pairwise alignment. Returns a single ``SSWAlignment`` object for one target, or a list of ``SSWAlignment`` objects for multiple targets."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef global_alignment(query, target=None, targets=None, match=3, mismatch=-2, gap_open=-5, gap_extend=-2,\n score_match=None, score_mismatch=None, score_gap_open=None,\n score_gap_extend=None, matrix=None, aa=False):\n '''\n Needleman-Wunsch global pairwise alignment.\n\n With ``global_alignment``, you can score an alignment using different\n parameters than were used to compute the alignment. This allows you to\n compute pure identity scores (match=1, mismatch=0) on pairs of sequences\n for which those alignment parameters would be unsuitable. 
For example::\n\n seq1 = 'ATGCAGC'\n seq2 = 'ATCAAGC'\n\n using identity scoring params (match=1, all penalties are 0) for both alignment\n and scoring produces the following alignment::\n\n ATGCA-GC\n || || ||\n AT-CAAGC\n\n with an alignment score of 6 and an alignment length of 8 (identity = 75%). But\n what if we want to calculate the identity of a gapless alignment? Using::\n\n global_alignment(seq1, seq2,\n gap_open=20,\n score_match=1,\n score_mismatch=0,\n score_gap_open=10,\n score_gap_extend=1)\n\n we get the following alignment::\n\n ATGCAGC\n || |||\n ATCAAGC\n\n which has a score of 5 and an alignment length of 7 (identity = 71%). Obviously,\n this is an overly simple example (it would be much easier to force gapless alignment\n by just iterating over each sequence and counting the matches), but there are several\n real-life cases in which different alignment and scoring parameters are desirable.\n\n Args:\n\n query: Query sequence. ``query`` can be one of four things:\n\n 1. a nucleotide or amino acid sequence, as a string\n\n 2. a Biopython ``SeqRecord`` object\n\n 3. an AbTools ``Sequence`` object\n\n 4. a list/tuple of the format ``[seq_id, sequence]``\n\n target: A single target sequence. ``target`` can be anything that\n ``query`` accepts.\n\n targets (list): A list of target sequences, to be processed iteratively.\n Each element in the ``targets`` list can be anything accepted by\n ``query``.\n\n match (int): Match score for alignment. Should be a positive integer. Default is 3.\n\n mismatch (int): Mismatch score for alignment. Should be a negative integer. Default is -2.\n\n gap_open (int): Penalty for opening gaps in alignment. Should be a negative integer.\n Default is -5.\n\n gap_extend (int): Penalty for extending gaps in alignment. Should be a negative\n integer. Default is -2.\n\n score_match (int): Match score for scoring the alignment. 
Should be a positive integer.\n Default is to use the score from ``match`` or ``matrix``, whichever is provided.\n\n score_mismatch (int): Mismatch score for scoring the alignment. Should be a negative\n integer. Default is to use the score from ``mismatch`` or ``matrix``, whichever\n is provided.\n\n score_gap_open (int): Gap open penalty for scoring the alignment. Should be a negative\n integer. Default is to use ``gap_open``.\n\n score_gap_extend (int): Gap extend penalty for scoring the alignment. Should be a negative\n integer. Default is to use ``gap_extend``.\n\n matrix (str, dict): Alignment scoring matrix. Two options for passing the alignment matrix:\n\n - The name of a built-in matrix. Current options are ``blosum62`` and ``pam250``.\n\n - A nested dictionary, giving an alignment score for each residue pair. Should be\n formatted such that retrieving the alignment score for A and G is accomplished by::\n\n matrix['A']['G']\n\n aa (bool): Must be set to ``True`` if aligning amino acid sequences. Default\n is ``False``.\n\n Returns:\n\n If a single target sequence is provided (via ``target``), a single ``NWAlignment``\n object will be returned. 
If multiple target sequences are supplied (via ``targets``),\n a list of ``NWAlignment`` objects will be returned.\n '''\n if not target and not targets:\n err = 'ERROR: You must supply a target sequence (or sequences).'\n raise RuntimeError(err)\n if target:\n targets = [target, ]\n if type(targets) not in (list, tuple):\n err = 'ERROR: ::targets:: requires an iterable (list or tuple).'\n err += ' For a single sequence, use ::target::'\n raise RuntimeError(err)\n alignments = []\n for t in targets:\n alignment = NWAlignment(query=query,\n target=t,\n match=match,\n mismatch=mismatch,\n gap_open=gap_open,\n gap_extend=gap_extend,\n score_match=score_match,\n score_mismatch=score_mismatch,\n score_gap_open=score_gap_open,\n score_gap_extend=score_gap_extend,\n matrix=matrix,\n aa=aa)\n alignments.append(alignment)\n if target is not None:\n return alignments[0]\n return alignments", "response": "This function calculates a global pairwise alignment of the given sequence."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning a list of dot aligned sequences for a list of sequences.", "response": "def dot_alignment(sequences, seq_field=None, name_field=None, root=None, root_name=None,\n cluster_threshold=0.75, as_fasta=False, just_alignment=False):\n '''\n Creates a dot alignment (dots indicate identity, mismatches are represented by the mismatched\n residue) for a list of sequences.\n\n Args:\n\n sequences (list(Sequence)): A list of Sequence objects to be aligned.\n\n seq_field (str): Name of the sequence field key. Default is ``vdj_nt``.\n\n name_field (str): Name of the name field key. Default is ``seq_id``.\n\n root (str, Sequence): The sequence used to 'root' the alignment. This sequence will be at the\n top of the alignment and is the sequence against which dots (identity) will be evaluated.\n Can be provided either as a string corresponding to the name of one of the sequences in\n ``sequences`` or as a Sequence object. 
If not provided, ``sequences`` will be clustered\n at ``cluster_threshold`` and the centroid of the largest cluster will be used.\n\n root_name (str): Name of the root sequence. If not provided, the existing name of the root\n sequence (``name_field``) will be used. If ``root`` is not provided, the default ``root_name``\n is ``'centroid'``.\n\n cluster_threshold (float): Threshold with which to cluster sequences if ``root`` is not provided.\n Default is ``0.75``.\n\n as_fasta (bool): If ``True``, returns the dot alignment as a FASTA-formatted string, rather than\n a string formatted for human readability.\n\n just_alignment (bool): If ``True``, returns just the dot-aligned sequences as a list.\n\n Returns:\n\n If ``just_alignment`` is ``True``, a list of dot-aligned sequences (without sequence names) will be returned.\n If ``as_fasta`` is ``True``, a string containing the dot-aligned sequences in FASTA format will be returned.\n Otherwise, a formatted string containing the aligned sequences (with sequence names) will be returned.\n '''\n import abstar\n\n from .cluster import cluster\n\n sequences = deepcopy(sequences)\n root = copy(root)\n\n # if custom seq_field is specified, copy to the .alignment_sequence attribute\n if seq_field is not None:\n if not all([seq_field in list(s.annotations.keys()) for s in sequences]):\n print('\\nERROR: {} is not present in all of the supplied sequences.\\n'.format(seq_field))\n sys.exit(1)\n for s in sequences:\n s.alignment_sequence = s[seq_field]\n else:\n for s in sequences:\n s.alignment_sequence = s.sequence\n\n # if custom name_field is specified, copy to the .id attribute\n if name_field is not None:\n if not all([name_field in list(s.annotations.keys()) for s in sequences]):\n print('\\nERROR: {} is not present in all of the supplied sequences.\\n'.format(name_field))\n sys.exit(1)\n for s in sequences:\n s.alignment_id = s[name_field]\n else:\n for s in sequences:\n s.alignment_id = s.id\n\n # parse the root sequence\n 
if all([root is None, root_name is None]):\n clusters = cluster(sequences, threshold=cluster_threshold, quiet=True)\n clusters = sorted(clusters, key=lambda x: x.size, reverse=True)\n centroid = clusters[0].centroid\n root = abstar.run(('centroid', centroid.sequence))\n root.alignment_id = 'centroid'\n root.alignment_sequence = root[seq_field]\n elif type(root) in STR_TYPES:\n root = [s for s in sequences if s.alignment_id == root][0]\n if not root:\n print('\\nERROR: The name of the root sequence ({}) was not found in the list of input sequences.'.format(root))\n print('\\n')\n sys.exit(1)\n sequences = [s for s in sequences if s.alignment_id != root.alignment_id]\n elif type(root) == Sequence:\n if seq_field is not None:\n if seq_field not in list(root.annotations.keys()):\n print('\\nERROR: {} is not present in the supplied root sequence.\\n'.format(seq_field))\n sys.exit(1)\n root.alignment_sequence = root[seq_field]\n if name_field is not None:\n if name_field not in list(root.annotations.keys()):\n print('\\nERROR: {} is not present in the supplied root sequence.\\n'.format(name_field))\n sys.exit(1)\n root.alignment_id = root[name_field]\n sequences = [s for s in sequences if s.alignment_id != root.alignment_id]\n else:\n print('\\nERROR: If root is provided, it must be the name of a sequence \\\n found in the supplied list of sequences or it must be a Sequence object.')\n print('\\n')\n sys.exit(1)\n\n if root_name is not None:\n root.alignment_id = root_name\n else:\n root_name = root.alignment_id\n\n # compute and parse the alignment\n seqs = [(root.alignment_id, root.alignment_sequence)]\n seqs += [(s.alignment_id, s.alignment_sequence) for s in sequences]\n aln = muscle(seqs)\n g_aln = [a for a in aln if a.id == root_name][0]\n dots = [(root_name, str(g_aln.seq)), ]\n for seq in [a for a in aln if a.id != root_name]:\n s_aln = ''\n for g, q in zip(str(g_aln.seq), str(seq.seq)):\n if g == q == '-':\n s_aln += '-'\n elif g == q:\n s_aln += '.'\n else:\n 
s_aln += q\n dots.append((seq.id, s_aln))\n if just_alignment:\n return [d[1] for d in dots]\n name_len = max([len(d[0]) for d in dots]) + 2\n dot_aln = []\n for d in dots:\n if as_fasta:\n dot_aln.append('>{}\\n{}'.format(d[0], d[1]))\n else:\n spaces = name_len - len(d[0])\n dot_aln.append(d[0] + ' ' * spaces + d[1])\n return '\\n'.join(dot_aln)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef fetch_class(full_class_name):\n (module_name, class_name) = full_class_name.rsplit('.', 1)\n module = importlib.import_module(module_name)\n return getattr(module, class_name)", "response": "Fetches the given class."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef has_chosen(state, correct, msgs):\n\n ctxt = {}\n exec(state.student_code, globals(), ctxt)\n sel_indx = ctxt[\"selected_option\"]\n if sel_indx != correct:\n state.report(Feedback(msgs[sel_indx - 1]))\n else:\n state.reporter.success_msg = msgs[correct - 1]\n\n return state", "response": "Verify exercises of the type MultipleChoiceExercise\n "} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef multi(state, *tests):\n for test in iter_tests(tests):\n # assume test is function needing a state argument\n # partial state so reporter can test\n state.do_test(partial(test, state))\n\n # return original state, so can be chained\n return state", "response": "Run multiple subtests. Return original state."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef check_not(state, *tests, msg):\n for test in iter_tests(tests):\n try:\n test(state)\n except TestFail:\n # it fails, as expected, off to next one\n continue\n return state.report(Feedback(msg))\n\n # return original state, so can be chained\n return state", "response": "Run multiple sub - tests that fail. 
Returns original state so can be chained."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ntest whether at least one SCT passes.", "response": "def check_or(state, *tests):\n \"\"\"Test whether at least one SCT passes.\n\n If all of the tests fail, the feedback of the first test will be presented to the student.\n\n Args:\n state: State instance describing student and solution code, can be omitted if used with Ex()\n tests: one or more sub-SCTs to run\n\n :Example:\n The SCT below tests that the student typed either 'SELECT' or 'WHERE' (or both).. ::\n\n Ex().check_or(\n has_code('SELECT'),\n has_code('WHERE')\n )\n\n The SCT below checks that a SELECT statement has at least a WHERE or LIMIT clause.. ::\n\n Ex().check_node('SelectStmt', 0).check_or(\n check_edge('where_clause'),\n check_edge('limit_clause')\n )\n \"\"\"\n success = False\n first_feedback = None\n\n for test in iter_tests(tests):\n try:\n multi(state, test)\n success = True\n except TestFail as e:\n if not first_feedback:\n first_feedback = e.feedback\n if success:\n return state # todo: add test\n\n state.report(first_feedback)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nallowing feedback from a diagnostic SCT only if a check SCT fails.", "response": "def check_correct(state, check, diagnose):\n \"\"\"Allows feedback from a diagnostic SCT, only if a check SCT fails.\n\n Args:\n state: State instance describing student and solution code. Can be omitted if used with Ex().\n check: An sct chain that must succeed.\n diagnose: An sct chain to run if the check fails.\n\n :Example:\n The SCT below tests whether the student's query result is correct, before running diagnostic SCTs.. 
::\n\n Ex().check_correct(\n check_result(),\n check_node('SelectStmt')\n )\n\n \"\"\"\n feedback = None\n try:\n multi(state, check)\n except TestFail as e:\n feedback = e.feedback\n\n # todo: let if from except wrap try-except\n # only once teach uses force_diagnose\n try:\n multi(state, diagnose)\n except TestFail as e:\n if feedback is not None or state.force_diagnose:\n feedback = e.feedback\n\n if feedback is not None:\n state.report(feedback)\n\n return state"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nchecking element for required attributes. Raise NotValidXmlException on error.", "response": "def required_attributes(element, *attributes):\n \"\"\"Check element for required attributes. Raise ``NotValidXmlException`` on error.\n\n :param element: ElementTree element\n :param attributes: list of attributes names to check\n :raises NotValidXmlException: if some argument is missing\n \"\"\"\n if not reduce(lambda still_valid, param: still_valid and param in element.attrib, attributes, True):\n raise NotValidXmlException(msg_err_missing_attributes(element.tag, *attributes))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nchecking element for required children defined as XPath. Raise NotValidXmlException on error.", "response": "def required_elements(element, *children):\n \"\"\"Check element (``xml.etree.ElementTree.Element``) for required children, defined as XPath. 
Raise\n ``NotValidXmlException`` on error.\n\n :param element: ElementTree element\n :param children: list of XPaths to check\n :raises NotValidXmlException: if some child is missing\n \"\"\"\n for child in children:\n if element.find(child) is None:\n raise NotValidXmlException(msg_err_missing_children(element.tag, *children))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nchecking an xml element to include given attributes and children.", "response": "def required_items(element, children, attributes):\n \"\"\"Check an xml element to include given attributes and children.\n\n :param element: ElementTree element\n :param children: list of XPaths to check\n :param attributes: list of attributes names to check\n :raises NotValidXmlException: if some argument is missing\n :raises NotValidXmlException: if some child is missing\n \"\"\"\n required_elements(element, *children)\n required_attributes(element, *attributes)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef attrib_to_dict(element, *args, **kwargs):\n if len(args) > 0:\n return {key: element.get(key) for key in args}\n\n if len(kwargs) > 0:\n return {new_key: element.get(old_key) for new_key, old_key in viewitems(kwargs)}\n\n return element.attrib", "response": "For an ElementTree element extract specified attributes."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_xml_root(xml_path):\n r = requests.get(xml_path)\n root = ET.fromstring(r.content)\n return root", "response": "Load and parse an xml by given xml_path and return its root."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nconverts element object to int.", "response": "def element_to_int(element, attribute=None):\n \"\"\"Convert ``element`` object to int. 
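These element helpers can be exercised with a standard-library-only sketch; the re-declarations below are simplified stand-ins (e.g. ``viewitems`` becomes ``dict.items()``), since the real module isn't importable here:

```python
# Standard-library sketch of the attrib_to_dict / element_to_int helpers;
# simplified re-declarations of the functions this module documents.
import xml.etree.ElementTree as ET

def attrib_to_dict(element, *args, **kwargs):
    # positional names select attributes; keyword args rename them
    if args:
        return {key: element.get(key) for key in args}
    if kwargs:
        return {new: element.get(old) for new, old in kwargs.items()}
    return dict(element.attrib)

def element_to_int(element, attribute=None):
    # convert an attribute value if given, otherwise the element's text
    return int(element.get(attribute)) if attribute is not None else int(element.text)

el = ET.fromstring('<count max="10">7</count>')
assert attrib_to_dict(el, 'max') == {'max': '10'}
assert attrib_to_dict(el, limit='max') == {'limit': '10'}
assert element_to_int(el) == 7          # from element.text
assert element_to_int(el, 'max') == 10  # from the 'max' attribute
```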
If attribute is not given, convert ``element.text``.\n\n :param element: ElementTree element\n :param attribute: attribute name\n :type attribute: str\n :returns: integer\n :rtype: int\n \"\"\"\n if attribute is not None:\n return int(element.get(attribute))\n else:\n return int(element.text)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncreating element with given attributes and set element. text property to given text value", "response": "def create_el(name, text=None, attrib=None):\n \"\"\"Create element with given attributes and set element.text property to given\n text value (if text is not None)\n\n :param name: element name\n :type name: str\n :param text: text node value\n :type text: str\n :param attrib: attributes\n :type attrib: dict\n :returns: xml element\n :rtype: Element\n \"\"\"\n if attrib is None:\n attrib = {}\n\n el = ET.Element(name, attrib)\n if text is not None:\n el.text = text\n return el"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncollects all the public class attributes. 
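The attribute collection this describes can be sketched with ``inspect.getmembers``; ``AppModules`` and its string-valued attributes below are invented stand-ins for real DI module objects:

```python
import inspect

class AppModules:
    cache = "cache-module"   # stand-in for a DI module
    db = "db-module"
    _private = "hidden"      # underscore-prefixed: excluded

    @classmethod
    def modules(cls):
        # exclude the collecting method itself, then drop private names
        members = inspect.getmembers(
            cls,
            lambda a: not (inspect.isroutine(a) and getattr(a, '__name__', '') == 'modules'),
        )
        return [module for name, module in members if not name.startswith('_')]

# getmembers returns name-sorted members, so 'cache' precedes 'db'
assert AppModules.modules() == ["cache-module", "db-module"]
```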
This method returns as a list of DI modules.", "response": "def modules(cls):\n \"\"\"Collect all the public class attributes.\n\n All class attributes should be a DI modules, this method collects them\n and returns as a list.\n\n :return: list of DI modules\n :rtype: list[Union[Module, Callable]]\n \"\"\"\n members = inspect.getmembers(cls, lambda a: not (inspect.isroutine(a) and a.__name__ == 'modules'))\n modules = [module for name, module in members if not name.startswith('_')]\n return modules"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_pairs(db, collection, experiment=None, subject=None, group=None, name='seq_id',\n delim=None, delim_occurance=1, pairs_only=False, h_selection_func=None, l_selection_func=None):\n '''\n Gets sequences and assigns them to the appropriate mAb pair, based on the sequence name.\n\n Inputs:\n\n ::db:: is a pymongo database connection object\n ::collection:: is the collection name, as a string\n If ::subject:: is provided, only sequences with a 'subject' field matching ::subject:: will\n be included. ::subject:: can be either a single subject (as a string) or an iterable\n (list or tuple) of subject strings.\n If ::group:: is provided, only sequences with a 'group' field matching ::group:: will\n be included. ::group:: can be either a single group (as a string) or an iterable\n (list or tuple) of group strings.\n ::name:: is the dict key of the field to be used to group the sequences into pairs.\n Default is 'seq_id'\n ::delim:: is an optional delimiter used to truncate the contents of the ::name:: field.\n Default is None, which results in no name truncation.\n ::delim_occurance:: is the occurance of the delimiter at which to trim. Trimming is performed\n as delim.join(name.split(delim)[:delim_occurance]), so setting delim_occurance to -1 will\n trucate after the last occurance of delim. 
Default is 1.\n ::pairs_only:: setting to True results in only truly paired sequences (pair.is_pair == True)\n will be returned. Default is False.\n\n Returns a list of Pair objects, one for each mAb pair.\n '''\n match = {}\n if subject is not None:\n if type(subject) in (list, tuple):\n match['subject'] = {'$in': subject}\n elif type(subject) in STR_TYPES:\n match['subject'] = subject\n if group is not None:\n if type(group) in (list, tuple):\n match['group'] = {'$in': group}\n elif type(group) in STR_TYPES:\n match['group'] = group\n if experiment is not None:\n if type(experiment) in (list, tuple):\n match['experiment'] = {'$in': experiment}\n elif type(experiment) in STR_TYPES:\n match['experiment'] = experiment\n seqs = list(db[collection].find(match))\n return assign_pairs(seqs, name=name, delim=delim,\n delim_occurance=delim_occurance, pairs_only=pairs_only,\n h_selection_func=h_selection_func, l_selection_func=l_selection_func)", "response": "Gets sequences and assigns them to the appropriate mAb pair."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef assign_pairs(seqs, name='seq_id', delim=None, delim_occurance=1, pairs_only=False,\n h_selection_func=None, l_selection_func=None):\n '''\n Assigns sequences to the appropriate mAb pair, based on the sequence name.\n\n Inputs:\n\n ::seqs:: is a list of dicts, of the format returned by querying a MongoDB containing\n Abstar output.\n ::name:: is the dict key of the field to be used to group the sequences into pairs.\n Default is 'seq_id'\n ::delim:: is an optional delimiter used to truncate the contents of the ::name:: field.\n Default is None, which results in no name truncation.\n ::delim_occurance:: is the occurance of the delimiter at which to trim. Trimming is performed\n as delim.join(name.split(delim)[:delim_occurance]), so setting delim_occurance to -1 will\n trucate after the last occurance of delim. 
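The trimming-and-grouping described here can be sketched in isolation; the seq dicts below are toy stand-ins for MongoDB documents:

```python
# Minimal sketch of the name-truncation grouping performed by assign_pairs:
# names are trimmed at the Nth occurrence of the delimiter, and sequences
# whose trimmed names match fall into the same pair bucket.
def group_by_name(seqs, name='seq_id', delim=None, delim_occurance=1):
    groups = {}
    for s in seqs:
        if delim is not None:
            pname = delim.join(s[name].split(delim)[:delim_occurance])
        else:
            pname = s[name]
        groups.setdefault(pname, []).append(s)
    return groups

seqs = [{'seq_id': 'mAb1_heavy'}, {'seq_id': 'mAb1_light'}, {'seq_id': 'mAb2_heavy'}]
groups = group_by_name(seqs, delim='_')
assert sorted(groups) == ['mAb1', 'mAb2']
assert len(groups['mAb1']) == 2
```

Setting ``delim_occurance=-1`` keeps everything up to the last occurrence of the delimiter, matching the docstring's description.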
Default is 1.\n ::pairs_only:: setting to True results in only truly paired sequences (pair.is_pair == True)\n will be returned. Default is False.\n\n Returns a list of Pair objects, one for each mAb pair.\n '''\n pdict = {}\n for s in seqs:\n if delim is not None:\n pname = delim.join(s[name].split(delim)[:delim_occurance])\n else:\n pname = s[name]\n if pname not in pdict:\n pdict[pname] = [s, ]\n else:\n pdict[pname].append(s)\n pairs = [Pair(pdict[n], name=n,\n h_selection_func=h_selection_func,\n l_selection_func=l_selection_func) for n in pdict.keys()]\n if pairs_only:\n pairs = [p for p in pairs if p.is_pair]\n return pairs", "response": "Assigns sequences to the appropriate mAb pair based on the sequence name."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef deduplicate(pairs, aa=False, ignore_primer_regions=False):\n '''\n Removes duplicate sequences from a list of Pair objects.\n\n If a Pair has heavy and light chains, both chains must identically match heavy and light chains\n from another Pair to be considered a duplicate. If a Pair has only a single chain,\n identical matches to that chain will cause the single chain Pair to be considered a duplicate,\n even if the comparison Pair has both chains.\n\n Note that identical sequences are identified by simple string comparison, so sequences of\n different length that are identical over the entirety of the shorter sequence are not\n considered duplicates.\n\n By default, comparison is made on the nucleotide sequence. 
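The duplicate check at the core of ``deduplicate`` reduces to plain string equality over the selected chains; a simplified sketch that uses ``(heavy, light)`` tuples in place of Pair objects and skips the missing-chain and primer-trimming handling:

```python
# Simplified sketch of deduplicate's string-equality test: a pair is kept
# only if no already-kept pair has identical heavy and light sequences.
def dedup(pairs):
    kept = []
    for heavy, light in pairs:
        if not any(heavy == h and light == l for h, l in kept):
            kept.append((heavy, light))
    return kept

pairs = [('ATGC', 'GGCC'), ('ATGC', 'GGCC'), ('ATGC', 'TTAA')]
assert dedup(pairs) == [('ATGC', 'GGCC'), ('ATGC', 'TTAA')]
```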
To use the amino acid sequence instead,\n set aa=True.\n '''\n nr_pairs = []\n just_pairs = [p for p in pairs if p.is_pair]\n single_chains = [p for p in pairs if not p.is_pair]\n _pairs = just_pairs + single_chains\n for p in _pairs:\n duplicates = []\n for nr in nr_pairs:\n identical = True\n vdj = 'vdj_aa' if aa else 'vdj_nt'\n offset = 4 if aa else 12\n if p.heavy is not None:\n if nr.heavy is None:\n identical = False\n else:\n heavy = p.heavy[vdj][offset:-offset] if ignore_primer_regions else p.heavy[vdj]\n nr_heavy = nr.heavy[vdj][offset:-offset] if ignore_primer_regions else nr.heavy[vdj]\n if heavy != nr_heavy:\n identical = False\n if p.light is not None:\n if nr.light is None:\n identical = False\n else:\n light = p.light[vdj][offset:-offset] if ignore_primer_regions else p.light[vdj]\n nr_light = nr.light[vdj][offset:-offset] if ignore_primer_regions else nr.light[vdj]\n if light != nr_light:\n identical = False\n duplicates.append(identical)\n if any(duplicates):\n continue\n else:\n nr_pairs.append(p)\n return nr_pairs", "response": "Removes duplicate sequences from a list of Pair objects."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _refine_v(seq, species):\n '''\n Completes the 5' end of a truncated sequence with germline nucleotides.\n Input is a MongoDB dict (seq) and the species.\n '''\n vgerm = germlines.get_germline(seq['v_gene']['full'], species)\n aln = global_alignment(seq['vdj_nt'], vgerm)\n prepend = ''\n for s, g in zip(aln.aligned_query, aln.aligned_target):\n if s != '-':\n break\n else:\n prepend += g\n seq['vdj_nt'] = prepend + seq['vdj_nt']", "response": "Refine the 5' end of a truncated sequence with germline nucleotides."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nrefines the 3' end of a truncated sequence with germline nucleotides.", "response": "def _refine_j(seq, species):\n '''\n Completes the 3' end of a 
truncated sequence with germline nucleotides.\n Input is a MongoDB dict (seq) and the species.\n '''\n jgerm = germlines.get_germline(seq['j_gene']['full'], species)\n aln = global_alignment(seq['vdj_nt'], jgerm)\n append = ''\n for s, g in zip(aln.aligned_query[::-1], aln.aligned_target[::-1]):\n if s != '-':\n break\n else:\n append += g\n seq['vdj_nt'] = seq['vdj_nt'] + append[::-1]"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef fasta(self, key='vdj_nt', append_chain=True):\n '''\n Returns the sequence pair as a fasta string. If the Pair object contains\n both heavy and light chain sequences, both will be returned as a single string.\n\n By default, the fasta string contains the 'vdj_nt' sequence for each chain. To change,\n use the option to select an alternate sequence.\n\n By default, the chain (heavy or light) will be appended to the sequence name:\n\n >MySequence_heavy\n\n To just use the pair name (which will result in duplicate sequence names for Pair objects\n with both heavy and light chains), set to False.\n '''\n fastas = []\n for s, chain in [(self.heavy, 'heavy'), (self.light, 'light')]:\n if s is not None:\n c = '_{}'.format(chain) if append_chain else ''\n fastas.append('>{}{}\\n{}'.format(s['seq_id'], c, s[key]))\n return '\\n'.join(fastas)", "response": "Returns the sequence pair as a string."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngenerates a matplotlib colormap from a single color.", "response": "def cmap_from_color(color, dark=False):\n '''\n Generates a matplotlib colormap from a single color.\n\n Colormap will be built, by default, from white to ``color``.\n\n Args:\n\n color: Can be one of several things:\n\n 1. Hex code\n 2. HTML color name\n 3. RGB tuple\n\n dark (bool): If ``True``, colormap will be built from ``color`` to\n black. 
Default is ``False``, which builds a colormap from\n white to ``color``.\n\n Returns:\n\n colormap: A matplotlib colormap\n\n '''\n if dark:\n return sns.dark_palette(color, as_cmap=True)\n else:\n return sns.light_palette(color, as_cmap=True)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef truncate_colormap(cmap, minval=0.0, maxval=1.0, n=256):\n cmap = get_cmap(cmap)\n name = \"%s-trunc-%.2g-%.2g\" % (cmap.name, minval, maxval)\n return colors.LinearSegmentedColormap.from_list(\n name, cmap(np.linspace(minval, maxval, n)))", "response": "Truncates a colormap such that the new colormap consists of a subset of the colormap."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn a new colormap that stacks two colormaps.", "response": "def stack_colormap(lower, upper, n=256):\n \"\"\"\n Stacks two colormaps (``lower`` and ``upper``) such that\n low half -> ``lower`` colors, high half -> ``upper`` colors\n\n Args:\n\n \tlower (colormap): colormap for the lower half of the stacked colormap.\n\n \tupper (colormap): colormap for the upper half of the stacked colormap.\n\n \tn (int): Number of colormap steps. Default is ``256``.\n \"\"\"\n A = get_cmap(lower)\n B = get_cmap(upper)\n name = \"%s-%s\" % (A.name, B.name)\n lin = np.linspace(0, 1, n)\n return array_cmap(np.vstack((A(lin), B(lin))), name, n=n)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef deprecated(func):\n\n @functools.wraps(func)\n def decorated(*args, **kwargs):\n warnings.warn_explicit(\n \"Call to deprecated function {}.\".format(func.__name__),\n category=DeprecationWarning,\n filename=func.__code__.co_filename,\n lineno=func.__code__.co_firstlineno + 1\n )\n\n return func(*args, **kwargs)\n return decorated", "response": "This is a decorator which can be used to mark functions\n as deprecated. 
It will result in a warning being emitted when the function is used."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef nested_model(model, nested_fields):\n # type: (ModelBase, Any)->Optional[AppModel]\n if model is None:\n return None\n\n app_model = model.get_app_model()\n is_dict = isinstance(nested_fields, dict)\n\n for field in nested_fields:\n field = get_nested_field_name(field)\n\n nested_nested = nested_fields.get(\n field) if is_dict and nested_fields.get(field) else []\n value = getattr(model, field, None)\n\n # fields can also contain lists\n nm_fn = nested_models if isinstance(value, list) else nested_model\n\n setattr(app_model, field, nm_fn(value, nested_nested))\n\n return app_model", "response": "Returns the AppModel with the nested_fields attached."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nselects a node from abstract syntax tree using its name and index position.", "response": "def check_node(\n state,\n name,\n index=0,\n missing_msg=\"Check the {ast_path}. Could not find the {index}{node_name}.\",\n priority=None,\n):\n \"\"\"Select a node from abstract syntax tree (AST), using its name and index position.\n\n Args:\n state: State instance describing student and solution code. Can be omitted if used with Ex().\n name : the name of the abstract syntax tree node to find.\n index: the position of that node (see below for details).\n missing_msg: feedback message if node is not in student AST.\n priority: the priority level of the node being searched for. This determines whether to\n descend into other AST nodes during the search. Higher priority nodes descend\n into lower priority. Currently, the only important part of priority is that\n setting a very high priority (e.g. 99) will search every node.\n\n\n :Example:\n If both the student and solution code are.. 
::\n\n SELECT a FROM b; SELECT x FROM y;\n\n then we can focus on the first select with::\n\n # approach 1: with manually created State instance\n state = State(*args, **kwargs)\n new_state = check_node(state, 'SelectStmt', 0)\n\n # approach 2: with Ex and chaining\n new_state = Ex().check_node('SelectStmt', 0)\n\n \"\"\"\n df = partial(state.ast_dispatcher, name, priority=priority)\n\n sol_stmt_list = df(state.solution_ast)\n try:\n sol_stmt = sol_stmt_list[index]\n except IndexError:\n raise IndexError(\"Can't get %s statement at index %s\" % (name, index))\n\n stu_stmt_list = df(state.student_ast)\n try:\n stu_stmt = stu_stmt_list[index]\n except IndexError:\n # use speaker on ast dialect module to get message, or fall back to generic\n ast_path = state.get_ast_path() or \"highlighted code\"\n _msg = state.ast_dispatcher.describe(\n sol_stmt, missing_msg, index=index, ast_path=ast_path\n )\n if _msg is None:\n _msg = MSG_CHECK_FALLBACK\n state.report(Feedback(_msg))\n\n action = {\n \"type\": \"check_node\",\n \"kwargs\": {\"name\": name, \"index\": index},\n \"node\": stu_stmt,\n }\n\n return state.to_child(\n student_ast=stu_stmt, solution_ast=sol_stmt, history=state.history + (action,)\n )"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncheck the edge of the student and solution code.", "response": "def check_edge(\n state,\n name,\n index=0,\n missing_msg=\"Check the {ast_path}. Could not find the {index}{field_name}.\",\n):\n \"\"\"Select an attribute from an abstract syntax tree (AST) node, using the attribute name.\n\n Args:\n state: State instance describing student and solution code. Can be omitted if used with Ex().\n name: the name of the attribute to select from current AST node.\n index: entry to get from a list field. If too few entires, will fail with missing_msg.\n missing_msg: feedback message if attribute is not in student AST.\n\n :Example:\n If both the student and solution code are.. 
::\n\n SELECT a FROM b; SELECT x FROM y;\n\n then we can get the from_clause using ::\n\n # approach 1: with manually created State instance -----\n state = State(*args, **kwargs)\n select = check_node(state, 'SelectStmt', 0)\n clause = check_edge(select, 'from_clause')\n\n # approach 2: with Ex and chaining ---------------------\n select = Ex().check_node('SelectStmt', 0) # get first select statement\n clause = select.check_edge('from_clause', None) # get from_clause (a list)\n clause2 = select.check_edge('from_clause', 0) # get first entry in from_clause\n \"\"\"\n try:\n sol_attr = getattr(state.solution_ast, name)\n if sol_attr and isinstance(sol_attr, list) and index is not None:\n sol_attr = sol_attr[index]\n except IndexError:\n raise IndexError(\"Can't get %s attribute\" % name)\n\n # use speaker on ast dialect module to get message, or fall back to generic\n ast_path = state.get_ast_path() or \"highlighted code\"\n _msg = state.ast_dispatcher.describe(\n state.student_ast, missing_msg, field=name, index=index, ast_path=ast_path\n )\n if _msg is None:\n _msg = MSG_CHECK_FALLBACK\n\n try:\n stu_attr = getattr(state.student_ast, name)\n if stu_attr and isinstance(stu_attr, list) and index is not None:\n stu_attr = stu_attr[index]\n except:\n state.report(Feedback(_msg))\n\n # fail if attribute exists, but is none only for student\n if stu_attr is None and sol_attr is not None:\n state.report(Feedback(_msg))\n\n action = {\"type\": \"check_edge\", \"kwargs\": {\"name\": name, \"index\": index}}\n\n return state.to_child(\n student_ast=stu_attr, solution_ast=sol_attr, history=state.history + (action,)\n )"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ntest whether the student code contains text.", "response": "def has_code(\n state,\n text,\n incorrect_msg=\"Check the {ast_path}. 
The checker expected to find {text}.\",\n fixed=False,\n):\n \"\"\"Test whether the student code contains text.\n\n Args:\n state: State instance describing student and solution code. Can be omitted if used with Ex().\n text : text that student code must contain. Can be a regex pattern or a simple string.\n incorrect_msg: feedback message if text is not in student code.\n fixed: whether to match text exactly, rather than using regular expressions.\n\n Note:\n Functions like ``check_node`` focus on certain parts of code.\n Using these functions followed by ``has_code`` will only look\n in the code being focused on.\n\n :Example:\n If the student code is.. ::\n\n SELECT a FROM b WHERE id < 100\n\n Then the first test below would (unfortunately) pass, but the second would fail..::\n\n # contained in student code\n Ex().has_code(text=\"id < 10\")\n\n # the $ means that you are matching the end of a line\n Ex().has_code(text=\"id < 10$\")\n\n By setting ``fixed = True``, you can search for fixed strings::\n\n # without fixed = True, '*' matches any character\n Ex().has_code(text=\"SELECT * FROM b\") # passes\n Ex().has_code(text=\"SELECT \\\\\\\\* FROM b\") # fails\n Ex().has_code(text=\"SELECT * FROM b\", fixed=True) # fails\n\n You can check only the code corresponding to the WHERE clause, using ::\n\n where = Ex().check_node('SelectStmt', 0).check_edge('where_clause')\n where.has_code(text = \"id < 10)\n\n \"\"\"\n stu_ast = state.student_ast\n stu_code = state.student_code\n\n # fallback on using complete student code if no ast\n ParseError = state.ast_dispatcher.ParseError\n\n def get_text(ast, code):\n if isinstance(ast, ParseError):\n return code\n try:\n return ast.get_text(code)\n except:\n return code\n\n stu_text = get_text(stu_ast, stu_code)\n\n _msg = incorrect_msg.format(\n ast_path=state.get_ast_path() or \"highlighted code\", text=text\n )\n\n # either simple text matching or regex test\n res = text in stu_text if fixed else re.search(text, stu_text)\n\n 
if not res:\n state.report(Feedback(_msg))\n\n return state"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef has_equal_ast(\n state,\n incorrect_msg=\"Check the {ast_path}. {extra}\",\n sql=None,\n start=[\"expression\", \"subquery\", \"sql_script\"][0],\n exact=None,\n):\n \"\"\"Test whether the student and solution code have identical AST representations\n\n Args:\n state: State instance describing student and solution code. Can be omitted if used with Ex().\n incorrect_msg: feedback message if student and solution ASTs don't match\n sql : optional code to use instead of the solution ast that is zoomed in on.\n start: if ``sql`` arg is used, the parser rule to parse the sql code.\n One of 'expression' (the default), 'subquery', or 'sql_script'.\n exact: whether to require an exact match (True), or only that the\n student AST contains the solution AST. If not specified, this\n defaults to ``True`` if ``sql`` is not specified, and to ``False``\n if ``sql`` is specified. 
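The exact/containment switch in ``has_equal_ast`` comes down to comparing repr strings; a minimal sketch, with hand-written strings standing in for ``repr(ast)``:

```python
# Sketch of has_equal_ast's comparison: exact=True requires identical AST
# reprs, exact=False only requires the solution repr to occur inside the
# student repr (the containment mode used when a custom `sql` is given).
def reprs_match(stu_rep, sol_rep, exact=True):
    return sol_rep == stu_rep if exact else sol_rep in stu_rep

stu = "Select(fields=[a], from=b, where=BinOp(id, >, 1))"
sol = "BinOp(id, >, 1)"
assert not reprs_match(stu, sol, exact=True)   # not identical
assert reprs_match(stu, sol, exact=False)      # contained in the WHERE clause
```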
You can always specify it manually.\n\n :Example:\n\n Example 1 - Suppose the solution code is ::\n\n SELECT * FROM cities\n\n and you want to verify whether the `FROM` part is correct: ::\n\n Ex().check_node('SelectStmt').from_clause().has_equal_ast()\n\n Example 2 - Suppose the solution code is ::\n\n SELECT * FROM b WHERE id > 1 AND name = 'filip'\n\n Then the following SCT makes sure ``id > 1`` was used somewhere in the WHERE clause.::\n\n Ex().check_node('SelectStmt') \\\\/\n .check_edge('where_clause') \\\\/\n .has_equal_ast(sql = 'id > 1')\n\n \"\"\"\n ast = state.ast_dispatcher.ast_mod\n sol_ast = state.solution_ast if sql is None else ast.parse(sql, start)\n\n # if sql is set, exact defaults to False.\n # if sql not set, exact defaults to True.\n if exact is None:\n exact = sql is None\n\n stu_rep = repr(state.student_ast)\n sol_rep = repr(sol_ast)\n\n def get_str(ast, code, sql):\n if sql:\n return sql\n if isinstance(ast, str):\n return ast\n try:\n return ast.get_text(code)\n except:\n return None\n\n sol_str = get_str(state.solution_ast, state.solution_code, sql)\n _msg = incorrect_msg.format(\n ast_path=state.get_ast_path() or \"highlighted code\",\n extra=\"The checker expected to find `{}` in there.\".format(sol_str)\n if sol_str\n else \"Something is missing.\",\n )\n if (exact and (sol_rep != stu_rep)) or (not exact and (sol_rep not in stu_rep)):\n state.report(Feedback(_msg))\n\n return state", "response": "Test whether the student and solution code have identical AST representations."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncache decorator for app models in task.perform", "response": "def cache_model(key_params, timeout='default'):\n \"\"\"\n Caching decorator for app models in task.perform\n \"\"\"\n def decorator_fn(fn):\n return CacheModelDecorator().decorate(key_params, timeout, fn)\n\n return decorator_fn"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncaching a page 
of a list of AppModels", "response": "def cache_page(key_params, timeout='default'):\n \"\"\"\n Cache a page (slice) of a list of AppModels\n \"\"\"\n def decorator_fn(fn):\n d = CachePageDecorator()\n return d.decorate(key_params, timeout, fn)\n\n return decorator_fn"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncreate a key for the data in task.", "response": "def create_key_for_data(prefix, data, key_params):\n \"\"\"\n From ``data`` params in task create corresponding key with help of ``key_params`` (defined in decorator)\n \"\"\"\n d = data.get_data()\n values = []\n for k in key_params:\n if k in d and type(d[k]) is list:\n values.append(\"{0}:{1}\".format(k, \" -\".join(d[k])))\n else:\n value = d[k] if k in d else ''\n values.append(\"{0}:{1}\".format(k, value))\n\n return \"{0}-{1}\".format(prefix, \"-\".join(values))"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nchecking whether or not jsonitems and dbitems differ", "response": "def items_differ(jsonitems, dbitems, subfield_dict):\n \"\"\" check whether or not jsonitems and dbitems differ \"\"\"\n\n # short circuit common cases\n if len(jsonitems) == len(dbitems) == 0:\n # both are empty\n return False\n elif len(jsonitems) != len(dbitems):\n # if lengths differ, they're definitely different\n return True\n\n original_jsonitems = jsonitems\n jsonitems = copy.deepcopy(jsonitems)\n keys = jsonitems[0].keys()\n\n # go over dbitems looking for matches\n for dbitem in dbitems:\n order = getattr(dbitem, 'order', None)\n match = None\n for i, jsonitem in enumerate(jsonitems):\n # check if all keys (excluding subfields) match\n for k in keys:\n if k not in subfield_dict and getattr(dbitem, k) != jsonitem.get(k, None):\n break\n else:\n # all fields match so far, possibly equal, just check subfields now\n for k in subfield_dict:\n jsonsubitems = jsonitem[k]\n dbsubitems = list(getattr(dbitem, k).all())\n if items_differ(jsonsubitems, dbsubitems, 
subfield_dict[k][2]):\n break\n else:\n # if the dbitem sets 'order', then the order matters\n if order is not None and int(order) != original_jsonitems.index(jsonitem):\n break\n # these items are equal, so let's mark it for removal\n match = i\n break\n\n if match is not None:\n # item exists in both, remove from jsonitems\n jsonitems.pop(match)\n else:\n # exists in db but not json\n return True\n\n # if we get here, jsonitems has to be empty because we asserted that the length was\n # the same and we found a match for each thing in dbitems, here's a safety check just in case\n if jsonitems: # pragma: no cover\n return True\n\n return False"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngives an id found in scraped JSON return a DB id for the object.", "response": "def resolve_json_id(self, json_id, allow_no_match=False):\n \"\"\"\n Given an id found in scraped JSON, return a DB id for the object.\n\n params:\n json_id: id from json\n allow_no_match: just return None if id can't be resolved\n\n returns:\n database id\n\n raises:\n ValueError if id couldn't be resolved\n \"\"\"\n if not json_id:\n return None\n\n if json_id.startswith('~'):\n # keep caches of all the pseudo-ids to avoid doing 1000s of lookups during import\n if json_id not in self.pseudo_id_cache:\n spec = get_pseudo_id(json_id)\n spec = self.limit_spec(spec)\n\n if isinstance(spec, Q):\n objects = self.model_class.objects.filter(spec)\n else:\n objects = self.model_class.objects.filter(**spec)\n ids = {each.id for each in objects}\n if len(ids) == 1:\n self.pseudo_id_cache[json_id] = ids.pop()\n errmsg = None\n elif not ids:\n errmsg = 'cannot resolve pseudo id to {}: {}'.format(\n self.model_class.__name__, json_id)\n else:\n errmsg = 'multiple objects returned for {} pseudo id {}: {}'.format(\n self.model_class.__name__, json_id, ids)\n\n # either raise or log error\n if errmsg:\n if not allow_no_match:\n raise UnresolvedIdError(errmsg)\n else:\n 
self.error(errmsg)\n self.pseudo_id_cache[json_id] = None\n\n # return the cached object\n return self.pseudo_id_cache[json_id]\n\n # get the id that the duplicate points to, or use self\n json_id = self.duplicates.get(json_id, json_id)\n\n try:\n return self.json_to_db_id[json_id]\n except KeyError:\n raise UnresolvedIdError('cannot resolve id: {}'.format(json_id))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nimport a JSON directory into the database", "response": "def import_directory(self, datadir):\n \"\"\" import a JSON directory into the database \"\"\"\n\n def json_stream():\n # load all json, mapped by json_id\n for fname in glob.glob(os.path.join(datadir, self._type + '_*.json')):\n with open(fname) as f:\n yield json.load(f)\n\n return self.import_data(json_stream())"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _prepare_imports(self, dicts):\n\n \"\"\" filters the import stream to remove duplicates\n\n also serves as a good place to override if anything special has to be done to the\n order of the import stream (see OrganizationImporter)\n \"\"\"\n # hash(json): id\n seen_hashes = {}\n\n for data in dicts:\n json_id = data.pop('_id')\n\n # map duplicates (using omnihash to tell if json dicts are identical-ish)\n objhash = omnihash(data)\n if objhash not in seen_hashes:\n seen_hashes[objhash] = json_id\n yield json_id, data\n else:\n self.duplicates[json_id] = seen_hashes[objhash]", "response": "Filter the import stream to remove duplicates and map duplicates"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nimports a bunch of dicts together", "response": "def import_data(self, data_items):\n \"\"\" import a bunch of dicts together \"\"\"\n # keep counts of all actions\n record = {\n 'insert': 0, 'update': 0, 'noop': 0,\n 'start': utcnow(),\n 'records': {\n 'insert': [],\n 'update': [],\n 'noop': 
[],\n }\n }\n\n for json_id, data in self._prepare_imports(data_items):\n obj_id, what = self.import_item(data)\n self.json_to_db_id[json_id] = obj_id\n record['records'][what].append(obj_id)\n record[what] += 1\n\n # all objects are loaded, a perfect time to do inter-object resolution and other tasks\n self.postimport()\n\n record['end'] = utcnow()\n\n return {self._type: record}"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef import_item(self, data):\n what = 'noop'\n\n # remove the JSON _id (may still be there if called directly)\n data.pop('_id', None)\n\n # add fields/etc.\n data = self.apply_transformers(data)\n data = self.prepare_for_db(data)\n\n try:\n obj = self.get_object(data)\n except self.model_class.DoesNotExist:\n obj = None\n\n # remove pupa_id which does not belong in the OCD data models\n pupa_id = data.pop('pupa_id', None)\n\n # pull related fields off\n related = {}\n for field in self.related_models:\n related[field] = data.pop(field)\n\n # obj existed, check if we need to do an update\n if obj:\n if obj.id in self.json_to_db_id.values():\n raise DuplicateItemError(data, obj, related.get('sources', []))\n # check base object for changes\n for key, value in data.items():\n if getattr(obj, key) != value and key not in obj.locked_fields:\n setattr(obj, key, value)\n what = 'update'\n\n updated = self._update_related(obj, related, self.related_models)\n if updated:\n what = 'update'\n\n if what == 'update':\n obj.save()\n\n # need to create the data\n else:\n what = 'insert'\n try:\n obj = self.model_class.objects.create(**data)\n except Exception as e:\n raise DataImportError('{} while importing {} as {}'.format(e, data,\n self.model_class))\n self._create_related(obj, related, self.related_models)\n\n if pupa_id:\n Identifier.objects.get_or_create(identifier=pupa_id,\n jurisdiction_id=self.jurisdiction_id,\n defaults={'content_object': obj})\n\n return obj.id, what", "response": "function used by 
import_data"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nupdates DB objects related to a base object.", "response": "def _update_related(self, obj, related, subfield_dict):\n \"\"\"\n update DB objects related to a base object\n obj: a base object to create related\n related: dict mapping field names to lists of related objects\n subfield_list: where to get the next layer of subfields\n \"\"\"\n # keep track of whether or not anything was updated\n updated = False\n\n # for each related field - check if there are differences\n for field, items in related.items():\n # skip subitem check if it's locked anyway\n if field in obj.locked_fields:\n continue\n\n # get items from database\n dbitems = list(getattr(obj, field).all())\n dbitems_count = len(dbitems)\n\n # default to doing nothing\n do_delete = do_update = False\n\n if items and dbitems_count: # we have items, so does db, check for conflict\n do_delete = do_update = items_differ(items, dbitems, subfield_dict[field][2])\n elif items and not dbitems_count: # we have items, db doesn't, just update\n do_update = True\n elif not items and dbitems_count: # db has items, we don't, just delete\n do_delete = True\n # otherwise: no items or dbitems, so nothing is done\n\n # don't delete if field is in merge_related\n if field in self.merge_related:\n new_items = []\n # build a list of keyfields to existing database objects\n keylist = self.merge_related[field]\n keyed_dbitems = {tuple(getattr(item, k) for k in keylist):\n item for item in dbitems}\n\n # go through 'new' items\n # if item with the same keyfields exists:\n # update the database item w/ the new item's properties\n # else:\n # add it to new_items\n for item in items:\n key = tuple(item.get(k) for k in keylist)\n dbitem = keyed_dbitems.get(key)\n if not dbitem:\n new_items.append(item)\n else:\n # update dbitem\n for fname, val in item.items():\n setattr(dbitem, fname, val)\n dbitem.save()\n\n # import anything 
that made it to new_items in the usual fashion\n self._create_related(obj, {field: new_items}, subfield_dict)\n else:\n # default logic is to just wipe and recreate subobjects\n if do_delete:\n updated = True\n getattr(obj, field).all().delete()\n if do_update:\n updated = True\n self._create_related(obj, {field: items}, subfield_dict)\n\n return updated"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncreate DB objects related to a base object obj.", "response": "def _create_related(self, obj, related, subfield_dict):\n \"\"\"\n create DB objects related to a base object\n obj: a base object to create related\n related: dict mapping field names to lists of related objects\n subfield_list: where to get the next layer of subfields\n \"\"\"\n for field, items in related.items():\n subobjects = []\n all_subrelated = []\n Subtype, reverse_id_field, subsubdict = subfield_dict[field]\n for order, item in enumerate(items):\n # pull off 'subrelated' (things that are related to this obj)\n subrelated = {}\n for subfield in subsubdict:\n subrelated[subfield] = item.pop(subfield)\n\n if field in self.preserve_order:\n item['order'] = order\n\n item[reverse_id_field] = obj.id\n\n try:\n subobjects.append(Subtype(**item))\n all_subrelated.append(subrelated)\n except Exception as e:\n raise DataImportError('{} while importing {} as {}'.format(e, item, Subtype))\n\n # add all subobjects at once (really great for actions & votes)\n try:\n Subtype.objects.bulk_create(subobjects)\n except Exception as e:\n raise DataImportError('{} while importing {} as {}'.format(e, subobjects, Subtype))\n\n # after import the subobjects, import their subsubobjects\n for subobj, subrel in zip(subobjects, all_subrelated):\n self._create_related(subobj, subrel, subsubdict)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef add_edge(self, fro, to):\n self.add_node(fro)\n self.add_node(to)\n 
self.edges[fro].add(to)", "response": "Add an edge to the tree."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nremove a node from the network.", "response": "def prune_node(self, node, remove_backrefs=False):\n \"\"\"\n remove node `node` from the network (including any edges that may\n have been pointing at `node`).\n \"\"\"\n if not remove_backrefs:\n for fro, connections in self.edges.items():\n if node in self.edges[fro]:\n raise ValueError(\"\"\"Attempting to remove a node with\n backrefs. You may consider setting\n `remove_backrefs` to true.\"\"\")\n\n # OK. Otherwise, let's do our removal.\n\n self.nodes.remove(node)\n if node in self.edges:\n # Remove add edges from this node if we're pruning it.\n self.edges.pop(node)\n\n for fro, connections in self.edges.items():\n # Remove any links to this node (if they exist)\n if node in self.edges[fro]:\n # If we should remove backrefs:\n self.edges[fro].remove(node)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef sort(self):\n while self.nodes:\n iterated = False\n for node in self.leaf_nodes():\n iterated = True\n self.prune_node(node)\n yield node\n if not iterated:\n raise CyclicGraphError(\"Sorting has found a cyclic graph.\")", "response": "Return an iterable of nodes topologically sorted to correctly import\n dependencies before leaf nodes."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef dot(self):\n buff = \"digraph graphname {\"\n for fro in self.edges:\n for to in self.edges[fro]:\n buff += \"%s -> %s;\" % (fro, to)\n buff += \"}\"\n return buff", "response": "Return a dot representation of the tree."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef cycles(self):\n\n def walk_node(node, seen):\n \"\"\"\n Walk each top-level node we know about, and recurse\n along the graph.\n 
\"\"\"\n if node in seen:\n yield (node,)\n return\n seen.add(node)\n for edge in self.edges[node]:\n for cycle in walk_node(edge, set(seen)):\n yield (node,) + cycle\n\n # First, let's get a iterable of all known cycles.\n cycles = chain.from_iterable(\n (walk_node(node, set()) for node in self.nodes))\n\n shortest = set()\n # Now, let's go through and sift through the cycles, finding\n # the shortest unique cycle known, ignoring cycles which contain\n # already known cycles.\n for cycle in sorted(cycles, key=len):\n for el in shortest:\n if set(el).issubset(set(cycle)):\n break\n else:\n shortest.add(cycle)\n # And return that unique list.\n return shortest", "response": "This method returns the shortest unique cycles that are found in the network."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef save_object(self, obj):\n obj.pre_save(self.jurisdiction.jurisdiction_id)\n\n filename = '{0}_{1}.json'.format(obj._type, obj._id).replace('/', '-')\n\n self.info('save %s %s as %s', obj._type, obj, filename)\n self.debug(json.dumps(OrderedDict(sorted(obj.as_dict().items())),\n cls=utils.JSONEncoderPlus, indent=4, separators=(',', ': ')))\n\n self.output_names[obj._type].add(filename)\n\n with open(os.path.join(self.datadir, filename), 'w') as f:\n json.dump(obj.as_dict(), f, cls=utils.JSONEncoderPlus)\n\n # validate after writing, allows for inspection on failure\n try:\n obj.validate()\n except ValueError as ve:\n if self.strict_validation:\n raise ve\n else:\n self.warning(ve)\n\n # after saving and validating, save subordinate objects\n for obj in obj._related:\n self.save_object(obj)", "response": "Save object to disk as JSON."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef validate(self, schema=None):\n if schema is None:\n schema = self._schema\n\n type_checker = Draft3Validator.TYPE_CHECKER.redefine(\n \"datetime\", lambda c, d: 
isinstance(d, (datetime.date, datetime.datetime))\n )\n ValidatorCls = jsonschema.validators.extend(Draft3Validator, type_checker=type_checker)\n validator = ValidatorCls(schema, format_checker=FormatChecker())\n\n errors = [str(error) for error in validator.iter_errors(self.as_dict())]\n if errors:\n raise ScrapeValueError('validation of {} {} failed: {}'.format(\n self.__class__.__name__, self._id, '\\n\\t'+'\\n\\t'.join(errors)\n ))", "response": "Validate that we have a valid object."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nadds a source URL from which data was collected", "response": "def add_source(self, url, *, note=''):\n \"\"\" Add a source URL from which data was collected \"\"\"\n new = {'url': url, 'note': note}\n self.sources.append(new)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef evolve_genomes(rng, pop, params, recorder=None):\n import warnings\n # Test parameters while suppressing warnings\n with warnings.catch_warnings():\n warnings.simplefilter(\"ignore\")\n # Will throw exception if anything is wrong:\n params.validate()\n\n from ._fwdpy11 import MutationRegions\n from ._fwdpy11 import evolve_without_tree_sequences\n from ._fwdpy11 import dispatch_create_GeneticMap\n pneutral = params.mutrate_n/(params.mutrate_n+params.mutrate_s)\n mm = MutationRegions.create(pneutral, params.nregions, params.sregions)\n rm = dispatch_create_GeneticMap(params.recrate, params.recregions)\n\n if recorder is None:\n from ._fwdpy11 import RecordNothing\n recorder = RecordNothing()\n\n evolve_without_tree_sequences(rng, pop, params.demography,\n params.mutrate_n, params.mutrate_s,\n params.recrate, mm, rm, params.gvalue,\n recorder, params.pself, params.prune_selected)", "response": "Evolve a population without tree sequence recordings."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ninitializes the individual table with the 
metadata of the given pop.", "response": "def _initializeIndividualTable(pop, tc):\n \"\"\"\n Returns node ID -> individual map\n \"\"\"\n # First, alive individuals:\n individal_nodes = {}\n for i in range(pop.N):\n individal_nodes[2*i] = i\n individal_nodes[2*i+1] = i\n metadata_strings = _generate_individual_metadata(pop.diploid_metadata, tc)\n\n # Now, preserved nodes\n num_ind_nodes = pop.N\n for i in pop.ancient_sample_metadata:\n assert i not in individal_nodes, \"indivudal record error\"\n individal_nodes[i.nodes[0]] = num_ind_nodes\n individal_nodes[i.nodes[1]] = num_ind_nodes\n num_ind_nodes += 1\n\n metadata_strings.extend(_generate_individual_metadata(\n pop.ancient_sample_metadata, tc))\n\n md, mdo = tskit.pack_bytes(metadata_strings)\n flags = [0 for i in range(pop.N+len(pop.ancient_sample_metadata))]\n tc.individuals.set_columns(flags=flags, metadata=md, metadata_offset=mdo)\n return individal_nodes"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef dump_tables_to_tskit(pop):\n node_view = np.array(pop.tables.nodes, copy=True)\n node_view['time'] -= node_view['time'].max()\n node_view['time'][np.where(node_view['time'] != 0.0)[0]] *= -1.0\n edge_view = np.array(pop.tables.edges, copy=False)\n mut_view = np.array(pop.tables.mutations, copy=False)\n\n tc = tskit.TableCollection(pop.tables.genome_length)\n\n # We must initialize population and individual\n # tables before we can do anything else.\n # Attempting to set population to anything\n # other than -1 in an tskit.NodeTable will\n # raise an exception if the PopulationTable\n # isn't set up.\n _initializePopulationTable(node_view, tc)\n node_to_individual = _initializeIndividualTable(pop, tc)\n\n individual = [-1 for i in range(len(node_view))]\n for k, v in node_to_individual.items():\n individual[k] = v\n flags = [1]*2*pop.N + [0]*(len(node_view) - 2*pop.N)\n # Bug fixed in 0.3.1: add preserved nodes to samples list\n for i in 
pop.tables.preserved_nodes:\n flags[i] = 1\n tc.nodes.set_columns(flags=flags, time=node_view['time'],\n population=node_view['population'],\n individual=individual)\n tc.edges.set_columns(left=edge_view['left'],\n right=edge_view['right'],\n parent=edge_view['parent'],\n child=edge_view['child'])\n\n mpos = np.array([pop.mutations[i].pos for i in mut_view['key']])\n ancestral_state = np.zeros(len(mut_view), dtype=np.int8)+ord('0')\n ancestral_state_offset = np.arange(len(mut_view)+1, dtype=np.uint32)\n tc.sites.set_columns(position=mpos,\n ancestral_state=ancestral_state,\n ancestral_state_offset=ancestral_state_offset)\n\n derived_state = np.zeros(len(mut_view), dtype=np.int8)+ord('1')\n md, mdo = _generate_mutation_metadata(pop)\n tc.mutations.set_columns(site=np.arange(len(mpos), dtype=np.int32),\n node=mut_view['node'],\n derived_state=derived_state,\n derived_state_offset=ancestral_state_offset,\n metadata=md,\n metadata_offset=mdo)\n return tc.tree_sequence()", "response": "Converts fwdpy11.TableCollection to a tskit.
TreeSequence\n "} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef mslike(pop, **kwargs):\n import fwdpy11\n if isinstance(pop, fwdpy11.DiploidPopulation) is False:\n raise ValueError(\"incorrect pop type: \" + str(type(pop)))\n defaults = {'simlen': 10*pop.N,\n 'beg': 0.0,\n 'end': 1.0,\n 'theta': 100.0,\n 'pneutral': 1.0,\n 'rho': 100.0,\n 'dfe': None\n }\n for key, value in kwargs.items():\n if key in defaults:\n defaults[key] = value\n import numpy as np\n\n params = {'demography': np.array([pop.N]*defaults['simlen'],\n dtype=np.uint32),\n 'nregions': [fwdpy11.Region(defaults['beg'],\n defaults['end'], 1.0)],\n 'recregions': [fwdpy11.Region(defaults['beg'],\n defaults['end'], 1.0)],\n 'rates': ((defaults['pneutral']*defaults['theta'])/(4.0*pop.N),\n ((1.0-defaults['pneutral'])*defaults['theta']) /\n (4.0*pop.N),\n defaults['rho']/(4.0*float(pop.N))),\n 'gvalue': fwdpy11.Multiplicative(2.0)\n }\n if defaults['dfe'] is None:\n params['sregions'] = []\n else:\n params['sregions'] = [defaults['dfe']]\n return params", "response": "Function to establish default parameters for a single-locus simulation for standard pop-gen\n modeling scenarios."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef limit_spec(self, spec):\n if list(spec.keys()) == ['name']:\n # if we're just resolving on name, include other names\n return ((Q(name=spec['name']) | Q(other_names__name=spec['name'])) &\n Q(memberships__organization__jurisdiction_id=self.jurisdiction_id))\n spec['memberships__organization__jurisdiction_id'] = self.jurisdiction_id\n return spec", "response": "Limit the spec so matches are restricted to the current jurisdiction."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nraise TypeError if validation fails.", "response": "def validate(self):\n \"\"\"\n Error check model params.\n\n :raises TypeError: Throws TypeError if validation 
fails.\n\n \"\"\"\n if self.nregions is None:\n raise TypeError(\"neutral regions cannot be None\")\n if self.sregions is None:\n raise TypeError(\"selected regions cannot be None\")\n if self.recregions is None:\n raise TypeError(\"recombination regions cannot be None\")\n if self.demography is None:\n raise TypeError(\"demography cannot be None\")\n if self.prune_selected is None:\n raise TypeError(\"prune_selected cannot be None\")\n if self.gvalue is None:\n raise TypeError(\"gvalue cannot be None\")\n if self.rates is None:\n raise TypeError(\"rates cannot be None\")"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nprepare the imports for the organization.", "response": "def _prepare_imports(self, dicts):\n \"\"\" an override for prepare imports that sorts the imports by parent_id dependencies \"\"\"\n # all pseudo parent ids we've seen\n pseudo_ids = set()\n # pseudo matches\n pseudo_matches = {}\n\n # get prepared imports from parent\n prepared = dict(super(OrganizationImporter, self)._prepare_imports(dicts))\n\n # collect parent pseudo_ids\n for _, data in prepared.items():\n parent_id = data.get('parent_id', None) or ''\n if parent_id.startswith('~'):\n pseudo_ids.add(parent_id)\n\n # turn pseudo_ids into a tuple of dictionaries\n pseudo_ids = [(ppid, get_pseudo_id(ppid)) for ppid in pseudo_ids]\n\n # loop over all data again, finding the pseudo ids true json id\n for json_id, data in prepared.items():\n # check if this matches one of our ppids\n for ppid, spec in pseudo_ids:\n match = True\n for k, v in spec.items():\n if data[k] != v:\n match = False\n break\n if match:\n if ppid in pseudo_matches:\n raise UnresolvedIdError('multiple matches for pseudo id: ' + ppid)\n pseudo_matches[ppid] = json_id\n\n # toposort the nodes so parents are imported first\n network = Network()\n in_network = set()\n import_order = []\n\n for json_id, data in prepared.items():\n parent_id = data.get('parent_id', None)\n\n # resolve pseudo_ids to 
their json id before building the network\n if parent_id in pseudo_matches:\n parent_id = pseudo_matches[parent_id]\n\n network.add_node(json_id)\n if parent_id:\n # Right. There's an import dep. We need to add the edge from\n # the parent to the current node, so that we import the parent\n # before the current node.\n network.add_edge(parent_id, json_id)\n\n # resolve the sorted import order\n for jid in network.sort():\n import_order.append((jid, prepared[jid]))\n in_network.add(jid)\n\n # ensure all data made it into network (paranoid check, should never fail)\n if in_network != set(prepared.keys()): # pragma: no cover\n raise PupaInternalError(\"import is missing nodes in network set\")\n\n return import_order"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nevolve a population with tree sequence recording.", "response": "def evolvets(rng, pop, params, simplification_interval, recorder=None,\n suppress_table_indexing=False, record_gvalue_matrix=False,\n stopping_criterion=None,\n track_mutation_counts=False,\n remove_extinct_variants=True):\n \"\"\"\n Evolve a population with tree sequence recording\n\n :param rng: random number generator\n :type rng: :class:`fwdpy11.GSLrng`\n :param pop: A population\n :type pop: :class:`fwdpy11.DiploidPopulation`\n :param params: simulation parameters\n :type params: :class:`fwdpy11.ModelParams`\n :param simplification_interval: Number of generations between simplifications.\n :type simplification_interval: int\n :param recorder: (None) A temporal sampler/data recorder.\n :type recorder: callable\n :param suppress_table_indexing: (False) Prevents edge table indexing until end of simulation\n :type suppress_table_indexing: boolean\n :param record_gvalue_matrix: (False) Whether to record genetic values into :attr:`fwdpy11.Population.genetic_values`.\n :type record_gvalue_matrix: boolean\n\n The recording of genetic values into :attr:`fwdpy11.Population.genetic_values` is supprssed by default. 
First, it\n is redundant with :attr:`fwdpy11.DiploidMetadata.g` for the common case of mutational effects on a single trait.\n Second, we save some memory by not tracking these matrices. However, it is useful to track these data for some\n cases when simulating multivariate mutational effects (pleiotropy).\n\n .. note::\n If recorder is None,\n then :class:`fwdpy11.NoAncientSamples` will be used.\n\n \"\"\"\n import warnings\n\n # Currently, we do not support simulating neutral mutations\n # during tree sequence simulations, so we make sure that there\n # are no neutral regions/rates:\n if len(params.nregions) != 0:\n raise ValueError(\n \"Simulation of neutral mutations on tree sequences not supported (yet).\")\n\n # Test parameters while suppressing warnings\n with warnings.catch_warnings():\n warnings.simplefilter(\"ignore\")\n # Will throw exception if anything is wrong:\n params.validate()\n\n if recorder is None:\n from ._fwdpy11 import NoAncientSamples\n recorder = NoAncientSamples()\n\n if stopping_criterion is None:\n from ._fwdpy11 import _no_stopping\n stopping_criterion = _no_stopping\n\n from ._fwdpy11 import MutationRegions\n from ._fwdpy11 import dispatch_create_GeneticMap\n from ._fwdpy11 import evolve_with_tree_sequences\n # TODO: update to allow neutral mutations\n pneutral = 0\n mm = MutationRegions.create(pneutral, params.nregions, params.sregions)\n rm = dispatch_create_GeneticMap(params.recrate, params.recregions)\n\n from ._fwdpy11 import SampleRecorder\n sr = SampleRecorder()\n evolve_with_tree_sequences(rng, pop, sr, simplification_interval,\n params.demography, params.mutrate_s,\n mm, rm, params.gvalue,\n recorder, stopping_criterion,\n params.pself, params.prune_selected is False,\n suppress_table_indexing, record_gvalue_matrix,\n track_mutation_counts,\n remove_extinct_variants)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ngenerate a list of integers representing the number of times a given population size 
is in the range Nstart to Nstop.", "response": "def exponential_size_change(Nstart, Nstop, time):\n \"\"\"\n Generate a list of population sizes\n according to exponential size_change model\n\n :param Nstart: population size at onset of size change\n :param Nstop: Population size to reach at end of size change\n :param time: Time (in generations) to get from Nstart to Nstop\n\n :return: A list of integers representing population size over time.\n\n .. versionadded:: 0.1.1\n \"\"\"\n if time < 1:\n raise RuntimeError(\"time must be >= 1\")\n if Nstart < 1 or Nstop < 1:\n raise RuntimeError(\"Nstart and Nstop must both be >= 1\")\n G = math.exp((math.log(Nstop) - math.log(Nstart))/time)\n rv = []\n for i in range(time):\n rv.append(round(Nstart*pow(G, i+1)))\n return rv"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nsplits the SQL into a sequence of beginning bookend closing bookend and contents", "response": "def split_sql(sql):\n \"\"\"generate hunks of SQL that are between the bookends\n return: tuple of beginning bookend, closing bookend, and contents\n note: beginning & end of string are returned as None\"\"\"\n bookends = (\"\\n\", \";\", \"--\", \"/*\", \"*/\")\n last_bookend_found = None\n start = 0\n\n while start <= len(sql):\n results = get_next_occurence(sql, start, bookends)\n if results is None:\n yield (last_bookend_found, None, sql[start:])\n start = len(sql) + 1\n else:\n (end, bookend) = results\n yield (last_bookend_found, bookend, sql[start:end])\n start = end + len(bookend)\n last_bookend_found = bookend"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncheck whether an input file is valid PostgreSQL.", "response": "def check_file(filename=None, show_filename=False, add_semicolon=False):\n \"\"\"\n Check whether an input file is valid PostgreSQL. 
If no filename is\n passed, STDIN is checked.\n\n Returns a status code: 0 if the input is valid, 1 if invalid.\n \"\"\"\n # either work with sys.stdin or open the file\n if filename is not None:\n with open(filename, \"r\") as filelike:\n sql_string = filelike.read()\n else:\n with sys.stdin as filelike:\n sql_string = sys.stdin.read()\n\n success, msg = check_string(sql_string, add_semicolon=add_semicolon)\n\n # report results\n result = 0\n if not success:\n # possibly show the filename with the error message\n prefix = \"\"\n if show_filename and filename is not None:\n prefix = filename + \": \"\n print(prefix + msg)\n result = 1\n\n return result"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncheck whether a string is valid PostgreSQL. Returns a boolean indicating validity and a message from ecpg.", "response": "def check_string(sql_string, add_semicolon=False):\n \"\"\"\n Check whether a string is valid PostgreSQL. Returns a boolean\n indicating validity and a message from ecpg, which will be an\n empty string if the input was valid, or a description of the\n problem otherwise.\n \"\"\"\n prepped_sql = sqlprep.prepare_sql(sql_string, add_semicolon=add_semicolon)\n success, msg = ecpg.check_syntax(prepped_sql)\n return success, msg"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncheck syntax of a string of PostgreSQL - dialect SQL", "response": "def check_syntax(string):\n \"\"\" Check syntax of a string of PostgreSQL-dialect SQL \"\"\"\n args = [\"ecpg\", \"-o\", \"-\", \"-\"]\n\n with open(os.devnull, \"w\") as devnull:\n try:\n proc = subprocess.Popen(args, shell=False,\n stdout=devnull,\n stdin=subprocess.PIPE,\n stderr=subprocess.PIPE,\n universal_newlines=True)\n _, err = proc.communicate(string)\n except OSError:\n msg = \"Unable to execute 'ecpg', you likely need to install it.'\"\n raise OSError(msg)\n if proc.returncode == 0:\n return (True, \"\")\n else:\n return (False, 
parse_error(err))"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngets the inventory for a given resource", "response": "def get_inventory(self, context):\n \"\"\"\n Will locate vm in vcenter and fill its uuid\n :type context: cloudshell.shell.core.context.ResourceCommandContext\n \"\"\"\n vcenter_vm_name = context.resource.attributes['vCenter VM']\n vcenter_vm_name = vcenter_vm_name.replace('\\\\', '/')\n vcenter_name = context.resource.attributes['vCenter Name']\n\n self.logger.info('start autoloading vm_path: {0} on vcenter: {1}'.format(vcenter_vm_name, vcenter_name))\n\n with CloudShellSessionContext(context) as cloudshell_session:\n session = cloudshell_session\n\n vcenter_api_res = session.GetResourceDetails(vcenter_name)\n vcenter_resource = self.model_parser.convert_to_vcenter_model(vcenter_api_res)\n\n si = None\n\n try:\n self.logger.info('connecting to vcenter ({0})'.format(vcenter_api_res.Address))\n si = self._get_connection_to_vcenter(self.pv_service, session, vcenter_resource, vcenter_api_res.Address)\n\n self.logger.info('loading vm uuid')\n vm_loader = VMLoader(self.pv_service)\n uuid = vm_loader.load_vm_uuid_by_name(si, vcenter_resource, vcenter_vm_name)\n self.logger.info('vm uuid: {0}'.format(uuid))\n self.logger.info('loading the ip of the vm')\n ip = self._try_get_ip(self.pv_service, si, uuid, vcenter_resource)\n if ip:\n session.UpdateResourceAddress(context.resource.name, ip)\n\n except Exception:\n self.logger.exception(\"Get inventory command failed\")\n raise\n finally:\n if si:\n self.pv_service.disconnect(si)\n\n return self._get_auto_load_response(uuid, vcenter_name, context.resource)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_snapshots(self, si, logger, vm_uuid):\n vm = self.pyvmomi_service.find_by_uuid(si, vm_uuid)\n logger.info(\"Get snapshots\")\n snapshots = SnapshotRetriever.get_vm_snapshots(vm)\n\n 
return snapshots.keys()", "response": "Get a list of virtual machine names that are available for this service instance"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nconnect to a vCenter via SSL and returns a vim. si. PyVmomi object.", "response": "def connect(self, address, user, password, port=443):\n \"\"\" \n Connect to vCenter via SSL and return SI object\n \n :param address: vCenter address (host / ip address)\n :param user: user name for authentication\n :param password:password for authentication\n :param port: port for the SSL connection. Default = 443\n \"\"\"\n\n '# Disabling urllib3 ssl warnings'\n requests.packages.urllib3.disable_warnings()\n\n '# Disabling SSL certificate verification'\n context = None\n import ssl\n if hasattr(ssl, 'SSLContext'):\n context = ssl.SSLContext(ssl.PROTOCOL_TLSv1)\n context.verify_mode = ssl.CERT_NONE\n\n try:\n if context:\n try:\n '#si = SmartConnect(host=address, user=user, pwd=password, port=port, sslContext=context)'\n si = self.pyvmomi_connect(host=address, user=user, pwd=password, port=port, sslContext=context)\n except ssl.SSLEOFError:\n context = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2)\n context.verify_mode = ssl.CERT_NONE\n si = self.pyvmomi_connect(host=address, user=user, pwd=password, port=port, sslContext=context)\n else:\n '#si = SmartConnect(host=address, user=user, pwd=password, port=port)'\n si = self.pyvmomi_connect(host=address, user=user, pwd=password, port=port)\n return si\n except vim.fault.InvalidLogin as e:\n raise VCenterAuthError(e.msg, e)\n except IOError as e:\n # logger.info(\"I/O error({0}): {1}\".format(e.errno, e.strerror))\n raise ValueError('Cannot connect to vCenter, please check that the address is valid')"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nfinds datacenter in the vCenter or returns None", "response": "def find_datacenter_by_name(self, si, path, name):\n \"\"\"\n Finds 
datacenter in the vCenter or returns \"None\"\n\n :param si: pyvmomi 'ServiceInstance'\n :param path: the path to find the object ('dc' or 'dc/folder' or 'dc/folder/folder/etc...')\n :param name: the datacenter name to return\n \"\"\"\n return self.find_obj_by_path(si, path, name, self.Datacenter)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nfinds vm or host by uuid in the vCenter or returns None", "response": "def find_by_uuid(self, si, uuid, is_vm=True, path=None, data_center=None):\n \"\"\"\n Finds vm/host by his uuid in the vCenter or returns \"None\"\n\n :param si: pyvmomi 'ServiceInstance'\n :param uuid: the object uuid\n :param path: the path to find the object ('dc' or 'dc/folder' or 'dc/folder/folder/etc...')\n :param is_vm: if true, search for virtual machines, otherwise search for hosts\n :param data_center:\n \"\"\"\n\n if uuid is None:\n return None\n if path is not None:\n data_center = self.find_item_in_path_by_type(si, path, vim.Datacenter)\n\n search_index = si.content.searchIndex\n return search_index.FindByUuid(data_center, uuid, is_vm)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nfinding datastore in the vCenter or returns None", "response": "def find_host_by_name(self, si, path, name):\n \"\"\"\n Finds datastore in the vCenter or returns \"None\"\n\n :param si: pyvmomi 'ServiceInstance'\n :param path: the path to find the object ('dc' or 'dc/folder' or 'dc/folder/folder/etc...')\n :param name: the datastore name to return\n \"\"\"\n return self.find_obj_by_path(si, path, name, self.Host)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef find_datastore_by_name(self, si, path, name):\n return self.find_obj_by_path(si, path, name, self.Datastore)", "response": "Finds datastore in the vCenter or returns None"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write 
the documentation\ndef find_portgroup(self, si, dv_switch_path, name):\n dv_switch = self.get_folder(si, dv_switch_path)\n if dv_switch and dv_switch.portgroup:\n for port in dv_switch.portgroup:\n if port.name == name:\n return port\n return None", "response": "Finds the portgroup with the given name on the given dv switch"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef find_network_by_name(self, si, path, name):\n return self.find_obj_by_path(si, path, name, self.Network)", "response": "Finds the network in the vCenter or returns None"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef find_vm_by_name(self, si, path, name):\n return self.find_obj_by_path(si, path, name, self.VM)", "response": "Finds vm in the vCenter or returns None"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef find_obj_by_path(self, si, path, name, type_name):\n\n folder = self.get_folder(si, path)\n if folder is None:\n raise ValueError('vmomi managed object not found at: {0}'.format(path))\n\n look_in = None\n if hasattr(folder, type_name):\n look_in = getattr(folder, type_name)\n if hasattr(folder, self.ChildEntity):\n look_in = folder\n if look_in is None:\n raise ValueError('vmomi managed object not found at: {0}'.format(path))\n\n search_index = si.content.searchIndex\n '#searches for the specific vm in the folder'\n return search_index.FindChild(look_in, name)", "response": "Finds the object in the vCenter or returns None"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nfind the virtual switch in the vCenter or returns None", "response": "def find_dvs_by_path(self,si ,path):\n \"\"\"\n Finds vm in the vCenter or returns \"None\"\n :param si: pyvmomi 'ServiceInstance'\n :param path: the path to find the object ('dc' or 'dc/folder' or 
'dc/folder/folder/etc...')\n \"\"\"\n dvs = self.get_folder(si, path)\n\n if not dvs:\n raise ValueError('Could not find Default DvSwitch in path {0}'.format(path))\n elif not isinstance(dvs, vim.dvs.VmwareDistributedVirtualSwitch):\n raise ValueError('The object in path {0} is {1} and not a DvSwitch'.format(path, type(dvs)))\n\n return dvs"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_folder(self, si, path, root=None):\n\n search_index = si.content.searchIndex\n sub_folder = root if root else si.content.rootFolder\n\n if not path:\n return sub_folder\n\n paths = [p for p in path.split(\"/\") if p]\n\n child = None\n try:\n new_root = search_index.FindChild(sub_folder, paths[0])\n if new_root:\n child = self.get_folder(si, '/'.join(paths[1:]), new_root)\n except:\n child = None\n\n if child is None and hasattr(sub_folder, self.ChildEntity):\n new_root = search_index.FindChild(sub_folder, paths[0])\n if new_root:\n child = self.get_folder(si, '/'.join(paths[1:]), new_root)\n\n if child is None and hasattr(sub_folder, self.VM):\n new_root = search_index.FindChild(sub_folder.vmFolder, paths[0])\n if new_root:\n child = self.get_folder(si, '/'.join(paths[1:]), new_root)\n\n if child is None and hasattr(sub_folder, self.Datastore):\n new_root = search_index.FindChild(sub_folder.datastoreFolder, paths[0])\n if new_root:\n child = self.get_folder(si, '/'.join(paths[1:]), new_root)\n\n if child is None and hasattr(sub_folder, self.Network):\n new_root = search_index.FindChild(sub_folder.networkFolder, paths[0])\n if new_root:\n child = self.get_folder(si, '/'.join(paths[1:]), new_root)\n\n if child is None and hasattr(sub_folder, self.Host):\n new_root = search_index.FindChild(sub_folder.hostFolder, paths[0])\n if new_root:\n child = self.get_folder(si, '/'.join(paths[1:]), new_root)\n\n if child is None and hasattr(sub_folder, self.Datacenter):\n new_root = 
search_index.FindChild(sub_folder.datacenterFolder, paths[0])\n if new_root:\n child = self.get_folder(si, '/'.join(paths[1:]), new_root)\n\n if child is None and hasattr(sub_folder, 'resourcePool'):\n new_root = search_index.FindChild(sub_folder.resourcePool, paths[0])\n if new_root:\n child = self.get_folder(si, '/'.join(paths[1:]), new_root)\n\n return child", "response": "Gets the folder in the vCenter or returns None"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_network_by_full_name(self, si, default_network_full_name):\n path, name = get_path_and_name(default_network_full_name)\n return self.find_network_by_name(si, path, name) if name else None", "response": "Find a network by a Full Name"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_obj(self, content, vimtype, name):\n obj = None\n\n container = self._get_all_objects_by_type(content, vimtype)\n\n # If no name was given will return the first object from list of a objects matching the given vimtype type\n for c in container.view:\n if name:\n if c.name == name:\n obj = c\n break\n else:\n obj = c\n break\n\n return obj", "response": "Returns an object by name for a specific type"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef clone_vm(self, clone_params, logger, cancellation_context):\n result = self.CloneVmResult()\n\n if not isinstance(clone_params.si, self.vim.ServiceInstance):\n result.error = 'si must be init as ServiceInstance'\n return result\n\n if clone_params.template_name is None:\n result.error = 'template_name param cannot be None'\n return result\n\n if clone_params.vm_name is None:\n result.error = 'vm_name param cannot be None'\n return result\n\n if clone_params.vm_folder is None:\n result.error = 'vm_folder param cannot be None'\n return result\n\n datacenter = 
self.get_datacenter(clone_params)\n\n        dest_folder = self._get_destination_folder(clone_params)\n\n        vm_location = VMLocation.create_from_full_path(clone_params.template_name)\n\n        template = self._get_template(clone_params, vm_location)\n\n        snapshot = self._get_snapshot(clone_params, template)\n\n        resource_pool, host = self._get_resource_pool(datacenter.name, clone_params)\n\n        if not resource_pool and not host:\n            raise ValueError('The specified host, cluster or resource pool could not be found')\n\n        # set relo_spec\n        placement = self.vim.vm.RelocateSpec()\n        if resource_pool:\n            placement.pool = resource_pool\n        if host:\n            placement.host = host\n\n        clone_spec = self.vim.vm.CloneSpec()\n\n        if snapshot:\n            clone_spec.snapshot = snapshot\n            clone_spec.template = False\n            placement.diskMoveType = 'createNewChildDiskBacking'\n\n        placement.datastore = self._get_datastore(clone_params)\n\n        # after deployment the vm must be powered off and will be powered on if needed by orchestration driver\n        clone_spec.location = placement\n        # clone_params.power_on\n        # due to hotfix 1 for release 1.0,\n        clone_spec.powerOn = False\n\n        logger.info(\"cloning VM...\")\n        try:\n            task = template.Clone(folder=dest_folder, name=clone_params.vm_name, spec=clone_spec)\n            vm = self.task_waiter.wait_for_task(task=task, logger=logger, action_name='Clone VM',\n                                                cancellation_context=cancellation_context)\n        except TaskFaultException:\n            raise\n        except vim.fault.NoPermission as error:\n            logger.error(\"vcenter returned - no permission: {0}\".format(error))\n            raise Exception('Permissions are not set correctly, please check the log for more info.')\n        except Exception as e:\n            logger.error(\"error deploying: {0}\".format(e))\n            raise Exception('An error occurred while deploying, please look at the log for more info.')\n\n        result.vm = vm\n        return result", "response": "This method clones a VM from a template and returns the VM object, or returns an error result if an argument is not valid"} {"SOURCE": "codesearchnet", "instruction": "Given the 
following Python 3 function, write the documentation\ndef destroy_vm(self, vm, logger):\n\n self.power_off_before_destroy(logger, vm)\n\n logger.info((\"Destroying VM {0}\".format(vm.name)))\n\n task = vm.Destroy_Task()\n return self.task_waiter.wait_for_task(task=task, logger=logger, action_name=\"Destroy VM\")", "response": "destroy the given vm \n "} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ndestroy the given vm by name", "response": "def destroy_vm_by_name(self, si, vm_name, vm_path, logger):\n \"\"\" \n destroy the given vm \n :param si: pyvmomi 'ServiceInstance'\n :param vm_name: str name of the vm to destroyed\n :param vm_path: str path to the vm that will be destroyed\n :param logger:\n \"\"\"\n if vm_name is not None:\n vm = self.find_vm_by_name(si, vm_path, vm_name)\n if vm:\n return self.destroy_vm(vm, logger)\n raise ValueError('vm not found')"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef destroy_vm_by_uuid(self, si, vm_uuid, vm_path, logger):\n if vm_uuid is not None:\n vm = self.find_by_uuid(si, vm_uuid, vm_path)\n if vm:\n return self.destroy_vm(vm, logger)\n # return 'vm not found'\n # for apply the same Interface as for 'destroy_vm_by_name'\n raise ValueError('vm not found')", "response": "destroy the given vm by uuid"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef vm_reconfig_task(vm, device_change):\n config_spec = vim.vm.ConfigSpec(deviceChange=device_change)\n task = vm.ReconfigVM_Task(config_spec)\n return task", "response": "Create Task for VM re - configure\n "} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef vm_get_network_by_name(vm, network_name):\n # return None\n for network in vm.network:\n if hasattr(network, \"name\") and network_name == network.name:\n return network\n return None", "response": "Try to find a Network in VM by name."} 
{"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning the full path to the VM", "response": "def get_vm_full_path(self, si, vm):\n \"\"\"\n :param vm: vim.VirtualMachine\n :return:\n \"\"\"\n folder_name = None\n folder = vm.parent\n\n if folder:\n folder_name = folder.name\n folder_parent = folder.parent\n\n while folder_parent and folder_parent.name and folder_parent != si.content.rootFolder and not isinstance(folder_parent, vim.Datacenter):\n folder_name = folder_parent.name + '/' + folder_name\n try:\n folder_parent = folder_parent.parent\n except Exception:\n break\n # at this stage we receive a path like this: vm/FOLDER1/FOLDER2;\n # we're not interested in the \"vm\" part, so we throw that away\n folder_name = '/'.join(folder_name.split('/')[1:])\n # ok, now we're adding the vm name; btw, if there is no folder, that's cool, just return vm.name\n return VMLocation.combine([folder_name, vm.name]) if folder_name else vm.name"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns the vm uuid by name", "response": "def load_vm_uuid_by_name(self, si, vcenter_data_model, vm_name):\n \"\"\"\n Returns the vm uuid\n :param si: Service instance to the vcenter\n :param vcenter_data_model: vcenter data model\n :param vm_name: the vm name\n :return: str uuid\n \"\"\"\n path = VMLocation.combine([vcenter_data_model.default_datacenter, vm_name])\n paths = path.split('/')\n name = paths[len(paths) - 1]\n path = VMLocation.combine(paths[:len(paths) - 1])\n vm = self.pv_service.find_vm_by_name(si, path, name)\n if not vm:\n raise ValueError('Could not find the vm in the given path: {0}/{1}'.format(path, name))\n\n if isinstance(vm, vim.VirtualMachine):\n return vm.config.uuid\n\n raise ValueError('The given object is not a vm: {0}/{1}'.format(path, name))"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\npower off a VM", "response": "def power_off(self, si, logger, 
session, vcenter_data_model, vm_uuid, resource_fullname):\n \"\"\"\n Power off of a vm\n :param vcenter_data_model: vcenter model\n :param si: Service Instance\n :param logger:\n :param session:\n :param vcenter_data_model: vcenter_data_model\n :param vm_uuid: the uuid of the vm\n :param resource_fullname: the full name of the deployed app resource\n :return:\n \"\"\"\n\n logger.info('retrieving vm by uuid: {0}'.format(vm_uuid))\n vm = self.pv_service.find_by_uuid(si, vm_uuid)\n\n if vm.summary.runtime.powerState == 'poweredOff':\n logger.info('vm already powered off')\n task_result = 'Already powered off'\n else:\n # hard power off\n logger.info('{0} powering of vm'.format(vcenter_data_model.shutdown_method))\n if vcenter_data_model.shutdown_method.lower() != 'soft':\n task = vm.PowerOff()\n task_result = self.synchronous_task_waiter.wait_for_task(task=task,\n logger=logger,\n action_name='Power Off')\n else:\n if vm.guest.toolsStatus == 'toolsNotInstalled':\n logger.warning('VMWare Tools status on virtual machine \\'{0}\\' are not installed'.format(vm.name))\n raise ValueError('Cannot power off the vm softly because VMWare Tools are not installed')\n\n if vm.guest.toolsStatus == 'toolsNotRunning':\n logger.warning('VMWare Tools status on virtual machine \\'{0}\\' are not running'.format(vm.name))\n raise ValueError('Cannot power off the vm softly because VMWare Tools are not running')\n\n vm.ShutdownGuest()\n task_result = 'vm powered off'\n \n return task_result"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\npowers on the specified vm", "response": "def power_on(self, si, logger, session, vm_uuid, resource_fullname):\n \"\"\"\n power on the specified vm\n :param si:\n :param logger:\n :param session:\n :param vm_uuid: the uuid of the vm\n :param resource_fullname: the full name of the deployed app resource\n :return:\n \"\"\"\n logger.info('retrieving vm by uuid: {0}'.format(vm_uuid))\n vm = 
self.pv_service.find_by_uuid(si, vm_uuid)\n\n if vm.summary.runtime.powerState == 'poweredOn':\n logger.info('vm already powered on')\n task_result = 'Already powered on'\n else:\n logger.info('powering on vm')\n task = vm.PowerOn()\n task_result = self.synchronous_task_waiter.wait_for_task(task=task,\n logger=logger,\n action_name='Power On')\n\n return task_result"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncreating a QS Logger for the given command context.", "response": "def create_logger_for_context(self, logger_name, context):\n \"\"\"\n Create QS Logger for command context AutoLoadCommandContext or ResourceCommandContext\n :param logger_name:\n :type logger_name: str\n :param context:\n :return:\n \"\"\"\n\n if self._is_instance_of(context, 'AutoLoadCommandContext'):\n reservation_id = 'Autoload'\n handler_name = context.resource.name\n elif self._is_instance_of(context, 'UnreservedResourceCommandContext'):\n reservation_id = 'DeleteArtifacts'\n handler_name = context.resource.name\n else:\n reservation_id = self._get_reservation_id(context)\n\n if self._is_instance_of(context, 'ResourceCommandContext'):\n handler_name = context.resource.name\n elif self._is_instance_of(context, 'ResourceRemoteCommandContext'):\n handler_name = context.remote_endpoints[0].name\n else:\n raise Exception(ContextBasedLoggerFactory.UNSUPPORTED_CONTEXT_PROVIDED, context)\n\n logger = get_qs_logger(log_file_prefix=handler_name,\n log_group=reservation_id,\n log_category=logger_name)\n return logger"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nconnect VM to Network", "response": "def connect_to_networks(self, si, logger, vm_uuid, vm_network_mappings, default_network_name,\n reserved_networks, dv_switch_name, promiscuous_mode):\n \"\"\"\n Connect VM to Network\n :param si: VmWare Service Instance - defined connection to vCenter\n :param logger:\n :param vm_uuid: UUID for VM\n :param vm_network_mappings: \n :param 
default_network_name: Full Network name - likes 'DataCenterName/NetworkName'\n :param reserved_networks:\n :param dv_switch_name: Default dvSwitch name\n :param promiscuous_mode 'True' or 'False' turn on/off promiscuous mode for the port group\n :return: None\n \"\"\"\n vm = self.pv_service.find_by_uuid(si, vm_uuid)\n\n if not vm:\n raise ValueError('VM having UUID {0} not found'.format(vm_uuid))\n\n default_network_instance = self.pv_service.get_network_by_full_name(si, default_network_name)\n\n if not default_network_instance:\n raise ValueError('Default Network {0} not found'.format(default_network_name))\n\n if vm_has_no_vnics(vm):\n raise ValueError('Trying to connect VM (uuid: {0}) but it has no vNics'.format(vm_uuid))\n\n mappings = self._prepare_mappings(dv_switch_name=dv_switch_name, vm_network_mappings=vm_network_mappings)\n\n updated_mappings = self.virtual_switch_to_machine_connector.connect_by_mapping(\n si, vm, mappings, default_network_instance, reserved_networks, logger, promiscuous_mode)\n\n connection_results = []\n for updated_mapping in updated_mappings:\n\n connection_result = ConnectionResult(mac_address=updated_mapping.vnic.macAddress,\n vnic_name=updated_mapping.vnic.deviceInfo.label,\n requested_vnic=updated_mapping.requested_vnic,\n vm_uuid=vm_uuid,\n network_name=updated_mapping.network.name,\n network_key=updated_mapping.network.key)\n connection_results.append(connection_result)\n\n return connection_results"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef map_request_to_vnics(self, requests, vnics, existing_network, default_network, reserved_networks):\n mapping = dict()\n reserved_networks = reserved_networks if reserved_networks else []\n\n vnics_to_network_mapping = self._map_vnic_to_network(vnics, existing_network, default_network, reserved_networks)\n for request in requests:\n if request.vnic_name:\n if request.vnic_name not in vnics_to_network_mapping:\n 
raise ValueError('No vNIC by that name \"{0}\" exist'.format(request.vnic_name))\n net_at_requsted_vnic = vnics_to_network_mapping[request.vnic_name]\n if self.quali_name_generator.is_generated_name(net_at_requsted_vnic):\n raise ValueError('The vNIC: \"{0}\" is already set with: \"{1}\"'.format(request.vnic_name,\n net_at_requsted_vnic))\n mapping[request.vnic_name] = (request.network, request.vnic_name)\n vnics_to_network_mapping.pop(request.vnic_name)\n else:\n vnic_name = self._find_available_vnic(vnics_to_network_mapping, default_network)\n mapping[vnic_name] = (request.network, request.vnic_name)\n vnics_to_network_mapping.pop(vnic_name)\n\n return mapping", "response": "Maps the requests to the available vnic and returns a dictionary of the mappings."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreceiving ovf image parameters and deploy it on the designated vcenter :param VMwarevCenterResourceModel vcenter_data_model: :type image_params: vCenterShell.vm.ovf_image_params.OvfImageParams :param logger:", "response": "def deploy_image(self, vcenter_data_model, image_params, logger):\n \"\"\"\n Receives ovf image parameters and deploy it on the designated vcenter\n :param VMwarevCenterResourceModel vcenter_data_model:\n :type image_params: vCenterShell.vm.ovf_image_params.OvfImageParams\n :param logger:\n \"\"\"\n ovf_tool_exe_path = vcenter_data_model.ovf_tool_path\n\n self._validate_url_exists(ovf_tool_exe_path, 'OVF Tool', logger)\n\n args = self._get_args(ovf_tool_exe_path, image_params, logger)\n logger.debug('opening ovf tool process with the params: {0}'.format(','.join(args)))\n process = subprocess.Popen(args, shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE,\n stderr=subprocess.PIPE)\n\n logger.debug('communicating with ovf tool')\n result = process.communicate()\n process.stdin.close()\n\n if result:\n res = '\\n\\r'.join(result)\n else:\n if image_params.user_arguments.find('--quiet') == -1:\n raise Exception('no 
result has return from the ovftool')\n res = COMPLETED_SUCCESSFULLY\n\n logger.info('communication with ovf tool results: {0}'.format(res))\n if res.find(COMPLETED_SUCCESSFULLY) > -1:\n return True\n\n image_params.connectivity.password = '******'\n args_for_error = ' '.join(self._get_args(ovf_tool_exe_path, image_params, logger))\n logger.error('error deploying image with the args: {0}, error: {1}'.format(args_for_error, res))\n raise Exception('error deploying image with the args: {0}, error: {1}'.format(args_for_error, res))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _get_args(self, ovf_tool_exe_path, image_params, logger):\n # create vm name\n vm_name_param = VM_NAME_PARAM.format(image_params.vm_name)\n\n # datastore name\n datastore_param = DATA_STORE_PARAM.format(image_params.datastore)\n\n # power state\n # power_state = POWER_ON_PARAM if image_params.power_on else POWER_OFF_PARAM\n # due to hotfix 1 for release 1.0,\n # after deployment the vm must be powered off and will be powered on if needed by orchestration driver\n\n power_state = POWER_OFF_PARAM\n\n # build basic args\n args = [ovf_tool_exe_path,\n NO_SSL_PARAM,\n ACCEPT_ALL_PARAM,\n power_state,\n vm_name_param,\n datastore_param]\n # append user folder\n if hasattr(image_params, 'vm_folder') and image_params.vm_folder:\n vm_folder_str = VM_FOLDER_PARAM.format(image_params.vm_folder)\n args.append(vm_folder_str)\n\n # append args that are user inputs\n if hasattr(image_params, 'user_arguments') and image_params.user_arguments:\n args += [key.strip()\n for key in image_params.user_arguments.split(',')]\n\n # get ovf destination\n ovf_destination = self._get_ovf_destenation(image_params)\n\n image_url = image_params.image_url\n self._validate_image_exists(image_url, logger)\n\n # set location and destination\n args += [image_url,\n ovf_destination]\n\n return args", "response": "Build the list of arguments to use for the ovf 
tool."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef execute_command_with_connection(self, context, command, *args):\n\n logger = self.context_based_logger_factory.create_logger_for_context(\n logger_name='vCenterShell',\n context=context)\n\n if not command:\n logger.error(COMMAND_CANNOT_BE_NONE)\n raise Exception(COMMAND_CANNOT_BE_NONE)\n\n try:\n command_name = command.__name__\n logger.info(LOG_FORMAT.format(START, command_name))\n command_args = []\n si = None\n session = None\n connection_details = None\n vcenter_data_model = None\n\n # get connection details\n if context:\n with CloudShellSessionContext(context) as cloudshell_session:\n session = cloudshell_session\n\n vcenter_data_model = self.resource_model_parser.convert_to_vcenter_model(context.resource)\n connection_details = ResourceConnectionDetailsRetriever.get_connection_details(session=session,\n vcenter_resource_model=vcenter_data_model,\n resource_context=context.resource)\n\n if connection_details:\n logger.info(INFO_CONNECTING_TO_VCENTER.format(connection_details.host))\n logger.debug(\n DEBUG_CONNECTION_INFO.format(connection_details.host,\n connection_details.username,\n connection_details.port))\n\n si = self.get_py_service_connection(connection_details, logger)\n if si:\n logger.info(CONNECTED_TO_CENTER.format(connection_details.host))\n command_args.append(si)\n\n self._try_inject_arg(command=command,\n command_args=command_args,\n arg_object=session,\n arg_name='session')\n self._try_inject_arg(command=command,\n command_args=command_args,\n arg_object=vcenter_data_model,\n arg_name='vcenter_data_model')\n self._try_inject_arg(command=command,\n command_args=command_args,\n arg_object=self._get_reservation_id(context),\n arg_name='reservation_id')\n self._try_inject_arg(command=command,\n command_args=command_args,\n arg_object=logger,\n arg_name='logger')\n\n command_args.extend(args)\n\n 
logger.info(EXECUTING_COMMAND.format(command_name))\n logger.debug(DEBUG_COMMAND_PARAMS.format(COMMA.join([str(x) for x in command_args])))\n\n results = command(*tuple(command_args))\n\n logger.info(FINISHED_EXECUTING_COMMAND.format(command_name))\n logger.debug(DEBUG_COMMAND_RESULT.format(str(results)))\n\n return results\n except Exception as ex:\n logger.exception(COMMAND_ERROR.format(command_name))\n logger.exception(str(type(ex)) + ': ' + str(ex))\n raise\n finally:\n logger.info(LOG_FORMAT.format(END, command_name))", "response": "Executes a command with a connection to the vCenter service."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef has_connection_details_changed(self, req_connection_details):\n if self.connection_details is None and req_connection_details is None:\n return False\n if self.connection_details is None or req_connection_details is None:\n return True\n return not all([self.connection_details.host == req_connection_details.host,\n self.connection_details.username == req_connection_details.username,\n self.connection_details.password == req_connection_details.password,\n self.connection_details.port == req_connection_details.port])", "response": "Returns True if the connection details has changed."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nrestore a virtual machine to a snapshot", "response": "def restore_snapshot(self, si, logger, session, vm_uuid, resource_fullname, snapshot_name):\n \"\"\"\n Restores a virtual machine to a snapshot\n :param vim.ServiceInstance si: py_vmomi service instance\n :param logger: Logger\n :param session: CloudShellAPISession\n :type session: cloudshell_api.CloudShellAPISession\n :param vm_uuid: uuid of the virtual machine\n :param resource_fullname:\n :type: resource_fullname: str\n :param str snapshot_name: Snapshot name to save the snapshot to\n \"\"\"\n vm = self.pyvmomi_service.find_by_uuid(si, 
vm_uuid)\n\n logger.info(\"Revert snapshot\")\n\n snapshot = SnapshotRestoreCommand._get_snapshot(vm=vm, snapshot_name=snapshot_name)\n session.SetResourceLiveStatus(resource_fullname, \"Offline\", \"Powered Off\")\n task = snapshot.RevertToSnapshot_Task()\n\t\t\n return self.task_waiter.wait_for_task(task=task, logger=logger, action_name='Revert Snapshot')"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _get_snapshot(vm, snapshot_name):\n snapshots = SnapshotRetriever.get_vm_snapshots(vm)\n\n if snapshot_name not in snapshots:\n raise SnapshotNotFoundException('Snapshot {0} was not found'.format(snapshot_name))\n\n return snapshots[snapshot_name]", "response": "Returns a snapshot object by its name"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef formula_1980(household, period, parameters):\n '''\n To compute this allowance, the 'rent' value must be provided for the same month, but 'housing_occupancy_status' is not necessary.\n '''\n return household('rent', period) * parameters(period).benefits.housing_allowance", "response": "This formula is used for the 1980 table."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef formula(person, period, parameters):\n '''\n A person's pension depends on their birth date.\n In French: retraite selon l'\u00e2ge.\n In Arabic: \u062a\u0642\u0627\u0639\u062f.\n '''\n age_condition = person('age', period) >= parameters(period).general.age_of_retirement\n return age_condition", "response": "A formula that returns True if the person s pension depends on their birth date."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef set_command_result(result, unpicklable=False):\n # we do not need to serialize an empty response from the vCenter\n if result is None:\n return\n\n if isinstance(result, 
basestring):\n return result\n\n json = jsonpickle.encode(result, unpicklable=unpicklable)\n result_for_output = str(json)\n return result_for_output", "response": "Serializes output as JSON and writes it to console output wrapped with special prefix and suffix"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef validate_and_discover(self, context):\n logger = self._get_logger(context)\n logger.info('Autodiscovery started')\n si = None\n resource = None\n\n with CloudShellSessionContext(context) as cloudshell_session:\n self._check_if_attribute_not_empty(context.resource, ADDRESS)\n resource = context.resource\n si = self._check_if_vcenter_user_pass_valid(context, cloudshell_session, resource.attributes)\n\n resource.attributes = VCenterAutoModelDiscovery._make_attributes_slash_backslash_agnostic(resource.attributes)\n\n auto_attr = []\n if not si:\n error_message = 'Could not connect to the vCenter: {0}, with given credentials'\\\n .format(context.resource.address)\n logger.error(error_message)\n raise ValueError(error_message)\n\n try:\n all_dc = self.pv_service.get_all_items_in_vcenter(si, vim.Datacenter)\n dc = self._validate_datacenter(si, all_dc, auto_attr, resource.attributes)\n\n all_items_in_dc = self.pv_service.get_all_items_in_vcenter(si, None, dc)\n dc_name = dc.name\n\n for key, value in resource.attributes.items():\n if key in [USER, PASSWORD, DEFAULT_DATACENTER, VM_CLUSTER]:\n continue\n validation_method = self._get_validation_method(key)\n validation_method(si, all_items_in_dc, auto_attr, dc_name, resource.attributes, key)\n except vim.fault.NoPermission:\n logger.exception('Autodiscovery failed due to permissions error:')\n raise Exception(\"vCenter permissions for configured resource(s) are invalid\")\n\n logger.info('Autodiscovery completed')\n\n return AutoLoadDetails([], auto_attr)", "response": "Validate and discover the current AutoLoad details."} {"SOURCE": "codesearchnet", 
"instruction": "How would you implement a function in Python 3 that\nconverts back slashes to front slashes.", "response": "def _make_attributes_slash_backslash_agnostic(attributes_with_slash_or_backslash):\n \"\"\"\n :param attributes_with_slash_or_backslash: resource attributes from\n cloudshell.cp.vcenter.models.QualiDriverModels.ResourceContextDetails\n :type attributes_with_slash_or_backslash: dict[str,str]\n :return: attributes_with_slash\n :rtype attributes_with_slash: dict[str,str]\n \"\"\"\n attributes_with_slash = dict()\n for key, value in attributes_with_slash_or_backslash.items():\n if key in ATTRIBUTE_NAMES_THAT_ARE_SLASH_BACKSLASH_AGNOSTIC:\n value = back_slash_to_front_converter(value)\n attributes_with_slash[key] = value\n\n return attributes_with_slash"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nsave sandbox command, persists an artifact of existing VMs, from which new vms can be restored :param ResourceCommandContext context: :param list[SaveApp] save_actions: :param CancellationContext cancellation_context: :return: list[SaveAppResult] save_app_results", "response": "def save_sandbox(self, context, save_actions, cancellation_context):\n \"\"\"\n Save sandbox command, persists an artifact of existing VMs, from which new vms can be restored\n :param ResourceCommandContext context:\n :param list[SaveApp] save_actions:\n :param CancellationContext cancellation_context:\n :return: list[SaveAppResult] save_app_results\n \"\"\"\n connection = self.command_wrapper.execute_command_with_connection(context, self.save_app_command.save_app,\n save_actions, cancellation_context, )\n save_app_results = connection\n return save_app_results"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef delete_saved_sandbox(self, context, delete_saved_apps, cancellation_context):\n connection = self.command_wrapper.execute_command_with_connection(context,\n 
self.delete_saved_sandbox_command.delete_sandbox,\n delete_saved_apps, cancellation_context)\n delete_saved_apps_results = connection\n return delete_saved_apps_results", "response": "Delete a saved sandbox along with any vms associated with it\n "} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ndeploys From Template Command, will deploy vm from template :param CancellationContext cancellation_context: :param ResourceCommandContext context: the context of the command :param DeployApp deploy_action: :return DeployAppResult deploy results", "response": "def deploy_from_template(self, context, deploy_action, cancellation_context):\n \"\"\"\n Deploy From Template Command, will deploy vm from template\n\n :param CancellationContext cancellation_context:\n :param ResourceCommandContext context: the context of the command\n :param DeployApp deploy_action:\n :return DeployAppResult deploy results\n \"\"\"\n deploy_from_template_model = self.resource_model_parser.convert_to_resource_model(\n attributes=deploy_action.actionParams.deployment.attributes,\n resource_model_type=vCenterVMFromTemplateResourceModel)\n data_holder = DeployFromTemplateDetails(deploy_from_template_model, deploy_action.actionParams.appName)\n\n deploy_result_action = self.command_wrapper.execute_command_with_connection(\n context,\n self.deploy_command.execute_deploy_from_template,\n data_holder,\n cancellation_context,\n self.folder_manager)\n\n deploy_result_action.actionId = deploy_action.actionId\n return deploy_result_action"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ndeploy Cloned VM From VM Command, will deploy vm from template :param CancellationContext cancellation_context: :param ResourceCommandContext context: the context of the command :param DeployApp deploy_action: :return DeployAppResult deploy results", "response": "def deploy_clone_from_vm(self, context, deploy_action, cancellation_context):\n \"\"\"\n Deploy 
Cloned VM From VM Command, will deploy vm from template\n\n :param CancellationContext cancellation_context:\n :param ResourceCommandContext context: the context of the command\n :param DeployApp deploy_action:\n :return DeployAppResult deploy results\n \"\"\"\n deploy_from_vm_model = self.resource_model_parser.convert_to_resource_model(\n attributes=deploy_action.actionParams.deployment.attributes,\n resource_model_type=vCenterCloneVMFromVMResourceModel)\n data_holder = DeployFromTemplateDetails(deploy_from_vm_model, deploy_action.actionParams.appName)\n\n deploy_result_action = self.command_wrapper.execute_command_with_connection(\n context,\n self.deploy_command.execute_deploy_clone_from_vm,\n data_holder,\n cancellation_context,\n self.folder_manager)\n\n deploy_result_action.actionId = deploy_action.actionId\n return deploy_result_action"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef deploy_from_linked_clone(self, context, deploy_action, cancellation_context):\n linked_clone_from_vm_model = self.resource_model_parser.convert_to_resource_model(\n attributes=deploy_action.actionParams.deployment.attributes,\n resource_model_type=VCenterDeployVMFromLinkedCloneResourceModel)\n data_holder = DeployFromTemplateDetails(linked_clone_from_vm_model, deploy_action.actionParams.appName)\n\n if not linked_clone_from_vm_model.vcenter_vm_snapshot:\n raise ValueError('Please insert snapshot to deploy an app from a linked clone')\n\n deploy_result_action = self.command_wrapper.execute_command_with_connection(\n context,\n self.deploy_command.execute_deploy_from_linked_clone,\n data_holder,\n cancellation_context,\n self.folder_manager)\n\n deploy_result_action.actionId = deploy_action.actionId\n return deploy_result_action", "response": "This method will deploy vm from linked clone from template"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 
function\ndef deploy_from_image(self, context, deploy_action, cancellation_context):\n deploy_action.actionParams.deployment.attributes['vCenter Name'] = context.resource.name\n deploy_from_image_model = self.resource_model_parser.convert_to_resource_model(\n attributes=deploy_action.actionParams.deployment.attributes,\n resource_model_type=vCenterVMFromImageResourceModel)\n data_holder = DeployFromImageDetails(deploy_from_image_model, deploy_action.actionParams.appName)\n\n # execute command\n deploy_result_action = self.command_wrapper.execute_command_with_connection(\n context,\n self.deploy_command.execute_deploy_from_image,\n data_holder,\n context.resource,\n cancellation_context,\n self.folder_manager)\n\n deploy_result_action.actionId = deploy_action.actionId\n return deploy_result_action", "response": "This method will deploy vm from ovf image"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef disconnect_all(self, context, ports):\n\n resource_details = self._parse_remote_model(context)\n # execute command\n res = self.command_wrapper.execute_command_with_connection(\n context,\n self.virtual_switch_disconnect_command.disconnect_all,\n resource_details.vm_uuid)\n return set_command_result(result=res, unpicklable=False)", "response": "This command will assign all the vnics to the default network and will disconnect all the vnics from their current networks"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef DeleteInstance(self, context, ports):\n resource_details = self._parse_remote_model(context)\n # execute command\n res = self.command_wrapper.execute_command_with_connection(\n context,\n self.destroy_virtual_machine_command.DeleteInstance,\n resource_details.vm_uuid,\n resource_details.fullname)\n return set_command_result(result=res, unpicklable=False)", "response": "Delete Instance Command will only destroy the vm
and will not remove the resource"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef refresh_ip(self, context, cancellation_context, ports):\n resource_details = self._parse_remote_model(context)\n # execute command\n res = self.command_wrapper.execute_command_with_connection(context,\n self.refresh_ip_command.refresh_ip,\n resource_details,\n cancellation_context,\n context.remote_endpoints[\n 0].app_context.app_request_json)\n return set_command_result(result=res, unpicklable=False)", "response": "Refresh the ip of the resource"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef power_off(self, context, ports):\n return self._power_command(context, ports, self.vm_power_management_command.power_off)", "response": "Power off the remote vm\n "} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\npowering on the remote vm", "response": "def power_on(self, context, ports):\n \"\"\"\n Powers on the remote vm\n :param models.QualiDriverModels.ResourceRemoteCommandContext context: the context the command runs on\n :param list[string] ports: the ports of the connection between the remote resource and the local resource, NOT IN USE!!!\n \"\"\"\n return self._power_command(context, ports, self.vm_power_management_command.power_on)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\npower cycle the VM on the given ports", "response": "def power_cycle(self, context, ports, delay):\n \"\"\"\n performs a restart of the vm\n :param models.QualiDriverModels.ResourceRemoteCommandContext context: the context the command runs on\n :param list[string] ports: the ports of the connection between the remote resource and the local resource, NOT IN USE!!!\n :param number delay: the time to wait between the power on and off\n \"\"\"\n self.power_off(context, ports)\n time.sleep(float(delay))\n return self.power_on(context, 
ports)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _parse_remote_model(self, context):\n if not context.remote_endpoints:\n raise Exception('no remote resources found in context: {0}', jsonpickle.encode(context, unpicklable=False))\n resource = context.remote_endpoints[0]\n\n dictionary = jsonpickle.decode(resource.app_context.deployed_app_json)\n holder = DeployDataHolder(dictionary)\n app_resource_detail = GenericDeployedAppResourceModel()\n app_resource_detail.vm_uuid = holder.vmdetails.uid\n app_resource_detail.cloud_provider = context.resource.fullname\n app_resource_detail.fullname = resource.fullname\n if hasattr(holder.vmdetails, 'vmCustomParams'):\n app_resource_detail.vm_custom_params = holder.vmdetails.vmCustomParams\n return app_resource_detail", "response": "parse the remote resource model and adds its full name"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nsaving a snapshot of the virtual machine to a snapshot", "response": "def save_snapshot(self, context, snapshot_name, save_memory='No'):\n \"\"\"\n Saves virtual machine to a snapshot\n :param context: resource context of the vCenterShell\n :type context: models.QualiDriverModels.ResourceCommandContext\n :param snapshot_name: snapshot name to save to\n :type snapshot_name: str\n :param save_memory: Snapshot the virtual machine's memory. 
Lookup, Yes / No\n :type save_memory: str\n :return:\n \"\"\"\n resource_details = self._parse_remote_model(context)\n created_snapshot_path = self.command_wrapper.execute_command_with_connection(context,\n self.snapshot_saver.save_snapshot,\n resource_details.vm_uuid,\n snapshot_name,\n save_memory)\n return set_command_result(created_snapshot_path)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nrestore virtual machine from a snapshot", "response": "def restore_snapshot(self, context, snapshot_name):\n \"\"\"\n Restores virtual machine from a snapshot\n :param context: resource context of the vCenterShell\n :type context: models.QualiDriverModels.ResourceCommandContext\n :param snapshot_name: snapshot name to save to\n :type snapshot_name: str\n :return:\n \"\"\"\n resource_details = self._parse_remote_model(context)\n self.command_wrapper.execute_command_with_connection(context,\n self.snapshot_restorer.restore_snapshot,\n resource_details.vm_uuid,\n resource_details.fullname,\n snapshot_name)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn list of snapshots for the current resource", "response": "def get_snapshots(self, context):\n \"\"\"\n Returns list of snapshots\n :param context: resource context of the vCenterShell\n :type context: models.QualiDriverModels.ResourceCommandContext\n :return:\n \"\"\"\n resource_details = self._parse_remote_model(context)\n res = self.command_wrapper.execute_command_with_connection(context,\n self.snapshots_retriever.get_snapshots,\n resource_details.vm_uuid)\n return set_command_result(result=res, unpicklable=False)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef orchestration_save(self, context, mode=\"shallow\", custom_params=None):\n resource_details = self._parse_remote_model(context)\n created_date = datetime.now()\n snapshot_name = created_date.strftime('%y_%m_%d %H_%M_%S_%f')\n created_snapshot_path = 
self.save_snapshot(context=context, snapshot_name=snapshot_name)\n\n created_snapshot_path = self._strip_double_quotes(created_snapshot_path)\n\n orchestration_saved_artifact = OrchestrationSavedArtifact()\n orchestration_saved_artifact.artifact_type = 'vcenter_snapshot'\n orchestration_saved_artifact.identifier = created_snapshot_path\n\n saved_artifacts_info = OrchestrationSavedArtifactsInfo(\n resource_name=resource_details.cloud_provider,\n created_date=created_date,\n restore_rules={'requires_same_resource': True},\n saved_artifact=orchestration_saved_artifact)\n\n orchestration_save_result = OrchestrationSaveResult(saved_artifacts_info)\n\n return set_command_result(result=orchestration_save_result, unpicklable=False)", "response": "Creates a snapshot with a unique name and returns a SavedResults object"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_vcenter_data_model(self, api, vcenter_name):\n if not vcenter_name:\n raise ValueError('VMWare vCenter name is empty')\n vcenter_instance = api.GetResourceDetails(vcenter_name)\n vcenter_resource_model = self.resource_model_parser.convert_to_vcenter_model(vcenter_instance)\n return vcenter_resource_model", "response": "Returns a VMwarevCenterResourceModel object from the API."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nconvert back slashes to front slashes", "response": "def back_slash_to_front_converter(string):\n \"\"\"\n Replacing all \\ in the str to /\n :param string: single string to modify\n :type string: str\n \"\"\"\n try:\n if not string or not isinstance(string, str):\n return string\n return string.replace('\\\\', '/')\n except Exception:\n return string"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nconverting any object to JSON - like readable format ready to be printed for debugging purposes", "response": "def get_object_as_string(obj):\n \"\"\"\n Converts any object 
to JSON-like readable format, ready to be printed for debugging purposes\n :param obj: Any object\n :return: string\n \"\"\"\n if isinstance(obj, str):\n return obj\n if isinstance(obj, list):\n return '\\r\\n\\;'.join([get_object_as_string(item) for item in obj])\n attrs = vars(obj)\n as_string = ', '.join(\"%s: %s\" % item for item in attrs.items())\n return as_string"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nsplitting a whole path into 'Path' and 'Name'", "response": "def get_path_and_name(full_name):\n \"\"\"\n Split a whole path into 'Path' and 'Name'\n :param full_name: Full Resource Name - like 'Root/Folder/Folder2/Name'\n :return: tuple (Path, Name)\n \"\"\"\n if full_name:\n parts = full_name.split(\"/\")\n return (\"/\".join(parts[0:-1]), parts[-1]) if len(parts) > 1 else (\"/\", full_name)\n return None, None"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning details of the virtual machine", "response": "def get_details(self):\n \"\"\"\n :rtype list[VmDataField]\n \"\"\"\n data = []\n \n if self.deployment == 'vCenter Clone VM From VM':\n data.append(VmDetailsProperty(key='Cloned VM Name',value= self.dep_attributes.get('vCenter VM','')))\n\n if self.deployment == 'VCenter Deploy VM From Linked Clone':\n template = self.dep_attributes.get('vCenter VM','')\n snapshot = self.dep_attributes.get('vCenter VM Snapshot','')\n data.append(VmDetailsProperty(key='Cloned VM Name',value= '{0} (snapshot: {1})'.format(template, snapshot)))\n\n if self.deployment == 'vCenter VM From Image':\n data.append(VmDetailsProperty(key='Base Image Name',value= self.dep_attributes.get('vCenter Image','').split('/')[-1]))\n\n if self.deployment == 'vCenter VM From Template':\n data.append(VmDetailsProperty(key='Template Name',value= self.dep_attributes.get('vCenter Template','')))\n\n return data"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef 
set_deplyment_vcenter_params(vcenter_resource_model, deploy_params):\n # Override attributes\n deploy_params.vm_cluster = deploy_params.vm_cluster or vcenter_resource_model.vm_cluster\n deploy_params.vm_storage = deploy_params.vm_storage or vcenter_resource_model.vm_storage\n deploy_params.vm_resource_pool = deploy_params.vm_resource_pool or vcenter_resource_model.vm_resource_pool\n deploy_params.vm_location = deploy_params.vm_location or vcenter_resource_model.vm_location\n deploy_params.default_datacenter = vcenter_resource_model.default_datacenter\n\n if not deploy_params.vm_cluster:\n raise ValueError('VM Cluster is empty')\n if not deploy_params.vm_storage:\n raise ValueError('VM Storage is empty')\n if not deploy_params.vm_location:\n raise ValueError('VM Location is empty')\n if not deploy_params.default_datacenter:\n raise ValueError('Default Datacenter attribute on VMWare vCenter is empty')\n\n deploy_params.vm_location = VMLocation.combine([deploy_params.default_datacenter, deploy_params.vm_location])", "response": "Sets the vcenter parameters if not already set at the deployment option\n "} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nsaving a snapshot of the virtual machine to a snapshot", "response": "def remote_save_snapshot(self, context, ports, snapshot_name, save_memory):\n \"\"\"\n Saves virtual machine to a snapshot\n :param context: resource context of the vCenterShell\n :type context: models.QualiDriverModels.ResourceCommandContext\n :param ports:list[string] ports: the ports of the connection between the remote resource and the local resource\n :type ports: list[string]\n :param snapshot_name: snapshot name to save to\n :type snapshot_name: str\n :param save_memory: Snapshot the virtual machine's memory. 
Lookup, Yes / No\n :type save_memory: str\n :return:\n \"\"\"\n return self.command_orchestrator.save_snapshot(context, snapshot_name, save_memory)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef remote_restore_snapshot(self, context, ports, snapshot_name):\n return self.command_orchestrator.restore_snapshot(context, snapshot_name)", "response": "Restores virtual machine from a snapshot"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nconnects to a single virtual machine.", "response": "def connect_bulk(self, si, logger, vcenter_data_model, request):\n \"\"\"\n :param si:\n :param logger:\n :param VMwarevCenterResourceModel vcenter_data_model:\n :param request:\n :return:\n \"\"\"\n self.logger = logger\n\n self.logger.info('Apply connectivity changes has started')\n self.logger.debug('Apply connectivity changes has started with the request: {0}'.format(request))\n\n holder = DeployDataHolder(jsonpickle.decode(request))\n\n self.vcenter_data_model = vcenter_data_model\n if vcenter_data_model.reserved_networks:\n self.reserved_networks = [name.strip() for name in vcenter_data_model.reserved_networks.split(',')]\n\n if not vcenter_data_model.default_dvswitch:\n return self._handle_no_dvswitch_error(holder)\n\n dvswitch_location = VMLocation.create_from_full_path(vcenter_data_model.default_dvswitch)\n\n self.dv_switch_path = VMLocation.combine([vcenter_data_model.default_datacenter, dvswitch_location.path])\n self.dv_switch_name = dvswitch_location.name\n self.default_network = VMLocation.combine(\n [vcenter_data_model.default_datacenter, vcenter_data_model.holding_network])\n\n mappings = self._map_requsets(holder.driverRequest.actions)\n self.logger.debug('Connectivity actions mappings: {0}'.format(jsonpickle.encode(mappings, unpicklable=False)))\n\n pool = ThreadPool()\n async_results = self._run_async_connection_actions(si, mappings, pool, 
logger)\n\n results = self._get_async_results(async_results, pool)\n self.logger.info('Apply connectivity changes done')\n self.logger.debug('Apply connectivity has finished with the results: {0}'.format(jsonpickle.encode(results,\n unpicklable=False)))\n return results"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ndestroying the resource with the given uuid", "response": "def destroy(self, si, logger, session, vcenter_data_model, vm_uuid, vm_name, reservation_id):\n \"\"\"\n :param si:\n :param logger:\n :param CloudShellAPISession session:\n :param vcenter_data_model:\n :param vm_uuid:\n :param str vm_name: This is the resource name\n :param reservation_id:\n :return:\n \"\"\"\n # disconnect\n self._disconnect_all_my_connectors(session=session, resource_name=vm_name, reservation_id=reservation_id,\n logger=logger)\n # find vm\n vm = self.pv_service.find_by_uuid(si, vm_uuid)\n\n if vm is not None:\n # destroy vm\n result = self.pv_service.destroy_vm(vm=vm, logger=logger)\n else:\n logger.info(\"Could not find the VM {0},will remove the resource.\".format(vm_name))\n result = True\n\n # delete resources\n self.resource_remover.remove_resource(session=session, resource_full_name=vm_name)\n return result"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ndisconnects all connectors in a reservation.", "response": "def _disconnect_all_my_connectors(session, resource_name, reservation_id, logger):\n \"\"\"\n :param CloudShellAPISession session:\n :param str resource_name:\n :param str reservation_id:\n \"\"\"\n reservation_details = session.GetReservationDetails(reservation_id)\n connectors = reservation_details.ReservationDescription.Connectors\n endpoints = []\n for endpoint in connectors:\n if endpoint.Target == resource_name or endpoint.Source == resource_name:\n endpoints.append(endpoint.Target)\n endpoints.append(endpoint.Source)\n\n if len(endpoints) == 0:\n logger.info(\"No routes to 
disconnect for resource {0} in reservation {1}\"\n .format(resource_name, reservation_id))\n return\n\n logger.info(\"Executing disconnect routes for resource {0} in reservation {1}\"\n .format(resource_name, reservation_id))\n\n try:\n session.DisconnectRoutesInReservation(reservation_id, endpoints)\n except Exception as exc:\n logger.exception(\"Error disconnecting routes for resource {0} in reservation {1}. Error: {2}\"\n .format(resource_name, reservation_id, get_error_message_from_exception(exc)))"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ndeletes a saved sandbox.", "response": "def delete_sandbox(self, si, logger, vcenter_data_model, delete_sandbox_actions, cancellation_context):\n \"\"\"\n Deletes a saved sandbox's artifacts\n\n :param vcenter_data_model: VMwarevCenterResourceModel\n :param vim.ServiceInstance si: py_vmomi service instance\n :type si: vim.ServiceInstance\n :param logger: Logger\n :type logger: cloudshell.core.logger.qs_logger.get_qs_logger\n :param list[SaveApp] delete_sandbox_actions:\n :param cancellation_context:\n \"\"\"\n results = []\n\n logger.info('Deleting saved sandbox command starting on ' + vcenter_data_model.default_datacenter)\n\n if not delete_sandbox_actions:\n raise Exception('Failed to delete saved sandbox, missing data in request.')\n\n actions_grouped_by_save_types = groupby(delete_sandbox_actions, lambda x: x.actionParams.saveDeploymentModel)\n artifactHandlersToActions = {ArtifactHandler.factory(k,\n self.pyvmomi_service,\n vcenter_data_model,\n si,\n logger,\n self.deployer,\n None,\n self.resource_model_parser,\n self.snapshot_saver,\n self.task_waiter,\n self.folder_manager,\n self.pg,\n self.cs): list(g)\n for k, g in actions_grouped_by_save_types}\n\n self._validate_save_deployment_models(artifactHandlersToActions, delete_sandbox_actions, results)\n\n error_results = [r for r in results if not r.success]\n if not error_results:\n results = 
self._execute_delete_saved_sandbox(artifactHandlersToActions,\n cancellation_context,\n logger,\n results)\n\n return results"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nexecute the deployer to deploy vm from linked clone", "response": "def execute_deploy_from_linked_clone(self, si, logger, vcenter_data_model, reservation_id, deployment_params, cancellation_context, folder_manager):\n \"\"\"\n Calls the deployer to deploy vm from snapshot\n :param cancellation_context:\n :param str reservation_id:\n :param si:\n :param logger:\n :type deployment_params: DeployFromLinkedClone\n :param vcenter_data_model:\n :return:\n \"\"\"\n self._prepare_deployed_apps_folder(deployment_params, si, logger, folder_manager, vcenter_data_model)\n\n deploy_result = self.deployer.deploy_from_linked_clone(si, logger, deployment_params, vcenter_data_model,\n reservation_id, cancellation_context)\n return deploy_result"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef execute_deploy_clone_from_vm(self, si, logger, vcenter_data_model, reservation_id, deployment_params, cancellation_context, folder_manager):\n self._prepare_deployed_apps_folder(deployment_params, si, logger, folder_manager, vcenter_data_model)\n deploy_result = self.deployer.deploy_clone_from_vm(si, logger, deployment_params, vcenter_data_model,\n reservation_id, cancellation_context)\n return deploy_result", "response": "Executes the deploy_clone_from_vm method"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef save_snapshot(self, si, logger, vm_uuid, snapshot_name, save_memory):\n vm = self.pyvmomi_service.find_by_uuid(si, vm_uuid)\n\n snapshot_path_to_be_created = SaveSnapshotCommand._get_snapshot_name_to_be_created(snapshot_name, vm)\n\n save_vm_memory_to_snapshot = SaveSnapshotCommand._get_save_vm_memory_to_snapshot(save_memory)\n\n 
SaveSnapshotCommand._verify_snapshot_uniquness(snapshot_path_to_be_created, vm)\n\n task = self._create_snapshot(logger, snapshot_name, vm, save_vm_memory_to_snapshot)\n\n self.task_waiter.wait_for_task(task=task, logger=logger, action_name='Create Snapshot')\n return snapshot_path_to_be_created", "response": "Creates a snapshot of the current state of the virtual machine"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _create_snapshot(logger, snapshot_name, vm, save_vm_memory_to_snapshot):\n logger.info(\"Create virtual machine snapshot\")\n dump_memory = save_vm_memory_to_snapshot\n quiesce = True\n task = vm.CreateSnapshot(snapshot_name, 'Created by CloudShell vCenterShell', dump_memory, quiesce)\n return task", "response": "Create a snapshot of the virtual machine."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ngenerate a unique name.", "response": "def generate_unique_name(name_prefix, reservation_id=None):\n \"\"\"\n Generate a unique name.\n The method generates a guid and adds the first 8 characters of the new guid to 'name_prefix'.\n If a reservation id is provided, then the first 4 chars of the generated guid are taken and the last 4\n of the reservation id\n \"\"\"\n if reservation_id and isinstance(reservation_id, str) and len(reservation_id) >= 4:\n unique_id = str(uuid.uuid4())[:4] + \"-\" + reservation_id[-4:]\n else:\n unique_id = str(uuid.uuid4())[:8]\n return name_prefix + \"_\" + unique_id"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef disconnect(self, si, logger, vcenter_data_model, vm_uuid, network_name=None, vm=None):\n logger.debug(\"Disconnect Interface VM: '{0}' Network: '{1}' ...\".format(vm_uuid, network_name or \"ALL\"))\n if vm is None:\n vm = self.pyvmomi_service.find_by_uuid(si, vm_uuid)\n if not vm:\n return \"Warning: failed to locate vm {0} in vCenter\".format(vm_uuid)\n\n 
if network_name:\n network = self.pyvmomi_service.vm_get_network_by_name(vm, network_name)\n if network is None:\n raise KeyError('Network not found ({0})'.format(network_name))\n else:\n network = None\n\n network_full_name = VMLocation.combine(\n [vcenter_data_model.default_datacenter, vcenter_data_model.holding_network])\n\n default_network = self.pyvmomi_service.get_network_by_full_name(si, network_full_name)\n if network:\n return self.port_group_configurer.disconnect_network(vm, network, default_network,\n vcenter_data_model.reserved_networks,\n logger=logger)\n else:\n return self.port_group_configurer.disconnect_all_networks(vm, default_network,\n vcenter_data_model.reserved_networks,\n logger=logger)", "response": "Disconnects the specified network from the specified virtual machine."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef remove_interfaces_from_vm_task(self, virtual_machine, filter_function=None):\n device_change = []\n for device in virtual_machine.config.hardware.device:\n if isinstance(device, vim.vm.device.VirtualEthernetCard) and \\\n (filter_function is None or filter_function(device)):\n nicspec = vim.vm.device.VirtualDeviceSpec()\n nicspec.operation = vim.vm.device.VirtualDeviceSpec.Operation.remove\n nicspec.device = device\n device_change.append(nicspec)\n\n if len(device_change) > 0:\n return self.pyvmomi_service.vm_reconfig_task(virtual_machine, device_change)\n return None", "response": "Removes interfaces from VM"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef connect_by_mapping(self, si, vm, mapping, default_network, reserved_networks, logger, promiscuous_mode):\n request_mapping = []\n\n logger.debug(\n 'about to map to the vm: {0}, the following networks'.format(vm.name if vm.name else vm.config.uuid))\n\n for network_map in mapping:\n network = self.dv_port_group_creator.get_or_create_network(si,\n 
vm,\n network_map.dv_port_name,\n network_map.dv_switch_name,\n network_map.dv_switch_path,\n network_map.vlan_id,\n network_map.vlan_spec,\n logger,\n promiscuous_mode)\n\n request_mapping.append(ConnectRequest(network_map.vnic_name, network))\n\n logger.debug(str(request_mapping))\n return self.virtual_machine_port_group_configurer.connect_vnic_to_networks(vm,\n request_mapping,\n default_network,\n reserved_networks,\n logger)", "response": "Connects the given mapping to the given virtual machine and returns the virtual machine s entry point."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef dv_port_group_create_task(dv_port_name, dv_switch, spec, vlan_id, logger, promiscuous_mode, num_ports=32):\n\n dv_pg_spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()\n dv_pg_spec.name = dv_port_name\n dv_pg_spec.numPorts = num_ports\n dv_pg_spec.type = vim.dvs.DistributedVirtualPortgroup.PortgroupType.earlyBinding\n\n dv_pg_spec.defaultPortConfig = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()\n dv_pg_spec.defaultPortConfig.securityPolicy = vim.dvs.VmwareDistributedVirtualSwitch.SecurityPolicy()\n\n dv_pg_spec.defaultPortConfig.vlan = spec\n dv_pg_spec.defaultPortConfig.vlan.vlanId = vlan_id\n\n promiscuous_mode = promiscuous_mode.lower() == 'true'\n dv_pg_spec.defaultPortConfig.securityPolicy.allowPromiscuous = vim.BoolPolicy(value=promiscuous_mode)\n dv_pg_spec.defaultPortConfig.securityPolicy.forgedTransmits = vim.BoolPolicy(value=True)\n\n dv_pg_spec.defaultPortConfig.vlan.inherited = False\n dv_pg_spec.defaultPortConfig.securityPolicy.macChanges = vim.BoolPolicy(value=False)\n dv_pg_spec.defaultPortConfig.securityPolicy.inherited = False\n\n task = dv_switch.AddDVPortgroup_Task([dv_pg_spec])\n\n logger.info(u\"DV Port Group '{}' CREATE Task ...\".format(dv_port_name))\n return task", "response": "This function creates a Task which can be used to create a Distributed Virtual 
Portgroup."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncreates a DeploymentDetails object from the given VMwarevCenterResourceModel.", "response": "def create_deployment_details(vcenter_resource_model, vm_cluster, vm_storage, vm_resource_pool, vm_location):\n \"\"\"\n :type vcenter_resource_model: VMwarevCenterResourceModel\n :type vm_cluster: str\n :type vm_storage: str\n :type vm_resource_pool: str\n :type vm_location: str\n :rtype: DeploymentDetails\n \"\"\"\n vm_cluster = vm_cluster or vcenter_resource_model.vm_cluster\n vm_storage = vm_storage or vcenter_resource_model.vm_storage\n vm_resource_pool = vm_resource_pool or vcenter_resource_model.vm_resource_pool\n vm_location = vm_location or vcenter_resource_model.vm_location\n\n return DeploymentDetails(\n vm_cluster=vm_cluster,\n vm_storage=vm_storage,\n vm_resource_pool=vm_resource_pool,\n vm_location=vm_location\n )"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef refresh_ip(self, si, logger, session, vcenter_data_model, resource_model, cancellation_context,\n app_request_json):\n \"\"\"\n Refreshes IP address of virtual machine and updates Address property on the resource\n\n :param vim.ServiceInstance si: py_vmomi service instance\n :param logger:\n :param vCenterShell.driver.SecureCloudShellApiSession session: cloudshell session\n :param GenericDeployedAppResourceModel resource_model: UUID of Virtual Machine\n :param VMwarevCenterResourceModel vcenter_data_model: the vcenter data model attributes\n :param cancellation_context:\n \"\"\"\n self._do_not_run_on_static_vm(app_request_json=app_request_json)\n\n default_network = VMLocation.combine(\n [vcenter_data_model.default_datacenter, vcenter_data_model.holding_network])\n\n match_function = self.ip_manager.get_ip_match_function(\n self._get_ip_refresh_ip_regex(resource_model.vm_custom_params))\n\n timeout = 
self._get_ip_refresh_timeout(resource_model.vm_custom_params)\n\n vm = self.pyvmomi_service.find_by_uuid(si, resource_model.vm_uuid)\n\n ip_res = self.ip_manager.get_ip(vm, default_network, match_function, cancellation_context, timeout, logger)\n\n if ip_res.reason == IpReason.Timeout:\n raise ValueError('IP address of VM \\'{0}\\' could not be obtained during {1} seconds'\n .format(resource_model.fullname, timeout))\n\n if ip_res.reason == IpReason.Success:\n self._update_resource_address_with_retry(session=session,\n resource_name=resource_model.fullname,\n ip_address=ip_res.ip_address)\n\n return ip_res.ip_address", "response": "Refreshes the IP address of a virtual machine and updates Address property on the resource"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_connection_details(session, vcenter_resource_model, resource_context):\n\n session = session\n resource_context = resource_context\n\n # get vCenter connection details from vCenter resource\n user = vcenter_resource_model.user\n vcenter_url = resource_context.address\n password = session.DecryptPassword(vcenter_resource_model.password).Value\n\n return VCenterConnectionDetails(vcenter_url, user, password)", "response": "Method retrieves the connection details from the vcenter resource model attributes."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef vnic_add_to_vm_task(nicspec, virtual_machine, logger):\n if issubclass(type(nicspec), vim.vm.device.VirtualDeviceSpec):\n nicspec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add\n VNicService.vnic_set_connectivity_status(nicspec, True)\n logger.debug(u\"Attaching new vNIC '{}' to VM '{}'...\".format(nicspec, virtual_machine))\n task = pyVmomiService.vm_reconfig_task(virtual_machine, [nicspec])\n return task\n else:\n logger.warn(u\"Cannot attach new vNIC '{}' to VM '{}'\".format(nicspec, virtual_machine))\n return None", 
"response": "Add new vNIC to VM"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngetting a Network connected to a particular Device", "response": "def get_network_by_device(vm, device, pyvmomi_service, logger):\n \"\"\"\n Get a Network connected to a particular Device (vNIC)\n @see https://github.com/vmware/pyvmomi/blob/master/docs/vim/dvs/PortConnection.rst\n\n :param vm:\n :param device: instance of adapter\n :param pyvmomi_service:\n :param logger:\n :return: \n \"\"\"\n\n try:\n backing = device.backing\n if hasattr(backing, 'network'):\n return backing.network\n elif hasattr(backing, 'port') and hasattr(backing.port, 'portgroupKey'):\n return VNicService._network_get_network_by_connection(vm, backing.port, pyvmomi_service)\n except:\n logger.debug(u\"Cannot determinate which Network connected to device {}\".format(device))\n return None"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef device_is_attached_to_network(device, network_name):\n try:\n backing = device.backing\n except:\n return False\n if hasattr(backing, 'network') and hasattr(backing.network, 'name'):\n return network_name == backing.network.name\n elif hasattr(backing, 'port') and hasattr(backing.port, 'portgroupKey'):\n return network_name == backing.port.portgroupKey\n\n return False", "response": "Checks if the device is attached to the right network name."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef vnic_compose_empty(device=None):\n nicspec = vim.vm.device.VirtualDeviceSpec()\n\n if device:\n nicspec.device = device\n nicspec.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit\n else:\n nicspec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add\n nicspec.device = vim.vm.device.VirtualVmxnet3()\n nicspec.device.wakeOnLanEnabled = True\n nicspec.device.deviceInfo = vim.Description()\n\n nicspec.device.connectable = 
vim.vm.device.VirtualDevice.ConnectInfo()\n        nicspec.device.connectable.startConnected = True\n        nicspec.device.connectable.allowGuestControl = True\n\n    return nicspec", "response": "Composes a vNIC device spec for a new or existing device."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nattaching vNIC to a standard network", "response": "def vnic_attach_to_network_standard(nicspec, network, logger):\n    \"\"\"\n    Attach vNIC to a 'usual' network\n    :param nicspec: \n    :param network: \n    :param logger:\n    :return: updated 'nicspec'\n    \"\"\"\n    if nicspec and network_is_standard(network):\n        network_name = network.name\n        nicspec.device.backing = vim.vm.device.VirtualEthernetCard.NetworkBackingInfo()\n        nicspec.device.backing.network = network\n\n        nicspec.device.wakeOnLanEnabled = True\n        nicspec.device.deviceInfo = vim.Description()\n\n        nicspec.device.backing.deviceName = network_name\n\n        nicspec.device.connectable = vim.vm.device.VirtualDevice.ConnectInfo()\n        nicspec.device.connectable.startConnected = True\n        nicspec.device.connectable.allowGuestControl = True\n\n        logger.debug(u\"Assigning network '{}' for vNIC\".format(network_name))\n    else:\n        # logger.warn(u\"Cannot assigning network '{}' for vNIC {}\".format(network, nicspec))\n        logger.warn(u\"Cannot assigning network for vNIC \".format(network, nicspec))\n    return nicspec"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nattach vNIC to a Distributed Port Group", "response": "def vnic_attach_to_network_distributed(nicspec, port_group, logger):\n    \"\"\"\n    Attach vNIC to a Distributed Port Group network\n    :param nicspec: \n    :param port_group: \n    :param logger:\n    :return: updated 'nicspec'\n    \"\"\"\n    if nicspec and network_is_portgroup(port_group):\n        network_name = port_group.name\n\n        dvs_port_connection = vim.dvs.PortConnection()\n        dvs_port_connection.portgroupKey = port_group.key\n        dvs_port_connection.switchUuid = port_group.config.distributedVirtualSwitch.uuid\n\n        
nicspec.device.backing = vim.vm.device.VirtualEthernetCard.DistributedVirtualPortBackingInfo()\n nicspec.device.backing.port = dvs_port_connection\n\n logger.debug(u\"Assigning portgroup '{}' for vNIC\".format(network_name))\n else:\n logger.warn(u\"Cannot assigning portgroup for vNIC\")\n return nicspec"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef vnic_attached_to_network(nicspec, network, logger):\n\n if nicspec:\n if network_is_portgroup(network):\n return VNicService.vnic_attach_to_network_distributed(nicspec, network,\n logger=logger)\n elif network_is_standard(network):\n return VNicService.vnic_attach_to_network_standard(nicspec, network,\n logger=logger)\n return None", "response": "Attach vNIC to a network."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef vnic_add_new_to_vm_task(vm, network, logger):\n\n nicspes = VNicService.vnic_new_attached_to_network(network)\n task = VNicService.vnic_add_to_vm_task(nicspes, vm, logger)\n return task", "response": "Compose new vNIC and attach it to VM & connect to Network\n "} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nmaps the vnic on the vm by name", "response": "def map_vnics(vm):\n \"\"\"\n maps the vnic on the vm by name\n :param vm: virtual machine\n :return: dictionary: {'vnic_name': vnic}\n \"\"\"\n return {device.deviceInfo.label: device\n for device in vm.config.hardware.device\n if isinstance(device, vim.vm.device.VirtualEthernetCard)}"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_device_spec(vnic, set_connected):\n nic_spec = VNicService.create_vnic_spec(vnic)\n VNicService.set_vnic_connectivity_status(nic_spec, to_connect=set_connected)\n return nic_spec", "response": "This function creates the device change spec and returns it"} {"SOURCE": "codesearchnet", 
"instruction": "Can you create a Python 3 function that\ncreates a vnic spec for existing device", "response": "def create_vnic_spec(device):\n \"\"\"\n create device spec for existing device and the mode of edit for the vcenter to update\n :param device:\n :rtype: device spec\n \"\"\"\n nic_spec = vim.vm.device.VirtualDeviceSpec()\n nic_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit\n nic_spec.device = device\n return nic_spec"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nsets the vnic connectivity status of the nic spec.", "response": "def set_vnic_connectivity_status(nic_spec, to_connect):\n \"\"\"\n sets the device spec as connected or disconnected\n :param nic_spec: the specification\n :param to_connect: bool\n \"\"\"\n nic_spec.device.connectable = vim.vm.device.VirtualDevice.ConnectInfo()\n nic_spec.device.connectable.connected = to_connect\n nic_spec.device.connectable.startConnected = to_connect"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef create_from_params(cls, template_model, datastore_name, vm_cluster_model, ip_regex, refresh_ip_timeout,\n auto_power_on, auto_power_off, wait_for_ip, auto_delete):\n \"\"\"\n :param VCenterTemplateModel template_model:\n :param str datastore_name:\n :param VMClusterModel vm_cluster_model:\n :param str ip_regex: Custom regex to filter IP addresses\n :param refresh_ip_timeout:\n :param bool auto_power_on:\n :param bool auto_power_off:\n :param bool wait_for_ip:\n :param bool auto_delete:\n \"\"\"\n dic = {\n 'template_model': template_model,\n 'datastore_name': datastore_name,\n 'vm_cluster_model': vm_cluster_model,\n 'ip_regex': ip_regex,\n 'refresh_ip_timeout': refresh_ip_timeout,\n 'auto_power_on': auto_power_on,\n 'auto_power_off': auto_power_off,\n 'wait_for_ip': wait_for_ip,\n 'auto_delete': auto_delete\n }\n return cls(dic)", "response": "Create a new instance of the class based 
on the given parameters."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_current_snapshot_name(vm):\n    all_snapshots = SnapshotRetriever.get_vm_snapshots(vm)\n    # noinspection PyProtectedMember\n    if not vm.snapshot:\n        return None\n    current_snapshot_id = vm.snapshot.currentSnapshot._moId\n    for snapshot_name in all_snapshots.keys():\n        # noinspection PyProtectedMember\n        if all_snapshots[snapshot_name]._moId == current_snapshot_id:\n            return snapshot_name\n    return None", "response": "Returns the name of the current snapshot"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _get_snapshots_recursively(snapshots, snapshot_location):\n    snapshot_paths = collections.OrderedDict()\n\n    if not snapshots:\n        return snapshot_paths\n\n    for snapshot in snapshots:\n        if snapshot_location:\n            current_snapshot_path = SnapshotRetriever.combine(snapshot_location, snapshot.name)\n        else:\n            current_snapshot_path = snapshot.name\n\n        snapshot_paths[current_snapshot_path] = snapshot.snapshot\n        child_snapshots = SnapshotRetriever._get_snapshots_recursively(snapshot.childSnapshotList,\n                                                                      current_snapshot_path)\n        snapshot_paths.update(child_snapshots)\n\n    return snapshot_paths", "response": "Recursively traverses child snapshots and returns a dictionary of snapshot paths and snapshot instances"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nwait for a task to complete and provide updates on a vSphere task.", "response": "def wait_for_task(self, task, logger, action_name='job', hide_result=False, cancellation_context=None):\n        \"\"\"\n        Waits and provides updates on a vSphere task\n        :param cancellation_context: package.cloudshell.cp.vcenter.models.QualiDriverModels.CancellationContext\n        :param task:\n        :param action_name:\n        :param hide_result:\n        :param logger:\n        \"\"\"\n\n        while task.info.state in [vim.TaskInfo.State.running, 
vim.TaskInfo.State.queued]:\n time.sleep(2)\n if cancellation_context is not None and task.info.cancelable and cancellation_context.is_cancelled and not task.info.cancelled:\n # some times the cancel operation doesn't really cancel the task\n # so consider an additional handling of the canceling\n task.CancelTask()\n logger.info(\"SynchronousTaskWaiter: task.CancelTask() \" + str(task.info.name.info.name))\n logger.info(\"SynchronousTaskWaiter: task.info.cancelled \" + str(task.info.cancelled))\n logger.info(\"SynchronousTaskWaiter: task.info.state \" + str(task.info.state))\n\n if task.info.state == vim.TaskInfo.State.success:\n if task.info.result is not None and not hide_result:\n out = '%s completed successfully, result: %s' % (action_name, task.info.result)\n logger.info(out)\n else:\n out = '%s completed successfully.' % action_name\n logger.info(out)\n else: # error state\n multi_msg = ''\n if task.info.error.faultMessage:\n multi_msg = ', '.join([err.message for err in task.info.error.faultMessage])\n elif task.info.error.msg:\n multi_msg = task.info.error.msg\n\n logger.info(\"task execution failed due to: {}\".format(multi_msg))\n logger.info(\"task info dump: {0}\".format(task.info))\n \n raise TaskFaultException(multi_msg)\n\n return task.info.result"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef deploy_from_linked_clone(self, si, logger, data_holder, vcenter_data_model, reservation_id,\n cancellation_context):\n \"\"\"\n deploy Cloned VM From VM Command, will deploy vm from a snapshot\n\n :param cancellation_context:\n :param si:\n :param logger:\n :param data_holder:\n :param vcenter_data_model:\n :param str reservation_id:\n :rtype DeployAppResult:\n :return:\n \"\"\"\n\n template_resource_model = data_holder.template_resource_model\n\n return self._deploy_a_clone(si=si,\n logger=logger,\n app_name=data_holder.app_name,\n template_name=template_resource_model.vcenter_vm,\n 
other_params=template_resource_model,\n vcenter_data_model=vcenter_data_model,\n reservation_id=reservation_id,\n cancellation_context=cancellation_context,\n snapshot=template_resource_model.vcenter_vm_snapshot)", "response": "deploy_from_linked_clone - Deploys a VM From VM Command will deploy vm from a snapshot"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ndeploys Cloned VM From VM Command, will deploy vm from another vm :param cancellation_context: :param reservation_id: :param si: :param logger: :type data_holder: :type vcenter_data_model: :rtype DeployAppResult: :return:", "response": "def deploy_clone_from_vm(self, si, logger, data_holder, vcenter_data_model, reservation_id, cancellation_context):\n \"\"\"\n deploy Cloned VM From VM Command, will deploy vm from another vm\n\n :param cancellation_context:\n :param reservation_id:\n :param si:\n :param logger:\n :type data_holder:\n :type vcenter_data_model:\n :rtype DeployAppResult:\n :return:\n \"\"\"\n template_resource_model = data_holder.template_resource_model\n return self._deploy_a_clone(si,\n logger,\n data_holder.app_name,\n template_resource_model.vcenter_vm,\n template_resource_model,\n vcenter_data_model,\n reservation_id,\n cancellation_context)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _get_deploy_image_params(data_holder, host_info, vm_name):\n image_params = OvfImageParams()\n if hasattr(data_holder, 'vcenter_image_arguments') and data_holder.vcenter_image_arguments:\n image_params.user_arguments = data_holder.vcenter_image_arguments\n if hasattr(data_holder, 'vm_location') and data_holder.vm_location:\n image_params.vm_folder = data_holder.vm_location.replace(data_holder.default_datacenter + '/', '')\n image_params.cluster = data_holder.vm_cluster\n image_params.resource_pool = data_holder.vm_resource_pool\n image_params.connectivity = host_info\n image_params.vm_name = vm_name\n 
image_params.datastore = data_holder.vm_storage\n image_params.datacenter = data_holder.default_datacenter\n image_params.image_url = data_holder.vcenter_image\n image_params.power_on = False\n image_params.vcenter_name = data_holder.vcenter_name\n return image_params", "response": "Returns the deploy image parameters for the specified image resource model."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning details of the current VM", "response": "def get_details(self):\n \"\"\"\n :rtype list[VmDataField]\n \"\"\"\n data = []\n if isinstance(self.model, vCenterCloneVMFromVMResourceModel):\n data.append(VmDetailsProperty(key='Cloned VM Name', value=self.model.vcenter_vm))\n\n if isinstance(self.model, VCenterDeployVMFromLinkedCloneResourceModel):\n template = self.model.vcenter_vm\n snapshot = self.model.vcenter_vm_snapshot\n data.append(VmDetailsProperty(key='Cloned VM Name',value= '{0} (snapshot: {1})'.format(template, snapshot)))\n\n if isinstance(self.model, vCenterVMFromImageResourceModel):\n data.append(VmDetailsProperty(key='Base Image Name', value=self.model.vcenter_image.split('/')[-1]))\n\n if isinstance(self.model, vCenterVMFromTemplateResourceModel):\n data.append(VmDetailsProperty(key='Template Name', value=self.model.vcenter_template))\n\n return data"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nconverting an instance of resource with dictionary of attributes to a class instance according to family and assigns its properties", "response": "def convert_to_resource_model(self, attributes, resource_model_type):\n \"\"\"\n Converts an instance of resource with dictionary of attributes\n to a class instance according to family and assigns its properties\n :type attributes: dict\n :param resource_model_type: Resource Model type to create\n :return:\n \"\"\"\n if resource_model_type:\n if not callable(resource_model_type):\n raise ValueError('resource_model_type {0} cannot be 
instantiated'.format(resource_model_type))\n instance = resource_model_type()\n else:\n raise ValueError('resource_model_type must have a value')\n\n props = ResourceModelParser.get_public_properties(instance)\n for attrib in attributes:\n property_name = ResourceModelParser.get_property_name_from_attribute_name(attrib)\n property_name_for_attribute_name = ResourceModelParser.get_property_name_with_attribute_name_postfix(attrib)\n if props.__contains__(property_name):\n value = attributes[attrib]\n setattr(instance, property_name, value)\n if hasattr(instance, property_name_for_attribute_name):\n setattr(instance, property_name_for_attribute_name, attrib)\n props.remove(property_name_for_attribute_name)\n props.remove(property_name)\n\n if props:\n raise ValueError('Property(ies) {0} not found on resource with attributes {1}'\n .format(','.join(props),\n ','.join(attributes)))\n\n if hasattr(instance, 'vcenter_vm'):\n instance.vcenter_vm = back_slash_to_front_converter(instance.vcenter_vm)\n\n if hasattr(instance, 'vcenter_vm_snapshot'):\n instance.vcenter_vm_snapshot = back_slash_to_front_converter(instance.vcenter_vm_snapshot)\n\n return instance"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef create_resource_model_instance(resource_instance):\n resource_model = ResourceModelParser.get_resource_model(resource_instance)\n resource_class_name = ResourceModelParser.get_resource_model_class_name(\n resource_model)\n # print 'Family name is ' + resource_class_name\n instance = ResourceModelParser.get_class('cloudshell.cp.vcenter.models.' 
+ resource_class_name)\n return instance", "response": "Create an instance of class named *ResourceModel\n from models folder according to ResourceModelName of a resource"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_class(class_path):\n module_path, class_name = class_path.rsplit(\".\", 1)\n\n try:\n module = __import__(class_path, fromlist=[class_name])\n except ImportError:\n raise ValueError('Class {0} could not be imported'.format(class_path))\n\n try:\n cls = getattr(module, class_name)\n except AttributeError:\n raise ValueError(\"Module '%s' has no class '%s'\" % (module_path, class_name,))\n\n try:\n instance = cls()\n except TypeError as type_error:\n raise ValueError('Failed to instantiate class {0}. Error: {1}'.format(class_name, type_error.message))\n\n return instance", "response": "Returns an instance of a class by its class_path."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_property_name_from_attribute_name(attribute):\n if isinstance(attribute, str) or isinstance(attribute, unicode):\n attribute_name = attribute\n elif hasattr(attribute, 'Name'):\n attribute_name = attribute.Name\n else:\n raise Exception('Attribute type {0} is not supported'.format(str(type(attribute))))\n\n return attribute_name.lower().replace(' ', '_')", "response": "Returns property name from attribute name"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef crc32File(filename, skip=0):\n with open(filename, 'rb') as stream:\n discard = stream.read(skip)\n return zlib.crc32(stream.read()) & 0xffffffff", "response": "Computes the CRC - 32 of the contents of filename optionally skipping a certain number of bytes at the beginning of the file."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nswap pairs of bytes in the given bytearray.", 
"response": "def endianSwapU16(bytes):\n    \"\"\"Swaps pairs of bytes (16-bit words) in the given bytearray.\"\"\"\n    for b in range(0, len(bytes), 2):\n        bytes[b], bytes[b + 1] = bytes[b + 1], bytes[b]\n    return bytes"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef setDictDefaults (d, defaults):\n    for key, val in defaults.items():\n        d.setdefault(key, val)\n\n    return d", "response": "Sets all defaults for the given dictionary to those contained in defaults."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns the default AIT dictionary for modname", "response": "def getDefaultDict(modname, config_key, loader, reload=False, filename=None):\n    \"\"\"Returns default AIT dictonary for modname\n\n    This helper function encapulates the core logic necessary to\n    (re)load, cache (via util.ObjectCache), and return the default\n    dictionary. For example, in ait.core.cmd:\n\n    def getDefaultDict(reload=False):\n        return ait.util.getDefaultDict(__name__, 'cmddict', CmdDict, reload)\n    \"\"\"\n    module = sys.modules[modname]\n    default = getattr(module, 'DefaultDict', None)\n\n    if filename is None:\n        filename = ait.config.get('%s.filename' % config_key, None)\n\n    if filename is not None and (default is None or reload is True):\n        try:\n            default = ObjectCache(filename, loader).load()\n            setattr(module, 'DefaultDict', default)\n        except IOError, e:\n            msg = 'Could not load default %s \"%s\": %s'\n            log.error(msg, config_key, filename, str(e))\n\n    return default or loader()"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nconvert the number n into a Binary Coded Decimal.", "response": "def toBCD (n):\n\n    \"\"\"Converts the number n into Binary Coded Decimal.\"\"\"\n    bcd = 0\n    bits = 0\n\n    while True:\n        n, r = divmod(n, 10)\n        bcd |= (r << bits)\n        if n is 0:\n            break\n        bits += 4\n\n    return bcd"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, 
write the documentation\ndef toFloat (str, default=None):\n value = default\n\n try:\n value = float(str)\n except ValueError:\n pass\n\n return value", "response": "Converts the given string to a floating - point value."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef toNumber (str, default=None):\n value = default\n\n try:\n if str.startswith(\"0x\"):\n value = int(str, 16)\n else:\n try:\n value = int(str)\n except ValueError:\n value = float(str)\n except ValueError:\n pass\n\n return value", "response": "Convert a string to a numeric value."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef toRepr (obj):\n args = [ ]\n names = [ ]\n\n if hasattr(obj, \"__dict__\"):\n names += getattr(obj, \"__dict__\").keys()\n if hasattr(obj, \"__slots__\"):\n names += getattr(obj, \"__slots__\")\n\n for name in names:\n value = getattr(obj, name)\n if value is not None:\n if type(value) is str:\n if len(value) > 32:\n value = value[0:32] + \"...\"\n value = \"'\" + value + \"'\"\n args.append(\"%s=%s\" % (name, str(value)))\n\n return \"%s(%s)\" % (obj.__class__.__name__, \", \".join(args))", "response": "Convert the Python object to a string representation of the kind\n often returned by a class __repr__ method."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns a description of the given duration in the most appropriate units.", "response": "def toStringDuration (duration):\n \"\"\"Returns a description of the given duration in the most appropriate\n units (e.g. 
seconds, ms, us, or ns).\n \"\"\"\n\n table = (\n ('%dms' , 1e-3, 1e3),\n (u'%d\\u03BCs', 1e-6, 1e6),\n ('%dns' , 1e-9, 1e9)\n )\n\n if duration > 1:\n return '%fs' % duration\n\n for format, threshold, factor in table:\n if duration > threshold:\n return format % int(duration * factor)\n\n return '%fs' % duration"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef expandPath (pathname, prefix=None):\n if prefix is None:\n prefix = ''\n\n expanded = pathname\n\n if pathname[0] == '~':\n expanded = os.path.expanduser(pathname)\n elif pathname[0] != '/':\n expanded = os.path.join(prefix, pathname)\n\n return os.path.abspath(expanded)", "response": "Return pathname as an absolute path either expanded by the users\n home directory or with prefix prepended."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns the list of all files within the input directory and all subdirectories.", "response": "def listAllFiles (directory, suffix=None, abspath=False):\n \"\"\"Returns the list of all files within the input directory and\n all subdirectories.\n \"\"\"\n files = []\n\n directory = expandPath(directory)\n\n for dirpath, dirnames, filenames in os.walk(directory, followlinks=True):\n if suffix:\n filenames = [f for f in filenames if f.endswith(suffix)]\n\n for filename in filenames:\n filepath = os.path.join(dirpath, filename)\n if not abspath:\n filepath = os.path.relpath(filepath, start=directory)\n\n # os.path.join(path, filename)\n\n files.append(filepath)\n\n return files"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ntrue if the cache needs to be updated False otherwise.", "response": "def dirty(self):\n \"\"\"True if the cache needs to be updated, False otherwise\"\"\"\n return not os.path.exists(self.cachename) or \\\n (os.path.getmtime(self.filename) >\n os.path.getmtime(self.cachename))"} {"SOURCE": "codesearchnet", 
"instruction": "Create a Python 3 function for\ncaching the result of loader(filename) to cachename.", "response": "def cache(self):\n        \"\"\"Caches the result of loader(filename) to cachename.\"\"\"\n        msg = 'Saving updates from more recent \"%s\" to \"%s\"'\n        log.info(msg, self.filename, self.cachename)\n        with open(self.cachename, 'wb') as output:\n            cPickle.dump(self._dict, output, -1)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nloading the Python object from the cache file.", "response": "def load(self):\n        \"\"\"Loads the Python object\n\n        Loads the Python object, either via loader(filename) or the\n        pickled cache file, whichever was modified most recently.\n        \"\"\"\n        if self._dict is None:\n            if self.dirty:\n                self._dict = self._loader(self.filename)\n                self.cache()\n            else:\n                with open(self.cachename, 'rb') as stream:\n                    self._dict = cPickle.load(stream)\n\n        return self._dict"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef process(self, input_data, topic=None, **kwargs):\n    try:\n        split = input_data[1:-1].split(',', 1)\n        uid, pkt = int(split[0]), split[1]\n        defn = self.packet_dict[uid]\n        decoded = tlm.Packet(defn, data=bytearray(pkt))\n        self.dbconn.insert(decoded, **kwargs)\n    except Exception as e:\n        log.error('Data archival failed with error: {}.'.format(e))", "response": "Parses a message received from PacketHandler into a tuple and inserts it into the database backend."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns the current UTC time in seconds and microseconds.", "response": "def getTimestampUTC():\n    \"\"\"getTimestampUTC() -> (ts_sec, ts_usec)\n\n    Returns the current UTC time in seconds and microseconds.\n    \"\"\"\n    utc = datetime.datetime.utcnow()\n    ts_sec = calendar.timegm( utc.timetuple() )\n    ts_usec = utc.microsecond\n    return ts_sec, ts_usec"} {"SOURCE": "codesearchnet", "instruction": "How 
would you explain what the following Python 3 function does\ndef getUTCDatetimeDOY(days=0, hours=0, minutes=0, seconds=0):\n    return (datetime.datetime.utcnow() + \n            datetime.timedelta(days=days, hours=hours, minutes=minutes, seconds=seconds)).strftime(DOY_Format)", "response": "Returns the current UTC datetime offset by the given days, hours, minutes and seconds, formatted as a DOY string."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef toc():\n    end = datetime.datetime.now()\n    return totalSeconds( end - TICs.pop() ) if len(TICs) else None", "response": "Returns the total elapsed seconds since the most recent tic call or None if tic was not called."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nconvert the given UTC timestamp to a two-tuple of the GPS week number and GPS seconds within the week.", "response": "def toGPSWeekAndSecs(timestamp=None):\n    \"\"\"Converts the given UTC timestamp (defaults to the current time) to\n    a two-tuple, (GPS week number, GPS seconds within the week).\n    \"\"\"\n    if timestamp is None:\n        timestamp = datetime.datetime.utcnow()\n\n    leap = LeapSeconds.get_GPS_offset_for_date(timestamp)\n\n    secsInWeek = 604800\n    delta = totalSeconds(timestamp - GPS_Epoch) + leap\n    seconds = delta % secsInWeek\n    week = int( math.floor(delta / secsInWeek) )\n\n    return (week, seconds)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nconvert the given Python datetime or Julian date to Greenwich Mean Sidereal Time in radians", "response": "def toGMST(dt=None):\n    \"\"\"Converts the given Python datetime or Julian date (float) to\n    Greenwich Mean Sidereal Time (GMST) (in radians) using the formula\n    from D.A. Vallado (2004).\n\n    See:\n\n    D.A. Vallado, Fundamentals of Astrodynamics and Applications, p. 
192\n    http://books.google.com/books?id=PJLlWzMBKjkC&lpg=PA956&vq=192&pg=PA192\n    \"\"\"\n    if dt is None or type(dt) is datetime.datetime:\n        jd = toJulian(dt)\n    else:\n        jd = dt\n\n    tUT1 = (jd - 2451545.0) / 36525.0\n    gmst = 67310.54841 + (876600 * 3600 + 8640184.812866) * tUT1\n    gmst += 0.093104 * tUT1**2\n    gmst -= 6.2e-6 * tUT1**3\n\n    # Convert from seconds to degrees, i.e.\n    # 86400 seconds / 360 degrees = 240 seconds / degree\n    gmst /= 240.\n\n    # Convert to radians\n    gmst = math.radians(gmst) % TwoPi\n\n    if gmst < 0:\n        gmst += TwoPi\n\n    return gmst"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef toJulian(dt=None):\n    if dt is None:\n        dt = datetime.datetime.utcnow()\n\n    if dt.month < 3:\n        year = dt.year - 1\n        month = dt.month + 12\n    else:\n        year = dt.year\n        month = dt.month\n\n    A = int(year / 100.0)\n    B = 2 - A + int(A / 4.0)\n    C = ( (dt.second / 60.0 + dt.minute) / 60.0 + dt.hour ) / 24.0\n    jd = int(365.25 * (year + 4716))\n    jd += int(30.6001 * (month + 1)) + dt.day + B - 1524.5 + C\n\n    return jd", "response": "Converts a Python datetime to a Julian date using the formula from Meeus (2004)."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nconvert the given number of seconds since the GPS Epoch to this computer's local time.", "response": "def toLocalTime(seconds, microseconds=0):\n    \"\"\"toLocalTime(seconds, microseconds=0) -> datetime\n\n    Converts the given number of seconds since the GPS Epoch (midnight\n    on January 6th, 1980) to this computer's local time. 
Returns a\n Python datetime object.\n\n Examples:\n\n >>> toLocalTime(0)\n datetime.datetime(1980, 1, 6, 0, 0)\n\n >>> toLocalTime(25 * 86400)\n datetime.datetime(1980, 1, 31, 0, 0)\n \"\"\"\n delta = datetime.timedelta(seconds=seconds, microseconds=microseconds)\n return GPS_Epoch + delta"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef totalSeconds(td):\n if hasattr(td, \"total_seconds\"):\n ts = td.total_seconds()\n else:\n ts = (td.microseconds + (td.seconds + td.days * 24 * 3600.0) * 1e6) / 1e6\n\n return ts", "response": "Returns the total number of seconds contained in the given Python\n datetime. timedelta object."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _update_leap_second_data(self):\n\n log.info('Attempting to acquire latest leapsecond data')\n\n ls_file = ait.config.get(\n 'leapseconds.filename',\n os.path.join(ait.config._directory, _DEFAULT_FILE_NAME)\n )\n\n url = 'https://www.ietf.org/timezones/data/leap-seconds.list'\n r = requests.get(url)\n\n if r.status_code != 200:\n msg = 'Unable to locate latest timezone data. 
Connection to IETF failed'\n log.error(msg)\n raise ValueError(msg)\n\n text = r.text.split('\\n')\n lines = [l for l in text if l.startswith('#@') or not l.startswith('#')]\n\n data = {'valid': None, 'leapseconds': []}\n data['valid'] = datetime.datetime(1900, 1, 1) + datetime.timedelta(seconds=int(lines[0].split('\\t')[1]))\n\n leap = 1\n for l in lines[1:-1]:\n t = datetime.datetime(1900, 1, 1) + datetime.timedelta(seconds=int(l.split('\\t')[0]))\n if t < GPS_Epoch:\n continue\n\n data['leapseconds'].append((t, leap))\n leap += 1\n\n self._data = data\n with open(ls_file, 'w') as outfile:\n pickle.dump(data, outfile)", "response": "Updates the system leap second information from the system leap second list file and updates the leap second config file."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef wait(self):\n for greenlet in (self.greenlets + self.servers):\n log.info(\"Starting {} greenlet...\".format(greenlet))\n greenlet.start()\n\n gevent.joinall(self.greenlets)", "response": "Starts all greenlets and joins all servers."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _load_streams(self):\n common_err_msg = 'No valid {} stream configurations found. 
'\n specific_err_msg = {'inbound': 'No data will be received (or displayed).',\n 'outbound': 'No data will be published.'}\n err_msgs = {}\n\n for stream_type in ['inbound', 'outbound']:\n err_msgs[stream_type] = common_err_msg.format(stream_type) + specific_err_msg[stream_type]\n streams = ait.config.get('server.{}-streams'.format(stream_type))\n\n if streams is None:\n log.warn(err_msgs[stream_type])\n else:\n for index, s in enumerate(streams):\n try:\n if stream_type == 'inbound':\n strm = self._create_inbound_stream(s['stream'])\n if type(strm) == PortInputStream:\n self.servers.append(strm)\n else:\n self.inbound_streams.append(strm)\n elif stream_type == 'outbound':\n strm = self._create_outbound_stream(s['stream'])\n self.outbound_streams.append(strm)\n log.info('Added {} stream {}'.format(stream_type, strm))\n except Exception:\n exc_type, value, tb = sys.exc_info()\n log.error('{} creating {} stream {}: {}'.format(exc_type,\n stream_type,\n index,\n value))\n if not self.inbound_streams and not self.servers:\n log.warn(err_msgs['inbound'])\n\n if not self.outbound_streams:\n log.warn(err_msgs['outbound'])", "response": "Reads and creates streams specified in config.yaml."}
{"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncreate an inbound stream from its config.", "response": "def _create_inbound_stream(self, config=None):\n \"\"\"\n Creates an inbound stream from its config.\n\n Params:\n config: stream configuration as read by ait.config\n Returns:\n stream: a Stream\n Raises:\n ValueError: if any of the required config values are missing\n \"\"\"\n if config is None:\n raise ValueError('No stream config to create stream from.')\n\n name = self._get_stream_name(config)\n stream_handlers = self._get_stream_handlers(config, name)\n stream_input = config.get('input', None)\n if stream_input is None:\n raise(cfg.AitConfigMissing('inbound stream {}\'s input'.format(name)))\n\n if type(stream_input[0]) is int:\n return PortInputStream(name,\n stream_input,\n stream_handlers,\n zmq_args={'zmq_context': self.broker.context,\n 'zmq_proxy_xsub_url': self.broker.XSUB_URL,\n 'zmq_proxy_xpub_url': self.broker.XPUB_URL})\n else:\n return ZMQStream(name,\n stream_input,\n stream_handlers,\n zmq_args={'zmq_context': self.broker.context,\n 'zmq_proxy_xsub_url': self.broker.XSUB_URL,\n 'zmq_proxy_xpub_url': self.broker.XPUB_URL})"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncreating an outbound stream from its config.", "response": "def _create_outbound_stream(self, config=None):\n \"\"\"\n Creates an outbound stream from its config.\n\n Params:\n config: stream configuration as read by ait.config\n Returns:\n stream: a Stream\n Raises:\n ValueError: if any of the required config values are missing\n \"\"\"\n if config is None:\n raise ValueError('No stream config to create stream from.')\n\n name = self._get_stream_name(config)\n stream_handlers = self._get_stream_handlers(config, name)\n stream_input = config.get('input', None)\n stream_output = config.get('output', None)\n\n if type(stream_output) is int:\n return PortOutputStream(name,\n stream_input,\n stream_output,\n 
stream_handlers,\n zmq_args={'zmq_context': self.broker.context,\n 'zmq_proxy_xsub_url': self.broker.XSUB_URL,\n 'zmq_proxy_xpub_url': self.broker.XPUB_URL})\n else:\n if stream_output is not None:\n log.warn(\"Output of stream {} is not an integer port. \"\n \"Stream outputs can only be ports.\".format(name))\n return ZMQStream(name,\n stream_input,\n stream_handlers,\n zmq_args={'zmq_context': self.broker.context,\n 'zmq_proxy_xsub_url': self.broker.XSUB_URL,\n 'zmq_proxy_xpub_url': self.broker.XPUB_URL})"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncreate a handler from its config.", "response": "def _create_handler(self, config):\n \"\"\"\n Creates a handler from its config.\n\n Params:\n config: handler config\n Returns:\n handler instance\n \"\"\"\n if config is None:\n raise ValueError('No handler config to create handler from.')\n\n if 'name' not in config:\n raise ValueError('Handler name is required.')\n\n handler_name = config['name']\n # try to create handler\n module_name = handler_name.rsplit('.', 1)[0]\n class_name = handler_name.rsplit('.', 1)[-1]\n module = import_module(module_name)\n handler_class = getattr(module, class_name)\n instance = handler_class(**config)\n\n return instance"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _load_plugins(self):\n plugins = ait.config.get('server.plugins')\n\n if plugins is None:\n log.warn('No plugins specified in config.')\n else:\n for index, p in enumerate(plugins):\n try:\n plugin = self._create_plugin(p['plugin'])\n self.plugins.append(plugin)\n log.info('Added plugin {}'.format(plugin))\n\n except Exception:\n exc_type, value, tb = sys.exc_info()\n log.error('{} creating plugin {}: {}'.format(exc_type,\n index,\n value))\n if not self.plugins:\n log.warn('No valid plugin configurations found. No plugins will be added.')", "response": "Reads, parses, and creates plugins specified in config.yaml."}
{"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _create_plugin(self, config):\n if config is None:\n raise ValueError('No plugin config to create plugin from.')\n\n name = config.pop('name', None)\n if name is None:\n raise(cfg.AitConfigMissing('plugin name'))\n\n # TODO I don't think we actually care about this being unique? Left over from\n # previous conversations about stuff?\n module_name = name.rsplit('.', 1)[0]\n class_name = name.rsplit('.', 1)[-1]\n if class_name in [x.name for x in (self.outbound_streams +\n self.inbound_streams +\n self.servers +\n self.plugins)]:\n raise ValueError(\n 'Plugin \"{}\" already loaded. Only one plugin of a given name is allowed'.\n format(class_name)\n )\n\n plugin_inputs = config.pop('inputs', None)\n if plugin_inputs is None:\n log.warn('No plugin inputs specified for {}'.format(name))\n plugin_inputs = [ ]\n\n subscribers = config.pop('outputs', None)\n if subscribers is None:\n log.warn('No plugin outputs specified for {}'.format(name))\n subscribers = [ ]\n\n # try to create plugin\n module = import_module(module_name)\n plugin_class = getattr(module, class_name)\n instance = plugin_class(plugin_inputs,\n subscribers,\n zmq_args={'zmq_context': self.broker.context,\n 'zmq_proxy_xsub_url': self.broker.XSUB_URL,\n 'zmq_proxy_xpub_url': self.broker.XPUB_URL},\n **config\n )\n\n return instance", "response": "Create a new plugin from its config."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef slotsToJSON(obj, slots=None):\n if slots is None:\n slots = list(obj.__slots__) if hasattr(obj, '__slots__') else [ ]\n for base in obj.__class__.__bases__:\n if hasattr(base, '__slots__'):\n slots.extend(base.__slots__)\n\n testOmit = hasattr(obj, '__jsonOmit__') and callable(obj.__jsonOmit__)\n result = { }\n\n for slot in slots:\n key = slot[1:] if slot.startswith('_') else slot\n val = getattr(obj, slot, 
None)\n\n if testOmit is False or obj.__jsonOmit__(key, val) is False:\n result[key] = toJSON(val)\n\n return result", "response": "Converts the given Python object to one suitable for Javascript\n Object Notation (JSON) serialization via json.dumps."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef toJSON (obj):\n if hasattr(obj, 'toJSON') and callable(obj.toJSON):\n result = obj.toJSON()\n elif isinstance(obj, (int, float, str)) or obj is None:\n result = obj\n elif isinstance(obj, collections.abc.Mapping):\n result = { toJSON(key): toJSON(obj[key]) for key in obj }\n elif isinstance(obj, collections.abc.Sequence):\n result = [ toJSON(item) for item in obj ]\n else:\n result = str(obj)\n\n return result", "response": "Converts a Python object to one suitable for Javascript\n Object Notation (JSON) serialization via json.dumps."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nloop ait.config._datapaths from AIT_CONFIG and creates a directory structure.", "response": "def createDirStruct(paths, verbose=True):\n '''Loops ait.config._datapaths from AIT_CONFIG and creates a directory.\n\n Replaces year and doy with the respective year and day-of-year.\n If neither are given as arguments, current UTC day and year are used.\n\n Args:\n paths:\n [optional] list of directory paths you would like to create.\n doy and year will be replaced by the datetime day and year, respectively.\n\n datetime:\n UTC Datetime string in ISO 8601 Format YYYY-MM-DDTHH:mm:ssZ\n\n '''\n for k, path in paths.items():\n p = None\n try:\n pathlist = path if type(path) is list else [ path ]\n for p in pathlist:\n os.makedirs(p)\n if verbose:\n log.info('Creating directory: ' + p)\n except OSError as e:\n #print path\n if e.errno == errno.EEXIST and os.path.isdir(p):\n pass\n else:\n raise\n\n return True"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nwrites packet data to the logger's log file.", "response": "def capture_packet(self):\n ''' Write packet data to the logger's log file. '''\n data = self.socket.recv(self._buffer_size)\n\n for h in self.capture_handlers:\n h['reads'] += 1\n h['data_read'] += len(data)\n\n d = data\n if 'pre_write_transforms' in h:\n for data_transform in h['pre_write_transforms']:\n d = data_transform(d)\n h['logger'].write(d)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef clean_up(self):\n ''' Clean up the socket and log file handles. '''\n self.socket.close()\n for h in self.capture_handlers:\n h['logger'].close()", "response": "Clean up the socket and log file handles."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nmonitor the socket and log captured data.", "response": "def socket_monitor_loop(self):\n ''' Monitor the socket and log captured data. '''\n try:\n while True:\n gevent.socket.wait_read(self.socket.fileno())\n\n self._handle_log_rotations()\n self.capture_packet()\n finally:\n self.clean_up()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nadding an additional handler for the", "response": "def add_handler(self, handler):\n ''' Add an additional handler\n\n Args:\n handler:\n A dictionary of handler configuration for the handler\n that should be added. 
See :func:`__init__` for details\n on valid parameters.\n '''\n handler['logger'] = self._get_logger(handler)\n handler['reads'] = 0\n handler['data_read'] = 0\n\n self.capture_handlers.append(handler)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nremoving a handler given a name.", "response": "def remove_handler(self, name):\n ''' Remove a handler given a name\n\n Note, if multiple handlers have the same name the last matching\n instance in the handler list will be removed.\n\n Args:\n name:\n The name of the handler to remove\n '''\n index = None\n for i, h in enumerate(self.capture_handlers):\n if h['name'] == name:\n index = i\n\n if index is not None:\n self.capture_handlers[index]['logger'].close()\n del self.capture_handlers[index]"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning a dictionary of capture handler configuration data.", "response": "def dump_handler_config_data(self):\n ''' Return capture handler configuration data.\n\n Return a dictionary of capture handler configuration data of the form:\n\n .. code-block:: none\n\n [{\n 'handler': ,\n\n 'log_file_path': ,\n\n 'conn_type': ,\n\n 'address': \n }, ...]\n\n '''\n ignored_keys = ['logger', 'log_rot_time', 'reads', 'data_read']\n config_data = []\n for h in self.capture_handlers:\n config_data.append({\n 'handler': {\n k:v for k, v in h.iteritems()\n if k not in ignored_keys\n },\n 'log_file_path': h['logger']._stream.name,\n 'conn_type': self.conn_type,\n 'address': self.address,\n })\n return config_data"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef dump_all_handler_stats(self):\n ''' Return handler capture statistics\n\n Return a dictionary of capture handler statistics of the form:\n\n .. 
code-block:: none\n\n [{\n 'name': The handler's name,\n\n 'reads': The number of packet reads this handler has received\n\n 'data_read_length': The total length of the data received\n\n 'approx_data_rate': The approximate data rate for this handler\n }, ...]\n\n '''\n stats = []\n for h in self.capture_handlers:\n now = calendar.timegm(time.gmtime())\n rot_time = calendar.timegm(h['log_rot_time'])\n time_delta = now - rot_time\n approx_data_rate = '{} bytes/second'.format(h['data_read'] / float(time_delta))\n\n stats.append({\n 'name': h['name'],\n 'reads': h['reads'],\n 'data_read_length': '{} bytes'.format(h['data_read']),\n 'approx_data_rate': approx_data_rate\n })\n\n return stats", "response": "Dump the capture statistics for all handler in the system."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nrotates each handler s log file if necessary", "response": "def _handle_log_rotations(self):\n ''' Rotate each handler's log file if necessary '''\n for h in self.capture_handlers:\n if self._should_rotate_log(h):\n self._rotate_log(h)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _should_rotate_log(self, handler):\n ''' Determine if a log file rotation is necessary '''\n if handler['rotate_log']:\n rotate_time_index = handler.get('rotate_log_index', 'day')\n try:\n rotate_time_index = self._decode_time_rotation_index(rotate_time_index)\n except ValueError:\n rotate_time_index = 2\n\n rotate_time_delta = handler.get('rotate_log_delta', 1)\n\n cur_t = time.gmtime()\n first_different_index = 9\n for i in range(9):\n if cur_t[i] != handler['log_rot_time'][i]:\n first_different_index = i\n break\n\n if first_different_index < rotate_time_index:\n # If the time deltas differ by a time step greater than what we\n # have set for the rotation (I.e., months instead of days) we will\n # automatically rotate.\n return True\n else:\n time_delta = cur_t[rotate_time_index] - 
handler['log_rot_time'][rotate_time_index]\n return time_delta >= rotate_time_delta\n\n return False", "response": "Determine if a log file rotation is necessary"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn the time struct index to use for log rotation checks", "response": "def _decode_time_rotation_index(self, time_rot_index):\n ''' Return the time struct index to use for log rotation checks '''\n time_index_decode_table = {\n 'year': 0, 'years': 0, 'tm_year': 0,\n 'month': 1, 'months': 1, 'tm_mon': 1,\n 'day': 2, 'days': 2, 'tm_mday': 2,\n 'hour': 3, 'hours': 3, 'tm_hour': 3,\n 'minute': 4, 'minutes': 4, 'tm_min': 4,\n 'second': 5, 'seconds': 5, 'tm_sec': 5,\n }\n\n if time_rot_index not in time_index_decode_table.keys():\n raise ValueError('Invalid time option specified for log rotation')\n\n return time_index_decode_table[time_rot_index]"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngenerates a log file path for a given handler.", "response": "def _get_log_file(self, handler):\n ''' Generate log file path for a given handler\n\n Args:\n handler:\n The handler configuration dictionary for which a log file\n path should be generated.\n '''\n if 'file_name_pattern' not in handler:\n filename = '%Y-%m-%d-%H-%M-%S-{name}.pcap'\n else:\n filename = handler['file_name_pattern']\n\n log_file = handler['log_dir']\n if 'path' in handler:\n log_file = os.path.join(log_file, handler['path'], filename)\n else:\n log_file = os.path.join(log_file, filename)\n\n log_file = time.strftime(log_file, time.gmtime())\n log_file = log_file.format(**handler)\n\n return log_file"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ninitializes a PCAP stream for logging data", "response": "def _get_logger(self, handler):\n ''' Initialize a PCAP stream for logging data '''\n log_file = self._get_log_file(handler)\n\n if not 
os.path.isdir(os.path.dirname(log_file)):\n os.makedirs(os.path.dirname(log_file))\n\n handler['log_rot_time'] = time.gmtime()\n return pcap.open(log_file, mode='a')"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nadds a new capturer to the manager.", "response": "def add_logger(self, name, address, conn_type, log_dir_path=None, **kwargs):\n ''' Add a new stream capturer to the manager.\n\n Add a new stream capturer to the manager with the provided configuration\n details. If an existing capturer is monitoring the same address the\n new handler will be added to it.\n\n Args:\n name:\n A string defining the new capturer's name.\n\n address:\n A tuple containing address data for the capturer. Check the\n :class:`SocketStreamCapturer` documentation for what is\n required.\n\n conn_type:\n A string defining the connection type. Check the\n :class:`SocketStreamCapturer` documentation for a list of valid\n options.\n\n log_dir_path:\n An optional path defining the directory where the\n capturer should write its files. 
If this isn't provided the root\n log directory from the manager configuration is used.\n\n '''\n capture_handler_conf = kwargs\n\n if not log_dir_path:\n log_dir_path = self._mngr_conf['root_log_directory']\n\n log_dir_path = os.path.normpath(os.path.expanduser(log_dir_path))\n\n capture_handler_conf['log_dir'] = log_dir_path\n capture_handler_conf['name'] = name\n if 'rotate_log' not in capture_handler_conf:\n capture_handler_conf['rotate_log'] = True\n\n transforms = []\n if 'pre_write_transforms' in capture_handler_conf:\n for transform in capture_handler_conf['pre_write_transforms']:\n if isinstance(transform, str):\n if globals().has_key(transform):\n transforms.append(globals().get(transform))\n else:\n msg = (\n 'Unable to load data transformation '\n '\"{}\" for handler \"{}\"'\n ).format(\n transform,\n capture_handler_conf['name']\n )\n log.warn(msg)\n elif hasattr(transform, '__call__'):\n transforms.append(transform)\n else:\n msg = (\n 'Unable to determine how to load data transform \"{}\"'\n ).format(transform)\n log.warn(msg)\n capture_handler_conf['pre_write_transforms'] = transforms\n\n address_key = str(address)\n if address_key in self._stream_capturers:\n capturer = self._stream_capturers[address_key][0]\n capturer.add_handler(capture_handler_conf)\n return\n\n socket_logger = SocketStreamCapturer(capture_handler_conf,\n address,\n conn_type)\n greenlet = gevent.spawn(socket_logger.socket_monitor_loop)\n\n self._stream_capturers[address_key] = (\n socket_logger,\n greenlet\n )\n self._pool.add(greenlet)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nstops capture handler with a given name.", "response": "def stop_capture_handler(self, name):\n ''' Remove all handlers with a given name\n\n Args:\n name:\n The name of the handler(s) to remove.\n '''\n empty_capturers_indeces = []\n for k, sc in self._stream_capturers.iteritems():\n stream_capturer = sc[0]\n stream_capturer.remove_handler(name)\n\n if 
stream_capturer.handler_count == 0:\n self._pool.killone(sc[1])\n empty_capturers_indeces.append(k)\n\n for i in empty_capturers_indeces:\n del self._stream_capturers[i]"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nstop a capturer that the manager controls.", "response": "def stop_stream_capturer(self, address):\n ''' Stop a capturer that the manager controls.\n\n Args:\n address:\n An address array of the form ['host', 'port'] or similar\n depending on the connection type of the stream capturer being\n terminated. The capturer for the address will be terminated\n along with all handlers for that capturer if the address is\n that of a managed capturer.\n\n Raises:\n ValueError:\n The provided address doesn't match a capturer that is\n currently managed.\n '''\n address = str(address)\n if address not in self._stream_capturers:\n raise ValueError('Capturer address does not match a managed capturer')\n\n stream_cap = self._stream_capturers[address]\n self._pool.killone(stream_cap[1])\n del self._stream_capturers[address]"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nforce a rotation of a handler s log file", "response": "def rotate_capture_handler_log(self, name):\n ''' Force a rotation of a handler's log file\n\n Args:\n name:\n The name of the handler who's log file should be rotated.\n '''\n for sc_key, sc in self._stream_capturers.iteritems():\n for h in sc[0].capture_handlers:\n if h['name'] == name:\n sc[0]._rotate_log(h)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_logger_data(self):\n ''' Return data on managed loggers.\n\n Returns a dictionary of managed logger configuration data. 
The format\n is primarily controlled by the\n :func:`SocketStreamCapturer.dump_handler_config_data` function::\n\n {\n : \n }\n\n '''\n return {\n address : stream_capturer[0].dump_handler_config_data()\n for address, stream_capturer in self._stream_capturers.iteritems()\n }", "response": "Return data on managed loggers.\n Returns a dictionary of managed loggers."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning handler read statistics Returns a dictionary of managed handler read statistics. The format is primarily controlled by the SocketStreamCapturer. dump_all_handler_stats", "response": "def get_handler_stats(self):\n ''' Return handler read statistics\n\n Returns a dictionary of managed handler data read statistics. The\n format is primarily controlled by the\n :func:`SocketStreamCapturer.dump_all_handler_stats` function::\n\n {\n : \n }\n\n '''\n return {\n address : stream_capturer[0].dump_all_handler_stats()\n for address, stream_capturer in self._stream_capturers.iteritems()\n }"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_capture_handler_config_by_name(self, name):\n ''' Return data for handlers of a given name.\n\n Args:\n name:\n Name of the capture handler(s) to return config data for.\n\n Returns:\n Dictionary dump from the named capture handler as given by\n the :func:`SocketStreamCapturer.dump_handler_config_data` method.\n '''\n handler_confs = []\n for address, stream_capturer in self._stream_capturers.iteritems():\n handler_data = stream_capturer[0].dump_handler_config_data()\n for h in handler_data:\n if h['handler']['name'] == name:\n handler_confs.append(h)\n\n return handler_confs", "response": "Return data for a given name."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef run_socket_event_loop(self):\n ''' Start monitoring managed loggers. 
'''\n try:\n while True:\n self._pool.join()\n\n # If we have no loggers we'll sleep briefly to ensure that we\n # allow other processes (I.e., the webserver) to do their work.\n if len(self._logger_data.keys()) == 0:\n time.sleep(0.5)\n\n except KeyboardInterrupt:\n pass\n finally:\n self._pool.kill()", "response": "Start monitoring managed loggers."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _route(self):\n ''' Handles server route instantiation. '''\n self._app.route('/',\n method='GET',\n callback=self._get_logger_list)\n self._app.route('/stats',\n method='GET',\n callback=self._fetch_handler_stats)\n self._app.route('/<name>/start',\n method='POST',\n callback=self._add_logger_by_name)\n self._app.route('/<name>/stop',\n method='DELETE',\n callback=self._stop_logger_by_name)\n self._app.route('/<name>/config',\n method='GET',\n callback=self._get_logger_conf)\n self._app.route('/<name>/rotate',\n method='POST',\n callback=self._rotate_capturer_log)", "response": "Handles server route instantiation."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nhandle POST requests for adding a new logger.", "response": "def _add_logger_by_name(self, name):\n ''' Handles POST requests for adding a new logger.\n\n Expects logger configuration to be passed in the request's query string.\n The logger name is included in the URL and the address components and\n connection type should be included as well. 
The loc attribute is\n defaulted to \"localhost\" when making the socket connection if not\n defined.\n\n loc = IP / interface\n port = port / protocol\n conn_type = udp or ethernet\n\n Raises:\n ValueError:\n if the port or connection type are not supplied.\n '''\n data = dict(request.forms)\n loc = data.pop('loc', '')\n port = data.pop('port', None)\n conn_type = data.pop('conn_type', None)\n\n if not port or not conn_type:\n e = 'Port and/or conn_type not set'\n raise ValueError(e)\n address = [loc, int(port)]\n\n if 'rotate_log' in data:\n data['rotate_log'] = data['rotate_log'] == 'true'\n\n if 'rotate_log_delta' in data:\n data['rotate_log_delta'] = int(data['rotate_log_delta'])\n\n self._logger_manager.add_logger(name, address, conn_type, **data)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef handle_includes(defns):\n '''Recursive handling of includes for any input list of defns.\n The assumption here is that when an include is handled by the\n pyyaml reader, it adds them as a list, which stands apart from the rest\n of the expected YAML definitions.\n '''\n newdefns = []\n for d in defns:\n if isinstance(d, list):\n newdefns.extend(handle_includes(d))\n else:\n newdefns.append(d)\n\n return newdefns", "response": "Recursive handling of includes for any input list of defns."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn the result of evaluating this DNToEUConversion in the context of the given Packet.", "response": "def eval(self, packet):\n \"\"\"Returns the result of evaluating this DNToEUConversion in the\n context of the given Packet.\n \"\"\"\n result = None\n terms = None\n\n if self._when is None or self._when.eval(packet):\n result = self._equation.eval(packet)\n\n return result"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn True if the given field value is valid False otherwise.", "response": "def 
validate(self, value, messages=None):\n \"\"\"Returns True if the given field value is valid, False otherwise.\n Validation error messages are appended to an optional messages\n array.\n \"\"\"\n valid = True\n primitive = value\n\n def log(msg):\n if messages is not None:\n messages.append(msg)\n\n if self.enum:\n if value not in self.enum.values():\n valid = False\n flds = (self.name, str(value))\n log(\"%s value '%s' not in allowed enumerated values.\" % flds)\n else:\n primitive = int(self.enum.keys()[self.enum.values().index(value)])\n\n if self.type:\n if self.type.validate(primitive, messages, self.name) is False:\n valid = False\n\n return valid"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ndecode the given bytes according to this Field Definition.", "response": "def decode(self, bytes, raw=False, index=None):\n \"\"\"Decodes the given bytes according to this Field Definition.\n\n If raw is True, no enumeration substitutions will be applied\n to the data returned.\n\n If index is an integer or slice (and the type of this\n FieldDefinition is an ArrayType), then only the element(s) at\n the specified position(s) will be decoded.\n \"\"\"\n if index is not None and isinstance(self.type, dtype.ArrayType):\n value = self.type.decode( bytes[self.slice()], index, raw )\n else:\n value = self.type.decode( bytes[self.slice()], raw )\n\n\n # Apply bit mask if needed\n if self.mask is not None:\n value &= self.mask\n\n if self.shift > 0:\n value >>= self.shift\n\n if not raw and self.enum is not None:\n value = self.enum.get(value, value)\n\n return value"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef encode(self, value):\n if type(value) == str and self.enum and value in self.enum:\n value = self.enum[value]\n\n if type(value) == int:\n if self.shift > 0:\n value <<= self.shift\n if self.mask is not None:\n value &= self.mask\n\n return self.type.encode(value) if 
self.type else bytearray()", "response": "Encodes the given value according to this FieldDefinition."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef slice(self, offset=0):\n if self.bytes is None:\n start = 0\n stop = start + self.nbytes\n elif type(self.bytes) is int:\n start = self.bytes\n stop = start + self.nbytes\n else:\n start = self.bytes[0]\n stop = self.bytes[1] + 1\n\n return slice(start + offset, stop + offset)", "response": "Returns a Python slice object that is used to retrieve the start and stop byte positions of this Telemetry field."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _assertField(self, fieldname):\n if not self._hasattr(fieldname):\n values = self._defn.name, fieldname\n raise AttributeError(\"Packet '%s' has no field '%s'\" % values)", "response": "Raise AttributeError when Packet has no field with the given fieldname."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning the value of the given field name.", "response": "def _getattr (self, fieldname, raw=False, index=None):\n \"\"\"Returns the value of the given packet field name.\n\n If raw is True, the field value is only decoded. 
That is, no\n enumeration substitutions or DN-to-EU conversions are applied.\n \"\"\"\n self._assertField(fieldname)\n value = None\n\n if fieldname == 'raw':\n value = createRawPacket(self)\n elif fieldname == 'history':\n value = self._defn.history\n else:\n if fieldname in self._defn.derivationmap:\n defn = self._defn.derivationmap[fieldname]\n else:\n defn = self._defn.fieldmap[fieldname]\n\n if isinstance(defn.type, dtype.ArrayType) and index is None:\n return createFieldList(self, defn, raw)\n\n if defn.when is None or defn.when.eval(self):\n if isinstance(defn, DerivationDefinition):\n value = defn.equation.eval(self)\n elif raw or (defn.dntoeu is None and defn.expr is None):\n value = defn.decode(self._data, raw, index)\n elif defn.dntoeu is not None:\n value = defn.dntoeu.eval(self)\n elif defn.expr is not None:\n value = defn.expr.eval(self)\n\n return value"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns True if this packet contains fieldname False otherwise.", "response": "def _hasattr(self, fieldname):\n \"\"\"Returns True if this packet contains fieldname, False otherwise.\"\"\"\n special = 'history', 'raw'\n return (fieldname in special or \n fieldname in self._defn.fieldmap or\n fieldname in self._defn.derivationmap)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef nbytes(self):\n max_byte = -1\n\n for defn in self.fields:\n byte = defn.bytes if type(defn.bytes) is int else max(defn.bytes)\n max_byte = max(max_byte, byte)\n\n return max_byte + 1", "response": "The number of bytes for this telemetry packet"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning True if the given Packet is valid False otherwise.", "response": "def validate(self, pkt, messages=None):\n \"\"\"Returns True if the given Packet is valid, False otherwise.\n Validation error messages are appended to an optional messages\n array.\n \"\"\"\n valid = 
True\n\n for f in self.fields:\n try:\n value = getattr(pkt, f.name)\n except AttributeError:\n valid = False\n if messages is not None:\n msg = \"Telemetry field mismatch for packet '%s'. \"\n msg += \"Unable to retrieve value for %s in Packet.\"\n values = self.name, f.name\n messages.append(msg % values)\n break\n\n if f.validate(value, messages) is False:\n valid = False\n\n return valid"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef eval(self, packet):\n try:\n context = createPacketContext(packet)\n result = eval(self._code, packet._defn.globals, context)\n except ZeroDivisionError:\n result = None\n\n return result", "response": "Returns the result of evaluating this PacketExpression in the\n context of the given Packet."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nraises AttributeError when PacketHistory has no field with the given name.", "response": "def _assertField(self, name):\n \"\"\"Raise AttributeError when PacketHistory has no field with the given\n name.\n \"\"\"\n if name not in self._names:\n msg = 'PacketHistory \"%s\" has no field \"%s\"'\n values = self._defn.name, name\n raise AttributeError(msg % values)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nadds the given Packet to this PacketHistory.", "response": "def add(self, packet):\n \"\"\"Add the given Packet to this PacketHistory.\"\"\"\n for name in self._names:\n value = getattr(packet, name)\n if value is not None:\n self._dict[name] = value"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef add(self, defn):\n if defn.name not in self:\n self[defn.name] = defn\n else:\n msg = \"Duplicate packet name '%s'\" % defn.name\n log.error(msg)\n raise util.YAMLError(msg)", "response": "Adds the given Packet Definition to this Telemetry Dictionary."} {"SOURCE": 
"codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef create(self, name, data=None):\n return createPacket(self[name], data) if name in self else None", "response": "Creates a new packet with the given definition and raw data."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nload Packet Definitions from the given YAML content into this Telemetry Dictionary.", "response": "def load(self, content):\n \"\"\"Loads Packet Definitions from the given YAML content into this\n Telemetry Dictionary. Content may be either a filename\n containing YAML content or a YAML string.\n\n Load has no effect if this Command Dictionary was already\n instantiated with a filename or YAML content.\n \"\"\"\n if self.filename is None:\n if os.path.isfile(content):\n self.filename = content\n stream = open(self.filename, 'rb')\n else:\n stream = content\n\n pkts = yaml.load(stream)\n pkts = handle_includes(pkts)\n for pkt in pkts:\n self.add(pkt)\n\n if type(stream) is file:\n stream.close()"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef writeToCSV(self, output_path=None):\n '''writeToCSV - write the telemetry dictionary to csv\n '''\n header = ['Name', 'First Byte', 'Last Byte', 'Bit Mask', 'Endian',\n 'Type', 'Description', 'Values']\n\n if output_path is None:\n output_path = ait.config._directory\n\n for pkt_name in self.tlmdict:\n filename = os.path.join(output_path, pkt_name + '.csv')\n\n with open(filename, 'wb') as output:\n csvwriter = csv.writer(output, quoting=csv.QUOTE_ALL)\n csvwriter.writerow(header)\n\n for fld in self.tlmdict[pkt_name].fields:\n # Pre-process some fields\n\n # Description\n desc = fld.desc.replace('\\n', ' ') if fld.desc is not None else \"\"\n\n # Mask\n mask = hex(fld.mask) if fld.mask is not None else \"\"\n\n # Enumerations\n enums = '\\n'.join(\"%s: %s\" % (k, fld.enum[k])\n for k in fld.enum) if fld.enum is not None 
else \"\"\n\n # Set row\n row = [fld.name, fld.slice().start, fld.slice().stop,\n mask, fld.type.endian, fld.type.name, desc, enums]\n\n csvwriter.writerow(row)", "response": "writeToCSV - write the telemetry dictionary to CSV"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef decode(self, bytes):\n value = self.type.decode(bytes)\n if self._enum is not None:\n for name, val in self._enum.items():\n if value == val:\n value = name\n break\n return value", "response": "Decodes the given bytes according to this AIT Argument\n Definition."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef encode(self, value):\n if type(value) == str and self.enum and value in self.enum:\n value = self.enum[value]\n return self.type.encode(value) if self.type else bytearray()", "response": "Encodes the given value according to this AIT Argument\n Definition."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn True if the given Argument value is valid False otherwise.", "response": "def validate(self, value, messages=None):\n \"\"\"Returns True if the given Argument value is valid, False otherwise.\n Validation error messages are appended to an optional messages\n array.\n \"\"\"\n valid = True\n primitive = value\n\n def log(msg):\n if messages is not None:\n messages.append(msg)\n\n if self.enum:\n if value not in self.enum.keys():\n valid = False\n args = (self.name, str(value))\n log(\"%s value '%s' not in allowed enumerated values.\" % args)\n else:\n primitive = int(self.enum[value])\n\n if self.type:\n if self.type.validate(primitive, messages, self.name) is False:\n valid = False\n\n if self.range:\n if primitive < self.range[0] or primitive > self.range[1]:\n valid = False\n args = (self.name, str(primitive), self.range[0], self.range[1])\n log(\"%s value '%s' out of range [%d, %d].\" % args)\n\n return valid"} 
{"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nencode this AIT command to binary.", "response": "def encode(self, pad=106):\n \"\"\"Encodes this AIT command to binary.\n\n If pad is specified, it indicates the maximum size of the encoded\n command in bytes. If the encoded command is less than pad, the\n remaining bytes are set to zero.\n\n Commands sent to ISS payloads over 1553 are limited to 64 words\n (128 bytes) with 11 words (22 bytes) of CCSDS overhead (SSP\n 52050J, Section 3.2.3.4). This leaves 53 words (106 bytes) for\n the command itself.\n \"\"\"\n opcode = struct.pack('>H', self.defn.opcode)\n offset = len(opcode)\n size = max(offset + self.defn.argsize, pad)\n encoded = bytearray(size)\n\n encoded[0:offset] = opcode\n encoded[offset] = self.defn.argsize\n offset += 1\n index = 0\n\n for defn in self.defn.argdefns:\n if defn.fixed:\n value = defn.value\n else:\n value = self.args[index]\n index += 1\n encoded[defn.slice(offset)] = defn.encode(value)\n\n return encoded"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef nbytes(self):\n return len(self.opcode) + 1 + sum(arg.nbytes for arg in self.argdefns)", "response": "The number of bytes required to encode this command."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef argsize(self):\n argsize = sum(arg.nbytes for arg in self.argdefns)\n return argsize if len(self.argdefns) > 0 else 0", "response": "The total size in bytes of all the command arguments."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef validate(self, cmd, messages=None):\n valid = True\n args = [ arg for arg in cmd.args if arg is not None ]\n\n if self.nargs != len(args):\n valid = False\n if messages is not None:\n msg = 'Expected %d arguments, but received %d.'\n messages.append(msg % (self.nargs, 
len(args)))\n\n for defn, value in zip(self.args, cmd.args):\n if value is None:\n valid = False\n if messages is not None:\n messages.append('Argument \"%s\" is missing.' % defn.name)\n elif defn.validate(value, messages) is False:\n valid = False\n\n if len(cmd._unrecognized) > 0:\n valid = False\n if messages is not None:\n for name in cmd.unrecognized:\n messages.append('Argument \"%s\" is unrecognized.' % name)\n\n return valid", "response": "Returns True if the given Command is valid, False otherwise. Validation error messages are appended to an optional messages array."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nadd the given Command Definition to this Command Dictionary.", "response": "def add(self, defn):\n \"\"\"Adds the given Command Definition to this Command Dictionary.\"\"\"\n self[defn.name] = defn\n self.opcodes[defn._opcode] = defn"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef create(self, name, *args, **kwargs):\n tokens = name.split()\n\n if len(tokens) > 1 and (len(args) > 0 or len(kwargs) > 0):\n msg = 'A Cmd may be created with either positional arguments '\n msg += '(passed as a string or a Python list) or keyword '\n msg += 'arguments, but not both.'\n raise TypeError(msg)\n\n if len(tokens) > 1:\n name = tokens[0]\n args = [ util.toNumber(t, t) for t in tokens[1:] ]\n\n defn = self.get(name, None)\n\n if defn is None:\n raise TypeError('Unrecognized command: %s' % name)\n\n return createCmd(defn, *args, **kwargs)", "response": "Creates a new AIT command with the given arguments."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef decode(self, bytes):\n opcode = struct.unpack(\">H\", bytes[0:2])[0]\n nbytes = struct.unpack(\"B\", bytes[2:3])[0]\n name = None\n args = []\n\n if opcode in 
self.opcodes:\n defn = self.opcodes[opcode]\n name = defn.name\n stop = 3\n\n for arg in defn.argdefns:\n start = stop\n stop = start + arg.nbytes\n if arg.fixed:\n pass # FIXME: Confirm fixed bytes are as expected?\n else:\n args.append(arg.decode(bytes[start:stop]))\n\n return self.create(name, *args)", "response": "Decodes the given bytes according to this AIT Command\n Definition."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nload Command Definitions from the given YAML content into this Command Dictionary.", "response": "def load(self, content):\n \"\"\"Loads Command Definitions from the given YAML content into\n into this Command Dictionary. Content may be either a\n filename containing YAML content or a YAML string.\n\n Load has no effect if this Command Dictionary was already\n instantiated with a filename or YAML content.\n \"\"\"\n if self.filename is None:\n if os.path.isfile(content):\n self.filename = content\n stream = open(self.filename, 'rb')\n else:\n stream = content\n\n for cmd in yaml.load(stream):\n self.add(cmd)\n\n if type(stream) is file:\n stream.close()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning the cube root of x.", "response": "def cbrt (x):\n \"\"\"Returns the cube root of x.\"\"\"\n if x >= 0: \n return math.pow(x , 1.0 / 3.0)\n else:\n return - math.pow(abs(x), 1.0 / 3.0)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nconvert the given ECI coordinates to ECEF at the given Greenwich Mean Sidereal Time.", "response": "def eci2ecef (x, y, z, gmst=None):\n \"\"\"Converts the given ECI coordinates to ECEF at the given Greenwich\n Mean Sidereal Time (GMST) (defaults to now).\n \n This code was adapted from\n `shashwatak/satellite-js `_\n and http://ccar.colorado.edu/ASEN5070/handouts/coordsys.doc\n\n \"\"\"\n if gmst is None:\n gmst = dmc.toGMST()\n\n X = (x * math.cos(gmst)) + (y * math.sin(gmst))\n Y = (x * 
(-math.sin(gmst))) + (y * math.cos(gmst))\n Z = z\n\n return X, Y, Z"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef eci2geodetic (x, y, z, gmst=None, ellipsoid=None):\n if gmst is None:\n gmst = dmc.toGMST()\n\n if ellipsoid is None:\n ellipsoid = WGS84\n\n a = WGS84.a\n b = WGS84.b\n f = WGS84.f\n r = math.sqrt((x * x) + (y * y))\n e2 = (2 * f) - (f * f)\n lon = math.atan2(y, x) - gmst\n k = 0\n kmax = 20\n lat = math.atan2(z, r)\n\n while (k < kmax):\n slat = math.sin(lat)\n C = 1 / math.sqrt( 1 - e2 * (slat * slat) )\n lat = math.atan2(z + (a * C * e2 * slat), r)\n k += 1\n\n z = (r / math.cos(lat)) - (a * C)\n\n return lat, lon, z", "response": "Converts the given ECI coordinates to Geodetic coordinates at the given GMST and with\n."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nprocessing the input data and publishes the final output data.", "response": "def process(self, input_data, topic=None):\n \"\"\"\n Invokes each handler in sequence.\n Publishes final output data.\n\n Params:\n input_data: message received by stream\n topic: name of plugin or stream message received from,\n if applicable\n \"\"\"\n for handler in self.handlers:\n output = handler.handle(input_data)\n input_data = output\n\n self.publish(input_data)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef valid_workflow(self):\n for ix, handler in enumerate(self.handlers[:-1]):\n next_input_type = self.handlers[ix + 1].input_type\n\n if (handler.output_type is not None and\n next_input_type is not None):\n if handler.output_type != next_input_type:\n return False\n\n return True", "response": "Returns True if the workflow is valid."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef compress (input_filename, output_filename=None, verbose=False):\n input_size = 0\n 
output_size = 0\n\n if output_filename is None:\n output_filename = input_filename + '.ait-zlib'\n\n try:\n stream = open(input_filename , 'rb')\n output = open(output_filename, 'wb')\n bytes = stream.read()\n input_size = len(bytes)\n\n if verbose:\n log.info(\"Compressing %s (%d bytes).\", input_filename, input_size)\n\n compressed = zlib.compress(bytes, 3)\n output_size = len(compressed)\n output.write(compressed)\n\n stream.close()\n output.close()\n\n percent = (1.0 - (output_size / float(input_size) )) * 100\n\n if verbose:\n log.info(\"Wrote %s (%d bytes).\", output_filename, output_size)\n log.info(\"Compressed %6.2f percent\", percent)\n\n except (IOError, OSError) as e:\n log.error(str(e) + \".\")\n\n return output_size", "response": "Compresses the input file and stores the result in output_filename."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ndumping bytes in hexdump format.", "response": "def hexdump (bytes, addr=None, preamble=None, printfunc=None, stepsize=16):\n \"\"\"hexdump(bytes[, addr[, preamble[, printfunc[, stepsize=16]]]])\n\n Outputs bytes in hexdump format lines similar to the following (here\n preamble='Bank1', stepsize=8, and len(bytes) == 15)::\n\n Bank1: 0xFD020000: 7f45 4c46 0102 0100 *.ELF....*\n Bank1: 0xFD020008: 0000 0000 0000 00 *....... *\n\n Where stepsize controls the number of bytes per line. 
If addr is\n omitted, the address portion of the hexdump will not be output.\n Lines will be passed to printfunc for output, or Python's builtin\n print, if printfunc is omitted.\n\n If a byte is not in the range [32, 127), a period will be rendered for\n the character portion of the output.\n \"\"\"\n if preamble is None:\n preamble = \"\"\n\n bytes = bytearray(bytes)\n size = len(bytes)\n\n for n in range(0, size, stepsize):\n if addr is not None:\n dump = preamble + \"0x%04X: \" % (addr + n)\n else:\n dump = preamble\n end = min(size, n + stepsize)\n dump += hexdumpLine(bytes[n:end], stepsize)\n\n if printfunc is None:\n print(dump)\n else:\n printfunc(dump)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning a single hexdump formatted line for the given bytes.", "response": "def hexdumpLine (bytes, length=None):\n \"\"\"hexdumpLine(bytes[, length])\n\n Returns a single hexdump formatted line for bytes. If length is\n greater than len(bytes), the line will be padded with ASCII space\n characters to indicate no byte data is present.\n\n Used by hexdump().\n \"\"\"\n line = \"\"\n\n if length is None:\n length = len(bytes)\n\n for n in range(0, length, 2):\n if n < len(bytes) - 1:\n line += \"%02x%02x \" % (bytes[n], bytes[n + 1])\n elif n < len(bytes):\n line += \"%02x \" % bytes[n]\n else:\n line += \" \"\n\n line += \"*\"\n\n for n in range(length):\n if n < len(bytes):\n if bytes[n] in range(32, 127):\n line += \"%c\" % bytes[n]\n else:\n line += \".\"\n else:\n line += \" \"\n\n line += \"*\"\n return line"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nparsing command - line arguments according to the given defaults.", "response": "def parseArgs (argv, defaults):\n \"\"\"parseArgs(argv, defaults) -> (dict, list)\n\n Parses command-line arguments according to the given defaults. For\n every key in defaults, an argument of the form --key=value will be\n parsed. 
Numeric arguments are converted from strings with errors\n reported via ait.core.log.error() and default values used instead.\n\n Returns a copy of defaults with parsed option values and a list of\n any non-flag arguments.\n \"\"\"\n options = dict(defaults)\n numeric = \\\n [ k for k, v in options.items() if type(v) is float or type(v) is int ]\n\n try:\n longopts = [ \"%s=\" % key for key in options.keys() ]\n opts, args = getopt.getopt(argv, \"\", longopts)\n\n for key, value in opts:\n if key.startswith(\"--\"):\n key = key[2:]\n options[key] = value\n except getopt.GetoptError as err:\n log.error( str(err) )\n usage( exit=True )\n\n for key in numeric:\n value = options[key]\n if type(value) is str:\n options[key] = util.toNumber(value)\n\n if options[key] is None:\n msg = \"Option '%s': '%s' is not a number, using default '%s' instead.\"\n log.error(msg, key, value, defaults[key])\n options[key] = defaults[key]\n\n return options, args"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef usage (exit=False):\n stream = open(sys.argv[0])\n for line in stream.readlines():\n if line.startswith(\"##\"): print(line.replace(\"##\", \"\"), end='')\n stream.close()\n\n if exit:\n sys.exit(2)", "response": "Print the usage statement at the top of a Python program."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngets the IP address of the computer.", "response": "def getip():\n \"\"\"\n getip()\n\n Returns the IP address of the computer. 
Helpful for those hosts that might\n sit behind gateways and report a hostname that is a little strange (I'm\n looking at you oco3-sim1).\n \"\"\"\n return [(s.connect(('8.8.8.8', 80)), s.getsockname()[0], s.close()) for s in [socket.socket(socket.AF_INET, socket.SOCK_DGRAM)]][0][1]"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nparsing the arguments using argparse and returns a Namespace object.", "response": "def arg_parse(arguments, description=None):\n \"\"\"\n arg_parse()\n\n Parses the arguments using argparse. Returns a Namespace object. The\n arguments dictionary should match the argparse expected data structure:\n\n .. code-block::python\n\n arguments = {\n '--port': {\n 'type' : int,\n 'default' : 3075,\n 'help' : 'Port on which to send data'\n },\n '--verbose': {\n 'action' : 'store_true',\n 'default' : False,\n 'help' : 'Hexdump of the raw command being sent.'\n }\n }\n\n For positional arguments, be sure to pass in an OrderedDict:\n\n .. code-block::python\n\n arguments = {\n '--port': {\n 'type' : int,\n 'default' : 3075,\n 'help' : 'Port on which to send data'\n },\n '--verbose': {\n 'action' : 'store_true',\n 'default' : False,\n 'help' : 'Hexdump of the raw command being sent.'\n }\n }\n\n arguments['command'] = {\n 'type' : str,\n 'help' : 'Name of the command to send.'\n }\n\n arguments['arguments'] = {\n 'type' : util.toNumberOrStr,\n 'metavar' : 'argument',\n 'nargs' : '*',\n 'help' : 'Command arguments.'\n }\n\n \"\"\"\n if not description:\n description = \"\"\n\n ap = argparse.ArgumentParser(\n description = description,\n formatter_class = argparse.ArgumentDefaultsHelpFormatter\n )\n\n for name, params in arguments.items():\n ap.add_argument(name, **params)\n\n args = ap.parse_args()\n return args"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ntypes for argparse - checks that file exists but does not open.", "response": "def extant_file(file):\n \"\"\"\n 'Type' for 
argparse - checks that file exists but does not open.\n \"\"\"\n if not os.path.exists(file):\n # Argparse uses the ArgumentTypeError to give a rejection message like:\n # error: argument input: file does not exist\n raise argparse.ArgumentTypeError(\"{0} does not exist\".format(file))\n return file"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef dot (self, other):\n if self.z:\n return (self.x * other.x) + (self.y * other.y) + (self.z * other.z)\n else:\n return (self.x * other.x) + (self.y * other.y)", "response": "Returns the dot product of this Point with another."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef slope (self):\n return (self.p.y - self.q.y) / (self.p.x - self.q.x)", "response": "Returns the slope of the current node."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nintersect two line segments.", "response": "def intersect (self, line):\n \"\"\"intersect (line) -> Point | None\n\n Returns the intersection point of this line segment with another.\n If this line segment and the other line segment are conincident,\n the first point on this line segment is returned. 
If the line\n segments do not intersect, None is returned.\n\n See http://local.wasp.uwa.edu.au/~pbourke/geometry/lineline2d/\n\n Examples:\n\n >>> A = Line( Point(0.0, 0.0), Point(5.0, 5.0) )\n >>> B = Line( Point(5.0, 0.0), Point(0.0, 5.0) )\n >>> C = Line( Point(1.0, 3.0), Point(9.0, 3.0) )\n >>> D = Line( Point(0.5, 3.0), Point(6.0, 4.0) )\n >>> E = Line( Point(1.0, 1.0), Point(3.0, 8.0) )\n >>> F = Line( Point(0.5, 2.0), Point(4.0, 7.0) )\n >>> G = Line( Point(1.0, 2.0), Point(3.0, 6.0) )\n >>> H = Line( Point(2.0, 4.0), Point(4.0, 8.0) )\n >>> I = Line( Point(3.5, 9.0), Point(3.5, 0.5) )\n >>> J = Line( Point(3.0, 1.0), Point(9.0, 1.0) )\n >>> K = Line( Point(2.0, 3.0), Point(7.0, 9.0) )\n >>> L = Line( Point(1.0, 2.0), Point(5.0, 7.0) )\n\n >>> A.intersect(B)\n Point(2.5, 2.5)\n\n >>> C.intersect(D) == None\n True\n\n >>> E.intersect(F)\n Point(1.8275862069, 3.89655172414)\n\n >>> G.intersect(H)\n Point(1.0, 2.0)\n\n >>> I.intersect(J)\n Point(3.5, 1.0)\n\n >>> K.intersect(L) == None\n True\n \"\"\"\n (x1, y1) = (self.p.x, self.p.y)\n (x2, y2) = (self.q.x, self.q.y)\n (x3, y3) = (line.p.x, line.p.y)\n (x4, y4) = (line.q.x, line.q.y)\n denom = ((y4 - y3) * (x2 - x1)) - ((x4 - x3) * (y2 - y1))\n num1 = ((x4 - x3) * (y1 - y3)) - ((y4 - y3) * (x1 - x3))\n num2 = ((x2 - x1) * (y1 - y3)) - ((y2 - y1) * (x1 - x3))\n intersect = None\n\n if num1 == 0 and num2 == 0 and denom == 0: # Coincident lines\n intersect = self.p\n elif denom != 0: # Parallel lines (denom == 0)\n ua = float(num1) / denom\n ub = float(num2) / denom\n if ua >= 0.0 and ua <= 1.0 and ub >= 0.0 and ub <= 1.0:\n x = x1 + (ua * (x2 - x1))\n y = y1 + (ua * (y2 - y1))\n intersect = Point(x, y)\n\n return intersect"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef intersect (self, line):\n eps = 1e-8\n d = (line.q - line.p)\n dn = d.dot(self.n)\n point = None\n\n if abs(dn) >= eps:\n mu = self.n.dot(self.p - line.p) / dn\n if mu >= 0 and mu <= 
1:\n point = line.p + mu * d\n\n return point", "response": "intersect - Returns the point at which the line segment and Plane intersect\n or None if the line segment and Plane intersect\n does not intersect."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef area (self):\n area = 0.0\n \n for segment in self.segments():\n area += ((segment.p.x * segment.q.y) - (segment.q.x * segment.p.y))/2\n\n return area", "response": "area - Returns the area of this Polygon."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef bounds (self):\n if self._dirty:\n min = self.vertices[0].copy()\n max = self.vertices[0].copy()\n for point in self.vertices[1:]:\n if point.x < min.x: min.x = point.x\n if point.y < min.y: min.y = point.y\n if point.x > max.x: max.x = point.x\n if point.y > max.y: max.y = point.y\n\n self._bounds = Rect(min, max)\n self._dirty = False\n\n return self._bounds", "response": "Returns the bounding rectangle for this Polygon."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef center (self):\n Cx = 0.0\n Cy = 0.0\n denom = 6.0 * self.area()\n\n for segment in self.segments():\n x = (segment.p.x + segment.q.x)\n y = (segment.p.y + segment.q.y)\n xy = (segment.p.x * segment.q.y) - (segment.q.x * segment.p.y)\n Cx += (x * xy)\n Cy += (y * xy)\n\n Cx /= denom\n Cy /= denom\n\n return Point(Cx, Cy)", "response": "Returns the center point of this Polygon."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn True if point is contained inside this Polygon False otherwise.", "response": "def contains (self, p):\n \"\"\"Returns True if point is contained inside this Polygon, False\n otherwise.\n\n This method uses the Ray Casting algorithm.\n\n Examples:\n \n >>> p = Polygon()\n >>> p.vertices = [Point(1, 1), Point(1, -1), Point(-1, -1), Point(-1, 
1)]\n\n >>> p.contains( Point(0, 0) )\n True\n\n >>> p.contains( Point(2, 3) )\n False\n\n \"\"\"\n inside = False\n\n if p in self.bounds():\n for s in self.segments():\n if ((s.p.y > p.y) != (s.q.y > p.y) and\n (p.x < (s.q.x - s.p.x) * (p.y - s.p.y) / (s.q.y - s.p.y) + s.p.x)):\n inside = not inside\n\n return inside"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning the Line segments that comprise this Polygon.", "response": "def segments (self):\n \"\"\"Return the Line segments that comprise this Polygon.\"\"\"\n for n in xrange(len(self.vertices) - 1):\n yield Line(self.vertices[n], self.vertices[n + 1])\n\n yield Line(self.vertices[-1], self.vertices[0])"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn True if point is contained inside this Rectangle False otherwise.", "response": "def contains (self, point):\n \"\"\"contains(point) -> True | False\n\n Returns True if point is contained inside this Rectangle, False otherwise.\n\n Examples:\n \n >>> r = Rect( Point(-1, -1), Point(1, 1) )\n >>> r.contains( Point(0, 0) )\n True\n\n >>> r.contains( Point(2, 3) )\n False\n \"\"\"\n return (point.x >= self.ul.x and point.x <= self.lr.x) and \\\n (point.y >= self.ul.y and point.y <= self.lr.y)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning a list of Line segments that comprise this Rectangle.", "response": "def segments (self):\n \"\"\"segments () -> [ Line, Line, Line, Line ]\n\n Return a list of Line segments that comprise this Rectangle.\n \"\"\"\n ul = self.ul\n lr = self.lr\n ur = Point(lr.x, ul.y)\n ll = Point(ul.x, lr.y)\n return [ Line(ul, ur), Line(ur, lr), Line(lr, ll), Line(ll, ul) ]"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nformat an EVR message with the given byte array of EVR data.", "response": "def format_message(self, evr_hist_data):\n ''' Format EVR message with EVR data\n\n Given a 
byte array of EVR data, format the EVR's message attribute\n printf format strings and split the byte array into appropriately\n sized chunks. Supports most format strings containing length and type\n fields.\n\n Args:\n evr_hist_data: A bytearray of EVR data. Bytes are expected to be in\n MSB ordering.\n\n Example formatting::\n\n # This is the character '!', string 'Foo', and int '4279317316'\n bytearray([0x21, 0x46, 0x6f, 0x6f, 0x00, 0xff, 0x11, 0x33, 0x44])\n\n Returns:\n The EVR's message string formatted with the EVR data or the\n unformatted EVR message string if there are no valid format\n strings present in it.\n\n Raises:\n ValueError: When the bytearray cannot be fully processed with the\n specified format strings. This is usually a result of the\n expected data length and the byte array length not matching.\n '''\n size_formatter_info = {\n 's' : -1,\n 'c' : 1,\n 'i' : 4,\n 'd' : 4,\n 'u' : 4,\n 'x' : 4,\n 'hh': 1,\n 'h' : 2,\n 'l' : 4,\n 'll': 8,\n 'f' : 8,\n 'g' : 8,\n 'e' : 8,\n }\n type_formatter_info = {\n 'c' : 'U{}',\n 'i' : 'MSB_I{}',\n 'd' : 'MSB_I{}',\n 'u' : 'MSB_U{}',\n 'f' : 'MSB_D{}',\n 'e' : 'MSB_D{}',\n 'g' : 'MSB_D{}',\n 'x' : 'MSB_U{}',\n }\n\n formatters = re.findall(\"%(?:\\d+\\$)?([cdieEfgGosuxXhlL]+)\", self._message)\n\n cur_byte_index = 0\n data_chunks = []\n\n for f in formatters:\n # If the format string we found is > 1 character we know that a length\n # field is included and we need to adjust our sizing accordingly.\n f_size_char = f_type = f[-1]\n if len(f) > 1:\n f_size_char = f[:-1]\n\n fsize = size_formatter_info[f_size_char.lower()]\n\n try:\n if f_type != 's':\n end_index = cur_byte_index + fsize\n fstr = type_formatter_info[f_type.lower()].format(fsize*8)\n\n # Type formatting can give us incorrect format strings when\n # a size formatter promotes a smaller data type. For instnace,\n # 'hhu' says we'll promote a char (1 byte) to an unsigned\n # int for display. 
Here, the type format string would be\n # incorrectly set to 'MSB_U8' if we didn't correct.\n if fsize == 1 and 'MSB_' in fstr:\n fstr = fstr[4:]\n\n d = dtype.PrimitiveType(fstr).decode(\n evr_hist_data[cur_byte_index:end_index]\n )\n\n # Some formatters have an undefined data size (such as strings)\n # and require additional processing to determine the length of\n # the data and decode data.\n else:\n end_index = str(evr_hist_data).index('\\x00', cur_byte_index)\n d = str(evr_hist_data[cur_byte_index:end_index])\n\n data_chunks.append(d)\n except:\n msg = \"Unable to format EVR Message with data {}\".format(evr_hist_data)\n log.error(msg)\n raise ValueError(msg)\n\n cur_byte_index = end_index\n\n # If we were formatting a string we need to add another index offset\n # to exclude the null terminator.\n if f == 's':\n cur_byte_index += 1\n\n # Format and return the EVR message if formatters were present, otherwise\n # just return the EVR message as is.\n if len(formatters) == 0:\n return self._message\n else:\n # Python format strings cannot handle size formatter information. 
So something\n # such as %llu needs to be adjusted to be a valid identifier in python by\n # removing the size formatter.\n msg = self._message\n for f in formatters:\n if len(f) > 1:\n msg = msg.replace('%{}'.format(f), '%{}'.format(f[-1]))\n\n return msg % tuple(data_chunks)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _parseHeader (self, line, lineno, log):\n if line.startswith('#') and line.find(':') > 0:\n tokens = [ t.strip().lower() for t in line[1:].split(\":\", 1) ]\n name = tokens[0]\n pos = SeqPos(line, lineno)\n\n if name in self.header:\n msg = 'Ignoring duplicate header parameter: %s'\n log.warning(msg % name, pos)\n else:\n for expected in ['seqid', 'version']:\n if name == expected:\n value = util.toNumber(tokens[1], None)\n if value is None:\n msg = 'Parameter \"%s\" value \"%s\" is not a number.'\n log.error(msg % (name, tokens[1]), pos)\n else:\n self.header[name] = value", "response": "Parses a sequence header line containing name value pairs."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nadding a new command to this sequence.", "response": "def append (self, cmd, delay=0.000, attrs=None):\n \"\"\"Adds a new command with a relative time delay to this sequence.\"\"\"\n self.lines.append( SeqCmd(cmd, delay, attrs) )"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef printText (self, stream=None):\n if stream is None:\n stream = sys.stdout\n\n stream.write('# seqid : %u\\n' % self.seqid )\n stream.write('# version : %u\\n' % self.version )\n stream.write('# crc32 : 0x%04x\\n' % self.crc32 )\n stream.write('# ncmds : %u\\n' % len(self.commands) )\n stream.write('# duration: %.3fs\\n' % self.duration )\n stream.write('\\n')\n\n for line in self.lines:\n stream.write( str(line) )\n stream.write('\\n')", "response": "Prints a text representation of this sequence to the given 
stream."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nread a command sequence from the given filename.", "response": "def read (self, filename=None):\n \"\"\"Reads a command sequence from the given filename (defaults to\n self.pathname).\n \"\"\"\n if filename is None:\n filename = self.pathname\n\n stream = open(filename, 'rb')\n magic = struct.unpack('>H', stream.read(2))[0]\n stream.close()\n\n if magic == Seq.Magic:\n self.readBinary(filename)\n else:\n self.readText(filename)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef readBinary (self, filename=None):\n if filename is None:\n filename = self.pathname\n\n stream = open(filename, 'rb')\n magic = struct.unpack('>H', stream.read(2))[0]\n self.crc32 = struct.unpack('>I', stream.read(4))[0]\n self.seqid = struct.unpack('>H', stream.read(2))[0]\n self.version = struct.unpack('>H', stream.read(2))[0]\n ncmds = struct.unpack('>H', stream.read(2))[0]\n reserved = stream.read(20)\n\n for n in range(ncmds):\n bytes = stream.read(110)\n self.lines.append( SeqCmd.decode(bytes, self.cmddict) )", "response": "Reads a binary command sequence from the given file."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreads a text command sequence from the given file.", "response": "def readText (self, filename=None):\n \"\"\"Reads a text command sequence from the given filename (defaults to\n self.pathname).\n \"\"\"\n if filename is None:\n filename = self.pathname\n\n self.header = { }\n inBody = False\n\n with open(filename, 'rt') as stream:\n for (lineno, line) in enumerate(stream.readlines()):\n stripped = line.strip()\n if stripped == '':\n continue\n elif stripped.startswith('#'):\n if not inBody:\n self._parseHeader(line, lineno, self.log)\n else:\n inBody = True\n self.lines.append( SeqCmd.parse(line, lineno, self.log, self.cmddict) )\n\n if 'seqid' in self.header:\n 
self.seqid = self.header['seqid']\n elif self.seqid is None:\n self.log.error('No sequence id present in header.')\n\n if 'version' in self.header:\n self.version = self.header['version']\n elif self.version is None:\n self.log.warning('No version present in header. Defaulting to zero (0).')\n self.version = 0"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef validate (self):\n if not os.path.isfile(self.pathname):\n self.messages.append('Filename \"%s\" does not exist.' % self.pathname)\n else:\n try:\n with open(self.pathname, 'r') as stream:\n pass\n except IOError:\n self.messages.append('Could not open \"%s\" for reading.' % self.pathname)\n\n for line in self.commands:\n messages = [ ]\n if line.cmd and not line.cmd.validate(messages):\n msg = 'error: %s: %s' % (line.cmd.name, \" \".join(messages))\n self.log.messages.append(msg)\n\n return len(self.log.messages) == 0", "response": "Returns True if this Sequence is valid False otherwise."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nwrites a binary representation of this sequence to the given filename.", "response": "def writeBinary (self, filename=None):\n \"\"\"Writes a binary representation of this sequence to the given filename\n (defaults to self.binpath).\n \"\"\"\n if filename is None:\n filename = self.binpath\n\n with open(filename, 'wb') as output:\n # Magic Number\n output.write( struct.pack('>H', Seq.Magic ) )\n # Upload Type\n output.write( struct.pack('B', 9 ) )\n # Version\n output.write( struct.pack('B', self.version ) )\n # Number of Commands\n output.write( struct.pack('>H', len(self.commands) ) )\n # Sequence ID\n output.write( struct.pack('>H', self.seqid ) )\n # CRC Placeholder\n output.write( struct.pack('>I', 0 ) )\n\n pad = struct.pack('B', 0)\n for n in range(20):\n output.write(pad)\n\n for line in self.lines:\n output.write( line.encode() )\n\n self.crc32 = util.crc32File(filename, 0)\n\n with 
open(filename, 'r+b') as output:\n output.seek(28)\n output.write( struct.pack('>I', self.crc32) )"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nwrite a text representation of this sequence to the given filename.", "response": "def writeText (self, filename=None):\n \"\"\"Writes a text representation of this sequence to the given filename\n (defaults to self.txtpath).\n \"\"\"\n if filename is None:\n filename = self.txtpath\n\n with open(filename, 'wt') as output:\n self.printText(output)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ndecoding a sequence command from an array of bytes according to the given command dictionary and returns a new SeqCmd.", "response": "def decode (cls, bytes, cmddict):\n \"\"\"Decodes a sequence command from an array of bytes, according to the\n given command dictionary, and returns a new SeqCmd.\n \"\"\"\n attrs = SeqCmdAttrs.decode(bytes[0:1])\n delay = SeqDelay .decode(bytes[1:4])\n cmd = cmddict .decode(bytes[4:] )\n return cls(cmd, delay, attrs)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef encode (self):\n return self.attrs.encode() + self.delay.encode() + self.cmd.encode()", "response": "Encodes this SeqCmd to binary and returns a bytearray."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nparse the sequence command from a line of text according to the given command dictionary and returns a new SeqCmd.", "response": "def parse (cls, line, lineno, log, cmddict):\n \"\"\"Parses the sequence command from a line of text, according to the\n given command dictionary, and returns a new SeqCmd.\n \"\"\"\n delay = SeqDelay .parse(line, lineno, log, cmddict)\n attrs = SeqCmdAttrs.parse(line, lineno, log, cmddict)\n comment = SeqComment .parse(line, lineno, log, cmddict)\n stop = len(line)\n\n if comment:\n stop = comment.pos.col.start - 1\n\n if attrs and 
attrs.pos.col.stop != -1:\n stop = attrs.pos.col.start - 1\n\n tokens = line[:stop].split()\n name = tokens[1]\n args = tokens[2:]\n start = line.find(name)\n pos = SeqPos(line, lineno, start + 1, stop)\n\n if name not in cmddict:\n log.error('Unrecognized command \"%s\".' % name, pos)\n elif cmddict[name].nargs != len(args):\n msg = 'Command argument size mismatch: expected %d, but encountered %d.'\n log.error(msg % (cmddict[name].nargs, len(args)), pos)\n\n args = [ util.toNumber(a, a) for a in args ]\n cmd = cmddict.create(name, *args)\n\n return cls(cmd, delay, attrs, comment, pos)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef decode (cls, bytes, cmddict=None):\n byte = struct.unpack('B', bytes)[0]\n self = cls()\n defval = self.default\n\n for bit, name, value0, value1, default in SeqCmdAttrs.Table:\n mask = 1 << bit\n bitset = mask & byte\n defset = mask & defval\n if bitset != defset:\n if bitset:\n self.attrs[name] = value1\n else:\n self.attrs[name] = value0\n\n return self", "response": "Decodes sequence command attributes from an array of bytes and returns a new SeqCmdAttrs object."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef encode (self):\n byte = self.default\n\n for bit, name, value0, value1, default in SeqCmdAttrs.Table:\n if name in self.attrs:\n value = self.attrs[name]\n byte = setBit(byte, bit, value == value1)\n\n return struct.pack('B', byte)", "response": "Encodes this SeqCmdAttrs to binary and returns a bytearray."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nparsing a SeqCmdAttrs from a line of text and returns it or None.", "response": "def parse (cls, line, lineno, log, cmddict=None):\n \"\"\"Parses a SeqCmdAttrs from a line of text and returns it or None.\n Warning and error messages are logged via the SeqMsgLog log.\n \"\"\"\n start = line.find('{')\n stop = line.find('}')\n pos = 
SeqPos(line, lineno, start + 1, stop)\n result = cls(None, pos)\n\n if start >= 0 and stop >= start:\n attrs = { }\n pairs = line[start + 1:stop].split(',')\n\n for item in pairs:\n ncolons = item.count(':')\n if ncolons == 0:\n log.error('Missing colon in command attribute \"%s\".' % item, pos)\n elif ncolons > 1:\n log.error('Too many colons in command attribute \"%s\".' % item, pos)\n else:\n name, value = (s.strip() for s in item.split(':'))\n attrs[name] = value\n\n result = cls(attrs, pos)\n\n elif start != -1 or stop != -1:\n log.error('Incorrect command attribute curly brace placement.', pos)\n\n return result"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ndecode a sequence delay from an array of bytes according to the given command dictionary and returns a new SeqDelay.", "response": "def decode (cls, bytes, cmddict=None):\n \"\"\"Decodes a sequence delay from an array of bytes, according to the\n given command dictionary, and returns a new SeqDelay.\n \"\"\"\n delay_s = struct.unpack('>H', bytes[0:2])[0]\n delay_ms = struct.unpack('B' , bytes[2:3])[0]\n return cls(delay_s + (delay_ms / 255.0))"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef encode (self):\n delay_s = int( math.floor(self.delay) )\n delay_ms = int( (self.delay - delay_s) * 255.0 )\n return struct.pack('>H', delay_s) + struct.pack('B', delay_ms)", "response": "Encodes this SeqDelay to a binary bytearray."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nparsing the SeqDelay from a line of text.", "response": "def parse (cls, line, lineno, log, cmddict=None):\n \"\"\"Parses the SeqDelay from a line of text. 
Warning and error\n messages are logged via the SeqMsgLog log.\n \"\"\"\n delay = -1\n token = line.split()[0]\n start = line.find(token)\n pos = SeqPos(line, lineno, start + 1, start + len(token))\n\n try:\n delay = float(token)\n except ValueError:\n msg = 'String \"%s\" could not be interpreted as a numeric time delay.'\n log.error(msg % token, pos)\n\n return cls(delay, pos)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef parse (cls, line, lineno, log, cmddict=None):\n start = line.find('%')\n pos = SeqPos(line, lineno, start + 1, len(line))\n result = None\n\n if start >= 0:\n result = cls(line[start:], pos)\n\n return result", "response": "Parses the SeqMetaCmd from a line of text."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef error (self, msg, pos=None):\n self.log(msg, 'error: ' + self.location(pos))", "response": "Logs an error message pertaining to the given SeqPos."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef location (self, pos):\n result = ''\n if self.filename:\n result += self.filename + ':'\n if pos:\n result += str(pos)\n return result", "response": "Formats the location of the given SeqPos as a string"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef log (self, msg, prefix=None):\n if prefix:\n if not prefix.strip().endswith(':'):\n prefix += ': '\n msg = prefix + msg\n self.messages.append(msg)", "response": "Logs a message with an optional prefix."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef warning (self, msg, pos=None):\n self.log(msg, 'warning: ' + self.location(pos))", "response": "Logs a warning message pertaining to the given SeqAtom."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 
function for\nexpanding all relative configuration paths in dictionary config by prepending prefix.", "response": "def expandConfigPaths (config, prefix=None, datetime=None, pathvars=None, parameter_key='', *keys):\n \"\"\"Updates all relative configuration paths in dictionary config,\n which contain a key in keys, by prepending prefix.\n\n If keys is omitted, it defaults to 'directory', 'file',\n 'filename', 'path', 'pathname'.\n\n See util.expandPath().\n \"\"\"\n if len(keys) == 0:\n keys = PATH_KEYS\n\n for name, value in config.items():\n if name in keys and type(name) is str:\n expanded = util.expandPath(value, prefix)\n cleaned = replaceVariables(expanded, datetime=datetime, pathvars=pathvars)\n\n for p in cleaned:\n if not os.path.exists(p):\n msg = \"Config parameter {}.{} specifies nonexistent path {}\".format(parameter_key, name, p)\n log.warn(msg)\n\n config[name] = cleaned[0] if len(cleaned) == 1 else cleaned\n\n elif type(value) is dict:\n param_key = name if parameter_key == '' else parameter_key + '.' 
+ name\n expandConfigPaths(value, prefix, datetime, pathvars, param_key, *keys)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn absolute path with path variables replaced as applicable", "response": "def replaceVariables(path, datetime=None, pathvars=None):\n \"\"\"Return absolute path with path variables replaced as applicable\"\"\"\n\n if datetime is None:\n datetime = time.gmtime()\n\n # if path variables are not given, set as empty list\n if pathvars is None:\n pathvars = [ ]\n\n # create an init path list to loop through\n if isinstance(path, list):\n path_list = path\n else:\n path_list = [ path ]\n\n # Set up the regex to search for variables\n regex = re.compile('\\$\\{(.*?)\\}')\n\n # create a newpath list that will hold the 'cleaned' paths\n # with variables and strftime format directives replaced\n newpath_list = [ ]\n\n for p in path_list:\n # create temppath_list to be used a we work through the\n newpath_list.append(p)\n\n # Variable replacement\n # Find all the variables in path using the regex\n for k in regex.findall(p):\n # Check if the key is in path variables map\n if k in pathvars:\n # get the str or list of values\n v = pathvars[k]\n\n # Check value of variable must be in (string, integer, list)\n if type(v) is dict:\n msg = \"Path variable must refer to string, integer, or list\"\n raise TypeError(msg)\n\n # get the list of possible variable values\n value_list = v if type(v) is list else [ v ]\n\n\n # create temp_list for now\n temp_list = []\n\n # loop through the most recent newpath list\n # need to do this every time in order to account for all possible\n # combinations\n # replace the variables\n # loop through the list of values and replace the variables\n for v in value_list:\n for newpath in newpath_list:\n # remove the path from newpath_list\n temp_list.append(newpath.replace('${%s}' % k, str(v)))\n\n # replace newpath_list\n newpath_list = temp_list\n\n # strftime translation\n # 
Loop through newpath_list to do strftime translation\n for index, newpath in enumerate(newpath_list):\n # Apply strftime translation\n newpath_list[index] = time.strftime(newpath, datetime)\n\n return newpath_list"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef flatten (d, *keys):\n flat = { }\n\n for k in keys:\n flat = merge(flat, d.pop(k, { }))\n\n return flat", "response": "Flattens the dictionary d by merging keys in order such that later\n keys take precedence over earlier keys."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef loadYAML (filename=None, data=None):\n config = None\n\n try:\n if filename:\n data = open(filename, 'rt')\n\n config = yaml.load(data)\n\n if type(data) is file:\n data.close()\n except IOError, e:\n msg = 'Could not read AIT configuration file \"%s\": %s'\n log.error(msg, filename, str(e))\n\n return config", "response": "Loads either the given YAML configuration file or YAML data."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _getattr_ (self, name):\n value = self._config.get(name)\n\n if type(value) is dict:\n value = AitConfig(self._filename, config=value)\n\n return value", "response": "Internal method. 
Used by __getattr__ and __getitem__."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn a simple key - value map for easy access to data paths", "response": "def _datapaths(self):\n \"\"\"Returns a simple key-value map for easy access to data paths\"\"\"\n paths = { }\n try:\n data = self._config['data']\n for k in data:\n paths[k] = data[k]['path']\n except KeyError as e:\n raise AitConfigMissing(e.message)\n except Exception as e:\n raise AitConfigError('Error reading data paths: %s' % e)\n\n return paths"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreload the AIT configuration.", "response": "def reload (self, filename=None, data=None):\n \"\"\"Reloads the a AIT configuration.\n\n The AIT configuration is automatically loaded when the AIT\n package is first imported. To replace the configuration, call\n reload() (defaults to the current config.filename) or\n reload(new_filename).\n \"\"\"\n if data is None and filename is None:\n filename = self._filename\n\n self._config = loadYAML(filename, data)\n self._filename = filename\n\n if self._config is not None:\n keys = 'default', self._platform, self._hostname\n self._config = flatten(self._config, *keys)\n\n # on reload, if pathvars have not been set, we want to start\n # with the defaults, add the platform and hostname, and\n # merge in all of the information provided in the config\n if self._pathvars is None:\n self._pathvars = self.getDefaultPathVariables()\n\n expandConfigPaths(self._config, \n self._directory,\n self._datetime,\n merge(self._config, self._pathvars))\n\n else:\n self._config = { }"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get (self, name, default=None):\n if name in self:\n return self[name]\n\n config = self\n parts = name.split('.')\n heads = parts[:-1]\n tail = parts[-1]\n\n for part in heads:\n if part in config and type(config[part]) is AitConfig:\n 
config = config[part]\n else:\n return default\n\n return config[tail] if tail in config else default", "response": "Returns the attribute value of AitConfig. name or default if name does not exist."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef addPathVariables(self, pathvars):\n if type(pathvars) is dict:\n self._pathvars = merge(self._pathvars, pathvars)", "response": "Adds path variables to the pathvars map property"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef getPDT(typename):\n if typename not in PrimitiveTypeMap and typename.startswith(\"S\"):\n PrimitiveTypeMap[typename] = PrimitiveType(typename)\n\n return PrimitiveTypeMap.get(typename, None)", "response": "Returns the PrimitiveType for the given name or None if no such PrimitiveType exists."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngets - Returns the PrimitiveType or ComplexType for typename or None if no primitive type exists for typename or None if no complex type exists for typename.", "response": "def get(typename):\n \"\"\"get(typename) -> PrimitiveType or ComplexType\n\n Returns the PrimitiveType or ComplexType for typename or None.\n \"\"\"\n dt = getPDT(typename) or getCDT(typename)\n\n if dt is None:\n pdt, nelems = ArrayType.parse(typename)\n if pdt and nelems:\n dt = ArrayType(pdt, nelems)\n\n return dt"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ndecoding the given bytearray according to this PrimitiveType definition.", "response": "def decode(self, bytes, raw=False):\n \"\"\"decode(bytearray, raw=False) -> value\n\n Decodes the given bytearray according to this PrimitiveType\n definition.\n\n NOTE: The parameter ``raw`` is present to adhere to the\n ``decode()`` inteface, but has no effect for PrimitiveType\n definitions.\n \"\"\"\n return struct.unpack(self.format, buffer(bytes))[0]"} 
{"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef validate(self, value, messages=None, prefix=None):\n valid = False\n\n def log(msg):\n if messages is not None:\n if prefix is not None:\n tok = msg.split()\n msg = prefix + ' ' + tok[0].lower() + \" \" + \" \".join(tok[1:])\n messages.append(msg)\n\n if self.string:\n valid = type(value) is str\n else:\n if type(value) is str:\n log(\"String '%s' cannot be represented as a number.\" % value)\n elif type(value) not in (int, long, float):\n log(\"Value '%s' is not a primitive type.\" % str(value))\n elif type(value) is float and not self.float:\n log(\"Float '%g' cannot be represented as an integer.\" % value)\n else:\n if value < self.min or value > self.max:\n args = (str(value), self.min, self.max)\n log(\"Value '%s' out of range [%d, %d].\" % args)\n else:\n valid = True\n\n return valid", "response": "Validate the given value according to this PrimitiveType\nTaxonomy definition."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _assertIndex(self, index):\n if type(index) is not int:\n raise TypeError('list indices must be integers')\n if index < 0 or index >= self.nelems:\n raise IndexError('list index out of range')", "response": "Raise TypeError or IndexError if index is out of range for the number of elements in this array respectively."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef decode(self, bytes, index=None, raw=False):\n if index is None:\n index = slice(0, self.nelems)\n\n if type(index) is slice:\n step = 1 if index.step is None else index.step\n indices = xrange(index.start, index.stop, step)\n result = [ self.decodeElem(bytes, n, raw) for n in indices ]\n else:\n result = self.decodeElem(bytes, index, raw)\n\n return result", "response": "decode returns the value1... 
valueN\n "} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ndecode a single element at array [ index ] from a sequence bytes that contains data for the entire array.", "response": "def decodeElem(self, bytes, index, raw=False):\n \"\"\"Decodes a single element at array[index] from a sequence bytes\n that contain data for the entire array.\n \"\"\"\n self._assertIndex(index)\n start = index * self.type.nbytes\n stop = start + self.type.nbytes\n\n if stop > len(bytes):\n msg = 'Decoding %s[%d] requires %d bytes, '\n msg += 'but the ArrayType.decode() method received only %d bytes.'\n raise IndexError(msg % (self.type.name, index, stop, len(bytes)))\n\n return self.type.decode( bytes[start:stop], raw )"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef encode(self, *args):\n if len(args) != self.nelems:\n msg = 'ArrayType %s encode() requires %d values, but received %d.'\n raise ValueError(msg % (self.name, self.nelems, len(args)))\n\n return bytearray().join(self.type.encode(arg) for arg in args)", "response": "Encodes the given values to a sequence of bytes according to thisArray s underlying element type."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nparse an array name to return the element type name and number of elements e. g. 
MSB_U16 [ 32 ]", "response": "def parse (name):\n \"\"\"parse(name) -> [typename | None, nelems | None]\n\n Parses an ArrayType name to return the element type name and\n number of elements, e.g.:\n\n >>> ArrayType.parse('MSB_U16[32]')\n ['MSB_U16', 32]\n\n If typename cannot be determined, None is returned.\n Similarly, if nelems is not an integer or less than one (1),\n None is returned.\n \"\"\"\n parts = [None, None]\n start = name.find('[')\n\n if start != -1:\n stop = name.find(']', start)\n if stop != -1:\n try:\n parts[0] = name[:start]\n parts[1] = int(name[start + 1:stop])\n if parts[1] <= 0:\n raise ValueError\n except ValueError:\n msg = 'ArrayType specification: \"%s\" must have an '\n msg += 'integer greater than zero in square brackets.'\n raise ValueError(msg % name)\n\n return parts"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef cmddict(self):\n if self._cmddict is None:\n self._cmddict = cmd.getDefaultDict()\n\n return self._cmddict", "response": "PrimitiveType base for the ComplexType"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef encode(self, value):\n opcode = self.cmddict[value].opcode\n return super(CmdType, self).encode(opcode)", "response": "Encodes the given value according to this\n PrimitiveType definition."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef decode(self, bytes, raw=False):\n opcode = super(CmdType, self).decode(bytes)\n result = None\n\n if raw:\n result = opcode\n elif opcode in self.cmddict.opcodes:\n result = self.cmddict.opcodes[opcode]\n else:\n raise ValueError('Unrecognized command opcode: %d' % opcode)\n\n return result", "response": "Decode the given bytearray and returns the corresponding command version."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nget the EVRs dictionary", "response": "def 
evrs(self):\n \"\"\"Getter EVRs dictionary\"\"\"\n if self._evrs is None:\n import ait.core.evr as evr\n self._evrs = evr.getDefaultDict()\n\n return self._evrs"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef encode(self, value):\n e = self.evrs.get(value, None)\n if not e:\n log.error(str(value) + \" not found as EVR. Cannot encode.\")\n return None\n else:\n return super(EVRType, self).encode(e.code)", "response": "Encodes the given value to a bytearray according to this Complex Type definition."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef decode(self, bytes, raw=False):\n code = super(EVRType, self).decode(bytes)\n result = None\n\n if raw:\n result = code\n elif code in self.evrs.codes:\n result = self.evrs.codes[code]\n else:\n result = code\n log.warn('Unrecognized EVR code: %d' % code)\n\n return result", "response": "decode returns the value of the given bytearray according to the corresponding EVR Definition s EVR code."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef decode(self, bytes, raw=False):\n result = super(Time8Type, self).decode(bytes)\n\n if not raw:\n result /= 256.0\n\n return result", "response": "Decode the given bytearray and returns the number of nanos."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef encode(self, value):\n if type(value) is not datetime.datetime:\n raise TypeError('encode() argument must be a Python datetime')\n\n return super(Time32Type, self).encode( dmc.toGPSSeconds(value) )", "response": "Encodes the given value to a bytearray according to thisComplexType definition."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef decode(self, bytes, raw=False):\n sec = super(Time32Type, self).decode(bytes)\n return sec if raw else 
dmc.toLocalTime(sec)", "response": "Decode the given bytearray containing the elapsed time in\n seconds since the GPS epoch and returns the corresponding value\n Python : class : datetime."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef encode(self, value):\n if type(value) is not datetime.datetime:\n raise TypeError('encode() argument must be a Python datetime')\n\n coarse = Time32Type().encode(value)\n fine = Time8Type() .encode(value.microsecond / 1e6)\n\n return coarse + fine", "response": "Encodes the given value to a bytearray according to thisInternationalType definition."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ndecodes the given bytearray containing the elapsed time in seconds plus 1 / 256 subseconds since the GPS epoch returns the corresponding Python datetime.", "response": "def decode(self, bytes, raw=False):\n \"\"\"decode(bytearray, raw=False) -> value\n\n Decodes the given bytearray containing the elapsed time in\n seconds plus 1/256 subseconds since the GPS epoch returns the\n corresponding Python :class:`datetime`.\n\n If the optional parameter ``raw`` is ``True``, the number of\n seconds and subseconds will be returned as a floating-point\n number instead.\n \"\"\"\n coarse = Time32Type().decode(bytes[:4], raw)\n fine = Time8Type() .decode(bytes[4:])\n\n if not raw:\n fine = datetime.timedelta(microseconds=fine * 1e6)\n\n return coarse + fine"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef load(self, ymlfile=None):\n if ymlfile is not None:\n self.ymlfile = ymlfile\n\n try:\n # If yaml should be 'cleaned' of document references\n if self._clean:\n self.data = self.process(self.ymlfile)\n else:\n with open(self.ymlfile, 'rb') as stream:\n for data in yaml.load_all(stream):\n self.data.append(data)\n\n self.loaded = True\n except ScannerError, e:\n msg = \"YAML formattting 
error - '\" + self.ymlfile + \": '\" + str(e) + \"'\"\n raise util.YAMLError(msg)", "response": "Load and process the YAML file"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nclean out all document tags from the YAML file to make it JSON - friendly.", "response": "def process(self, ymlfile):\n \"\"\"Cleans out all document tags from the YAML file to make it\n JSON-friendly to work with the JSON Schema.\n \"\"\"\n output = \"\"\n\n try:\n # Need a list of line numbers where the documents resides\n # Used for finding/displaying errors\n self.doclines = []\n linenum = None\n with open(ymlfile, 'r') as txt:\n for linenum, line in enumerate(txt):\n # Pattern to match document start lines\n doc_pattern = re.compile('(---) (![a-z]+)(.*$)', flags=re.I)\n\n # Pattern to match sequence start lines\n seq_pattern = re.compile('(\\s*)(-+) !([a-z]+)(.*$)', flags=re.I)\n\n # If we find a document, remove the tag\n if doc_pattern.match(line):\n line = doc_pattern.sub(r\"---\", line).lower()\n self.doclines.append(linenum)\n elif seq_pattern.match(line):\n # Replace the sequence start with key string\n line = seq_pattern.sub(r\"\\1\\2 \\3: line \" + str(linenum), line).lower()\n\n output = output + line\n\n if linenum is None:\n msg = \"Empty YAML file: \" + ymlfile\n raise util.YAMLError(msg)\n else:\n # Append one more document to docline for the end\n self.doclines.append(linenum+1)\n\n return output\n\n except IOError, e:\n msg = \"Could not process YAML file '\" + ymlfile + \"': '\" + str(e) + \"'\"\n raise IOError(msg)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nloads and process the schema file", "response": "def load(self, schemafile=None):\n \"\"\"Load and process the schema file\"\"\"\n if schemafile is not None:\n self._schemafile = schemafile\n\n try:\n self.data = json.load(open(self._schemafile))\n except (IOError, ValueError), e:\n msg = \"Could not load schema file 
'\" + self._schemafile + \"': '\" + str(e) + \"'\"\n raise jsonschema.SchemaError(msg)\n\n self.loaded = True"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nprinting out a pretty representation of the error.", "response": "def pretty(self, start, end, e, messages=None):\n \"\"\"Pretties up the output error message so it is readable\n and designates where the error came from\"\"\"\n\n log.debug(\"Displaying document from lines '%i' to '%i'\", start, end)\n\n errorlist = []\n if len(e.context) > 0:\n errorlist = e.context\n else:\n errorlist.append(e)\n\n for error in errorlist:\n validator = error.validator\n\n if validator == \"required\":\n # Handle required fields\n msg = error.message\n messages.append(\"Between lines %d - %d. %s\" % (start, end, msg))\n elif validator == \"additionalProperties\":\n # Handle additional properties not allowed\n if len(error.message) > 256:\n msg = error.message[:253] + \"...\"\n else:\n msg = error.message\n messages.append(\"Between lines %d - %d. %s\" % (start, end, msg))\n elif len(error.relative_path) > 0:\n # Handle other cases where we can loop through the lines\n\n # get the JSON path to traverse through the file\n jsonpath = error.relative_path\n array_index = 0\n\n current_start = start\n foundline = 0\n found = False\n\n context = collections.deque(maxlen=20)\n tag = \" <<<<<<<<< Expects: %s <<<<<<<<<\\n\"\"\"\n for cnt, path in enumerate(error.relative_path):\n\n # Need to set the key we are looking, and then check the array count\n # if it is an array, we have some interesting checks to do\n if int(cnt) % 2 == 0:\n # we know we have some array account\n # array_index keeps track of the array count we are looking for or number\n # of matches we need to skip over before we get to the one we care about\n\n # check if previous array_index > 0. 
if so, then we know we need to use\n # that one to track down the specific instance of this nested key.\n # later on, we utilize this array_index loop through\n # if array_index == 0:\n array_index = jsonpath[cnt]\n\n match_count = 0\n continue\n elif int(cnt) % 2 == 1:\n # we know we have some key name\n # current_key keeps track of the key we are looking for in the JSON Path\n current_key = jsonpath[cnt]\n\n for linenum in range(current_start, end):\n line = linecache.getline(self.ymlfile, linenum)\n\n # Check if line contains the error\n if \":\" in line:\n l = line.split(':')\n key = l[0]\n value = ':'.join(l[1:])\n\n # TODO:\n # Handle maxItems TBD\n # Handle minItems TBD\n # Handle in-order (bytes) TBD\n # Handle uniqueness TBD\n\n # Handle cases where key in yml file is hexadecimal\n try:\n key = int(key.strip(), 16)\n except ValueError:\n key = key.strip()\n\n if str(key) == current_key:\n # check if we are at our match_count and end of the path\n if match_count == array_index:\n # check if we are at end of the jsonpath\n if cnt == len(jsonpath)-1:\n # we are at the end of path so let's stop here'\n if error.validator == \"type\":\n if value.strip() == str(error.instance):\n errormsg = \"Value '%s' should be of type '%s'\" % (error.instance, str(error.validator_value))\n line = line.replace(\"\\n\", (tag % errormsg))\n foundline = linenum\n found = True\n elif value.strip() == \"\" and error.instance is None:\n errormsg = \"Missing value for %s.\" % key\n line = line.replace(\"\\n\", (tag % errormsg))\n foundline = linenum\n found = True\n\n elif not found:\n # print \"EXTRA FOO\"\n # print match_count\n # print array_index\n # print current_key\n # print line\n # otherwise change the start to the current line\n current_start = linenum\n break\n\n match_count += 1\n\n\n \n\n # for the context queue, we want to get the error to appear in\n # the middle of the error output. to do so, we will only append\n # to the queue in 2 cases:\n #\n # 1. 
before we find the error (found == False). we can\n # just keep pushing on the queue until we find it in the YAML.\n # 2. once we find the error (found == True), we just want to push\n # onto the queue until the the line is in the middle\n if not found or (found and context.maxlen > (linenum-foundline)*2):\n context.append(line)\n elif found and context.maxlen <= (linenum-foundline)*2:\n break\n\n\n # Loop through the queue and generate a readable msg output\n out = \"\"\n for line in context:\n out += line\n\n if foundline:\n msg = \"Error found on line %d in %s:\\n\\n%s\" % (foundline, self.ymlfile, out)\n messages.append(msg)\n\n # reset the line it was found on and the context\n foundline = 0\n context.clear()\n\n linecache.clearcache()\n else:\n messages.append(error.message)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef schema_val(self, messages=None):\n \"Perform validation with processed YAML and Schema\"\n self._ymlproc = YAMLProcessor(self._ymlfile)\n self._schemaproc = SchemaProcessor(self._schemafile)\n valid = True\n\n log.debug(\"BEGIN: Schema-based validation for YAML '%s' with schema '%s'\", self._ymlfile, self._schemafile)\n\n # Make sure the yml and schema have been loaded\n if self._ymlproc.loaded and self._schemaproc.loaded:\n # Load all of the yaml documents. Could be more than one in the same YAML file.\n for docnum, data in enumerate(yaml.load_all(self._ymlproc.data)):\n\n # Since YAML allows integer keys but JSON does not, we need to first\n # dump the data as a JSON string to encode all of the potential integers\n # as strings, and then read it back out into the YAML format. 
Kind of\n # a clunky workaround but it works as expected.\n data = yaml.load(json.dumps(data))\n\n # Now we want to get a validator ready\n v = jsonschema.Draft4Validator(self._schemaproc.data)\n\n # Loop through the errors (if any) and set valid = False if any are found\n # Display the error message\n for error in sorted(v.iter_errors(data)):\n msg = \"Schema-based validation failed for YAML file '\" + self._ymlfile + \"'\"\n self.ehandler.process(docnum, self._ymlproc.doclines, error, messages)\n valid = False\n\n if not valid:\n log.error(msg)\n\n elif not self._ymlproc.loaded:\n raise util.YAMLError(\"YAML must be loaded in order to validate.\")\n elif not self._schemaproc.loaded:\n raise jsonschema.SchemaError(\"Schema must be loaded in order to validate.\")\n\n log.debug(\"END: Schema-based validation complete for '%s'\", self._ymlfile)\n return valid", "response": "Perform validation with processed YAML and Schema"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef content_val(self, ymldata=None, messages=None):\n\n self._ymlproc = YAMLProcessor(self._ymlfile, False)\n\n # Turn off the YAML Processor\n log.debug(\"BEGIN: Content-based validation of Command dictionary\")\n if ymldata is not None:\n cmddict = ymldata\n elif ymldata is None and self._ymlproc.loaded:\n cmddict = self._ymlproc.data\n elif not self._ymlproc.loaded:\n raise util.YAMLError(\"YAML failed to load.\")\n\n try:\n # instantiate the document number. 
this will increment in order to\n # track the line numbers and section where validation fails\n docnum = 0\n\n # boolean to hold argument validity\n argsvalid = True\n\n # list of rules to validate against\n rules = []\n\n ### set the command rules\n #\n # set uniqueness rule for command names\n rules.append(UniquenessRule('name', \"Duplicate command name: %s\", messages))\n\n # set uniqueness rule for opcodes\n rules.append(UniquenessRule('opcode', \"Duplicate opcode: %s\", messages))\n #\n ###\n for cmdcnt, cmddefn in enumerate(cmddict[0]):\n # check the command rules\n for rule in rules:\n rule.check(cmddefn)\n\n # list of argument rules to validate against\n argrules = []\n\n ### set rules for command arguments\n #\n # set uniqueness rule for opcodes\n argrules.append(UniquenessRule('name', \"Duplicate argument name: \" + cmddefn.name + \".%s\", messages))\n\n # set type rule for arg.type\n argrules.append(TypeRule('type', \"Invalid argument type for argument: \" + cmddefn.name + \".%s\", messages))\n\n # set argument size rule for arg.type.nbytes\n argrules.append(TypeSizeRule('nbytes', \"Invalid argument size for argument: \" + cmddefn.name + \".%s\", messages))\n\n # set argument enumerations rule to check no enumerations contain un-quoted YAML special variables\n argrules.append(EnumRule('enum', \"Invalid enum value for argument: \" + cmddefn.name + \".%s\", messages))\n\n # set byte order rule to ensure proper ordering of aruguments\n argrules.append(ByteOrderRule('bytes', \"Invalid byte order for argument: \" + cmddefn.name + \".%s\", messages))\n #\n ###\n\n argdefns = cmddefn.argdefns\n for arg in argdefns:\n # check argument rules\n for rule in argrules:\n rule.check(arg)\n\n # check if argument rule failed, if so set the validity to False\n if not all(r.valid is True for r in argrules):\n argsvalid = False\n\n log.debug(\"END: Content-based validation complete for '%s'\", self._ymlfile)\n\n # check validity of all command rules and argument validity\n 
return all(rule.valid is True for rule in rules) and argsvalid\n\n        except util.YAMLValidationError as e:\n            # Display the error message\n            if messages is not None:\n                if len(e.message) < 128:\n                    msg = \"Validation Failed for YAML file '\" + self._ymlfile + \"': '\" + str(e.message) + \"'\"\n                else:\n                    msg = \"Validation Failed for YAML file '\" + self._ymlfile + \"'\"\n\n                log.error(msg)\n                self.ehandler.process(docnum, self.ehandler.doclines, e, messages)\n            return False", "response": "Validates the contents of the Command Dictionary against the command and argument rules, returning True if every check passes and False otherwise."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef validate(self, ymldata=None, messages=None):\n        schema_val = self.schema_val(messages)\n        content_val = True\n        if len(messages) == 0:\n            content_val = self.content_val(ymldata, messages)\n\n        return schema_val and content_val", "response": "Validates the Telemetry Dictionary definitions"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef content_val(self, ymldata=None, messages=None):\n\n        # Turn off the YAML Processor\n        log.debug(\"BEGIN: Content-based validation of Telemetry dictionary\")\n        if ymldata is not None:\n            tlmdict = ymldata\n        else:\n            tlmdict = tlm.TlmDict(self._ymlfile)\n\n        try:\n            # instantiate the document number. 
this will increment in order to\n            # track the line numbers and section where validation fails\n            docnum = 0\n\n            # boolean to hold argument validity\n            fldsvalid = True\n\n            # list of rules to validate against\n            rules = []\n\n            ### set the packet rules\n            #\n            # set uniqueness rule for packet names\n            rules.append(UniquenessRule('name', \"Duplicate packet name: %s\", messages))\n\n            ###\n            # Loop through the keys and check each PacketDefinition\n            for key in tlmdict.keys():\n                pktdefn = tlmdict[key]\n                # check the telemetry packet rules\n                for rule in rules:\n                    rule.check(pktdefn)\n\n                # list of field rules to validate against\n                fldrules = []\n\n                ### set rules for telemetry fields\n                #\n                # set uniqueness rule for field name\n                fldrules.append(UniquenessRule('name', \"Duplicate field name: \" + pktdefn.name + \".%s\", messages))\n\n                # set type rule for field.type\n                fldrules.append(TypeRule('type', \"Invalid field type for field: \" + pktdefn.name + \".%s\", messages))\n\n                # set field size rule for field.type.nbytes\n                fldrules.append(TypeSizeRule('nbytes', \"Invalid field size for field: \" + pktdefn.name + \".%s\", messages))\n\n                # set field enumerations rule to check no enumerations contain un-quoted YAML special variables\n                fldrules.append(EnumRule('enum', \"Invalid enum value for field: \" + pktdefn.name + \".%s\", messages))\n                #\n                ###\n\n                flddefns = pktdefn.fields\n                for fld in flddefns:\n                    # check field rules\n                    for rule in fldrules:\n                        rule.check(fld)\n\n                # check if field rule failed, if so set the validity to False\n                if not all(r.valid is True for r in fldrules):\n                    fldsvalid = False\n\n            log.debug(\"END: Content-based validation complete for '%s'\", self._ymlfile)\n\n            # check validity of all packet rules and field validity\n            return all(rule.valid is True for rule in rules) and fldsvalid\n\n        except util.YAMLValidationError as e:\n            # Display the error message\n            if messages is not None:\n                if len(e.message) < 128:\n                    msg = \"Validation Failed for YAML file '\" + self._ymlfile + \"': '\" + 
str(e.message) + \"'\"\n else:\n msg = \"Validation Failed for YAML file '\" + self._ymlfile + \"'\"\n\n log.error(msg)\n self.ehandler.process(self.ehandler.doclines, e, messages)\n return False", "response": "Validates the contents of the Telemetry Dictionary to ensure that each of the fields in the Telemetry Dictionary meets specific criteria regarding the expected types byte ranges etc."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef check(self, defn):\n val = getattr(defn, self.attr)\n if val is not None and val in self.val_list:\n self.messages.append(self.msg % str(val))\n # TODO self.messages.append(\"TBD location message\")\n self.valid = False\n elif val is not None:\n self.val_list.append(val)\n log.debug(self.val_list)", "response": "Performs the uniqueness check against the value list of the attribute in the rule objects\n "} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef check(self, defn):\n allowedTypes = dtype.PrimitiveType, dtype.ArrayType\n if not isinstance(defn.type, allowedTypes):\n self.messages.append(self.msg % str(defn.name))\n # self.messages.append(\"TBD location message\")\n self.valid = False", "response": "Performs isinstance check for the definitions data type."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef check(self, defn, msg=None):\n if isinstance(defn.type, dtype.PrimitiveType):\n # Check the nbytes designated in the YAML match the PDT\n nbytes = defn.type.nbytes\n defnbytes = defn.slice().stop - defn.slice().start\n if nbytes != defnbytes:\n self.messages.append(self.msg % defn.name)\n self.messages.append(\"Definition size of (\" + str(defnbytes) +\n \" bytes) does not match size of data\" +\n \" type \" +str(defn.type.name) + \" (\" +\n str(nbytes) + \" byte(s))\")\n # TODO self.messages.append(\"TBD location 
message\")\n                self.valid = False", "response": "Checks that the byte size designated in the definition's data type matches the byte size computed from the definition's slice()."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nchecks if the given definition is valid.", "response": "def check(self, defn, msg=None):\n        \"\"\"Uses the definition's slice() method to determine its start/stop\n        range.\n        \"\"\"\n        # Check that this definition begins where the previous definition ended\n        if (defn.slice().start != self.prevstop):\n            self.messages.append(self.msg % str(defn.name))\n            # TODO self.messages.append(\"TBD location message\")\n            self.valid = False\n\n        self.prevstop = defn.slice().stop"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef wait (cond, msg=None, _timeout=10, _raiseException=True):\n    status = False\n    delay = 0.25\n    elapsed = 0\n\n    if msg is None and type(cond) is str:\n        msg = cond\n\n    if type(cond) is bool:\n        if cond:\n            log.warn('Boolean passed as argument to wait. Make sure argument to wait is surrounded by a lambda or \" \"')\n        else:\n            raise FalseWaitError(msg)\n\n    if type(cond) in (int, float):\n        gevent.sleep(cond)\n        status = True\n    else:\n        while True:\n            if _timeout is not None and elapsed >= _timeout:\n                if _raiseException:\n                    raise APITimeoutError(_timeout, msg)\n                else:\n                    status = False\n                break\n\n            if type(cond) is str:\n                caller = inspect.stack()[1][0]\n                status = eval(cond, caller.f_globals, caller.f_locals)\n            elif callable(cond):\n                status = cond()\n            else:\n                status = cond\n\n            if status:\n                break\n\n            gevent.sleep(delay)\n            elapsed += delay\n\n    return status", "response": "Waits for a given condition to be True."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncreates, validates and sends the given command as a UDP packet to the destination host and port specified when this CmdAPI was created. 
Returns True if the command was created, valid, and sent; False otherwise.", "response": "def send (self, command, *args, **kwargs):\n        \"\"\"Creates, validates, and sends the given command as a UDP\n        packet to the destination (host, port) specified when this\n        CmdAPI was created.\n\n        Returns True if the command was created, valid, and sent,\n        False otherwise.\n        \"\"\"\n        status = False\n        cmdobj = self._cmddict.create(command, *args, **kwargs)\n        messages = []\n\n        if not cmdobj.validate(messages):\n            for msg in messages:\n                log.error(msg)\n        else:\n            encoded = cmdobj.encode()\n\n            if self._verbose:\n                size = len(cmdobj.name)\n                pad = (size - len(cmdobj.name) + 1) * ' '\n                gds.hexdump(encoded, preamble=cmdobj.name + ':' + pad)\n\n            try:\n                values = (self._host, self._port, str(cmdobj))\n                log.command('Sending to %s:%d: %s' % values)\n                self._socket.sendto(encoded, (self._host, self._port))\n                status = True\n\n                with pcap.open(self.CMD_HIST_FILE, 'a') as output:\n                    output.write(str(cmdobj))\n            except socket.error as e:\n                log.error(str(e))\n            except IOError as e:\n                log.error(str(e))\n\n        return status"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _pop(self, block=True, timeout=None, left=False):\n        item = None\n        timer = None\n        deque = self._deque\n        empty = IndexError('pop from an empty deque')\n\n        if block is False:\n            if len(self._deque) > 0:\n                item = deque.popleft() if left else deque.pop()\n            else:\n                raise empty\n        else:\n            try:\n                if timeout is not None:\n                    timer = gevent.Timeout(timeout, empty)\n                    timer.start()\n\n                while True:\n                    self.notEmpty.wait()\n                    if len(deque) > 0:\n                        item = deque.popleft() if left else deque.pop()\n                        break\n            finally:\n                if timer is not None:\n                    timer.cancel()\n\n        if len(deque) == 0:\n            self.notEmpty.clear()\n\n        return item", "response": "Removes and returns an item from this GeventDeque, popping from the left or right side as requested and optionally blocking until an item is available."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef 
append(self, item):\n        self._deque.append(item)\n        self.notEmpty.set()", "response": "Add item to the right side of the GeventDeque."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef appendleft(self, item):\n        self._deque.appendleft(item)\n        self.notEmpty.set()", "response": "Add item to the left side of the GeventDeque."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef extend(self, iterable):\n        self._deque.extend(iterable)\n        if len(self._deque) > 0:\n            self.notEmpty.set()", "response": "Extend the right side of this GeventDeque by appending the elements from the iterable argument."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef extendleft(self, iterable):\n        self._deque.extendleft(iterable)\n        if len(self._deque) > 0:\n            self.notEmpty.set()", "response": "Extend the left side of this GeventDeque by appending\n        elements from the iterable argument."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef popleft(self, block=True, timeout=None):\n        return self._pop(block, timeout, left=True)", "response": "Remove and return an item from the left side of the GeventDeque."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef confirm(self, msg, _timeout=-1):\n        ''' Send a confirm prompt to the GUI\n\n        Arguments:\n            msg (string):\n                The message to display to the user.\n\n            _timeout (int):\n                The optional amount of time for which the prompt\n                should be displayed to the user before a timeout occurs.\n                Defaults to -1 which indicates there is no timeout limit.\n        '''\n        return self.msgBox('confirm', _timeout=_timeout, msg=msg)", "response": "Send a confirm prompt to the GUI."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nsend a user prompt request to the GUI.", "response": "def 
msgBox(self, promptType, _timeout=-1, **options):\n ''' Send a user prompt request to the GUI\n\n Arguments:\n promptType (string):\n The prompt type to send to the GUI. Currently\n the only type supported is 'confirm'.\n\n _timeout (int):\n The optional amount of time for which the prompt\n should be displayed to the user before a timeout occurs.\n Defaults to -1 which indicates there is no timeout limit.\n\n options (dict):\n The keyword arguments that should be passed to the requested\n prompt type. Check prompt specific sections below for information on what\n arguments are expected to be present.\n\n Raises:\n ValueError:\n If the prompt type received is an unexpected value\n\n **Confirm Prompt**\n\n Display a message to the user and prompt them for a confirm/deny\n response to the message.\n\n Arguments:\n msg (string):\n The message to display to the user\n\n Returns:\n True if the user picks 'Confirm', False if the user picks 'Deny'\n\n Raises:\n KeyError:\n If the options passed to the prompt handler doesn't contain a\n `msg` attribute.\n\n APITimeoutError:\n If the timeout value is reached without receiving a response.\n '''\n if promptType == 'confirm':\n return self._sendConfirmPrompt(_timeout, options)\n else:\n raise ValueError('Unknown prompt type: {}'.format(promptType))"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _subscribe_all(self):\n for stream in (self.inbound_streams + self.outbound_streams):\n for input_ in stream.inputs:\n if not type(input_) is int and input_ is not None:\n self._subscribe(stream, input_)\n\n for plugin in self.plugins:\n for input_ in plugin.inputs:\n self._subscribe(plugin, input_)\n\n for output in plugin.outputs:\n # Find output stream instance\n subscriber = next((x for x in self.outbound_streams\n if x.name == output), None)\n if subscriber is None:\n log.warn('The outbound stream {} does not '\n 'exist so will not receive messages '\n 'from 
{}'.format(output, plugin))\n                else:\n                    self._subscribe(subscriber, plugin.name)", "response": "Subscribes streams and plugins to their configured inputs, and subscribes outbound streams to the plugin outputs that name them."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nadding logging handlers to the logger to log to the local system.", "response": "def addLocalHandlers (logger):\n    \"\"\"Adds logging handlers to logger to log to the following local\n    resources:\n\n    1. The terminal\n    2. localhost:514 (i.e. syslogd)\n    3. localhost:2514 (i.e. the AIT GUI syslog-like handler)\n    \"\"\"\n    termlog = logging.StreamHandler()\n    termlog.setFormatter( LogFormatter() )\n\n    logger.addHandler( termlog )\n    logger.addHandler( SysLogHandler() )\n    logger.addHandler( SysLogHandler(('localhost', 2514)) )"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nadd logging handlers to logger to remotely log to a specific host.", "response": "def addRemoteHandlers (logger):\n    \"\"\"Adds logging handlers to logger to remotely log to:\n\n        ait.config.logging.hostname:514 (i.e. 
syslogd)\n\n If not set or hostname cannot be resolved, this method has no\n effect.\n \"\"\"\n try:\n hostname = ait.config.logging.hostname\n\n # Do not \"remote\" log to this host, as that's already covered\n # by addLocalHandlers().\n if socket.getfqdn() != hostname:\n socket.getaddrinfo(hostname, None)\n logger.addHandler( SysLogHandler( (hostname, 514) ) )\n\n except AttributeError:\n pass # No ait.config.logging.hostname\n\n except socket.gaierror:\n pass"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nparsing Syslog messages into a dict keyed by the constituent parts of the Syslog message.", "response": "def parseSyslog(msg):\n \"\"\"Parses Syslog messages (RFC 5424)\n\n The `Syslog Message Format (RFC 5424)\n `_ can be parsed with\n simple whitespace tokenization::\n\n SYSLOG-MSG = HEADER SP STRUCTURED-DATA [SP MSG]\n HEADER = PRI VERSION SP TIMESTAMP SP HOSTNAME\n SP APP-NAME SP PROCID SP MSGID\n ...\n NILVALUE = \"-\"\n\n This method does not return STRUCTURED-DATA. 
It parses NILVALUE\n (\"-\") STRUCTURED-DATA or simple STRUCTURED-DATA which does not\n contain (escaped) ']'.\n\n :returns: A dictionary keyed by the constituent parts of the\n Syslog message.\n \"\"\"\n tokens = msg.split(' ', 6)\n result = { }\n\n if len(tokens) > 0:\n pri = tokens[0]\n start = pri.find('<')\n stop = pri.find('>')\n\n if start != -1 and stop != -1:\n result['pri'] = pri[start + 1:stop]\n else:\n result['pri'] = ''\n\n if stop != -1 and len(pri) > stop:\n result['version'] = pri[stop + 1:]\n else:\n result['version'] = ''\n\n result[ 'timestamp' ] = tokens[1] if len(tokens) > 1 else ''\n result[ 'hostname' ] = tokens[2] if len(tokens) > 2 else ''\n result[ 'appname' ] = tokens[3] if len(tokens) > 3 else ''\n result[ 'procid' ] = tokens[4] if len(tokens) > 4 else ''\n result[ 'msgid' ] = tokens[5] if len(tokens) > 5 else ''\n result[ 'msg' ] = ''\n\n if len(tokens) > 6:\n # The following will work for NILVALUE STRUCTURED-DATA or\n # simple STRUCTURED-DATA which does not contain ']'.\n rest = tokens[6]\n start = rest.find('-')\n\n if start == -1:\n start = rest.find(']')\n\n if len(rest) > start:\n result['msg'] = rest[start + 1:].strip()\n\n return result"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef formatTime (self, record, datefmt=None):\n if datefmt is None:\n datefmt = '%Y-%m-%d %H:%M:%S'\n\n ct = self.converter(record.created)\n t = time.strftime(datefmt, ct)\n s = '%s.%03d' % (t, record.msecs)\n\n return s", "response": "Return the creation time of the specified LogRecord as formatted\n text."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn the given LogRecord as formatted text.", "response": "def format (self, record):\n \"\"\"Returns the given LogRecord as formatted text.\"\"\"\n record.hostname = self.hostname\n return logging.Formatter.format(self, record)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function 
in Python 3 to\nreturn the creation time of the given LogRecord as formatted text.", "response": "def formatTime (self, record, datefmt=None):\n \"\"\"Returns the creation time of the given LogRecord as formatted text.\n\n NOTE: The datefmt parameter and self.converter (the time\n conversion method) are ignored. BSD Syslog Protocol messages\n always use local time, and by our convention, Syslog Protocol\n messages use UTC.\n \"\"\"\n if self.bsd:\n lt_ts = datetime.datetime.fromtimestamp(record.created)\n ts = lt_ts.strftime(self.BSD_DATEFMT)\n if ts[4] == '0':\n ts = ts[0:4] + ' ' + ts[5:]\n else:\n utc_ts = datetime.datetime.utcfromtimestamp(record.created)\n ts = utc_ts.strftime(self.SYS_DATEFMT)\n return ts"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn an instance of a PCapStream class which contains the read write and close methods.", "response": "def open (filename, mode='r', **options):\n \"\"\"Returns an instance of a :class:`PCapStream` class which contains\n the ``read()``, ``write()``, and ``close()`` methods. Binary mode\n is assumed for this module, so the \"b\" is not required when\n calling ``open()``.\n\n If the optiontal ``rollover`` parameter is True, a\n :class:`PCapRolloverStream` is created instead. In that case\n ``filename`` is treated as a ``strftime(3)`` format string and\n ``nbytes``, ``npackets``, ``nseconds``, and ``dryrun`` parameters\n may also be specified. 
See :class:``PCapRolloverStream`` for more\n information.\n\n NOTE: :class:`PCapRolloverStream` is always opened in write mode\n (\"wb\") and supports only ``write()`` and ``close()``, not\n ``read()``.\n \"\"\"\n mode = mode.replace('b', '') + 'b'\n\n if options.get('rollover', False):\n stream = PCapRolloverStream(filename,\n options.get('nbytes' , None),\n options.get('npackets', None),\n options.get('nseconds', None),\n options.get('dryrun' , False))\n else:\n stream = PCapStream( __builtin__.open(filename, mode), mode )\n\n return stream"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef query(starttime, endtime, output=None, *filenames):\n '''Given a time range and input file, query creates a new file with only\n that subset of data. If no outfile name is given, the new file name is the\n old file name with the time range appended.\n\n Args:\n starttime:\n The datetime of the beginning time range to be extracted from the files.\n endtime:\n The datetime of the end of the time range to be extracted from the files.\n output:\n Optional: The output file name. 
Defaults to\n            [first filename in filenames][starttime]-[endtime].pcap\n        filenames:\n            A tuple of one or more file names to extract data from.\n    '''\n\n    if not output:\n        output = (filenames[0].replace('.pcap','') + starttime.isoformat() + '-' + endtime.isoformat() + '.pcap')\n\n    with open(output, 'w') as outfile:\n        for filename in filenames:\n            log.info(\"pcap.query: processing %s...\" % filename)\n            with open(filename, 'r') as stream:\n                for header, packet in stream:\n                    if packet is not None:\n                        if header.timestamp >= starttime and header.timestamp <= endtime:\n                            outfile.write(packet, header=header)", "response": "This function takes a time range and input file and creates a new file with only\n    that subset of data."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nsegmenting the given pcap files by one or more thresholds.", "response": "def segment(filenames, format, **options):\n    \"\"\"Segment the given pcap file(s) by one or more thresholds\n    (``nbytes``, ``npackets``, ``nseconds``). 
New segment filenames\n are determined based on the ``strftime(3)`` ``format`` string\n and the timestamp of the first packet in the file.\n\n :param filenames: Single filename (string) or list of filenames\n :param format: Output filename in ``strftime(3)`` format\n :param nbytes: Rollover after writing N bytes\n :param npackets: Rollover after writing N packets\n :param nseconds: Rollover after N seconds have elapsed between\n the first and last packet timestamp in the file.\n :param dryrun: Simulate file writes and output log messages.\n \"\"\"\n output = open(format, rollover=True, **options)\n\n if isinstance(filenames, str):\n filenames = [ filenames ]\n\n for filename in filenames:\n with open(filename, 'r') as stream:\n for header, packet in stream:\n output.write(packet, header)\n\n output.close()"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef times(filenames, tolerance=2):\n times = { }\n delta = datetime.timedelta(seconds=tolerance)\n\n if isinstance(filenames, str):\n filenames = [ filenames ]\n\n for filename in filenames:\n with open(filename, 'r') as stream:\n times[filename] = list()\n header, packet = stream.read()\n start , stop = header.timestamp, header.timestamp\n\n for header, packet in stream:\n if header.timestamp - stop > delta:\n times[filename].append((start, stop))\n start = header.timestamp\n stop = header.timestamp\n\n times[filename].append((start, stop))\n\n return times", "response": "Returns the time ranges available for the given file."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef read (self, stream):\n self._data = stream.read(self._size)\n\n if len(self._data) >= self._size:\n values = struct.unpack(self._format, self._data)\n else:\n values = None, None, None, None, None, None, None\n\n if values[0] == 0xA1B2C3D4 or values[0] == 0xA1B23C4D:\n self._swap = '@'\n elif values[0] == 0xD4C3B2A1 or values[0] == 
0x4D3CB2A1:\n self._swap = EndianSwap\n\n if values[0] is not None:\n values = struct.unpack(self._swap + self._format, self._data)\n\n self.magic_number = values[0]\n self.version_major = values[1]\n self.version_minor = values[2]\n self.thiszone = values[3]\n self.sigfigs = values[4]\n self.snaplen = values[5]\n self.network = values[6]", "response": "Reads PCapGlobalHeader data from the given stream."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nread PCapPacketHeader data from the given stream.", "response": "def read (self, stream):\n \"\"\"Reads PCapPacketHeader data from the given stream.\"\"\"\n self._data = stream.read(self._size)\n\n if len(self._data) >= self._size:\n values = struct.unpack(self._swap + self._format, self._data)\n else:\n values = None, None, None, None\n\n self.ts_sec = values[0]\n self.ts_usec = values[1]\n self.incl_len = values[2]\n self.orig_len = values[3]"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nindicates whether or not the file is being rollover to a new file.", "response": "def rollover (self):\n \"\"\"Indicates whether or not its time to rollover to a new file.\"\"\"\n rollover = False\n\n if not rollover and self._threshold.nbytes is not None:\n rollover = self._total.nbytes >= self._threshold.nbytes\n\n if not rollover and self._threshold.npackets is not None:\n rollover = self._total.npackets >= self._threshold.npackets\n\n if not rollover and self._threshold.nseconds is not None:\n nseconds = math.ceil(self._total.nseconds)\n rollover = nseconds >= self._threshold.nseconds\n\n return rollover"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nwrite a packet to the pcap file.", "response": "def write (self, bytes, header=None):\n \"\"\"Writes packet ``bytes`` and the optional pcap packet ``header``.\n\n If the pcap packet ``header`` is not specified, one will be\n generated based on the number of 
packet ``bytes`` and current\n time.\n \"\"\"\n if header is None:\n header = PCapPacketHeader(orig_len=len(bytes))\n\n if self._stream is None:\n if self._threshold.nseconds is not None:\n # Round down to the nearest multiple of nseconds\n nseconds = self._threshold.nseconds\n remainder = int( math.floor( header.ts % nseconds ) )\n delta = datetime.timedelta(seconds=remainder)\n timestamp = header.timestamp - delta\n else:\n timestamp = header.timestamp\n\n self._filename = timestamp.strftime(self._format)\n self._startTime = calendar.timegm(\n timestamp.replace(microsecond=0).timetuple() )\n\n if self._dryrun:\n self._stream = True\n self._total.nbytes += len(PCapGlobalHeader())\n else:\n self._stream = open(self._filename, 'w')\n self._total.nbytes += len(self._stream.header)\n\n if not self._dryrun:\n self._stream.write(bytes, header)\n\n self._total.nbytes += len(bytes) + len(header)\n self._total.npackets += 1\n self._total.nseconds = header.ts - self._startTime\n\n if self.rollover:\n self.close()\n\n return header.incl_len"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef close (self):\n if self._stream:\n values = ( self._total.nbytes,\n self._total.npackets,\n int( math.ceil(self._total.nseconds) ),\n self._filename )\n\n if self._dryrun:\n msg = 'Would write %d bytes, %d packets, %d seconds to %s.'\n else:\n msg = 'Wrote %d bytes, %d packets, %d seconds to %s.'\n self._stream.close()\n\n log.info(msg % values)\n\n self._filename = None\n self._startTime = None\n self._stream = None\n self._total = PCapFileStats(0, 0, 0)", "response": "Closes this : class : PCapStream by closing the underlying Python\n stream."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef next (self):\n header, packet = self.read()\n\n if packet is None:\n raise StopIteration\n\n return header, packet", "response": "Returns the next header and 
packet from this PCapStream. See read."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nread a single packet from this pcap stream returning a tuple of header and packet.", "response": "def read (self):\n \"\"\"Reads a single packet from this pcap stream, returning a\n tuple (PCapPacketHeader, packet)\n \"\"\"\n header = PCapPacketHeader(self._stream, self.header._swap)\n packet = None\n\n if not header.incomplete():\n packet = self._stream.read(header.incl_len)\n\n return (header, packet)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef write (self, bytes, header=None):\n if type(bytes) is str:\n bytes = bytearray(bytes)\n\n if not isinstance(header, PCapPacketHeader):\n header = PCapPacketHeader(orig_len=len(bytes))\n\n packet = bytes[0:header.incl_len]\n\n self._stream.write( str(header) )\n self._stream.write( packet )\n self._stream.flush()\n\n return header.incl_len", "response": "This function writes a single entry to the file. 
It takes a byte array to write to the file as a single\n PCAP packet and an optional PCAP packet header."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nadd the given Command Definition to this Command Dictionary.", "response": "def add (self, defn):\n \"\"\"Adds the given Command Definition to this Command Dictionary.\"\"\"\n self[defn.name] = defn\n self.colnames[defn.name] = defn"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef create (self, name, *args):\n tab = None\n defn = self.get(name, None)\n if defn:\n tab = FSWTab(defn, *args)\n return tab", "response": "Creates a new FSWTab from the named definition with the given arguments."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef load (self, filename):\n if self.filename is None:\n self.filename = filename\n\n stream = open(self.filename, \"rb\")\n for doc in yaml.load_all(stream):\n for table in doc:\n self.add(table)\n stream.close()", "response": "Loads Command Definitions from the given YAML file into this Command Dictionary."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef publish(self, msg):\n self.pub.send(\"{} {}\".format(self.name, msg))\n log.debug('Published message from {}'.format(self))", "response": "Publishes input message with client name as topic."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncreates a new database for the given Telemetry Dictionary and returns a connection to it.", "response": "def create(database, tlmdict=None):\n \"\"\"Creates a new database for the given Telemetry Dictionary and\n returns a connection to it.\n \"\"\"\n if tlmdict is None:\n tlmdict = tlm.getDefaultDict()\n \n dbconn = connect(database)\n\n for name, defn in tlmdict.items():\n createTable(dbconn, defn)\n\n return dbconn"} {"SOURCE": 
"codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncreates a database table for the given PacketDefinition.", "response": "def createTable(dbconn, pd):\n \"\"\"Creates a database table for the given PacketDefinition.\"\"\"\n cols = ('%s %s' % (defn.name, getTypename(defn)) for defn in pd.fields)\n sql = 'CREATE TABLE IF NOT EXISTS %s (%s)' % (pd.name, ', '.join(cols))\n\n dbconn.execute(sql)\n dbconn.commit()"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef insert(dbconn, packet):\n values = [ ]\n pd = packet._defn\n\n for defn in pd.fields:\n if defn.enum:\n val = getattr(packet.raw, defn.name)\n else:\n val = getattr(packet, defn.name)\n\n if val is None and defn.name in pd.history:\n val = getattr(packet.history, defn.name)\n\n values.append(val)\n\n qmark = ['?'] * len(values)\n sql = 'INSERT INTO %s VALUES (%s)' % (pd.name, ', '.join(qmark))\n\n dbconn.execute(sql, values)", "response": "Inserts the given packet into the connected database."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nuses the given database backend", "response": "def use(backend):\n \"\"\"Use the given database backend, e.g. 'MySQLdb', 'psycopg2',\n 'MySQLdb', etc.\n \"\"\"\n global Backend\n\n try:\n Backend = importlib.import_module(backend)\n except ImportError:\n msg = 'Could not import (load) database.backend: %s' % backend\n raise cfg.AitConfigError(msg)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nconnecting to an InfluxDB instance and switch to a given database.", "response": "def connect(self, **kwargs):\n ''' Connect to an InfluxDB instance\n\n Connects to an InfluxDB instance and switches to a given database.\n If the database doesn't exist it is created first via :func:`create`.\n\n **Configuration Parameters**\n\n host\n The host for the connection. 
Passed as either the config key\n **database.host** or the kwargs argument **host**. Defaults to\n **localhost**.\n\n port\n The port for the connection. Passed as either the config key\n **database.port** or the kwargs argument **port**. Defaults to\n **8086**.\n\n un\n The un for the connection. Passed as either the config key\n **database.un** or the kwargs argument **un**. Defaults to\n **root**.\n\n pw\n The pw for the connection. Passed as either the config key\n **database.pw** or the kwargs argument **pw**. Defaults to\n **pw**.\n\n database name\n The database name for the connection. Passed as either\n the config key **database.dbname** or the kwargs argument\n **database**. Defaults to **ait**.\n '''\n host = ait.config.get('database.host', kwargs.get('host', 'localhost'))\n port = ait.config.get('database.port', kwargs.get('port', 8086))\n un = ait.config.get('database.un', kwargs.get('un', 'root'))\n pw = ait.config.get('database.pw', kwargs.get('pw', 'root'))\n dbname = ait.config.get('database.dbname', kwargs.get('database', 'ait'))\n\n self._conn = self._backend.InfluxDBClient(host, port, un, pw)\n\n if dbname not in [v['name'] for v in self._conn.get_list_database()]:\n self.create(database=dbname)\n\n self._conn.switch_database(dbname)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef create(self, **kwargs):\n ''' Create a database in a connected InfluxDB instance\n\n **Configuration Parameters**\n\n database name\n The database name to create. Passed as either the config\n key **database.dbname** or the kwargs argument\n **database**. Defaults to **ait**.\n \n Raises:\n AttributeError:\n If a connection to the database doesn't exist\n '''\n dbname = ait.config.get('database.dbname', kwargs.get('database', 'ait'))\n\n if self._conn is None:\n raise AttributeError('Unable to create database. 
No connection to database exists.')\n\n self._conn.create_database(dbname)\n self._conn.switch_database(dbname)", "response": "Create a database in a connected InfluxDB instance."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ninsert a packet into the database.", "response": "def insert(self, packet, time=None, **kwargs):\n ''' Insert a packet into the database\n\n Arguments\n packet\n The :class:`ait.core.tlm.Packet` instance to insert into\n the database\n\n time\n Optional parameter specifying the time value to use when inserting\n the record into the database. Default case does not provide a time\n value so Influx defaults to the current time when inserting the\n record.\n\n tags\n Optional kwargs argument for specifying a dictionary of tags to\n include when adding the values. Defaults to nothing.\n \n '''\n fields = {}\n pd = packet._defn\n\n for defn in pd.fields:\n val = getattr(packet.raw, defn.name)\n\n if pd.history and defn.name in pd.history:\n val = getattr(packet.history, defn.name)\n \n if val is not None and not (isinstance(val, float) and math.isnan(val)):\n fields[defn.name] = val\n\n if len(fields) == 0:\n log.error('No fields present to insert into Influx')\n return\n\n tags = kwargs.get('tags', {})\n\n if isinstance(time, dt.datetime):\n time = time.strftime(\"%Y-%m-%dT%H:%M:%S\")\n\n data = {\n 'measurement': pd.name,\n 'tags': tags,\n 'fields': fields\n }\n\n if time:\n data['time'] = time\n\n self._conn.write_points([data])"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef create_packets_from_results(self, packet_name, result_set):\n ''' Generate AIT Packets from a InfluxDB query ResultSet\n\n Extract Influx DB query results into one packet per result entry. This\n assumes that telemetry data was inserted in the format generated by\n :func:`InfluxDBBackend.insert`. 
Complex types such as CMD16 and EVR16 are\n evaluated if they can be properly encoded from the raw value in the\n query result. If there is no opcode / EVR-code for a particular raw\n value the value is skipped (and thus defaulted to 0).\n\n Arguments\n packet_name (string)\n The name of the AIT Packet to create from each result entry\n\n result_set (influxdb.resultset.ResultSet)\n The query ResultSet object to convert into packets\n\n Returns\n A list of packets extracted from the ResultSet object or None if\n an invalid packet name is supplied.\n \n '''\n try:\n pkt_defn = tlm.getDefaultDict()[packet_name]\n except KeyError:\n log.error('Unknown packet name {} Unable to unpack ResultSet'.format(packet_name))\n return None\n\n pkt = tlm.Packet(pkt_defn)\n\n pkts = []\n for r in result_set.get_points():\n new_pkt = tlm.Packet(pkt_defn)\n\n for f, f_defn in pkt_defn.fieldmap.iteritems():\n field_type_name = f_defn.type.name\n if field_type_name == 'CMD16':\n if cmd.getDefaultDict().opcodes.get(r[f], None):\n setattr(new_pkt, f, cmd_def.name)\n elif field_type_name == 'EVR16':\n if evr.getDefaultDict().codes.get(r[f], None):\n setattr(new_pkt, f, r[f])\n elif field_type_name == 'TIME8':\n setattr(new_pkt, f, r[f] / 256.0)\n elif field_type_name == 'TIME32':\n new_val = dmc.GPS_Epoch + dt.timedelta(seconds=r[f])\n setattr(new_pkt, f, new_val)\n elif field_type_name == 'TIME40':\n sec = int(r[f])\n microsec = r[f] * 1e6\n new_val = dmc.GPS_Epoch + dt.timedelta(seconds=sec, microseconds=microsec)\n setattr(new_pkt, f, new_val)\n elif field_type_name == 'TIME64':\n sec = int(r[f])\n microsec = r[f] % 1 * 1e6\n new_val = dmc.GPS_Epoch + dt.timedelta(seconds=sec, microseconds=microsec)\n setattr(new_pkt, f, new_val)\n else:\n try:\n setattr(new_pkt, f, r[f])\n except KeyError:\n log.info('Field not found in query results {} Skipping ...'.format(f))\n\n pkts.append(new_pkt)\n return pkts", "response": "Generates AIT Packets from a InfluxDB query ResultSet object and returns 
a list of Packet objects."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nconnects to a SQLite instance holding the database name or file to be used for the database.", "response": "def connect(self, **kwargs):\n ''' Connect to a SQLite instance\n \n **Configuration Parameters**\n\n database\n The database name or file to \"connect\" to. Defaults to **ait**.\n '''\n if 'database' not in kwargs:\n kwargs['database'] = 'ait'\n\n self._conn = self._backend.connect(kwargs['database'])"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncreate a skeleton database for the current telemetry dictionary.", "response": "def create(self, **kwargs):\n ''' Create a database for the current telemetry dictionary\n\n Connects to a SQLite instance via :func:`connect` and creates a\n skeleton database for future data inserts.\n\n **Configuration Parameters**\n\n tlmdict\n The :class:`ait.core.tlm.TlmDict` instance to use. Defaults to\n the currently configured telemetry dictionary.\n\n '''\n tlmdict = kwargs.get('tlmdict', tlm.getDefaultDict())\n \n self.connect(**kwargs)\n\n for name, defn in tlmdict.items():\n self._create_table(defn)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncreate a database table for the given PacketDefinition", "response": "def _create_table(self, packet_defn):\n ''' Creates a database table for the given PacketDefinition\n\n Arguments\n packet_defn\n The :class:`ait.core.tlm.PacketDefinition` instance for which a table entry\n should be made.\n '''\n cols = ('%s %s' % (defn.name, self._getTypename(defn)) for defn in packet_defn.fields)\n sql = 'CREATE TABLE IF NOT EXISTS %s (%s)' % (packet_defn.name, ', '.join(cols))\n\n self._conn.execute(sql)\n self._conn.commit()"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ninserts a packet into the database.", "response": "def insert(self, packet, 
**kwargs):\n ''' Insert a packet into the database\n\n Arguments\n packet\n The :class:`ait.core.tlm.Packet` instance to insert into\n the database\n\n '''\n values = [ ]\n pd = packet._defn\n\n for defn in pd.fields:\n val = getattr(packet.raw, defn.name)\n\n if val is None and defn.name in pd.history:\n val = getattr(packet.history, defn.name)\n \n values.append(val)\n\n qmark = ['?'] * len(values)\n sql = 'INSERT INTO %s VALUES (%s)' % (pd.name, ', '.join(qmark))\n\n\n self._conn.execute(sql, values)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns the SQL typename required to store the given FieldDefinition", "response": "def _getTypename(self, defn):\n \"\"\" Returns the SQL typename required to store the given FieldDefinition \"\"\"\n return 'REAL' if defn.type.float or 'TIME' in defn.type.name or defn.dntoeu else 'INTEGER'"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngenerates a map of vector lengths from the center point to each coordinate", "response": "def genCubeVector(x, y, z, x_mult=1, y_mult=1, z_mult=1):\r\n \"\"\"Generates a map of vector lengths from the center point to each coordinate\r\n\r\n x - width of matrix to generate\r\n y - height of matrix to generate\r\n z - depth of matrix to generate\r\n x_mult - value to scale x-axis by\r\n y_mult - value to scale y-axis by\r\n z_mult - value to scale z-axis by\r\n \"\"\"\r\n cX = (x - 1) / 2.0\r\n cY = (y - 1) / 2.0\r\n cZ = (z - 1) / 2.0\r\n\r\n def vect(_x, _y, _z):\r\n return int(math.sqrt(math.pow(_x - cX, 2 * x_mult) +\r\n math.pow(_y - cY, 2 * y_mult) +\r\n math.pow(_z - cZ, 2 * z_mult)))\r\n\r\n return [[[vect(_x, _y, _z) for _z in range(z)] for _y in range(y)] for _x in range(x)]"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngives a datetime object adjust it according to the from_tz timezone into the to_tz timezone string.", "response": 
"def adjust_datetime_to_timezone(value, from_tz, to_tz=None):\n \"\"\"\n Given a ``datetime`` object adjust it according to the from_tz timezone\n string into the to_tz timezone string.\n \"\"\"\n if to_tz is None:\n to_tz = settings.TIME_ZONE\n if value.tzinfo is None:\n if not hasattr(from_tz, \"localize\"):\n from_tz = pytz.timezone(smart_str(from_tz))\n value = from_tz.localize(value)\n return value.astimezone(pytz.timezone(smart_str(to_tz)))"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns field s value prepared for saving into a database.", "response": "def get_db_prep_save(self, value, connection=None):\n \"\"\"\n Returns field's value prepared for saving into a database.\n \"\"\"\n ## convert to settings.TIME_ZONE\n if value is not None:\n if value.tzinfo is None:\n value = default_tz.localize(value)\n else:\n value = value.astimezone(default_tz)\n return super(LocalizedDateTimeField, self).get_db_prep_save(value, connection=connection)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn field s value prepared for database lookup.", "response": "def get_db_prep_lookup(self, lookup_type, value, connection=None, prepared=None):\n \"\"\"\n Returns field's value prepared for database lookup.\n \"\"\"\n ## convert to settings.TIME_ZONE\n if value.tzinfo is None:\n value = default_tz.localize(value)\n else:\n value = value.astimezone(default_tz)\n return super(LocalizedDateTimeField, self).get_db_prep_lookup(lookup_type, value, connection=connection, prepared=prepared)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns the number of live neighbours in a given location.", "response": "def liveNeighbours(self, z, y, x):\r\n \"\"\"Returns the number of live neighbours.\"\"\"\r\n count = 0\r\n\r\n for oz, oy, ox in self.offsets:\r\n cz, cy, cx = z + oz, y + oy, x + ox\r\n if cz >= self.depth:\r\n cz = 0\r\n if cy >= 
self.height:\r\n cy = 0\r\n if cx >= self.width:\r\n cx = 0\r\n count += self.table[cz][cy][cx]\r\n\r\n # if self.toroidal:\r\n # if y == 0:\r\n # if self.table[self.height - 1][x]:\r\n # count = count + 1\r\n # if y == self.height - 1:\r\n # if self.table[0][x]:\r\n # count = count + 1\r\n # if x == 0:\r\n # if self.table[y][self.width - 1]:\r\n # count = count + 1\r\n # if x == self.width - 1:\r\n # if self.table[y][0]:\r\n # count = count + 1\r\n\r\n return count"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nturn the table into a new state.", "response": "def turn(self):\r\n \"\"\"Turn\"\"\"\r\n r = (4, 5, 5, 5)\r\n nt = copy.deepcopy(self.table)\r\n for z in range(self.depth):\r\n for y in range(self.height):\r\n for x in range(self.width):\r\n neighbours = self.liveNeighbours(z, y, x)\r\n if self.table[z][y][x] == 0 and (neighbours > r[0] and neighbours <= r[1]):\r\n nt[z][y][x] = 1\r\n elif self.table[z][y][x] == 1 and (neighbours > r[2] and neighbours < r[3]):\r\n nt[z][y][x] = 0\r\n\r\n self._oldStates.append(self.table)\r\n if len(self._oldStates) > 3:\r\n self._oldStates.popleft()\r\n\r\n self.table = nt"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef make_bd(self):\n \"Make a set of 'shaped' random #'s for particle brightness deltas (bd)\"\n self.bd = concatenate((\n # These values will dim the particles\n random.normal(\n self.bd_mean - self.bd_mu, self.bd_sigma, 16).astype(int),\n # These values will brighten the particles\n random.normal(\n self.bd_mean + self.bd_mu, self.bd_sigma, 16).astype(int)),\n axis=0)", "response": "Make a set of shaped random #'s for particle brightness deltas ( bd )"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nmaking a set of velocities to be randomly chosen for emitted particles", "response": "def make_vel(self):\n \"Make a set of velocities to be randomly chosen for emitted particles\"\n 
self.vel = random.normal(self.vel_mu, self.vel_sigma, 16)\n # Make sure nothing's slower than 1/8 pixel / step\n for i, vel in enumerate(self.vel):\n if abs(vel) < 0.125 / self._size:\n if vel < 0:\n self.vel[i] = -0.125 / self._size\n else:\n self.vel[i] = 0.125 / self._size"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef move_particles(self):\n moved_particles = []\n for vel, pos, stl, color, bright in self.particles:\n\n stl -= 1 # steps to live\n if stl > 0:\n\n pos = pos + vel\n if vel > 0:\n if pos >= (self._end + 1):\n if self.wrap:\n pos = pos - (self._end + 1) + self._start\n else:\n continue # Sacked\n else:\n if pos < self._start:\n if self.wrap:\n pos = pos + self._end + 1 + self._start\n else:\n continue # Sacked\n\n if random.random() < self.step_flare_prob:\n bright = 255\n else:\n bright = bright + random.choice(self.bd)\n if bright > 255:\n bright = 255\n # Zombie particles with bright<=0 walk, don't -overflow\n if bright < -10000:\n bright = -10000\n\n moved_particles.append((vel, pos, stl, color, bright))\n\n self.particles = moved_particles", "response": "Move each particle by its velocity, adjusting brightness as we go."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef move_emitters(self):\n moved_emitters = []\n for e_pos, e_dir, e_vel, e_range, e_color, e_pal in self.emitters:\n\n e_pos = e_pos + e_vel\n if e_vel > 0:\n if e_pos >= (self._end + 1):\n if self.wrap:\n e_pos = e_pos - (self._end + 1) + self._start\n else:\n continue # Sacked\n else:\n if e_pos < self._start:\n if self.wrap:\n e_pos = e_pos + self._end + 1 + self._start\n else:\n continue # Sacked\n\n moved_emitters.append(\n (e_pos, e_dir, e_vel, e_range, e_color, e_pal))\n\n self.emitters = moved_emitters", "response": "Move each emitter by its velocity."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 
that\nstarts some new particles from the emitters.", "response": "def start_new_particles(self):\n \"\"\"\n Start some new particles from the emitters. We roll the dice\n starts_at_once times, seeing if we can start each particle based\n on starts_prob. If we start, the particle gets a color form\n the palette and a velocity from the vel list.\n \"\"\"\n for e_pos, e_dir, e_vel, e_range, e_color, e_pal in self.emitters:\n for roll in range(self.starts_at_once):\n if random.random() < self.starts_prob: # Start one?\n p_vel = self.vel[random.choice(len(self.vel))]\n if e_dir < 0 or e_dir == 0 and random.random() > 0.5:\n p_vel = -p_vel\n self.particles.append((\n p_vel, # Velocity\n e_pos, # Position\n int(e_range // abs(p_vel)), # steps to live\n e_pal[\n random.choice(len(e_pal))], # Color\n 255))"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef visibility(self, strip_pos, particle_pos):\n dist = abs(particle_pos - strip_pos)\n if dist > self.half_size:\n dist = self._size - dist\n if dist < self.aperture:\n return (self.aperture - dist) / self.aperture\n else:\n return 0", "response": "Compute particle visibility based on distance between current strip position being rendered and particle position."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef render_particles(self):\n for strip_pos in range(self._start, self._end + 1):\n\n blended = COLORS.black\n\n # Render visible emitters\n if self.has_e_colors:\n for (e_pos, e_dir, e_vel, e_range,\n e_color, e_pal) in self.emitters:\n if e_color is not None:\n vis = self.visibility(strip_pos, e_pos)\n if vis > 0:\n blended = color_blend(\n blended,\n color_scale(e_color, int(vis * 255)))\n\n # Render visible particles\n for vel, pos, stl, color, bright in self.particles:\n vis = self.visibility(strip_pos, pos)\n if vis > 0 and bright > 0:\n blended = color_blend(\n blended,\n color_scale(color, int(vis * 
bright)))\n\n # Add background if showing\n if (blended == COLORS.black):\n blended = self.bgcolor\n\n self.color_list[strip_pos] = blended", "response": "Render visible particles at each strip position by modifying the color list."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nmakes a frame of the animation", "response": "def step(self, amt=1):\n \"Make a frame of the animation\"\n self.move_particles()\n if self.has_moving_emitters:\n self.move_emitters()\n self.start_new_particles()\n self.render_particles()\n if self.emitters == [] and self.particles == []:\n self.completed = True"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nis the game over?", "response": "def complete(self):\n \"\"\"is the game over?\"\"\"\n if None not in [v for v in self.squares]:\n return True\n if self.winner() is not None:\n return True\n return False"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_squares(self, player=None):\n if player:\n return [k for k, v in enumerate(self.squares) if v == player]\n else:\n return self.squares", "response": "get_squares - Returns a list of all the squares that belong to a player"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef liveNeighbours(self, y, x):\n count = 0\n if y > 0:\n if self.table[y - 1][x]:\n count = count + 1\n if x > 0:\n if self.table[y - 1][x - 1]:\n count = count + 1\n if self.width > (x + 1):\n if self.table[y - 1][x + 1]:\n count = count + 1\n\n if x > 0:\n if self.table[y][x - 1]:\n count = count + 1\n if self.width > (x + 1):\n if self.table[y][x + 1]:\n count = count + 1\n\n if self.height > (y + 1):\n if self.table[y + 1][x]:\n count = count + 1\n if x > 0:\n if self.table[y + 1][x - 1]:\n count = count + 1\n if self.width > (x + 1):\n if self.table[y + 1][x + 1]:\n count = count + 1\n\n if 
self.toroidal:\n if y == 0:\n if self.table[self.height - 1][x]:\n count = count + 1\n if y == self.height - 1:\n if self.table[0][x]:\n count = count + 1\n if x == 0:\n if self.table[y][self.width - 1]:\n count = count + 1\n if x == self.width - 1:\n if self.table[y][0]:\n count = count + 1\n\n return count", "response": "Returns the number of live neighbours in the table."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncreate a Glycine residue", "response": "def makeGly(segID, N, CA, C, O, geo):\n '''Creates a Glycine residue'''\n ##Create Residue Data Structure\n res= Residue((' ', segID, ' '), \"GLY\", ' ')\n\n res.add(N)\n res.add(CA)\n res.add(C)\n res.add(O)\n\n ##print(res)\n return res"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef makeAla(segID, N, CA, C, O, geo):\n '''Creates an Alanine residue'''\n ##R-Group\n CA_CB_length=geo.CA_CB_length\n C_CA_CB_angle=geo.C_CA_CB_angle\n N_C_CA_CB_diangle=geo.N_C_CA_CB_diangle\n\n carbon_b= calculateCoordinates(N, C, CA, CA_CB_length, C_CA_CB_angle, N_C_CA_CB_diangle)\n CB= Atom(\"CB\", carbon_b, 0.0 , 1.0, \" \",\" CB\", 0,\"C\")\n\n ##Create Residue Data Structure\n res = Residue((' ', segID, ' '), \"ALA\", ' ')\n res.add(N)\n res.add(CA)\n res.add(C)\n res.add(O)\n res.add(CB)\n return res", "response": "Creates an Alanine residue"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef makeSer(segID, N, CA, C, O, geo):\n '''Creates a Serine residue'''\n ##R-Group\n CA_CB_length=geo.CA_CB_length\n C_CA_CB_angle=geo.C_CA_CB_angle\n N_C_CA_CB_diangle=geo.N_C_CA_CB_diangle\n\n CB_OG_length=geo.CB_OG_length\n CA_CB_OG_angle=geo.CA_CB_OG_angle\n N_CA_CB_OG_diangle=geo.N_CA_CB_OG_diangle\n \n carbon_b= calculateCoordinates(N, C, CA, CA_CB_length, C_CA_CB_angle, N_C_CA_CB_diangle)\n CB= Atom(\"CB\", carbon_b, 0.0 , 1.0, \" \",\" CB\", 0,\"C\")\n oxygen_g= calculateCoordinates(N, CA, CB, CB_OG_length, 
CA_CB_OG_angle, N_CA_CB_OG_diangle)\n OG= Atom(\"OG\", oxygen_g, 0.0, 1.0, \" \", \" OG\", 0, \"O\")\n\n ##Create Reside Data Structure\n res= Residue((' ', segID, ' '), \"SER\", ' ')\n res.add(N)\n res.add(CA)\n res.add(C)\n res.add(O)\n res.add(CB)\n res.add(OG)\n\n ##print(res)\n return res", "response": "Creates a Serine residue"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef makeCys(segID, N, CA, C, O, geo):\n '''Creates a Cysteine residue'''\n ##R-Group\n CA_CB_length=geo.CA_CB_length\n C_CA_CB_angle=geo.C_CA_CB_angle\n N_C_CA_CB_diangle=geo.N_C_CA_CB_diangle\n \n CB_SG_length= geo.CB_SG_length\n CA_CB_SG_angle= geo.CA_CB_SG_angle\n N_CA_CB_SG_diangle= geo.N_CA_CB_SG_diangle\n\n carbon_b= calculateCoordinates(N, C, CA, CA_CB_length, C_CA_CB_angle, N_C_CA_CB_diangle)\n CB= Atom(\"CB\", carbon_b, 0.0 , 1.0, \" \",\" CB\", 0,\"C\")\n sulfur_g= calculateCoordinates(N, CA, CB, CB_SG_length, CA_CB_SG_angle, N_CA_CB_SG_diangle)\n SG= Atom(\"SG\", sulfur_g, 0.0, 1.0, \" \", \" SG\", 0, \"S\")\n\n ##Create Residue Data Structure\n res= Residue((' ', segID, ' '), \"CYS\", ' ')\n res.add(N)\n res.add(CA)\n res.add(C)\n res.add(O)\n res.add(CB)\n res.add(SG)\n return res", "response": "Creates a Cysteine residue"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncreate a Valine residue", "response": "def makeVal(segID, N, CA, C, O, geo):\n '''Creates a Valine residue'''\n ##R-Group\n CA_CB_length=geo.CA_CB_length\n C_CA_CB_angle=geo.C_CA_CB_angle\n N_C_CA_CB_diangle=geo.N_C_CA_CB_diangle\n \n CB_CG1_length=geo.CB_CG1_length\n CA_CB_CG1_angle=geo.CA_CB_CG1_angle\n N_CA_CB_CG1_diangle=geo.N_CA_CB_CG1_diangle\n \n CB_CG2_length=geo.CB_CG2_length\n CA_CB_CG2_angle=geo.CA_CB_CG2_angle\n N_CA_CB_CG2_diangle=geo.N_CA_CB_CG2_diangle\n\n carbon_b= calculateCoordinates(N, C, CA, CA_CB_length, C_CA_CB_angle, N_C_CA_CB_diangle)\n CB= Atom(\"CB\", carbon_b, 0.0 , 1.0, \" \",\" CB\", 
0,\"C\")\n carbon_g1= calculateCoordinates(N, CA, CB, CB_CG1_length, CA_CB_CG1_angle, N_CA_CB_CG1_diangle)\n CG1= Atom(\"CG1\", carbon_g1, 0.0, 1.0, \" \", \" CG1\", 0, \"C\")\n carbon_g2= calculateCoordinates(N, CA, CB, CB_CG2_length, CA_CB_CG2_angle, N_CA_CB_CG2_diangle)\n CG2= Atom(\"CG2\", carbon_g2, 0.0, 1.0, \" \", \" CG2\", 0, \"C\")\n\n ##Create Residue Data Structure\n res= Residue((' ', segID, ' '), \"VAL\", ' ')\n res.add(N)\n res.add(CA)\n res.add(C)\n res.add(O)\n res.add(CB)\n res.add(CG1)\n res.add(CG2)\n return res"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef makeIle(segID, N, CA, C, O, geo):\n '''Creates an Isoleucine residue'''\n ##R-group\n CA_CB_length=geo.CA_CB_length\n C_CA_CB_angle=geo.C_CA_CB_angle\n N_C_CA_CB_diangle=geo.N_C_CA_CB_diangle\n \n CB_CG1_length=geo.CB_CG1_length\n CA_CB_CG1_angle=geo.CA_CB_CG1_angle\n N_CA_CB_CG1_diangle=geo.N_CA_CB_CG1_diangle \n \n CB_CG2_length=geo.CB_CG2_length\n CA_CB_CG2_angle=geo.CA_CB_CG2_angle\n N_CA_CB_CG2_diangle= geo.N_CA_CB_CG2_diangle\n\n CG1_CD1_length= geo.CG1_CD1_length\n CB_CG1_CD1_angle= geo.CB_CG1_CD1_angle\n CA_CB_CG1_CD1_diangle= geo.CA_CB_CG1_CD1_diangle\n \n carbon_b= calculateCoordinates(N, C, CA, CA_CB_length, C_CA_CB_angle, N_C_CA_CB_diangle)\n CB= Atom(\"CB\", carbon_b, 0.0 , 1.0, \" \",\" CB\", 0,\"C\")\n carbon_g1= calculateCoordinates(N, CA, CB, CB_CG1_length, CA_CB_CG1_angle, N_CA_CB_CG1_diangle)\n CG1= Atom(\"CG1\", carbon_g1, 0.0, 1.0, \" \", \" CG1\", 0, \"C\")\n carbon_g2= calculateCoordinates(N, CA, CB, CB_CG2_length, CA_CB_CG2_angle, N_CA_CB_CG2_diangle)\n CG2= Atom(\"CG2\", carbon_g2, 0.0, 1.0, \" \", \" CG2\", 0, \"C\")\n carbon_d1= calculateCoordinates(CA, CB, CG1, CG1_CD1_length, CB_CG1_CD1_angle, CA_CB_CG1_CD1_diangle)\n CD1= Atom(\"CD1\", carbon_d1, 0.0, 1.0, \" \", \" CD1\", 0, \"C\")\n\n ##Create Residue Data Structure\n res= Residue((' ', segID, ' '), \"ILE\", ' ')\n res.add(N)\n res.add(CA)\n 
res.add(C)\n res.add(O)\n res.add(CB)\n res.add(CG1)\n res.add(CG2)\n res.add(CD1)\n return res", "response": "Creates an Isoleucine residue"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef makeLeu(segID, N, CA, C, O, geo):\n '''Creates a Leucine residue'''\n ##R-Group\n CA_CB_length=geo.CA_CB_length\n C_CA_CB_angle=geo.C_CA_CB_angle\n N_C_CA_CB_diangle=geo.N_C_CA_CB_diangle\n\n CB_CG_length=geo.CB_CG_length\n CA_CB_CG_angle= geo.CA_CB_CG_angle\n N_CA_CB_CG_diangle=geo.N_CA_CB_CG_diangle\n \n CG_CD1_length=geo.CG_CD1_length\n CB_CG_CD1_angle=geo.CB_CG_CD1_angle\n CA_CB_CG_CD1_diangle=geo.CA_CB_CG_CD1_diangle\n\n CG_CD2_length=geo.CG_CD2_length\n CB_CG_CD2_angle=geo.CB_CG_CD2_angle\n CA_CB_CG_CD2_diangle=geo.CA_CB_CG_CD2_diangle\n\n carbon_b= calculateCoordinates(N, C, CA, CA_CB_length, C_CA_CB_angle, N_C_CA_CB_diangle)\n CB= Atom(\"CB\", carbon_b, 0.0 , 1.0, \" \",\" CB\", 0,\"C\")\n carbon_g1= calculateCoordinates(N, CA, CB, CB_CG_length, CA_CB_CG_angle, N_CA_CB_CG_diangle)\n CG= Atom(\"CG\", carbon_g1, 0.0, 1.0, \" \", \" CG\", 0, \"C\")\n carbon_d1= calculateCoordinates(CA, CB, CG, CG_CD1_length, CB_CG_CD1_angle, CA_CB_CG_CD1_diangle)\n CD1= Atom(\"CD1\", carbon_d1, 0.0, 1.0, \" \", \" CD1\", 0, \"C\")\n carbon_d2= calculateCoordinates(CA, CB, CG, CG_CD2_length, CB_CG_CD2_angle, CA_CB_CG_CD2_diangle)\n CD2= Atom(\"CD2\", carbon_d2, 0.0, 1.0, \" \", \" CD2\", 0, \"C\")\n\n ##Create Residue Data Structure\n res= Residue((' ', segID, ' '), \"LEU\", ' ')\n res.add(N)\n res.add(CA)\n res.add(C)\n res.add(O)\n res.add(CB)\n res.add(CG)\n res.add(CD1)\n res.add(CD2)\n return res", "response": "Creates a Leucine residue"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncreating a Threonine residue", "response": "def makeThr(segID, N, CA, C, O, geo):\n '''Creates a Threonine residue'''\n ##R-Group\n CA_CB_length=geo.CA_CB_length\n C_CA_CB_angle=geo.C_CA_CB_angle\n 
N_C_CA_CB_diangle=geo.N_C_CA_CB_diangle\n \n CB_OG1_length=geo.CB_OG1_length\n CA_CB_OG1_angle=geo.CA_CB_OG1_angle\n N_CA_CB_OG1_diangle=geo.N_CA_CB_OG1_diangle \n \n CB_CG2_length=geo.CB_CG2_length\n CA_CB_CG2_angle=geo.CA_CB_CG2_angle\n N_CA_CB_CG2_diangle= geo.N_CA_CB_CG2_diangle\n\n carbon_b= calculateCoordinates(N, C, CA, CA_CB_length, C_CA_CB_angle, N_C_CA_CB_diangle)\n CB= Atom(\"CB\", carbon_b, 0.0 , 1.0, \" \",\" CB\", 0,\"C\")\n oxygen_g1= calculateCoordinates(N, CA, CB, CB_OG1_length, CA_CB_OG1_angle, N_CA_CB_OG1_diangle)\n OG1= Atom(\"OG1\", oxygen_g1, 0.0, 1.0, \" \", \" OG1\", 0, \"O\")\n carbon_g2= calculateCoordinates(N, CA, CB, CB_CG2_length, CA_CB_CG2_angle, N_CA_CB_CG2_diangle)\n CG2= Atom(\"CG2\", carbon_g2, 0.0, 1.0, \" \", \" CG2\", 0, \"C\")\n\n ##Create Residue Data Structure\n res= Residue((' ', segID, ' '), \"THR\", ' ')\n res.add(N)\n res.add(CA)\n res.add(C)\n res.add(O)\n res.add(CB)\n res.add(OG1)\n res.add(CG2)\n return res"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef makeArg(segID, N, CA, C, O, geo):\n '''Creates an Arginine residue'''\n ##R-Group\n CA_CB_length=geo.CA_CB_length\n C_CA_CB_angle=geo.C_CA_CB_angle\n N_C_CA_CB_diangle=geo.N_C_CA_CB_diangle\n\n CB_CG_length=geo.CB_CG_length\n CA_CB_CG_angle= geo.CA_CB_CG_angle\n N_CA_CB_CG_diangle=geo.N_CA_CB_CG_diangle\n \n CG_CD_length=geo.CG_CD_length\n CB_CG_CD_angle=geo.CB_CG_CD_angle\n CA_CB_CG_CD_diangle=geo.CA_CB_CG_CD_diangle\n \n CD_NE_length=geo.CD_NE_length\n CG_CD_NE_angle=geo.CG_CD_NE_angle\n CB_CG_CD_NE_diangle=geo.CB_CG_CD_NE_diangle\n\n NE_CZ_length=geo.NE_CZ_length\n CD_NE_CZ_angle=geo.CD_NE_CZ_angle\n CG_CD_NE_CZ_diangle=geo.CG_CD_NE_CZ_diangle\n\n CZ_NH1_length=geo.CZ_NH1_length\n NE_CZ_NH1_angle=geo.NE_CZ_NH1_angle\n CD_NE_CZ_NH1_diangle=geo.CD_NE_CZ_NH1_diangle\n\n CZ_NH2_length=geo.CZ_NH2_length\n NE_CZ_NH2_angle=geo.NE_CZ_NH2_angle\n CD_NE_CZ_NH2_diangle=geo.CD_NE_CZ_NH2_diangle\n \n carbon_b= 
calculateCoordinates(N, C, CA, CA_CB_length, C_CA_CB_angle, N_C_CA_CB_diangle)\n CB= Atom(\"CB\", carbon_b, 0.0 , 1.0, \" \",\" CB\", 0,\"C\")\n carbon_g= calculateCoordinates(N, CA, CB, CB_CG_length, CA_CB_CG_angle, N_CA_CB_CG_diangle)\n CG= Atom(\"CG\", carbon_g, 0.0, 1.0, \" \", \" CG\", 0, \"C\")\n carbon_d= calculateCoordinates(CA, CB, CG, CG_CD_length, CB_CG_CD_angle, CA_CB_CG_CD_diangle)\n CD= Atom(\"CD\", carbon_d, 0.0, 1.0, \" \", \" CD\", 0, \"C\")\n nitrogen_e= calculateCoordinates(CB, CG, CD, CD_NE_length, CG_CD_NE_angle, CB_CG_CD_NE_diangle)\n NE= Atom(\"NE\", nitrogen_e, 0.0, 1.0, \" \", \" NE\", 0, \"N\")\n carbon_z= calculateCoordinates(CG, CD, NE, NE_CZ_length, CD_NE_CZ_angle, CG_CD_NE_CZ_diangle)\n CZ= Atom(\"CZ\", carbon_z, 0.0, 1.0, \" \", \" CZ\", 0, \"C\")\n nitrogen_h1= calculateCoordinates(CD, NE, CZ, CZ_NH1_length, NE_CZ_NH1_angle, CD_NE_CZ_NH1_diangle)\n NH1= Atom(\"NH1\", nitrogen_h1, 0.0, 1.0, \" \", \" NH1\", 0, \"N\")\n nitrogen_h2= calculateCoordinates(CD, NE, CZ, CZ_NH2_length, NE_CZ_NH2_angle, CD_NE_CZ_NH2_diangle)\n NH2= Atom(\"NH2\", nitrogen_h2, 0.0, 1.0, \" \", \" NH2\", 0, \"N\")\n\n ##Create Residue Data Structure\n res= Residue((' ', segID, ' '), \"ARG\", ' ')\n res.add(N)\n res.add(CA)\n res.add(C)\n res.add(O)\n res.add(CB)\n res.add(CG)\n res.add(CD)\n res.add(NE)\n res.add(CZ)\n res.add(NH1)\n res.add(NH2)\n return res", "response": "Creates an Arginine residue"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncreate a Lysine residue", "response": "def makeLys(segID, N, CA, C, O, geo):\n '''Creates a Lysine residue'''\n ##R-Group\n CA_CB_length=geo.CA_CB_length\n C_CA_CB_angle=geo.C_CA_CB_angle\n N_C_CA_CB_diangle=geo.N_C_CA_CB_diangle\n\n CB_CG_length=geo.CB_CG_length\n CA_CB_CG_angle=geo.CA_CB_CG_angle\n N_CA_CB_CG_diangle=geo.N_CA_CB_CG_diangle\n\n CG_CD_length=geo.CG_CD_length\n CB_CG_CD_angle=geo.CB_CG_CD_angle\n CA_CB_CG_CD_diangle=geo.CA_CB_CG_CD_diangle\n\n 
CD_CE_length=geo.CD_CE_length\n CG_CD_CE_angle=geo.CG_CD_CE_angle\n CB_CG_CD_CE_diangle=geo.CB_CG_CD_CE_diangle\n\n CE_NZ_length=geo.CE_NZ_length\n CD_CE_NZ_angle=geo.CD_CE_NZ_angle\n CG_CD_CE_NZ_diangle=geo.CG_CD_CE_NZ_diangle\n \n carbon_b= calculateCoordinates(N, C, CA, CA_CB_length, C_CA_CB_angle, N_C_CA_CB_diangle)\n CB= Atom(\"CB\", carbon_b, 0.0 , 1.0, \" \",\" CB\", 0,\"C\")\n carbon_g= calculateCoordinates(N, CA, CB, CB_CG_length, CA_CB_CG_angle, N_CA_CB_CG_diangle)\n CG= Atom(\"CG\", carbon_g, 0.0, 1.0, \" \", \" CG\", 0, \"C\")\n carbon_d= calculateCoordinates(CA, CB, CG, CG_CD_length, CB_CG_CD_angle, CA_CB_CG_CD_diangle)\n CD= Atom(\"CD\", carbon_d, 0.0, 1.0, \" \", \" CD\", 0, \"C\")\n carbon_e= calculateCoordinates(CB, CG, CD, CD_CE_length, CG_CD_CE_angle, CB_CG_CD_CE_diangle)\n CE= Atom(\"CE\", carbon_e, 0.0, 1.0, \" \", \" CE\", 0, \"C\")\n nitrogen_z= calculateCoordinates(CG, CD, CE, CE_NZ_length, CD_CE_NZ_angle, CG_CD_CE_NZ_diangle)\n NZ= Atom(\"NZ\", nitrogen_z, 0.0, 1.0, \" \", \" NZ\", 0, \"N\")\n\n ##Create Residue Data Structure\n res= Residue((' ', segID, ' '), \"LYS\", ' ')\n res.add(N)\n res.add(CA)\n res.add(C)\n res.add(O)\n res.add(CB)\n res.add(CG)\n res.add(CD)\n res.add(CE)\n res.add(NZ)\n return res"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef makeAsp(segID, N, CA, C, O, geo):\n '''Creates an Aspartic Acid residue'''\n ##R-Group\n CA_CB_length=geo.CA_CB_length\n C_CA_CB_angle=geo.C_CA_CB_angle\n N_C_CA_CB_diangle=geo.N_C_CA_CB_diangle\n\n CB_CG_length=geo.CB_CG_length\n CA_CB_CG_angle=geo.CA_CB_CG_angle\n N_CA_CB_CG_diangle=geo.N_CA_CB_CG_diangle\n\n CG_OD1_length=geo.CG_OD1_length\n CB_CG_OD1_angle=geo.CB_CG_OD1_angle\n CA_CB_CG_OD1_diangle=geo.CA_CB_CG_OD1_diangle\n\n CG_OD2_length=geo.CG_OD2_length\n CB_CG_OD2_angle=geo.CB_CG_OD2_angle\n CA_CB_CG_OD2_diangle=geo.CA_CB_CG_OD2_diangle\n\n carbon_b= calculateCoordinates(N, C, CA, CA_CB_length, C_CA_CB_angle, 
N_C_CA_CB_diangle)\n CB= Atom(\"CB\", carbon_b, 0.0 , 1.0, \" \",\" CB\", 0,\"C\")\n carbon_g= calculateCoordinates(N, CA, CB, CB_CG_length, CA_CB_CG_angle, N_CA_CB_CG_diangle)\n CG= Atom(\"CG\", carbon_g, 0.0, 1.0, \" \", \" CG\", 0, \"C\")\n oxygen_d1= calculateCoordinates(CA, CB, CG, CG_OD1_length, CB_CG_OD1_angle, CA_CB_CG_OD1_diangle)\n OD1= Atom(\"OD1\", oxygen_d1, 0.0, 1.0, \" \", \" OD1\", 0, \"O\")\n oxygen_d2= calculateCoordinates(CA, CB, CG, CG_OD2_length, CB_CG_OD2_angle, CA_CB_CG_OD2_diangle)\n OD2= Atom(\"OD2\", oxygen_d2, 0.0, 1.0, \" \", \" OD2\", 0, \"O\")\n\n ##Create Residue Data Structure\n res= Residue((' ', segID, ' '), \"ASP\", ' ')\n res.add(N)\n res.add(CA)\n res.add(C)\n res.add(O)\n res.add(CB)\n res.add(CG) \n res.add(OD1)\n res.add(OD2)\n return res", "response": "Creates an Aspartic Acid residue"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef makeAsn(segID,N, CA, C, O, geo):\n '''Creates an Asparagine residue'''\n ##R-Group\n CA_CB_length=geo.CA_CB_length\n C_CA_CB_angle=geo.C_CA_CB_angle\n N_C_CA_CB_diangle=geo.N_C_CA_CB_diangle\n \n CB_CG_length=geo.CB_CG_length\n CA_CB_CG_angle=geo.CA_CB_CG_angle\n N_CA_CB_CG_diangle=geo.N_CA_CB_CG_diangle\n \n CG_OD1_length=geo.CG_OD1_length\n CB_CG_OD1_angle=geo.CB_CG_OD1_angle\n CA_CB_CG_OD1_diangle=geo.CA_CB_CG_OD1_diangle\n \n CG_ND2_length=geo.CG_ND2_length\n CB_CG_ND2_angle=geo.CB_CG_ND2_angle\n CA_CB_CG_ND2_diangle=geo.CA_CB_CG_ND2_diangle\n\n carbon_b= calculateCoordinates(N, C, CA, CA_CB_length, C_CA_CB_angle, N_C_CA_CB_diangle)\n CB= Atom(\"CB\", carbon_b, 0.0 , 1.0, \" \",\" CB\", 0,\"C\")\n carbon_g= calculateCoordinates(N, CA, CB, CB_CG_length, CA_CB_CG_angle, N_CA_CB_CG_diangle)\n CG= Atom(\"CG\", carbon_g, 0.0, 1.0, \" \", \" CG\", 0, \"C\")\n oxygen_d1= calculateCoordinates(CA, CB, CG, CG_OD1_length, CB_CG_OD1_angle, CA_CB_CG_OD1_diangle)\n OD1= Atom(\"OD1\", oxygen_d1, 0.0, 1.0, \" \", \" OD1\", 0, \"O\")\n nitrogen_d2= 
calculateCoordinates(CA, CB, CG, CG_ND2_length, CB_CG_ND2_angle, CA_CB_CG_ND2_diangle)\n ND2= Atom(\"ND2\", nitrogen_d2, 0.0, 1.0, \" \", \" ND2\", 0, \"N\")\n res= Residue((' ', segID, ' '), \"ASN\", ' ')\n\n ##Create Residue Data Structure\n res.add(N)\n res.add(CA)\n res.add(C)\n res.add(O)\n res.add(CB)\n res.add(CG) \n res.add(OD1)\n res.add(ND2)\n return res", "response": "Creates an Asparagine residue"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncreates a Glutamic Acid residue", "response": "def makeGlu(segID, N, CA, C, O, geo):\n '''Creates a Glutamic Acid residue'''\n ##R-Group\n CA_CB_length=geo.CA_CB_length\n C_CA_CB_angle = geo.C_CA_CB_angle\n N_C_CA_CB_diangle=geo.N_C_CA_CB_diangle\n \n CB_CG_length=geo.CB_CG_length\n CA_CB_CG_angle=geo.CA_CB_CG_angle\n N_CA_CB_CG_diangle=geo.N_CA_CB_CG_diangle\n\n CG_CD_length=geo.CG_CD_length\n CB_CG_CD_angle=geo.CB_CG_CD_angle\n CA_CB_CG_CD_diangle=geo.CA_CB_CG_CD_diangle\n\n CD_OE1_length=geo.CD_OE1_length\n CG_CD_OE1_angle=geo.CG_CD_OE1_angle\n CB_CG_CD_OE1_diangle=geo.CB_CG_CD_OE1_diangle\n\n CD_OE2_length=geo.CD_OE2_length\n CG_CD_OE2_angle=geo.CG_CD_OE2_angle\n CB_CG_CD_OE2_diangle=geo.CB_CG_CD_OE2_diangle\n\n carbon_b= calculateCoordinates(N, C, CA, CA_CB_length, C_CA_CB_angle, N_C_CA_CB_diangle)\n CB= Atom(\"CB\", carbon_b, 0.0 , 1.0, \" \",\" CB\", 0,\"C\")\n carbon_g= calculateCoordinates(N, CA, CB, CB_CG_length, CA_CB_CG_angle, N_CA_CB_CG_diangle)\n CG= Atom(\"CG\", carbon_g, 0.0, 1.0, \" \", \" CG\", 0, \"C\")\n carbon_d= calculateCoordinates(CA, CB, CG, CG_CD_length, CB_CG_CD_angle, CA_CB_CG_CD_diangle)\n CD= Atom(\"CD\", carbon_d, 0.0, 1.0, \" \", \" CD\", 0, \"C\")\n oxygen_e1= calculateCoordinates(CB, CG, CD, CD_OE1_length, CG_CD_OE1_angle, CB_CG_CD_OE1_diangle)\n OE1= Atom(\"OE1\", oxygen_e1, 0.0, 1.0, \" \", \" OE1\", 0, \"O\")\n oxygen_e2= calculateCoordinates(CB, CG, CD, CD_OE2_length, CG_CD_OE2_angle, CB_CG_CD_OE2_diangle)\n OE2= Atom(\"OE2\", 
oxygen_e2, 0.0, 1.0, \" \", \" OE2\", 0, \"O\")\n\n ##Create Residue Data Structure\n res= Residue((' ', segID, ' '), \"GLU\", ' ')\n \n res.add(N)\n res.add(CA)\n res.add(C)\n res.add(O)\n res.add(CB)\n res.add(CG)\n res.add(CD)\n res.add(OE1)\n res.add(OE2)\n return res"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef makeGln(segID, N, CA, C, O, geo):\n '''Creates a Glutamine residue'''\n ##R-Group\n CA_CB_length=geo.CA_CB_length\n C_CA_CB_angle=geo.C_CA_CB_angle\n N_C_CA_CB_diangle=geo.N_C_CA_CB_diangle\n \n CB_CG_length=geo.CB_CG_length\n CA_CB_CG_angle=geo.CA_CB_CG_angle\n N_CA_CB_CG_diangle=geo.N_CA_CB_CG_diangle\n\n CG_CD_length=geo.CG_CD_length\n CB_CG_CD_angle=geo.CB_CG_CD_angle\n CA_CB_CG_CD_diangle=geo.CA_CB_CG_CD_diangle\n \n CD_OE1_length=geo.CD_OE1_length\n CG_CD_OE1_angle=geo.CG_CD_OE1_angle\n CB_CG_CD_OE1_diangle=geo.CB_CG_CD_OE1_diangle\n \n CD_NE2_length=geo.CD_NE2_length\n CG_CD_NE2_angle=geo.CG_CD_NE2_angle\n CB_CG_CD_NE2_diangle=geo.CB_CG_CD_NE2_diangle\n\n carbon_b= calculateCoordinates(N, C, CA, CA_CB_length, C_CA_CB_angle, N_C_CA_CB_diangle)\n CB= Atom(\"CB\", carbon_b, 0.0 , 1.0, \" \",\" CB\", 0,\"C\")\n carbon_g= calculateCoordinates(N, CA, CB, CB_CG_length, CA_CB_CG_angle, N_CA_CB_CG_diangle)\n CG= Atom(\"CG\", carbon_g, 0.0, 1.0, \" \", \" CG\", 0, \"C\")\n carbon_d= calculateCoordinates(CA, CB, CG, CG_CD_length, CB_CG_CD_angle, CA_CB_CG_CD_diangle)\n CD= Atom(\"CD\", carbon_d, 0.0, 1.0, \" \", \" CD\", 0, \"C\")\n oxygen_e1= calculateCoordinates(CB, CG, CD, CD_OE1_length, CG_CD_OE1_angle, CB_CG_CD_OE1_diangle)\n OE1= Atom(\"OE1\", oxygen_e1, 0.0, 1.0, \" \", \" OE1\", 0, \"O\")\n nitrogen_e2= calculateCoordinates(CB, CG, CD, CD_NE2_length, CG_CD_NE2_angle, CB_CG_CD_NE2_diangle)\n NE2= Atom(\"NE2\", nitrogen_e2, 0.0, 1.0, \" \", \" NE2\", 0, \"N\")\n\n\n ##Create Residue DS\n res= Residue((' ', segID, ' '), \"GLN\", ' ')\n \n res.add(N)\n res.add(CA)\n res.add(C)\n 
res.add(O)\n res.add(CB)\n res.add(CG)\n res.add(CD)\n res.add(OE1)\n res.add(NE2)\n return res", "response": "Creates a Glutamine residue"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef makeMet(segID, N, CA, C, O, geo):\n '''Creates a Methionine residue'''\n ##R-Group\n CA_CB_length=geo.CA_CB_length\n C_CA_CB_angle=geo.C_CA_CB_angle\n N_C_CA_CB_diangle=geo.N_C_CA_CB_diangle\n\n CB_CG_length=geo.CB_CG_length\n CA_CB_CG_angle=geo.CA_CB_CG_angle\n N_CA_CB_CG_diangle=geo.N_CA_CB_CG_diangle\n \n CG_SD_length=geo.CG_SD_length\n CB_CG_SD_angle=geo.CB_CG_SD_angle\n CA_CB_CG_SD_diangle=geo.CA_CB_CG_SD_diangle\n \n SD_CE_length=geo.SD_CE_length\n CG_SD_CE_angle=geo.CG_SD_CE_angle\n CB_CG_SD_CE_diangle=geo.CB_CG_SD_CE_diangle\n \n carbon_b= calculateCoordinates(N, C, CA, CA_CB_length, C_CA_CB_angle, N_C_CA_CB_diangle)\n CB= Atom(\"CB\", carbon_b, 0.0 , 1.0, \" \",\" CB\", 0,\"C\")\n carbon_g= calculateCoordinates(N, CA, CB, CB_CG_length, CA_CB_CG_angle, N_CA_CB_CG_diangle)\n CG= Atom(\"CG\", carbon_g, 0.0, 1.0, \" \", \" CG\", 0, \"C\")\n sulfur_d= calculateCoordinates(CA, CB, CG, CG_SD_length, CB_CG_SD_angle, CA_CB_CG_SD_diangle)\n SD= Atom(\"SD\", sulfur_d, 0.0, 1.0, \" \", \" SD\", 0, \"S\")\n carbon_e= calculateCoordinates(CB, CG, SD, SD_CE_length, CG_SD_CE_angle, CB_CG_SD_CE_diangle)\n CE= Atom(\"CE\", carbon_e, 0.0, 1.0, \" \", \" CE\", 0, \"C\")\n\n ##Create Residue Data Structure\n res= Residue((' ', segID, ' '), \"MET\", ' ')\n res.add(N)\n res.add(CA)\n res.add(C)\n res.add(O)\n res.add(CB)\n res.add(CG)\n res.add(SD)\n res.add(CE)\n return res", "response": "Creates a Methionine residue"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef makeHis(segID, N, CA, C, O, geo):\n '''Creates a Histidine residue'''\n ##R-Group\n CA_CB_length=geo.CA_CB_length\n C_CA_CB_angle=geo.C_CA_CB_angle\n N_C_CA_CB_diangle=geo.N_C_CA_CB_diangle\n\n 
CB_CG_length=geo.CB_CG_length\n CA_CB_CG_angle=geo.CA_CB_CG_angle\n N_CA_CB_CG_diangle=geo.N_CA_CB_CG_diangle\n \n CG_ND1_length=geo.CG_ND1_length\n CB_CG_ND1_angle=geo.CB_CG_ND1_angle\n CA_CB_CG_ND1_diangle=geo.CA_CB_CG_ND1_diangle\n \n CG_CD2_length=geo.CG_CD2_length\n CB_CG_CD2_angle=geo.CB_CG_CD2_angle\n CA_CB_CG_CD2_diangle=geo.CA_CB_CG_CD2_diangle\n \n ND1_CE1_length=geo.ND1_CE1_length\n CG_ND1_CE1_angle=geo.CG_ND1_CE1_angle\n CB_CG_ND1_CE1_diangle=geo.CB_CG_ND1_CE1_diangle\n \n CD2_NE2_length=geo.CD2_NE2_length\n CG_CD2_NE2_angle=geo.CG_CD2_NE2_angle\n CB_CG_CD2_NE2_diangle=geo.CB_CG_CD2_NE2_diangle\n \n carbon_b= calculateCoordinates(N, C, CA, CA_CB_length, C_CA_CB_angle, N_C_CA_CB_diangle)\n CB= Atom(\"CB\", carbon_b, 0.0 , 1.0, \" \",\" CB\", 0,\"C\")\n carbon_g= calculateCoordinates(N, CA, CB, CB_CG_length, CA_CB_CG_angle, N_CA_CB_CG_diangle)\n CG= Atom(\"CG\", carbon_g, 0.0, 1.0, \" \", \" CG\", 0, \"C\")\n nitrogen_d1= calculateCoordinates(CA, CB, CG, CG_ND1_length, CB_CG_ND1_angle, CA_CB_CG_ND1_diangle)\n ND1= Atom(\"ND1\", nitrogen_d1, 0.0, 1.0, \" \", \" ND1\", 0, \"N\")\n carbon_d2= calculateCoordinates(CA, CB, CG, CG_CD2_length, CB_CG_CD2_angle, CA_CB_CG_CD2_diangle)\n CD2= Atom(\"CD2\", carbon_d2, 0.0, 1.0, \" \", \" CD2\", 0, \"C\")\n carbon_e1= calculateCoordinates(CB, CG, ND1, ND1_CE1_length, CG_ND1_CE1_angle, CB_CG_ND1_CE1_diangle)\n CE1= Atom(\"CE1\", carbon_e1, 0.0, 1.0, \" \", \" CE1\", 0, \"C\")\n nitrogen_e2= calculateCoordinates(CB, CG, CD2, CD2_NE2_length, CG_CD2_NE2_angle, CB_CG_CD2_NE2_diangle)\n NE2= Atom(\"NE2\", nitrogen_e2, 0.0, 1.0, \" \", \" NE2\", 0, \"N\")\n\n ##Create Residue Data Structure\n res= Residue((' ', segID, ' '), \"HIS\", ' ')\n res.add(N)\n res.add(CA)\n res.add(C)\n res.add(O)\n res.add(CB)\n res.add(CG)\n res.add(ND1)\n res.add(CD2)\n res.add(CE1)\n res.add(NE2)\n return res", "response": "Creates a Histidine residue."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the 
following Python 3 code\ndef makePro(segID, N, CA, C, O, geo):\n '''Creates a Proline residue'''\n ##R-Group\n CA_CB_length=geo.CA_CB_length\n C_CA_CB_angle=geo.C_CA_CB_angle\n N_C_CA_CB_diangle=geo.N_C_CA_CB_diangle\n \n CB_CG_length=geo.CB_CG_length\n CA_CB_CG_angle=geo.CA_CB_CG_angle\n N_CA_CB_CG_diangle=geo.N_CA_CB_CG_diangle\n \n CG_CD_length=geo.CG_CD_length\n CB_CG_CD_angle=geo.CB_CG_CD_angle\n CA_CB_CG_CD_diangle=geo.CA_CB_CG_CD_diangle\n \n carbon_b= calculateCoordinates(N, C, CA, CA_CB_length, C_CA_CB_angle, N_C_CA_CB_diangle)\n CB= Atom(\"CB\", carbon_b, 0.0 , 1.0, \" \",\" CB\", 0,\"C\")\n carbon_g= calculateCoordinates(N, CA, CB, CB_CG_length, CA_CB_CG_angle, N_CA_CB_CG_diangle)\n CG= Atom(\"CG\", carbon_g, 0.0, 1.0, \" \", \" CG\", 0, \"C\")\n carbon_d= calculateCoordinates(CA, CB, CG, CG_CD_length, CB_CG_CD_angle, CA_CB_CG_CD_diangle)\n CD= Atom(\"CD\", carbon_d, 0.0, 1.0, \" \", \" CD\", 0, \"C\")\n\n ##Create Residue Data Structure\n res= Residue((' ', segID, ' '), \"PRO\", ' ')\n \n res.add(N)\n res.add(CA)\n res.add(C)\n res.add(O)\n res.add(CB)\n res.add(CG)\n res.add(CD)\n\n return res", "response": "Creates a Proline residue"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncreate a Phenylalanine residue.", "response": "def makePhe(segID, N, CA, C, O, geo):\n '''Creates a Phenylalanine residue'''\n ##R-Group\n CA_CB_length=geo.CA_CB_length\n C_CA_CB_angle=geo.C_CA_CB_angle\n N_C_CA_CB_diangle=geo.N_C_CA_CB_diangle\n \n CB_CG_length=geo.CB_CG_length\n CA_CB_CG_angle=geo.CA_CB_CG_angle\n N_CA_CB_CG_diangle=geo.N_CA_CB_CG_diangle\n\n CG_CD1_length=geo.CG_CD1_length\n CB_CG_CD1_angle=geo.CB_CG_CD1_angle\n CA_CB_CG_CD1_diangle=geo.CA_CB_CG_CD1_diangle\n\n CG_CD2_length=geo.CG_CD2_length\n CB_CG_CD2_angle=geo.CB_CG_CD2_angle\n CA_CB_CG_CD2_diangle= geo.CA_CB_CG_CD2_diangle\n \n CD1_CE1_length=geo.CD1_CE1_length\n CG_CD1_CE1_angle=geo.CG_CD1_CE1_angle\n CB_CG_CD1_CE1_diangle=geo.CB_CG_CD1_CE1_diangle\n\n 
CD2_CE2_length=geo.CD2_CE2_length\n CG_CD2_CE2_angle=geo.CG_CD2_CE2_angle\n CB_CG_CD2_CE2_diangle=geo.CB_CG_CD2_CE2_diangle\n\n CE1_CZ_length=geo.CE1_CZ_length\n CD1_CE1_CZ_angle=geo.CD1_CE1_CZ_angle\n CG_CD1_CE1_CZ_diangle=geo.CG_CD1_CE1_CZ_diangle\n\n carbon_b= calculateCoordinates(N, C, CA, CA_CB_length, C_CA_CB_angle, N_C_CA_CB_diangle)\n CB= Atom(\"CB\", carbon_b, 0.0 , 1.0, \" \",\" CB\", 0,\"C\")\n carbon_g= calculateCoordinates(N, CA, CB, CB_CG_length, CA_CB_CG_angle, N_CA_CB_CG_diangle)\n CG= Atom(\"CG\", carbon_g, 0.0, 1.0, \" \", \" CG\", 0, \"C\")\n carbon_d1= calculateCoordinates(CA, CB, CG, CG_CD1_length, CB_CG_CD1_angle, CA_CB_CG_CD1_diangle)\n CD1= Atom(\"CD1\", carbon_d1, 0.0, 1.0, \" \", \" CD1\", 0, \"C\")\n carbon_d2= calculateCoordinates(CA, CB, CG, CG_CD2_length, CB_CG_CD2_angle, CA_CB_CG_CD2_diangle)\n CD2= Atom(\"CD2\", carbon_d2, 0.0, 1.0, \" \", \" CD2\", 0, \"C\")\n carbon_e1= calculateCoordinates(CB, CG, CD1, CD1_CE1_length, CG_CD1_CE1_angle, CB_CG_CD1_CE1_diangle)\n CE1= Atom(\"CE1\", carbon_e1, 0.0, 1.0, \" \", \" CE1\", 0, \"C\")\n carbon_e2= calculateCoordinates(CB, CG, CD2, CD2_CE2_length, CG_CD2_CE2_angle, CB_CG_CD2_CE2_diangle)\n CE2= Atom(\"CE2\", carbon_e2, 0.0, 1.0, \" \", \" CE2\", 0, \"C\")\n carbon_z= calculateCoordinates(CG, CD1, CE1, CE1_CZ_length, CD1_CE1_CZ_angle, CG_CD1_CE1_CZ_diangle)\n CZ= Atom(\"CZ\", carbon_z, 0.0, 1.0, \" \", \" CZ\", 0, \"C\")\n\n ##Create Residue Data Structures\n res= Residue((' ', segID, ' '), \"PHE\", ' ')\n res.add(N)\n res.add(CA)\n res.add(C)\n res.add(O)\n res.add(CB)\n res.add(CG)\n res.add(CD1)\n res.add(CE1)\n res.add(CD2)\n res.add(CE2)\n res.add(CZ)\n return res"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef makeTrp(segID, N, CA, C, O, geo):\n '''Creates a Tryptophan residue'''\n ##R-Group\n CA_CB_length=geo.CA_CB_length\n C_CA_CB_angle=geo.C_CA_CB_angle\n N_C_CA_CB_diangle=geo.N_C_CA_CB_diangle\n\n 
CB_CG_length=geo.CB_CG_length\n CA_CB_CG_angle=geo.CA_CB_CG_angle\n N_CA_CB_CG_diangle=geo.N_CA_CB_CG_diangle\n\n CG_CD1_length=geo.CG_CD1_length\n CB_CG_CD1_angle=geo.CB_CG_CD1_angle\n CA_CB_CG_CD1_diangle=geo.CA_CB_CG_CD1_diangle\n\n CG_CD2_length=geo.CG_CD2_length\n CB_CG_CD2_angle=geo.CB_CG_CD2_angle\n CA_CB_CG_CD2_diangle=geo.CA_CB_CG_CD2_diangle\n \n CD1_NE1_length=geo.CD1_NE1_length\n CG_CD1_NE1_angle=geo.CG_CD1_NE1_angle\n CB_CG_CD1_NE1_diangle=geo.CB_CG_CD1_NE1_diangle\n\n CD2_CE2_length=geo.CD2_CE2_length\n CG_CD2_CE2_angle=geo.CG_CD2_CE2_angle\n CB_CG_CD2_CE2_diangle=geo.CB_CG_CD2_CE2_diangle\n\n CD2_CE3_length=geo.CD2_CE3_length\n CG_CD2_CE3_angle=geo.CG_CD2_CE3_angle\n CB_CG_CD2_CE3_diangle=geo.CB_CG_CD2_CE3_diangle\n\n CE2_CZ2_length=geo.CE2_CZ2_length\n CD2_CE2_CZ2_angle=geo.CD2_CE2_CZ2_angle\n CG_CD2_CE2_CZ2_diangle=geo.CG_CD2_CE2_CZ2_diangle\n\n CE3_CZ3_length=geo.CE3_CZ3_length\n CD2_CE3_CZ3_angle=geo.CD2_CE3_CZ3_angle\n CG_CD2_CE3_CZ3_diangle=geo.CG_CD2_CE3_CZ3_diangle\n\n CZ2_CH2_length=geo.CZ2_CH2_length\n CE2_CZ2_CH2_angle=geo.CE2_CZ2_CH2_angle\n CD2_CE2_CZ2_CH2_diangle=geo.CD2_CE2_CZ2_CH2_diangle\n\n carbon_b= calculateCoordinates(N, C, CA, CA_CB_length, C_CA_CB_angle, N_C_CA_CB_diangle)\n CB= Atom(\"CB\", carbon_b, 0.0 , 1.0, \" \",\" CB\", 0,\"C\")\n carbon_g= calculateCoordinates(N, CA, CB, CB_CG_length, CA_CB_CG_angle, N_CA_CB_CG_diangle)\n CG= Atom(\"CG\", carbon_g, 0.0, 1.0, \" \", \" CG\", 0, \"C\")\n carbon_d1= calculateCoordinates(CA, CB, CG, CG_CD1_length, CB_CG_CD1_angle, CA_CB_CG_CD1_diangle)\n CD1= Atom(\"CD1\", carbon_d1, 0.0, 1.0, \" \", \" CD1\", 0, \"C\")\n carbon_d2= calculateCoordinates(CA, CB, CG, CG_CD2_length, CB_CG_CD2_angle, CA_CB_CG_CD2_diangle)\n CD2= Atom(\"CD2\", carbon_d2, 0.0, 1.0, \" \", \" CD2\", 0, \"C\")\n nitrogen_e1= calculateCoordinates(CB, CG, CD1, CD1_NE1_length, CG_CD1_NE1_angle, CB_CG_CD1_NE1_diangle)\n NE1= Atom(\"NE1\", nitrogen_e1, 0.0, 1.0, \" \", \" NE1\", 0, \"N\")\n carbon_e2= 
calculateCoordinates(CB, CG, CD2, CD2_CE2_length, CG_CD2_CE2_angle, CB_CG_CD2_CE2_diangle)\n CE2= Atom(\"CE2\", carbon_e2, 0.0, 1.0, \" \", \" CE2\", 0, \"C\")\n carbon_e3= calculateCoordinates(CB, CG, CD2, CD2_CE3_length, CG_CD2_CE3_angle, CB_CG_CD2_CE3_diangle)\n CE3= Atom(\"CE3\", carbon_e3, 0.0, 1.0, \" \", \" CE3\", 0, \"C\")\n\n carbon_z2= calculateCoordinates(CG, CD2, CE2, CE2_CZ2_length, CD2_CE2_CZ2_angle, CG_CD2_CE2_CZ2_diangle)\n CZ2= Atom(\"CZ2\", carbon_z2, 0.0, 1.0, \" \", \" CZ2\", 0, \"C\")\n\n carbon_z3= calculateCoordinates(CG, CD2, CE3, CE3_CZ3_length, CD2_CE3_CZ3_angle, CG_CD2_CE3_CZ3_diangle)\n CZ3= Atom(\"CZ3\", carbon_z3, 0.0, 1.0, \" \", \" CZ3\", 0, \"C\")\n\n carbon_h2= calculateCoordinates(CD2, CE2, CZ2, CZ2_CH2_length, CE2_CZ2_CH2_angle, CD2_CE2_CZ2_CH2_diangle)\n CH2= Atom(\"CH2\", carbon_h2, 0.0, 1.0, \" \", \" CH2\", 0, \"C\")\n \n ##Create Residue DS\n res= Residue((' ', segID, ' '), \"TRP\", ' ')\n res.add(N)\n res.add(CA)\n res.add(C)\n res.add(O)\n res.add(CB)\n res.add(CG)\n res.add(CD1)\n res.add(CD2)\n\n res.add(NE1)\n res.add(CE2)\n res.add(CE3)\n\n res.add(CZ2)\n res.add(CZ3)\n\n res.add(CH2)\n return res", "response": "Creates a Tryptophan residue."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncreating a new structure containing a single amino acid.", "response": "def initialize_res(residue):\n '''Creates a new structure containing a single amino acid. 
The type and\n geometry of the amino acid are determined by the argument, which has to be\n either a geometry object or a single-letter amino acid code.\n The amino acid will be placed into chain A of model 0.'''\n \n if isinstance( residue, Geo ):\n geo = residue\n else:\n geo=geometry(residue) \n \n segID=1\n AA= geo.residue_name\n CA_N_length=geo.CA_N_length\n CA_C_length=geo.CA_C_length\n N_CA_C_angle=geo.N_CA_C_angle\n \n CA_coord= numpy.array([0.,0.,0.])\n C_coord= numpy.array([CA_C_length,0,0])\n N_coord = numpy.array([CA_N_length*math.cos(N_CA_C_angle*(math.pi/180.0)),CA_N_length*math.sin(N_CA_C_angle*(math.pi/180.0)),0])\n\n N= Atom(\"N\", N_coord, 0.0 , 1.0, \" \",\" N\", 0, \"N\")\n CA=Atom(\"CA\", CA_coord, 0.0 , 1.0, \" \",\" CA\", 0,\"C\")\n C= Atom(\"C\", C_coord, 0.0, 1.0, \" \", \" C\",0,\"C\")\n\n ##Create Carbonyl atom (to be moved later)\n C_O_length=geo.C_O_length\n CA_C_O_angle=geo.CA_C_O_angle\n N_CA_C_O_diangle=geo.N_CA_C_O_diangle\n \n carbonyl=calculateCoordinates(N, CA, C, C_O_length, CA_C_O_angle, N_CA_C_O_diangle)\n O= Atom(\"O\",carbonyl , 0.0 , 1.0, \" \",\" O\", 0, \"O\")\n\n if(AA=='G'):\n res=makeGly(segID, N, CA, C, O, geo)\n elif(AA=='A'):\n res=makeAla(segID, N, CA, C, O, geo)\n elif(AA=='S'):\n res=makeSer(segID, N, CA, C, O, geo)\n elif(AA=='C'):\n res=makeCys(segID, N, CA, C, O, geo)\n elif(AA=='V'):\n res=makeVal(segID, N, CA, C, O, geo)\n elif(AA=='I'):\n res=makeIle(segID, N, CA, C, O, geo)\n elif(AA=='L'):\n res=makeLeu(segID, N, CA, C, O, geo)\n elif(AA=='T'):\n res=makeThr(segID, N, CA, C, O, geo)\n elif(AA=='R'):\n res=makeArg(segID, N, CA, C, O, geo)\n elif(AA=='K'):\n res=makeLys(segID, N, CA, C, O, geo)\n elif(AA=='D'):\n res=makeAsp(segID, N, CA, C, O, geo)\n elif(AA=='E'):\n res=makeGlu(segID, N, CA, C, O, geo)\n elif(AA=='N'):\n res=makeAsn(segID, N, CA, C, O, geo)\n elif(AA=='Q'):\n res=makeGln(segID, N, CA, C, O, geo)\n elif(AA=='M'):\n res=makeMet(segID, N, CA, C, O, geo)\n elif(AA=='H'):\n res=makeHis(segID, 
N, CA, C, O, geo)\n elif(AA=='P'):\n res=makePro(segID, N, CA, C, O, geo)\n elif(AA=='F'):\n res=makePhe(segID, N, CA, C, O, geo)\n elif(AA=='Y'):\n res=makeTyr(segID, N, CA, C, O, geo)\n elif(AA=='W'):\n res=makeTrp(segID, N, CA, C, O, geo)\n else:\n res=makeGly(segID, N, CA, C, O, geo)\n\n cha= Chain('A')\n cha.add(res)\n \n mod= Model(0)\n mod.add(cha)\n\n struc= Structure('X')\n struc.add(mod)\n return struc"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nadds a residue to chain A model 0 of the given structure and returns the new structure.", "response": "def add_residue_from_geo(structure, geo):\n '''Adds a residue to chain A model 0 of the given structure, and\n returns the new structure. The residue to be added is determined by\n the geometry object given as second argument.\n \n This function is a helper function and should not normally be called\n directly. Call add_residue() instead.'''\n resRef= getReferenceResidue(structure)\n AA=geo.residue_name\n segID= resRef.get_id()[1]\n segID+=1\n\n ##geometry to bring together residue\n peptide_bond=geo.peptide_bond\n CA_C_N_angle=geo.CA_C_N_angle\n C_N_CA_angle=geo.C_N_CA_angle\n\n ##Backbone Coordinates\n N_CA_C_angle=geo.N_CA_C_angle\n CA_N_length=geo.CA_N_length\n CA_C_length=geo.CA_C_length\n phi= geo.phi\n psi_im1=geo.psi_im1\n omega=geo.omega\n\n N_coord=calculateCoordinates(resRef['N'], resRef['CA'], resRef['C'], peptide_bond, CA_C_N_angle, psi_im1)\n N= Atom(\"N\", N_coord, 0.0 , 1.0, \" \",\" N\", 0, \"N\")\n\n CA_coord=calculateCoordinates(resRef['CA'], resRef['C'], N, CA_N_length, C_N_CA_angle, omega)\n CA=Atom(\"CA\", CA_coord, 0.0 , 1.0, \" \",\" CA\", 0,\"C\")\n\n C_coord=calculateCoordinates(resRef['C'], N, CA, CA_C_length, N_CA_C_angle, phi)\n C= Atom(\"C\", C_coord, 0.0, 1.0, \" \", \" C\",0,\"C\")\n\n ##Create Carbonyl atom (to be moved later)\n C_O_length=geo.C_O_length\n CA_C_O_angle=geo.CA_C_O_angle\n 
N_CA_C_O_diangle=geo.N_CA_C_O_diangle\n\n carbonyl=calculateCoordinates(N, CA, C, C_O_length, CA_C_O_angle, N_CA_C_O_diangle)\n O= Atom(\"O\",carbonyl , 0.0 , 1.0, \" \",\" O\", 0, \"O\")\n \n if(AA=='G'):\n res=makeGly(segID, N, CA, C, O, geo)\n elif(AA=='A'):\n res=makeAla(segID, N, CA, C, O, geo)\n elif(AA=='S'):\n res=makeSer(segID, N, CA, C, O, geo)\n elif(AA=='C'):\n res=makeCys(segID, N, CA, C, O, geo)\n elif(AA=='V'):\n res=makeVal(segID, N, CA, C, O, geo)\n elif(AA=='I'):\n res=makeIle(segID, N, CA, C, O, geo)\n elif(AA=='L'):\n res=makeLeu(segID, N, CA, C, O, geo)\n elif(AA=='T'):\n res=makeThr(segID, N, CA, C, O, geo)\n elif(AA=='R'):\n res=makeArg(segID, N, CA, C, O, geo)\n elif(AA=='K'):\n res=makeLys(segID, N, CA, C, O, geo)\n elif(AA=='D'):\n res=makeAsp(segID, N, CA, C, O, geo)\n elif(AA=='E'):\n res=makeGlu(segID, N, CA, C, O, geo)\n elif(AA=='N'):\n res=makeAsn(segID, N, CA, C, O, geo)\n elif(AA=='Q'):\n res=makeGln(segID, N, CA, C, O, geo)\n elif(AA=='M'):\n res=makeMet(segID, N, CA, C, O, geo)\n elif(AA=='H'):\n res=makeHis(segID, N, CA, C, O, geo)\n elif(AA=='P'):\n res=makePro(segID, N, CA, C, O, geo)\n elif(AA=='F'):\n res=makePhe(segID, N, CA, C, O, geo)\n elif(AA=='Y'):\n res=makeTyr(segID, N, CA, C, O, geo)\n elif(AA=='W'):\n res=makeTrp(segID, N, CA, C, O, geo)\n else:\n res=makeGly(segID, N, CA, C, O, geo)\n \n resRef['O'].set_coord(calculateCoordinates(res['N'], resRef['CA'], resRef['C'], C_O_length, CA_C_O_angle, 180.0))\n\n ghost= Atom(\"N\", calculateCoordinates(res['N'], res['CA'], res['C'], peptide_bond, CA_C_N_angle, psi_im1), 0.0 , 0.0, \" \",\"N\", 0, \"N\")\n res['O'].set_coord(calculateCoordinates( res['N'], res['CA'], res['C'], C_O_length, CA_C_O_angle, 180.0))\n\n structure[0]['A'].add(res)\n return structure"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef make_extended_structure(AA_chain):\n '''Place a sequence of amino acids into a peptide in the 
extended\n conformation. The argument AA_chain holds the sequence of amino\n acids to be used.'''\n geo = geometry(AA_chain[0])\n struc=initialize_res(geo)\n \n for i in range(1,len(AA_chain)): \n AA = AA_chain[i]\n geo = geometry(AA)\n add_residue(struc, geo)\n\n return struc", "response": "Place a sequence of amino acids into a peptide in the extended\n conformation. The argument AA_chain holds the amino acids to be used."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef add_residue(structure, residue, phi=-120, psi_im1=140, omega=-370):\n '''Adds a residue to chain A model 0 of the given structure, and\n returns the new structure. The residue to be added can be specified\n in two ways: either as a geometry object (in which case\n the remaining arguments phi, psi_im1, and omega are ignored) or as a\n single-letter amino-acid code. In the latter case, the optional\n arguments phi, psi_im1, and omega specify the corresponding backbone\n angles.\n \n When omega is specified, it needs to be a value greater than or equal\n to -360. Values below -360 are ignored.''' \n \n if isinstance( residue, Geo ):\n geo = residue\n else:\n geo=geometry(residue) \n geo.phi=phi\n geo.psi_im1=psi_im1\n if omega>-361:\n geo.omega=omega\n \n add_residue_from_geo(structure, geo)\n return structure", "response": "Adds a residue to the chain A model 0 of the given structure and returns the new structure."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef make_structure(AA_chain,phi,psi_im1,omega=[]):\n '''Place a sequence of amino acids into a peptide with specified\n backbone dihedral angles. The argument AA_chain holds the\n sequence of amino acids to be used. The arguments phi and psi_im1 hold\n lists of backbone angles, one for each amino acid, *starting from\n the second amino acid in the chain*. 
The argument \n omega (optional) holds a list of omega angles, also starting from\n the second amino acid in the chain.'''\n geo = geometry(AA_chain[0])\n struc=initialize_res(geo)\n\n if len(omega)==0:\n for i in range(1,len(AA_chain)): \n AA = AA_chain[i]\n add_residue(struc, AA, phi[i-1], psi_im1[i-1])\n else:\n for i in range(1,len(AA_chain)): \n AA = AA_chain[i]\n add_residue(struc, AA, phi[i-1], psi_im1[i-1], omega[i-1])\n\n return struc", "response": "Place a sequence of amino acids into a peptide with specified backbone dihedral angles."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncreates a structure out of a list of geometry objects.", "response": "def make_structure_from_geos(geos):\n '''Creates a structure out of a list of geometry objects.'''\n model_structure=initialize_res(geos[0])\n for i in range(1,len(geos)):\n model_structure=add_residue(model_structure, geos[i])\n\n return model_structure"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ngenerate the geometry of the requested amino acid.", "response": "def geometry(AA):\n '''Generates the geometry of the requested amino acid.\n The amino acid needs to be specified by its single-letter\n code. 
If an invalid code is specified, the function\n returns the geometry of Glycine.'''\n if(AA=='G'):\n return GlyGeo()\n elif(AA=='A'):\n return AlaGeo()\n elif(AA=='S'):\n return SerGeo()\n elif(AA=='C'):\n return CysGeo()\n elif(AA=='V'):\n return ValGeo()\n elif(AA=='I'):\n return IleGeo()\n elif(AA=='L'):\n return LeuGeo()\n elif(AA=='T'):\n return ThrGeo()\n elif(AA=='R'):\n return ArgGeo()\n elif(AA=='K'):\n return LysGeo()\n elif(AA=='D'):\n return AspGeo()\n elif(AA=='E'):\n return GluGeo()\n elif(AA=='N'):\n return AsnGeo()\n elif(AA=='Q'):\n return GlnGeo()\n elif(AA=='M'):\n return MetGeo()\n elif(AA=='H'):\n return HisGeo()\n elif(AA=='P'):\n return ProGeo()\n elif(AA=='F'):\n return PheGeo()\n elif(AA=='Y'):\n return TyrGeo()\n elif(AA=='W'):\n return TrpGeo()\n else:\n return GlyGeo()"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef enregister(svc, newAddress, password):\n return svc.connectQ2Q(q2q.Q2QAddress(\"\",\"\"),\n q2q.Q2QAddress(newAddress.domain, \"accounts\"),\n 'identity-admin',\n protocol.ClientFactory.forProtocol(AMP)\n ).addCallback(\n AMP.callRemote,\n AddUser,\n name=newAddress.resource,\n password=password\n ).addErrback(\n Failure.trap,\n error.ConnectionDone\n )", "response": "Register a new account and return a Deferred that fires if it worked."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nconnect the given endpoint to the given protocolFactory.", "response": "def connectCached(self, endpoint, protocolFactory,\n extraWork=lambda x: x,\n extraHash=None):\n \"\"\"\n See module docstring\n\n @param endpoint:\n @param protocolFactory:\n @param extraWork:\n @param extraHash:\n\n @return: the D\n \"\"\"\n key = endpoint, extraHash\n D = Deferred()\n if key in self.cachedConnections:\n D.callback(self.cachedConnections[key])\n elif key in self.inProgress:\n self.inProgress[key].append(D)\n else:\n self.inProgress[key] = [D]\n 
endpoint.connect(\n            _CachingClientFactory(\n                self, key, protocolFactory,\n                extraWork))\n        return D"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nremove lost connection from cache.", "response": "def connectionLostForKey(self, key):\n        \"\"\"\n        Remove lost connection from cache.\n\n        @param key: key of connection that was lost\n        @type key: L{tuple} of L{IAddress} and C{extraHash}\n        \"\"\"\n        if key in self.cachedConnections:\n            del self.cachedConnections[key]\n        if self._shuttingDown and self._shuttingDown.get(key):\n            d, self._shuttingDown[key] = self._shuttingDown[key], None\n            d.callback(None)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ndisconnects all cached connections.", "response": "def shutdown(self):\n        \"\"\"\n        Disconnect all cached connections.\n\n        @returns: a deferred that fires once all connections are disconnected.\n        @rtype: L{Deferred}\n        \"\"\"\n        self._shuttingDown = {key: Deferred()\n                              for key in self.cachedConnections.keys()}\n        return DeferredList(\n            [maybeDeferred(p.transport.loseConnection)\n             for p in self.cachedConnections.values()]\n            + self._shuttingDown.values())"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef parse_state(state):\n    if isinstance(state, bool):\n        return state\n    if not isinstance(state, basestring):\n        raise TypeError('ACL state must be bool or string')\n    try:\n        return _state_strings[state.lower()]\n    except KeyError:\n        raise ValueError('unknown ACL state string')", "response": "Convert a string or bool into a bool."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nextracts the Zipkin attributes and configuration from the request attributes.", "response": "def _get_settings_from_request(request):\n    \"\"\"Extracts Zipkin attributes and configuration from request attributes.\n    See the `zipkin_span` context in py-zipkin for more detailed information on\n
all the settings.\n\n Here are the supported Pyramid registry settings:\n\n zipkin.create_zipkin_attr: allows the service to override the creation of\n Zipkin attributes. For example, if you want to deterministically\n calculate trace ID from some service-specific attributes.\n zipkin.transport_handler: how py-zipkin will log the spans it generates.\n zipkin.stream_name: an additional parameter to be used as the first arg\n to the transport_handler function. A good example is a Kafka topic.\n zipkin.add_logging_annotation: if true, the outermost span in this service\n will have an annotation set when py-zipkin begins its logging.\n zipkin.report_root_timestamp: if true, the outermost span in this service\n will set its timestamp and duration attributes. Use this only if this\n service is not going to have a corresponding client span. See\n https://github.com/Yelp/pyramid_zipkin/issues/68\n zipkin.firehose_handler: [EXPERIMENTAL] this enables \"firehose tracing\",\n which will log 100% of the spans to this handler, regardless of\n sampling decision. This is experimental and may change or be removed\n at any time without warning.\n zipkin.use_pattern_as_span_name: if true, we'll use the pyramid route pattern\n as span name. If false (default) we'll keep using the raw url path.\n \"\"\"\n settings = request.registry.settings\n\n # Creates zipkin_attrs and attaches a zipkin_trace_id attr to the request\n if 'zipkin.create_zipkin_attr' in settings:\n zipkin_attrs = settings['zipkin.create_zipkin_attr'](request)\n else:\n zipkin_attrs = create_zipkin_attr(request)\n\n if 'zipkin.transport_handler' in settings:\n transport_handler = settings['zipkin.transport_handler']\n if not isinstance(transport_handler, BaseTransportHandler):\n warnings.warn(\n 'Using a function as transport_handler is deprecated. 
'\n 'Please extend py_zipkin.transport.BaseTransportHandler',\n DeprecationWarning,\n )\n stream_name = settings.get('zipkin.stream_name', 'zipkin')\n transport_handler = functools.partial(transport_handler, stream_name)\n else:\n raise ZipkinError(\n \"`zipkin.transport_handler` is a required config property, which\"\n \" is missing. Have a look at py_zipkin's docs for how to implement\"\n \" it: https://github.com/Yelp/py_zipkin#transport\"\n )\n\n context_stack = _getattr_path(request, settings.get('zipkin.request_context'))\n\n service_name = settings.get('service_name', 'unknown')\n span_name = '{0} {1}'.format(request.method, request.path)\n add_logging_annotation = settings.get(\n 'zipkin.add_logging_annotation',\n False,\n )\n\n # If the incoming request doesn't have Zipkin headers, this request is\n # assumed to be the root span of a trace. There's also a configuration\n # override to allow services to write their own logic for reporting\n # timestamp/duration.\n if 'zipkin.report_root_timestamp' in settings:\n report_root_timestamp = settings['zipkin.report_root_timestamp']\n else:\n report_root_timestamp = 'X-B3-TraceId' not in request.headers\n zipkin_host = settings.get('zipkin.host')\n zipkin_port = settings.get('zipkin.port', request.server_port)\n firehose_handler = settings.get('zipkin.firehose_handler')\n post_handler_hook = settings.get('zipkin.post_handler_hook')\n max_span_batch_size = settings.get('zipkin.max_span_batch_size')\n use_pattern_as_span_name = bool(\n settings.get('zipkin.use_pattern_as_span_name', False),\n )\n encoding = settings.get('zipkin.encoding', Encoding.V1_THRIFT)\n return _ZipkinSettings(\n zipkin_attrs,\n transport_handler,\n service_name,\n span_name,\n add_logging_annotation,\n report_root_timestamp,\n zipkin_host,\n zipkin_port,\n context_stack,\n firehose_handler,\n post_handler_hook,\n max_span_batch_size,\n use_pattern_as_span_name,\n encoding=encoding,\n )"} {"SOURCE": "codesearchnet", "instruction": "How would 
you implement a function in Python 3 that\nreturns a pyramid tween function for the given pyramid request handler and pyramid app registry.", "response": "def zipkin_tween(handler, registry):\n \"\"\"\n Factory for pyramid tween to handle zipkin server logging. Note that even\n if the request isn't sampled, Zipkin attributes are generated and pushed\n into threadlocal storage, so `create_http_headers_for_new_span` and\n `zipkin_span` will have access to the proper Zipkin state.\n\n Consumes custom create_zipkin_attr function if one is set in the pyramid\n registry.\n\n :param handler: pyramid request handler\n :param registry: pyramid app registry\n\n :returns: pyramid tween\n \"\"\"\n def tween(request):\n zipkin_settings = _get_settings_from_request(request)\n tracer = get_default_tracer()\n\n tween_kwargs = dict(\n service_name=zipkin_settings.service_name,\n span_name=zipkin_settings.span_name,\n zipkin_attrs=zipkin_settings.zipkin_attrs,\n transport_handler=zipkin_settings.transport_handler,\n host=zipkin_settings.host,\n port=zipkin_settings.port,\n add_logging_annotation=zipkin_settings.add_logging_annotation,\n report_root_timestamp=zipkin_settings.report_root_timestamp,\n context_stack=zipkin_settings.context_stack,\n max_span_batch_size=zipkin_settings.max_span_batch_size,\n encoding=zipkin_settings.encoding,\n kind=Kind.SERVER,\n )\n\n if zipkin_settings.firehose_handler is not None:\n tween_kwargs['firehose_handler'] = zipkin_settings.firehose_handler\n\n with tracer.zipkin_span(**tween_kwargs) as zipkin_context:\n response = handler(request)\n if zipkin_settings.use_pattern_as_span_name and request.matched_route:\n zipkin_context.override_span_name('{} {}'.format(\n request.method,\n request.matched_route.pattern,\n ))\n zipkin_context.update_binary_annotations(\n get_binary_annotations(request, response),\n )\n\n if zipkin_settings.post_handler_hook:\n zipkin_settings.post_handler_hook(request, response)\n\n return response\n\n return tween"} 
{"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_skill_data(self):\n path_to_sha = {\n folder: sha for folder, sha in self.get_shas()\n }\n modules = self.read_file('.gitmodules').split('[submodule \"')\n for i, module in enumerate(modules):\n if not module:\n continue\n try:\n name = module.split('\"]')[0].strip()\n path = module.split('path = ')[1].split('\\n')[0].strip()\n url = module.split('url = ')[1].strip()\n sha = path_to_sha.get(path, '')\n yield name, path, url, sha\n except (ValueError, IndexError) as e:\n LOG.warning('Failed to parse submodule \"{}\" #{}:{}'.format(\n locals().get('name', ''), i, e\n ))", "response": "Generates tuples of name path url sha"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef expectAck(self):\n last = self.lastTransmitted\n self.ackPredicate = lambda ackPacket: (\n ackPacket.relativeAck() >= last.relativeSeq()\n )", "response": "Sets the ackPredicate to True if the most recent packet produced as an output of this state machine has been acknowledged by our peer."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreceive an ack or SynAck packet from the given packet.", "response": "def maybeReceiveAck(self, ackPacket):\n \"\"\"\n Receive an L{ack} or L{synAck} input from the given packet.\n \"\"\"\n ackPredicate = self.ackPredicate\n self.ackPredicate = lambda packet: False\n if ackPacket.syn:\n # New SYN packets are always news.\n self.synAck()\n return\n if ackPredicate(ackPacket):\n self.ack()"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _set_options():\n CleanCommand.user_options = _CleanCommand.user_options[:]\n CleanCommand.user_options.extend([\n ('dist', 'd', 'remove distribution directory'),\n ('eggs', None, 'remove egg and egg-info directories'),\n ('environment', 'E', 'remove virtual environment directory'),\n ('pycache', 
'p', 'remove __pycache__ directories'),\n\n        ('egg-base=', 'e',\n         'directory containing .egg-info directories '\n         '(default: top of the source tree)'),\n        ('virtualenv-dir=', None,\n         'root directory for the virtual directory '\n         '(default: value of VIRTUAL_ENV environment variable)'),\n    ])\n    CleanCommand.boolean_options = _CleanCommand.boolean_options[:]\n    CleanCommand.boolean_options.extend(\n        ['dist', 'eggs', 'environment', 'pycache'])", "response": "Set the options for CleanCommand.\n    "} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef activate_output(self, universe: int) -> None:\n        check_universe(universe)\n        # check, if the universe already exists in the list:\n        if universe in self._outputs:\n            return\n        # add new sending:\n        new_output = Output(DataPacket(cid=self.__CID, sourceName=self.source_name, universe=universe))\n        self._outputs[universe] = new_output", "response": "Activates a universe that then starts sending every second."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef deactivate_output(self, universe: int) -> None:\n        check_universe(universe)\n        try:  # try to send out three messages with stream_termination bit set to 1\n            self._outputs[universe]._packet.option_StreamTerminated = True\n            for i in range(0, 3):\n                self._output_thread.send_out(self._outputs[universe])\n        except:\n            pass\n        try:\n            del self._outputs[universe]\n        except:\n            pass", "response": "Deactivates an existing sending output."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nmoves an entry from one universe to another.", "response": "def move_universe(self, universe_from: int, universe_to: int) -> None:\n        \"\"\"\n        Moves a sending output from one universe to another. All settings are restored and only the universe changes\n        :param universe_from: the universe that should be moved\n        :param universe_to: the target universe. 
An existing universe will be overwritten\n \"\"\"\n check_universe(universe_from)\n check_universe(universe_to)\n # store the sending object and change the universe in the packet of the sending\n tmp_output = self._outputs[universe_from]\n tmp_output._packet.universe = universe_to\n # deactivate sending\n self.deactivate_output(universe_from)\n # activate new sending with the new universe\n self._outputs[universe_to] = tmp_output"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nstart or restarts a new Thread with the parameters given in the constructor.", "response": "def start(self, bind_address=None, bind_port: int = None, fps: int = None) -> None:\n \"\"\"\n Starts or restarts a new Thread with the parameters given in the constructor or\n the parameters given in this function.\n The parameters in this function do not override the class specific values!\n :param bind_address: the IP-Address to bind to\n :param bind_port: the port to bind to\n :param fps: the fps to use. 
Note: this is not held precisely; use it for load balancing in the network\n        \"\"\"\n        if bind_address is None:\n            bind_address = self.bindAddress\n        if fps is None:\n            fps = self._fps\n        if bind_port is None:\n            bind_port = self.bind_port\n        self.stop()\n        self._output_thread = OutputThread(cid=self.__CID, source_name=self.source_name,\n                                           outputs=self._outputs, bind_address=bind_address,\n                                           bind_port=bind_port, fps=fps, universe_discovery=self._universeDiscovery)\n        self._output_thread.start()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef openReadWrite(filename):\n    try:\n        os.makedirs(os.path.dirname(filename))\n    except OSError:\n        pass\n    try:\n        return file(filename, 'rb+')\n    except IOError:\n        return file(filename, 'wb+')", "response": "Open a file object for reading and writing."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nopening the bitmask file in the filesystem.", "response": "def openMaskFile(filename):\n    \"\"\"\n    Open the bitmask file sitting next to a file in the filesystem.\n    \"\"\"\n    dirname, basename = os.path.split(filename)\n    newbasename = '_%s_.sbm' % (basename,)\n    maskfname = os.path.join(dirname, newbasename)\n    maskfile = openReadWrite(maskfname)\n    return maskfile"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef data(self, name, chunk, body):\n        self.callRemote(Data, name=name, chunk=chunk, body=body)", "response": "Issue a DATA command on the local cache"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nissue a GET command on the related nexus object and return the size of the name being requested.", "response": "def get(self, name, mask=None):\n        \"\"\"\n        Issue a GET command\n\n        Return a Deferred which fires with the size of the name being requested\n        \"\"\"\n        mypeer = self.transport.getQ2QPeer()\n        tl = self.nexus.transloads[name]\n        peerz = tl.peers\n        if mypeer in 
peerz:\n peerk = peerz[mypeer]\n else:\n # all turned on initially; we aren't going to send them anything.\n peerk = PeerKnowledge(bits.BitArray(size=len(tl.mask), default=1))\n peerz[mypeer] = peerk\n peerk.sentGet = True\n return self.callRemote(\n Get, name=name, mask=mask).addCallback(lambda r: r['size'])"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nnotify AMP that a connection has been lost.", "response": "def connectionLost(self, reason):\n \"\"\"\n Inform the associated L{conncache.ConnectionCache} that this\n protocol has been disconnected.\n \"\"\"\n self.nexus.conns.connectionLostForKey((endpoint.Q2QEndpoint(\n self.nexus.svc,\n self.nexus.addr,\n self.transport.getQ2QPeer(),\n PROTOCOL_NAME), None))\n AMP.connectionLost(self, reason)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nsend some DATA commands to my peer.", "response": "def sendSomeData(self, howMany):\n \"\"\"\n Send some DATA commands to my peer(s) to relay some data.\n\n @param howMany: an int, the number of chunks to send out.\n \"\"\"\n # print 'sending some data', howMany\n if self.transport is None:\n return\n peer = self.transport.getQ2QPeer()\n while howMany > 0:\n # sort transloads so that the least-frequently-serviced ones will\n # come first\n tloads = [\n (findin(tl.name, self.sentTransloads),\n tl) for tl in self.nexus.transloadsForPeer(peer)]\n tloads.sort()\n tloads = [tl for (idx, tl) in tloads if tl.peerNeedsData(peer)]\n if not tloads:\n break\n\n wasHowMany = howMany\n\n for myTransload in tloads:\n # move this transload to the end so it will be sorted last next\n # time.\n name = myTransload.name\n if name in self.sentTransloads:\n self.sentTransloads.remove(name)\n self.sentTransloads.append(name)\n\n knowledge = myTransload.peers[peer]\n chunkNumber, chunkData = myTransload.selectOptimalChunk(peer)\n if chunkNumber is None:\n continue\n\n peerToIntroduce = knowledge.selectPeerToIntroduce(\n 
myTransload.peers.keys())\n\n if peerToIntroduce is not None:\n self.introduce(myTransload.name, peerToIntroduce)\n\n self.data(name, chunkNumber, chunkData)\n # Don't re-send that chunk again unless they explicitly tell us\n # they need it for some reason\n knowledge.mask[chunkNumber] = 1\n howMany -= 1\n if howMany <= 0:\n break\n\n if wasHowMany == howMany:\n # couldn't find anything to send.\n break"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nchoose a peer to introduce.", "response": "def selectPeerToIntroduce(self, otherPeers):\n \"\"\"\n Choose a peer to introduce. Return a q2q address or None, if there are\n no suitable peers to introduce at this time.\n \"\"\"\n for peer in otherPeers:\n if peer not in self.otherPeers:\n self.otherPeers.append(peer)\n return peer"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef chunkVerified(self, who, chunkNumber, chunkData):\n if self.mask[chunkNumber]:\n # already received that chunk.\n return\n self.file.seek(chunkNumber * CHUNK_SIZE)\n self.file.write(chunkData)\n self.file.flush()\n self.sha1sums[chunkNumber] = sha.new(chunkData).digest()\n\n if not self.mask[chunkNumber]:\n self.nexus.increaseScore(who)\n self.mask[chunkNumber] = 1\n self.writeMaskFile()\n self.changes += 1\n\n if self.changes > self.maximumChangeCountBeforeMaskUpdate:\n self.call.cancel()\n self.sendMaskUpdate()\n self.call = self.nexus.callLater(\n self.maximumChangeCountBeforeMaskUpdate,\n self.maybeUpdateMask)\n\n if not self.seed and not self.mask.countbits(0):\n # we're done, let's let other people get at that file.\n self.file.close()\n os.rename(self.incompletePath.path,\n self.fullPath.path)\n self.file = self.fullPath.open()\n self.maskfile.close()\n os.unlink(self.maskfile.name)\n\n self.ui.updateHostMask(self.mask)", "response": "This method is called when a chunk has been verified."} {"SOURCE": "codesearchnet", "instruction": "Given the following 
Python 3 function, write the documentation\ndef selectOptimalChunk(self, peer):\n\n # stuff I have\n have = sets.Set(self.mask.positions(1))\n # stuff that this peer wants\n want = sets.Set(self.peers[peer].mask.positions(0))\n exchangeable = have.intersection(want)\n finalSet = dict.fromkeys(exchangeable, 0)\n\n # taking a page from bittorrent, rarest-first\n for chunkNumber in exchangeable:\n for otherPeer in self.peers.itervalues():\n finalSet[chunkNumber] += not otherPeer.mask[chunkNumber]\n rarityList = [(rarity, random.random(), chunkNumber)\n for (chunkNumber, rarity)\n in finalSet.iteritems()]\n if not rarityList:\n return None, None\n rarityList.sort()\n chunkNumber = rarityList[-1][-1] # sorted in ascending order of rarity\n\n # sanity check\n assert self.mask[chunkNumber], \"I wanted to send a chunk I didn't have\"\n\n self.file.seek(chunkNumber * CHUNK_SIZE)\n chunkData = self.file.read(CHUNK_SIZE)\n self.sha1sums[chunkNumber] = sha.new(chunkData).digest()\n return chunkNumber, chunkData", "response": "select an optimal chunk to send to a peer."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nallocate a file from the given sharename", "response": "def allocateFile(self, sharename, peer):\n \"\"\"\n return a 2-tuple of incompletePath, fullPath\n \"\"\"\n peerDir = self.basepath.child(str(peer))\n if not peerDir.isdir():\n peerDir.makedirs()\n return (peerDir.child(sharename+'.incomplete'),\n peerDir.child(sharename))"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef transloadsForPeer(self, peer):\n for tl in self.transloads.itervalues():\n if peer in tl.peers:\n yield tl", "response": "Returns an iterator over all transloads that apply to a particular peer."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef seed(self, path, name):\n t = self.transloads[name] = Transload(self.addr, self, name,\n None, path,\n 
self.ui.startTransload(name,\n self.addr),\n seed=True)\n return t", "response": "Create a new transload from an existing file that is complete."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef connectPeer(self, peer):\n return self.conns.connectCached(endpoint.Q2QEndpoint(self.svc,\n self.addr,\n peer,\n PROTOCOL_NAME),\n self.clientFactory)", "response": "Establish a SIGMA connection to the given peer."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef increaseScore(self, participant):\n if participant not in self.scores:\n self.scores[participant] = 0\n self.scores[participant] += 1", "response": "Increase the score of a participant."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef listen_on(self, trigger: str, **kwargs) -> callable:\n def decorator(f):\n self.register_listener(trigger, f, **kwargs)\n return f\n return decorator", "response": "This is a simple decorator for registering a callback for an event."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef register_listener(self, trigger: str, func: callable, **kwargs) -> None:\n if trigger in LISTEN_ON_OPTIONS:\n if trigger == LISTEN_ON_OPTIONS[1]: # if the trigger is universe, use the universe from args as key\n try:\n self._callbacks[kwargs[LISTEN_ON_OPTIONS[1]]].append(func)\n except:\n self._callbacks[kwargs[LISTEN_ON_OPTIONS[1]]] = [func]\n try:\n self._callbacks[trigger].append(func)\n except:\n self._callbacks[trigger] = [func]\n else:\n raise TypeError(f'The given trigger \"{trigger}\" is not a valid one!')", "response": "Register a callback for the given trigger."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef join_multicast(self, universe: int) -> None:\n 
self.sock.setsockopt(socket.SOL_IP, socket.IP_ADD_MEMBERSHIP,\n socket.inet_aton(calculate_multicast_addr(universe)) +\n socket.inet_aton(self._bindAddress))", "response": "Joins the multicast group with the given universe."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ntries to leave the multicast group with the specified universe.", "response": "def leave_multicast(self, universe: int) -> None:\n \"\"\"\n Try to leave the multicast group with the specified universe. This does not throw any exception if the group\n could not be leaved.\n :param universe: the universe to leave the multicast group.\n The network hardware has to support the multicast feature!\n \"\"\"\n try:\n self.sock.setsockopt(socket.SOL_IP, socket.IP_DROP_MEMBERSHIP,\n socket.inet_aton(calculate_multicast_addr(universe)) +\n socket.inet_aton(self._bindAddress))\n except: # try to leave the multicast group for the universe\n pass"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nstarting a new thread that handles the input.", "response": "def start(self) -> None:\n \"\"\"\n Starts a new thread that handles the input. 
If a thread is already running, the thread will be restarted.\n \"\"\"\n self.stop() # stop an existing thread\n self._thread = receiverThread(socket=self.sock, callbacks=self._callbacks)\n self._thread.start()"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef predicate(self, name, func=None):\n if func is None:\n return functools.partial(self.predicate, name)\n self.predicates[name] = func\n return func", "response": "Define a predicate function."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef permission_set(self, name, func=None):\n if func is None:\n return functools.partial(self.predicate, name)\n self.permission_sets[name] = func\n return func", "response": "Define a permission set."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nchecking if we can do something with an object.", "response": "def can(self, permission, obj, **kwargs):\n \"\"\"Check if we can do something with an object.\n\n :param permission: The permission to look for.\n :param obj: The object to check the ACL of.\n :param **kwargs: The context to pass to predicates.\n\n >>> auth.can('read', some_object)\n >>> auth.can('write', another_object, group=some_group)\n\n \"\"\"\n\n context = {'user': current_user}\n for func in self.context_processors:\n context.update(func())\n context.update(get_object_context(obj))\n context.update(kwargs)\n return check(permission, iter_object_acl(obj), **context)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nmaking sure we have a permission or abort the request.", "response": "def assert_can(self, permission, obj, **kwargs):\n \"\"\"Make sure we have a permission, or abort the request.\n\n :param permission: The permission to look for.\n :param obj: The object to check the ACL of.\n :param flash: The message to flask if denied (keyword only).\n :param stealth: Abort with a 404? 
(keyword only).\n :param **kwargs: The context to pass to predicates.\n\n \"\"\"\n flash_message = kwargs.pop('flash', None)\n stealth = kwargs.pop('stealth', False)\n default = kwargs.pop('default', None)\n\n res = self.can(permission, obj, **kwargs)\n res = default if res is None else res\n\n if not res:\n if flash_message and not stealth:\n flask.flash(flash_message, 'danger')\n if current_user.is_authenticated():\n if flash_message is not False:\n flask.flash(flash_message or 'You are not permitted to \"%s\" this resource' % permission)\n flask.abort(403)\n elif not stealth and self.login_view:\n if flash_message is not False:\n flask.flash(flash_message or 'Please login for access.')\n raise _Redirect(flask.url_for(self.login_view) + '?' + urlencode(dict(next=\n flask.request.script_root + flask.request.path\n )))\n else:\n flask.abort(404)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nmake sure we can route to the given endpoint or url.", "response": "def can_route(self, endpoint, method=None, **kwargs):\n \"\"\"Make sure we can route to the given endpoint or url.\n\n This checks for `http.get` permission (or other methods) on the ACL of\n route functions, attached via the `ACL` decorator.\n\n :param endpoint: A URL or endpoint to check for permission to access.\n :param method: The HTTP method to check; defaults to `'GET'`.\n :param **kwargs: The context to pass to predicates.\n\n \"\"\"\n\n view = flask.current_app.view_functions.get(endpoint)\n if not view:\n endpoint, args = flask._request_ctx.top.match(endpoint)\n view = flask.current_app.view_functions.get(endpoint)\n if not view:\n return False\n\n return self.can('http.' 
+ (method or 'GET').lower(), view, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _sendTo(self, proto):\n        # XXX This is overriding a private interface\n        super(ConnectionStartBox, self)._sendTo(proto)\n        self.virtualTransport.startProtocol()", "response": "When the connection is established send the AMP protocol to the virtual transport."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_trace_id(request):\n    if 'X-B3-TraceId' in request.headers:\n        trace_id = _convert_signed_hex(request.headers['X-B3-TraceId'])\n        # Tolerates 128 bit X-B3-TraceId by reading the right-most 16 hex\n        # characters (as opposed to overflowing a U64 and starting a new trace).\n        trace_id = trace_id[-16:]\n    elif 'zipkin.trace_id_generator' in request.registry.settings:\n        trace_id = _convert_signed_hex(request.registry.settings[\n            'zipkin.trace_id_generator'](request))\n    else:\n        trace_id = generate_random_64bit_string()\n\n    return trace_id", "response": "Gets the trace id based on a request."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ntake a signed hex string that begins with 0x and convert it to a 16-character string representing an unsigned hex value.", "response": "def _convert_signed_hex(s):\n    \"\"\"Takes a signed hex string that begins with '0x' and converts it to\n    a 16-character string representing an unsigned hex value.\n    Examples:\n        '0xd68adf75f4cfd13' => 'd68adf75f4cfd13'\n        '-0x3ab5151d76fb85e1' => 'c54aeae289047a1f'\n    \"\"\"\n    if s.startswith('0x') or s.startswith('-0x'):\n        s = '{0:x}'.format(struct.unpack('Q', struct.pack('q', int(s, 16)))[0])\n    return s.zfill(16)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ndeciding whether current request path should be sampled or not.", "response": "def should_not_sample_path(request):\n    \"\"\"Decides whether current request path should be sampled 
or not. This is\n    checked previous to `should_not_sample_route` and takes precedence.\n\n    :param: current active pyramid request\n    :returns: boolean whether current request path is blacklisted.\n    \"\"\"\n    blacklisted_paths = request.registry.settings.get(\n        'zipkin.blacklisted_paths', [])\n    # Only compile strings, since even recompiling existing\n    # compiled regexes takes time.\n    regexes = [\n        re.compile(r) if isinstance(r, six.string_types) else r\n        for r in blacklisted_paths\n    ]\n    return any(r.match(request.path) for r in regexes)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef should_not_sample_route(request):\n    blacklisted_routes = request.registry.settings.get(\n        'zipkin.blacklisted_routes', [])\n\n    if not blacklisted_routes:\n        return False\n    route_mapper = request.registry.queryUtility(IRoutesMapper)\n    route_info = route_mapper(request).get('route')\n    return (route_info and route_info.name in blacklisted_routes)", "response": "Decides whether current request route should be sampled or not."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef is_tracing(request):\n    if should_not_sample_path(request):\n        return False\n    elif should_not_sample_route(request):\n        return False\n    elif 'X-B3-Sampled' in request.headers:\n        return request.headers.get('X-B3-Sampled') == '1'\n    else:\n        zipkin_tracing_percent = request.registry.settings.get(\n            'zipkin.tracing_percent', DEFAULT_REQUEST_TRACING_PERCENT)\n        return should_sample_as_per_zipkin_tracing_percent(\n            zipkin_tracing_percent)", "response": "Determine if zipkin should be tracing."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncreate ZipkinAttrs object from a pyramid request.", "response": "def create_zipkin_attr(request):\n    \"\"\"Create ZipkinAttrs object from a request with sampled flag as True.\n    Attaches lazy attribute `zipkin_trace_id` with request which is then used\n    throughout the 
tween.\n\n Consumes custom is_tracing function to determine if the request is traced\n if one is set in the pyramid registry.\n\n :param request: pyramid request object\n :rtype: :class:`pyramid_zipkin.request_helper.ZipkinAttrs`\n \"\"\"\n settings = request.registry.settings\n\n if 'zipkin.is_tracing' in settings:\n is_sampled = settings['zipkin.is_tracing'](request)\n else:\n is_sampled = is_tracing(request)\n\n request.zipkin_trace_id = get_trace_id(request)\n\n span_id = request.headers.get(\n 'X-B3-SpanId', generate_random_64bit_string())\n parent_span_id = request.headers.get('X-B3-ParentSpanId', None)\n flags = request.headers.get('X-B3-Flags', '0')\n return ZipkinAttrs(\n trace_id=request.zipkin_trace_id,\n span_id=span_id,\n parent_span_id=parent_span_id,\n flags=flags,\n is_sampled=is_sampled,\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef dmxData(self, data: tuple):\n newData = [0]*512\n for i in range(0, min(len(data), 512)):\n newData[i] = data[i]\n self._dmxData = tuple(newData)\n # in theory this class supports dynamic length, so the next line is correcting the length\n self.length = 126 + len(self._dmxData)", "response": "This method is used to set the DMX data for the current object."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef make_data_packet(raw_data) -> 'DataPacket':\n # Check if the length is sufficient\n if len(raw_data) < 126:\n raise TypeError('The length of the provided data is not long enough! 
Min length is 126!')\n # Check if the three Vectors are correct\n if tuple(raw_data[18:22]) != tuple(VECTOR_ROOT_E131_DATA) or \\\n tuple(raw_data[40:44]) != tuple(VECTOR_E131_DATA_PACKET) or \\\n raw_data[117] != VECTOR_DMP_SET_PROPERTY: # REMEMBER: when slicing: [inclusive:exclusive]\n raise TypeError('Some of the vectors in the given raw data are not compatible to the E131 Standard!')\n\n tmpPacket = DataPacket(cid=raw_data[22:38], sourceName=str(raw_data[44:108]),\n universe=(0xFF * raw_data[113]) + raw_data[114]) # high byte first\n tmpPacket.priority = raw_data[108]\n # SyncAddress in the future?!\n tmpPacket.sequence = raw_data[111]\n tmpPacket.option_PreviewData = bool(raw_data[112] & 0b10000000) # use the 7th bit as preview_data\n tmpPacket.option_StreamTerminated = bool(raw_data[112] & 0b01000000)\n tmpPacket.dmxData = raw_data[126:638]\n return tmpPacket", "response": "Converts the given raw byte data into a sACN DataPacket."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning the Root layer as list with bytes", "response": "def getBytes(self) -> list:\n '''Returns the Root layer as list with bytes'''\n tmpList = []\n tmpList.extend(_FIRST_INDEX)\n # first append the high byte from the Flags and Length\n # high 4 bit: 0x7 then the bits 8-11(indexes) from _length\n length = self.length - 16\n tmpList.append((0x7 << 4) + (length >> 8))\n # Then append the lower 8 bits from _length\n tmpList.append(length & 0xFF)\n\n tmpList.extend(self._vector)\n tmpList.extend(self._cid)\n return tmpList"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef curate_skills_data(self, skills_data):\n local_skills = [s for s in self.list() if s.is_local]\n default_skills = [s.name for s in self.list_defaults()]\n local_skill_names = [s.name for s in local_skills]\n skills_data_skills = [s['name'] for s in skills_data['skills']]\n\n # Check for skills that aren't in the list\n for skill in 
local_skills:\n if skill.name not in skills_data_skills:\n if skill.name in default_skills:\n origin = 'default'\n elif skill.url:\n origin = 'cli'\n else:\n origin = 'non-msm'\n entry = build_skill_entry(skill.name, origin, False)\n skills_data['skills'].append(entry)\n\n # Check for skills in the list that doesn't exist in the filesystem\n remove_list = []\n for s in skills_data.get('skills', []):\n if (s['name'] not in local_skill_names and\n s['installation'] == 'installed'):\n remove_list.append(s)\n for skill in remove_list:\n skills_data['skills'].remove(skill)\n return skills_data", "response": "Synchronize skills_data with actual skills on disk."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nupdates internal skill_data_structure with the data from disk.", "response": "def sync_skills_data(self):\n \"\"\" Update internal skill_data_structure from disk. \"\"\"\n self.skills_data = self.load_skills_data()\n if 'upgraded' in self.skills_data:\n self.skills_data.pop('upgraded')\n else:\n self.skills_data_hash = skills_data_hash(self.skills_data)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef write_skills_data(self, data=None):\n data = data or self.skills_data\n if skills_data_hash(data) != self.skills_data_hash:\n write_skills_data(data)\n self.skills_data_hash = skills_data_hash(data)", "response": "Writes skills data hash if it has been modified."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ninstall a skill by url or name.", "response": "def install(self, param, author=None, constraints=None, origin=''):\n \"\"\"Install by url or name\"\"\"\n if isinstance(param, SkillEntry):\n skill = param\n else:\n skill = self.find_skill(param, author)\n entry = build_skill_entry(skill.name, origin, skill.is_beta)\n try:\n skill.install(constraints)\n entry['installed'] = time.time()\n entry['installation'] = 'installed'\n 
entry['status'] = 'active'\n entry['beta'] = skill.is_beta\n except AlreadyInstalled:\n entry = None\n raise\n except MsmException as e:\n entry['installation'] = 'failed'\n entry['status'] = 'error'\n entry['failure_message'] = repr(e)\n raise\n finally:\n # Store the entry in the list\n if entry:\n self.skills_data['skills'].append(entry)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nremoving skill by url or name", "response": "def remove(self, param, author=None):\n \"\"\"Remove by url or name\"\"\"\n if isinstance(param, SkillEntry):\n skill = param\n else:\n skill = self.find_skill(param, author)\n skill.remove()\n skills = [s for s in self.skills_data['skills']\n if s['name'] != skill.name]\n self.skills_data['skills'] = skills\n return"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef update(self, skill=None, author=None):\n if skill is None:\n return self.update_all()\n else:\n if isinstance(skill, str):\n skill = self.find_skill(skill, author)\n entry = get_skill_entry(skill.name, self.skills_data)\n if entry:\n entry['beta'] = skill.is_beta\n if skill.update():\n # On successful update update the update value\n if entry:\n entry['updated'] = time.time()", "response": "Update all downloaded skills or one specified skill."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nruns a function on all skills in parallel", "response": "def apply(self, func, skills):\n \"\"\"Run a function on all skills in parallel\"\"\"\n\n def run_item(skill):\n try:\n func(skill)\n return True\n except MsmException as e:\n LOG.error('Error running {} on {}: {}'.format(\n func.__name__, skill.name, repr(e)\n ))\n return False\n except:\n LOG.exception('Error running {} on {}:'.format(\n func.__name__, skill.name\n ))\n\n with ThreadPool(20) as tp:\n return tp.map(run_item, skills)"} {"SOURCE": "codesearchnet", "instruction": "Can you write 
a function in Python 3 where it\ninstalls the default skills updates all others", "response": "def install_defaults(self):\n \"\"\"Installs the default skills, updates all others\"\"\"\n\n def install_or_update_skill(skill):\n if skill.is_local:\n self.update(skill)\n else:\n self.install(skill, origin='default')\n\n return self.apply(install_or_update_skill, self.list_defaults())"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef list_all_defaults(self): # type: () -> Dict[str, List[SkillEntry]]\n skills = self.list()\n name_to_skill = {skill.name: skill for skill in skills}\n defaults = {group: [] for group in self.SKILL_GROUPS}\n\n for section_name, skill_names in self.repo.get_default_skill_names():\n section_skills = []\n for skill_name in skill_names:\n if skill_name in name_to_skill:\n section_skills.append(name_to_skill[skill_name])\n else:\n LOG.warning('No such default skill: ' + skill_name)\n defaults[section_name] = section_skills\n\n return defaults", "response": "Returns a dictionary of all default skills in the repository."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nloads a list of SkillEntry objects from both local and remote skills at the same time.", "response": "def list(self):\n \"\"\"\n Load a list of SkillEntry objects from both local and\n remote skills\n\n It is necessary to load both local and remote skills at\n the same time to correctly associate local skills with the name\n in the repo and remote skills with any custom path that they\n have been downloaded to\n \"\"\"\n try:\n self.repo.update()\n except GitException as e:\n if not isdir(self.repo.path):\n raise\n LOG.warning('Failed to update repo: {}'.format(repr(e)))\n remote_skill_list = (\n SkillEntry(\n name, SkillEntry.create_path(self.skills_dir, url, name),\n url, sha if self.versioned else '', msm=self\n )\n for name, path, url, sha in self.repo.get_skill_data()\n )\n 
remote_skills = {\n skill.id: skill for skill in remote_skill_list\n }\n all_skills = []\n for skill_file in glob(join(self.skills_dir, '*', '__init__.py')):\n skill = SkillEntry.from_folder(dirname(skill_file), msm=self)\n if skill.id in remote_skills:\n skill.attach(remote_skills.pop(skill.id))\n all_skills.append(skill)\n all_skills += list(remote_skills.values())\n return all_skills"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef find_skill(self, param, author=None, skills=None):\n # type: (str, str, List[SkillEntry]) -> SkillEntry\n \"\"\"Find skill by name or url\"\"\"\n if param.startswith('https://') or param.startswith('http://'):\n repo_id = SkillEntry.extract_repo_id(param)\n for skill in self.list():\n if skill.id == repo_id:\n return skill\n name = SkillEntry.extract_repo_name(param)\n path = SkillEntry.create_path(self.skills_dir, param)\n return SkillEntry(name, path, param, msm=self)\n else:\n skill_confs = {\n skill: skill.match(param, author)\n for skill in skills or self.list()\n }\n best_skill, score = max(skill_confs.items(), key=lambda x: x[1])\n LOG.info('Best match ({}): {} by {}'.format(\n round(score, 2), best_skill.name, best_skill.author)\n )\n if score < 0.3:\n raise SkillNotFound(param)\n low_bound = (score * 0.7) if score != 1.0 else 1.0\n\n close_skills = [\n skill for skill, conf in skill_confs.items()\n if conf >= low_bound and skill != best_skill\n ]\n if close_skills:\n raise MultipleSkillMatches([best_skill] + close_skills)\n return best_skill", "response": "Find skill by name or url or skills."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\niterating chunks of data", "response": "def iterchunks(data, chunksize):\n \"\"\"iterate chunks of data\n \"\"\"\n offt = 0\n while offt < len(data):\n yield data[offt:offt+chunksize]\n offt += chunksize"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nchecking 
if a segment is acceptable for the current thread.", "response": "def segmentAcceptable(RCV_NXT, RCV_WND, SEG_SEQ, SEG_LEN):\n \"\"\"\n An acceptable segment: RFC 793 page 26.\n \"\"\"\n if SEG_LEN == 0 and RCV_WND == 0:\n return SEG_SEQ == RCV_NXT\n if SEG_LEN == 0 and RCV_WND > 0:\n return ((RCV_NXT <= SEG_SEQ) and (SEG_SEQ < RCV_NXT + RCV_WND))\n if SEG_LEN > 0 and RCV_WND == 0:\n return False\n if SEG_LEN > 0 and RCV_WND > 0:\n return (( (RCV_NXT <= SEG_SEQ) and (SEG_SEQ < RCV_NXT + RCV_WND))\n or ((RCV_NXT <= SEG_SEQ+SEG_LEN-1) and\n (SEG_SEQ+SEG_LEN-1 < RCV_NXT + RCV_WND)))\n assert 0, 'Should be impossible to get here.'\n return False"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncheck to see if this packet must be retransmitted until it was received.", "response": "def mustRetransmit(self):\n \"\"\"\n Check to see if this packet must be retransmitted until it was\n received.\n\n Packets which contain a connection-state changing flag (SYN or FIN) or\n a non-zero amount of data can be retransmitted.\n \"\"\"\n if self.syn or self.fin or self.dlen:\n return True\n return False"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nemits an acknowledgement packet soon.", "response": "def ackSoon(self):\n \"\"\"\n Emit an acknowledgement packet soon.\n \"\"\"\n if self._ackTimer is None:\n def originateAck():\n self._ackTimer = None\n self.originate(ack=True)\n self._ackTimer = reactor.callLater(0.1, originateAck)\n else:\n self._ackTimer.reset(ACK_DELAY)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncreates a packet, enqueue it to be sent, and return it.", "response": "def originate(self, data='', syn=False, ack=False, fin=False, rst=False):\n \"\"\"\n Create a packet, enqueue it to be sent, and return it.\n \"\"\"\n if self._ackTimer is not None:\n self._ackTimer.cancel()\n self._ackTimer = None\n if syn:\n # We really should be randomizing 
the ISN but until we finish the\n # implementations of the various bits of wraparound logic that were\n # started with relativeSequence\n assert self.nextSendSeqNum == 0, (\n \"NSSN = \" + repr(self.nextSendSeqNum))\n assert self.hostSendISN == 0\n p = PTCPPacket.create(self.hostPseudoPort,\n self.peerPseudoPort,\n seqNum=(self.nextSendSeqNum +\n self.hostSendISN) % (2**32),\n ackNum=self.currentAckNum(),\n data=data,\n window=self.recvWindow,\n syn=syn, ack=ack, fin=fin, rst=rst,\n destination=self.peerAddressTuple)\n # do we want to enqueue this packet for retransmission?\n sl = p.segmentLength()\n self.nextSendSeqNum += sl\n\n if p.mustRetransmit():\n # print self, 'originating retransmittable packet', len(self.retransmissionQueue)\n if self.retransmissionQueue:\n if self.retransmissionQueue[-1].fin:\n raise AssertionError(\"Sending %r after FIN??!\" % (p,))\n # print 'putting it on the queue'\n self.retransmissionQueue.append(p)\n # print 'and sending it later'\n self._retransmitLater()\n if not self.sendWindowRemaining: # len(self.retransmissionQueue) > 5:\n # print 'oh no my queue is too big'\n # This is a random number (5) because I ought to be summing the\n # packet lengths or something.\n self._writeBufferFull()\n else:\n # print 'my queue is still small enough', len(self.retransmissionQueue), self, self.sendWindowRemaining\n pass\n self.ptcp.sendPacket(p)\n return p"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef connect(self, factory, host, port, pseudoPort=1):\n sourcePseudoPort = genConnID() % MAX_PSEUDO_PORT\n conn = self._connections[(pseudoPort, sourcePseudoPort, (host, port))\n ] = PTCPConnection(\n sourcePseudoPort, pseudoPort, self, factory, (host, port))\n conn.machine.appActiveOpen()\n return conn", "response": "Establish a new connection via PTCP to the given remote address."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nclean up all of our 
connections by issuing application - level close stop notifications sending Hail - MOR and Hail - MOR final FIN packets.", "response": "def _finalCleanup(self):\n \"\"\"\n Clean up all of our connections by issuing application-level close and\n stop notifications, sending hail-mary final FIN packets (which may not\n reach the other end, but nevertheless can be useful) when possible.\n \"\"\"\n for conn in self._connections.values():\n conn.releaseConnectionResources()\n assert not self._connections"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nwaiting for all currently - open connections to enter the CLOSED state.", "response": "def waitForAllConnectionsToClose(self):\n \"\"\"\n Wait for all currently-open connections to enter the 'CLOSED' state.\n Currently this is only usable from test fixtures.\n \"\"\"\n if not self._connections:\n return self._stop()\n return self._allConnectionsClosed.deferred().addBoth(self._stop)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncreate an argument class for loading and saving instances of the given loader class.", "response": "def _argumentForLoader(loaderClass):\n \"\"\"\n Create an AMP argument for (de-)serializing instances of C{loaderClass}.\n\n @param loaderClass: A type object with a L{load} class method that takes\n some bytes and returns an instance of itself, and a L{dump} instance\n method that returns some bytes.\n\n @return: a class decorator which decorates an AMP argument class by\n replacing it with the one defined for loading and saving C{loaderClass}\n instances.\n \"\"\"\n def decorator(argClass):\n class LoadableArgument(String):\n def toString(self, arg):\n assert isinstance(arg, loaderClass), \\\n (\"%r not %r\" % (arg, loaderClass))\n return String.toString(self, arg.dump())\n\n def fromString(self, arg):\n return loaderClass.load(String.fromString(self, arg))\n\n LoadableArgument.__name__ = argClass.__name__\n return LoadableArgument\n 
return decorator"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef fromString(self, inStr):\n host, sPort = inStr.split(\":\")\n return (host, int(sPort))", "response": "Convert the given bytes into a C { host port } tuple."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nsets up a Q2Q service.", "response": "def setup_Q2Q(self, path,\n q2qPortnum=q2q.port,\n inboundTCPPortnum=q2q.port+1,\n publicIP=None\n ):\n \"\"\"Set up a Q2Q service.\n \"\"\"\n store = DirectoryCertificateAndUserStore(path)\n # store.addPrivateCertificate(\"kazekage\")\n # store.addUser(\"kazekage\", \"username\", \"password1234\")\n\n self.attach(q2q.Q2QService(\n protocolFactoryFactory=IdentityAdminFactory(store).examineRequest,\n certificateStorage=store,\n portal=Portal(store, checkers=[store]),\n q2qPortnum=q2qPortnum,\n inboundTCPPortnum=inboundTCPPortnum,\n publicIP=publicIP,\n ))"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreplacing all macros in given string with corresponding values.", "response": "def replace_macros(string, spec=None):\n \"\"\"Replace all macros in given string with corresponding values.\n\n For example: a string '%{name}-%{version}.tar.gz' will be transformed to 'foo-2.0.tar.gz'.\n\n :param string A string containing macros that you want to be replaced\n :param spec An optional spec file. 
If given, definitions in that spec\n file will be used to replace macros.\n\n :return A string where all macros in given input are substituted as good as possible.\n\n \"\"\"\n if spec:\n assert isinstance(spec, Spec)\n\n def _is_conditional(macro: str) -> bool:\n return macro.startswith(\"?\") or macro.startswith(\"!\")\n\n def _test_conditional(macro: str) -> bool:\n if macro[0] == \"?\":\n return True\n if macro[0] == \"!\":\n return False\n raise Exception(\"Given string is not a conditional macro\")\n\n def _macro_repl(match):\n macro_name = match.group(1)\n if _is_conditional(macro_name) and spec:\n parts = macro_name[1:].split(sep=\":\", maxsplit=1)\n assert parts\n if _test_conditional(macro_name): # ?\n if hasattr(spec, parts[0]):\n if len(parts) == 2:\n return parts[1]\n\n return getattr(spec, parts[0], None)\n\n return \"\"\n else: # !\n if not hasattr(spec, parts[0]):\n if len(parts) == 2:\n return parts[1]\n\n return getattr(spec, parts[0], None)\n\n return \"\"\n\n if spec:\n value = getattr(spec, macro_name, None)\n if value:\n return str(value)\n return match.string[match.start() : match.end()]\n\n # Recursively expand macros\n # Note: If macros are not defined in the spec file, this won't try to\n # expand them.\n while True:\n ret = re.sub(_macro_pattern, _macro_repl, string)\n if ret != string:\n string = ret\n continue\n return ret"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nupdate given spec object and parse context and return them again.", "response": "def update(self, spec_obj, context, match_obj, line):\n \"\"\"Update given spec object and parse context and return them again.\n\n :param spec_obj: An instance of Spec class\n :param context: The parse context\n :param match_obj: The re.match object\n :param line: The original line\n :return: Given updated Spec instance and parse context dictionary.\n \"\"\"\n\n assert spec_obj\n assert context\n assert match_obj\n assert line\n\n return 
self.update_impl(spec_obj, context, match_obj, line)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef packages_dict(self):\n assert self.packages\n return dict(zip([package.name for package in self.packages], self.packages))", "response": "All packages in this RPM spec as a dictionary."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncreate a new Spec object from a given file.", "response": "def from_file(filename):\n \"\"\"Creates a new Spec object from a given file.\n\n :param filename: The path to the spec file.\n :return: A new Spec object.\n \"\"\"\n\n spec = Spec()\n with open(filename, \"r\", encoding=\"utf-8\") as f:\n parse_context = {\"current_subpackage\": None}\n for line in f:\n spec, parse_context = _parse(spec, parse_context, line)\n return spec"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef from_string(string: str):\n\n spec = Spec()\n parse_context = {\"current_subpackage\": None}\n for line in string.splitlines():\n spec, parse_context = _parse(spec, parse_context, line)\n return spec", "response": "Creates a new Spec object from a given string."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nparses a string or list of ACE definitions into usable ACEs.", "response": "def parse_acl(acl_iter):\n \"\"\"Parse a string, or list of ACE definitions, into usable ACEs.\"\"\"\n\n if isinstance(acl_iter, basestring):\n acl_iter = [acl_iter]\n\n for chunk in acl_iter:\n\n if isinstance(chunk, basestring):\n chunk = chunk.splitlines()\n chunk = [re.sub(r'#.+', '', line).strip() for line in chunk]\n chunk = filter(None, chunk)\n else:\n chunk = [chunk]\n\n for ace in chunk:\n\n # If this was provided as a string, then parse the permission set.\n # Otherwise, use it as-is, which will result in an equality test.\n if isinstance(ace, 
basestring):\n ace = ace.split(None, 2)\n state, predicate, permission_set = ace\n yield parse_state(state), parse_predicate(predicate), parse_permission_set(permission_set)\n else:\n state, predicate, permission_set = ace\n yield parse_state(state), parse_predicate(predicate), permission_set"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef iter_object_acl(root):\n\n for obj in iter_object_graph(root):\n for ace in parse_acl(getattr(obj, '__acl__', ())):\n yield ace", "response": "Iterate over the ACEs in the object root."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngets the authentication context for an object.", "response": "def get_object_context(root):\n \"\"\"Depth-first discovery of authentication context for an object.\n\n Walks the ACL graph via ``__acl_bases__`` and merges the ``__acl_context__``\n attributes.\n\n \"\"\"\n\n context = {}\n for obj in iter_object_graph(root, parents_first=True):\n context.update(getattr(obj, '__acl_context__', {}))\n return context"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nload the skills data from the file.", "response": "def load_skills_data() -> dict:\n \"\"\"Contains info on how skills should be updated\"\"\"\n skills_data_file = expanduser('~/.mycroft/skills.json')\n if isfile(skills_data_file):\n try:\n with open(skills_data_file) as f:\n return json.load(f)\n except json.JSONDecodeError:\n return {}\n else:\n return {}"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_skill_entry(name, skills_data) -> dict:\n for e in skills_data.get('skills', []):\n if e.get('name') == name:\n return e\n return {}", "response": "Find a skill entry in the skills_data and return it."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncheck if the sequence number of the packet is legal.", "response": 
"def is_legal_sequence(self, packet: DataPacket) -> bool:\n \"\"\"\n Check if the Sequence number of the DataPacket is legal.\n For more information see page 17 of http://tsp.esta.org/tsp/documents/docs/E1-31-2016.pdf.\n :param packet: the packet to check\n :return: true if the sequence is legal. False if the sequence number is bad\n \"\"\"\n # if the sequence of the packet is smaller than the last received sequence, return false\n # therefore calculate the difference between the two values:\n try: # try, because self.lastSequence might not been initialized\n diff = packet.sequence - self.lastSequence[packet.universe]\n # if diff is between ]-20,0], return False for a bad packet sequence\n if 0 >= diff > -20:\n return False\n except:\n pass\n # if the sequence is good, return True and refresh the list with the new value\n self.lastSequence[packet.universe] = packet.sequence\n return True"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nchecks if the given packet has a legal priority for the stored values for the universe.", "response": "def is_legal_priority(self, packet: DataPacket):\n \"\"\"\n Check if the given packet has high enough priority for the stored values for the packet's universe.\n :param packet: the packet to check\n :return: returns True if the priority is good. 
Otherwise False\n \"\"\"\n # check if the packet's priority is high enough to get processed\n if packet.universe not in self.callbacks.keys() or \\\n packet.priority < self.priorities[packet.universe][0]:\n return False # return if the universe is not interesting\n else:\n return True"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef deploy(Class, name=None, uid=None, gid=None, **kw):\n svc = Class(**kw)\n\n if name is None:\n name = Class.__name__\n # Make it easier (possible) to find this service by name later on\n svc.setName(name)\n\n app = service.Application(name, uid=uid, gid=gid)\n app.addComponent(NotPersistable(app), ignoreClass=True)\n svc.setServiceParent(app)\n\n return app", "response": "Create an application with the given name uid and gid."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef addServer(self, normalPort, sslPort, f, name):\n tcp = internet.TCPServer(normalPort, f)\n tcp.setName(name)\n self.servers.append(tcp)\n if sslPort is not None:\n ssl = internet.SSLServer(sslPort, f, contextFactory=self.sslfac)\n ssl.setName(name+'s')\n self.servers.append(ssl)", "response": "Add a TCP and an SSL server. Name them name and name + s."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _identify(self, subject):\n ourPrivateCert = self.service.certificateStorage.getPrivateCertificate(\n str(subject)\n )\n ourCA = Certificate(ourPrivateCert.original)\n return dict(certificate=ourCA)", "response": "Return a dictionary of the current identity and certificate."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nchecking that the cert currently in use by this transport is valid to claim that the connection offers authorization for this host speaking for C{ourAddress}, to a host speaking for C{theirAddress}. 
The remote host (the one claiming to use theirAddress) may have a certificate which is issued for the domain for theirAddress or the full address given in theirAddress. This method runs B{after} cryptographic verification of the validity of certificates, although it does not perform any cryptographic checks itself. It depends on SSL connection handshaking - *and* the particular certificate lookup logic which prevents spoofed Issuer fields, to work properly. However, all it checks is the X509 names present in the certificates matching with the application-level security claims being made by our peer. An example of successful verification, because both parties have properly signed certificates for their usage from the domain they have been issued:: our current certficate: issuer: divmod.com subject: glyph@divmod.com their current certificate: issuer: twistedmatrix.com subject: exarkun@twistedmatrix.com Arguments to verifyCertificateAllowed: ourAddress: glyph@divmod.com theirAddress: exarkun@twistedmatrix.com Result of verifyCertificateAllowed: None An example of rejected verification, because domain certificates are always B{self}-signed in Q2Q; verisign is not a trusted certificate authority for the entire internet as with some other TLS implementations:: our current certificate: issuer: divmod.com subject: divmod.com their current certificate: issuer: verisign.com subject: twistedmatrix.com Arguments to verifyCertificateAllowed: ourAddress: divmod.com theirAddress: twistedmatrix.com Result of verifyCertificateAllowed: exception VerifyError raised Another example of successful verification, because we assume our current certificate is under the control of this side of the connection, so *any* claimed subject is considered acceptable:: our current certificate: issuer: divmod.com subject: divmod.com their current certificate: issuer: divmod.com subject: glyph@twistedmatrix.com Arguments to verifyCertificateAllowed: ourAddress: divmod.com theirAddress: 
glyph@twistedmatrix.com Result of verifyCertificateAllowed: None Another example of successful verification, because the user is claiming to be anonymous; there is also a somewhat looser cryptographic check applied to signatures for anonymous connections:: our current certificate: issuer: divmod.com subject: divmod.com their current certificate: issuer: @ subject: @ arguments to verifyCertificateAllowed: ourAddress: divmod.com theirAddress: @ Result of verifyCertificateAllowed: None Accept anonymous connections with caution. @param ourAddress: a L{Q2QAddress} representing the address that we are supposed to have authority for, requested by our peer. @param theirAddress: a L{Q2QAddress} representing the address that our network peer claims to be communicating on behalf of. For example, if our peer is foobar.com they may claim to be operating on behalf of any user @foobar.com. @raise: L{VerifyError} if the certificates do not match the claimed addresses.", "response": "def verifyCertificateAllowed(self,\n ourAddress,\n theirAddress):\n \"\"\"\n Check that the cert currently in use by this transport is valid to\n claim that the connection offers authorization for this host speaking\n for C{ourAddress}, to a host speaking for C{theirAddress}. The remote\n host (the one claiming to use theirAddress) may have a certificate\n which is issued for the domain for theirAddress or the full address\n given in theirAddress.\n\n This method runs B{after} cryptographic verification of the validity of\n certificates, although it does not perform any cryptographic checks\n itself. It depends on SSL connection handshaking - *and* the\n particular certificate lookup logic which prevents spoofed Issuer\n fields, to work properly. 
However, all it checks is the X509 names\n present in the certificates matching with the application-level\n security claims being made by our peer.\n\n An example of successful verification, because both parties have\n properly signed certificates for their usage from the domain they\n have been issued::\n\n our current certficate:\n issuer: divmod.com\n subject: glyph@divmod.com\n their current certificate:\n issuer: twistedmatrix.com\n subject: exarkun@twistedmatrix.com\n Arguments to verifyCertificateAllowed:\n ourAddress: glyph@divmod.com\n theirAddress: exarkun@twistedmatrix.com\n Result of verifyCertificateAllowed: None\n\n An example of rejected verification, because domain certificates are\n always B{self}-signed in Q2Q; verisign is not a trusted certificate\n authority for the entire internet as with some other TLS\n implementations::\n\n our current certificate:\n issuer: divmod.com\n subject: divmod.com\n their current certificate:\n issuer: verisign.com\n subject: twistedmatrix.com\n Arguments to verifyCertificateAllowed:\n ourAddress: divmod.com\n theirAddress: twistedmatrix.com\n Result of verifyCertificateAllowed: exception VerifyError raised\n\n Another example of successful verification, because we assume our\n current certificate is under the control of this side of the\n connection, so *any* claimed subject is considered acceptable::\n\n our current certificate:\n issuer: divmod.com\n subject: divmod.com\n their current certificate:\n issuer: divmod.com\n subject: glyph@twistedmatrix.com\n Arguments to verifyCertificateAllowed:\n ourAddress: divmod.com\n theirAddress: glyph@twistedmatrix.com\n Result of verifyCertificateAllowed: None\n\n Another example of successful verification, because the user is\n claiming to be anonymous; there is also a somewhat looser\n cryptographic check applied to signatures for anonymous\n connections::\n\n our current certificate:\n issuer: divmod.com\n subject: divmod.com\n their current certificate:\n issuer: @\n 
subject: @\n Arguments to verifyCertificateAllowed:\n ourAddress: divmod.com\n theirAddress: @\n Result of verifyCertificateAllowed: None\n\n Accept anonymous connections with caution.\n\n @param ourAddress: a L{Q2QAddress} representing the address that we are\n supposed to have authority for, requested by our peer.\n\n @param theirAddress: a L{Q2QAddress} representing the address that our\n network peer claims to be communicating on behalf of. For example, if\n our peer is foobar.com they may claim to be operating on behalf of any\n user @foobar.com.\n\n @raise: L{VerifyError} if the certificates do not match the\n claimed addresses.\n \"\"\"\n\n # XXX TODO: Somehow, it's got to be possible for a single cluster to\n # internally claim to be agents of any other host when issuing a\n # CONNECT; in other words, we always implicitly trust ourselves. Also,\n # we might want to issue anonymous CONNECTs over unencrypted\n # connections.\n\n # IOW: *we* can sign a certificate to be whoever, but the *peer* can\n # only sign the certificate to be the peer.\n\n # The easiest way to make this work is to issue ourselves a wildcard\n # certificate.\n\n if not self.authorized:\n if theirAddress.domain == '':\n # XXX TODO: document this rule, anonymous connections are\n # allowed to not be authorized because they are not making any\n # claims about who they are\n\n # XXX also TODO: make it so that anonymous connections are\n # disabled by default for most protocols\n return True\n raise VerifyError(\"No official negotiation has taken place.\")\n\n peerCert = Certificate.peerFromTransport(self.transport)\n ourCert = self.hostCertificate\n\n ourClaimedDomain = ourAddress.domainAddress()\n theirClaimedDomain = theirAddress.domainAddress()\n\n # Sanity check #1: did we pick the right certificate on our end?\n if not ourClaimedDomain.claimedAsIssuerOf(ourCert):\n raise VerifyError(\n \"Something has gone horribly wrong: local domain mismatch \"\n \"claim: %s actual: %s\" % 
(ourClaimedDomain,\n ourCert.getIssuer()))\n if theirClaimedDomain.claimedAsIssuerOf(peerCert):\n # Their domain issued their certificate.\n if (theirAddress.claimedAsSubjectOf(peerCert) or\n theirClaimedDomain.claimedAsSubjectOf(peerCert)):\n return\n elif ourClaimedDomain.claimedAsIssuerOf(peerCert):\n # *our* domain can spoof *anything*\n return\n elif ourAddress.claimedAsIssuerOf(peerCert):\n # Neither our domain nor their domain signed this. Did *we*?\n # (Useful in peer-to-peer persistent transactions where we don't\n # want the server involved: exarkun@twistedmatrix.com can sign\n # glyph@divmod.com's certificate).\n return\n\n raise VerifyError(\n \"Us: %s Them: %s \"\n \"TheyClaimWeAre: %s TheyClaimTheyAre: %s\" %\n (ourCert, peerCert,\n ourAddress, theirAddress))"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _listen(self, protocols, From, description):\n # The peer is coming from a client-side representation of the user\n # described by 'From', and talking *to* a server-side representation of\n # the user described by 'From'.\n self.verifyCertificateAllowed(From, From)\n theirCert = Certificate.peerFromTransport(self.transport)\n for protocolName in protocols:\n if protocolName.startswith('.'):\n raise VerifyError(\n \"Internal protocols are for server-server use _only_: %r\" %\n protocolName)\n\n key = (From, protocolName)\n value = (self, theirCert, description)\n log.msg(\"%r listening for %r\" % key)\n self.listeningClient.append((key, value))\n self.service.listeningClients.setdefault(key, []).append(value)\n return {}", "response": "Implementation of L { Listen."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _write(self, body, id):\n if id not in self.connections:\n raise error.ConnectionDone()\n connection = self.connections[id]\n connection.dataReceived(body)\n return {}", "response": "Respond to a WRITE command sending some data 
over a virtual channel."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nresponding to a CLOSE command dumping some data onto the stream.", "response": "def _close(self, id):\n \"\"\"\n Respond to a CLOSE command, dumping some data onto the stream. As with\n WRITE, this returns an empty acknowledgement.\n\n An occurrence of I{Close} on the wire, together with the response\n generated by this method, might have this appearance::\n\n C: -Command: Close\n C: -Ask: 1\n C: Id: glyph@divmod.com->radix@twistedmatrix.com:q2q-example:0\n C:\n S: -Answer: 1\n S:\n\n \"\"\"\n # The connection is removed from the mapping by connectionLost.\n connection = self.connections[id]\n connection.connectionLost(Failure(CONNECTION_DONE))\n return {}"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _sign(self, certificate_request, password):\n if self.service.portal is None:\n raise BadCertificateRequest(\"This agent cannot sign certificates.\")\n\n subj = certificate_request.getSubject()\n\n sk = subj.keys()\n if 'commonName' not in sk:\n raise BadCertificateRequest(\n \"Certificate requested with bad subject: %s\" % (sk,))\n\n uandd = subj.commonName.split(\"@\")\n if len(uandd) != 2:\n raise BadCertificateRequest(\n \"Won't sign certificates for other domains\"\n )\n domain = uandd[1]\n\n CS = self.service.certificateStorage\n ourCert = CS.getPrivateCertificate(domain)\n\n D = self.service.portal.login(\n UsernameShadowPassword(subj.commonName, password),\n self,\n ivertex.IQ2QUser)\n\n def _(ial):\n (iface, aspect, logout) = ial\n ser = CS.genSerial(domain)\n return dict(certificate=aspect.signCertificateRequest(\n certificate_request, ourCert, ser))\n\n return D.addCallback(_)", "response": "Respond to a request to sign a CSR for a user or agent located within a specific domain."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in 
Python 3 to\nretrieve a remote certificate for the first available time.", "response": "def _retrieveRemoteCertificate(self, From, port=port):\n \"\"\"\n The entire conversation, starting with TCP handshake and ending at\n disconnect, to retrieve a foreign domain's certificate for the first\n time.\n \"\"\"\n CS = self.service.certificateStorage\n host = str(From.domainAddress())\n p = AMP()\n p.wrapper = self.wrapper\n f = protocol.ClientCreator(reactor, lambda: p)\n connD = f.connectTCP(host, port)\n\n def connected(proto):\n dhost = From.domainAddress()\n iddom = proto.callRemote(Identify, subject=dhost)\n def gotCert(identifyBox):\n theirCert = identifyBox['certificate']\n theirIssuer = theirCert.getIssuer().commonName\n theirName = theirCert.getSubject().commonName\n if (theirName != str(dhost)):\n raise VerifyError(\n \"%r claimed it was %r in IDENTIFY response\"\n % (theirName, dhost))\n if (theirIssuer != str(dhost)):\n raise VerifyError(\n \"self-signed %r claimed it was issued by \"\n \"%r in IDENTIFY response\" % (dhost, theirIssuer))\n def storedCert(ignored):\n return theirCert\n return CS.storeSelfSignedCertificate(\n str(dhost), theirCert).addCallback(storedCert)\n def nothingify(x):\n proto.transport.loseConnection()\n return x\n return iddom.addCallback(gotCert).addBoth(nothingify)\n connD.addCallback(connected)\n return connD"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning a Deferred which fires True when this connection has been secured as a channel between fromAddress (locally) and toAddress (remotely). 
Raises an error if this is not possible.", "response": "def secure(self, fromAddress, toAddress,\n fromCertificate, foreignCertificateAuthority=None,\n authorize=True):\n \"\"\"\n Return a Deferred which fires True when this connection has been\n secured as a channel between fromAddress (locally) and\n toAddress (remotely).\n Raises an error if this is not possible.\n \"\"\"\n if self.hostCertificate is not None:\n raise RuntimeError(\"Re-securing already secured connection.\")\n\n def _cbSecure(response):\n if foreignCertificateAuthority is not None:\n self.authorized = True\n return True\n extra = {'tls_localCertificate': fromCertificate}\n if foreignCertificateAuthority is not None:\n extra['tls_verifyAuthorities'] = [foreignCertificateAuthority]\n\n return self.callRemote(\n Secure,\n From=fromAddress,\n to=toAddress,\n authorize=authorize, **extra).addCallback(_cbSecure)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nissue an INBOUND command to create a virtual connection to the peer.", "response": "def connect(self, From, to,\n protocolName, clientFactory,\n chooser):\n \"\"\"\n Issue an INBOUND command, creating a virtual connection to the peer,\n given identifying information about the endpoint to connect to, and a\n protocol factory.\n\n @param clientFactory: a *Client* ProtocolFactory instance which will\n generate a protocol upon connect.\n\n @return: a Deferred which fires with the protocol instance that was\n connected, or fails with AttemptsFailed if the connection was not\n possible.\n \"\"\"\n\n publicIP = self._determinePublicIP()\n\n A = dict(From=From,\n to=to,\n protocol=protocolName)\n\n if self.service.dispatcher is not None:\n # Tell them exactly where they can shove it\n A['udp_source'] = (publicIP,\n self.service.sharedUDPPortnum)\n else:\n # Don't tell them because we don't know\n log.msg(\"dispatcher unavailable when connecting\")\n\n D = self.callRemote(Inbound, **A)\n\n def _connected(answer):\n listenersD = 
defer.maybeDeferred(chooser, answer['listeners'])\n def gotListeners(listeners):\n allConnectionAttempts = []\n for listener in listeners:\n d = self.attemptConnectionMethods(\n listener['methods'],\n listener['id'],\n From, to,\n protocolName, clientFactory,\n )\n allConnectionAttempts.append(d)\n return defer.DeferredList(allConnectionAttempts)\n listenersD.addCallback(gotListeners)\n def finishedAllAttempts(results):\n succeededAny = False\n failures = []\n if not results:\n return Failure(NoAttemptsMade(\n \"there was no available path for connections \"\n \"(%r->%r/%s)\" % (From, to, protocolName)))\n for succeeded, result in results:\n if succeeded:\n succeededAny = True\n randomConnection = result\n break\n else:\n failures.append(result)\n if not succeededAny:\n return Failure(\n AttemptsFailed(\n [failure.getBriefTraceback()\n for failure in failures]\n )\n )\n\n # XXX TODO: this connection is really random; connectQ2Q should\n # not return one of the connections it's made, put it into your\n # protocol's connectionMade handler\n\n return randomConnection\n\n return listenersD.addCallback(finishedAllAttempts)\n return D.addCallback(_connected)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef whoami(self):\n def cbWhoAmI(result):\n return result['address']\n return self.callRemote(WhoAmI).addCallback(cbWhoAmI)", "response": "Return a Deferred which fires with a 2 - tuple of ip port\n number."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nrequest the avatar ID associated with these credentials.", "response": "def requestAvatarId(self, credentials):\n \"\"\"\n Return the ID associated with these credentials.\n\n @param credentials: something which implements one of the interfaces in\n self.credentialInterfaces.\n\n @return: a Deferred which will fire a string which identifies an\n avatar, an empty tuple to specify an authenticated anonymous user\n 
(provided as checkers.ANONYMOUS) or fire a Failure(UnauthorizedLogin).\n\n @see: L{twisted.cred.credentials}\n \"\"\"\n username, domain = credentials.username.split(\"@\")\n key = self.users.key(domain, username)\n if key is None:\n return defer.fail(UnauthorizedLogin())\n\n def _cbPasswordChecked(passwordIsCorrect):\n if passwordIsCorrect:\n return username + '@' + domain\n else:\n raise UnauthorizedLogin()\n\n return defer.maybeDeferred(credentials.checkPassword,\n key).addCallback(_cbPasswordChecked)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef addPrivateCertificate(self, subjectName, existingCertificate=None):\n if existingCertificate is None:\n assert '@' not in subjectName, \"Don't self-sign user certs!\"\n mainDN = DistinguishedName(commonName=subjectName)\n mainKey = KeyPair.generate()\n mainCertReq = mainKey.certificateRequest(mainDN)\n mainCertData = mainKey.signCertificateRequest(\n mainDN, mainCertReq,\n lambda dn: True,\n self.genSerial(subjectName)\n )\n mainCert = mainKey.newCertificate(mainCertData)\n else:\n mainCert = existingCertificate\n self.localStore[subjectName] = mainCert", "response": "Add a PrivateCertificate object to this store for this subjectName."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nlistens to a protocol from a given address and return a Deferred that resolves when the protocol is ready.", "response": "def listenQ2Q(self, fromAddress, protocolsToFactories, serverDescription):\n \"\"\"\n Right now this is really only useful in the client implementation,\n since it is transient. 
protocolFactoryFactory is used for persistent\n listeners.\n \"\"\"\n myDomain = fromAddress.domainAddress()\n D = self.getSecureConnection(fromAddress, myDomain)\n def _secured(proto):\n lfm = self.localFactoriesMapping\n def startup(listenResult):\n for protocol, factory in protocolsToFactories.iteritems():\n key = (fromAddress, protocol)\n if key not in lfm:\n lfm[key] = []\n lfm[key].append((factory, serverDescription))\n factory.doStart()\n\n def shutdown():\n for protocol, factory in protocolsToFactories.iteritems():\n lfm[fromAddress, protocol].remove(\n (factory, serverDescription))\n factory.doStop()\n\n proto.notifyOnConnectionLost(shutdown)\n return listenResult\n\n if self.dispatcher is not None:\n gp = proto.transport.getPeer()\n udpAddress = (gp.host, gp.port)\n pubUDPDeferred = self._retrievePublicUDPPortNumber(udpAddress)\n else:\n pubUDPDeferred = defer.succeed(None)\n\n def _gotPubUDPPort(publicAddress):\n self._publicUDPAddress = publicAddress\n return proto.listen(fromAddress, protocolsToFactories.keys(),\n serverDescription).addCallback(startup)\n pubUDPDeferred.addCallback(_gotPubUDPPort)\n return pubUDPDeferred\n\n D.addCallback(_secured)\n return D"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef requestCertificateForAddress(self, fromAddress, sharedSecret):\n kp = KeyPair.generate()\n subject = DistinguishedName(commonName=str(fromAddress))\n reqobj = kp.requestObject(subject)\n # Create worthless, self-signed certificate for the moment, it will be\n # replaced later.\n # attemptAddress = q2q.Q2QAddress(fromAddress.domain,\n # fromAddress.resource + '+attempt')\n # fakeSubj = DistinguishedName(commonName=str(attemptAddress))\n fakereq = kp.requestObject(subject)\n ssigned = kp.signRequestObject(subject, fakereq, 1)\n certpair = PrivateCertificate.fromCertificateAndKeyPair\n fakecert = certpair(ssigned, kp)\n apc = self.certificateStorage.addPrivateCertificate\n\n 
gettingSecureConnection = self.getSecureConnection(\n fromAddress, fromAddress.domainAddress(), authorize=False,\n usePrivateCertificate=fakecert,\n )\n def gotSecureConnection(secured):\n return secured.callRemote(\n Sign,\n certificate_request=reqobj,\n password=sharedSecret)\n gettingSecureConnection.addCallback(gotSecureConnection)\n def gotSignResponse(signResponse):\n cert = signResponse['certificate']\n privcert = certpair(cert, kp)\n apc(str(fromAddress), privcert)\n return signResponse\n return gettingSecureConnection.addCallback(gotSignResponse)", "response": "Request a certificate for the given domain part of the given address and store it in my local certificate storage."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef mapListener(self, to, From, protocolName, protocolFactory,\n isClient=False):\n \"\"\"\n Returns 2-tuple of (expiryTime, listenerID)\n \"\"\"\n listenerID = self._nextConnectionID(From, to)\n call = reactor.callLater(120,\n self.unmapListener,\n listenerID)\n expires = datetime.datetime(*time.localtime(call.getTime())[:7])\n self.inboundConnections[listenerID] = (\n _ConnectionWaiter(\n From, to, protocolName, protocolFactory, isClient\n ),\n call\n )\n return expires, listenerID", "response": "Map a listener to a new connection."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nlook up a connection by its connection identifier.", "response": "def lookupListener(self, listenID):\n \"\"\"\n (internal)\n\n Retrieve a waiting connection by its connection identifier, passing in\n the transport to be used to connect the waiting protocol factory to.\n \"\"\"\n if listenID in self.inboundConnections:\n # Make the connection?\n cwait, call = self.inboundConnections.pop(listenID)\n # _ConnectionWaiter instance\n call.cancel()\n return cwait"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef 
getLocalFactories(self, From, to, protocolName):\n result = []\n x = self.localFactoriesMapping.get((to, protocolName), ())\n result.extend(x)\n y = self.protocolFactoryFactory(From, to, protocolName)\n result.extend(y)\n return result", "response": "Returns a list of 2 - tuples of protocolFactory description to handle\n this from to protocolName\n "} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nconnect a protocol factory from a resource@domain to a protocol factory.", "response": "def connectQ2Q(self, fromAddress, toAddress, protocolName, protocolFactory,\n usePrivateCertificate=None, fakeFromDomain=None,\n chooser=None):\n \"\"\"\n Connect a named protocol factory from a resource@domain to a\n resource@domain.\n\n This is analogous to something like connectTCP, in that it creates a\n connection-oriented transport for each connection, except instead of\n specifying your credentials with an application-level (username,\n password) and your endpoint with a framework-level (host, port), you\n specify both at once, in the form of your ID (user@my-domain), their ID\n (user@their-domain) and the desired protocol. This provides several\n useful features:\n\n - All connections are automatically authenticated via SSL\n certificates, although not authorized for any particular\n activities, based on their transport interface rather than having\n to have protocol logic to authenticate.\n\n - User-meaningful protocol nicknames are attached to\n implementations of protocol logic, rather than arbitrary\n numbering.\n\n - Endpoints can specify a variety of transport mechanisms\n transparently to the application: for example, you might be\n connecting to an authorized user-agent on the user's server or to\n the user directly using a NAT-circumvention handshake. 
All the\n application has to know is that it wants to establish a TCP-like\n connection.\n\n XXX Really, really should return an IConnector implementor for symmetry\n with other connection-oriented transport APIs, but currently does not.\n\n The 'resource' parameters are so named (rather than beginning with\n 'user', for example) because they are sometimes used to refer to\n abstract entities or roles, such as 'payments', or groups of users\n (communities) but generally the convention is to document them as\n individual users for simplicity's sake.\n\n The parameters are described as if Alice were trying\n to connect to Bob to transfer a file over HTTP.\n\n @param fromAddress: The address of the connecting user: in this case,\n Q2QAddress(\"divmod.com\", \"alice\")\n\n @param toAddress: The address of the user connected to: in this case,\n Q2QAddress(\"notdivmod.com\", \"bob\")\n\n @param protocolName: The name of the protocol, by convention observing\n similar names to http://www.iana.org/assignments/port-numbers when\n appropriate. In this case, 'http'.\n\n @param protocolFactory: An implementation of\n L{twisted.internet.interfaces.IProtocolFactory}\n\n @param usePrivateCertificate: Use a different private certificate for\n initiating the 'secure' call. Mostly for testing different invalid\n certificate attacks.\n\n @param fakeFromDomain: This domain name will be used for an argument to\n the 'connect' command, but NOT as an argument to the SECURE command.\n This is to test a particular kind of invalid cert attack.\n\n @param chooser: a function taking a list of connection-describing\n objects and returning another list. Those items in the remaining list\n will be attempted as connections and buildProtocol called on the client\n factory. 
May return a Deferred.\n\n @default chooser: C{lambda x: x and [x[0]]}\n\n @return:\n \"\"\"\n if chooser is None:\n chooser = lambda x: x and [x[0]]\n\n def onSecureConnection(protocol):\n if fakeFromDomain:\n connectFromAddress = Q2QAddress(\n fakeFromDomain,\n toAddress.resource\n )\n else:\n connectFromAddress = fromAddress\n\n return protocol.connect(connectFromAddress, toAddress,\n protocolName, protocolFactory,\n chooser)\n\n def onSecureConnectionFailure(reason):\n protocolFactory.clientConnectionFailed(None, reason)\n return reason\n\n return self.getSecureConnection(\n fromAddress, toAddress,\n port, usePrivateCertificate).addCallback(\n onSecureConnection).addErrback(onSecureConnectionFailure)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef getSecureConnection(self, fromAddress, toAddress, port=port,\n usePrivateCertificate=None,\n authorize=True):\n \"\"\"\n Establish or retrieve an already-established L{Q2Q}-protocol\n connection from C{fromAddress} to I{the q2q proxy provider} for\n C{toAddress}; i.e. the entity responsible for the I{domain part only}\n of C{toAddress}.\n\n The connection is \"from\" C{fromAddress} in the sense that it will use a\n certificate and private key associated with that address to\n authenticate itself.\n\n For example, if we want to connect from C{foo@bar.com} to\n C{baz@qux.com}, this will establish a connection to C{qux.com}.\n\n @param fromAddress: The address of the party represented by the local\n host.\n @type fromAddress: L{Q2QAddress}\n\n @param toAddress: The address of the party whose proxy we are trying to\n connect to. 
The domain part of this address is the DNS name to\n connect to, and also the source (in our local certificate store, or\n via a TOFU DNS lookup) of the certificate authority to use to\n verify the connection.\n @type toAddress: L{Q2QAddress}\n\n @param port: The TCP port number on which to make the outgoing\n connection.\n @type port: L{int}\n\n @param usePrivateCertificate:\n @param authorize: \n\n @return: A L{Deferred} firing with a connected L{Q2Q} where the peer is\n the I{domain part} of the given C{toAddress}.\n \"\"\"\n\n # Secure connections using users as clients will have to be established\n # using the 'secure' method differently than this does: we are ONLY\n # capable of connecting to other domains (supernodes)\n\n toDomain = toAddress.domainAddress()\n resolveme = reactor.resolve(str(toDomain))\n def cb(toIPAddress, authorize=authorize):\n GPS = self.certificateStorage.getPrivateCertificate\n if usePrivateCertificate:\n ourCert = usePrivateCertificate\n cacheFrom = fromAddress\n log.msg(\n 'Using fakie private cert:',\n fromAddress,\n ourCert,\n cacheFrom\n )\n elif fromAddress.domain == '':\n assert all( \n (fromAddress.resource == '',\n \"No domain means anonymous, %r\" % (fromAddress,))\n )\n \n # We are actually anonymous, whoops!\n authorize = False\n # We need to create our own certificate\n ourCert = KeyPair.generate().selfSignedCert(218374, CN='@')\n # Feel free to cache the anonymous certificate we just made\n cacheFrom = fromAddress\n log.msg(\"Using anonymous cert for anonymous user.\")\n else:\n try:\n # Are we in fact a domain, operating on behalf of a user?\n x = fromAddress.domainAddress()\n ourCert = GPS(str(x))\n cacheFrom = x\n log.msg(\n 'domain on behalf of user:',\n fromAddress,\n ourCert,\n cacheFrom\n )\n except KeyError:\n # Nope, guess not. 
Are we actually that user?\n try:\n x = fromAddress\n ourCert = GPS(str(x))\n cacheFrom = x\n log.msg(\n 'actual user:',\n fromAddress,\n ourCert,\n cacheFrom\n )\n except KeyError:\n # Hmm. We're not that user either. Are we trying to\n # pretend to be a user from a *different* domain, to\n # ourselves? (We've got to be a domain to \"make\n # believe\", since this is effectively a clustering\n # feature...)\n\n try:\n x = toDomain\n ourCert = GPS(str(x))\n cacheFrom = x\n log.msg(\n 'fakie domain cert:',\n fromAddress,\n ourCert,\n cacheFrom\n )\n except KeyError:\n raise VerifyError(\n \"We tried to secure a connection \"\n \"between %s and %s, \"\n \"but we don't have any certificates \"\n \"that could be used.\" % (fromAddress,\n toAddress))\n\n def connected(proto):\n certD = self.certificateStorage.getSelfSignedCertificate(\n str(toDomain))\n def nocert(failure):\n failure.trap(KeyError)\n identD = proto.callRemote(\n Identify, subject=toDomain\n ).addCallback(\n lambda x: x['certificate'])\n def storeit(certificate):\n return (\n self.certificateStorage.storeSelfSignedCertificate(\n str(toDomain),\n certificate\n ).addCallback(lambda x: certificate)\n )\n return identD.addCallback(storeit)\n certD.addErrback(nocert)\n def gotcert(foreignCA):\n secdef = proto.secure(cacheFrom, toDomain,\n ourCert, foreignCA,\n authorize=authorize)\n return secdef\n certD.addCallback(gotcert)\n return certD\n return self.secureConnectionCache.connectCached(\n endpoint.TCPEndpoint(toIPAddress, port),\n Q2QClientFactory(self),\n extraWork=connected,\n extraHash=(cacheFrom, toDomain, authorize)\n )\n return resolveme.addCallback(cb)", "response": "Establish or retrieve a connection from a given local node to a given remote node."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nattaches a remote entry to a local entry", "response": "def attach(self, remote_entry):\n \"\"\"Attach a remote entry to a local entry\"\"\"\n self.name = 
remote_entry.name\n self.sha = remote_entry.sha\n self.url = remote_entry.url\n self.author = remote_entry.author\n return self"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nparses a permission set name in the defined permissions.", "response": "def parse_permission_set(input):\n \"\"\"Lookup a permission set name in the defined permissions.\n\n Requires a Flask app context.\n\n \"\"\"\n\n # Priority goes to the user's parsers.\n if isinstance(input, basestring):\n for func in current_acl_manager.permission_set_parsers:\n res = func(input)\n if res is not None:\n input = res\n break\n\n if isinstance(input, basestring):\n try:\n return current_acl_manager.permission_sets[input]\n except KeyError:\n raise ValueError('unknown permission set %r' % input)\n\n return input"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef is_permission_in_set(perm, perm_set):\n\n if isinstance(perm_set, basestring):\n return perm == perm_set\n elif isinstance(perm_set, Container):\n return perm in perm_set\n elif isinstance(perm_set, Callable):\n return perm_set(perm)\n else:\n raise TypeError('permission set must be a string, container, or callable')", "response": "Test if a permission is in the given set."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nconverting the raw data to a readable universes tuple.", "response": "def convert_raw_data_to_universes(raw_data) -> tuple:\n \"\"\"\n converts the raw data to a readable universes tuple. The raw_data is scanned from index 0 and has to have\n 16-bit numbers with high byte first. 
The data is converted from the start to the beginning!\n :param raw_data: the raw data to convert\n :return: tuple full with 16-bit numbers\n \"\"\"\n if len(raw_data)%2 != 0:\n raise TypeError('The given data has not a length that is a multiple of 2!')\n rtrnList = []\n for i in range(0, len(raw_data), 2):\n rtrnList.append(two_bytes_to_int(raw_data[i], raw_data[i+1]))\n return tuple(rtrnList)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncreate a list with universe discovery packets based on the given data. It creates automatically enough packets for the given universes list. :param cid: the cid to use in all packets :param sourceName: the source name to use in all packets :param universes: the universes. Can be longer than 512, but has to be shorter than 256*512. The values in the list should be [1-63999] :return: a list full of universe discovery packets", "response": "def make_multiple_uni_disc_packets(cid: tuple, sourceName: str, universes: list) -> List['UniverseDiscoveryPacket']:\n \"\"\"\n Creates a list with universe discovery packets based on the given data. It creates automatically enough packets\n for the given universes list.\n :param cid: the cid to use in all packets\n :param sourceName: the source name to use in all packets\n :param universes: the universes. Can be longer than 512, but has to be shorter than 256*512.\n The values in the list should be [1-63999]\n :return: a list full of universe discovery packets\n \"\"\"\n tmpList = []\n if len(universes)%512 != 0:\n num_of_packets = int(len(universes)/512)+1\n else: # just get how long the list has to be. 
Just read and think about the if statement.\n # Should be self-explaining\n num_of_packets = int(len(universes)/512)\n universes.sort() # E1.31 wants that the send out universes are sorted\n for i in range(0, num_of_packets):\n if i == num_of_packets-1:\n tmpUniverses = universes[i * 512:len(universes)]\n # if we are here, then the for is in the last loop\n else:\n tmpUniverses = universes[i * 512:(i+1) * 512]\n # create new UniverseDiscoveryPacket and append it to the list. Page and lastPage are getting special values\n tmpList.append(UniverseDiscoveryPacket(cid=cid, sourceName=sourceName, universes=tmpUniverses,\n page=i, lastPage=num_of_packets-1))\n return tmpList"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\noverride the default pagination method to add max_count to the total value.", "response": "def paginate_queryset(self, queryset, request, view=None):\n \"\"\"\n adds `max_count` as a running tally of the largest table size. Used for calculating\n next/previous links later\n \"\"\"\n result = super(MultipleModelLimitOffsetPagination, self).paginate_queryset(queryset, request, view)\n\n try:\n if self.max_count < self.count:\n self.max_count = self.count\n except AttributeError:\n self.max_count = self.count\n\n try:\n self.total += self.count\n except AttributeError:\n self.total = self.count\n\n return result"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nformats the response data for the resource table.", "response": "def format_response(self, data):\n \"\"\"\n replaces the `count` (the last queryset count) with the running `max_count` variable,\n to ensure accurate link calculation\n \"\"\"\n self.count = self.max_count\n\n return OrderedDict([\n ('highest_count', self.max_count),\n ('overall_total', self.total),\n ('next', self.get_next_link()),\n ('previous', self.get_previous_link()),\n ('results', data)\n ])"} {"SOURCE": "codesearchnet", "instruction": "Can you implement 
a function in Python 3 that\nchecks that all items in a querylist have the required keys.", "response": "def check_query_data(self, query_data):\n \"\"\"\n All items in a `querylist` must at least have `queryset` key and a\n `serializer_class` key. Any querylist item lacking both those keys\n will raise a ValidationError\n \"\"\"\n for key in self.required_keys:\n if key not in query_data:\n raise ValidationError(\n 'All items in the {} querylist attribute should contain a '\n '`{}` key'.format(self.__class__.__name__, key)\n )"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef load_queryset(self, query_data, request, *args, **kwargs):\n queryset = query_data.get('queryset', [])\n\n if isinstance(queryset, QuerySet):\n # Ensure queryset is re-evaluated on each request.\n queryset = queryset.all()\n\n # run rest_framework filters\n queryset = self.filter_queryset(queryset)\n\n # run custom filters\n filter_fn = query_data.get('filter_fn', None)\n if filter_fn is not None:\n queryset = filter_fn(queryset, request, *args, **kwargs)\n\n page = self.paginate_queryset(queryset)\n self.is_paginated = page is not None\n\n return page if page is not None else queryset", "response": "Fetches the queryset and runs any necessary filtering and returns the first page that is returned."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn the empty results for the current object.", "response": "def get_empty_results(self):\n \"\"\"\n Because the base result type is different depending on the return structure\n (e.g. 
list for flat, dict for object), `get_empty_results` initializes the\n `results` variable to the proper type\n \"\"\"\n assert self.result_type is not None, (\n '{} must specify a `result_type` value or overwrite the '\n '`get_empty_results` method.'.format(self.__class__.__name__)\n )\n\n return self.result_type()"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef add_to_results(self, data, label, results):\n raise NotImplementedError(\n '{} must specify how to add data to the running results tally '\n 'by overriding the `add_to_results` method.'.format(\n self.__class__.__name__\n )\n )", "response": "Add data to the running results variable."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\noverride DRF's initial method to set the _sorting_field property from the corresponding property in the view.", "response": "def initial(self, request, *args, **kwargs):\n \"\"\"\n Overrides DRF's `initial` in order to set the `_sorting_field` from corresponding property in view.\n Protected property is required in order to support overriding of `sorting_field` via `@property`, we do this\n after original `initial` has been run in order to make sure that view has all its properties set up.\n \"\"\"\n super(FlatMultipleModelMixin, self).initial(request, *args, **kwargs)\n assert not (self.sorting_field and self.sorting_fields), \\\n '{} should either define ``sorting_field`` or ``sorting_fields`` property, not both.' \\\n .format(self.__class__.__name__)\n if self.sorting_field:\n warnings.warn(\n '``sorting_field`` property is pending its deprecation. 
Use ``sorting_fields`` instead.',\n DeprecationWarning\n )\n self.sorting_fields = [self.sorting_field]\n self._sorting_fields = self.sorting_fields"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef add_to_results(self, data, label, results):\n for datum in data:\n if label is not None:\n datum.update({'type': label})\n\n results.append(datum)\n\n return results", "response": "Adds the label to the results as needed then appends the data\n to the running results tally\n "} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nformat the results for the current application.", "response": "def format_results(self, results, request):\n \"\"\"\n Prepares sorting parameters, and sorts results, if (and as) necessary\n \"\"\"\n self.prepare_sorting_fields()\n if self._sorting_fields:\n results = self.sort_results(results)\n\n if request.accepted_renderer.format == 'html':\n # Makes the results available to the template context by transforming to a dict\n results = {'data': results}\n\n return results"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _sort_by(self, datum, param, path=None):\n if not path:\n path = []\n try:\n if '__' in param:\n root, new_param = param.split('__')\n path.append(root)\n return self._sort_by(datum[root], param=new_param, path=path)\n else:\n path.append(param)\n\n data = datum[param]\n if isinstance(data, list):\n raise ValidationError(self._list_attribute_error.format(param))\n return data\n except TypeError:\n raise ValidationError(self._list_attribute_error.format('.'.join(path)))\n except KeyError:\n raise ValidationError('Invalid sorting field: {}'.format('.'.join(path)))", "response": "Retrieves the value to sort by from a datum, following `__`-separated nested keys; raises ValidationError for invalid or list-valued fields."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\npreparing the sorting fields for the related object 
based on the request query parameters and sorting options.", "response": "def prepare_sorting_fields(self):\n \"\"\"\n Determine sorting direction and sorting field based on request query parameters and sorting options\n of self\n \"\"\"\n if self.sorting_parameter_name in self.request.query_params:\n # Extract sorting parameter from query string\n self._sorting_fields = [\n _.strip() for _ in self.request.query_params.get(self.sorting_parameter_name).split(',')\n ]\n\n if self._sorting_fields:\n # Create a list of sorting parameters. Each parameter is a tuple: (field:str, descending:bool)\n self._sorting_fields = [\n (self.sorting_fields_map.get(field.lstrip('-'), field.lstrip('-')), field[0] == '-')\n for field in self._sorting_fields\n ]"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_label(self, queryset, query_data):\n if query_data.get('label', False):\n return query_data['label']\n\n try:\n return queryset.model.__name__\n except AttributeError:\n return query_data['queryset'].model.__name__", "response": "Gets option label for each datum. 
Can be used for type identification\n "} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef build_extension(self, ext):\n try:\n build_ext.build_ext.build_extension(self, ext)\n except (\n distutils.errors.CCompilerError,\n distutils.errors.DistutilsExecError,\n distutils.errors.DistutilsPlatformError,\n ValueError,\n ):\n raise BuildFailed()", "response": "build_extension.\n\n :raises BuildFailed: cythonize impossible"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngetting the remote environment's env so we can explicitly add the path without wiping out everything", "response": "def extend_env(conn, arguments):\n \"\"\"\n get the remote environment's env so we can explicitly add the path without\n wiping out everything\n \"\"\"\n # retrieve the remote environment variables for the host\n try:\n result = conn.gateway.remote_exec(\"import os; channel.send(os.environ.copy())\")\n env = result.receive()\n except Exception:\n conn.logger.exception('failed to retrieve the remote environment variables')\n env = {}\n\n # get the $PATH and extend it (do not overwrite); note the ':' separator\n path = env.get('PATH', '')\n env['PATH'] = path + ':/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin'\n arguments['env'] = env\n if arguments.get('extend_env'):\n for key, value in arguments['extend_env'].items():\n arguments['env'][key] = value\n arguments.pop('extend_env')\n return arguments"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef run(conn, command, exit=False, timeout=None, **kw):\n stop_on_error = kw.pop('stop_on_error', True)\n if not kw.get('env'):\n # get the remote environment's env so we can explicitly add\n # the path without wiping out everything\n kw = extend_env(conn, kw)\n\n command = conn.cmd(command)\n\n timeout = timeout or conn.global_timeout\n conn.logger.info('Running command: %s' % ' '.join(admin_command(conn.sudo, 
command)))\n result = conn.execute(_remote_run, cmd=command, **kw)\n try:\n reporting(conn, result, timeout)\n except Exception:\n remote_trace = traceback.format_exc()\n remote_error = RemoteError(remote_trace)\n if remote_error.exception_name == 'RuntimeError':\n conn.logger.error(remote_error.exception_line)\n else:\n for tb_line in remote_trace.split('\\n'):\n conn.logger.error(tb_line)\n if stop_on_error:\n raise RuntimeError(\n 'Failed to execute command: %s' % ' '.join(command)\n )\n if exit:\n conn.exit()", "response": "A real-time-logging implementation of a remote subprocess.Popen call where the command is executed on the remote end."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nexecuting a remote command and return the output of the command.", "response": "def check(conn, command, exit=False, timeout=None, **kw):\n \"\"\"\n Execute a remote command with ``subprocess.Popen`` but report back the\n results in a tuple with three items: stdout, stderr, and exit status.\n\n This helper function *does not* provide any logging as it is the caller's\n responsibility to do so.\n \"\"\"\n command = conn.cmd(command)\n\n stop_on_error = kw.pop('stop_on_error', True)\n timeout = timeout or conn.global_timeout\n if not kw.get('env'):\n # get the remote environment's env so we can explicitly add\n # the path without wiping out everything\n kw = extend_env(conn, kw)\n\n conn.logger.info('Running command: %s' % ' '.join(admin_command(conn.sudo, command)))\n result = conn.execute(_remote_check, cmd=command, **kw)\n response = None\n try:\n response = result.receive(timeout)\n except Exception as err:\n # the things we need to do here :(\n # because execnet magic, we cannot catch this as\n # `except TimeoutError`\n if err.__class__.__name__ == 'TimeoutError':\n msg = 'No data was received after %s seconds, disconnecting...' 
% timeout\n conn.logger.warning(msg)\n # there is no stdout, stderr, or exit code but make the exit code\n # an error condition (non-zero) regardless\n return [], [], -1\n else:\n remote_trace = traceback.format_exc()\n remote_error = RemoteError(remote_trace)\n if remote_error.exception_name == 'RuntimeError':\n conn.logger.error(remote_error.exception_line)\n else:\n for tb_line in remote_trace.split('\\n'):\n conn.logger.error(tb_line)\n if stop_on_error:\n raise RuntimeError(\n 'Failed to execute command: %s' % ' '.join(command)\n )\n if exit:\n conn.exit()\n return response"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nmonitor a local directory and a remote FlashAir directory and generate sets of new files to be uploaded or downloaded. Sets to upload are generated in a tuple like (Direction.up, {...}), while sets to download are generated in a tuple like (Direction.down, {...}). The generator yields before each upload or download actually takes place.", "response": "def up_down_by_arrival(*filters, local_dir=\".\",\n remote_dir=DEFAULT_REMOTE_DIR):\n \"\"\"Monitors a local directory and a remote FlashAir directory and\n generates sets of new files to be uploaded or downloaded.\n Sets to upload are generated in a tuple\n like (Direction.up, {...}), while sets to download\n are generated in a tuple like (Direction.down, {...}). 
The generator yields\n before each upload or download actually takes place.\"\"\"\n local_monitor = watch_local_files(*filters, local_dir=local_dir)\n remote_monitor = watch_remote_files(*filters, remote_dir=remote_dir)\n _, lfile_set = next(local_monitor)\n _, rfile_set = next(remote_monitor)\n _notify_sync_ready(len(lfile_set), local_dir, remote_dir)\n _notify_sync_ready(len(rfile_set), remote_dir, local_dir)\n processed = set()\n for new_local, new_remote in zip(local_monitor, remote_monitor):\n new_local, local_set = new_local\n local_arrivals = {f for f in new_local if f.filename not in processed}\n yield Direction.up, local_arrivals\n if local_arrivals:\n processed.update(f.filename for f in local_arrivals)\n _notify_sync(Direction.up, local_arrivals)\n up_by_files(local_arrivals, remote_dir)\n _notify_sync_ready(len(local_set), local_dir, remote_dir)\n new_remote, remote_set = new_remote\n remote_arrivals = {f for f in new_remote if f.filename not in processed}\n yield Direction.down, remote_arrivals\n if remote_arrivals:\n processed.update(f.filename for f in remote_arrivals)\n _notify_sync(Direction.down, remote_arrivals)\n down_by_files(remote_arrivals, local_dir)\n _notify_sync_ready(len(remote_set), remote_dir, local_dir)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nmonitoring a local directory and generating sets of new files to be uploaded to FlashAir.", "response": "def up_by_arrival(*filters, local_dir=\".\", remote_dir=DEFAULT_REMOTE_DIR):\n \"\"\"Monitors a local directory and\n generates sets of new files to be uploaded to FlashAir.\n Sets to upload are generated in a tuple like (Direction.up, {...}).\n The generator yields before each upload actually takes place.\"\"\"\n local_monitor = watch_local_files(*filters, local_dir=local_dir)\n _, file_set = next(local_monitor)\n _notify_sync_ready(len(file_set), local_dir, remote_dir)\n for new_arrivals, file_set in 
local_monitor:\n yield Direction.up, new_arrivals # where new_arrivals is possibly empty\n if new_arrivals:\n _notify_sync(Direction.up, new_arrivals)\n up_by_files(new_arrivals, remote_dir)\n _notify_sync_ready(len(file_set), local_dir, remote_dir)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nmonitors a remote FlashAir directory and generates sets of new files to be downloaded from FlashAir. Sets to download are generated in a tuple like (Direction.down, {...}). The generator yields AFTER each download actually takes place.", "response": "def down_by_arrival(*filters, local_dir=\".\", remote_dir=DEFAULT_REMOTE_DIR):\n \"\"\"Monitors a remote FlashAir directory and generates sets of\n new files to be downloaded from FlashAir.\n Sets to download are generated in a tuple like (Direction.down, {...}).\n The generator yields AFTER each download actually takes place.\"\"\"\n remote_monitor = watch_remote_files(*filters, remote_dir=remote_dir)\n _, file_set = next(remote_monitor)\n _notify_sync_ready(len(file_set), remote_dir, local_dir)\n for new_arrivals, file_set in remote_monitor:\n if new_arrivals:\n _notify_sync(Direction.down, new_arrivals)\n down_by_files(new_arrivals, local_dir)\n _notify_sync_ready(len(file_set), remote_dir, local_dir)\n yield Direction.down, new_arrivals"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _write_file_safely(local_path, fileinfo, response):\n try:\n _write_file(local_path, fileinfo, response)\n except BaseException as e:\n logger.warning(\"{} interrupted writing {} -- \"\n \"cleaning up partial file\".format(\n e.__class__.__name__, local_path))\n os.remove(local_path)\n raise e", "response": "Attempts to stream a remote file into a local file object and removes the local file if it's interrupted by any error"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nattempt to upload a local file to 
FlashAir and attempts to remove the remote file if interrupted by any error", "response": "def _upload_file_safely(fileinfo, remote_dir):\n \"\"\"attempts to upload a local file to FlashAir,\n tries to remove the remote file if interrupted by any error\"\"\"\n try:\n upload.upload_file(fileinfo.path, remote_dir=remote_dir)\n except BaseException as e:\n logger.warning(\"{} interrupted writing {} -- \"\n \"cleaning up partial remote file\".format(\n e.__class__.__name__, fileinfo.path))\n upload.delete_file(fileinfo.path)\n raise e"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef rsync(hosts, source, destination, logger=None, sudo=False):\n logger = logger or basic_remote_logger()\n sync = _RSync(source, logger=logger)\n\n # setup_targets\n if not isinstance(hosts, list):\n hosts = [hosts]\n\n for host in hosts:\n conn = Connection(\n host,\n logger,\n sudo,\n )\n sync.add_target(conn.gateway, destination)\n\n return sync.send()", "response": "Sends a rsync request to the specified hosts."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nmap a single value to a range of dead and hot zones.", "response": "def map_single_axis(low, high, dead_zone, hot_zone, value):\n \"\"\"\n Apply dead and hot zones before mapping a value to a range. The dead and hot zones are both expressed as the\n proportion of the axis range which should be regarded as 0.0 or 1.0 (or -1.0 depending on cardinality) respectively,\n so for example setting dead zone to 0.2 means the first 20% of the range of the axis will be treated as if it's the\n low value, and setting the hot zone to 0.4 means the last 40% of the range will be treated as if it's the high\n value. Note that as with map_into_range, low is not necessarily numerically lower than high, it instead expresses\n a low value signal as opposed to a high value one (which could include a high negative value). 
Note that bad things\n happen if dead_zone + hot_zone == 1.0, so don't do that. This is used by the map_dual_axis call, but can also be\n used by itself to handle single axes like triggers where the overall range varies from 0.0 to 1.0 rather than -1.0\n to 1.0 as a regular joystick axis would.\n\n :param low:\n The value corresponding to no signal\n :param high:\n The value corresponding to a full signal\n :param dead_zone:\n The proportion of the range of motion away from the no-signal end which should be treated as equivalent to no\n signal and return 0.0\n :param hot_zone:\n The proportion of the range of motion away from the high signal end which should be treated as equivalent to a\n full strength input.\n :param value:\n The raw value to map\n :return:\n The scaled and clipped value, taking into account dead and hot zone boundaries, ranging from 0.0 to either 1.0\n or -1.0 depending on whether low or high are numerically larger (low < high means max value is 1.0, high < low\n means it's -1.0).\n \"\"\"\n input_range = high - low\n corrected_low = low + input_range * dead_zone\n corrected_high = high - input_range * hot_zone\n return map_into_range(corrected_low, corrected_high, value)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nmap a value to a range from - 1. 0 to 1. 0.", "response": "def map_dual_axis(low, high, centre, dead_zone, hot_zone, value):\n \"\"\"\n Map an axis with a central dead zone and hot zones at each end to a range from -1.0 to 1.0. This in effect uses two\n calls to map_single_axis, choosing whether to use centre and low, or centre and high as the low and high values in\n that call based on which side of the centre value the input value falls. 
This is the call that handles mapping of\n values on regular joysticks where there's a centre point to which the physical control returns when no input is\n being made.\n\n :param low:\n The raw value corresponding to the strongest negative input (stick far left / down).\n :param high:\n The raw value corresponding to the strongest positive input (stick far right / up).\n :param centre:\n The raw value corresponding to the resting position of the axis when no user interaction is happening.\n :param dead_zone:\n The proportion of each (positive and negative) part of the motion away from the centre which should result in\n an output of 0.0\n :param hot_zone:\n The proportion of each (positive and negative) part of the motion away from each extreme end of the range which\n should result in 1.0 or -1.0 being returned (depending on whether we're on the high or low side of the centre\n point)\n :param value:\n The raw value to map\n :return:\n The filtered and clamped value, from -1.0 at low to 1.0 at high, with a centre as specified mapping to 0.0\n \"\"\"\n if value <= centre:\n return map_single_axis(centre, low, dead_zone, hot_zone, value)\n else:\n return map_single_axis(centre, high, dead_zone, hot_zone, value)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nregistering a function which will be called when a button is pressed.", "response": "def register_button_handler(self, button_handler, button_sname: str):\n \"\"\"\n Register a handler function which will be called when a button is pressed\n\n :param button_handler:\n A function which will be called when any of the specified buttons are pressed. 
The\n function is called with the Button that was pressed as the sole argument.\n :param button_sname:\n The sname of the button which should trigger the handler function\n :return:\n A no-arg function which can be used to remove this registration\n \"\"\"\n return self.buttons.register_button_handler(button_handler, self.buttons[button_sname])"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef axis_updated(self, event: InputEvent, prefix=None):\n if prefix is not None:\n axis = self.axes_by_code.get(prefix + str(event.code))\n else:\n axis = self.axes_by_code.get(event.code)\n if axis is not None:\n axis.receive_device_value(event.value)\n else:\n logger.debug('Unknown axis code {} ({}), value {}'.format(event.code, prefix, event.value))", "response": "Called when an absolute axis event is received from evdev."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef set_axis_centres(self, *args):\n for axis in self.axes_by_code.values():\n if isinstance(axis, CentredAxis):\n axis.centre = axis.value", "response": "Sets the centre points for each axis to the current value for that axis."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _input_to_raw_value(self, value: int) -> float:\n return (float(value) - self.min_raw_value) / self.max_raw_value", "response": "Convert the input value from evdev to a 0. 0 to 1. 0 range."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns the value of the key entry in the specified axis.", "response": "def value(self) -> float:\n \"\"\"\n Get a centre-compensated, scaled, value for the axis, taking any dead-zone into account. 
The value will\n scale from 0.0 at the edge of the dead-zone to 1.0 (positive) at the extreme position of\n the trigger or the edge of the hot zone, if defined as other than 1.0.\n\n :return:\n a float value, 0.0 when not pressed or within the dead zone, to 1.0 when fully pressed or in the hot zone\n \"\"\"\n return map_single_axis(self.min, self.max, self.dead_zone, self.hot_zone, self.__value)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nsets a new value from the hardware joystick implementation class when parsing the event queue.", "response": "def receive_device_value(self, raw_value: int):\n \"\"\"\n Set a new value, called from within the joystick implementation class when parsing the event queue.\n\n :param raw_value: the raw value from the joystick hardware\n\n :internal:\n \"\"\"\n new_value = self._input_to_raw_value(raw_value)\n if self.button is not None:\n if new_value > (self.button_trigger_value + 0.05) > self.__value:\n self.buttons.button_pressed(self.button.key_code)\n elif new_value < (self.button_trigger_value - 0.05) < self.__value:\n self.buttons.button_released(self.button.key_code)\n self.__value = new_value\n if new_value > self.max:\n self.max = new_value\n elif new_value < self.min:\n self.min = new_value"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns the value of the cluster entry.", "response": "def value(self) -> float:\n \"\"\"\n Get a centre-compensated, scaled, value for the axis, taking any dead-zone into account. The value will\n scale from 0.0 at the edge of the dead-zone to 1.0 (positive) or -1.0 (negative) at the extreme position of\n the controller or the edge of the hot zone, if defined as other than 1.0. The axis will auto-calibrate for\n maximum value, initially it will behave as if the highest possible value from the hardware is 0.9 in each\n direction, and will expand this as higher values are observed. 
This is scaled by this function and should\n always return 1.0 or -1.0 at the extreme ends of the axis.\n\n :return: a float value, negative to the left or down and ranging from -1.0 to 1.0\n \"\"\"\n mapped_value = map_dual_axis(self.min, self.max, self.centre, self.dead_zone, self.hot_zone, self.__value)\n if self.invert:\n return -mapped_value\n else:\n return mapped_value"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef receive_device_value(self, raw_value: int):\n\n new_value = self._input_to_raw_value(raw_value)\n self.__value = new_value\n if new_value > self.max:\n self.max = new_value\n elif new_value < self.min:\n self.min = new_value", "response": "Set a new value from the hardware joystick implementation class when parsing the event queue."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncall by the controller classes to update the state of this button manager when a button is pressed.", "response": "def button_pressed(self, key_code, prefix=None):\n \"\"\"\n Called from the controller classes to update the state of this button manager when a button is pressed.\n\n :internal:\n\n :param key_code:\n The code specified when populating Button instances\n :param prefix:\n Applied to key code if present\n \"\"\"\n if prefix is not None:\n state = self.buttons_by_code.get(prefix + str(key_code))\n else:\n state = self.buttons_by_code.get(key_code)\n if state is not None:\n for handler in state.button_handlers:\n handler(state.button)\n state.is_pressed = True\n state.last_pressed = time()\n state.was_pressed_since_last_check = True\n else:\n logger.debug('Unknown button code {} ({})'.format(key_code, prefix))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef button_released(self, key_code, prefix=None):\n if prefix is not None:\n state = self.buttons_by_code.get(prefix + str(key_code))\n else:\n state = 
self.buttons_by_code.get(key_code)\n if state is not None:\n state.is_pressed = False\n state.last_pressed = None", "response": "Called by the controller classes to update the state of this button manager when a button is released."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef check_presses(self):\n pressed = []\n for button, state in self.buttons.items():\n if state.was_pressed_since_last_check:\n pressed.append(button)\n state.was_pressed_since_last_check = False\n self.__presses = ButtonPresses(pressed)\n return self.__presses", "response": "Returns the set of Buttons which have been pressed since this call was last made clearing it as we do."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ndetermining whether a button is currently held by standard name.", "response": "def held(self, sname):\n \"\"\"\n Determines whether a button is currently held, identifying it by standard name\n\n :param sname:\n The standard name of the button\n :return:\n None if the button is not held down, or is not available, otherwise the number of seconds as a floating\n point value since it was pressed\n \"\"\"\n state = self.buttons_by_sname.get(sname)\n if state is not None:\n if state.is_pressed and state.last_pressed is not None:\n return time() - state.last_pressed\n return None"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef register_button_handler(self, button_handler, buttons):\n if not isinstance(buttons, list):\n buttons = [buttons]\n for button in buttons:\n state = self.buttons.get(button)\n if state is not None:\n state.button_handlers.append(button_handler)\n\n def remove():\n for button_to_remove in buttons:\n state_to_remove = self.buttons.get(button_to_remove)\n if state_to_remove is not None:\n state_to_remove.button_handlers.remove(button_handler)\n\n return remove", "response": "Register a handler function 
which will be called when a button is pressed."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _func(self) -> typing.Optional[typing.Callable[..., typing.Union[\"typing.Awaitable[typing.Any]\", typing.Any]]]:\n return self.__func", "response": "Get wrapped function.\n\n :rtype: typing.Optional[typing.Callable[..., typing.Union[typing.Awaitable, typing.Any]]]"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _get_function_wrapper(\n self, func: typing.Callable[..., typing.Union[\"typing.Awaitable[typing.Any]\", typing.Any]]\n ) -> typing.Callable[..., typing.Any]:\n \"\"\"Here should be constructed and returned real decorator.\n\n :param func: Wrapped function\n :type func: typing.Callable[..., typing.Union[typing.Awaitable, typing.Any]]\n :rtype: typing.Callable\n \"\"\"\n raise NotImplementedError()", "response": "This is a function decorator that returns a function that can be used to wrap the function."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _await_if_required(\n target: typing.Callable[..., typing.Union[\"typing.Awaitable[typing.Any]\", typing.Any]]\n ) -> typing.Callable[..., typing.Any]:\n \"\"\"Await result if coroutine was returned.\"\"\"\n\n @functools.wraps(target)\n def wrapper(*args, **kwargs): # type: (typing.Any, typing.Any) -> typing.Any\n \"\"\"Decorator/wrapper.\"\"\"\n result = target(*args, **kwargs)\n if asyncio.iscoroutine(result):\n loop = asyncio.new_event_loop()\n result = loop.run_until_complete(result)\n loop.close()\n return result\n\n return wrapper", "response": "Await result if coroutine was returned."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nconstructs a unique name for the input device.", "response": "def unique_name(device: InputDevice) -> str:\n \"\"\"\n Construct a unique name for the device 
based on, in order if available, the uniq ID, the phys ID and\n finally a concatenation of vendor, product, version and filename.\n\n :param device:\n An InputDevice instance to query\n :return:\n A string containing as unique as possible a name for the physical entity represented by the device\n \"\"\"\n if device.uniq:\n return device.uniq\n elif device.phys:\n return device.phys.split('/')[0]\n return '{}-{}-{}-{}'.format(device.info.vendor, device.info.product, device.info.version, device.path)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nfinding a sequence of controllers which satisfy the supplied requirements.", "response": "def find_matching_controllers(*requirements, **kwargs) -> [ControllerDiscovery]:\n \"\"\"\n Find a sequence of controllers which match the supplied requirements, or raise an error if no such controllers\n exist.\n\n :param requirements:\n Zero or more ControllerRequirement instances defining the requirements for controllers. If no item is passed it\n will be treated as a single requirement with no filters applied and will therefore match the first controller\n found.\n :return:\n A sequence of the same length as the supplied requirements array containing ControllerDiscovery instances which\n match the requirements supplied.\n :raises:\n ControllerNotFoundError if no appropriately matching controllers can be located\n \"\"\"\n requirements = list(requirements)\n if requirements is None or len(requirements) == 0:\n requirements = [ControllerRequirement()]\n\n def pop_controller(r: ControllerRequirement, discoveries: [ControllerDiscovery]) -> ControllerDiscovery:\n \"\"\"\n Find a single controller matching the supplied requirement from a list of ControllerDiscovery instances\n\n :param r:\n The ControllerRequirement to match\n :param discoveries:\n The [ControllerDiscovery] to search\n :return:\n A matching ControllerDiscovery. 
Modifies the supplied list of discoveries, removing the found item.\n :raises:\n ControllerNotFoundError if no matching controller can be found\n \"\"\"\n for index, d in enumerate(discoveries):\n if r.accept(d):\n return discoveries.pop(index)\n raise ControllerNotFoundError()\n\n all_controllers = find_all_controllers(**kwargs)\n\n try:\n return list(pop_controller(r, all_controllers) for r in requirements)\n except ControllerNotFoundError as exception:\n logger.info('Unable to satisfy controller requirements' +\n ', required {}, found {}'.format(requirements, find_all_controllers(**kwargs)))\n raise exception"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nfinds all controllers attached to this host and populate the registry with the controllers that are registered with the given kwargs.", "response": "def find_all_controllers(**kwargs) -> [ControllerDiscovery]:\n \"\"\"\n :return:\n A list of :class:`~approxeng.input.controllers.ControllerDiscovery` instances corresponding to controllers\n attached to this host, ordered by the ordering on ControllerDiscovery. Any controllers found will be\n constructed with kwargs passed to their constructor function, particularly useful for dead and hot zone\n parameters.\n \"\"\"\n\n def get_controller_classes() -> [{}]:\n \"\"\"\n Scans for subclasses of :class:`~approxeng.input.Controller` and reads out data from their\n :meth:`~approxeng.input.Controller.registrations_ids` method. 
This should return a list of\n tuples of `(vendor_id, product_id)` which are then used along with the subclass itself to\n populate a registry of known subclasses.\n\n :return:\n A generator that produces known subclasses and their registration information\n \"\"\"\n for controller_class in Controller.__subclasses__():\n for vendor_id, product_id in controller_class.registration_ids():\n yield {'constructor': controller_class,\n 'vendor_id': vendor_id,\n 'product_id': product_id}\n\n id_to_constructor = {'{}-{}'.format(c['vendor_id'], c['product_id']): c['constructor'] for c in\n get_controller_classes()}\n\n def controller_constructor(d: InputDevice):\n id = '{}-{}'.format(d.info.vendor, d.info.product)\n if id in id_to_constructor:\n return id_to_constructor[id]\n return None\n\n all_devices = list(InputDevice(path) for path in list_devices())\n\n devices_by_name = {name: list(e for e in all_devices if unique_name(e) == name) for name in\n set(unique_name(e) for e in all_devices if\n controller_constructor(e) is not None)}\n\n controllers = sorted(\n ControllerDiscovery(controller=controller_constructor(devices[0])(**kwargs), devices=devices, name=name) for\n name, devices in devices_by_name.items())\n\n return controllers"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef print_devices():\n\n def device_verbose_info(device: InputDevice) -> {}:\n \"\"\"\n Gather and format as much info as possible about the supplied InputDevice. 
Used mostly for debugging at this point.\n\n :param device:\n An InputDevice to examine\n :return:\n A dict containing as much information as possible about the input device.\n \"\"\"\n\n def axis_name(axis_code):\n try:\n return ecodes.ABS[axis_code]\n except KeyError:\n return 'EXTENDED_CODE_{}'.format(axis_code)\n\n def rel_axis_name(axis_code):\n try:\n return ecodes.REL[axis_code]\n except KeyError:\n return 'EXTENDED_CODE_{}'.format(axis_code)\n\n axes = None\n if has_abs_axes(device):\n axes = {\n axis_name(axis_code): {'code': axis_code, 'min': axis_info.min, 'max': axis_info.max,\n 'fuzz': axis_info.fuzz,\n 'flat': axis_info.flat, 'res': axis_info.resolution} for\n axis_code, axis_info in device.capabilities().get(3)}\n\n rel_axes = None\n if has_rel_axes(device):\n print(device.capabilities().get(2))\n rel_axes = {\n rel_axis_name(axis_code): {'code': axis_code} for\n axis_code in device.capabilities().get(2)}\n\n buttons = None\n if has_buttons(device):\n buttons = {code: names for (names, code) in\n dict(util.resolve_ecodes_dict({1: device.capabilities().get(1)})).get(('EV_KEY', 1))}\n\n return {'fn': device.fn, 'path': device.path, 'name': device.name, 'phys': device.phys, 'uniq': device.uniq,\n 'vendor': device.info.vendor, 'product': device.info.product, 'version': device.info.version,\n 'bus': device.info.bustype, 'axes': axes, 'rel_axes': rel_axes, 'buttons': buttons,\n 'unique_name': unique_name(device)}\n\n def has_abs_axes(device):\n return device.capabilities().get(3) is not None\n\n def has_rel_axes(device):\n return device.capabilities().get(2) is not None\n\n def has_buttons(device):\n return device.capabilities().get(1) is not None\n\n _check_import()\n for d in [InputDevice(fn) for fn in list_devices()]:\n if has_abs_axes(d) or has_rel_axes(d):\n pp = pprint.PrettyPrinter(indent=2, width=100)\n pp.pprint(device_verbose_info(d))", "response": "Simple test function which prints out all devices found by evdev\n "} {"SOURCE": "codesearchnet", 
"instruction": "Here you have a function in Python 3, explain what it does\ndef accept(self, discovery: ControllerDiscovery):\n if self.require_class is not None and not isinstance(discovery.controller, self.require_class):\n return False\n if self.snames is not None:\n all_controls = discovery.controller.buttons.names + discovery.controller.axes.names\n for sname in self.snames:\n if sname not in all_controls:\n return False\n return True", "response": "Returns True if the supplied ControllerDiscovery matches this requirement False otherwise."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef config(param_map, mastercode=DEFAULT_MASTERCODE):\n pmap = {Config.mastercode: mastercode}\n pmap.update(param_map)\n processed_params = dict(_process_params(pmap))\n return processed_params", "response": "Takes a dictionary of Config. key and Config. value and returns a dictionary of processed keys and values to be used in the FlashAir s config. cz"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef post(param_map, url=URL):\n prepped_request = _prep_post(url=url, **param_map)\n return cgi.send(prepped_request)", "response": "Posts a param_map created with config to\n the FlashAir config. cgi entrypoint"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _validate_timeout(seconds: float):\n val = int(seconds * 1000)\n assert 60000 <= val <= 4294967294, \"Bad value: {}\".format(val)\n return val", "response": "Validates that the given time is within the allowed time range."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nparsing a date string into a tuple of date and time values.", "response": "def parse_datetime(datetime_input):\n \"\"\"The arrow library is sadly not good enough to parse\n certain date strings. 
It even gives unexpected results for partial\n date strings such as '2015-01' or just '2015', which I think\n should be seen as 'the first moment of 2015'.\n This function should overcome those limitations.\"\"\"\n date_els, time_els = _split_datetime(datetime_input)\n date_vals = _parse_date(date_els)\n time_vals = _parse_time(time_els)\n vals = tuple(date_vals) + tuple(time_vals)\n return arrow.get(*vals)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef set_led(self, led_number, led_value):\n if not 1 <= led_number <= 4:\n return\n write_led_value(hw_id=self.device_unique_name, led_name='sony{}'.format(led_number), value=led_value)", "response": "Set the LED on the DS3 controller."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef mixer(yaw, throttle, max_power=100):\n left = throttle + yaw\n right = throttle - yaw\n scale = float(max_power) / max(1, abs(left), abs(right))\n return int(left * scale), int(right * scale)", "response": "Mix a pair of joystick axes returning a pair of wheel speeds."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn the command to be executed by the admin", "response": "def admin_command(sudo, command):\n \"\"\"\n If sudo is needed, make sure the command is prepended\n correctly, otherwise return the command as it came.\n\n :param sudo: A boolean representing the intention of having a sudo command\n (or not)\n :param command: A list of the actual command to execute with Popen.\n \"\"\"\n if sudo:\n if not isinstance(command, list):\n command = [command]\n return ['sudo'] + [cmd for cmd in command]\n return command"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nscan the system for all LEDs and return a dict of name to filename.", "response": "def scan_system():\n \"\"\"\n Scans /sys/class/leds looking for entries, then examining their .../device/uevent 
file to obtain unique hardware\n IDs corresponding to the associated hardware. This then allows us to associate InputDevice based controllers with\n sets of zero or more LEDs. The result is a dict from hardware address to a dict of name to filename, where the name\n is the name of the LED and the filename is the brightness control to which writing a value changes the LED state.\n\n At the same time, scans /sys/class/power_supply looking for battery entries and analysing them in the same way.\n\n Hardware IDs are, in order of preference, the HID_UNIQ address (corresponding to the physical MAC of an attached\n bluetooth or similar HID device), or the PHYS corresponding to other devices. This is the same logic as used in the\n evdev based scanning to group together input nodes for composite controllers (i.e. ones with motion sensors as well\n as physical axes). It is intended to produce the same results, so the LEDs for e.g. a PS4 controller will be keyed\n on the same physical address as that returned by :func:`approxeng.input.controllers.unique_name` for all the\n InputDevice instances associated with a given controller.\n\n :return:\n A dict containing available LEDs, keyed on physical device ID\n \"\"\"\n\n def find_device_hardware_id(uevent_file_path):\n hid_uniq = None\n phys = None\n for line in open(uevent_file_path, 'r').read().split('\\n'):\n parts = line.split('=')\n if len(parts) == 2:\n name, value = parts\n value = value.replace('\"', '')\n if name == 'HID_UNIQ' and value:\n hid_uniq = value\n elif name == 'PHYS' and value:\n phys = value.split('/')[0]\n if hid_uniq:\n return hid_uniq\n elif phys:\n return phys\n return None\n\n leds = {}\n for sub in ['/sys/class/leds/' + sub_dir for sub_dir in listdir('/sys/class/leds')]:\n led_name = sub.split(':')[-1]\n write_path = sub + '/brightness'\n device_id = find_device_hardware_id(sub + '/device/uevent')\n if device_id:\n if device_id not in leds:\n leds[device_id] = {}\n leds[device_id][led_name] = 
write_path\n\n power = {}\n for sub in ['/sys/class/power_supply/' + sub_dir for sub_dir in listdir('/sys/class/power_supply')]:\n read_path = sub + '/capacity'\n device_id = find_device_hardware_id(sub + '/device/uevent')\n if device_id:\n power[device_id] = read_path\n\n return {'leds': leds,\n 'power': power}"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef bind_controllers(*discoveries, print_events=False):\n\n discoveries = list(discoveries)\n\n class SelectThread(Thread):\n def __init__(self):\n Thread.__init__(self, name='evdev select thread')\n self.daemon = True\n self.running = True\n\n self.device_to_controller_discovery = {}\n for discovery in discoveries:\n for d in discovery.devices:\n self.device_to_controller_discovery[d.fn] = discovery\n self.all_devices = reduce(lambda x, y: x + y, [discovery.devices for discovery in discoveries])\n\n def run(self):\n\n for discovery in discoveries:\n discovery.controller.device_unique_name = discovery.name\n\n while self.running:\n try:\n r, w, x = select(self.all_devices, [], [], 0.5)\n for fd in r:\n active_device = fd\n controller_discovery = self.device_to_controller_discovery[active_device.fn]\n controller = controller_discovery.controller\n controller_devices = controller_discovery.devices\n prefix = None\n if controller.node_mappings is not None and len(controller_devices) > 1:\n try:\n prefix = controller.node_mappings[active_device.name]\n except KeyError:\n pass\n for event in active_device.read():\n if print_events:\n print(event)\n if event.type == EV_ABS or event.type == EV_REL:\n controller.axes.axis_updated(event, prefix=prefix)\n elif event.type == EV_KEY:\n # Button event\n if event.value == 1:\n # Button down\n controller.buttons.button_pressed(event.code, prefix=prefix)\n elif event.value == 0:\n # Button up\n controller.buttons.button_released(event.code, prefix=prefix)\n except Exception as e:\n self.stop(e)\n\n def 
stop(self, exception=None):\n\n for discovery in discoveries:\n discovery.controller.device_unique_name = None\n discovery.controller.exception = exception\n\n self.running = False\n\n polling_thread = SelectThread()\n\n # Force an update of the LED and battery system cache\n sys.scan_cache(force_update=True)\n\n for device in polling_thread.all_devices:\n device.grab()\n\n def unbind():\n polling_thread.stop()\n for dev in polling_thread.all_devices:\n try:\n dev.ungrab()\n except IOError:\n pass\n\n polling_thread.start()\n\n return unbind", "response": "Bind a set of controllers to a set of InputDevice instances."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _encode_time(mtime: float):\n dt = arrow.get(mtime)\n dt = dt.to(\"local\")\n date_val = ((dt.year - 1980) << 9) | (dt.month << 5) | dt.day\n secs = dt.second + dt.microsecond / 10**6\n time_val = (dt.hour << 11) | (dt.minute << 5) | math.floor(secs / 2)\n return (date_val << 16) | time_val", "response": "Encode a mtime float as a 32 - bit FAT time"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\noverloading function callable with loop getter.", "response": "def threadpooled(\n func: typing.Callable[..., typing.Union[\"typing.Awaitable[typing.Any]\", typing.Any]],\n *,\n loop_getter: None = None,\n loop_getter_need_context: bool = False,\n) -> typing.Callable[..., \"concurrent.futures.Future[typing.Any]\"]:\n \"\"\"Overload: function callable, no loop getter.\"\"\""} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef threadpooled(\n func: typing.Callable[..., typing.Union[\"typing.Awaitable[typing.Any]\", typing.Any]],\n *,\n loop_getter: typing.Union[typing.Callable[..., asyncio.AbstractEventLoop], asyncio.AbstractEventLoop],\n loop_getter_need_context: bool = False,\n) -> typing.Callable[..., \"asyncio.Task[typing.Any]\"]:\n \"\"\"Overload: 
function callable, loop getter available.\"\"\"", "response": "Overload: function callable, loop getter available."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\noverloads: No function.", "response": "def threadpooled(\n func: None = None,\n *,\n loop_getter: typing.Union[None, typing.Callable[..., asyncio.AbstractEventLoop], asyncio.AbstractEventLoop] = None,\n loop_getter_need_context: bool = False,\n) -> ThreadPooled:\n \"\"\"Overload: No function.\"\"\""} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\npost function to ThreadPoolExecutor.", "response": "def threadpooled( # noqa: F811\n func: typing.Optional[typing.Callable[..., typing.Union[\"typing.Awaitable[typing.Any]\", typing.Any]]] = None,\n *,\n loop_getter: typing.Union[None, typing.Callable[..., asyncio.AbstractEventLoop], asyncio.AbstractEventLoop] = None,\n loop_getter_need_context: bool = False,\n) -> typing.Union[\n ThreadPooled,\n typing.Callable[..., \"typing.Union[concurrent.futures.Future[typing.Any], typing.Awaitable[typing.Any]]\"],\n]:\n \"\"\"Post function to ThreadPoolExecutor.\n\n :param func: function to wrap\n :type func: typing.Optional[typing.Callable[..., typing.Union[typing.Awaitable, typing.Any]]]\n :param loop_getter: Method to get event loop, if wrap in asyncio task\n :type loop_getter: typing.Union[\n None,\n typing.Callable[..., asyncio.AbstractEventLoop],\n asyncio.AbstractEventLoop\n ]\n :param loop_getter_need_context: Loop getter requires function context\n :type loop_getter_need_context: bool\n :return: ThreadPooled instance, if called as function or argumented decorator, else callable wrapper\n :rtype: typing.Union[ThreadPooled, typing.Callable[..., typing.Union[concurrent.futures.Future, typing.Awaitable]]]\n \"\"\"\n if func is None:\n return ThreadPooled(func=func, loop_getter=loop_getter, loop_getter_need_context=loop_getter_need_context)\n return ThreadPooled( # type: ignore\n func=None, 
loop_getter=loop_getter, loop_getter_need_context=loop_getter_need_context\n )(func)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef configure(cls: typing.Type[\"ThreadPooled\"], max_workers: typing.Optional[int] = None) -> None:\n if isinstance(cls.__executor, ThreadPoolExecutor):\n if cls.__executor.max_workers == max_workers:\n return\n cls.__executor.shutdown()\n\n cls.__executor = ThreadPoolExecutor(max_workers=max_workers)", "response": "Create and configure the ThreadPoolExecutor."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn the ThreadPoolExecutor instance.", "response": "def executor(self) -> \"ThreadPoolExecutor\":\n \"\"\"Executor instance.\n\n :rtype: ThreadPoolExecutor\n \"\"\"\n if not isinstance(self.__executor, ThreadPoolExecutor) or self.__executor.is_shutdown:\n self.configure()\n return self.__executor"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef loop_getter(\n self\n ) -> typing.Optional[typing.Union[typing.Callable[..., asyncio.AbstractEventLoop], asyncio.AbstractEventLoop]]:\n \"\"\"Loop getter.\n\n :rtype: typing.Union[None, typing.Callable[..., asyncio.AbstractEventLoop], asyncio.AbstractEventLoop]\n \"\"\"\n return self.__loop_getter", "response": "Return the loop getter function for the given base namespace."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _get_loop(self, *args: typing.Any, **kwargs: typing.Any) -> typing.Optional[asyncio.AbstractEventLoop]:\n if callable(self.loop_getter):\n if self.loop_getter_need_context:\n return self.loop_getter(*args, **kwargs) # pylint: disable=not-callable\n return self.loop_getter() # pylint: disable=not-callable\n return self.loop_getter", "response": "Get event loop in decorator class."} {"SOURCE": "codesearchnet", "instruction": 
"Create a Python 3 function to\nreturn True if the given hostname needs SSH.", "response": "def needs_ssh(hostname, _socket=None):\n \"\"\"\n Obtains remote hostname of the socket and cuts off the domain part\n of its FQDN.\n \"\"\"\n if hostname.lower() in ['localhost', '127.0.0.1', '127.0.1.1']:\n return False\n _socket = _socket or socket\n fqdn = _socket.getfqdn()\n if hostname == fqdn:\n return False\n local_hostname = _socket.gethostname()\n local_short_hostname = local_hostname.split('.')[0]\n if local_hostname == hostname or local_short_hostname == hostname:\n return False\n return True"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ntry to determine the remote Python executable and return it.", "response": "def get_python_executable(conn):\n \"\"\"\n Try to determine the remote Python version so that it can be used\n when executing. Avoids the problem of different Python versions, or distros\n that do not use ``python`` but do ``python3``\n \"\"\"\n # executables in order of preference:\n executables = ['python3', 'python', 'python2.7']\n for executable in executables:\n conn.logger.debug('trying to determine remote python executable with %s' % executable)\n out, err, code = check(conn, ['which', executable])\n if code:\n conn.logger.warning('skipping %s, was not found in path' % executable)\n else:\n try:\n return out[0].strip()\n except IndexError:\n conn.logger.warning('could not parse stdout: %s' % out)\n\n # if all fails, we just return whatever the main connection had\n conn.logger.info('Falling back to using interpreter: %s' % conn.interpreter)\n return conn.interpreter"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _detect_sudo(self, _execnet=None):\n exc = _execnet or execnet\n gw = exc.makegateway(\n self._make_connection_string(self.hostname, use_sudo=False)\n )\n\n channel = gw.remote_exec(\n 'import getpass; channel.send(getpass.getuser())'\n )\n\n 
result = channel.receive()\n gw.exit()\n\n if result == 'root':\n return False\n self.logger.debug('connection detected need for sudo')\n return True", "response": "Detects whether the connection needs to use sudo."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef import_module(self, module):\n if self.remote_import_system is not None:\n if self.remote_import_system == 'json':\n self.remote_module = JsonModuleExecute(self, module, self.logger)\n else:\n self.remote_module = LegacyModuleExecute(self.gateway, module, self.logger)\n else:\n self.remote_module = LegacyModuleExecute(self.gateway, module, self.logger)\n return self.remote_module", "response": "Allows remote execution of a local module."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nretrieving the matching backend class from a string.", "response": "def get(name, fallback='ssh'):\n \"\"\"\n Retrieve the matching backend class from a string. If no backend can be\n matched, it raises an error.\n\n >>> get('ssh')\n \n >>> get()\n \n >>> get('non-existent')\n \n >>> get('non-existent', 'openshift')\n \n \"\"\"\n mapping = {\n 'ssh': ssh.SshConnection,\n 'oc': openshift.OpenshiftConnection,\n 'openshift': openshift.OpenshiftConnection,\n 'kubernetes': kubernetes.KubernetesConnection,\n 'k8s': kubernetes.KubernetesConnection,\n 'local': local.LocalConnection,\n 'popen': local.LocalConnection,\n 'localhost': local.LocalConnection,\n 'docker': docker.DockerConnection,\n 'podman': podman.PodmanConnection,\n }\n if not name:\n # falls back to just plain local/ssh\n name = 'ssh'\n\n name = name.strip().lower()\n connection_class = mapping.get(name)\n if not connection_class:\n logger.warning('no connection backend found for: \"%s\"' % name)\n if fallback:\n logger.info('falling back to \"%s\"' % fallback)\n # this assumes that ``fallback`` is a valid mapping name\n return mapping.get(fallback)\n return connection_class"} {"SOURCE": 
"codesearchnet", "instruction": "Create a Python 3 function to\nreturn configured CACHES dictionary from CACHE_URL", "response": "def config(env=DEFAULT_ENV, default='locmem://'):\n \"\"\"Returns configured CACHES dictionary from CACHE_URL\"\"\"\n config = {}\n\n s = os.environ.get(env, default)\n\n if s:\n config = parse(s)\n\n return config"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nparses a cache URL.", "response": "def parse(url):\n \"\"\"Parses a cache URL.\"\"\"\n config = {}\n\n url = urlparse.urlparse(url)\n # Handle python 2.6 broken url parsing\n path, query = url.path, url.query\n if '?' in path and query == '':\n path, query = path.split('?', 1)\n\n cache_args = dict([(key.upper(), ';'.join(val)) for key, val in\n urlparse.parse_qs(query).items()])\n\n # Update with environment configuration.\n backend = BACKENDS.get(url.scheme)\n if not backend:\n raise Exception('Unknown backend: \"{0}\"'.format(url.scheme))\n\n config['BACKEND'] = BACKENDS[url.scheme]\n\n redis_options = {}\n if url.scheme == 'hiredis':\n redis_options['PARSER_CLASS'] = 'redis.connection.HiredisParser'\n\n # File based\n if not url.netloc:\n if url.scheme in ('memcached', 'pymemcached', 'djangopylibmc'):\n config['LOCATION'] = 'unix:' + path\n\n elif url.scheme in ('redis', 'hiredis'):\n match = re.match(r'.+?(?P\\d+)', path)\n if match:\n db = match.group('db')\n path = path[:path.rfind('/')]\n else:\n db = '0'\n config['LOCATION'] = 'unix:%s:%s' % (path, db)\n else:\n config['LOCATION'] = path\n # URL based\n else:\n # Handle multiple hosts\n config['LOCATION'] = ';'.join(url.netloc.split(','))\n\n if url.scheme in ('redis', 'hiredis'):\n if url.password:\n redis_options['PASSWORD'] = url.password\n # Specifying the database is optional, use db 0 if not specified.\n db = path[1:] or '0'\n port = url.port if url.port else 6379\n config['LOCATION'] = \"redis://%s:%s/%s\" % (url.hostname, port, db)\n\n if redis_options:\n 
config.setdefault('OPTIONS', {}).update(redis_options)\n\n if url.scheme == 'uwsgicache':\n config['LOCATION'] = config.get('LOCATION', 'default') or 'default'\n\n # Pop special options from cache_args\n # https://docs.djangoproject.com/en/1.10/topics/cache/#cache-arguments\n options = {}\n for key in ['MAX_ENTRIES', 'CULL_FREQUENCY']:\n val = cache_args.pop(key, None)\n if val is not None:\n options[key] = int(val)\n if options:\n config.setdefault('OPTIONS', {}).update(options)\n\n config.update(cache_args)\n\n return config"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns True if memory has been written to False otherwise", "response": "def memory_changed(url=URL):\n \"\"\"Returns True if memory has been written to, False otherwise\"\"\"\n response = _get(Operation.memory_changed, url)\n try:\n return int(response.text) == 1\n except ValueError:\n raise IOError(\"Likely no FlashAir connection, \"\n \"memory changed CGI command failed\")"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef do_translate(parser, token):\n bits = token.split_contents()\n if len(bits) < 2:\n raise TemplateSyntaxError(\"'%s' takes at least one argument\" % bits[0])\n message_string = parser.compile_filter(bits[1])\n remaining = bits[2:]\n\n noop = False\n asvar = None\n message_context = None\n seen = set()\n invalid_context = {'as', 'noop'}\n\n while remaining:\n option = remaining.pop(0)\n if option in seen:\n raise TemplateSyntaxError(\n \"The '%s' option was specified more than once.\" % option,\n )\n elif option == 'noop':\n noop = True\n elif option == 'context':\n try:\n value = remaining.pop(0)\n except IndexError:\n msg = \"No argument provided to the '%s' tag for the context option.\" % bits[0]\n six.reraise(TemplateSyntaxError, TemplateSyntaxError(msg), sys.exc_info()[2])\n if value in invalid_context:\n raise TemplateSyntaxError(\n \"Invalid argument '%s' 
provided to the '%s' tag for the context option\" % (value, bits[0]),\n )\n message_context = parser.compile_filter(value)\n elif option == 'as':\n try:\n value = remaining.pop(0)\n except IndexError:\n msg = \"No argument provided to the '%s' tag for the as option.\" % bits[0]\n six.reraise(TemplateSyntaxError, TemplateSyntaxError(msg), sys.exc_info()[2])\n asvar = value\n else:\n raise TemplateSyntaxError(\n \"Unknown argument for '%s' tag: '%s'. The only options \"\n \"available are 'noop', 'context' \\\"xxx\\\", and 'as VAR'.\" % (\n bits[0], option,\n )\n )\n seen.add(option)\n\n if phrase_settings.PHRASE_ENABLED:\n return PhraseTranslateNode(message_string, noop, asvar, message_context)\n else:\n return TranslateNode(message_string, noop, asvar, message_context)", "response": "Translate a string into a new language."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef asynciotask(\n func: None = None,\n *,\n loop_getter: typing.Union[\n typing.Callable[..., asyncio.AbstractEventLoop], asyncio.AbstractEventLoop\n ] = asyncio.get_event_loop,\n loop_getter_need_context: bool = False,\n) -> AsyncIOTask:\n \"\"\"Overload: no function.\"\"\"", "response": "A function that returns an async task."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef asynciotask( # noqa: F811\n func: typing.Optional[typing.Callable[..., \"typing.Awaitable[typing.Any]\"]] = None,\n *,\n loop_getter: typing.Union[\n typing.Callable[..., asyncio.AbstractEventLoop], asyncio.AbstractEventLoop\n ] = asyncio.get_event_loop,\n loop_getter_need_context: bool = False,\n) -> typing.Union[AsyncIOTask, typing.Callable[..., \"asyncio.Task[typing.Any]\"]]:\n \"\"\"Wrap function in future and return.\n\n :param func: Function to wrap\n :type func: typing.Optional[typing.Callable[..., typing.Awaitable]]\n :param loop_getter: Method to get event loop, if wrap in asyncio task\n 
:type loop_getter: typing.Union[\n typing.Callable[..., asyncio.AbstractEventLoop],\n asyncio.AbstractEventLoop\n ]\n :param loop_getter_need_context: Loop getter requires function context\n :type loop_getter_need_context: bool\n :return: AsyncIOTask instance, if called as function or argumented decorator, else callable wrapper\n :rtype: typing.Union[AsyncIOTask, typing.Callable[..., asyncio.Task]]\n \"\"\"\n if func is None:\n return AsyncIOTask(func=func, loop_getter=loop_getter, loop_getter_need_context=loop_getter_need_context)\n return AsyncIOTask( # type: ignore\n func=None, loop_getter=loop_getter, loop_getter_need_context=loop_getter_need_context\n )(func)", "response": "Decorator to wrap function in future and return."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nget event loop in decorator class.", "response": "def get_loop(self, *args, **kwargs): # type: (typing.Any, typing.Any) -> asyncio.AbstractEventLoop\n \"\"\"Get event loop in decorator class.\"\"\"\n if callable(self.loop_getter):\n if self.loop_getter_need_context:\n return self.loop_getter(*args, **kwargs) # pylint: disable=not-callable\n return self.loop_getter() # pylint: disable=not-callable\n return self.loop_getter"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _get_function_wrapper(\n self, func: typing.Callable[..., \"typing.Awaitable[typing.Any]\"]\n ) -> typing.Callable[..., \"asyncio.Task[typing.Any]\"]:\n \"\"\"Here should be constructed and returned real decorator.\n\n :param func: Wrapped function\n :type func: typing.Callable[..., typing.Awaitable]\n :return: wrapper, which will produce asyncio.Task on call with function called inside it\n :rtype: typing.Callable[..., asyncio.Task]\n \"\"\"\n # noinspection PyMissingOrEmptyDocstring\n @functools.wraps(func) # pylint: disable=missing-docstring\n def wrapper(*args, **kwargs): # type: (typing.Any, typing.Any) -> asyncio.Task[typing.Any]\n loop = 
self.get_loop(*args, **kwargs)\n return loop.create_task(func(*args, **kwargs))\n\n return wrapper", "response": "This is a function decorator which returns a wrapper that produces an asyncio.Task on call, with the wrapped function executed inside it."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\noverload Call decorator without arguments.", "response": "def threaded(\n name: typing.Callable[..., typing.Any], daemon: bool = False, started: bool = False\n) -> typing.Callable[..., threading.Thread]:\n \"\"\"Overload: Call decorator without arguments.\"\"\""} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nrun function in separate thread. :param name: New thread name. If callable: use as wrapped function. If none: use wrapped function name. :type name: typing.Union[None, str, typing.Callable] :param daemon: Daemonize thread. :type daemon: bool :param started: Return started thread :type started: bool :return: Threaded instance, if called as function or argumented decorator, else callable wrapper :rtype: typing.Union[Threaded, typing.Callable[..., threading.Thread]]", "response": "def threaded( # noqa: F811\n name: typing.Optional[typing.Union[str, typing.Callable[..., typing.Any]]] = None,\n daemon: bool = False,\n started: bool = False,\n) -> typing.Union[Threaded, typing.Callable[..., threading.Thread]]:\n \"\"\"Run function in separate thread.\n\n :param name: New thread name.\n If callable: use as wrapped function.\n If none: use wrapped function name.\n :type name: typing.Union[None, str, typing.Callable]\n :param daemon: Daemonize thread.\n :type daemon: bool\n :param started: Return started thread\n :type started: bool\n :return: Threaded instance, if called as function or argumented decorator, else callable wrapper\n :rtype: typing.Union[Threaded, typing.Callable[..., threading.Thread]]\n \"\"\"\n if callable(name):\n func, name = (name, \"Threaded: \" + getattr(name, \"__name__\", str(hash(name))))\n 
return Threaded(name=name, daemon=daemon, started=started)(func) # type: ignore\n return Threaded(name=name, daemon=daemon, started=started)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _get_function_wrapper(\n self, func: typing.Callable[..., typing.Union[\"typing.Awaitable[typing.Any]\", typing.Any]]\n ) -> typing.Callable[..., threading.Thread]:\n \"\"\"Here should be constructed and returned real decorator.\n\n :param func: Wrapped function\n :type func: typing.Callable[..., typing.Union[typing.Awaitable, typing.Any]]\n :return: wrapped function\n :rtype: typing.Callable[..., threading.Thread]\n \"\"\"\n prepared: typing.Callable[..., typing.Any] = self._await_if_required(func)\n name: typing.Optional[str] = self.name\n if name is None:\n name = \"Threaded: \" + getattr(func, \"__name__\", str(hash(func)))\n\n # noinspection PyMissingOrEmptyDocstring\n @functools.wraps(prepared) # pylint: disable=missing-docstring\n def wrapper(*args, **kwargs): # type: (typing.Any, typing.Any) -> threading.Thread\n thread = threading.Thread(target=prepared, name=name, args=args, kwargs=kwargs, daemon=self.daemon)\n if self.started:\n thread.start()\n return thread\n\n return wrapper", "response": "This is a function decorator that returns a thread that will be started when the function is called."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef issuer(self, value):\n\n is_oscrypto = isinstance(value, asymmetric.Certificate)\n if not is_oscrypto and not isinstance(value, x509.Certificate):\n raise TypeError(_pretty_message(\n '''\n issuer must be an instance of asn1crypto.x509.Certificate or\n oscrypto.asymmetric.Certificate, not %s\n ''',\n _type_name(value)\n ))\n\n if is_oscrypto:\n value = value.asn1\n\n self._issuer = value", "response": "Set the issuer of the object."} {"SOURCE": "codesearchnet", "instruction": "Explain what the 
following Python 3 code does\ndef key_hash_algo(self, value):\n\n if value not in set(['sha1', 'sha256']):\n raise ValueError(_pretty_message(\n '''\n hash_algo must be one of \"sha1\", \"sha256\", not %s\n ''',\n repr(value)\n ))\n\n self._key_hash_algo = value", "response": "Sets the key hash algorithm."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef nonce(self, value):\n\n if not isinstance(value, bool):\n raise TypeError(_pretty_message(\n '''\n nonce must be a boolean, not %s\n ''',\n _type_name(value)\n ))\n\n self._nonce = value", "response": "Set the nonce of the object."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef set_extension(self, name, value):\n\n if isinstance(name, str_cls):\n request_extension_oids = set([\n 'service_locator',\n '1.3.6.1.5.5.7.48.1.7'\n ])\n tbs_request_extension_oids = set([\n 'nonce',\n 'acceptable_responses',\n 'preferred_signature_algorithms',\n '1.3.6.1.5.5.7.48.1.2',\n '1.3.6.1.5.5.7.48.1.4',\n '1.3.6.1.5.5.7.48.1.8'\n ])\n\n if name in request_extension_oids:\n name = ocsp.RequestExtensionId(name)\n\n elif name in tbs_request_extension_oids:\n name = ocsp.TBSRequestExtensionId(name)\n\n else:\n raise ValueError(_pretty_message(\n '''\n name must be a unicode string from\n asn1crypto.ocsp.TBSRequestExtensionId or\n asn1crypto.ocsp.RequestExtensionId, not %s\n ''',\n repr(name)\n ))\n\n if isinstance(name, ocsp.RequestExtensionId):\n extension = ocsp.RequestExtension({'extn_id': name})\n\n elif isinstance(name, ocsp.TBSRequestExtensionId):\n extension = ocsp.TBSRequestExtension({'extn_id': name})\n\n else:\n raise TypeError(_pretty_message(\n '''\n name must be a unicode string or an instance of\n asn1crypto.ocsp.TBSRequestExtensionId or\n asn1crypto.ocsp.RequestExtensionId, not %s\n ''',\n _type_name(name)\n ))\n\n # We use native here to convert OIDs to meaningful names\n name = 
extension['extn_id'].native\n spec = extension.spec('extn_value')\n\n if not isinstance(value, spec) and value is not None:\n raise TypeError(_pretty_message(\n '''\n value must be an instance of %s, not %s\n ''',\n _type_name(spec),\n _type_name(value)\n ))\n\n if isinstance(extension, ocsp.TBSRequestExtension):\n extn_dict = self._tbs_request_extensions\n else:\n extn_dict = self._request_extensions\n\n if value is None:\n if name in extn_dict:\n del extn_dict[name]\n else:\n extn_dict[name] = value", "response": "Sets the value for an extension using a fully constructed ASN. 1 Value object."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef build(self, requestor_private_key=None, requestor_certificate=None, other_certificates=None):\n\n def _make_extension(name, value):\n return {\n 'extn_id': name,\n 'critical': False,\n 'extn_value': value\n }\n\n tbs_request_extensions = []\n request_extensions = []\n has_nonce = False\n\n for name, value in self._tbs_request_extensions.items():\n if name == 'nonce':\n has_nonce = True\n tbs_request_extensions.append(_make_extension(name, value))\n if self._nonce and not has_nonce:\n tbs_request_extensions.append(\n _make_extension('nonce', util.rand_bytes(16))\n )\n\n if not tbs_request_extensions:\n tbs_request_extensions = None\n\n for name, value in self._request_extensions.items():\n request_extensions.append(_make_extension(name, value))\n\n if not request_extensions:\n request_extensions = None\n\n tbs_request = ocsp.TBSRequest({\n 'request_list': [\n {\n 'req_cert': {\n 'hash_algorithm': {\n 'algorithm': self._key_hash_algo\n },\n 'issuer_name_hash': getattr(self._certificate.issuer, self._key_hash_algo),\n 'issuer_key_hash': getattr(self._issuer.public_key, self._key_hash_algo),\n 'serial_number': self._certificate.serial_number,\n },\n 'single_request_extensions': request_extensions\n }\n ],\n 'request_extensions': tbs_request_extensions\n })\n signature = None\n\n if 
requestor_private_key or requestor_certificate or other_certificates:\n is_oscrypto = isinstance(requestor_private_key, asymmetric.PrivateKey)\n if not isinstance(requestor_private_key, keys.PrivateKeyInfo) and not is_oscrypto:\n raise TypeError(_pretty_message(\n '''\n requestor_private_key must be an instance of\n asn1crypto.keys.PrivateKeyInfo or\n oscrypto.asymmetric.PrivateKey, not %s\n ''',\n _type_name(requestor_private_key)\n ))\n\n cert_is_oscrypto = isinstance(requestor_certificate, asymmetric.Certificate)\n if not isinstance(requestor_certificate, x509.Certificate) and not cert_is_oscrypto:\n raise TypeError(_pretty_message(\n '''\n requestor_certificate must be an instance of\n asn1crypto.x509.Certificate or\n oscrypto.asymmetric.Certificate, not %s\n ''',\n _type_name(requestor_certificate)\n ))\n\n if other_certificates is not None and not isinstance(other_certificates, list):\n raise TypeError(_pretty_message(\n '''\n other_certificates must be a list of\n asn1crypto.x509.Certificate or\n oscrypto.asymmetric.Certificate objects, not %s\n ''',\n _type_name(other_certificates)\n ))\n\n if cert_is_oscrypto:\n requestor_certificate = requestor_certificate.asn1\n\n tbs_request['requestor_name'] = x509.GeneralName(\n name='directory_name',\n value=requestor_certificate.subject\n )\n\n certificates = [requestor_certificate]\n\n for other_certificate in other_certificates:\n other_cert_is_oscrypto = isinstance(other_certificate, asymmetric.Certificate)\n if not isinstance(other_certificate, x509.Certificate) and not other_cert_is_oscrypto:\n raise TypeError(_pretty_message(\n '''\n other_certificate must be an instance of\n asn1crypto.x509.Certificate or\n oscrypto.asymmetric.Certificate, not %s\n ''',\n _type_name(other_certificate)\n ))\n if other_cert_is_oscrypto:\n other_certificate = other_certificate.asn1\n certificates.append(other_certificate)\n\n signature_algo = requestor_private_key.algorithm\n if signature_algo == 'ec':\n signature_algo = 
'ecdsa'\n\n signature_algorithm_id = '%s_%s' % (self._hash_algo, signature_algo)\n\n if requestor_private_key.algorithm == 'rsa':\n sign_func = asymmetric.rsa_pkcs1v15_sign\n elif requestor_private_key.algorithm == 'dsa':\n sign_func = asymmetric.dsa_sign\n elif requestor_private_key.algorithm == 'ec':\n sign_func = asymmetric.ecdsa_sign\n\n if not is_oscrypto:\n requestor_private_key = asymmetric.load_private_key(requestor_private_key)\n signature_bytes = sign_func(requestor_private_key, tbs_request.dump(), self._hash_algo)\n\n signature = ocsp.Signature({\n 'signature_algorithm': {'algorithm': signature_algorithm_id},\n 'signature': signature_bytes,\n 'certs': certificates\n })\n\n return ocsp.OCSPRequest({\n 'tbs_request': tbs_request,\n 'optional_signature': signature\n })", "response": "Builds the OCSP request object."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef response_status(self, value):\n\n if not isinstance(value, str_cls):\n raise TypeError(_pretty_message(\n '''\n response_status must be a unicode string, not %s\n ''',\n _type_name(value)\n ))\n\n valid_response_statuses = set([\n 'successful',\n 'malformed_request',\n 'internal_error',\n 'try_later',\n 'sign_required',\n 'unauthorized'\n ])\n if value not in valid_response_statuses:\n raise ValueError(_pretty_message(\n '''\n response_status must be one of \"successful\",\n \"malformed_request\", \"internal_error\", \"try_later\",\n \"sign_required\", \"unauthorized\", not %s\n ''',\n repr(value)\n ))\n\n self._response_status = value", "response": "Sets the response status of the OCSP response."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nsets the certificate of the response.", "response": "def certificate(self, value):\n \"\"\"\n An asn1crypto.x509.Certificate or oscrypto.asymmetric.Certificate object\n of the certificate the response is about.\n \"\"\"\n\n if value is 
not None:\n is_oscrypto = isinstance(value, asymmetric.Certificate)\n if not is_oscrypto and not isinstance(value, x509.Certificate):\n raise TypeError(_pretty_message(\n '''\n certificate must be an instance of asn1crypto.x509.Certificate\n or oscrypto.asymmetric.Certificate, not %s\n ''',\n _type_name(value)\n ))\n\n if is_oscrypto:\n value = value.asn1\n\n self._certificate = value"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef certificate_status(self, value):\n\n if value is not None:\n if not isinstance(value, str_cls):\n raise TypeError(_pretty_message(\n '''\n certificate_status must be a unicode string, not %s\n ''',\n _type_name(value)\n ))\n\n valid_certificate_statuses = set([\n 'good',\n 'revoked',\n 'key_compromise',\n 'ca_compromise',\n 'affiliation_changed',\n 'superseded',\n 'cessation_of_operation',\n 'certificate_hold',\n 'remove_from_crl',\n 'privilege_withdrawn',\n 'unknown',\n ])\n if value not in valid_certificate_statuses:\n raise ValueError(_pretty_message(\n '''\n certificate_status must be one of \"good\", \"revoked\", \"key_compromise\",\n \"ca_compromise\", \"affiliation_changed\", \"superseded\",\n \"cessation_of_operation\", \"certificate_hold\", \"remove_from_crl\",\n \"privilege_withdrawn\", \"unknown\" not %s\n ''',\n repr(value)\n ))\n\n self._certificate_status = value", "response": "Sets the certificate status of the certificate."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nset the revocation date of the certificate.", "response": "def revocation_date(self, value):\n \"\"\"\n A datetime.datetime object of when the certificate was revoked, if the\n status is not \"good\" or \"unknown\".\n \"\"\"\n\n if value is not None and not isinstance(value, datetime):\n raise TypeError(_pretty_message(\n '''\n revocation_date must be an instance of datetime.datetime, not %s\n ''',\n _type_name(value)\n ))\n\n self._revocation_date = value"} 
{"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nsetting the certificate issuer of the OCSP response.", "response": "def certificate_issuer(self, value):\n \"\"\"\n An asn1crypto.x509.Certificate object of the issuer of the certificate.\n This should only be set if the OCSP responder is not the issuer of\n the certificate, but instead a special certificate only for OCSP\n responses.\n \"\"\"\n\n if value is not None:\n is_oscrypto = isinstance(value, asymmetric.Certificate)\n if not is_oscrypto and not isinstance(value, x509.Certificate):\n raise TypeError(_pretty_message(\n '''\n certificate_issuer must be an instance of\n asn1crypto.x509.Certificate or\n oscrypto.asymmetric.Certificate, not %s\n ''',\n _type_name(value)\n ))\n\n if is_oscrypto:\n value = value.asn1\n\n self._certificate_issuer = value"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef this_update(self, value):\n\n if not isinstance(value, datetime):\n raise TypeError(_pretty_message(\n '''\n this_update must be an instance of datetime.datetime, not %s\n ''',\n _type_name(value)\n ))\n\n self._this_update = value", "response": "Set the _this_update field of the object."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nset the next update date for this entry.", "response": "def next_update(self, value):\n \"\"\"\n A datetime.datetime object of when the response may next change. This\n should only be set if responses are cached. 
If responses are generated\n fresh on every request, this should not be set.\n \"\"\"\n\n if not isinstance(value, datetime):\n raise TypeError(_pretty_message(\n '''\n next_update must be an instance of datetime.datetime, not %s\n ''',\n _type_name(value)\n ))\n\n self._next_update = value"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef set_extension(self, name, value):\n\n if isinstance(name, str_cls):\n response_data_extension_oids = set([\n 'nonce',\n 'extended_revoke',\n '1.3.6.1.5.5.7.48.1.2',\n '1.3.6.1.5.5.7.48.1.9'\n ])\n\n single_response_extension_oids = set([\n 'crl',\n 'archive_cutoff',\n 'crl_reason',\n 'invalidity_date',\n 'certificate_issuer',\n '1.3.6.1.5.5.7.48.1.3',\n '1.3.6.1.5.5.7.48.1.6',\n '2.5.29.21',\n '2.5.29.24',\n '2.5.29.29'\n ])\n\n if name in response_data_extension_oids:\n name = ocsp.ResponseDataExtensionId(name)\n\n elif name in single_response_extension_oids:\n name = ocsp.SingleResponseExtensionId(name)\n\n else:\n raise ValueError(_pretty_message(\n '''\n name must be a unicode string from\n asn1crypto.ocsp.ResponseDataExtensionId or\n asn1crypto.ocsp.SingleResponseExtensionId, not %s\n ''',\n repr(name)\n ))\n\n if isinstance(name, ocsp.ResponseDataExtensionId):\n extension = ocsp.ResponseDataExtension({'extn_id': name})\n\n elif isinstance(name, ocsp.SingleResponseExtensionId):\n extension = ocsp.SingleResponseExtension({'extn_id': name})\n\n else:\n raise TypeError(_pretty_message(\n '''\n name must be a unicode string or an instance of\n asn1crypto.ocsp.SingleResponseExtensionId or\n asn1crypto.ocsp.ResponseDataExtensionId, not %s\n ''',\n _type_name(name)\n ))\n\n # We use native here to convert OIDs to meaningful names\n name = extension['extn_id'].native\n spec = extension.spec('extn_value')\n\n if name == 'nonce':\n raise ValueError(_pretty_message(\n '''\n The nonce value should be set via the .nonce attribute, not the\n .set_extension() method\n '''\n ))\n\n if name == 
'crl_reason':\n raise ValueError(_pretty_message(\n '''\n The crl_reason value should be set via the certificate_status\n parameter of the OCSPResponseBuilder() constructor, not the\n .set_extension() method\n '''\n ))\n\n if name == 'certificate_issuer':\n raise ValueError(_pretty_message(\n '''\n The certificate_issuer value should be set via the\n .certificate_issuer attribute, not the .set_extension() method\n '''\n ))\n\n if not isinstance(value, spec) and value is not None:\n raise TypeError(_pretty_message(\n '''\n value must be an instance of %s, not %s\n ''',\n _type_name(spec),\n _type_name(value)\n ))\n\n if isinstance(extension, ocsp.ResponseDataExtension):\n extn_dict = self._response_data_extensions\n else:\n extn_dict = self._single_response_extensions\n\n if value is None:\n if name in extn_dict:\n del extn_dict[name]\n else:\n extn_dict[name] = value", "response": "Sets the value for an extension using a fully constructed ASN. 1 Value object."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nbuilding the OCSP response object.", "response": "def build(self, responder_private_key=None, responder_certificate=None):\n \"\"\"\n Validates the request information, constructs the ASN.1 structure and\n signs it.\n\n The responder_private_key and responder_certificate parameters are only\n required if the response_status is \"successful\".\n\n :param responder_private_key:\n An asn1crypto.keys.PrivateKeyInfo or oscrypto.asymmetric.PrivateKey\n object for the private key to sign the response with\n\n :param responder_certificate:\n An asn1crypto.x509.Certificate or oscrypto.asymmetric.Certificate\n object of the certificate associated with the private key\n\n :return:\n An asn1crypto.ocsp.OCSPResponse object of the response\n \"\"\"\n\n if self._response_status != 'successful':\n return ocsp.OCSPResponse({\n 'response_status': self._response_status\n })\n\n is_oscrypto = isinstance(responder_private_key, asymmetric.PrivateKey)\n if 
not isinstance(responder_private_key, keys.PrivateKeyInfo) and not is_oscrypto:\n raise TypeError(_pretty_message(\n '''\n responder_private_key must be an instance of\n asn1crypto.keys.PrivateKeyInfo or\n oscrypto.asymmetric.PrivateKey, not %s\n ''',\n _type_name(responder_private_key)\n ))\n\n cert_is_oscrypto = isinstance(responder_certificate, asymmetric.Certificate)\n if not isinstance(responder_certificate, x509.Certificate) and not cert_is_oscrypto:\n raise TypeError(_pretty_message(\n '''\n responder_certificate must be an instance of\n asn1crypto.x509.Certificate or\n oscrypto.asymmetric.Certificate, not %s\n ''',\n _type_name(responder_certificate)\n ))\n\n if cert_is_oscrypto:\n responder_certificate = responder_certificate.asn1\n\n if self._certificate is None:\n raise ValueError(_pretty_message(\n '''\n certificate must be set if the response_status is\n \"successful\"\n '''\n ))\n if self._certificate_status is None:\n raise ValueError(_pretty_message(\n '''\n certificate_status must be set if the response_status is\n \"successful\"\n '''\n ))\n\n def _make_extension(name, value):\n return {\n 'extn_id': name,\n 'critical': False,\n 'extn_value': value\n }\n\n response_data_extensions = []\n single_response_extensions = []\n\n for name, value in self._response_data_extensions.items():\n response_data_extensions.append(_make_extension(name, value))\n if self._nonce:\n response_data_extensions.append(\n _make_extension('nonce', self._nonce)\n )\n\n if not response_data_extensions:\n response_data_extensions = None\n\n for name, value in self._single_response_extensions.items():\n single_response_extensions.append(_make_extension(name, value))\n\n if self._certificate_issuer:\n single_response_extensions.append(\n _make_extension(\n 'certificate_issuer',\n [\n x509.GeneralName(\n name='directory_name',\n value=self._certificate_issuer.subject\n )\n ]\n )\n )\n\n if not single_response_extensions:\n single_response_extensions = None\n\n responder_key_hash 
= getattr(responder_certificate.public_key, self._key_hash_algo)\n\n if self._certificate_status == 'good':\n cert_status = ocsp.CertStatus(\n name='good',\n value=core.Null()\n )\n elif self._certificate_status == 'unknown':\n cert_status = ocsp.CertStatus(\n name='unknown',\n value=core.Null()\n )\n else:\n status = self._certificate_status\n reason = status if status != 'revoked' else 'unspecified'\n cert_status = ocsp.CertStatus(\n name='revoked',\n value={\n 'revocation_time': self._revocation_date,\n 'revocation_reason': reason,\n }\n )\n\n issuer = self._certificate_issuer if self._certificate_issuer else responder_certificate\n if issuer.subject != self._certificate.issuer:\n raise ValueError(_pretty_message(\n '''\n responder_certificate does not appear to be the issuer for\n the certificate. Perhaps set the .certificate_issuer attribute?\n '''\n ))\n\n produced_at = datetime.now(timezone.utc)\n\n if self._this_update is None:\n self._this_update = produced_at\n\n if self._next_update is None:\n self._next_update = self._this_update + timedelta(days=7)\n\n response_data = ocsp.ResponseData({\n 'responder_id': ocsp.ResponderId(name='by_key', value=responder_key_hash),\n 'produced_at': produced_at,\n 'responses': [\n {\n 'cert_id': {\n 'hash_algorithm': {\n 'algorithm': self._key_hash_algo\n },\n 'issuer_name_hash': getattr(self._certificate.issuer, self._key_hash_algo),\n 'issuer_key_hash': getattr(issuer.public_key, self._key_hash_algo),\n 'serial_number': self._certificate.serial_number,\n },\n 'cert_status': cert_status,\n 'this_update': self._this_update,\n 'next_update': self._next_update,\n 'single_extensions': single_response_extensions\n }\n ],\n 'response_extensions': response_data_extensions\n })\n\n signature_algo = responder_private_key.algorithm\n if signature_algo == 'ec':\n signature_algo = 'ecdsa'\n\n signature_algorithm_id = '%s_%s' % (self._hash_algo, signature_algo)\n\n if responder_private_key.algorithm == 'rsa':\n sign_func = 
asymmetric.rsa_pkcs1v15_sign\n elif responder_private_key.algorithm == 'dsa':\n sign_func = asymmetric.dsa_sign\n elif responder_private_key.algorithm == 'ec':\n sign_func = asymmetric.ecdsa_sign\n\n if not is_oscrypto:\n responder_private_key = asymmetric.load_private_key(responder_private_key)\n signature_bytes = sign_func(responder_private_key, response_data.dump(), self._hash_algo)\n\n certs = None\n if self._certificate_issuer:\n certs = [responder_certificate]\n\n return ocsp.OCSPResponse({\n 'response_status': self._response_status,\n 'response_bytes': {\n 'response_type': 'basic_ocsp_response',\n 'response': {\n 'tbs_response_data': response_data,\n 'signature_algorithm': {'algorithm': signature_algorithm_id},\n 'signature': signature_bytes,\n 'certs': certs\n }\n }\n })"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nbuild a OpenID request from an Authorization request.", "response": "def make_openid_request(arq, keys, issuer, request_object_signing_alg, recv):\n \"\"\"\n Construct the JWT to be passed by value (the request parameter) or by\n reference (request_uri).\n The request will be signed\n\n :param arq: The Authorization request\n :param keys: Keys to use for signing/encrypting. 
A KeyJar instance\n :param issuer: Who is signing this JSON Web Token\n :param request_object_signing_alg: Which signing algorithm to use\n :param recv: The intended receiver of the request\n :return: JWT encoded OpenID request\n \"\"\"\n\n _jwt = JWT(key_jar=keys, iss=issuer, sign_alg=request_object_signing_alg)\n return _jwt.pack(arq.to_dict(), owner=issuer, recv=recv)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef verify(self, **kwargs):\n super(AuthorizationRequest, self).verify(**kwargs)\n\n clear_verified_claims(self)\n\n args = {}\n for arg in [\"keyjar\", \"opponent_id\", \"sender\", \"alg\", \"encalg\",\n \"encenc\"]:\n try:\n args[arg] = kwargs[arg]\n except KeyError:\n pass\n\n if \"opponent_id\" not in kwargs:\n args[\"opponent_id\"] = self[\"client_id\"]\n\n if \"request\" in self:\n if isinstance(self[\"request\"], str):\n # Try to decode the JWT, checks the signature\n oidr = OpenIDRequest().from_jwt(str(self[\"request\"]), **args)\n\n # check if something is change in the original message\n for key, val in oidr.items():\n if key in self:\n if self[key] != val:\n # log but otherwise ignore\n logger.warning('{} != {}'.format(self[key], val))\n\n # remove all claims\n _keys = list(self.keys())\n for key in _keys:\n if key not in oidr:\n del self[key]\n\n self.update(oidr)\n\n # replace the JWT with the parsed and verified instance\n self[verified_claim_name(\"request\")] = oidr\n\n if \"id_token_hint\" in self:\n if isinstance(self[\"id_token_hint\"], str):\n idt = IdToken().from_jwt(str(self[\"id_token_hint\"]), **args)\n self[\"verified_id_token_hint\"] = idt\n\n if \"response_type\" not in self:\n raise MissingRequiredAttribute(\"response_type missing\", self)\n\n _rt = self[\"response_type\"]\n if \"id_token\" in _rt:\n if \"nonce\" not in self:\n raise MissingRequiredAttribute(\"Nonce missing\", self)\n else:\n try:\n if self['nonce'] != kwargs['nonce']:\n raise ValueError(\n 'Nonce in 
id_token not matching nonce in authz '\n 'request')\n except KeyError:\n pass\n\n if \"openid\" not in self.get(\"scope\", []):\n raise MissingRequiredValue(\"openid not in scope\", self)\n\n if \"offline_access\" in self.get(\"scope\", []):\n if \"prompt\" not in self or \"consent\" not in self[\"prompt\"]:\n raise MissingRequiredValue(\"consent in prompt\", self)\n\n if \"prompt\" in self:\n if \"none\" in self[\"prompt\"] and len(self[\"prompt\"]) > 1:\n raise InvalidRequest(\"prompt none combined with other value\",\n self)\n\n return True", "response": "Verify that the request and response are valid."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nverify that the message is OK.", "response": "def verify(self, **kwargs):\n \"\"\"\n Implementations MUST either return both a Client Configuration Endpoint\n and a Registration Access Token or neither of them.\n :param kwargs:\n :return: True if the message is OK otherwise False\n \"\"\"\n super(RegistrationResponse, self).verify(**kwargs)\n\n has_reg_uri = \"registration_client_uri\" in self\n has_reg_at = \"registration_access_token\" in self\n if has_reg_uri != has_reg_at:\n raise VerificationError((\n \"Only one of registration_client_uri\"\n \" and registration_access_token present\"), self)\n\n return True"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef in_a_while(days=0, seconds=0, microseconds=0, milliseconds=0,\n minutes=0, hours=0, weeks=0, time_format=TIME_FORMAT):\n \"\"\"\n :param days:\n :param seconds:\n :param microseconds:\n :param milliseconds:\n :param minutes:\n :param hours:\n :param weeks:\n :param time_format:\n :return: Formatet string\n \"\"\"\n if not time_format:\n time_format = TIME_FORMAT\n\n return time_in_a_while(days, seconds, microseconds, milliseconds,\n minutes, hours, weeks).strftime(time_format)", "response": "Return a string representation of the current time in a while."} {"SOURCE": 
"codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef before(point):\n if not point:\n return True\n\n if isinstance(point, str):\n point = str_to_time(point)\n elif isinstance(point, int):\n point = time.gmtime(point)\n\n return time.gmtime() < point", "response": "True if point datetime specification is before now"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ntrues if after is later or equal to that", "response": "def later_than(after, before):\n \"\"\" True if then is later or equal to that \"\"\"\n if isinstance(after, str):\n after = str_to_time(after)\n elif isinstance(after, int):\n after = time.gmtime(after)\n\n if isinstance(before, str):\n before = str_to_time(before)\n elif isinstance(before, int):\n before = time.gmtime(before)\n\n return after >= before"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns the number of seconds since epoch a while from now.", "response": "def epoch_in_a_while(days=0, seconds=0, microseconds=0, milliseconds=0,\n minutes=0, hours=0, weeks=0):\n \"\"\"\n Return the number of seconds since epoch a while from now.\n\n :param days:\n :param seconds:\n :param microseconds:\n :param milliseconds:\n :param minutes:\n :param hours:\n :param weeks:\n :return: Seconds since epoch (1970-01-01)\n \"\"\"\n\n dt = time_in_a_while(days, seconds, microseconds, milliseconds, minutes,\n hours, weeks)\n return int((dt - datetime(1970, 1, 1)).total_seconds())"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nset the default values of the parameters.", "response": "def set_defaults(self):\n \"\"\"\n Based on specification set a parameters value to the default value.\n \"\"\"\n for key, val in self.c_default.items():\n self._dict[key] = val"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef 
to_urlencoded(self, lev=0):\n\n _spec = self.c_param\n if not self.lax:\n for attribute, (_, req, _ser, _, na) in _spec.items():\n if req and attribute not in self._dict:\n raise MissingRequiredAttribute(\"%s\" % attribute,\n \"%s\" % self)\n\n params = []\n\n for key, val in self._dict.items():\n try:\n (_, req, _ser, _, null_allowed) = _spec[key]\n except KeyError: # extra attribute\n try:\n _key, lang = key.split(\"#\")\n (_, req, _ser, _deser, null_allowed) = _spec[_key]\n except (ValueError, KeyError):\n try:\n (_, req, _ser, _, null_allowed) = _spec['*']\n except KeyError:\n _ser = None\n null_allowed = False\n\n if val is None and null_allowed is False:\n continue\n elif isinstance(val, str):\n # Should I allow parameters with \"\" as value ???\n params.append((key, val.encode(\"utf-8\")))\n elif isinstance(val, list):\n if _ser:\n params.append((key, str(_ser(val, sformat=\"urlencoded\",\n lev=lev))))\n else:\n for item in val:\n params.append((key, str(item).encode('utf-8')))\n elif isinstance(val, Message):\n try:\n _val = json.dumps(_ser(val, sformat=\"dict\", lev=lev + 1))\n params.append((key, _val))\n except TypeError:\n params.append((key, val))\n elif val is None:\n params.append((key, val))\n else:\n try:\n params.append((key, _ser(val, lev=lev)))\n except Exception:\n params.append((key, str(val)))\n\n try:\n return urlencode(params)\n except UnicodeEncodeError:\n _val = []\n for k, v in params:\n try:\n _val.append((k, v.encode(\"utf-8\")))\n except TypeError:\n _val.append((k, v))\n return urlencode(_val)", "response": "Creates a string using the application / x - www - form - urlencoded format of the resource."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef serialize(self, method=\"urlencoded\", lev=0, **kwargs):\n return getattr(self, \"to_%s\" % method)(lev=lev, **kwargs)", "response": "Serialize this instance to another representation."} {"SOURCE": "codesearchnet", "instruction": "Here you 
have a function in Python 3, explain what it does\ndef deserialize(self, info, method=\"urlencoded\", **kwargs):\n try:\n func = getattr(self, \"from_%s\" % method)\n except AttributeError:\n raise FormatError(\"Unknown serialization method (%s)\" % method)\n else:\n return func(info, **kwargs)", "response": "Deserialize from an external representation to an internal representation."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef from_urlencoded(self, urlencoded, **kwargs):\n\n # parse_qs returns a dictionary with keys and values. The values are\n # always lists even if there is only one value in the list.\n # keys only appears once.\n\n if isinstance(urlencoded, str):\n pass\n elif isinstance(urlencoded, list):\n urlencoded = urlencoded[0]\n\n _spec = self.c_param\n\n _info = parse_qs(urlencoded)\n if len(urlencoded) and _info == {}:\n raise FormatError('Wrong format')\n\n for key, val in _info.items():\n try:\n (typ, _, _, _deser, null_allowed) = _spec[key]\n except KeyError:\n try:\n _key, lang = key.split(\"#\")\n (typ, _, _, _deser, null_allowed) = _spec[_key]\n except (ValueError, KeyError):\n try:\n (typ, _, _, _deser, null_allowed) = _spec['*']\n except KeyError:\n if len(val) == 1:\n val = val[0]\n\n self._dict[key] = val\n continue\n\n if isinstance(typ, list):\n if _deser:\n self._dict[key] = _deser(val[0], \"urlencoded\")\n else:\n self._dict[key] = val\n else: # must be single value\n if len(val) == 1:\n if _deser:\n self._dict[key] = _deser(val[0], \"urlencoded\")\n elif isinstance(val[0], typ):\n self._dict[key] = val[0]\n else:\n self._dict[key] = val[0]\n else:\n raise TooManyValues('{}'.format(key))\n\n return self", "response": "This method creates a new instance of the class from a urlencoded string."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning a dictionary representation of the class.", "response": "def to_dict(self, 
lev=0):\n \"\"\"\n Return a dictionary representation of the class\n\n :return: A dict\n \"\"\"\n\n _spec = self.c_param\n\n _res = {}\n lev += 1\n for key, val in self._dict.items():\n try:\n (_, req, _ser, _, null_allowed) = _spec[str(key)]\n except KeyError:\n try:\n _key, lang = key.split(\"#\")\n (_, req, _ser, _, null_allowed) = _spec[_key]\n except (ValueError, KeyError):\n try:\n (_, req, _ser, _, null_allowed) = _spec['*']\n except KeyError:\n _ser = None\n\n if _ser:\n val = _ser(val, \"dict\", lev)\n\n if isinstance(val, Message):\n _res[key] = val.to_dict(lev + 1)\n elif isinstance(val, list) and isinstance(\n next(iter(val or []), None), Message):\n _res[key] = [v.to_dict(lev) for v in val]\n else:\n _res[key] = val\n\n return _res"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ndirects translation, so the value for one key might be a list or a single value. :param dictionary: The info :return: A class instance or raise an exception on error", "response": "def from_dict(self, dictionary, **kwargs):\n \"\"\"\n Direct translation, so the value for one key might be a list or a\n single value.\n\n :param dictionary: The info\n :return: A class instance or raise an exception on error\n \"\"\"\n\n _spec = self.c_param\n\n for key, val in dictionary.items():\n # Earlier versions of python don't like unicode strings as\n # variable names\n if val == \"\" or val == [\"\"]:\n continue\n\n skey = str(key)\n try:\n (vtyp, req, _, _deser, null_allowed) = _spec[key]\n except KeyError:\n # might be a parameter with a lang tag\n try:\n _key, lang = skey.split(\"#\")\n except ValueError:\n try:\n (vtyp, _, _, _deser, null_allowed) = _spec['*']\n if val is None:\n self._dict[key] = val\n continue\n except KeyError:\n self._dict[key] = val\n continue\n else:\n try:\n (vtyp, req, _, _deser, null_allowed) = _spec[_key]\n except KeyError:\n try:\n (vtyp, _, _, _deser, null_allowed) = _spec['*']\n if val is None:\n 
self._dict[key] = val\n continue\n except KeyError:\n self._dict[key] = val\n continue\n\n self._add_value(skey, vtyp, key, val, _deser, null_allowed)\n return self"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nadding a value to the instance.", "response": "def _add_value(self, skey, vtyp, key, val, _deser, null_allowed):\n \"\"\"\n Main method for adding a value to the instance. Does all the\n checking on type of value and if among allowed values.\n \n :param skey: string version of the key \n :param vtyp: Type of value\n :param key: original representation of the key\n :param val: The value to add\n :param _deser: A deserializer for this value type\n :param null_allowed: Whether null is an allowed value for this key\n \"\"\"\n\n if isinstance(val, list):\n if (len(val) == 0 or val[0] is None) and null_allowed is False:\n return\n\n if isinstance(vtyp, tuple):\n vtyp = vtyp[0]\n\n if isinstance(vtyp, list):\n vtype = vtyp[0]\n if isinstance(val, vtype):\n if issubclass(vtype, Message):\n self._dict[skey] = [val]\n elif _deser:\n try:\n self._dict[skey] = _deser(val, sformat=\"urlencoded\")\n except Exception as exc:\n raise DecodeError(ERRTXT % (key, exc))\n else:\n setattr(self, skey, [val])\n elif isinstance(val, list):\n if _deser:\n try:\n val = _deser(val, sformat=\"dict\")\n except Exception as exc:\n raise DecodeError(ERRTXT % (key, exc))\n\n if issubclass(vtype, Message):\n try:\n _val = []\n for v in val:\n _val.append(vtype(**dict([(str(x), y) for x, y\n in v.items()])))\n val = _val\n except Exception as exc:\n raise DecodeError(ERRTXT % (key, exc))\n else:\n for v in val:\n if not isinstance(v, vtype):\n raise DecodeError(\n ERRTXT % (key, \"type != %s (%s)\" % (\n vtype, type(v))))\n\n self._dict[skey] = val\n elif isinstance(val, dict):\n try:\n val = _deser(val, sformat=\"dict\")\n except Exception as exc:\n raise DecodeError(ERRTXT % (key, exc))\n else:\n self._dict[skey] = val\n else:\n raise DecodeError(ERRTXT % (key, 
\"type != %s\" % vtype))\n else:\n if val is None:\n self._dict[skey] = None\n elif isinstance(val, bool):\n if vtyp is bool:\n self._dict[skey] = val\n else:\n raise ValueError(\n '\"{}\", wrong type of value for \"{}\"'.format(val, skey))\n elif isinstance(val, vtyp): # Not necessary to do anything\n self._dict[skey] = val\n else:\n if _deser:\n try:\n val = _deser(val, sformat=\"dict\")\n except Exception as exc:\n raise DecodeError(ERRTXT % (key, exc))\n else:\n # if isinstance(val, str):\n # self._dict[skey] = val\n # elif isinstance(val, list):\n # if len(val) == 1:\n # self._dict[skey] = val[0]\n # elif not len(val):\n # pass\n # else:\n # raise TooManyValues(key)\n # else:\n self._dict[skey] = val\n elif vtyp is int:\n try:\n self._dict[skey] = int(val)\n except (ValueError, TypeError):\n raise ValueError(\n '\"{}\", wrong type of value for \"{}\"'.format(val,\n skey))\n elif vtyp is bool:\n raise ValueError(\n '\"{}\", wrong type of value for \"{}\"'.format(val, skey))\n elif vtyp != type(val):\n if vtyp == Message:\n if type(val) == dict or isinstance(val, str):\n self._dict[skey] = val\n else:\n raise ValueError(\n '\"{}\", wrong type of value for \"{}\"'.format(\n val, skey))\n else:\n raise ValueError(\n '\"{}\", wrong type of value for \"{}\"'.format(val,\n skey))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef to_json(self, lev=0, indent=None):\n if lev:\n return self.to_dict(lev + 1)\n else:\n return json.dumps(self.to_dict(1), indent=indent)", "response": "Serialize the content of this object into a JSON string."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nconverting from a JSON string to an instance of this class.", "response": "def from_json(self, txt, **kwargs):\n \"\"\"\n Convert from a JSON string to an instance of this class.\n \n :param txt: The JSON string (a ``str``, ``bytes`` or ``bytearray``\n instance containing a JSON document)\n 
:param kwargs: extra keyword arguments\n :return: The instantiated instance \n \"\"\"\n _dict = json.loads(txt)\n return self.from_dict(_dict)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef to_jwt(self, key=None, algorithm=\"\", lev=0, lifetime=0):\n\n _jws = JWS(self.to_json(lev), alg=algorithm)\n return _jws.sign_compact(key)", "response": "Create a signed JWT representation of the class instance"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ngive a signed and or encrypted JWT create a new instance of the class instance.", "response": "def from_jwt(self, txt, keyjar, verify=True, **kwargs):\n \"\"\"\n Given a signed and/or encrypted JWT, verify its correctness and then\n create a class instance from the content.\n\n :param txt: The JWT\n :param key: keys that might be used to decrypt and/or verify the\n signature of the JWT\n :param verify: Whether the signature should be verified or not\n :param keyjar: A KeyJar that might contain the necessary key.\n :param kwargs: Extra key word arguments\n :return: A class instance\n \"\"\"\n\n algarg = {}\n if 'encalg' in kwargs:\n algarg['alg'] = kwargs['encalg']\n if 'encenc' in kwargs:\n algarg['enc'] = kwargs['encenc']\n _decryptor = jwe_factory(txt, **algarg)\n\n if _decryptor:\n logger.debug(\"JWE headers: {}\".format(_decryptor.jwt.headers))\n\n dkeys = keyjar.get_decrypt_key(owner=\"\")\n\n logger.debug('Decrypt class: {}'.format(_decryptor.__class__))\n _res = _decryptor.decrypt(txt, dkeys)\n logger.debug('decrypted message:{}'.format(_res))\n if isinstance(_res, tuple):\n txt = as_unicode(_res[0])\n elif isinstance(_res, list) and len(_res) == 2:\n txt = as_unicode(_res[0])\n else:\n txt = as_unicode(_res)\n self.jwe_header = _decryptor.jwt.headers\n\n try:\n _verifier = jws_factory(txt, alg=kwargs['sigalg'])\n except:\n _verifier = jws_factory(txt)\n\n if _verifier:\n try:\n _jwt = _verifier.jwt\n jso = 
_jwt.payload()\n _header = _jwt.headers\n\n key = []\n\n # if \"sender\" in kwargs:\n # key.extend(keyjar.get_verify_key(owner=kwargs[\"sender\"]))\n\n logger.debug(\"Raw JSON: {}\".format(jso))\n logger.debug(\"JWS header: {}\".format(_header))\n if _header[\"alg\"] == \"none\":\n pass\n elif verify:\n if keyjar:\n key.extend(keyjar.get_jwt_verify_keys(_jwt, **kwargs))\n\n if \"alg\" in _header and _header[\"alg\"] != \"none\":\n if not key:\n raise MissingSigningKey(\n \"alg=%s\" % _header[\"alg\"])\n\n logger.debug(\"Found signing key.\")\n try:\n _verifier.verify_compact(txt, key)\n except NoSuitableSigningKeys:\n if keyjar:\n update_keyjar(keyjar)\n key = keyjar.get_jwt_verify_keys(_jwt, **kwargs)\n _verifier.verify_compact(txt, key)\n except Exception:\n raise\n else:\n self.jws_header = _jwt.headers\n else:\n jso = json.loads(txt)\n\n self.jwt = txt\n return self.from_dict(jso)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nverifies that all the required values are there and that the values are of the correct type.", "response": "def verify(self, **kwargs):\n \"\"\"\n Make sure all the required values are there and that the values are\n of the correct type\n \"\"\"\n _spec = self.c_param\n try:\n _allowed = self.c_allowed_values\n except KeyError:\n _allowed = {}\n\n for (attribute, (typ, required, _, _, na)) in _spec.items():\n if attribute == \"*\":\n continue\n\n try:\n val = self._dict[attribute]\n except KeyError:\n if required:\n raise MissingRequiredAttribute(\"%s\" % attribute)\n continue\n else:\n if typ == bool:\n pass\n elif not val:\n if required:\n raise MissingRequiredAttribute(\"%s\" % attribute)\n continue\n\n try:\n _allowed_val = _allowed[attribute]\n except KeyError:\n pass\n else:\n if not self._type_check(typ, _allowed_val, val, na):\n raise NotAllowedValue(val)\n\n return True"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngives a URL this method will 
add a fragment, a query part or extend a query part if it already exists with the information in this instance. :param location: A URL :param fragment_enc: Whether the information should be placed in a fragment (True) or in a query part (False) :return: The extended URL", "response": "def request(self, location, fragment_enc=False):\n \"\"\"\n Given a URL this method will add a fragment, a query part or extend\n a query part if it already exists with the information in this instance.\n \n :param location: A URL \n :param fragment_enc: Whether the information should be placed in a\n fragment (True) or in a query part (False)\n :return: The extended URL \n \"\"\"\n _l = as_unicode(location)\n _qp = as_unicode(self.to_urlencoded())\n if fragment_enc:\n return \"%s#%s\" % (_l, _qp)\n else:\n if \"?\" in location:\n return \"%s&%s\" % (_l, _qp)\n else:\n return \"%s?%s\" % (_l, _qp)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning the extra parameters that this instance is not listed in the c_params specification.", "response": "def extra(self):\n \"\"\"\n Return the extra parameters that this instance. 
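The `request` method above picks between `#`, `?`, and `&` depending on whether the parameters go into the fragment and whether the URL already has a query part. The same decision can be sketched standalone with stdlib `urlencode` (`extend_url` is an illustrative name, not the library's API):

```python
from urllib.parse import urlencode

def extend_url(location, params, fragment_enc=False):
    """Append params as a fragment, a fresh query string, or an
    extension of an existing query string."""
    qp = urlencode(params)
    if fragment_enc:
        return "%s#%s" % (location, qp)
    sep = "&" if "?" in location else "?"
    return "%s%s%s" % (location, sep, qp)

print(extend_url("https://rp.example/cb", {"state": "xyz"}))
# https://rp.example/cb?state=xyz
```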
Extra meaning those\n that are not listed in the c_params specification.\n \n :return: The key,value pairs for keys that are not in the c_params\n specification,\n \"\"\"\n return dict([(key, val) for key, val in\n self._dict.items() if key not in self.c_param])"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nupdates the information in this instance.", "response": "def update(self, item, **kwargs):\n \"\"\"\n Update the information in this instance.\n \n :param item: a dictionary or a Message instance \n \"\"\"\n if isinstance(item, dict):\n self._dict.update(item)\n elif isinstance(item, Message):\n for key, val in item.items():\n self._dict[key] = val\n else:\n raise ValueError(\"Can't update message using: '%s'\" % (item,))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef to_jwe(self, keys, enc, alg, lev=0):\n\n _jwe = JWE(self.to_json(lev), alg=alg, enc=enc)\n return _jwe.encrypt(keys)", "response": "This method will take the information in this instance and then encrypt it with the given keys."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef from_jwe(self, msg, keys):\n\n jwe = JWE()\n _res = jwe.decrypt(msg, keys)\n return self.from_json(_res.decode())", "response": "Decrypt an encrypted JWT and load the JSON object that was the body\n of the JWT into this object."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef weed(self):\n _ext = [k for k in self._dict.keys() if k not in self.c_param]\n for k in _ext:\n del self._dict[k]", "response": "Remove all key value pairs that are not standard"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nremoving parameters that have no value.", "response": "def rm_blanks(self):\n \"\"\"\n Get rid of parameters that has no value.\n \"\"\"\n _blanks = [k 
for k in self._dict.keys() if not self._dict[k]]\n for key in _blanks:\n del self._dict[key]"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncleans up the path specification so it looks like something I could use.", "response": "def proper_path(path):\n \"\"\"\n Clean up the path specification so it looks like something I could use.\n \"./\" \"/\"\n \"\"\"\n if path.startswith(\"./\"):\n pass\n elif path.startswith(\"/\"):\n path = \".%s\" % path\n elif path.startswith(\".\"):\n while path.startswith(\".\"):\n path = path[1:]\n if path.startswith(\"/\"):\n path = \".%s\" % path\n else:\n path = \"./%s\" % path\n\n if not path.endswith(\"/\"):\n path += \"/\"\n\n return path"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef ansi(color, text):\n code = COLOR_CODES[color]\n return '\\033[1;{0}m{1}{2}'.format(code, text, RESET_TERM)", "response": "Wrap text in an ansi escape sequence"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef require_flush(fun):\n\n @wraps(fun)\n def ensure_flushed(service, *args, **kwargs):\n if service.app_state.needs_db_flush:\n session = db.session()\n if not session._flushing and any(\n isinstance(m, (RoleAssignment, SecurityAudit))\n for models in (session.new, session.dirty, session.deleted)\n for m in models\n ):\n session.flush()\n service.app_state.needs_db_flush = False\n\n return fun(service, *args, **kwargs)\n\n return ensure_flushed", "response": "Decorator for methods that need to query security.\n It ensures all security related operations are flushed to DB."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nquery for a : class:`PermissionAssignment` using the given session and permission role and object.", "response": "def query_pa_no_flush(session, permission, role, obj):\n \"\"\"Query for a :class:`PermissionAssignment` using `session` 
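The `proper_path` helper above normalizes any path spec to the `./…/` form. A standalone copy of the same logic, useful for experimenting with its edge cases:

```python
def proper_path(path):
    """Normalize a path specification to the './.../' form
    (same logic as the proper_path shown in the source)."""
    if path.startswith("./"):
        pass
    elif path.startswith("/"):
        path = ".%s" % path
    elif path.startswith("."):
        while path.startswith("."):
            path = path[1:]
        if path.startswith("/"):
            path = ".%s" % path
    else:
        path = "./%s" % path

    if not path.endswith("/"):
        path += "/"
    return path

print(proper_path("/opt/data"))  # ./opt/data/
print(proper_path("conf"))       # ./conf/
```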
without any\n `flush()`.\n\n It works by looking in session `new`, `dirty` and `deleted`, and issuing a\n query with no autoflush.\n\n .. note::\n\n This function is used by `add_permission` and `delete_permission` to allow\n to add/remove the same assignment twice without issuing any flush. Since\n :class:`Entity` creates its initial permissions in during\n :sqlalchemy:`sqlalchemy.orm.events.SessionEvents.after_attach`, it might be\n problematic to issue a flush when entity is not yet ready to be flushed\n (missing required attributes for example).\n \"\"\"\n to_visit = [session.deleted, session.dirty, session.new]\n with session.no_autoflush:\n # no_autoflush is required to visit PERMISSIONS_ATTR without emitting a\n # flush()\n if obj:\n to_visit.append(getattr(obj, PERMISSIONS_ATTR))\n\n permissions = (\n p for p in chain(*to_visit) if isinstance(p, PermissionAssignment)\n )\n\n for instance in permissions:\n if (\n instance.permission == permission\n and instance.role == role\n and instance.object == obj\n ):\n return instance\n\n # Last chance: perform a filtered query. 
If obj is not None, sometimes\n # getattr(obj, PERMISSIONS_ATTR) has objects not present in session\n # not in this query (maybe in a parent session transaction `new`?).\n if obj is not None and obj.id is None:\n obj = None\n\n return (\n session.query(PermissionAssignment)\n .filter(\n PermissionAssignment.permission == permission,\n PermissionAssignment.role == role,\n PermissionAssignment.object == obj,\n )\n .first()\n )"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn the current user or SYSTEM user.", "response": "def _current_user_manager(self, session=None):\n \"\"\"Return the current user, or SYSTEM user.\"\"\"\n if session is None:\n session = db.session()\n\n try:\n user = g.user\n except Exception:\n return session.query(User).get(0)\n\n if sa.orm.object_session(user) is not session:\n # this can happen when called from a celery task during development\n # (with CELERY_ALWAYS_EAGER=True): the task SA session is not\n # app.db.session, and we should not attach this object to\n # the other session, because it can make weird, hard-to-debug\n # errors related to session.identity_map.\n return session.query(User).get(user.id)\n else:\n return user"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets all the roles attached to a given principal on a given object.", "response": "def get_roles(self, principal, object=None, no_group_roles=False):\n \"\"\"Get all the roles attached to given `principal`, on a given\n `object`.\n\n :param principal: a :class:`User` or :class:`Group`\n\n :param object: an :class:`Entity`\n\n :param no_group_roles: If `True`, return only direct roles, not roles\n acquired through group membership.\n \"\"\"\n assert principal\n if hasattr(principal, \"is_anonymous\") and principal.is_anonymous:\n return [AnonymousRole]\n\n query = db.session.query(RoleAssignment.role)\n if isinstance(principal, Group):\n filter_principal = 
RoleAssignment.group == principal\n else:\n filter_principal = RoleAssignment.user == principal\n if not no_group_roles:\n groups = [g.id for g in principal.groups]\n if groups:\n filter_principal |= RoleAssignment.group_id.in_(groups)\n\n query = query.filter(filter_principal)\n\n if object is not None:\n assert isinstance(object, Entity)\n\n query = query.filter(RoleAssignment.object == object)\n roles = {i[0] for i in query.all()}\n\n if object is not None:\n for attr, role in ((\"creator\", Creator), (\"owner\", Owner)):\n if getattr(object, attr) == principal:\n roles.add(role)\n return list(roles)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_principals(\n self, role, anonymous=True, users=True, groups=True, object=None, as_list=True\n ):\n \"\"\"Return all users which are assigned given role.\"\"\"\n if not isinstance(role, Role):\n role = Role(role)\n assert role\n assert users or groups\n query = RoleAssignment.query.filter_by(role=role)\n\n if not anonymous:\n query = query.filter(RoleAssignment.anonymous == False)\n if not users:\n query = query.filter(RoleAssignment.user == None)\n elif not groups:\n query = query.filter(RoleAssignment.group == None)\n\n query = query.filter(RoleAssignment.object == object)\n principals = {(ra.user or ra.group) for ra in query.all()}\n\n if object is not None and role in (Creator, Owner):\n p = object.creator if role == Creator else object.owner\n if p:\n principals.add(p)\n\n if not as_list:\n return principals\n\n return list(principals)", "response": "Return all users which are assigned given role."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nfill role cache for the given principal.", "response": "def _fill_role_cache(self, principal, overwrite=False):\n \"\"\"Fill role cache for `principal` (User or Group), in order to avoid\n too many queries when checking role access with 'has_role'.\n\n Return 
role_cache of `principal`\n \"\"\"\n if not self.app_state.use_cache:\n return None\n\n if not self._has_role_cache(principal) or overwrite:\n self._set_role_cache(principal, self._all_roles(principal))\n return self._role_cache(principal)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nfill role cache for principals in order to avoid too many queries when checking role access with has_role.", "response": "def _fill_role_cache_batch(self, principals, overwrite=False):\n \"\"\"Fill role cache for `principals` (Users and/or Groups), in order to\n avoid too many queries when checking role access with 'has_role'.\"\"\"\n if not self.app_state.use_cache:\n return\n\n query = db.session.query(RoleAssignment)\n users = {u for u in principals if isinstance(u, User)}\n groups = {g for g in principals if isinstance(g, Group)}\n groups |= {g for u in users for g in u.groups}\n\n if not overwrite:\n users = {u for u in users if not self._has_role_cache(u)}\n groups = {g for g in groups if not self._has_role_cache(g)}\n\n if not (users or groups):\n return\n\n # ensure principals processed here will have role cache. 
Thus users or\n # groups without any role will have an empty role cache, to avoid\n # unneeded individual DB query when calling self._fill_role_cache(p).\n for p in chain(users, groups):\n self._set_role_cache(p, {})\n\n filter_cond = []\n if users:\n filter_cond.append(RoleAssignment.user_id.in_(u.id for u in users))\n if groups:\n filter_cond.append(RoleAssignment.group_id.in_(g.id for g in groups))\n\n query = query.filter(sql.or_(*filter_cond))\n ra_users = {}\n ra_groups = {}\n for ra in query.all():\n if ra.user:\n all_roles = ra_users.setdefault(ra.user, {})\n else:\n all_roles = ra_groups.setdefault(ra.group, {})\n\n object_key = (\n f\"{ra.object.entity_type}:{ra.object_id:d}\"\n if ra.object is not None\n else None\n )\n all_roles.setdefault(object_key, set()).add(ra.role)\n\n for group, all_roles in ra_groups.items():\n self._set_role_cache(group, all_roles)\n\n for user, all_roles in ra_users.items():\n for gr in user.groups:\n group_roles = self._fill_role_cache(gr)\n for object_key, roles in group_roles.items():\n obj_roles = all_roles.setdefault(object_key, set())\n obj_roles |= roles\n\n self._set_role_cache(user, all_roles)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef has_role(self, principal, role, object=None):\n if not principal:\n return False\n\n principal = unwrap(principal)\n if not self.running:\n return True\n\n if isinstance(role, (Role, (str,))):\n role = (role,)\n\n # admin & manager always have role\n valid_roles = frozenset((Admin, Manager) + tuple(role))\n\n if AnonymousRole in valid_roles:\n # everybody has the role 'Anonymous'\n return True\n\n if (\n Authenticated in valid_roles\n and isinstance(principal, User)\n and not principal.is_anonymous\n ):\n return True\n\n if principal is AnonymousRole or (\n hasattr(principal, \"is_anonymous\") and principal.is_anonymous\n ):\n # anonymous user, and anonymous role isn't in valid_roles\n return False\n\n # root 
always have any role\n if isinstance(principal, User) and principal.id == 0:\n return True\n\n if object:\n assert isinstance(object, Entity)\n object_key = \"{}:{}\".format(object.object_type, str(object.id))\n if Creator in role:\n if object.creator == principal:\n return True\n if Owner in role:\n if object.owner == principal:\n return True\n\n else:\n object_key = None\n\n all_roles = (\n self._fill_role_cache(principal)\n if self.app_state.use_cache\n else self._all_roles(principal)\n )\n roles = set()\n roles |= all_roles.get(None, set())\n roles |= all_roles.get(object_key, set())\n return len(valid_roles & roles) > 0", "response": "Returns True if the user has the specified role."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngrants role to user.", "response": "def grant_role(self, principal, role, obj=None):\n \"\"\"Grant `role` to `user` (either globally, if `obj` is None, or on the\n specific `obj`).\"\"\"\n assert principal\n principal = unwrap(principal)\n session = object_session(obj) if obj is not None else db.session\n manager = self._current_user_manager(session=session)\n args = {\n \"role\": role,\n \"object\": obj,\n \"anonymous\": False,\n \"user\": None,\n \"group\": None,\n }\n\n if principal is AnonymousRole or (\n hasattr(principal, \"is_anonymous\") and principal.is_anonymous\n ):\n args[\"anonymous\"] = True\n elif isinstance(principal, User):\n args[\"user\"] = principal\n else:\n args[\"group\"] = principal\n\n query = session.query(RoleAssignment)\n if query.filter_by(**args).limit(1).count():\n # role already granted, nothing to do\n return\n\n # same as above but in current, not yet flushed objects in session. 
We\n # cannot call flush() in grant_role() since this method may be called a\n # great number of times in the same transaction, and sqlalchemy limits\n # to 100 flushes before triggering a warning\n for ra in (\n o\n for models in (session.new, session.dirty)\n for o in models\n if isinstance(o, RoleAssignment)\n ):\n if all(getattr(ra, attr) == val for attr, val in args.items()):\n return\n\n ra = RoleAssignment(**args)\n session.add(ra)\n audit = SecurityAudit(manager=manager, op=SecurityAudit.GRANT, **args)\n if obj is not None:\n audit.object_id = obj.id\n audit.object_type = obj.entity_type\n object_name = \"\"\n for attr_name in (\"name\", \"path\", \"__path_before_delete\"):\n if hasattr(obj, attr_name):\n object_name = getattr(obj, attr_name)\n audit.object_name = object_name\n\n session.add(audit)\n self._needs_flush()\n\n if hasattr(principal, \"__roles_cache__\"):\n del principal.__roles_cache__"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef ungrant_role(self, principal, role, object=None):\n assert principal\n principal = unwrap(principal)\n session = object_session(object) if object is not None else db.session\n manager = self._current_user_manager(session=session)\n\n args = {\n \"role\": role,\n \"object\": object,\n \"anonymous\": False,\n \"user\": None,\n \"group\": None,\n }\n query = session.query(RoleAssignment)\n query = query.filter(\n RoleAssignment.role == role, RoleAssignment.object == object\n )\n\n if principal is AnonymousRole or (\n hasattr(principal, \"is_anonymous\") and principal.is_anonymous\n ):\n args[\"anonymous\"] = True\n query.filter(\n RoleAssignment.anonymous == False,\n RoleAssignment.user == None,\n RoleAssignment.group == None,\n )\n\n elif isinstance(principal, User):\n args[\"user\"] = principal\n query = query.filter(RoleAssignment.user == principal)\n else:\n args[\"group\"] = principal\n query = query.filter(RoleAssignment.group == principal)\n\n ra = query.one()\n 
session.delete(ra)\n audit = SecurityAudit(manager=manager, op=SecurityAudit.REVOKE, **args)\n session.add(audit)\n self._needs_flush()\n self._clear_role_cache(principal)", "response": "Ungrant a role to a user."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nchecks if user has permission.", "response": "def has_permission(self, user, permission, obj=None, inherit=False, roles=None):\n \"\"\"\n :param obj: target object to check permissions.\n :param inherit: check with permission inheritance. By default, check only\n local roles.\n :param roles: additional valid role or iterable of roles having\n `permission`.\n \"\"\"\n if not isinstance(permission, Permission):\n assert permission in PERMISSIONS\n permission = Permission(permission)\n user = unwrap(user)\n\n if not self.running:\n return True\n\n session = None\n if obj is not None:\n session = object_session(obj)\n\n if session is None:\n session = db.session()\n\n # root always have any permission\n if isinstance(user, User) and user.id == 0:\n return True\n\n # valid roles\n # 1: from database\n pa_filter = PermissionAssignment.object == None\n if obj is not None and obj.id is not None:\n pa_filter |= PermissionAssignment.object == obj\n\n pa_filter &= PermissionAssignment.permission == permission\n valid_roles = session.query(PermissionAssignment.role).filter(pa_filter)\n valid_roles = {res[0] for res in valid_roles.yield_per(1000)}\n\n # complete with defaults\n valid_roles |= {Admin} # always have all permissions\n valid_roles |= DEFAULT_PERMISSION_ROLE.get(permission, set())\n\n # FIXME: obj.__class__ could define default permisssion matrix too\n\n if roles is not None:\n if isinstance(roles, (Role,) + (str,)):\n roles = (roles,)\n\n for r in roles:\n valid_roles.add(Role(r))\n\n # FIXME: query permission_role: global and on object\n\n if AnonymousRole in valid_roles:\n return True\n\n if Authenticated in valid_roles and not user.is_anonymous:\n return 
True\n\n # first test global roles, then object local roles\n checked_objs = [None, obj]\n if inherit and obj is not None:\n while obj.inherit_security and obj.parent is not None:\n obj = obj.parent\n checked_objs.append(obj)\n\n principals = [user] + list(user.groups)\n self._fill_role_cache_batch(principals)\n\n return any(\n self.has_role(principal, valid_roles, item)\n for principal in principals\n for item in checked_objs\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef query_entity_with_permission(self, permission, user=None, Model=Entity):\n assert isinstance(permission, Permission)\n assert issubclass(Model, Entity)\n RA = sa.orm.aliased(RoleAssignment)\n PA = sa.orm.aliased(PermissionAssignment)\n # id column from entity table. Model.id would refer to 'model' table.\n # this allows the DB to use indexes / foreign key constraints.\n id_column = sa.inspect(Model).primary_key[0]\n creator = Model.creator\n owner = Model.owner\n\n if not self.running:\n return sa.sql.exists([1])\n\n if user is None:\n user = unwrap(current_user)\n\n # build role CTE\n principal_filter = RA.anonymous == True\n\n if not user.is_anonymous:\n principal_filter |= RA.user == user\n\n if user.groups:\n principal_filter |= RA.group_id.in_([g.id for g in user.groups])\n\n RA = sa.sql.select([RA], principal_filter).cte()\n permission_exists = sa.sql.exists([1]).where(\n sa.sql.and_(\n PA.permission == permission,\n PA.object_id == id_column,\n (RA.c.role == PA.role) | (PA.role == AnonymousRole),\n (RA.c.object_id == PA.object_id) | (RA.c.object_id == None),\n )\n )\n\n # is_admin: self-explanatory. 
It search for local or global admin\n # role, but PermissionAssignment is not involved, thus it can match on\n # entities that don't have *any permission assignment*, whereas previous\n # expressions cannot.\n is_admin = sa.sql.exists([1]).where(\n sa.sql.and_(\n RA.c.role == Admin,\n (RA.c.object_id == id_column) | (RA.c.object_id == None),\n principal_filter,\n )\n )\n\n filter_expr = permission_exists | is_admin\n\n if user and not user.is_anonymous:\n is_owner_or_creator = sa.sql.exists([1]).where(\n sa.sql.and_(\n PA.permission == permission,\n PA.object_id == id_column,\n sa.sql.or_(\n (PA.role == Owner) & (owner == user),\n (PA.role == Creator) & (creator == user),\n ),\n )\n )\n filter_expr |= is_owner_or_creator\n\n return filter_expr", "response": "Filter a query on an Entity or a Model with a permission."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning only roles having this permission", "response": "def get_permissions_assignments(self, obj=None, permission=None):\n \"\"\"\n :param permission: return only roles having this permission\n\n :returns: an dict where keys are `permissions` and values `roles` iterable.\n \"\"\"\n session = None\n if obj is not None:\n assert isinstance(obj, Entity)\n session = object_session(obj)\n\n if obj.id is None:\n obj = None\n\n if session is None:\n session = db.session()\n\n pa = session.query(\n PermissionAssignment.permission, PermissionAssignment.role\n ).filter(PermissionAssignment.object == obj)\n\n if permission:\n pa = pa.filter(PermissionAssignment.permission == permission)\n\n results = {}\n for permission, role in pa.yield_per(1000):\n results.setdefault(permission, set()).add(role)\n\n return results"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef register_asset(self, type_, *assets):\n supported = list(self._assets_bundles.keys())\n if type_ not in supported:\n msg = \"Invalid type: {}. 
Valid types: {}\".format(\n repr(type_), \", \".join(sorted(supported))\n )\n raise KeyError(msg)\n\n for asset in assets:\n if not isinstance(asset, Bundle) and callable(asset):\n asset = asset()\n\n self._assets_bundles[type_].setdefault(\"bundles\", []).append(asset)", "response": "Register webassets bundle to be served on all pages."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef register_i18n_js(self, *paths):\n languages = self.config[\"BABEL_ACCEPT_LANGUAGES\"]\n assets = self.extensions[\"webassets\"]\n\n for path in paths:\n for lang in languages:\n filename = path.format(lang=lang)\n try:\n assets.resolver.search_for_source(assets, filename)\n except IOError:\n pass\n # logger.debug('i18n JS not found, skipped: \"%s\"', filename)\n else:\n self.register_asset(\"js-i18n-\" + lang, filename)", "response": "Register templates path translations files like\n select2_locale_lang. js."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nregisters assets needed by Abilian.", "response": "def register_base_assets(self):\n \"\"\"Register assets needed by Abilian.\n\n This is done in a separate method in order to allow applications\n to redefine it at will.\n \"\"\"\n from abilian.web import assets as bundles\n\n self.register_asset(\"css\", bundles.LESS)\n self.register_asset(\"js-top\", bundles.TOP_JS)\n self.register_asset(\"js\", bundles.JS)\n self.register_i18n_js(*bundles.JS_I18N)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn url to use for this user.", "response": "def user_photo_url(user, size):\n \"\"\"Return url to use for this user.\"\"\"\n endpoint, kwargs = user_url_args(user, size)\n return url_for(endpoint, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef newlogins(sessions):\n if not sessions:\n return [], []\n 
users = {}\n dates = {}\n\n for session in sessions:\n user = session.user\n # time value is discarded to aggregate on days only\n date = session.started_at.strftime(\"%Y/%m/%d\")\n # keep the info only if it's the first time we encounter a user\n if user not in users:\n users[user] = date\n # build the list of users on a given day\n if date not in dates:\n dates[date] = [user]\n else:\n dates[date].append(user)\n\n data = []\n total = []\n previous = 0\n for date in sorted(dates.keys()):\n # print u\"{} : {}\".format(date, len(dates[date]))\n date_epoch = unix_time_millis(datetime.strptime(date, \"%Y/%m/%d\"))\n data.append({\"x\": date_epoch, \"y\": len(dates[date])})\n previous += len(dates[date])\n total.append({\"x\": date_epoch, \"y\": previous})\n\n return data, total", "response": "Brand new logins each day and total of users each day."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef uniquelogins(sessions):\n # sessions = LoginSession.query.order_by(LoginSession.started_at.asc()).all()\n if not sessions:\n return [], [], []\n dates = {}\n for session in sessions:\n user = session.user\n # time value is discarded to aggregate on days only\n date = session.started_at.strftime(\"%Y/%m/%d\")\n\n if date not in dates:\n dates[date] = set() # we want unique users on a given day\n dates[date].add(user)\n else:\n dates[date].add(user)\n\n daily = []\n weekly = []\n monthly = []\n\n for date in sorted(dates.keys()):\n # print u\"{} : {}\".format(date, len(dates[date]))\n date_epoch = unix_time_millis(datetime.strptime(date, \"%Y/%m/%d\"))\n daily.append({\"x\": date_epoch, \"y\": len(dates[date])})\n\n # first_day = data[0]['x']\n # last_day = data[-1]['x']\n\n daily_serie = pd.Series(dates)\n # convert the index to Datetime type\n daily_serie.index = pd.DatetimeIndex(daily_serie.index)\n # calculate the values instead of users lists\n daily_serie = daily_serie.apply(lambda x: len(x))\n\n # GroupBy 
Week/month, Thanks Panda\n weekly_serie = daily_serie.groupby(pd.Grouper(freq=\"W\")).aggregate(numpysum)\n monthly_serie = daily_serie.groupby(pd.Grouper(freq=\"M\")).aggregate(numpysum)\n\n for date, value in weekly_serie.items():\n try:\n value = int(value)\n except ValueError:\n continue\n date_epoch = unix_time_millis(date)\n weekly.append({\"x\": date_epoch, \"y\": value})\n\n for date, value in monthly_serie.items():\n try:\n value = int(value)\n except ValueError:\n continue\n date_epoch = unix_time_millis(date)\n monthly.append({\"x\": date_epoch, \"y\": value})\n\n return daily, weekly, monthly", "response": "Unique logins per day weeks and months."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\naccess control helper to check user s roles against a list of valid roles.", "response": "def allow_access_for_roles(roles):\n \"\"\"Access control helper to check user's roles against a list of valid\n roles.\"\"\"\n if isinstance(roles, Role):\n roles = (roles,)\n valid_roles = frozenset(roles)\n\n if Anonymous in valid_roles:\n return allow_anonymous\n\n def check_role(user, roles, **kwargs):\n from abilian.services import get_service\n\n security = get_service(\"security\")\n return security.has_role(user, valid_roles)\n\n return check_role"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _get_pk_from_identity(obj):\n from sqlalchemy.orm.util import identity_key\n\n cls, key = identity_key(instance=obj)[0:2]\n return \":\".join(text_type(x) for x in key)", "response": "Gets the primary key from an identity object."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nlists all keys with optional prefix filtering.", "response": "def keys(self, prefix=None):\n \"\"\"List all keys, with optional prefix filtering.\"\"\"\n query = Setting.query\n if prefix:\n query = 
query.filter(Setting.key.startswith(prefix))\n\n # don't use iteritems: 'value' require little processing whereas we only\n # want 'key'\n return [i[0] for i in query.yield_per(1000).values(Setting.key)]"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef register(cls):\n if not issubclass(cls, Entity):\n raise ValueError(\"Class must be a subclass of abilian.core.entities.Entity\")\n\n Commentable.register(cls)\n return cls", "response": "Register an entity as a commentable class."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn True if the object is Commentable", "response": "def is_commentable(obj_or_class):\n \"\"\"\n :param obj_or_class: a class or instance\n \"\"\"\n if isinstance(obj_or_class, type):\n return issubclass(obj_or_class, Commentable)\n\n if not isinstance(obj_or_class, Commentable):\n return False\n\n if obj_or_class.id is None:\n return False\n\n return True"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef for_entity(obj, check_commentable=False):\n if check_commentable and not is_commentable(obj):\n return []\n\n return getattr(obj, ATTRIBUTE)", "response": "Return comments on an entity."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ndetermines if this actions is available in this context.", "response": "def available(self, context):\n \"\"\"Determine if this actions is available in this `context`.\n\n :param context: a dict whose content is left to application needs; if\n :attr:`.condition` is a callable it receives `context`\n in parameter.\n \"\"\"\n if not self._enabled:\n return False\n try:\n return self.pre_condition(context) and self._check_condition(context)\n except Exception:\n return False"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn True if the registry has been 
installed in current applications.", "response": "def installed(self, app=None):\n \"\"\"Return `True` if the registry has been installed in current\n applications.\"\"\"\n if app is None:\n app = current_app\n return self.__EXTENSION_NAME in app.extensions"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef register(self, *actions):\n assert self.installed(), \"Actions not enabled on this application\"\n assert all(isinstance(a, Action) for a in actions)\n\n for action in actions:\n cat = action.category\n reg = self._state[\"categories\"].setdefault(cat, [])\n reg.append(action)", "response": "Register actions in the current application."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef actions(self, context=None):\n assert self.installed(), \"Actions not enabled on this application\"\n result = {}\n if context is None:\n context = self.context\n\n for cat, actions in self._state[\"categories\"].items():\n result[cat] = [a for a in actions if a.available(context)]\n return result", "response": "Return a mapping of category => actions list."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn the list of actions for this category in current application.", "response": "def for_category(self, category, context=None):\n \"\"\"Returns actions list for this category in current application.\n\n Actions are filtered according to :meth:`.Action.available`.\n\n if `context` is None, then current action context is used\n (:attr:`context`)\n \"\"\"\n assert self.installed(), \"Actions not enabled on this application\"\n actions = self._state[\"categories\"].get(category, [])\n\n if context is None:\n context = self.context\n\n return [a for a in actions if a.available(context)]"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nconverting date format from babel to datetime. 
strptime.", "response": "def babel2datetime(pattern):\n \"\"\"Convert date format from babel (http://babel.pocoo.org/docs/dates/#date-\n fields)) to a format understood by datetime.strptime.\"\"\"\n if not isinstance(pattern, DateTimePattern):\n pattern = parse_pattern(pattern)\n\n map_fmt = {\n # days\n \"d\": \"%d\",\n \"dd\": \"%d\",\n \"EEE\": \"%a\",\n \"EEEE\": \"%A\",\n \"EEEEE\": \"%a\", # narrow name => short name\n # months\n \"M\": \"%m\",\n \"MM\": \"%m\",\n \"MMM\": \"%b\",\n \"MMMM\": \"%B\",\n # years\n \"y\": \"%Y\",\n \"yy\": \"%Y\",\n \"yyyy\": \"%Y\",\n # hours\n \"h\": \"%I\",\n \"hh\": \"%I\",\n \"H\": \"%H\",\n \"HH\": \"%H\",\n # minutes,\n \"m\": \"%M\",\n \"mm\": \"%M\",\n # seconds\n \"s\": \"%S\",\n \"ss\": \"%S\",\n # am/pm\n \"a\": \"%p\",\n }\n\n return pattern.format % map_fmt"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_active_for(self, user, user_agent=_MARK, ip_address=_MARK):\n conditions = [LoginSession.user == user]\n\n if user_agent is not _MARK:\n if user_agent is None:\n user_agent = request.environ.get(\"HTTP_USER_AGENT\", \"\")\n conditions.append(LoginSession.user_agent == user_agent)\n\n if ip_address is not _MARK:\n if ip_address is None:\n ip_addresses = request.headers.getlist(\"X-Forwarded-For\")\n ip_address = ip_addresses[0] if ip_addresses else request.remote_addr\n conditions.append(LoginSession.ip_address == ip_address)\n\n session = (\n LoginSession.query.filter(*conditions)\n .order_by(LoginSession.id.desc())\n .first()\n )\n return session", "response": "Return the last known session for given user."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nshows the current config.", "response": "def show(only_path=False):\n \"\"\"Show the current config.\"\"\"\n logger.setLevel(logging.INFO)\n infos = [\"\\n\", f'Instance path: \"{current_app.instance_path}\"']\n\n logger.info(\"\\n \".join(infos))\n\n 
if not only_path:\n log_config(current_app.config)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef should_handle(self, event_type, filename):\n return (event_type in (\"modified\", \"created\") and\n filename.startswith(self.searchpath) and\n os.path.isfile(filename))", "response": "Check if an event should be handled."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef protect(view):\n\n @wraps(view)\n def csrf_check(*args, **kwargs):\n # an empty form is used to validate current csrf token and only that!\n if not FlaskForm().validate():\n raise Forbidden(\"CSRF validation failed.\")\n\n return view(*args, **kwargs)\n\n return csrf_check", "response": "Protects a view against CSRF attacks by checking the csrf_token value in\n submitted values."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncreating a new user.", "response": "def createuser(email, password, role=None, name=None, first_name=None):\n \"\"\"Create new user.\"\"\"\n\n if User.query.filter(User.email == email).count() > 0:\n print(f\"A user with email '{email}' already exists, aborting.\")\n return\n\n # if password is None:\n # password = prompt_pass(\"Password\")\n\n user = User(\n email=email,\n password=password,\n last_name=name,\n first_name=first_name,\n can_login=True,\n )\n db.session.add(user)\n\n if role in (\"admin\",):\n # FIXME: add other valid roles\n security = get_service(\"security\")\n security.grant_role(user, role)\n\n db.session.commit()\n print(f\"User {email} added\")"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef auto_slug_on_insert(mapper, connection, target):\n if target.slug is None and target.name:\n target.slug = target.auto_slug", "response": "Generate a slug from Entity. 
auto_slug if it is not already set."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef auto_slug_after_insert(mapper, connection, target):\n if target.slug is None:\n target.slug = \"{name}{sep}{id}\".format(\n name=target.entity_class.lower(), sep=target.SLUG_SEPARATOR, id=target.id\n )", "response": "Generate a slug from entity_type and id unless slug is already set."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef setup_default_permissions(session, instance):\n if instance not in session.new or not isinstance(instance, Entity):\n return\n\n if not current_app:\n # working outside app_context. Raw object manipulation\n return\n\n _setup_default_permissions(instance)", "response": "Setup default permissions on newly created entities according to.\n ."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nseparating method to conveniently call it from scripts for example.", "response": "def _setup_default_permissions(instance):\n \"\"\"Separate method to conveniently call it from scripts for example.\"\"\"\n from abilian.services import get_service\n\n security = get_service(\"security\")\n for permission, roles in instance.__default_permissions__:\n if permission == \"create\":\n # use str for comparison instead of `abilian.services.security.CREATE`\n # symbol to avoid imports that quickly become circular.\n #\n # FIXME: put roles and permissions in a separate, isolated module.\n continue\n for role in roles:\n security.add_permission(permission, role, obj=instance)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef all_entity_classes():\n persistent_classes = Entity._decl_class_registry.values()\n # with sqlalchemy 0.8 _decl_class_registry holds object that are not\n # classes\n return [\n cls for cls in persistent_classes if isclass(cls) and issubclass(cls, Entity)\n ]", 
"response": "Return the list of all concrete persistent classes that are subclasses\n of Entity."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn a string made for indexing roles having READ permission on this object.", "response": "def _indexable_roles_and_users(self):\n \"\"\"Return a string made for indexing roles having :any:`READ`\n permission on this object.\"\"\"\n from abilian.services.indexing import indexable_role\n from abilian.services.security import READ, Admin, Anonymous, Creator, Owner\n from abilian.services import get_service\n\n result = []\n security = get_service(\"security\")\n\n # roles - required to match when user has a global role\n assignments = security.get_permissions_assignments(permission=READ, obj=self)\n allowed_roles = assignments.get(READ, set())\n allowed_roles.add(Admin)\n\n for r in allowed_roles:\n result.append(indexable_role(r))\n\n for role, attr in ((Creator, \"creator\"), (Owner, \"owner\")):\n if role in allowed_roles:\n user = getattr(self, attr)\n if user:\n result.append(indexable_role(user))\n\n # users and groups\n principals = set()\n for user, role in security.get_role_assignements(self):\n if role in allowed_roles:\n principals.add(user)\n\n if Anonymous in principals:\n # it's a role listed in role assignments - legacy when there wasn't\n # permission-role assignments\n principals.remove(Anonymous)\n\n for p in principals:\n result.append(indexable_role(p))\n\n return \" \".join(result)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nindexes tag ids for tags defined in this Entity s default tags namespace.", "response": "def _indexable_tags(self):\n \"\"\"Index tag ids for tags defined in this Entity's default tags\n namespace.\"\"\"\n tags = current_app.extensions.get(\"tags\")\n if not tags or not tags.supports_taggings(self):\n return \"\"\n\n default_ns = tags.entity_default_ns(self)\n return [t for t in tags.entity_tags(self) 
if t.ns == default_ns]"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn a list of (code, countries) alphabetically sorted on localized country name.", "response": "def country_choices(first=None, default_country_first=True):\n \"\"\"Return a list of (code, countries), alphabetically sorted on localized\n country name.\n\n :param first: Country code to be placed at the top\n :param default_country_first:\n :type default_country_first: bool\n \"\"\"\n locale = _get_locale()\n territories = [\n (code, name) for code, name in locale.territories.items() if len(code) == 2\n ] # skip 3-digit regions\n\n if first is None and default_country_first:\n first = default_country()\n\n def sortkey(item):\n if first is not None and item[0] == first:\n return \"0\"\n return to_lower_ascii(item[1])\n\n territories.sort(key=sortkey)\n return territories"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef supported_app_locales():\n locale = _get_locale()\n codes = current_app.config[\"BABEL_ACCEPT_LANGUAGES\"]\n return ((Locale.parse(code), locale.languages.get(code, code)) for code in codes)", "response": "Returns an iterable of tuples containing locale codes and labels supported by current application."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn an iterable of tuples with code and label for each timezone in current locale.", "response": "def timezones_choices():\n \"\"\"Timezones values and their labels for current locale.\n\n :return: an iterable of `(code, label)`, code being a timezone code and label\n the timezone name in current locale.\n \"\"\"\n utcnow = pytz.utc.localize(datetime.utcnow())\n locale = _get_locale()\n for tz in sorted(pytz.common_timezones):\n tz = get_timezone(tz)\n now = tz.normalize(utcnow.astimezone(tz))\n label = \"({}) {}\".format(get_timezone_gmt(now, locale=locale), tz.zone)\n yield (tz, label)"} 
{"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns the correct gettext translations that should be used for this request.", "response": "def _get_translations_multi_paths():\n \"\"\"Return the correct gettext translations that should be used for this\n request.\n\n This will never fail and return a dummy translation object if used\n outside of the request or if a translation cannot be found.\n \"\"\"\n ctx = _request_ctx_stack.top\n if ctx is None:\n return None\n\n translations = getattr(ctx, \"babel_translations\", None)\n if translations is None:\n babel_ext = ctx.app.extensions[\"babel\"]\n translations = None\n trs = None\n\n # reverse order: thus the application catalog is loaded last, so that\n # translations from libraries can be overriden\n for (dirname, domain) in reversed(babel_ext._translations_paths):\n trs = Translations.load(\n dirname, locales=[flask_babel.get_locale()], domain=domain\n )\n\n # babel.support.Translations is a subclass of\n # babel.support.NullTranslations, so we test if object has a 'merge'\n # method\n\n if not trs or not hasattr(trs, \"merge\"):\n # got None or NullTranslations instance\n continue\n elif translations is not None and hasattr(translations, \"merge\"):\n translations.merge(trs)\n else:\n translations = trs\n\n # ensure translations is at least a NullTranslations object\n if translations is None:\n translations = trs\n\n ctx.babel_translations = translations\n\n return translations"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ndefault locale selector used in abilian applications.", "response": "def localeselector():\n \"\"\"Default locale selector used in abilian applications.\"\"\"\n # if a user is logged in, use the locale from the user settings\n user = getattr(g, \"user\", None)\n if user is not None:\n locale = getattr(user, \"locale\", None)\n if locale:\n return locale\n\n # Otherwise, try to guess the language from the user accept 
header the browser\n # transmits. By default we support en/fr. The best match wins.\n return request.accept_languages.best_match(\n current_app.config[\"BABEL_ACCEPT_LANGUAGES\"]\n )"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_template_i18n(template_name, locale):\n if locale is None:\n return [template_name]\n\n template_list = []\n parts = template_name.rsplit(\".\", 1)\n root = parts[0]\n suffix = parts[1]\n\n if locale.territory is not None:\n locale_string = \"_\".join([locale.language, locale.territory])\n localized_template_path = \".\".join([root, locale_string, suffix])\n template_list.append(localized_template_path)\n\n localized_template_path = \".\".join([root, locale.language, suffix])\n template_list.append(localized_template_path)\n\n # append the default\n template_list.append(template_name)\n return template_list", "response": "Build a list of templates with preceding locale if found."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ntries to build an ordered list of template to satisfy the current locale.", "response": "def render_template_i18n(template_name_or_list, **context):\n \"\"\"Try to build an ordered list of template to satisfy the current\n locale.\"\"\"\n template_list = []\n # Use locale if present in **context\n if \"locale\" in context:\n locale = Locale.parse(context[\"locale\"])\n else:\n # Use get_locale() or default_locale\n locale = flask_babel.get_locale()\n\n if isinstance(template_name_or_list, str):\n template_list = get_template_i18n(template_name_or_list, locale)\n else:\n # Search for locale for each member of the list, do not bypass\n for template in template_name_or_list:\n template_list.extend(get_template_i18n(template, locale))\n\n with ensure_request_context(), force_locale(locale):\n return render_template(template_list, **context)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the 
following Python 3 code does\ndef add_translations(\n self, module_name, translations_dir=\"translations\", domain=\"messages\"\n ):\n \"\"\"Add translations from external module.\n\n For example::\n\n babel.add_translations('abilian.core')\n\n Will add translations files from `abilian.core` module.\n \"\"\"\n module = importlib.import_module(module_name)\n for path in (Path(p, translations_dir) for p in module.__path__):\n if not (path.exists() and path.is_dir()):\n continue\n\n if not os.access(str(path), os.R_OK):\n self.app.logger.warning(\n \"Babel translations: read access not allowed {}, skipping.\"\n \"\".format(repr(str(path).encode(\"utf-8\")))\n )\n continue\n\n self._translations_paths.append((str(path), domain))", "response": "Add translations from external module."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get(self, attach, *args, **kwargs):\n response = self.make_response(*args, **kwargs) # type: Response\n response.content_type = self.get_content_type(*args, **kwargs)\n\n if attach:\n filename = self.get_filename(*args, **kwargs)\n if not filename:\n filename = \"file.bin\"\n headers = response.headers\n headers.add(\"Content-Disposition\", \"attachment\", filename=filename)\n\n self.set_cache_headers(response)\n return response", "response": "Returns a response object with the content - type and filename as the content - disposition header."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\npings the connection to the database.", "response": "def ping_connection(dbapi_connection, connection_record, connection_proxy):\n \"\"\"Ensure connections are valid.\n\n From: `http://docs.sqlalchemy.org/en/rel_0_8/core/pooling.html`\n\n In case db has been restarted pool may return invalid connections.\n \"\"\"\n cursor = dbapi_connection.cursor()\n try:\n cursor.execute(\"SELECT 1\")\n except Exception:\n # optional - dispose the whole pool\n 
# instead of invalidating one at a time\n # connection_proxy._pool.dispose()\n\n # raise DisconnectionError - pool will try\n # connecting again up to three times before raising.\n raise sa.exc.DisconnectionError()\n cursor.close()"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning column names for a model except named ones.", "response": "def filter_cols(model, *filtered_columns):\n \"\"\"Return column names for a model except named ones.\n\n Useful for defer() for example to retain only columns of interest\n \"\"\"\n m = sa.orm.class_mapper(model)\n return list(\n {p.key for p in m.iterate_properties if hasattr(p, \"columns\")}.difference(\n filtered_columns\n )\n )"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef JSONList(*args, **kwargs):\n type_ = JSON\n try:\n if kwargs.pop(\"unique_sorted\"):\n type_ = JSONUniqueListType\n except KeyError:\n pass\n\n return MutationList.as_mutable(type_(*args, **kwargs))", "response": "Stores a list as JSON on database with mutability support."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nconverting plain dictionaries to MutationDict.", "response": "def coerce(cls, key, value):\n \"\"\"Convert plain dictionaries to MutationDict.\"\"\"\n if not isinstance(value, MutationDict):\n if isinstance(value, dict):\n return MutationDict(value)\n\n # this call will raise ValueError\n return Mutable.coerce(key, value)\n else:\n return value"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef coerce(cls, key, value):\n if not isinstance(value, MutationList):\n if isinstance(value, list):\n return MutationList(value)\n\n # this call will raise ValueError\n return Mutable.coerce(key, value)\n else:\n return value", "response": "Convert list to MutationList."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nadd a new file.", 
"response": "def add_file(self, user, file_obj, **metadata):\n \"\"\"Add a new file.\n\n :returns: file handle\n \"\"\"\n user_dir = self.user_dir(user)\n if not user_dir.exists():\n user_dir.mkdir(mode=0o775)\n\n handle = str(uuid1())\n file_path = user_dir / handle\n\n with file_path.open(\"wb\") as out:\n for chunk in iter(lambda: file_obj.read(CHUNK_SIZE), b\"\"):\n out.write(chunk)\n\n if metadata:\n meta_file = user_dir / f\"{handle}.metadata\"\n with meta_file.open(\"wb\") as out:\n metadata_json = json.dumps(metadata, skipkeys=True).encode(\"ascii\")\n out.write(metadata_json)\n\n return handle"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nretrieve a file for a user.", "response": "def get_file(self, user, handle):\n \"\"\"Retrieve a file for a user.\n\n :returns: a :class:`pathlib.Path` instance to this file,\n or None if no file can be found for this handle.\n \"\"\"\n user_dir = self.user_dir(user)\n if not user_dir.exists():\n return None\n\n if not is_valid_handle(handle):\n return None\n\n file_path = user_dir / handle\n\n if not file_path.exists() and not file_path.is_file():\n return None\n\n return file_path"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef clear_stalled_files(self):\n # FIXME: put lock in directory?\n CLEAR_AFTER = self.config[\"DELETE_STALLED_AFTER\"]\n minimum_age = time.time() - CLEAR_AFTER\n\n for user_dir in self.UPLOAD_DIR.iterdir():\n if not user_dir.is_dir():\n logger.error(\"Found non-directory in upload dir: %r\", bytes(user_dir))\n continue\n\n for content in user_dir.iterdir():\n if not content.is_file():\n logger.error(\n \"Found non-file in user upload dir: %r\", bytes(content)\n )\n continue\n\n if content.stat().st_ctime < minimum_age:\n content.unlink()", "response": "Scan upload directory and delete stalled files."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it 
does\ndef friendly_fqcn(cls_name):\n if isinstance(cls_name, type):\n cls_name = fqcn(cls_name)\n\n return cls_name.rsplit(\".\", 1)[-1]", "response": "Friendly name of fully qualified class name."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef local_dt(dt):\n if not dt.tzinfo:\n dt = pytz.utc.localize(dt)\n return LOCALTZ.normalize(dt.astimezone(LOCALTZ))", "response": "Return a naive or aware datetime in system timezone."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nsets UTC timezone on a datetime object.", "response": "def utc_dt(dt):\n \"\"\"Set UTC timezone on a datetime object.\n\n A naive datetime is assumed to be in UTC TZ.\n \"\"\"\n if not dt.tzinfo:\n return pytz.utc.localize(dt)\n return dt.astimezone(pytz.utc)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns a dictionary with params from request.", "response": "def get_params(names):\n \"\"\"Return a dictionary with params from request.\n\n TODO: I think we don't use it anymore and it should be removed\n before someone gets hurt.\n \"\"\"\n params = {}\n for name in names:\n value = request.form.get(name) or request.files.get(name)\n if value is not None:\n params[name] = value\n return params"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef luhn(n):\n r = [int(ch) for ch in str(n)][::-1]\n return (sum(r[0::2]) + sum(sum(divmod(d * 2, 10)) for d in r[1::2])) % 10 == 0", "response": "Validate that a string made of numeric characters verify Luhn test. Used by siret validator."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef siret_validator():\n\n def _validate_siret(form, field, siret=\"\"):\n \"\"\"SIRET validator.\n\n A WTForm validator wants a form and a field as parameters. 
We\n also want to give directly a siret, for a scripting use.\n \"\"\"\n if field is not None:\n siret = (field.data or \"\").strip()\n\n if len(siret) != 14:\n msg = _(\"SIRET must have exactly 14 characters ({count})\").format(\n count=len(siret)\n )\n raise validators.ValidationError(msg)\n\n if not all((\"0\" <= c <= \"9\") for c in siret):\n if not siret[-3:] in SIRET_CODES:\n msg = _(\n \"SIRET looks like special SIRET but geographical \"\n \"code seems invalid (%(code)s)\",\n code=siret[-3:],\n )\n raise validators.ValidationError(msg)\n\n elif not luhn(siret):\n msg = _(\"SIRET number is invalid (length is ok: verify numbers)\")\n raise validators.ValidationError(msg)\n\n return _validate_siret", "response": "Validate a SIRET field."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns absolute path for given uuid.", "response": "def abs_path(self, uuid):\n # type: (UUID) -> Path\n \"\"\"Return absolute :class:`Path` object for given uuid.\n\n :param:uuid: :class:`UUID` instance\n \"\"\"\n top = self.app_state.path\n rel_path = self.rel_path(uuid)\n dest = top / rel_path\n assert top in dest.parents\n return dest"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning absolute path object for given uuid if this uuid does not exist in repository or default if it doesn t exist.", "response": "def get(self, uuid, default=None):\n # type: (UUID, Optional[Path]) -> Optional[Path]\n \"\"\"Return absolute :class:`Path` object for given uuid, if this uuid\n exists in repository, or `default` if it doesn't.\n\n :param:uuid: :class:`UUID` instance\n \"\"\"\n path = self.abs_path(uuid)\n if not path.exists():\n return default\n return path"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nstore binary content with uuid as key.", "response": "def set(self, uuid, content, encoding=\"utf-8\"):\n # type: (UUID, Any, Optional[Text]) -> None\n \"\"\"Store binary 
content with uuid as key.\n\n :param:uuid: :class:`UUID` instance\n :param:content: string, bytes, or any object with a `read()` method\n :param:encoding: encoding to use when content is Unicode\n \"\"\"\n dest = self.abs_path(uuid)\n if not dest.parent.exists():\n dest.parent.mkdir(0o775, parents=True)\n\n if hasattr(content, \"read\"):\n content = content.read()\n\n mode = \"tw\"\n if not isinstance(content, str):\n mode = \"bw\"\n encoding = None\n\n with dest.open(mode, encoding=encoding) as f:\n f.write(content)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef delete(self, uuid):\n # type: (UUID) -> None\n \"\"\"Delete file with given uuid.\n\n :param:uuid: :class:`UUID` instance\n :raises:KeyError if file does not exists\n \"\"\"\n dest = self.abs_path(uuid)\n if not dest.exists():\n raise KeyError(\"No file can be found for this uuid\", uuid)\n\n dest.unlink()", "response": "Delete file with given uuid."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nsets the given session as the current transaction.", "response": "def set_transaction(self, session, transaction):\n # type: (Session, RepositoryTransaction) -> None\n \"\"\"\n :param:session: :class:`sqlalchemy.orm.session.Session` instance\n :param:transaction: :class:`RepositoryTransaction` instance\n \"\"\"\n if isinstance(session, sa.orm.scoped_session):\n session = session()\n\n s_id = id(session)\n self.transactions[s_id] = (weakref.ref(session), transaction)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn a session instance for the specified object parameter.", "response": "def _session_for(self, model_or_session):\n \"\"\"Return session instance for object parameter.\n\n If parameter is a session instance, it is return as is.\n If parameter is a registered model instance, its session will be used.\n\n If parameter is a detached model instance, or None, 
application scoped\n session will be used (db.session())\n\n If parameter is a scoped_session instance, a new session will be\n instanciated.\n \"\"\"\n session = model_or_session\n if not isinstance(session, (Session, sa.orm.scoped_session)):\n if session is not None:\n session = sa.orm.object_session(model_or_session)\n\n if session is None:\n session = db.session\n\n if isinstance(session, sa.orm.scoped_session):\n session = session()\n\n return session"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef commit(self, session=None):\n if self.__cleared:\n return\n\n if self._parent:\n # nested transaction\n self._commit_parent()\n else:\n self._commit_repository()\n self._clear()", "response": "Commit all modified objects into parent transaction."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _add_to(self, uuid, dest, other):\n _assert_uuid(uuid)\n try:\n other.remove(uuid)\n except KeyError:\n pass\n dest.add(uuid)", "response": "Add item to dest set ensuring item is not present in other set."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef ns(ns):\n\n def setup_ns(cls):\n setattr(cls, ENTITY_DEFAULT_NS_ATTR, ns)\n return cls\n\n return setup_ns", "response": "Class decorator that sets default tags namespace to use with its\n instances."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef tags_from_hit(self, tag_ids):\n ids = []\n for t in tag_ids.split():\n t = t.strip()\n try:\n t = int(t)\n except ValueError:\n pass\n else:\n ids.append(t)\n\n if not ids:\n return []\n\n return Tag.query.filter(Tag.id.in_(ids)).all()", "response": "Returns an iterable of Tag instances from a hit."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nconstructs a form class with a field for tags in namespace ns.", "response": "def 
entity_tags_form(self, entity, ns=None):\n \"\"\"Construct a form class with a field for tags in namespace `ns`.\"\"\"\n if ns is None:\n ns = self.entity_default_ns(entity)\n\n field = TagsField(label=_l(\"Tags\"), ns=ns)\n cls = type(\"EntityNSTagsForm\", (_TagsForm,), {\"tags\": field})\n return cls"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns all tags for the namespace ns ordered by label.", "response": "def get(self, ns, label=None):\n \"\"\"Return :class:`tags instances<~Tag>` for the namespace `ns`, ordered\n by label.\n\n If `label` is not None the only one instance may be returned, or\n `None` if no tags exists for this label.\n \"\"\"\n query = Tag.query.filter(Tag.ns == ns)\n\n if label is not None:\n return query.filter(Tag.label == label).first()\n\n return query.all()"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_form_context(self, obj, ns=None):\n return {\n \"url\": url_for(\"entity_tags.edit\", object_id=obj.id),\n \"form\": self.entity_tags_form(obj)(obj=obj, ns=ns),\n \"buttons\": [EDIT_BUTTON],\n }", "response": "Return a dict of form instance action button submit url..."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef reindex(clear: bool, progressive: bool, batch_size: int):\n reindexer = Reindexer(clear, progressive, batch_size)\n reindexer.reindex_all()", "response": "Reindex all content ; optionally clear index before."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef url_for_hit(hit, default=\"#\"):\n try:\n object_type = hit[\"object_type\"]\n object_id = int(hit[\"id\"])\n return current_app.default_view.url_for(hit, object_type, object_id)\n except KeyError:\n return default\n except Exception:\n logger.error(\"Error building URL for search result\", exc_info=True)\n return default", "response": "Helper 
for building URLs from results."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nupdate the items in an index.", "response": "def index_update(index, items):\n \"\"\"\n :param:index: index name\n :param:items: list of (operation, full class name, primary key, data) tuples.\n \"\"\"\n index_name = index\n index = service.app_state.indexes[index_name]\n adapted = service.adapted\n\n session = safe_session()\n updated = set()\n writer = AsyncWriter(index)\n try:\n for op, cls_name, pk, data in items:\n if pk is None:\n continue\n\n # always delete. Whoosh manual says that 'update' is actually delete + add\n # operation\n object_key = f\"{cls_name}:{pk}\"\n writer.delete_by_term(\"object_key\", object_key)\n\n adapter = adapted.get(cls_name)\n if not adapter:\n # FIXME: log to sentry?\n continue\n\n if object_key in updated:\n # don't add twice the same document in same transaction. The writer will\n # not delete previous records, ending in duplicate records for same\n # document.\n continue\n\n if op in (\"new\", \"changed\"):\n with session.begin(nested=True):\n obj = adapter.retrieve(pk, _session=session, **data)\n\n if obj is None:\n # deleted after task queued, but before task run\n continue\n\n document = service.get_document(obj, adapter)\n try:\n writer.add_document(**document)\n except ValueError:\n # logger is here to give us more infos in order to catch a weird bug\n # that happens regularly on CI but is not reliably\n # reproductible.\n logger.error(\"writer.add_document(%r)\", document, exc_info=True)\n raise\n updated.add(object_key)\n except Exception:\n writer.cancel()\n raise\n\n session.close()\n writer.commit()\n try:\n # async thread: wait for its termination\n writer.join()\n except RuntimeError:\n # happens when actual writer was already available: asyncwriter didn't need\n # to start a thread\n pass"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef 
init_indexes(self):\n state = self.app_state\n\n for name, schema in self.schemas.items():\n if current_app.testing:\n storage = TestingStorage()\n else:\n index_path = (Path(state.whoosh_base) / name).absolute()\n if not index_path.exists():\n index_path.mkdir(parents=True)\n storage = FileStorage(str(index_path))\n\n if storage.index_exists(name):\n index = FileIndex(storage, schema, name)\n else:\n index = FileIndex.create(storage, schema, name)\n\n state.indexes[name] = index", "response": "Create indexes for schemas."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nremoving all content from indexes and unregister all classes.", "response": "def clear(self):\n \"\"\"Remove all content from indexes, and unregister all classes.\n\n After clear() the service is stopped. It must be started again\n to create new indexes and register classes.\n \"\"\"\n logger.info(\"Resetting indexes\")\n state = self.app_state\n\n for _name, idx in state.indexes.items():\n writer = AsyncWriter(idx)\n writer.commit(merge=True, optimize=True, mergetype=CLEAR)\n\n state.indexes.clear()\n state.indexed_classes.clear()\n state.indexed_fqcn.clear()\n self.clear_update_queue()\n\n if self.running:\n self.stop()"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef searchable_object_types(self):\n try:\n idx = self.index()\n except KeyError:\n # index does not exists: service never started, may happens during\n # tests\n return []\n\n with idx.reader() as r:\n indexed = sorted(set(r.field_terms(\"object_type\")))\n app_indexed = self.app_state.indexed_fqcn\n\n return [(name, friendly_fqcn(name)) for name in indexed if name in app_indexed]", "response": "List of object types present in the index."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef search(\n self,\n q,\n index=\"default\",\n fields=None,\n Models=(),\n object_types=(),\n prefix=True,\n 
facet_by_type=None,\n **search_args,\n ):\n \"\"\"Interface to search indexes.\n\n :param q: unparsed search string.\n :param index: name of index to use for search.\n :param fields: optionnal mapping of field names -> boost factor?\n :param Models: list of Model classes to limit search on.\n :param object_types: same as `Models`, but directly the model string.\n :param prefix: enable or disable search by prefix\n :param facet_by_type: if set, returns a dict of object_type: results with a\n max of `limit` matches for each type.\n :param search_args: any valid parameter for\n :meth:`whoosh.searching.Search.search`. This includes `limit`,\n `groupedby` and `sortedby`\n \"\"\"\n index = self.app_state.indexes[index]\n if not fields:\n fields = self.default_search_fields\n\n valid_fields = {\n f\n for f in index.schema.names(check_names=fields)\n if prefix or not f.endswith(\"_prefix\")\n }\n\n for invalid in set(fields) - valid_fields:\n del fields[invalid]\n\n parser = DisMaxParser(fields, index.schema)\n query = parser.parse(q)\n\n filters = search_args.setdefault(\"filter\", None)\n filters = [filters] if filters is not None else []\n del search_args[\"filter\"]\n\n if not hasattr(g, \"is_manager\") or not g.is_manager:\n # security access filter\n user = current_user\n roles = {indexable_role(user)}\n if not user.is_anonymous:\n roles.add(indexable_role(Anonymous))\n roles.add(indexable_role(Authenticated))\n roles |= {indexable_role(r) for r in security.get_roles(user)}\n\n filter_q = wq.Or(\n [wq.Term(\"allowed_roles_and_users\", role) for role in roles]\n )\n filters.append(filter_q)\n\n object_types = set(object_types)\n for m in Models:\n object_type = m.entity_type\n if not object_type:\n continue\n object_types.add(object_type)\n\n if object_types:\n object_types &= self.app_state.indexed_fqcn\n else:\n # ensure we don't show content types previously indexed but not yet\n # cleaned from index\n object_types = self.app_state.indexed_fqcn\n\n # limit 
object_type\n filter_q = wq.Or([wq.Term(\"object_type\", t) for t in object_types])\n filters.append(filter_q)\n\n for func in self.app_state.search_filter_funcs:\n filter_q = func()\n if filter_q is not None:\n filters.append(filter_q)\n\n if filters:\n filter_q = wq.And(filters) if len(filters) > 1 else filters[0]\n # search_args['filter'] = filter_q\n query = filter_q & query\n\n if facet_by_type:\n if not object_types:\n object_types = [t[0] for t in self.searchable_object_types()]\n\n # limit number of documents to score, per object type\n collapse_limit = 5\n search_args[\"groupedby\"] = \"object_type\"\n search_args[\"collapse\"] = \"object_type\"\n search_args[\"collapse_limit\"] = collapse_limit\n search_args[\"limit\"] = search_args[\"collapse_limit\"] * max(\n len(object_types), 1\n )\n\n with index.searcher(closereader=False) as searcher:\n # 'closereader' is needed, else results cannot by used outside 'with'\n # statement\n results = searcher.search(query, **search_args)\n\n if facet_by_type:\n positions = {\n doc_id: pos\n for pos, doc_id in enumerate(i[1] for i in results.top_n)\n }\n sr = results\n results = {}\n for typename, doc_ids in sr.groups(\"object_type\").items():\n results[typename] = [\n sr[positions[oid]] for oid in doc_ids[:collapse_limit]\n ]\n\n return results", "response": "Search for a single entry in the index."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef register_class(self, cls, app_state=None):\n state = app_state if app_state is not None else self.app_state\n\n for Adapter in self.adapters_cls:\n if Adapter.can_adapt(cls):\n break\n else:\n return\n\n cls_fqcn = fqcn(cls)\n self.adapted[cls_fqcn] = Adapter(cls, self.schemas[\"default\"])\n state.indexed_classes.add(cls)\n state.indexed_fqcn.add(cls_fqcn)", "response": "Register a model class."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef 
index_objects(self, objects, index=\"default\"):\n if not objects:\n return\n\n index_name = index\n index = self.app_state.indexes[index_name]\n indexed = set()\n\n with index.writer() as writer:\n for obj in objects:\n document = self.get_document(obj)\n if document is None:\n continue\n\n object_key = document[\"object_key\"]\n if object_key in indexed:\n continue\n\n writer.delete_by_term(\"object_key\", object_key)\n try:\n writer.add_document(**document)\n except ValueError:\n # logger is here to give us more info in order to catch a weird bug\n # that happens regularly on CI but is not reliably\n # reproducible.\n logger.error(\"writer.add_document(%r)\", document, exc_info=True)\n raise\n indexed.add(object_key)", "response": "Bulk index a list of objects."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef linkify_url(value):\n\n value = value.strip()\n\n rjs = r\"[\\s]*(&#x.{1,7})?\".join(list(\"javascript:\"))\n rvb = r\"[\\s]*(&#x.{1,7})?\".join(list(\"vbscript:\"))\n re_scripts = re.compile(f\"({rjs})|({rvb})\", re.IGNORECASE)\n\n value = re_scripts.sub(\"\", value)\n\n url = value\n if not url.startswith(\"http://\") and not url.startswith(\"https://\"):\n url = \"http://\" + url\n\n url = parse.urlsplit(url).geturl()\n if '\"' in url:\n url = url.split('\"')[0]\n if \"<\" in url:\n url = url.split(\"<\")[0]\n\n if value.startswith(\"http://\"):\n value = value[len(\"http://\") :]\n elif value.startswith(\"https://\"):\n value = value[len(\"https://\") :]\n\n if value.count(\"/\") == 1 and value.endswith(\"/\"):\n value = value[0:-1]\n\n return '<a href=\"{}\">{}</a> '.format(\n url, value\n )", "response": "Transform a URL pulled from the database to a safe HTML fragment."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_preferences(self, user=None):\n if user is None:\n user = current_user\n return {pref.key: pref.value for pref in user.preferences}", 
"response": "Return a string - > value dictionnary representing the given user s preferences."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nsetting preferences from keyword arguments.", "response": "def set_preferences(self, user=None, **kwargs):\n \"\"\"Set preferences from keyword arguments.\"\"\"\n if user is None:\n user = current_user\n\n d = {pref.key: pref for pref in user.preferences}\n for k, v in kwargs.items():\n if k in d:\n d[k].value = v\n else:\n d[k] = UserPreference(user=user, key=k, value=v)\n db.session.add(d[k])"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn the AttachmentsManager instance for this object.", "response": "def manager(self, obj):\n \"\"\"Returns the :class:`AttachmentsManager` instance for this object.\"\"\"\n manager = getattr(obj, _MANAGER_ATTR, None)\n if manager is None:\n manager = AttachmentsManager()\n setattr(obj.__class__, _MANAGER_ATTR, manager)\n\n return manager"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn a dict of form instance action button submit url...", "response": "def get_form_context(self, obj):\n \"\"\"Return a dict: form instance, action button, submit url...\n\n Used by macro m_attachment_form(entity)\n \"\"\"\n return {\n \"url\": url_for(\"attachments.create\", entity_id=obj.id),\n \"form\": self.Form(),\n \"buttons\": [UPLOAD_BUTTON],\n }"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_entities_for_reindex(tags):\n if isinstance(tags, Tag):\n tags = (tags,)\n\n session = db.session()\n indexing = get_service(\"indexing\")\n tbl = Entity.__table__\n tag_ids = [t.id for t in tags]\n query = (\n sa.sql.select([tbl.c.entity_type, tbl.c.id])\n .select_from(tbl.join(entity_tag_tbl, entity_tag_tbl.c.entity_id == tbl.c.id))\n .where(entity_tag_tbl.c.tag_id.in_(tag_ids))\n )\n\n entities = set()\n\n with session.no_autoflush:\n for entity_type, entity_id in 
session.execute(query):\n if entity_type not in indexing.adapted:\n logger.debug(\"%r is not indexed, skipping\", entity_type)\n continue\n\n item = (\"changed\", entity_type, entity_id, ())\n entities.add(item)\n\n return entities", "response": "Collect entities for these tags."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef schedule_entities_reindex(entities):\n entities = [(e[0], e[1], e[2], dict(e[3])) for e in entities]\n return index_update.apply_async(kwargs={\"index\": \"default\", \"items\": entities})", "response": "Schedule entities to be reindexed."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef scan(self, file_or_stream):\n if not clamd:\n return None\n\n res = self._scan(file_or_stream)\n if isinstance(file_or_stream, Blob):\n file_or_stream.meta[\"antivirus\"] = res\n return res", "response": "Scan the file for viruses and store the result in the object's meta attribute."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef check_output(*args, **kwargs):\n '''Compatibility wrapper for Python 2.6 missing subprocess.check_output'''\n if hasattr(subprocess, 'check_output'):\n return subprocess.check_output(stderr=subprocess.STDOUT, universal_newlines=True,\n *args, **kwargs)\n else:\n process = subprocess.Popen(*args, stdout=subprocess.PIPE, stderr=subprocess.STDOUT,\n universal_newlines=True, **kwargs)\n output, _ = process.communicate()\n retcode = process.poll()\n if retcode:\n error = subprocess.CalledProcessError(retcode, args[0])\n error.output = output\n raise error\n return output", "response": "Compatibility wrapper for Python 2.6 missing subprocess.
check_output"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef prepare_args(self, args, kwargs):\n args, kwargs = super().prepare_args(args, kwargs)\n form_kwargs = self.get_form_kwargs()\n self.form = self.Form(**form_kwargs)\n return args, kwargs", "response": "This method is called by the view class to prepare the arguments for the view."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncall when form is validated.", "response": "def form_valid(self, redirect_to=None):\n \"\"\"Save object.\n\n Called when form is validated.\n\n :param redirect_to: real url (created with url_for) to redirect to,\n instead of the view by default.\n \"\"\"\n session = db.session()\n\n with session.no_autoflush:\n self.before_populate_obj()\n self.form.populate_obj(self.obj)\n session.add(self.obj)\n self.after_populate_obj()\n\n try:\n session.flush()\n self.send_activity()\n session.commit()\n\n except ValidationError as e:\n rv = self.handle_commit_exception(e)\n if rv is not None:\n return rv\n session.rollback()\n flash(str(e), \"error\")\n return self.get()\n\n except sa.exc.IntegrityError as e:\n rv = self.handle_commit_exception(e)\n if rv is not None:\n return rv\n session.rollback()\n logger.error(e)\n flash(_(\"An entity with this name already exists in the system.\"), \"error\")\n return self.get()\n\n else:\n self.commit_success()\n flash(self.message_success(), \"success\")\n\n if redirect_to:\n return redirect(redirect_to)\n else:\n return self.redirect_to_view()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning a result item with at least id and text values.", "response": "def get_item(self, obj):\n \"\"\"Return a result item.\n\n :param obj: Instance object\n :returns: a dictionnary with at least `id` and `text` values\n \"\"\"\n return {\"id\": obj.id, \"text\": self.get_label(obj), \"name\": obj.name}"} {"SOURCE": "codesearchnet", 
"instruction": "Write a Python 3 script to\nparse pip requirements file", "response": "def pip(name):\n '''Parse requirements file'''\n with io.open(os.path.join('requirements', '{0}.pip'.format(name))) as f:\n return f.readlines()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nconverting the image from one file to another.", "response": "def convert(self, blob, size=500):\n \"\"\"Size is the maximum horizontal size.\"\"\"\n file_list = []\n with make_temp_file(blob) as in_fn, make_temp_file() as out_fn:\n try:\n subprocess.check_call([\"pdftoppm\", \"-jpeg\", in_fn, out_fn])\n file_list = sorted(glob.glob(f\"{out_fn}-*.jpg\"))\n\n converted_images = []\n for fn in file_list:\n converted = resize(open(fn, \"rb\").read(), size, size)\n converted_images.append(converted)\n\n return converted_images\n except Exception as e:\n raise ConversionError(\"pdftoppm failed\") from e\n finally:\n for fn in file_list:\n try:\n os.remove(fn)\n except OSError:\n pass"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef convert(self, blob, **kw):\n timeout = self.run_timeout\n with make_temp_file(blob) as in_fn, make_temp_file(\n prefix=\"tmp-unoconv-\", suffix=\".pdf\"\n ) as out_fn:\n\n args = [\"-f\", \"pdf\", \"-o\", out_fn, in_fn]\n # Hack for my Mac, FIXME later\n if Path(\"/Applications/LibreOffice.app/Contents/program/python\").exists():\n cmd = [\n \"/Applications/LibreOffice.app/Contents/program/python\",\n \"/usr/local/bin/unoconv\",\n ] + args\n else:\n cmd = [self.unoconv] + args\n\n def run_uno():\n try:\n self._process = subprocess.Popen(\n cmd, close_fds=True, cwd=bytes(self.tmp_dir)\n )\n self._process.communicate()\n except Exception as e:\n logger.error(\"run_uno error: %s\", bytes(e), exc_info=True)\n raise ConversionError(\"unoconv failed\") from e\n\n run_thread = threading.Thread(target=run_uno)\n run_thread.start()\n run_thread.join(timeout)\n\n try:\n if 
run_thread.is_alive():\n # timeout reached\n self._process.terminate()\n if self._process.poll() is not None:\n try:\n self._process.kill()\n except OSError:\n logger.warning(\"Failed to kill process %s\", self._process)\n\n self._process = None\n raise ConversionError(f\"Conversion timeout ({timeout})\")\n\n converted = open(out_fn).read()\n return converted\n finally:\n self._process = None", "response": "Convert a PDF file using Unoconv."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef convert(self, blob, **kw):\n timeout = self.run_timeout\n with make_temp_file(blob) as in_fn:\n\n cmd = [self.soffice, \"--headless\", \"--convert-to\", \"pdf\", in_fn]\n\n # # TODO: fix this if needed, or remove if not needed\n # if os.path.exists(\n # \"/Applications/LibreOffice.app/Contents/program/python\"):\n # cmd = [\n # '/Applications/LibreOffice.app/Contents/program/python',\n # '/usr/local/bin/unoconv', '-f', 'pdf', '-o', out_fn, in_fn\n # ]\n\n def run_soffice():\n try:\n self._process = subprocess.Popen(\n cmd, close_fds=True, cwd=bytes(self.tmp_dir)\n )\n self._process.communicate()\n except Exception as e:\n logger.error(\"soffice error: %s\", bytes(e), exc_info=True)\n raise ConversionError(\"soffice conversion failed\") from e\n\n run_thread = threading.Thread(target=run_soffice)\n run_thread.start()\n run_thread.join(timeout)\n\n try:\n if run_thread.is_alive():\n # timeout reached\n self._process.terminate()\n if self._process.poll() is not None:\n try:\n self._process.kill()\n except OSError:\n logger.warning(\"Failed to kill process %s\", self._process)\n\n self._process = None\n raise ConversionError(f\"Conversion timeout ({timeout})\")\n\n out_fn = os.path.splitext(in_fn)[0] + \".pdf\"\n converted = open(out_fn, \"rb\").read()\n return converted\n finally:\n self._process = None", "response": "Convert a file using soffice converter."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief 
explanation for the following Python 3 code\ndef paragraphs(value):\n result = \"\\n\\n\".join(\n \"<p>{}</p>\".format(p.strip().replace(\"\\n\", Markup(\"<br />
\\n\")))\n for p in _PARAGRAPH_RE.split(escape(value))\n )\n return result", "response": "Blank lines delimitates paragraphs."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef roughsize(size, above=20, mod=10):\n if size < above:\n return str(size)\n\n return \"{:d}+\".format(size - size % mod)", "response": "Returns a rough size string."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef datetimeparse(s):\n try:\n dt = dateutil.parser.parse(s)\n except ValueError:\n return None\n\n return utc_dt(dt)", "response": "Parse a string date time to a datetime object."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef age(dt, now=None, add_direction=True, date_threshold=None):\n # Fail silently for now XXX\n if not dt:\n return \"\"\n\n if not now:\n now = datetime.datetime.utcnow()\n\n locale = babel.get_locale()\n dt = utc_dt(dt)\n now = utc_dt(now)\n delta = dt - now\n\n if date_threshold is not None:\n dy, dw, dd = dt_cal = dt.isocalendar()\n ny, nw, nd = now_cal = now.isocalendar()\n\n if dt_cal != now_cal:\n # not same day\n remove_year = dy == ny\n date_fmt = locale.date_formats[\"long\"].pattern\n time_fmt = locale.time_formats[\"short\"].pattern\n fmt = locale.datetime_formats[\"medium\"]\n\n if remove_year:\n date_fmt = date_fmt.replace(\"y\", \"\").strip()\n # remove leading or trailing spaces, comma, etc...\n date_fmt = re.sub(\"^[^A-Za-z]*|[^A-Za-z]*$\", \"\", date_fmt)\n\n fmt = fmt.format(time_fmt, date_fmt)\n return babel.format_datetime(dt, format=fmt)\n\n # don't use (flask.ext.)babel.format_timedelta: as of Flask-Babel 0.9 it\n # doesn't support \"threshold\" arg.\n return format_timedelta(\n delta,\n locale=locale,\n granularity=\"minute\",\n threshold=0.9,\n add_direction=add_direction,\n )", "response": "Returns a string representation of the current date or time."} {"SOURCE": 
"codesearchnet", "instruction": "Can you create a Python 3 function that\nconverts date format from babel to bootstrap - datepicker.", "response": "def babel2datepicker(pattern):\n \"\"\"Convert date format from babel (http://babel.pocoo.org/docs/dates/#date-\n fields)) to a format understood by bootstrap-datepicker.\"\"\"\n if not isinstance(pattern, DateTimePattern):\n pattern = parse_pattern(pattern)\n\n map_fmt = {\n # days\n \"d\": \"dd\",\n \"dd\": \"dd\",\n \"EEE\": \"D\",\n \"EEEE\": \"DD\",\n \"EEEEE\": \"D\", # narrow name => short name\n # months\n \"M\": \"mm\",\n \"MM\": \"mm\",\n \"MMM\": \"M\",\n \"MMMM\": \"MM\",\n # years\n \"y\": \"yyyy\",\n \"yy\": \"yyyy\",\n \"yyy\": \"yyyy\",\n \"yyyy\": \"yyyy\",\n # time picker format\n # hours\n \"h\": \"%I\",\n \"hh\": \"%I\",\n \"H\": \"%H\",\n \"HH\": \"%H\",\n # minutes,\n \"m\": \"%M\",\n \"mm\": \"%M\",\n # seconds\n \"s\": \"%S\",\n \"ss\": \"%S\",\n # am/pm\n \"a\": \"%p\",\n }\n\n return pattern.format % map_fmt"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns the current state of the service in current application.", "response": "def app_state(self):\n \"\"\"Current service state in current application.\n\n :raise:RuntimeError if working outside application context.\n \"\"\"\n try:\n return current_app.extensions[self.name]\n except KeyError:\n raise ServiceNotRegistered(self.name)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef if_running(meth):\n\n @wraps(meth)\n def check_running(self, *args, **kwargs):\n if not self.running:\n return\n return meth(self, *args, **kwargs)\n\n return check_running", "response": "Decorator for service methods that must be ran only if service is in\n running state."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\npublishes the current release to PyPI", "response": "def publish(self):\n '''Publish the current release to 
PyPI'''\n if self.config.publish:\n logger.info('Publish')\n self.execute(self.config.publish)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef success(text):\n '''Display a success message'''\n print(' '.join((green('\u2714'), white(text))))\n sys.stdout.flush()", "response": "Display a success message"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ndisplaying an error message", "response": "def error(text):\n '''Display an error message'''\n print(red('\u2718 {0}'.format(text)))\n sys.stdout.flush()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ninstall or update development dependencies", "response": "def deps(ctx):\n '''Install or update development dependencies'''\n header(deps.__doc__)\n with ctx.cd(ROOT):\n ctx.run('pip install -r requirements/develop.pip -r requirements/doc.pip', pty=True)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef qa(ctx):\n '''Run a quality report'''\n header(qa.__doc__)\n with ctx.cd(ROOT):\n info('Python Static Analysis')\n flake8_results = ctx.run('flake8 bumpr', pty=True, warn=True)\n if flake8_results.failed:\n error('There is some lints to fix')\n else:\n success('No lint to fix')\n info('Ensure PyPI can render README and CHANGELOG')\n readme_results = ctx.run('python setup.py check -r -s', pty=True, warn=True, hide=True)\n if readme_results.failed:\n print(readme_results.stdout)\n error('README and/or CHANGELOG is not renderable by PyPI')\n else:\n success('README and CHANGELOG are renderable by PyPI')\n if flake8_results.failed or readme_results.failed:\n exit('Quality check failed', flake8_results.return_code or readme_results.return_code)\n success('Quality check OK')", "response": "Run a quality report"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef 
completion(ctx):\n '''Generate bash completion script'''\n header(completion.__doc__)\n with ctx.cd(ROOT):\n ctx.run('_bumpr_COMPLETE=source bumpr > bumpr-complete.sh', pty=True)\n success('Completion generated in bumpr-complete.sh')", "response": "Generate bash completion script"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nchecking if the current user has permission to view.", "response": "def _check_view_permission(self, view):\n \"\"\"\n :param view: a :class:`ObjectView` class or instance\n \"\"\"\n security = get_service(\"security\")\n return security.has_permission(current_user, view.permission, self.obj)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef register(cls):\n if not issubclass(cls, Entity):\n raise ValueError(\"Class must be a subclass of abilian.core.entities.Entity\")\n\n SupportTagging.register(cls)\n return cls", "response": "Register an entity as a taggable class."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef supports_tagging(obj):\n if isinstance(obj, type):\n return issubclass(obj, SupportTagging)\n\n if not isinstance(obj, SupportTagging):\n return False\n\n if obj.id is None:\n return False\n\n return True", "response": "returns True if the object supports tagging"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef register(cls):\n if not issubclass(cls, Entity):\n raise ValueError(\"Class must be a subclass of abilian.core.entities.Entity\")\n\n SupportAttachment.register(cls)\n return cls", "response": "Register an entity as a attachmentable class."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn True if the object supports attachments.", "response": "def supports_attachments(obj):\n \"\"\"\n :param obj: a class or instance\n\n :returns: True is obj supports attachments.\n 
\"\"\"\n if isinstance(obj, type):\n return issubclass(obj, SupportAttachment)\n\n if not isinstance(obj, SupportAttachment):\n return False\n\n if obj.id is None:\n return False\n\n return True"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns a list of attachments on an entity.", "response": "def for_entity(obj, check_support_attachments=False):\n \"\"\"Return attachments on an entity.\"\"\"\n if check_support_attachments and not supports_attachments(obj):\n return []\n\n return getattr(obj, ATTRIBUTE)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nstripping labels at ORM level so the unique = True means something.", "response": "def strip_label(mapper, connection, target):\n \"\"\"Strip labels at ORM level so the unique=True means something.\"\"\"\n if target.label is not None:\n target.label = target.label.strip()"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _before_insert(mapper, connection, target):\n if target.position is None:\n func = sa.sql.func\n stmt = sa.select([func.coalesce(func.max(mapper.mapped_table.c.position), -1)])\n target.position = connection.execute(stmt).scalar() + 1", "response": "Set item to last position if position not defined."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef by_label(self, label):\n # don't use .first(), so that MultipleResultsFound can be raised\n try:\n return self.filter_by(label=label).one()\n except sa.orm.exc.NoResultFound:\n return None", "response": "Like. 
get but by label."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nlike get but by position number.", "response": "def by_position(self, position):\n \"\"\"Like `.get()`, but by position number.\"\"\"\n # don't use .first(), so that MultipleResultsFound can be raised\n try:\n return self.filter_by(position=position).one()\n except sa.orm.exc.NoResultFound:\n return None"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _remove_session_save_objects(self):\n if self.testing:\n return\n # Before destroying the session, get all instances to be attached to the\n # new session. Without this, we get DetachedInstance errors, like when\n # tryin to get user's attribute in the error page...\n old_session = db.session()\n g_objs = []\n for key in iter(g):\n obj = getattr(g, key)\n if isinstance(obj, db.Model) and sa.orm.object_session(obj) in (\n None,\n old_session,\n ):\n g_objs.append((key, obj, obj in old_session.dirty))\n\n db.session.remove()\n session = db.session()\n\n for key, obj, load in g_objs:\n # replace obj instance in bad session by new instance in fresh\n # session\n setattr(g, key, session.merge(obj, load=load))\n\n # refresh `current_user`\n user = getattr(_request_ctx_stack.top, \"user\", None)\n if user is not None and isinstance(user, db.Model):\n _request_ctx_stack.top.user = session.merge(user, load=load)", "response": "Used during exception handling in case we need to remove session and merge instances in the new session."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef log_exception(self, exc_info):\n dsn = self.config.get(\"SENTRY_DSN\")\n if not dsn:\n super().log_exception(exc_info)", "response": "Log exception only if Sentry is not used."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef init_sentry(self):\n dsn = 
self.config.get(\"SENTRY_DSN\")\n if not dsn:\n return\n\n try:\n import sentry_sdk\n except ImportError:\n logger.error(\n 'SENTRY_DSN is defined in config but package \"sentry-sdk\"'\n \" is not installed.\"\n )\n return\n\n from sentry_sdk.integrations.flask import FlaskIntegration\n\n sentry_sdk.init(dsn=dsn, integrations=[FlaskIntegration()])", "response": "Install the Sentry handler if config defines SENTRY_DSN."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef install_default_handler(self, http_error_code):\n logger.debug(\n \"Set Default HTTP error handler for status code %d\", http_error_code\n )\n handler = partial(self.handle_http_error, http_error_code)\n self.errorhandler(http_error_code)(handler)", "response": "Install a default error handler for the given HTTP error code."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef register(self, entity, url_func):\n if not inspect.isclass(entity):\n entity = entity.__class__\n assert issubclass(entity, db.Model)\n self._map[entity.entity_type] = url_func", "response": "Associate a url_func with an entity's type."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn canonical view URL for given entity instance.", "response": "def url_for(self, entity=None, object_type=None, object_id=None, **kwargs):\n \"\"\"Return canonical view URL for given entity instance.\n\n If no view has been registered the registry will try to find an\n endpoint named with entity's class lowercased followed by '.view'\n and that accepts `object_id=entity.id` to generate a URL.\n\n :param entity: an instance of a subclass of\n :class:`abilian.core.extensions.db.Model`,\n :class:`whoosh.searching.Hit` or :class:`python:dict`\n\n :param object_id: if `entity` is not an instance, this parameter\n must be set to target id. 
This is useful when you know the type and\n id of an object but don't want to retrieve it from DB.\n\n :raise KeyError: if no view can be found for the given entity.\n \"\"\"\n if object_type is None:\n assert isinstance(entity, (db.Model, Hit, dict))\n getter = attrgetter if isinstance(entity, db.Model) else itemgetter\n object_id = getter(\"id\")(entity)\n object_type = getter(\"object_type\")(entity)\n\n url_func = self._map.get(object_type) # type: Optional[Callable]\n if url_func is not None:\n return url_func(entity, object_type, object_id, **kwargs)\n\n try:\n return url_for(\n \"{}.view\".format(object_type.rsplit(\".\")[-1].lower()),\n object_id=object_id,\n **kwargs\n )\n except Exception:\n raise KeyError(object_type)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef file(self):\n from abilian.services.repository import session_repository as repository\n\n return repository.get(self, self.uuid)", "response": "Return the path object used for storing the value."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef value(self):\n # type: () -> bytes\n \"\"\"Binary value content.\"\"\"\n v = self.file\n return v.open(\"rb\").read() if v is not None else v", "response": "Return the binary value content."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nstores binary content to the application's repository and updates self.meta.
mimetype.", "response": "def value(self, value, encoding=\"utf-8\"):\n \"\"\"Store binary content to the application's repository and update\n `self.meta['md5']`.\n\n :param content: string, bytes, or any object with a `read()` method\n :param encoding: encoding to use when content is Unicode\n \"\"\"\n from abilian.services.repository import session_repository as repository\n\n repository.set(self, self.uuid, value)\n self.meta[\"md5\"] = str(hashlib.md5(self.value).hexdigest())\n\n if hasattr(value, \"filename\"):\n filename = value.filename\n if isinstance(filename, bytes):\n filename = filename.decode(\"utf-8\")\n self.meta[\"filename\"] = filename\n\n if hasattr(value, \"content_type\"):\n self.meta[\"mimetype\"] = value.content_type"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nremoves value from repository.", "response": "def value(self):\n \"\"\"Remove value from repository.\"\"\"\n from abilian.services.repository import session_repository as repository\n\n repository.delete(self, self.uuid)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef md5(self):\n md5 = self.meta.get(\"md5\")\n if md5 is None:\n md5 = str(hashlib.md5(self.value).hexdigest())\n\n return md5", "response": "Return md5 from meta or compute it if absent."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreset the password for users who have already activated their accounts.", "response": "def forgotten_pw(new_user=False):\n \"\"\"Reset password for users who have already activated their accounts.\"\"\"\n email = request.form.get(\"email\", \"\").lower()\n\n action = request.form.get(\"action\")\n if action == \"cancel\":\n return redirect(url_for(\"login.login_form\"))\n\n if not email:\n flash(_(\"You must provide your email address.\"), \"error\")\n return render_template(\"login/forgotten_password.html\")\n\n try:\n user = 
User.query.filter(\n sql.func.lower(User.email) == email, User.can_login == True\n ).one()\n except NoResultFound:\n flash(\n _(\"Sorry, we couldn't find an account for \" \"email '{email}'.\").format(\n email=email\n ),\n \"error\",\n )\n return render_template(\"login/forgotten_password.html\"), 401\n\n if user.can_login and not user.password:\n user.set_password(random_password())\n db.session.commit()\n\n send_reset_password_instructions(user)\n flash(\n _(\"Password reset instructions have been sent to your email address.\"), \"info\"\n )\n\n return redirect(url_for(\"login.login_form\"))"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nsends the reset password instructions email for the specified user.", "response": "def send_reset_password_instructions(user):\n # type: (User) -> None\n \"\"\"Send the reset password instructions email for the specified user.\n\n :param user: The user to send the instructions to\n \"\"\"\n token = generate_reset_password_token(user)\n url = url_for(\"login.reset_password\", token=token)\n reset_link = request.url_root[:-1] + url\n\n subject = _(\"Password reset instruction for {site_name}\").format(\n site_name=current_app.config.get(\"SITE_NAME\")\n )\n mail_template = \"password_reset_instructions\"\n send_mail(subject, user.email, mail_template, user=user, reset_link=reset_link)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngenerating a unique reset password token for the specified user.", "response": "def generate_reset_password_token(user):\n # type: (User) -> Any\n \"\"\"Generate a unique reset password token for the specified user.\n\n :param user: The user to work with\n \"\"\"\n data = [str(user.id), md5(user.password)]\n return get_serializer(\"reset\").dumps(data)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nsend an email using the Flask - Mail extension.", "response": "def send_mail(subject, recipient, 
template, **context):\n \"\"\"Send an email using the Flask-Mail extension.\n\n :param subject: Email subject\n :param recipient: Email recipient\n :param template: The name of the email template\n :param context: The context to render the template with\n \"\"\"\n\n config = current_app.config\n sender = config[\"MAIL_SENDER\"]\n msg = Message(subject, sender=sender, recipients=[recipient])\n\n template_name = f\"login/email/{template}.txt\"\n msg.body = render_template_i18n(template_name, **context)\n # msg.html = render_template('%s/%s.html' % ctx, **context)\n\n mail = current_app.extensions.get(\"mail\")\n current_app.logger.debug(\"Sending mail...\")\n mail.send(msg)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nload and register a plugin given its package name.", "response": "def register_plugin(self, name):\n \"\"\"Load and register a plugin given its package name.\"\"\"\n logger.info(\"Registering plugin: \" + name)\n module = importlib.import_module(name)\n module.register_plugin(self)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nloading plugins listed in config variable PLUGINS.", "response": "def register_plugins(self):\n \"\"\"Load plugins listed in config variable 'PLUGINS'.\"\"\"\n registered = set()\n for plugin_fqdn in chain(self.APP_PLUGINS, self.config[\"PLUGINS\"]):\n if plugin_fqdn not in registered:\n self.register_plugin(plugin_fqdn)\n registered.add(plugin_fqdn)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ninserts the first element in breadcrumbs.", "response": "def init_breadcrumbs(self):\n \"\"\"Insert the first element in breadcrumbs.\n\n This happens during `request_started` event, which is triggered\n before any url_value_preprocessor and `before_request` handlers.\n \"\"\"\n g.breadcrumb.append(BreadcrumbItem(icon=\"home\", url=\"/\" + request.script_root))"} {"SOURCE": "codesearchnet", "instruction": "Given the following 
Python 3 function, write the documentation\ndef check_instance_folder(self, create=False):\n path = Path(self.instance_path)\n err = None\n eno = 0\n\n if not path.exists():\n if create:\n logger.info(\"Create instance folder: %s\", path)\n path.mkdir(0o775, parents=True)\n else:\n err = \"Instance folder does not exists\"\n eno = errno.ENOENT\n elif not path.is_dir():\n err = \"Instance folder is not a directory\"\n eno = errno.ENOTDIR\n elif not os.access(str(path), os.R_OK | os.W_OK | os.X_OK):\n err = 'Require \"rwx\" access rights, please verify permissions'\n eno = errno.EPERM\n\n if err:\n raise OSError(eno, err, str(path))", "response": "Verify that the instance folder exists is a directory and has necessary permissions."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef init_extensions(self):\n extensions.redis.init_app(self)\n extensions.mail.init_app(self)\n extensions.deferred_js.init_app(self)\n extensions.upstream_info.extension.init_app(self)\n actions.init_app(self)\n\n # auth_service installs a `before_request` handler (actually it's\n # flask-login). We want to authenticate user ASAP, so that sentry and\n # logs can report which user encountered any error happening later,\n # in particular in a before_request handler (like csrf validator)\n auth_service.init_app(self)\n\n # webassets\n self.setup_asset_extension()\n self.register_base_assets()\n\n # Babel (for i18n)\n babel = abilian.i18n.babel\n # Temporary (?) 
workaround\n babel.locale_selector_func = None\n babel.timezone_selector_func = None\n\n babel.init_app(self)\n babel.add_translations(\"wtforms\", translations_dir=\"locale\", domain=\"wtforms\")\n babel.add_translations(\"abilian\")\n babel.localeselector(abilian.i18n.localeselector)\n babel.timezoneselector(abilian.i18n.timezoneselector)\n\n # Flask-Migrate\n Migrate(self, db)\n\n # CSRF by default\n if self.config.get(\"WTF_CSRF_ENABLED\"):\n extensions.csrf.init_app(self)\n self.extensions[\"csrf\"] = extensions.csrf\n extensions.abilian_csrf.init_app(self)\n\n self.register_blueprint(csrf.blueprint)\n\n # images blueprint\n from .web.views.images import blueprint as images_bp\n\n self.register_blueprint(images_bp)\n\n # Abilian Core services\n security_service.init_app(self)\n repository_service.init_app(self)\n session_repository_service.init_app(self)\n audit_service.init_app(self)\n index_service.init_app(self)\n activity_service.init_app(self)\n preferences_service.init_app(self)\n conversion_service.init_app(self)\n vocabularies_service.init_app(self)\n antivirus.init_app(self)\n\n from .web.preferences.user import UserPreferencesPanel\n\n preferences_service.register_panel(UserPreferencesPanel(), self)\n\n from .web.coreviews import users\n\n self.register_blueprint(users.blueprint)\n\n # Admin interface\n Admin().init_app(self)\n\n # Celery async service\n # this allows all shared tasks to use this celery app\n if getattr(self, \"celery_app_cls\", None):\n celery_app = self.extensions[\"celery\"] = self.celery_app_cls()\n # force reading celery conf now - default celery app will\n # also update our config with default settings\n celery_app.conf # noqa\n celery_app.set_default()\n\n # dev helper\n if self.debug:\n # during dev, one can go to /http_error/403 to see rendering of 403\n http_error_pages = Blueprint(\"http_error_pages\", __name__)\n\n @http_error_pages.route(\"/\")\n def error_page(code):\n \"\"\"Helper for development to show 403, 404, 
500...\"\"\"\n abort(code)\n\n self.register_blueprint(http_error_pages, url_prefix=\"/http_error\")", "response": "Initialize flask extensions helpers and services."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nadding a url rule to the current flask instance.", "response": "def add_url_rule(self, rule, endpoint=None, view_func=None, roles=None, **options):\n \"\"\"See :meth:`Flask.add_url_rule`.\n\n If `roles` parameter is present, it must be a\n :class:`abilian.service.security.models.Role` instance, or a list of\n Role instances.\n \"\"\"\n super().add_url_rule(rule, endpoint, view_func, **options)\n\n if roles:\n self.add_access_controller(\n endpoint, allow_access_for_roles(roles), endpoint=True\n )"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef add_access_controller(self, name: str, func: Callable, endpoint: bool = False):\n auth_state = self.extensions[auth_service.name]\n adder = auth_state.add_bp_access_controller\n\n if endpoint:\n adder = auth_state.add_endpoint_access_controller\n if not isinstance(name, str):\n msg = \"{} is not a valid endpoint name\".format(repr(name))\n raise ValueError(msg)\n\n adder(name, func)", "response": "Add an access controller function to the application level."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nadd a new url rule for static files.", "response": "def add_static_url(self, url_path, directory, endpoint=None, roles=None):\n \"\"\"Add a new url rule for static files.\n\n :param url_path: subpath from application static url path. 
No heading\n or trailing slash.\n :param directory: directory to serve content from.\n :param endpoint: flask endpoint name for this url rule.\n\n Example::\n\n app.add_static_url('myplugin',\n '/path/to/myplugin/resources',\n endpoint='myplugin_static')\n\n With default setup it will serve content from directory\n `/path/to/myplugin/resources` from url `http://.../static/myplugin`\n \"\"\"\n url_path = self.static_url_path + \"/\" + url_path + \"/\"\n self.add_url_rule(\n url_path,\n endpoint=endpoint,\n view_func=partial(send_file_from_directory, directory=directory),\n roles=roles,\n )\n self.add_access_controller(\n endpoint, allow_access_for_roles(Anonymous), endpoint=True\n )"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _message_send(self, connection):\n sender = current_app.config[\"MAIL_SENDER\"]\n if not self.extra_headers:\n self.extra_headers = {}\n self.extra_headers[\"Sender\"] = sender\n connection.send(self, sender)", "response": "Send a single message instance."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _filter_metadata_for_connection(target, connection, **kw):\n engine = connection.engine.name\n default_engines = (engine,)\n tables = target if isinstance(target, sa.Table) else kw.get(\"tables\", [])\n for table in tables:\n indexes = list(table.indexes)\n for idx in indexes:\n if engine not in idx.info.get(\"engines\", default_engines):\n table.indexes.remove(idx)", "response": "Filter the metadata for a connection."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef url_for(obj, **kw):\n if isinstance(obj, str):\n return flask_url_for(obj, **kw)\n\n try:\n return current_app.default_view.url_for(obj, **kw)\n except KeyError:\n if hasattr(obj, \"_url\"):\n return obj._url\n elif hasattr(obj, \"url\"):\n return obj.url\n\n raise BuildError(repr(obj), kw, \"GET\")", "response": 
"Polymorphic variant of Flask s url_for function."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef send_file_from_directory(filename, directory, app=None):\n if app is None:\n app = current_app\n cache_timeout = app.get_send_file_max_age(filename)\n return send_from_directory(directory, filename, cache_timeout=cache_timeout)", "response": "Send a file from a directory."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngetting models modified at the given date with the Audit service.", "response": "def get_model_changes(\n entity_type, year=None, month=None, day=None, hour=None, since=None\n):\n # type: (Text, int, int, int, int, datetime) -> Query\n \"\"\"Get models modified at the given date with the Audit service.\n\n :param entity_type: string like \"extranet_medicen.apps.crm.models.Compte\".\n Beware the typo, there won't be a warning message.\n :param since: datetime\n :param year: int\n :param month: int\n :param day: int\n :param hour: int\n\n :returns: a query object\n \"\"\"\n query = AuditEntry.query\n\n if since:\n query = query.filter(AuditEntry.happened_at >= since)\n\n if year:\n query = query.filter(extract(\"year\", AuditEntry.happened_at) == year)\n if month:\n query = query.filter(extract(\"month\", AuditEntry.happened_at) == month)\n if day:\n query = query.filter(extract(\"day\", AuditEntry.happened_at) == day)\n if hour:\n query = query.filter(extract(\"hour\", AuditEntry.happened_at) == hour)\n\n query = query.filter(AuditEntry.entity_type.like(entity_type)).order_by(\n AuditEntry.happened_at\n )\n\n return query"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nadding the changed columns as a diff attribute.", "response": "def get_columns_diff(changes):\n \"\"\"Add the changed columns as a diff attribute.\n\n - changes: a list of changes (get_model_changes query.all())\n\n Return: the same list, to which elements we added a \"diff\"\n 
attribute containing the changed columns. Diff defaults to [].\n \"\"\"\n for change in changes:\n change.diff = []\n elt_changes = change.get_changes()\n if elt_changes:\n change.diff = elt_changes.columns\n\n return changes"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _has_argument(func):\n if hasattr(inspect, 'signature'):\n # New way in python 3.3\n sig = inspect.signature(func)\n return bool(sig.parameters)\n else:\n # Old way\n return bool(inspect.getargspec(func).args)", "response": "Test whether a function expects an argument."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncreate a new Site object.", "response": "def make_site(cls,\n searchpath=\"templates\",\n outpath=\".\",\n contexts=None,\n rules=None,\n encoding=\"utf8\",\n followlinks=True,\n extensions=None,\n staticpaths=None,\n filters=None,\n env_globals=None,\n env_kwargs=None,\n mergecontexts=False):\n \"\"\"Create a :class:`Site ` object.\n\n :param searchpath:\n A string representing the absolute path to the directory that the\n Site should search to discover templates. Defaults to\n ``'templates'``.\n\n If a relative path is provided, it will be coerced to an absolute\n path by prepending the directory name of the calling module. For\n example, if you invoke staticjinja using ``python build.py`` in\n directory ``/foo``, then *searchpath* will be ``/foo/templates``.\n\n :param outpath:\n A string representing the name of the directory that the Site\n should store rendered files in. Defaults to ``'.'``.\n\n :param contexts:\n A list of *(regex, context)* pairs. The Site will render templates\n whose name match *regex* using *context*. *context* must be either\n a dictionary-like object or a function that takes either no\n arguments or a single :class:`jinja2.Template` as an argument and\n returns a dictionary representing the context. 
Defaults to ``[]``.\n\n :param rules:\n A list of *(regex, function)* pairs. The Site will delegate\n rendering to *function* if *regex* matches the name of a template\n during rendering. *function* must take a\n :class:`jinja2.Environment` object, a filename, and a context as\n parameters and render the template. Defaults to ``[]``.\n\n :param encoding:\n A string representing the encoding that the Site should use when\n rendering templates. Defaults to ``'utf8'``.\n\n :param followlinks:\n A boolean describing whether symlinks in searchpath should be\n followed or not. Defaults to ``True``.\n\n :param extensions:\n A list of :ref:`Jinja extensions ` that the\n :class:`jinja2.Environment` should use. Defaults to ``[]``.\n\n :param staticpaths:\n List of directories to get static files from (relative to\n searchpath). Defaults to ``None``.\n\n :param filters:\n A dictionary of Jinja2 filters to add to the Environment. Defaults\n to ``{}``.\n\n :param env_globals:\n A mapping from variable names that should be available all the time\n to their values. Defaults to ``{}``.\n\n :param env_kwargs:\n A dictionary that will be passed as keyword arguments to the\n jinja2 Environment. Defaults to ``{}``.\n\n :param mergecontexts:\n A boolean value. If set to ``True``, then all matching regex from\n the contexts list will be merged (in order) to get the final\n context. 
Otherwise, only the first matching regex is used.\n Defaults to ``False``.\n \"\"\"\n # Coerce search to an absolute path if it is not already\n if not os.path.isabs(searchpath):\n # TODO: Determine if there is a better way to write do this\n calling_module = inspect.getmodule(inspect.stack()[-1][0])\n # Absolute path to project\n project_path = os.path.realpath(os.path.dirname(\n calling_module.__file__))\n searchpath = os.path.join(project_path, searchpath)\n\n if env_kwargs is None:\n env_kwargs = {}\n env_kwargs['loader'] = FileSystemLoader(searchpath=searchpath,\n encoding=encoding,\n followlinks=followlinks)\n env_kwargs.setdefault('extensions', extensions or [])\n environment = Environment(**env_kwargs)\n if filters:\n environment.filters.update(filters)\n\n if env_globals:\n environment.globals.update(env_globals)\n\n logger = logging.getLogger(__name__)\n logger.setLevel(logging.INFO)\n logger.addHandler(logging.StreamHandler())\n return cls(environment,\n searchpath=searchpath,\n outpath=outpath,\n encoding=encoding,\n logger=logger,\n rules=rules,\n contexts=contexts,\n staticpaths=staticpaths,\n mergecontexts=mergecontexts,\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_context(self, template):\n context = {}\n for regex, context_generator in self.contexts:\n if re.match(regex, template.name):\n if inspect.isfunction(context_generator):\n if _has_argument(context_generator):\n context.update(context_generator(template))\n else:\n context.update(context_generator())\n else:\n context.update(context_generator)\n\n if not self.mergecontexts:\n break\n return context", "response": "Get the context for a template."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_rule(self, template_name):\n for regex, render_func in self.rules:\n if re.match(regex, template_name):\n return render_func\n raise ValueError(\"no 
matching rule\")", "response": "Find a matching compilation rule for a template name."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncheck if a file is static.", "response": "def is_static(self, filename):\n \"\"\"Check if a file is a static file (which should be copied, rather\n than compiled using Jinja2).\n\n A file is considered static if it lives in any of the directories\n specified in ``staticpaths``.\n\n :param filename: the name of the file to check\n\n \"\"\"\n if self.staticpaths is None:\n # We're not using static file support\n return False\n\n for path in self.staticpaths:\n if filename.startswith(path):\n return True\n return False"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncheck if a file is a partial.", "response": "def is_partial(self, filename):\n \"\"\"Check if a file is a partial.\n\n Partial files are not rendered, but they are used in rendering\n templates.\n\n A file is considered a partial if it or any of its parent directories\n are prefixed with an ``'_'``.\n\n :param filename: the name of the file to check\n \"\"\"\n return any((x.startswith(\"_\") for x in filename.split(os.path.sep)))"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef is_template(self, filename):\n if self.is_partial(filename):\n return False\n\n if self.is_ignored(filename):\n return False\n\n if self.is_static(filename):\n return False\n\n return True", "response": "Check if a file is a template."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nensure the output directory for a template exists.", "response": "def _ensure_dir(self, template_name):\n \"\"\"Ensure the output directory for a template exists.\"\"\"\n head = os.path.dirname(template_name)\n if head:\n file_dirpath = os.path.join(self.outpath, head)\n if not os.path.exists(file_dirpath):\n os.makedirs(file_dirpath)"} 
{"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nrender a single template object.", "response": "def render_template(self, template, context=None, filepath=None):\n \"\"\"Render a single :class:`jinja2.Template` object.\n\n If a Rule matching the template is found, the rendering task is\n delegated to the rule.\n\n :param template:\n A :class:`jinja2.Template` to render.\n\n :param context:\n Optional. A dictionary representing the context to render\n *template* with. If no context is provided, :meth:`get_context` is\n used to provide a context.\n\n :param filepath:\n Optional. A file or file-like object to dump the complete template\n stream into. Defaults to to ``os.path.join(self.outpath,\n template.name)``.\n\n \"\"\"\n self.logger.info(\"Rendering %s...\" % template.name)\n\n if context is None:\n context = self.get_context(template)\n\n if not os.path.exists(self.outpath):\n os.makedirs(self.outpath)\n self._ensure_dir(template.name)\n\n try:\n rule = self.get_rule(template.name)\n except ValueError:\n if filepath is None:\n filepath = os.path.join(self.outpath, template.name)\n template.stream(**context).dump(filepath, self.encoding)\n else:\n rule(self, template, **context)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef render_templates(self, templates, filepath=None):\n for template in templates:\n self.render_template(template, filepath)", "response": "Render a collection of templates into a file or file - like object."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning a list of files that depend on the file named filename.", "response": "def get_dependencies(self, filename):\n \"\"\"Get a list of files that depends on the file named *filename*.\n\n :param filename: the name of the file to find dependencies of\n \"\"\"\n if self.is_partial(filename):\n return self.templates\n elif 
self.is_template(filename):\n return [self.get_template(filename)]\n elif self.is_static(filename):\n return [filename]\n else:\n return []"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngenerating the site. :param use_reloader: if given, reload templates on modification", "response": "def render(self, use_reloader=False):\n \"\"\"Generate the site.\n\n :param use_reloader: if given, reload templates on modification\n \"\"\"\n self.render_templates(self.templates)\n self.copy_static(self.static_names)\n\n if use_reloader:\n self.logger.info(\"Watching '%s' for changes...\" %\n self.searchpath)\n self.logger.info(\"Press Ctrl+C to stop.\")\n Reloader(self).watch()"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nregisters one or many jinja2. Loader instances for templates lookup.", "response": "def register_jinja_loaders(self, *loaders):\n \"\"\"Register one or many `jinja2.Loader` instances for templates lookup.\n\n During application initialization plugins can register a loader so that\n their templates are available to jinja2 renderer.\n\n Order of registration matters: last registered is first looked up (after\n standard Flask lookup in app template folder). This allows a plugin to\n override templates provided by others, or by base application. 
The\n application can override any template from any plugins from its template\n folder (See `Flask.Application.template_folder`).\n\n :raise: `ValueError` if a template has already been rendered\n \"\"\"\n if not hasattr(self, \"_jinja_loaders\"):\n raise ValueError(\n \"Cannot register new jinja loaders after first template rendered\"\n )\n\n self._jinja_loaders.extend(loaders)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef jinja_loader(self):\n loaders = self._jinja_loaders\n del self._jinja_loaders\n loaders.append(Flask.jinja_loader.func(self))\n loaders.reverse()\n return jinja2.ChoiceLoader(loaders)", "response": "Search templates in the custom app templates dir first (default Flask behaviour), falling back on abilian templates."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef render(args):\n srcpath = (\n os.path.join(os.getcwd(), 'templates') if args['--srcpath'] is None\n else args['--srcpath'] if os.path.isabs(args['--srcpath'])\n else os.path.join(os.getcwd(), args['--srcpath'])\n )\n\n if not os.path.isdir(srcpath):\n print(\"The templates directory '%s' is invalid.\"\n % srcpath)\n sys.exit(1)\n\n if args['--outpath'] is not None:\n outpath = args['--outpath']\n else:\n outpath = os.getcwd()\n\n if not os.path.isdir(outpath):\n print(\"The output directory '%s' is invalid.\"\n % outpath)\n sys.exit(1)\n\n staticdirs = args['--static']\n staticpaths = None\n\n if staticdirs:\n staticpaths = staticdirs.split(\",\")\n for path in staticpaths:\n path = os.path.join(srcpath, path)\n if not os.path.isdir(path):\n print(\"The static files directory '%s' is invalid.\" % path)\n sys.exit(1)\n\n site = staticjinja.make_site(\n searchpath=srcpath,\n outpath=outpath,\n staticpaths=staticpaths\n )\n\n use_reloader = args['watch']\n\n site.render(use_reloader=use_reloader)", "response": "Render a single site."} {"SOURCE": "codesearchnet", "instruction": "How would you 
implement a function in Python 3 that\nreturns a string suitable for query against the allowed_roles_and_users field.", "response": "def indexable_role(principal):\n \"\"\"Return a string suitable for query against `allowed_roles_and_users`\n field.\n\n :param principal: It can be :data:`Anonymous`, :data:`Authenticated`,\n or an instance of :class:`User` or :class:`Group`.\n \"\"\"\n principal = unwrap(principal)\n\n if hasattr(principal, \"is_anonymous\") and principal.is_anonymous:\n # transform anonymous user to anonymous role\n principal = Anonymous\n\n if isinstance(principal, Role):\n return f\"role:{principal.name}\"\n elif isinstance(principal, User):\n fmt = \"user:{:d}\"\n elif isinstance(principal, Group):\n fmt = \"group:{:d}\"\n else:\n raise ValueError(repr(principal))\n\n return fmt.format(principal.id)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nconverting a file to plain text.", "response": "def to_text(self, digest, blob, mime_type):\n \"\"\"Convert a file to plain text.\n\n Useful for full-text indexing. 
Returns a Unicode string.\n \"\"\"\n # Special case, for now (XXX).\n if mime_type.startswith(\"image/\"):\n return \"\"\n\n cache_key = \"txt:\" + digest\n\n text = self.cache.get(cache_key)\n if text:\n return text\n\n # Direct conversion possible\n for handler in self.handlers:\n if handler.accept(mime_type, \"text/plain\"):\n text = handler.convert(blob)\n self.cache[cache_key] = text\n return text\n\n # Use PDF as a pivot format\n pdf = self.to_pdf(digest, blob, mime_type)\n for handler in self.handlers:\n if handler.accept(\"application/pdf\", \"text/plain\"):\n text = handler.convert(pdf)\n self.cache[cache_key] = text\n return text\n\n raise HandlerNotFound(f\"No handler found to convert from {mime_type} to text\")"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ntell if there is a preview image.", "response": "def has_image(self, digest, mime_type, index, size=500):\n \"\"\"Tell if there is a preview image.\"\"\"\n cache_key = f\"img:{index}:{size}:{digest}\"\n return mime_type.startswith(\"image/\") or cache_key in self.cache"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn an image for the given content in the image cache.", "response": "def get_image(self, digest, blob, mime_type, index, size=500):\n \"\"\"Return an image for the given content, only if it already exists in\n the image cache.\"\"\"\n # Special case, for now (XXX).\n if mime_type.startswith(\"image/\"):\n return \"\"\n\n cache_key = f\"img:{index}:{size}:{digest}\"\n return self.cache.get(cache_key)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef to_image(self, digest, blob, mime_type, index, size=500):\n # Special case, for now (XXX).\n if mime_type.startswith(\"image/\"):\n return \"\"\n\n cache_key = f\"img:{index}:{size}:{digest}\"\n converted = self.cache.get(cache_key)\n if converted:\n return converted\n\n # Direct conversion 
possible\n for handler in self.handlers:\n if handler.accept(mime_type, \"image/jpeg\"):\n converted_images = handler.convert(blob, size=size)\n for i in range(0, len(converted_images)):\n converted = converted_images[i]\n cache_key = f\"img:{i}:{size}:{digest}\"\n self.cache[cache_key] = converted\n return converted_images[index]\n\n # Use PDF as a pivot format\n pdf = self.to_pdf(digest, blob, mime_type)\n for handler in self.handlers:\n if handler.accept(\"application/pdf\", \"image/jpeg\"):\n converted_images = handler.convert(pdf, size=size)\n for i in range(0, len(converted_images)):\n converted = converted_images[i]\n cache_key = f\"img:{i}:{size}:{digest}\"\n self.cache[cache_key] = converted\n return converted_images[index]\n\n raise HandlerNotFound(f\"No handler found to convert from {mime_type} to image\")", "response": "Convert a file to a list of images."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngets a dictionary representing the metadata embedded in the given content.", "response": "def get_metadata(self, digest, content, mime_type):\n \"\"\"Get a dictionary representing the metadata embedded in the given\n content.\"\"\"\n\n # XXX: ad-hoc for now, refactor later\n if mime_type.startswith(\"image/\"):\n img = Image.open(BytesIO(content))\n ret = {}\n if not hasattr(img, \"_getexif\"):\n return {}\n info = img._getexif()\n if not info:\n return {}\n for tag, value in info.items():\n decoded = TAGS.get(tag, tag)\n ret[\"EXIF:\" + str(decoded)] = value\n return ret\n else:\n if mime_type != \"application/pdf\":\n content = self.to_pdf(digest, content, mime_type)\n\n with make_temp_file(content) as in_fn:\n try:\n output = subprocess.check_output([\"pdfinfo\", in_fn])\n except OSError:\n logger.error(\"Conversion failed, probably pdfinfo is not installed\")\n raise\n\n ret = {}\n for line in output.split(b\"\\n\"):\n if b\":\" in line:\n key, value = line.strip().split(b\":\", 1)\n key = str(key)\n 
ret[\"PDF:\" + key] = str(value.strip(), errors=\"replace\")\n\n return ret"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef ToByteArray(self):\n lc = self.InternalEncodeLc()\n out = bytearray(4) # will extend\n\n out[0] = self.cla\n out[1] = self.ins\n out[2] = self.p1\n out[3] = self.p2\n if self.data:\n out.extend(lc)\n out.extend(self.data)\n out.extend([0x00, 0x00]) # Le\n else:\n out.extend([0x00, 0x00, 0x00]) # Le\n return out", "response": "Serialize the command.\n\n Encodes the command as per the U2F specs, using the standard\n ISO 7816-4 extended encoding. All Commands expect data, so\n Le is always present.\n\n Returns:\n Python bytearray of the encoded command."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nauthenticate the user with the challenge_data.", "response": "def Authenticate(self, app_id, challenge_data,\n print_callback=sys.stderr.write):\n \"\"\"See base class.\"\"\"\n\n # Ensure environment variable is present\n plugin_cmd = os.environ.get(SK_SIGNING_PLUGIN_ENV_VAR)\n if plugin_cmd is None:\n raise errors.PluginError('{} env var is not set'\n .format(SK_SIGNING_PLUGIN_ENV_VAR))\n\n # Prepare input to signer\n client_data_map, signing_input = self._BuildPluginRequest(\n app_id, challenge_data, self.origin)\n\n # Call plugin\n print_callback('Please insert and touch your security key\\n')\n response = self._CallPlugin([plugin_cmd], signing_input)\n\n # Handle response\n key_challenge_pair = (response['keyHandle'], response['challengeHash'])\n client_data_json = client_data_map[key_challenge_pair]\n client_data = client_data_json.encode()\n return self._BuildAuthenticatorResponse(app_id, client_data, response)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nbuilding a JSON request for the plugin.", "response": "def _BuildPluginRequest(self, app_id, challenge_data, origin):\n \"\"\"Builds a JSON request in the form that the 
plugin expects.\"\"\"\n client_data_map = {}\n encoded_challenges = []\n app_id_hash_encoded = self._Base64Encode(self._SHA256(app_id))\n for challenge_item in challenge_data:\n key = challenge_item['key']\n key_handle_encoded = self._Base64Encode(key.key_handle)\n\n raw_challenge = challenge_item['challenge']\n\n client_data_json = model.ClientData(\n model.ClientData.TYP_AUTHENTICATION,\n raw_challenge,\n origin).GetJson()\n\n challenge_hash_encoded = self._Base64Encode(\n self._SHA256(client_data_json))\n\n # Populate challenges list\n encoded_challenges.append({\n 'appIdHash': app_id_hash_encoded,\n 'challengeHash': challenge_hash_encoded,\n 'keyHandle': key_handle_encoded,\n 'version': key.version,\n })\n\n # Populate ClientData map\n key_challenge_pair = (key_handle_encoded, challenge_hash_encoded)\n client_data_map[key_challenge_pair] = client_data_json\n\n signing_request = {\n 'type': 'sign_helper_request',\n 'signData': encoded_challenges,\n 'timeoutSeconds': U2F_SIGNATURE_TIMEOUT_SECONDS,\n 'localAlways': True\n }\n\n return client_data_map, json.dumps(signing_request)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _BuildAuthenticatorResponse(self, app_id, client_data, plugin_response):\n encoded_client_data = self._Base64Encode(client_data)\n signature_data = str(plugin_response['signatureData'])\n key_handle = str(plugin_response['keyHandle'])\n\n response = {\n 'clientData': encoded_client_data,\n 'signatureData': signature_data,\n 'applicationId': app_id,\n 'keyHandle': key_handle,\n }\n return response", "response": "Builds the response to return to the caller."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncalls the plugin and validates the response.", "response": "def _CallPlugin(self, cmd, input_json):\n \"\"\"Calls the plugin and validates the response.\"\"\"\n # Calculate length of input\n input_length = len(input_json)\n 
length_bytes_le = struct.pack(' 0:\n max_payload = self.packet_size - 5\n next_frame = payload[0:max_payload]\n del payload[0:max_payload]\n length_to_send -= len(next_frame)\n next_packet = UsbHidTransport.ContPacket(self.packet_size, self.cid, seq,\n next_frame)\n self.InternalSendPacket(next_packet)\n seq += 1"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreceive a message from the device including defragmenting it.", "response": "def InternalRecv(self):\n \"\"\"Receives a message from the device, including defragmenting it.\"\"\"\n first_read = self.InternalReadFrame()\n first_packet = UsbHidTransport.InitPacket.FromWireFormat(self.packet_size,\n first_read)\n\n data = first_packet.payload\n to_read = first_packet.size - len(first_packet.payload)\n\n seq = 0\n while to_read > 0:\n next_read = self.InternalReadFrame()\n next_packet = UsbHidTransport.ContPacket.FromWireFormat(self.packet_size,\n next_read)\n if self.cid != next_packet.cid:\n # Skip over packets that are for communication with other clients.\n # HID is broadcast, so we see potentially all communication from the\n # device. 
For well-behaved devices, these should be BUSY messages\n # sent to other clients of the device because at this point we're\n # in mid-message transit.\n continue\n\n if seq != next_packet.seq:\n raise errors.HardwareError('Packets received out of order')\n\n # This packet is for us at this point, so debit it against our\n # balance of bytes to read.\n to_read -= len(next_packet.payload)\n\n data.extend(next_packet.payload)\n seq += 1\n\n # truncate incomplete frames\n data = data[0:first_packet.size]\n return (first_packet.cmd, data)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nauthenticating the user with the given application ID.", "response": "def Authenticate(self, app_id, challenge_data,\n print_callback=sys.stderr.write):\n \"\"\"See base class.\"\"\"\n # If authenticator is not plugged in, prompt\n try:\n device = u2f.GetLocalU2FInterface(origin=self.origin)\n except errors.NoDeviceFoundError:\n print_callback('Please insert your security key and press enter...')\n six.moves.input()\n device = u2f.GetLocalU2FInterface(origin=self.origin)\n\n print_callback('Please touch your security key.\\n')\n\n for challenge_item in challenge_data:\n raw_challenge = challenge_item['challenge']\n key = challenge_item['key']\n\n try:\n result = device.Authenticate(app_id, raw_challenge, [key])\n except errors.U2FError as e:\n if e.code == errors.U2FError.DEVICE_INELIGIBLE:\n continue\n else:\n raise\n\n client_data = self._base64encode(result.client_data.GetJson().encode())\n signature_data = self._base64encode(result.signature_data)\n key_handle = self._base64encode(result.key_handle)\n\n return {\n 'clientData': client_data,\n 'signatureData': signature_data,\n 'applicationId': app_id,\n 'keyHandle': key_handle,\n }\n\n raise errors.U2FError(errors.U2FError.DEVICE_INELIGIBLE)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef InternalPlatformSwitch(funcname, *args, **kwargs):\n # 
pylint: disable=g-import-not-at-top\n clz = None\n if sys.platform.startswith('linux'):\n from pyu2f.hid import linux\n clz = linux.LinuxHidDevice\n elif sys.platform.startswith('win32'):\n from pyu2f.hid import windows\n clz = windows.WindowsHidDevice\n elif sys.platform.startswith('darwin'):\n from pyu2f.hid import macos\n clz = macos.MacOsHidDevice\n\n if not clz:\n raise Exception('Unsupported platform: ' + sys.platform)\n\n if funcname == '__init__':\n return clz(*args, **kwargs)\n return getattr(clz, funcname)(*args, **kwargs)", "response": "Determine on a platform-specific basis which module to use."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn JSON version of ClientData compatible with FIDO spec.", "response": "def GetJson(self):\n \"\"\"Returns JSON version of ClientData compatible with FIDO spec.\"\"\"\n\n # The U2F Raw Messages specification specifies that the challenge is encoded\n # with URL safe Base64 without padding encoding specified in RFC 4648.\n # Python does not natively support a paddingless encoding, so we simply\n # remove the padding from the end of the string.\n server_challenge_b64 = base64.urlsafe_b64encode(\n self.raw_server_challenge).decode()\n server_challenge_b64 = server_challenge_b64.rstrip('=')\n return json.dumps({'typ': self.typ,\n 'challenge': server_challenge_b64,\n 'origin': self.origin}, sort_keys=True)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngets the length of a value in a report descriptor.", "response": "def GetValueLength(rd, pos):\n \"\"\"Get value length for a key in rd.\n\n For a key at position pos in the Report Descriptor rd, return the length\n of the associated value. 
This supports both short and long format\n values.\n\n Args:\n rd: Report Descriptor\n pos: The position of the key in rd.\n\n Returns:\n (key_size, data_len) where key_size is the number of bytes occupied by\n the key and data_len is the length of the value associated by the key.\n \"\"\"\n rd = bytearray(rd)\n key = rd[pos]\n if key == LONG_ITEM_ENCODING:\n # If the key is tagged as a long item (0xfe), then the format is\n # [key (1 byte)] [data len (1 byte)] [item tag (1 byte)] [data (n # bytes)].\n # Thus, the entire key record is 3 bytes long.\n if pos + 1 < len(rd):\n return (3, rd[pos + 1])\n else:\n raise errors.HidError('Malformed report descriptor')\n\n else:\n # If the key is tagged as a short item, then the item tag and data len are\n # packed into one byte. The format is thus:\n # [tag (high 4 bits)] [type (2 bits)] [size code (2 bits)] [data (n bytes)].\n # The size code specifies 1,2, or 4 bytes (0x03 means 4 bytes).\n code = key & 0x03\n if code <= 0x02:\n return (1, code)\n elif code == 0x03:\n return (1, 4)\n\n raise errors.HidError('Cannot happen')"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef ReadLsbBytes(rd, offset, value_size):\n\n encoding = None\n if value_size == 1:\n encoding = '= pos + 1 + value_length:\n report_count = ReadLsbBytes(rd, pos + 1, value_length)\n elif key & REPORT_DESCRIPTOR_KEY_MASK == REPORT_SIZE:\n if len(rd) >= pos + 1 + value_length:\n report_size = ReadLsbBytes(rd, pos + 1, value_length)\n elif key & REPORT_DESCRIPTOR_KEY_MASK == USAGE_PAGE:\n if len(rd) >= pos + 1 + value_length:\n usage_page = ReadLsbBytes(rd, pos + 1, value_length)\n elif key & REPORT_DESCRIPTOR_KEY_MASK == USAGE:\n if len(rd) >= pos + 1 + value_length:\n usage = ReadLsbBytes(rd, pos + 1, value_length)\n\n pos += value_length + key_size\n return desc"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nwriting packet to the device.", "response": "def 
Write(self, packet):\n \"\"\"See base class.\"\"\"\n out = bytearray([0] + packet) # Prepend the zero-byte (report ID)\n os.write(self.dev, out)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nread the next chunk of data from the device.", "response": "def Read(self):\n \"\"\"See base class.\"\"\"\n raw_in = os.read(self.dev, self.GetInReportDataLength())\n decoded_in = list(bytearray(raw_in))\n return decoded_in"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef FillDeviceAttributes(device, descriptor):\n attributes = HidAttributes()\n result = hid.HidD_GetAttributes(device, ctypes.byref(attributes))\n if not result:\n raise ctypes.WinError()\n\n buf = ctypes.create_string_buffer(1024)\n result = hid.HidD_GetProductString(device, buf, 1024)\n\n if not result:\n raise ctypes.WinError()\n\n descriptor.vendor_id = attributes.VendorID\n descriptor.product_id = attributes.ProductID\n descriptor.product_string = ctypes.wstring_at(buf)", "response": "Fills out the attributes of the device into the device descriptor."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nfill out the HidCapabilities of the device into the device descriptor.", "response": "def FillDeviceCapabilities(device, descriptor):\n \"\"\"Fill out device capabilities.\n\n Fills the HidCapabilities of the device into descriptor.\n\n Args:\n device: A handle to the open device\n descriptor: DeviceDescriptor to populate with the\n capabilities\n\n Returns:\n none\n\n Raises:\n WindowsError when unable to obtain capabilities.\n \"\"\"\n preparsed_data = PHIDP_PREPARSED_DATA(0)\n ret = hid.HidD_GetPreparsedData(device, ctypes.byref(preparsed_data))\n if not ret:\n raise ctypes.WinError()\n\n try:\n caps = HidCapabilities()\n ret = hid.HidP_GetCaps(preparsed_data, ctypes.byref(caps))\n\n if ret != HIDP_STATUS_SUCCESS:\n raise ctypes.WinError()\n\n descriptor.usage = caps.Usage\n 
descriptor.usage_page = caps.UsagePage\n descriptor.internal_max_in_report_len = caps.InputReportByteLength\n descriptor.internal_max_out_report_len = caps.OutputReportByteLength\n\n finally:\n hid.HidD_FreePreparsedData(preparsed_data)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef OpenDevice(path, enum=False):\n desired_access = GENERIC_WRITE | GENERIC_READ\n share_mode = FILE_SHARE_READ | FILE_SHARE_WRITE\n if enum:\n desired_access = 0\n\n h = kernel32.CreateFileA(path,\n desired_access,\n share_mode,\n None, OPEN_EXISTING, 0, None)\n if h == INVALID_HANDLE_VALUE:\n raise ctypes.WinError()\n return h", "response": "Open the device and return a handle to it."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nenumerating the available devices.", "response": "def Enumerate():\n \"\"\"See base class.\"\"\"\n hid_guid = GUID()\n hid.HidD_GetHidGuid(ctypes.byref(hid_guid))\n\n devices = setupapi.SetupDiGetClassDevsA(\n ctypes.byref(hid_guid), None, None, 0x12)\n index = 0\n interface_info = DeviceInterfaceData()\n interface_info.cbSize = ctypes.sizeof(DeviceInterfaceData) # pylint: disable=invalid-name\n\n out = []\n while True:\n result = setupapi.SetupDiEnumDeviceInterfaces(\n devices, 0, ctypes.byref(hid_guid), index,\n ctypes.byref(interface_info))\n index += 1\n if not result:\n break\n\n detail_len = wintypes.DWORD()\n result = setupapi.SetupDiGetDeviceInterfaceDetailA(\n devices, ctypes.byref(interface_info), None, 0,\n ctypes.byref(detail_len), None)\n\n detail_len = detail_len.value\n if detail_len == 0:\n # skip this device, some kind of error\n continue\n\n buf = ctypes.create_string_buffer(detail_len)\n interface_detail = DeviceInterfaceDetailData.from_buffer(buf)\n interface_detail.cbSize = ctypes.sizeof(DeviceInterfaceDetailData)\n\n result = setupapi.SetupDiGetDeviceInterfaceDetailA(\n devices, ctypes.byref(interface_info),\n ctypes.byref(interface_detail), detail_len, None, 
None)\n\n if not result:\n raise ctypes.WinError()\n\n descriptor = base.DeviceDescriptor()\n # This is a bit of a hack to work around a limitation of ctypes and\n # \"header\" structures that are common in windows. DevicePath is a\n # ctypes array of length 1, but it is backed with a buffer that is much\n # longer and contains a null terminated string. So, we read the null\n # terminated string off DevicePath here. Per the comment above, the\n # alignment of this struct varies depending on architecture, but\n # in all cases the path string starts 1 DWORD into the structure.\n #\n # The path length is:\n # length of detail buffer - header length (1 DWORD)\n path_len = detail_len - ctypes.sizeof(wintypes.DWORD)\n descriptor.path = ctypes.string_at(\n ctypes.addressof(interface_detail.DevicePath), path_len)\n\n device = None\n try:\n device = OpenDevice(descriptor.path, True)\n except WindowsError as e: # pylint: disable=undefined-variable\n if e.winerror == ERROR_ACCESS_DENIED: # Access Denied, e.g. a keyboard\n continue\n else:\n raise e\n\n try:\n FillDeviceAttributes(device, descriptor)\n FillDeviceCapabilities(device, descriptor)\n out.append(descriptor.ToPublicDict())\n finally:\n kernel32.CloseHandle(device)\n\n return out"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef Write(self, packet):\n if len(packet) != self.GetOutReportDataLength():\n raise errors.HidError(\"Packet length must match report data length.\")\n\n packet_data = [0] + packet # Prepend the zero-byte (report ID)\n out = bytes(bytearray(packet_data))\n num_written = wintypes.DWORD()\n ret = (\n kernel32.WriteFile(\n self.dev, out, len(out),\n ctypes.byref(num_written), None))\n if num_written.value != len(out):\n raise errors.HidError(\n \"Failed to write complete packet. 
\" + \"Expected %d, but got %d\" %\n (len(out), num_written.value))\n if not ret:\n raise ctypes.WinError()", "response": "Writes a packet to the report."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreads the next report from the device.", "response": "def Read(self):\n \"\"\"See base class.\"\"\"\n buf = ctypes.create_string_buffer(self.desc.internal_max_in_report_len)\n num_read = wintypes.DWORD()\n ret = kernel32.ReadFile(\n self.dev, buf, len(buf), ctypes.byref(num_read), None)\n\n if num_read.value != self.desc.internal_max_in_report_len:\n raise errors.HidError(\"Failed to read full length report from device.\")\n\n if not ret:\n raise ctypes.WinError()\n\n # Convert the string buffer to a list of numbers. Throw away the first\n # byte, which is the report id (which we don't care about).\n return list(bytearray(buf[1:]))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nauthenticating the user with the challenge data.", "response": "def Authenticate(self, app_id, challenge_data,\n print_callback=sys.stderr.write):\n \"\"\"See base class.\"\"\"\n for authenticator in self.authenticators:\n if authenticator.IsAvailable():\n result = authenticator.Authenticate(app_id,\n challenge_data,\n print_callback)\n return result\n\n raise ValueError('No valid authenticators found')"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef GetLocalU2FInterface(origin=socket.gethostname()):\n hid_transports = hidtransport.DiscoverLocalHIDU2FDevices()\n for t in hid_transports:\n try:\n return U2FInterface(security_key=hardware.SecurityKey(transport=t),\n origin=origin)\n except errors.UnsupportedVersionException:\n # Skip over devices that don't speak the proper version of the protocol.\n pass\n\n # Unable to find a device\n raise errors.NoDeviceFoundError()", "response": "Obtains a U2FInterface for the first valid local U2FHID device found."} 
{"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef Register(self, app_id, challenge, registered_keys):\n client_data = model.ClientData(model.ClientData.TYP_REGISTRATION, challenge,\n self.origin)\n challenge_param = self.InternalSHA256(client_data.GetJson())\n app_param = self.InternalSHA256(app_id)\n\n for key in registered_keys:\n try:\n # skip non U2F_V2 keys\n if key.version != u'U2F_V2':\n continue\n resp = self.security_key.CmdAuthenticate(challenge_param, app_param,\n key.key_handle, True)\n # check_only mode CmdAuthenticate should always raise some\n # exception\n raise errors.HardwareError('Should Never Happen')\n\n except errors.TUPRequiredError:\n # This indicates key was valid. Thus, no need to register\n raise errors.U2FError(errors.U2FError.DEVICE_INELIGIBLE)\n except errors.InvalidKeyHandleError as e:\n # This is the case of a key for a different token, so we just ignore it.\n pass\n except errors.HardwareError as e:\n raise errors.U2FError(errors.U2FError.BAD_REQUEST, e)\n\n # Now register the new key\n for _ in range(30):\n try:\n resp = self.security_key.CmdRegister(challenge_param, app_param)\n return model.RegisterResponse(resp, client_data)\n except errors.TUPRequiredError as e:\n self.security_key.CmdWink()\n time.sleep(0.5)\n except errors.HardwareError as e:\n raise errors.U2FError(errors.U2FError.BAD_REQUEST, e)\n\n raise errors.U2FError(errors.U2FError.TIMEOUT)", "response": "Registers the app_id with the security key."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef Authenticate(self, app_id, challenge, registered_keys):\n client_data = model.ClientData(model.ClientData.TYP_AUTHENTICATION,\n challenge, self.origin)\n app_param = self.InternalSHA256(app_id)\n challenge_param = self.InternalSHA256(client_data.GetJson())\n num_invalid_keys = 0\n for key in registered_keys:\n try:\n if key.version != u'U2F_V2':\n continue\n for 
_ in range(30):\n try:\n resp = self.security_key.CmdAuthenticate(challenge_param, app_param,\n key.key_handle)\n return model.SignResponse(key.key_handle, resp, client_data)\n except errors.TUPRequiredError:\n self.security_key.CmdWink()\n time.sleep(0.5)\n except errors.InvalidKeyHandleError:\n num_invalid_keys += 1\n continue\n except errors.HardwareError as e:\n raise errors.U2FError(errors.U2FError.BAD_REQUEST, e)\n\n if num_invalid_keys == len(registered_keys):\n # In this case, all provided keys were invalid.\n raise errors.U2FError(errors.U2FError.DEVICE_INELIGIBLE)\n\n # In this case, the TUP was not pressed.\n raise errors.U2FError(errors.U2FError.TIMEOUT)", "response": "Authenticates app_id with the security key."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef CmdRegister(self, challenge_param, app_param):\n self.logger.debug('CmdRegister')\n if len(challenge_param) != 32 or len(app_param) != 32:\n raise errors.InvalidRequestError()\n\n body = bytearray(challenge_param + app_param)\n response = self.InternalSendApdu(apdu.CommandApdu(\n 0,\n apdu.CMD_REGISTER,\n 0x03, # Per the U2F reference code tests\n 0x00,\n body))\n response.CheckSuccessOrRaise()\n\n return response.body", "response": "This function is used to register a security key with a particular origin & client."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef CmdAuthenticate(self,\n challenge_param,\n app_param,\n key_handle,\n check_only=False):\n \"\"\"Attempt to obtain an authentication signature.\n\n Ask the security key to sign a challenge for a particular key handle\n in order to authenticate the user.\n\n Args:\n challenge_param: SHA-256 hash of client_data object as a bytes\n object.\n app_param: SHA-256 hash of the app id as a bytes object.\n key_handle: The key handle to use to issue the signature as a bytes\n object.\n check_only: If true, only check if key_handle is valid.\n\n 
Returns:\n A binary structure containing the key handle, attestation, and a\n signature over that by the attestation key. The precise format\n is dictated by the FIDO U2F specs.\n\n Raises:\n TUPRequiredError: If check_only is False, a Test of User Presence\n is required to proceed. If check_only is True, this means\n the key_handle is valid.\n InvalidKeyHandleError: The key_handle is not valid for this device.\n ApduError: Something else went wrong on the device.\n \"\"\"\n self.logger.debug('CmdAuthenticate')\n if len(challenge_param) != 32 or len(app_param) != 32:\n raise errors.InvalidRequestError()\n control = 0x07 if check_only else 0x03\n\n body = bytearray(challenge_param + app_param +\n bytearray([len(key_handle)]) + key_handle)\n response = self.InternalSendApdu(apdu.CommandApdu(\n 0, apdu.CMD_AUTH, control, 0x00, body))\n response.CheckSuccessOrRaise()\n\n return response.body", "response": "This function is used to request an authentication signature over a particular key handle."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nobtaining the version of the device and test transport format. Obtains the version of the device and determines whether to use ISO 7816-4 or the U2f variant. This function should be called at least once before CmdAuthenticate or CmdRegister to make sure the object is using the proper transport for the device. Returns: The version of the U2F protocol in use.", "response": "def CmdVersion(self):\n \"\"\"Obtain the version of the device and test transport format.\n\n Obtains the version of the device and determines whether to use ISO\n 7816-4 or the U2f variant. 
This function should be called at least once\n before CmdAuthenticate or CmdRegister to make sure the object is using the\n proper transport for the device.\n\n Returns:\n The version of the U2F protocol in use.\n \"\"\"\n self.logger.debug('CmdVersion')\n response = self.InternalSendApdu(apdu.CommandApdu(\n 0, apdu.CMD_VERSION, 0x00, 0x00))\n\n if not response.IsSuccess():\n raise errors.ApduError(response.sw1, response.sw2)\n\n return response.body"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nsend an APDU to the device.", "response": "def InternalSendApdu(self, apdu_to_send):\n \"\"\"Send an APDU to the device.\n\n Sends an APDU to the device, possibly falling back to the legacy\n encoding format that is not ISO7816-4 compatible.\n\n Args:\n apdu_to_send: The CommandApdu object to send\n\n Returns:\n The ResponseApdu object constructed out of the devices reply.\n \"\"\"\n response = None\n if not self.use_legacy_format:\n response = apdu.ResponseApdu(self.transport.SendMsgBytes(\n apdu_to_send.ToByteArray()))\n if response.sw1 == 0x67 and response.sw2 == 0x00:\n # If we failed using the standard format, retry with the\n # legacy format.\n self.use_legacy_format = True\n return self.InternalSendApdu(apdu_to_send)\n else:\n response = apdu.ResponseApdu(self.transport.SendMsgBytes(\n apdu_to_send.ToLegacyU2FByteArray()))\n return response"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreads int property from the HID device.", "response": "def GetDeviceIntProperty(dev_ref, key):\n \"\"\"Reads int property from the HID device.\"\"\"\n cf_key = CFStr(key)\n type_ref = iokit.IOHIDDeviceGetProperty(dev_ref, cf_key)\n cf.CFRelease(cf_key)\n if not type_ref:\n return None\n\n if cf.CFGetTypeID(type_ref) != cf.CFNumberGetTypeID():\n raise errors.OsHidError('Expected number type, got {}'.format(\n cf.CFGetTypeID(type_ref)))\n\n out = ctypes.c_int32()\n ret = cf.CFNumberGetValue(type_ref, 
K_CF_NUMBER_SINT32_TYPE,\n ctypes.byref(out))\n if not ret:\n return None\n\n return out.value"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreads string property from the HID device.", "response": "def GetDeviceStringProperty(dev_ref, key):\n \"\"\"Reads string property from the HID device.\"\"\"\n cf_key = CFStr(key)\n type_ref = iokit.IOHIDDeviceGetProperty(dev_ref, cf_key)\n cf.CFRelease(cf_key)\n if not type_ref:\n return None\n\n if cf.CFGetTypeID(type_ref) != cf.CFStringGetTypeID():\n raise errors.OsHidError('Expected string type, got {}'.format(\n cf.CFGetTypeID(type_ref)))\n\n type_ref = ctypes.cast(type_ref, CF_STRING_REF)\n out = ctypes.create_string_buffer(DEVICE_STRING_PROPERTY_BUFFER_SIZE)\n ret = cf.CFStringGetCString(type_ref, out, DEVICE_STRING_PROPERTY_BUFFER_SIZE,\n K_CF_STRING_ENCODING_UTF8)\n if not ret:\n return None\n\n return out.value.decode('utf8')"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef GetDevicePath(device_handle):\n # Obtain device path from IO Registry\n io_service_obj = iokit.IOHIDDeviceGetService(device_handle)\n str_buffer = ctypes.create_string_buffer(DEVICE_PATH_BUFFER_SIZE)\n iokit.IORegistryEntryGetPath(io_service_obj, K_IO_SERVICE_PLANE, str_buffer)\n\n return str_buffer.value", "response": "Retrieves the unique path for the device."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef HidReadCallback(read_queue, result, sender, report_type, report_id, report,\n report_length):\n \"\"\"Handles incoming IN report from HID device.\"\"\"\n del result, sender, report_type, report_id # Unused by the callback function\n\n incoming_bytes = [report[i] for i in range(report_length)]\n read_queue.put(incoming_bytes)", "response": "Handles incoming IN report from HID device."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nbinding a device to the run loop and starts 
the run loop.", "response": "def DeviceReadThread(hid_device):\n \"\"\"Binds a device to the thread's run loop, then starts the run loop.\n\n Args:\n hid_device: The MacOsHidDevice object\n\n The HID manager requires a run loop to handle Report reads. This thread\n function serves that purpose.\n \"\"\"\n\n # Schedule device events with run loop\n hid_device.run_loop_ref = cf.CFRunLoopGetCurrent()\n if not hid_device.run_loop_ref:\n logger.error('Failed to get current run loop')\n return\n\n iokit.IOHIDDeviceScheduleWithRunLoop(hid_device.device_handle,\n hid_device.run_loop_ref,\n K_CF_RUNLOOP_DEFAULT_MODE)\n\n # Run the run loop\n run_loop_run_result = K_CF_RUN_LOOP_RUN_TIMED_OUT\n while (run_loop_run_result == K_CF_RUN_LOOP_RUN_TIMED_OUT or\n run_loop_run_result == K_CF_RUN_LOOP_RUN_HANDLED_SOURCE):\n run_loop_run_result = cf.CFRunLoopRunInMode(\n K_CF_RUNLOOP_DEFAULT_MODE,\n 1000, # Timeout in seconds\n False) # Return after source handled\n\n # log any unexpected run loop exit\n if run_loop_run_result != K_CF_RUN_LOOP_RUN_STOPPED:\n logger.error('Unexpected run loop exit code: %d', run_loop_run_result)\n\n # Unschedule from run loop\n iokit.IOHIDDeviceUnscheduleFromRunLoop(hid_device.device_handle,\n hid_device.run_loop_ref,\n K_CF_RUNLOOP_DEFAULT_MODE)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef Enumerate():\n # Init a HID manager\n hid_mgr = iokit.IOHIDManagerCreate(None, None)\n if not hid_mgr:\n raise errors.OsHidError('Unable to obtain HID manager reference')\n iokit.IOHIDManagerSetDeviceMatching(hid_mgr, None)\n\n # Get devices from HID manager\n device_set_ref = iokit.IOHIDManagerCopyDevices(hid_mgr)\n if not device_set_ref:\n raise errors.OsHidError('Failed to obtain devices from HID manager')\n\n num = iokit.CFSetGetCount(device_set_ref)\n devices = (IO_HID_DEVICE_REF * num)()\n iokit.CFSetGetValues(device_set_ref, devices)\n\n # Retrieve and build descriptor dictionaries for each 
device\n descriptors = []\n for dev in devices:\n d = base.DeviceDescriptor()\n d.vendor_id = GetDeviceIntProperty(dev, HID_DEVICE_PROPERTY_VENDOR_ID)\n d.product_id = GetDeviceIntProperty(dev, HID_DEVICE_PROPERTY_PRODUCT_ID)\n d.product_string = GetDeviceStringProperty(dev,\n HID_DEVICE_PROPERTY_PRODUCT)\n d.usage = GetDeviceIntProperty(dev, HID_DEVICE_PROPERTY_PRIMARY_USAGE)\n d.usage_page = GetDeviceIntProperty(\n dev, HID_DEVICE_PROPERTY_PRIMARY_USAGE_PAGE)\n d.report_id = GetDeviceIntProperty(dev, HID_DEVICE_PROPERTY_REPORT_ID)\n d.path = GetDevicePath(dev)\n descriptors.append(d.ToPublicDict())\n\n # Clean up CF objects\n cf.CFRelease(device_set_ref)\n cf.CFRelease(hid_mgr)\n\n return descriptors", "response": "Enumerate the HID components of a single HID container."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nwriting a report to device.", "response": "def Write(self, packet):\n \"\"\"See base class.\"\"\"\n report_id = 0\n out_report_buffer = (ctypes.c_uint8 * self.internal_max_out_report_len)()\n out_report_buffer[:] = packet[:]\n\n result = iokit.IOHIDDeviceSetReport(self.device_handle,\n K_IO_HID_REPORT_TYPE_OUTPUT,\n report_id,\n out_report_buffer,\n self.internal_max_out_report_len)\n\n # Non-zero status indicates failure\n if result != K_IO_RETURN_SUCCESS:\n raise errors.OsHidError('Failed to write report to device')"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreading a single element from the read queue.", "response": "def Read(self):\n \"\"\"See base class.\"\"\"\n\n result = None\n while result is None:\n try:\n result = self.read_queue.get(timeout=60)\n except queue.Empty:\n continue\n\n return result"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nrendering detailed model edit page.", "response": "def change_view(self, *args, **kwargs):\n \"\"\"Renders detailed model edit page.\"\"\"\n Hierarchy.init_hierarchy(self)\n\n 
self.hierarchy.hook_change_view(self, args, kwargs)\n\n return super(HierarchicalModelAdmin, self).change_view(*args, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef action_checkbox(self, obj):\n if getattr(obj, Hierarchy.UPPER_LEVEL_MODEL_ATTR, False):\n return ''\n\n return super(HierarchicalModelAdmin, self).action_checkbox(obj)", "response": "Renders checkboxes.\n\n Disable checkbox for parent item navigation link."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef hierarchy_nav(self, obj):\n\n result_repr = '' # For items without children.\n ch_count = getattr(obj, Hierarchy.CHILD_COUNT_MODEL_ATTR, 0)\n\n is_parent_link = getattr(obj, Hierarchy.UPPER_LEVEL_MODEL_ATTR, False)\n\n if is_parent_link or ch_count: # For items with children and parent links.\n icon = 'icon icon-folder'\n title = _('Objects inside: %s') % ch_count\n\n if is_parent_link:\n icon = 'icon icon-folder-up'\n title = _('Upper level')\n\n url = './'\n\n if obj.pk:\n url = '?%s=%s' % (Hierarchy.PARENT_ID_QS_PARAM, obj.pk)\n\n if self._current_changelist.is_popup:\n\n qs_get = copy(self._current_changelist._request.GET)\n\n try:\n del qs_get[Hierarchy.PARENT_ID_QS_PARAM]\n\n except KeyError:\n pass\n\n qs_get = qs_get.urlencode()\n url = ('%s&%s' if '?' 
in url else '%s?%s') % (url, qs_get)\n\n result_repr = format_html('', url, icon, force_text(title))\n\n return result_repr", "response": "Renders hierarchy navigation elements."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nconstruct a query set.", "response": "def get_queryset(self, request):\n \"\"\"Constructs a query set.\n\n :param request:\n :return:\n \"\"\"\n self._hierarchy.hook_get_queryset(self, request)\n\n return super(HierarchicalChangeList, self).get_queryset(request)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_results(self, request):\n super(HierarchicalChangeList, self).get_results(request)\n\n self._hierarchy.hook_get_results(self)", "response": "Gets query set results."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef check_field_exists(self, field_name):\n if not settings.DEBUG:\n return\n\n try:\n self.lookup_opts.get_field(field_name)\n\n except FieldDoesNotExist as e:\n raise AdmirarchyConfigurationError(e)", "response": "Implements field exists check for debugging purposes."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef init_hierarchy(cls, model_admin):\n hierarchy = getattr(model_admin, 'hierarchy')\n\n if hierarchy:\n if not isinstance(hierarchy, Hierarchy):\n hierarchy = AdjacencyList() # For `True` and etc. 
TODO heuristics maybe.\n\n else:\n hierarchy = NoHierarchy()\n\n model_admin.hierarchy = hierarchy", "response": "Initializes model admin with hierarchy data."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_pid_from_request(cls, changelist, request):\n val = request.GET.get(cls.PARENT_ID_QS_PARAM, False)\n pid = val or None\n\n try:\n del changelist.params[cls.PARENT_ID_QS_PARAM]\n\n except KeyError:\n pass\n\n return pid", "response": "Gets parent ID from query string."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef hook_get_queryset(self, changelist, request):\n changelist.check_field_exists(self.pid_field)\n self.pid = self.get_pid_from_request(changelist, request)\n changelist.params[self.pid_field] = self.pid", "response": "Hooked by ChangeList. get_queryset."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef hook_get_results(self, changelist):\n result_list = list(changelist.result_list)\n\n if self.pid:\n # Render to upper level link.\n parent = changelist.model.objects.get(pk=self.pid)\n parent = changelist.model(pk=getattr(parent, self.pid_field_real, None))\n setattr(parent, self.UPPER_LEVEL_MODEL_ATTR, True)\n result_list = [parent] + result_list\n\n # Get children stats.\n kwargs_filter = {'%s__in' % self.pid_field: result_list}\n\n stats_qs = changelist.model.objects.filter(\n **kwargs_filter).values_list(self.pid_field).annotate(cnt=models.Count(self.pid_field))\n\n stats = {item[0]: item[1] for item in stats_qs}\n\n for item in result_list:\n\n if hasattr(item, self.CHILD_COUNT_MODEL_ATTR):\n continue\n\n try:\n setattr(item, self.CHILD_COUNT_MODEL_ATTR, stats[item.id])\n\n except KeyError:\n setattr(item, self.CHILD_COUNT_MODEL_ATTR, 0)\n\n changelist.result_list = result_list", "response": "Hooked by ChangeList. 
get_results."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef hook_get_queryset(self, changelist, request):\n changelist.check_field_exists(self.left_field)\n changelist.check_field_exists(self.right_field)\n self.pid = self.get_pid_from_request(changelist, request)\n\n # Get parent item first.\n qs = changelist.root_queryset\n\n if self.pid:\n self.parent = qs.get(pk=self.pid)\n changelist.params.update(self.get_immediate_children_filter(self.parent))\n\n else:\n changelist.params[self.level_field] = self.root_level\n self.parent = qs.get(**{key: val for key, val in changelist.params.items() if not key.startswith('_')})", "response": "Hooked by ChangeList. get_queryset."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef read_file(fpath):\n with io.open(os.path.join(PATH_BASE, fpath)) as f:\n return f.read()", "response": "Reads a file within package directories."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_version():\n contents = read_file(os.path.join('admirarchy', '__init__.py'))\n version = re.search('VERSION = \\(([^)]+)\\)', contents)\n version = version.group(1).replace(', ', '.').strip()\n return version", "response": "Returns version number without module import"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nprocess an incoming update from a remote NetworkTables", "response": "def process_update(self, update):\n \"\"\"Process an incoming update from a remote NetworkTables\"\"\"\n data = json.loads(update)\n NetworkTables.getEntry(data[\"k\"]).setValue(data[\"v\"])"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _send_update(self, data):\n if isinstance(data, dict):\n data = json.dumps(data)\n self.update_callback(data)", "response": 
"Send a NetworkTables update via the stored send_update callback"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncleans up listeners that are no longer needed.", "response": "def close(self):\n \"\"\"\n Clean up NetworkTables listeners\n \"\"\"\n NetworkTables.removeGlobalListener(self._nt_on_change)\n NetworkTables.removeConnectionListener(self._nt_connected)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_lat_long_climate_zones(latitude, longitude):\n try:\n from shapely.geometry import Point\n except ImportError: # pragma: no cover\n raise ImportError(\"Finding climate zone of lat/long points requires shapely.\")\n\n (\n iecc_climate_zones,\n iecc_moisture_regimes,\n ba_climate_zones,\n ca_climate_zones,\n ) = cached_data.climate_zone_geometry\n\n point = Point(longitude, latitude) # x,y\n climate_zones = {}\n for iecc_climate_zone, shape in iecc_climate_zones:\n if shape.contains(point):\n climate_zones[\"iecc_climate_zone\"] = iecc_climate_zone\n break\n else:\n climate_zones[\"iecc_climate_zone\"] = None\n\n for iecc_moisture_regime, shape in iecc_moisture_regimes:\n if shape.contains(point):\n climate_zones[\"iecc_moisture_regime\"] = iecc_moisture_regime\n break\n else:\n climate_zones[\"iecc_moisture_regime\"] = None\n\n for ba_climate_zone, shape in ba_climate_zones:\n if shape.contains(point):\n climate_zones[\"ba_climate_zone\"] = ba_climate_zone\n break\n else:\n climate_zones[\"ba_climate_zone\"] = None\n\n for ca_climate_zone, shape in ca_climate_zones:\n if shape.contains(point):\n climate_zones[\"ca_climate_zone\"] = ca_climate_zone\n break\n else:\n climate_zones[\"ca_climate_zone\"] = None\n\n return climate_zones", "response": "Returns a dict of str climate zones that contain lat and long coordinates."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_zcta_metadata(zcta):\n conn = 
metadata_db_connection_proxy.get_connection()\n cur = conn.cursor()\n cur.execute(\n \"\"\"\n select\n *\n from\n zcta_metadata\n where\n zcta_id = ?\n \"\"\",\n (zcta,),\n )\n row = cur.fetchone()\n if row is None:\n raise UnrecognizedZCTAError(zcta)\n return {col[0]: row[i] for i, col in enumerate(cur.description)}", "response": "Get the metadata about a ZIP Code Tabulation Area."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nget latitude and longitude of the target ZCTA in the specified location.", "response": "def zcta_to_lat_long(zcta):\n \"\"\"Get location of ZCTA centroid\n\n Retrieves latitude and longitude of centroid of ZCTA\n to use for matching with weather station.\n\n Parameters\n ----------\n zcta : str\n ID of the target ZCTA.\n\n Returns\n -------\n latitude : float\n Latitude of centroid of ZCTA.\n longitude : float\n Target Longitude of centroid of ZCTA.\n \"\"\"\n valid_zcta_or_raise(zcta)\n\n conn = metadata_db_connection_proxy.get_connection()\n cur = conn.cursor()\n\n cur.execute(\n \"\"\"\n select\n latitude\n , longitude\n from\n zcta_metadata\n where\n zcta_id = ?\n \"\"\",\n (zcta,),\n )\n # match existence checked in validate_zcta_or_raise(zcta)\n latitude, longitude = cur.fetchone()\n\n return float(latitude), float(longitude)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_zcta_ids(state=None):\n conn = metadata_db_connection_proxy.get_connection()\n cur = conn.cursor()\n\n if state is None:\n cur.execute(\n \"\"\"\n select zcta_id from zcta_metadata\n \"\"\"\n )\n else:\n cur.execute(\n \"\"\"\n select zcta_id from zcta_metadata where state = ?\n \"\"\",\n (state,),\n )\n return [row[0] for row in cur.fetchall()]", "response": "Returns a list of all supported ZCTAs optionally by state."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nchecks if ZCTA is valid and raise UnrecognizedZCTAError 
if not.", "response": "def valid_zcta_or_raise(zcta):\n \"\"\" Check if ZCTA is valid and raise eeweather.UnrecognizedZCTAError if not. \"\"\"\n conn = metadata_db_connection_proxy.get_connection()\n cur = conn.cursor()\n\n cur.execute(\n \"\"\"\n select exists (\n select\n zcta_id\n from\n zcta_metadata\n where\n zcta_id = ?\n )\n \"\"\",\n (zcta,),\n )\n (exists,) = cur.fetchone()\n if exists:\n return True\n else:\n raise UnrecognizedZCTAError(zcta)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef valid_usaf_id_or_raise(usaf_id):\n conn = metadata_db_connection_proxy.get_connection()\n cur = conn.cursor()\n\n cur.execute(\n \"\"\"\n select exists (\n select\n usaf_id\n from\n isd_station_metadata\n where\n usaf_id = ?\n )\n \"\"\",\n (usaf_id,),\n )\n (exists,) = cur.fetchone()\n if exists:\n return True\n else:\n raise UnrecognizedUSAFIDError(usaf_id)", "response": "Check if USAF ID is valid and raise UnrecognizedUSAFIDError if not."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning a ranked version of the candidate weather stations and metadata structures for a particular site.", "response": "def rank_stations(\n site_latitude,\n site_longitude,\n site_state=None,\n site_elevation=None,\n match_iecc_climate_zone=False,\n match_iecc_moisture_regime=False,\n match_ba_climate_zone=False,\n match_ca_climate_zone=False,\n match_state=False,\n minimum_quality=None,\n minimum_tmy3_class=None,\n max_distance_meters=None,\n max_difference_elevation_meters=None,\n is_tmy3=None,\n is_cz2010=None,\n):\n \"\"\" Get a ranked, filtered set of candidate weather stations and metadata\n for a particular site.\n\n Parameters\n ----------\n site_latitude : float\n Latitude of target site for which to find candidate weather stations.\n site_longitude : float\n Longitude of target site for which to find candidate weather stations.\n site_state : str, 2 letter abbreviation\n US 
state of target site, used optionally to filter potential candidate\n weather stations. Ignored unless ``match_state=True``.\n site_elevation : float\n Elevation of target site in meters, used optionally to filter potential\n candidate weather stations. Ignored unless\n ``max_difference_elevation_meters`` is set.\n match_iecc_climate_zone : bool\n If ``True``, filter candidate weather stations to those\n matching the IECC climate zone of the target site.\n match_iecc_moisture_regime : bool\n If ``True``, filter candidate weather stations to those\n matching the IECC moisture regime of the target site.\n match_ca_climate_zone : bool\n If ``True``, filter candidate weather stations to those\n matching the CA climate zone of the target site.\n match_ba_climate_zone : bool\n If ``True``, filter candidate weather stations to those\n matching the Building America climate zone of the target site.\n match_state : bool\n If ``True``, filter candidate weather stations to those\n matching the US state of the target site, as specified by\n ``site_state=True``.\n minimum_quality : str, ``'high'``, ``'medium'``, ``'low'``\n If given, filter candidate weather stations to those meeting or\n exceeding the given quality, as summarized by the frequency and\n availability of observations in the NOAA Integrated Surface Database.\n minimum_tmy3_class : str, ``'I'``, ``'II'``, ``'III'``\n If given, filter candidate weather stations to those meeting or\n exceeding the given class, as reported in the NREL TMY3 metadata.\n max_distance_meters : float\n If given, filter candidate weather stations to those within the\n ``max_distance_meters`` of the target site location.\n max_difference_elevation_meters : float\n If given, filter candidate weather stations to those with elevations\n within ``max_difference_elevation_meters`` of the target site elevation.\n is_tmy3 : bool\n If given, filter candidate weather stations to those for which TMY3\n normal year temperature data is available.\n 
is_cz2010 : bool\n If given, filter candidate weather stations to those for which CZ2010\n normal year temperature data is available.\n\n Returns\n -------\n ranked_filtered_candidates : :any:`pandas.DataFrame`\n Index is ``usaf_id``. Each row contains a potential weather station\n match and metadata. Contains the following columns:\n\n - ``rank``: Rank of weather station match for the target site.\n - ``distance_meters``: Distance from target site to weather station site.\n - ``latitude``: Latitude of weather station site.\n - ``longitude``: Longitude of weather station site.\n - ``iecc_climate_zone``: IECC Climate Zone ID (1-8)\n - ``iecc_moisture_regime``: IECC Moisture Regime ID (A-C)\n - ``ba_climate_zone``: Building America climate zone name\n - ``ca_climate_zone``: California climate zone number\n - ``rough_quality``: Approximate measure of frequency of ISD\n observations data at weather station.\n - ``elevation``: Elevation of weather station site, if available.\n - ``state``: US state of weather station site, if applicable.\n - ``tmy3_class``: Weather station class as reported by NREL TMY3, if\n available\n - ``is_tmy3``: Weather station has associated TMY3 data.\n - ``is_cz2010``: Weather station has associated CZ2010 data.\n - ``difference_elevation_meters``: Absolute difference in meters\n between target site elevation and weather station elevation, if\n available.\n\n \"\"\"\n candidates = cached_data.all_station_metadata\n\n # compute distances\n candidates_defined_lat_long = candidates[\n candidates.latitude.notnull() & candidates.longitude.notnull()\n ]\n candidates_latitude = candidates_defined_lat_long.latitude\n candidates_longitude = candidates_defined_lat_long.longitude\n tiled_site_latitude = np.tile(site_latitude, candidates_latitude.shape)\n tiled_site_longitude = np.tile(site_longitude, candidates_longitude.shape)\n geod = pyproj.Geod(ellps=\"WGS84\")\n dists = geod.inv(\n tiled_site_longitude,\n tiled_site_latitude,\n 
candidates_longitude.values,\n candidates_latitude.values,\n )[2]\n distance_meters = pd.Series(dists, index=candidates_defined_lat_long.index).reindex(\n candidates.index\n )\n candidates[\"distance_meters\"] = distance_meters\n\n if site_elevation is not None:\n difference_elevation_meters = (candidates.elevation - site_elevation).abs()\n else:\n difference_elevation_meters = None\n candidates[\"difference_elevation_meters\"] = difference_elevation_meters\n\n site_climate_zones = get_lat_long_climate_zones(site_latitude, site_longitude)\n site_iecc_climate_zone = site_climate_zones[\"iecc_climate_zone\"]\n site_iecc_moisture_regime = site_climate_zones[\"iecc_moisture_regime\"]\n site_ca_climate_zone = site_climate_zones[\"ca_climate_zone\"]\n site_ba_climate_zone = site_climate_zones[\"ba_climate_zone\"]\n\n # create filters\n filters = []\n\n if match_iecc_climate_zone:\n if site_iecc_climate_zone is None:\n filters.append(candidates.iecc_climate_zone.isnull())\n else:\n filters.append(candidates.iecc_climate_zone == site_iecc_climate_zone)\n if match_iecc_moisture_regime:\n if site_iecc_moisture_regime is None:\n filters.append(candidates.iecc_moisture_regime.isnull())\n else:\n filters.append(candidates.iecc_moisture_regime == site_iecc_moisture_regime)\n if match_ba_climate_zone:\n if site_ba_climate_zone is None:\n filters.append(candidates.ba_climate_zone.isnull())\n else:\n filters.append(candidates.ba_climate_zone == site_ba_climate_zone)\n if match_ca_climate_zone:\n if site_ca_climate_zone is None:\n filters.append(candidates.ca_climate_zone.isnull())\n else:\n filters.append(candidates.ca_climate_zone == site_ca_climate_zone)\n\n if match_state:\n if site_state is None:\n filters.append(candidates.state.isnull())\n else:\n filters.append(candidates.state == site_state)\n\n if is_tmy3 is not None:\n filters.append(candidates.is_tmy3.isin([is_tmy3]))\n if is_cz2010 is not None:\n filters.append(candidates.is_cz2010.isin([is_cz2010]))\n\n if 
minimum_quality == \"low\":\n filters.append(candidates.rough_quality.isin([\"high\", \"medium\", \"low\"]))\n elif minimum_quality == \"medium\":\n filters.append(candidates.rough_quality.isin([\"high\", \"medium\"]))\n elif minimum_quality == \"high\":\n filters.append(candidates.rough_quality.isin([\"high\"]))\n\n if minimum_tmy3_class == \"III\":\n filters.append(candidates.tmy3_class.isin([\"I\", \"II\", \"III\"]))\n elif minimum_tmy3_class == \"II\":\n filters.append(candidates.tmy3_class.isin([\"I\", \"II\"]))\n elif minimum_tmy3_class == \"I\":\n filters.append(candidates.tmy3_class.isin([\"I\"]))\n\n if max_distance_meters is not None:\n filters.append(candidates.distance_meters <= max_distance_meters)\n\n if max_difference_elevation_meters is not None and site_elevation is not None:\n filters.append(\n candidates.difference_elevation_meters <= max_difference_elevation_meters\n )\n\n combined_filters = _combine_filters(filters, candidates.index)\n filtered_candidates = candidates[combined_filters]\n ranked_filtered_candidates = filtered_candidates.sort_values(by=[\"distance_meters\"])\n\n # add rank column\n ranks = range(1, 1 + len(ranked_filtered_candidates))\n ranked_filtered_candidates.insert(0, \"rank\", ranks)\n\n return ranked_filtered_candidates[\n [\n \"rank\",\n \"distance_meters\",\n \"latitude\",\n \"longitude\",\n \"iecc_climate_zone\",\n \"iecc_moisture_regime\",\n \"ba_climate_zone\",\n \"ca_climate_zone\",\n \"rough_quality\",\n \"elevation\",\n \"state\",\n \"tmy3_class\",\n \"is_tmy3\",\n \"is_cz2010\",\n \"difference_elevation_meters\",\n ]\n ]"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef combine_ranked_stations(rankings):\n\n if len(rankings) == 0:\n raise ValueError(\"Requires at least one ranking.\")\n\n combined_ranking = rankings[0]\n for ranking in rankings[1:]:\n filtered_ranking = ranking[~ranking.index.isin(combined_ranking.index)]\n combined_ranking = 
pd.concat([combined_ranking, filtered_ranking])\n\n combined_ranking[\"rank\"] = range(1, 1 + len(combined_ranking))\n return combined_ranking", "response": "Combine the ranked stations into a single hybrid ranking dataframe."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nselect a single station from a list of candidates that meets a given data.", "response": "def select_station(\n candidates,\n coverage_range=None,\n min_fraction_coverage=0.9,\n distance_warnings=(50000, 200000),\n rank=1,\n):\n \"\"\" Select a station from a list of candidates that meets given data\n quality criteria.\n\n Parameters\n ----------\n candidates : :any:`pandas.DataFrame`\n A dataframe of the form given by :any:`eeweather.rank_stations` or\n :any:`eeweather.combine_ranked_stations`, specifically having at least\n an index with ``usaf_id`` values and the column ``distance_meters``.\n\n Returns\n -------\n isd_station, warnings : tuple of (:any:`eeweather.ISDStation`, list of str)\n A qualified weather station. 
``None`` if no station meets criteria.\n \"\"\"\n\n def _test_station(station):\n if coverage_range is None:\n return True, []\n else:\n start_date, end_date = coverage_range\n try:\n tempC, warnings = eeweather.mockable.load_isd_hourly_temp_data(\n station, start_date, end_date\n )\n except ISDDataNotAvailableError:\n return False, [] # reject\n\n # TODO(philngo): also need to incorporate within-day limits\n if len(tempC) > 0:\n fraction_coverage = tempC.notnull().sum() / float(len(tempC))\n return (fraction_coverage > min_fraction_coverage), warnings\n else:\n return False, [] # reject\n\n def _station_warnings(station, distance_meters):\n return [\n EEWeatherWarning(\n qualified_name=\"eeweather.exceeds_maximum_distance\",\n description=(\n \"Distance from target to weather station is greater \"\n \"than the specified km.\"\n ),\n data={\n \"distance_meters\": distance_meters,\n \"max_distance_meters\": d,\n \"rank\": rank,\n },\n )\n for d in distance_warnings\n if distance_meters > d\n ]\n\n n_stations_passed = 0\n for usaf_id, row in candidates.iterrows():\n station = ISDStation(usaf_id)\n test_result, warnings = _test_station(station)\n if test_result:\n n_stations_passed += 1\n if n_stations_passed == rank:\n if not warnings:\n warnings = []\n warnings.extend(_station_warnings(station, row.distance_meters))\n return station, warnings\n\n no_station_warning = EEWeatherWarning(\n qualified_name=\"eeweather.no_weather_station_selected\",\n description=(\n \"No weather station found with the specified rank and\"\n \" minimum fractional coverage.\"\n ),\n data={\"rank\": rank, \"min_fraction_coverage\": min_fraction_coverage},\n )\n return None, [no_station_warning]"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nload ISD stations metadata from a CSV file.", "response": "def _load_isd_station_metadata(download_path):\n \"\"\" Collect metadata for US isd stations.\n \"\"\"\n from shapely.geometry import Point\n\n # load 
ISD history which contains metadata\n isd_history = pd.read_csv(\n os.path.join(download_path, \"isd-history.csv\"),\n dtype=str,\n parse_dates=[\"BEGIN\", \"END\"],\n )\n\n hasGEO = (\n isd_history.LAT.notnull() & isd_history.LON.notnull() & (isd_history.LAT != 0)\n )\n isUS = (\n ((isd_history.CTRY == \"US\") & (isd_history.STATE.notnull()))\n # AQ = American Samoa, GQ = Guam, RQ = Puerto Rico, VQ = Virgin Islands\n | (isd_history.CTRY.str[1] == \"Q\")\n )\n hasUSAF = isd_history.USAF != \"999999\"\n\n metadata = {}\n for usaf_station, group in isd_history[hasGEO & isUS & hasUSAF].groupby(\"USAF\"):\n # find most recent\n recent = group.loc[group.END.idxmax()]\n wban_stations = list(group.WBAN)\n metadata[usaf_station] = {\n \"usaf_id\": usaf_station,\n \"wban_ids\": wban_stations,\n \"recent_wban_id\": recent.WBAN,\n \"name\": recent[\"STATION NAME\"],\n \"icao_code\": recent.ICAO,\n \"latitude\": recent.LAT if recent.LAT not in (\"+00.000\",) else None,\n \"longitude\": recent.LON if recent.LON not in (\"+000.000\",) else None,\n \"point\": Point(float(recent.LON), float(recent.LAT)),\n \"elevation\": recent[\"ELEV(M)\"]\n if not str(float(recent[\"ELEV(M)\"])).startswith(\"-999\")\n else None,\n \"state\": recent.STATE,\n }\n\n return metadata"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _load_isd_file_metadata(download_path, isd_station_metadata):\n\n isd_inventory = pd.read_csv(\n os.path.join(download_path, \"isd-inventory.csv\"), dtype=str\n )\n\n # filter to stations with metadata\n station_keep = [usaf in isd_station_metadata for usaf in isd_inventory.USAF]\n isd_inventory = isd_inventory[station_keep]\n # filter by year\n year_keep = isd_inventory.YEAR > \"2005\"\n isd_inventory = isd_inventory[year_keep]\n\n metadata = {}\n for (usaf_station, year), group in isd_inventory.groupby([\"USAF\", \"YEAR\"]):\n if usaf_station not in metadata:\n metadata[usaf_station] = {\"usaf_id\": 
usaf_station, \"years\": {}}\n metadata[usaf_station][\"years\"][year] = [\n {\n \"wban_id\": row.WBAN,\n \"counts\": [\n row.JAN,\n row.FEB,\n row.MAR,\n row.APR,\n row.MAY,\n row.JUN,\n row.JUL,\n row.AUG,\n row.SEP,\n row.OCT,\n row.NOV,\n row.DEC,\n ],\n }\n for i, row in group.iterrows()\n ]\n return metadata", "response": "Load the metadata for the isd files."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nbuilds the metadata database from primary sources.", "response": "def build_metadata_db(\n zcta_geometry=False,\n iecc_climate_zone_geometry=True,\n iecc_moisture_regime_geometry=True,\n ba_climate_zone_geometry=True,\n ca_climate_zone_geometry=True,\n):\n \"\"\" Build database of metadata from primary sources.\n\n Downloads primary sources, clears existing DB, and rebuilds from scratch.\n\n Parameters\n ----------\n zcta_geometry : bool, optional\n Whether or not to include ZCTA geometry in database.\n iecc_climate_zone_geometry : bool, optional\n Whether or not to include IECC Climate Zone geometry in database.\n iecc_moisture_regime_geometry : bool, optional\n Whether or not to include IECC Moisture Regime geometry in database.\n ba_climate_zone_geometry : bool, optional\n Whether or not to include Building America Climate Zone geometry in database.\n ca_climate_zone_geometry : bool, optional\n Whether or not to include California Building Climate Zone Area geometry in database.\n \"\"\"\n\n try:\n import shapely\n except ImportError:\n raise ImportError(\"Loading polygons requires shapely.\")\n\n try:\n from bs4 import BeautifulSoup\n except ImportError:\n raise ImportError(\"Scraping TMY3 station data requires beautifulsoup4.\")\n\n try:\n import pyproj\n except ImportError:\n raise ImportError(\"Computing distances requires pyproj.\")\n\n try:\n import simplejson\n except ImportError:\n raise ImportError(\"Writing geojson requires simplejson.\")\n\n download_path = _download_primary_sources()\n conn = 
metadata_db_connection_proxy.reset_database()\n\n # Load data into memory\n print(\"Loading ZCTAs\")\n zcta_metadata = _load_zcta_metadata(download_path)\n\n print(\"Loading counties\")\n county_metadata = _load_county_metadata(download_path)\n\n print(\"Merging county climate zones\")\n (\n iecc_climate_zone_metadata,\n iecc_moisture_regime_metadata,\n ba_climate_zone_metadata,\n ) = _create_merged_climate_zones_metadata(county_metadata)\n\n print(\"Loading CA climate zones\")\n ca_climate_zone_metadata = _load_CA_climate_zone_metadata(download_path)\n\n print(\"Loading ISD station metadata\")\n isd_station_metadata = _load_isd_station_metadata(download_path)\n\n print(\"Loading ISD station file metadata\")\n isd_file_metadata = _load_isd_file_metadata(download_path, isd_station_metadata)\n\n print(\"Loading TMY3 station metadata\")\n tmy3_station_metadata = _load_tmy3_station_metadata(download_path)\n\n print(\"Loading CZ2010 station metadata\")\n cz2010_station_metadata = _load_cz2010_station_metadata()\n\n # Augment data in memory\n print(\"Computing ISD station quality\")\n # add rough station quality to station metadata\n # (all months in last 5 years have at least 600 points)\n _compute_isd_station_quality(isd_station_metadata, isd_file_metadata)\n\n print(\"Mapping ZCTAs to climate zones\")\n # add county and ca climate zone mappings\n _map_zcta_to_climate_zones(\n zcta_metadata,\n iecc_climate_zone_metadata,\n iecc_moisture_regime_metadata,\n ba_climate_zone_metadata,\n ca_climate_zone_metadata,\n )\n\n print(\"Mapping ISD stations to climate zones\")\n # add county and ca climate zone mappings\n _map_isd_station_to_climate_zones(\n isd_station_metadata,\n iecc_climate_zone_metadata,\n iecc_moisture_regime_metadata,\n ba_climate_zone_metadata,\n ca_climate_zone_metadata,\n )\n\n # Write tables\n print(\"Creating table structures\")\n _create_table_structures(conn)\n\n print(\"Writing ZCTA data\")\n _write_zcta_metadata_table(conn, zcta_metadata, 
geometry=zcta_geometry)\n\n print(\"Writing IECC climate zone data\")\n _write_iecc_climate_zone_metadata_table(\n conn, iecc_climate_zone_metadata, geometry=iecc_climate_zone_geometry\n )\n\n print(\"Writing IECC moisture regime data\")\n _write_iecc_moisture_regime_metadata_table(\n conn, iecc_moisture_regime_metadata, geometry=iecc_moisture_regime_geometry\n )\n\n print(\"Writing BA climate zone data\")\n _write_ba_climate_zone_metadata_table(\n conn, ba_climate_zone_metadata, geometry=ba_climate_zone_geometry\n )\n\n print(\"Writing CA climate zone data\")\n _write_ca_climate_zone_metadata_table(\n conn, ca_climate_zone_metadata, geometry=ca_climate_zone_geometry\n )\n\n print(\"Writing ISD station metadata\")\n _write_isd_station_metadata_table(conn, isd_station_metadata)\n\n print(\"Writing ISD file metadata\")\n _write_isd_file_metadata_table(conn, isd_file_metadata)\n\n print(\"Writing TMY3 station metadata\")\n _write_tmy3_station_metadata_table(conn, tmy3_station_metadata)\n\n print(\"Writing CZ2010 station metadata\")\n _write_cz2010_station_metadata_table(conn, cz2010_station_metadata)\n\n print(\"Cleaning up...\")\n shutil.rmtree(download_path)\n\n print(\"\\u2728 Completed! 
\\u2728\")"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn a JSON - serializeable object containing station metadata.", "response": "def json(self):\n \"\"\" Return a JSON-serializeable object containing station metadata.\"\"\"\n return {\n \"elevation\": self.elevation,\n \"latitude\": self.latitude,\n \"longitude\": self.longitude,\n \"icao_code\": self.icao_code,\n \"name\": self.name,\n \"quality\": self.quality,\n \"wban_ids\": self.wban_ids,\n \"recent_wban_id\": self.recent_wban_id,\n \"climate_zones\": {\n \"iecc_climate_zone\": self.iecc_climate_zone,\n \"iecc_moisture_regime\": self.iecc_moisture_regime,\n \"ba_climate_zone\": self.ba_climate_zone,\n \"ca_climate_zone\": self.ca_climate_zone,\n },\n }"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngets filenames of raw ISD station data.", "response": "def get_isd_filenames(self, year=None, with_host=False):\n \"\"\" Get filenames of raw ISD station data. \"\"\"\n return get_isd_filenames(self.usaf_id, year, with_host=with_host)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngetting filenames of GSOD station data.", "response": "def get_gsod_filenames(self, year=None, with_host=False):\n \"\"\" Get filenames of raw GSOD station data. 
\"\"\"\n return get_gsod_filenames(self.usaf_id, year, with_host=with_host)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef load_isd_hourly_temp_data(\n self,\n start,\n end,\n read_from_cache=True,\n write_to_cache=True,\n error_on_missing_years=True,\n ):\n \"\"\" Load resampled hourly ISD temperature data from start date to end date (inclusive).\n\n This is the primary convenience method for loading resampled hourly ISD temperature data.\n\n Parameters\n ----------\n start : datetime.datetime\n The earliest date from which to load data.\n end : datetime.datetime\n The latest date until which to load data.\n read_from_cache : bool\n Whether or not to load data from cache.\n write_to_cache : bool\n Whether or not to write newly loaded data to cache.\n \"\"\"\n return load_isd_hourly_temp_data(\n self.usaf_id,\n start,\n end,\n read_from_cache=read_from_cache,\n write_to_cache=write_to_cache,\n error_on_missing_years=error_on_missing_years,\n )", "response": "Load ISD hourly temperature data from start to end date."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef load_isd_daily_temp_data(\n self, start, end, read_from_cache=True, write_to_cache=True\n ):\n \"\"\" Load resampled daily ISD temperature data from start date to end date (inclusive).\n\n This is the primary convenience method for loading resampled daily ISD temperature data.\n\n Parameters\n ----------\n start : datetime.datetime\n The earliest date from which to load data.\n end : datetime.datetime\n The latest date until which to load data.\n read_from_cache : bool\n Whether or not to load data from cache.\n write_to_cache : bool\n Whether or not to write newly loaded data to cache.\n \"\"\"\n return load_isd_daily_temp_data(\n self.usaf_id,\n start,\n end,\n read_from_cache=read_from_cache,\n write_to_cache=write_to_cache,\n )", "response": "Load resampled daily ISD 
temperature data from start date to end date."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nloads resampled GSOD temperature data from start date to end date.", "response": "def load_gsod_daily_temp_data(\n self, start, end, read_from_cache=True, write_to_cache=True\n ):\n \"\"\" Load resampled daily GSOD temperature data from start date to end date (inclusive).\n\n This is the primary convenience method for loading resampled daily GSOD temperature data.\n\n Parameters\n ----------\n start : datetime.datetime\n The earliest date from which to load data.\n end : datetime.datetime\n The latest date until which to load data.\n read_from_cache : bool\n Whether or not to load data from cache.\n write_to_cache : bool\n Whether or not to write newly loaded data to cache.\n \"\"\"\n return load_gsod_daily_temp_data(\n self.usaf_id,\n start,\n end,\n read_from_cache=read_from_cache,\n write_to_cache=write_to_cache,\n )"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef load_tmy3_hourly_temp_data(\n self, start, end, read_from_cache=True, write_to_cache=True\n ):\n \"\"\" Load hourly TMY3 temperature data from start date to end date (inclusive).\n\n This is the primary convenience method for loading hourly TMY3 temperature data.\n\n Parameters\n ----------\n start : datetime.datetime\n The earliest date from which to load data.\n end : datetime.datetime\n The latest date until which to load data.\n read_from_cache : bool\n Whether or not to load data from cache.\n write_to_cache : bool\n Whether or not to write newly loaded data to cache.\n \"\"\"\n return load_tmy3_hourly_temp_data(\n self.usaf_id,\n start,\n end,\n read_from_cache=read_from_cache,\n write_to_cache=write_to_cache,\n )", "response": "Load TMY3 hourly temperature data from start to end date."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 
function\ndef load_cz2010_hourly_temp_data(\n self, start, end, read_from_cache=True, write_to_cache=True\n ):\n \"\"\" Load hourly CZ2010 temperature data from start date to end date (inclusive).\n\n This is the primary convenience method for loading hourly CZ2010 temperature data.\n\n Parameters\n ----------\n start : datetime.datetime\n The earliest date from which to load data.\n end : datetime.datetime\n The latest date until which to load data.\n read_from_cache : bool\n Whether or not to load data from cache.\n write_to_cache : bool\n Whether or not to write newly loaded data to cache.\n \"\"\"\n return load_cz2010_hourly_temp_data(\n self.usaf_id,\n start,\n end,\n read_from_cache=read_from_cache,\n write_to_cache=write_to_cache,\n )", "response": "Load hourly CZ2010 temperature data from start date to end date."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef plot_station_mapping(\n target_latitude,\n target_longitude,\n isd_station,\n distance_meters,\n target_label=\"target\",\n): # pragma: no cover\n \"\"\" Plots this mapping on a map.\"\"\"\n try:\n import matplotlib.pyplot as plt\n except ImportError:\n raise ImportError(\"Plotting requires matplotlib.\")\n\n try:\n import cartopy.crs as ccrs\n import cartopy.feature as cfeature\n import cartopy.io.img_tiles as cimgt\n except ImportError:\n raise ImportError(\"Plotting requires cartopy.\")\n\n lat, lng = isd_station.coords\n t_lat, t_lng = float(target_latitude), float(target_longitude)\n\n # fiture\n fig = plt.figure(figsize=(16, 8))\n\n # axes\n tiles = cimgt.StamenTerrain()\n ax = plt.subplot(1, 1, 1, projection=tiles.crs)\n\n # offsets for labels\n x_max = max([lng, t_lng])\n x_min = min([lng, t_lng])\n x_diff = x_max - x_min\n\n y_max = max([lat, t_lat])\n y_min = min([lat, t_lat])\n y_diff = y_max - y_min\n\n xoffset = x_diff * 0.05\n yoffset = y_diff * 0.05\n\n # minimum\n left = x_min - x_diff * 0.5\n right = x_max + x_diff * 
0.5\n bottom = y_min - y_diff * 0.3\n top = y_max + y_diff * 0.3\n\n width_ratio = 2.\n height_ratio = 1.\n\n if (right - left) / (top - bottom) > width_ratio / height_ratio:\n # too short\n goal = (right - left) * height_ratio / width_ratio\n diff = goal - (top - bottom)\n bottom = bottom - diff / 2.\n top = top + diff / 2.\n else:\n # too skinny\n goal = (top - bottom) * width_ratio / height_ratio\n diff = goal - (right - left)\n left = left - diff / 2.\n right = right + diff / 2.\n\n ax.set_extent([left, right, bottom, top])\n\n # determine zoom level\n # tile size at level 1 = 64 km\n # level 2 = 32 km, level 3 = 16 km, etc, i.e. 128/(2^n) km\n N_TILES = 600 # (how many tiles approximately fit in distance)\n km = distance_meters / 1000.0\n zoom_level = int(np.log2(128 * N_TILES / km))\n\n ax.add_image(tiles, zoom_level)\n\n # line between\n plt.plot(\n [lng, t_lng],\n [lat, t_lat],\n linestyle=\"-\",\n dashes=[2, 2],\n transform=ccrs.Geodetic(),\n )\n\n # station\n ax.plot(lng, lat, \"ko\", markersize=7, transform=ccrs.Geodetic())\n\n # target\n ax.plot(t_lng, t_lat, \"ro\", markersize=7, transform=ccrs.Geodetic())\n\n # station label\n station_label = \"{} ({})\".format(isd_station.usaf_id, isd_station.name)\n ax.text(lng + xoffset, lat + yoffset, station_label, transform=ccrs.Geodetic())\n\n # target label\n ax.text(t_lng + xoffset, t_lat + yoffset, target_label, transform=ccrs.Geodetic())\n\n # distance labels\n mid_lng = (lng + t_lng) / 2\n mid_lat = (lat + t_lat) / 2\n dist_text = \"{:.01f} km\".format(km)\n ax.text(mid_lng + xoffset, mid_lat + yoffset, dist_text, transform=ccrs.Geodetic())\n\n plt.show()", "response": "Plots this mapping on a map."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nplots a list of mapping results on a map.", "response": "def plot_station_mappings(mapping_results): # pragma: no cover\n \"\"\" Plot a list of mapping results on a map.\n\n Requires matplotlib and cartopy.\n\n 
Parameters\n ----------\n mapping_results : list of MappingResult objects\n Mapping results to plot\n \"\"\"\n try:\n import matplotlib.pyplot as plt\n except ImportError:\n raise ImportError(\"Plotting requires matplotlib.\")\n\n try:\n import cartopy.crs as ccrs\n import cartopy.feature as cfeature\n except ImportError:\n raise ImportError(\"Plotting requires cartopy.\")\n\n lats = []\n lngs = []\n t_lats = []\n t_lngs = []\n n_discards = 0\n for mapping_result in mapping_results:\n if not mapping_result.is_empty():\n lat, lng = mapping_result.isd_station.coords\n t_lat, t_lng = map(float, mapping_result.target_coords)\n lats.append(lat)\n lngs.append(lng)\n t_lats.append(t_lat)\n t_lngs.append(t_lng)\n else:\n n_discards += 1\n\n print(\"Discarded {} empty mappings\".format(n_discards))\n\n # figure\n fig = plt.figure(figsize=(60, 60))\n\n # axes\n ax = plt.subplot(1, 1, 1, projection=ccrs.Mercator())\n\n # offsets for labels\n all_lngs = lngs + t_lngs\n all_lats = lats + t_lats\n x_max = max(all_lngs) # lists\n x_min = min(all_lngs)\n x_diff = x_max - x_min\n\n y_max = max(all_lats)\n y_min = min(all_lats)\n y_diff = y_max - y_min\n\n # minimum\n x_pad = 0.1 * x_diff\n y_pad = 0.1 * y_diff\n left = x_min - x_pad\n right = x_max + x_pad\n bottom = y_min - y_pad\n top = y_max + y_pad\n\n width_ratio = 2.\n height_ratio = 1.\n\n if (right - left) / (top - bottom) > height_ratio / width_ratio:\n # too short\n goal = (right - left) * height_ratio / width_ratio\n diff = goal - (top - bottom)\n bottom = bottom - diff / 2.\n top = top + diff / 2.\n else:\n # too skinny\n goal = (top - bottom) * width_ratio / height_ratio\n diff = goal - (right - left)\n left = left - diff / 2.\n right = right + diff / 2.\n\n left = max(left, -179.9)\n right = min(right, 179.9)\n bottom = max([bottom, -89.9])\n top = min([top, 89.9])\n\n ax.set_extent([left, right, bottom, top])\n\n # OCEAN\n ax.add_feature(\n cfeature.NaturalEarthFeature(\n \"physical\",\n \"ocean\",\n \"50m\",\n 
edgecolor=\"face\",\n facecolor=cfeature.COLORS[\"water\"],\n )\n )\n\n # LAND\n ax.add_feature(\n cfeature.NaturalEarthFeature(\n \"physical\",\n \"land\",\n \"50m\",\n edgecolor=\"face\",\n facecolor=cfeature.COLORS[\"land\"],\n )\n )\n\n # BORDERS\n ax.add_feature(\n cfeature.NaturalEarthFeature(\n \"cultural\",\n \"admin_0_boundary_lines_land\",\n \"50m\",\n edgecolor=\"black\",\n facecolor=\"none\",\n )\n )\n\n # LAKES\n ax.add_feature(\n cfeature.NaturalEarthFeature(\n \"physical\",\n \"lakes\",\n \"50m\",\n edgecolor=\"face\",\n facecolor=cfeature.COLORS[\"water\"],\n )\n )\n\n # COASTLINE\n ax.add_feature(\n cfeature.NaturalEarthFeature(\n \"physical\", \"coastline\", \"50m\", edgecolor=\"black\", facecolor=\"none\"\n )\n )\n\n # lines between\n # for lat, t_lat, lng, t_lng in zip(lats, t_lats, lngs, t_lngs):\n ax.plot(\n [lngs, t_lngs],\n [lats, t_lats],\n color=\"k\",\n linestyle=\"-\",\n transform=ccrs.Geodetic(),\n linewidth=0.3,\n )\n\n # stations\n ax.plot(lngs, lats, \"bo\", markersize=1, transform=ccrs.Geodetic())\n\n plt.title(\"Location to weather station mapping\")\n\n plt.show()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef execute(self, input_data):\n ''' Execute the ViewPDF worker '''\n\n # Just a small check to make sure we haven't been called on the wrong file type\n if (input_data['meta']['type_tag'] != 'pdf'):\n return {'error': self.__class__.__name__+': called on '+input_data['meta']['type_tag']}\n\n view = {}\n view['strings'] = input_data['strings']['string_list'][:5]\n view.update(input_data['meta'])\n return view", "response": "Execute the ViewPDF worker"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nbuilds a graph of nodes and edges from a conn. 
log file.", "response": "def conn_log_graph(self, stream):\n ''' Build up a graph (nodes and edges from a Bro conn.log) '''\n conn_log = list(stream)\n print 'Entering conn_log_graph...(%d rows)' % len(conn_log)\n for row in stream:\n\n # Add the connection id with service as one of the labels\n self.add_node(row['uid'], row['uid'][:6], ['conn_id', row['service']])\n\n # Add the originating host\n self.add_node(row['id.orig_h'], row['id.orig_h'], ['ip', 'origin'])\n\n # Add the response host\n self.add_node(row['id.resp_h'], row['id.resp_h'], ['ip', 'response'])\n\n # Add the ip->connection relationships\n self.add_rel(row['uid'], row['id.orig_h'], 'origin')\n self.add_rel(row['uid'], row['id.resp_h'], 'response')"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef dns_log_graph(self, stream):\n ''' Build up a graph (nodes and edges from a Bro dns.log) '''\n dns_log = list(stream)\n print 'Entering dns_log_graph...(%d rows)' % len(dns_log)\n for row in dns_log:\n \n # Skip '-' hosts\n if (row['id.orig_h'] == '-'):\n continue\n\n # Add the originating host\n self.add_node(row['id.orig_h'], row['id.orig_h'], ['host', 'origin'])\n\n # Add the query host\n self.add_node(row['query'], row['query'], ['host', 'dns_query'])\n\n # The relationship between origin host and query\n self.add_rel(row['id.orig_h'], row['query'], 'dns_query')\n\n # Add the DNS answers as hosts and add the relationships\n for answer in row['answers'].split(','):\n self.add_node(answer, answer, ['host'])\n self.add_rel(row['query'], answer, row['qtype_name'])", "response": "Build a graph of nodes and edges from a Bro dns. 
log file."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef weird_log_graph(self, stream):\n ''' Build up a graph (nodes and edges from a Bro weird.log) '''\n weird_log = list(stream)\n print 'Entering weird_log_graph...(%d rows)' % len(weird_log)\n\n # Here we're just going to capture that something weird\n # happened between two hosts\n weird_pairs = set()\n for row in weird_log:\n weird_pairs.add((row['id.orig_h'], row['id.resp_h']))\n\n # Okay now make the weird node for each pair\n for pair in weird_pairs:\n\n # Skip '-' hosts\n if (pair[0] == '-'):\n continue\n\n # Add the originating host\n self.add_node(pair[0], pair[0], ['host', 'origin'])\n\n # Add the response host\n self.add_node(pair[1], pair[1], ['host'])\n\n # Add a weird node\n weird_name = 'weird'+pair[0]+'_'+pair[1]\n self.add_node(weird_name, 'weird', ['weird'])\n\n # The relationships between the nodes\n self.add_rel(pair[0], weird_name, 'weird')\n self.add_rel(weird_name, pair[1], 'weird')", "response": "Build a graph of nodes and edges from a Bro weird. log file."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nbuilding a graph of nodes and edges from a Bro files. 
log file stream.", "response": "def files_log_graph(self, stream):\n ''' Build up a graph (nodes and edges from a Bro files.log) '''\n file_log = list(stream)\n print 'Entering file_log_graph...(%d rows)' % len(file_log)\n for row in file_log:\n\n # If the mime-type is interesting add the uri and the host->uri->host relationships\n if row['mime_type'] not in self.exclude_mime_types:\n\n # Check for weird conditions\n if (row['total_bytes'] == '-'):\n continue\n if ('-' in row['md5']):\n continue\n\n # Check for missing bytes\n if row['missing_bytes']:\n labels = ['file','missing']\n else:\n labels = ['file']\n\n # Make the file node name kewl\n name = '%6s %s %.0f-KB' % (row['md5'][:6], row['mime_type'], row['total_bytes']/1024.0)\n if row['missing_bytes']:\n name += '*'\n name = name.replace('application/','')\n\n # Add the file node\n self.add_node(row['md5'], name, labels)\n\n # Add the tx_host\n self.add_node(row['tx_hosts'], row['tx_hosts'], ['host'])\n\n # Add the file->tx_host relationship\n self.add_rel(row['tx_hosts'], row['md5'], 'file')"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef execute(self, input_data):\n ''' Execute the URL worker '''\n string_output = input_data['strings']['string_list']\n flatten = ' '.join(string_output)\n urls = self.url_match.findall(flatten)\n return {'url_list': urls}", "response": "Execute the URL worker"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef all_files_in_directory(path):\n file_list = []\n for dirname, dirnames, filenames in os.walk(path):\n for filename in filenames:\n file_list.append(os.path.join(dirname, filename))\n return file_list", "response": "Recursively ist all files under a directory"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef load_all_plugins(self):\n\n # Go through the existing python files in the plugin 
directory\n self.plugin_path = os.path.realpath(self.plugin_dir)\n sys.path.append(self.plugin_dir)\n print '<<< Plugin Manager >>>'\n for f in [os.path.join(self.plugin_dir, child) for child in os.listdir(self.plugin_dir)]:\n\n # Skip certain files\n if '.DS_Store' in f or '__init__.py' in f: \n continue\n\n # Add the plugin\n self.add_plugin(f)", "response": "Load all the plugins in the plugin directory"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nremoving a deleted plugin.", "response": "def remove_plugin(self, f):\n \"\"\"Remvoing a deleted plugin.\n\n Args:\n f: the filepath for the plugin.\n \"\"\"\n if f.endswith('.py'):\n plugin_name = os.path.splitext(os.path.basename(f))[0]\n print '- %s %sREMOVED' % (plugin_name, color.Red)\n print '\\t%sNote: still in memory, restart Workbench to remove...%s' % \\\n (color.Yellow, color.Normal)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nadding and verifying a plugin.", "response": "def add_plugin(self, f):\n \"\"\"Adding and verifying plugin.\n\n Args:\n f: the filepath for the plugin.\n \"\"\"\n if f.endswith('.py'):\n\n # Just the basename without extension\n plugin_name = os.path.splitext(os.path.basename(f))[0]\n\n # It's possible the plugin has been modified and needs to be reloaded\n if plugin_name in sys.modules:\n try:\n handler = reload(sys.modules[plugin_name])\n print'\\t- %s %sRELOAD%s' % (plugin_name, color.Yellow, color.Normal)\n except ImportError, error:\n print 'Failed to import plugin: %s (%s)' % (plugin_name, error)\n return\n else:\n # Not already loaded so try to import it\n try:\n handler = __import__(plugin_name, globals(), locals(), [], -1)\n except ImportError, error:\n print 'Failed to import plugin: %s (%s)' % (plugin_name, error)\n return\n\n # Run the handler through plugin validation\n plugin = self.validate(handler)\n print '\\t- %s %sOK%s' % (plugin_name, color.Green, color.Normal)\n if plugin:\n\n # Okay must be 
successfully loaded so capture the plugin meta-data,\n # modification time and register the plugin through the callback\n plugin['name'] = plugin_name\n plugin['dependencies'] = plugin['class'].dependencies\n plugin['docstring'] = plugin['class'].__doc__\n plugin['mod_time'] = datetime.utcfromtimestamp(os.path.getmtime(f))\n\n # Plugin may accept sample_sets as input\n try:\n plugin['sample_set_input'] = getattr(plugin['class'], 'sample_set_input')\n except AttributeError:\n plugin['sample_set_input'] = False\n\n # Now pass the plugin back to workbench\n self.plugin_callback(plugin)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef validate(self, handler):\n\n # Check for the test method first\n test_method = self.plugin_test_validation(handler)\n if not test_method:\n return None\n\n # Here we iterate through the classes found in the module and pick\n # the first one that satisfies the validation\n for name, plugin_class in inspect.getmembers(handler, inspect.isclass):\n if self.plugin_class_validation(plugin_class):\n return {'class':plugin_class, 'test':test_method}\n\n # If we're here the plugin didn't pass validation\n print 'Failure for plugin: %s' % (handler.__name__)\n print 'Validation Error: Worker class is required to have a dependencies list and an execute method'\n return None", "response": "Validate the plugin, each plugin must have the following:\n 1) The worker class must have an execute method: execute(self, input_data).\n 2) The worker class must have a dependencies list (even if it's empty).\n 3) The file must have a top level test() method.\n\n Args:\n handler: the loaded plugin."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef store_sample(self, sample_bytes, filename, type_tag):\n\n # Temp sanity check for old clients\n if len(filename) > 1000:\n print 'switched bytes/filename... 
%s %s' % (sample_bytes[:100], filename[:100])\n exit(1)\n\n sample_info = {}\n\n # Compute the MD5 hash\n sample_info['md5'] = hashlib.md5(sample_bytes).hexdigest()\n\n # Check if sample already exists\n if self.has_sample(sample_info['md5']):\n return sample_info['md5']\n\n # Run the periodic operations\n self.periodic_ops()\n\n # Check if we need to expire anything\n self.expire_data()\n\n # Okay start populating the sample for adding to the data store\n # Filename, length, import time and type_tag\n sample_info['filename'] = filename\n sample_info['length'] = len(sample_bytes)\n sample_info['import_time'] = datetime.datetime.utcnow()\n sample_info['type_tag'] = type_tag\n\n # Random customer for now\n import random\n sample_info['customer'] = random.choice(['Mega Corp', 'Huge Inc', 'BearTron', 'Dorseys Mom'])\n\n # Push the file into the MongoDB GridFS\n sample_info['__grid_fs'] = self.gridfs_handle.put(sample_bytes)\n self.database[self.sample_collection].insert(sample_info)\n\n # Print info\n print 'Sample Storage: %.2f out of %.2f MB' % (self.sample_storage_size(), self.samples_cap)\n\n # Return the sample md5\n return sample_info['md5']", "response": "Store a sample into the datastore."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngetting the storage size of the samples storage collection.", "response": "def sample_storage_size(self):\n \"\"\"Get the storage size of the samples storage collection.\"\"\"\n\n try:\n coll_stats = self.database.command('collStats', 'fs.chunks')\n sample_storage_size = coll_stats['size']/1024.0/1024.0\n return sample_storage_size\n except pymongo.errors.OperationFailure:\n return 0"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef expire_data(self):\n\n # Do we need to start deleting stuff?\n while self.sample_storage_size() > self.samples_cap:\n\n # This should return the 'oldest' record in samples\n record = 
self.database[self.sample_collection].find().sort('import_time',pymongo.ASCENDING).limit(1)[0]\n self.remove_sample(record['md5'])", "response": "Expire data within the samples collection."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef remove_sample(self, md5):\n\n # Grab the sample\n record = self.database[self.sample_collection].find_one({'md5': md5})\n if not record:\n return\n\n # Delete it\n print 'Deleting sample: %s (%.2f MB)...' % (record['md5'], record['length']/1024.0/1024.0)\n self.database[self.sample_collection].remove({'md5': record['md5']})\n self.gridfs_handle.delete(record['__grid_fs'])\n\n # Print info\n print 'Sample Storage: %.2f out of %.2f MB' % (self.sample_storage_size(), self.samples_cap)", "response": "Delete a specific sample from the database"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nclean data in preparation for serialization.", "response": "def clean_for_serialization(self, data):\n \"\"\"Clean data in preparation for serialization.\n\n Deletes items having key either a BSON, datetime, dict or a list instance, or\n starting with __.\n\n Args:\n data: Sample data to be serialized.\n\n Returns:\n Cleaned data dictionary.\n \"\"\"\n\n if isinstance(data, dict):\n for k in data.keys():\n if (k.startswith('__')): \n del data[k]\n elif isinstance(data[k], bson.objectid.ObjectId): \n del data[k]\n elif isinstance(data[k], datetime.datetime):\n data[k] = data[k].isoformat()+'Z'\n elif isinstance(data[k], dict):\n data[k] = self.clean_for_serialization(data[k])\n elif isinstance(data[k], list):\n data[k] = [self.clean_for_serialization(item) for item in data[k]]\n return data"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ncleaning data in preparation for storage.", "response": "def clean_for_storage(self, data):\n \"\"\"Clean data in preparation for storage.\n\n Deletes items with key having a '.' or is '_id'. 
Also deletes those items\n whose value is a dictionary or a list.\n\n Args:\n data: Sample data dictionary to be cleaned.\n\n Returns:\n Cleaned data dictionary.\n \"\"\"\n data = self.data_to_unicode(data)\n if isinstance(data, dict):\n for k in dict(data).keys():\n if k == '_id':\n del data[k]\n continue\n if '.' in k:\n new_k = k.replace('.', '_')\n data[new_k] = data[k]\n del data[k]\n k = new_k\n if isinstance(data[k], dict):\n data[k] = self.clean_for_storage(data[k])\n elif isinstance(data[k], list):\n data[k] = [self.clean_for_storage(item) for item in data[k]]\n return data"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nsupport partial md5s return the full md5 with this method", "response": "def get_full_md5(self, partial_md5, collection):\n \"\"\"Support partial/short md5s, return the full md5 with this method\"\"\"\n print 'Notice: Performing slow md5 search...'\n starts_with = '%s.*' % partial_md5\n sample_info = self.database[collection].find_one({'md5': {'$regex' : starts_with}},{'md5':1})\n return sample_info['md5'] if sample_info else None"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_sample(self, md5):\n\n # Support 'short' md5s but don't waste performance if the full md5 is provided\n if len(md5) < 32:\n md5 = self.get_full_md5(md5, self.sample_collection)\n\n # Grab the sample\n sample_info = self.database[self.sample_collection].find_one({'md5': md5})\n if not sample_info:\n return None\n\n # Get the raw bytes from GridFS (note: this could fail)\n try:\n grid_fs_id = sample_info['__grid_fs']\n sample_info = self.clean_for_serialization(sample_info)\n sample_info.update({'raw_bytes':self.gridfs_handle.get(grid_fs_id).read()})\n return sample_info\n except gridfs.errors.CorruptGridFile:\n # If we don't have the gridfs files, delete the entry from samples\n self.database[self.sample_collection].update({'md5': md5}, {'md5': None})\n return None", "response": "Get the 
sample from the data store."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_sample_window(self, type_tag, size=10):\n\n # Convert size to MB\n size = size * 1024 * 1024\n\n # Grab all the samples of type=type_tag, sort by import_time (newest to oldest)\n cursor = self.database[self.sample_collection].find({'type_tag': type_tag},\n {'md5': 1,'length': 1}).sort('import_time',pymongo.DESCENDING)\n total_size = 0\n md5_list = []\n for item in cursor:\n if total_size > size:\n return md5_list\n md5_list.append(item['md5'])\n total_size += item['length']\n\n # If you get this far you don't have 'size' amount of data\n # so just return what you've got\n return md5_list", "response": "Get a window of samples of type type_tag not to exceed size."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef has_sample(self, md5):\n\n # The easiest thing is to simply get the sample and if that\n # succeeds than return True, else return False\n sample = self.get_sample(md5)\n return True if sample else False", "response": "Checks if the data store has this sample with the given md5."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _list_samples(self, predicate=None):\n cursor = self.database[self.sample_collection].find(predicate, {'_id':0, 'md5':1})\n return [item['md5'] for item in cursor]", "response": "List all samples that meet the predicate or all."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nlisting all samples that match the tags or all if tags are not specified.", "response": "def tag_match(self, tags=None):\n \"\"\"List all samples that match the tags or all if tags are not specified.\n\n Args:\n tags: Match samples against these tags (or all if not specified)\n\n Returns:\n List of the md5s for the matching samples\n \"\"\"\n if 'tags' not in 
self.database.collection_names():\n print 'Warning: Searching on non-existance tags collection'\n return None\n if not tags:\n cursor = self.database['tags'].find({}, {'_id':0, 'md5':1})\n else:\n cursor = self.database['tags'].find({'tags': {'$in': tags}}, {'_id':0, 'md5':1})\n\n # We have the tags, now make sure we only return those md5 which \n # also exist in the samples collection\n tag_md5s = set([item['md5'] for item in cursor])\n sample_md5s = set(item['md5'] for item in self.database['samples'].find({}, {'_id':0, 'md5':1}))\n return list(tag_md5s.intersection(sample_md5s))"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef tags_all(self):\n if 'tags' not in self.database.collection_names():\n print 'Warning: Searching on non-existance tags collection'\n return None\n\n cursor = self.database['tags'].find({}, {'_id':0, 'md5':1, 'tags':1})\n return [item for item in cursor]", "response": "Returns a list of the tags and md5s for all samples\n "} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef store_work_results(self, results, collection, md5):\n\n # Make sure the md5 and time stamp is on the data before storing\n results['md5'] = md5\n results['__time_stamp'] = datetime.datetime.utcnow()\n\n # If the data doesn't have a 'mod_time' field add one now\n if 'mod_time' not in results:\n results['mod_time'] = results['__time_stamp']\n\n # Fixme: Occasionally a capped collection will not let you update with a \n # larger object, if you have MongoDB 2.6 or above this shouldn't\n # really happen, so for now just kinda punting and giving a message.\n try:\n self.database[collection].update({'md5':md5}, self.clean_for_storage(results), True)\n except pymongo.errors.OperationFailure:\n #self.database[collection].insert({'md5':md5}, self.clean_for_storage(results), True)\n print 'Could not update exising object in capped collection, punting...'\n print 'collection: %s 
md5:%s' % (collection, md5)", "response": "Store the results of the worker in the database."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef all_sample_md5s(self, type_tag=None):\n\n if type_tag:\n cursor = self.database[self.sample_collection].find({'type_tag': type_tag}, {'md5': 1, '_id': 0})\n else:\n cursor = self.database[self.sample_collection].find({}, {'md5': 1, '_id': 0})\n return [match.values()[0] for match in cursor]", "response": "Return a list of all md5s matching the type_tag."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef clear_worker_output(self):\n \n print 'Dropping all of the worker output collections... Whee!'\n # Get all the collections in the workbench database\n all_c = self.database.collection_names()\n\n # Remove collections that we don't want to cap\n try:\n all_c.remove('system.indexes')\n all_c.remove('fs.chunks')\n all_c.remove('fs.files')\n all_c.remove('sample_set')\n all_c.remove('tags')\n all_c.remove(self.sample_collection)\n except ValueError:\n print 'Catching a benign exception thats expected...'\n\n for collection in all_c:\n self.database.drop_collection(collection)", "response": "Removes all of the worker output collections"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef periodic_ops(self):\n\n # Only run every 30 seconds\n if (time.time() - self.last_ops_run) < 30:\n return\n\n try:\n\n # Reset last ops run\n self.last_ops_run = time.time()\n print 'Running Periodic Ops'\n\n # Get all the collections in the workbench database\n all_c = self.database.collection_names()\n\n # Remove collections that we don't want to cap\n try:\n all_c.remove('system.indexes')\n all_c.remove('fs.chunks')\n all_c.remove('fs.files')\n all_c.remove('info')\n all_c.remove('tags')\n all_c.remove(self.sample_collection)\n except ValueError:\n print 'Catching 
a benign exception that's expected...'\n\n            # Convert collections to capped if desired\n            if self.worker_cap:\n                size = self.worker_cap * pow(1024, 2) # MegaBytes per collection\n                for collection in all_c:\n                    self.database.command('convertToCapped', collection, size=size)\n\n            # Loop through all collections ensuring they have an index on MD5s\n            for collection in all_c:\n                self.database[collection].ensure_index('md5')\n\n            # Add required indexes for samples collection\n            self.database[self.sample_collection].create_index('import_time')\n\n            # Create an index on tags\n            self.database['tags'].create_index('tags')\n\n        \n        # Mongo may throw an autoreconnect exception so catch it and just return\n        # the autoreconnect means that some operations didn't get executed but\n        # because this method gets called every 30 seconds no biggy...\n        except pymongo.errors.AutoReconnect as e:\n            print 'Warning: MongoDB raised an AutoReconnect: %s' % e\n            return\n        except Exception as e:\n            print 'Critical: MongoDB raised an exception: %s' % e\n            return", "response": "Run periodic operations on the data store."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef to_unicode(self, s):\n\n        # Fixme: This is total horseshit\n        if isinstance(s, unicode):\n            return s\n        if isinstance(s, str):\n            return unicode(s, errors='ignore')\n\n        # Just return the original object\n        return s", "response": "Convert an elementary datatype to unicode."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef execute(self, input_data):\n        \n        # Spin up SWF class\n        swf = SWF()\n        \n        # Get the raw_bytes\n        raw_bytes = input_data['sample']['raw_bytes']\n        \n        # Parse it\n        swf.parse(StringIO(raw_bytes))\n\n        # Header info\n        head = swf.header\n        output = {'version':head.version,'file_length':head.file_length,'frame_count':head.frame_count,\n                  'frame_rate':head.frame_rate,'frame_size':head.frame_size.__str__(),'compressed':head.compressed}\n\n        # Loop 
through all the tags\n output['tags'] = [tag.__str__() for tag in swf.tags]\n\n # Add the meta data to the output\n output.update(input_data['meta'])\n return output\n\n '''\n # Map all tag names to indexes\n tag_map = {tag.name:index for tag,index in enumerate(swf.tags)}\n\n # FileAttribute Info\n file_attr_tag = swf.tags[tag_map]\n \n '''\n '''\n\n # Build up return data structure\n output = {name:value for name,value in locals().iteritems()\n if name not in ['self', 'input_data','raw_bytes']}\n output.update(input_data['meta'])\n return output\n '''", "response": "Execute the SWF command."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef grab_server_args():\n \n workbench_conf = ConfigParser.ConfigParser()\n config_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'config.ini')\n workbench_conf.read(config_path)\n server = workbench_conf.get('workbench', 'server_uri')\n port = workbench_conf.get('workbench', 'server_port')\n\n # Collect args from the command line\n parser = argparse.ArgumentParser()\n parser.add_argument('-s', '--server', type=str, default=server, help='location of workbench server')\n parser.add_argument('-p', '--port', type=int, default=port, help='port used by workbench server')\n args, commands = parser.parse_known_args()\n server = str(args.server)\n port = str(args.port)\n\n return {'server':server, 'port':port, 'commands': commands}", "response": "Grab server info from command line"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _make_attachment(self, attachment, str_encoding=None):\n is_inline_image = False\n if isinstance(attachment, MIMEBase):\n name = attachment.get_filename()\n content = attachment.get_payload(decode=True)\n mimetype = attachment.get_content_type()\n\n if attachment.get_content_maintype() == 'image' and attachment['Content-ID'] is not None:\n is_inline_image = True\n name = 
attachment['Content-ID']\n else:\n (name, content, mimetype) = attachment\n\n # Guess missing mimetype from filename, borrowed from\n # django.core.mail.EmailMessage._create_attachment()\n if mimetype is None and name is not None:\n mimetype, _ = mimetypes.guess_type(name)\n if mimetype is None:\n mimetype = DEFAULT_ATTACHMENT_MIME_TYPE\n\n try:\n # noinspection PyUnresolvedReferences\n if isinstance(content, unicode):\n # Python 2.x unicode string\n content = content.encode(str_encoding)\n except NameError:\n # Python 3 doesn't differentiate between strings and unicode\n # Convert python3 unicode str to bytes attachment:\n if isinstance(content, str):\n content = content.encode(str_encoding)\n\n content_b64 = b64encode(content)\n\n mj_attachment = {\n 'Content-type': mimetype,\n 'Filename': name or '',\n 'content': content_b64.decode('ascii'),\n }\n return mj_attachment, is_inline_image", "response": "Returns a dict of email. attachments item formatted for sending with Mailjet\n "} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef index_data(self, data, index_name, doc_type):\n\n # Index the data (which needs to be a dict/object) if it's not\n # we're going to toss an exception\n if not isinstance(data, dict):\n raise RuntimeError('Index failed, data needs to be a dict!')\n\n try:\n self.els_search.index(index=index_name, doc_type=doc_type, body=data)\n except Exception, error:\n print 'Index failed: %s' % str(error)\n raise RuntimeError('Index failed: %s' % str(error))", "response": "Take an arbitrary dictionary of data and index it with ELS."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nsearch the given index with the given ELS query.", "response": "def search(self, index_name, query):\n \"\"\"Search the given index_name with the given ELS query.\n\n Args:\n index_name: Name of the Index\n query: The string to be searched.\n\n Returns:\n List of results.\n\n 
Raises:\n RuntimeError: When the search query fails.\n \"\"\"\n\n try:\n results = self.els_search.search(index=index_name, body=query)\n return results\n except Exception, error:\n error_str = 'Query failed: %s\\n' % str(error)\n error_str += '\\nIs there a dynamic script in the query?, see www.elasticsearch.org'\n print error_str\n raise RuntimeError(error_str)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nindexing data in Stub Indexer.", "response": "def index_data(self, data, index_name, doc_type):\n \"\"\"Index data in Stub Indexer.\"\"\"\n\n print 'ELS Stub Indexer getting called...'\n print '%s %s %s %s' % (self, data, index_name, doc_type)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef execute(self, input_data):\n ''' Execute method '''\n my_ssdeep = input_data['meta_deep']['ssdeep']\n my_md5 = input_data['meta_deep']['md5']\n\n # For every PE sample in the database compute my ssdeep fuzzy match\n sample_set = self.workbench.generate_sample_set('exe')\n results = self.workbench.set_work_request('meta_deep', sample_set, ['md5','ssdeep'])\n sim_list = []\n for result in results:\n if result['md5'] != my_md5:\n sim_list.append({'md5':result['md5'], 'sim':ssd.compare(my_ssdeep, result['ssdeep'])})\n\n # Sort and return the sim_list (with some logic for threshold)\n sim_list.sort(key=itemgetter('sim'), reverse=True)\n output_list = [sim for sim in sim_list if sim['sim'] > 0]\n return {'md5': my_md5, 'sim_list':output_list}", "response": "Execute method of the cluster_cache_get_cluster_entry method"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nexecute the help_formatter command", "response": "def execute(self, input_data):\n ''' Do CLI formatting and coloring based on the type_tag '''\n input_data = input_data['help_base']\n type_tag = input_data['type_tag']\n\n # Standard help text\n if type_tag == 'help':\n output = '%s%s%s' % 
(color.LightBlue, input_data['help'], color.Normal)\n\n # Worker\n elif type_tag == 'worker':\n output = '%s%s' % (color.Yellow, input_data['name'])\n output += '\\n %sInput: %s%s%s' % (color.LightBlue, color.Green, input_data['dependencies'], color.Normal)\n output += '\\n %s%s' % (color.Green, input_data['docstring'])\n\n # Command\n elif type_tag == 'command':\n output = '%s%s%s %s' % (color.Yellow, input_data['command'], color.LightBlue, input_data['sig'])\n output += '\\n %s%s%s' % (color.Green, input_data['docstring'], color.Normal)\n\n # WTF: Alert on unknown type_tag and return a string of the input_data\n else:\n print 'Alert: help_formatter worker received malformed object: %s' % str(input_data)\n output = '\\n%s%s%s' % (color.Red, str(input_data), color.Normal)\n\n # Return the formatted and colored help\n return {'help': output}"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef run():\n\n # Load the configuration file relative to this script location\n config_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'config.ini')\n workbench_conf = ConfigParser.ConfigParser()\n config_ini = workbench_conf.read(config_path)\n if not config_ini:\n print 'Could not locate config.ini file, tried %s : exiting...' 
% config_path\n exit(1)\n\n # Pull configuration settings\n datastore_uri = workbench_conf.get('workbench', 'datastore_uri')\n database = workbench_conf.get('workbench', 'database')\n worker_cap = workbench_conf.getint('workbench', 'worker_cap')\n samples_cap = workbench_conf.getint('workbench', 'samples_cap')\n\n # Spin up Workbench ZeroRPC\n try:\n store_args = {'uri': datastore_uri, 'database': database, 'worker_cap':worker_cap, 'samples_cap':samples_cap}\n workbench = zerorpc.Server(WorkBench(store_args=store_args), name='workbench', heartbeat=60)\n workbench.bind('tcp://0.0.0.0:4242')\n print '\\nWorkbench is ready and feeling super duper!'\n gevent_signal(signal.SIGTERM, workbench.stop)\n gevent_signal(signal.SIGINT, workbench.stop)\n gevent_signal(signal.SIGKILL, workbench.stop)\n workbench.run()\n print '\\nWorkbench Server Shutting Down... and dreaming of sheep...'\n\n except zmq.error.ZMQError:\n print '\\nInfo: Could not start Workbench server (no worries, probably already running...)\\n'", "response": "Run the workbench server"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef store_sample(self, input_bytes, filename, type_tag):\n\n # If the sample comes in with an unknown type_tag try to determine it\n if type_tag == 'unknown':\n print 'Info: Unknown File -- Trying to Determine Type...'\n type_tag = self.guess_type_tag(input_bytes, filename)\n\n # Do we have a compressed sample? 
If so decompress it\n if type_tag == 'lz4':\n input_bytes = lz4.loads(input_bytes)\n\n # Store the sample\n md5 = self.data_store.store_sample(input_bytes, filename, type_tag)\n\n # Add the type_tags to tags\n if type_tag != 'lz4':\n self.add_tags(md5, type_tag)\n\n return md5", "response": "Store a sample into the DataStore."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_sample(self, md5):\n # First we try a sample, if we can't find one we try getting a sample_set.\n sample = self.data_store.get_sample(md5)\n if not sample:\n return {'sample_set': {'md5_list': self.get_sample_set(md5)}}\n return {'sample': sample}", "response": "Get a sample from the DataStore."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns True if the md5 represent a sample_set False otherwise", "response": "def is_sample_set(self, md5):\n \"\"\" Does the md5 represent a sample_set?\n Args:\n md5: the md5 of the sample_set\n Returns:\n True/False\n \"\"\"\n try:\n self.get_sample_set(md5)\n return True\n except WorkBench.DataNotFound:\n return False"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngetting a sample from the DataStore.", "response": "def get_sample_window(self, type_tag, size):\n \"\"\" Get a sample from the DataStore.\n Args:\n type_tag: the type of samples ('pcap','exe','pdf')\n size: the size of the window in MegaBytes (10 = 10MB)\n Returns:\n A sample_set handle which represents the newest samples within the size window\n \"\"\"\n md5_list = self.data_store.get_sample_window(type_tag, size)\n return self.store_sample_set(md5_list)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncombine the samples together and store the result in the file.", "response": "def combine_samples(self, md5_list, filename, type_tag):\n \"\"\"Combine samples together. 
This may have various use cases the most significant \n involving a bunch of sample 'chunks' got uploaded and now we combine them together\n\n Args:\n md5_list: The list of md5s to combine, order matters!\n filename: name of the file (used purely as meta data not for lookup)\n type_tag: ('exe','pcap','pdf','json','swf', or ...)\n Returns:\n the computed md5 of the combined samples\n \"\"\"\n total_bytes = \"\"\n for md5 in md5_list:\n total_bytes += self.get_sample(md5)['sample']['raw_bytes']\n self.remove_sample(md5)\n\n # Store it\n return self.store_sample(total_bytes, filename, type_tag)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef stream_sample(self, md5, kwargs=None):\n\n # Get the max_rows if specified\n max_rows = kwargs.get('max_rows', None) if kwargs else None\n\n # Grab the sample and it's raw bytes\n sample = self.get_sample(md5)['sample']\n raw_bytes = sample['raw_bytes']\n\n # Figure out the type of file to be streamed\n type_tag = sample['type_tag']\n if type_tag == 'bro':\n bro_log = bro_log_reader.BroLogReader(convert_datetimes=False)\n mem_file = StringIO(raw_bytes)\n generator = bro_log.read_log(mem_file)\n return generator\n elif type_tag == 'els_query':\n els_log = json.loads(raw_bytes)\n # Try to determine a couple of different types of ELS query results\n if 'fields' in els_log['hits']['hits'][0]:\n generator = (row['fields'] for row in els_log['hits']['hits'][:max_rows])\n else:\n generator = (row['_source'] for row in els_log['hits']['hits'][:max_rows])\n return generator\n elif type_tag == 'log':\n generator = ({'row':row} for row in raw_bytes.split('\\n')[:max_rows])\n return generator\n elif type_tag == 'json':\n generator = (row for row in json.loads(raw_bytes)[:max_rows])\n return generator\n else:\n raise RuntimeError('Cannot stream file %s with type_tag:%s' % (md5, type_tag))", "response": "Stream the sample by giving back a generator that yields rows of the file."} 
{"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns a dataframe from the DataStore. This is just a convenience method that uses get_sample internally.", "response": "def get_dataframe(self, md5, compress='lz4'):\n \"\"\"Return a dataframe from the DataStore. This is just a convenience method\n that uses get_sample internally. \n Args:\n md5: the md5 of the dataframe\n compress: compression to use: (defaults to 'lz4' but can be set to None)\n Returns:\n A msgpack'd Pandas DataFrame\n Raises:\n Workbench.DataNotFound if the dataframe is not found.\n \"\"\"\n # First we try a sample, if we can't find one we try getting a sample_set.\n sample = self.data_store.get_sample(md5)\n if not sample:\n raise WorkBench.DataNotFound(\"Could not find %s in the data store\", md5)\n if not compress:\n return sample['raw_bytes']\n else:\n compress_df = lz4.dumps(sample['raw_bytes'])\n print 'Info: DataFrame compression %.0f%%' % (len(compress_df)*100.0/float(len(sample['raw_bytes'])))\n return compress_df"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef guess_type_tag(self, input_bytes, filename):\n mime_to_type = {'application/jar': 'jar',\n 'application/java-archive': 'jar',\n 'application/octet-stream': 'data',\n 'application/pdf': 'pdf',\n 'application/vnd.ms-cab-compressed': 'cab',\n 'application/vnd.ms-fontobject': 'ms_font',\n 'application/vnd.tcpdump.pcap': 'pcap',\n 'application/x-dosexec': 'exe',\n 'application/x-empty': 'empty',\n 'application/x-shockwave-flash': 'swf',\n 'application/xml': 'xml',\n 'application/zip': 'zip',\n 'image/gif': 'gif',\n 'text/html': 'html',\n 'image/jpeg': 'jpg',\n 'image/png': 'png',\n 'image/x-icon': 'icon',\n 'text/plain': 'txt'\n }\n\n # See what filemagic can determine\n with magic.Magic(flags=magic.MAGIC_MIME_TYPE) as mag:\n mime_type = mag.id_buffer(input_bytes[:1024])\n if mime_type in mime_to_type:\n type_tag = mime_to_type[mime_type]\n\n # 
If we get 'data' back look at the filename\n if type_tag == 'data':\n print 'Info: File -- Trying to Determine Type from filename...'\n ext = os.path.splitext(filename)[1][1:]\n if ext in ['mem','vmem']:\n type_tag = 'mem'\n else:\n print 'Alert: Failed to Determine Type for %s' % filename\n exit(1) # Temp\n return type_tag\n else:\n print 'Alert: Sample Type could not be Determined'\n return 'unknown'", "response": "Try to guess the type of the sample from the file."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef add_tags(self, md5, tags):\n if not tags: return\n tag_set = set(self.get_tags(md5)) if self.get_tags(md5) else set()\n if isinstance(tags, str):\n tags = [tags]\n for tag in tags:\n tag_set.add(tag)\n self.data_store.store_work_results({'tags': list(tag_set)}, 'tags', md5)", "response": "Add tags to this sample"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef set_tags(self, md5, tags):\n if isinstance(tags, str):\n tags = [tags]\n tag_set = set(tags)\n self.data_store.store_work_results({'tags': list(tag_set)}, 'tags', md5)", "response": "Set the tags for this sample"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngetting tags for this sample", "response": "def get_tags(self, md5):\n \"\"\"Get tags for this sample\"\"\"\n tag_data = self.data_store.get_work_results('tags', md5)\n return tag_data['tags'] if tag_data else None"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef index_sample(self, md5, index_name):\n generator = self.stream_sample(md5)\n for row in generator:\n self.indexer.index_data(row, index_name)", "response": "Index a stored sample with the Indexer.\n "} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef index_worker_output(self, worker_name, md5, index_name, 
subfield):\n\n # Grab the data\n if subfield:\n data = self.work_request(worker_name, md5)[worker_name][subfield]\n else:\n data = self.work_request(worker_name, md5)[worker_name]\n\n # Okay now index the data\n self.indexer.index_data(data, index_name=index_name, doc_type='unknown')", "response": "Index the data in the index_name with the Indexer."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nadds a node to the database.", "response": "def add_node(self, node_id, name, labels):\n \"\"\" Add a node to the graph with name and labels.\n Args:\n node_id: the unique node_id e.g. 'www.evil4u.com'\n name: the display name of the node e.g. 'evil4u'\n labels: a list of labels e.g. ['domain','evil']\n Returns:\n Nothing\n \"\"\"\n self.neo_db.add_node(node_id, name, labels)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nadd a relationship between two nodes.", "response": "def add_rel(self, source_id, target_id, rel):\n \"\"\" Add a relationship: source, target must already exist (see add_node)\n 'rel' is the name of the relationship 'contains' or whatever.\n Args:\n source_id: the unique node_id of the source\n target_id: the unique node_id of the target\n rel: name of the relationship\n Returns:\n Nothing\n \"\"\"\n self.neo_db.add_rel(source_id, target_id, rel)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef clear_db(self):\n self.data_store.clear_db()\n\n # Have the plugin manager reload all the plugins\n self.plugin_manager.load_all_plugins()\n\n # Store information about commands and workbench\n self._store_information()", "response": "Clears the Main Database of all samples and worker output."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef clear_worker_output(self):\n self.data_store.clear_worker_output()\n\n # Have the plugin manager reload all the plugins\n 
self.plugin_manager.load_all_plugins()\n\n # Store information about commands and workbench\n self._store_information()", "response": "Drops all of the worker output collections\n Args : None\n "} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef work_request(self, worker_name, md5, subkeys=None):\n\n # Pull the worker output\n work_results = self._recursive_work_resolver(worker_name, md5)\n\n # Subkeys (Fixme this is super klutzy)\n if subkeys:\n if isinstance(subkeys, str):\n subkeys = [subkeys]\n try:\n sub_results = {}\n for subkey in subkeys:\n tmp = work_results[worker_name]\n\n # Traverse any subkeys\n for key in subkey.split('.')[:-1]:\n tmp = tmp[key]\n\n # Last subkey\n key = subkey.split('.')[-1]\n if key == '*':\n for key in tmp.keys():\n sub_results[key] = tmp[key]\n else:\n sub_results[key] = tmp[key]\n\n # Set the output\n work_results = sub_results\n\n except (KeyError, TypeError):\n raise RuntimeError('Could not get one or more subkeys for: %s' % (work_results))\n\n # Clean it and ship it\n return self.data_store.clean_for_serialization(work_results)", "response": "This function is used to make a work request for an existing stored sample."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nmakes a work request for an existing stored sample (or sample_set). 
Args: worker_name: 'strings', 'pe_features', whatever sample_set: the md5 of a sample_set in the Workbench data store subkeys: just get a subkey of the output: 'foo' or 'foo.bar' (None for all) Returns: The output is a generator of the results of the worker output for the sample_set", "response": "def set_work_request(self, worker_name, sample_set, subkeys=None):\n \"\"\" Make a work request for an existing stored sample (or sample_set).\n Args:\n worker_name: 'strings', 'pe_features', whatever\n sample_set: the md5 of a sample_set in the Workbench data store\n subkeys: just get a subkey of the output: 'foo' or 'foo.bar' (None for all) \n Returns:\n The output is a generator of the results of the worker output for the sample_set\n \"\"\"\n\n # Does worker support sample_set_input?\n if self.plugin_meta[worker_name]['sample_set_input']:\n yield self.work_request(worker_name, sample_set, subkeys)\n \n # Loop through all the md5s and return a generator with yield\n else:\n md5_list = self.get_sample_set(sample_set)\n for md5 in md5_list:\n if subkeys:\n yield self.work_request(worker_name, md5, subkeys)\n else:\n yield self.work_request(worker_name, md5)[worker_name]"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef store_sample_set(self, md5_list):\n\n # Sanity check\n if not md5_list:\n print 'Warning: Trying to store an empty sample_set'\n return None\n\n # Remove any duplicates\n md5_list = list(set(md5_list))\n\n for md5 in md5_list:\n if not self.has_sample(md5):\n raise RuntimeError('%s: Not found! 
All items in sample_set must be in the datastore' % (md5))\n        set_md5 = hashlib.md5(str(md5_list)).hexdigest()\n        self._store_work_results({'md5_list':md5_list}, 'sample_set', set_md5)\n        return set_md5", "response": "Store a sample set in the data store."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef generate_sample_set(self, tags=None):\n        if isinstance(tags, str):\n            tags = [tags]\n        md5_list = self.data_store.tag_match(tags)\n        return self.store_sample_set(md5_list)", "response": "Generate a sample_set that matches the tags or all if tags are not specified."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef help(self, topic=None):\n        if not topic:\n            topic = 'workbench'\n\n        # It's possible to ask for help on something that doesn't exist\n        # so we'll catch the exception and push back an object that\n        # indicates we didn't find what they were asking for\n        try:\n            return self.work_request('help_formatter', topic)['help_formatter']['help']\n        except WorkBench.DataNotFound as e:\n\n            # Okay this is a bit tricky we want to give the user a nice error\n            # message that has both the md5 of what they were looking for and\n            # a nice informative message that explains what might have happened\n            sample_md5 = e.args[0]\n            return '%s%s\\n\\t%s%s%s' % (color.Yellow, sample_md5, color.Green, e.message(), color.Normal)", "response": "Returns the formatted help for the given topic"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _help_basic(self):\n        help = '%sWorkbench: Getting started...' 
% (color.Yellow)\n help += '\\n%sStore a sample into Workbench:' % (color.Green)\n help += '\\n\\t%s$ workbench.store_sample(raw_bytes, filename, type_tag)' % (color.LightBlue)\n help += '\\n\\n%sNotice store_sample returns an md5 of the sample...'% (color.Yellow)\n help += '\\n%sRun workers on the sample (view, meta, whatever...):' % (color.Green)\n help += '\\n\\t%s$ workbench.work_request(\\'view\\', md5)%s' % (color.LightBlue, color.Normal)\n return help", "response": "Help for Workbench Basics"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nhelping on all available commands", "response": "def _help_commands(self):\n \"\"\" Help on all the available commands \"\"\"\n help = 'Workbench Commands:'\n for command in self.list_all_commands():\n full_help = self.work_request('help_formatter', command)['help_formatter']['help']\n compact_help = full_help.split('\\n')[:2]\n help += '\\n\\n%s' % '\\n'.join(compact_help)\n return help"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _help_workers(self):\n help = 'Workbench Workers:'\n for worker in self.list_all_workers():\n full_help = self.work_request('help_formatter', worker)['help_formatter']['help']\n compact_help = full_help.split('\\n')[:4]\n help += '\\n\\n%s' % '\\n'.join(compact_help)\n return help", "response": "Help on all the available workers"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef list_all_commands(self):\n commands = [name for name, _ in inspect.getmembers(self, predicate=inspect.isroutine) if not name.startswith('_')]\n return commands", "response": "Returns a list of all the Workbench commands"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_info(self, component):\n\n # Grab it, clean it and ship it\n work_results = self._get_work_results('info', component)\n return 
self.data_store.clean_for_serialization(work_results)", "response": "Get the information about this component"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef store_info(self, info_dict, component, type_tag):\n\n # Enforce dictionary input\n if not isinstance(info_dict, dict):\n print 'Critical: info_dict must be a python dictionary, got %s' % type(info_dict)\n return\n\n # Ensure values are not functions/methods/classes\n info_storage = {key:value for key, value in info_dict.iteritems() if not hasattr(value, '__call__')}\n\n # Place the type_tag on it and store it\n info_storage['type_tag'] = type_tag\n self._store_work_results(info_storage, 'info', component)", "response": "Store information about a specific component."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _store_information(self):\n \n print '<<< Generating Information Storage >>>'\n\n # Stores information on Workbench commands and signatures\n for name, meth in inspect.getmembers(self, predicate=inspect.isroutine):\n if not name.startswith('_'):\n info = {'command': name, 'sig': str(funcsigs.signature(meth)), 'docstring': meth.__doc__}\n self.store_info(info, name, type_tag='command')\n\n # Stores help text into the workbench information system\n self.store_info({'help': '<<< Workbench Server Version %s >>>' % self.version}, 'version', type_tag='help')\n self.store_info({'help': self._help_workbench()}, 'workbench', type_tag='help')\n self.store_info({'help': self._help_basic()}, 'basic', type_tag='help')\n self.store_info({'help': self._help_commands()}, 'commands', type_tag='help')\n self.store_info({'help': self._help_workers()}, 'workers', type_tag='help')", "response": "Stores information about Workbench commands and workers and help text into the system s information."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following 
Python 3 function\ndef _new_plugin(self, plugin):\n\n # First store the plugin info into our data store\n self.store_info(plugin, plugin['name'], type_tag='worker')\n\n # Place it into our active plugin list\n self.plugin_meta[plugin['name']] = plugin", "response": "Internal method for handling new plugins."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _store_work_results(self, results, collection, md5):\n self.data_store.store_work_results(results, collection, md5)", "response": "Internal method to store the work results in the data store."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _recursive_work_resolver(self, worker_name, md5):\n\n # Looking for the sample?\n if worker_name == 'sample':\n return self.get_sample(md5)\n\n # Looking for info?\n if worker_name == 'info':\n return self._get_work_results('info', md5)\n\n # Looking for tags?\n if worker_name == 'tags':\n return self._get_work_results('tags', md5)\n\n # Do I actually have this plugin? 
(might have failed, etc)\n if (worker_name not in self.plugin_meta):\n print 'Alert: Request for non-existing or failed plugin: %s' % (worker_name)\n return {}\n\n # If the results exist and the time_stamp is newer than the entire work_chain, I'm done\n collection = self.plugin_meta[worker_name]['name']\n try:\n work_results = self._get_work_results(collection, md5)\n work_chain_mod_time = self._work_chain_mod_time(worker_name)\n if work_chain_mod_time < work_results[collection]['__time_stamp']:\n return work_results\n elif self.VERBOSE:\n print 'VERBOSE: %s work_chain is newer than data' % (worker_name)\n except WorkBench.DataNotFound:\n if self.VERBOSE:\n print 'Verbose: %s data not found generating' % (worker_name)\n\n # Okay either need to generate (or re-generate) the work results\n dependencies = self.plugin_meta[worker_name]['dependencies']\n dependant_results = {}\n for dependency in dependencies:\n dependant_results.update(self._recursive_work_resolver(dependency, md5))\n if self.VERBOSE:\n print 'Verbose: new work for plugin: %s' % (worker_name)\n work_results = self.plugin_meta[worker_name]['class']().execute(dependant_results)\n\n # Enforce dictionary output\n if not isinstance(work_results, dict):\n print 'Critical: Plugin %s MUST produce a python dictionary!' % worker_name\n return None\n\n # Store the results and return\n self._store_work_results(work_results, collection, md5)\n return self._get_work_results(collection, md5)", "response": "Internal method that recursively generates the work results for the given worker_name and md5."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef load_sample(self, file_path, tags=None):\n\n # Recommend a tag\n if not tags:\n print '\\n%sRequired: Add a list of tags when you load samples (put \\'unknown\\' if you must). 
\\\n \\n\\t%sExamples: [\\'bad\\'], [\\'good\\'], [\\'bad\\',\\'aptz13\\']%s' % (color.Yellow, color.Green, color.Normal)\n return\n\n # Do they want everything under a directory?\n if os.path.isdir(file_path):\n file_list = self._all_files_in_directory(file_path)\n else:\n file_list = [file_path]\n\n # Upload the files into workbench\n md5_list = []\n for path in file_list:\n with open(path, 'rb') as my_file:\n raw_bytes = my_file.read()\n md5 = hashlib.md5(raw_bytes).hexdigest()\n if not self.workbench.has_sample(md5):\n print '%sStreaming Sample...%s' % (color.LightPurple, color.Normal)\n basename = os.path.basename(path)\n md5 = self.streamer.stream_to_workbench(raw_bytes, basename, 'unknown', tags)\n\n print '\\n%s %s%s %sLocked and Loaded...%s\\n' % \\\n (self.beer, color.LightPurple, md5[:6], color.Yellow, color.Normal)\n\n # Add tags to the sample\n self.workbench.add_tags(md5, tags)\n md5_list.append(md5)\n\n # Pivot on the sample_set\n set_md5 = self.workbench.store_sample_set(md5_list)\n self.pivot(set_md5, '_'.join(tags))\n\n # Dump out tag information\n self.tags()", "response": "Load a sample into a workbench object"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef pivot(self, md5, tag=''):\n\n # Is the md5 a tag?\n ss = self.workbench.generate_sample_set(md5)\n if ss:\n tag = md5 if not tag else tag\n md5 = ss\n\n # Is the md5 a sample_set?\n if self.workbench.is_sample_set(md5):\n\n # Is the sample_set one sample?\n ss = self.workbench.get_sample_set(md5)\n if len(ss) == 1:\n md5 = ss[0]\n deco = '(%s:%d)' % (tag, len(ss))\n self.ipshell.push({'prompt_deco': deco})\n else:\n deco = '(%s:1)' % tag\n self.ipshell.push({'prompt_deco': deco})\n\n # Set the new md5\n self.session.md5 = md5\n self.session.short_md5 = md5[:6]\n self.ipshell.push({'md5': self.session.md5})\n self.ipshell.push({'short_md5': self.session.short_md5})", "response": "Pivot on an md5"} {"SOURCE": "codesearchnet", 
"instruction": "Given the following Python 3 function, write the documentation\ndef tags(self):\n '''Display tag information for all samples in database'''\n tags = self.workbench.get_all_tags()\n if not tags:\n return\n tag_df = pd.DataFrame(tags)\n tag_df = self.vectorize(tag_df, 'tags')\n print '\\n%sSamples in Database%s' % (color.LightPurple, color.Normal)\n self.top_corr(tag_df)", "response": "Display tag information for all samples in database"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef pull_df(self, md5):\n try:\n _packed_df = self.workbench.get_dataframe(md5)\n _df = pd.read_msgpack(lz4.loads(_packed_df))\n return _df\n except zerorpc.exceptions.RemoteError as e:\n return repr_to_str_decorator.r_to_s(self._data_not_found)(e)", "response": "Wrapper for the Workbench get_dataframe method\n "} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef flatten(self, df, column_name):\n _exp_list = [[md5, x] for md5, value_list in zip(df['md5'], df[column_name]) for x in value_list]\n return pd.DataFrame(_exp_list, columns=['md5',column_name])", "response": "Flatten a column in the dataframe that contains lists"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef top_corr(self, df):\n tag_freq = df.sum()\n tag_freq.sort(ascending=False)\n\n corr = df.corr().fillna(1)\n corr_dict = corr.to_dict()\n for tag, count in tag_freq.iteritems():\n print ' %s%s: %s%s%s (' % (color.Green, tag, color.LightBlue, count, color.Normal),\n tag_corrs = sorted(corr_dict[tag].iteritems(), key=operator.itemgetter(1), reverse=True)\n for corr_tag, value in tag_corrs[:5]:\n if corr_tag != tag and (value > .2):\n print '%s%s:%s%.1f' % (color.Green, corr_tag, color.LightBlue, value),\n print '%s)' % color.Normal", "response": "Give aggregation counts and correlations"} {"SOURCE": "codesearchnet", 
"instruction": "Can you generate the documentation for the following Python 3 function\ndef search(self, tags=None):\n if isinstance(tags, str):\n tags = [tags]\n return self.workbench.generate_sample_set(tags)", "response": "Wrapper for the Workbench search method\n "} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef versions(self):\n print '%s<<< Workbench CLI Version %s >>>%s' % (color.LightBlue, self.version, color.Normal)\n print self.workbench.help('version')", "response": "Announce Versions of CLI and Server\n "} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef run(self):\n ''' Running the workbench CLI '''\n\n # Announce versions\n self.versions()\n\n # Sample/Tag info and Help\n self.tags()\n print '\\n%s' % self.workbench.help('cli')\n\n # Now that we have the Workbench connection spun up, we register some stuff\n # with the embedded IPython interpreter and than spin it up\n # cfg = IPython.config.loader.Config()\n cfg = Config()\n cfg.InteractiveShellEmbed.autocall = 2\n cfg.InteractiveShellEmbed.colors = 'Linux'\n cfg.InteractiveShellEmbed.color_info = True\n cfg.InteractiveShellEmbed.autoindent = True\n cfg.InteractiveShellEmbed.deep_reload = True\n cfg.PromptManager.in_template = (\n r'{color.LightPurple}{short_md5}{color.Yellow}{prompt_deco}{color.LightBlue} Workbench{color.Green}[\\#]> ')\n # cfg.PromptManager.out_template = ''\n\n # Create the IPython shell\n self.ipshell = IPython.terminal.embed.InteractiveShellEmbed(\n config=cfg, banner1='', exit_msg='\\nWorkbench has SuperCowPowers...')\n\n # Register our transformer, the shell will use this for 'shortcut' commands\n auto_quoter = auto_quote_xform.AutoQuoteTransformer(self.ipshell, self.ipshell.prefilter_manager)\n auto_quoter.register_command_set(self.command_set)\n\n # Setting up some Pandas options\n pd.set_option('display.width', 140)\n 
pd.set_option('max_colwidth', 15)\n\n # Start up the shell with our set of workbench commands\n self.ipshell(local_ns=self.command_dict)", "response": "Runs the workbench CLI"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _connect(self, server_info):\n\n # First we do a temp connect with a short heartbeat\n _tmp_connect = zerorpc.Client(timeout=300, heartbeat=2)\n _tmp_connect.connect('tcp://'+server_info['server']+':'+server_info['port'])\n try:\n _tmp_connect._zerorpc_name()\n _tmp_connect.close()\n del _tmp_connect\n except zerorpc.exceptions.LostRemote:\n print '%sError: Could not connect to Workbench Server at %s:%s%s' % \\\n (color.Red, server_info['server'], server_info['port'], color.Normal)\n sys.exit(1)\n\n # Okay do the real connection\n if self.workbench:\n self.workbench.close()\n self.workbench = zerorpc.Client(timeout=300, heartbeat=60)\n self.workbench.connect('tcp://'+server_info['server']+':'+server_info['port'])\n print '\\n%s<<< Connected: %s:%s >>>%s' % (color.Green, server_info['server'], server_info['port'], color.Normal)", "response": "Connect to the workbench server"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _progress_print(self, sent, total):\n percent = min(int(sent*100.0/total), 100)\n sys.stdout.write('\\r{0}[{1}{2}] {3}{4}%{5}'.\n format(color.Green, '#'*(percent/2), ' '*(50-percent/2), color.Yellow, percent, color.Normal))\n sys.stdout.flush()", "response": "Print the current progress of the current upload."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nwrap for a work_request to workbench", "response": "def _work_request(self, worker, md5=None):\n \"\"\"Wrapper for a work_request to workbench\"\"\"\n\n # I'm sure there's a better way to do this\n if not md5 and not self.session.md5:\n return 'Must call worker with an md5 argument...'\n elif not md5:\n md5 = 
self.session.md5\n\n # Is the md5 a sample_set?\n if self.workbench.is_sample_set(md5):\n return self.workbench.set_work_request(worker, md5)\n\n # Make the work_request with worker and md5 args\n try:\n return self.workbench.work_request(worker, md5)\n except zerorpc.exceptions.RemoteError as e:\n return repr_to_str_decorator.r_to_s(self._data_not_found)(e)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _generate_command_dict(self):\n\n # First add all the workers\n commands = {}\n for worker in self.workbench.list_all_workers():\n commands[worker] = lambda md5=None, worker=worker: self._work_request(worker, md5)\n\n # Next add all the commands\n for command in self.workbench.list_all_commands():\n # Fixme: is there a better way to get the lambda function from ZeroRPC\n commands[command] = self.workbench.__getattr__(command)\n\n # Now the general commands which are often overloads\n # for some of the workbench commands\n general = {\n 'workbench': self.workbench,\n 'help': self._help,\n 'load_sample': self.load_sample,\n 'pull_df': self.pull_df,\n 'flatten': self.flatten,\n 'vectorize': self.vectorize,\n 'top_corr': self.top_corr,\n 'tags': self.tags,\n 'pivot': self.pivot,\n 'search': self.search,\n 'reconnect': lambda info=self.server_info: self._connect(info),\n 'version': self.versions,\n 'versions': self.versions,\n 'short_md5': self.session.short_md5,\n 'prompt_deco': self.session.prompt_deco\n }\n commands.update(general)\n\n # Return the list of workbench commands\n return commands", "response": "Generate a dict of functions that will be used to make the workbench with a bunch of shortcuts\n and helper/alias functions that will make the shell MUCH easier."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _register_info(self):\n # Stores information on Workbench commands and signatures\n for name, meth in inspect.getmembers(self, 
predicate=inspect.isroutine):\n if not name.startswith('_') and name != 'run':\n info = {'command': name, 'sig': str(funcsigs.signature(meth)), 'docstring': meth.__doc__}\n self.workbench.store_info(info, name, 'command')\n\n # Register help information\n self.workbench.store_info({'help': self.help.help_cli()}, 'cli', 'help')\n self.workbench.store_info({'help': self.help.help_cli_basic()}, 'cli_basic', 'help')\n self.workbench.store_info({'help': self.help.help_cli_search()}, 'search', 'help')\n self.workbench.store_info({'help': self.help.help_dataframe()}, 'dataframe', 'help')\n self.workbench.store_info({'help': self.help.help_dataframe_memory()}, 'dataframe_memory', 'help')\n self.workbench.store_info({'help': self.help.help_dataframe_pe()}, 'dataframe_pe', 'help')", "response": "Register local methods in the Workbench Information system"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef transform(self, line, _continue_prompt):\n\n # Capture the original line\n orig_line = line\n\n # Get tokens from all the currently active namespace\n ns_token_set = set([token for nspace in self.shell.all_ns_refs for token in nspace])\n\n # Build up token set and info out of the incoming line\n token_list = re.split(' |;|,|(|)|\\'|\"', line)\n token_list = [item for item in token_list if item != None and item != '']\n num_tokens = len(token_list)\n first_token = token_list[0]\n token_set = set(token_list) \n\n # Very conservative logic (but possibly flawed)\n # 1) Lines with any of these symbols ; , ' \" ( ) aren't touched\n # 2) Need to have more than one token\n # 3) First token in line must be in the workbench command set\n # 4) If first token is 'help' than all other tokens are quoted\n # 5) Otherwise only tokens that are not in any of the namespace are quoted\n\n # Fixme: Horse shit temp hack for load_sample\n # 0) If load_sample do special processing\n if first_token == 'load_sample':\n # If the 
second arg isn't in namespace quote it\n if token_list[1] not in ns_token_set:\n line = line.replace(token_list[1], '\"'+token_list[1]+'\",')\n return line\n\n # Fixme: Horse shit temp hack for pivot\n # 0) If pivot do special processing\n if first_token == 'pivot':\n # Quote all other tokens\n for token in token_list:\n if token not in ns_token_set:\n line = line.replace(token, '\"' + token + '\",')\n return line\n\n # 1) Lines with any of these symbols ; , ' \" ( ) aren't touched\n skip_symbols = [';', ',', '\\'', '\"', '(', ')']\n if any([sym in line for sym in skip_symbols]):\n return line\n\n # 2) Need to have more than one token\n # 3) First token in line must be in the workbench command set\n if num_tokens > 1 and first_token in self.command_set:\n\n # 4) If first token is 'help' than all other tokens are quoted\n if first_token == 'help':\n token_set.remove('help')\n for token in token_set:\n line = line.replace(token, '\"'+token+'\"')\n\n # 5) Otherwise only tokens that are not in any of the namespace are quoted\n else: # Not help\n for token in token_set:\n if token not in ns_token_set:\n line = line.replace(token, '\"'+token+'\"')\n\n # Return the processed line\n return line", "response": "Shortcut Workbench commands by using auto - quotes"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef decodes(self, s: str) -> BioCCollection:\n tree = etree.parse(io.BytesIO(bytes(s, encoding='UTF-8')))\n collection = self.__parse_collection(tree.getroot())\n collection.encoding = tree.docinfo.encoding\n collection.standalone = tree.docinfo.standalone\n collection.version = tree.docinfo.xml_version\n return collection", "response": "Deserialize a string representation of a BioC collection into a BioC object."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef decode(self, fp: TextIO) -> BioCCollection:\n # utf8_parser = 
etree.XMLParser(encoding='utf-8')\n tree = etree.parse(fp)\n collection = self.__parse_collection(tree.getroot())\n collection.encoding = tree.docinfo.encoding\n collection.standalone = tree.docinfo.standalone\n collection.version = tree.docinfo.xml_version\n return collection", "response": "Deserialize a file - like object containing a BioC collection."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nexecute the plugin and return the output", "response": "def execute(self, input_data):\n ''' Execute method '''\n\n # Grab the raw bytes of the sample\n raw_bytes = input_data['sample']['raw_bytes']\n\n # Spin up the rekall session and render components\n session = MemSession(raw_bytes)\n renderer = WorkbenchRenderer(session=session)\n\n # Run the plugin\n session.RunPlugin(self.plugin_name, renderer=renderer)\n\n return renderer.get_output()"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef process_row(cls, data, column_map):\n row = {}\n for key,value in data.iteritems():\n if not value:\n value = '-' \n elif isinstance(value, list):\n value = value[1]\n elif isinstance(value, dict):\n if 'type_name' in value:\n if 'UnixTimeStamp' in value['type_name']:\n value = datetime.datetime.utcfromtimestamp(value['epoch'])\n if value == datetime.datetime(1970, 1, 1, 0, 0):\n value = '-'\n\n # Assume the value is somehow well formed when we get here\n row[column_map[key]] = value\n return row", "response": "Process the row data from Rekall"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef start(self, plugin_name=None, kwargs=None):\n self.output = [] # Start basically resets the output data\n super(WorkbenchRenderer, self).start(plugin_name=plugin_name)\n return self", "response": "Start method that initial data structures and store some meta data."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code 
does\ndef format(self, formatstring, *args):\n\n # Make a new section\n if self.incoming_section:\n self.SendMessage(['s', {'name': args}])\n self.incoming_section = False", "response": "Presentation Information from the Plugin"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nopen a file for writing or reading.", "response": "def open(self, directory=None, filename=None, mode=\"rb\"):\n \"\"\"Opens a file for writing or reading.\"\"\"\n path = os.path.join(directory, filename)\n return open(path, mode)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nyielding compressed chunks from a data array", "response": "def _file_chunks(self, data, chunk_size):\n \"\"\" Yield compressed chunks from a data array\"\"\"\n for i in xrange(0, len(data), chunk_size):\n yield self.compressor(data[i:i+chunk_size])"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef stream_to_workbench(self, raw_bytes, filename, type_tag, tags):\n md5_list = []\n sent_bytes = 0\n total_bytes = len(raw_bytes)\n for chunk in self._file_chunks(raw_bytes, self.chunk_size):\n md5_list.append(self.workbench.store_sample(chunk, filename, self.compress_ident))\n sent_bytes += self.chunk_size\n self.progress(sent_bytes, total_bytes)\n\n # Now we just ask Workbench to combine these\n full_md5 = self.workbench.combine_samples(md5_list, filename, type_tag)\n\n # Add the tags\n self.workbench.add_tags(full_md5, tags)\n\n # Return the md5 of the finalized sample\n return full_md5", "response": "Stream a large file into chunks and send them to Workbench"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nserializes a BioC object to a JSON formatted str.", "response": "def dumps(obj, **kwargs) -> str:\n \"\"\"\n Serialize a BioC ``obj`` to a JSON formatted ``str``.\n \"\"\"\n return json.dumps(obj, cls=BioCJSONEncoder, **kwargs)"} 
{"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nserialize obj as a JSON formatted stream to fp.", "response": "def dump(obj, fp, **kwargs):\n \"\"\"\n Serialize obj as a JSON formatted stream to fp (a .write()-supporting file-like object)\n \"\"\"\n return json.dump(obj, fp, cls=BioCJSONEncoder, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef write(self, obj: BioCDocument or BioCPassage or BioCSentence):\n if self.level == DOCUMENT and not isinstance(obj, BioCDocument):\n raise ValueError\n if self.level == PASSAGE and not isinstance(obj, BioCPassage):\n raise ValueError\n if self.level == SENTENCE and not isinstance(obj, BioCSentence):\n raise ValueError\n self.writer.write(BioCJSONEncoder().default(obj))", "response": "Encode and write a single object."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef execute(self, input_data):\n ''' Execute method '''\n\n # Spin up the rekall adapter\n adapter = RekallAdapter()\n adapter.set_plugin_name(self.plugin_name)\n rekall_output = adapter.execute(input_data)\n\n # Process the output data\n for line in rekall_output:\n\n if line['type'] == 'm': # Meta\n self.output['meta'] = line['data']\n elif line['type'] == 's': # New Session (Table)\n self.current_table_name = line['data']['name'][1]\n elif line['type'] == 't': # New Table Headers (column names)\n self.column_map = {item['cname']: item['name'] if 'name' in item else item['cname'] for item in line['data']}\n elif line['type'] == 'r': # Row\n \n # Add the row to our current table\n row = RekallAdapter.process_row(line['data'], self.column_map)\n self.output['tables'][self.current_table_name].append(row)\n else:\n print 'Note: Ignoring rekall message of type %s: %s' % (line['type'], line['data'])\n\n # All done\n return self.output", "response": "Execute the rekall adapter and 
return the output"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef execute(self, input_data):\n ''' Execute the ViewMemoryDeep worker '''\n\n # Aggregate the output from all the memory workers, clearly this could be kewler\n output = input_data['view_memory']\n output['tables'] = {}\n for data in [input_data[key] for key in ViewMemoryDeep.dependencies]:\n for name,table in data['tables'].iteritems():\n output['tables'].update({name: table})\n return output", "response": "Execute the ViewMemoryDeep worker"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef execute(self, input_data):\n ''' Execute the ViewMemory worker '''\n\n # Aggregate the output from all the memory workers into concise summary info\n output = {'meta': input_data['mem_meta']['tables']['info']}\n output['connscan'] = list(set([item['Remote Address'] for item in input_data['mem_connscan']['tables']['connscan']]))\n pslist_md5s = {self.file_to_pid(item['filename']): item['md5'] for item in input_data['mem_procdump']['tables']['dumped_files']}\n output['pslist'] = ['PPID: %d PID: %d Name: %s - %s' % (item['PPID'], item['PID'], item['Name'], pslist_md5s[item['PID']])\n for item in input_data['mem_pslist']['tables']['pslist']]\n return output", "response": "Execute the ViewMemory worker"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nstores a config value in a dictionary", "response": "def store(self, name, value, atype, new_name=None, multiplier=None, allowed_values=None):\n ''' store a config value in a dictionary, these values are used to populate a trasnfer spec\n validation -- check type, check allowed values and rename if required '''\n if value is not None:\n _bad_type = (not isinstance(value, atype))\n if not _bad_type:\n # special case\n _bad_type = (isinstance(value, bool) and atype == int)\n\n if _bad_type:\n # could be 
a special value\n if allowed_values and value in allowed_values:\n allowed_values = None\n else:\n raise ValueError(\"%s should be value of type (%s)\" % (name, atype.__name__))\n\n if allowed_values:\n if isinstance(value, str):\n if value not in allowed_values:\n raise ValueError(\"%s can be %s\" % (name, allowed_values))\n elif isinstance(value, int):\n if isinstance(allowed_values[0], int):\n if value < allowed_values[0]:\n raise ValueError(\"%s must be >= %d\" % (name, allowed_values[0]))\n\n _val = value if not multiplier else (multiplier * value)\n _name = name if not new_name else new_name\n self._dict[_name] = _val"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef multi_session(self):\n ''' convert the multi_session param to a number '''\n _val = 0\n if \"multi_session\" in self._dict:\n _val = self._dict[\"multi_session\"]\n if str(_val).lower() == 'all':\n _val = -1\n\n return int(_val)", "response": "convert the multi_session param to a number"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nget the Aspera connection details on Aspera enabled buckets", "response": "def _raw_aspera_metadata(self, bucket):\n ''' get the Aspera connection details on Aspera enabled buckets '''\n response = self._client.get_bucket_aspera(Bucket=bucket)\n\n # Parse metadata from response\n aspera_access_key = response['AccessKey']['Id']\n aspera_secret_key = response['AccessKey']['Secret']\n ats_endpoint = response['ATSEndpoint']\n\n return aspera_access_key, aspera_secret_key, ats_endpoint"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nmakes http call to Aspera to fetch back transfer spec", "response": "def _fetch_transfer_spec(self, node_action, token, bucket_name, paths):\n ''' make http call to Aspera to fetch back transfer spec '''\n aspera_access_key, aspera_secret_key, ats_endpoint = 
self._get_aspera_metadata(bucket_name)\n\n _headers = {'accept': \"application/json\",\n 'Content-Type': \"application/json\"}\n\n credentials = {'type': 'token',\n 'token': {'delegated_refresh_token': token}}\n\n _url = ats_endpoint\n _headers['X-Aspera-Storage-Credentials'] = json.dumps(credentials)\n _data = {'transfer_requests': [\n {'transfer_request': {'paths': paths, 'tags': {'aspera': {\n 'node': {'storage_credentials': credentials}}}}}]}\n\n _session = requests.Session()\n _response = _session.post(url=_url + \"/files/\" + node_action,\n auth=(aspera_access_key, aspera_secret_key),\n headers=_headers, json=_data, verify=self._config.verify_ssl)\n return _response"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\npasses the transfer details to aspera and receives back a populated transfer spec complete with access token", "response": "def _create_transfer_spec(self, call_args):\n ''' pass the transfer details to aspera and receive back a\n populated transfer spec complete with access token '''\n _paths = []\n for _file_pair in call_args.file_pair_list:\n _path = OrderedDict()\n if call_args.direction == enumAsperaDirection.SEND:\n _action = \"upload_setup\"\n _path['source'] = _file_pair.fileobj\n _path['destination'] = _file_pair.key\n else:\n _action = \"download_setup\"\n _path['source'] = _file_pair.key\n _path['destination'] = _file_pair.fileobj\n _paths.append(_path)\n\n # Add credentials before the transfer spec is requested.\n delegated_token = self._delegated_token_manager.get_token()\n _response = self._fetch_transfer_spec(_action, delegated_token, call_args.bucket, _paths)\n\n tspec_dict = json.loads(_response.content)['transfer_specs'][0]['transfer_spec']\n\n tspec_dict[\"destination_root\"] = \"/\"\n\n if (call_args.transfer_config):\n tspec_dict.update(call_args.transfer_config.dict)\n if call_args.transfer_config.is_multi_session_all:\n tspec_dict['multi_session'] = 0\n _remote_host = tspec_dict['remote_host'].split('.')\n 
# now we append '-all' to the remote host\n _remote_host[0] += \"-all\"\n tspec_dict['remote_host'] = \".\".join(_remote_host)\n logger.info(\"New remote_host(%s)\" % tspec_dict['remote_host'])\n\n call_args.transfer_spec = json.dumps(tspec_dict)\n\n return True"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nupload a directory using Aspera", "response": "def upload_directory(self, directory, bucket, key, transfer_config=None, subscribers=None):\n ''' upload a directory using Aspera '''\n check_io_access(directory, os.R_OK)\n return self._queue_task(bucket, [FilePair(key, directory)], transfer_config,\n subscribers, enumAsperaDirection.SEND)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef download_directory(self, bucket, key, directory, transfer_config=None, subscribers=None):\n ''' download a directory using Aspera '''\n check_io_access(directory, os.W_OK)\n return self._queue_task(bucket, [FilePair(key, directory)], transfer_config,\n subscribers, enumAsperaDirection.RECEIVE)", "response": "download a directory using Aspera"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nuploads a file using Aspera", "response": "def upload(self, fileobj, bucket, key, transfer_config=None, subscribers=None):\n ''' upload a file using Aspera '''\n check_io_access(fileobj, os.R_OK, True)\n return self._queue_task(bucket, [FilePair(key, fileobj)], transfer_config,\n subscribers, enumAsperaDirection.SEND)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef download(self, bucket, key, fileobj, transfer_config=None, subscribers=None):\n ''' download a file using Aspera '''\n check_io_access(os.path.dirname(fileobj), os.W_OK)\n return self._queue_task(bucket, [FilePair(key, fileobj)], transfer_config,\n subscribers, enumAsperaDirection.RECEIVE)", "response": "download a file using 
Aspera"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nset the log details for the current aspera process", "response": "def set_log_details(aspera_log_path=None,\n sdk_log_level=logging.NOTSET):\n ''' set the aspera log path - used by the Ascp process\n set the internal aspera sdk activity - for debug purposes '''\n if aspera_log_path:\n check_io_access(aspera_log_path, os.W_OK)\n AsperaTransferCoordinator.set_log_location(aspera_log_path)\n\n if sdk_log_level != logging.NOTSET:\n if logger:\n if not len(logger.handlers):\n handler = logging.StreamHandler()\n _fmt = '%(asctime)s %(levelname)s %(message)s'\n handler.setFormatter(logging.Formatter(_fmt))\n logger.addHandler(handler)\n logger.setLevel(sdk_log_level)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nvalidate the user arguments", "response": "def _validate_args(self, args):\n ''' validate the user arguments '''\n assert(args.bucket)\n\n if args.subscribers:\n for _subscriber in args.subscribers:\n assert(isinstance(_subscriber, AsperaBaseSubscriber))\n\n if (args.transfer_config):\n assert(isinstance(args.transfer_config, AsperaConfig))\n\n # number of sessions requested can't be greater than max ascps\n if args.transfer_config.multi_session > self._config.ascp_max_concurrent:\n raise ValueError(\"Max sessions is %d\" % self._config.ascp_max_concurrent)\n\n for _pair in args.file_pair_list:\n if not _pair.key or not _pair.fileobj:\n raise ValueError(\"Invalid file pair\")"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _shutdown(self, cancel, cancel_msg, exc_type=CancelledError):\n ''' Internal shutdown used by 'shutdown' method above '''\n if cancel:\n # Cancel all in-flight transfers if requested, before waiting\n # for them to complete.\n self._coordinator_controller.cancel(cancel_msg, exc_type)\n try:\n # Wait until there are no more in-progress transfers. 
This is\n # wrapped in a try statement because this can be interrupted\n # with a KeyboardInterrupt that needs to be caught.\n self._coordinator_controller.wait()\n except KeyboardInterrupt:\n # If not errors were raised in the try block, the cancel should\n # have no coordinators it needs to run cancel on. If there was\n # an error raised in the try statement we want to cancel all of\n # the inflight transfers before shutting down to speed that\n # process up.\n self._coordinator_controller.cancel('KeyboardInterrupt()')\n raise\n finally:\n self._coordinator_controller.cleanup()", "response": "Internal shutdown used by shutdown method above."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef cleanup(self):\n ''' Stop backgroud thread and cleanup resources '''\n self._processing_stop = True\n self._wakeup_processing_thread()\n self._processing_stopped_event.wait(3)", "response": "Stop backgroud thread and cleanup resources"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef tracked_coordinator_count(self, count_ascps=False):\n ''' count the number of cooridnators currently being processed\n or count the number of ascps currently being used '''\n with self._lock:\n _count = 0\n if count_ascps:\n for _coordinator in self._tracked_transfer_coordinators:\n _count += _coordinator.session_count\n else:\n _count = len(self._tracked_transfer_coordinators)\n return _count", "response": "count the number of cooridnators currently being processed and the number of ascps currently being used"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _queue_task(self, args):\n ''' add transfer to waiting queue if possible\n then notify the background thread to process it '''\n if self._cancel_called:\n raise AsperaTransferQueueError(\"Cancel already called\")\n elif self._wait_called:\n raise AsperaTransferQueueError(\"Cant 
queue items during wait\")\n elif self.waiting_coordinator_count() >= self._config.max_submission_queue_size:\n raise AsperaTransferQueueError(\"Max queued items reached\")\n else:\n _coordinator = AsperaTransferCoordinator(args)\n _components = {'meta': TransferMeta(args, transfer_id=args.transfer_id),\n 'coordinator': _coordinator}\n\n _transfer_future = AsperaTransferFuture(**_components)\n _coordinator.add_subscribers(args.subscribers, future=_transfer_future)\n _coordinator.add_done_callback(self.remove_aspera_coordinator,\n transfer_coordinator=_coordinator)\n self.append_waiting_queue(_coordinator)\n\n if not self._processing_thread:\n self._processing_thread = threading.Thread(target=self._process_waiting_queue)\n self._processing_thread.daemon = True\n self._processing_thread.start()\n\n self._wakeup_processing_thread()\n\n return _transfer_future", "response": "add a transfer to waiting queue if possible then notify the background thread to process it"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nremoving an entry from the waiting queue, or removing an item from the processing queue and adding it to the processed queue, then notifying the background thread as it may be able to process waiting requests", "response": "def remove_aspera_coordinator(self, transfer_coordinator):\n ''' remove entry from the waiting queue\n or remove item from processing queue and add to processed queue\n notify background thread as it may be able to process waiting requests\n '''\n # usually called on processing completion - but can be called for a cancel\n if self._in_waiting_queue(transfer_coordinator):\n logger.info(\"Remove from waiting queue count=%d\" % self.waiting_coordinator_count())\n with self._lockw:\n self._waiting_transfer_coordinators.remove(transfer_coordinator)\n else:\n logger.info(\"Remove from processing queue count=%d\" % self.tracked_coordinator_count())\n try:\n self.remove_transfer_coordinator(transfer_coordinator)\n 
self.append_processed_queue(transfer_coordinator)\n except Exception:\n pass\n\n self._wakeup_processing_thread()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nappending item to waiting queue", "response": "def append_waiting_queue(self, transfer_coordinator):\n ''' append item to waiting queue '''\n logger.debug(\"Add to waiting queue count=%d\" % self.waiting_coordinator_count())\n with self._lockw:\n self._waiting_transfer_coordinators.append(transfer_coordinator)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef free_processed_queue(self):\n ''' call the Aspera sdk to freeup resources '''\n with self._lock:\n if len(self._processed_coordinators) > 0:\n for _coordinator in self._processed_coordinators:\n _coordinator.free_resources()\n self._processed_coordinators = []", "response": "call the Aspera sdk to freeup resources"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nhave either of the stop processing flags been set", "response": "def is_stop(self):\n ''' has either of the stop processing flags been set '''\n if len(self._processed_coordinators) > 0:\n self.free_processed_queue()\n return self._cancel_called or self._processing_stop"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _process_waiting_queue(self):\n ''' thread to processes the waiting queue\n fetches transfer spec\n then calls start transfer\n ensures that max ascp is not exceeded '''\n logger.info(\"Queue processing thread started\")\n while not self.is_stop():\n self._processing_event.wait(3)\n self._processing_event.clear()\n if self.is_stop():\n break\n\n while self.waiting_coordinator_count() > 0:\n if self.is_stop():\n break\n _used_slots = self.tracked_coordinator_count(True)\n _free_slots = self._config.ascp_max_concurrent - _used_slots\n if _free_slots <= 0:\n break\n\n with 
self._lockw:\n                # check are there enough free slots\n                _req_slots = self._waiting_transfer_coordinators[0].session_count\n                if _req_slots > _free_slots:\n                    break\n                _coordinator = self._waiting_transfer_coordinators.popleft()\n                self.add_transfer_coordinator(_coordinator)\n\n            if not _coordinator.set_transfer_spec():\n                self.remove_aspera_coordinator(_coordinator)\n            else:\n                logger.info(\"ASCP process queue - Max(%d) InUse(%d) Free(%d) New(%d)\" %\n                            (self._config.ascp_max_concurrent,\n                             _used_slots,\n                             _free_slots,\n                             _req_slots))\n                _coordinator.start_transfer()\n\n        logger.info(\"Queue processing thread stopped\")\n        self._processing_stopped_event.set()", "response": "thread to process the waiting queue\n fetches transfer spec\n then calls start transfer"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nremove all entries from waiting queue or cancel all in waiting queue", "response": "def clear_waiting_coordinators(self, cancel=False):\n ''' remove all entries from waiting queue or cancel all in waiting queue '''\n with self._lockw:\n if cancel:\n for _coordinator in self._waiting_transfer_coordinators:\n _coordinator.notify_cancelled(\"Clear Waiting Queue\", False)\n self._waiting_transfer_coordinators.clear()"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncancel all queue items - then attempt to cancel all in progress items - then clear the waiting coordinator list", "response": "def cancel(self, *args, **kwargs):\n \"\"\" Cancel all queue items - then attempt to cancel all in progress items \"\"\"\n self._cancel_called = True\n self.clear_waiting_coordinators(cancel=True)\n super(AsperaTransferCoordinatorController, self).cancel(*args, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef wait(self):\n self._wait_called = True\n while self.tracked_coordinator_count() > 0 or \\\n 
self.waiting_coordinator_count() > 0:\n time.sleep(1)\n super(AsperaTransferCoordinatorController, self).wait()\n self._wait_called = False", "response": "Wait until all in progress and queued items are processed."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef execute(self, input_data):\n ''' Execute the ViewZip worker '''\n\n # Just a small check to make sure we haven't been called on the wrong file type\n if (input_data['meta']['type_tag'] != 'zip'):\n return {'error': self.__class__.__name__+': called on '+input_data['meta']['type_tag']}\n\n view = {}\n view['payload_md5s'] = input_data['unzip']['payload_md5s']\n view['yara_sigs'] = input_data['yara_sigs']['matches'].keys()\n view.update(input_data['meta'])\n\n # Okay this view is going to also give the meta data about the payloads\n view['payload_meta'] = [self.workbench.work_request('meta', md5) for md5 in input_data['unzip']['payload_md5s']]\n return view", "response": "Execute the ViewZip worker"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef set_exception(self, exception):\n if not self.is_done():\n raise TransferNotDoneError(\n 'set_exception can only be called once the transfer is '\n 'complete.')\n self._coordinator.set_exception(exception, override=True)", "response": "Sets the exception on the future."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\npass the transfer spec to the Aspera sdk and start the transfer", "response": "def start_transfer(self):\n ''' pass the transfer spec to the Aspera sdk and start the transfer '''\n try:\n if not self.is_done():\n faspmanager2.startTransfer(self.get_transfer_id(),\n None,\n self.get_transfer_spec(),\n self)\n except Exception as ex:\n self.notify_exception(ex)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncheck whether a transfer is currently running", "response": "def is_running(self, 
is_stopped):\n ''' check whether a transfer is currently running '''\n if is_stopped and self.is_stopped():\n return False\n\n return faspmanager2.isRunning(self.get_transfer_id())"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef is_stopped(self, is_stopping=True):\n ''' check whether a transfer is stopped or is being stopped '''\n if is_stopping:\n return self._is_stopped or self._is_stopping\n return self._is_stopped", "response": "check whether a transfer is being stopped"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nmodifies an in-progress transfer, e.g. pause or resume", "response": "def _modify_transfer(self, option, value=0):\n ''' call the Aspera sdk to modify an in-progress transfer, e.g. pause/resume\n allowed values defined in enumAsperaModifyTransfer class '''\n _ret = False\n try:\n if self.is_running(True):\n logger.info(\"ModifyTransfer called %d = %d\" % (option, value))\n _ret = faspmanager2.modifyTransfer(self.get_transfer_id(), option, value)\n logger.info(\"ModifyTransfer returned %s\" % _ret)\n except Exception as ex:\n self.notify_exception(ex)\n\n return _ret"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef stop(self, free_resource=False):\n ''' send a stop transfer request to the Aspera sdk, can be done for:\n cancel - stop an in progress transfer\n free_resource - request to the Aspera sdk to free resources related to transfer_id\n '''\n if not self.is_stopped():\n self._is_stopping = True\n try:\n if free_resource or self.is_running(False):\n if not free_resource:\n logger.info(\"StopTransfer called - %s\" % self.get_transfer_id())\n self._is_stopped = faspmanager2.stopTransfer(self.get_transfer_id())\n if not free_resource:\n logger.info(\"StopTransfer returned %s - %s\" % (\n self._is_stopped, self.get_transfer_id()))\n except Exception as ex:\n self.notify_exception(ex)\n\n self._is_stopping = False\n\n return 
self.is_stopped(False)", "response": "send a stop transfer request to the Aspera sdk"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef free_resources(self):\n ''' call stop to free up resources '''\n if not self.is_stopped():\n logger.info(\"Freeing resources: %s\" % self.get_transfer_id())\n self.stop(True)", "response": "call stop to free up resources"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nsearches message to find and extract a named value", "response": "def extract_message_value(self, name):\n ''' search message to find and extract a named value '''\n name += \":\"\n assert(self._message)\n _start = self._message.find(name)\n if _start >= 0:\n _start += len(name) + 1\n _end = self._message.find(\"\\n\", _start)\n _value = self._message[_start:_end]\n return _value.strip()\n\n return None"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _set_status(self, status, ex=None):\n ''' set session status - eg failed, success --\n valid values contained in enumAsperaControllerStatus class '''\n self._status = status\n logger.debug(\"Set status(%s) for %s\" % (self._status, self.session_id))\n self.set_done()\n if ex:\n self._exception = ex", "response": "set status of the aspera controller"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nsetting the number of bytes transferred - return True if it has changed return False", "response": "def set_bytes_transferred(self, bytes_transferred):\n ''' set the number of bytes transferred - if it has changed return True '''\n _changed = False\n if bytes_transferred:\n _changed = (self._bytes_transferred != int(bytes_transferred))\n if _changed:\n self._bytes_transferred = int(bytes_transferred)\n logger.debug(\"(%s) BytesTransferred: %d\" % (\n self.session_id, self._bytes_transferred))\n if AsperaSession.PROGRESS_MSGS_SEND_ALL:\n return 
True\n return _changed"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nset the exception message and set the status to failed", "response": "def set_exception(self, exception):\n ''' set the exception message and set the status to failed '''\n logger.error(\"%s : %s\" % (exception.__class__.__name__, str(exception)))\n self._set_status(enumAsperaControllerStatus.FAILED, exception)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef wait(self):\n ''' wait for the done event to be set - no timeout'''\n self._done_event.wait(MAXINT)\n return self._status, self._exception", "response": "wait for the status and exception to be set"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncancel the TransferFuture :param msg: The message to attach to the cancellation :param exc_type: The type of exception to set for the cancellation", "response": "def cancel(self, msg='', exc_type=CancelledError):\n \"\"\"Cancels the TransferFuture\n :param msg: The message to attach to the cancellation\n :param exc_type: The type of exception to set for the cancellation\n \"\"\"\n _ret = False\n if not self.is_done():\n self.notify_cancelled(msg, True)\n _ret = True\n\n return _ret"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nupdating the session count based on the type of session", "response": "def _update_session_count(self, type=0, actutal_session_count=0):\n ''' update the session/ascp count\n 0 : set the number of sessions being used to 1 or number specified in transfer config\n -1: decrement the session count by one\n 1: set the session count to param value\n '''\n if type == 0: # init\n _count = 0\n if self._args.transfer_config:\n _count = self._args.transfer_config.multi_session\n self._session_count = _count if _count > 0 else 1\n elif type == -1: # decrement\n self._session_count -= 1\n elif type == 1: # set from number of 
actual session objects\n self._session_count = actutal_session_count"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nwaiting until the TransferFuture is done and returns the result. If the TransferFuture failed it will raise an exception associated to the exception associated to the exception. If the TransferFuture failed it will raise an exception associated to the exception.", "response": "def result(self, raise_exception=True):\n \"\"\"Waits until TransferFuture is done and returns the result\n\n If the TransferFuture succeeded, it will return the result. If the\n TransferFuture failed, it will raise the exception associated to the\n failure.\n \"\"\"\n _status = None\n _exception = None\n self._done_event.wait(MAXINT) # first wait for session global\n if self.is_failed(): # global exception set\n _exception = self._exception\n _status = enumAsperaControllerStatus.FAILED\n else:\n for _session in self._sessions.values():\n _status_tmp, _exception_tmp = _session.wait()\n if _exception_tmp and not _exception:\n _exception = _exception_tmp\n _status = _status_tmp\n\n # Once done waiting, raise an exception if present or return the final status\n if _exception and raise_exception:\n raise _exception\n\n return _status"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef notify_init(self):\n ''' run the queed callback for just the first session only '''\n _session_count = len(self._sessions)\n self._update_session_count(1, _session_count)\n if _session_count == 1:\n self._run_queued_callbacks()", "response": "run the queed callback for just the first session only"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nnotify all the active session that all other users have done", "response": "def notify_done(self, error=False, run_done_callbacks=True):\n ''' if error clear all sessions otherwise check to see if all other sessions are complete\n then run the 
done callbacks\n '''\n if error:\n for _session in self._sessions.values():\n _session.set_done()\n self._session_count = 0\n else:\n self._update_session_count(-1)\n for _session in self._sessions.values():\n if not _session.is_done():\n return\n\n if run_done_callbacks:\n self._run_done_callbacks()\n self._done_event.set()"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef notify_progress(self):\n ''' only call the progress callback if total has changed\n or PROGRESS_MSGS_SEND_ALL is set '''\n _total = 0\n for _session in self._sessions.values():\n _total += _session.bytes_transferred\n\n if AsperaSession.PROGRESS_MSGS_SEND_ALL:\n self._run_progress_callbacks(_total)\n else:\n # dont call progress callback unless total has changed\n if self._total_bytes_transferred != _total:\n self._total_bytes_transferred = _total\n self._run_progress_callbacks(_total)", "response": "only call the progress callback if total has changed\n or PROGRESS_MSGS_SEND_ALL is set"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nsets the exception message stop transfer if running and set the done event", "response": "def notify_exception(self, exception, run_done_callbacks=True):\n ''' set the exception message, stop transfer if running and set the done event '''\n logger.error(\"%s : %s\" % (exception.__class__.__name__, str(exception)))\n self._exception = exception\n if self.is_running(True):\n # wait for a short 5 seconds for it to finish\n for _cnt in range(0, 5):\n if not self._cancel():\n time.sleep(1)\n else:\n break\n\n self.notify_done(error=True, run_done_callbacks=run_done_callbacks)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef is_success(self):\n ''' check all sessions to see if they have completed successfully '''\n for _session in self._sessions.values():\n if not _session.is_success():\n return False\n return True", 
"response": "check all sessions to see if they have completed successfully"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nruns the function to set the transfer spec on error set associated exception", "response": "def set_transfer_spec(self):\n ''' run the function to set the transfer spec on error set associated exception '''\n _ret = False\n try:\n self._args.transfer_spec_func(self._args)\n _ret = True\n except Exception as ex:\n self.notify_exception(AsperaTransferSpecError(ex), False)\n return _ret"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _add_subscribers_for_type(self, callback_type, subscribers, callbacks, **kwargs):\n ''' add a done/queued/progress callback to the appropriate list '''\n for subscriber in subscribers:\n callback_name = 'on_' + callback_type\n if hasattr(subscriber, callback_name):\n _function = functools.partial(getattr(subscriber, callback_name), **kwargs)\n callbacks.append(_function)", "response": "add a done or queued progress callback to the appropriate list of callbacks for a specific type of event"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nadd a done callback to be invoked when transfer is complete", "response": "def add_done_callback(self, function, **kwargs):\n \"\"\"Add a done callback to be invoked when transfer is complete \"\"\"\n with self._callbacks_lock:\n _function = functools.partial(function, **kwargs)\n self._done_callbacks.append(_function)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef add_subscribers(self, subscribers, **kwargs):\n if subscribers:\n with self._callbacks_lock:\n self._add_subscribers_for_type(\n 'done', subscribers, self._done_callbacks, **kwargs)\n self._add_subscribers_for_type(\n 'queued', subscribers, self._queued_callbacks, **kwargs)\n 
self._add_subscribers_for_type(\n 'progress', subscribers, self._progress_callbacks, **kwargs)", "response": "Add callbacks to be invoked during transfer"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _run_queued_callbacks(self):\n ''' run the init/queued callback when the transfer is initiated on Aspera '''\n for callback in self._queued_callbacks:\n try:\n callback()\n except Exception as ex:\n logger.error(\"Exception: %s\" % str(ex))", "response": "run the init or queued callback when the transfer is initiated on Aspera"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _run_progress_callbacks(self, bytes_transferred):\n ''' pass the number of bytes processed to progress callbacks '''\n if bytes_transferred:\n for callback in self._progress_callbacks:\n try:\n callback(bytes_transferred=bytes_transferred)\n except Exception as ex:\n logger.error(\"Exception: %s\" % str(ex))", "response": "pass the number of bytes processed to progress callbacks"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nruns the callbacks and removes them from the internal list so they do not get run again.", "response": "def _run_done_callbacks(self):\n ''' Run the callbacks and remove the callbacks from the internal\n List so they do not get run again if done is notified more than once.\n '''\n with self._callbacks_lock:\n for callback in self._done_callbacks:\n try:\n callback()\n # We do not want a callback interrupting the process, especially\n # in the failure cleanups. 
So log and catch, the excpetion.\n except Exception as ex:\n logger.error(\"Exception: %s\" % str(ex))\n logger.error(\"Exception raised in %s.\" % callback, exc_info=True)\n\n self._done_callbacks = []"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngetting the first node with the given role", "response": "def get_node(self, role: str, default=None) -> BioCNode:\n \"\"\"\n Get the first node with role\n\n Args:\n role: role\n default: node returned instead of raising StopIteration\n\n Returns:\n the first node with role\n \"\"\"\n return next((node for node in self.nodes if node.role == role), default)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_sentence(self, offset: int) -> BioCSentence or None:\n for sentence in self.sentences:\n if sentence.offset == offset:\n return sentence\n return None", "response": "Gets the BioCSentence object with the specified offset"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets a passage with the specified offset", "response": "def get_passage(self, offset: int) -> BioCPassage or None:\n \"\"\"\n Gets passage\n\n Args:\n offset: passage offset\n\n Return:\n the passage with specified offset\n \"\"\"\n for passage in self.passages:\n if passage.offset == offset:\n return passage\n return None"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef of(cls, *passages: BioCPassage):\n if len(passages) <= 0:\n raise ValueError(\"There has to be at least one passage.\")\n c = BioCDocument()\n for passage in passages:\n if passage is None:\n raise ValueError('Passage is None')\n c.add_passage(passage)\n return c", "response": "Returns a collection of passages."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning a collection of documents.", "response": "def of(cls, *documents: 
BioCDocument):\n \"\"\"\n Returns a collection documents\n \"\"\"\n if len(documents) <= 0:\n raise ValueError(\"There has to be at least one document.\")\n c = BioCCollection()\n for document in documents:\n if document is None:\n raise ValueError('Document is None')\n c.add_document(document)\n return c"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef add_it(workbench, file_list, labels):\n md5s = []\n for filename in file_list:\n if filename != '.DS_Store':\n with open(filename, 'rb') as pe_file:\n base_name = os.path.basename(filename)\n md5 = workbench.store_sample(pe_file.read(), base_name, 'exe')\n workbench.add_node(md5, md5[:6], labels)\n md5s.append(md5)\n return md5s", "response": "Add the given file_list to workbench as samples also add them as nodes."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef jaccard_sims(feature_list):\n\n sim_info_list = []\n for feature_info in feature_list:\n md5_source = feature_info['md5']\n features_source = feature_info['features']\n for feature_info in feature_list:\n md5_target = feature_info['md5']\n features_target = feature_info['features']\n if md5_source == md5_target: \n continue\n sim = jaccard_sim(features_source, features_target)\n if sim > .5:\n sim_info_list.append({'source': md5_source, 'target': md5_target, 'sim': sim})\n\n return sim_info_list", "response": "Compute Jaccard similarities between all the observations in the feature list."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef jaccard_sim(features1, features2):\n set1 = set(features1)\n set2 = set(features2)\n try:\n return len(set1.intersection(set2))/float(max(len(set1), len(set2)))\n except ZeroDivisionError:\n return 0", "response": "Compute similarity between two sets using Jaccard similarity."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\npadding a text 
until the given length.", "response": "def pad_char(text: str, width: int, char: str = '\\n') -> str:\r\n \"\"\"Pads a text until length width.\"\"\"\r\n dis = width - len(text)\r\n if dis < 0:\r\n raise ValueError\r\n if dis > 0:\r\n text += char * dis\r\n return text"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning the text of the most recent occurrence of a single object in the document with its offset in the document.", "response": "def get_text(obj) -> Tuple[int, str]:\r\n \"\"\"\r\n Return text with its offset in the document\r\n\r\n Args:\r\n obj: BioCDocument, BioCPassage, or BioCSentence\r\n\r\n Returns:\r\n offset, text\r\n \"\"\"\r\n from bioc.bioc import BioCDocument, BioCPassage, BioCSentence\r\n\r\n if isinstance(obj, BioCSentence):\r\n return obj.offset, obj.text\r\n if isinstance(obj, BioCPassage):\r\n if obj.text:\r\n return obj.offset, obj.text\r\n text = ''\r\n for sentence in obj.sentences:\r\n try:\r\n text = pad_char(text, sentence.offset - obj.offset, ' ')\r\n assert sentence.text, f'BioC sentence has no text: {sentence.offset}'\r\n text += sentence.text\r\n except ValueError:\r\n raise ValueError(f'Overlapping sentences {sentence.offset}')\r\n return obj.offset, text\r\n if isinstance(obj, BioCDocument):\r\n text = ''\r\n for passage in obj.passages:\r\n try:\r\n text = pad_char(text, passage.offset)\r\n text += get_text(passage)[1]\r\n except ValueError:\r\n raise ValueError(f'{obj.id}: overlapping passages {passage.offset}')\r\n return 0, text\r\n raise TypeError(f'Object of type {obj.__class__.__name__} must be BioCCollection, '\r\n f'BioCDocument, BioCPassage, or BioCSentence')"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef shorten_text(text: str):\r\n if len(text) <= 40:\r\n text = text\r\n else:\r\n text = text[:17] + ' ... 
' + text[-17:]\r\n return repr(text)", "response": "Return a short repr of text if it is longer than 40"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef execute(self, input_data):\n ''' This worker computes meta data for any file type. '''\n raw_bytes = input_data['sample']['raw_bytes']\n self.meta['md5'] = hashlib.md5(raw_bytes).hexdigest()\n self.meta['tags'] = input_data['tags']['tags']\n self.meta['type_tag'] = input_data['sample']['type_tag']\n with magic.Magic() as mag:\n self.meta['file_type'] = mag.id_buffer(raw_bytes[:1024])\n with magic.Magic(flags=magic.MAGIC_MIME_TYPE) as mag:\n self.meta['mime_type'] = mag.id_buffer(raw_bytes[:1024])\n with magic.Magic(flags=magic.MAGIC_MIME_ENCODING) as mag:\n try:\n self.meta['encoding'] = mag.id_buffer(raw_bytes[:1024])\n except magic.MagicError:\n self.meta['encoding'] = 'unknown'\n self.meta['file_size'] = len(raw_bytes)\n self.meta['filename'] = input_data['sample']['filename']\n self.meta['import_time'] = input_data['sample']['import_time']\n self.meta['customer'] = input_data['sample']['customer']\n self.meta['length'] = input_data['sample']['length']\n\n return self.meta", "response": "This worker computes meta data for any file type."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef execute(self, input_data):\n ''' Execute the Strings worker '''\n raw_bytes = input_data['sample']['raw_bytes']\n strings = self.find_strings.findall(raw_bytes)\n return {'string_list': strings}", "response": "Execute the Strings worker"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef execute(self, input_data):\n ''' Execute the PEIndicators worker '''\n raw_bytes = input_data['sample']['raw_bytes']\n\n # Analyze the output of pefile for any anomalous conditions.\n # Have the PE File module process the file\n try:\n self.pefile_handle = 
pefile.PE(data=raw_bytes, fast_load=False)\n except (AttributeError, pefile.PEFormatError) as error:\n return {'error': str(error), 'indicator_list': [{'Error': 'PE module failed!'}]}\n\n indicators = []\n indicators += [{'description': warn, 'severity': 2, 'category': 'PE_WARN'} \n for warn in self.pefile_handle.get_warnings()]\n\n # Automatically invoke any method of this class that starts with 'check'\n check_methods = self._get_check_methods()\n for check_method in check_methods:\n hit_data = check_method()\n if hit_data:\n indicators.append(hit_data)\n\n return {'indicator_list': indicators}", "response": "Execute the PEIndicators worker"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nchecking for a checksum that doesn't match the generated checksum", "response": "def check_checksum_mismatch(self):\n ''' Checking for a checksum that doesn't match the generated checksum '''\n if self.pefile_handle.OPTIONAL_HEADER:\n if self.pefile_handle.OPTIONAL_HEADER.CheckSum != self.pefile_handle.generate_checksum():\n return {'description': 'Reported Checksum does not match actual checksum', \n 'severity': 2, 'category': 'MALFORMED'}\n return None"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncheck if a non-standard section name is present in the PE file.", "response": "def check_nonstandard_section_name(self):\n ''' Checking for a non-standard section name '''\n std_sections = ['.text', '.bss', '.rdata', '.data', '.rsrc', '.edata', '.idata',\n '.pdata', '.debug', '.reloc', '.stab', '.stabstr', '.tls',\n '.crt', '.gnu_deb', '.eh_fram', '.exptbl', '.rodata']\n for i in range(200):\n std_sections.append('/'+str(i))\n non_std_sections = []\n for section in self.pefile_handle.sections:\n name = convert_to_ascii_null_term(section.Name).lower()\n if (name not in std_sections):\n non_std_sections.append(name)\n if non_std_sections:\n return {'description': 'Section(s) with a non-standard name, tamper indication', 
\n 'severity': 3, 'category': 'MALFORMED', 'attributes': non_std_sections}\n\n return None"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nchecks if the reported image size matches the actual image size", "response": "def check_image_size_incorrect(self):\n ''' Checking if the reported image size matches the actual image size '''\n last_virtual_address = 0\n last_virtual_size = 0\n\n section_alignment = self.pefile_handle.OPTIONAL_HEADER.SectionAlignment\n total_image_size = self.pefile_handle.OPTIONAL_HEADER.SizeOfImage\n\n for section in self.pefile_handle.sections:\n if section.VirtualAddress > last_virtual_address:\n last_virtual_address = section.VirtualAddress\n last_virtual_size = section.Misc_VirtualSize\n\n # Just pad the size to be equal to the alignment and check for mismatch\n last_virtual_size += section_alignment - (last_virtual_size % section_alignment)\n if (last_virtual_address + last_virtual_size) != total_image_size:\n return {'description': 'Image size does not match reported size', \n 'severity': 3, 'category': 'MALFORMED'}\n\n return None"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef check_section_unaligned(self):\n ''' Checking if any of the sections are unaligned '''\n file_alignment = self.pefile_handle.OPTIONAL_HEADER.FileAlignment\n unaligned_sections = []\n for section in self.pefile_handle.sections:\n if section.PointerToRawData % file_alignment:\n unaligned_sections.append(section.Name)\n\n # If we had any unaligned sections, return them\n if unaligned_sections:\n return {'description': 'Unaligned section, tamper indication',\n 'severity': 3, 'category': 'MALFORMED', 'attributes': unaligned_sections}\n return None", "response": "Checks if any of the sections are unaligned"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef check_section_oversized(self):\n ''' 
Checking if any of the sections go past the total size of the image '''\n total_image_size = self.pefile_handle.OPTIONAL_HEADER.SizeOfImage\n\n for section in self.pefile_handle.sections:\n if section.PointerToRawData + section.SizeOfRawData > total_image_size:\n return {'description': 'Oversized section, storing addition data within the PE',\n 'severity': 3, 'category': 'MALFORMED', 'attributes': section.Name}\n\n return None", "response": "Checks if any of the sections go past the total size of the image"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nsearching for all symbols that match a pattern in the list of matches.", "response": "def _search_for_import_symbols(self, matches):\n ''' Just encapsulating a search that takes place fairly often '''\n\n # Sanity check\n if not hasattr(self.pefile_handle, 'DIRECTORY_ENTRY_IMPORT'):\n return []\n\n # Find symbols that match\n pattern = '|'.join(re.escape(match) for match in matches)\n exp = re.compile(pattern)\n symbol_list = []\n for module in self.pefile_handle.DIRECTORY_ENTRY_IMPORT:\n for symbol in module.imports:\n if (symbol.name):\n symbol_list.append(symbol.name.lower())\n symbol_matches = []\n for symbol in symbol_list:\n if exp.search(symbol):\n symbol_matches.append(symbol)\n return symbol_matches"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _search_for_export_symbols(self, matches):\n ''' Just encapsulating a search that takes place fairly often '''\n pattern = '|'.join(re.escape(match) for match in matches)\n exp = re.compile(pattern)\n symbol_list = []\n try:\n for symbol in self.pefile_handle.DIRECTORY_ENTRY_EXPORT.symbols:\n if symbol.name:\n symbol_list.append(symbol.name.lower())\n symbol_matches = []\n for symbol in symbol_list:\n if exp.search(symbol):\n symbol_matches.append(symbol)\n return symbol_matches\n except AttributeError:\n return []", "response": "This method uses a regular expression to 
search for export symbols."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn an iterator over the annotations of a given document or a collection of documents.", "response": "def annotations(obj: BioCCollection or BioCDocument or BioCPassage or BioCSentence,\r\n docid: str = None, level: int = PASSAGE) -> Generator[BioCAnnotation, None, None]:\r\n \"\"\"\r\n Get all annotations in document id.\r\n\r\n Args:\r\n obj: BioCCollection, BioCDocument, BioCPassage, or BioCSentence\r\n docid: document id. If None, all documents\r\n level: one of DOCUMENT, PASSAGE, SENTENCE\r\n\r\n Yields:\r\n one annotation\r\n \"\"\"\r\n if isinstance(obj, BioCCollection):\r\n for document in filter(lambda d: docid is None or docid == d.id, obj.documents):\r\n yield from annotations(document, level=level)\r\n elif isinstance(obj, BioCDocument):\r\n if level == DOCUMENT:\r\n yield from obj.annotations\r\n elif level in (PASSAGE, SENTENCE):\r\n for passage in obj.passages:\r\n yield from annotations(passage, level=level)\r\n else:\r\n raise ValueError('level must be DOCUMENT, PASSAGE, or SENTENCE')\r\n elif isinstance(obj, BioCPassage):\r\n if level == PASSAGE:\r\n yield from obj.annotations\r\n elif level == SENTENCE:\r\n for sentence in obj.sentences:\r\n yield from annotations(sentence, level=level)\r\n else:\r\n raise ValueError('level must be PASSAGE or SENTENCE')\r\n elif isinstance(obj, BioCSentence):\r\n if level == SENTENCE:\r\n yield from obj.annotations\r\n else:\r\n raise ValueError('level must be SENTENCE')\r\n else:\r\n raise TypeError(f'Object of type {obj.__class__.__name__} must be BioCCollection, '\r\n f'BioCDocument, BioCPassage, or BioCSentence')"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns an iterator over all sentences in a document id.", "response": "def sentences(obj: BioCCollection or BioCDocument or BioCPassage) \\\r\n -> Generator[BioCSentence, None, None]:\r\n \"\"\"\r\n Get 
all sentences in document id.\r\n\r\n Args:\r\n obj: BioCCollection, BioCDocument, or BioCPassage\r\n\r\n Yields:\r\n one sentence\r\n \"\"\"\r\n if isinstance(obj, BioCCollection):\r\n for document in obj.documents:\r\n yield from sentences(document)\r\n elif isinstance(obj, BioCDocument):\r\n for passage in obj.passages:\r\n yield from sentences(passage)\r\n elif isinstance(obj, BioCPassage):\r\n yield from obj.sentences\r\n else:\r\n raise TypeError(f'Object of type {obj.__class__.__name__} must be BioCCollection, '\r\n f'BioCDocument, BioCPassage, or BioCSentence')"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef execute(self, input_data):\n ''' This worker classifies PEFiles as Evil or AOK (TOY not a real classifier at this point)'''\n\n # In general you'd do something different with these two outputs\n # for this toy example will just smash them in a big string\n pefile_output = input_data['pe_features']\n indicators = input_data['pe_indicators']\n all_input = str(pefile_output) + str(indicators)\n\n flag = 'Reported Checksum does not match actual checksum'\n if flag in all_input:\n self.output['classification'] = 'Toy/Fake Classifier says Evil!'\n\n return self.output", "response": "This worker classifies PEFiles as Evil or AOK"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nserializes a collection as a BioC formatted stream to fp.", "response": "def dump(collection: BioCCollection, fp, pretty_print: bool = True):\r\n \"\"\"\r\n Serialize ``collection`` as a BioC formatted stream to ``fp``.\r\n\r\n Args:\r\n collection: the BioC collection\r\n fp: a ``.write()``-supporting file-like object\r\n pretty_print: enables formatted XML\r\n \"\"\"\r\n fp.write(dumps(collection, pretty_print))"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nserialize a BioC collection to a BioC formatted str.", "response": "def 
dumps(collection: BioCCollection, pretty_print: bool = True) -> str:\r\n \"\"\"\r\n Serialize ``collection`` to a BioC formatted ``str``.\r\n\r\n Args:\r\n collection: the BioC collection\r\n pretty_print: enables formatted XML\r\n\r\n Returns:\r\n a BioC formatted ``str``\r\n \"\"\"\r\n doc = etree.ElementTree(BioCXMLEncoder().encode(collection))\r\n s = etree.tostring(doc, pretty_print=pretty_print, encoding=collection.encoding,\r\n standalone=collection.standalone)\r\n return s.decode(collection.encoding)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef encode_location(location: BioCLocation):\r\n return etree.Element('location',\r\n {'offset': str(location.offset), 'length': str(location.length)})", "response": "Encode a single location."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nencode a single relation.", "response": "def encode_relation(relation: BioCRelation):\r\n \"\"\"Encode a single relation.\"\"\"\r\n tree = etree.Element('relation', {'id': relation.id})\r\n encode_infons(tree, relation.infons)\r\n for node in relation.nodes:\r\n tree.append(encode_node(node))\r\n return tree"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef encode_annotation(annotation):\r\n tree = etree.Element('annotation', {'id': annotation.id})\r\n encode_infons(tree, annotation.infons)\r\n for location in annotation.locations:\r\n tree.append(encode_location(location))\r\n etree.SubElement(tree, 'text').text = annotation.text\r\n return tree", "response": "Encode a single annotation."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nencodes a single sentence.", "response": "def encode_sentence(sentence):\r\n \"\"\"Encode a single sentence.\"\"\"\r\n tree = etree.Element('sentence')\r\n encode_infons(tree, sentence.infons)\r\n etree.SubElement(tree, 'offset').text = 
str(sentence.offset)\r\n if sentence.text:\r\n etree.SubElement(tree, 'text').text = sentence.text\r\n for ann in sentence.annotations:\r\n tree.append(encode_annotation(ann))\r\n for rel in sentence.relations:\r\n tree.append(encode_relation(rel))\r\n return tree"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nencode a single passage.", "response": "def encode_passage(passage):\r\n \"\"\"Encode a single passage.\"\"\"\r\n tree = etree.Element('passage')\r\n encode_infons(tree, passage.infons)\r\n etree.SubElement(tree, 'offset').text = str(passage.offset)\r\n if passage.text:\r\n etree.SubElement(tree, 'text').text = passage.text\r\n for sen in passage.sentences:\r\n tree.append(encode_sentence(sen))\r\n for ann in passage.annotations:\r\n tree.append(encode_annotation(ann))\r\n for rel in passage.relations:\r\n tree.append(encode_relation(rel))\r\n return tree"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef encode_document(document):\r\n tree = etree.Element('document')\r\n etree.SubElement(tree, 'id').text = document.id\r\n encode_infons(tree, document.infons)\r\n for passage in document.passages:\r\n tree.append(encode_passage(passage))\r\n for ann in document.annotations:\r\n tree.append(encode_annotation(ann))\r\n for rel in document.relations:\r\n tree.append(encode_relation(rel))\r\n return tree", "response": "Encode a single document."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nencodes a single collection.", "response": "def encode_collection(collection):\r\n \"\"\"Encode a single collection.\"\"\"\r\n tree = etree.Element('collection')\r\n etree.SubElement(tree, 'source').text = collection.source\r\n etree.SubElement(tree, 'date').text = collection.date\r\n etree.SubElement(tree, 'key').text = collection.key\r\n encode_infons(tree, collection.infons)\r\n for doc in collection.documents:\r\n 
tree.append(encode_document(doc))\r\n return tree"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef default(self, obj):\r\n if isinstance(obj, BioCDocument):\r\n return encode_document(obj)\r\n if isinstance(obj, BioCCollection):\r\n return encode_collection(obj)\r\n raise TypeError(f'Object of type {obj.__class__.__name__} is not BioC XML serializable')", "response": "Implement this method in a subclass such that it returns a tree for o."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef write_collection_info(self, collection: BioCCollection):\r\n elem = etree.Element('source')\r\n elem.text = collection.source\r\n self.__writer.send(elem)\r\n\r\n elem = etree.Element('date')\r\n elem.text = collection.date\r\n self.__writer.send(elem)\r\n\r\n elem = etree.Element('key')\r\n elem.text = collection.key\r\n self.__writer.send(elem)\r\n\r\n for k, v in collection.infons.items():\r\n elem = etree.Element('infon', {'key': str(k)})\r\n elem.text = str(v)\r\n self.__writer.send(elem)", "response": "Writes the collection information to the underlying XML file."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef write_document(self, document: BioCDocument):\r\n tree = self.encoder.encode(document)\r\n self.__writer.send(tree)", "response": "Encode and write a single document."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef run():\n \n # Grab server args\n args = client_helper.grab_server_args()\n\n # Start up workbench connection\n workbench = zerorpc.Client(timeout=300, heartbeat=60)\n workbench.connect('tcp://'+args['server']+':'+args['port'])\n\n all_set = workbench.generate_sample_set()\n results = workbench.set_work_request('view_customer', all_set)\n for customer in results:\n print customer['customer']", "response": "This client generates customer reports on 
all the samples in workbench."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nchecks if a file or folder exists and has a given IO access", "response": "def check_io_access(ioobj, access, is_file=False):\n ''' check if a file/folder exists and has a given IO access '''\n if ((is_file and not os.path.isfile(ioobj)) or\n (not is_file and not os.path.isdir(ioobj)) or\n not os.access(ioobj, access)):\n _objtype = \"File\" if is_file else \"Directory\"\n raise IOError(\"Error accessing %s: %s\" % (_objtype, ioobj))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef validate(collection, onerror: Callable[[str, List], None] = None):\n BioCValidator(onerror).validate(collection)", "response": "Validate BioC data structure."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef validate_doc(self, document: BioCDocument):\n annotations = []\n annotations.extend(document.annotations)\n annotations.extend(document.relations)\n for passage in document.passages:\n annotations.extend(passage.annotations)\n annotations.extend(passage.relations)\n for sentence in passage.sentences:\n annotations.extend(sentence.annotations)\n annotations.extend(sentence.relations)\n\n self.current_docid = document.id\n self.traceback.append(document)\n\n text = self.__get_doc_text(document)\n self.__validate_ann(document.annotations, text, 0)\n self.__validate_rel(annotations, document.relations, f'document {document.id}')\n\n for passage in document.passages:\n self.traceback.append(passage)\n\n text = self.__get_passage_text(passage)\n self.__validate_ann(passage.annotations, text, passage.offset)\n self.__validate_rel(annotations, passage.relations,\n f'document {document.id} --> passage {passage.offset}')\n\n for sentence in passage.sentences:\n self.traceback.append(sentence)\n self.__validate_ann(sentence.annotations, 
sentence.text, sentence.offset)\n self.__validate_rel(annotations, sentence.relations,\n f'document {document.id} --> sentence {sentence.offset}')\n self.traceback.pop()\n self.traceback.pop()\n self.traceback.pop()", "response": "Validate a single document."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nvalidate a single collection.", "response": "def validate(self, collection: BioCCollection):\n \"\"\"Validate a single collection.\"\"\"\n for document in collection.documents:\n self.validate_doc(document)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef execute(self, input_data):\n ''' Execute method '''\n\n # Spin up the rekall adapter\n adapter = RekallAdapter()\n adapter.set_plugin_name(self.plugin_name)\n \n # Create a temporary directory and run this plugin from there\n with self.goto_temp_directory():\n\n # Run the procdump plugin\n rekall_output = adapter.execute(input_data)\n \n # Process the output data\n for line in rekall_output:\n \n if line['type'] == 'm': # Meta\n self.output['meta'] = line['data']\n elif line['type'] == 't': # New Table Headers (column names)\n self.column_map = {item['cname']: item['name'] if 'name' in item else item['cname'] for item in line['data']}\n elif line['type'] == 'r': # Row\n \n # Add the row to our current table\n row = RekallAdapter.process_row(line['data'], self.column_map)\n self.output['tables'][self.current_table_name].append(row)\n \n # Scrape any extracted files\n print 'mem_procdump: Scraping dumped files...'\n for output_file in glob.glob('*'):\n \n # Store the output into workbench, put md5s in the 'dumped_files' field\n output_name = os.path.basename(output_file)\n output_name = output_name.replace('executable.', '')\n with open(output_file, 'rb') as dumped_file:\n raw_bytes = dumped_file.read()\n md5 = self.c.store_sample(raw_bytes, output_name, 'exe')\n \n # Remove some columns from meta data\n meta = 
self.c.work_request('meta', md5)['meta']\n del meta['customer']\n del meta['encoding']\n del meta['import_time']\n del meta['mime_type']\n self.output['tables'][self.current_table_name].append(meta)\n \n # All done\n return self.output", "response": "Execute the procdump plugin and store the output into the output field of the current table."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef convert_to_utf8(string):\n ''' Convert string to UTF8 '''\n if (isinstance(string, unicode)):\n return string.encode('utf-8')\n try:\n u = unicode(string, 'utf-8')\n except TypeError:\n return str(string)\n utf8 = u.encode('utf-8')\n return utf8", "response": "Convert string to UTF8"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef execute(self, input_data):\n\n ''' Process the input bytes with pefile '''\n raw_bytes = input_data['sample']['raw_bytes']\n\n # Have the PE File module process the file\n pefile_handle, error_str = self.open_using_pefile('unknown', raw_bytes)\n if not pefile_handle:\n return {'error': error_str, 'dense_features': [], 'sparse_features': []}\n\n # Now extract the various features using pefile\n dense_features, sparse_features = self.extract_features_using_pefile(pefile_handle)\n\n # Okay set my response\n return {'dense_features': dense_features, 'sparse_features': sparse_features, 'tags': input_data['tags']['tags']}", "response": "Process the input bytes with pefile and extract the features"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nopen the PE File using the Python PEFile module.", "response": "def open_using_pefile(input_name, input_bytes):\n ''' Open the PE File using the Python pefile module. 
'''\n try:\n pef = pefile.PE(data=input_bytes, fast_load=False)\n except (AttributeError, pefile.PEFormatError), error:\n print 'warning: pe_fail (with exception from pefile module) on file: %s' % input_name\n error_str = '(Exception):, %s' % (str(error))\n return None, error_str\n\n # Now test to see if the features are there/extractable if not return FAIL flag\n if pef.PE_TYPE is None or pef.OPTIONAL_HEADER is None or len(pef.OPTIONAL_HEADER.DATA_DIRECTORY) < 7:\n print 'warning: pe_fail on file: %s' % input_name\n error_str = 'warning: pe_fail on file: %s' % input_name\n return None, error_str\n\n # Success\n return pef, None"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef extract_features_using_pefile(self, pef):\n ''' Process the PE File using the Python pefile module. '''\n\n # Store all extracted features into feature lists\n extracted_dense = {}\n extracted_sparse = {}\n\n # Now slog through the info and extract the features\n feature_not_found_flag = -99\n feature_default_value = 0\n self._warnings = []\n\n # Set all the dense features and sparse features to 'feature not found'\n # value and then check later to see if it was found\n for feature in self._dense_feature_list:\n extracted_dense[feature] = feature_not_found_flag\n for feature in self._sparse_feature_list:\n extracted_sparse[feature] = feature_not_found_flag\n\n # Check to make sure all the section names are standard\n std_sections = ['.text', '.bss', '.rdata', '.data', '.rsrc', '.edata', '.idata',\n '.pdata', '.debug', '.reloc', '.stab', '.stabstr', '.tls',\n '.crt', '.gnu_deb', '.eh_fram', '.exptbl', '.rodata']\n for i in range(200):\n std_sections.append('/'+str(i))\n std_section_names = 1\n extracted_sparse['section_names'] = []\n for section in pef.sections:\n name = convert_to_ascii_null_term(section.Name).lower()\n extracted_sparse['section_names'].append(name)\n if name not in std_sections:\n std_section_names = 0\n\n 
extracted_dense['std_section_names'] = std_section_names\n extracted_dense['debug_size'] = pef.OPTIONAL_HEADER.DATA_DIRECTORY[6].Size\n extracted_dense['major_version'] = pef.OPTIONAL_HEADER.MajorImageVersion\n extracted_dense['minor_version'] = pef.OPTIONAL_HEADER.MinorImageVersion\n extracted_dense['iat_rva']\t = pef.OPTIONAL_HEADER.DATA_DIRECTORY[1].VirtualAddress\n extracted_dense['export_size'] = pef.OPTIONAL_HEADER.DATA_DIRECTORY[0].Size\n extracted_dense['check_sum'] = pef.OPTIONAL_HEADER.CheckSum\n try:\n extracted_dense['generated_check_sum'] = pef.generate_checksum()\n except ValueError:\n extracted_dense['generated_check_sum'] = 0\n if len(pef.sections) > 0:\n extracted_dense['virtual_address'] = pef.sections[0].VirtualAddress\n extracted_dense['virtual_size'] = pef.sections[0].Misc_VirtualSize\n extracted_dense['number_of_sections'] = pef.FILE_HEADER.NumberOfSections\n extracted_dense['compile_date'] = pef.FILE_HEADER.TimeDateStamp\n extracted_dense['number_of_rva_and_sizes'] = pef.OPTIONAL_HEADER.NumberOfRvaAndSizes\n extracted_dense['total_size_pe'] = len(pef.__data__)\n\n # Number of import and exports\n if hasattr(pef, 'DIRECTORY_ENTRY_IMPORT'):\n extracted_dense['number_of_imports'] = len(pef.DIRECTORY_ENTRY_IMPORT)\n num_imported_symbols = 0\n for module in pef.DIRECTORY_ENTRY_IMPORT:\n num_imported_symbols += len(module.imports)\n extracted_dense['number_of_import_symbols'] = num_imported_symbols\n\n if hasattr(pef, 'DIRECTORY_ENTRY_BOUND_IMPORT'):\n extracted_dense['number_of_bound_imports'] = len(pef.DIRECTORY_ENTRY_BOUND_IMPORT)\n num_imported_symbols = 0\n for module in pef.DIRECTORY_ENTRY_BOUND_IMPORT:\n num_imported_symbols += len(module.entries)\n extracted_dense['number_of_bound_import_symbols'] = num_imported_symbols\n\n if hasattr(pef, 'DIRECTORY_ENTRY_EXPORT'):\n try:\n extracted_dense['number_of_export_symbols'] = len(pef.DIRECTORY_ENTRY_EXPORT.symbols)\n symbol_set = set()\n for symbol in pef.DIRECTORY_ENTRY_EXPORT.symbols:\n 
symbol_info = 'unknown'\n if not symbol.name:\n symbol_info = 'ordinal=' + str(symbol.ordinal)\n else:\n symbol_info = 'name=' + symbol.name\n symbol_set.add(convert_to_utf8('%s' % (symbol_info)).lower())\n\n # Now convert set to list and add to features\n extracted_sparse['ExportedSymbols'] = list(symbol_set)\n\n except AttributeError:\n extracted_sparse['ExportedSymbols'] = ['AttributeError']\n\n # Specific Import info (Note this will be a sparse field woo hoo!)\n if hasattr(pef, 'DIRECTORY_ENTRY_IMPORT'):\n symbol_set = set()\n for module in pef.DIRECTORY_ENTRY_IMPORT:\n for symbol in module.imports:\n symbol_info = 'unknown'\n if symbol.import_by_ordinal is True:\n symbol_info = 'ordinal=' + str(symbol.ordinal)\n else:\n symbol_info = 'name=' + symbol.name\n # symbol_info['hint'] = symbol.hint\n if symbol.bound:\n symbol_info += ' bound=' + str(symbol.bound)\n\n symbol_set.add(convert_to_utf8('%s:%s' % (module.dll, symbol_info)).lower())\n\n # Now convert set to list and add to features\n extracted_sparse['imported_symbols'] = list(symbol_set)\n\n # Do we have a second section\n if len(pef.sections) >= 2:\n extracted_dense['virtual_size_2'] = pef.sections[1].Misc_VirtualSize\n\n extracted_dense['size_image'] = pef.OPTIONAL_HEADER.SizeOfImage\n extracted_dense['size_code'] = pef.OPTIONAL_HEADER.SizeOfCode\n extracted_dense['size_initdata'] = pef.OPTIONAL_HEADER.SizeOfInitializedData\n extracted_dense['size_uninit'] = pef.OPTIONAL_HEADER.SizeOfUninitializedData\n extracted_dense['pe_majorlink'] = pef.OPTIONAL_HEADER.MajorLinkerVersion\n extracted_dense['pe_minorlink'] = pef.OPTIONAL_HEADER.MinorLinkerVersion\n extracted_dense['pe_driver'] = 1 if pef.is_driver() else 0\n extracted_dense['pe_exe'] = 1 if pef.is_exe() else 0\n extracted_dense['pe_dll'] = 1 if pef.is_dll() else 0\n extracted_dense['pe_i386'] = 1\n if pef.FILE_HEADER.Machine != 0x014c:\n extracted_dense['pe_i386'] = 0\n extracted_dense['pe_char'] = pef.FILE_HEADER.Characteristics\n\n # Data directory 
features!!\n datadirs = { \n 0: 'IMAGE_DIRECTORY_ENTRY_EXPORT', 1: 'IMAGE_DIRECTORY_ENTRY_IMPORT', \n 2: 'IMAGE_DIRECTORY_ENTRY_RESOURCE', 5: 'IMAGE_DIRECTORY_ENTRY_BASERELOC', \n 12: 'IMAGE_DIRECTORY_ENTRY_IAT'}\n for idx, datadir in datadirs.items():\n datadir = pefile.DIRECTORY_ENTRY[idx]\n if len(pef.OPTIONAL_HEADER.DATA_DIRECTORY) <= idx:\n continue\n\n directory = pef.OPTIONAL_HEADER.DATA_DIRECTORY[idx]\n extracted_dense['datadir_%s_size' % datadir] = directory.Size\n\n # Section features\n section_flags = ['IMAGE_SCN_MEM_EXECUTE', 'IMAGE_SCN_CNT_CODE', 'IMAGE_SCN_MEM_WRITE', 'IMAGE_SCN_MEM_READ']\n rawexecsize = 0\n vaexecsize = 0\n for sec in pef.sections:\n if not sec:\n continue\n\n for char in section_flags:\n # does the section have one of our attribs?\n if hasattr(sec, char):\n rawexecsize += sec.SizeOfRawData\n vaexecsize += sec.Misc_VirtualSize\n break\n\n # Take out any weird characters in section names\n secname = convert_to_ascii_null_term(sec.Name).lower()\n secname = secname.replace('.', '')\n if secname in std_sections:\n extracted_dense['sec_entropy_%s' % secname] = sec.get_entropy()\n extracted_dense['sec_rawptr_%s' % secname] = sec.PointerToRawData\n extracted_dense['sec_rawsize_%s' % secname] = sec.SizeOfRawData\n extracted_dense['sec_vasize_%s' % secname] = sec.Misc_VirtualSize\n\n extracted_dense['sec_va_execsize'] = vaexecsize\n extracted_dense['sec_raw_execsize'] = rawexecsize\n\n # Imphash (implemented in pefile 1.2.10-139 or later)\n try:\n extracted_sparse['imp_hash'] = pef.get_imphash()\n except AttributeError:\n extracted_sparse['imp_hash'] = 'Not found: Install pefile 1.2.10-139 or later'\n\n # Register if there were any pe warnings\n warnings = pef.get_warnings()\n if warnings:\n extracted_dense['pe_warnings'] = 1\n extracted_sparse['pe_warning_strings'] = warnings\n else:\n extracted_dense['pe_warnings'] = 0\n\n # Issue a warning if the feature isn't found\n for feature in self._dense_feature_list:\n if extracted_dense[feature] 
== feature_not_found_flag:\n extracted_dense[feature] = feature_default_value\n if (self._verbose):\n print 'info: Feature: %s not found! Setting to %d' % (feature, feature_default_value)\n\n # Issue a warning if the feature isn't found\n for feature in self._sparse_feature_list:\n if extracted_sparse[feature] == feature_not_found_flag:\n extracted_sparse[feature] = [] # For sparse data probably best default\n if (self._verbose):\n print 'info: Feature: %s not found! Setting to %d' % (feature, feature_default_value)\n\n # Set the features for the class var\n self._dense_features = extracted_dense\n self._sparse_features = extracted_sparse\n\n return self.get_dense_features(), self.get_sparse_features()", "response": "Process the PE File using the Python PEFile module."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _parse_bro_header(self, logfile):\n\n # Skip until you find the #fields line\n _line = next(logfile)\n while (not _line.startswith('#fields')):\n _line = next(logfile)\n\n # Read in the field names\n _field_names = _line.strip().split(self.delimiter)[1:]\n\n # Read in the types\n _line = next(logfile)\n _field_types = _line.strip().split(self.delimiter)[1:]\n\n # Return the header info\n return _field_names, _field_types", "response": "This method tries to parse the Bro log header section."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _cast_dict(self, data_dict):\n for key, value in data_dict.iteritems():\n data_dict[key] = self._cast_value(value)\n\n # Fixme: resp_body_data can be very large so removing it for now\n if 'resp_body_data' in data_dict:\n del data_dict['resp_body_data']\n\n return data_dict", "response": "Internal method that makes sure any dictionary elements\n are properly cast into the correct types."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef run():\n \n # 
Grab server args\n args = client_helper.grab_server_args()\n\n # Start up workbench connection\n workbench = zerorpc.Client(timeout=300, heartbeat=60)\n workbench.connect('tcp://'+args['server']+':'+args['port'])\n\n # Test out zip data\n data_path = os.path.join(os.path.dirname(os.path.realpath(__file__)),'../data/zip')\n file_list = [os.path.join(data_path, child) for child in os.listdir(data_path)]\n for filename in file_list:\n with open(filename,'rb') as f:\n base_name = os.path.basename(filename)\n md5 = workbench.store_sample(f.read(), base_name, 'zip')\n results = workbench.work_request('view', md5)\n print 'Filename: %s ' % (base_name)\n pprint.pprint(results)\n\n # The unzip worker gives you a list of md5s back\n # Run meta on all the unzipped files.\n results = workbench.work_request('unzip', md5)\n print '\\n*** Filename: %s ***' % (base_name)\n for child_md5 in results['unzip']['payload_md5s']:\n pprint.pprint(workbench.work_request('meta', child_md5))", "response": "This client shows workbench extacting files from a zip file."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nexecutes the VTQuery worker", "response": "def execute(self, input_data):\n ''' Execute the VTQuery worker '''\n md5 = input_data['meta']['md5']\n response = requests.get('http://www.virustotal.com/vtapi/v2/file/report', \n params={'apikey':self.apikey,'resource':md5, 'allinfo':1})\n\n # Make sure we got a json blob back\n try:\n vt_output = response.json()\n except ValueError:\n return {'vt_error': 'VirusTotal Query Error, no valid response... 
past per min quota?'}\n \n # Just pull some of the fields\n output = {field:vt_output[field] for field in vt_output.keys() if field not in self.exclude}\n \n # Check for not-found\n not_found = False if output else True \n\n # Add in file_type\n output['file_type'] = input_data['meta']['file_type']\n \n # Toss back a not found\n if not_found:\n output['not_found'] = True\n return output\n\n # Organize the scans fields\n scan_results = collections.Counter()\n for scan in vt_output['scans'].values():\n if 'result' in scan:\n if scan['result']:\n scan_results[scan['result']] += 1\n output['scan_results'] = scan_results.most_common(5)\n return output"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ngrab the peid_userdb. txt file from local disk", "response": "def get_peid_db():\n ''' Grab the peid_userdb.txt file from local disk '''\n\n # Try to find the yara rules directory relative to the worker\n my_dir = os.path.dirname(os.path.realpath(__file__))\n db_path = os.path.join(my_dir, 'peid_userdb.txt')\n if not os.path.exists(db_path):\n raise RuntimeError('peid could not find peid_userdb.txt under: %s' % db_path)\n\n # Okay load up signature\n signatures = peutils.SignatureDatabase(data = open(db_path, 'rb').read())\n return signatures"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef peid_features(self, pefile_handle):\n ''' Get features from PEid signature database'''\n peid_match = self.peid_sigs.match(pefile_handle)\n return peid_match if peid_match else []", "response": "Get features from PEid signature database"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef run():\n\n global WORKBENCH\n \n # Grab grab_server_argsrver args\n args = client_helper.grab_server_args()\n\n # Start up workbench connection\n WORKBENCH = zerorpc.Client(timeout=300, heartbeat=60)\n 
WORKBENCH.connect('tcp://'+args['server']+':'+args['port'])\n\n data_path = os.path.join(\n os.path.dirname(os.path.realpath(__file__)), '../data/pcap')\n file_list = [os.path.join(data_path, child) for child in \\\n os.listdir(data_path)]\n results = []\n for filename in file_list:\n\n # Skip OS generated files\n if '.DS_Store' in filename: continue\n\n # Process the pcap file\n with open(filename,'rb') as f:\n md5 = WORKBENCH.store_sample(f.read(), filename, 'pcap')\n result = WORKBENCH.work_request('view_pcap', md5)\n result.update(WORKBENCH.work_request('meta', result['view_pcap']['md5']))\n result['filename'] = result['meta']['filename'].split('/')[-1]\n results.append(result)\n\n return results", "response": "This client pulls PCAP files for building report."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef show_files(md5):\n '''Renders template with `view` of the md5.'''\n if not WORKBENCH:\n return flask.redirect('/')\n\n md5_view = WORKBENCH.work_request('view', md5)\n return flask.render_template('templates/md5_view.html', md5_view=md5_view['view'], md5=md5)", "response": "Renders template with view of the md5."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nrenders template with stream_sample of the md5.", "response": "def show_md5_view(md5):\n '''Renders template with `stream_sample` of the md5.'''\n\n if not WORKBENCH:\n return flask.redirect('/')\n\n md5_view = WORKBENCH.stream_sample(md5)\n return flask.render_template('templates/md5_view.html', md5_view=list(md5_view), md5=md5)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef add_node(self, node_id, name, labels):\n node = self.graph_db.get_or_create_indexed_node('Node', 'node_id', node_id, {'node_id': node_id, 'name': name})\n try:\n node.add_labels(*labels)\n except NotImplementedError:\n pass", "response": "Add 
the node with name and labels."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef add_rel(self, source_node_id, target_node_id, rel):\n\n # Add the relationship\n n1_ref = self.graph_db.get_indexed_node('Node', 'node_id', source_node_id)\n n2_ref = self.graph_db.get_indexed_node('Node', 'node_id', target_node_id)\n\n # Sanity check\n if not n1_ref or not n2_ref:\n print 'Cannot add relationship between unfound nodes: %s --> %s' % (source_node_id, target_node_id)\n return\n path = neo4j.Path(n1_ref, rel, n2_ref)\n path.get_or_create(self.graph_db)", "response": "Add a relationship between nodes."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef add_node(self, node_id, name, labels):\n print 'NeoDB Stub getting called...'\n print '%s %s %s %s' % (self, node_id, name, labels)", "response": "Add a node to the database."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef add_rel(self, source_node_id, target_node_id, rel):\n print 'NeoDB Stub getting called...'\n print '%s %s %s %s' % (self, source_node_id, target_node_id, rel)", "response": "Add a relationship between two nodes."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef tail_file(filename):\n ''' Tail a file using pygtail. 
Note: this could probably be improved '''\n with make_temp_file() as offset_file:\n while True:\n for line in pygtail.Pygtail(filename, offset_file=offset_file):\n yield line\n time.sleep(1.0)", "response": "Yields the lines of a file in a generator."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nexecuting the ViewPE worker", "response": "def execute(self, input_data):\n ''' Execute the ViewPE worker '''\n\n # Just a small check to make sure we haven't been called on the wrong file type\n if (input_data['meta']['type_tag'] != 'exe'):\n return {'error': self.__class__.__name__+': called on '+input_data['meta']['type_tag']}\n\n view = {}\n view['indicators'] = list(set([item['category'] for item in input_data['pe_indicators']['indicator_list']]))\n view['peid_matches'] = input_data['pe_peid']['match_list']\n view['yara_sigs'] = input_data['yara_sigs']['matches'].keys()\n view['classification'] = input_data['pe_classifier']['classification']\n view['disass'] = self.safe_get(input_data, ['pe_disass', 'decode'])[:15]\n view.update(input_data['meta'])\n\n return view"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef safe_get(data, key_list):\n ''' Safely access dictionary keys when plugin may have failed '''\n for key in key_list:\n data = data.get(key, {})\n return data if data else 'plugin_failed'", "response": "Safely access dictionary keys when plugin may have failed"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nbeginning capturing PCAPs and sending them to workbench", "response": "def execute(self):\n ''' Begin capturing PCAPs and sending them to workbench '''\n\n # Create a temporary directory\n self.temp_dir = tempfile.mkdtemp()\n os.chdir(self.temp_dir)\n\n # Spin up the directory watcher\n DirWatcher(self.temp_dir, self.file_created)\n\n # Spin up tcpdump\n self.subprocess_manager(self.tcpdump_cmd)"} {"SOURCE": "codesearchnet", 
"instruction": "Implement a function in Python 3 to\nstore a file into workbench", "response": "def store_file(self, filename):\n ''' Store a file into workbench '''\n \n # Spin up workbench\n self.workbench = zerorpc.Client(timeout=300, heartbeat=60)\n self.workbench.connect(\"tcp://127.0.0.1:4242\") \n\n # Open the file and send it to workbench\n storage_name = \"streaming_pcap\" + str(self.pcap_index)\n print filename, storage_name\n with open(filename,'rb') as f:\n self.workbench.store_sample(f.read(), storage_name, 'pcap')\n self.pcap_index += 1\n\n # Close workbench client\n self.workbench.close()"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nencode a single document.", "response": "def encode_document(obj):\r\n \"\"\"Encode a single document.\"\"\"\r\n warnings.warn(\"deprecated. Please use bioc.biocxml.encoder.encode_document\", DeprecationWarning)\r\n return bioc.biocxml.encoder.encode_document(obj)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef encode_passage(obj):\r\n warnings.warn(\"deprecated. Please use bioc.biocxml.encoder.encode_passage\", DeprecationWarning)\r\n return bioc.biocxml.encoder.encode_passage(obj)", "response": "Encode a single passage."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nencoding a single sentence.", "response": "def encode_sentence(obj):\r\n \"\"\"Encode a single sentence.\"\"\"\r\n warnings.warn(\"deprecated. Please use bioc.biocxml.encoder.encode_sentence\", DeprecationWarning)\r\n return bioc.biocxml.encoder.encode_sentence(obj)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef encode_annotation(obj):\r\n warnings.warn(\"deprecated. 
Please use bioc.biocxml.encoder.encode_annotation\",\r\n DeprecationWarning)\r\n return bioc.biocxml.encoder.encode_annotation(obj)", "response": "Encode a single annotation."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef encode_relation(obj):\r\n warnings.warn(\"deprecated. Please use bioc.biocxml.encoder.encode_relation\", DeprecationWarning)\r\n return bioc.biocxml.encoder.encode_relation(obj)", "response": "Encode a single relation."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nparse the EProcess object we get from some rekall output", "response": "def parse_eprocess(self, eprocess_data):\n \"\"\"Parse the EProcess object we get from some rekall output\"\"\"\n Name = eprocess_data['_EPROCESS']['Cybox']['Name']\n PID = eprocess_data['_EPROCESS']['Cybox']['PID']\n PPID = eprocess_data['_EPROCESS']['Cybox']['Parent_PID']\n return {'Name': Name, 'PID': PID, 'PPID': PPID}"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef execute(self, input_data):\n ''' Execute the Unzip worker '''\n raw_bytes = input_data['sample']['raw_bytes']\n zipfile_output = zipfile.ZipFile(StringIO(raw_bytes))\n payload_md5s = []\n for name in zipfile_output.namelist():\n filename = os.path.basename(name)\n payload_md5s.append(self.workbench.store_sample(zipfile_output.read(name), name, 'unknown'))\n return {'payload_md5s': payload_md5s}", "response": "Execute the Unzip worker"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nexecutes the method in the base class.", "response": "def execute(self, input_data):\n ''' Execute method '''\n\n # Spin up the rekall adapter\n adapter = RekallAdapter()\n adapter.set_plugin_name(self.plugin_name)\n rekall_output = adapter.execute(input_data)\n\n # Process the output data\n for line in rekall_output:\n\n if line['type'] == 'm': # Meta\n 
self.output['meta'] = line['data']\n elif line['type'] == 's': # New Session (Table)\n if line['data']['name']:\n self.current_table_name = str(line['data']['name'][1].v())\n elif line['type'] == 't': # New Table Headers (column names)\n self.column_map = {item['cname']: item['name'] if 'name' in item else item['cname'] for item in line['data']}\n elif line['type'] == 'r': # Row\n\n # Add the row to our current table\n row = RekallAdapter.process_row(line['data'], self.column_map)\n self.output['tables'][self.current_table_name].append(row)\n\n # Process Base entries\n if 'Base' in row:\n base_info = self.parse_base(row)\n row.update(base_info)\n else:\n print 'Got unknown line %s: %s' % (line['type'], line['data'])\n\n # All done\n return self.output"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nhelping on Workbench CLI", "response": "def help_cli(self):\n \"\"\" Help on Workbench CLI \"\"\"\n help = '%sWelcome to Workbench CLI Help:%s' % (color.Yellow, color.Normal)\n help += '\\n\\t%s> help cli_basic %s for getting started help' % (color.Green, color.LightBlue)\n help += '\\n\\t%s> help workers %s for help on available workers' % (color.Green, color.LightBlue)\n help += '\\n\\t%s> help search %s for help on searching samples' % (color.Green, color.LightBlue)\n help += '\\n\\t%s> help dataframe %s for help on making dataframes' % (color.Green, color.LightBlue)\n help += '\\n\\t%s> help commands %s for help on workbench commands' % (color.Green, color.LightBlue)\n help += '\\n\\t%s> help topic %s where topic can be a help, command or worker' % (color.Green, color.LightBlue)\n help += '\\n\\n%sNote: cli commands are transformed into python calls' % (color.Yellow)\n help += '\\n\\t%s> help cli_basic --> help(\"cli_basic\")%s' % (color.Green, color.Normal)\n return help"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nhelp for Workbench CLI Basics", "response": "def help_cli_basic(self):\n \"\"\" Help for 
Workbench CLI Basics \"\"\"\n help = '%sWorkbench: Getting started...' % (color.Yellow)\n help += '\\n%sLoad in a sample:' % (color.Green)\n help += '\\n\\t%s> load_sample /path/to/file' % (color.LightBlue)\n help += '\\n\\n%sNotice the prompt now shows the md5 of the sample...'% (color.Yellow)\n help += '\\n%sRun workers on the sample:' % (color.Green)\n help += '\\n\\t%s> view' % (color.LightBlue)\n help += '\\n%sType the \\'help workers\\' or the first part of the worker ...' % (color.Green)\n help += '\\n\\t%s> help workers (lists all possible workers)' % (color.LightBlue)\n help += '\\n\\t%s> pe_ (will give you pe_classifier, pe_deep_sim, pe_features, pe_indicators, pe_peid)%s' % (color.LightBlue, color.Normal)\n return help"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef help_cli_search(self):\n help = '%sSearch: %s returns sample_sets, a sample_set is a set/list of md5s.' % (color.Yellow, color.Green)\n help += '\\n\\n\\t%sSearch for all samples in the database that are known bad pe files,' % (color.Green)\n help += '\\n\\t%sthis command returns the sample_set containing the matching items'% (color.Green)\n help += '\\n\\t%s> my_bad_exes = search([\\'bad\\', \\'exe\\'])' % (color.LightBlue)\n help += '\\n\\n\\t%sRun workers on this sample_set:' % (color.Green)\n help += '\\n\\t%s> pe_outputs = pe_features(my_bad_exes) %s' % (color.LightBlue, color.Normal)\n help += '\\n\\n\\t%sLoop on the generator (or make a DataFrame see >help dataframe)' % (color.Green)\n help += '\\n\\t%s> for output in pe_outputs: %s' % (color.LightBlue, color.Normal)\n help += '\\n\\t\\t%s print output %s' % (color.LightBlue, color.Normal)\n return help", "response": "Help for Workbench CLI Search"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef help_dataframe(self):\n help = '%sMaking a DataFrame: %s how to make a dataframe from raw data (pcap, memory, pe files)' % 
(color.Yellow, color.Green)\n help += '\\n\\t%sNote: for memory_image and pe_files see > help dataframe_memory or dataframe_pe' % (color.Green)\n help += '\\n\\n%sPCAP Example:' % (color.Green)\n help += '\\n\\t%s> load_sample /path/to/pcap/gold_xxx.pcap [\\'bad\\', \\'threatglass\\']' % (color.LightBlue)\n help += '\\n\\t%s> view # view is your friend use it often' % (color.LightBlue)\n help += '\\n\\n%sGrab the http_log from the pcap (also play around with other logs):' % (color.Green)\n help += '\\n\\t%s> http_log_md5 = view()[\\'view\\'][\\'bro_logs\\'][\\'http_log\\']' % (color.LightBlue)\n help += '\\n\\t%s> http_log_md5 (returns the md5 of the http_log)' % (color.LightBlue)\n help += '\\n\\n%sStream back the ^contents^ of the http_log:' % (color.Green)\n help += '\\n\\t%s> http_log = stream_sample(http_log_md5)' % (color.LightBlue) \n help += '\\n\\n%sPut the http_log into a dataframe:' % (color.Green)\n help += '\\n\\t%s> http_df = pd.DataFrame(http_log)' % (color.LightBlue)\n help += '\\n\\t%s> http_df.head()' % (color.LightBlue)\n help += '\\n\\t%s> http_df.groupby([\\'host\\',\\'id.resp_h\\',\\'resp_mime_types\\'])[[\\'response_body_len\\']].sum()' % (color.LightBlue)\n help += '\\n\\t%s> http_df.describe() %s' % (color.LightBlue, color.Normal)\n return help", "response": "Help for making a DataFrame with Workbench CLI"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nhelps for making a dataframe with Workbench CLI", "response": "def help_dataframe_memory(self):\n \"\"\" Help for making a DataFrame with Workbench CLI \"\"\"\n help = '%sMaking a DataFrame: %s how to make a dataframe from memory_forensics sample' % (color.Yellow, color.Green)\n help += '\\n\\n%sMemory Images Example:' % (color.Green)\n help += '\\n\\t%s> load_sample /path/to/pcap/exemplar4.vmem [\\'bad\\', \\'aptz13\\']' % (color.LightBlue)\n help += '\\n\\t%s> view # view is your friend use it often' % (color.LightBlue)\n help += '\\n\\t%s> <<< TODO :) 
>>> %s' % (color.LightBlue, color.Normal)\n return help"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nhelp for making a DataFrame from PE files", "response": "def help_dataframe_pe(self):\n \"\"\" Help for making a DataFrame with Workbench CLI \"\"\"\n help = '%sMaking a DataFrame: %s how to make a dataframe from pe files' % (color.Yellow, color.Green)\n help += '\\n\\n%sPE Files Example (loading a directory):' % (color.Green)\n help += '\\n\\t%s> load_sample /path/to/pe/bad [\\'bad\\', \\'case_69\\']' % (color.LightBlue)\n help += '\\n\\n\\t%sSearch for all samples in the database that are pe files,' % (color.Green)\n help += '\\n\\t%sthis command returns the sample_set containing the matching items'% (color.Green)\n help += '\\n\\t%s> my_exes = search([\\'exe\\'])' % (color.LightBlue)\n help += '\\n\\n\\t%sRun workers on this sample_set:' % (color.Green)\n help += '\\n\\t%s> pe_outputs = set_work_request(\\'pe_features\\', my_exes, [\\'md5\\', \\'dense_features.*\\', \\'tags\\'])' % (color.LightBlue)\n help += '\\n\\n\\t%sMake a DataFrame:' % (color.Green)\n help += '\\n\\t%s> pe_df = pd.DataFrame(pe_outputs) %s' % (color.LightBlue, color.Normal)\n help += '\\n\\t%s> pe_df.head() %s' % (color.LightBlue, color.Normal)\n help += '\\n\\t%s> pe_df = flatten_tags(pe_df) %s' % (color.LightBlue, color.Normal)\n help += '\\n\\t%s> pe_df.hist(\\'check_sum\\',\\'tags\\') %s' % (color.LightBlue, color.Normal)\n help += '\\n\\t%s> pe_df.bloxplot(\\'check_sum\\',\\'tags\\') %s' % (color.LightBlue, color.Normal)\n return help"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _all_help_methods(self):\n methods = {name:method for name, method in inspect.getmembers(self, predicate=inspect.isroutine) if not name.startswith('_')}\n return methods", "response": "Returns a list of all the Workbench commands"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where 
it\nexecutes the view_pcap_deep method", "response": "def execute(self, input_data):\n ''' ViewPcapDeep execute method '''\n\n # Copy info from input\n view = input_data['view_pcap']\n\n # Grab a couple of handles\n extracted_files = input_data['view_pcap']['extracted_files']\n\n # Dump a couple of fields\n del view['extracted_files'] \n\n # Grab additional info about the extracted files\n view['extracted_files'] = [self.workbench.work_request('meta_deep', md5, \n ['md5', 'sha256', 'entropy', 'ssdeep', 'file_size', 'file_type']) for md5 in extracted_files]\n\n return view"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef execute(self, input_data):\n ''' Execute Method '''\n\n # View on all the meta data files in the sample\n fields = ['filename', 'md5', 'length', 'customer', 'import_time', 'type_tag']\n view = {key:input_data['meta'][key] for key in fields}\n return view", "response": "Execute method that returns the sample s meta data files"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nexecuting the action for the info object", "response": "def execute(self, input_data):\n \"\"\" Info objects all have a type_tag of ('help','worker','command', or 'other') \"\"\"\n input_data = input_data['info']\n type_tag = input_data['type_tag']\n if type_tag == 'help':\n return {'help': input_data['help'], 'type_tag': input_data['type_tag']}\n elif type_tag == 'worker':\n out_keys = ['name', 'dependencies', 'docstring', 'type_tag'] \n return {key: value for key, value in input_data.iteritems() if key in out_keys}\n elif type_tag == 'command':\n out_keys = ['command', 'sig', 'docstring', 'type_tag']\n return {key: value for key, value in input_data.iteritems() if key in out_keys}\n elif type_tag == 'other':\n return input_data\n else:\n print 'Got a malformed info object %s' % input_data\n return input_data"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation 
for the following Python 3 code\ndef parse_collection(obj: dict) -> BioCCollection:\r\n collection = BioCCollection()\r\n collection.source = obj['source']\r\n collection.date = obj['date']\r\n collection.key = obj['key']\r\n collection.infons = obj['infons']\r\n for doc in obj['documents']:\r\n collection.add_document(parse_doc(doc))\r\n return collection", "response": "Deserialize a dict obj to a BioCCollection object"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef parse_sentence(obj: dict) -> BioCSentence:\r\n sentence = BioCSentence()\r\n sentence.offset = obj['offset']\r\n sentence.infons = obj['infons']\r\n sentence.text = obj['text']\r\n for annotation in obj['annotations']:\r\n sentence.add_annotation(parse_annotation(annotation))\r\n for relation in obj['relations']:\r\n sentence.add_relation(parse_relation(relation))\r\n return sentence", "response": "Deserialize a dict obj to a BioCSentence object"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef parse_doc(obj: dict) -> BioCDocument:\r\n doc = BioCDocument()\r\n doc.id = obj['id']\r\n doc.infons = obj['infons']\r\n for passage in obj['passages']:\r\n doc.add_passage(parse_passage(passage))\r\n for annotation in obj['annotations']:\r\n doc.add_annotation(parse_annotation(annotation))\r\n for relation in obj['relations']:\r\n doc.add_relation(parse_relation(relation))\r\n return doc", "response": "Deserialize a dict obj to a BioCDocument object"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef loads(s: str, **kwargs) -> BioCCollection:\r\n obj = json.loads(s, **kwargs)\r\n return parse_collection(obj)", "response": "Deserialize a string containing a JSON document into a collection of BioC objects."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nokay this worker is going build graphs 
from PCAP Bro output logs", "response": "def execute(self, input_data):\n ''' Okay this worker is going build graphs from PCAP Bro output logs '''\n\n # Grab the Bro log handles from the input\n bro_logs = input_data['pcap_bro']\n\n # Weird log\n if 'weird_log' in bro_logs:\n stream = self.workbench.stream_sample(bro_logs['weird_log'])\n self.weird_log_graph(stream)\n\n # HTTP log\n gsleep()\n stream = self.workbench.stream_sample(bro_logs['http_log'])\n self.http_log_graph(stream)\n\n # Files log\n gsleep()\n stream = self.workbench.stream_sample(bro_logs['files_log'])\n self.files_log_graph(stream)\n\n return {'output':'go to http://localhost:7474/browser and execute this query \"match (s:origin), (t:file), p=allShortestPaths((s)--(t)) return p\"'}"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef http_log_graph(self, stream):\n ''' Build up a graph (nodes and edges from a Bro http.log) '''\n print 'Entering http_log_graph...'\n for row in list(stream):\n\n # Skip '-' hosts\n if (row['id.orig_h'] == '-'):\n continue\n\n # Add the originating host\n self.add_node(row['id.orig_h'], row['id.orig_h'], ['host', 'origin'])\n\n # Add the response host and reponse ip\n self.add_node(row['host'], row['host'], ['host'])\n self.add_node(row['id.resp_h'], row['id.resp_h'], ['host'])\n\n # Add the http request relationships\n self.add_rel(row['id.orig_h'], row['host'], 'http_request')\n self.add_rel(row['host'], row['id.resp_h'], 'A')", "response": "Build a graph of nodes and edges from a Bro http. 
log file."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nbuild a graph of the files in the files log file.", "response": "def files_log_graph(self, stream):\n ''' Build up a graph (nodes and edges from a Bro dns.log) '''\n for row in list(stream):\n\n # dataframes['files_log'][['md5','mime_type','missing_bytes','rx_hosts','source','tx_hosts']]\n \n # If the mime-type is interesting add the uri and the host->uri->host relationships\n if row['mime_type'] not in self.exclude_mime_types:\n\n # Check for weird conditions\n if (row['total_bytes'] == '-'):\n continue\n if ('-' in row['md5']):\n continue\n\n # Check for missing bytes and small file\n if row['missing_bytes']:\n labels = ['missing', 'file']\n elif row['total_bytes'] < 50*1024:\n labels = ['small','file']\n else:\n labels = ['file']\n\n # Make the file node name kewl\n name = '%6s %s %.0f-KB' % (row['md5'][:6], row['mime_type'], row['total_bytes']/1024.0)\n if row['missing_bytes']:\n name += '*'\n name = name.replace('application/','')\n \n # Add the file node\n self.add_node(row['md5'], name, labels)\n\n # Add the tx_host\n self.add_node(row['tx_hosts'], row['tx_hosts'], ['host'])\n\n # Add the file->tx_host relationship\n self.add_rel(row['tx_hosts'], row['md5'], 'file')"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef register_callbacks(self, on_create, on_modify, on_delete):\n self.on_create = on_create\n self.on_modify = on_modify\n self.on_delete = on_delete", "response": "Register callbacks for file creation modification and deletion."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _start_monitoring(self):\n \n # Grab all the timestamp info\n before = self._file_timestamp_info(self.path)\n\n while True:\n gevent.sleep(1)\n after = self._file_timestamp_info(self.path)\n\n added = [fname for fname in after.keys() if fname not in before.keys()]\n removed = [fname 
for fname in before.keys() if fname not in after.keys()]\n modified = []\n\n for fname in before.keys():\n if fname not in removed:\n if os.path.getmtime(fname) != before.get(fname):\n modified.append(fname)\n\n if added: \n self.on_create(added)\n if removed: \n self.on_delete(removed)\n if modified: \n self.on_modify(modified)\n \n before = after", "response": "Internal method that monitors the directory for changes"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ngrab all the timestamps for the files in the directory", "response": "def _file_timestamp_info(self, path):\n \"\"\" Grab all the timestamps for the files in the directory \"\"\"\n files = [os.path.join(path, fname) for fname in os.listdir(path) if '.py' in fname]\n return dict ([(fname, os.path.getmtime(fname)) for fname in files])"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_rules_from_disk(self):\n ''' Recursively traverse the yara/rules directory for rules '''\n\n # Try to find the yara rules directory relative to the worker\n my_dir = os.path.dirname(os.path.realpath(__file__))\n yara_rule_path = os.path.join(my_dir, 'yara/rules')\n if not os.path.exists(yara_rule_path):\n raise RuntimeError('yara could not find yara rules directory under: %s' % my_dir)\n\n # Okay load in all the rules under the yara rule path\n self.rules = yara.load_rules(rules_rootpath=yara_rule_path)\n\n # Save rules to Workbench\n self.save_rules_to_workbench(self.rules)\n\n return self.rules", "response": "Recursively load the rules from the yara rules directory and save them to Workbench"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nexecutes the yara match data worker", "response": "def execute(self, input_data):\n ''' yara worker execute method '''\n raw_bytes = input_data['sample']['raw_bytes']\n matches = self.rules.match_data(raw_bytes)\n\n # The matches data is organized in the following 
way\n # {'filename1': [match_list], 'filename2': [match_list]}\n # match_list = list of match\n # match = {'meta':{'description':'blah}, tags=[], matches:True,\n # strings:[string_list]}\n # string = {'flags':blah, 'identifier':'$', 'data': FindWindow, 'offset'}\n # \n # So we're going to flatten a bit (shrug)\n # {filename_match_meta_description: string_list}\n flat_data = collections.defaultdict(list)\n for filename, match_list in matches.iteritems():\n for match in match_list:\n if 'description' in match['meta']:\n new_tag = filename+'_'+match['meta']['description']\n else:\n new_tag = filename+'_'+match['rule']\n for match in match['strings']:\n flat_data[new_tag].append(match['data'])\n # Remove duplicates\n flat_data[new_tag] = list(set(flat_data[new_tag]))\n\n return {'matches': flat_data}"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nyield chunk_size chunks from data.", "response": "def chunks(data, chunk_size):\n \"\"\" Yield chunk_size chunks from data.\"\"\"\n for i in xrange(0, len(data), chunk_size):\n yield data[i:i+chunk_size]"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_rules_from_disk():\n ''' Recursively traverse the yara/rules directory for rules '''\n\n # Try to find the yara rules directory relative to the worker\n my_dir = os.path.dirname(os.path.realpath(__file__))\n yara_rule_path = os.path.join(my_dir, 'yara/rules')\n if not os.path.exists(yara_rule_path):\n raise RuntimeError('yara could not find yara rules directory under: %s' % my_dir)\n\n # Okay load in all the rules under the yara rule path\n rules = yara.load_rules(rules_rootpath=yara_rule_path, fast_match=True)\n\n return rules", "response": "Recursively traverse the yara rules directory for rules"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef setup_pcap_inputs(self, input_data):\n ''' Write the PCAPs to disk for Bro 
to process and return the pcap filenames '''\n\n # Setup the pcap in the input data for processing by Bro. The input\n # may be either an individual sample or a sample set.\n file_list = []\n if 'sample' in input_data:\n raw_bytes = input_data['sample']['raw_bytes']\n filename = os.path.basename(input_data['sample']['filename'])\n file_list.append({'filename': filename, 'bytes': raw_bytes})\n else:\n for md5 in input_data['sample_set']['md5_list']:\n sample = self.workbench.get_sample(md5)['sample']\n raw_bytes = sample['raw_bytes']\n filename = os.path.basename(sample['filename'])\n file_list.append({'filename': filename, 'bytes': raw_bytes})\n\n # Write the pcaps to disk and keep the filenames for Bro to process\n for file_info in file_list:\n with open(file_info['filename'], 'wb') as pcap_file:\n pcap_file.write(file_info['bytes'])\n\n # Return filenames\n return [file_info['filename'] for file_info in file_list]", "response": "Setup the PCAPs for Bro to process and return the pcap filenames"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef execute(self, input_data):\n ''' Execute '''\n\n # Get the bro script path (workers/bro/__load__.bro)\n script_path = self.bro_script_dir\n\n # Create a temporary directory\n with self.goto_temp_directory() as temp_dir:\n\n # Get the pcap inputs (filenames)\n print 'pcap_bro: Setting up PCAP inputs...'\n filenames = self.setup_pcap_inputs(input_data)\n command_line = ['bro']\n for filename in filenames:\n command_line += ['-C', '-r', filename]\n if script_path:\n command_line.append(script_path)\n\n # Execute command line as a subprocess\n print 'pcap_bro: Executing subprocess...'\n self.subprocess_manager(command_line)\n\n # Scrape up all the output log files\n gsleep()\n print 'pcap_bro: Scraping output logs...'\n my_output = {}\n for output_log in glob.glob('*.log'):\n\n # Store the output into workbench, put the name:md5 in my output\n output_name = 
os.path.splitext(output_log)[0] + '_log'\n with open(output_log, 'rb') as bro_file:\n raw_bytes = bro_file.read()\n my_output[output_name] = self.workbench.store_sample(raw_bytes, output_name, 'bro')\n\n # Scrape any extracted files\n gsleep()\n print 'pcap_bro: Scraping extracted files...'\n my_output['extracted_files'] = []\n for output_file in glob.glob('extract_files/*'):\n\n # Store the output into workbench, put md5s in the 'extracted_files' field\n output_name = os.path.basename(output_file)\n with open(output_file, 'rb') as extracted_file:\n if output_name.endswith('exe'):\n type_tag = 'exe'\n else:\n type_tag = output_name[-3:]\n raw_bytes = extracted_file.read()\n my_output['extracted_files'].append(self.workbench.store_sample(raw_bytes, output_name, type_tag))\n\n # Construct back-pointers to the PCAPs\n if 'sample' in input_data:\n my_output['pcaps'] = [input_data['sample']['md5']]\n else:\n my_output['pcaps'] = input_data['sample_set']['md5_list']\n\n # Return my output\n return my_output", "response": "Execute the command line and return the result of the command line."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngetting the console output of a virtual machine.", "response": "def getConsole(rh):\n \"\"\"\n Get the virtual machine's console output.\n\n Input:\n Request Handle with the following properties:\n function - 'CMDVM'\n subfunction - 'CMD'\n userid - userid of the virtual machine\n\n Output:\n Request Handle updated with the results.\n Return code - 0: ok, non-zero: error\n \"\"\"\n\n rh.printSysLog(\"Enter getVM.getConsole\")\n\n # Transfer the console to this virtual machine.\n parms = [\"-T\", rh.userid]\n results = invokeSMCLI(rh, \"Image_Console_Get\", parms)\n\n if results['overallRC'] != 0:\n if (results['overallRC'] == 8 and results['rc'] == 8 and\n results['rs'] == 8):\n # Give a more specific message. 
Userid is either\n # not logged on or not spooling their console.\n msg = msgs.msg['0409'][1] % (modId, rh.userid)\n else:\n msg = results['response']\n rh.updateResults(results) # Use results from invokeSMCLI\n rh.printLn(\"ES\", msg)\n rh.printSysLog(\"Exit getVM.parseCmdLine, rc: \" +\n str(rh.results['overallRC']))\n return rh.results['overallRC']\n\n # Check whether the reader is online\n with open('/sys/bus/ccw/drivers/vmur/0.0.000c/online', 'r') as myfile:\n out = myfile.read().replace('\\n', '')\n myfile.close()\n\n # Nope, offline, error out and exit\n if int(out) != 1:\n msg = msgs.msg['0411'][1]\n rh.printLn(\"ES\", msg)\n rh.updateResults(msgs.msg['0411'][0])\n rh.printSysLog(\"Exit getVM.parseCmdLine, rc: \" +\n str(rh.results['overallRC']))\n return rh.results['overallRC']\n\n # We should set class to *, otherwise we will get errors like:\n # vmur: Reader device class does not match spool file class\n cmd = [\"sudo\", \"/sbin/vmcp\", \"spool reader class *\"]\n strCmd = ' '.join(cmd)\n rh.printSysLog(\"Invoking: \" + strCmd)\n try:\n subprocess.check_output(\n cmd,\n close_fds=True,\n stderr=subprocess.STDOUT)\n except subprocess.CalledProcessError as e:\n # If we couldn't change the class, that's not fatal\n # But we want to warn about possibly incomplete\n # results\n msg = msgs.msg['0407'][1] % (modId, strCmd, e.output)\n rh.printLn(\"WS\", msg)\n except Exception as e:\n # All other exceptions.\n # If we couldn't change the class, that's not fatal\n # But we want to warn about possibly incomplete\n # results\n rh.printLn(\"ES\", msgs.msg['0422'][1] % (modId, strCmd,\n type(e).__name__, str(e)))\n rh.printLn(\"ES\", msgs.msg['0423'][1] % modId, strCmd,\n type(e).__name__, str(e))\n\n # List the spool files in the reader\n cmd = [\"sudo\", \"/usr/sbin/vmur\", \"list\"]\n strCmd = ' '.join(cmd)\n rh.printSysLog(\"Invoking: \" + strCmd)\n try:\n files = subprocess.check_output(\n cmd,\n close_fds=True,\n stderr=subprocess.STDOUT)\n except 
subprocess.CalledProcessError as e:\n # Uh oh, vmur list command failed for some reason\n msg = msgs.msg['0408'][1] % (modId, rh.userid,\n strCmd, e.output)\n rh.printLn(\"ES\", msg)\n rh.updateResults(msgs.msg['0408'][0])\n rh.printSysLog(\"Exit getVM.parseCmdLine, rc: \" +\n str(rh.results['overallRC']))\n return rh.results['overallRC']\n except Exception as e:\n # All other exceptions.\n rh.printLn(\"ES\", msgs.msg['0421'][1] % (modId, strCmd,\n type(e).__name__, str(e)))\n rh.updateResults(msgs.msg['0421'][0])\n rh.printSysLog(\"Exit getVM.parseCmdLine, rc: \" +\n str(rh.results['overallRC']))\n return rh.results['overallRC']\n\n # Now for each line that contains our user and is a\n # class T console file, add the spool id to our list\n spoolFiles = files.split('\\n')\n outstr = \"\"\n for myfile in spoolFiles:\n if (myfile != \"\" and\n myfile.split()[0] == rh.userid and\n myfile.split()[2] == \"T\" and\n myfile.split()[3] == \"CON\"):\n\n fileId = myfile.split()[1]\n outstr += fileId + \" \"\n\n # No files in our list\n if outstr == \"\":\n msg = msgs.msg['0410'][1] % (modId, rh.userid)\n rh.printLn(\"ES\", msg)\n rh.updateResults(msgs.msg['0410'][0])\n rh.printSysLog(\"Exit getVM.parseCmdLine, rc: \" +\n str(rh.results['overallRC']))\n return rh.results['overallRC']\n\n # Output the list\n rh.printLn(\"N\", \"List of spool files containing \"\n \"console logs from %s: %s\" % (rh.userid, outstr))\n\n rh.results['overallRC'] = 0\n rh.printSysLog(\"Exit getVM.getConsole, rc: \" +\n str(rh.results['overallRC']))\n return rh.results['overallRC']"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef getDirectory(rh):\n rh.printSysLog(\"Enter getVM.getDirectory\")\n\n parms = [\"-T\", rh.userid]\n results = invokeSMCLI(rh, \"Image_Query_DM\", parms)\n if results['overallRC'] == 0:\n results['response'] = re.sub('\\*DVHOPT.*', '', results['response'])\n rh.printLn(\"N\", results['response'])\n else:\n # 
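The loop that follows scans the `vmur list` output for class-T `CON` spool files owned by the target userid. As a stand-alone sketch, with the field positions assumed from the original's `split()` indices (owner, spool id, class, type):

```python
def console_spool_ids(vmur_listing, userid):
    """Collect spool ids of class-T console files for one user.

    Sketch of the filtering loop in getConsole above: each line of
    'vmur list' output is whitespace-separated, with the owning
    userid first, the spool id second, the class third and the file
    type fourth.
    """
    ids = []
    for line in vmur_listing.split('\n'):
        fields = line.split()
        if (len(fields) >= 4 and fields[0] == userid and
                fields[2] == 'T' and fields[3] == 'CON'):
            ids.append(fields[1])
    return ids
```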
SMAPI API failed.\n rh.printLn(\"ES\", results['response'])\n rh.updateResults(results) # Use results from invokeSMCLI\n\n rh.printSysLog(\"Exit getVM.getDirectory, rc: \" +\n str(rh.results['overallRC']))\n return rh.results['overallRC']", "response": "Returns the virtual machine s directory statements."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef getStatus(rh):\n\n rh.printSysLog(\"Enter getVM.getStatus, userid: \" + rh.userid)\n\n results = isLoggedOn(rh, rh.userid)\n if results['rc'] != 0:\n # Uhoh, can't determine if guest is logged on or not\n rh.updateResults(results)\n rh.printSysLog(\"Exit getVM.getStatus, rc: \" +\n str(rh.results['overallRC']))\n return rh.results['overallRC']\n\n if results['rs'] == 1:\n # Guest is logged off, everything is 0\n powerStr = \"Power state: off\"\n memStr = \"Total Memory: 0M\"\n usedMemStr = \"Used Memory: 0M\"\n procStr = \"Processors: 0\"\n timeStr = \"CPU Used Time: 0 sec\"\n\n else:\n powerStr = \"Power state: on\"\n\n if 'power' in rh.parms:\n # Test here to see if we only need power state\n # Then we can return early\n rh.printLn(\"N\", powerStr)\n rh.updateResults(results)\n rh.printSysLog(\"Exit getVM.getStatus, rc: \" +\n str(rh.results['overallRC']))\n return rh.results['overallRC']\n\n if results['rs'] != 1:\n # Guest is logged on, go get more info\n results = getPerfInfo(rh, rh.userid)\n\n if results['overallRC'] != 0:\n # Something went wrong in subroutine, exit\n rh.updateResults(results)\n rh.printSysLog(\"Exit getVM.getStatus, rc: \" +\n str(rh.results['overallRC']))\n return rh.results['overallRC']\n else:\n # Everything went well, response should be good\n memStr = results['response'].split(\"\\n\")[0]\n usedMemStr = results['response'].split(\"\\n\")[1]\n procStr = results['response'].split(\"\\n\")[2]\n timeStr = results['response'].split(\"\\n\")[3]\n\n # Build our output string according\n # to what information was asked for\n 
if 'memory' in rh.parms:\n outStr = memStr + \"\\n\" + usedMemStr\n elif 'cpu' in rh.parms:\n outStr = procStr + \"\\n\" + timeStr\n else:\n # Default to all\n outStr = powerStr + \"\\n\" + memStr + \"\\n\" + usedMemStr\n outStr += \"\\n\" + procStr + \"\\n\" + timeStr\n rh.printLn(\"N\", outStr)\n rh.printSysLog(\"Exit getVM.getStatus, rc: \" +\n str(rh.results['overallRC']))\n return rh.results['overallRC']", "response": "Get the basic status of a virtual machine."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nextract data from smcli System_WWPN_Query output.", "response": "def extract_fcp_data(raw_data, status):\n \"\"\"\n extract data from smcli System_WWPN_Query output.\n Input:\n raw data returned from smcli\n Output:\n data extracted would be like:\n 'status:Free \\n\n fcp_dev_no:1D2F\\n\n physical_wwpn:C05076E9928051D1\\n\n channel_path_id:8B\\n\n npiv_wwpn': 'NONE'\\n\n\n status:Free\\n\n fcp_dev_no:1D29\\n\n physical_wwpn:C05076E9928051D1\\n\n channel_path_id:8B\\n\n npiv_wwpn:NONE\n \"\"\"\n raw_data = raw_data.split('\\n')\n\n # clear blank lines\n data = []\n for i in raw_data:\n i = i.strip(' \\n')\n if i == '':\n continue\n else:\n data.append(i)\n # process data into one list of dicts\n results = []\n for i in range(0, len(data), 5):\n temp = data[i + 1].split(':')[-1].strip()\n # only return results match the status\n if temp.lower() == status.lower():\n results.extend(data[i:i + 5])\n\n return '\\n'.join(results)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget fcp info and filter by the status. Input: Request Handle with the following properties: function - 'GETVM' subfunction - 'FCPINFO' userid - userid of the virtual machine parms['status'] - The status for filter results. Output: Request Handle updated with the results. 
Return code - 0: ok, non-zero: error", "response": "def fcpinfo(rh):\n \"\"\"\n Get fcp info and filter by the status.\n\n Input:\n Request Handle with the following properties:\n function - 'GETVM'\n subfunction - 'FCPINFO'\n userid - userid of the virtual machine\n parms['status'] - The status for filter results.\n\n Output:\n Request Handle updated with the results.\n Return code - 0: ok, non-zero: error\n \"\"\"\n rh.printSysLog(\"Enter changeVM.dedicate\")\n\n parms = [\"-T\", rh.userid]\n\n hideList = []\n results = invokeSMCLI(rh,\n \"System_WWPN_Query\",\n parms,\n hideInLog=hideList)\n\n if results['overallRC'] != 0:\n # SMAPI API failed.\n rh.printLn(\"ES\", results['response'])\n rh.updateResults(results) # Use results from invokeSMCLI\n\n if results['overallRC'] == 0:\n # extract data from smcli return\n ret = extract_fcp_data(results['response'], rh.parms['status'])\n # write the ret into results['response']\n rh.printLn(\"N\", ret)\n else:\n rh.printLn(\"ES\", results['response'])\n rh.updateResults(results) # Use results from invokeSMCLI\n\n return rh.results['overallRC']"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _no_auto_update_getter(self):\n if getattr(self, '_no_auto_update', None) is not None:\n return self._no_auto_update\n else:\n self._no_auto_update = utils._TempBool()\n return self._no_auto_update", "response": "A method to get the value of _no_auto_update"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _infer_interval_breaks(coord):\n coord = np.asarray(coord)\n deltas = 0.5 * (coord[1:] - coord[:-1])\n first = coord[0] - deltas[0]\n last = coord[-1] + deltas[-1]\n return np.r_[[first], coord[:-1] + deltas, [last]]", "response": "Infer interval breaks from a coordinate."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _get_variable_names(arr):\n if 
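The record filtering that `extract_fcp_data` performs can be shown in isolation: the smcli output is treated as fixed-size records, and a record survives only when its status field matches. The sketch below assumes 5-line records with the status on the second line, matching the original's `data[i + 1]` access; the helper name is illustrative:

```python
def filter_fcp_records(raw_data, status, record_len=5):
    """Keep only the FCP device records whose status matches.

    Simplified version of the extract_fcp_data logic: drop blank
    lines, walk the remaining lines in fixed-size records, and keep
    a record when its 'status:<value>' field equals `status`
    (case-insensitively).
    """
    # Drop blank lines, mirroring the strip/skip loop in the original.
    lines = [ln.strip() for ln in raw_data.split('\n') if ln.strip()]
    results = []
    for i in range(0, len(lines), record_len):
        record = lines[i:i + record_len]
        if len(record) < record_len:
            break  # ignore a trailing partial record
        # The status field is assumed to sit on the second line.
        rec_status = record[1].split(':')[-1].strip()
        if rec_status.lower() == status.lower():
            results.extend(record)
    return '\n'.join(results)
```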
VARIABLELABEL in arr.dims:\n return arr.coords[VARIABLELABEL].tolist()\n else:\n return arr.name", "response": "Return the variable names of an array"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef to_slice(arr):\n if isinstance(arr, slice):\n return arr\n if len(arr) == 1:\n return slice(arr[0], arr[0] + 1)\n step = np.unique(arr[1:] - arr[:-1])\n if len(step) == 1:\n return slice(arr[0], arr[-1] + step[0], step[0])", "response": "Test whether arr can be replaced by a slice\n "} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nfunctioning to return the coordinate as integer array or slice of integers or slice of integers", "response": "def get_index_from_coord(coord, base_index):\n \"\"\"Function to return the coordinate as integer, integer array or slice\n\n If `coord` is zero-dimensional, the corresponding integer in `base_index`\n will be supplied. Otherwise it is first tried to return a slice, if that\n does not work an integer array with the corresponding indices is returned.\n\n Parameters\n ----------\n coord: xarray.Coordinate or xarray.Variable\n Coordinate to convert\n base_index: pandas.Index\n The base index from which the `coord` was extracted\n\n Returns\n -------\n int, array of ints or slice\n The indexer that can be used to access the `coord` in the\n `base_index`\n \"\"\"\n try:\n values = coord.values\n except AttributeError:\n values = coord\n if values.ndim == 0:\n return base_index.get_loc(values[()])\n if len(values) == len(base_index) and (values == base_index).all():\n return slice(None)\n values = np.array(list(map(lambda i: base_index.get_loc(i), values)))\n return to_slice(values) or values"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngetting the time information from file names", "response": "def get_tdata(t_format, files):\n \"\"\"\n Get the time information from file names\n\n Parameters\n ----------\n t_format: 
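The two helpers above are small enough to demonstrate directly: `_infer_interval_breaks` turns n cell centers into n + 1 cell edges, and `to_slice` collapses an equally spaced index array into a `slice`. A self-contained version, with `to_slice`'s implicit `None` return made explicit:

```python
import numpy as np

def infer_interval_breaks(coord):
    """Midpoints between coordinate values, extrapolated at the ends.

    For n cell centers this returns the n + 1 cell edges, as in the
    _infer_interval_breaks helper above.
    """
    coord = np.asarray(coord, dtype=float)
    deltas = 0.5 * (coord[1:] - coord[:-1])
    first = coord[0] - deltas[0]
    last = coord[-1] + deltas[-1]
    return np.r_[[first], coord[:-1] + deltas, [last]]

def to_slice(arr):
    """Replace an equally spaced integer array by an equivalent slice.

    Returns None when the values are not equally spaced (the original
    reaches the same result by falling off the end of the function).
    """
    if isinstance(arr, slice):
        return arr
    arr = np.asarray(arr)
    if len(arr) == 1:
        return slice(arr[0], arr[0] + 1)
    step = np.unique(arr[1:] - arr[:-1])
    if len(step) == 1:
        return slice(arr[0], arr[-1] + step[0], step[0])
    return None
```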
str\n The string that can be used to get the time information in the files.\n Any numeric datetime format string (e.g. %Y, %m, %H) can be used, but\n not non-numeric strings like %b, etc. See [1]_ for the datetime format\n strings\n files: list of str\n The that contain the time informations\n\n Returns\n -------\n pandas.Index\n The time coordinate\n list of str\n The file names as they are sorten in the returned index\n\n References\n ----------\n .. [1] https://docs.python.org/2/library/datetime.html\"\"\"\n def median(arr):\n return arr.min() + (arr.max() - arr.min())/2\n import re\n from pandas import Index\n t_pattern = t_format\n for fmt, patt in t_patterns.items():\n t_pattern = t_pattern.replace(fmt, patt)\n t_pattern = re.compile(t_pattern)\n time = list(range(len(files)))\n for i, f in enumerate(files):\n time[i] = median(np.array(list(map(\n lambda s: np.datetime64(dt.datetime.strptime(s, t_format)),\n t_pattern.findall(f)))))\n ind = np.argsort(time) # sort according to time\n files = np.array(files)[ind]\n time = np.array(time)[ind]\n return to_datetime(Index(time, name='time')), files"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef to_netcdf(ds, *args, **kwargs):\n to_update = {}\n for v, obj in six.iteritems(ds.variables):\n units = obj.attrs.get('units', obj.encoding.get('units', None))\n if units == 'day as %Y%m%d.%f' and np.issubdtype(\n obj.dtype, np.datetime64):\n to_update[v] = xr.Variable(\n obj.dims, AbsoluteTimeEncoder(obj), attrs=obj.attrs.copy(),\n encoding=obj.encoding)\n to_update[v].attrs['units'] = units\n if to_update:\n ds = ds.copy()\n ds.update(to_update)\n return xarray_api.to_netcdf(ds, *args, **kwargs)", "response": "This function will store the given dataset as a netCDF file and return the netCDF file."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ntry to get the file name from the NioDataStore store", 
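`get_tdata` builds a regex from the numeric strptime directives in `t_format`, extracts a timestamp from each file name, and sorts the files by it. The `t_patterns` table it relies on is not shown in this excerpt, so the directive-to-regex mapping below is a minimal stand-in, and the helper name is illustrative:

```python
import datetime as dt
import re

# Minimal stand-in for the t_patterns table used by get_tdata:
# map numeric strptime directives to regex fragments.
T_PATTERNS = {
    '%Y': r'\d{4}',
    '%m': r'\d{2}',
    '%d': r'\d{2}',
    '%H': r'\d{2}',
}

def times_from_filenames(files, t_format='%Y%m%d'):
    """Extract one datetime per file name; return both sorted by time.

    Reduced sketch of get_tdata: turn the format string into a regex,
    find the first match in each name, parse it with strptime, and
    sort the files accordingly.
    """
    pattern = t_format
    for fmt, patt in T_PATTERNS.items():
        pattern = pattern.replace(fmt, patt)
    regex = re.compile(pattern)
    times = [dt.datetime.strptime(regex.search(f).group(), t_format)
             for f in files]
    order = sorted(range(len(files)), key=times.__getitem__)
    return [times[i] for i in order], [files[i] for i in order]
```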
"response": "def _get_fname_nio(store):\n \"\"\"Try to get the file name from the NioDataStore store\"\"\"\n try:\n f = store.ds.file\n except AttributeError:\n return None\n try:\n return f.path\n except AttributeError:\n return None"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_filename_ds(ds, dump=True, paths=None, **kwargs):\n from tempfile import NamedTemporaryFile\n\n # if already specified, return that filename\n if ds.psy._filename is not None:\n return tuple([ds.psy._filename] + list(ds.psy.data_store))\n\n def dump_nc():\n # make sure that the data store is not closed by providing a\n # write argument\n if xr_version < (0, 11):\n kwargs.setdefault('writer', xarray_api.ArrayWriter())\n store = to_netcdf(ds, fname, **kwargs)\n else:\n # `writer` parameter was removed by\n # https://github.com/pydata/xarray/pull/2261\n kwargs.setdefault('multifile', True)\n store = to_netcdf(ds, fname, **kwargs)[1]\n store_mod = store.__module__\n store_cls = store.__class__.__name__\n ds._file_obj = store\n return store_mod, store_cls\n\n def tmp_it():\n while True:\n yield NamedTemporaryFile(suffix='.nc').name\n\n fname = None\n if paths is True or (dump and paths is None):\n paths = tmp_it()\n elif paths is not None:\n if isstring(paths):\n paths = iter([paths])\n else:\n paths = iter(paths)\n # try to get the filename from the data store of the obj\n store_mod, store_cls = ds.psy.data_store\n if store_mod is not None:\n store = ds._file_obj\n # try several engines\n if hasattr(store, 'file_objs'):\n fname = []\n store_mod = []\n store_cls = []\n for obj in store.file_objs: # mfdataset\n _fname = None\n for func in get_fname_funcs:\n if _fname is None:\n _fname = func(obj)\n if _fname is not None:\n fname.append(_fname)\n store_mod.append(obj.__module__)\n store_cls.append(obj.__class__.__name__)\n fname = tuple(fname)\n store_mod = tuple(store_mod)\n store_cls = tuple(store_cls)\n else:\n for 
func in get_fname_funcs:\n fname = func(store)\n if fname is not None:\n break\n # check if paths is provided and if yes, save the file\n if fname is None and paths is not None:\n fname = next(paths, None)\n if dump and fname is not None:\n store_mod, store_cls = dump_nc()\n\n ds.psy.filename = fname\n ds.psy.data_store = (store_mod, store_cls)\n\n return fname, store_mod, store_cls", "response": "Returns the path to the dataset that corresponds to a single file."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nopen an instance of xarray. Dataset.", "response": "def open_dataset(filename_or_obj, decode_cf=True, decode_times=True,\n decode_coords=True, engine=None, gridfile=None, **kwargs):\n \"\"\"\n Open an instance of :class:`xarray.Dataset`.\n\n This method has the same functionality as the :func:`xarray.open_dataset`\n method except that is supports an additional 'gdal' engine to open\n gdal Rasters (e.g. GeoTiffs) and that is supports absolute time units like\n ``'day as %Y%m%d.%f'`` (if `decode_cf` and `decode_times` are True).\n\n Parameters\n ----------\n %(xarray.open_dataset.parameters.no_engine)s\n engine: {'netcdf4', 'scipy', 'pydap', 'h5netcdf', 'gdal'}, optional\n Engine to use when reading netCDF files. 
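The filename resolution in `get_filename_ds` tries each function in `get_fname_funcs` until one returns something other than `None`. That try-each-getter chain is a reusable pattern on its own (helper name is illustrative):

```python
def first_result(funcs, obj):
    """Return the first non-None result from a chain of getters.

    The same pattern get_filename_ds uses over get_fname_funcs: each
    candidate inspects the store object and answers with a file name
    or None, and the first answer wins.
    """
    for func in funcs:
        result = func(obj)
        if result is not None:
            return result
    return None
```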
If not provided, the default\n engine is chosen based on available dependencies, with a preference for\n 'netcdf4'.\n %(CFDecoder.decode_coords.parameters.gridfile)s\n\n Returns\n -------\n xarray.Dataset\n The dataset that contains the variables from `filename_or_obj`\"\"\"\n # use the absolute path name (is saver when saving the project)\n if isstring(filename_or_obj) and osp.exists(filename_or_obj):\n filename_or_obj = osp.abspath(filename_or_obj)\n if engine == 'gdal':\n from psyplot.gdal_store import GdalStore\n filename_or_obj = GdalStore(filename_or_obj)\n engine = None\n ds = xr.open_dataset(filename_or_obj, decode_cf=decode_cf,\n decode_coords=False, engine=engine,\n decode_times=decode_times, **kwargs)\n if decode_cf:\n ds = CFDecoder.decode_ds(\n ds, decode_coords=decode_coords, decode_times=decode_times,\n gridfile=gridfile)\n return ds"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nopens multiple files as a single dataset.", "response": "def open_mfdataset(paths, decode_cf=True, decode_times=True,\n decode_coords=True, engine=None, gridfile=None,\n t_format=None, **kwargs):\n \"\"\"\n Open multiple files as a single dataset.\n\n This function is essentially the same as the :func:`xarray.open_mfdataset`\n function but (as the :func:`open_dataset`) supports additional decoding\n and the ``'gdal'`` engine.\n You can further specify the `t_format` parameter to get the time\n information from the files and use the results to concatenate the files\n\n Parameters\n ----------\n %(xarray.open_mfdataset.parameters.no_engine)s\n %(open_dataset.parameters.engine)s\n %(get_tdata.parameters.t_format)s\n %(CFDecoder.decode_coords.parameters.gridfile)s\n\n Returns\n -------\n xarray.Dataset\n The dataset that contains the variables from `filename_or_obj`\"\"\"\n if t_format is not None or engine == 'gdal':\n if isinstance(paths, six.string_types):\n paths = sorted(glob(paths))\n if not paths:\n raise IOError('no files to 
open')\n if t_format is not None:\n time, paths = get_tdata(t_format, paths)\n kwargs['concat_dim'] = time\n if engine == 'gdal':\n from psyplot.gdal_store import GdalStore\n paths = list(map(GdalStore, paths))\n engine = None\n kwargs['lock'] = False\n\n ds = xr.open_mfdataset(\n paths, decode_cf=decode_cf, decode_times=decode_times, engine=engine,\n decode_coords=False, **kwargs)\n if decode_cf:\n ds = CFDecoder.decode_ds(ds, gridfile=gridfile,\n decode_coords=decode_coords,\n decode_times=decode_times)\n ds.psy._concat_dim = kwargs.get('concat_dim')\n return ds"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nopening a dataset and return it", "response": "def _open_ds_from_store(fname, store_mod=None, store_cls=None, **kwargs):\n \"\"\"Open a dataset and return it\"\"\"\n if isinstance(fname, xr.Dataset):\n return fname\n if not isstring(fname):\n try: # test iterable\n fname[0]\n except TypeError:\n pass\n else:\n if store_mod is not None and store_cls is not None:\n if isstring(store_mod):\n store_mod = repeat(store_mod)\n if isstring(store_cls):\n store_cls = repeat(store_cls)\n fname = [_open_store(sm, sc, f)\n for sm, sc, f in zip(store_mod, store_cls, fname)]\n kwargs['engine'] = None\n kwargs['lock'] = False\n return open_mfdataset(fname, **kwargs)\n if store_mod is not None and store_cls is not None:\n fname = _open_store(store_mod, store_cls, fname)\n return open_dataset(fname, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ndisconnect a function call to the signal.", "response": "def disconnect(self, func=None):\n \"\"\"Disconnect a function call to the signal. 
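The `disconnect` method being described has two modes: with `None` it drops every connection, with a function it removes just that one. A minimal signal class showing both, with `connect` and `emit` added so the sketch is self-contained (those two names are assumptions, not shown in this excerpt):

```python
class Signal:
    """Minimal signal with the disconnect semantics shown above."""

    def __init__(self):
        self._connections = []

    def connect(self, func):
        self._connections.append(func)

    def disconnect(self, func=None):
        """Disconnect a function call to the signal. If None, all
        connections are disconnected."""
        if func is None:
            self._connections = []
        else:
            self._connections.remove(func)

    def emit(self, *args):
        # Iterate over a copy so handlers may disconnect themselves.
        for func in list(self._connections):
            func(*args)
```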
If None, all connections\n are disconnected\"\"\"\n if func is None:\n self._connections = []\n else:\n self._connections.remove(func)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef logger(self):\n try:\n return self._logger\n except AttributeError:\n name = '%s.%s' % (self.__module__, self.__class__.__name__)\n self._logger = logging.getLogger(name)\n self.logger.debug('Initializing...')\n return self._logger", "response": "Returns the logger of this instance."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns the right decoder class that can decode the given dataset and variable.", "response": "def get_decoder(cls, ds, var):\n \"\"\"\n Class method to get the right decoder class that can decode the\n given dataset and variable\n\n Parameters\n ----------\n %(CFDecoder.can_decode.parameters)s\n\n Returns\n -------\n CFDecoder\n The decoder for the given dataset that can decode the variable\n `var`\"\"\"\n for decoder_cls in cls._registry:\n if decoder_cls.can_decode(ds, var):\n return decoder_cls(ds)\n return CFDecoder(ds)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nsetting the coordinates and bounds in a dataset This static method sets those coordinates and bounds that are marked marked in the netCDF attributes as coordinates in :attr:`ds` (without deleting them from the variable attributes because this information is necessary for visualizing the data correctly) Parameters ---------- ds: xarray.Dataset The dataset to decode gridfile: str The path to a separate grid file or a xarray.Dataset instance which may store the coordinates used in `ds` Returns ------- xarray.Dataset `ds` with additional coordinates", "response": "def decode_coords(ds, gridfile=None):\n \"\"\"\n Sets the coordinates and bounds in a dataset\n\n This static method sets those coordinates and bounds that are marked\n marked in the netCDF 
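`get_decoder` walks a class registry and returns the first decoder whose `can_decode` accepts the dataset/variable pair, falling back to the base `CFDecoder`. The dispatch can be sketched with plain dicts standing in for datasets and variables; the class names and the `__init_subclass__` registration are illustrative:

```python
class BaseDecoder:
    """Sketch of the CFDecoder registry dispatch shown above."""

    _registry = []

    def __init__(self, ds):
        self.ds = ds

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        # Prepend so more specific decoders are tried first.
        BaseDecoder._registry.insert(0, cls)

    @classmethod
    def can_decode(cls, ds, var):
        return False

    @classmethod
    def get_decoder(cls, ds, var):
        for decoder_cls in cls._registry:
            if decoder_cls.can_decode(ds, var):
                return decoder_cls(ds)
        return BaseDecoder(ds)


class UnstructuredDecoder(BaseDecoder):
    @classmethod
    def can_decode(cls, ds, var):
        # A dict stands in for the variable's attrs here.
        return var.get('grid_type') == 'unstructured'
```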
attributes as coordinates in :attr:`ds` (without\n deleting them from the variable attributes because this information is\n necessary for visualizing the data correctly)\n\n Parameters\n ----------\n ds: xarray.Dataset\n The dataset to decode\n gridfile: str\n The path to a separate grid file or a xarray.Dataset instance which\n may store the coordinates used in `ds`\n\n Returns\n -------\n xarray.Dataset\n `ds` with additional coordinates\"\"\"\n def add_attrs(obj):\n if 'coordinates' in obj.attrs:\n extra_coords.update(obj.attrs['coordinates'].split())\n obj.encoding['coordinates'] = obj.attrs.pop('coordinates')\n if 'bounds' in obj.attrs:\n extra_coords.add(obj.attrs['bounds'])\n if gridfile is not None and not isinstance(gridfile, xr.Dataset):\n gridfile = open_dataset(gridfile)\n extra_coords = set(ds.coords)\n for k, v in six.iteritems(ds.variables):\n add_attrs(v)\n add_attrs(ds)\n if gridfile is not None:\n ds.update({k: v for k, v in six.iteritems(gridfile.variables)\n if k in extra_coords})\n if xr_version < (0, 11):\n ds.set_coords(extra_coords.intersection(ds.variables),\n inplace=True)\n else:\n ds._coord_names.update(extra_coords.intersection(ds.variables))\n return ds"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ntesting if a variable is on a triangular grid.", "response": "def is_triangular(self, var):\n \"\"\"\n Test if a variable is on a triangular grid\n\n This method first checks the `grid_type` attribute of the variable (if\n existent) whether it is equal to ``\"unstructered\"``, then it checks\n whether the bounds are not two-dimensional.\n\n Parameters\n ----------\n var: xarray.Variable or xarray.DataArray\n The variable to check\n\n Returns\n -------\n bool\n True, if the grid is triangular, else False\"\"\"\n warn(\"The 'is_triangular' method is depreceated and will be removed \"\n \"soon! 
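The scan at the heart of `decode_coords` gathers every name referenced by a CF `coordinates` attribute (space-separated) plus any `bounds` name, so those variables can be promoted to coordinates. Isolated as a sketch over plain attribute dicts:

```python
def collect_extra_coords(variables_attrs):
    """Gather names referenced by CF 'coordinates' and 'bounds' attrs.

    variables_attrs maps variable names to their attribute dicts, as
    a stand-in for iterating ds.variables in decode_coords above.
    """
    extra = set()
    for attrs in variables_attrs.values():
        if 'coordinates' in attrs:
            extra.update(attrs['coordinates'].split())
        if 'bounds' in attrs:
            extra.add(attrs['bounds'])
    return extra
```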
Use the 'is_unstructured' method!\", DeprecationWarning,\n stacklevel=1)\n return str(var.attrs.get('grid_type')) == 'unstructured' or \\\n self._check_triangular_bounds(var)[0]"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_cell_node_coord(self, var, coords=None, axis='x', nans=None):\n if coords is None:\n coords = self.ds.coords\n axis = axis.lower()\n get_coord = self.get_x if axis == 'x' else self.get_y\n coord = get_coord(var, coords=coords)\n if coord is not None:\n bounds = self._get_coord_cell_node_coord(coord, coords, nans,\n var=var)\n if bounds is None:\n bounds = self.get_plotbounds(coord)\n dim0 = coord.dims[-1]\n\n bounds = xr.DataArray(\n np.dstack([bounds[:-1], bounds[1:]])[0],\n dims=(dim0, '_bnds'), attrs=coord.attrs.copy(),\n name=coord.name + '_bnds')\n if bounds is not None and bounds.shape[-1] == 2:\n # normal CF-Conventions for rectangular grids\n arr = bounds.values\n if axis == 'y':\n stacked = np.repeat(\n np.dstack([arr, arr]).reshape((-1, 4)),\n len(self.get_x(var, coords)), axis=0)\n else:\n stacked = np.tile(np.c_[arr, arr[:, ::-1]],\n (len(self.get_y(var, coords)), 1))\n bounds = xr.DataArray(\n stacked,\n dims=('cell', bounds.dims[1]), name=bounds.name,\n attrs=bounds.attrs)\n\n return bounds\n return None", "response": "Returns the coordinates of the cell node in the variable attribute."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nget the boundaries of an unstructed coordinate", "response": "def _get_coord_cell_node_coord(self, coord, coords=None, nans=None,\n var=None):\n \"\"\"\n Get the boundaries of an unstructed coordinate\n\n Parameters\n ----------\n coord: xr.Variable\n The coordinate whose bounds should be returned\n %(CFDecoder.get_cell_node_coord.parameters.no_var|axis)s\n\n Returns\n -------\n %(CFDecoder.get_cell_node_coord.returns)s\n \"\"\"\n bounds = coord.attrs.get('bounds')\n if bounds is not 
None:\n bounds = self.ds.coords.get(bounds)\n if bounds is not None:\n if coords is not None:\n bounds = bounds.sel(**{\n key: coords[key]\n for key in set(coords).intersection(bounds.dims)})\n if nans is not None and var is None:\n raise ValueError(\"Need the variable to deal with NaN!\")\n elif nans is None:\n pass\n elif nans == 'skip':\n bounds = bounds[~np.isnan(var.values)]\n elif nans == 'only':\n bounds = bounds[np.isnan(var.values)]\n else:\n raise ValueError(\n \"`nans` must be either None, 'skip', or 'only'! \"\n \"Not {0}!\".format(str(nans)))\n return bounds"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nchecking whether the bounds in the variable attribute are triangular.", "response": "def _check_triangular_bounds(self, var, coords=None, axis='x', nans=None):\n \"\"\"\n Checks whether the bounds in the variable attribute are triangular\n\n Parameters\n ----------\n %(CFDecoder.get_cell_node_coord.parameters)s\n\n Returns\n -------\n bool or None\n True, if unstructered, None if it could not be determined\n xarray.Coordinate or None\n the bounds corrdinate (if existent)\"\"\"\n # !!! WILL BE REMOVED IN THE NEAR FUTURE! 
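The `nans` handling in `_get_coord_cell_node_coord` has three modes: `None` keeps every row, `'skip'` keeps rows where the variable is not NaN, and `'only'` keeps rows where it is. As a free-standing sketch over NumPy arrays:

```python
import numpy as np

def filter_by_nans(bounds, values, nans=None):
    """Apply the nans handling used above to a bounds array.

    bounds has one row per cell; values holds the variable's data for
    those cells. None keeps all rows, 'skip' drops NaN cells, 'only'
    keeps NaN cells.
    """
    if nans is None:
        return bounds
    if nans == 'skip':
        return bounds[~np.isnan(values)]
    if nans == 'only':
        return bounds[np.isnan(values)]
    raise ValueError(
        "`nans` must be either None, 'skip', or 'only'! Not %r!" % nans)
```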
!!!\n bounds = self.get_cell_node_coord(var, coords, axis=axis,\n nans=nans)\n if bounds is not None:\n return bounds.shape[-1] == 3, bounds\n else:\n return None, None"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ntesting if a variable is on an unstructered grid.", "response": "def is_unstructured(self, var):\n \"\"\"\n Test if a variable is on an unstructered grid\n\n Parameters\n ----------\n %(CFDecoder.is_triangular.parameters)s\n\n Returns\n -------\n %(CFDecoder.is_triangular.returns)s\n\n Notes\n -----\n Currently this is the same as :meth:`is_triangular` method, but may\n change in the future to support hexagonal grids\"\"\"\n if str(var.attrs.get('grid_type')) == 'unstructured':\n return True\n xcoord = self.get_x(var)\n if xcoord is not None:\n bounds = self._get_coord_cell_node_coord(xcoord)\n if bounds is not None and bounds.shape[-1] > 2:\n return True"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ntests if a variable is on a circumpolar grid", "response": "def is_circumpolar(self, var):\n \"\"\"\n Test if a variable is on a circumpolar grid\n\n Parameters\n ----------\n %(CFDecoder.is_triangular.parameters)s\n\n Returns\n -------\n %(CFDecoder.is_triangular.returns)s\"\"\"\n xcoord = self.get_x(var)\n return xcoord is not None and xcoord.ndim == 2"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_variable_by_axis(self, var, axis, coords=None):\n axis = axis.lower()\n if axis not in list('xyzt'):\n raise ValueError(\"Axis must be one of X, Y, Z, T, not {0}\".format(\n axis))\n # we first check for the dimensions and then for the coordinates\n # attribute\n coords = coords or self.ds.coords\n coord_names = var.attrs.get('coordinates', var.encoding.get(\n 'coordinates', '')).split()\n if not coord_names:\n return\n ret = []\n for coord in map(lambda dim: coords[dim], filter(\n lambda dim: dim in 
coords, chain(\n coord_names, var.dims))):\n # check for the axis attribute or whether the coordinate is in the\n # list of possible coordinate names\n if (coord.name not in (c.name for c in ret) and\n (coord.attrs.get('axis', '').lower() == axis or\n coord.name in getattr(self, axis))):\n ret.append(coord)\n if ret:\n return None if len(ret) > 1 else ret[0]\n # If the coordinates attribute is specified but the coordinate\n # variables themselves have no 'axis' attribute, we interpret the\n # coordinates such that x: -1, y: -2, z: -3\n # Since however the CF Conventions do not determine the order on how\n # the coordinates shall be saved, we try to use a pattern matching\n # for latitude and longitude. This is not very nice, hence it is\n # better to specify the :attr:`x` and :attr:`y` attribute\n tnames = self.t.intersection(coord_names)\n if axis == 'x':\n for cname in filter(lambda cname: re.search('lon', cname),\n coord_names):\n return coords[cname]\n return coords.get(coord_names[-1])\n elif axis == 'y' and len(coord_names) >= 2:\n for cname in filter(lambda cname: re.search('lat', cname),\n coord_names):\n return coords[cname]\n return coords.get(coord_names[-2])\n elif (axis == 'z' and len(coord_names) >= 3 and\n coord_names[-3] not in tnames):\n return coords.get(coord_names[-3])\n elif axis == 't' and tnames:\n tname = next(iter(tnames))\n if len(tnames) > 1:\n warn(\"Found multiple matches for time coordinate in the \"\n \"coordinates: %s. I use %s\" % (', '.join(tnames), tname),\n PsyPlotRuntimeWarning)\n return coords.get(tname)", "response": "This method returns the coordinate matching the specified axis for the given variable."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns the x - coordinate of a variable in the x - dimension of the dataset.", "response": "def get_x(self, var, coords=None):\n \"\"\"\n Get the x-coordinate of a variable\n\n This method searches for the x-coordinate in the :attr:`ds`. 
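`get_variable_by_axis` resolves a coordinate in stages: an explicit `axis` attribute wins, then a name pattern (`lon` for x, `lat` for y), then the CF positional convention (x: -1, y: -2). A reduced sketch with plain dicts standing in for coordinate attrs (helper name is illustrative):

```python
import re

def pick_axis_coord(coords, coord_names, axis):
    """Pick a coordinate name for an axis, as in the method above.

    coords maps coordinate names to their attribute dicts;
    coord_names is the ordered list from the 'coordinates' attribute.
    """
    axis = axis.lower()
    # 1. An explicit axis attribute takes precedence.
    for name in coord_names:
        if coords[name].get('axis', '').lower() == axis:
            return name
    # 2. Fall back to a name pattern for longitude/latitude.
    pattern = {'x': 'lon', 'y': 'lat'}.get(axis)
    if pattern:
        for name in coord_names:
            if re.search(pattern, name):
                return name
    # 3. Finally use the CF positional convention x: -1, y: -2.
    position = {'x': -1, 'y': -2}.get(axis)
    if position is not None and len(coord_names) >= -position:
        return coord_names[position]
    return None
```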
It first\n checks whether there is one dimension that holds an ``'axis'``\n attribute with 'X', otherwise it looks whether there is an intersection\n between the :attr:`x` attribute and the variables dimensions, otherwise\n it returns the coordinate corresponding to the last dimension of `var`\n\n Possible types\n --------------\n var: xarray.Variable\n The variable to get the x-coordinate for\n coords: dict\n Coordinates to use. If None, the coordinates of the dataset in the\n :attr:`ds` attribute are used.\n\n Returns\n -------\n xarray.Coordinate or None\n The y-coordinate or None if it could be found\"\"\"\n coords = coords or self.ds.coords\n coord = self.get_variable_by_axis(var, 'x', coords)\n if coord is not None:\n return coord\n return coords.get(self.get_xname(var))"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngets the name of the x - dimension of the variable.", "response": "def get_xname(self, var, coords=None):\n \"\"\"Get the name of the x-dimension\n\n This method gives the name of the x-dimension (which is not necessarily\n the name of the coordinate if the variable has a coordinate attribute)\n\n Parameters\n ----------\n var: xarray.Variables\n The variable to get the dimension for\n coords: dict\n The coordinates to use for checking the axis attribute. If None,\n they are not used\n\n Returns\n -------\n str\n The coordinate name\n\n See Also\n --------\n get_x\"\"\"\n if coords is not None:\n coord = self.get_variable_by_axis(var, 'x', coords)\n if coord is not None and coord.name in var.dims:\n return coord.name\n dimlist = list(self.x.intersection(var.dims))\n if dimlist:\n if len(dimlist) > 1:\n warn(\"Found multiple matches for x coordinate in the variable:\"\n \"%s. 
I use %s\" % (', '.join(dimlist), dimlist[0]),\n PsyPlotRuntimeWarning)\n return dimlist[0]\n # otherwise we return the coordinate in the last position\n return var.dims[-1]"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn the y - coordinate of a variable in the dict.", "response": "def get_y(self, var, coords=None):\n \"\"\"\n Get the y-coordinate of a variable\n\n This method searches for the y-coordinate in the :attr:`ds`. It first\n checks whether there is one dimension that holds an ``'axis'``\n attribute with 'Y', otherwise it looks whether there is an intersection\n between the :attr:`y` attribute and the variables dimensions, otherwise\n it returns the coordinate corresponding to the second last dimension of\n `var` (or the last if the dimension of var is one-dimensional)\n\n Possible types\n --------------\n var: xarray.Variable\n The variable to get the y-coordinate for\n coords: dict\n Coordinates to use. If None, the coordinates of the dataset in the\n :attr:`ds` attribute are used.\n\n Returns\n -------\n xarray.Coordinate or None\n The y-coordinate or None if it could be found\"\"\"\n coords = coords or self.ds.coords\n coord = self.get_variable_by_axis(var, 'y', coords)\n if coord is not None:\n return coord\n return coords.get(self.get_yname(var))"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget the name of the y - dimension for a variable.", "response": "def get_yname(self, var, coords=None):\n \"\"\"Get the name of the y-dimension\n\n This method gives the name of the y-dimension (which is not necessarily\n the name of the coordinate if the variable has a coordinate attribute)\n\n Parameters\n ----------\n var: xarray.Variables\n The variable to get the dimension for\n coords: dict\n The coordinates to use for checking the axis attribute. 
If None,\n they are not used\n\n Returns\n -------\n str\n The coordinate name\n\n See Also\n --------\n get_y\"\"\"\n if coords is not None:\n coord = self.get_variable_by_axis(var, 'y', coords)\n if coord is not None and coord.name in var.dims:\n return coord.name\n dimlist = list(self.y.intersection(var.dims))\n if dimlist:\n if len(dimlist) > 1:\n warn(\"Found multiple matches for y coordinate in the variable:\"\n \"%s. I use %s\" % (', '.join(dimlist), dimlist[0]),\n PsyPlotRuntimeWarning)\n return dimlist[0]\n # otherwise we return the coordinate in the last or second last\n # position\n if self.is_unstructured(var):\n return var.dims[-1]\n return var.dims[-2 if var.ndim > 1 else -1]"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns the z - coordinate of a variable in the dataset.", "response": "def get_z(self, var, coords=None):\n \"\"\"\n Get the vertical (z-) coordinate of a variable\n\n This method searches for the z-coordinate in the :attr:`ds`. It first\n checks whether there is one dimension that holds an ``'axis'``\n attribute with 'Z', otherwise it looks whether there is an intersection\n between the :attr:`z` attribute and the variables dimensions, otherwise\n it returns the coordinate corresponding to the third last dimension of\n `var` (or the second last or last if var is two or one-dimensional)\n\n Possible types\n --------------\n var: xarray.Variable\n The variable to get the z-coordinate for\n coords: dict\n Coordinates to use. 
If None, the coordinates of the dataset in the\n :attr:`ds` attribute are used.\n\n Returns\n -------\n xarray.Coordinate or None\n The z-coordinate or None if no z coordinate could be found\"\"\"\n coords = coords or self.ds.coords\n coord = self.get_variable_by_axis(var, 'z', coords)\n if coord is not None:\n return coord\n zname = self.get_zname(var)\n if zname is not None:\n return coords.get(zname)\n return None"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nget the name of the z-dimension This method gives the name of the z-dimension (which is not necessarily the name of the coordinate if the variable has a coordinate attribute) Parameters ---------- var: xarray.Variables The variable to get the dimension for coords: dict The coordinates to use for checking the axis attribute. If None, they are not used Returns ------- str or None The coordinate name or None if no vertical coordinate could be found See Also -------- get_z", "response": "def get_zname(self, var, coords=None):\n \"\"\"Get the name of the z-dimension\n\n This method gives the name of the z-dimension (which is not necessarily\n the name of the coordinate if the variable has a coordinate attribute)\n\n Parameters\n ----------\n var: xarray.Variables\n The variable to get the dimension for\n coords: dict\n The coordinates to use for checking the axis attribute. If None,\n they are not used\n\n Returns\n -------\n str or None\n The coordinate name or None if no vertical coordinate could be\n found\n\n See Also\n --------\n get_z\"\"\"\n if coords is not None:\n coord = self.get_variable_by_axis(var, 'z', coords)\n if coord is not None and coord.name in var.dims:\n return coord.name\n dimlist = list(self.z.intersection(var.dims))\n if dimlist:\n if len(dimlist) > 1:\n warn(\"Found multiple matches for z coordinate in the variable:\"\n \"%s. 
I use %s\" % (', '.join(dimlist), dimlist[0]),\n PsyPlotRuntimeWarning)\n return dimlist[0]\n # otherwise we return the coordinate in the third last position\n is_unstructured = self.is_unstructured(var)\n icheck = -2 if is_unstructured else -3\n min_dim = abs(icheck) if 'variable' not in var.dims else abs(icheck-1)\n if var.ndim >= min_dim and var.dims[icheck] != self.get_tname(\n var, coords):\n return var.dims[icheck]\n return None"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_t(self, var, coords=None):\n coords = coords or self.ds.coords\n coord = self.get_variable_by_axis(var, 't', coords)\n if coord is not None:\n return coord\n dimlist = list(self.t.intersection(var.dims).intersection(coords))\n if dimlist:\n if len(dimlist) > 1:\n warn(\"Found multiple matches for time coordinate in the \"\n \"variable: %s. I use %s\" % (\n ', '.join(dimlist), dimlist[0]),\n PsyPlotRuntimeWarning)\n return coords[dimlist[0]]\n tname = self.get_tname(var)\n if tname is not None:\n return coords.get(tname)\n return None", "response": "This method returns the time coordinate of a variable in the dataset."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_tname(self, var, coords=None):\n if coords is not None:\n coord = self.get_variable_by_axis(var, 't', coords)\n if coord is not None and coord.name in var.dims:\n return coord.name\n dimlist = list(self.t.intersection(var.dims))\n if dimlist:\n if len(dimlist) > 1:\n warn(\"Found multiple matches for t coordinate in the variable:\"\n \"%s. 
I use %s\" % (', '.join(dimlist), dimlist[0]),\n PsyPlotRuntimeWarning)\n return dimlist[0]\n # otherwise we return None\n return None", "response": "Get the name of the t - dimension for the variable"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_idims(self, arr, coords=None):\n if coords is None:\n coords = arr.coords\n else:\n coords = {\n label: coord for label, coord in six.iteritems(arr.coords)\n if label in coords}\n ret = self.get_coord_idims(coords)\n # handle the coordinates that are not in the dataset\n missing = set(arr.dims).difference(ret)\n if missing:\n warn('Could not get slices for the following dimensions: %r' % (\n missing, ), PsyPlotRuntimeWarning)\n return ret", "response": "Get the coordinates in the dataset as int or slice or list of integer that represent the the\n coordinates in the given array."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_coord_idims(self, coords):\n ret = dict(\n (label, get_index_from_coord(coord, self.ds.indexes[label]))\n for label, coord in six.iteritems(coords)\n if label in self.ds.indexes)\n return ret", "response": "Get the slicers for the given coordinates from the base dataset."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_plotbounds(self, coord, kind=None, ignore_shape=False):\n if 'bounds' in coord.attrs:\n bounds = self.ds.coords[coord.attrs['bounds']]\n if ignore_shape:\n return bounds.values.ravel()\n if not bounds.shape[:-1] == coord.shape:\n bounds = self.ds.isel(**self.get_idims(coord))\n try:\n return self._get_plotbounds_from_cf(coord, bounds)\n except ValueError as e:\n warn((e.message if six.PY2 else str(e)) +\n \" Bounds are calculated automatically!\")\n return self._infer_interval_breaks(coord, kind=kind)", "response": "Returns the plotbounds for a given coordinate."} {"SOURCE": "codesearchnet", 
"instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _get_plotbounds_from_cf(coord, bounds):\n\n if bounds.shape[:-1] != coord.shape or bounds.shape[-1] != 2:\n raise ValueError(\n \"Cannot interpret bounds with shape {0} for {1} \"\n \"coordinate with shape {2}.\".format(\n bounds.shape, coord.name, coord.shape))\n ret = np.zeros(tuple(map(lambda i: i+1, coord.shape)))\n ret[tuple(map(slice, coord.shape))] = bounds[..., 0]\n last_slices = tuple(slice(-1, None) for _ in coord.shape)\n ret[last_slices] = bounds[tuple(chain(last_slices, [1]))]\n return ret", "response": "Get the plot bounds for a given coordinate and bounds."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns the set of triangles for the variable in the current object.", "response": "def get_triangles(self, var, coords=None, convert_radian=True,\n copy=False, src_crs=None, target_crs=None,\n nans=None, stacklevel=1):\n \"\"\"\n Get the triangles for the variable\n\n Parameters\n ----------\n var: xarray.Variable or xarray.DataArray\n The variable to use\n coords: dict\n Alternative coordinates to use. If None, the coordinates of the\n :attr:`ds` dataset are used\n convert_radian: bool\n If True and the coordinate has units in 'radian', those are\n converted to degrees\n copy: bool\n If True, vertex arrays are copied\n src_crs: cartopy.crs.Crs\n The source projection of the data. 
If not None, a transformation\n to the given `target_crs` will be done\n target_crs: cartopy.crs.Crs\n The target projection for which the triangles shall be transformed.\n Must only be provided if the `src_crs` is not None.\n %(CFDecoder._check_triangular_bounds.parameters.nans)s\n\n Returns\n -------\n matplotlib.tri.Triangulation\n The spatial triangles of the variable\n\n Raises\n ------\n ValueError\n If `src_crs` is not None and `target_crs` is None\"\"\"\n warn(\"The 'get_triangles' method is deprecated and will be removed \"\n \"soon! Use the 'get_cell_node_coord' method!\",\n DeprecationWarning, stacklevel=stacklevel)\n from matplotlib.tri import Triangulation\n\n def get_vertices(axis):\n bounds = self._check_triangular_bounds(var, coords=coords,\n axis=axis, nans=nans)[1]\n if coords is not None:\n bounds = coords.get(bounds.name, bounds)\n vertices = bounds.values.ravel()\n if convert_radian:\n coord = getattr(self, 'get_' + axis)(var)\n if coord.attrs.get('units') == 'radian':\n vertices = vertices * 180. 
/ np.pi\n return vertices if not copy else vertices.copy()\n\n if coords is None:\n coords = self.ds.coords\n\n xvert = get_vertices('x')\n yvert = get_vertices('y')\n if src_crs is not None and src_crs != target_crs:\n if target_crs is None:\n raise ValueError(\n \"Found %s for the source crs but got None for the \"\n \"target_crs!\" % (src_crs, ))\n arr = target_crs.transform_points(src_crs, xvert, yvert)\n xvert = arr[:, 0]\n yvert = arr[:, 1]\n triangles = np.reshape(range(len(xvert)), (len(xvert) // 3, 3))\n return Triangulation(xvert, yvert, triangles)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _infer_interval_breaks(coord, kind=None):\n if coord.ndim == 1:\n return _infer_interval_breaks(coord)\n elif coord.ndim == 2:\n from scipy.interpolate import interp2d\n kind = kind or rcParams['decoder.interp_kind']\n y, x = map(np.arange, coord.shape)\n new_x, new_y = map(_infer_interval_breaks, [x, y])\n coord = np.asarray(coord)\n return interp2d(x, y, coord, kind=kind, copy=False)(new_x, new_y)", "response": "Infer interval breaks from the data in coord."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef correct_dims(self, var, dims={}, remove=True):\n method_mapping = {'x': self.get_xname,\n 'z': self.get_zname, 't': self.get_tname}\n dims = dict(dims)\n if self.is_unstructured(var): # we assume a one-dimensional grid\n method_mapping['y'] = self.get_xname\n else:\n method_mapping['y'] = self.get_yname\n for key in six.iterkeys(dims.copy()):\n if key in method_mapping and key not in var.dims:\n dim_name = method_mapping[key](var, self.ds.coords)\n if dim_name in dims:\n dims.pop(key)\n else:\n new_name = method_mapping[key](var)\n if new_name is not None:\n dims[new_name] = dims.pop(key)\n # now remove the unnecessary dimensions\n if remove:\n for key in set(dims).difference(var.dims):\n dims.pop(key)\n self.logger.debug(\n 
\"Could not find a dimension matching %s in variable %s!\",\n key, var)\n return dims", "response": "Expands the dimensions to match the dims in the variable\n "} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreplace the coordinate names by x, y, z and t in the given dimensions of the variable.", "response": "def standardize_dims(self, var, dims={}):\n \"\"\"Replace the coordinate names through x, y, z and t\n\n Parameters\n ----------\n var: xarray.Variable\n The variable to use the dimensions of\n dims: dict\n The dictionary to use for replacing the original dimensions\n\n Returns\n -------\n dict\n The dictionary with replaced dimensions\"\"\"\n dims = dict(dims)\n name_map = {self.get_xname(var, self.ds.coords): 'x',\n self.get_yname(var, self.ds.coords): 'y',\n self.get_zname(var, self.ds.coords): 'z',\n self.get_tname(var, self.ds.coords): 't'}\n for dim in set(dims).intersection(name_map):\n dims[name_map[dim]] = dims.pop(dim)\n return dims"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngetting the mesh variable for the given variable", "response": "def get_mesh(self, var, coords=None):\n \"\"\"Get the mesh variable for the given `var`\n\n Parameters\n ----------\n var: xarray.Variable\n The data source with the ``'mesh'`` attribute\n coords: dict\n The coordinates to use. 
If None, the coordinates of the dataset of\n this decoder are used\n\n Returns\n -------\n xarray.Coordinate\n The mesh coordinate\"\"\"\n mesh = var.attrs.get('mesh')\n if mesh is None:\n return None\n if coords is None:\n coords = self.ds.coords\n return coords.get(mesh, self.ds.coords.get(mesh))"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_triangles(self, var, coords=None, convert_radian=True, copy=False,\n src_crs=None, target_crs=None, nans=None, stacklevel=1):\n \"\"\"\n Get the triangles of the given variable.\n\n Parameters\n ----------\n %(CFDecoder.get_triangles.parameters)s\n\n Returns\n -------\n %(CFDecoder.get_triangles.returns)s\n\n Notes\n -----\n If the ``'location'`` attribute is set to ``'node'``, a Delaunay\n triangulation is performed using the\n :class:`matplotlib.tri.Triangulation` class.\n\n .. todo::\n Implement the visualization for UGrid data shown on the edge of the\n triangles\"\"\"\n warn(\"The 'get_triangles' method is deprecated and will be removed \"\n \"soon! 
Use the 'get_cell_node_coord' method!\",\n DeprecationWarning, stacklevel=stacklevel)\n from matplotlib.tri import Triangulation\n\n if coords is None:\n coords = self.ds.coords\n\n def get_coord(coord):\n return coords.get(coord, self.ds.coords.get(coord))\n\n mesh = self.get_mesh(var, coords)\n nodes = self.get_nodes(mesh, coords)\n if any(n is None for n in nodes):\n raise ValueError(\"Could not find the nodes variables!\")\n xvert, yvert = nodes\n xvert = xvert.values\n yvert = yvert.values\n loc = var.attrs.get('location', 'face')\n if loc == 'face':\n triangles = get_coord(\n mesh.attrs.get('face_node_connectivity', '')).values\n if triangles is None:\n raise ValueError(\n \"Could not find the connectivity information!\")\n elif loc == 'node':\n triangles = None\n else:\n raise ValueError(\n \"Could not interpret location attribute (%s) of mesh \"\n \"variable %s!\" % (loc, mesh.name))\n\n if convert_radian:\n for coord in nodes:\n if coord.attrs.get('units') == 'radian':\n coord = coord * 180. 
/ np.pi\n if src_crs is not None and src_crs != target_crs:\n if target_crs is None:\n raise ValueError(\n \"Found %s for the source crs but got None for the \"\n \"target_crs!\" % (src_crs, ))\n xvert = xvert[triangles].ravel()\n yvert = yvert[triangles].ravel()\n arr = target_crs.transform_points(src_crs, xvert, yvert)\n xvert = arr[:, 0]\n yvert = arr[:, 1]\n if loc == 'face':\n triangles = np.reshape(range(len(xvert)), (len(xvert) // 3,\n 3))\n\n return Triangulation(xvert, yvert, triangles)", "response": "Get the triangles of a given variable."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_cell_node_coord(self, var, coords=None, axis='x', nans=None):\n if coords is None:\n coords = self.ds.coords\n\n idims = self.get_coord_idims(coords)\n\n def get_coord(coord):\n coord = coords.get(coord, self.ds.coords.get(coord))\n return coord.isel(**{d: sl for d, sl in idims.items()\n if d in coord.dims})\n\n mesh = self.get_mesh(var, coords)\n if mesh is None:\n return\n nodes = self.get_nodes(mesh, coords)\n if not len(nodes):\n raise ValueError(\"Could not find the nodes variables for the %s \"\n \"coordinate!\" % axis)\n vert = nodes[0 if axis == 'x' else 1]\n if vert is None:\n raise ValueError(\"Could not find the nodes variables for the %s \"\n \"coordinate!\" % axis)\n loc = var.attrs.get('location', 'face')\n if loc == 'node':\n # we assume a triangular grid and use matplotlibs triangulation\n from matplotlib.tri import Triangulation\n xvert, yvert = nodes\n triangles = Triangulation(xvert, yvert)\n if axis == 'x':\n bounds = triangles.x[triangles.triangles]\n else:\n bounds = triangles.y[triangles.triangles]\n elif loc in ['edge', 'face']:\n connectivity = get_coord(\n mesh.attrs.get('%s_node_connectivity' % loc, ''))\n if connectivity is None:\n raise ValueError(\n \"Could not find the connectivity information!\")\n connectivity = connectivity.values\n bounds = vert.values[\n 
np.where(np.isnan(connectivity), connectivity[:, :1],\n connectivity).astype(int)]\n else:\n raise ValueError(\n \"Could not interpret location attribute (%s) of mesh \"\n \"variable %s!\" % (loc, mesh.name))\n dim0 = '__face' if loc == 'node' else var.dims[-1]\n return xr.DataArray(\n bounds,\n coords={key: val for key, val in coords.items()\n if (dim0, ) == val.dims},\n dims=(dim0, '__bnds', ),\n name=vert.name + '_bnds', attrs=vert.attrs.copy())", "response": "Returns the node coordinates for a variable in the specified axis."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef decode_coords(ds, gridfile=None):\n extra_coords = set(ds.coords)\n for var in six.itervalues(ds.variables):\n if 'mesh' in var.attrs:\n mesh = var.attrs['mesh']\n if mesh not in extra_coords:\n extra_coords.add(mesh)\n try:\n mesh_var = ds.variables[mesh]\n except KeyError:\n warn('Could not find mesh variable %s' % mesh)\n continue\n if 'node_coordinates' in mesh_var.attrs:\n extra_coords.update(\n mesh_var.attrs['node_coordinates'].split())\n if 'face_node_connectivity' in mesh_var.attrs:\n extra_coords.add(\n mesh_var.attrs['face_node_connectivity'])\n if gridfile is not None and not isinstance(gridfile, xr.Dataset):\n gridfile = open_dataset(gridfile)\n ds.update({k: v for k, v in six.iteritems(gridfile.variables)\n if k in extra_coords})\n if xr_version < (0, 11):\n ds.set_coords(extra_coords.intersection(ds.variables),\n inplace=True)\n else:\n ds._coord_names.update(extra_coords.intersection(ds.variables))\n return ds", "response": "Reimplemented to set the variables as coordinates"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets the variables containing the definition of the nodes", "response": "def get_nodes(self, coord, coords):\n \"\"\"Get the variables containing the definition of the nodes\n\n Parameters\n ----------\n coord: xarray.Coordinate\n The mesh 
variable\n coords: dict\n The coordinates to use to get node coordinates\"\"\"\n def get_coord(coord):\n return coords.get(coord, self.ds.coords.get(coord))\n return list(map(get_coord,\n coord.attrs.get('node_coordinates', '').split()[:2]))"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_x(self, var, coords=None):\n if coords is None:\n coords = self.ds.coords\n # first we try the super class\n ret = super(UGridDecoder, self).get_x(var, coords)\n # but if that doesn't work because we get the variable name in the\n # dimension of `var`, we use the means of the triangles\n if ret is None or ret.name in var.dims:\n bounds = self.get_cell_node_coord(var, axis='x', coords=coords)\n if bounds is not None:\n centers = bounds.mean(axis=-1)\n x = self.get_nodes(self.get_mesh(var, coords), coords)[0]\n try:\n cls = xr.IndexVariable\n except AttributeError: # xarray < 0.9\n cls = xr.Coordinate\n return cls(x.name, centers, attrs=x.attrs.copy())", "response": "Get the centers of the triangles in the x-dimension."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef start_update(self, draw=None, queues=None):\n if self.plotter is not None:\n return self.plotter.start_update(draw=draw, queues=queues)", "response": "Start an update of the current state of the plot."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef update(self, fmt={}, replot=False, draw=None, auto_update=False,\n force=False, todefault=False, **kwargs):\n \"\"\"\n Update the coordinates and the plot\n\n This method updates all arrays in this list with the given coordinate\n values and formatoptions.\n\n Parameters\n ----------\n %(InteractiveBase._register_update.parameters)s\n auto_update: bool\n Boolean determining whether or not the :meth:`start_update` method\n is called at the end. 
This parameter has no effect if the\n :attr:`no_auto_update` attribute is set to ``True``.\n %(InteractiveBase.start_update.parameters.draw)s\n ``**kwargs``\n Any other formatoption that shall be updated (additionally to those\n in `fmt`)\n\n Notes\n -----\n If the :attr:`no_auto_update` attribute is True and the given\n `auto_update` parameter is False, the updates of the plots are\n registered and conducted at the next call of the :meth:`start_update`\n method or the next call of this method (if the `auto_update` parameter\n is then True).\n \"\"\"\n fmt = dict(fmt)\n fmt.update(kwargs)\n\n self._register_update(replot=replot, fmt=fmt, force=force,\n todefault=todefault)\n\n if not self.no_auto_update or auto_update:\n self.start_update(draw=draw)", "response": "Update the coordinates and the plot\n in this list."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef names(self):\n ret = set()\n for arr in self:\n if isinstance(arr, InteractiveList):\n ret.update(arr.names)\n else:\n ret.add(arr.name)\n return ret", "response": "Returns a set of the variable names in this list"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef all_dims(self):\n return [\n _get_dims(arr) if not isinstance(arr, ArrayList) else\n arr.all_dims\n for arr in self]", "response": "The dimensions for each of the arrays in this list."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef is_unstructured(self):\n return [\n arr.psy.decoder.is_unstructured(arr)\n if not isinstance(arr, ArrayList) else\n arr.is_unstructured\n for arr in self]", "response": "A boolean for each array whether it is unstructured or not"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef coords_intersect(self):\n return set.intersection(*map(\n set, (getattr(arr, 'coords_intersect', arr.coords) for arr in self)\n ))", 
"response": "Coordinates of the arrays in this list that are used in all arrays\n "} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning a new instance with only the arrays that are visualized with a plotter.", "response": "def with_plotter(self):\n \"\"\"The arrays in this instance that are visualized with a plotter\"\"\"\n return self.__class__(\n (arr for arr in self if arr.psy.plotter is not None),\n auto_update=bool(self.auto_update))"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef arrays(self):\n return list(chain.from_iterable(\n ([arr] if not isinstance(arr, InteractiveList) else arr.arrays\n for arr in self)))", "response": "A list of all the data arrays in this list."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef rename(self, arr, new_name=True):\n name_in_me = arr.psy.arr_name in self.arr_names\n if not name_in_me:\n return arr, False\n elif name_in_me and not self._contains_array(arr):\n if new_name is False:\n raise ValueError(\n \"Array name %s is already in use! 
Set the `new_name` \"\n \"parameter to None for renaming!\" % arr.psy.arr_name)\n elif new_name is True:\n new_name = new_name if isstring(new_name) else 'arr{0}'\n arr.psy.arr_name = self.next_available_name(new_name)\n return arr, True\n return arr, None", "response": "Rename an array to find a name that isn't already in the list."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning a copy of the list containing the same attributes and arrays.", "response": "def copy(self, deep=False):\n \"\"\"Returns a copy of the list\n\n Parameters\n ----------\n deep: bool\n If False (default), only the list is copied and not the contained\n arrays, otherwise the contained arrays are deep copied\"\"\"\n if not deep:\n return self.__class__(self[:], attrs=self.attrs.copy(),\n auto_update=not bool(self.no_auto_update))\n else:\n return self.__class__(\n [arr.psy.copy(deep) for arr in self], attrs=self.attrs.copy(),\n auto_update=not bool(self.auto_update))"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nconstructing an ArrayList instance from an existing base dataset.", "response": "def from_dataset(cls, base, method='isel', default_slice=None,\n decoder=None, auto_update=None, prefer_list=False,\n squeeze=True, attrs=None, load=False, **kwargs):\n \"\"\"\n Construct an ArrayList instance from an existing base dataset\n\n Parameters\n ----------\n base: xarray.Dataset\n Dataset instance that is used as reference\n %(InteractiveArray.update.parameters.method)s\n %(InteractiveBase.parameters.auto_update)s\n prefer_list: bool\n If True and multiple variable names per array are found, the\n :class:`InteractiveList` class is used. Otherwise the arrays are\n put together into one :class:`InteractiveArray`.\n default_slice: indexer\n Index (e.g. 0 if `method` is 'isel') that shall be used for\n dimensions not covered by `dims` and `furtherdims`. 
If None, the\n whole slice will be used.\n decoder: CFDecoder\n The decoder that shall be used to decode the `base` dataset\n squeeze: bool, optional\n Default True. If True, and the created arrays have an axis with\n length 1, it is removed from the dimension list (e.g. an array\n with shape (3, 4, 1, 5) will be squeezed to shape (3, 4, 5))\n attrs: dict, optional\n Meta attributes that shall be assigned to the selected data arrays\n (additional to those stored in the `base` dataset)\n load: bool or dict\n If True, load the data from the dataset using the\n :meth:`xarray.DataArray.load` method. If :class:`dict`, those will\n be given to the above mentioned ``load`` method\n\n Other Parameters\n ----------------\n %(setup_coords.parameters)s\n\n Returns\n -------\n ArrayList\n The list with the specified :class:`InteractiveArray` instances\n that hold a reference to the given `base`\"\"\"\n try:\n load = dict(load)\n except (TypeError, ValueError):\n def maybe_load(arr):\n return arr.load() if load else arr\n else:\n def maybe_load(arr):\n return arr.load(**load)\n\n def iter_dims(dims):\n \"\"\"Split the given dictionary into multiples and iterate over it\"\"\"\n if not dims:\n while 1:\n yield {}\n else:\n dims = OrderedDict(dims)\n keys = dims.keys()\n for vals in zip(*map(cycle, map(safe_list, dims.values()))):\n yield dict(zip(keys, vals))\n\n def recursive_selection(key, dims, names):\n names = safe_list(names)\n if len(names) > 1 and prefer_list:\n keys = ('arr%i' % i for i in range(len(names)))\n return InteractiveList(\n starmap(sel_method, zip(keys, iter_dims(dims), names)),\n auto_update=auto_update, arr_name=key)\n elif len(names) > 1:\n return sel_method(key, dims, tuple(names))\n else:\n return sel_method(key, dims, names[0])\n\n def ds2arr(arr):\n base_var = next(var for key, var in arr.variables.items()\n if key not in arr.coords)\n attrs = base_var.attrs\n arr = arr.to_array()\n if 'coordinates' in base_var.encoding:\n 
arr.encoding['coordinates'] = base_var.encoding[\n 'coordinates']\n arr.attrs.update(attrs)\n return arr\n\n if decoder is not None:\n def get_decoder(arr):\n return decoder\n else:\n def get_decoder(arr):\n return CFDecoder.get_decoder(base, arr)\n\n def add_missing_dimensions(arr):\n # add the missing dimensions to the dataset. This is not anymore\n # done by default from xarray >= 0.9 but we need it to ensure the\n # interactive treatment of DataArrays\n missing = set(arr.dims).difference(base.coords) - {'variable'}\n for dim in missing:\n base[dim] = arr.coords[dim] = np.arange(base.dims[dim])\n\n if squeeze:\n def squeeze_array(arr):\n return arr.isel(**{dim: 0 for i, dim in enumerate(arr.dims)\n if arr.shape[i] == 1})\n else:\n def squeeze_array(arr):\n return arr\n if method == 'isel':\n def sel_method(key, dims, name=None):\n if name is None:\n return recursive_selection(key, dims, dims.pop('name'))\n elif (isinstance(name, six.string_types) or\n not utils.is_iterable(name)):\n arr = base[name]\n else:\n arr = base[list(name)]\n add_missing_dimensions(arr)\n if not isinstance(arr, xr.DataArray):\n arr = ds2arr(arr)\n def_slice = slice(None) if default_slice is None else \\\n default_slice\n decoder = get_decoder(arr)\n dims = decoder.correct_dims(arr, dims)\n dims.update({\n dim: def_slice for dim in set(arr.dims).difference(\n dims) if dim != 'variable'})\n ret = squeeze_array(arr.isel(**dims))\n # delete the variable dimension for the idims\n dims.pop('variable', None)\n ret.psy.init_accessor(arr_name=key, base=base, idims=dims)\n return maybe_load(ret)\n else:\n def sel_method(key, dims, name=None):\n if name is None:\n return recursive_selection(key, dims, dims.pop('name'))\n elif (isinstance(name, six.string_types) or\n not utils.is_iterable(name)):\n arr = base[name]\n else:\n arr = base[list(name)]\n add_missing_dimensions(arr)\n if not isinstance(arr, xr.DataArray):\n arr = ds2arr(arr)\n # idims will be calculated by the array (maybe not the most\n 
# efficient way...)\n decoder = get_decoder(arr)\n dims = decoder.correct_dims(arr, dims)\n if default_slice is not None:\n dims.update({\n key: default_slice for key in set(arr.dims).difference(\n dims) if key != 'variable'})\n # the sel method does not work with slice objects\n if any(isinstance(idx, slice) for idx in dims.values()):\n # ignore method argument\n ret = squeeze_array(arr.sel(**dims))\n else:\n ret = squeeze_array(arr.sel(method=method, **dims))\n ret.psy.init_accessor(arr_name=key, base=base)\n return maybe_load(ret)\n if 'name' not in kwargs:\n default_names = list(\n key for key in base.variables if key not in base.coords)\n try:\n default_names.sort()\n except TypeError:\n pass\n kwargs['name'] = default_names\n names = setup_coords(**kwargs)\n # check coordinates\n possible_keys = ['t', 'x', 'y', 'z', 'name'] + list(base.dims)\n for key in set(chain(*six.itervalues(names))):\n utils.check_key(key, possible_keys, name='dimension')\n instance = cls(starmap(sel_method, six.iteritems(names)),\n attrs=base.attrs, auto_update=auto_update)\n # convert to interactive lists if an instance is not\n if prefer_list and any(\n not isinstance(arr, InteractiveList) for arr in instance):\n # if any instance is an interactive list, than convert the others\n if any(isinstance(arr, InteractiveList) for arr in instance):\n for i, arr in enumerate(instance):\n if not isinstance(arr, InteractiveList):\n instance[i] = InteractiveList([arr])\n else: # put everything into one single interactive list\n instance = cls([InteractiveList(instance, attrs=base.attrs,\n auto_update=auto_update)])\n instance[0].psy.arr_name = instance[0][0].psy.arr_name\n if attrs is not None:\n for arr in instance:\n arr.attrs.update(attrs)\n return instance"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _get_ds_descriptions_unsorted(\n cls, data, ignore_keys=['attrs', 'plotter'], nums=None):\n \"\"\"Recursive method to get all the 
file names or datasets out of a\n dictionary `data` created with the :meth:`array_info` method\"\"\"\n ds_description = {'ds', 'fname', 'num', 'arr', 'store'}\n if 'ds' in data:\n # make sure that the data set has a number assigned to it\n data['ds'].psy.num\n keys_in_data = ds_description.intersection(data)\n if keys_in_data:\n return {key: data[key] for key in keys_in_data}\n for key in ignore_keys:\n data.pop(key, None)\n func = partial(cls._get_ds_descriptions_unsorted,\n ignore_keys=ignore_keys, nums=nums)\n return chain(*map(lambda d: [d] if isinstance(d, dict) else d,\n map(func, six.itervalues(data))))", "response": "Recursive method to get all the file names or datasets out of a\n dictionary data"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef from_dict(cls, d, alternative_paths={}, datasets=None,\n pwd=None, ignore_keys=['attrs', 'plotter', 'ds'],\n only=None, chname={}, **kwargs):\n \"\"\"\n Create a list from the dictionary returned by :meth:`array_info`\n\n This classmethod creates an :class:`~psyplot.data.ArrayList` instance\n from a dictionary containing filename, dimension infos and array names\n\n Parameters\n ----------\n d: dict\n The dictionary holding the data\n alternative_paths: dict or list or str\n A mapping from original filenames as used in `d` to filenames that\n shall be used instead. If `alternative_paths` is not None,\n datasets must be None. Paths must be accessible from the current\n working directory.\n If `alternative_paths` is a list (or any other iterable), the\n file names will be replaced as they appear in `d`\n (note that this is very unsafe if `d` is not an OrderedDict)\n datasets: dict or list or None\n A mapping from original filenames in `d` to the instances of\n :class:`xarray.Dataset` to use. 
If it is an iterable, the same\n holds as for the `alternative_paths` parameter\n pwd: str\n Path to the working directory from where the data can be imported.\n If None, use the current working directory.\n ignore_keys: list of str\n Keys specified in this list are ignored and not seen as array\n information (note that ``attrs`` are used anyway)\n only: string, list or callable\n Can be one of the following three things:\n\n - a string that represents a pattern to match the array names\n that shall be included\n - a list of array names to include\n - a callable with two arguments, a string and a dict such as\n\n .. code-block:: python\n\n def filter_func(arr_name: str, info: dict) -> bool:\n '''\n Filter the array names\n\n This function should return True if the array shall be\n included, else False\n\n Parameters\n ----------\n arr_name: str\n The array name (i.e. the ``arr_name`` attribute)\n info: dict\n The dictionary with the array information. Common\n keys are ``'name'`` that points to the variable name\n and ``'dims'`` that points to the dimensions and\n ``'fname'`` that points to the file name\n '''\n return True or False\n\n The function should return ``True`` if the array shall be\n included, else ``False``. 
This function will also be given to\n subsequents instances of :class:`InteractiveList` objects that\n are contained in the returned value\n chname: dict\n A mapping from variable names in the project to variable names\n that should be used instead\n\n Other Parameters\n ----------------\n ``**kwargs``\n Any other parameter from the `psyplot.data.open_dataset` function\n %(open_dataset.parameters)s\n\n Returns\n -------\n psyplot.data.ArrayList\n The list with the interactive objects\n\n See Also\n --------\n from_dataset, array_info\"\"\"\n pwd = pwd or getcwd()\n if only is None:\n def only_filter(arr_name, info):\n return True\n elif callable(only):\n only_filter = only\n elif isstring(only):\n def only_filter(arr_name, info):\n return patt.search(arr_name) is not None\n patt = re.compile(only)\n only = None\n else:\n def only_filter(arr_name, info):\n return arr_name in save_only\n save_only = only\n only = None\n\n def get_fname_use(fname):\n squeeze = isstring(fname)\n fname = safe_list(fname)\n ret = tuple(f if utils.is_remote_url(f) or osp.isabs(f) else\n osp.join(pwd, f)\n for f in fname)\n return ret[0] if squeeze else ret\n\n def get_name(name):\n if not isstring(name):\n return list(map(get_name, name))\n else:\n return chname.get(name, name)\n\n if not isinstance(alternative_paths, dict):\n it = iter(alternative_paths)\n alternative_paths = defaultdict(partial(next, it, None))\n # first open all datasets if not already done\n if datasets is None:\n replace_concat_dim = 'concat_dim' not in kwargs\n\n names_and_stores = cls._get_dsnames(d, concat_dim=True)\n datasets = {}\n for fname, (store_mod, store_cls), concat_dim in names_and_stores:\n fname_use = fname\n got = True\n if replace_concat_dim and concat_dim is not None:\n kwargs['concat_dim'] = concat_dim\n elif replace_concat_dim and concat_dim is None:\n kwargs.pop('concat_dim', None)\n try:\n fname_use = alternative_paths[fname]\n except KeyError:\n got = False\n if not got or not fname_use:\n if 
fname is not None:\n fname_use = get_fname_use(fname)\n if fname_use is not None:\n datasets[fname] = _open_ds_from_store(\n fname_use, store_mod, store_cls, **kwargs)\n if alternative_paths is not None:\n for fname in set(alternative_paths).difference(datasets):\n datasets[fname] = _open_ds_from_store(fname, **kwargs)\n elif not isinstance(datasets, dict):\n it_datasets = iter(datasets)\n datasets = defaultdict(partial(next, it_datasets, None))\n arrays = [0] * len(d)\n i = 0\n for arr_name, info in six.iteritems(d):\n if arr_name in ignore_keys or not only_filter(arr_name, info):\n arrays.pop(i)\n continue\n if not {'fname', 'ds', 'arr'}.intersection(info):\n # the described object is an InteractiveList\n arr = InteractiveList.from_dict(\n info, alternative_paths=alternative_paths,\n datasets=datasets, chname=chname)\n if not arr:\n warn(\"Skipping empty list %s!\" % arr_name)\n arrays.pop(i)\n continue\n else:\n if 'arr' in info:\n arr = info.pop('arr')\n elif 'ds' in info:\n arr = cls.from_dataset(\n info['ds'], dims=info['dims'],\n name=get_name(info['name']))[0]\n else:\n fname = info['fname']\n if fname is None:\n warn(\"Could not open array %s because no filename was \"\n \"specified!\" % arr_name)\n arrays.pop(i)\n continue\n try: # in case, datasets is a defaultdict\n datasets[fname]\n except KeyError:\n pass\n if fname not in datasets:\n warn(\"Could not open array %s because %s was not in \"\n \"the list of datasets!\" % (arr_name, fname))\n arrays.pop(i)\n continue\n arr = cls.from_dataset(\n datasets[fname], dims=info['dims'],\n name=get_name(info['name']))[0]\n for key, val in six.iteritems(info.get('attrs', {})):\n arr.attrs.setdefault(key, val)\n arr.psy.arr_name = arr_name\n arrays[i] = arr\n i += 1\n return cls(arrays, attrs=d.get('attrs', {}))", "response": "Create a list from a dictionary containing filename dimension infos and array names."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nget dimension 
information on your arrays This method returns a dictionary containing information on the array in this instance Parameters ---------- dump: bool If True and the dataset has not been dumped so far, it is dumped to a temporary file or the one generated by `paths` is used. If it is False, or both `dump` and `paths` are None, no data will be stored. If it is None and `paths` is not None, `dump` is set to True. %(get_filename_ds.parameters.no_ds|dump)s attrs: bool, optional If True (default), the :attr:`ArrayList.attrs` and :attr:`xarray.DataArray.attrs` attributes are included in the returning dictionary standardize_dims: bool, optional If True (default), the real dimension names in the dataset are replaced by x, y, z and t to be more general. pwd: str Path to the working directory from where the data can be imported. If None, use the current working directory. use_rel_paths: bool, optional If True (default), paths relative to the current working directory are used. Otherwise absolute paths to `pwd` are used ds_description: 'all' or set of {'fname', 'ds', 'num', 'arr', 'store'} Keys to describe the datasets of the arrays. If all, all keys are used. The key descriptions are fname the file name is inserted in the ``'fname'`` key store the data store class and module is inserted in the ``'store'`` key ds the dataset is inserted in the ``'ds'`` key num The unique number assigned to the dataset is inserted in the ``'num'`` key arr The array itself is inserted in the ``'arr'`` key full_ds: bool If True and ``'ds'`` is in `ds_description`, the entire dataset is included. 
Otherwise, only the DataArray converted to a dataset is included copy: bool If True, the arrays and datasets are deep copied Other Parameters ---------------- %(get_filename_ds.other_parameters)s Returns ------- OrderedDict An ordered mapping from array names to dimensions and filename corresponding to the array See Also -------- from_dict", "response": "def array_info(self, dump=None, paths=None, attrs=True,\n standardize_dims=True, pwd=None, use_rel_paths=True,\n alternative_paths={}, ds_description={'fname', 'store'},\n full_ds=True, copy=False, **kwargs):\n \"\"\"\n Get dimension information on your arrays\n\n This method returns a dictionary containing information on the\n array in this instance\n\n Parameters\n ----------\n dump: bool\n If True and the dataset has not been dumped so far, it is dumped to\n a temporary file or the one generated by `paths` is used. If it is\n False, or both `dump` and `paths` are None, no data will be stored.\n If it is None and `paths` is not None, `dump` is set to True.\n %(get_filename_ds.parameters.no_ds|dump)s\n attrs: bool, optional\n If True (default), the :attr:`ArrayList.attrs` and\n :attr:`xarray.DataArray.attrs` attributes are included in the\n returning dictionary\n standardize_dims: bool, optional\n If True (default), the real dimension names in the dataset are\n replaced by x, y, z and t to be more general.\n pwd: str\n Path to the working directory from where the data can be imported.\n If None, use the current working directory.\n use_rel_paths: bool, optional\n If True (default), paths relative to the current working directory\n are used. Otherwise absolute paths to `pwd` are used\n ds_description: 'all' or set of {'fname', 'ds', 'num', 'arr', 'store'}\n Keys to describe the datasets of the arrays. If all, all keys\n are used. 
The key descriptions are\n\n fname\n the file name is inserted in the ``'fname'`` key\n store\n the data store class and module is inserted in the ``'store'``\n key\n ds\n the dataset is inserted in the ``'ds'`` key\n num\n The unique number assigned to the dataset is inserted in the\n ``'num'`` key\n arr\n The array itself is inserted in the ``'arr'`` key\n full_ds: bool\n If True and ``'ds'`` is in `ds_description`, the entire dataset is\n included. Otherwise, only the DataArray converted to a dataset is\n included\n copy: bool\n If True, the arrays and datasets are deep copied\n\n\n Other Parameters\n ----------------\n %(get_filename_ds.other_parameters)s\n\n Returns\n -------\n OrderedDict\n An ordered mapping from array names to dimensions and filename\n corresponding to the array\n\n See Also\n --------\n from_dict\"\"\"\n saved_ds = kwargs.pop('_saved_ds', {})\n\n def get_alternative(f):\n return next(filter(lambda t: osp.samefile(f, t[0]),\n six.iteritems(alternative_paths)), [False, f])\n\n if copy:\n def copy_obj(obj):\n # try to get the number of the dataset and create only one copy\n # copy for each dataset\n try:\n num = obj.psy.num\n except AttributeError:\n pass\n else:\n try:\n return saved_ds[num]\n except KeyError:\n saved_ds[num] = obj.psy.copy(True)\n return saved_ds[num]\n return obj.psy.copy(True)\n else:\n def copy_obj(obj):\n return obj\n ret = OrderedDict()\n if ds_description == 'all':\n ds_description = {'fname', 'ds', 'num', 'arr', 'store'}\n if paths is not None:\n if dump is None:\n dump = True\n paths = iter(paths)\n elif dump is None:\n dump = False\n if pwd is None:\n pwd = getcwd()\n for arr in self:\n if isinstance(arr, InteractiveList):\n ret[arr.arr_name] = arr.array_info(\n dump, paths, pwd=pwd, attrs=attrs,\n standardize_dims=standardize_dims,\n use_rel_paths=use_rel_paths, ds_description=ds_description,\n alternative_paths=alternative_paths, copy=copy,\n _saved_ds=saved_ds, **kwargs)\n else:\n if standardize_dims:\n idims = 
arr.psy.decoder.standardize_dims(\n next(arr.psy.iter_base_variables), arr.psy.idims)\n else:\n idims = arr.psy.idims\n ret[arr.psy.arr_name] = d = {'dims': idims}\n if 'variable' in arr.coords:\n d['name'] = [list(arr.coords['variable'].values)]\n else:\n d['name'] = arr.name\n if 'fname' in ds_description or 'store' in ds_description:\n fname, store_mod, store_cls = get_filename_ds(\n arr.psy.base, dump=dump, paths=paths, **kwargs)\n if 'store' in ds_description:\n d['store'] = (store_mod, store_cls)\n if 'fname' in ds_description:\n d['fname'] = []\n for i, f in enumerate(safe_list(fname)):\n if (f is None or utils.is_remote_url(f)):\n d['fname'].append(f)\n else:\n found, f = get_alternative(f)\n if use_rel_paths:\n f = osp.relpath(f, pwd)\n else:\n f = osp.abspath(f)\n d['fname'].append(f)\n if fname is None or isinstance(fname,\n six.string_types):\n d['fname'] = d['fname'][0]\n else:\n d['fname'] = tuple(safe_list(fname))\n if arr.psy.base.psy._concat_dim is not None:\n d['concat_dim'] = arr.psy.base.psy._concat_dim\n if 'ds' in ds_description:\n if full_ds:\n d['ds'] = copy_obj(arr.psy.base)\n else:\n d['ds'] = copy_obj(arr.to_dataset())\n if 'num' in ds_description:\n d['num'] = arr.psy.base.psy.num\n if 'arr' in ds_description:\n d['arr'] = copy_obj(arr)\n if attrs:\n d['attrs'] = arr.attrs\n ret['attrs'] = self.attrs\n return ret"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngetting the name of the time coordinate of the objects in this list", "response": "def _get_tnames(self):\n \"\"\"Get the name of the time coordinate of the objects in this list\"\"\"\n tnames = set()\n for arr in self:\n if isinstance(arr, InteractiveList):\n tnames.update(arr.get_tnames())\n else:\n tnames.add(arr.psy.decoder.get_tname(\n next(arr.psy.iter_base_variables), arr.coords))\n return tnames - {None}"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nregisters new dimensions and formatoptions for 
updating.", "response": "def _register_update(self, method='isel', replot=False, dims={}, fmt={},\n force=False, todefault=False):\n \"\"\"\n Register new dimensions and formatoptions for updating. The keywords\n are the same as for each single array\n\n Parameters\n ----------\n %(InteractiveArray._register_update.parameters)s\"\"\"\n\n for arr in self:\n arr.psy._register_update(method=method, replot=replot, dims=dims,\n fmt=fmt, force=force, todefault=todefault)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef start_update(self, draw=None):\n def worker(arr):\n results[arr.psy.arr_name] = arr.psy.start_update(\n draw=False, queues=queues)\n if len(self) == 0:\n return\n\n results = {}\n threads = [Thread(target=worker, args=(arr,),\n name='update_%s' % arr.psy.arr_name)\n for arr in self]\n jobs = [arr.psy._njobs for arr in self]\n queues = [Queue() for _ in range(max(map(len, jobs)))]\n # populate the queues\n for i, arr in enumerate(self):\n for j, n in enumerate(jobs[i]):\n for k in range(n):\n queues[j].put(arr.psy.arr_name)\n for thread in threads:\n thread.setDaemon(True)\n for thread in threads:\n thread.start()\n for thread in threads:\n thread.join()\n if draw is None:\n draw = rcParams['auto_draw']\n if draw:\n self(arr_name=[name for name, adraw in six.iteritems(results)\n if adraw]).draw()\n if rcParams['auto_show']:\n self.show()", "response": "Start the update of the items in the list."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef update(self, method='isel', dims={}, fmt={}, replot=False,\n auto_update=False, draw=None, force=False, todefault=False,\n enable_post=None, **kwargs):\n \"\"\"\n Update the coordinates and the plot\n\n This method updates all arrays in this list with the given coordinate\n values and formatoptions.\n\n Parameters\n ----------\n %(InteractiveArray._register_update.parameters)s\n 
%(InteractiveArray.update.parameters.auto_update)s\n %(ArrayList.start_update.parameters)s\n enable_post: bool\n If not None, enable (``True``) or disable (``False``) the\n :attr:`~psyplot.plotter.Plotter.post` formatoption in the plotters\n ``**kwargs``\n Any other formatoption or dimension that shall be updated\n (additionally to those in `fmt` and `dims`)\n\n Notes\n -----\n %(InteractiveArray.update.notes)s\n\n See Also\n --------\n no_auto_update, start_update\"\"\"\n dims = dict(dims)\n fmt = dict(fmt)\n vars_and_coords = set(chain(\n self.dims, self.coords, ['name', 'x', 'y', 'z', 't']))\n furtherdims, furtherfmt = utils.sort_kwargs(kwargs, vars_and_coords)\n dims.update(furtherdims)\n fmt.update(furtherfmt)\n\n self._register_update(method=method, replot=replot, dims=dims, fmt=fmt,\n force=force, todefault=todefault)\n if enable_post is not None:\n for arr in self.with_plotter:\n arr.psy.plotter.enable_post = enable_post\n if not self.no_auto_update or auto_update:\n self.start_update(draw)", "response": "Update the coordinates and formatoptions of the current object."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef draw(self):\n for fig in set(chain(*map(\n lambda arr: arr.psy.plotter.figs2draw, self.with_plotter))):\n self.logger.debug(\"Drawing figure %s\", fig.number)\n fig.canvas.draw()\n for arr in self:\n if arr.psy.plotter is not None:\n arr.psy.plotter._figs2draw.clear()\n self.logger.debug(\"Done drawing.\")", "response": "Draws all the figures in this instance"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncheck whether exactly this array is in the list", "response": "def _contains_array(self, val):\n \"\"\"Checks whether exactly this array is in the list\"\"\"\n arr = self(arr_name=val.psy.arr_name)[0]\n is_not_list = any(\n map(lambda a: not isinstance(a, InteractiveList),\n [arr, val]))\n is_list = any(map(lambda a: isinstance(a, InteractiveList),\n [arr, val]))\n # 
if one is an InteractiveList and the other not, they differ\n if is_list and is_not_list:\n return False\n # if both are interactive lists, check the lists\n if is_list:\n return all(a in arr for a in val) and all(a in val for a in arr)\n # else we check the shapes and values\n return arr is val"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncreates a new array out of the given format string and return the next available name.", "response": "def next_available_name(self, fmt_str='arr{0}', counter=None):\n \"\"\"Create a new array out of the given format string\n\n Parameters\n ----------\n fmt_str: str\n The base string to use. ``'{0}'`` will be replaced by a counter\n counter: iterable\n An iterable where the numbers should be drawn from. If None,\n ``range(1000)`` is used\n\n Returns\n -------\n str\n A possible name that is not in the current project\"\"\"\n names = self.arr_names\n counter = counter or iter(range(1000))\n try:\n new_name = next(\n filter(lambda n: n not in names,\n map(fmt_str.format, counter)))\n except StopIteration:\n raise ValueError(\n \"{0} already in the list\".format(fmt_str))\n return new_name"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nappend a new array to the list.", "response": "def append(self, value, new_name=False):\n \"\"\"\n Append a new array to the list\n\n Parameters\n ----------\n value: InteractiveBase\n The data object to append to this list\n %(ArrayList.rename.parameters.new_name)s\n\n Raises\n ------\n %(ArrayList.rename.raises)s\n\n See Also\n --------\n list.append, extend, rename\"\"\"\n arr, renamed = self.rename(value, new_name)\n if renamed is not None:\n super(ArrayList, self).append(value)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef extend(self, iterable, new_name=False):\n # extend those arrays that aren't already in the list\n super(ArrayList, 
self).extend(t[0] for t in filter(\n lambda t: t[1] is not None, (\n self.rename(arr, new_name) for arr in iterable)))", "response": "Extend the list with the elements from an iterable."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef remove(self, arr):\n name = arr if isinstance(arr, six.string_types) else arr.psy.arr_name\n if arr not in self:\n raise ValueError(\n \"Array {0} not in the list\".format(name))\n for i, arr in enumerate(self):\n if arr.psy.arr_name == name:\n del self[i]\n return\n raise ValueError(\n \"No array found with name {0}\".format(name))", "response": "Removes an array from the list."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns the number of jobs in the current thread.", "response": "def _njobs(self):\n \"\"\"%(InteractiveBase._njobs)s\"\"\"\n ret = super(self.__class__, self)._njobs or [0]\n ret[0] += 1\n return ret"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _register_update(self, method='isel', replot=False, dims={}, fmt={},\n force=False, todefault=False):\n \"\"\"\n Register new dimensions and formatoptions for updating\n\n Parameters\n ----------\n %(InteractiveArray._register_update.parameters)s\"\"\"\n ArrayList._register_update(self, method=method, dims=dims)\n InteractiveBase._register_update(self, fmt=fmt, todefault=todefault,\n replot=bool(dims) or replot,\n force=force)", "response": "Register new dimensions and formatoptions for updating\n "} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nstarts updating the formerly registered updates with the current state of the entry.", "response": "def start_update(self, draw=None, queues=None):\n \"\"\"\n Conduct the formerly registered updates\n\n This method conducts the updates that have been registered via the\n :meth:`update` method. 
You can call this method if the\n :attr:`auto_update` attribute of this instance is True and the\n `auto_update` parameter in the :meth:`update` method has been set to\n False\n\n Parameters\n ----------\n %(InteractiveBase.start_update.parameters)s\n\n Returns\n -------\n %(InteractiveBase.start_update.returns)s\n\n See Also\n --------\n :attr:`no_auto_update`, update\n \"\"\"\n if queues is not None:\n queues[0].get()\n try:\n for arr in self:\n arr.psy.start_update(draw=False)\n self.onupdate.emit()\n except Exception:\n self._finish_all(queues)\n raise\n if queues is not None:\n queues[0].task_done()\n return InteractiveBase.start_update(self, draw=draw, queues=queues)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef from_dataset(cls, *args, **kwargs):\n plotter = kwargs.pop('plotter', None)\n make_plot = kwargs.pop('make_plot', True)\n instance = super(InteractiveList, cls).from_dataset(*args, **kwargs)\n if plotter is not None:\n plotter.initialize_plot(instance, make_plot=make_plot)\n return instance", "response": "Create an InteractiveList instance from the given base dataset."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncreates a virtual machine in z/VM. Input: Request Handle with the following properties: function - 'CMDVM' subfunction - 'CMD' userid - userid of the virtual machine Output: Request Handle updated with the results. 
Return code - 0: ok, non-zero: error", "response": "def createVM(rh):\n \"\"\"\n Create a virtual machine in z/VM.\n\n Input:\n Request Handle with the following properties:\n function - 'CMDVM'\n subfunction - 'CMD'\n userid - userid of the virtual machine\n\n Output:\n Request Handle updated with the results.\n Return code - 0: ok, non-zero: error\n \"\"\"\n\n rh.printSysLog(\"Enter makeVM.createVM\")\n\n dirLines = []\n dirLines.append(\"USER \" + rh.userid + \" \" + rh.parms['pw'] +\n \" \" + rh.parms['priMemSize'] + \" \" +\n rh.parms['maxMemSize'] + \" \" + rh.parms['privClasses'])\n if 'profName' in rh.parms:\n dirLines.append(\"INCLUDE \" + rh.parms['profName'])\n\n if 'maxCPU' in rh.parms:\n dirLines.append(\"MACHINE ESA %i\" % rh.parms['maxCPU'])\n\n dirLines.append(\"CPU 00 BASE\")\n if 'cpuCnt' in rh.parms:\n for i in range(1, rh.parms['cpuCnt']):\n dirLines.append(\"CPU %0.2X\" % i)\n\n if 'ipl' in rh.parms:\n ipl_string = \"IPL %s \" % rh.parms['ipl']\n\n if 'iplParam' in rh.parms:\n ipl_string += (\"PARM %s \" % rh.parms['iplParam'])\n\n if 'iplLoadparam' in rh.parms:\n ipl_string += (\"LOADPARM %s \" % rh.parms['iplLoadparam'])\n\n dirLines.append(ipl_string)\n\n if 'byUsers' in rh.parms:\n for user in rh.parms['byUsers']:\n dirLines.append(\"LOGONBY \" + user)\n\n priMem = rh.parms['priMemSize'].upper()\n maxMem = rh.parms['maxMemSize'].upper()\n if 'setReservedMem' in rh.parms and (priMem != maxMem):\n reservedSize = getReservedMemSize(rh, priMem, maxMem)\n if rh.results['overallRC'] != 0:\n rh.printSysLog(\"Exit makeVM.createVM, rc: \" +\n str(rh.results['overallRC']))\n return rh.results['overallRC']\n if reservedSize != '0M':\n dirLines.append(\"COMMAND DEF STOR RESERVED %s\" % reservedSize)\n\n # Construct the temporary file for the USER entry.\n fd, tempFile = mkstemp()\n to_write = '\\n'.join(dirLines) + '\\n'\n os.write(fd, to_write.encode())\n os.close(fd)\n\n parms = [\"-T\", rh.userid, \"-f\", tempFile]\n results = invokeSMCLI(rh, 
\"Image_Create_DM\", parms)\n if results['overallRC'] != 0:\n # SMAPI API failed.\n rh.printLn(\"ES\", results['response'])\n rh.updateResults(results) # Use results from invokeSMCLI\n\n os.remove(tempFile)\n\n rh.printSysLog(\"Exit makeVM.createVM, rc: \" +\n str(rh.results['overallRC']))\n return rh.results['overallRC']"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef showOperandLines(rh):\n\n if rh.function == 'HELP':\n rh.printLn(\"N\", \" For the MakeVM function:\")\n else:\n rh.printLn(\"N\", \"Sub-Functions(s):\")\n rh.printLn(\"N\", \" directory - \" +\n \"Create a virtual machine in the z/VM user directory.\")\n rh.printLn(\"N\", \" help - Displays this help information.\")\n rh.printLn(\"N\", \" version - \" +\n \"show the version of the makeVM function\")\n if rh.subfunction != '':\n rh.printLn(\"N\", \"Operand(s):\")\n rh.printLn(\"N\", \" --cpus - \" +\n \"Specifies the desired number of virtual CPUs the\")\n rh.printLn(\"N\", \" \" +\n \"guest will have.\")\n rh.printLn(\"N\", \" --maxcpu - \" +\n \"Specifies the maximum number of virtual CPUs the\")\n rh.printLn(\"N\", \" \" +\n \"guest is allowed to define.\")\n rh.printLn(\"N\", \" --ipl - \" +\n \"Specifies an IPL disk or NSS for the virtual\")\n rh.printLn(\"N\", \" \" +\n \"machine's directory entry.\")\n rh.printLn(\"N\", \" --logonby - \" +\n \"Specifies a list of up to 8 z/VM userids who can log\")\n rh.printLn(\"N\", \" \" +\n \"on to the virtual machine using their id and password.\")\n rh.printLn(\"N\", \" --maxMemSize - \" +\n \"Specifies the maximum memory the virtual machine\")\n rh.printLn(\"N\", \" \" +\n \"is allowed to define.\")\n rh.printLn(\"N\", \" --setReservedMem - \" +\n \"Set the additional memory space (maxMemSize - priMemSize)\")\n rh.printLn(\"N\", \" \" +\n \"as reserved memory of the virtual machine.\")\n rh.printLn(\"N\", \" - \" +\n \"Specifies the password for the new virtual\")\n rh.printLn(\"N\", \" 
\" +\n \"machine.\")\n rh.printLn(\"N\", \" - \" +\n \"Specifies the initial memory size for the new virtual\")\n rh.printLn(\"N\", \" \" +\n \"machine.\")\n rh.printLn(\"N\", \" - \" +\n \"Specifies the privilege classes for the new virtual\")\n rh.printLn(\"N\", \" \" +\n \"machine.\")\n rh.printLn(\"N\", \" --profile - \" +\n \"Specifies the z/VM PROFILE to include in the\")\n rh.printLn(\"N\", \" \" +\n \"virtual machine's directory entry.\")\n rh.printLn(\"N\", \" - \" +\n \"Userid of the virtual machine to create.\")\n return", "response": "Prints the output related to operands."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef capture_guest(userid):\n # check power state, if down, start it\n ret = sdk_client.send_request('guest_get_power_state', userid)\n power_status = ret['output']\n if power_status == 'off':\n sdk_client.send_request('guest_start', userid)\n # TODO: how much time?\n time.sleep(1)\n\n # do capture\n image_name = 'image_captured_%03d' % (time.time() % 1000)\n sdk_client.send_request('guest_capture', userid, image_name,\n capture_type='rootonly', compress_level=6)\n return image_name", "response": "Capture a virtual machine image."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nimport image. Input parameters: :image_path: Image file path :os_version: Operating system version. e.g. rhel7.2", "response": "def import_image(image_path, os_version):\n \"\"\"Import image.\n\n Input parameters:\n :image_path: Image file path\n :os_version: Operating system version. e.g. 
rhel7.2\n \"\"\"\n image_name = os.path.basename(image_path)\n print(\"Checking if image %s exists or not, import it if not exists\" %\n image_name)\n image_info = sdk_client.send_request('image_query', imagename=image_name)\n if 'overallRC' in image_info and image_info['overallRC']:\n print(\"Importing image %s ...\" % image_name)\n url = 'file://' + image_path\n ret = sdk_client.send_request('image_import', image_name, url,\n {'os_version': os_version})\n else:\n print(\"Image %s already exists\" % image_name)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _run_guest(userid, image_path, os_version, profile,\n cpu, memory, network_info, disks_list):\n \"\"\"Deploy and provision a virtual machine.\n\n Input parameters:\n :userid: USERID of the guest, no more than 8.\n :image_path: path of the image file\n :os_version: os version of the image file\n :profile: profile of the userid\n :cpu: the number of vcpus\n :memory: memory\n :network_info: dict of network info.members:\n :ip_addr: ip address of vm\n :gateway: gateway of net\n :vswitch_name: switch name\n :cidr: CIDR\n :disks_list: list of disks to add.eg:\n disks_list = [{'size': '3g',\n 'is_boot_disk': True,\n 'disk_pool': 'ECKD:xcateckd'}]\n \"\"\"\n # Import image if not exists\n import_image(image_path, os_version)\n\n # Start time\n spawn_start = time.time()\n # Create userid\n print(\"Creating userid %s ...\" % userid)\n ret = sdk_client.send_request('guest_create', userid, cpu, memory,\n disk_list=disks_list,\n user_profile=profile)\n if ret['overallRC']:\n print('guest_create error:%s' % ret)\n return -1\n\n # Deploy image to root disk\n image_name = os.path.basename(image_path)\n print(\"Deploying %s to %s ...\" % (image_name, userid))\n ret = sdk_client.send_request('guest_deploy', userid, image_name)\n if ret['overallRC']:\n print('guest_deploy error:%s' % ret)\n return -2\n\n # Create network device and configure network 
interface\n print(\"Configuring network interface for %s ...\" % userid)\n ret = sdk_client.send_request('guest_create_network_interface', userid,\n os_version, [network_info])\n if ret['overallRC']:\n print 'guest_create_network error:%s' % ret\n return -3\n # Couple to vswitch\n ret = sdk_client.send_request('guest_nic_couple_to_vswitch', userid,\n '1000', network_info['vswitch_name'])\n if ret['overallRC']:\n print 'guest_nic_couple error:%s' % ret\n return -4\n # Grant user\n ret = sdk_client.send_request('vswitch_grant_user',\n network_info['vswitch_name'],\n userid)\n if ret['overallRC']:\n print 'vswitch_grant_user error:%s' % ret\n return -5\n\n # Power on the vm\n print(\"Starting guest %s\" % userid)\n ret = sdk_client.send_request('guest_start', userid)\n if ret['overallRC']:\n print 'guest_start error:%s' % ret\n return -6\n\n # End time\n spawn_time = time.time() - spawn_start\n print \"Instance-%s pawned succeeded in %s seconds\" % (userid, spawn_time)", "response": "Create a new virtual machine and provision it."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nremove unexpected properties from the req. 
GET.", "response": "def _remove_unexpected_query_parameters(schema, req):\n \"\"\"Remove unexpected properties from the req.GET.\"\"\"\n additional_properties = schema.get('additionalProperties', True)\n\n if additional_properties:\n pattern_regexes = []\n\n patterns = schema.get('patternProperties', None)\n if patterns:\n for regex in patterns:\n pattern_regexes.append(re.compile(regex))\n\n for param in set(req.GET.keys()):\n if param not in schema['properties'].keys():\n if not (list(regex for regex in pattern_regexes if\n regex.match(param))):\n del req.GET[param]"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef query_schema(query_params_schema, min_version=None,\n max_version=None):\n \"\"\"Register a schema to validate request query parameters.\"\"\"\n\n def add_validator(func):\n @functools.wraps(func)\n def wrapper(*args, **kwargs):\n if 'req' in kwargs:\n req = kwargs['req']\n else:\n req = args[1]\n\n if req.environ['wsgiorg.routing_args'][1]:\n if _schema_validation_helper(query_params_schema,\n req.environ['wsgiorg.routing_args'][1],\n args, kwargs, is_body=False):\n _remove_unexpected_query_parameters(query_params_schema,\n req)\n else:\n if _schema_validation_helper(query_params_schema,\n req.GET.dict_of_lists(),\n args, kwargs, is_body=False):\n _remove_unexpected_query_parameters(query_params_schema,\n req)\n return func(*args, **kwargs)\n return wrapper\n\n return add_validator", "response": "Register a schema to validate request query parameters."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\niterates over the content and ensures the response is closed after.", "response": "def _close_after_stream(self, response, chunk_size):\n \"\"\"Iterate over the content and ensure the response is closed after.\"\"\"\n # Yield each chunk in the response body\n for chunk in response.iter_content(chunk_size=chunk_size):\n yield chunk\n # Once we're done streaming 
the body, ensure everything is closed.\n response.close()"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nsaves a file to the specified path.", "response": "def _save_file(self, data, path):\n \"\"\"Save a file to the specified path.\n\n :param data: binary data of the file\n :param path: path to save the file to\n \"\"\"\n with open(path, 'wb') as tfile:\n for chunk in data:\n tfile.write(chunk)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nprocesses the docstring for a given Python object.", "response": "def process_docstring(app, what, name, obj, options, lines):\n \"\"\"Process the docstring for a given Python object.\n\n Called when autodoc has read and processed a docstring. `lines` is a list\n of docstring lines that `_process_docstring` modifies in place to change\n what Sphinx outputs.\n\n The following settings in conf.py control what styles of docstrings will\n be parsed:\n\n * ``napoleon_google_docstring`` -- parse Google style docstrings\n * ``napoleon_numpy_docstring`` -- parse NumPy style docstrings\n\n Parameters\n ----------\n app : sphinx.application.Sphinx\n Application object representing the Sphinx process.\n what : str\n A string specifying the type of the object to which the docstring\n belongs. Valid values: \"module\", \"class\", \"exception\", \"function\",\n \"method\", \"attribute\".\n name : str\n The fully qualified name of the object.\n obj : module, class, exception, function, method, or attribute\n The object to which the docstring belongs.\n options : sphinx.ext.autodoc.Options\n The options given to the directive: an object with attributes\n inherited_members, undoc_members, show_inheritance and noindex that\n are True if the flag option of same name was given to the auto\n directive.\n lines : list of str\n The lines of the docstring, see above.\n\n .. 
note:: `lines` is modified *in place*\n\n Notes\n -----\n This function is (to most parts) taken from the :mod:`sphinx.ext.napoleon`\n module, sphinx version 1.3.1, and adapted to the classes defined here\"\"\"\n result_lines = lines\n if app.config.napoleon_numpy_docstring:\n docstring = ExtendedNumpyDocstring(\n result_lines, app.config, app, what, name, obj, options)\n result_lines = docstring.lines()\n if app.config.napoleon_google_docstring:\n docstring = ExtendedGoogleDocstring(\n result_lines, app.config, app, what, name, obj, options)\n result_lines = docstring.lines()\n\n lines[:] = result_lines[:]"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef setup(app):\n from sphinx.application import Sphinx\n if not isinstance(app, Sphinx):\n return # probably called by tests\n\n app.connect('autodoc-process-docstring', process_docstring)\n return napoleon_setup(app)", "response": "Sphinx extension setup function\n "} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nformats date values in the .", "response": "def format_time(x):\n \"\"\"Formats date values\n\n This function formats :class:`datetime.datetime` and\n :class:`datetime.timedelta` objects (and the corresponding numpy objects)\n using the :func:`xarray.core.formatting.format_timestamp` and the\n :func:`xarray.core.formatting.format_timedelta` functions.\n\n Parameters\n ----------\n x: object\n The value to format. 
If not a time object, the value is returned\n\n Returns\n -------\n str or `x`\n Either the formatted time object or the initial `x`\"\"\"\n if isinstance(x, (datetime64, datetime)):\n return format_timestamp(x)\n elif isinstance(x, (timedelta64, timedelta)):\n return format_timedelta(x)\n elif isinstance(x, ndarray):\n return list(x) if x.ndim else x[()]\n return x"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef is_data_dependent(fmto, data):\n if callable(fmto.data_dependent):\n return fmto.data_dependent(data)\n return fmto.data_dependent", "response": "Checks whether a formatoption is data dependent"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef set_value(self, value, validate=True, todefault=False):\n value = value if not validate else self.validate(value)\n # if the key in the plotter is not already set (i.e. it is initialized\n # with None, we set it)\n if self.plotter[self.key] is None:\n with self.plotter.no_validation:\n self.plotter[self.key] = value.copy()\n # in case of an empty dict, clear the value\n elif not value:\n self.plotter[self.key].clear()\n # otherwhise we update the dictionary\n else:\n if todefault:\n self.plotter[self.key].clear()\n self.plotter[self.key].update(value)", "response": "Set the value of the key in the plotter with the given value."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef ax(self):\n if self._ax is None:\n import matplotlib.pyplot as plt\n plt.figure()\n self._ax = plt.axes(projection=self._get_sample_projection())\n return self._ax", "response": "Axes instance of the plot"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef iter_base_variables(self):\n if isinstance(self.data, InteractiveList):\n return chain(*(arr.psy.iter_base_variables for arr in self.data))\n 
else:\n return self.data.psy.iter_base_variables", "response": "A mapping from the base_variable names to the variables"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef changed(self):\n return {key: value for key, value in six.iteritems(self)\n if getattr(self, key).changed}", "response": "Returns the dictionary containing the key value pairs that are not the\n default"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _fmto_groups(self):\n ret = defaultdict(set)\n for key in self:\n ret[getattr(self, key).group].add(getattr(self, key))\n return dict(ret)", "response": "Returns a dictionary mapping from group to a set of formatoptions"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn the logging. Logger of this plotter.", "response": "def logger(self):\n \"\"\":class:`logging.Logger` of this plotter\"\"\"\n try:\n return self.data.psy.logger.getChild(self.__class__.__name__)\n except AttributeError:\n name = '%s.%s' % (self.__module__, self.__class__.__name__)\n return logging.getLogger(name)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _try2set(self, fmto, *args, **kwargs):\n try:\n fmto.set_value(*args, **kwargs)\n except Exception as e:\n critical(\"Error while setting %s!\" % fmto.key,\n logger=getattr(self, 'logger', None))\n raise e", "response": "Try to set the value in fmto and gives additional informations when fail\n is set"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef check_key(self, key, raise_error=True, *args, **kwargs):\n return check_key(\n key, possible_keys=list(self), raise_error=raise_error,\n name='formatoption keyword', *args, **kwargs)", "response": "Checks whether the key is a valid formatoption\n ."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function 
in Python 3, explain what it does\ndef initialize_plot(self, data=None, ax=None, make_plot=True, clear=False,\n draw=False, remove=False, priority=None):\n \"\"\"\n Initialize the plot for a data array\n\n Parameters\n ----------\n data: InteractiveArray or ArrayList, optional\n Data object that shall be visualized.\n\n - If not None and `plot` is True, the given data is visualized.\n - If None and the :attr:`data` attribute is not None, the data in\n the :attr:`data` attribute is visualized\n - If both are None, nothing is done.\n %(Plotter.parameters.ax|make_plot|clear)s\n %(InteractiveBase.start_update.parameters.draw)s\n remove: bool\n If True, old effects by the formatoptions in this plotter are\n undone first\n priority: int\n If given, initialize only the formatoption with the given priority.\n This value must be out of :data:`START`, :data:`BEFOREPLOTTING` or\n :data:`END`\n \"\"\"\n if data is None and self.data is not None:\n data = self.data\n else:\n self.data = data\n self.ax = ax\n if data is None: # nothing to do if no data is given\n return\n self.no_auto_update = not (\n not self.no_auto_update or not data.psy.no_auto_update)\n data.psy.plotter = self\n if not make_plot: # stop here if we shall not plot\n return\n self.logger.debug(\"Initializing plot...\")\n if remove:\n self.logger.debug(\" Removing old formatoptions...\")\n for fmto in self._fmtos:\n try:\n fmto.remove()\n except Exception:\n self.logger.debug(\n \"Could not remove %s while initializing\", fmto.key,\n exc_info=True)\n if clear:\n self.logger.debug(\" Clearing axes...\")\n self.ax.clear()\n self.cleared = True\n # get the formatoptions. 
We sort them here by key to make sure that the\n # order always stays the same (easier for debugging)\n fmto_groups = self._grouped_fmtos(self._sorted_by_priority(\n sorted(self._fmtos, key=lambda fmto: fmto.key)))\n self.plot_data = self.data\n self._updating = True\n for fmto_priority, grouper in fmto_groups:\n if priority is None or fmto_priority == priority:\n self._plot_by_priority(fmto_priority, grouper,\n initializing=True)\n self._release_all(True) # finish the update\n self.cleared = False\n self.replot = False\n self._initialized = True\n self._updating = False\n\n if draw is None:\n draw = rcParams['auto_draw']\n if draw:\n self.draw()\n if rcParams['auto_show']:\n self.show()", "response": "Initialize the plot for a given data array."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nregisters formatoptions for the update", "response": "def _register_update(self, fmt={}, replot=False, force=False,\n todefault=False):\n \"\"\"\n Register formatoptions for the update\n\n Parameters\n ----------\n fmt: dict\n Keys can be any valid formatoptions with the corresponding values\n (see the :attr:`formatoptions` attribute)\n replot: bool\n Boolean that determines whether the data specific formatoptions\n shall be updated in any case or not.\n %(InteractiveBase._register_update.parameters.force|todefault)s\"\"\"\n if self.disabled:\n return\n self.replot = self.replot or replot\n self._todefault = self._todefault or todefault\n if force is True:\n force = list(fmt)\n self._force.update(\n [ret[0] for ret in map(self.check_key, force or [])])\n # check the keys\n list(map(self.check_key, fmt))\n self._registered_updates.update(fmt)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nstarts the update of the formatoptions.", "response": "def start_update(self, draw=None, queues=None, update_shared=True):\n \"\"\"\n Conduct the registered plot updates\n\n This method starts the updates from 
what has been registered by the\n :meth:`update` method. You can call this method if you did not set the\n `auto_update` parameter to True when calling the :meth:`update` method\n and when the :attr:`no_auto_update` attribute is True.\n\n Parameters\n ----------\n %(InteractiveBase.start_update.parameters)s\n\n Returns\n -------\n %(InteractiveBase.start_update.returns)s\n\n See Also\n --------\n :attr:`no_auto_update`, update\"\"\"\n def update_the_others():\n for fmto in fmtos:\n for other_fmto in fmto.shared:\n if not other_fmto.plotter._updating:\n other_fmto.plotter._register_update(\n force=[other_fmto.key])\n for fmto in fmtos:\n for other_fmto in fmto.shared:\n if not other_fmto.plotter._updating:\n other_draw = other_fmto.plotter.start_update(\n draw=False, update_shared=False)\n if other_draw:\n self._figs2draw.add(\n other_fmto.plotter.ax.get_figure())\n if self.disabled:\n return False\n\n if queues is not None:\n queues[0].get()\n self.logger.debug(\"Starting update of %r\",\n self._registered_updates.keys())\n # update the formatoptions\n self._save_state()\n try:\n # get the formatoptions. 
We sort them here by key to make sure that\n # the order always stays the same (easier for debugging)\n fmtos = sorted(self._set_and_filter(), key=lambda fmto: fmto.key)\n except Exception:\n # restore last (working) state\n last_state = self._old_fmt.pop(-1)\n with self.no_validation:\n for key in self:\n self[key] = last_state.get(key, getattr(self, key).default)\n if queues is not None:\n queues[0].task_done()\n self._release_all(queue=None if queues is None else queues[1])\n # raise the error\n raise\n for fmto in fmtos:\n for fmto2 in fmto.shared:\n fmto2.plotter._to_update[fmto2] = self\n if queues is not None:\n self._updating = True\n queues[0].task_done()\n # wait for the other tasks to finish\n queues[0].join()\n queues[1].get()\n fmtos.extend([fmto for fmto in self._insert_additionals(list(\n self._to_update)) if fmto not in fmtos])\n self._to_update.clear()\n\n fmto_groups = self._grouped_fmtos(self._sorted_by_priority(fmtos[:]))\n # if any formatoption requires a clearing of the axes is updated,\n # we reinitialize the plot\n if self.cleared:\n self.reinit(draw=draw)\n update_the_others()\n self._release_all(queue=None if queues is None else queues[1])\n return True\n # otherwise we update it\n arr_draw = False\n try:\n for priority, grouper in fmto_groups:\n arr_draw = True\n self._plot_by_priority(priority, grouper)\n update_the_others()\n except Exception:\n raise\n finally:\n # make sure that all locks are released\n self._release_all(finish=True,\n queue=None if queues is None else queues[1])\n if draw is None:\n draw = rcParams['auto_draw']\n if draw and arr_draw:\n self.draw()\n if rcParams['auto_show']:\n self.show()\n self.replot = False\n return arr_draw"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef draw(self):\n for fig in self.figs2draw:\n fig.canvas.draw()\n self._figs2draw.clear()", "response": "Draw the figures and those that are shared and have been changed"} {"SOURCE": "codesearchnet", 
"instruction": "How would you code a function in Python 3 to\ninsert additional formatoptions into fmtos.", "response": "def _insert_additionals(self, fmtos, seen=None):\n \"\"\"\n Insert additional formatoptions into `fmtos`.\n\n This method inserts those formatoptions into `fmtos` that are required\n because one of the following criteria is fullfilled:\n\n 1. The :attr:`replot` attribute is True\n 2. Any formatoption with START priority is in `fmtos`\n 3. A dependency of one formatoption is in `fmtos`\n\n Parameters\n ----------\n fmtos: list\n The list of formatoptions that shall be updated\n seen: set\n The formatoption keys that shall not be included. If None, all\n formatoptions in `fmtos` are used\n\n Returns\n -------\n fmtos\n The initial `fmtos` plus further formatoptions\n\n Notes\n -----\n `fmtos` and `seen` are modified in place (except that any formatoption\n in the initial `fmtos` has :attr:`~Formatoption.requires_clearing`\n attribute set to True)\"\"\"\n def get_dependencies(fmto):\n if fmto is None:\n return []\n return fmto.dependencies + list(chain(*map(\n lambda key: get_dependencies(getattr(self, key, None)),\n fmto.dependencies)))\n seen = seen or {fmto.key for fmto in fmtos}\n keys = {fmto.key for fmto in fmtos}\n self.replot = self.replot or any(\n fmto.requires_replot for fmto in fmtos)\n if self.replot or any(fmto.priority >= START for fmto in fmtos):\n self.replot = True\n self.plot_data = self.data\n new_fmtos = dict((f.key, f) for f in self._fmtos\n if ((f not in fmtos and is_data_dependent(\n f, self.data))))\n seen.update(new_fmtos)\n keys.update(new_fmtos)\n fmtos += list(new_fmtos.values())\n\n # insert the formatoptions that have to be updated if the plot is\n # changed\n if any(fmto.priority >= BEFOREPLOTTING for fmto in fmtos):\n new_fmtos = dict((f.key, f) for f in self._fmtos\n if ((f not in fmtos and f.update_after_plot)))\n fmtos += list(new_fmtos.values())\n for fmto in set(self._fmtos).difference(fmtos):\n all_dependencies 
= get_dependencies(fmto)\n if keys.intersection(all_dependencies):\n fmtos.append(fmto)\n if any(fmto.requires_clearing for fmto in fmtos):\n self.cleared = True\n return list(self._fmtos)\n return fmtos"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _sorted_by_priority(self, fmtos, changed=None):\n def pop_fmto(key):\n idx = fmtos_keys.index(key)\n del fmtos_keys[idx]\n return fmtos.pop(idx)\n\n def get_children(fmto, parents_keys):\n all_fmtos = fmtos_keys + parents_keys\n for key in fmto.children + fmto.dependencies:\n if key not in fmtos_keys:\n continue\n child_fmto = pop_fmto(key)\n for childs_child in get_children(\n child_fmto, parents_keys + [child_fmto.key]):\n yield childs_child\n # filter out if parent is in update list\n if (any(key in all_fmtos for key in child_fmto.parents) or\n fmto.key in child_fmto.parents):\n continue\n yield child_fmto\n\n fmtos.sort(key=lambda fmto: fmto.priority, reverse=True)\n fmtos_keys = [fmto.key for fmto in fmtos]\n self._last_update = changed or fmtos_keys[:]\n self.logger.debug(\"Update the formatoptions %s\", fmtos_keys)\n while fmtos:\n del fmtos_keys[0]\n fmto = fmtos.pop(0)\n # first update children\n for child_fmto in get_children(fmto, [fmto.key]):\n yield child_fmto\n # filter out if parent is in update list\n if any(key in fmtos_keys for key in fmto.parents):\n continue\n yield fmto", "response": "Yields all formatoption objects in the order of their priority and dependency."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _get_formatoptions(cls, include_bases=True):\n def base_fmtos(base):\n return filter(\n lambda key: isinstance(getattr(cls, key), Formatoption),\n getattr(base, '_get_formatoptions', empty)(False))\n\n def empty(*args, **kwargs):\n return list()\n fmtos = (attr for attr, obj in six.iteritems(cls.__dict__)\n if isinstance(obj, Formatoption))\n if not include_bases:\n return fmtos\n 
return unique_everseen(chain(fmtos, *map(base_fmtos, cls.__mro__)))", "response": "This method returns an iterator over formatoptions in this class."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _enhance_keys(cls, keys=None, *args, **kwargs):\n all_keys = list(cls._get_formatoptions())\n if isinstance(keys, six.string_types):\n keys = [keys]\n else:\n keys = list(keys or sorted(all_keys))\n fmto_groups = defaultdict(list)\n for key in all_keys:\n fmto_groups[getattr(cls, key).group].append(key)\n new_i = 0\n for i, key in enumerate(keys[:]):\n\n if key in fmto_groups:\n del keys[new_i]\n for key2 in fmto_groups[key]:\n if key2 not in keys:\n keys.insert(new_i, key2)\n new_i += 1\n else:\n valid, similar, message = check_key(\n key, all_keys, False, 'formatoption keyword', *args,\n **kwargs)\n if not valid:\n keys.remove(key)\n new_i -= 1\n warn(message)\n new_i += 1\n return keys", "response": "Enhance the given keys by groups and return the enhanced list of formatoptions."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef show_summaries(cls, keys=None, indent=0, *args, **kwargs):\n def find_summary(key, key_txt, doc):\n return '\\n'.join(wrapper.wrap(doc[:doc.find('\\n\\n')]))\n str_indent = \" \" * indent\n wrapper = TextWrapper(width=80, initial_indent=str_indent + ' ' * 4,\n subsequent_indent=str_indent + ' ' * 4)\n return cls._show_doc(find_summary, keys=keys, indent=indent,\n *args, **kwargs)", "response": "A method to print the summary of the keys and other parameters."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef show_docs(cls, keys=None, indent=0, *args, **kwargs):\n def full_doc(key, key_txt, doc):\n return ('=' * len(key_txt)) + '\\n' + doc + '\\n'\n return cls._show_doc(full_doc, keys=keys, indent=indent,\n *args, **kwargs)", "response": "A method to print the full 
documentations of the formatoptions\n ."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _set_rc(self):\n base_str = self._get_rc_strings()\n # to make sure that the '.' is not interpreted as a regex pattern,\n # we specify the pattern_base by ourselves\n pattern_base = map(lambda s: s.replace('.', '\\.'), base_str)\n # pattern for valid keys being all formatoptions in this plotter\n pattern = '(%s)(?=$)' % '|'.join(self._get_formatoptions())\n self._rc = rcParams.find_and_replace(base_str, pattern=pattern,\n pattern_base=pattern_base)\n user_rc = SubDict(rcParams['plotter.user'], base_str, pattern=pattern,\n pattern_base=pattern_base)\n self._rc.update(user_rc.data)\n\n self._defaultParams = SubDict(rcParams.defaultParams, base_str,\n pattern=pattern,\n pattern_base=pattern_base)", "response": "Method to set the rcparams and defaultParams for this plotter"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nupdate the formatoptions and the plotter.", "response": "def update(self, fmt={}, replot=False, auto_update=False, draw=None,\n force=False, todefault=False, **kwargs):\n \"\"\"\n Update the formatoptions and the plot\n\n If the :attr:`data` attribute of this plotter is None, the plotter is\n updated like a usual dictionary (see :meth:`dict.update`). 
Otherwise\n the update is registered and the plot is updated if `auto_update` is\n True or if the :meth:`start_update` method is called (see below).\n\n Parameters\n ----------\n %(Plotter._register_update.parameters)s\n %(InteractiveBase.start_update.parameters)s\n %(InteractiveBase.update.parameters.auto_update)s\n ``**kwargs``\n Any other formatoption that shall be updated (additionally to those\n in `fmt`)\n\n Notes\n -----\n %(InteractiveBase.update.notes)s\"\"\"\n if self.disabled:\n return\n fmt = dict(fmt)\n if kwargs:\n fmt.update(kwargs)\n # if the data is None, update like a usual dictionary (but with\n # validation)\n if not self._initialized:\n for key, val in six.iteritems(fmt):\n self[key] = val\n return\n\n self._register_update(fmt=fmt, replot=replot, force=force,\n todefault=todefault)\n if not self.no_auto_update or auto_update:\n self.start_update(draw=draw)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nset the keys to share or unshare the given set of formatoptions.", "response": "def _set_sharing_keys(self, keys):\n \"\"\"\n Set the keys to share or unshare\n\n Parameters\n ----------\n keys: string or iterable of strings\n The iterable may contain formatoptions that shall be shared (or\n unshared), or group names of formatoptions to share all\n formatoptions of that group (see the :attr:`fmt_groups` property).\n If None, all formatoptions of this plotter are inserted.\n\n Returns\n -------\n set\n The set of formatoptions to share (or unshare)\"\"\"\n if isinstance(keys, str):\n keys = {keys}\n keys = set(self) if keys is None else set(keys)\n fmto_groups = self._fmto_groups\n keys.update(chain(*(map(lambda fmto: fmto.key, fmto_groups[key])\n for key in keys.intersection(fmto_groups))))\n keys.difference_update(fmto_groups)\n return keys"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef share(self, plotters, keys=None, draw=None, 
auto_update=False):\n auto_update = auto_update or not self.no_auto_update\n if isinstance(plotters, Plotter):\n plotters = [plotters]\n keys = self._set_sharing_keys(keys)\n for plotter in plotters:\n for key in keys:\n fmto = self._shared.get(key, getattr(self, key))\n if not getattr(plotter, key) == fmto:\n plotter._shared[key] = getattr(self, key)\n fmto.shared.add(getattr(plotter, key))\n # now exit if we are not initialized\n if self._initialized:\n self.update(force=keys, auto_update=auto_update, draw=draw)\n for plotter in plotters:\n if not plotter._initialized:\n continue\n old_registered = plotter._registered_updates.copy()\n plotter._registered_updates.clear()\n try:\n plotter.update(force=keys, auto_update=auto_update, draw=draw)\n except:\n raise\n finally:\n plotter._registered_updates.clear()\n plotter._registered_updates.update(old_registered)\n if draw is None:\n draw = rcParams['auto_draw']\n if draw:\n self.draw()\n if rcParams['auto_show']:\n self.show()", "response": "This method shares the formatoptions of this plotter with the others."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef unshare(self, plotters, keys=None, auto_update=False, draw=None):\n auto_update = auto_update or not self.no_auto_update\n if isinstance(plotters, Plotter):\n plotters = [plotters]\n keys = self._set_sharing_keys(keys)\n for plotter in plotters:\n plotter.unshare_me(keys, auto_update=auto_update, draw=draw,\n update_other=False)\n self.update(force=keys, auto_update=auto_update, draw=draw)", "response": "Unshare the given plotters with the given keys."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncloses the sharing connection of this plotter with others This method undoes the sharing connections made by the :meth:`share` method and release this plotter again. 
Parameters ---------- keys: string or iterable of strings The formatoptions to unshare, or group names of formatoptions to unshare all formatoptions of that group (see the :attr:`fmt_groups` property). If None, all formatoptions of this plotter are unshared. %(InteractiveBase.start_update.parameters.draw)s %(InteractiveBase.update.parameters.auto_update)s See Also -------- share, unshare", "response": "def unshare_me(self, keys=None, auto_update=False, draw=None,\n update_other=True):\n \"\"\"\n Close the sharing connection of this plotter with others\n\n This method undoes the sharing connections made by the :meth:`share`\n method and release this plotter again.\n\n Parameters\n ----------\n keys: string or iterable of strings\n The formatoptions to unshare, or group names of formatoptions to\n unshare all formatoptions of that group (see the\n :attr:`fmt_groups` property). If None, all formatoptions of this\n plotter are unshared.\n %(InteractiveBase.start_update.parameters.draw)s\n %(InteractiveBase.update.parameters.auto_update)s\n\n See Also\n --------\n share, unshare\"\"\"\n auto_update = auto_update or not self.no_auto_update\n keys = self._set_sharing_keys(keys)\n to_update = []\n for key in keys:\n fmto = getattr(self, key)\n try:\n other_fmto = self._shared.pop(key)\n except KeyError:\n pass\n else:\n other_fmto.shared.remove(fmto)\n if update_other:\n other_fmto.plotter._register_update(\n force=[other_fmto.key])\n to_update.append(other_fmto.plotter)\n self.update(force=keys, draw=draw, auto_update=auto_update)\n if update_other and auto_update:\n for plotter in to_update:\n plotter.start_update(draw=draw)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef has_changed(self, key, include_last=True):\n if self._initializing or key not in self:\n return\n fmto = getattr(self, key)\n if self._old_fmt and key in self._old_fmt[-1]:\n old_val = self._old_fmt[-1][key]\n else:\n old_val = fmto.default\n 
if (fmto.diff(old_val) or (include_last and\n fmto.key in self._last_update)):\n return [old_val, fmto.value]", "response": "Returns a list of the old value and the new value of the formatoption with the given key."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef execute(cmd):\n\n # Parse cmd string to a list\n if not isinstance(cmd, list):\n cmd = shlex.split(cmd)\n # Execute command\n rc = 0\n output = \"\"\n try:\n output = subprocess.check_output(cmd, close_fds=True,\n stderr=subprocess.STDOUT)\n except subprocess.CalledProcessError as err:\n rc = err.returncode\n output = err.output\n except Exception as err:\n err_msg = ('Command \"%s\" Error: %s' % (' '.join(cmd), str(err)))\n raise exception.SDKInternalError(msg=err_msg)\n\n output = bytes.decode(output)\n return (rc, output)", "response": "execute command return rc and output string."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nconverts memory size from GB to MB.", "response": "def convert_to_mb(s):\n \"\"\"Convert memory size from GB to MB.\"\"\"\n s = s.upper()\n try:\n if s.endswith('G'):\n return float(s[:-1].strip()) * 1024\n elif s.endswith('T'):\n return float(s[:-1].strip()) * 1024 * 1024\n else:\n return float(s[:-1].strip())\n except (IndexError, ValueError, KeyError, TypeError):\n errmsg = (\"Invalid memory format: %s\") % s\n raise exception.SDKInternalError(msg=errmsg)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nvalidating a mac address", "response": "def valid_mac_addr(addr):\n ''' Validates a mac address'''\n if not isinstance(addr, six.string_types):\n return False\n valid = re.compile(r'''\n (^([0-9A-F]{2}[:]){5}([0-9A-F]{2})$)\n ''',\n re.VERBOSE | re.IGNORECASE)\n return valid.match(addr) is not None"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef expect_invalid_resp_data(data=''):\n try:\n 
yield\n except (ValueError, TypeError, IndexError, AttributeError,\n KeyError) as err:\n msg = ('Invalid smt response data: %s. Error: %s' %\n (data, six.text_type(err)))\n LOG.error(msg)\n raise exception.SDKInternalError(msg=msg)", "response": "Catch exceptions when using zvm client response data."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncatching exceptions when using zvm client response data.", "response": "def wrap_invalid_resp_data_error(function):\n \"\"\"Catch exceptions when using zvm client response data.\"\"\"\n\n @functools.wraps(function)\n def decorated_function(*arg, **kwargs):\n try:\n return function(*arg, **kwargs)\n except (ValueError, TypeError, IndexError, AttributeError,\n KeyError) as err:\n msg = ('Invalid smt response data. Error: %s' %\n six.text_type(err))\n LOG.error(msg)\n raise exception.SDKInternalError(msg=msg)\n\n return decorated_function"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncatching all kinds of zvm client request failure and reraise the exception. SDKInternalError", "response": "def expect_and_reraise_internal_error(modID='SDK'):\n \"\"\"Catch all kinds of zvm client request failure and reraise.\n\n modID: the moduleID that the internal error happens in.\n \"\"\"\n try:\n yield\n except exception.SDKInternalError as err:\n msg = err.format_message()\n raise exception.SDKInternalError(msg, modID=modID)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncatch SDK base exception and print error log before reraise exception.", "response": "def log_and_reraise_smt_request_failed(action=None):\n \"\"\"Catch SDK base exception and print error log before reraise exception.\n\n msg: the error message to be logged.\n \"\"\"\n try:\n yield\n except exception.SDKSMTRequestFailed as err:\n msg = ''\n if action is not None:\n msg = \"Failed to %s. 
\" % action\n msg += \"SMT error: %s\" % err.format_message()\n LOG.error(msg)\n raise exception.SDKSMTRequestFailed(err.results, msg)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_smt_userid():\n cmd = [\"sudo\", \"/sbin/vmcp\", \"query userid\"]\n try:\n userid = subprocess.check_output(cmd,\n close_fds=True,\n stderr=subprocess.STDOUT)\n userid = bytes.decode(userid)\n userid = userid.split()[0]\n return userid\n except Exception as err:\n msg = (\"Could not find the userid of the smt server: %s\") % err\n raise exception.SDKInternalError(msg=msg)", "response": "Get the userid of the smt server"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngenerates namelist. Either through set CONF.zvm.namelist, or by generate based on smt userid.", "response": "def get_namelist():\n \"\"\"Generate namelist.\n\n Either through set CONF.zvm.namelist, or by generate based on smt userid.\n \"\"\"\n if CONF.zvm.namelist is not None:\n # namelist length limit should be 64, but there's bug limit to 8\n # will change the limit to 8 once the bug fixed\n if len(CONF.zvm.namelist) <= 8:\n return CONF.zvm.namelist\n\n # return ''.join(('NL', get_smt_userid().rjust(6, '0')[-6:]))\n # py3 compatible changes\n userid = get_smt_userid()\n return 'NL' + userid.rjust(6, '0')[-6:]"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngenerates the iucv_authorized_userid file", "response": "def generate_iucv_authfile(fn, client):\n \"\"\"Generate the iucv_authorized_userid file\"\"\"\n lines = ['#!/bin/bash\\n',\n 'echo -n %s > /etc/iucv_authorized_userid\\n' % client]\n with open(fn, 'w') as f:\n f.writelines(lines)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef translate_response_to_dict(rawdata, dirt):\n data_list = rawdata.split(\"\\n\")\n data = {}\n\n for ls in 
data_list:\n        for k in list(dirt.keys()):\n            if ls.__contains__(dirt[k]):\n                data[k] = ls[(ls.find(dirt[k]) + len(dirt[k])):].strip()\n                break\n\n    if data == {}:\n        msg = (\"Invalid smt response data. Error: No value matched with \"\n               \"keywords. Raw Data: %(raw)s; Keywords: %(kws)s\" %\n               {'raw': rawdata, 'kws': str(dirt)})\n        raise exception.SDKInternalError(msg=msg)\n\n    return data", "response": "Translate SMT response to a python dictionary."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef delete_guest(userid):\n    # Check if the guest exists.\n    guest_list_info = client.send_request('guest_list')\n    # the string 'userid' needs to be coded as u'userid' under a py2 interpreter.\n    userid_1 = (unicode(userid, \"utf-8\") if sys.version[0] == '2' else userid)\n\n    if userid_1 not in guest_list_info['output']:\n        raise RuntimeError(\"Userid %s does not exist!\" % userid)\n\n    # Delete the guest.\n    guest_delete_info = client.send_request('guest_delete', userid)\n\n    if guest_delete_info['overallRC']:\n        print(\"\\nFailed to delete guest %s!\" % userid)\n    else:\n        print(\"\\nSucceeded to delete guest %s!\" % userid)", "response": "Delete a virtual machine."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngetting the basic information of a virtual machine.", "response": "def describe_guest(userid):\n    \"\"\" Get the basic information of a virtual machine.\n\n    Input parameters:\n    :userid: USERID of the guest, last 8 if length > 8\n    \"\"\"\n    # Check if the guest exists.\n    guest_list_info = client.send_request('guest_list')\n    userid_1 = (unicode(userid, \"utf-8\") if sys.version[0] == '2' else userid)\n\n    if userid_1 not in guest_list_info['output']:\n        raise RuntimeError(\"Guest %s does not exist!\" % userid)\n\n    guest_describe_info = client.send_request('guest_get_definition_info', userid)\n    print(\"\\nThe created guest %s's info is: \\n%s\\n\" % (userid, guest_describe_info))"} {"SOURCE": 
"codesearchnet", "instruction": "Write a Python 3 function that can\nimport the specific image.", "response": "def import_image(image_path, os_version):\n \"\"\" Import the specific image.\n\n Input parameters:\n :image_path: Image file name\n :os_version: Operation System version. e.g. rhel7.4\n \"\"\"\n image_name = os.path.basename(image_path)\n print(\"\\nChecking if image %s exists ...\" % image_name)\n\n image_query_info = client.send_request('image_query', imagename = image_name)\n if image_query_info['overallRC']:\n print(\"Importing image %s ...\" % image_name)\n url = \"file://\" + image_path\n image_import_info = client.send_request('image_import', image_name, url, \n {'os_version': os_version})\n if image_import_info['overallRC']:\n raise RuntimeError(\"Failed to import image %s!\\n%s\" % \n (image_name, image_import_info))\n else:\n print(\"Succeeded to import image %s!\" % image_name)\n else:\n print(\"Image %s already exists!\" % image_name)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef create_guest(userid, cpu, memory, disks_list, profile):\n # Check if the userid already exists.\n guest_list_info = client.send_request('guest_list')\n userid_1 = (unicode(userid, \"utf-8\") if sys.version[0] == '2' else userid)\n\n if userid_1 in guest_list_info['output']:\n raise RuntimeError(\"Guest %s already exists!\" % userid)\n\n # Create the guest.\n print(\"\\nCreating guest: %s ...\" % userid)\n guest_create_info = client.send_request('guest_create', userid, cpu, memory, \n disk_list = disks_list, \n user_profile = profile)\n\n if guest_create_info['overallRC']:\n raise RuntimeError(\"Failed to create guest %s!\\n%s\" % \n (userid, guest_create_info))\n else:\n print(\"Succeeded to create guest %s!\" % userid)", "response": "Create the userid.\n\n Input parameters:\n :userid: USERID of the guest, last 8 if length > 8\n :cpu: the number of vcpus\n :memory: memory\n :disks_list: list of disks to add\n 
:profile: profile of the userid"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ndeploying image to root disk.", "response": "def deploy_guest(userid, image_name):\n    \"\"\" Deploy image to root disk.\n\n    Input parameters:\n    :userid: USERID of the guest, last 8 if length > 8\n    :image_name: Image file name\n    \"\"\"\n    print(\"\\nDeploying %s to %s ...\" % (image_name, userid))\n    guest_deploy_info = client.send_request('guest_deploy', userid, image_name)\n\n    # if deploy failed, delete the guest.\n    if guest_deploy_info['overallRC']:\n        print(\"\\nFailed to deploy guest %s!\\n%s\" % (userid, guest_deploy_info))\n        print(\"\\nDeleting the guest %s that failed to deploy...\" % userid)\n\n        # call delete_guest() to delete the guest that failed to deploy.\n        delete_guest(userid)\n        os._exit(0)\n    else:\n        print(\"Succeeded to deploy %s!\" % userid)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef create_network(userid, os_version, network_info):\n    print(\"\\nConfiguring network interface for %s ...\" % userid)\n    network_create_info = client.send_request('guest_create_network_interface',\n                                              userid, os_version, network_info)\n\n    if network_create_info['overallRC']:\n        raise RuntimeError(\"Failed to create network for guest %s!\\n%s\" %\n                           (userid, network_create_info))\n    else:\n        print(\"Succeeded to create network for guest %s!\" % userid)", "response": "Create network device and configure network interface."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef coupleTo_vswitch(userid, vswitch_name):\n    print(\"\\nCoupling to vswitch for %s ...\" % userid)\n    vswitch_info = client.send_request('guest_nic_couple_to_vswitch',\n                                       userid, '1000', vswitch_name)\n\n    if vswitch_info['overallRC']:\n        raise RuntimeError(\"Failed to couple to vswitch for guest %s!\\n%s\" %\n                           (userid, vswitch_info))\n    else:\n        print(\"Succeeded to couple to vswitch for 
guest %s!\" % userid)", "response": "Couple to vswitch.\n\n    Input parameters:\n    :userid: USERID of the guest, last 8 if length > 8\n    :vswitch_name: name of the vswitch to couple to"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngrants a user access to a vswitch", "response": "def grant_user(userid, vswitch_name):\n    \"\"\" Grant user.\n\n    Input parameters:\n    :userid: USERID of the guest, last 8 if length > 8\n    :vswitch_name: name of the vswitch to grant access on\n    \"\"\"\n    print(\"\\nGranting user %s ...\" % userid)\n    user_grant_info = client.send_request('vswitch_grant_user', vswitch_name, userid)\n\n    if user_grant_info['overallRC']:\n        raise RuntimeError(\"Failed to grant user %s!\" % userid)\n    else:\n        print(\"Succeeded to grant user %s!\" % userid)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef start_guest(userid):\n    # Check the power state before starting the guest.\n    power_state_info = client.send_request('guest_get_power_state', userid)\n    print(\"\\nPower state is: %s.\" % power_state_info['output'])\n\n    # Start the guest.\n    guest_start_info = client.send_request('guest_start', userid)\n\n    if guest_start_info['overallRC']:\n        raise RuntimeError('Failed to start guest %s!\\n%s' %\n                           (userid, guest_start_info))\n    else:\n        print(\"Succeeded to start guest %s!\" % userid)\n\n    # Check the power state after starting the guest.\n    power_state_info = client.send_request('guest_get_power_state', userid)\n    print(\"Power state is: %s.\" % power_state_info['output'])", "response": "Power on the guest."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncreating a new virtual machine, deploying an image, configuring the network, granting vswitch access and powering it on.", "response": "def _run_guest(userid, image_path, os_version, profile,\n               cpu, memory, network_info, vswitch_name, disks_list):\n    \"\"\" Deploy and provide a virtual 
machine.\n\n Input parameters:\n :userid: USERID of the guest, no more than 8\n :image_path: image file path\n :os_version: os version of the image file\n :profile: profile of the userid\n :cpu: the number of vcpus\n :memory: memory\n :network_info: dict of network info. Members are:\n :ip_addr: ip address of vm\n :gateway: gateway of network\n :cidr: CIDR\n :vswitch_name: vswitch name\n :disks_list: list of disks to add. For example:\n disks_list = [{'size': '3g', \n 'is_boot_disk': True, \n 'disk_pool': 'ECDK: xcateckd'}]\n \"\"\"\n print(\"Start deploying a virtual machine:\")\n \n # Import image if not exists.\n import_image(image_path, os_version)\n \n # Start time.\n spawn_start = time.time()\n \n # Create guest.\n create_guest(userid, cpu, memory, disks_list, profile)\n \n # Deploy image to root disk.\n image_name = os.path.basename(image_path)\n deploy_guest(userid, image_name)\n \n # Create network device and configure network interface.\n create_network(userid, os_version, network_info)\n \n # Couple to vswitch.\n coupleTo_vswitch(userid, vswitch_name)\n \n # Grant user.\n grant_user(userid, vswitch_name)\n \n # Power on the vm.\n start_guest(userid)\n \n # End the time.\n spawn_time = time.time() - spawn_start\n print(\"Instance-%s spawned succeeded in %s seconds!\" % \n (userid, spawn_time))\n\n # Describe guest.\n describe_guest(userid)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef run_guest():\n\n # user input the properties of guest, image and network.\n _user_input_properties()\n\n # run a guest.\n _run_guest(GUEST_USERID, IMAGE_PATH, IMAGE_OS_VERSION, GUEST_PROFILE, \n GUEST_VCPUS, GUEST_MEMORY, NETWORK_INFO, VSWITCH_NAME, DISKS_LIST)", "response": "A sample for quickly deploy and start a virtual guest."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef package_version(filename, varname):\n _locals = {}\n with open(filename) as fp:\n 
exec(fp.read(), None, _locals)\n return _locals[varname]", "response": "Return package version string by reading filename and retrieving its\n module - global variable varname."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef request(self, requestData, **kwArgs):\n\n self.reqCnt = self.reqCnt + 1\n\n # Determine whether the request will be capturing logs\n if 'captureLogs' in kwArgs.keys():\n logFlag = kwArgs['captureLogs']\n else:\n logFlag = self.captureLogs\n\n # Pass along or generate a request Id\n if 'requestId' in kwArgs.keys():\n requestId = kwArgs['requestId']\n else:\n requestId = str(self.reqIdPrefix) + str(self.reqCnt)\n\n rh = ReqHandle(\n requestId=requestId,\n captureLogs=logFlag,\n smt=self)\n\n rh.parseCmdline(requestData)\n if rh.results['overallRC'] == 0:\n rh.printSysLog(\"Processing: \" + rh.requestString)\n rh.driveFunction()\n\n return rh.results", "response": "Process a request.\n\n Input:\n Request as either a string or a list.\n captureLogs=\n Enables or disables log capture per request.\n This overrides the value from SMT.\n requestId= to pass a value for the request Id instead of\n using one generated by SMT.\n\n Output:\n Dictionary containing the results. 
See ReqHandle.buildReturnDict()\n for information on the contents of the dictionary."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ninvokes a SMAPI API and return the result.", "response": "def invokeSmapiApi(rh):\n \"\"\"\n Invoke a SMAPI API.\n\n Input:\n Request Handle with the following properties:\n function - 'SMAPI'\n subfunction - 'API'\n userid - 'HYPERVISOR'\n parms['apiName'] - Name of API as defined by SMCLI\n parms['operands'] - List (array) of operands to send or\n an empty list.\n\n Output:\n Request Handle updated with the results.\n Return code - 0: ok, non-zero: error\n \"\"\"\n\n rh.printSysLog(\"Enter smapi.invokeSmapiApi\")\n if rh.userid != 'HYPERVISOR':\n userid = rh.userid\n else:\n userid = 'dummy'\n\n parms = [\"-T\", userid]\n if 'operands' in rh.parms:\n parms.extend(rh.parms['operands'])\n\n results = invokeSMCLI(rh, rh.parms['apiName'], parms)\n if results['overallRC'] == 0:\n rh.printLn(\"N\", results['response'])\n else:\n # SMAPI API failed.\n rh.printLn(\"ES\", results['response'])\n rh.updateResults(results) # Use results from invokeSMCLI\n\n rh.printSysLog(\"Exit smapi.invokeCmd, rc: \" + str(rh.results['overallRC']))\n return rh.results['overallRC']"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nparse the FCP device object from several lines of string containing properties of the FCP device object.", "response": "def _parse(self, init_info):\n \"\"\"Initialize a FCP device object from several lines of string\n describing properties of the FCP device.\n Here is a sample:\n opnstk1: FCP device number: B83D\n opnstk1: Status: Free\n opnstk1: NPIV world wide port number: NONE\n opnstk1: Channel path ID: 59\n opnstk1: Physical world wide port number: 20076D8500005181\n The format comes from the response of xCAT, do not support\n arbitrary format.\n \"\"\"\n if isinstance(init_info, list) and (len(init_info) == 5):\n self._dev_no = 
self._get_dev_number_from_line(init_info[0])\n self._npiv_port = self._get_wwpn_from_line(init_info[2])\n self._chpid = self._get_chpid_from_line(init_info[3])\n self._physical_port = self._get_wwpn_from_line(init_info[4])"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ninitializing the FCP pool.", "response": "def _init_fcp_pool(self, fcp_list, assigner_id):\n \"\"\"The FCP infomation got from smt(zthin) looks like :\n host: FCP device number: xxxx\n host: Status: Active\n host: NPIV world wide port number: xxxxxxxx\n host: Channel path ID: xx\n host: Physical world wide port number: xxxxxxxx\n ......\n host: FCP device number: xxxx\n host: Status: Active\n host: NPIV world wide port number: xxxxxxxx\n host: Channel path ID: xx\n host: Physical world wide port number: xxxxxxxx\n\n \"\"\"\n complete_fcp_set = self._expand_fcp_list(fcp_list)\n fcp_info = self._get_all_fcp_info(assigner_id)\n lines_per_item = 5\n\n num_fcps = len(fcp_info) // lines_per_item\n for n in range(0, num_fcps):\n fcp_init_info = fcp_info[(5 * n):(5 * (n + 1))]\n fcp = FCP(fcp_init_info)\n dev_no = fcp.get_dev_no()\n if dev_no in complete_fcp_set:\n if fcp.is_valid():\n self._fcp_pool[dev_no] = fcp\n else:\n errmsg = (\"Find an invalid FCP device with properties {\"\n \"dev_no: %(dev_no)s, \"\n \"NPIV_port: %(NPIV_port)s, \"\n \"CHPID: %(CHPID)s, \"\n \"physical_port: %(physical_port)s} !\") % {\n 'dev_no': fcp.get_dev_no(),\n 'NPIV_port': fcp.get_npiv_port(),\n 'CHPID': fcp.get_chpid(),\n 'physical_port': fcp.get_physical_port()}\n LOG.warning(errmsg)\n else:\n # normal, FCP not used by cloud connector at all\n msg = \"Found a fcp %s not in fcp_list\" % dev_no\n LOG.debug(msg)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _expand_fcp_list(fcp_list):\n\n LOG.debug(\"Expand FCP list %s\" % fcp_list)\n\n if not fcp_list:\n return set()\n\n range_pattern = 
'[0-9a-fA-F]{1,4}(-[0-9a-fA-F]{1,4})?'\n match_pattern = \"^(%(range)s)(;%(range)s)*$\" % {'range': range_pattern}\n if not re.match(match_pattern, fcp_list):\n errmsg = (\"Invalid FCP address %s\") % fcp_list\n raise exception.SDKInternalError(msg=errmsg)\n\n fcp_devices = set()\n for _range in fcp_list.split(';'):\n if '-' not in _range:\n # single device\n fcp_addr = int(_range, 16)\n fcp_devices.add(\"%04x\" % fcp_addr)\n else:\n # a range of address\n (_min, _max) = _range.split('-')\n _min = int(_min, 16)\n _max = int(_max, 16)\n for fcp_addr in range(_min, _max + 1):\n fcp_devices.add(\"%04x\" % fcp_addr)\n\n # remove duplicate entries\n return fcp_devices", "response": "Expand fcp list into a python list object which contains\n each fcp device address and range indicator -."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nadd fcp to db if it s not in db and init it", "response": "def _add_fcp(self, fcp):\n \"\"\"add fcp to db if it's not in db but in fcp list and init it\"\"\"\n try:\n LOG.info(\"fcp %s found in CONF.volume.fcp_list, add it to db\" %\n fcp)\n self.db.new(fcp)\n except Exception:\n LOG.info(\"failed to add fcp %s into db\", fcp)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _sync_db_fcp_list(self):\n fcp_db_list = self.db.get_all()\n\n for fcp_rec in fcp_db_list:\n if not fcp_rec[0].lower() in self._fcp_pool:\n self._report_orphan_fcp(fcp_rec[0])\n\n for fcp_conf_rec, v in self._fcp_pool.items():\n res = self.db.get_from_fcp(fcp_conf_rec)\n\n # if not found this record, a [] will be returned\n if len(res) == 0:\n self._add_fcp(fcp_conf_rec)", "response": "sync db records from given fcp list for example you need to add new FCP to the database"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef find_and_reserve_fcp(self, assigner_id):\n fcp_list = self.db.get_from_assigner(assigner_id)\n if not fcp_list:\n 
new_fcp = self.db.find_and_reserve()\n if new_fcp is None:\n LOG.info(\"no more fcp to be allocated\")\n return None\n\n LOG.debug(\"allocated %s fcp for %s assigner\" %\n (new_fcp, assigner_id))\n return new_fcp\n else:\n # we got it from db, let's reuse it\n old_fcp = fcp_list[0][0]\n self.db.reserve(fcp_list[0][0])\n return old_fcp", "response": "find a fcp and reserve it"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef increase_fcp_usage(self, fcp, assigner_id=None):\n # TODO: check assigner_id to make sure on the correct fcp record\n connections = self.db.get_connections_from_assigner(assigner_id)\n new = False\n\n if connections == 0:\n self.db.assign(fcp, assigner_id)\n new = True\n else:\n self.db.increase_usage(fcp)\n\n return new", "response": "Increase the fcp usage of given fcp Returns True if it s a new fcp otherwise return False"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_available_fcp(self):\n # get the unreserved FCP devices belongs to assigner_id\n available_list = []\n free_unreserved = self.db.get_all_free_unreserved()\n for item in free_unreserved:\n available_list.append(item[0])\n return available_list", "response": "get all the fcps not reserved"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nattaches a volume to a specific FCP.", "response": "def _attach(self, fcp, assigner_id, target_wwpn, target_lun,\n multipath, os_version, mount_point):\n \"\"\"Attach a volume\n\n First, we need translate fcp into local wwpn, then\n dedicate fcp to the user if it's needed, after that\n call smt layer to call linux command\n \"\"\"\n LOG.info('Start to attach device to %s' % assigner_id)\n self.fcp_mgr.init_fcp(assigner_id)\n new = self.fcp_mgr.increase_fcp_usage(fcp, assigner_id)\n try:\n if new:\n self._dedicate_fcp(fcp, assigner_id)\n\n self._add_disk(fcp, assigner_id, target_wwpn, target_lun,\n 
multipath, os_version, mount_point)\n except exception.SDKBaseException as err:\n errmsg = 'rollback attach because error:' + err.format_message()\n LOG.error(errmsg)\n connections = self.fcp_mgr.decrease_fcp_usage(fcp, assigner_id)\n # if connections less than 1, undedicate the device\n if not connections:\n with zvmutils.ignore_errors():\n self._undedicate_fcp(fcp, assigner_id)\n raise exception.SDKBaseException(msg=errmsg)\n # TODO: other exceptions?\n\n LOG.info('Attaching device to %s is done.' % assigner_id)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _detach(self, fcp, assigner_id, target_wwpn, target_lun,\n multipath, os_version, mount_point):\n \"\"\"Detach a volume from a guest\"\"\"\n LOG.info('Start to detach device from %s' % assigner_id)\n connections = self.fcp_mgr.decrease_fcp_usage(fcp, assigner_id)\n\n try:\n self._remove_disk(fcp, assigner_id, target_wwpn, target_lun,\n multipath, os_version, mount_point)\n if not connections:\n self._undedicate_fcp(fcp, assigner_id)\n except (exception.SDKBaseException,\n exception.SDKSMTRequestFailed) as err:\n errmsg = 'rollback detach because error:' + err.format_message()\n LOG.error(errmsg)\n self.fcp_mgr.increase_fcp_usage(fcp, assigner_id)\n with zvmutils.ignore_errors():\n self._add_disk(fcp, assigner_id, target_wwpn, target_lun,\n multipath, os_version, mount_point)\n raise exception.SDKBaseException(msg=errmsg)\n\n LOG.info('Detaching device to %s is done.' 
% assigner_id)", "response": "Detach a volume from a guest"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ndetach a volume from a guest", "response": "def detach(self, connection_info):\n \"\"\"Detach a volume from a guest\n \"\"\"\n fcp = connection_info['zvm_fcp']\n fcp = fcp.lower()\n target_wwpn = connection_info['target_wwpn']\n target_lun = connection_info['target_lun']\n assigner_id = connection_info['assigner_id']\n assigner_id = assigner_id.upper()\n multipath = connection_info['multipath']\n os_version = connection_info['os_version']\n mount_point = connection_info['mount_point']\n\n if not zvmutils.check_userid_exist(assigner_id):\n LOG.error(\"Guest '%s' does not exist\" % assigner_id)\n raise exception.SDKObjectNotExistError(\n obj_desc=(\"Guest '%s'\" % assigner_id), modID='volume')\n else:\n self._detach(fcp, assigner_id, target_wwpn, target_lun,\n multipath, os_version, mount_point)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_volume_connector(self, assigner_id):\n\n empty_connector = {'zvm_fcp': [], 'wwpns': [], 'host': ''}\n\n # init fcp pool\n self.fcp_mgr.init_fcp(assigner_id)\n fcp_list = self.fcp_mgr.get_available_fcp()\n if not fcp_list:\n errmsg = \"No available FCP device found.\"\n LOG.warning(errmsg)\n return empty_connector\n wwpns = []\n for fcp_no in fcp_list:\n wwpn = self.fcp_mgr.get_wwpn(fcp_no)\n if not wwpn:\n errmsg = \"FCP device %s has no available WWPN.\" % fcp_no\n LOG.warning(errmsg)\n else:\n wwpns.append(wwpn)\n\n if not wwpns:\n errmsg = \"No available WWPN found.\"\n LOG.warning(errmsg)\n return empty_connector\n\n inv_info = self._smtclient.get_host_info()\n zvm_host = inv_info['zvm_host']\n if zvm_host == '':\n errmsg = \"zvm host not specified.\"\n LOG.warning(errmsg)\n return empty_connector\n\n connector = {'zvm_fcp': fcp_list,\n 'wwpns': wwpns,\n 'host': zvm_host}\n LOG.debug('get_volume_connector returns %s for %s' %\n 
(connector, assigner_id))\n return connector", "response": "Get connector information of the instance for attaching to volumes."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef salsa20_8(B, x, src, s_start, dest, d_start):\n\n # Merged blockxor for speed\n for i in xrange(16):\n x[i] = B[i] = B[i] ^ src[s_start + i]\n\n # This is the actual Salsa 20/8: four identical double rounds\n for i in xrange(4):\n a = (x[0]+x[12]) & 0xffffffff\n b = (x[5]+x[1]) & 0xffffffff\n x[4] ^= (a << 7) | (a >> 25)\n x[9] ^= (b << 7) | (b >> 25)\n a = (x[10]+x[6]) & 0xffffffff\n b = (x[15]+x[11]) & 0xffffffff\n x[14] ^= (a << 7) | (a >> 25)\n x[3] ^= (b << 7) | (b >> 25)\n a = (x[4]+x[0]) & 0xffffffff\n b = (x[9]+x[5]) & 0xffffffff\n x[8] ^= (a << 9) | (a >> 23)\n x[13] ^= (b << 9) | (b >> 23)\n a = (x[14]+x[10]) & 0xffffffff\n b = (x[3]+x[15]) & 0xffffffff\n x[2] ^= (a << 9) | (a >> 23)\n x[7] ^= (b << 9) | (b >> 23)\n a = (x[8]+x[4]) & 0xffffffff\n b = (x[13]+x[9]) & 0xffffffff\n x[12] ^= (a << 13) | (a >> 19)\n x[1] ^= (b << 13) | (b >> 19)\n a = (x[2]+x[14]) & 0xffffffff\n b = (x[7]+x[3]) & 0xffffffff\n x[6] ^= (a << 13) | (a >> 19)\n x[11] ^= (b << 13) | (b >> 19)\n a = (x[12]+x[8]) & 0xffffffff\n b = (x[1]+x[13]) & 0xffffffff\n x[0] ^= (a << 18) | (a >> 14)\n x[5] ^= (b << 18) | (b >> 14)\n a = (x[6]+x[2]) & 0xffffffff\n b = (x[11]+x[7]) & 0xffffffff\n x[10] ^= (a << 18) | (a >> 14)\n x[15] ^= (b << 18) | (b >> 14)\n a = (x[0]+x[3]) & 0xffffffff\n b = (x[5]+x[4]) & 0xffffffff\n x[1] ^= (a << 7) | (a >> 25)\n x[6] ^= (b << 7) | (b >> 25)\n a = (x[10]+x[9]) & 0xffffffff\n b = (x[15]+x[14]) & 0xffffffff\n x[11] ^= (a << 7) | (a >> 25)\n x[12] ^= (b << 7) | (b >> 25)\n a = (x[1]+x[0]) & 0xffffffff\n b = (x[6]+x[5]) & 0xffffffff\n x[2] ^= (a << 9) | (a >> 23)\n x[7] ^= (b << 9) | (b >> 23)\n a = (x[11]+x[10]) & 0xffffffff\n b = (x[12]+x[15]) & 0xffffffff\n x[8] ^= (a << 9) | (a >> 23)\n x[13] ^= (b << 9) | (b >> 23)\n a 
= (x[2]+x[1]) & 0xffffffff\n b = (x[7]+x[6]) & 0xffffffff\n x[3] ^= (a << 13) | (a >> 19)\n x[4] ^= (b << 13) | (b >> 19)\n a = (x[8]+x[11]) & 0xffffffff\n b = (x[13]+x[12]) & 0xffffffff\n x[9] ^= (a << 13) | (a >> 19)\n x[14] ^= (b << 13) | (b >> 19)\n a = (x[3]+x[2]) & 0xffffffff\n b = (x[4]+x[7]) & 0xffffffff\n x[0] ^= (a << 18) | (a >> 14)\n x[5] ^= (b << 18) | (b >> 14)\n a = (x[9]+x[8]) & 0xffffffff\n b = (x[14]+x[13]) & 0xffffffff\n x[10] ^= (a << 18) | (a >> 14)\n x[15] ^= (b << 18) | (b >> 14)\n\n # While we are handling the data, write it to the correct dest.\n # The latter half is still part of salsa20\n for i in xrange(16):\n dest[d_start + i] = B[i] = (x[i] + B[i]) & 0xffffffff", "response": "Salsa 20 / 8 algorithm."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef blockmix_salsa8(BY, Yi, r):\n\n start = (2 * r - 1) * 16\n X = BY[start:start+16] # BlockMix - 1\n tmp = [0]*16\n\n for i in xrange(2 * r): # BlockMix - 2\n #blockxor(BY, i * 16, X, 0, 16) # BlockMix - 3(inner)\n salsa20_8(X, tmp, BY, i * 16, BY, Yi + i*16) # BlockMix - 3(outer)\n #array_overwrite(X, 0, BY, Yi + (i * 16), 16) # BlockMix - 4\n\n for i in xrange(r): # BlockMix - 6\n BY[i * 16:(i * 16)+(16)] = BY[Yi + (i * 2) * 16:(Yi + (i * 2) * 16)+(16)]\n BY[(i + r) * 16:((i + r) * 16)+(16)] = BY[Yi + (i*2 + 1) * 16:(Yi + (i*2 + 1) * 16)+(16)]", "response": "Blockmix ; Used by SMix"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef smix(B, Bi, r, N, V, X):\n\n X[0:(0)+(32 * r)] = B[Bi:(Bi)+(32 * r)]\n\n for i in xrange(N): # ROMix - 2\n V[i * (32 * r):(i * (32 * r))+(32 * r)] = X[0:(0)+(32 * r)]\n blockmix_salsa8(X, 32 * r, r) # ROMix - 4\n\n for i in xrange(N): # ROMix - 6\n j = integerify(X, r) & (N - 1) # ROMix - 7\n blockxor(V, j * (32 * r), X, 0, 32 * r) # ROMix - 8(inner)\n blockmix_salsa8(X, 32 * r, r) # ROMix - 9(outer)\n\n B[Bi:(Bi)+(32 * r)] = X[0:(0)+(32 * r)]", 
"response": "SMix based on Salsa20 and 8"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn a key derived using the scrypt key-derivarion function N must be a power of two larger than 1 but no larger than 2 ** 63 (insane) r and p must be positive numbers such that r * p < 2 ** 30 The default values are: N -- 2**14 (~16k) r -- 8 p -- 1 Memory usage is proportional to N*r. Defaults require about 16 MiB. Time taken is proportional to N*p. Defaults take <100ms of a recent x86. The last one differs from libscrypt defaults, but matches the 'interactive' work factor from the original paper. For long term storage where runtime of key derivation is not a problem, you could use 16 as in libscrypt or better yet increase N if memory is plentiful.", "response": "def scrypt(password, salt, N=SCRYPT_N, r=SCRYPT_r, p=SCRYPT_p, olen=64):\n \"\"\"Returns a key derived using the scrypt key-derivarion function\n\n N must be a power of two larger than 1 but no larger than 2 ** 63 (insane)\n r and p must be positive numbers such that r * p < 2 ** 30\n\n The default values are:\n N -- 2**14 (~16k)\n r -- 8\n p -- 1\n\n Memory usage is proportional to N*r. Defaults require about 16 MiB.\n Time taken is proportional to N*p. Defaults take <100ms of a recent x86.\n\n The last one differs from libscrypt defaults, but matches the 'interactive'\n work factor from the original paper. 
For long term storage where runtime of\n key derivation is not a problem, you could use 16 as in libscrypt or better\n yet increase N if memory is plentiful.\n \"\"\"\n\n check_args(password, salt, N, r, p, olen)\n\n # Everything is lists of 32-bit uints for all but pbkdf2\n try:\n B = _pbkdf2('sha256', password, salt, 1, p * 128 * r)\n B = list(struct.unpack('<%dI' % (len(B) // 4), B))\n XY = [0] * (64 * r)\n V = [0] * (32 * r * N)\n except (MemoryError, OverflowError):\n raise ValueError(\"scrypt parameters don't fit in memory\")\n\n for i in xrange(p):\n smix(B, i * 32 * r, r, N, V, XY)\n\n B = struct.pack('<%dI' % len(B), *B)\n return _pbkdf2('sha256', password, B, 1, olen)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef scrypt_mcf(password, salt=None, N=SCRYPT_N, r=SCRYPT_r, p=SCRYPT_p,\n prefix=SCRYPT_MCF_PREFIX_DEFAULT):\n \"\"\"Derives a Modular Crypt Format hash using the scrypt KDF\n\n Parameter space is smaller than for scrypt():\n N must be a power of two larger than 1 but no larger than 2 ** 31\n r and p must be positive numbers between 1 and 255\n Salt must be a byte string 1-16 bytes long.\n\n If no salt is given, a random salt of 128+ bits is used. 
(Recommended.)\n \"\"\"\n return mcf_mod.scrypt_mcf(scrypt, password, salt, N, r, p, prefix)", "response": "Derives a Modular Crypt Format hash using the scrypt KDF"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nupdates the _versions attribute with the registered plotter methods", "response": "def _update_versions():\n \"\"\"Update :attr:`_versions` with the registered plotter methods\"\"\"\n for pm_name in plot._plot_methods:\n pm = getattr(plot, pm_name)\n plugin = pm._plugin\n if (plugin is not None and plugin not in _versions and\n pm.module in sys.modules):\n _versions.update(get_versions(key=lambda s: s == plugin))"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef multiple_subplots(rows=1, cols=1, maxplots=None, n=1, delete=True,\n for_maps=False, *args, **kwargs):\n \"\"\"\n Function to create subplots.\n\n This function creates so many subplots on so many figures until the\n specified number `n` is reached.\n\n Parameters\n ----------\n rows: int\n The number of subplots per rows\n cols: int\n The number of subplots per column\n maxplots: int\n The number of subplots per figure (if None, it will be row*cols)\n n: int\n number of subplots to create\n delete: bool\n If True, the additional subplots per figure are deleted\n for_maps: bool\n If True this is a simple shortcut for setting\n ``subplot_kw=dict(projection=cartopy.crs.PlateCarree())`` and is\n useful if you want to use the :attr:`~ProjectPlotter.mapplot`,\n :attr:`~ProjectPlotter.mapvector` or\n :attr:`~ProjectPlotter.mapcombined` plotting methods\n ``*args`` and ``**kwargs``\n anything that is passed to the :func:`matplotlib.pyplot.subplots`\n function\n\n Returns\n -------\n list\n list of maplotlib.axes.SubplotBase instances\"\"\"\n import matplotlib.pyplot as plt\n axes = np.array([])\n maxplots = maxplots or rows * cols\n kwargs.setdefault('figsize', [\n min(8.*cols, 16), min(6.5*rows, 12)])\n if 
for_maps:\n import cartopy.crs as ccrs\n subplot_kw = kwargs.setdefault('subplot_kw', {})\n subplot_kw['projection'] = ccrs.PlateCarree()\n for i in range(0, n, maxplots):\n fig, ax = plt.subplots(rows, cols, *args, **kwargs)\n try:\n axes = np.append(axes, ax.ravel()[:maxplots])\n if delete:\n for iax in range(maxplots, rows * cols):\n fig.delaxes(ax.ravel()[iax])\n except AttributeError: # got a single subplot\n axes = np.append(axes, [ax])\n if i + maxplots > n and delete:\n for ax2 in axes[n:]:\n fig.delaxes(ax2)\n axes = axes[:n]\n return axes", "response": "Function to create subplots on the same figure."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncalls the given func only from the main project", "response": "def _only_main(func):\n \"\"\"Call the given `func` only from the main project\"\"\"\n @wraps(func)\n def wrapper(self, *args, **kwargs):\n if not self.is_main:\n return getattr(self.main, func.__name__)(*args, **kwargs)\n return func(self, *args, **kwargs)\n return wrapper"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef gcp(main=False):\n if main:\n return project() if _current_project is None else _current_project\n else:\n return gcp(True) if _current_subproject is None else \\\n _current_subproject", "response": "Returns the current project or project depending on main flag."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _scp(p, main=False):\n global _current_subproject\n global _current_project\n if p is None:\n mp = project() if main or _current_project is None else \\\n _current_project\n _current_subproject = Project(main=mp)\n elif not main:\n _current_subproject = p\n else:\n _current_project = p", "response": "scp version that allows a bit more control over whether the project is a\n main project or not"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in 
Python 3 that\ncreates a new project with the given number", "response": "def project(num=None, *args, **kwargs):\n \"\"\"\n Create a new main project\n\n Parameters\n ----------\n num: int\n The number of the project\n %(Project.parameters.no_num)s\n\n Returns\n -------\n Project\n The with the given `num` (if it does not already exist, it is created)\n\n See Also\n --------\n scp: Sets the current project\n gcp: Returns the current project\n \"\"\"\n numbers = [project.num for project in _open_projects]\n if num in numbers:\n return _open_projects[numbers.index(num)]\n if num is None:\n num = max(numbers) + 1 if numbers else 1\n project = PROJECT_CLS.new(num, *args, **kwargs)\n _open_projects.append(project)\n return project"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nclosing the current project or the current project.", "response": "def close(num=None, figs=True, data=True, ds=True, remove_only=False):\n \"\"\"\n Close the project\n\n This method closes the current project (figures, data and datasets) or the\n project specified by `num`\n\n Parameters\n ----------\n num: int, None or 'all'\n if :class:`int`, it specifies the number of the project, if None, the\n current subproject is closed, if ``'all'``, all open projects are\n closed\n %(Project.close.parameters)s\n\n See Also\n --------\n Project.close\"\"\"\n kws = dict(figs=figs, data=data, ds=ds, remove_only=remove_only)\n cp_num = gcp(True).num\n got_cp = False\n if num is None:\n project = gcp()\n scp(None)\n project.close(**kws)\n elif num == 'all':\n for project in _open_projects[:]:\n project.close(**kws)\n got_cp = got_cp or project.main.num == cp_num\n del _open_projects[0]\n else:\n if isinstance(num, Project):\n project = num\n else:\n project = [project for project in _open_projects\n if project.num == num][0]\n project.close(**kws)\n try:\n _open_projects.remove(project)\n except ValueError:\n pass\n got_cp = got_cp or project.main.num == cp_num\n if got_cp:\n 
if _open_projects:\n # set last opened project to the current\n scp(_open_projects[-1])\n else:\n _scp(None, True)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nregister a plotter class for the specified project and returns the resulting tree structure.", "response": "def register_plotter(identifier, module, plotter_name, plotter_cls=None,\n sorter=True, plot_func=True, import_plotter=None,\n **kwargs):\n \"\"\"\n Register a :class:`psyplot.plotter.Plotter` for the projects\n\n This function registers plotters for the :class:`Project` class to allow\n a dynamical handling of different plotter classes.\n\n Parameters\n ----------\n %(Project._register_plotter.parameters.no_plotter_cls)s\n sorter: bool, optional\n If True, the :class:`Project` class gets a new property with the name\n of the specified `identifier` which allows you to access the instances\n that are plotted by the specified `plotter_name`\n plot_func: bool, optional\n If True, the :class:`ProjectPlotter` (the class that holds the\n plotting method for the :class:`Project` class and can be accessed via\n the :attr:`Project.plot` attribute) gets an additional method to plot\n via the specified `plotter_name` (see `Other Parameters` below.)\n import_plotter: bool, optional\n If True, the plotter is automatically imported, otherwise it is only\n imported when it is needed. 
If `import_plotter` is None, then it is\n determined by the :attr:`psyplot.rcParams` ``'project.auto_import'``\n item.\n\n Other Parameters\n ----------------\n %(ProjectPlotter._register_plotter.other_parameters)s\n \"\"\"\n if plotter_cls is None:\n if ((import_plotter is None and rcParams['project.auto_import']) or\n import_plotter):\n try:\n plotter_cls = getattr(import_module(module), plotter_name)\n except Exception as e:\n critical((\"Could not import %s!\\n\" % module) +\n e.message if six.PY2 else str(e))\n return\n if sorter:\n if hasattr(Project, identifier):\n raise ValueError(\n \"Project class already has a %s attribute\" % identifier)\n Project._register_plotter(\n identifier, module, plotter_name, plotter_cls)\n if plot_func:\n if hasattr(ProjectPlotter, identifier):\n raise ValueError(\n \"Project class already has a %s attribute\" % identifier)\n ProjectPlotter._register_plotter(\n identifier, module, plotter_name, plotter_cls, **kwargs)\n DatasetPlotter._register_plotter(\n identifier, module, plotter_name, plotter_cls, **kwargs)\n DataArrayPlotter._register_plotter(\n identifier, module, plotter_name, plotter_cls, **kwargs)\n if identifier not in registered_plotters:\n kwargs.update(dict(\n module=module, plotter_name=plotter_name, sorter=sorter,\n plot_func=plot_func, import_plotter=import_plotter))\n registered_plotters[identifier] = kwargs\n return"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef unregister_plotter(identifier, sorter=True, plot_func=True):\n d = registered_plotters.get(identifier, {})\n if sorter and hasattr(Project, identifier):\n delattr(Project, identifier)\n d['sorter'] = False\n if plot_func and hasattr(ProjectPlotter, identifier):\n for cls in [ProjectPlotter, DatasetPlotter, DataArrayPlotter]:\n delattr(cls, identifier)\n try:\n delattr(plot, '_' + identifier)\n except AttributeError:\n pass\n d['plot_func'] = False\n if sorter and plot_func:\n 
registered_plotters.pop(identifier, None)", "response": "Unregisters a plotter for the projects\n "} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _fmtos(self):\n plotters = self.plotters\n if len(plotters) == 0:\n return {}\n p0 = plotters[0]\n if len(plotters) == 1:\n return p0._fmtos\n return (getattr(p0, key) for key in set(p0).intersection(\n *map(set, plotters[1:])))", "response": "An iterator over formatoption objects containing only the formatoption whose keys are in all plotters in this\n list Contains only the formatoption whose keys are in all plotters in this\n list"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef figs(self):\n ret = utils.DefaultOrderedDict(lambda: self[1:0])\n for arr in self:\n if arr.psy.plotter is not None:\n ret[arr.psy.plotter.ax.get_figure()].append(arr)\n return OrderedDict(ret)", "response": "A mapping from figures to data objects with the plotter in this\n figure"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef axes(self):\n ret = utils.DefaultOrderedDict(lambda: self[1:0])\n for arr in self:\n if arr.psy.plotter is not None:\n ret[arr.psy.plotter.ax].append(arr)\n return OrderedDict(ret)", "response": "A mapping from axes to data objects with the plotter in this axes"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef logger(self):\n if not self.is_main:\n return self.main.logger\n try:\n return self._logger\n except AttributeError:\n name = '%s.%s.%s' % (self.__module__, self.__class__.__name__,\n self.num)\n self._logger = logging.getLogger(name)\n self.logger.debug('Initializing...')\n return self._logger", "response": "Returns the logger of this instance."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the 
documentation\ndef datasets(self):\n return {key: val['ds'] for key, val in six.iteritems(\n self._get_ds_descriptions(self.array_info(ds_description=['ds'])))}", "response": "A mapping from dataset numbers to datasets in this list"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nregistering a plotter in the project class.", "response": "def _register_plotter(cls, identifier, module, plotter_name,\n plotter_cls=None):\n \"\"\"\n Register a plotter in the :class:`Project` class to easy access it\n\n Parameters\n ----------\n identifier: str\n Name of the attribute that is used to filter for the instances\n belonging to this plotter\n module: str\n The module from where to import the `plotter_name`\n plotter_name: str\n The name of the plotter class in `module`\n plotter_cls: type\n The imported class of `plotter_name`. If None, it will be imported\n when it is needed\n \"\"\"\n if plotter_cls is not None: # plotter has already been imported\n def get_x(self):\n return self(plotter_cls)\n else:\n def get_x(self):\n return self(getattr(import_module(module), plotter_name))\n setattr(cls, identifier, property(get_x, doc=(\n \"List of data arrays that are plotted by :class:`%s.%s`\"\n \" plotters\") % (module, plotter_name)))\n cls._registered_plotters[identifier] = (module, plotter_name)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef disable(self):\n for arr in self:\n if arr.psy.plotter:\n arr.psy.plotter.disabled = True", "response": "Disables the plotters in this list"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncloses the current instance of the object and all its children.", "response": "def close(self, figs=True, data=False, ds=False, remove_only=False):\n \"\"\"\n Close this project instance\n\n Parameters\n ----------\n figs: bool\n Close the figures\n data: bool\n delete the arrays from the (main) project\n ds: bool\n 
If True, close the dataset as well\n remove_only: bool\n If True and `figs` is True, the figures are not closed but the\n plotters are removed\"\"\"\n import matplotlib.pyplot as plt\n close_ds = ds\n for arr in self[:]:\n if figs and arr.psy.plotter is not None:\n if remove_only:\n for fmto in arr.psy.plotter._fmtos:\n try:\n fmto.remove()\n except Exception:\n pass\n else:\n plt.close(arr.psy.plotter.ax.get_figure().number)\n arr.psy.plotter = None\n if data:\n self.remove(arr)\n if not self.is_main:\n try:\n self.main.remove(arr)\n except ValueError: # arr not in list\n pass\n if close_ds:\n if isinstance(arr, InteractiveList):\n for ds in [val['ds'] for val in six.itervalues(\n arr._get_ds_descriptions(\n arr.array_info(ds_description=['ds'],\n standardize_dims=False)))]:\n ds.close()\n else:\n arr.psy.base.close()\n if self.is_main and self is gcp(True) and data:\n scp(None)\n elif self.is_main and self.is_cmp:\n self.oncpchange.emit(self)\n elif self.main.is_cmp:\n self.oncpchange.emit(self.main)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nadd data to the current object.", "response": "def _add_data(self, plotter_cls, filename_or_obj, fmt={}, make_plot=True,\n draw=False, mf_mode=False, ax=None, engine=None, delete=True,\n share=False, clear=False, enable_post=None,\n concat_dim=_concat_dim_default, load=False,\n *args, **kwargs):\n \"\"\"\n Extract data from a dataset and visualize it with the given plotter\n\n Parameters\n ----------\n plotter_cls: type\n The subclass of :class:`psyplot.plotter.Plotter` to use for\n visualization\n filename_or_obj: filename, :class:`xarray.Dataset` or data store\n The object (or file name) to open. If not a dataset, the\n :func:`psyplot.data.open_dataset` will be used to open a dataset\n fmt: dict\n Formatoptions that shall be when initializing the plot (you can\n however also specify them as extra keyword arguments)\n make_plot: bool\n If True, the data is plotted at the end. 
Otherwise you have to\n call the :meth:`psyplot.plotter.Plotter.initialize_plot` method or\n the :meth:`psyplot.plotter.Plotter.reinit` method by yourself\n %(InteractiveBase.start_update.parameters.draw)s\n mf_mode: bool\n If True, the :func:`psyplot.open_mfdataset` method is used.\n Otherwise we use the :func:`psyplot.open_dataset` method which can\n open only one single dataset\n ax: None, tuple (x, y[, z]) or (list of) matplotlib.axes.Axes\n Specifies the subplots on which to plot the new data objects.\n\n - If None, a new figure will be created for each created plotter\n - If tuple (x, y[, z]), `x` specifies the number of rows, `y` the\n number of columns and the optional third parameter `z` the\n maximal number of subplots per figure.\n - If :class:`matplotlib.axes.Axes` (or list of those, e.g. created\n by the :func:`matplotlib.pyplot.subplots` function), the data\n will be plotted on these subplots\n %(open_dataset.parameters.engine)s\n %(multiple_subplots.parameters.delete)s\n share: bool, fmt key or list of fmt keys\n Determines whether the first created plotter shares it's\n formatoptions with the others. If True, all formatoptions are\n shared. Strings or list of strings specify the keys to share.\n clear: bool\n If True, axes are cleared before making the plot. This is only\n necessary if the `ax` keyword consists of subplots with projection\n that differs from the one that is needed\n enable_post: bool\n If True, the :attr:`~psyplot.plotter.Plotter.post` formatoption is\n enabled and post processing scripts are allowed. If ``None``, this\n parameter is set to True if there is a value given for the `post`\n formatoption in `fmt` or `kwargs`\n %(xarray.open_mfdataset.parameters.concat_dim)s\n This parameter only does have an effect if `mf_mode` is True.\n load: bool\n If True, load the complete dataset into memory before plotting.\n This might be useful if the data of other variables in the dataset\n has to be accessed multiple times, e.g. 
for unstructured grids.\n %(ArrayList.from_dataset.parameters.no_base)s\n\n Other Parameters\n ----------------\n %(ArrayList.from_dataset.other_parameters.no_args_kwargs)s\n ``**kwargs``\n Any other dimension or formatoption that shall be passed to `dims`\n or `fmt` respectively.\n\n Returns\n -------\n Project\n The subproject that contains the new (visualized) data array\"\"\"\n if not isinstance(filename_or_obj, xarray.Dataset):\n if mf_mode:\n filename_or_obj = open_mfdataset(filename_or_obj,\n engine=engine,\n concat_dim=concat_dim)\n else:\n filename_or_obj = open_dataset(filename_or_obj,\n engine=engine)\n if load:\n old = filename_or_obj\n filename_or_obj = filename_or_obj.load()\n old.close()\n\n fmt = dict(fmt)\n possible_fmts = list(plotter_cls._get_formatoptions())\n additional_fmt, kwargs = utils.sort_kwargs(\n kwargs, possible_fmts)\n fmt.update(additional_fmt)\n if enable_post is None:\n enable_post = bool(fmt.get('post'))\n # create the subproject\n sub_project = self.from_dataset(filename_or_obj, **kwargs)\n sub_project.main = self\n sub_project.no_auto_update = not (\n not sub_project.no_auto_update or not self.no_auto_update)\n # create the subplots\n proj = plotter_cls._get_sample_projection()\n if isinstance(ax, tuple):\n axes = iter(multiple_subplots(\n *ax, n=len(sub_project), subplot_kw={'projection': proj}))\n elif ax is None or isinstance(ax, (mpl.axes.SubplotBase,\n mpl.axes.Axes)):\n axes = repeat(ax)\n else:\n axes = iter(ax)\n clear = clear or (isinstance(ax, tuple) and proj is not None)\n for arr in sub_project:\n plotter_cls(arr, make_plot=(not bool(share) and make_plot),\n draw=False, ax=next(axes), clear=clear,\n project=self, enable_post=enable_post, **fmt)\n if share:\n if share is True:\n share = possible_fmts\n elif isinstance(share, six.string_types):\n share = [share]\n else:\n share = list(share)\n sub_project[0].psy.plotter.share(\n [arr.psy.plotter for arr in sub_project[1:]], keys=share,\n draw=False)\n if make_plot:\n 
for arr in sub_project:\n arr.psy.plotter.reinit(draw=False, clear=clear)\n if draw is None:\n draw = rcParams['auto_draw']\n if draw:\n sub_project.draw()\n if rcParams['auto_show']:\n self.show()\n self.extend(sub_project, new_name=True)\n if self is gcp(True):\n scp(sub_project)\n return sub_project"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef joined_attrs(self, delimiter=', ', enhanced=True, plot_data=False,\n keep_all=True):\n \"\"\"Join the attributes of the arrays in this project\n\n Parameters\n ----------\n %(join_dicts.parameters.delimiter)s\n enhanced: bool\n If True, the :meth:`psyplot.plotter.Plotter.get_enhanced_attrs`\n method is used, otherwise the :attr:`xarray.DataArray.attrs`\n attribute is used.\n plot_data: bool\n It True, use the :attr:`psyplot.plotter.Plotter.plot_data`\n attribute of the plotters rather than the raw data in this project\n %(join_dicts.parameters.keep_all)s\n\n Returns\n -------\n dict\n A mapping from the attribute to the joined attributes which are\n either strings or (if there is only one attribute value), the\n data type of the corresponding value\"\"\"\n if enhanced:\n all_attrs = [\n plotter.get_enhanced_attrs(\n getattr(plotter, 'plot_data' if plot_data else 'data'))\n for plotter in self.plotters]\n else:\n if plot_data:\n all_attrs = [plotter.plot_data.attrs\n for plotter in self.plotters]\n else:\n all_attrs = [arr.attrs for arr in self]\n return utils.join_dicts(all_attrs, delimiter=delimiter,\n keep_all=keep_all)", "response": "Returns a string with the joined attributes of the array in this project."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef export(self, output, tight=False, concat=True, close_pdf=None,\n use_time=False, **kwargs):\n \"\"\"Exports the figures of the project to one or more image files\n\n Parameters\n ----------\n output: str, iterable or 
matplotlib.backends.backend_pdf.PdfPages\n if string or list of strings, those define the names of the output\n files. Otherwise you may provide an instance of\n :class:`matplotlib.backends.backend_pdf.PdfPages` to save the\n figures in it.\n If string (or iterable of strings), attribute names in the\n xarray.DataArray.attrs attribute as well as index dimensions\n are replaced by the respective value (see examples below).\n Furthermore a single format string without key (e.g. %i, %s, %d,\n etc.) is replaced by a counter.\n tight: bool\n If True, it is tried to figure out the tight bbox of the figure\n (same as bbox_inches='tight')\n concat: bool\n if True and the output format is `pdf`, all figures are\n concatenated into one single pdf\n close_pdf: bool or None\n If True and the figures are concatenated into one single pdf,\n the resulting pdf instance is closed. If False it remains open.\n If None and `output` is a string, it is the same as\n ``close_pdf=True``, if None and `output` is neither a string nor an\n iterable, it is the same as ``close_pdf=False``\n use_time: bool\n If True, formatting strings for the\n :meth:`datetime.datetime.strftime` are expected to be found in\n `output` (e.g. ``'%m'``, ``'%Y'``, etc.). If so, other formatting\n strings must be escaped by double ``'%'`` (e.g. ``'%%i'`` instead\n of (``'%i'``))\n ``**kwargs``\n Any valid keyword for the :func:`matplotlib.pyplot.savefig`\n function\n\n Returns\n -------\n matplotlib.backends.backend_pdf.PdfPages or None\n a PdfPages instance if output is a string and close_pdf is False,\n otherwise None\n\n Examples\n --------\n Simply save all figures into one single pdf::\n\n >>> p = psy.gcp()\n >>> p.export('my_plots.pdf')\n\n Save all figures into separate pngs with increasing numbers (e.g.\n ``'my_plots_1.png'``)::\n\n >>> p.export('my_plots_%i.png')\n\n Save all figures into separate pngs with the name of the variables\n shown in each figure (e.g. 
``'my_plots_t2m.png'``)::\n\n >>> p.export('my_plots_%(name)s.png')\n\n Save all figures into separate pngs with the name of the variables\n shown in each figure and with increasing numbers (e.g.\n ``'my_plots_1_t2m.png'``)::\n\n >>> p.export('my_plots_%i_%(name)s.png')\n\n Specify the names for each figure directly via a list::\n\n >>> p.export(['my_plots1.pdf', 'my_plots2.pdf'])\n \"\"\"\n from matplotlib.backends.backend_pdf import PdfPages\n if tight:\n kwargs['bbox_inches'] = 'tight'\n\n if use_time:\n def insert_time(s, attrs):\n time = attrs[tname]\n try: # assume a valid datetime.datetime instance\n s = pd.to_datetime(time).strftime(s)\n except ValueError:\n pass\n return s\n\n tnames = self._get_tnames()\n tname = next(iter(tnames)) if len(tnames) == 1 else None\n else:\n def insert_time(s, attrs):\n return s\n\n tname = None\n\n if isinstance(output, six.string_types): # a single string\n out_fmt = kwargs.pop('format', os.path.splitext(output))[1][1:]\n if out_fmt.lower() == 'pdf' and concat:\n attrs = self.joined_attrs('-')\n if tname is not None and tname in attrs:\n output = insert_time(output, attrs)\n pdf = PdfPages(safe_modulo(output, attrs))\n\n def save(fig):\n pdf.savefig(fig, **kwargs)\n\n def close():\n if close_pdf is None or close_pdf:\n pdf.close()\n return\n return pdf\n else:\n def save(fig):\n attrs = self.figs[fig].joined_attrs('-')\n out = output\n if tname is not None and tname in attrs:\n out = insert_time(out, attrs)\n try:\n out = safe_modulo(out, i, print_warning=False)\n except TypeError:\n pass\n fig.savefig(safe_modulo(out, attrs), **kwargs)\n\n def close():\n pass\n elif isinstance(output, Iterable): # a list of strings\n output = cycle(output)\n\n def save(fig):\n attrs = self.figs[fig].joined_attrs('-')\n out = next(output)\n if tname is not None and tname in attrs:\n out = insert_time(out, attrs)\n try:\n out = safe_modulo(next(output), i, print_warning=False)\n except TypeError:\n pass\n fig.savefig(safe_modulo(out, attrs), 
**kwargs)\n\n def close():\n pass\n else: # an instances of matplotlib.backends.backend_pdf.PdfPages\n def save(fig):\n output.savefig(fig, **kwargs)\n\n def close():\n if close_pdf:\n output.close()\n for i, fig in enumerate(self.figs, 1):\n save(fig)\n return close()", "response": "Exports the figures of the project to one or more image files."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef share(self, base=None, keys=None, by=None, **kwargs):\n if by is not None:\n if base is not None:\n if hasattr(base, 'psy') or isinstance(base, Plotter):\n base = [base]\n if by.lower() in ['ax', 'axes']:\n bases = {ax: p[0] for ax, p in six.iteritems(\n Project(base).axes)}\n elif by.lower() in ['fig', 'figure']:\n bases = {fig: p[0] for fig, p in six.iteritems(\n Project(base).figs)}\n else:\n raise ValueError(\n \"*by* must be out of {'fig', 'figure', 'ax', 'axes'}. \"\n \"Not %s\" % (by, ))\n else:\n bases = {}\n projects = self.axes if by == 'axes' else self.figs\n for obj, p in projects.items():\n p.share(bases.get(obj), keys, **kwargs)\n else:\n plotters = self.plotters\n if not plotters:\n return\n if base is None:\n if len(plotters) == 1:\n return\n base = plotters[0]\n plotters = plotters[1:]\n elif not isinstance(base, Plotter):\n base = getattr(getattr(base, 'psy', base), 'plotter', base)\n base.share(plotters, keys=keys, **kwargs)", "response": "This method shares the formatoptions of one plotter with all the others."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef save_project(self, fname=None, pwd=None, pack=False, **kwargs):\n # store the figure informatoptions and array informations\n if fname is not None and pwd is None and not pack:\n pwd = os.path.dirname(fname)\n if pack and fname is not None:\n target_dir = os.path.dirname(fname)\n if not os.path.exists(target_dir):\n os.makedirs(target_dir)\n\n def tmp_it():\n from tempfile import 
NamedTemporaryFile\n while True:\n yield NamedTemporaryFile(\n dir=target_dir, suffix='.nc').name\n\n kwargs.setdefault('paths', tmp_it())\n if fname is not None:\n kwargs['copy'] = True\n\n _update_versions()\n ret = {'figs': dict(map(_ProjectLoader.inspect_figure, self.figs)),\n 'arrays': self.array_info(pwd=pwd, **kwargs),\n 'versions': _deepcopy(_versions)}\n if pack and fname is not None:\n # we get the filenames out of the results and copy the datasets\n # there. After that we check the filenames again and force them\n # to the desired directory\n from shutil import copyfile\n fnames = (f[0] for f in self._get_dsnames(ret['arrays']))\n alternative_paths = kwargs.pop('alternative_paths', {})\n counters = defaultdict(int)\n if kwargs.get('use_rel_paths', True):\n get_path = partial(os.path.relpath, start=target_dir)\n else:\n get_path = os.path.abspath\n for ds_fname in unique_everseen(chain(alternative_paths, fnames)):\n if ds_fname is None or utils.is_remote_url(ds_fname):\n continue\n dst_file = alternative_paths.get(\n ds_fname, os.path.join(target_dir, os.path.basename(\n ds_fname)))\n orig_dst_file = dst_file\n if counters[dst_file] and (\n not os.path.exists(dst_file) or\n not os.path.samefile(ds_fname, dst_file)):\n dst_file, ext = os.path.splitext(dst_file)\n dst_file += '-' + str(counters[orig_dst_file]) + ext\n if (not os.path.exists(dst_file) or\n not os.path.samefile(ds_fname, dst_file)):\n copyfile(ds_fname, dst_file)\n counters[orig_dst_file] += 1\n alternative_paths.setdefault(ds_fname, get_path(dst_file))\n ret['arrays'] = self.array_info(\n pwd=pwd, alternative_paths=alternative_paths, **kwargs)\n # store the plotter settings\n for arr, d in zip(self, six.itervalues(ret['arrays'])):\n if arr.psy.plotter is None:\n continue\n plotter = arr.psy.plotter\n d['plotter'] = {\n 'ax': _ProjectLoader.inspect_axes(plotter.ax),\n 'fmt': {key: getattr(plotter, key).value2pickle\n for key in plotter},\n 'cls': (plotter.__class__.__module__,\n 
plotter.__class__.__name__),\n 'shared': {}}\n d['plotter']['ax']['shared'] = set(\n other.psy.arr_name for other in self\n if other.psy.ax == plotter.ax)\n if plotter.ax._sharex:\n d['plotter']['ax']['sharex'] = next(\n (other.psy.arr_name for other in self\n if other.psy.ax == plotter.ax._sharex), None)\n if plotter.ax._sharey:\n d['plotter']['ax']['sharey'] = next(\n (other.psy.arr_name for other in self\n if other.psy.ax == plotter.ax._sharey), None)\n shared = d['plotter']['shared']\n for fmto in plotter._fmtos:\n if fmto.shared:\n shared[fmto.key] = [other_fmto.plotter.data.psy.arr_name\n for other_fmto in fmto.shared]\n if fname is not None:\n with open(fname, 'wb') as f:\n pickle.dump(ret, f)\n return None\n\n return ret", "response": "Save this project to a file."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef keys(self, *args, **kwargs):\n\n class TmpClass(Plotter):\n pass\n for fmto in self._fmtos:\n setattr(TmpClass, fmto.key, type(fmto)(fmto.key))\n return TmpClass.show_keys(*args, **kwargs)", "response": "Show the available formatoptions in this project."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nshows the available formatoptions and their summaries in this project.", "response": "def summaries(self, *args, **kwargs):\n \"\"\"\n Show the available formatoptions and their summaries in this project\n\n Parameters\n ----------\n %(Plotter.show_keys.parameters)s\n\n Other Parameters\n ----------------\n %(Plotter.show_keys.other_parameters)s\n\n Returns\n -------\n %(Plotter.show_keys.returns)s\"\"\"\n\n class TmpClass(Plotter):\n pass\n for fmto in self._fmtos:\n setattr(TmpClass, fmto.key, type(fmto)(fmto.key))\n return TmpClass.show_summaries(*args, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef docs(self, *args, **kwargs):\n\n class TmpClass(Plotter):\n 
pass\n for fmto in self._fmtos:\n setattr(TmpClass, fmto.key, type(fmto)(fmto.key))\n return TmpClass.show_docs(*args, **kwargs)", "response": "Show the available formatoptions in this project and their full docu"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef from_dataset(cls, *args, **kwargs):\n main = kwargs.pop('main', None)\n ret = super(Project, cls).from_dataset(*args, **kwargs)\n if main is not None:\n ret.main = main\n main.extend(ret, new_name=False)\n return ret", "response": "Construct an ArrayList instance from an existing base dataset."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nloading a project from a file or dict This classmethod allows to load a project that has been stored using the :meth:`save_project` method and reads all the data and creates the figures. Since the data is stored in external files when saving a project, make sure that the data is accessible under the relative paths as stored in the file `fname` or from the current working directory if `fname` is a dictionary. Alternatively use the `alternative_paths` parameter or the `pwd` parameter Parameters ---------- fname: str or dict The string might be the path to a file created with the :meth:`save_project` method, or it might be a dictionary from this method %(InteractiveBase.parameters.auto_update)s %(Project._add_data.parameters.make_plot)s %(InteractiveBase.start_update.parameters.draw)s alternative_axes: dict, None or list alternative axes instances to use - If it is None, the axes and figures from the saving point will be reproduced. - a dictionary should map from array names in the created project to matplotlib axes instances - a list should contain axes instances that will be used for iteration main: bool, optional If True, a new main project is created and returned. Otherwise (by default default) the data is added to the current main project. 
encoding: str The encoding to use for loading the project. If None, it is automatically determined by pickle. Note: Set this to ``'latin1'`` if using a project created with python2 on python3. enable_post: bool If True, the :attr:`~psyplot.plotter.Plotter.post` formatoption is enabled and post processing scripts are allowed. Do only set this parameter to ``True`` if you know you can trust the information in `fname` new_fig: bool If True (default) and `alternative_axes` is None, new figures are created if the figure already exists %(Project._add_data.parameters.clear)s pwd: str or None, optional Path to the working directory from where the data can be imported. If None and `fname` is the path to a file, `pwd` is set to the directory of this file. Otherwise the current working directory is used. %(ArrayList.from_dict.parameters.no_d|pwd)s Other Parameters ---------------- %(ArrayList.from_dict.parameters)s Returns ------- Project The project in state of the saving point", "response": "def load_project(cls, fname, auto_update=None, make_plot=True,\n draw=False, alternative_axes=None, main=False,\n encoding=None, enable_post=False, new_fig=True,\n clear=None, **kwargs):\n \"\"\"\n Load a project from a file or dict\n\n This classmethod allows to load a project that has been stored using\n the :meth:`save_project` method and reads all the data and creates the\n figures.\n\n Since the data is stored in external files when saving a project,\n make sure that the data is accessible under the relative paths\n as stored in the file `fname` or from the current working directory\n if `fname` is a dictionary. 
Alternatively use the `alternative_paths`\n parameter or the `pwd` parameter\n\n Parameters\n ----------\n fname: str or dict\n The string might be the path to a file created with the\n :meth:`save_project` method, or it might be a dictionary from this\n method\n %(InteractiveBase.parameters.auto_update)s\n %(Project._add_data.parameters.make_plot)s\n %(InteractiveBase.start_update.parameters.draw)s\n alternative_axes: dict, None or list\n alternative axes instances to use\n\n - If it is None, the axes and figures from the saving point will be\n reproduced.\n - a dictionary should map from array names in the created\n project to matplotlib axes instances\n - a list should contain axes instances that will be used for\n iteration\n main: bool, optional\n If True, a new main project is created and returned.\n Otherwise (by default default) the data is added to the current\n main project.\n encoding: str\n The encoding to use for loading the project. If None, it is\n automatically determined by pickle. Note: Set this to ``'latin1'``\n if using a project created with python2 on python3.\n enable_post: bool\n If True, the :attr:`~psyplot.plotter.Plotter.post` formatoption is\n enabled and post processing scripts are allowed. Do only set this\n parameter to ``True`` if you know you can trust the information in\n `fname`\n new_fig: bool\n If True (default) and `alternative_axes` is None, new figures are\n created if the figure already exists\n %(Project._add_data.parameters.clear)s\n pwd: str or None, optional\n Path to the working directory from where the data can be imported.\n If None and `fname` is the path to a file, `pwd` is set to the\n directory of this file. 
Otherwise the current working directory is\n used.\n %(ArrayList.from_dict.parameters.no_d|pwd)s\n\n Other Parameters\n ----------------\n %(ArrayList.from_dict.parameters)s\n\n Returns\n -------\n Project\n The project in state of the saving point\"\"\"\n from pkg_resources import iter_entry_points\n\n def get_ax_base(name, alternatives):\n ax_base = next(iter(obj(arr_name=name).axes), None)\n if ax_base is None:\n ax_base = next(iter(obj(arr_name=alternatives).axes), None)\n if ax_base is not None:\n alternatives.difference_update(obj(ax=ax_base).arr_names)\n return ax_base\n\n pwd = kwargs.pop('pwd', None)\n if isinstance(fname, six.string_types):\n with open(fname, 'rb') as f:\n pickle_kws = {} if not encoding else {'encoding': encoding}\n d = pickle.load(f, **pickle_kws)\n pwd = pwd or os.path.dirname(fname)\n else:\n d = dict(fname)\n pwd = pwd or getcwd()\n # check for patches of plugins\n for ep in iter_entry_points('psyplot', name='patches'):\n patches = ep.load()\n for arr_d in d.get('arrays').values():\n plotter_cls = arr_d.get('plotter', {}).get('cls')\n if plotter_cls is not None and plotter_cls in patches:\n # apply the patch\n patches[plotter_cls](arr_d['plotter'],\n d.get('versions', {}))\n fig_map = {}\n if alternative_axes is None:\n for fig_dict in six.itervalues(d.get('figs', {})):\n orig_num = fig_dict.get('num') or 1\n fig_map[orig_num] = _ProjectLoader.load_figure(\n fig_dict, new_fig=new_fig).number\n elif not isinstance(alternative_axes, dict):\n alternative_axes = cycle(iter(alternative_axes))\n obj = cls.from_dict(d['arrays'], pwd=pwd, **kwargs)\n if main:\n # we create a new project with the project factory to make sure\n # that everything is handled correctly\n obj = project(None, obj)\n axes = {}\n arr_names = obj.arr_names\n sharex = defaultdict(set)\n sharey = defaultdict(set)\n for arr, (arr_name, arr_dict) in zip(\n obj, filter(lambda t: t[0] in arr_names,\n six.iteritems(d['arrays']))):\n if not arr_dict.get('plotter'):\n 
continue\n plot_dict = arr_dict['plotter']\n plotter_cls = getattr(\n import_module(plot_dict['cls'][0]), plot_dict['cls'][1])\n ax = None\n if alternative_axes is not None:\n if isinstance(alternative_axes, dict):\n ax = alternative_axes.get(arr.arr_name)\n else:\n ax = next(alternative_axes, None)\n if ax is None and 'ax' in plot_dict:\n already_opened = plot_dict['ax'].get(\n 'shared', set()).intersection(axes)\n if already_opened:\n ax = axes[next(iter(already_opened))]\n else:\n plot_dict['ax'].pop('shared', None)\n plot_dict['ax']['fig'] = fig_map[\n plot_dict['ax'].get('fig') or 1]\n if plot_dict['ax'].get('sharex'):\n sharex[plot_dict['ax'].pop('sharex')].add(\n arr.psy.arr_name)\n if plot_dict['ax'].get('sharey'):\n sharey[plot_dict['ax'].pop('sharey')].add(\n arr.psy.arr_name)\n axes[arr.psy.arr_name] = ax = _ProjectLoader.load_axes(\n plot_dict['ax'])\n plotter_cls(\n arr, make_plot=False, draw=False, clear=False,\n ax=ax, project=obj.main, enable_post=enable_post,\n **plot_dict['fmt'])\n # handle shared x and y-axes\n for key, names in sharex.items():\n ax_base = get_ax_base(key, names)\n if ax_base is not None:\n ax_base.get_shared_x_axes().join(\n ax_base, *obj(arr_name=names).axes)\n for ax in obj(arr_name=names).axes:\n ax._sharex = ax_base\n for key, names in sharey.items():\n ax_base = get_ax_base(key, names)\n if ax_base is not None:\n ax_base.get_shared_y_axes().join(\n ax_base, *obj(arr_name=names).axes)\n for ax in obj(arr_name=names).axes:\n ax._sharey = ax_base\n for arr in obj.with_plotter:\n shared = d['arrays'][arr.psy.arr_name]['plotter'].get('shared', {})\n for key, arr_names in six.iteritems(shared):\n arr.psy.plotter.share(obj(arr_name=arr_names).plotters,\n keys=[key])\n if make_plot:\n for plotter in obj.plotters:\n plotter.reinit(\n draw=False,\n clear=clear or (\n clear is None and\n plotter_cls._get_sample_projection() is not None))\n if draw is None:\n draw = rcParams['auto_draw']\n if draw:\n obj.draw()\n if 
rcParams['auto_show']:\n obj.show()\n if auto_update is None:\n auto_update = rcParams['lists.auto_update']\n if not main:\n obj._main = gcp(True)\n obj.main.extend(obj, new_name=True)\n obj.no_auto_update = not auto_update\n scp(obj)\n return obj"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nsetting the current project to the given project.", "response": "def scp(cls, project):\n \"\"\"\n Set the current project\n\n Parameters\n ----------\n project: Project or None\n The project to set. If it is None, the current subproject is set\n to empty. If it is a sub project (see:attr:`Project.is_main`),\n the current subproject is set to this project. Otherwise it\n replaces the current main project\n\n See Also\n --------\n scp: The global version for setting the current project\n gcp: Returns the current project\n project: Creates a new project\"\"\"\n if project is None:\n _scp(None)\n cls.oncpchange.emit(gcp())\n elif not project.is_main:\n if project.main is not _current_project:\n _scp(project.main, True)\n cls.oncpchange.emit(project.main)\n _scp(project)\n cls.oncpchange.emit(project)\n else:\n _scp(project, True)\n cls.oncpchange.emit(project)\n sp = project[:]\n _scp(sp)\n cls.oncpchange.emit(sp)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncreating a new main project with the given parameters", "response": "def new(cls, num=None, *args, **kwargs):\n \"\"\"\n Create a new main project\n\n Parameters\n ----------\n num: int\n The number of the project\n %(Project.parameters.no_num)s\n\n Returns\n -------\n Project\n The with the given `num` (if it does not already exist, it is\n created)\n\n See Also\n --------\n scp: Sets the current project\n gcp: Returns the current project\n \"\"\"\n project = cls(*args, num=num, **kwargs)\n scp(project)\n return project"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef inspect_figure(fig):\n return fig.number, {\n 
'num': fig.number,\n 'figsize': (fig.get_figwidth(), fig.get_figheight()),\n 'dpi': fig.get_dpi() / getattr(fig.canvas, '_dpi_ratio', 1),\n 'facecolor': fig.get_facecolor(),\n 'edgecolor': fig.get_edgecolor(),\n 'frameon': fig.get_frameon(),\n 'tight_layout': fig.get_tight_layout(),\n 'subplotpars': vars(fig.subplotpars)}", "response": "Get the parameters (height, width, etc.) needed to create a figure"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef load_figure(d, new_fig=True):\n import matplotlib.pyplot as plt\n subplotpars = d.pop('subplotpars', None)\n if subplotpars is not None:\n subplotpars.pop('validate', None)\n subplotpars = mfig.SubplotParams(**subplotpars)\n if new_fig:\n nums = plt.get_fignums()\n if d.get('num') in nums:\n d['num'] = next(\n i for i in range(max(plt.get_fignums()) + 1, 0, -1)\n if i not in nums)\n return plt.figure(subplotpars=subplotpars, **d)", "response": "Create a figure from what is returned by inspect_figure"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ninspecting an axes or subplot to get the initialization parameters", "response": "def inspect_axes(ax):\n \"\"\"Inspect an axes or subplot to get the initialization parameters\"\"\"\n ret = {'fig': ax.get_figure().number}\n if mpl.__version__ < '2.0':\n ret['axisbg'] = ax.get_axis_bgcolor()\n else: # axisbg is deprecated\n ret['facecolor'] = ax.get_facecolor()\n proj = getattr(ax, 'projection', None)\n if proj is not None and not isinstance(proj, six.string_types):\n proj = (proj.__class__.__module__, proj.__class__.__name__)\n ret['projection'] = proj\n ret['visible'] = ax.get_visible()\n ret['spines'] = {}\n ret['zorder'] = ax.get_zorder()\n ret['yaxis_inverted'] = ax.yaxis_inverted()\n ret['xaxis_inverted'] = ax.xaxis_inverted()\n for key, val in ax.spines.items():\n ret['spines'][key] = {}\n for prop in ['linestyle', 'edgecolor', 'linewidth',\n 'facecolor', 'visible']:\n 
ret['spines'][key][prop] = getattr(val, 'get_' + prop)()\n if isinstance(ax, mfig.SubplotBase):\n sp = ax.get_subplotspec().get_topmost_subplotspec()\n ret['grid_spec'] = sp.get_geometry()[:2]\n ret['subplotspec'] = [sp.num1, sp.num2]\n ret['is_subplot'] = True\n else:\n ret['args'] = [ax.get_position(True).bounds]\n ret['is_subplot'] = False\n return ret"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ncreating an axes or subplot from what is returned by inspect_axes", "response": "def load_axes(d):\n \"\"\"Create an axes or subplot from what is returned by\n :meth:`inspect_axes`\"\"\"\n import matplotlib.pyplot as plt\n fig = plt.figure(d.pop('fig', None))\n proj = d.pop('projection', None)\n spines = d.pop('spines', None)\n invert_yaxis = d.pop('yaxis_inverted', None)\n invert_xaxis = d.pop('xaxis_inverted', None)\n if mpl.__version__ >= '2.0' and 'axisbg' in d: # axisbg is depreceated\n d['facecolor'] = d.pop('axisbg')\n elif mpl.__version__ < '2.0' and 'facecolor' in d:\n d['axisbg'] = d.pop('facecolor')\n if proj is not None and not isinstance(proj, six.string_types):\n proj = getattr(import_module(proj[0]), proj[1])()\n if d.pop('is_subplot', None):\n grid_spec = mpl.gridspec.GridSpec(*d.pop('grid_spec', (1, 1)))\n subplotspec = mpl.gridspec.SubplotSpec(\n grid_spec, *d.pop('subplotspec', (1, None)))\n return fig.add_subplot(subplotspec, projection=proj, **d)\n ret = fig.add_axes(*d.pop('args', []), projection=proj, **d)\n if spines is not None:\n for key, val in spines.items():\n ret.spines[key].update(val)\n if invert_xaxis:\n if ret.get_xlim()[0] < ret.get_xlim()[1]:\n ret.invert_xaxis()\n if invert_yaxis:\n if ret.get_ylim()[0] < ret.get_ylim()[1]:\n ret.invert_yaxis()\n return ret"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nprinting the plotmethods of this instance", "response": "def show_plot_methods(self):\n \"\"\"Print the plotmethods of this instance\"\"\"\n print_func = 
PlotterInterface._print_func\n if print_func is None:\n print_func = six.print_\n s = \"\\n\".join(\n \"%s\\n %s\" % t for t in six.iteritems(self._plot_methods))\n return print_func(s)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nregister a plotter for making plots for the given object.", "response": "def _register_plotter(cls, identifier, module, plotter_name,\n plotter_cls=None, summary='', prefer_list=False,\n default_slice=None, default_dims={},\n show_examples=True,\n example_call=\"filename, name=['my_variable'], ...\",\n plugin=None):\n \"\"\"\n Register a plotter for making plots\n\n This class method registeres a plot function for the :class:`Project`\n class under the name of the given `identifier`\n\n Parameters\n ----------\n %(Project._register_plotter.parameters)s\n\n Other Parameters\n ----------------\n prefer_list: bool\n Determines the `prefer_list` parameter in the `from_dataset`\n method. If True, the plotter is expected to work with instances of\n :class:`psyplot.InteractiveList` instead of\n :class:`psyplot.InteractiveArray`.\n %(ArrayList.from_dataset.parameters.default_slice)s\n default_dims: dict\n Default dimensions that shall be used for plotting (e.g.\n {'x': slice(None), 'y': slice(None)} for longitude-latitude plots)\n show_examples: bool, optional\n If True, examples how to access the plotter documentation are\n included in class documentation\n example_call: str, optional\n The arguments and keyword arguments that shall be included in the\n example of the generated plot method. 
This call will then appear as\n ``>>> psy.plot.%%(identifier)s(%%(example_call)s)`` in the\n documentation\n plugin: str\n The name of the plugin\n \"\"\"\n full_name = '%s.%s' % (module, plotter_name)\n if plotter_cls is not None: # plotter has already been imported\n docstrings.params['%s.formatoptions' % (full_name)] = \\\n plotter_cls.show_keys(\n indent=4, func=str,\n # include links in sphinx doc\n include_links=None)\n doc_str = ('Possible formatoptions are\\n\\n'\n '%%(%s.formatoptions)s') % full_name\n else:\n doc_str = ''\n\n summary = summary or (\n 'Open and plot data via :class:`%s.%s` plotters' % (\n module, plotter_name))\n\n if plotter_cls is not None:\n _versions.update(get_versions(key=lambda s: s == plugin))\n\n class PlotMethod(cls._plot_method_base_cls):\n __doc__ = cls._gen_doc(summary, full_name, identifier,\n example_call, doc_str, show_examples)\n\n _default_slice = default_slice\n _default_dims = default_dims\n _plotter_cls = plotter_cls\n _prefer_list = prefer_list\n _plugin = plugin\n\n _summary = summary\n\n setattr(cls, identifier, PlotMethod(identifier, module, plotter_name))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngenerating the documentation docstring for a PlotMethod", "response": "def _gen_doc(cls, summary, full_name, identifier, example_call, doc_str,\n show_examples):\n \"\"\"Generate the documentation docstring for a PlotMethod\"\"\"\n ret = docstrings.dedents(\"\"\"\n %s\n\n This plotting method adds data arrays and plots them via\n :class:`%s` plotters\n\n To plot data from a netCDF file type::\n\n >>> psy.plot.%s(%s)\n\n %s\"\"\" % (summary, full_name, identifier, example_call, doc_str))\n\n if show_examples:\n ret += '\\n\\n' + cls._gen_examples(identifier)\n return ret"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef check_data(self, ds, name, dims):\n if isinstance(name, six.string_types):\n name = [name]\n dims = [dims]\n else:\n dims = 
list(dims)\n variables = [ds[safe_list(n)[0]] for n in name]\n decoders = [CFDecoder.get_decoder(ds, var) for var in variables]\n default_slice = slice(None) if self._default_slice is None else \\\n self._default_slice\n for i, (dim_dict, var, decoder) in enumerate(zip(\n dims, variables, decoders)):\n corrected = decoder.correct_dims(var, dict(chain(\n six.iteritems(self._default_dims),\n dim_dict.items())))\n # now use the default slice (we don't do this before because the\n # `correct_dims` method doesn't use 'x', 'y', 'z' and 't' (as used\n # for the _default_dims) if the real dimension name is already in\n # the dictionary)\n for dim in var.dims:\n corrected.setdefault(dim, default_slice)\n dims[i] = [\n dim for dim, val in map(lambda t: (t[0], safe_list(t[1])),\n six.iteritems(corrected))\n if val and (len(val) > 1 or _is_slice(val[0]))]\n return self.plotter_cls.check_data(\n name, dims, [decoder.is_unstructured(var) for decoder, var in zip(\n decoders, variables)])", "response": "A validation method for the data shape of the object."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nadd new plots to the project holding the current data and the current time_of_work_on_data_set.", "response": "def _add_data(self, plotter_cls, *args, **kwargs):\n \"\"\"\n Add new plots to the project\n\n Parameters\n ----------\n %(ProjectPlotter._add_data.parameters.no_filename_or_obj)s\n\n Other Parameters\n ----------------\n %(ProjectPlotter._add_data.other_parameters)s\n\n Returns\n -------\n %(ProjectPlotter._add_data.returns)s\n \"\"\"\n # this method is just a shortcut to the :meth:`Project._add_data`\n # method but is reimplemented by subclasses as the\n # :class:`DatasetPlotter` or the :class:`DataArrayPlotter`\n return super(DatasetPlotter, self)._add_data(plotter_cls, self._ds,\n *args, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _gen_doc(cls, summary, full_name, 
identifier, example_call, doc_str,\n show_examples):\n \"\"\"Generate the documentation docstring for a PlotMethod\"\"\"\n # leave out the first argument\n example_call = ', '.join(map(str.strip, example_call.split(',')[1:]))\n ret = docstrings.dedents(\"\"\"\n %s\n\n This plotting method adds data arrays and plots them via\n :class:`%s` plotters\n\n To plot a variable in this dataset, type::\n\n >>> ds.psy.plot.%s(%s)\n\n %s\"\"\" % (summary, full_name, identifier, example_call, doc_str))\n\n if show_examples:\n ret += '\\n\\n' + cls._gen_examples(identifier)\n return ret", "response": "Generate the documentation docstring for a PlotMethod"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef check_data(self, *args, **kwargs):\n plotter_cls = self.plotter_cls\n da_list = self._project_plotter._da.psy.to_interactive_list()\n return plotter_cls.check_data(\n da_list.all_names, da_list.all_dims, da_list.is_unstructured)", "response": "Check whether the plotter of this plot method can visualize the data."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _add_data(self, plotter_cls, *args, **kwargs):\n # this method is just a shortcut to the :meth:`Project._add_data`\n # method but is reimplemented by subclasses as the\n # :class:`DatasetPlotter` or the :class:`DataArrayPlotter`\n return plotter_cls(self._da, *args, **kwargs)", "response": "Add the data array to the plotter."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef yaml_from_file(self, fpath):\n\n lookup = self._load_param_file(fpath)\n if not lookup:\n return\n\n content = \"\\n\".join(self.content)\n parsed = yaml.safe_load(content)\n # self.app.info(\"Params loaded is %s\" % parsed)\n # self.app.info(\"Lookup table looks like %s\" % lookup)\n new_content = list()\n for paramlist in parsed:\n if not isinstance(paramlist, dict):\n 
self.app.warn(\n (\"Invalid parameter definition ``%s``. Expected \"\n \"format: ``name: reference``. \"\n \" Skipping.\" % paramlist),\n (self.state_machine.node.source,\n self.state_machine.node.line))\n continue\n for name, ref in paramlist.items():\n if ref in lookup:\n new_content.append((name, lookup[ref]))\n else:\n self.app.warn(\n (\"No field definition for ``%s`` found in ``%s``. \"\n \" Skipping.\" % (ref, fpath)),\n (self.state_machine.node.source,\n self.state_machine.node.line))\n\n # self.app.info(\"New content %s\" % new_content)\n self.yaml = new_content", "response": "Collect Parameter stanzas from inline + file."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nlocate the libsodium C library", "response": "def get_libsodium():\n '''Locate the libsodium C library'''\n\n __SONAMES = (13, 10, 5, 4)\n # Import libsodium from system\n sys_sodium = ctypes.util.find_library('sodium')\n if sys_sodium is None:\n sys_sodium = ctypes.util.find_library('libsodium')\n\n if sys_sodium:\n try:\n return ctypes.CDLL(sys_sodium)\n except OSError:\n pass\n\n # Import from local path\n if sys.platform.startswith('win'):\n\n try:\n return ctypes.cdll.LoadLibrary('libsodium')\n except OSError:\n pass\n for soname_ver in __SONAMES:\n try:\n return ctypes.cdll.LoadLibrary(\n 'libsodium-{0}'.format(soname_ver)\n )\n except OSError:\n pass\n elif sys.platform.startswith('darwin'):\n try:\n return ctypes.cdll.LoadLibrary('libsodium.dylib')\n except OSError:\n try:\n libidx = __file__.find('lib')\n if libidx > 0:\n libpath = __file__[0:libidx+3] + '/libsodium.dylib'\n return ctypes.cdll.LoadLibrary(libpath)\n except OSError:\n pass\n else:\n try:\n return ctypes.cdll.LoadLibrary('libsodium.so')\n except OSError:\n pass\n\n for soname_ver in __SONAMES:\n try:\n return ctypes.cdll.LoadLibrary(\n 'libsodium.so.{0}'.format(soname_ver)\n )\n except OSError:\n pass"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 
that\ncreates a new plot of the specified data.", "response": "def make_plot(fnames=[], name=[], dims=None, plot_method=None,\n output=None, project=None, engine=None, formatoptions=None,\n tight=False, rc_file=None, encoding=None, enable_post=False,\n seaborn_style=None, output_project=None,\n concat_dim=get_default_value(xr.open_mfdataset, 'concat_dim'),\n chname={}):\n \"\"\"\n Eventually start the QApplication or only make a plot\n\n Parameters\n ----------\n fnames: list of str\n Either the filenames to show, or, if the `project` parameter is set,\n a list of `,`-separated filenames to make a mapping from the\n original filename to a new one\n name: list of str\n The variable names to plot if the `output` parameter is set\n dims: dict\n A mapping from coordinate names to integers if the `project` is not\n given\n plot_method: str\n The name of the plot_method to use\n output: str or list of str\n If set, the data is loaded and the figures are saved to the specified\n filename and no graphical user interface is shown\n project: str\n If set, the project located at the given file name is loaded\n engine: str\n The engine to use for opening the dataset (see\n :func:`psyplot.data.open_dataset`)\n formatoptions: dict\n A dictionary of formatoptions that is applied to the data visualized by\n the chosen `plot_method`\n tight: bool\n If True/set, it is tried to figure out the tight bbox of the figure and\n adjust the paper size of the `output` to it\n rc_file: str\n The path to a yaml configuration file that can be used to update the\n :attr:`~psyplot.config.rcsetup.rcParams`\n encoding: str\n The encoding to use for loading the project. If None, it is\n automatically determined by pickle. Note: Set this to ``'latin1'``\n if using a project created with python2 on python3.\n enable_post: bool\n Enable the :attr:`~psyplot.plotter.Plotter.post` processing\n formatoption. If True/set, post processing scripts are enabled in the\n given `project`. 
Only set this if you are sure that you can trust the\n given project file because it may be a security vulnerability.\n seaborn_style: str\n The name of the style of the seaborn package that can be used for\n the :func:`seaborn.set_style` function\n output_project: str\n The name of a project file to save the project to\n concat_dim: str\n The concatenation dimension if multiple files in `fnames` are\n provided\n chname: dict\n A mapping from variable names in the project to variable names in the\n datasets that should be used instead\n \"\"\"\n if project is not None and (name != [] or dims is not None):\n warn('The `name` and `dims` parameter are ignored if the `project`'\n ' parameter is set!')\n if rc_file is not None:\n rcParams.load_from_file(rc_file)\n\n if dims is not None and not isinstance(dims, dict):\n dims = dict(chain(*map(six.iteritems, dims)))\n\n if len(output) == 1:\n output = output[0]\n if not fnames and not project:\n raise ValueError(\n \"Either a filename or a project file must be provided if \"\n \"the output parameter is set!\")\n elif project is None and plot_method is None:\n raise ValueError(\n \"A plotting method must be provided if the output parameter \"\n \"is set and not the project!\")\n if seaborn_style is not None:\n import seaborn as sns\n sns.set_style(seaborn_style)\n import psyplot.project as psy\n if project is not None:\n fnames = [s.split(',') for s in fnames]\n chname = dict(chname)\n single_files = (l[0] for l in fnames if len(l) == 1)\n alternative_paths = defaultdict(lambda: next(single_files, None))\n alternative_paths.update([l for l in fnames if len(l) == 2])\n p = psy.Project.load_project(\n project, alternative_paths=alternative_paths,\n engine=engine, encoding=encoding, enable_post=enable_post,\n chname=chname)\n if formatoptions is not None:\n p.update(fmt=formatoptions)\n p.export(output, tight=tight)\n else:\n pm = getattr(psy.plot, plot_method, None)\n if pm is None:\n raise ValueError(\"Unknown plot method 
%s!\" % plot_method)\n kwargs = {'name': name} if name else {}\n p = pm(\n fnames, dims=dims or {}, engine=engine,\n fmt=formatoptions or {}, mf_mode=True, concat_dim=concat_dim,\n **kwargs)\n p.export(output, tight=tight)\n if output_project is not None:\n p.save_project(output_project)\n return"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn a parser to make that can be used to make plots or open files from the command line.", "response": "def get_parser(create=True):\n \"\"\"Return a parser to make that can be used to make plots or open files\n from the command line\n\n Returns\n -------\n psyplot.parser.FuncArgParser\n The :class:`argparse.ArgumentParser` instance\"\"\"\n #: The parse that is used to parse arguments from the command line\n epilog = docstrings.get_sections(docstrings.dedents(\"\"\"\n Examples\n --------\n\n Here are some examples on how to use psyplot from the command line.\n\n Plot the variable ``'t2m'`` in a netCDF file ``'myfile.nc'`` and save\n the plot to ``'plot.pdf'``::\n\n $ psyplot myfile.nc -n t2m -pm mapplot -o test.pdf\n\n Create two plots for ``'t2m'`` with the first and second timestep on\n the second vertical level::\n\n $ psyplot myfile.nc -n t2m -pm mapplot -o test.pdf -d t,0,1 z,1\n\n If you have save a project using the\n :meth:`psyplot.project.Project.save_project` method into a file named\n ``'project.pkl'``, you can replot this via::\n\n $ psyplot -p project.pkl -o test.pdf\n\n If you use a different dataset than the one you used in the project\n (e.g. 
``'other_ds.nc'``), you can replace it via::\n\n $ psyplot other_dataset.nc -p project.pkl -o test.pdf\n\n or explicitly via::\n\n $ psyplot old_ds.nc,other_ds.nc -p project.pkl -o test.pdf\n\n You can also load formatoptions from a configuration file, e.g.::\n\n $ echo 'title: my title' > fmt.yaml\n $ psyplot myfile.nc -n t2m -pm mapplot -fmt fmt.yaml -o test.pdf\n \"\"\"), 'parser', ['Examples'])\n\n if _on_rtd: # make a rubric examples section\n epilog = '.. rubric:: Examples\\n' + '\\n'.join(epilog.splitlines()[2:])\n\n parser = FuncArgParser(\n description=\"\"\"\n Load a dataset, make the plot and save the result to a file\"\"\",\n epilog=epilog,\n formatter_class=argparse.RawDescriptionHelpFormatter)\n\n info_grp = parser.add_argument_group(\n 'Info options',\n 'Options that print informations and quit afterwards')\n\n parser.update_arg('version', short='V', long='version', action='version',\n version=psyplot.__version__, if_existent=False,\n group=info_grp)\n\n parser.update_arg('all_versions', short='aV', long='all-versions',\n action=AllVersionsAction, if_existent=False,\n group=info_grp)\n\n parser.update_arg('list_plugins', short='lp', long='list-plugins',\n action=ListPluginsAction, if_existent=False,\n group=info_grp)\n parser.update_arg(\n 'list_plot_methods', short='lpm', long='list-plot-methods',\n action=ListPlotMethodsAction, if_existent=False, group=info_grp)\n parser.update_arg(\n 'list_datasets', short='lds', long='list-datasets',\n action=ListDsNamesAction, if_existent=False, group=info_grp,\n help=\"\"\"List the used dataset names in the given `project`.\"\"\")\n\n parser.setup_args(make_plot)\n\n output_grp = parser.add_argument_group(\n 'Output options',\n 'Options that only have an effect if the `-o` option is set.')\n\n parser.update_arg('fnames', positional=True, nargs='*')\n\n parser.update_arg('name', short='n', nargs='*', metavar='variable_name',\n const=None)\n\n parser.update_arg('dims', short='d', nargs='+', type=_load_dims,\n 
metavar='dim,val1[,val2[,...]]')\n\n pm_choices = {pm for pm, d in filter(\n lambda t: t[1].get('plot_func', True),\n six.iteritems(rcParams['project.plotters']))}\n if psyplot._project_imported:\n import psyplot.project as psy\n pm_choices.update(set(psy.plot._plot_methods))\n parser.update_arg('plot_method', short='pm', choices=pm_choices,\n metavar='{%s}' % ', '.join(map(repr, pm_choices)))\n\n parser.update_arg('output', short='o', group=output_grp)\n parser.update_arg('output_project', short='op', group=output_grp)\n\n parser.update_arg('project', short='p')\n\n parser.update_arg(\n 'formatoptions', short='fmt', type=_load_dict, help=\"\"\"\n The path to a yaml (``'.yml'`` or ``'.yaml'``) or pickle file\n defining a dictionary of formatoption that is applied to the data\n visualized by the chosen `plot_method`\"\"\", metavar='FILENAME')\n\n parser.update_arg(\n 'chname', type=lambda s: s.split(','), nargs='*', help=\"\"\"\n A mapping from variable names in the project to variable names in the\n datasets that should be used instead. 
Variable names should be\n separated by a comma.\"\"\", metavar='project-variable,variable-to-use')\n\n parser.update_arg('tight', short='t', group=output_grp)\n\n parser.update_arg('rc_file', short='rc')\n parser.pop_key('rc_file', 'metavar')\n\n parser.update_arg('encoding', short='e')\n\n parser.pop_key('enable_post', 'short')\n\n parser.update_arg('seaborn_style', short='sns')\n\n parser.update_arg('concat_dim', short='cd')\n\n if create:\n parser.create_arguments()\n\n return parser"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef scrypt(password, salt, N=SCRYPT_N, r=SCRYPT_r, p=SCRYPT_p, olen=64):\n check_args(password, salt, N, r, p, olen)\n\n if _scrypt_ll:\n out = ctypes.create_string_buffer(olen)\n if _scrypt_ll(password, len(password), salt, len(salt),\n N, r, p, out, olen):\n raise ValueError\n return out.raw\n\n if len(salt) != _scrypt_salt or r != 8 or (p & (p - 1)) or (N*p <= 512):\n return scr_mod.scrypt(password, salt, N, r, p, olen)\n\n s = next(i for i in range(1, 64) if 2**i == N)\n t = next(i for i in range(0, 30) if 2**i == p)\n m = 2**(10 + s)\n o = 2**(5 + t + s)\n if s > 53 or t + s > 58:\n raise ValueError\n out = ctypes.create_string_buffer(olen)\n if _scrypt(out, olen, password, len(password), salt, o, m) != 0:\n raise ValueError\n return out.raw", "response": "This function returns a key derived using the scrypt key - derivarion function."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef scrypt_mcf(password, salt=None, N=SCRYPT_N, r=SCRYPT_r, p=SCRYPT_p,\n prefix=SCRYPT_MCF_PREFIX_DEFAULT):\n \"\"\"Derives a Modular Crypt Format hash using the scrypt KDF\n\n Parameter space is smaller than for scrypt():\n N must be a power of two larger than 1 but no larger than 2 ** 31\n r and p must be positive numbers between 1 and 255\n Salt must be a byte string 1-16 bytes long.\n\n If no salt is given, a random salt of 
128+ bits is used. (Recommended.)\n \"\"\"\n if isinstance(password, unicode):\n password = password.encode('utf8')\n elif not isinstance(password, bytes):\n raise TypeError('password must be a unicode or byte string')\n if N < 2 or (N & (N - 1)):\n raise ValueError('scrypt N must be a power of 2 greater than 1')\n if p > 255 or p < 1:\n raise ValueError('scrypt_mcf p out of range [1,255]')\n if N > 2**31:\n raise ValueError('scrypt_mcf N out of range [2,2**31]')\n\n if (salt is not None or r != 8 or (p & (p - 1)) or (N*p <= 512) or\n prefix not in (SCRYPT_MCF_PREFIX_7, SCRYPT_MCF_PREFIX_s1,\n SCRYPT_MCF_PREFIX_ANY) or\n _scrypt_ll):\n return mcf_mod.scrypt_mcf(scrypt, password, salt, N, r, p, prefix)\n\n s = next(i for i in range(1, 32) if 2**i == N)\n t = next(i for i in range(0, 8) if 2**i == p)\n m = 2**(10 + s)\n o = 2**(5 + t + s)\n mcf = ctypes.create_string_buffer(102)\n if _scrypt_str(mcf, password, len(password), o, m) != 0:\n return mcf_mod.scrypt_mcf(scrypt, password, salt, N, r, p, prefix)\n\n if prefix in (SCRYPT_MCF_PREFIX_7, SCRYPT_MCF_PREFIX_ANY):\n return mcf.raw.strip(b'\\0')\n\n _N, _r, _p, salt, hash, olen = mcf_mod._scrypt_mcf_decode_7(mcf.raw[:-1])\n assert _N == N and _r == r and _p == p, (_N, _r, _p, N, r, p, o, m)\n return mcf_mod._scrypt_mcf_encode_s1(N, r, p, salt, hash)", "response": "Derives a Modular Crypt Format hash using the scrypt algorithm."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns True if the password matches the given MCF hash", "response": "def scrypt_mcf_check(mcf, password):\n \"\"\"Returns True if the password matches the given MCF hash\"\"\"\n if isinstance(password, unicode):\n password = password.encode('utf8')\n elif not isinstance(password, bytes):\n raise TypeError('password must be a unicode or byte string')\n if not isinstance(mcf, bytes):\n raise TypeError('MCF must be a byte string')\n if mcf_mod._scrypt_mcf_7_is_standard(mcf) and not _scrypt_ll:\n 
return _scrypt_str_chk(mcf, password, len(password)) == 0\n return mcf_mod.scrypt_mcf_check(scrypt, mcf, password)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ncreating a property that uses the _TempBool class.", "response": "def _temp_bool_prop(propname, doc=\"\", default=False):\n \"\"\"Creates a property that uses the :class:`_TempBool` class\n\n Parameters\n ----------\n propname: str\n The attribute name to use. The _TempBool instance will be stored in the\n ``'_' + propname`` attribute of the corresponding instance\n doc: str\n The documentation of the property\n default: bool\n The default value of the _TempBool class\"\"\"\n def getx(self):\n if getattr(self, '_' + propname, None) is not None:\n return getattr(self, '_' + propname)\n else:\n setattr(self, '_' + propname, _TempBool(default))\n return getattr(self, '_' + propname)\n\n def setx(self, value):\n getattr(self, propname).value = bool(value)\n\n def delx(self):\n getattr(self, propname).value = default\n\n return property(getx, setx, delx, doc)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nchecks whether a key is in a list of possible formatoptions and returns a list of similar sounding keys.", "response": "def check_key(key, possible_keys, raise_error=True,\n name='formatoption keyword',\n msg=(\"See show_fmtkeys function for possible formatoption \"\n \"keywords\"),\n *args, **kwargs):\n \"\"\"\n Checks whether the key is in a list of possible keys\n\n This function checks whether the given `key` is in `possible_keys` and if\n not looks for similar sounding keys\n\n Parameters\n ----------\n key: str\n Key to check\n possible_keys: list of strings\n a list of possible keys to use\n raise_error: bool\n If not True, a list of similar keys is returned\n name: str\n The name of the key that shall be used in the error message\n msg: str\n The additional message that shall be used if no close match to\n key is found\n 
``*args,**kwargs``\n They are passed to the :func:`difflib.get_close_matches` function\n (i.e. `n` to increase the number of returned similar keys and\n `cutoff` to change the sensitivity)\n\n Returns\n -------\n str\n The `key` if it is a valid string, else an empty string\n list\n A list of similar formatoption strings (if found)\n str\n An error message which includes\n\n Raises\n ------\n KeyError\n If the key is not a valid formatoption and `raise_error` is True\"\"\"\n if key not in possible_keys:\n similarkeys = get_close_matches(key, possible_keys, *args, **kwargs)\n if similarkeys:\n msg = ('Unknown %s %s! Possible similar '\n 'phrasings are %s.') % (name, key, ', '.join(similarkeys))\n else:\n msg = (\"Unknown %s %s! \") % (name, key) + msg\n if not raise_error:\n return '', similarkeys, msg\n raise KeyError(msg)\n else:\n return key, [key], ''"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nfunction to sort keyword arguments and sort them into dictionaries", "response": "def sort_kwargs(kwargs, *param_lists):\n \"\"\"Function to sort keyword arguments and sort them into dictionaries\n\n This function returns dictionaries that contain the keyword arguments\n from `kwargs` corresponding to the given iterables in ``*params``\n\n Parameters\n ----------\n kwargs: dict\n Original dictionary\n ``*param_lists``\n iterables of strings, each standing for a possible key in kwargs\n\n Returns\n -------\n list\n len(params) + 1 dictionaries. Each dictionary contains the items of\n `kwargs` corresponding to the specified list in ``*param_lists``. 
The\n last dictionary contains the remaining items\"\"\"\n return chain(\n ({key: kwargs.pop(key) for key in params.intersection(kwargs)}\n for params in map(set, param_lists)), [kwargs])"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef hashable(val):\n if val is None:\n return val\n try:\n hash(val)\n except TypeError:\n return repr(val)\n else:\n return val", "response": "Test if val is hashable and if not get its string representation"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef join_dicts(dicts, delimiter=None, keep_all=False):\n if not dicts:\n return {}\n if keep_all:\n all_keys = set(chain(*(d.keys() for d in dicts)))\n else:\n all_keys = set(dicts[0])\n for d in dicts[1:]:\n all_keys.intersection_update(d)\n ret = {}\n for key in all_keys:\n vals = {hashable(d.get(key, None)) for d in dicts} - {None}\n if len(vals) == 1:\n ret[key] = next(iter(vals))\n elif delimiter is None:\n ret[key] = vals\n else:\n ret[key] = delimiter.join(map(str, vals))\n return ret", "response": "Join multiple dictionaries into one node-level dictionary."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef check_guest_exist(check_index=0):\n\n def outer(f):\n @six.wraps(f)\n def inner(self, *args, **kw):\n userids = args[check_index]\n\n if isinstance(userids, list):\n # convert all userids to upper case\n userids = [uid.upper() for uid in userids]\n new_args = (args[:check_index] + (userids,) +\n args[check_index + 1:])\n else:\n # convert the userid to upper case\n userids = userids.upper()\n new_args = (args[:check_index] + (userids,) +\n args[check_index + 1:])\n userids = [userids]\n\n self._vmops.check_guests_exist_in_db(userids)\n\n return f(self, *new_args, **kw)\n return inner\n return outer", "response": "Decorator to check that guests exist in the database."} {"SOURCE": 
"codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef guest_start(self, userid):\n action = \"start guest '%s'\" % userid\n with zvmutils.log_and_reraise_sdkbase_error(action):\n self._vmops.guest_start(userid)", "response": "Power on a virtual machine."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef guest_stop(self, userid, **kwargs):\n\n action = \"stop guest '%s'\" % userid\n with zvmutils.log_and_reraise_sdkbase_error(action):\n self._vmops.guest_stop(userid, **kwargs)", "response": "Power off a virtual machine."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef guest_softstop(self, userid, **kwargs):\n\n action = \"soft stop guest '%s'\" % userid\n with zvmutils.log_and_reraise_sdkbase_error(action):\n self._vmops.guest_softstop(userid, **kwargs)", "response": "Issue a soft stop command to shutdown the OS in a virtual machine and then log the guest off z/VM.."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef guest_reset(self, userid):\n action = \"reset guest '%s'\" % userid\n with zvmutils.log_and_reraise_sdkbase_error(action):\n self._vmops.guest_reset(userid)", "response": "reset a virtual machine"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\npause a virtual machine.", "response": "def guest_pause(self, userid):\n \"\"\"Pause a virtual machine.\n\n :param str userid: the id of the virtual machine to be paused\n :returns: None\n \"\"\"\n action = \"pause guest '%s'\" % userid\n with zvmutils.log_and_reraise_sdkbase_error(action):\n self._vmops.guest_pause(userid)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef guest_unpause(self, userid):\n action = \"unpause guest '%s'\" % userid\n with 
zvmutils.log_and_reraise_sdkbase_error(action):\n self._vmops.guest_unpause(userid)", "response": "Unpause a virtual machine."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn the power state of the guest.", "response": "def guest_get_power_state(self, userid):\n \"\"\"Returns power state.\"\"\"\n action = \"get power state of guest '%s'\" % userid\n with zvmutils.log_and_reraise_sdkbase_error(action):\n return self._vmops.get_power_state(userid)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef guest_get_info(self, userid):\n action = \"get info of guest '%s'\" % userid\n with zvmutils.log_and_reraise_sdkbase_error(action):\n return self._vmops.get_info(userid)", "response": "Get the status of a virtual machine."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nretrieving disk pool usage information.", "response": "def host_diskpool_get_info(self, disk_pool=None):\n \"\"\" Retrieve diskpool information.\n :param str disk_pool: the disk pool info. It use ':' to separate\n disk pool type and pool name, eg \"ECKD:eckdpool\" or \"FBA:fbapool\"\n :returns: Dictionary describing disk pool usage info\n \"\"\"\n # disk_pool must be assigned. disk_pool default to None because\n # it is more convenient for users to just type function name when\n # they want to get the disk pool info of CONF.zvm.disk_pool\n if disk_pool is None:\n disk_pool = CONF.zvm.disk_pool\n\n if ':' not in disk_pool:\n msg = ('Invalid input parameter disk_pool, expect \":\" in'\n 'disk_pool, eg. 
ECKD:eckdpool')\n LOG.error(msg)\n raise exception.SDKInvalidInputFormat(msg)\n diskpool_type = disk_pool.split(':')[0].upper()\n diskpool_name = disk_pool.split(':')[1]\n if diskpool_type not in ('ECKD', 'FBA'):\n msg = ('Invalid disk pool type found in disk_pool, expect'\n 'disk_pool like ECKD:eckdpool or FBA:fbapool')\n LOG.error(msg)\n raise exception.SDKInvalidInputFormat(msg)\n\n action = \"get information of disk pool: '%s'\" % disk_pool\n with zvmutils.log_and_reraise_sdkbase_error(action):\n return self._hostops.diskpool_get_info(diskpool_name)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef image_delete(self, image_name):\n try:\n self._imageops.image_delete(image_name)\n except exception.SDKBaseException:\n LOG.error(\"Failed to delete image '%s'\" % image_name)\n raise", "response": "Delete image from image repository"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngets the root disk size of the image", "response": "def image_get_root_disk_size(self, image_name):\n \"\"\"Get the root disk size of the image\n\n :param image_name: the image name in image Repository\n :returns: the disk size in units CYL or BLK\n \"\"\"\n try:\n return self._imageops.image_get_root_disk_size(image_name)\n except exception.SDKBaseException:\n LOG.error(\"Failed to get root disk size units of image '%s'\" %\n image_name)\n raise"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef image_import(self, image_name, url, image_meta, remote_host=None):\n try:\n self._imageops.image_import(image_name, url, image_meta,\n remote_host=remote_host)\n except exception.SDKBaseException:\n LOG.error(\"Failed to import image '%s'\" % image_name)\n raise", "response": "Import image to zvmsdk image repository\n - set the local_host to be used to identify the image"} {"SOURCE": "codesearchnet", "instruction": "Explain what the 
following Python 3 code does\ndef image_query(self, imagename=None):\n try:\n return self._imageops.image_query(imagename)\n except exception.SDKBaseException:\n LOG.error(\"Failed to query image\")\n raise", "response": "Query the list of image info in image repository"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef image_export(self, image_name, dest_url, remote_host=None):\n try:\n return self._imageops.image_export(image_name, dest_url,\n remote_host)\n except exception.SDKBaseException:\n LOG.error(\"Failed to export image '%s'\" % image_name)\n raise", "response": "Export the image to the specified location"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef guest_deploy(self, userid, image_name, transportfiles=None,\n remotehost=None, vdev=None, hostname=None):\n \"\"\" Deploy the image to vm.\n\n :param userid: (str) the user id of the vm\n :param image_name: (str) the name of image that used to deploy the vm\n :param transportfiles: (str) the files that used to customize the vm\n :param remotehost: the server where the transportfiles located, the\n format is username@IP, eg nova@192.168.99.1\n :param vdev: (str) the device that image will be deploy to\n :param hostname: (str) the hostname of the vm. 
This parameter will be\n ignored if transportfiles present.\n \"\"\"\n action = (\"deploy image '%(img)s' to guest '%(vm)s'\" %\n {'img': image_name, 'vm': userid})\n with zvmutils.log_and_reraise_sdkbase_error(action):\n self._vmops.guest_deploy(userid, image_name, transportfiles,\n remotehost, vdev, hostname)", "response": "Deploy the image to vm."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef guest_capture(self, userid, image_name, capture_type='rootonly',\n compress_level=6):\n \"\"\" Capture the guest to generate a image\n\n :param userid: (str) the user id of the vm\n :param image_name: (str) the unique image name after capture\n :param capture_type: (str) the type of capture, the value can be:\n rootonly: indicate just root device will be captured\n alldisks: indicate all the devices of the userid will be\n captured\n :param compress_level: the compression level of the image, default\n is 6\n \"\"\"\n action = (\"capture guest '%(vm)s' to generate image '%(img)s'\" %\n {'vm': userid, 'img': image_name})\n with zvmutils.log_and_reraise_sdkbase_error(action):\n self._vmops.guest_capture(userid, image_name,\n capture_type=capture_type,\n compress_level=compress_level)", "response": "Capture the guest to generate a image"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef guest_create_nic(self, userid, vdev=None, nic_id=None,\n mac_addr=None, active=False):\n \"\"\" Create the nic for the vm, add NICDEF record into the user direct.\n\n :param str userid: the user id of the vm\n :param str vdev: nic device number, 1- to 4- hexadecimal digits\n :param str nic_id: nic identifier\n :param str mac_addr: mac address, it is only be used when changing\n the guest's user direct. 
Format should be xx:xx:xx:xx:xx:xx,\n and x is a hexadecimal digit\n :param bool active: whether add a nic on active guest system\n\n :returns: nic device number, 1- to 4- hexadecimal digits\n :rtype: str\n \"\"\"\n if mac_addr is not None:\n if not zvmutils.valid_mac_addr(mac_addr):\n raise exception.SDKInvalidInputFormat(\n msg=(\"Invalid mac address, format should be \"\n \"xx:xx:xx:xx:xx:xx, and x is a hexadecimal digit\"))\n return self._networkops.create_nic(userid, vdev=vdev, nic_id=nic_id,\n mac_addr=mac_addr, active=active)", "response": "Create the nic for the guest."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef guest_delete_nic(self, userid, vdev, active=False):\n self._networkops.delete_nic(userid, vdev, active=active)", "response": "delete the nic for the guest"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nget the definition info for the specified guest vm.", "response": "def guest_get_definition_info(self, userid, **kwargs):\n \"\"\"Get definition info for the specified guest vm, also could be used\n to check specific info.\n\n :param str userid: the user id of the guest vm\n :param dict kwargs: Dictionary used to check specific info in user\n direct. 
Valid keywords for kwargs:\n nic_coupled=, where is the virtual\n device number of the nic to be checked the couple\n status.\n :returns: Dictionary describing user direct and check info result\n :rtype: dict\n \"\"\"\n action = \"get the definition info of guest '%s'\" % userid\n with zvmutils.log_and_reraise_sdkbase_error(action):\n return self._vmops.get_definition_info(userid, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nmove an eligible running z/VM to another z/VM system and test for vm.", "response": "def guest_live_migrate(self, userid, dest_zcc_userid, destination,\n parms, lgr_action):\n \"\"\"Move an eligible, running z/VM(R) virtual machine transparently\n from one z/VM system to another within an SSI cluster.\n\n :param userid: (str) the userid of the vm to be relocated or tested\n :param dest_zcc_userid: (str) the userid of zcc on destination\n :param destination: (str) the system ID of the z/VM system to which\n the specified vm will be relocated or tested.\n :param parms: (dict) a dictionary of options for relocation.\n It has one dictionary that contains some of the below keys:\n {'maxtotal': i,\n 'maxquiesce': i,\n 'immediate': str}\n\n In which, 'maxtotal':indicates the maximum total time\n (in seconds)\n that the command issuer is willing to\n wait for the entire relocation\n to complete or -1 to indicate there is no limit for time.\n 'maxquiesce':indicates the maximum quiesce time\n for this relocation.\n This is the amount of time (in seconds)\n a virtual machine may be stopped\n during a relocation attempt or -1 to indicate\n there is no limit for time.\n 'immediate':If present, immediate=YES is set,\n which causes the VMRELOCATE command\n to do one early pass through virtual machine storage\n and then go directly to the quiesce stage.\n :param lgr_action: (str) indicates the action is move or test for vm.\n\n \"\"\"\n if lgr_action.lower() == 'move':\n if dest_zcc_userid == '':\n errmsg = 
(\"'dest_zcc_userid' is required if the value of \"\n \"'lgr_action' equals 'move'.\")\n LOG.error(errmsg)\n raise exception.SDKMissingRequiredInput(msg=errmsg)\n\n # Add authorization for new zcc.\n cmd = ('echo -n %s > /etc/iucv_authorized_userid\\n' %\n dest_zcc_userid)\n rc = self._smtclient.execute_cmd(userid, cmd)\n if rc != 0:\n err_msg = (\"Add authorization for new zcc failed\")\n LOG.error(err_msg)\n\n # Live_migrate the guest\n operation = \"Move guest '%s' to SSI '%s'\" % (userid, destination)\n with zvmutils.log_and_reraise_sdkbase_error(operation):\n self._vmops.live_migrate_vm(userid, destination,\n parms, lgr_action)\n comments = self._GuestDbOperator.get_comments_by_userid(userid)\n comments['migrated'] = 1\n action = \"update guest '%s' in database\" % userid\n with zvmutils.log_and_reraise_sdkbase_error(action):\n self._GuestDbOperator.update_guest_by_userid(userid,\n comments=comments)\n if lgr_action.lower() == 'test':\n operation = \"Test move guest '%s' to SSI '%s'\" % (userid,\n destination)\n with zvmutils.log_and_reraise_sdkbase_error(operation):\n self._vmops.live_migrate_vm(userid, destination,\n parms, lgr_action)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncreate a new virtual machine in zvm.", "response": "def guest_create(self, userid, vcpus, memory, disk_list=None,\n user_profile=CONF.zvm.user_profile,\n max_cpu=CONF.zvm.user_default_max_cpu,\n max_mem=CONF.zvm.user_default_max_memory,\n ipl_from='', ipl_param='', ipl_loadparam=''):\n \"\"\"create a vm in z/VM\n\n :param userid: (str) the userid of the vm to be created\n :param vcpus: (int) amount of vcpus\n :param memory: (int) size of memory in MB\n :param disk_list: (dict) a list of disks info for the guest.\n It has one dictionary that contain some of the below keys for\n each disk, the root disk should be the first element in the\n list, the format is:\n {'size': str,\n 'format': str,\n 'is_boot_disk': bool,\n 'disk_pool': str}\n\n In 
which, 'size': case insensitive, the unit can be in\n Megabytes (M), Gigabytes (G), or number of cylinders/blocks, eg\n 512M, 1g or just 2000.\n 'format': can be ext2, ext3, ext4, xfs and none.\n 'is_boot_disk': For root disk, this key must be set to indicate\n the image that will be deployed on this disk.\n 'disk_pool': optional, if not specified, the disk will be\n created by using the value from configure file,the format is\n ECKD:eckdpoolname or FBA:fbapoolname.\n\n For example:\n [{'size': '1g',\n 'is_boot_disk': True,\n 'disk_pool': 'ECKD:eckdpool1'},\n {'size': '200000',\n 'disk_pool': 'FBA:fbapool1',\n 'format': 'ext3'}]\n In this case it will create one disk 0100(in case the vdev\n for root disk is 0100) with size 1g from ECKD disk pool\n eckdpool1 for guest , then set IPL 0100 in guest's user\n directory, and it will create 0101 with 200000 blocks from\n FBA disk pool fbapool1, and formated with ext3.\n :param user_profile: (str) the profile for the guest\n :param max_cpu: (int) the maximum number of virtual cpu this user can\n define. The value should be a decimal value between 1 and 64.\n :param max_mem: (str) the maximum size of memory the user can define.\n The value should be specified by 1-4 bits of number suffixed by\n either M (Megabytes) or G (Gigabytes). And the number should be\n an integer.\n :param ipl_from: (str) where to ipl the guest from, it can be given\n by guest input param, e.g CMS.\n :param ipl_param: the param to use when IPL for as PARM\n :param ipl_loadparam: the param to use when IPL for as LOADPARM\n \"\"\"\n userid = userid.upper()\n if disk_list:\n for disk in disk_list:\n if not isinstance(disk, dict):\n errmsg = ('Invalid \"disk_list\" input, it should be a '\n 'dictionary. 
Details could be found in doc.')\n LOG.error(errmsg)\n raise exception.SDKInvalidInputFormat(msg=errmsg)\n # 'size' is required for each disk\n if 'size' not in disk.keys():\n errmsg = ('Invalid \"disk_list\" input, \"size\" is required '\n 'for each disk.')\n LOG.error(errmsg)\n raise exception.SDKInvalidInputFormat(msg=errmsg)\n # 'disk_pool' format check\n disk_pool = disk.get('disk_pool') or CONF.zvm.disk_pool\n if ':' not in disk_pool or (disk_pool.split(':')[0].upper()\n not in ['ECKD', 'FBA']):\n errmsg = (\"Invalid disk_pool input, it should be in format\"\n \" ECKD:eckdpoolname or FBA:fbapoolname\")\n LOG.error(errmsg)\n raise exception.SDKInvalidInputFormat(msg=errmsg)\n # 'format' value check\n if ('format' in disk.keys()) and (disk['format'].lower() not in\n ('ext2', 'ext3', 'ext4',\n 'xfs', 'none')):\n errmsg = (\"Invalid disk_pool input, supported 'format' \"\n \"includes 'ext2', 'ext3', 'ext4', 'xfs', 'none'\")\n LOG.error(errmsg)\n raise exception.SDKInvalidInputFormat(msg=errmsg)\n\n action = \"create guest '%s'\" % userid\n with zvmutils.log_and_reraise_sdkbase_error(action):\n return self._vmops.create_vm(userid, vcpus, memory, disk_list,\n user_profile, max_cpu, max_mem,\n ipl_from, ipl_param, ipl_loadparam)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef guest_live_resize_cpus(self, userid, cpu_cnt):\n action = \"live resize guest '%s' to have '%i' virtual cpus\" % (userid,\n cpu_cnt)\n LOG.info(\"Begin to %s\" % action)\n with zvmutils.log_and_reraise_sdkbase_error(action):\n self._vmops.live_resize_cpus(userid, cpu_cnt)\n LOG.info(\"%s successfully.\" % action)", "response": "Live resize virtual cpus of guests."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nlive resize memory of guests.", "response": "def guest_live_resize_mem(self, userid, size):\n \"\"\"Live resize memory of guests.\n\n :param userid: (str) the userid of the guest 
to be live resized\n :param size: (str) The memory size that the guest should have\n in available status after live resize.\n The value should be specified by 1-4 bits of number suffixed by\n either M (Megabytes) or G (Gigabytes). And the number should be\n an integer.\n\n \"\"\"\n action = \"live resize guest '%s' to have '%s' memory\" % (userid,\n size)\n LOG.info(\"Begin to %s\" % action)\n with zvmutils.log_and_reraise_sdkbase_error(action):\n self._vmops.live_resize_memory(userid, size)\n LOG.info(\"%s successfully.\" % action)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef guest_resize_mem(self, userid, size):\n action = \"resize guest '%s' to have '%s' memory\" % (userid, size)\n LOG.info(\"Begin to %s\" % action)\n with zvmutils.log_and_reraise_sdkbase_error(action):\n self._vmops.resize_memory(userid, size)\n LOG.info(\"%s successfully.\" % action)", "response": "Resize memory of guests."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef guest_create_disks(self, userid, disk_list):\n if disk_list == [] or disk_list is None:\n # nothing to do\n LOG.debug(\"No disk specified when calling guest_create_disks, \"\n \"nothing happened\")\n return\n\n action = \"create disks '%s' for guest '%s'\" % (str(disk_list), userid)\n with zvmutils.log_and_reraise_sdkbase_error(action):\n return self._vmops.create_disks(userid, disk_list)", "response": "This function creates a list of root disks for a guest vm."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ndelete disks from an existing guest vm.", "response": "def guest_delete_disks(self, userid, disk_vdev_list):\n \"\"\"Delete disks from an existing guest vm.\n\n :param userid: (str) the userid of the vm to be deleted\n :param disk_vdev_list: (list) the vdev list of disks to be deleted,\n for example: ['0101', '0102']\n \"\"\"\n action = \"delete disks '%s' from guest '%s'\" % 
(str(disk_vdev_list),\n userid)\n with zvmutils.log_and_reraise_sdkbase_error(action):\n self._vmops.delete_disks(userid, disk_vdev_list)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef guest_nic_couple_to_vswitch(self, userid, nic_vdev,\n vswitch_name, active=False):\n \"\"\" Couple nic device to specified vswitch.\n\n :param str userid: the user's name who owns the nic\n :param str nic_vdev: nic device number, 1- to 4- hexadecimal digits\n :param str vswitch_name: the name of the vswitch\n :param bool active: whether make the change on active guest system\n \"\"\"\n self._networkops.couple_nic_to_vswitch(userid, nic_vdev,\n vswitch_name, active=active)", "response": "Couple nic device to specified vswitch."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef guest_nic_uncouple_from_vswitch(self, userid, nic_vdev,\n active=False):\n \"\"\" Disconnect nic device from network.\n\n :param str userid: the user's name who owns the nic\n :param str nic_vdev: nic device number, 1- to 4- hexadecimal digits\n :param bool active: whether make the change on active guest system\n \"\"\"\n self._networkops.uncouple_nic_from_vswitch(userid, nic_vdev,\n active=active)", "response": "Unlink nic device from vswitch"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncreates a new virtual switch.", "response": "def vswitch_create(self, name, rdev=None, controller='*',\n connection='CONNECT', network_type='ETHERNET',\n router=\"NONROUTER\", vid='UNAWARE', port_type='ACCESS',\n gvrp='GVRP', queue_mem=8, native_vid=1,\n persist=True):\n \"\"\" Create vswitch.\n\n :param str name: the vswitch name\n :param str rdev: the real device number, a maximum of three devices,\n all 1-4 characters in length, delimited by blanks. 
'NONE'\n may also be specified\n :param str controller: the vswitch's controller, it could be the userid\n controlling the real device, or '*' to specifies that any\n available controller may be used\n :param str connection:\n - CONnect:\n Activate the real device connection.\n - DISCONnect:\n Do not activate the real device connection.\n - NOUPLINK:\n The vswitch will never have connectivity through\n the UPLINK port\n :param str network_type: Specifies the transport mechanism to be used\n for the vswitch, as follows: IP, ETHERNET\n :param str router:\n - NONrouter:\n The OSA-Express device identified in\n real_device_address= will not act as a router to the\n vswitch\n - PRIrouter:\n The OSA-Express device identified in\n real_device_address= will act as a primary router to the\n vswitch\n - Note: If the network_type is ETHERNET, this value must be\n unspecified, otherwise, if this value is unspecified, default\n is NONROUTER\n :param str/int vid: the VLAN ID. This can be any of the following\n values: UNAWARE, AWARE or 1-4094\n :param str port_type:\n - ACCESS:\n The default porttype attribute for\n guests authorized for the virtual switch.\n The guest is unaware of VLAN IDs and sends and\n receives only untagged traffic\n - TRUNK:\n The default porttype attribute for\n guests authorized for the virtual switch.\n The guest is VLAN aware and sends and receives tagged\n traffic for those VLANs to which the guest is authorized.\n If the guest is also authorized to the natvid, untagged\n traffic sent or received by the guest is associated with\n the native VLAN ID (natvid) of the virtual switch.\n :param str gvrp:\n - GVRP:\n Indicates that the VLAN IDs in use on the virtual\n switch should be registered with GVRP-aware switches on the\n LAN. This provides dynamic VLAN registration and VLAN\n registration removal for networking switches. 
This\n eliminates the need to manually configure the individual\n port VLAN assignments.\n - NOGVRP:\n Do not register VLAN IDs with GVRP-aware switches on\n the LAN. When NOGVRP is specified VLAN port assignments\n must be configured manually\n :param int queue_mem: A number between 1 and 8, specifying the QDIO\n buffer size in megabytes.\n :param int native_vid: the native vlan id, 1-4094 or None\n :param bool persist: whether create the vswitch in the permanent\n configuration for the system\n \"\"\"\n if ((queue_mem < 1) or (queue_mem > 8)):\n errmsg = ('API vswitch_create: Invalid \"queue_mem\" input, '\n 'it should be 1-8')\n raise exception.SDKInvalidInputFormat(msg=errmsg)\n\n if isinstance(vid, int) or vid.upper() != 'UNAWARE':\n if ((native_vid is not None) and\n ((native_vid < 1) or (native_vid > 4094))):\n errmsg = ('API vswitch_create: Invalid \"native_vid\" input, '\n 'it should be 1-4094 or None')\n raise exception.SDKInvalidInputFormat(msg=errmsg)\n\n if network_type.upper() == 'ETHERNET':\n router = None\n\n self._networkops.add_vswitch(name, rdev=rdev, controller=controller,\n connection=connection,\n network_type=network_type,\n router=router, vid=vid,\n port_type=port_type, gvrp=gvrp,\n queue_mem=queue_mem,\n native_vid=native_vid,\n persist=persist)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nget the console output of the guest virtual machine.", "response": "def guest_get_console_output(self, userid):\n \"\"\"Get the console output of the guest virtual machine.\n\n :param str userid: the user id of the vm\n :returns: console log string\n :rtype: str\n \"\"\"\n action = \"get the console output of guest '%s'\" % userid\n with zvmutils.log_and_reraise_sdkbase_error(action):\n output = self._vmops.get_console_output(userid)\n\n return output"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ndelete guest. 
:param userid: the user id of the vm", "response": "def guest_delete(self, userid):\n \"\"\"Delete guest.\n\n :param userid: the user id of the vm\n \"\"\"\n\n # check guest exist in database or not\n userid = userid.upper()\n if not self._vmops.check_guests_exist_in_db(userid, raise_exc=False):\n if zvmutils.check_userid_exist(userid):\n LOG.error(\"Guest '%s' does not exist in guests database\" %\n userid)\n raise exception.SDKObjectNotExistError(\n obj_desc=(\"Guest '%s'\" % userid), modID='guest')\n else:\n LOG.debug(\"The guest %s does not exist.\" % userid)\n return\n\n action = \"delete guest '%s'\" % userid\n with zvmutils.log_and_reraise_sdkbase_error(action):\n return self._vmops.delete_vm(userid)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngets the cpu and mem of the guests", "response": "def guest_inspect_stats(self, userid_list):\n \"\"\"Get the statistics including cpu and mem of the guests\n\n :param userid_list: a single userid string or a list of guest userids\n :returns: dictionary describing the cpu statistics of the vm\n in the form {'UID1':\n {\n 'guest_cpus': xx,\n 'used_cpu_time_us': xx,\n 'elapsed_cpu_time_us': xx,\n 'min_cpu_count': xx,\n 'max_cpu_limit': xx,\n 'samples_cpu_in_use': xx,\n 'samples_cpu_delay': xx,\n 'used_mem_kb': xx,\n 'max_mem_kb': xx,\n 'min_mem_kb': xx,\n 'shared_mem_kb': xx\n },\n 'UID2':\n {\n 'guest_cpus': xx,\n 'used_cpu_time_us': xx,\n 'elapsed_cpu_time_us': xx,\n 'min_cpu_count': xx,\n 'max_cpu_limit': xx,\n 'samples_cpu_in_use': xx,\n 'samples_cpu_delay': xx,\n 'used_mem_kb': xx,\n 'max_mem_kb': xx,\n 'min_mem_kb': xx,\n 'shared_mem_kb': xx\n }\n }\n for the guests that are shutdown or not exist, no data\n returned in the dictionary\n \"\"\"\n if not isinstance(userid_list, list):\n userid_list = [userid_list]\n action = \"get the statistics of guest '%s'\" % str(userid_list)\n with zvmutils.log_and_reraise_sdkbase_error(action):\n return 
self._monitor.inspect_stats(userid_list)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef guest_inspect_vnics(self, userid_list):\n if not isinstance(userid_list, list):\n userid_list = [userid_list]\n action = \"get the vnics statistics of guest '%s'\" % str(userid_list)\n with zvmutils.log_and_reraise_sdkbase_error(action):\n return self._monitor.inspect_vnics(userid_list)", "response": "Get the vnics statistics of the guest virtual machines"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef vswitch_set_vlan_id_for_user(self, vswitch_name, userid, vlan_id):\n self._networkops.set_vswitch_port_vlan_id(vswitch_name,\n userid, vlan_id)", "response": "Set vlan id for user when connecting to the vswitch"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\npunch the script that used to process additional disks for a guest.", "response": "def guest_config_minidisks(self, userid, disk_info):\n \"\"\"Punch the script that used to process additional disks to vm\n\n :param str userid: the user id of the vm\n :param disk_info: a list contains disks info for the guest. It\n contains dictionaries that describes disk info for each disk.\n\n Each dictionary has 3 keys, format is required, vdev and\n mntdir are optional. 
For example, if vdev is not specified, it\n will start from the next vdev of CONF.zvm.user_root_vdev, eg.\n if CONF.zvm.user_root_vdev is 0100, zvmsdk will use 0101 as the\n vdev for first additional disk in disk_info, and if mntdir is\n not specified, zvmsdk will use /mnt/ephemeral0 as the mount\n point of first additional disk\n\n Here are some examples:\n [{'vdev': '0101',\n 'format': 'ext3',\n 'mntdir': '/mnt/ephemeral0'}]\n In this case, the zvmsdk will treat 0101 as additional disk's\n vdev, and it's formatted with ext3, and will be mounted to\n /mnt/ephemeral0\n\n [{'format': 'ext3'},\n {'format': 'ext4'}]\n In this case, if CONF.zvm.user_root_vdev is 0100, zvmsdk will\n configure the first additional disk as 0101, mount it to\n /mnt/ephemeral0 with ext3, and configure the second additional\n disk 0102, mount it to /mnt/ephemeral1 with ext4.\n\n \"\"\"\n action = \"config disks for userid '%s'\" % userid\n with zvmutils.log_and_reraise_sdkbase_error(action):\n self._vmops.guest_config_minidisks(userid, disk_info)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nchanges the configuration of an existing virtual switch.", "response": "def vswitch_set(self, vswitch_name, **kwargs):\n \"\"\"Change the configuration of an existing virtual switch\n\n :param str vswitch_name: the name of the virtual switch\n :param dict kwargs:\n - grant_userid=:\n A userid to be added to the access list\n - user_vlan_id=:\n user VLAN ID. Support following ways:\n 1. As single values between 1 and 4094. A maximum of four\n values may be specified, separated by blanks.\n Example: 1010 2020 3030 4040\n 2. 
As a range of two numbers, separated by a dash (-).\n A maximum of two ranges may be specified.\n Example: 10-12 20-22\n - revoke_userid=:\n A userid to be removed from the access list\n - real_device_address=:\n The real device address or the real device address and\n OSA Express port number of a QDIO OSA\n Express device to be used to create the switch to the virtual\n adapter. If using a real device and an OSA Express port number,\n specify the real device number followed by a period(.),\n the letter 'P' (or 'p'), followed by the port number as a\n hexadecimal number. A maximum of three device addresses,\n all 1-7 characters in length, may be specified, delimited by\n blanks. 'None' may also be specified\n - port_name=:\n The name used to identify the OSA Expanded\n adapter. A maximum of three port names, all 1-8 characters in\n length, may be specified, delimited by blanks.\n - controller_name=:\n One of the following:\n 1. The userid controlling the real device. A maximum of eight\n userids, all 1-8 characters in length, may be specified,\n delimited by blanks.\n 2. 
'*': Specifies that any available controller may be used\n - connection_value=:\n One of the following values:\n CONnect: Activate the real device connection.\n DISCONnect: Do not activate the real device connection.\n - queue_memory_limit=:\n A number between 1 and 8\n specifying the QDIO buffer size in megabytes.\n - routing_value=:\n Specifies whether the OSA-Express QDIO\n device will act as a router to the virtual switch, as follows:\n NONrouter: The OSA-Express device identified in\n real_device_address= will not act as a router to the vswitch\n PRIrouter: The OSA-Express device identified in\n real_device_address= will act as a primary router to the\n vswitch\n - port_type=:\n Specifies the port type, ACCESS or TRUNK\n - persist=:\n one of the following values:\n NO: The vswitch is updated on the active system, but is not\n updated in the permanent configuration for the system.\n YES: The vswitch is updated on the active system and also in\n the permanent configuration for the system.\n If not specified, the default is NO.\n - gvrp_value=:\n GVRP or NOGVRP\n - mac_id=:\n A unique identifier (up to six hexadecimal\n digits) used as part of the vswitch MAC address\n - uplink=:\n One of the following:\n NO: The port being enabled is not the vswitch's UPLINK port.\n YES: The port being enabled is the vswitch's UPLINK port.\n - nic_userid=:\n One of the following:\n 1. The userid of the port to/from which the UPLINK port will\n be connected or disconnected. If a userid is specified,\n then nic_vdev= must also be specified\n 2. '*': Disconnect the currently connected guest port to/from\n the special virtual switch UPLINK port. (This is equivalent\n to specifying NIC NONE on CP SET VSWITCH).\n - nic_vdev=:\n The virtual device to/from which the\n UPLINK port will be connected/disconnected. 
If this value is\n specified, nic_userid= must also be specified, with a userid.\n - lacp=:\n One of the following values:\n ACTIVE: Indicates that the virtual switch will initiate\n negotiations with the physical switch via the link aggregation\n control protocol (LACP) and will respond to LACP packets sent\n by the physical switch.\n INACTIVE: Indicates that aggregation is to be performed,\n but without LACP.\n - Interval=:\n The interval to be used by the control\n program (CP) when doing load balancing of conversations across\n multiple links in the group. This can be any of the following\n values:\n 1 - 9990: Indicates the number of seconds between load\n balancing operations across the link aggregation group.\n OFF: Indicates that no load balancing is done.\n - group_rdev=:\n The real device address or the real device\n address and OSA Express port number of a QDIO OSA Express\n device to be affected within the link aggregation group\n associated with this vswitch. If using a real device and an OSA\n Express port number, specify the real device number followed\n by a period (.), the letter 'P' (or 'p'), followed by the port\n number as a hexadecimal number. A maximum of eight device\n addresses all 1-7 characters in length, may be specified,\n delimited by blanks.\n Note: If a real device address is specified, this device will\n be added to the link aggregation group associated with this\n vswitch. 
(The link aggregation group will be created if it does\n not already exist.)\n - iptimeout=:\n A number between 1 and 240 specifying the\n length of time in minutes that a remote IP address table entry\n remains in the IP address table for the virtual switch.\n - port_isolation=:\n ON or OFF\n - promiscuous=:\n One of the following:\n NO: The userid or port on the grant is not authorized to use\n the vswitch in promiscuous mode\n YES: The userid or port on the grant is authorized to use the\n vswitch in promiscuous mode.\n - MAC_protect=:\n ON, OFF or UNSPECified\n - VLAN_counters=:\n ON or OFF\n \"\"\"\n for k in kwargs.keys():\n if k not in constants.SET_VSWITCH_KEYWORDS:\n errmsg = ('API vswitch_set: Invalid keyword %s' % k)\n raise exception.SDKInvalidInputFormat(msg=errmsg)\n\n self._networkops.set_vswitch(vswitch_name, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef vswitch_delete(self, vswitch_name, persist=True):\n self._networkops.delete_vswitch(vswitch_name, persist)", "response": "Delete the vswitch from the system"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef guests_get_nic_info(self, userid=None, nic_id=None, vswitch=None):\n action = \"get nic information\"\n with zvmutils.log_and_reraise_sdkbase_error(action):\n return self._networkops.get_nic_info(userid=userid, nic_id=nic_id,\n vswitch=vswitch)", "response": "Retrieve nic information in the network database according to the requirements of the guests."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nquery the virtual switch status", "response": "def vswitch_query(self, vswitch_name):\n \"\"\"Check the virtual switch status\n\n :param str vswitch_name: the name of the virtual switch\n :returns: Dictionary describing virtual switch info\n :rtype: dict\n \"\"\"\n action = \"get virtual switch information\"\n with 
zvmutils.log_and_reraise_sdkbase_error(action):\n return self._networkops.vswitch_query(vswitch_name)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ndeletes the nic and network configuration for the guest", "response": "def guest_delete_network_interface(self, userid, os_version,\n vdev, active=False):\n \"\"\" delete the nic and network configuration for the vm\n\n :param str userid: the user id of the guest\n :param str os_version: operating system version of the guest\n :param str vdev: nic device number, 1- to 4- hexadecimal digits\n :param bool active: whether delete a nic on active guest system\n \"\"\"\n self._networkops.delete_nic(userid, vdev, active=active)\n self._networkops.delete_network_configuration(userid, os_version,\n vdev, active=active)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef mock_session_with_fixtures(session, path, url):\n _orig_adapters = session.adapters\n mock_adapter = adapter.Adapter()\n session.adapters = OrderedDict()\n if isinstance(url, (list, tuple)):\n for u in url:\n session.mount(u, mock_adapter)\n else:\n session.mount(url, mock_adapter)\n mock_adapter.register_path(path)\n yield\n session.adapters = _orig_adapters", "response": "Context Manager that mockes the requests session with the given path to any files found within a static path."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nmock the requests session with a particular class instance with any private methods for the URLs.", "response": "def mock_session_with_class(session, cls, url):\n \"\"\"\n Context Manager\n\n Mock the responses with a particular session\n to any private methods for the URLs\n\n :param session: The requests session object\n :type session: :class:`requests.Session`\n\n :param cls: The class instance with private methods for URLs\n :type cls: ``object``\n\n :param url: The base URL to mock, e.g. 
http://mock.com, http://\n supports a single URL or a list\n :type url: ``str`` or ``list``\n \"\"\"\n _orig_adapters = session.adapters\n mock_adapter = adapter.ClassAdapter(cls)\n session.adapters = OrderedDict()\n if isinstance(url, (list, tuple)):\n for u in url:\n session.mount(u, mock_adapter)\n else:\n session.mount(url, mock_adapter)\n yield\n session.adapters = _orig_adapters"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\napplies tilde expansion and absolutization to a path.", "response": "def _fixpath(self, p):\r\n \"\"\"Apply tilde expansion and absolutization to a path.\"\"\"\r\n return os.path.abspath(os.path.expanduser(p))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _get_config_dirs(self):\r\n _cwd = os.path.split(os.path.abspath(__file__))[0]\r\n _pdir = os.path.split(_cwd)[0]\r\n _etcdir = ''.join((_pdir, '/', 'etc/'))\r\n cfg_dirs = [\r\n self._fixpath(_cwd),\r\n self._fixpath('/etc/zvmsdk/'),\r\n self._fixpath('/etc/'),\r\n self._fixpath('~'),\r\n self._fixpath(_etcdir),\r\n ]\r\n return [x for x in cfg_dirs if x]", "response": "Return a list of directories where config files may be located."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _search_dirs(self, dirs, basename, extension=\"\"):\r\n for d in dirs:\r\n path = os.path.join(d, '%s%s' % (basename, extension))\r\n if os.path.exists(path):\r\n return path\r\n\r\n return None", "response": "Search a list of directories for a given filename or directory name. 
Returns the first file with the supplied name and extension."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn the config file.", "response": "def find_config_file(self, project=None, extension='.conf'):\r\n \"\"\"Return the config file.\r\n\r\n :param project: \"zvmsdk\"\r\n :param extension: the type of the config file\r\n\r\n \"\"\"\r\n cfg_dirs = self._get_config_dirs()\r\n config_files = self._search_dirs(cfg_dirs, project, extension)\r\n\r\n return config_files"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef scrypt_mcf(scrypt, password, salt=None, N=SCRYPT_N, r=SCRYPT_r, p=SCRYPT_p,\n prefix=SCRYPT_MCF_PREFIX_DEFAULT):\n \"\"\"Derives a Modular Crypt Format hash using the scrypt KDF given\n\n Expects the signature:\n scrypt(password, salt, N=SCRYPT_N, r=SCRYPT_r, p=SCRYPT_p, olen=64)\n\n If no salt is given, a random salt of 128+ bits is used. (Recommended.)\n \"\"\"\n if isinstance(password, unicode):\n password = password.encode('utf8')\n elif not isinstance(password, bytes):\n raise TypeError('password must be a unicode or byte string')\n if salt is not None and not isinstance(salt, bytes):\n raise TypeError('salt must be a byte string')\n if salt is not None and not (1 <= len(salt) <= 16):\n raise ValueError('salt must be 1-16 bytes')\n if r > 255:\n raise ValueError('scrypt_mcf r out of range [1,255]')\n if p > 255:\n raise ValueError('scrypt_mcf p out of range [1,255]')\n if N > 2**31:\n raise ValueError('scrypt_mcf N out of range [2,2**31]')\n if b'\\0' in password:\n raise ValueError('scrypt_mcf password must not contain zero bytes')\n\n if prefix == SCRYPT_MCF_PREFIX_s1:\n if salt is None:\n salt = os.urandom(16)\n hash = scrypt(password, salt, N, r, p)\n return _scrypt_mcf_encode_s1(N, r, p, salt, hash)\n elif prefix == SCRYPT_MCF_PREFIX_7 or prefix == SCRYPT_MCF_PREFIX_ANY:\n if salt is None:\n salt = os.urandom(32)\n salt = 
_cb64enc(salt)\n hash = scrypt(password, salt, N, r, p, 32)\n return _scrypt_mcf_encode_7(N, r, p, salt, hash)\n else:\n raise ValueError(\"Unrecognized MCF format\")", "response": "Derives a Modular Crypt Format hash using the scrypt KDF given a password and a salt."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef scrypt_mcf_check(scrypt, mcf, password):\n if not isinstance(mcf, bytes):\n raise TypeError('MCF must be a byte string')\n if isinstance(password, unicode):\n password = password.encode('utf8')\n elif not isinstance(password, bytes):\n raise TypeError('password must be a unicode or byte string')\n\n N, r, p, salt, hash, hlen = _scrypt_mcf_decode(mcf)\n h = scrypt(password, salt, N=N, r=r, p=p, olen=hlen)\n cmp = 0\n for i, j in zip(bytearray(h), bytearray(hash)):\n cmp |= i ^ j\n return cmp == 0", "response": "Checks if the given password matches the given MCF hash."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncreate a new plugin for the psyplot package.", "response": "def new_plugin(odir, py_name=None, version='0.0.1.dev0',\n description='New plugin'):\n \"\"\"\n Create a new plugin for the psyplot package\n\n Parameters\n ----------\n odir: str\n The name of the directory where the data will be copied to. The\n directory must not exist! The name of the directory also defines the\n name of the package.\n py_name: str\n The name of the python package. 
If None, the basename of `odir` is used\n (and ``'-'`` is replaced by ``'_'``)\n version: str\n The version of the package\n description: str\n The description of the plugin\"\"\"\n name = osp.basename(odir)\n if py_name is None:\n py_name = name.replace('-', '_')\n\n src = osp.join(osp.dirname(__file__), 'plugin-template-files')\n # copy the source files\n shutil.copytree(src, odir)\n os.rename(osp.join(odir, 'plugin_template'), osp.join(odir, py_name))\n\n replacements = {'PLUGIN_NAME': name,\n 'PLUGIN_PYNAME': py_name,\n 'PLUGIN_VERSION': version,\n 'PLUGIN_DESC': description,\n }\n files = [\n 'README.md',\n 'setup.py',\n osp.join(py_name, 'plugin.py'),\n osp.join(py_name, 'plotters.py'),\n osp.join(py_name, '__init__.py'),\n ]\n\n for fname in files:\n with open(osp.join(odir, fname)) as f:\n s = f.read()\n for key, val in replacements.items():\n s = s.replace(key, val)\n with open(osp.join(odir, fname), 'w') as f:\n f.write(s)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngetting the version information for psyplot and the plugins and its requirements.", "response": "def get_versions(requirements=True, key=None):\n \"\"\"\n Get the version information for psyplot, the plugins and its requirements\n\n Parameters\n ----------\n requirements: bool\n If True, the requirements of the plugins and psyplot are investigated\n key: func\n A function that determines whether a plugin shall be considererd or\n not. The function must take a single argument, that is the name of the\n plugin as string, and must return True (import the plugin) or False\n (skip the plugin). If None, all plugins are imported\n\n Returns\n -------\n dict\n A mapping from ``'psyplot'``/the plugin names to a dictionary with the\n ``'version'`` key and the corresponding version is returned. 
If\n `requirements` is True, it also contains a mapping from\n ``'requirements'`` a dictionary with the versions\n\n Examples\n --------\n Using the built-in JSON module, we get something like\n\n .. code-block:: python\n\n import json\n print(json.dumps(psyplot.get_versions(), indent=4))\n {\n \"psy_simple.plugin\": {\n \"version\": \"1.0.0.dev0\"\n },\n \"psyplot\": {\n \"version\": \"1.0.0.dev0\",\n \"requirements\": {\n \"matplotlib\": \"1.5.3\",\n \"numpy\": \"1.11.3\",\n \"pandas\": \"0.19.2\",\n \"xarray\": \"0.9.1\"\n }\n },\n \"psy_maps.plugin\": {\n \"version\": \"1.0.0.dev0\",\n \"requirements\": {\n \"cartopy\": \"0.15.0\"\n }\n }\n }\n \"\"\"\n from pkg_resources import iter_entry_points\n ret = {'psyplot': _get_versions(requirements)}\n for ep in iter_entry_points(group='psyplot', name='plugin'):\n if str(ep) in rcParams._plugins:\n logger.debug('Loading entrypoint %s', ep)\n if key is not None and not key(ep.module_name):\n continue\n mod = ep.load()\n try:\n ret[str(ep.module_name)] = mod.get_versions(requirements)\n except AttributeError:\n ret[str(ep.module_name)] = {\n 'version': getattr(mod, 'plugin_version',\n getattr(mod, '__version__', ''))}\n if key is None:\n try:\n import psyplot_gui\n except ImportError:\n pass\n else:\n ret['psyplot_gui'] = psyplot_gui.get_versions(requirements)\n return ret"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nremoves userid switch record from switch table.", "response": "def switch_delete_record_for_userid(self, userid):\n \"\"\"Remove userid switch record from switch table.\"\"\"\n with get_network_conn() as conn:\n conn.execute(\"DELETE FROM switch WHERE userid=?\",\n (userid,))\n LOG.debug(\"Switch record for user %s is removed from \"\n \"switch table\" % userid)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef switch_delete_record_for_nic(self, userid, interface):\n with 
get_network_conn() as conn:\n conn.execute(\"DELETE FROM switch WHERE userid=? and interface=?\",\n (userid, interface))\n LOG.debug(\"Switch record for user %s with nic %s is removed from \"\n \"switch table\" % (userid, interface))", "response": "Remove userid switch record for nic."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nadding userid nic name address into switch table.", "response": "def switch_add_record(self, userid, interface, port=None,\n switch=None, comments=None):\n \"\"\"Add userid and nic name address into switch table.\"\"\"\n with get_network_conn() as conn:\n conn.execute(\"INSERT INTO switch VALUES (?, ?, ?, ?, ?)\",\n (userid, interface, switch, port, comments))\n LOG.debug(\"New record in the switch table: user %s, \"\n \"nic %s, port %s\" %\n (userid, interface, port))"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nupdating user record with switch.", "response": "def switch_update_record_with_switch(self, userid, interface,\n switch=None):\n \"\"\"Update information in switch table.\"\"\"\n if not self._get_switch_by_user_interface(userid, interface):\n msg = \"User %s with nic %s does not exist in DB\" % (userid,\n interface)\n LOG.error(msg)\n obj_desc = ('User %s with nic %s' % (userid, interface))\n raise exception.SDKObjectNotExistError(obj_desc,\n modID=self._module_id)\n\n if switch is not None:\n with get_network_conn() as conn:\n conn.execute(\"UPDATE switch SET switch=? \"\n \"WHERE userid=? and interface=?\",\n (switch, userid, interface))\n LOG.debug(\"Set switch to %s for user %s with nic %s \"\n \"in switch table\" %\n (switch, userid, interface))\n else:\n with get_network_conn() as conn:\n conn.execute(\"UPDATE switch SET switch=NULL \"\n \"WHERE userid=? 
and interface=?\",\n (userid, interface))\n LOG.debug(\"Set switch to None for user %s with nic %s \"\n \"in switch table\" %\n (userid, interface))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef image_query_record(self, imagename=None):\n\n if imagename:\n with get_image_conn() as conn:\n result = conn.execute(\"SELECT * FROM image WHERE \"\n \"imagename=?\", (imagename,))\n image_list = result.fetchall()\n if not image_list:\n obj_desc = \"Image with name: %s\" % imagename\n raise exception.SDKObjectNotExistError(obj_desc=obj_desc,\n modID=self._module_id)\n else:\n with get_image_conn() as conn:\n result = conn.execute(\"SELECT * FROM image\")\n image_list = result.fetchall()\n\n # Map each image record to be a dict, with the key is the field name in\n # image DB\n image_keys_list = ['imagename', 'imageosdistro', 'md5sum',\n 'disk_size_units', 'image_size_in_bytes', 'type',\n 'comments']\n\n image_result = []\n for item in image_list:\n image_item = dict(zip(image_keys_list, item))\n image_result.append(image_item)\n return image_result", "response": "Query the image record from database."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_comments_by_userid(self, userid):\n userid = userid\n with get_guest_conn() as conn:\n res = conn.execute(\"SELECT comments FROM guests \"\n \"WHERE userid=?\", (userid,))\n\n result = res.fetchall()\n comments = {}\n if result[0][0]:\n comments = json.loads(result[0][0])\n return comments", "response": "Get comments record by userid."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_metadata_by_userid(self, userid):\n userid = userid\n with get_guest_conn() as conn:\n res = conn.execute(\"SELECT * FROM guests \"\n \"WHERE userid=?\", (userid,))\n guest = res.fetchall()\n\n if len(guest) == 1:\n return guest[0][2]\n elif 
len(guest) == 0:\n LOG.debug(\"Guest with userid: %s not found from DB!\" % userid)\n return ''\n else:\n msg = \"Guest with userid: %s have multiple records!\" % userid\n LOG.error(msg)\n raise exception.SDKInternalError(msg=msg, modID=self._module_id)", "response": "get metadata record by userid"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ntransfer str to dict. output should be like : 1 b 2 c 3", "response": "def transfer_metadata_to_dict(self, meta):\n \"\"\"transfer str to dict.\n output should be like: {'a':1, 'b':2, 'c':3}\n \"\"\"\n dic = {}\n arr = meta.strip(' ,').split(',')\n for i in arr:\n temp = i.split('=')\n key = temp[0].strip()\n value = temp[1].strip()\n dic[key] = value\n return dic"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ntake an array and returns a 2D vertices and a MxN face_indices arrays", "response": "def face_index(vertices):\n \"\"\"Takes an MxNx3 array and returns a 2D vertices and MxN face_indices arrays\"\"\"\n new_verts = []\n face_indices = []\n for wall in vertices:\n face_wall = []\n for vert in wall:\n if new_verts:\n if not np.isclose(vert, new_verts).all(axis=1).any():\n new_verts.append(vert)\n else:\n new_verts.append(vert)\n face_index = np.where(np.isclose(vert, new_verts).all(axis=1))[0][0]\n face_wall.append(face_index)\n face_indices.append(face_wall)\n return np.array(new_verts), np.array(face_indices)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn an array of vertices in triangular order using a fan triangulation algorithm.", "response": "def fan_triangulate(indices):\n \"\"\"Return an array of vertices in triangular order using a fan triangulation algorithm.\"\"\"\n if len(indices[0]) != 4:\n raise ValueError(\"Assumes working with a sequence of quad indices\")\n new_indices = []\n for face in indices:\n new_indices.extend([face[:-1], face[1:]])\n return np.array(new_indices)"} {"SOURCE": 
"codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns a wavefront. obj string using pre - triangulated vertex dict and normal dict as reference.", "response": "def from_dicts(cls, mesh_name, vert_dict, normal_dict):\n \"\"\"Returns a wavefront .obj string using pre-triangulated vertex dict and normal dict as reference.\"\"\"\n\n # Put header in string\n wavefront_str = \"o {name}\\n\".format(name=mesh_name)\n\n # Write Vertex data from vert_dict\n for wall in vert_dict:\n for vert in vert_dict[wall]:\n wavefront_str += \"v {0} {1} {2}\\n\".format(*vert)\n\n # Write (false) UV Texture data\n wavefront_str += \"vt 1.0 1.0\\n\"\n\n # Write Normal data from normal_dict\n for wall, norm in normal_dict.items():\n wavefront_str += \"vn {0} {1} {2}\\n\".format(*norm)\n\n # Write Face Indices (1-indexed)\n vert_idx = 0\n for wall in vert_dict:\n for _ in range(0, len(vert_dict[wall]), 3):\n wavefront_str += 'f '\n for vert in range(3): # 3 vertices in each face\n vert_idx += 1\n wavefront_str += \"{v}/1/{n} \".format(v=vert_idx, n=wall+1)\n wavefront_str = wavefront_str[:-1] + '\\n' # Cutoff trailing space and add a newline.\n\n # Return Wavefront string\n return WavefrontWriter(string=wavefront_str)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef from_arrays(cls, name, verts, normals, n_verts=3):\n\n \"\"\"Returns a wavefront .obj string using pre-triangulated vertex dict and normal dict as reference.\"\"\"\n\n # Put header in string\n wavefront_str = \"o {name}\\n\".format(name=name)\n\n # Write Vertex data from vert_dict\n for vert in verts:\n wavefront_str += \"v {0} {1} {2}\\n\".format(*vert)\n\n # Write (false) UV Texture data\n wavefront_str += \"vt 1.0 1.0\\n\"\n\n for norm in normals:\n wavefront_str += \"vn {0} {1} {2}\\n\".format(*norm)\n\n\n # Write Face Indices (1-indexed)\n for wall_idx, vert_indices in enumerate(grouper(n_verts, range(1, len(verts) + 1))):\n 
face_str = 'f '\n for idx in vert_indices:\n face_str += \"{v}/1/{n} \".format(v=idx, n=wall_idx + 1)\n wavefront_str += face_str + '\\n'\n\n return WavefrontWriter(string=wavefront_str)", "response": "Returns a wavefront. obj string using pre - triangulated vertex dict and normal dict as reference."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ntakes Nx3 array vertices and normals and creates a new object from it.", "response": "def from_indexed_arrays(cls, name, verts, normals):\n \"\"\"Takes MxNx3 verts, Mx3 normals to build obj file\"\"\"\n\n # Put header in string\n wavefront_str = \"o {name}\\n\".format(name=name)\n\n\n new_verts, face_indices = face_index(verts)\n assert new_verts.shape[1] == 3, \"verts should be Nx3 array\"\n assert face_indices.ndim == 2\n\n face_indices = fan_triangulate(face_indices)\n\n # Write Vertex data from vert_dict\n for vert in new_verts:\n wavefront_str += \"v {0} {1} {2}\\n\".format(*vert)\n\n # Write (false) UV Texture data\n wavefront_str += \"vt 1.0 1.0\\n\"\n\n for norm in normals:\n wavefront_str += \"vn {0} {1} {2}\\n\".format(*norm)\n\n assert len(face_indices) == len(normals) * 2\n for norm_idx, vert_idx, in enumerate(face_indices):\n wavefront_str += \"f\"\n for vv in vert_idx:\n wavefront_str += \" {}/{}/{}\".format(vv + 1, 1, (norm_idx // 2) + 1 )\n wavefront_str += \"\\n\"\n\n return cls(string=wavefront_str)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef dump(self, f):\n try:\n f.write(self._data)\n except AttributeError:\n with open(f, 'w') as wf:\n wf.write(self._data)", "response": "Write Wavefront data to file. 
Takes File object or filename."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef scrypt(password, salt, N=SCRYPT_N, r=SCRYPT_r, p=SCRYPT_p, olen=64):\n check_args(password, salt, N, r, p, olen)\n\n try:\n return _scrypt(password=password, salt=salt, N=N, r=r, p=p, buflen=olen)\n except:\n raise ValueError", "response": "Returns a key derived using the scrypt key - derivarion function"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef doIt(rh):\n\n rh.printSysLog(\"Enter cmdVM.doIt\")\n\n # Show the invocation parameters, if requested.\n if 'showParms' in rh.parms and rh.parms['showParms'] is True:\n rh.printLn(\"N\", \"Invocation parameters: \")\n rh.printLn(\"N\", \" Routine: cmdVM.\" +\n str(subfuncHandler[rh.subfunction][0]) + \"(reqHandle)\")\n rh.printLn(\"N\", \" function: \" + rh.function)\n rh.printLn(\"N\", \" userid: \" + rh.userid)\n rh.printLn(\"N\", \" subfunction: \" + rh.subfunction)\n rh.printLn(\"N\", \" parms{}: \")\n for key in rh.parms:\n if key != 'showParms':\n rh.printLn(\"N\", \" \" + key + \": \" + str(rh.parms[key]))\n rh.printLn(\"N\", \" \")\n\n # Call the subfunction handler\n subfuncHandler[rh.subfunction][1](rh)\n\n rh.printSysLog(\"Exit cmdVM.doIt, rc: \" + str(rh.results['overallRC']))\n return rh.results['overallRC']", "response": "This function is called by the daemon to perform the requested function by invoking the subfunction handler."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef invokeCmd(rh):\n\n rh.printSysLog(\"Enter cmdVM.invokeCmd, userid: \" + rh.userid)\n\n results = execCmdThruIUCV(rh, rh.userid, rh.parms['cmd'])\n\n if results['overallRC'] == 0:\n rh.printLn(\"N\", results['response'])\n else:\n rh.printLn(\"ES\", results['response'])\n rh.updateResults(results)\n\n rh.printSysLog(\"Exit cmdVM.invokeCmd, rc: \" + 
str(results['overallRC']))\n return results['overallRC']", "response": "Invoke the command in the virtual machine s operating system."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nparsing the command line and return the handle of the next element.", "response": "def parseCmdline(rh):\n \"\"\"\n Parse the request command input.\n\n Input:\n Request Handle\n\n Output:\n Request Handle updated with parsed input.\n Return code - 0: ok, non-zero: error\n \"\"\"\n\n rh.printSysLog(\"Enter cmdVM.parseCmdline\")\n\n if rh.totalParms >= 2:\n rh.userid = rh.request[1].upper()\n else:\n # Userid is missing.\n msg = msgs.msg['0010'][1] % modId\n rh.printLn(\"ES\", msg)\n rh.updateResults(msgs.msg['0010'][0])\n rh.printSysLog(\"Exit cmdVM.parseCmdLine, rc: \" +\n rh.results['overallRC'])\n return rh.results['overallRC']\n\n if rh.totalParms == 2:\n rh.subfunction = rh.userid\n rh.userid = ''\n\n if rh.totalParms >= 3:\n rh.subfunction = rh.request[2].upper()\n\n # Verify the subfunction is valid.\n if rh.subfunction not in subfuncHandler:\n # Subfunction is missing.\n subList = ', '.join(sorted(subfuncHandler.keys()))\n msg = msgs.msg['0011'][1] % (modId, subList)\n rh.printLn(\"ES\", msg)\n rh.updateResults(msgs.msg['0011'][0])\n\n # Parse the rest of the command line.\n if rh.results['overallRC'] == 0:\n rh.argPos = 3 # Begin Parsing at 4th operand\n generalUtils.parseCmdline(rh, posOpsList, keyOpsList)\n\n rh.printSysLog(\"Exit cmdVM.parseCmdLine, rc: \" +\n str(rh.results['overallRC']))\n return rh.results['overallRC']"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _generate_vdev(self, base, offset):\n vdev = hex(int(base, 16) + offset)[2:]\n return vdev.rjust(4, '0')", "response": "Generate virtual device number based on base and offset."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngenerates virtual device number for disks 
:param offset: offset of user_root_vdev. :return: virtual device number, string of 4 bit hex.", "response": "def generate_disk_vdev(self, start_vdev=None, offset=0):\n \"\"\"Generate virtual device number for disks\n :param offset: offset of user_root_vdev.\n :return: virtual device number, string of 4 bit hex.\n \"\"\"\n if not start_vdev:\n start_vdev = CONF.zvm.user_root_vdev\n vdev = self._generate_vdev(start_vdev, offset)\n if offset >= 0 and offset < 254:\n return vdev\n else:\n msg = (\"Failed to generate disk vdev, invalid virtual device\"\n \"number for disk:%s\" % vdev)\n LOG.error(msg)\n raise exception.SDKGuestOperationError(rs=2, msg=msg)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nadds disks for the userid to the list of disks.", "response": "def add_mdisks(self, userid, disk_list, start_vdev=None):\n \"\"\"Add disks for the userid\n\n :disks: A list dictionary to describe disk info, for example:\n disk: [{'size': '1g',\n 'format': 'ext3',\n 'disk_pool': 'ECKD:eckdpool1'}]\n\n \"\"\"\n\n for idx, disk in enumerate(disk_list):\n if 'vdev' in disk:\n # this means user want to create their own device number\n vdev = disk['vdev']\n else:\n vdev = self.generate_disk_vdev(start_vdev=start_vdev,\n offset=idx)\n\n self._add_mdisk(userid, disk, vdev)\n disk['vdev'] = vdev\n\n if disk.get('disk_pool') is None:\n disk['disk_pool'] = CONF.zvm.disk_pool\n\n sizeUpper = disk.get('size').strip().upper()\n sizeUnit = sizeUpper[-1]\n if sizeUnit != 'G' and sizeUnit != 'M':\n sizeValue = sizeUpper\n disk_pool = disk.get('disk_pool')\n [diskpool_type, diskpool_name] = disk_pool.split(':')\n if (diskpool_type.upper() == 'ECKD'):\n # Convert the cylinders to bytes\n convert = 737280\n else:\n # Convert the blocks to bytes\n convert = 512\n byteSize = float(float(int(sizeValue) * convert / 1024) / 1024)\n unit = \"M\"\n if (byteSize > 1024):\n byteSize = float(byteSize / 1024)\n unit = \"G\"\n byteSize = \"%.1f\" % 
byteSize\n disk['size'] = byteSize + unit\n\n return disk_list"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef dedicate_device(self, userid, vaddr, raddr, mode):\n # dedicate device to directory entry\n self._dedicate_device(userid, vaddr, raddr, mode)", "response": "dedicate a virtual device to the directory entry"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nget fcp information by the status", "response": "def get_fcp_info_by_status(self, userid, status):\n\n \"\"\"get fcp information by the status.\n\n :userid: The name of the image to query fcp info\n :status: The status of target fcps. eg:'active', 'free' or 'offline'.\n \"\"\"\n results = self._get_fcp_info_by_status(userid, status)\n return results"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _undedicate_device(self, userid, vaddr):\n action = 'undedicate'\n rd = ('changevm %(uid)s %(act)s %(va)s' %\n {'uid': userid, 'act': action,\n 'va': vaddr})\n action = \"undedicate device from userid '%s'\" % userid\n with zvmutils.log_and_reraise_smt_request_failed(action):\n self._request(rd)", "response": "undedicate a device from a virtual machine."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_image_performance_info(self, userid):\n pi_dict = self.image_performance_query([userid])\n return pi_dict.get(userid, None)", "response": "Get CPU and memory usage information for the zvm userid."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nparsing the Virtual_Network_Vswitch_Query_Byte_Stats data to get inspect data.", "response": "def _parse_vswitch_inspect_data(self, rd_list):\n \"\"\" Parse the Virtual_Network_Vswitch_Query_Byte_Stats data to get\n inspect data.\n \"\"\"\n def _parse_value(data_list, idx, keyword, offset):\n return idx + offset, 
data_list[idx].rpartition(keyword)[2].strip()\n\n vsw_dict = {}\n with zvmutils.expect_invalid_resp_data():\n # vswitch count\n idx = 0\n idx, vsw_count = _parse_value(rd_list, idx, 'vswitch count:', 2)\n vsw_dict['vswitch_count'] = int(vsw_count)\n\n # deal with each vswitch data\n vsw_dict['vswitches'] = []\n for i in range(vsw_dict['vswitch_count']):\n vsw_data = {}\n # skip vswitch number\n idx += 1\n # vswitch name\n idx, vsw_name = _parse_value(rd_list, idx, 'vswitch name:', 1)\n vsw_data['vswitch_name'] = vsw_name\n # uplink count\n idx, up_count = _parse_value(rd_list, idx, 'uplink count:', 1)\n # skip uplink data\n idx += int(up_count) * 9\n # skip bridge data\n idx += 8\n # nic count\n vsw_data['nics'] = []\n idx, nic_count = _parse_value(rd_list, idx, 'nic count:', 1)\n nic_count = int(nic_count)\n for j in range(nic_count):\n nic_data = {}\n idx, nic_id = _parse_value(rd_list, idx, 'nic_id:', 1)\n userid, toss, vdev = nic_id.partition(' ')\n nic_data['userid'] = userid\n nic_data['vdev'] = vdev\n idx, nic_data['nic_fr_rx'] = _parse_value(rd_list, idx,\n 'nic_fr_rx:', 1\n )\n idx, nic_data['nic_fr_rx_dsc'] = _parse_value(rd_list, idx,\n 'nic_fr_rx_dsc:', 1\n )\n idx, nic_data['nic_fr_rx_err'] = _parse_value(rd_list, idx,\n 'nic_fr_rx_err:', 1\n )\n idx, nic_data['nic_fr_tx'] = _parse_value(rd_list, idx,\n 'nic_fr_tx:', 1\n )\n idx, nic_data['nic_fr_tx_dsc'] = _parse_value(rd_list, idx,\n 'nic_fr_tx_dsc:', 1\n )\n idx, nic_data['nic_fr_tx_err'] = _parse_value(rd_list, idx,\n 'nic_fr_tx_err:', 1\n )\n idx, nic_data['nic_rx'] = _parse_value(rd_list, idx,\n 'nic_rx:', 1\n )\n idx, nic_data['nic_tx'] = _parse_value(rd_list, idx,\n 'nic_tx:', 1\n )\n vsw_data['nics'].append(nic_data)\n # vlan count\n idx, vlan_count = _parse_value(rd_list, idx, 'vlan count:', 1)\n # skip vlan data\n idx += int(vlan_count) * 3\n # skip the blank line\n idx += 1\n\n vsw_dict['vswitches'].append(vsw_data)\n\n return vsw_dict"} {"SOURCE": "codesearchnet", "instruction": "How would 
you code a function in Python 3 to\nget power state of a z/VM instance.", "response": "def get_power_state(self, userid):\n \"\"\"Get power status of a z/VM instance.\"\"\"\n LOG.debug('Querying power stat of %s' % userid)\n requestData = \"PowerVM \" + userid + \" status\"\n action = \"query power state of '%s'\" % userid\n with zvmutils.log_and_reraise_smt_request_failed(action):\n results = self._request(requestData)\n with zvmutils.expect_invalid_resp_data(results):\n status = results['response'][0].partition(': ')[2]\n return status"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\npowers off a VM.", "response": "def guest_stop(self, userid, **kwargs):\n \"\"\"Power off VM.\"\"\"\n requestData = \"PowerVM \" + userid + \" off\"\n\n if 'timeout' in kwargs.keys() and kwargs['timeout']:\n requestData += ' --maxwait ' + str(kwargs['timeout'])\n if 'poll_interval' in kwargs.keys() and kwargs['poll_interval']:\n requestData += ' --poll ' + str(kwargs['poll_interval'])\n\n with zvmutils.log_and_reraise_smt_request_failed():\n self._request(requestData)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nmoves the specified virtual machine while it continues to run to run to the specified system within the SSI cluster.", "response": "def live_migrate_move(self, userid, destination, parms):\n \"\"\" moves the specified virtual machine, while it continues to run,\n to the specified system within the SSI cluster. 
\"\"\"\n rd = ('migratevm %(uid)s move --destination %(dest)s ' %\n {'uid': userid, 'dest': destination})\n\n if 'maxtotal' in parms:\n rd += ('--maxtotal ' + str(parms['maxTotal']))\n if 'maxquiesce' in parms:\n rd += ('--maxquiesce ' + str(parms['maxquiesce']))\n if 'immediate' in parms:\n rd += \" --immediate\"\n if 'forcearch' in parms:\n rd += \" --forcearch\"\n if 'forcedomain' in parms:\n rd += \" --forcedomain\"\n if 'forcestorage' in parms:\n rd += \" --forcestorage\"\n\n action = \"move userid '%s' to SSI '%s'\" % (userid, destination)\n\n try:\n self._request(rd)\n except exception.SDKSMTRequestFailed as err:\n msg = ''\n if action is not None:\n msg = \"Failed to %s. \" % action\n msg += \"SMT error: %s\" % err.format_message()\n LOG.error(msg)\n raise exception.SDKSMTRequestFailed(err.results, msg)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncreates VM and add disks if specified.", "response": "def create_vm(self, userid, cpu, memory, disk_list, profile,\n max_cpu, max_mem, ipl_from, ipl_param, ipl_loadparam):\n \"\"\" Create VM and add disks if specified. 
\"\"\"\n rd = ('makevm %(uid)s directory LBYONLY %(mem)im %(pri)s '\n '--cpus %(cpu)i --profile %(prof)s --maxCPU %(max_cpu)i '\n '--maxMemSize %(max_mem)s --setReservedMem' %\n {'uid': userid, 'mem': memory,\n 'pri': const.ZVM_USER_DEFAULT_PRIVILEGE,\n 'cpu': cpu, 'prof': profile,\n 'max_cpu': max_cpu, 'max_mem': max_mem})\n\n if CONF.zvm.default_admin_userid:\n rd += (' --logonby \"%s\"' % CONF.zvm.default_admin_userid)\n\n if (disk_list and 'is_boot_disk' in disk_list[0] and\n disk_list[0]['is_boot_disk']):\n # we assume at least one disk exist, which means, is_boot_disk\n # is true for exactly one disk.\n rd += (' --ipl %s' % self._get_ipl_param(ipl_from))\n\n # load param for ipl\n if ipl_param:\n rd += ' --iplParam %s' % ipl_param\n\n if ipl_loadparam:\n rd += ' --iplLoadparam %s' % ipl_loadparam\n\n action = \"create userid '%s'\" % userid\n\n try:\n self._request(rd)\n except exception.SDKSMTRequestFailed as err:\n if ((err.results['rc'] == 436) and (err.results['rs'] == 4)):\n result = \"Profile '%s'\" % profile\n raise exception.SDKObjectNotExistError(obj_desc=result,\n modID='guest')\n else:\n msg = ''\n if action is not None:\n msg = \"Failed to %s. 
\" % action\n msg += \"SMT error: %s\" % err.format_message()\n LOG.error(msg)\n raise exception.SDKSMTRequestFailed(err.results, msg)\n\n # Add the guest to db immediately after user created\n action = \"add guest '%s' to database\" % userid\n with zvmutils.log_and_reraise_sdkbase_error(action):\n self._GuestDbOperator.add_guest(userid)\n\n # Continue to add disk\n if disk_list:\n # Add disks for vm\n return self.add_mdisks(userid, disk_list)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nadd one mdisk to userid", "response": "def _add_mdisk(self, userid, disk, vdev):\n \"\"\"Create one disk for userid\n\n NOTE: No read, write and multi password specified, and\n access mode default as 'MR'.\n\n \"\"\"\n size = disk['size']\n fmt = disk.get('format', 'ext4')\n disk_pool = disk.get('disk_pool') or CONF.zvm.disk_pool\n [diskpool_type, diskpool_name] = disk_pool.split(':')\n\n if (diskpool_type.upper() == 'ECKD'):\n action = 'add3390'\n else:\n action = 'add9336'\n\n rd = ' '.join(['changevm', userid, action, diskpool_name,\n vdev, size, '--mode MR'])\n if fmt and fmt != 'none':\n rd += (' --filesystem %s' % fmt.lower())\n\n action = \"add mdisk to userid '%s'\" % userid\n with zvmutils.log_and_reraise_smt_request_failed(action):\n self._request(rd)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_vm_list(self):\n action = \"list all guests in database\"\n with zvmutils.log_and_reraise_sdkbase_error(action):\n guests_in_db = self._GuestDbOperator.get_guest_list()\n guests_migrated = self._GuestDbOperator.get_migrated_guest_list()\n\n # db query return value in tuple (uuid, userid, metadata, comments)\n userids_in_db = [g[1].upper() for g in guests_in_db]\n userids_migrated = [g[1].upper() for g in guests_migrated]\n userid_list = list(set(userids_in_db) - set(userids_migrated))\n\n return userid_list", "response": "Get the list of guests that are created by 
SDK\n return userid list"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\npunch a script that used to set the authorized client userid in vm", "response": "def guest_authorize_iucv_client(self, userid, client=None):\n \"\"\"Punch a script that used to set the authorized client userid in vm\n If the guest is in log off status, the change will take effect when\n the guest start up at first time.\n If the guest is in active status, power off and power on are needed\n for the change to take effect.\n\n :param str guest: the user id of the vm\n :param str client: the user id of the client that can communicate to\n guest using IUCV\"\"\"\n\n client = client or zvmutils.get_smt_userid()\n\n iucv_path = \"/tmp/\" + userid\n if not os.path.exists(iucv_path):\n os.makedirs(iucv_path)\n iucv_auth_file = iucv_path + \"/iucvauth.sh\"\n zvmutils.generate_iucv_authfile(iucv_auth_file, client)\n\n try:\n requestData = \"ChangeVM \" + userid + \" punchfile \" + \\\n iucv_auth_file + \" --class x\"\n self._request(requestData)\n except exception.SDKSMTRequestFailed as err:\n msg = (\"Failed to punch IUCV auth file to userid '%s'. 
SMT error:\"\n \" %s\" % (userid, err.format_message()))\n LOG.error(msg)\n raise exception.SDKSMTRequestFailed(err.results, msg)\n finally:\n self._pathutils.clean_temp_folder(iucv_path)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef guest_deploy(self, userid, image_name, transportfiles=None,\n remotehost=None, vdev=None):\n \"\"\" Deploy image and punch config driver to target \"\"\"\n # (TODO: add the support of multiple disks deploy)\n msg = ('Start to deploy image %(img)s to guest %(vm)s'\n % {'img': image_name, 'vm': userid})\n LOG.info(msg)\n image_file = '/'.join([self._get_image_path_by_name(image_name),\n CONF.zvm.user_root_vdev])\n # Unpack image file to root disk\n vdev = vdev or CONF.zvm.user_root_vdev\n cmd = ['sudo', '/opt/zthin/bin/unpackdiskimage', userid, vdev,\n image_file]\n with zvmutils.expect_and_reraise_internal_error(modID='guest'):\n (rc, output) = zvmutils.execute(cmd)\n if rc != 0:\n err_msg = (\"unpackdiskimage failed with return code: %d.\" % rc)\n err_output = \"\"\n output_lines = output.split('\\n')\n for line in output_lines:\n if line.__contains__(\"ERROR:\"):\n err_output += (\"\\\\n\" + line.strip())\n LOG.error(err_msg + err_output)\n raise exception.SDKGuestOperationError(rs=3, userid=userid,\n unpack_rc=rc,\n err=err_output)\n\n # Purge guest reader to clean dirty data\n rd = (\"changevm %s purgerdr\" % userid)\n action = \"purge reader of '%s'\" % userid\n with zvmutils.log_and_reraise_smt_request_failed(action):\n self._request(rd)\n\n # Punch transport files if specified\n if transportfiles:\n # Copy transport file to local\n msg = ('Start to send customized file to vm %s' % userid)\n LOG.info(msg)\n\n try:\n tmp_trans_dir = tempfile.mkdtemp()\n local_trans = '/'.join([tmp_trans_dir,\n os.path.basename(transportfiles)])\n if remotehost:\n cmd = [\"/usr/bin/scp\", \"-B\",\n \"-P\", CONF.zvm.remotehost_sshd_port,\n \"-o StrictHostKeyChecking=no\",\n (\"%s:%s\" % 
(remotehost, transportfiles)),\n local_trans]\n else:\n cmd = [\"/usr/bin/cp\", transportfiles, local_trans]\n with zvmutils.expect_and_reraise_internal_error(modID='guest'):\n (rc, output) = zvmutils.execute(cmd)\n if rc != 0:\n err_msg = ('copy config drive with command %(cmd)s '\n 'failed with output: %(res)s' %\n {'cmd': str(cmd), 'res': output})\n LOG.error(err_msg)\n raise exception.SDKGuestOperationError(rs=4, userid=userid,\n err_info=err_msg)\n\n # Punch config drive to guest userid\n rd = (\"changevm %(uid)s punchfile %(file)s --class X\" %\n {'uid': userid, 'file': local_trans})\n action = \"punch config drive to userid '%s'\" % userid\n with zvmutils.log_and_reraise_smt_request_failed(action):\n self._request(rd)\n finally:\n # remove the local temp config drive folder\n self._pathutils.clean_temp_folder(tmp_trans_dir)\n # Authorize iucv client\n self.guest_authorize_iucv_client(userid)\n # Update os version in guest metadata\n # TODO: may should append to old metadata, not replace\n image_info = self._ImageDbOperator.image_query_record(image_name)\n metadata = 'os_version=%s' % image_info[0]['imageosdistro']\n self._GuestDbOperator.update_guest_by_userid(userid, meta=metadata)\n\n msg = ('Deploy image %(img)s to guest %(vm)s disk %(vdev)s'\n ' successfully' % {'img': image_name, 'vm': userid,\n 'vdev': vdev})\n LOG.info(msg)", "response": "Deploy image and punch config driver to target"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef grant_user_to_vswitch(self, vswitch_name, userid):\n smt_userid = zvmutils.get_smt_userid()\n requestData = ' '.join((\n 'SMAPI %s API Virtual_Network_Vswitch_Set_Extended' % smt_userid,\n \"--operands\",\n \"-k switch_name=%s\" % vswitch_name,\n \"-k grant_userid=%s\" % userid,\n \"-k persist=YES\"))\n\n try:\n self._request(requestData)\n except exception.SDKSMTRequestFailed as err:\n LOG.error(\"Failed to grant user %s to vswitch %s, error: %s\"\n % (userid, 
vswitch_name, err.format_message()))\n self._set_vswitch_exception(err, vswitch_name)", "response": "Grant user to vswitch."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncalls Image_Performance_Query to get guest current status.", "response": "def image_performance_query(self, uid_list):\n \"\"\"Call Image_Performance_Query to get guest current status.\n\n :uid_list: A list of zvm userids to be queried\n \"\"\"\n if uid_list == []:\n return {}\n\n if not isinstance(uid_list, list):\n uid_list = [uid_list]\n\n smt_userid = zvmutils.get_smt_userid()\n rd = ' '.join((\n \"SMAPI %s API Image_Performance_Query\" % smt_userid,\n \"--operands\",\n '-T \"%s\"' % (' '.join(uid_list)),\n \"-c %d\" % len(uid_list)))\n action = \"get performance info of userid '%s'\" % str(uid_list)\n with zvmutils.log_and_reraise_smt_request_failed(action):\n results = self._request(rd)\n\n ipq_kws = {\n 'userid': \"Guest name:\",\n 'guest_cpus': \"Guest CPUs:\",\n 'used_cpu_time': \"Used CPU time:\",\n 'elapsed_cpu_time': \"Elapsed time:\",\n 'min_cpu_count': \"Minimum CPU count:\",\n 'max_cpu_limit': \"Max CPU limit:\",\n 'samples_cpu_in_use': \"Samples CPU in use:\",\n 'samples_cpu_delay': \"Samples CPU delay:\",\n 'used_memory': \"Used memory:\",\n 'max_memory': \"Max memory:\",\n 'min_memory': \"Minimum memory:\",\n 'shared_memory': \"Shared memory:\",\n }\n\n pi_dict = {}\n pi = {}\n rpi_list = ('\\n'.join(results['response'])).split(\"\\n\\n\")\n for rpi in rpi_list:\n try:\n pi = zvmutils.translate_response_to_dict(rpi, ipq_kws)\n except exception.SDKInternalError as err:\n emsg = err.format_message()\n # when there is only one userid queried and this userid is\n # in 'off'state, the smcli will only returns the queried\n # userid number, no valid performance info returned.\n if(emsg.__contains__(\"No value matched with keywords.\")):\n continue\n else:\n raise err\n for k, v in pi.items():\n pi[k] = v.strip('\" ')\n if pi.get('userid') is 
not None:\n pi_dict[pi['userid']] = pi\n\n return pi_dict"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncall System_Image_Performance_Query to get guest current status.", "response": "def system_image_performance_query(self, namelist):\n \"\"\"Call System_Image_Performance_Query to get guest current status.\n\n :namelist: A namelist that defined in smapi namelist file.\n \"\"\"\n smt_userid = zvmutils.get_smt_userid()\n rd = ' '.join((\n \"SMAPI %s API System_Image_Performance_Query\" % smt_userid,\n \"--operands -T %s\" % namelist))\n action = \"get performance info of namelist '%s'\" % namelist\n with zvmutils.log_and_reraise_smt_request_failed(action):\n results = self._request(rd)\n\n ipq_kws = {\n 'userid': \"Guest name:\",\n 'guest_cpus': \"Guest CPUs:\",\n 'used_cpu_time': \"Used CPU time:\",\n 'elapsed_cpu_time': \"Elapsed time:\",\n 'min_cpu_count': \"Minimum CPU count:\",\n 'max_cpu_limit': \"Max CPU limit:\",\n 'samples_cpu_in_use': \"Samples CPU in use:\",\n 'samples_cpu_delay': \"Samples CPU delay:\",\n 'used_memory': \"Used memory:\",\n 'max_memory': \"Max memory:\",\n 'min_memory': \"Minimum memory:\",\n 'shared_memory': \"Shared memory:\",\n }\n\n pi_dict = {}\n pi = {}\n rpi_list = ('\\n'.join(results['response'])).split(\"\\n\\n\")\n for rpi in rpi_list:\n try:\n pi = zvmutils.translate_response_to_dict(rpi, ipq_kws)\n except exception.SDKInternalError as err:\n emsg = err.format_message()\n # when there is only one userid queried and this userid is\n # in 'off'state, the smcli will only returns the queried\n # userid number, no valid performance info returned.\n if(emsg.__contains__(\"No value matched with keywords.\")):\n continue\n else:\n raise err\n for k, v in pi.items():\n pi[k] = v.strip('\" ')\n if pi.get('userid') is not None:\n pi_dict[pi['userid']] = pi\n\n return pi_dict"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nsetting the attributes of a virtual switch.", 
"response": "def set_vswitch(self, switch_name, **kwargs):\n \"\"\"Set vswitch\"\"\"\n smt_userid = zvmutils.get_smt_userid()\n rd = ' '.join((\n \"SMAPI %s API Virtual_Network_Vswitch_Set_Extended\" %\n smt_userid,\n \"--operands\",\n \"-k switch_name=%s\" % switch_name))\n\n for k, v in kwargs.items():\n rd = ' '.join((rd,\n \"-k %(key)s=\\'%(value)s\\'\" %\n {'key': k, 'value': v}))\n\n try:\n self._request(rd)\n except exception.SDKSMTRequestFailed as err:\n LOG.error(\"Failed to set vswitch %s, error: %s\" %\n (switch_name, err.format_message()))\n self._set_vswitch_exception(err, switch_name)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ncoupling NIC to vswitch by adding vswitch into user direct.", "response": "def _couple_nic(self, userid, vdev, vswitch_name,\n active=False):\n \"\"\"Couple NIC to vswitch by adding vswitch into user direct.\"\"\"\n if active:\n self._is_active(userid)\n\n msg = ('Start to couple nic device %(vdev)s of guest %(vm)s '\n 'with vswitch %(vsw)s'\n % {'vdev': vdev, 'vm': userid, 'vsw': vswitch_name})\n LOG.info(msg)\n requestData = ' '.join((\n 'SMAPI %s' % userid,\n \"API Virtual_Network_Adapter_Connect_Vswitch_DM\",\n \"--operands\",\n \"-v %s\" % vdev,\n \"-n %s\" % vswitch_name))\n\n try:\n self._request(requestData)\n except exception.SDKSMTRequestFailed as err:\n LOG.error(\"Failed to couple nic %s to vswitch %s for user %s \"\n \"in the guest's user direct, error: %s\" %\n (vdev, vswitch_name, userid, err.format_message()))\n self._couple_inactive_exception(err, userid, vdev, vswitch_name)\n\n # the inst must be active, or this call will failed\n if active:\n requestData = ' '.join((\n 'SMAPI %s' % userid,\n 'API Virtual_Network_Adapter_Connect_Vswitch',\n \"--operands\",\n \"-v %s\" % vdev,\n \"-n %s\" % vswitch_name))\n\n try:\n self._request(requestData)\n except (exception.SDKSMTRequestFailed,\n exception.SDKInternalError) as err1:\n results1 = err1.results\n msg1 = err1.format_message()\n 
if ((results1 is not None) and\n (results1['rc'] == 204) and\n (results1['rs'] == 20)):\n LOG.warning(\"Virtual device %s already connected \"\n \"on the active guest system\", vdev)\n else:\n persist_OK = True\n requestData = ' '.join((\n 'SMAPI %s' % userid,\n 'API Virtual_Network_Adapter_Disconnect_DM',\n \"--operands\",\n '-v %s' % vdev))\n try:\n self._request(requestData)\n except (exception.SDKSMTRequestFailed,\n exception.SDKInternalError) as err2:\n results2 = err2.results\n msg2 = err2.format_message()\n if ((results2 is not None) and\n (results2['rc'] == 212) and\n (results2['rs'] == 32)):\n persist_OK = True\n else:\n persist_OK = False\n if persist_OK:\n self._couple_active_exception(err1, userid, vdev,\n vswitch_name)\n else:\n raise exception.SDKNetworkOperationError(rs=3,\n nic=vdev, vswitch=vswitch_name,\n couple_err=msg1, revoke_err=msg2)\n\n \"\"\"Update information in switch table.\"\"\"\n self._NetDbOperator.switch_update_record_with_switch(userid, vdev,\n vswitch_name)\n msg = ('Couple nic device %(vdev)s of guest %(vm)s '\n 'with vswitch %(vsw)s successfully'\n % {'vdev': vdev, 'vm': userid, 'vsw': vswitch_name})\n LOG.info(msg)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef couple_nic_to_vswitch(self, userid, nic_vdev,\n vswitch_name, active=False):\n \"\"\"Couple nic to vswitch.\"\"\"\n if active:\n msg = (\"both in the user direct of guest %s and on \"\n \"the active guest system\" % userid)\n else:\n msg = \"in the user direct of guest %s\" % userid\n LOG.debug(\"Connect nic %s to switch %s %s\",\n nic_vdev, vswitch_name, msg)\n self._couple_nic(userid, nic_vdev, vswitch_name, active=active)", "response": "Couple nic to vswitch."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _uncouple_nic(self, userid, vdev, active=False):\n if active:\n self._is_active(userid)\n msg = ('Start to uncouple nic device %(vdev)s of 
guest %(vm)s'\n % {'vdev': vdev, 'vm': userid})\n LOG.info(msg)\n\n requestData = ' '.join((\n 'SMAPI %s' % userid,\n \"API Virtual_Network_Adapter_Disconnect_DM\",\n \"--operands\",\n \"-v %s\" % vdev))\n\n try:\n self._request(requestData)\n except (exception.SDKSMTRequestFailed,\n exception.SDKInternalError) as err:\n results = err.results\n emsg = err.format_message()\n if ((results is not None) and\n (results['rc'] == 212) and\n (results['rs'] == 32)):\n LOG.warning(\"Virtual device %s is already disconnected \"\n \"in the guest's user direct\", vdev)\n else:\n LOG.error(\"Failed to uncouple nic %s in the guest's user \"\n \"direct, error: %s\" % (vdev, emsg))\n self._uncouple_inactive_exception(err, userid, vdev)\n\n \"\"\"Update information in switch table.\"\"\"\n self._NetDbOperator.switch_update_record_with_switch(userid, vdev,\n None)\n\n # the inst must be active, or this call will failed\n if active:\n requestData = ' '.join((\n 'SMAPI %s' % userid,\n 'API Virtual_Network_Adapter_Disconnect',\n \"--operands\",\n \"-v %s\" % vdev))\n try:\n self._request(requestData)\n except (exception.SDKSMTRequestFailed,\n exception.SDKInternalError) as err:\n results = err.results\n emsg = err.format_message()\n if ((results is not None) and\n (results['rc'] == 204) and\n (results['rs'] == 48)):\n LOG.warning(\"Virtual device %s is already \"\n \"disconnected on the active \"\n \"guest system\", vdev)\n else:\n LOG.error(\"Failed to uncouple nic %s on the active \"\n \"guest system, error: %s\" % (vdev, emsg))\n self._uncouple_active_exception(err, userid, vdev)\n msg = ('Uncouple nic device %(vdev)s of guest %(vm)s successfully'\n % {'vdev': vdev, 'vm': userid})\n LOG.info(msg)", "response": "Uncouple NIC from vswitch."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef execute_cmd(self, userid, cmdStr):\n requestData = 'cmdVM ' + userid + ' CMD \\'' + cmdStr + '\\''\n with 
zvmutils.log_and_reraise_smt_request_failed(action='execute '\n 'command on vm via iucv channel'):\n results = self._request(requestData)\n\n ret = results['response']\n return ret", "response": "Execute a command on a virtual machine."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nexecutes a direct command on the virtual machine.", "response": "def execute_cmd_direct(self, userid, cmdStr):\n \"\"\"\"cmdVM.\"\"\"\n requestData = 'cmdVM ' + userid + ' CMD \\'' + cmdStr + '\\''\n results = self._smt.request(requestData)\n return results"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nimports the image specified in url to SDK image repository and create a record in image db", "response": "def image_import(self, image_name, url, image_meta, remote_host=None):\n \"\"\"Import the image specified in url to SDK image repository, and\n create a record in image db, the imported images are located in\n image_repository/prov_method/os_version/image_name/, for example,\n /opt/sdk/images/netboot/rhel7.2/90685d2b-167bimage/0100\"\"\"\n image_info = []\n try:\n image_info = self._ImageDbOperator.image_query_record(image_name)\n except exception.SDKObjectNotExistError:\n msg = (\"The image record %s doens't exist in SDK image datebase,\"\n \" will import the image and create record now\" % image_name)\n LOG.info(msg)\n\n # Ensure the specified image is not exist in image DB\n if image_info:\n msg = (\"The image name %s has already exist in SDK image \"\n \"database, please check if they are same image or consider\"\n \" to use a different image name for import\" % image_name)\n LOG.error(msg)\n raise exception.SDKImageOperationError(rs=13, img=image_name)\n\n try:\n image_os_version = image_meta['os_version'].lower()\n target_folder = self._pathutils.create_import_image_repository(\n image_os_version, const.IMAGE_TYPE['DEPLOY'],\n image_name)\n except Exception as err:\n msg = ('Failed 
to create repository to store image %(img)s with '\n 'error: %(err)s, please make sure there are enough space '\n 'on zvmsdk server and proper permission to create the '\n 'repository' % {'img': image_name,\n 'err': six.text_type(err)})\n LOG.error(msg)\n raise exception.SDKImageOperationError(rs=14, msg=msg)\n\n try:\n import_image_fn = urlparse.urlparse(url).path.split('/')[-1]\n import_image_fpath = '/'.join([target_folder, import_image_fn])\n self._scheme2backend(urlparse.urlparse(url).scheme).image_import(\n image_name, url,\n import_image_fpath,\n remote_host=remote_host)\n\n # Check md5 after import to ensure import a correct image\n # TODO change to use query image name in DB\n expect_md5sum = image_meta.get('md5sum')\n real_md5sum = self._get_md5sum(import_image_fpath)\n if expect_md5sum and expect_md5sum != real_md5sum:\n msg = (\"The md5sum after import is not same as source image,\"\n \" the image has been broken\")\n LOG.error(msg)\n raise exception.SDKImageOperationError(rs=4)\n\n # After import to image repository, figure out the image type is\n # single disk image or multiple-disk image,if multiple disks image,\n # extract it, if it's single image, rename its name to be same as\n # specific vdev\n # TODO: (nafei) use sub-function to check the image type\n image_type = 'rootonly'\n if image_type == 'rootonly':\n final_image_fpath = '/'.join([target_folder,\n CONF.zvm.user_root_vdev])\n os.rename(import_image_fpath, final_image_fpath)\n elif image_type == 'alldisks':\n # For multiple disks image, extract it, after extract, the\n # content under image folder is like: 0100, 0101, 0102\n # and remove the image file 0100-0101-0102.tgz\n pass\n\n # TODO: put multiple disk image into consideration, update the\n # disk_size_units and image_size db field\n disk_size_units = self._get_disk_size_units(final_image_fpath)\n image_size = self._get_image_size(final_image_fpath)\n # TODO: update the real_md5sum field to include each disk image\n 
self._ImageDbOperator.image_add_record(image_name,\n image_os_version,\n real_md5sum,\n disk_size_units,\n image_size,\n image_type)\n LOG.info(\"Image %s is import successfully\" % image_name)\n except Exception:\n # Cleanup the image from image repository\n self._pathutils.clean_temp_folder(target_folder)\n raise"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef image_export(self, image_name, dest_url, remote_host=None):\n image_info = self._ImageDbOperator.image_query_record(image_name)\n if not image_info:\n msg = (\"The image %s does not exist in image repository\"\n % image_name)\n LOG.error(msg)\n raise exception.SDKImageOperationError(rs=20, img=image_name)\n\n image_type = image_info[0]['type']\n # TODO: (nafei) according to image_type, detect image exported path\n # For multiple disk image, make the tgz firstly, the specify the\n # source_path to be something like: 0100-0101-0102.tgz\n if image_type == 'rootonly':\n source_path = '/'.join([CONF.image.sdk_image_repository,\n const.IMAGE_TYPE['DEPLOY'],\n image_info[0]['imageosdistro'],\n image_name,\n CONF.zvm.user_root_vdev])\n else:\n pass\n\n self._scheme2backend(urlparse.urlparse(dest_url).scheme).image_export(\n source_path, dest_url,\n remote_host=remote_host)\n\n # TODO: (nafei) for multiple disks image, update the expect_dict\n # to be the tgz's md5sum\n export_dict = {'image_name': image_name,\n 'image_path': dest_url,\n 'os_version': image_info[0]['imageosdistro'],\n 'md5sum': image_info[0]['md5sum']}\n LOG.info(\"Image %s export successfully\" % image_name)\n return export_dict", "response": "Export the specific image to remote host or local file system"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns disk size in bytes", "response": "def _get_image_size(self, image_path):\n \"\"\"Return disk size in bytes\"\"\"\n command = 'du -b %s' % image_path\n (rc, output) = zvmutils.execute(command)\n if 
rc:\n msg = (\"Error happened when executing command du -b with\"\n \"reason: %s\" % output)\n LOG.error(msg)\n raise exception.SDKImageOperationError(rs=8)\n size = output.split()[0]\n return size"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncalculates the md5sum of the specific image file", "response": "def _get_md5sum(self, fpath):\n \"\"\"Calculate the md5sum of the specific image file\"\"\"\n try:\n current_md5 = hashlib.md5()\n if isinstance(fpath, six.string_types) and os.path.exists(fpath):\n with open(fpath, \"rb\") as fh:\n for chunk in self._read_chunks(fh):\n current_md5.update(chunk)\n\n elif (fpath.__class__.__name__ in [\"StringIO\", \"StringO\"] or\n isinstance(fpath, IOBase)):\n for chunk in self._read_chunks(fpath):\n current_md5.update(chunk)\n else:\n return \"\"\n return current_md5.hexdigest()\n except Exception:\n msg = (\"Failed to calculate the image's md5sum\")\n LOG.error(msg)\n raise exception.SDKImageOperationError(rs=3)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngetting the root disk size of the specified image", "response": "def image_get_root_disk_size(self, image_name):\n \"\"\"Return the root disk units of the specified image\n image_name: the unique image name in db\n Return the disk units in format like 3339:CYL or 467200:BLK\n \"\"\"\n image_info = self.image_query(image_name)\n if not image_info:\n raise exception.SDKImageOperationError(rs=20, img=image_name)\n disk_size_units = image_info[0]['disk_size_units'].split(':')[0]\n return disk_size_units"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget guest vm connection status.", "response": "def get_guest_connection_status(self, userid):\n '''Get guest vm connection status.'''\n rd = ' '.join(('getvm', userid, 'isreachable'))\n results = self._request(rd)\n if results['rs'] == 1:\n return True\n else:\n return False"} {"SOURCE": "codesearchnet", "instruction": 
"Implement a function in Python 3 to\ngenerate and punch the scripts used to process additional disk into target vm s reader.", "response": "def process_additional_minidisks(self, userid, disk_info):\n '''Generate and punch the scripts used to process additional disk into\n target vm's reader.\n '''\n for idx, disk in enumerate(disk_info):\n vdev = disk.get('vdev') or self.generate_disk_vdev(\n offset = (idx + 1))\n fmt = disk.get('format')\n mount_dir = disk.get('mntdir') or ''.join(['/mnt/ephemeral',\n str(vdev)])\n disk_parms = self._generate_disk_parmline(vdev, fmt, mount_dir)\n func_name = '/var/lib/zvmsdk/setupDisk'\n self.aemod_handler(userid, func_name, disk_parms)\n\n # trigger do-script\n if self.get_power_state(userid) == 'on':\n self.execute_cmd(userid, \"/usr/bin/zvmguestconfigure start\")"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _request_with_error_ignored(self, rd):\n try:\n return self._request(rd)\n except Exception as err:\n # log as warning and ignore namelist operation failures\n LOG.warning(six.text_type(err))", "response": "Send smt request log and ignore any errors."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef image_import(cls, image_name, url, target, **kwargs):\n source = urlparse.urlparse(url).path\n if kwargs['remote_host']:\n if '@' in kwargs['remote_host']:\n source_path = ':'.join([kwargs['remote_host'], source])\n command = ' '.join(['/usr/bin/scp',\n \"-P\", CONF.zvm.remotehost_sshd_port,\n \"-o StrictHostKeyChecking=no\",\n '-r ', source_path, target])\n (rc, output) = zvmutils.execute(command)\n if rc:\n msg = (\"Copying image file from remote filesystem failed\"\n \" with reason: %s\" % output)\n LOG.error(msg)\n raise exception.SDKImageOperationError(rs=10, err=output)\n else:\n msg = (\"The specified remote_host %s format invalid\" %\n kwargs['remote_host'])\n LOG.error(msg)\n raise 
exception.SDKImageOperationError(rs=11,\n rh=kwargs['remote_host'])\n else:\n LOG.debug(\"Remote_host not specified, will copy from local\")\n try:\n shutil.copyfile(source, target)\n except Exception as err:\n msg = (\"Import image from local file system failed\"\n \" with reason %s\" % six.text_type(err))\n LOG.error(msg)\n raise exception.SDKImageOperationError(rs=12,\n err=six.text_type(err))", "response": "This method imports an image from remote host to local image repository using scp."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nexport the specific image to remote host or local file system", "response": "def image_export(cls, source_path, dest_url, **kwargs):\n \"\"\"Export the specific image to remote host or local file system \"\"\"\n dest_path = urlparse.urlparse(dest_url).path\n if kwargs['remote_host']:\n target_path = ':'.join([kwargs['remote_host'], dest_path])\n command = ' '.join(['/usr/bin/scp',\n \"-P\", CONF.zvm.remotehost_sshd_port,\n \"-o StrictHostKeyChecking=no\",\n '-r ', source_path, target_path])\n (rc, output) = zvmutils.execute(command)\n if rc:\n msg = (\"Error happened when copying image file to remote \"\n \"host with reason: %s\" % output)\n LOG.error(msg)\n raise exception.SDKImageOperationError(rs=21, msg=output)\n else:\n # Copy to local file system\n LOG.debug(\"Remote_host not specified, will copy to local server\")\n try:\n shutil.copyfile(source_path, dest_path)\n except Exception as err:\n msg = (\"Export image from %(src)s to local file system\"\n \" %(dest)s failed: %(err)s\" %\n {'src': source_path,\n 'dest': dest_path,\n 'err': six.text_type(err)})\n LOG.error(msg)\n raise exception.SDKImageOperationError(rs=22,\n err=six.text_type(err))"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef dedent(func):\n if isinstance(func, types.MethodType) and not six.PY3:\n func = func.im_func\n func.__doc__ = func.__doc__ and dedents(func.__doc__)\n 
return func", "response": "Dedent the docstring of a function and substitute with params"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef append_original_doc(parent, num=0):\n def func(func):\n func.__doc__ = func.__doc__ and func.__doc__ + indent(\n parent.__doc__, num)\n return func\n return func", "response": "Return an iterator that appends the docstring of the given parent to the applied function"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nextract the specified sections out of the given string s.", "response": "def get_sections(self, s, base, sections=[\n 'Parameters', 'Other Parameters', 'Possible types']):\n \"\"\"\n Extract the specified sections out of the given string\n\n The same as the :meth:`docrep.DocstringProcessor.get_sections` method\n but uses the ``'Possible types'`` section by default, too\n\n Parameters\n ----------\n %(DocstringProcessor.get_sections.parameters)s\n\n Returns\n -------\n str\n The replaced string\n \"\"\"\n return super(PsyplotDocstringProcessor, self).get_sections(s, base,\n sections)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngenerating config drive for zVM guest vm.", "response": "def create_config_drive(network_interface_info, os_version):\n \"\"\"Generate config driver for zVM guest vm.\n\n :param dict network_interface_info: Required keys:\n ip_addr - (str) IP address\n nic_vdev - (str) VDEV of the nic\n gateway_v4 - IPV4 gateway\n broadcast_v4 - IPV4 broadcast address\n netmask_v4 - IPV4 netmask\n :param str os_version: operating system version of the guest\n \"\"\"\n temp_path = CONF.guest.temp_path\n if not os.path.exists(temp_path):\n os.mkdir(temp_path)\n cfg_dir = os.path.join(temp_path, 'openstack')\n if os.path.exists(cfg_dir):\n shutil.rmtree(cfg_dir)\n content_dir = os.path.join(cfg_dir, 'content')\n latest_dir = os.path.join(cfg_dir, 'latest')\n os.mkdir(cfg_dir)\n 
os.mkdir(content_dir)\n os.mkdir(latest_dir)\n\n net_file = os.path.join(content_dir, '0000')\n generate_net_file(network_interface_info, net_file, os_version)\n znetconfig_file = os.path.join(content_dir, '0001')\n generate_znetconfig_file(znetconfig_file, os_version)\n meta_data_path = os.path.join(latest_dir, 'meta_data.json')\n generate_meta_data(meta_data_path)\n network_data_path = os.path.join(latest_dir, 'network_data.json')\n generate_file('{}', network_data_path)\n vendor_data_path = os.path.join(latest_dir, 'vendor_data.json')\n generate_file('{}', vendor_data_path)\n\n tar_path = os.path.join(temp_path, 'cfgdrive.tgz')\n tar = tarfile.open(tar_path, \"w:gz\")\n os.chdir(temp_path)\n tar.add('openstack')\n tar.close()\n\n return tar_path"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef scrypt(password, salt, N=SCRYPT_N, r=SCRYPT_r, p=SCRYPT_p, olen=64):\n check_args(password, salt, N, r, p, olen)\n\n # Set the memory required based on parameter values\n m = 128 * r * (N + p + 2)\n\n try:\n return _scrypt(\n password=password, salt=salt, n=N, r=r, p=p, maxmem=m, dklen=olen)\n except:\n raise ValueError", "response": "Returns a key derived using the scrypt key - derivarion function"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef set(self, ctype, key, data):\n with zvmutils.acquire_lock(self._lock):\n target_cache = self._get_ctype_cache(ctype)\n target_cache['data'][key] = data", "response": "Set or update cache content."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nconstruct a Good HTTP response object.", "response": "def GoodResponse(body, request, status_code=None,\n headers=None):\n \"\"\"\n Construct a Good HTTP response (defined in DEFAULT_GOOD_RESPONSE_CODE)\n\n :param body: The body of the response\n :type body: ``str``\n\n :param request: The HTTP request\n :type request: 
:class:`requests.Request`\n\n :param status_code: The return status code, defaults\n to DEFAULT_GOOD_STATUS_CODE if not specified\n :type status_code: ``int``\n\n :param headers: Response headers, defaults to\n DEFAULT_RESPONSE_HEADERS if not specified\n :type headers: ``dict``\n\n :rtype: :class:`requests.Response`\n :returns: a Response object\n \"\"\"\n response = Response()\n response.url = request.url\n response.raw = BytesIO(body)\n if status_code:\n response.status_code = status_code\n else:\n response.status_code = DEFAULT_GOOD_STATUS_CODE\n if headers:\n response.headers = headers\n else:\n response.headers = DEFAULT_RESPONSE_HEADERS\n response.request = request\n response._content = body\n return response"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef BadResponse(body, request, status_code=None,\n headers=None):\n \"\"\"\n Construct a Bad HTTP response (defined in DEFAULT_BAD_RESPONSE_CODE)\n\n :param body: The body of the response\n :type body: ``str``\n\n :param request: The HTTP request\n :type request: :class:`requests.Request`\n\n :param status_code: The return status code, defaults\n to DEFAULT_GOOD_STATUS_CODE if not specified\n :type status_code: ``int``\n\n :param headers: Response headers, defaults to\n DEFAULT_RESPONSE_HEADERS if not specified\n :type headers: ``dict``\n\n :rtype: :class:`requests.Response`\n :returns: a Response object\n \"\"\"\n response = Response()\n response.url = request.url\n response.raw = BytesIO(body)\n if status_code:\n response.status_code = status_code\n else:\n response.status_code = DEFAULT_BAD_STATUS_CODE\n if headers:\n response.headers = headers\n else:\n response.headers = DEFAULT_RESPONSE_HEADERS\n response.request = request\n response._content = body\n return response", "response": "Construct a Bad HTTP response object."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nfunction that disables 
all warnings related to the psyplot Module.", "response": "def disable_warnings(critical=False):\n \"\"\"Function that disables all warnings and all critical warnings (if\n critical evaluates to True) related to the psyplot Module.\n Please note that you can also configure the warnings via the\n psyplot.warning logger (logging.getLogger(psyplot.warning)).\"\"\"\n warnings.filterwarnings('ignore', '\\w', PsyPlotWarning, 'psyplot', 0)\n if critical:\n warnings.filterwarnings('ignore', '\\w', PsyPlotCritical, 'psyplot', 0)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nwraps around the warnings. warn function for critical warnings. warn", "response": "def warn(message, category=PsyPlotWarning, logger=None):\n \"\"\"wrapper around the warnings.warn function for non-critical warnings.\n logger may be a logging.Logger instance\"\"\"\n if logger is not None:\n message = \"[Warning by %s]\\n%s\" % (logger.name, message)\n warnings.warn(message, category, stacklevel=3)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nwrapping around the warnings. warn function for critical warnings. warn function for critical warnings. warn logger may be a logging. 
Logger instance", "response": "def critical(message, category=PsyPlotCritical, logger=None):\n \"\"\"wrapper around the warnings.warn function for critical warnings.\n logger may be a logging.Logger instance\"\"\"\n if logger is not None:\n message = \"[Critical warning by %s]\\n%s\" % (logger.name, message)\n warnings.warn(message, category, stacklevel=2)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nuses the psyplot.warning logger for categories being out of PsyPlotWarning and PsyPlotCritical and the default warnings.showwarning function for all the others.", "response": "def customwarn(message, category, filename, lineno, *args, **kwargs):\n \"\"\"Use the psyplot.warning logger for categories being out of\n PsyPlotWarning and PsyPlotCritical and the default warnings.showwarning\n function for all the others.\"\"\"\n if category is PsyPlotWarning:\n logger.warning(warnings.formatwarning(\n \"\\n%s\" % message, category, filename, lineno))\n elif category is PsyPlotCritical:\n logger.critical(warnings.formatwarning(\n \"\\n%s\" % message, category, filename, lineno),\n exc_info=True)\n else:\n old_showwarning(message, category, filename, lineno, *args, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _get_home():\n try:\n if six.PY2 and sys.platform == 'win32':\n path = os.path.expanduser(b\"~\").decode(sys.getfilesystemencoding())\n else:\n path = os.path.expanduser(\"~\")\n except ImportError:\n # This happens on Google App Engine (pwd module is not present).\n pass\n else:\n if os.path.isdir(path):\n return path\n for evar in ('HOME', 'USERPROFILE', 'TMP'):\n path = os.environ.get(evar)\n if path is not None and os.path.isdir(path):\n return path\n return None", "response": "Find user s home directory if possible. 
Otherwise returns None."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef match_url(self, request):\n parsed_url = urlparse(request.path_url)\n path_url = parsed_url.path\n query_params = parsed_url.query\n match = None\n\n for path in self.paths:\n for item in self.index:\n target_path = os.path.join(BASE_PATH, path, path_url[1:])\n\n query_path = target_path.lower() + quote(\n '?' + query_params).lower()\n if target_path.lower() == item[0]:\n match = item[1]\n break\n elif query_path == item[0]:\n match = item[1]\n break\n return match", "response": "Match the request against a file in the adapter directory"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _reindex(self):\n self.index = []\n for path in self.paths:\n target_path = os.path.normpath(os.path.join(BASE_PATH,\n path))\n for root, subdirs, files in os.walk(target_path):\n for f in files:\n self.index.append(\n (os.path.join(root, f).lower(),\n os.path.join(root, f)))", "response": "Reindex the index of the paths in the database."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef add3390(rh):\n\n rh.printSysLog(\"Enter changeVM.add3390\")\n results, cyl = generalUtils.cvtToCyl(rh, rh.parms['diskSize'])\n if results['overallRC'] != 0:\n # message already sent. 
Only need to update the final results.\n rh.updateResults(results)\n\n if results['overallRC'] == 0:\n parms = [\n \"-T\", rh.userid,\n \"-v\", rh.parms['vaddr'],\n \"-t\", \"3390\",\n \"-a\", \"AUTOG\",\n \"-r\", rh.parms['diskPool'],\n \"-u\", \"1\",\n \"-z\", cyl,\n \"-f\", \"1\"]\n hideList = []\n\n if 'mode' in rh.parms:\n parms.extend([\"-m\", rh.parms['mode']])\n else:\n parms.extend([\"-m\", 'W'])\n if 'readPW' in rh.parms:\n parms.extend([\"-R\", rh.parms['readPW']])\n hideList.append(len(parms) - 1)\n if 'writePW' in rh.parms:\n parms.extend([\"-W\", rh.parms['writePW']])\n hideList.append(len(parms) - 1)\n if 'multiPW' in rh.parms:\n parms.extend([\"-M\", rh.parms['multiPW']])\n hideList.append(len(parms) - 1)\n\n results = invokeSMCLI(rh,\n \"Image_Disk_Create_DM\",\n parms,\n hideInLog=hideList)\n\n if results['overallRC'] != 0:\n # SMAPI API failed.\n rh.printLn(\"ES\", results['response'])\n rh.updateResults(results) # Use results returned by invokeSMCLI\n\n if (results['overallRC'] == 0 and 'fileSystem' in rh.parms):\n results = installFS(\n rh,\n rh.parms['vaddr'],\n rh.parms['mode'],\n rh.parms['fileSystem'],\n \"3390\")\n\n if results['overallRC'] == 0:\n results = isLoggedOn(rh, rh.userid)\n if results['overallRC'] != 0:\n # Cannot determine if VM is logged on or off.\n # We have partially failed. 
Pass back the results.\n rh.updateResults(results)\n elif results['rs'] == 0:\n # Add the disk to the active configuration.\n parms = [\n \"-T\", rh.userid,\n \"-v\", rh.parms['vaddr'],\n \"-m\", rh.parms['mode']]\n\n results = invokeSMCLI(rh, \"Image_Disk_Create\", parms)\n if results['overallRC'] == 0:\n rh.printLn(\"N\", \"Added dasd \" + rh.parms['vaddr'] +\n \" to the active configuration.\")\n else:\n # SMAPI API failed.\n rh.printLn(\"ES\", results['response'])\n rh.updateResults(results) # Use results from invokeSMCLI\n\n rh.printSysLog(\"Exit changeVM.add3390, rc: \" +\n str(rh.results['overallRC']))\n\n return rh.results['overallRC']", "response": "This function adds a 3390 disk to a virtual machine s directory entry."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nadding a 9336 (FBA) disk to virtual machine's directory entry. Input: Request Handle with the following properties: function - 'CHANGEVM' subfunction - 'ADD9336' userid - userid of the virtual machine parms['diskPool'] - Disk pool parms['diskSize'] - size of the disk in blocks or bytes. parms['fileSystem'] - Linux filesystem to install on the disk. parms['mode'] - Disk access mode parms['multiPW'] - Multi-write password parms['readPW'] - Read password parms['vaddr'] - Virtual address parms['writePW'] - Write password Output: Request Handle updated with the results. 
Return code - 0: ok, non-zero: error", "response": "def add9336(rh):\n \"\"\"\n Adds a 9336 (FBA) disk to virtual machine's directory entry.\n\n Input:\n Request Handle with the following properties:\n function - 'CHANGEVM'\n subfunction - 'ADD9336'\n userid - userid of the virtual machine\n parms['diskPool'] - Disk pool\n parms['diskSize'] - size of the disk in blocks or bytes.\n parms['fileSystem'] - Linux filesystem to install on the disk.\n parms['mode'] - Disk access mode\n parms['multiPW'] - Multi-write password\n parms['readPW'] - Read password\n parms['vaddr'] - Virtual address\n parms['writePW'] - Write password\n\n Output:\n Request Handle updated with the results.\n Return code - 0: ok, non-zero: error\n \"\"\"\n\n rh.printSysLog(\"Enter changeVM.add9336\")\n results, blocks = generalUtils.cvtToBlocks(rh, rh.parms['diskSize'])\n if results['overallRC'] != 0:\n # message already sent. Only need to update the final results.\n rh.updateResults(results)\n\n if results['overallRC'] == 0:\n parms = [\n \"-T\", rh.userid,\n \"-v\", rh.parms['vaddr'],\n \"-t\", \"9336\",\n \"-a\", \"AUTOG\",\n \"-r\", rh.parms['diskPool'],\n \"-u\", \"1\",\n \"-z\", blocks,\n \"-f\", \"1\"]\n hideList = []\n\n if 'mode' in rh.parms:\n parms.extend([\"-m\", rh.parms['mode']])\n else:\n parms.extend([\"-m\", 'W'])\n if 'readPW' in rh.parms:\n parms.extend([\"-R\", rh.parms['readPW']])\n hideList.append(len(parms) - 1)\n if 'writePW' in rh.parms:\n parms.extend([\"-W\", rh.parms['writePW']])\n hideList.append(len(parms) - 1)\n if 'multiPW' in rh.parms:\n parms.extend([\"-M\", rh.parms['multiPW']])\n hideList.append(len(parms) - 1)\n\n results = invokeSMCLI(rh,\n \"Image_Disk_Create_DM\",\n parms,\n hideInLog=hideList)\n\n if results['overallRC'] != 0:\n # SMAPI API failed.\n rh.printLn(\"ES\", results['response'])\n rh.updateResults(results) # Use results from invokeSMCLI\n\n if (results['overallRC'] == 0 and 'fileSystem' in rh.parms):\n # Install the file system\n results = 
installFS(\n rh,\n rh.parms['vaddr'],\n rh.parms['mode'],\n rh.parms['fileSystem'],\n \"9336\")\n\n if results['overallRC'] == 0:\n results = isLoggedOn(rh, rh.userid)\n if (results['overallRC'] == 0 and results['rs'] == 0):\n # Add the disk to the active configuration.\n parms = [\n \"-T\", rh.userid,\n \"-v\", rh.parms['vaddr'],\n \"-m\", rh.parms['mode']]\n\n results = invokeSMCLI(rh, \"Image_Disk_Create\", parms)\n if results['overallRC'] == 0:\n rh.printLn(\"N\", \"Added dasd \" + rh.parms['vaddr'] +\n \" to the active configuration.\")\n else:\n # SMAPI API failed.\n rh.printLn(\"ES\", results['response'])\n rh.updateResults(results) # Use results from invokeSMCLI\n\n rh.printSysLog(\"Exit changeVM.add9336, rc: \" +\n str(rh.results['overallRC']))\n return rh.results['overallRC']"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef dedicate(rh):\n rh.printSysLog(\"Enter changeVM.dedicate\")\n\n parms = [\n \"-T\", rh.userid,\n \"-v\", rh.parms['vaddr'],\n \"-r\", rh.parms['raddr'],\n \"-R\", rh.parms['mode']]\n\n hideList = []\n results = invokeSMCLI(rh,\n \"Image_Device_Dedicate_DM\",\n parms,\n hideInLog=hideList)\n\n if results['overallRC'] != 0:\n # SMAPI API failed.\n rh.printLn(\"ES\", results['response'])\n rh.updateResults(results) # Use results from invokeSMCLI\n\n if results['overallRC'] == 0:\n results = isLoggedOn(rh, rh.userid)\n if (results['overallRC'] == 0 and results['rs'] == 0):\n # Dedicate device to active configuration.\n parms = [\n \"-T\", rh.userid,\n \"-v\", rh.parms['vaddr'],\n \"-r\", rh.parms['raddr'],\n \"-R\", rh.parms['mode']]\n\n results = invokeSMCLI(rh, \"Image_Device_Dedicate\", parms)\n if results['overallRC'] == 0:\n rh.printLn(\"N\", \"Dedicated device \" + rh.parms['vaddr'] +\n \" to the active configuration.\")\n else:\n # SMAPI API failed.\n rh.printLn(\"ES\", results['response'])\n rh.updateResults(results) # Use results from invokeSMCLI\n\n 
rh.printSysLog(\"Exit changeVM.dedicate, rc: \" +\n str(rh.results['overallRC']))\n return rh.results['overallRC']", "response": "Dedicate a virtual machine."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef addAEMOD(rh):\n rh.printSysLog(\"Enter changeVM.addAEMOD\")\n invokeScript = \"invokeScript.sh\"\n trunkFile = \"aemod.doscript\"\n fileClass = \"X\"\n tempDir = tempfile.mkdtemp()\n\n if os.path.isfile(rh.parms['aeScript']):\n # Get the short name of our activation engine modifier script\n if rh.parms['aeScript'].startswith(\"/\"):\n s = rh.parms['aeScript']\n tmpAEScript = s[s.rindex(\"/\") + 1:]\n else:\n tmpAEScript = rh.parms['aeScript']\n\n # Copy the mod script to our temp directory\n shutil.copyfile(rh.parms['aeScript'], tempDir + \"/\" + tmpAEScript)\n\n # Create the invocation script.\n conf = \"#!/bin/bash \\n\"\n baseName = os.path.basename(rh.parms['aeScript'])\n parm = \"/bin/bash %s %s \\n\" % (baseName, rh.parms['invParms'])\n\n fh = open(tempDir + \"/\" + invokeScript, \"w\")\n fh.write(conf)\n fh.write(parm)\n fh.close()\n\n # Generate the tar package for punch\n tar = tarfile.open(tempDir + \"/\" + trunkFile, \"w\")\n for file in os.listdir(tempDir):\n tar.add(tempDir + \"/\" + file, arcname=file)\n tar.close()\n\n # Punch file to reader\n punch2reader(rh, rh.userid, tempDir + \"/\" + trunkFile, fileClass)\n shutil.rmtree(tempDir)\n\n else:\n # Worker script does not exist.\n shutil.rmtree(tempDir)\n msg = msgs.msg['0400'][1] % (modId, rh.parms['aeScript'])\n rh.printLn(\"ES\", msg)\n rh.updateResults(msgs.msg['0400'][0])\n\n rh.printSysLog(\"Exit changeVM.addAEMOD, rc: \" +\n str(rh.results['overallRC']))\n return rh.results['overallRC']", "response": "This function sends an Activation Modification Script to the virtual machine."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nsetting the LOADDEV statement in the virtual machine's directory entry. 
Input: Request Handle with the following properties: function - 'CHANGEVM' subfunction - 'ADDLOADDEV' userid - userid of the virtual machine parms['boot'] - Boot program number parms['addr'] - Logical block address of the boot record parms['lun'] - One to eight-byte logical unit number of the FCP-I/O device. parms['wwpn'] - World-Wide Port Number parms['scpDataType'] - SCP data type parms['scpData'] - Designates information to be passed to the program is loaded during guest IPL. Note that any of the parms may be left blank, in which case we will not update them. Output: Request Handle updated with the results. Return code - 0: ok, non-zero: error", "response": "def addLOADDEV(rh):\n \"\"\"\n Sets the LOADDEV statement in the virtual machine's directory entry.\n\n Input:\n Request Handle with the following properties:\n function - 'CHANGEVM'\n subfunction - 'ADDLOADDEV'\n userid - userid of the virtual machine\n parms['boot'] - Boot program number\n parms['addr'] - Logical block address of the boot record\n parms['lun'] - One to eight-byte logical unit number\n of the FCP-I/O device.\n parms['wwpn'] - World-Wide Port Number\n parms['scpDataType'] - SCP data type\n parms['scpData'] - Designates information to be passed to the\n program is loaded during guest IPL.\n\n Note that any of the parms may be left blank, in which case\n we will not update them.\n\n\n Output:\n Request Handle updated with the results.\n Return code - 0: ok, non-zero: error\n \"\"\"\n rh.printSysLog(\"Enter changeVM.addLOADDEV\")\n\n # scpDataType and scpData must appear or disappear concurrently\n if ('scpData' in rh.parms and 'scpDataType' not in rh.parms):\n msg = msgs.msg['0014'][1] % (modId, \"scpData\", \"scpDataType\")\n rh.printLn(\"ES\", msg)\n rh.updateResults(msgs.msg['0014'][0])\n return\n if ('scpDataType' in rh.parms and 'scpData' not in rh.parms):\n if rh.parms['scpDataType'].lower() == \"delete\":\n scpDataType = 1\n else:\n # scpDataType and scpData must appear or disappear\n # 
concurrently unless we're deleting data\n msg = msgs.msg['0014'][1] % (modId, \"scpDataType\", \"scpData\")\n rh.printLn(\"ES\", msg)\n rh.updateResults(msgs.msg['0014'][0])\n return\n\n scpData = \"\"\n if 'scpDataType' in rh.parms:\n if rh.parms['scpDataType'].lower() == \"hex\":\n scpData = rh.parms['scpData']\n scpDataType = 3\n elif rh.parms['scpDataType'].lower() == \"ebcdic\":\n scpData = rh.parms['scpData']\n scpDataType = 2\n # scpDataType not hex, ebcdic or delete\n elif rh.parms['scpDataType'].lower() != \"delete\":\n msg = msgs.msg['0016'][1] % (modId, rh.parms['scpDataType'])\n rh.printLn(\"ES\", msg)\n rh.updateResults(msgs.msg['0016'][0])\n return\n else:\n # Not specified, 0 for do nothing\n scpDataType = 0\n scpData = \"\"\n\n if 'boot' not in rh.parms:\n boot = \"\"\n else:\n boot = rh.parms['boot']\n if 'addr' not in rh.parms:\n block = \"\"\n else:\n block = rh.parms['addr']\n if 'lun' not in rh.parms:\n lun = \"\"\n else:\n lun = rh.parms['lun']\n # Make sure it doesn't have the 0x prefix\n lun.replace(\"0x\", \"\")\n if 'wwpn' not in rh.parms:\n wwpn = \"\"\n else:\n wwpn = rh.parms['wwpn']\n # Make sure it doesn't have the 0x prefix\n wwpn.replace(\"0x\", \"\")\n\n parms = [\n \"-T\", rh.userid,\n \"-b\", boot,\n \"-k\", block,\n \"-l\", lun,\n \"-p\", wwpn,\n \"-s\", str(scpDataType)]\n\n if scpData != \"\":\n parms.extend([\"-d\", scpData])\n\n results = invokeSMCLI(rh, \"Image_SCSI_Characteristics_Define_DM\", parms)\n\n # SMAPI API failed.\n if results['overallRC'] != 0:\n rh.printLn(\"ES\", results['response'])\n rh.updateResults(results)\n\n rh.printSysLog(\"Exit changeVM.addLOADDEV, rc: \" +\n str(rh.results['overallRC']))\n return rh.results['overallRC']"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef punchFile(rh):\n\n rh.printSysLog(\"Enter changeVM.punchFile\")\n\n # Default spool class in \"A\" , if specified change to specified class\n spoolClass = \"A\"\n if 'class' 
in rh.parms:\n spoolClass = str(rh.parms['class'])\n\n punch2reader(rh, rh.userid, rh.parms['file'], spoolClass)\n\n rh.printSysLog(\"Exit changeVM.punchFile, rc: \" +\n str(rh.results['overallRC']))\n return rh.results['overallRC']", "response": "Punch a file to a virtual reader of the specified virtual machine."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef purgeRDR(rh):\n\n rh.printSysLog(\"Enter changeVM.purgeRDR\")\n results = purgeReader(rh)\n rh.updateResults(results)\n rh.printSysLog(\"Exit changeVM.purgeRDR, rc: \" +\n str(rh.results['overallRC']))\n return rh.results['overallRC']", "response": "Purges the reader belonging to the virtual machine and returns the virtual machine ID."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef removeDisk(rh):\n\n rh.printSysLog(\"Enter changeVM.removeDisk\")\n\n results = {'overallRC': 0, 'rc': 0, 'rs': 0}\n\n # Is image logged on\n loggedOn = False\n results = isLoggedOn(rh, rh.userid)\n if results['overallRC'] == 0:\n if results['rs'] == 0:\n loggedOn = True\n\n results = disableEnableDisk(\n rh,\n rh.userid,\n rh.parms['vaddr'],\n '-d')\n if results['overallRC'] != 0:\n rh.printLn(\"ES\", results['response'])\n rh.updateResults(results)\n\n if results['overallRC'] == 0 and loggedOn:\n strCmd = \"/sbin/vmcp detach \" + rh.parms['vaddr']\n results = execCmdThruIUCV(rh, rh.userid, strCmd)\n if results['overallRC'] != 0:\n if re.search('(^HCP\\w\\w\\w040E)', results['response']):\n # Device does not exist, ignore the error\n results = {'overallRC': 0, 'rc': 0, 'rs': 0, 'response': ''}\n else:\n rh.printLn(\"ES\", results['response'])\n rh.updateResults(results)\n\n if results['overallRC'] == 0:\n # Remove the disk from the user entry.\n parms = [\n \"-T\", rh.userid,\n \"-v\", rh.parms['vaddr'],\n \"-e\", \"0\"]\n\n results = invokeSMCLI(rh, \"Image_Disk_Delete_DM\", parms)\n if 
results['overallRC'] != 0:\n if (results['overallRC'] == 8 and results['rc'] == 208 and\n results['rs'] == 36):\n # Disk does not exist, ignore the error\n results = {'overallRC': 0, 'rc': 0, 'rs': 0, 'response': ''}\n else:\n # SMAPI API failed.\n rh.printLn(\"ES\", results['response'])\n rh.updateResults(results) # Use results from invokeSMCLI\n\n else:\n # Unexpected error. Message already sent.\n rh.updateResults(results)\n\n rh.printSysLog(\"Exit changeVM.removeDisk, rc: \" +\n str(rh.results['overallRC']))\n return rh.results['overallRC']", "response": "This function removes a disk from a virtual machine."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef showInvLines(rh):\n\n if rh.subfunction != '':\n rh.printLn(\"N\", \"Usage:\")\n rh.printLn(\"N\", \" python \" + rh.cmdName +\n \" ChangeVM add3390 \")\n rh.printLn(\"N\", \" --mode \" +\n \" --readpw \")\n rh.printLn(\"N\", \" --writepw \" +\n \"--multipw --filesystem \")\n rh.printLn(\"N\", \" python \" + rh.cmdName +\n \" ChangeVM add9336 \")\n rh.printLn(\"N\", \" --mode \" +\n \" --readpw \")\n rh.printLn(\"N\", \" --writepw \" +\n \"--multipw --filesystem \")\n rh.printLn(\"N\", \" python \" + rh.cmdName +\n \" ChangeVM aemod --invparms \")\n rh.printLn(\"N\", \" python \" + rh.cmdName +\n \" ChangeVM IPL --loadparms \")\n rh.printLn(\"N\", \" --parms \")\n rh.printLn(\"N\", \" python \" + rh.cmdName +\n \" ChangeVM loaddev --boot --addr \")\n rh.printLn(\"N\", \" --wwpn --lun \" +\n \"--scpdatatype --scpdata \")\n rh.printLn(\"N\", \" python \" + rh.cmdName +\n \" ChangeVM punchFile --class \")\n rh.printLn(\"N\", \" python \" + rh.cmdName +\n \" ChangeVM purgeRDR\")\n rh.printLn(\"N\", \" python \" + rh.cmdName +\n \" ChangeVM removedisk \")\n rh.printLn(\"N\", \" python \" + rh.cmdName +\n \" ChangeVM removeIPL \")\n rh.printLn(\"N\", \" python \" + rh.cmdName + \" ChangeVM help\")\n rh.printLn(\"N\", \" python \" + rh.cmdName +\n \" ChangeVM 
version\")\n return", "response": "Prints out the help output related to the command synopsis\n "} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef showOperandLines(rh):\n\n if rh.function == 'HELP':\n rh.printLn(\"N\", \" For the ChangeVM function:\")\n else:\n rh.printLn(\"N\", \"Sub-Functions(s):\")\n rh.printLn(\"N\", \" add3390 - Add a 3390 (ECKD) disk \" +\n \"to a virtual machine's directory\")\n rh.printLn(\"N\", \" entry.\")\n rh.printLn(\"N\", \" add9336 - Add a 9336 (FBA) disk \" +\n \"to virtual machine's directory\")\n rh.printLn(\"N\", \" entry.\")\n rh.printLn(\"N\", \" aemod - Sends an activation \" +\n \"engine script to the managed virtual\")\n rh.printLn(\"N\", \" machine.\")\n rh.printLn(\"N\", \" help - Displays this help \" +\n \"information.\")\n rh.printLn(\"N\", \" ipl - Sets the IPL statement in \" +\n \"the virtual machine's\")\n rh.printLn(\"N\", \" directory entry.\")\n rh.printLn(\"N\", \" loaddev - Sets the LOADDEV statement \" +\n \"in the virtual machine's\")\n rh.printLn(\"N\", \" directory entry.\")\n rh.printLn(\"N\", \" punchfile - Punch a file to a virtual \" +\n \"reader of the specified\")\n rh.printLn(\"N\", \" virtual machine.\")\n rh.printLn(\"N\", \" purgerdr - Purges the reader \" +\n \"belonging to the virtual machine.\")\n rh.printLn(\"N\", \" removedisk - \" +\n \"Remove an mdisk from a virtual machine.\")\n rh.printLn(\"N\", \" removeIPL - \" +\n \"Remove an IPL from a virtual machine's directory entry.\")\n rh.printLn(\"N\", \" version - \" +\n \"show the version of the power function\")\n if rh.subfunction != '':\n rh.printLn(\"N\", \"Operand(s):\")\n rh.printLn(\"N\", \" -addr - \" +\n \"Specifies the logical block address of the\")\n rh.printLn(\"N\", \" \" +\n \"boot record.\")\n rh.printLn(\"N\", \" - \" +\n \"Specifies the virtual address or NSS name\")\n rh.printLn(\"N\", \" to IPL.\")\n rh.printLn(\"N\", \" - \" +\n \"aeScript is the fully 
qualified file\")\n rh.printLn(\"N\", \" \" +\n \"specification of the script to be sent\")\n rh.printLn(\"N\", \" --boot - \" +\n \"Boot program number\")\n rh.printLn(\"N\", \" --class - \" +\n \"The class is optional and specifies the spool\")\n rh.printLn(\"N\", \" \" +\n \"class for the reader file.\")\n rh.printLn(\"N\", \" - \" +\n \"Specifies the directory manager disk pool to\")\n rh.printLn(\"N\", \" \" +\n \"use to obtain the disk.\")\n rh.printLn(\"N\", \" - \" +\n \"Specifies the size of the ECKD minidisk. \")\n rh.printLn(\"N\", \" - \" +\n \"Specifies the size of the FBA type minidisk.\")\n rh.printLn(\"N\", \" - \" +\n \"File to punch to the target system.\")\n rh.printLn(\"N\", \" --filesystem - \" +\n \"Specifies type of filesystem to be created on\")\n rh.printLn(\"N\", \" the minidisk.\")\n rh.printLn(\"N\", \" --invparms - \" +\n \"Specifies the parameters to be specified in the\")\n rh.printLn(\"N\", \" \" +\n \"invocation script to call the aeScript.\")\n rh.printLn(\"N\", \" --loadparms - \" +\n \"Specifies a 1 to 8-character load parameter that\")\n rh.printLn(\"N\", \" \" +\n \"is used by the IPL'd system.\")\n rh.printLn(\"N\", \" --lun - \" +\n \"One to eight-byte logical unit number of the\")\n rh.printLn(\"N\", \" FCP-I/O device.\")\n rh.printLn(\"N\", \" --mode - \" +\n \"Specifies the access mode for the minidisk.\")\n rh.printLn(\"N\", \" --multipw - \" +\n \"Specifies the password that allows sharing the\")\n rh.printLn(\"N\", \" \" +\n \"minidisk in multiple-write mode.\")\n rh.printLn(\"N\", \" --parms - \" +\n \"Specifies a parameter string to pass to the\")\n rh.printLn(\"N\", \" \" +\n \"virtual machine in general-purpose registers at\")\n rh.printLn(\"N\", \" \" +\n \"user's the completion of the IPL.\")\n rh.printLn(\"N\", \" --readpw - \" +\n \"Specifies the password that allows sharing the\")\n rh.printLn(\"N\", \" \" +\n \"minidisk in read mode.\")\n rh.printLn(\"N\", \" --scpdata - \" +\n \"Provides the SCP data 
information.\")\n rh.printLn(\"N\", \" --scpdatatype - \" +\n \"Specifies whether the scp data is in hex,\")\n rh.printLn(\"N\", \" \" +\n \"EBCDIC, or should be deleted.\")\n rh.printLn(\"N\", \" - \" +\n \"Userid of the target virtual machine.\")\n rh.printLn(\"N\", \" - \" +\n \"Virtual address of the device.\")\n rh.printLn(\"N\", \" --writepw - \" +\n \"Specifies is the password that allows sharing\")\n rh.printLn(\"N\", \" \" +\n \"the minidisk in write mode.\")\n rh.printLn(\"N\", \" --wwpn - \" +\n \"The world-wide port number.\")\n\n return", "response": "Prints the output of the operands for the current virtual machine."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nsending a request to the SDK API.", "response": "def send_request(self, api_name, *api_args, **api_kwargs):\n \"\"\"Refer to SDK API documentation.\n\n :param api_name: SDK API name\n :param *api_args: SDK API sequence parameters\n :param **api_kwargs: SDK API keyword parameters\n \"\"\"\n return self.conn.request(api_name, *api_args, **api_kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef getDiskPoolNames(rh):\n\n rh.printSysLog(\"Enter getHost.getDiskPoolNames\")\n\n parms = [\"-q\", \"1\", \"-e\", \"3\", \"-T\", \"dummy\"]\n results = invokeSMCLI(rh, \"Image_Volume_Space_Query_DM\", parms)\n if results['overallRC'] == 0:\n for line in results['response'].splitlines():\n poolName = line.partition(' ')[0]\n rh.printLn(\"N\", poolName)\n else:\n # SMAPI API failed.\n rh.printLn(\"ES\", results['response'])\n rh.updateResults(results) # Use results from invokeSMCLI\n\n rh.printSysLog(\"Exit getHost.getDiskPoolNames, rc: \" +\n str(rh.results['overallRC']))\n return rh.results['overallRC']", "response": "This function returns the list of disk pools known to the directory manager."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nobtains disk pool space 
information for all or a specific disk pool. Input: Request Handle with the following properties: function - 'GETHOST' subfunction - 'DISKPOOLSPACE' parms['poolName'] - Name of the disk pool. Optional, if not present then information for all disk pools is obtained. Output: Request Handle updated with the results. Return code - 0: ok, non-zero: error", "response": "def getDiskPoolSpace(rh):\n \"\"\"\n Obtain disk pool space information for all or a specific disk pool.\n\n Input:\n Request Handle with the following properties:\n function - 'GETHOST'\n subfunction - 'DISKPOOLSPACE'\n parms['poolName'] - Name of the disk pool. Optional,\n if not present then information for all\n disk pools is obtained.\n\n Output:\n Request Handle updated with the results.\n Return code - 0: ok, non-zero: error\n \"\"\"\n\n rh.printSysLog(\"Enter getHost.getDiskPoolSpace\")\n\n results = {'overallRC': 0}\n\n if 'poolName' not in rh.parms:\n poolNames = [\"*\"]\n else:\n if isinstance(rh.parms['poolName'], list):\n poolNames = rh.parms['poolName']\n else:\n poolNames = [rh.parms['poolName']]\n\n if results['overallRC'] == 0:\n # Loop thru each pool getting total. 
Do it for query 2 & 3\n totals = {}\n for qType in [\"2\", \"3\"]:\n parms = [\n \"-q\", qType,\n \"-e\", \"3\",\n \"-T\", \"DUMMY\",\n \"-n\", \" \".join(poolNames)]\n\n results = invokeSMCLI(rh, \"Image_Volume_Space_Query_DM\", parms)\n if results['overallRC'] == 0:\n for line in results['response'].splitlines():\n parts = line.split()\n if len(parts) == 9:\n poolName = parts[7]\n else:\n poolName = parts[4]\n if poolName not in totals:\n totals[poolName] = {\"2\": 0., \"3\": 0.}\n\n if parts[1][:4] == \"3390\":\n totals[poolName][qType] += int(parts[3]) * 737280\n elif parts[1][:4] == \"9336\":\n totals[poolName][qType] += int(parts[3]) * 512\n else:\n # SMAPI API failed.\n rh.printLn(\"ES\", results['response'])\n rh.updateResults(results) # Use results from invokeSMCLI\n break\n\n if results['overallRC'] == 0:\n if len(totals) == 0:\n # No pool information found.\n msg = msgs.msg['0402'][1] % (modId, \" \".join(poolNames))\n rh.printLn(\"ES\", msg)\n rh.updateResults(msgs.msg['0402'][0])\n else:\n # Produce a summary for each pool\n for poolName in sorted(totals):\n total = totals[poolName][\"2\"] + totals[poolName][\"3\"]\n rh.printLn(\"N\", poolName + \" Total: \" +\n generalUtils.cvtToMag(rh, total))\n rh.printLn(\"N\", poolName + \" Used: \" +\n generalUtils.cvtToMag(rh, totals[poolName][\"3\"]))\n rh.printLn(\"N\", poolName + \" Free: \" +\n generalUtils.cvtToMag(rh, totals[poolName][\"2\"]))\n\n rh.printSysLog(\"Exit getHost.getDiskPoolSpace, rc: \" +\n str(rh.results['overallRC']))\n return rh.results['overallRC']"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nlists the FCP device channels that are active, free, or offline. Input: Request Handle with the following properties: function - 'GETHOST' subfunction - 'FCPDEVICES' Output: Request Handle updated with the results. 
Return code - 0: ok, non-zero: error", "response": "def getFcpDevices(rh):\n \"\"\"\n Lists the FCP device channels that are active, free, or offline.\n\n Input:\n Request Handle with the following properties:\n function - 'GETHOST'\n subfunction - 'FCPDEVICES'\n\n Output:\n Request Handle updated with the results.\n Return code - 0: ok, non-zero: error\n \"\"\"\n\n rh.printSysLog(\"Enter getHost.getFcpDevices\")\n\n parms = [\"-T\", \"dummy\"]\n results = invokeSMCLI(rh, \"System_WWPN_Query\", parms)\n if results['overallRC'] == 0:\n rh.printLn(\"N\", results['response'])\n else:\n # SMAPI API failed.\n rh.printLn(\"ES\", results['response'])\n rh.updateResults(results) # Use results from invokeSMCLI\n\n rh.printSysLog(\"Exit getHost.getFcpDevices, rc: \" +\n str(rh.results['overallRC']))\n return rh.results['overallRC']"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef getGeneralInfo(rh):\n\n rh.printSysLog(\"Enter getHost.getGeneralInfo\")\n\n # Get host using VMCP\n rh.results['overallRC'] = 0\n cmd = [\"sudo\", \"/sbin/vmcp\", \"query userid\"]\n strCmd = ' '.join(cmd)\n rh.printSysLog(\"Invoking: \" + strCmd)\n try:\n host = subprocess.check_output(\n cmd,\n close_fds=True,\n stderr=subprocess.STDOUT)\n host = bytes.decode(host)\n userid = host.split()[0]\n host = host.split()[2]\n except subprocess.CalledProcessError as e:\n msg = msgs.msg['0405'][1] % (modId, \"Hypervisor Name\",\n strCmd, e.output)\n rh.printLn(\"ES\", msg)\n rh.updateResults(msgs.msg['0405'][0])\n host = \"no info\"\n except Exception as e:\n # All other exceptions.\n rh.printLn(\"ES\", msgs.msg['0421'][1] % (modId, strCmd,\n type(e).__name__, str(e)))\n rh.updateResults(msgs.msg['0421'][0])\n host = \"no info\"\n\n # Get a bunch of info from /proc/sysinfo\n lparCpuTotal = \"no info\"\n lparCpuUsed = \"no info\"\n cecModel = \"no info\"\n cecVendor = \"no info\"\n hvInfo = \"no info\"\n with open('/proc/sysinfo', 'r') as 
myFile:\n for num, line in enumerate(myFile, 1):\n # Get total physical CPU in this LPAR\n if \"LPAR CPUs Total\" in line:\n lparCpuTotal = line.split()[3]\n # Get used physical CPU in this LPAR\n if \"LPAR CPUs Configured\" in line:\n lparCpuUsed = line.split()[3]\n # Get CEC model\n if \"Type:\" in line:\n cecModel = line.split()[1]\n # Get vendor of CEC\n if \"Manufacturer:\" in line:\n cecVendor = line.split()[1]\n # Get hypervisor type and version\n if \"VM00 Control Program\" in line:\n hvInfo = line.split()[3] + \" \" + line.split()[4]\n if lparCpuTotal == \"no info\":\n msg = msgs.msg['0405'][1] % (modId, \"LPAR CPUs Total\",\n \"cat /proc/sysinfo\", \"not found\")\n rh.printLn(\"ES\", msg)\n rh.updateResults(msgs.msg['0405'][0])\n if lparCpuUsed == \"no info\":\n msg = msgs.msg['0405'][1] % (modId, \"LPAR CPUs Configured\",\n \"cat /proc/sysinfo\", \"not found\")\n rh.printLn(\"ES\", msg)\n rh.updateResults(msgs.msg['0405'][0])\n if cecModel == \"no info\":\n msg = msgs.msg['0405'][1] % (modId, \"Type:\",\n \"cat /proc/sysinfo\", \"not found\")\n rh.printLn(\"ES\", msg)\n rh.updateResults(msgs.msg['0405'][0])\n if cecVendor == \"no info\":\n msg = msgs.msg['0405'][1] % (modId, \"Manufacturer:\",\n \"cat /proc/sysinfo\", \"not found\")\n rh.printLn(\"ES\", msg)\n rh.updateResults(msgs.msg['0405'][0])\n if hvInfo == \"no info\":\n msg = msgs.msg['0405'][1] % (modId, \"VM00 Control Program\",\n \"cat /proc/sysinfo\", \"not found\")\n rh.printLn(\"ES\", msg)\n rh.updateResults(msgs.msg['0405'][0])\n\n # Get processor architecture\n arch = str(os.uname()[4])\n\n # Get LPAR memory total & offline\n parm = [\"-T\", \"dummy\", \"-k\", \"STORAGE=\"]\n\n lparMemTotal = \"no info\"\n lparMemStandby = \"no info\"\n results = invokeSMCLI(rh, \"System_Information_Query\", parm)\n if results['overallRC'] == 0:\n for line in results['response'].splitlines():\n if \"STORAGE=\" in line:\n lparMemOnline = line.split()[0]\n lparMemStandby = line.split()[4]\n lparMemTotal = 
lparMemOnline.split(\"=\")[2]\n lparMemStandby = lparMemStandby.split(\"=\")[1]\n else:\n # SMAPI API failed, so we put out messages\n # 300 and 405 for consistency\n rh.printLn(\"ES\", results['response'])\n rh.updateResults(results) # Use results from invokeSMCLI\n msg = msgs.msg['0405'][1] % (modId, \"LPAR memory\",\n \"(see message 300)\", results['response'])\n rh.printLn(\"ES\", msg)\n\n # Get LPAR memory in use\n parm = [\"-T\", \"dummy\", \"-k\", \"detailed_cpu=show=no\"]\n\n lparMemUsed = \"no info\"\n results = invokeSMCLI(rh, \"System_Performance_Information_Query\",\n parm)\n if results['overallRC'] == 0:\n for line in results['response'].splitlines():\n if \"MEMORY_IN_USE=\" in line:\n lparMemUsed = line.split(\"=\")[1]\n lparMemUsed = generalUtils.getSizeFromPage(rh, lparMemUsed)\n else:\n # SMAPI API failed, so we put out messages\n # 300 and 405 for consistency\n rh.printLn(\"ES\", results['response'])\n rh.updateResults(results) # Use results from invokeSMCLI\n msg = msgs.msg['0405'][1] % (modId, \"LPAR memory in use\",\n \"(see message 300)\", results['response'])\n rh.printLn(\"ES\", msg)\n\n # Get IPL Time\n ipl = \"\"\n cmd = [\"sudo\", \"/sbin/vmcp\", \"query cplevel\"]\n strCmd = ' '.join(cmd)\n rh.printSysLog(\"Invoking: \" + strCmd)\n try:\n ipl = subprocess.check_output(\n cmd,\n close_fds=True,\n stderr=subprocess.STDOUT)\n ipl = bytes.decode(ipl).split(\"\\n\")[2]\n except subprocess.CalledProcessError as e:\n msg = msgs.msg['0405'][1] % (modId, \"IPL Time\",\n strCmd, e.output)\n rh.printLn(\"ES\", msg)\n rh.updateResults(msgs.msg['0405'][0])\n except Exception as e:\n # All other exceptions.\n rh.printLn(\"ES\", msgs.msg['0421'][1] % (modId, strCmd,\n type(e).__name__, str(e)))\n rh.updateResults(msgs.msg['0421'][0])\n\n # Create output string\n outstr = \"ZCC USERID: \" + userid\n outstr += \"\\nz/VM Host: \" + host\n outstr += \"\\nArchitecture: \" + arch\n outstr += \"\\nCEC Vendor: \" + cecVendor\n outstr += \"\\nCEC Model: \" + 
cecModel\n outstr += \"\\nHypervisor OS: \" + hvInfo\n outstr += \"\\nHypervisor Name: \" + host\n outstr += \"\\nLPAR CPU Total: \" + lparCpuTotal\n outstr += \"\\nLPAR CPU Used: \" + lparCpuUsed\n outstr += \"\\nLPAR Memory Total: \" + lparMemTotal\n outstr += \"\\nLPAR Memory Offline: \" + lparMemStandby\n outstr += \"\\nLPAR Memory Used: \" + lparMemUsed\n outstr += \"\\nIPL Time: \" + ipl\n\n rh.printLn(\"N\", outstr)\n rh.printSysLog(\"Exit getHost.getGeneralInfo, rc: \" +\n str(rh.results['overallRC']))\n return rh.results['overallRC']", "response": "This function returns general information about the host."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef parseCmdline(rh):\n\n rh.printSysLog(\"Enter getHost.parseCmdline\")\n\n rh.userid = ''\n\n if rh.totalParms >= 2:\n rh.subfunction = rh.request[1].upper()\n\n # Verify the subfunction is valid.\n if rh.subfunction not in subfuncHandler:\n # Subfunction is missing.\n subList = ', '.join(sorted(subfuncHandler.keys()))\n msg = msgs.msg['0011'][1] % (modId, subList)\n rh.printLn(\"ES\", msg)\n rh.updateResults(msgs.msg['0011'][0])\n\n # Parse the rest of the command line.\n if rh.results['overallRC'] == 0:\n rh.argPos = 2 # Begin Parsing at 3rd operand\n generalUtils.parseCmdline(rh, posOpsList, keyOpsList)\n\n rh.printSysLog(\"Exit getHost.parseCmdLine, rc: \" +\n str(rh.results['overallRC']))\n return rh.results['overallRC']", "response": "Parse the command line and return the result."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nproduce help output related to command synopsis", "response": "def showInvLines(rh):\n \"\"\"\n Produce help output related to command synopsis\n\n Input:\n Request Handle\n \"\"\"\n\n if rh.subfunction != '':\n rh.printLn(\"N\", \"Usage:\")\n rh.printLn(\"N\", \" python \" + rh.cmdName + \" GetHost \" +\n \"diskpoolnames\")\n rh.printLn(\"N\", \" python \" + rh.cmdName + \" 
GetHost \" +\n \"diskpoolspace \")\n rh.printLn(\"N\", \" python \" + rh.cmdName + \" GetHost fcpdevices\")\n rh.printLn(\"N\", \" python \" + rh.cmdName + \" GetHost general\")\n rh.printLn(\"N\", \" python \" + rh.cmdName + \" GetHost help\")\n rh.printLn(\"N\", \" python \" + rh.cmdName + \" GetHost version\")\n return"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef showOperandLines(rh):\n\n if rh.function == 'HELP':\n rh.printLn(\"N\", \" For the GetHost function:\")\n else:\n rh.printLn(\"N\", \"Sub-Functions(s):\")\n rh.printLn(\"N\", \" diskpoolnames - \" +\n \"Returns the names of the directory manager disk pools.\")\n rh.printLn(\"N\", \" diskpoolspace - \" +\n \"Returns disk pool size information.\")\n rh.printLn(\"N\", \" fcpdevices - \" +\n \"Lists the FCP device channels that are active, free, or\")\n rh.printLn(\"N\", \" offline.\")\n rh.printLn(\"N\", \" general - \" +\n \"Returns the general information related to the z/VM\")\n rh.printLn(\"N\", \" hypervisor environment.\")\n rh.printLn(\"N\", \" help - Returns this help information.\")\n rh.printLn(\"N\", \" version - Show the version of this function\")\n if rh.subfunction != '':\n rh.printLn(\"N\", \"Operand(s):\")\n rh.printLn(\"N\", \" - Name of the disk pool.\")\n\n return", "response": "Prints the output related to operands."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ndeletes a virtual machine from the user directory. Input: Request Handle with the following properties: function - 'DELETEVM' subfunction - 'DIRECTORY' userid - userid of the virtual machine to be deleted. Output: Request Handle updated with the results. 
Return code - 0: ok, non-zero: error", "response": "def deleteMachine(rh):\n \"\"\"\n Delete a virtual machine from the user directory.\n\n Input:\n Request Handle with the following properties:\n function - 'DELETEVM'\n subfunction - 'DIRECTORY'\n userid - userid of the virtual machine to be deleted.\n\n Output:\n Request Handle updated with the results.\n Return code - 0: ok, non-zero: error\n \"\"\"\n\n rh.printSysLog(\"Enter deleteVM.deleteMachine\")\n\n results = {'overallRC': 0, 'rc': 0, 'rs': 0}\n\n # Is image logged on ?\n state = 'on' # Assume 'on'.\n results = isLoggedOn(rh, rh.userid)\n if results['overallRC'] != 0:\n # Cannot determine the log on/off state.\n # Message already included. Act as if it is 'on'.\n pass\n elif results['rs'] == 0:\n # State is powered on.\n pass\n else:\n state = 'off'\n # Reset values for rest of subfunction\n results['overallRC'] = 0\n results['rc'] = 0\n results['rs'] = 0\n\n if state == 'on':\n parms = [\"-T\", rh.userid, \"-f IMMED\"]\n results = invokeSMCLI(rh, \"Image_Deactivate\", parms)\n if results['overallRC'] == 0:\n pass\n elif (results['overallRC'] == 8 and results['rc'] == 200 and\n (results['rs'] == 12 or results['rs'] == 16)):\n # Tolerable error. 
Machine is already in or going into the state\n # that we want it to enter.\n rh.updateResults({}, reset=1)\n else:\n # SMAPI API failed.\n rh.printLn(\"ES\", results['response'])\n rh.updateResults(results) # Use results returned by invokeSMCLI\n\n # Clean up the reader before delete\n if results['overallRC'] == 0:\n result = purgeReader(rh)\n if result['overallRC'] != 0:\n # Tolerable the purge failure error\n rh.updateResults({}, reset=1)\n\n if results['overallRC'] == 0:\n parms = [\"-T\", rh.userid, \"-e\", \"0\"]\n results = invokeSMCLI(rh, \"Image_Delete_DM\", parms)\n if results['overallRC'] != 0:\n # SMAPI API failed.\n rh.printLn(\"ES\", results['response'])\n rh.updateResults(results) # Use results returned by invokeSMCLI\n\n rh.printSysLog(\"Exit deleteVM.deleteMachine, rc: \" +\n str(rh.results['overallRC']))\n return rh.results['overallRC']"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef activate(rh):\n rh.printSysLog(\"Enter powerVM.activate, userid: \" + rh.userid)\n\n parms = [\"-T\", rh.userid]\n smcliResults = invokeSMCLI(rh, \"Image_Activate\", parms)\n if smcliResults['overallRC'] == 0:\n pass\n elif (smcliResults['overallRC'] == 8 and\n smcliResults['rc'] == 200 and smcliResults['rs'] == 8):\n pass # All good. 
No need to change the ReqHandle results.\n else:\n # SMAPI API failed.\n rh.printLn(\"ES\", smcliResults['response'])\n rh.updateResults(smcliResults) # Use results from invokeSMCLI\n\n if rh.results['overallRC'] == 0 and 'maxQueries' in rh.parms:\n # Wait for the system to be in the desired state of:\n # OS is 'up' and reachable or VM is 'on'.\n if rh.parms['desiredState'] == 'up':\n results = waitForOSState(\n rh,\n rh.userid,\n rh.parms['desiredState'],\n maxQueries=rh.parms['maxQueries'],\n sleepSecs=rh.parms['poll'])\n else:\n results = waitForVMState(\n rh,\n rh.userid,\n rh.parms['desiredState'],\n maxQueries=rh.parms['maxQueries'],\n sleepSecs=rh.parms['poll'])\n\n if results['overallRC'] == 0:\n rh.printLn(\"N\", \"%s: %s\" %\n (rh.userid, rh.parms['desiredState']))\n else:\n rh.updateResults(results)\n\n rh.printSysLog(\"Exit powerVM.activate, rc: \" +\n str(rh.results['overallRC']))\n return rh.results['overallRC']", "response": "Activate a virtual machine."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nchecks if a virtual machine is reachable. Input: Request Handle with the following properties: function - 'POWERVM' subfunction - 'ISREACHABLE' userid - userid of the virtual machine Output: Request Handle updated with the results. 
overallRC - 0: determined the status, non-zero: some weird failure while trying to execute a command on the guest via IUCV rc - RC returned from execCmdThruIUCV rs - 0: not reachable, 1: reachable", "response": "def checkIsReachable(rh):\n \"\"\"\n Check if a virtual machine is reachable.\n\n Input:\n Request Handle with the following properties:\n function - 'POWERVM'\n subfunction - 'ISREACHABLE'\n userid - userid of the virtual machine\n\n Output:\n Request Handle updated with the results.\n overallRC - 0: determined the status, non-zero: some weird failure\n while trying to execute a command\n on the guest via IUCV\n rc - RC returned from execCmdThruIUCV\n rs - 0: not reachable, 1: reachable\n \"\"\"\n\n rh.printSysLog(\"Enter powerVM.checkIsReachable, userid: \" +\n rh.userid)\n\n strCmd = \"echo 'ping'\"\n results = execCmdThruIUCV(rh, rh.userid, strCmd)\n\n if results['overallRC'] == 0:\n rh.printLn(\"N\", rh.userid + \": reachable\")\n reachable = 1\n else:\n # A failure from execCmdThruIUCV is acceptable way of determining\n # that the system is unreachable. We won't pass along the\n # error message.\n rh.printLn(\"N\", rh.userid + \": unreachable\")\n reachable = 0\n\n rh.updateResults({\"rs\": reachable})\n rh.printSysLog(\"Exit powerVM.checkIsReachable, rc: 0\")\n return 0"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ndeactivating a virtual machine.", "response": "def deactivate(rh):\n \"\"\"\n Deactivate a virtual machine.\n\n Input:\n Request Handle with the following properties:\n function - 'POWERVM'\n subfunction - 'OFF'\n userid - userid of the virtual machine\n parms['maxQueries'] - Maximum number of queries to issue.\n Optional.\n parms['maxWait'] - Maximum time to wait in seconds. Optional,\n unless 'maxQueries' is specified.\n parms['poll'] - Polling interval in seconds. 
Optional,\n unless 'maxQueries' is specified.\n\n Output:\n Request Handle updated with the results.\n Return code - 0: ok, non-zero: error\n \"\"\"\n\n rh.printSysLog(\"Enter powerVM.deactivate, userid: \" +\n rh.userid)\n\n parms = [\"-T\", rh.userid, \"-f\", \"IMMED\"]\n results = invokeSMCLI(rh, \"Image_Deactivate\", parms)\n if results['overallRC'] == 0:\n pass\n elif (results['overallRC'] == 8 and results['rc'] == 200 and\n (results['rs'] == 12 or results['rs'] == 16)):\n # Tolerable error. Machine is already in or going into the state\n # we want it to enter.\n rh.printLn(\"N\", rh.userid + \": off\")\n rh.updateResults({}, reset=1)\n else:\n # SMAPI API failed.\n rh.printLn(\"ES\", results['response'])\n rh.updateResults(results) # Use results from invokeSMCLI\n\n if results['overallRC'] == 0 and 'maxQueries' in rh.parms:\n results = waitForVMState(\n rh,\n rh.userid,\n 'off',\n maxQueries=rh.parms['maxQueries'],\n sleepSecs=rh.parms['poll'])\n if results['overallRC'] == 0:\n rh.printLn(\"N\", rh.userid + \": off\")\n else:\n rh.updateResults(results)\n\n rh.printSysLog(\"Exit powerVM.deactivate, rc: \" +\n str(rh.results['overallRC']))\n return rh.results['overallRC']"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning the power status of a virtual machine.", "response": "def getStatus(rh):\n \"\"\"\n Get the power (logon/off) status of a virtual machine.\n\n Input:\n Request Handle with the following properties:\n function - 'POWERVM'\n subfunction - 'STATUS'\n userid - userid of the virtual machine\n\n Output:\n Request Handle updated with the results.\n results['overallRC'] - 0: ok, non-zero: error\n if ok:\n results['rc'] - 0: for both on and off cases\n results['rs'] - 0: powered on\n results['rs'] - 1: powered off\n \"\"\"\n\n rh.printSysLog(\"Enter powerVM.getStatus, userid: \" +\n rh.userid)\n\n results = isLoggedOn(rh, rh.userid)\n if results['overallRC'] != 0:\n # Unexpected error\n pass\n elif results['rs'] 
== 0:\n rh.printLn(\"N\", rh.userid + \": on\")\n else:\n rh.printLn(\"N\", rh.userid + \": off\")\n\n rh.updateResults(results)\n\n rh.printSysLog(\"Exit powerVM.getStatus, rc: \" +\n str(rh.results['overallRC']))\n return rh.results['overallRC']"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nparse the command line and return the handle of the current object.", "response": "def parseCmdline(rh):\n \"\"\"\n Parse the request command input.\n\n Input:\n Request Handle\n\n Output:\n Request Handle updated with parsed input.\n Return code - 0: ok, non-zero: error\n \"\"\"\n\n rh.printSysLog(\"Enter powerVM.parseCmdline\")\n\n if rh.totalParms >= 2:\n rh.userid = rh.request[1].upper()\n else:\n # Userid is missing.\n msg = msgs.msg['0010'][1] % modId\n rh.printLn(\"ES\", msg)\n rh.updateResults(msgs.msg['0010'][0])\n rh.printSysLog(\"Exit powerVM.parseCmdLine, rc: \" +\n rh.results['overallRC'])\n return rh.results['overallRC']\n\n if rh.totalParms == 2:\n rh.subfunction = rh.userid\n rh.userid = ''\n\n if rh.totalParms >= 3:\n rh.subfunction = rh.request[2].upper()\n\n # Verify the subfunction is valid.\n if rh.subfunction not in subfuncHandler:\n # Subfunction is missing.\n subList = ', '.join(sorted(subfuncHandler.keys()))\n msg = msgs.msg['0011'][1] % (modId, subList)\n rh.printLn(\"ES\", msg)\n rh.updateResults(msgs.msg['0011'][0])\n\n # Parse the rest of the command line.\n if rh.results['overallRC'] == 0:\n rh.argPos = 3 # Begin Parsing at 4th operand\n generalUtils.parseCmdline(rh, posOpsList, keyOpsList)\n\n waiting = 0\n if rh.results['overallRC'] == 0:\n if rh.subfunction == 'WAIT':\n waiting = 1\n if rh.parms['desiredState'] not in vmOSUpDownStates:\n # Desired state is not: down, off, on or up.\n msg = msgs.msg['0013'][1] % (modId,\n rh.parms['desiredState'], \", \".join(vmOSUpDownStates))\n rh.printLn(\"ES\", msg)\n rh.updateResults(msgs.msg['0013'][0])\n\n if (rh.results['overallRC'] == 0 and 'wait' in rh.parms):\n waiting = 
1\n if 'desiredState' not in rh.parms:\n if rh.subfunction in ['ON', 'RESET', 'REBOOT']:\n rh.parms['desiredState'] = 'up'\n else:\n # OFF and SOFTOFF default to 'off'.\n rh.parms['desiredState'] = 'off'\n\n if rh.results['overallRC'] == 0 and waiting == 1:\n if rh.subfunction == 'ON' or rh.subfunction == 'RESET':\n if ('desiredState' not in rh.parms or\n rh.parms['desiredState'] not in vmOSUpStates):\n # Desired state is not: on or up.\n msg = msgs.msg['0013'][1] % (modId,\n rh.parms['desiredState'], \", \".join(vmOSUpStates))\n rh.printLn(\"ES\", msg)\n rh.updateResults(msgs.msg['0013'][0])\n\n if rh.results['overallRC'] == 0:\n if 'maxWait' not in rh.parms:\n rh.parms['maxWait'] = 300\n if 'poll' not in rh.parms:\n rh.parms['poll'] = 15\n rh.parms['maxQueries'] = (rh.parms['maxWait'] +\n rh.parms['poll'] - 1) / rh.parms['poll']\n # If we had to do some rounding, give a warning\n # out to the command line user that the wait\n # won't be what they expected.\n if rh.parms['maxWait'] % rh.parms['poll'] != 0:\n msg = msgs.msg['0017'][1] % (modId,\n rh.parms['maxWait'], rh.parms['poll'],\n rh.parms['maxQueries'] * rh.parms['poll'],\n rh.parms['maxQueries'])\n rh.printLn(\"W\", msg)\n\n rh.printSysLog(\"Exit powerVM.parseCmdLine, rc: \" +\n str(rh.results['overallRC']))\n return rh.results['overallRC']"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef pause(rh):\n\n rh.printSysLog(\"Enter powerVM.pause, userid: \" + rh.userid)\n\n parms = [\"-T\", rh.userid, \"-k\", \"PAUSE=YES\"]\n results = invokeSMCLI(rh, \"Image_Pause\", parms)\n if results['overallRC'] != 0:\n # SMAPI API failed.\n rh.printLn(\"ES\", results['response'])\n rh.updateResults(results) # Use results from invokeSMCLI\n\n rh.printSysLog(\"Exit powerVM.pause, rc: \" + str(rh.results['overallRC']))\n return rh.results['overallRC']", "response": "Pause a virtual machine."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in 
Python 3 where it\nresets a virtual machine. Input: Request Handle with the following properties: function - 'POWERVM' subfunction - 'RESET' userid - userid of the virtual machine parms['maxQueries'] - Maximum number of queries to issue. Optional. parms['maxWait'] - Maximum time to wait in seconds. Optional, unless 'maxQueries' is specified. parms['poll'] - Polling interval in seconds. Optional, unless 'maxQueries' is specified. Output: Request Handle updated with the results. Return code - 0: ok, non-zero: error", "response": "def reset(rh):\n \"\"\"\n Reset a virtual machine.\n\n Input:\n Request Handle with the following properties:\n function - 'POWERVM'\n subfunction - 'RESET'\n userid - userid of the virtual machine\n parms['maxQueries'] - Maximum number of queries to issue.\n Optional.\n parms['maxWait'] - Maximum time to wait in seconds. Optional,\n unless 'maxQueries' is specified.\n parms['poll'] - Polling interval in seconds. Optional,\n unless 'maxQueries' is specified.\n\n Output:\n Request Handle updated with the results.\n Return code - 0: ok, non-zero: error\n \"\"\"\n\n rh.printSysLog(\"Enter powerVM.reset, userid: \" + rh.userid)\n\n # Log off the user\n parms = [\"-T\", rh.userid]\n results = invokeSMCLI(rh, \"Image_Deactivate\", parms)\n if results['overallRC'] != 0:\n if results['rc'] == 200 and results['rs'] == 12:\n # Tolerated error. 
Machine is already in the desired state.\n results['overallRC'] = 0\n results['rc'] = 0\n results['rs'] = 0\n else:\n # SMAPI API failed.\n rh.printLn(\"ES\", results['response'])\n rh.updateResults(results) # Use results from invokeSMCLI\n\n # Wait for the logoff to complete\n if results['overallRC'] == 0:\n results = waitForVMState(rh, rh.userid, \"off\",\n maxQueries=30, sleepSecs=10)\n\n # Log the user back on\n if results['overallRC'] == 0:\n parms = [\"-T\", rh.userid]\n results = invokeSMCLI(rh, \"Image_Activate\", parms)\n if results['overallRC'] != 0:\n # SMAPI API failed.\n rh.printLn(\"ES\", results['response'])\n rh.updateResults(results) # Use results from invokeSMCLI\n\n if results['overallRC'] == 0 and 'maxQueries' in rh.parms:\n if rh.parms['desiredState'] == 'up':\n results = waitForOSState(\n rh,\n rh.userid,\n rh.parms['desiredState'],\n maxQueries=rh.parms['maxQueries'],\n sleepSecs=rh.parms['poll'])\n else:\n results = waitForVMState(\n rh,\n rh.userid,\n rh.parms['desiredState'],\n maxQueries=rh.parms['maxQueries'],\n sleepSecs=rh.parms['poll'])\n if results['overallRC'] == 0:\n rh.printLn(\"N\", rh.userid + \": \" +\n rh.parms['desiredState'])\n else:\n rh.updateResults(results)\n\n rh.printSysLog(\"Exit powerVM.reset, rc: \" +\n str(rh.results['overallRC']))\n return rh.results['overallRC']"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ndeactivating a virtual machine by first shutting down Linux and then log off the system.", "response": "def softDeactivate(rh):\n \"\"\"\n Deactivate a virtual machine by first shutting down Linux and\n then log it off.\n\n Input:\n Request Handle with the following properties:\n function - 'POWERVM'\n subfunction - 'SOFTOFF'\n userid - userid of the virtual machine\n parms['maxQueries'] - Maximum number of queries to issue.\n Optional.\n parms['maxWait'] - Maximum time to wait in seconds.\n Optional,\n unless 'maxQueries' is specified.\n parms['poll'] - Polling interval in 
seconds. Optional,\n unless 'maxQueries' is specified.\n\n Output:\n Request Handle updated with the results.\n Return code - 0: ok, non-zero: error\n \"\"\"\n\n rh.printSysLog(\"Enter powerVM.softDeactivate, userid: \" +\n rh.userid)\n\n strCmd = \"echo 'ping'\"\n iucvResults = execCmdThruIUCV(rh, rh.userid, strCmd)\n\n if iucvResults['overallRC'] == 0:\n # We could talk to the machine, tell it to shutdown nicely.\n strCmd = \"shutdown -h now\"\n iucvResults = execCmdThruIUCV(rh, rh.userid, strCmd)\n if iucvResults['overallRC'] == 0:\n time.sleep(15)\n else:\n # Shutdown failed. Let CP take down the system\n # after we log the results.\n rh.printSysLog(\"powerVM.softDeactivate \" + rh.userid +\n \" is unreachable. Treating it as already shutdown.\")\n else:\n # Could not ping the machine. Treat it as a success\n # after we log the results.\n rh.printSysLog(\"powerVM.softDeactivate \" + rh.userid +\n \" is unreachable. Treating it as already shutdown.\")\n\n # Tell z/VM to log off the system.\n parms = [\"-T\", rh.userid]\n smcliResults = invokeSMCLI(rh, \"Image_Deactivate\", parms)\n if smcliResults['overallRC'] == 0:\n pass\n elif (smcliResults['overallRC'] == 8 and smcliResults['rc'] == 200 and\n (smcliResults['rs'] == 12 or smcliResults['rs'] == 16)):\n # Tolerable error.\n # Machine is already logged off or is logging off.\n rh.printLn(\"N\", rh.userid + \" is already logged off.\")\n else:\n # SMAPI API failed.\n rh.printLn(\"ES\", smcliResults['response'])\n rh.updateResults(smcliResults) # Use results from invokeSMCLI\n\n if rh.results['overallRC'] == 0 and 'maxQueries' in rh.parms:\n # Wait for the system to log off.\n waitResults = waitForVMState(\n rh,\n rh.userid,\n 'off',\n maxQueries=rh.parms['maxQueries'],\n sleepSecs=rh.parms['poll'])\n if waitResults['overallRC'] == 0:\n rh.printLn(\"N\", \"Userid '\" + rh.userid +\n \"' is in the desired state: off\")\n else:\n rh.updateResults(waitResults)\n\n rh.printSysLog(\"Exit powerVM.softDeactivate, rc: \" 
+\n str(rh.results['overallRC']))\n return rh.results['overallRC']"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef wait(rh):\n\n rh.printSysLog(\"Enter powerVM.wait, userid: \" + rh.userid)\n\n if (rh.parms['desiredState'] == 'off' or\n rh.parms['desiredState'] == 'on'):\n results = waitForVMState(\n rh,\n rh.userid,\n rh.parms['desiredState'],\n maxQueries=rh.parms['maxQueries'],\n sleepSecs=rh.parms['poll'])\n else:\n results = waitForOSState(\n rh,\n rh.userid,\n rh.parms['desiredState'],\n maxQueries=rh.parms['maxQueries'],\n sleepSecs=rh.parms['poll'])\n\n if results['overallRC'] == 0:\n rh.printLn(\"N\", rh.userid + \": \" + rh.parms['desiredState'])\n else:\n rh.updateResults(results)\n\n rh.printSysLog(\"Exit powerVM.wait, rc: \" + str(rh.results['overallRC']))\n return rh.results['overallRC']", "response": "This function waits for the virtual machine to go into the specified state."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nsending API call to the SDK server and return results", "response": "def call(self, func, *api_args, **api_kwargs):\n \"\"\"Send API call to SDK server and return results\"\"\"\n if not isinstance(func, str) or (func == ''):\n msg = ('Invalid input for API name, should be a '\n 'string, type: %s specified.') % type(func)\n return self._construct_api_name_error(msg)\n\n # Create client socket\n try:\n cs = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n except socket.error as err:\n return self._construct_socket_error(1, error=six.text_type(err))\n\n try:\n # Set socket timeout\n cs.settimeout(self.timeout)\n # Connect SDK server\n try:\n cs.connect((self.addr, self.port))\n except socket.error as err:\n return self._construct_socket_error(2, addr=self.addr,\n port=self.port,\n error=six.text_type(err))\n\n # Prepare the data to be sent and switch to bytes if needed\n api_data = json.dumps((func, api_args, api_kwargs))\n api_data = 
api_data.encode()\n\n # Send the API call data to SDK server\n sent = 0\n total_len = len(api_data)\n got_error = False\n try:\n while (sent < total_len):\n this_sent = cs.send(api_data[sent:])\n if this_sent == 0:\n got_error = True\n break\n sent += this_sent\n except socket.error as err:\n return self._construct_socket_error(5,\n error=six.text_type(err))\n\n if got_error or sent != total_len:\n return self._construct_socket_error(3, sent=sent,\n api=api_data)\n\n # Receive data from server\n return_blocks = []\n try:\n while True:\n block = cs.recv(4096)\n if not block:\n break\n block = bytes.decode(block)\n return_blocks.append(block)\n except socket.error as err:\n # When the sdkserver can't handle all the client requests,\n # some client requests would be rejected.\n # Under this case, the client socket can successfully\n # connect/send, but would get exception in recv with error:\n # \"error: [Errno 104] Connection reset by peer\"\n return self._construct_socket_error(6,\n error=six.text_type(err))\n finally:\n # Always close the client socket to avoid too many hanging\n # sockets left.\n cs.close()\n\n # Transform the received stream to standard result form\n # This client assumes that the server would return result in\n # the standard result form, so client just returns the received\n # data\n if return_blocks:\n results = json.loads(''.join(return_blocks))\n else:\n results = self._construct_socket_error(4)\n return results"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef send_results(self, client, addr, results):\n json_results = json.dumps(results)\n json_results = json_results.encode()\n\n sent = 0\n total_len = len(json_results)\n got_error = False\n while (sent < total_len):\n this_sent = client.send(json_results[sent:])\n if this_sent == 0:\n got_error = True\n break\n sent += this_sent\n if got_error or sent != total_len:\n self.log_error(\"(%s:%s) Failed to send back results to client, \"\n \"results: 
%s\" % (addr[0], addr[1], json_results))\n else:\n self.log_debug(\"(%s:%s) Results sent back to client successfully.\"\n % (addr[0], addr[1]))", "response": "send results to a specific client"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nread client request and call target SDK API function and return result.", "response": "def serve_API(self, client, addr):\n \"\"\" Read client request and call target SDK API\"\"\"\n self.log_debug(\"(%s:%s) Handling new request from client.\" %\n (addr[0], addr[1]))\n results = None\n try:\n data = client.recv(4096)\n data = bytes.decode(data)\n # When client failed to send the data or quit before sending the\n # data, server side would receive null data.\n # In such case, server would not send back any info and just\n # terminate this thread.\n if not data:\n self.log_warn(\"(%s:%s) Failed to receive data from client.\" %\n (addr[0], addr[1]))\n return\n api_data = json.loads(data)\n\n # API_data should be in the form [funcname, args_list, kwargs_dict]\n if not isinstance(api_data, list) or len(api_data) != 3:\n msg = (\"(%s:%s) SDK server got wrong input: '%s' from client.\"\n % (addr[0], addr[1], data))\n results = self.construct_internal_error(msg)\n return\n\n # Check called API is supported by SDK\n (func_name, api_args, api_kwargs) = api_data\n self.log_debug(\"(%s:%s) Request func: %s, args: %s, kwargs: %s\" %\n (addr[0], addr[1], func_name, str(api_args),\n str(api_kwargs)))\n try:\n api_func = getattr(self.sdkapi, func_name)\n except AttributeError:\n msg = (\"(%s:%s) SDK server got wrong API name: %s from\"\n \"client.\" % (addr[0], addr[1], func_name))\n results = self.construct_api_name_error(msg)\n return\n\n # invoke target API function\n return_data = api_func(*api_args, **api_kwargs)\n except exception.SDKBaseException as e:\n self.log_error(\"(%s:%s) %s\" % (addr[0], addr[1],\n traceback.format_exc()))\n # get the error info from exception attribute\n # All SDKbaseexception 
should eventually has a\n # results attribute defined which can be used by\n # sdkserver here\n if e.results is None:\n msg = (\"(%s:%s) SDK server got exception without results \"\n \"defined, error: %s\" % (addr[0], addr[1],\n e.format_message()))\n results = self.construct_internal_error(msg)\n else:\n results = {'overallRC': e.results['overallRC'],\n 'modID': e.results['modID'],\n 'rc': e.results['rc'],\n 'rs': e.results['rs'],\n 'errmsg': e.format_message(),\n 'output': ''}\n except Exception as e:\n self.log_error(\"(%s:%s) %s\" % (addr[0], addr[1],\n traceback.format_exc()))\n msg = (\"(%s:%s) SDK server got unexpected exception: \"\n \"%s\" % (addr[0], addr[1], repr(e)))\n results = self.construct_internal_error(msg)\n else:\n if return_data is None:\n return_data = ''\n results = {'overallRC': 0, 'modID': None,\n 'rc': 0, 'rs': 0,\n 'errmsg': '',\n 'output': return_data}\n # Send back the final results\n try:\n if results is not None:\n self.send_results(client, addr, results)\n except Exception as e:\n # This should not happen in normal case.\n # A special case is the server side socket is closed/removed\n # before the send() action.\n self.log_error(\"(%s:%s) %s\" % (addr[0], addr[1], repr(e)))\n finally:\n # Close the connection to make sure the thread socket got\n # closed even when it got unexpected exceptions.\n self.log_debug(\"(%s:%s) Finish handling request, closing \"\n \"socket.\" % (addr[0], addr[1]))\n client.close()"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ndispatches the request to the next handler.", "response": "def dispatch(environ, start_response, mapper):\n \"\"\"Find a matching route for the current request.\n\n :raises: 404(not found) if no match request\n 405(method not allowed) if route exist but\n method not provided.\n \"\"\"\n result = mapper.match(environ=environ)\n if result is None:\n info = environ.get('PATH_INFO', '')\n LOG.debug('The route for %s can not be found', info)\n 
raise webob.exc.HTTPNotFound(\n json_formatter=util.json_error_formatter)\n\n handler = result.pop('action')\n\n environ['wsgiorg.routing_args'] = ((), result)\n return handler(environ, start_response)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn a 405 response when the method is not allowed.", "response": "def handle_not_allowed(environ, start_response):\n \"\"\"Return a 405 response when method is not allowed.\n\n If _methods are in routing_args, send an allow header listing\n the methods that are possible on the provided URL.\n \"\"\"\n _methods = util.wsgi_path_item(environ, '_methods')\n headers = {}\n if _methods:\n headers['allow'] = str(_methods)\n raise webob.exc.HTTPMethodNotAllowed(\n ('The method specified is not allowed for this resource.'),\n headers=headers, json_formatter=util.json_error_formatter)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef make_map(declarations):\n mapper = routes.Mapper()\n for route, methods in ROUTE_LIST:\n allowed_methods = []\n for method, func in methods.items():\n mapper.connect(route, action=func,\n conditions=dict(method=[method]))\n allowed_methods.append(method)\n allowed_methods = ', '.join(allowed_methods)\n mapper.connect(route, action=handle_not_allowed,\n _methods=allowed_methods)\n return mapper", "response": "Process route declarations to create a Route Mapper."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ndeleting the records from the config file for the given fcp.", "response": "def _delete_zfcp_config_records(self, fcp, target_wwpn, target_lun):\n \"\"\"rhel\"\"\"\n device = '0.0.%s' % fcp\n data = {'wwpn': target_wwpn, 'lun': target_lun,\n 'device': device, 'zfcpConf': '/etc/zfcp.conf'}\n delete_records_cmd = ('sed -i -e '\n '\\\"/%(device)s %(wwpn)s %(lun)s/d\\\" '\n '%(zfcpConf)s\\n' % data)\n return delete_records_cmd"} {"SOURCE": "codesearchnet", 
"instruction": "Create a Python 3 function to\nget the source device path for a given fcp and target WWPN and LUN.", "response": "def _get_source_device_path(self, fcp, target_wwpn, target_lun):\n \"\"\"rhel\"\"\"\n device = '0.0.%s' % fcp\n data = {'device': device, 'wwpn': target_wwpn, 'lun': target_lun}\n var_source_device = ('SourceDevice=\"/dev/disk/by-path/ccw-%(device)s-'\n 'zfcp-%(wwpn)s:%(lun)s\"\\n' % data)\n return var_source_device"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _set_sysfs(self, fcp, target_wwpn, target_lun):\n device = '0.0.%s' % fcp\n unit_add = \"echo '%s' > \" % target_lun\n unit_add += \"/sys/bus/ccw/drivers/zfcp/%(device)s/%(wwpn)s/unit_add\\n\"\\\n % {'device': device, 'wwpn': target_wwpn}\n return unit_add", "response": "rhel7 set WWPN and LUN in sysfs"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _get_udev_rules(self, channel_read, channel_write, channel_data):\n sub_str = '%(read)s %%k %(read)s %(write)s %(data)s qeth' % {\n 'read': channel_read,\n 'read': channel_read,\n 'write': channel_write,\n 'data': channel_data}\n rules_str = '# Configure qeth device at'\n rules_str += ' %(read)s/%(write)s/%(data)s\\n' % {\n 'read': channel_read,\n 'write': channel_write,\n 'data': channel_data}\n rules_str += ('ACTION==\\\"add\\\", SUBSYSTEM==\\\"drivers\\\", KERNEL=='\n '\\\"qeth\\\", IMPORT{program}=\\\"collect %s\\\"\\n') % sub_str\n rules_str += ('ACTION==\\\"add\\\", SUBSYSTEM==\\\"ccw\\\", KERNEL==\\\"'\n '%(read)s\\\", IMPORT{program}=\"collect %(channel)s\\\"\\n') % {\n 'read': channel_read, 'channel': sub_str}\n rules_str += ('ACTION==\\\"add\\\", SUBSYSTEM==\\\"ccw\\\", KERNEL==\\\"'\n '%(write)s\\\", IMPORT{program}=\\\"collect %(channel)s\\\"\\n') % {\n 'write': channel_write, 'channel': sub_str}\n rules_str += ('ACTION==\\\"add\\\", SUBSYSTEM==\\\"ccw\\\", KERNEL==\\\"'\n '%(data)s\\\", 
IMPORT{program}=\\\"collect %(channel)s\\\"\\n') % {\n 'data': channel_data, 'channel': sub_str}\n rules_str += ('ACTION==\\\"remove\\\", SUBSYSTEM==\\\"drivers\\\", KERNEL==\\\"'\n 'qeth\\\", IMPORT{program}=\\\"collect --remove %s\\\"\\n') % sub_str\n rules_str += ('ACTION==\\\"remove\\\", SUBSYSTEM==\\\"ccw\\\", KERNEL==\\\"'\n '%(read)s\\\", IMPORT{program}=\\\"collect --remove %(channel)s\\\"\\n'\n ) % {'read': channel_read, 'channel': sub_str}\n rules_str += ('ACTION==\\\"remove\\\", SUBSYSTEM==\\\"ccw\\\", KERNEL==\\\"'\n '%(write)s\\\", IMPORT{program}=\\\"collect --remove %(channel)s\\\"\\n'\n ) % {'write': channel_write, 'channel': sub_str}\n rules_str += ('ACTION==\\\"remove\\\", SUBSYSTEM==\\\"ccw\\\", KERNEL==\\\"'\n '%(data)s\\\", IMPORT{program}=\\\"collect --remove %(channel)s\\\"\\n'\n ) % {'data': channel_data, 'channel': sub_str}\n rules_str += ('TEST==\\\"[ccwgroup/%(read)s]\\\", GOTO=\\\"qeth-%(read)s'\n '-end\\\"\\n') % {'read': channel_read, 'read': channel_read}\n rules_str += ('ACTION==\\\"add\\\", SUBSYSTEM==\\\"ccw\\\", ENV{COLLECT_'\n '%(read)s}==\\\"0\\\", ATTR{[drivers/ccwgroup:qeth]group}=\\\"'\n '%(read)s,%(write)s,%(data)s\\\"\\n') % {\n 'read': channel_read, 'read': channel_read,\n 'write': channel_write, 'data': channel_data}\n rules_str += ('ACTION==\\\"add\\\", SUBSYSTEM==\\\"drivers\\\", KERNEL==\\\"qeth'\n '\\\", ENV{COLLECT_%(read)s}==\\\"0\\\", ATTR{[drivers/'\n 'ccwgroup:qeth]group}=\\\"%(read)s,%(write)s,%(data)s\\\"\\n'\n 'LABEL=\\\"qeth-%(read)s-end\\\"\\n') % {\n 'read': channel_read, 'read': channel_read, 'write': channel_write,\n 'data': channel_data, 'read': channel_read}\n rules_str += ('ACTION==\\\"add\\\", SUBSYSTEM==\\\"ccwgroup\\\", KERNEL=='\n '\\\"%s\\\", ATTR{layer2}=\\\"1\\\"\\n') % channel_read\n rules_str += ('ACTION==\\\"add\\\", SUBSYSTEM==\\\"ccwgroup\\\", KERNEL=='\n '\\\"%s\\\", ATTR{online}=\\\"1\\\"\\n') % channel_read\n return rules_str", "response": "construct udev rules info."} {"SOURCE": 
"codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _delete_vdev_info(self, vdev):\n vdev = vdev.lower()\n rules_file_name = '/etc/udev/rules.d/51-qeth-0.0.%s.rules' % vdev\n cmd = 'rm -f %s\\n' % rules_file_name\n\n address = '0.0.%s' % str(vdev).zfill(4)\n udev_file_name = '/etc/udev/rules.d/70-persistent-net.rules'\n cmd += \"sed -i '/%s/d' %s\\n\" % (address, udev_file_name)\n cmd += \"sed -i '/%s/d' %s\\n\" % (address,\n '/boot/zipl/active_devices.txt')\n return cmd", "response": "handle udev rules file."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _set_zfcp_config_files(self, fcp, target_wwpn, target_lun):\n device = '0.0.%s' % fcp\n # host config\n host_config = '/sbin/zfcp_host_configure %s 1' % device\n # disk config\n disk_config = '/sbin/zfcp_disk_configure ' +\\\n '%(device)s %(wwpn)s %(lun)s 1' %\\\n {'device': device, 'wwpn': target_wwpn,\n 'lun': target_lun}\n create_config = 'touch /etc/udev/rules.d/51-zfcp-%s.rules' % device\n # check if the file already contains the zFCP channel\n check_channel = ('out=`cat \"/etc/udev/rules.d/51-zfcp-%s.rules\" '\n '| egrep -i \"ccw/%s]online\"`\\n' % (device, device))\n check_channel += 'if [[ ! 
$out ]]; then\\n'\n check_channel += (' echo \"ACTION==\\\\\"add\\\\\", SUBSYSTEM==\\\\\"ccw\\\\\", '\n 'KERNEL==\\\\\"%(device)s\\\\\", IMPORT{program}=\\\\\"'\n 'collect %(device)s %%k %(device)s zfcp\\\\\"\"'\n '| tee -a /etc/udev/rules.d/51-zfcp-%(device)s.rules'\n '\\n' % {'device': device})\n check_channel += (' echo \"ACTION==\\\\\"add\\\\\", SUBSYSTEM==\\\\\"drivers\\\\\"'\n ', KERNEL==\\\\\"zfcp\\\\\", IMPORT{program}=\\\\\"'\n 'collect %(device)s %%k %(device)s zfcp\\\\\"\"'\n '| tee -a /etc/udev/rules.d/51-zfcp-%(device)s.rules'\n '\\n' % {'device': device})\n check_channel += (' echo \"ACTION==\\\\\"add\\\\\", '\n 'ENV{COLLECT_%(device)s}==\\\\\"0\\\\\", '\n 'ATTR{[ccw/%(device)s]online}=\\\\\"1\\\\\"\"'\n '| tee -a /etc/udev/rules.d/51-zfcp-%(device)s.rules'\n '\\n' % {'device': device})\n check_channel += 'fi\\n'\n check_channel += ('echo \"ACTION==\\\\\"add\\\\\", KERNEL==\\\\\"rport-*\\\\\", '\n 'ATTR{port_name}==\\\\\"%(wwpn)s\\\\\", '\n 'SUBSYSTEMS==\\\\\"ccw\\\\\", KERNELS==\\\\\"%(device)s\\\\\",'\n 'ATTR{[ccw/%(device)s]%(wwpn)s/unit_add}='\n '\\\\\"%(lun)s\\\\\"\"'\n '| tee -a /etc/udev/rules.d/51-zfcp-%(device)s.rules'\n '\\n' % {'device': device, 'wwpn': target_wwpn,\n 'lun': target_lun})\n return '\\n'.join((host_config,\n 'sleep 2',\n disk_config,\n 'sleep 2',\n create_config,\n check_channel))", "response": "sles set WWPN and LUN in configuration files"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nsets multipath kernel module", "response": "def _set_zfcp_multipath(self):\n \"\"\"sles\"\"\"\n # modprobe DM multipath kernel module\n modprobe = 'modprobe dm_multipath'\n conf_file = '#blacklist {\\n'\n conf_file += '#\\tdevnode \\\\\"*\\\\\"\\n'\n conf_file += '#}\\n'\n conf_cmd = 'echo -e \"%s\" > /etc/multipath.conf' % conf_file\n restart = self._restart_multipath()\n return '\\n'.join((modprobe,\n conf_cmd,\n restart))"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 
function, write the documentation\ndef create_network_configuration_files(self, file_path, guest_networks,\n first, active=False):\n \"\"\"Generate network configuration files for guest vm\n :param list guest_networks: a list of network info for the guest.\n It has one dictionary that contain some of the below keys for\n each network, the format is:\n {'ip_addr': (str) IP address,\n 'dns_addr': (list) dns addresses,\n 'gateway_addr': (str) gateway address,\n 'cidr': (str) cidr format\n 'nic_vdev': (str) VDEV of the nic}\n\n Example for guest_networks:\n [{'ip_addr': '192.168.95.10',\n 'dns_addr': ['9.0.2.1', '9.0.3.1'],\n 'gateway_addr': '192.168.95.1',\n 'cidr': \"192.168.95.0/24\",\n 'nic_vdev': '1000'},\n {'ip_addr': '192.168.96.10',\n 'dns_addr': ['9.0.2.1', '9.0.3.1'],\n 'gateway_addr': '192.168.96.1',\n 'cidr': \"192.168.96.0/24\",\n 'nic_vdev': '1003}]\n \"\"\"\n cfg_files = []\n cmd_strings = ''\n network_config_file_name = self._get_network_file()\n network_cfg_str = 'auto lo\\n'\n network_cfg_str += 'iface lo inet loopback\\n'\n net_enable_cmd = ''\n if first:\n clean_cmd = self._get_clean_command()\n else:\n clean_cmd = ''\n network_cfg_str = ''\n\n for network in guest_networks:\n base_vdev = network['nic_vdev'].lower()\n network_hw_config_fname = self._get_device_filename(base_vdev)\n network_hw_config_str = self._get_network_hw_config_str(base_vdev)\n cfg_files.append((network_hw_config_fname, network_hw_config_str))\n (cfg_str, dns_str) = self._generate_network_configuration(network,\n base_vdev)\n LOG.debug('Network configure file content is: %s', cfg_str)\n network_cfg_str += cfg_str\n if len(dns_str) > 0:\n network_cfg_str += dns_str\n if first:\n cfg_files.append((network_config_file_name, network_cfg_str))\n else:\n cmd_strings = ('echo \"%s\" >>%s\\n' % (network_cfg_str,\n network_config_file_name))\n return cfg_files, cmd_strings, clean_cmd, net_enable_cmd", "response": "Generate the configuration files for the guest VM."} {"SOURCE": 
"codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _delete_vdev_info(self, vdev):\n vdev = vdev.lower()\n network_config_file_name = self._get_network_file()\n device = self._get_device_name(vdev)\n cmd = '\\n'.join((\"num=$(sed -n '/auto %s/=' %s)\" % (device,\n network_config_file_name),\n \"dns=$(awk 'NR==(\\\"\\'$num\\'\\\"+6)&&\"\n \"/dns-nameservers/' %s)\" %\n network_config_file_name,\n \"if [[ -n $dns ]]; then\",\n \" sed -i '/auto %s/,+6d' %s\" % (device,\n network_config_file_name),\n \"else\",\n \" sed -i '/auto %s/,+5d' %s\" % (device,\n network_config_file_name),\n \"fi\"))\n return cmd", "response": "handle vdev related info."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nseparates os and version from os_version.", "response": "def parse_dist(self, os_version):\n \"\"\"Separate os and version from os_version.\n\n Possible return value are only:\n ('rhel', x.y) and ('sles', x.y) where x.y may not be digits\n \"\"\"\n supported = {'rhel': ['rhel', 'redhat', 'red hat'],\n 'sles': ['suse', 'sles'],\n 'ubuntu': ['ubuntu']}\n os_version = os_version.lower()\n for distro, patterns in supported.items():\n for i in patterns:\n if os_version.startswith(i):\n # Not guarrentee the version is digital\n remain = os_version.split(i, 2)[1]\n release = self._parse_release(os_version, distro, remain)\n return distro, release\n\n msg = 'Can not handle os: %s' % os_version\n raise exception.ZVMException(msg=msg)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget status of a VMRelocate request. Input: Request Handle with the following properties: function - 'MIGRATEVM' subfunction - 'STATUS' userid - userid of the virtual machine parms['all'] - If present, set status_target to ALL. parms['incoming'] - If present, set status_target to INCOMING. parms['outgoing'] - If present, set status_target to OUTGOING. 
if parms does not contain 'all', 'incoming' or 'outgoing', the status_target is set to 'USER '. Output: Request Handle updated with the results. Return code - 0: ok, non-zero: error", "response": "def getStatus(rh):\n \"\"\"\n Get status of a VMRelocate request.\n\n Input:\n Request Handle with the following properties:\n function - 'MIGRATEVM'\n subfunction - 'STATUS'\n userid - userid of the virtual machine\n parms['all'] - If present, set status_target to ALL.\n parms['incoming'] - If present, set status_target to INCOMING.\n parms['outgoing'] - If present, set status_target to OUTGOING.\n if parms does not contain 'all', 'incoming' or 'outgoing', the\n status_target is set to 'USER '.\n\n Output:\n Request Handle updated with the results.\n Return code - 0: ok, non-zero: error\n \"\"\"\n\n rh.printSysLog(\"Enter migrateVM.getStatus\")\n\n parms = [\"-T\", rh.userid]\n\n if 'all' in rh.parms:\n parms.extend([\"-k\", \"status_target=ALL\"])\n elif 'incoming' in rh.parms:\n parms.extend([\"-k\", \"status_target=INCOMING\"])\n elif 'outgoing' in rh.parms:\n parms.extend([\"-k\", \"status_target=OUTGOING\"])\n else:\n parms.extend([\"-k\", \"status_target=USER \" + rh.userid + \"\"])\n\n results = invokeSMCLI(rh, \"VMRELOCATE_Status\", parms)\n if results['overallRC'] != 0:\n # SMAPI API failed.\n rh.printLn(\"ES\", results['response'])\n rh.updateResults(results) # Use results from invokeSMCLI\n if results['rc'] == 4 and results['rs'] == 3001:\n # No relocation in progress\n msg = msgs.msg['0419'][1] % (modId, rh.userid)\n rh.printLn(\"ES\", msg)\n rh.updateResults(msgs.msg['0419'][0])\n else:\n rh.printLn(\"N\", results['response'])\n\n rh.printSysLog(\"Exit migrateVM.getStatus, rc: \" +\n str(rh.results['overallRC']))\n return rh.results['overallRC']"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef driveFunction(self):\n\n if self.function == 'HELP':\n # General help for all functions.\n 
self.printLn(\"N\", \"\")\n self.printLn(\"N\", \"Usage:\")\n self.printLn(\"N\", \" python \" + self.cmdName + \" --help\")\n for key in sorted(ReqHandle.funcHandler):\n ReqHandle.funcHandler[key][0](self)\n self.printLn(\"N\", \"\")\n self.printLn(\"N\", \"Operand(s):\")\n for key in sorted(ReqHandle.funcHandler):\n ReqHandle.funcHandler[key][1](self)\n self.printLn(\"N\", \"\")\n self.updateResults({}, reset=1)\n elif self.function == 'VERSION':\n # Version of ReqHandle.\n self.printLn(\"N\", \"Version: \" + version)\n self.updateResults({}, reset=1)\n else:\n # Some type of function/subfunction invocation.\n if self.function in self.funcHandler:\n # Invoke the functions doIt routine to route to the\n # appropriate subfunction.\n self.funcHandler[self.function][3](self)\n else:\n # Unrecognized function\n msg = msgs.msg['0007'][1] % (modId, self.function)\n self.printLn(\"ES\", msg)\n self.updateResults(msgs.msg['0007'][0])\n\n return self.results", "response": "This function is called by the function handler to route to the appropriate subfunction."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef parseCmdline(self, requestData):\n\n self.printSysLog(\"Enter ReqHandle.parseCmdline\")\n\n # Save the request data based on the type of operand.\n if isinstance(requestData, list):\n self.requestString = ' '.join(requestData) # Request as a string\n self.request = requestData # Request as a list\n elif isinstance(requestData, string_types):\n self.requestString = requestData # Request as a string\n self.request = shlex.split(requestData) # Request as a list\n else:\n # Request data type is not supported.\n msg = msgs.msg['0012'][1] % (modId, type(requestData))\n self.printLn(\"ES\", msg)\n self.updateResults(msgs.msg['0012'][0])\n return self.results\n self.totalParms = len(self.request) # Number of parms in the cmd\n\n # Handle the request, parse it or return an error.\n if self.totalParms == 0:\n # Too few 
arguments.\n msg = msgs.msg['0009'][1] % modId\n self.printLn(\"ES\", msg)\n self.updateResults(msgs.msg['0009'][0])\n elif self.totalParms == 1:\n self.function = self.request[0].upper()\n if self.function == 'HELP' or self.function == 'VERSION':\n pass\n else:\n # Function is not HELP or VERSION.\n msg = msgs.msg['0008'][1] % (modId, self.function)\n self.printLn(\"ES\", msg)\n self.updateResults(msgs.msg['0008'][0])\n else:\n # Process based on the function operand.\n self.function = self.request[0].upper()\n if self.request[0] == 'HELP' or self.request[0] == 'VERSION':\n pass\n else:\n # Handle the function related parms by calling the function\n # parser.\n if self.function in ReqHandle.funcHandler:\n self.funcHandler[self.function][2](self)\n else:\n # Unrecognized function\n msg = msgs.msg['0007'][1] % (modId, self.function)\n self.printLn(\"ES\", msg)\n self.updateResults(msgs.msg['0007'][0])\n\n self.printSysLog(\"Exit ReqHandle.parseCmdline, rc: \" +\n str(self.results['overallRC']))\n return self.results", "response": "Parse the command string and return the parsed information."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nprinting a line of output to the log.", "response": "def printLn(self, respType, respString):\n \"\"\"\n Add one or more lines of output to the response list.\n\n Input:\n Response type: One or more characters indicating the type of response.\n E - Error message\n N - Normal message\n S - Output should be logged\n W - Warning message\n \"\"\"\n\n if 'E' in respType:\n respString = '(Error) ' + respString\n if 'W' in respType:\n respString = '(Warning) ' + respString\n if 'S' in respType:\n self.printSysLog(respString)\n self.results['response'] = (self.results['response'] +\n respString.splitlines())\n return"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef printSysLog(self, logString):\n\n if zvmsdklog.LOGGER.getloglevel() <= 
logging.DEBUG:\n # print log only when debug is enabled\n if self.daemon == '':\n self.logger.debug(self.requestId + \": \" + logString)\n else:\n self.daemon.logger.debug(self.requestId + \": \" + logString)\n\n if self.captureLogs is True:\n self.results['logEntries'].append(self.requestId + \": \" +\n logString)\n return", "response": "Print a log string to the logEntries list."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nupdate the results related to this request with the provided dictionary.", "response": "def updateResults(self, newResults, **kwArgs):\n \"\"\"\n Update the results related to this request excluding the 'response'\n and 'logEntries' values.\n We specifically update (if present):\n overallRC, rc, rs, errno.\n\n Input:\n Dictionary containing the results to be updated or an empty\n dictionary if the reset keyword was specified.\n Reset keyword:\n 0 - Not a reset. This is the default if the reset keyword was\n not specified.\n 1 - Reset failure related items in the result dictionary.\n This excludes responses and log entries.\n 2 - Reset all result items in the result dictionary.\n\n Output:\n Request handle is updated with the results.\n \"\"\"\n\n if 'reset' in kwArgs.keys():\n reset = kwArgs['reset']\n else:\n reset = 0\n\n if reset == 0:\n # Not a reset. 
Set the keys from the provided dictionary.\n for key in newResults.keys():\n if key == 'response' or key == 'logEntries':\n continue\n self.results[key] = newResults[key]\n elif reset == 1:\n # Reset all failure related items.\n self.results['overallRC'] = 0\n self.results['rc'] = 0\n self.results['rs'] = 0\n self.results['errno'] = 0\n self.results['strError'] = ''\n elif reset == 2:\n # Reset all results information including any responses and\n # log entries.\n self.results['overallRC'] = 0\n self.results['rc'] = 0\n self.results['rs'] = 0\n self.results['errno'] = 0\n self.results['strError'] = ''\n self.results['logEntries'] = ''\n self.results['response'] = ''\n\n return"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef filter(self, field, operator, value):\n instance = copy(self)\n\n i = len(instance._filters)\n instance._filters.append({\n 'filter[field][%d]' % i: str(field),\n 'filter[operator][%d]' % i: str(operator),\n 'filter[value][%d]' % i: str(value),\n })\n return instance", "response": "Adds a query filter to be applied to the next API list call for this resource."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nsets the number of results that will be retrieved by the request.", "response": "def resultsperpage(self, value):\n \"\"\" Set the number of results that will be retrieved by the request. \n :param value: 'resultsperpage' parameter value for the rest api call\n :type value: str\n\n Take a look at https://apihelp.surveygizmo.com/help/surveyresponse-sub-object\n\n \"\"\"\n\n instance = copy(self)\n\n instance._filters.append({\n 'resultsperpage': value\n })\n return instance"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nsetting the page which will be returned.", "response": "def page(self, value):\n \"\"\" Set the page which will be returned. 
\n :param value: 'page' parameter value for the rest api call\n :type value: str\n\n Take a look at https://apihelp.surveygizmo.com/help/surveyresponse-sub-object\n\n \"\"\"\n\n instance = copy(self)\n\n instance._filters.append({\n 'page': value\n })\n return instance"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef filters(self):\n params = {}\n for _filter in self._filters:\n params.update(_filter)\n\n return params", "response": "Returns a merged dictionary of filters."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef json_error_formatter(body, status, title, environ):\n body = webob.exc.strip_tags(body)\n status_code = int(status.split(None, 1)[0])\n error_dict = {\n 'status': status_code,\n 'title': title,\n 'detail': body\n }\n return {'errors': [error_dict]}", "response": "A json_formatter for webob exceptions."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef call_func(self, req, *args, **kwargs):\n try:\n return super(SdkWsgify, self).call_func(req, *args, **kwargs)\n except webob.exc.HTTPException as exc:\n msg = ('encounter %(error)s error') % {'error': exc}\n LOG.debug(msg)\n exc.json_formatter = json_error_formatter\n code = exc.status_int\n explanation = six.text_type(exc)\n\n fault_data = {\n 'overallRC': 400,\n 'rc': 400,\n 'rs': code,\n 'modID': SDKWSGI_MODID,\n 'output': '',\n 'errmsg': explanation}\n exc.text = six.text_type(json.dumps(fault_data))\n raise exc", "response": "Add json_error_formatter to any webob HTTPExceptions."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _load_GeoTransform(self):\n def load_lon():\n return arange(ds.RasterXSize)*b[1]+b[0]\n\n def load_lat():\n return arange(ds.RasterYSize)*b[5]+b[3]\n ds = self.ds\n b = self.ds.GetGeoTransform() # bbox, interval\n if with_dask:\n lat = 
Array(\n {('lat', 0): (load_lat,)}, 'lat', (self.ds.RasterYSize,),\n shape=(self.ds.RasterYSize,), dtype=float)\n lon = Array(\n {('lon', 0): (load_lon,)}, 'lon', (self.ds.RasterXSize,),\n shape=(self.ds.RasterXSize,), dtype=float)\n else:\n lat = load_lat()\n lon = load_lon()\n return Variable(('lat',), lat), Variable(('lon',), lon)", "response": "Calculate latitude and longitude variable calculated from the\n gdal. Open. GetGeoTransform method"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef psyplot_fname(env_key='PSYPLOTRC', fname='psyplotrc.yml',\n if_exists=True):\n \"\"\"\n Get the location of the config file.\n\n The file location is determined in the following order\n\n - `$PWD/psyplotrc.yml`\n\n - environment variable `PSYPLOTRC` (pointing to the file location or a\n directory containing the file `psyplotrc.yml`)\n\n - `$PSYPLOTCONFIGDIR/psyplot`\n\n - On Linux and osx,\n\n - `$HOME/.config/psyplot/psyplotrc.yml`\n\n - On other platforms,\n\n - `$HOME/.psyplot/psyplotrc.yml` if `$HOME` is defined.\n\n - Lastly, it looks in `$PSYPLOTDATA/psyplotrc.yml` for a\n system-defined copy.\n\n Parameters\n ----------\n env_key: str\n The environment variable that can be used for the configuration\n directory\n fname: str\n The name of the configuration file\n if_exists: bool\n If True, the path is only returned if the file exists\n\n Returns\n -------\n None or str\n None, if no file could be found and `if_exists` is True, else the path\n to the psyplot configuration file\n\n Notes\n -----\n This function is motivated by the :func:`matplotlib.matplotlib_fname`\n function\"\"\"\n cwd = getcwd()\n full_fname = os.path.join(cwd, fname)\n if os.path.exists(full_fname):\n return full_fname\n\n if env_key in os.environ:\n path = os.environ[env_key]\n if os.path.exists(path):\n if os.path.isdir(path):\n full_fname = os.path.join(path, fname)\n if os.path.exists(full_fname):\n return full_fname\n 
else:\n return path\n\n configdir = get_configdir()\n if configdir is not None:\n full_fname = os.path.join(configdir, fname)\n if os.path.exists(full_fname) or not if_exists:\n return full_fname\n\n return None", "response": "Returns the path to the psyplot configuration file."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef validate_path_exists(s):\n if s is None:\n return None\n if os.path.exists(s):\n return s\n else:\n raise ValueError('\"%s\" should be a path but it does not exist' % s)", "response": "Validate that s is a path and return s else False"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef validate_dict(d):\n try:\n return dict(d)\n except TypeError:\n d = validate_path_exists(d)\n try:\n with open(d) as f:\n return dict(yaml.load(f))\n except Exception:\n raise ValueError(\"Could not convert {} to dictionary!\".format(d))", "response": "Validate a dictionary of a node."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nconverting b to a boolean or raise", "response": "def validate_bool_maybe_none(b):\n 'Convert b to a boolean or raise'\n if isinstance(b, six.string_types):\n b = b.lower()\n if b is None or b == 'none':\n return None\n return validate_bool(b)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nconverting a string to a boolean or raise a ValueError", "response": "def validate_bool(b):\n \"\"\"Convert b to a boolean or raise\"\"\"\n if isinstance(b, six.string_types):\n b = b.lower()\n if b in ('t', 'y', 'yes', 'on', 'true', '1', 1, True):\n return True\n elif b in ('f', 'n', 'no', 'off', 'false', '0', 0, False):\n return False\n else:\n raise ValueError('Could not convert \"%s\" to boolean' % b)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef validate_str(s):\n if not 
isinstance(s, six.string_types):\n raise ValueError(\"Did not found string!\")\n return six.text_type(s)", "response": "Validate a string s is a valid\n ."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef validate_stringlist(s):\n if isinstance(s, six.string_types):\n return [six.text_type(v.strip()) for v in s.split(',') if v.strip()]\n else:\n try:\n return list(map(validate_str, s))\n except TypeError as e:\n raise ValueError(e.message)", "response": "Validate a list of strings containing a single element."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nadding further base string to the instance of the n - tuple.", "response": "def add_base_str(self, base_str, pattern='.+', pattern_base=None,\n append=True):\n \"\"\"\n Add further base string to this instance\n\n Parameters\n ----------\n base_str: str or list of str\n Strings that are used as to look for keys to get and set keys in\n the :attr:`base` dictionary. If a string does not contain\n ``'%(key)s'``, it will be appended at the end. ``'%(key)s'`` will\n be replaced by the specific key for getting and setting an item.\n pattern: str\n Default: ``'.+'``. This is the pattern that is inserted for\n ``%(key)s`` in a base string to look for matches (using the\n :mod:`re` module) in the `base` dictionary. The default `pattern`\n matches everything without white spaces.\n pattern_base: str or list or str\n If None, the whatever is given in the `base_str` is used.\n Those strings will be used for generating the final search\n patterns. You can specify this parameter by yourself to avoid the\n misinterpretation of patterns. For example for a `base_str` like\n ``'my.str'`` it is recommended to additionally provide the\n `pattern_base` keyword with ``'my\\.str'``.\n Like for `base_str`, the ``%(key)s`` is appended if not already in\n the string.\n append: bool\n If True, the given `base_str` are appended (i.e. 
it is first\n looked for them in the :attr:`base` dictionary), otherwise they are\n put at the beginning\"\"\"\n base_str = safe_list(base_str)\n pattern_base = safe_list(pattern_base or [])\n for i, s in enumerate(base_str):\n if '%(key)s' not in s:\n base_str[i] += '%(key)s'\n if pattern_base:\n for i, s in enumerate(pattern_base):\n if '%(key)s' not in s:\n pattern_base[i] += '%(key)s'\n else:\n pattern_base = base_str\n self.base_str = base_str + self.base_str\n self.patterns = list(map(lambda s: re.compile(s.replace(\n '%(key)s', '(?P%s)' % pattern)), pattern_base)) + \\\n self.patterns"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef connect(self, key, func):\n key = self._get_depreceated(key)[0]\n if key is not None:\n self._connections[key].append(func)", "response": "Connect a function to the given formatoption\n "} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ndisconnect the connections to the rcParams key.", "response": "def disconnect(self, key=None, func=None):\n \"\"\"Disconnect the connections to the an rcParams key\n\n Parameters\n ----------\n key: str\n The rcParams key. If None, all keys are used\n func: function\n The function that is connected. If None, all functions are\n connected\n \"\"\"\n if key is None:\n for key, connections in self._connections.items():\n for conn in connections[:]:\n if func is None or conn is func:\n connections.remove(conn)\n else:\n connections = self._connections[key]\n for conn in connections[:]:\n if func is None or conn is func:\n connections.remove(conn)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef update_from_defaultParams(self, defaultParams=None,\n plotters=True):\n \"\"\"Update from the a dictionary like the :attr:`defaultParams`\n\n Parameters\n ----------\n defaultParams: dict\n The :attr:`defaultParams` like dictionary. 
If None, the\n :attr:`defaultParams` attribute will be updated\n plotters: bool\n If True, ``'project.plotters'`` will be updated too\"\"\"\n if defaultParams is None:\n defaultParams = self.defaultParams\n self.update({key: val[0] for key, val in defaultParams.items()\n if plotters or key != 'project.plotters'})", "response": "Update the internal state of the object with the default parameters."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef keys(self):\n k = list(dict.keys(self))\n k.sort()\n return k", "response": "Return sorted list of keys."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn a subset of this RcParams dictionary whose keys match pattern Return the subset of this RcParams instance with entries that match pattern", "response": "def find_all(self, pattern):\n \"\"\"\n Return the subset of this RcParams dictionary whose keys match,\n using :func:`re.search`, the given ``pattern``.\n\n Parameters\n ----------\n pattern: str\n pattern as suitable for re.compile\n\n Returns\n -------\n RcParams\n RcParams instance with entries that match the given `pattern`\n\n Notes\n -----\n Changes to the returned dictionary are (different from\n :meth:`find_and_replace` are *not* propagated to the parent RcParams\n dictionary.\n\n See Also\n --------\n find_and_replace\"\"\"\n pattern_re = re.compile(pattern)\n ret = RcParams()\n ret.defaultParams = self.defaultParams\n ret.update((key, value) for key, value in self.items()\n if pattern_re.search(key))\n return ret"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef load_from_file(self, fname=None):\n fname = fname or psyplot_fname()\n if fname and os.path.exists(fname):\n with open(fname) as f:\n d = yaml.load(f)\n self.update(d)\n if (d.get('project.plotters.user') and\n 'project.plotters' in self):\n 
self['project.plotters'].update(d['project.plotters.user'])", "response": "Update rcParams from user - defined settings"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef dump(self, fname=None, overwrite=True, include_keys=None,\n exclude_keys=['project.plotters'], include_descriptions=True,\n **kwargs):\n \"\"\"Dump this instance to a yaml file\n\n Parameters\n ----------\n fname: str or None\n file name to write to. If None, the string that would be written\n to a file is returned\n overwrite: bool\n If True and `fname` already exists, it will be overwritten\n include_keys: None or list of str\n Keys in the dictionary to be included. If None, all keys are\n included\n exclude_keys: list of str\n Keys from the :class:`RcParams` instance to be excluded\n\n Other Parameters\n ----------------\n ``**kwargs``\n Any other parameter for the :func:`yaml.dump` function\n\n Returns\n -------\n str or None\n if fname is ``None``, the string is returned. Otherwise, ``None``\n is returned\n\n Raises\n ------\n IOError\n If `fname` already exists and `overwrite` is False\n\n See Also\n --------\n load_from_file\"\"\"\n if fname is not None and not overwrite and os.path.exists(fname):\n raise IOError(\n '%s already exists! Set overwrite=True to overwrite it!' 
% (\n fname))\n if six.PY2:\n kwargs.setdefault('encoding', 'utf-8')\n d = {key: val for key, val in six.iteritems(self) if (\n include_keys is None or key in include_keys) and\n key not in exclude_keys}\n kwargs['default_flow_style'] = False\n if include_descriptions:\n s = yaml.dump(d, **kwargs)\n desc = self.descriptions\n i = 2\n header = self.HEADER.splitlines() + [\n '', 'Created with python', ''] + sys.version.splitlines() + [\n '', '']\n lines = ['# ' + l for l in header] + s.splitlines()\n for l in lines[2:]:\n key = l.split(':')[0]\n if key in desc:\n lines.insert(i, '# ' + '\\n# '.join(desc[key].splitlines()))\n i += 1\n i += 1\n s = '\\n'.join(lines)\n if fname is None:\n return s\n else:\n with open(fname, 'w') as f:\n f.write(s)\n else:\n if fname is None:\n return yaml.dump(d, **kwargs)\n with open(fname, 'w') as f:\n yaml.dump(d, f, **kwargs)\n return None", "response": "Dump this instance to a yaml file."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _load_plugin_entrypoints(self):\n from pkg_resources import iter_entry_points\n\n def load_plugin(ep):\n if plugins_env == ['no']:\n return False\n elif ep.module_name in exclude_plugins:\n return False\n elif include_plugins and ep.module_name not in include_plugins:\n return False\n return True\n\n self._plugins = self._plugins or []\n\n plugins_env = os.getenv('PSYPLOT_PLUGINS', '').split('::')\n include_plugins = [s[4:] for s in plugins_env if s.startswith('yes:')]\n exclude_plugins = [s[3:] for s in plugins_env if s.startswith('no:')]\n\n logger = logging.getLogger(__name__)\n\n for ep in iter_entry_points(group='psyplot', name='plugin'):\n if not load_plugin(ep):\n logger.debug('Skipping entrypoint %s', ep)\n continue\n self._plugins.append(str(ep))\n logger.debug('Loading entrypoint %s', ep)\n yield ep", "response": "Load the psyplot plugins and return the entry points for the psyplot plugin module"} {"SOURCE": "codesearchnet", 
"instruction": "Write a Python 3 script for\nloading the plotters and defaultParams from the plugins that use the entry point specified by the group.", "response": "def load_plugins(self, raise_error=False):\n \"\"\"\n Load the plotters and defaultParams from the plugins\n\n This method loads the `plotters` attribute and `defaultParams`\n attribute from the plugins that use the entry point specified by\n `group`. Entry points must be objects (or modules) that have a\n `defaultParams` and a `plotters` attribute.\n\n Parameters\n ----------\n raise_error: bool\n If True, an error is raised when multiple plugins define the same\n plotter or rcParams key. Otherwise only a warning is raised\"\"\"\n\n pm_env = os.getenv('PSYPLOT_PLOTMETHODS', '').split('::')\n include_pms = [s[4:] for s in pm_env if s.startswith('yes:')]\n exclude_pms = [s[3:] for s in pm_env if s.startswith('no:')]\n\n logger = logging.getLogger(__name__)\n\n plotters = self['project.plotters']\n def_plots = {'default': list(plotters)}\n defaultParams = self.defaultParams\n def_keys = {'default': defaultParams}\n\n def register_pm(ep, name):\n full_name = '%s:%s' % (ep.module_name, name)\n ret = True\n if pm_env == ['no']:\n ret = False\n elif name in exclude_pms or full_name in exclude_pms:\n ret = False\n elif include_pms and (name not in include_pms and\n full_name not in include_pms):\n ret = False\n if not ret:\n logger.debug('Skipping plot method %s', full_name)\n return ret\n\n for ep in self._load_plugin_entrypoints():\n plugin_mod = ep.load()\n rc = plugin_mod.rcParams\n\n # load the plotters\n plugin_plotters = {\n key: val for key, val in rc.get('project.plotters', {}).items()\n if register_pm(ep, key)}\n already_defined = set(plotters).intersection(plugin_plotters)\n if already_defined:\n msg = (\"Error while loading psyplot plugin %s! 
The \"\n \"following plotters have already been \"\n \"defined\") % ep\n msg += 'and will be overwritten:' if not raise_error else ':'\n msg += '\\n' + '\\n'.join(chain.from_iterable(\n (('%s by %s' % (key, plugin)\n for plugin, keys in def_plots.items() if key in keys)\n for key in already_defined)))\n if raise_error:\n raise ImportError(msg)\n else:\n warn(msg)\n for d in plugin_plotters.values():\n d['plugin'] = ep.module_name\n plotters.update(plugin_plotters)\n def_plots[ep] = list(plugin_plotters)\n\n # load the defaultParams keys\n plugin_defaultParams = rc.defaultParams\n already_defined = set(defaultParams).intersection(\n plugin_defaultParams) - {'project.plotters'}\n if already_defined:\n msg = (\"Error while loading psyplot plugin %s! The \"\n \"following default keys have already been \"\n \"defined:\") % ep\n msg += '\\n' + '\\n'.join(chain.from_iterable(\n (('%s by %s' % (key, plugin)\n for plugin, keys in def_keys.items() if key in keys)\n for key in already_defined)))\n if raise_error:\n raise ImportError(msg)\n else:\n warn(msg)\n update_keys = set(plugin_defaultParams) - {'project.plotters'}\n def_keys[ep] = update_keys\n self.defaultParams.update(\n {key: plugin_defaultParams[key] for key in update_keys})\n\n # load the rcParams (without validation)\n super(RcParams, self).update({key: rc[key] for key in update_keys})\n\n # add the deprecated keys\n self._deprecated_ignore_map.update(rc._deprecated_ignore_map)\n self._deprecated_map.update(rc._deprecated_map)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef deploy(project_name):\n request_log = requestlog.RequestLog\n header_addon = HeaderControl\n fault_wrapper = FaultWrapper\n application = handler.SdkHandler()\n\n # currently we have 3 middleware\n for middleware in (header_addon,\n fault_wrapper,\n request_log,\n ):\n if middleware:\n application = middleware(application)\n\n return application", "response": "Assemble the 
middleware pipeline"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef guest_start(self, userid):\n LOG.info(\"Begin to power on vm %s\", userid)\n self._smtclient.guest_start(userid)\n LOG.info(\"Complete power on vm %s\", userid)", "response": "Power on z / VM instance."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef guest_reboot(self, userid):\n LOG.info(\"Begin to reboot vm %s\", userid)\n self._smtclient.guest_reboot(userid)\n LOG.info(\"Complete reboot vm %s\", userid)", "response": "Reboot a guest vm."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef guest_reset(self, userid):\n LOG.info(\"Begin to reset vm %s\", userid)\n self._smtclient.guest_reset(userid)\n LOG.info(\"Complete reset vm %s\", userid)", "response": "Reset z / VM instance."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef live_migrate_vm(self, userid, destination, parms, action):\n # Check guest state is 'on'\n state = self.get_power_state(userid)\n if state != 'on':\n LOG.error(\"Failed to live migrate guest %s, error: \"\n \"guest is inactive, cann't perform live migrate.\" %\n userid)\n raise exception.SDKConflictError(modID='guest', rs=1,\n userid=userid)\n # Do live migrate\n if action.lower() == 'move':\n LOG.info(\"Moving the specific vm %s\", userid)\n self._smtclient.live_migrate_move(userid, destination, parms)\n LOG.info(\"Complete move vm %s\", userid)\n\n if action.lower() == 'test':\n LOG.info(\"Testing the eligiblity of specific vm %s\", userid)\n self._smtclient.live_migrate_test(userid, destination)", "response": "Move an eligible running z/VM system to another z/VM system within an SSI cluster."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncreates z / VM userid into user 
directory.", "response": "def create_vm(self, userid, cpu, memory, disk_list,\n user_profile, max_cpu, max_mem, ipl_from,\n ipl_param, ipl_loadparam):\n \"\"\"Create z/VM userid into user directory for a z/VM instance.\"\"\"\n LOG.info(\"Creating the user directory for vm %s\", userid)\n\n info = self._smtclient.create_vm(userid, cpu, memory,\n disk_list, user_profile,\n max_cpu, max_mem, ipl_from,\n ipl_param, ipl_loadparam)\n\n # add userid into smapi namelist\n self._smtclient.namelist_add(self._namelist, userid)\n return info"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ndelete z / VM userid for the instance.", "response": "def delete_vm(self, userid):\n \"\"\"Delete z/VM userid for the instance.\"\"\"\n LOG.info(\"Begin to delete vm %s\", userid)\n self._smtclient.delete_vm(userid)\n\n # remove userid from smapi namelist\n self._smtclient.namelist_remove(self._namelist, userid)\n LOG.info(\"Complete delete vm %s\", userid)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nexecuting a command on the guest vm.", "response": "def execute_cmd(self, userid, cmdStr):\n \"\"\"Execute commands on the guest vm.\"\"\"\n LOG.debug(\"executing cmd: %s\", cmdStr)\n return self._smtclient.execute_cmd(userid, cmdStr)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef set_hostname(self, userid, hostname, os_version):\n tmp_path = self._pathutils.get_guest_temp_path(userid)\n if not os.path.exists(tmp_path):\n os.makedirs(tmp_path)\n tmp_file = tmp_path + '/hostname.sh'\n\n lnxdist = self._dist_manager.get_linux_dist(os_version)()\n lines = lnxdist.generate_set_hostname_script(hostname)\n with open(tmp_file, 'w') as f:\n f.writelines(lines)\n\n requestData = \"ChangeVM \" + userid + \" punchfile \" + \\\n tmp_file + \" --class x\"\n LOG.debug(\"Punch script to guest %s to set hostname\" % userid)\n\n try:\n self._smtclient._request(requestData)\n except 
exception.SDKSMTRequestFailed as err:\n msg = (\"Failed to punch set_hostname script to userid '%s'. SMT \"\n \"error: %s\" % (userid, err.format_message()))\n LOG.error(msg)\n raise exception.SDKSMTRequestFailed(err.results, msg)\n finally:\n self._pathutils.clean_temp_folder(tmp_path)", "response": "Punch a script that used to set the hostname of the guest."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef cvtToBlocks(rh, diskSize):\n\n rh.printSysLog(\"Enter generalUtils.cvtToBlocks\")\n blocks = 0\n results = {'overallRC': 0, 'rc': 0, 'rs': 0, 'errno': 0}\n\n blocks = diskSize.strip().upper()\n lastChar = blocks[-1]\n if lastChar == 'G' or lastChar == 'M':\n # Convert the bytes to blocks\n byteSize = blocks[:-1]\n if byteSize == '':\n # The size of the disk is not valid.\n msg = msgs.msg['0200'][1] % (modId, blocks)\n rh.printLn(\"ES\", msg)\n results = msgs.msg['0200'][0]\n else:\n try:\n if lastChar == 'M':\n blocks = (float(byteSize) * 1024 * 1024) / 512\n elif lastChar == 'G':\n blocks = (float(byteSize) * 1024 * 1024 * 1024) / 512\n blocks = str(int(math.ceil(blocks)))\n except Exception:\n # Failed to convert to a number of blocks.\n msg = msgs.msg['0201'][1] % (modId, byteSize)\n rh.printLn(\"ES\", msg)\n results = msgs.msg['0201'][0]\n elif blocks.strip('1234567890'):\n # Size is not an integer size of blocks.\n msg = msgs.msg['0202'][1] % (modId, blocks)\n rh.printLn(\"ES\", msg)\n results = msgs.msg['0202'][0]\n\n rh.printSysLog(\"Exit generalUtils.cvtToBlocks, rc: \" +\n str(results['overallRC']))\n return results, blocks", "response": "Convert a disk storage value to a number of blocks."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nconverting a size value to a number with a magnitude appended.", "response": "def cvtToMag(rh, size):\n \"\"\"\n Convert a size value to a number with a magnitude appended.\n\n Input:\n Request Handle\n Size bytes\n\n Output:\n 
Converted value with a magnitude\n \"\"\"\n\n rh.printSysLog(\"Enter generalUtils.cvtToMag\")\n\n mSize = ''\n size = size / (1024 * 1024)\n\n if size > (1024 * 5):\n # Size is greater than 5G. Using \"G\" magnitude.\n size = size / 1024\n mSize = \"%.1fG\" % size\n else:\n # Size is less than or equal 5G. Using \"M\" magnitude.\n mSize = \"%.1fM\" % size\n\n rh.printSysLog(\"Exit generalUtils.cvtToMag, magSize: \" + mSize)\n return mSize"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef getSizeFromPage(rh, page):\n rh.printSysLog(\"Enter generalUtils.getSizeFromPage\")\n\n bSize = float(page) * 4096\n mSize = cvtToMag(rh, bSize)\n\n rh.printSysLog(\"Exit generalUtils.getSizeFromPage, magSize: \" + mSize)\n return mSize", "response": "This function converts a size value from page to a number with a magnitude appended."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef parseCmdline(rh, posOpsList, keyOpsList):\n\n rh.printSysLog(\"Enter generalUtils.parseCmdline\")\n\n # Handle any positional operands on the line.\n if rh.results['overallRC'] == 0 and rh.subfunction in posOpsList:\n ops = posOpsList[rh.subfunction]\n currOp = 0\n # While we have operands on the command line AND\n # we have more operands in the positional operand list.\n while rh.argPos < rh.totalParms and currOp < len(ops):\n key = ops[currOp][1] # key for rh.parms[]\n opType = ops[currOp][3] # data type\n if opType == 1:\n # Handle an integer data type\n try:\n rh.parms[key] = int(rh.request[rh.argPos])\n except ValueError:\n # keyword is not an integer\n msg = msgs.msg['0001'][1] % (modId, rh.function,\n rh.subfunction, (currOp + 1),\n ops[currOp][0], rh.request[rh.argPos])\n rh.printLn(\"ES\", msg)\n rh.updateResults(msgs.msg['0001'][0])\n break\n else:\n rh.parms[key] = rh.request[rh.argPos]\n currOp += 1\n rh.argPos += 1\n\n if (rh.argPos >= rh.totalParms and currOp < 
len(ops) and\n ops[currOp][2] is True):\n # Check for missing required operands.\n msg = msgs.msg['0002'][1] % (modId, rh.function,\n rh.subfunction, ops[currOp][0], (currOp + 1))\n rh.printLn(\"ES\", msg)\n rh.updateResults(msgs.msg['0002'][0])\n\n # Handle any keyword operands on the line.\n if rh.results['overallRC'] == 0 and rh.subfunction in keyOpsList:\n while rh.argPos < rh.totalParms:\n if rh.request[rh.argPos] in keyOpsList[rh.subfunction]:\n keyword = rh.request[rh.argPos]\n rh.argPos += 1\n ops = keyOpsList[rh.subfunction]\n if keyword in ops:\n key = ops[keyword][0]\n opCnt = ops[keyword][1]\n opType = ops[keyword][2]\n if opCnt == 0:\n # Keyword has no additional value\n rh.parms[key] = True\n else:\n # Keyword has values following it.\n storeIntoArray = False # Assume single word\n if opCnt < 0:\n storeIntoArray = True\n # Property is a list all of the rest of the parms.\n opCnt = rh.totalParms - rh.argPos\n if opCnt == 0:\n # Need at least 1 operand value\n opCnt = 1\n elif opCnt > 1:\n storeIntoArray = True\n if opCnt + rh.argPos > rh.totalParms:\n # keyword is missing its related value operand\n msg = msgs.msg['0003'][1] % (modId, rh.function,\n rh.subfunction, keyword)\n rh.printLn(\"ES\", msg)\n rh.updateResults(msgs.msg['0003'][0])\n break\n\n \"\"\"\n Add the expected value to the property.\n Take into account if there are more than 1.\n \"\"\"\n if storeIntoArray:\n # Initialize the list.\n rh.parms[key] = []\n for i in range(0, opCnt):\n if opType == 1:\n # convert from string to int and save it.\n try:\n if not storeIntoArray:\n rh.parms[key] = (\n int(rh.request[rh.argPos]))\n else:\n rh.parms[key].append(int(\n rh.request[rh.argPos]))\n except ValueError:\n # keyword is not an integer\n msg = (msgs.msg['0004'][1] %\n (modId, rh.function, rh.subfunction,\n keyword, rh.request[rh.argPos]))\n rh.printLn(\"ES\", msg)\n rh.updateResults(msgs.msg['0004'][0])\n break\n else:\n # Value is a string, save it.\n if not storeIntoArray:\n rh.parms[key] 
= rh.request[rh.argPos]\n else:\n rh.parms[key].append(rh.request[rh.argPos])\n rh.argPos += 1\n if rh.results['overallRC'] != 0:\n # Upper loop had an error break from loops.\n break\n else:\n # keyword is not in the subfunction's keyword list\n msg = msgs.msg['0005'][1] % (modId, rh.function,\n rh.subfunction, keyword)\n rh.printLn(\"ES\", msg)\n rh.updateResults(msgs.msg['0005'][0])\n break\n else:\n # Subfunction does not support keywords\n msg = (msgs.msg['0006'][1] % (modId, rh.function,\n rh.subfunction, rh.request[rh.argPos]))\n rh.printLn(\"ES\", msg)\n rh.updateResults(msgs.msg['0006'][0])\n break\n\n rh.printSysLog(\"Exit generalUtils.parseCmdLine, rc: \" +\n str(rh.results['overallRC']))\n return rh.results['overallRC']", "response": "Parse the command line and return the appropriate response code."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ndefaults 52x handler that loops every second until a response is received.", "response": "def default_52xhandler(response, resource, url, params):\n \"\"\"\n Default 52x handler that loops every second until a non 52x response is received.\n :param response: The response of the last executed api request.\n :param resource: The resource of the last executed api request.\n :param url: The url of the last executed api request sans encoded query parameters.\n :param params: The query params of the last executed api request in dictionary format.\n \"\"\"\n time.sleep(1)\n return resource.execute(url, params)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nperforming validation on properties.", "response": "def validate(self):\n \"\"\"\n Perform validation check on properties.\n \"\"\"\n if not self.api_token or not self.api_token_secret:\n raise ImproperlyConfigured(\"'api_token' and 'api_token_secret' are required for authentication.\")\n\n if self.response_type not in [\"json\", \"pson\", \"xml\", \"debug\", None]:\n raise 
ImproperlyConfigured(\"'%s' is an invalid response_type\" % self.response_type)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef execute(self, url, params):\n response = requests.get(url, params=params, **self.config.requests_kwargs)\n\n if 520 <= response.status_code < 530:\n if self.config.handler52x:\n return self.config.handler52x(response, self, url, params)\n\n response.raise_for_status()\n\n if not self.config.response_type:\n return response.json()\n else:\n return response.text", "response": "Executes a call to the API."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncreates the invokeScript shell for the current network.", "response": "def _create_invokeScript(self, network_file_path, commands,\n files_map):\n \"\"\"invokeScript: Configure zLinux os network\n\n invokeScript is included in the network.doscript, it is used to put\n the network configuration file to the directory where it belongs and\n call znetconfig to configure the network\n \"\"\"\n LOG.debug('Creating invokeScript shell in the folder %s'\n % network_file_path)\n invokeScript = \"invokeScript.sh\"\n\n conf = \"#!/bin/bash \\n\"\n command = commands\n for file in files_map:\n target_path = file['target_path']\n source_file = file['source_file']\n # potential risk: whether target_path exist\n command += 'mv ' + source_file + ' ' + target_path + '\\n'\n\n command += 'sleep 2\\n'\n command += '/bin/bash /tmp/znetconfig.sh\\n'\n command += 'rm -rf invokeScript.sh\\n'\n\n scriptfile = os.path.join(network_file_path, invokeScript)\n with open(scriptfile, \"w\") as f:\n f.write(conf)\n f.write(command)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _create_network_doscript(self, network_file_path):\n # Generate the tar package for punch\n LOG.debug('Creating network doscript in the folder %s'\n % network_file_path)\n 
network_doscript = os.path.join(network_file_path, 'network.doscript')\n tar = tarfile.open(network_doscript, \"w\")\n for file in os.listdir(network_file_path):\n file_name = os.path.join(network_file_path, file)\n tar.add(file_name, arcname=file)\n tar.close()\n return network_doscript", "response": "Create the network.doscript file in the folder specified by network_file_path."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef salsa20_8(B, x, src, s_start, dest, d_start):\n\n # Merged blockxor for speed\n for i in xrange(16):\n x[i] = B[i] = B[i] ^ src[s_start + i]\n\n # This is the actual Salsa 20/8: four identical double rounds\n for i in xrange(4):\n R(x, 4, 0,12, 7);R(x, 8, 4, 0, 9);R(x,12, 8, 4,13);R(x, 0,12, 8,18)\n R(x, 9, 5, 1, 7);R(x,13, 9, 5, 9);R(x, 1,13, 9,13);R(x, 5, 1,13,18)\n R(x,14,10, 6, 7);R(x, 2,14,10, 9);R(x, 6, 2,14,13);R(x,10, 6, 2,18)\n R(x, 3,15,11, 7);R(x, 7, 3,15, 9);R(x,11, 7, 3,13);R(x,15,11, 7,18)\n R(x, 1, 0, 3, 7);R(x, 2, 1, 0, 9);R(x, 3, 2, 1,13);R(x, 0, 3, 2,18)\n R(x, 6, 5, 4, 7);R(x, 7, 6, 5, 9);R(x, 4, 7, 6,13);R(x, 5, 4, 7,18)\n R(x,11,10, 9, 7);R(x, 8,11,10, 9);R(x, 9, 8,11,13);R(x,10, 9, 8,18)\n R(x,12,15,14, 7);R(x,13,12,15, 9);R(x,14,13,12,13);R(x,15,14,13,18)\n\n # While we are handling the data, write it to the correct dest.\n # The latter half is still part of salsa20\n for i in xrange(16):\n dest[d_start + i] = B[i] = (x[i] + B[i]) & 0xffffffff", "response": "The Salsa20/8 core: XORs the source block into B, runs four identical double rounds, and writes the result to dest."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef fileChunkIter(file_object, file_chunk_size=65536):\n while True:\n chunk = file_object.read(file_chunk_size)\n if chunk:\n yield chunk\n else:\n break", "response": "Returns an iterator over a file-like object that yields fixed-size chunks of the file."} {"SOURCE": "codesearchnet", "instruction": "Can you create 
a Python 3 function that\nreturns a key derived using the scrypt key-derivation function.", "response": "def scrypt(password, salt, N=SCRYPT_N, r=SCRYPT_r, p=SCRYPT_p, olen=64):\n \"\"\"Returns a key derived using the scrypt key-derivation function\n\n N must be a power of two larger than 1 but no larger than 2 ** 63 (insane)\n r and p must be positive numbers such that r * p < 2 ** 30\n\n The default values are:\n N -- 2**14 (~16k)\n r -- 8\n p -- 1\n\n Memory usage is proportional to N*r. Defaults require about 16 MiB.\n Time taken is proportional to N*p. Defaults take <100ms on a recent x86.\n\n The last one differs from libscrypt defaults, but matches the 'interactive'\n work factor from the original paper. For long term storage where runtime of\n key derivation is not a problem, you could use 16 as in libscrypt or better\n yet increase N if memory is plentiful.\n \"\"\"\n check_args(password, salt, N, r, p, olen)\n\n out = ctypes.create_string_buffer(olen)\n ret = _libscrypt_scrypt(password, len(password), salt, len(salt),\n N, r, p, out, len(out))\n if ret:\n raise ValueError\n\n return out.raw"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nderiving a Modular Crypt Format hash using the scrypt function.", "response": "def scrypt_mcf(password, salt=None, N=SCRYPT_N, r=SCRYPT_r, p=SCRYPT_p,\n prefix=SCRYPT_MCF_PREFIX_DEFAULT):\n \"\"\"Derives a Modular Crypt Format hash using the scrypt KDF\n\n Parameter space is smaller than for scrypt():\n N must be a power of two larger than 1 but no larger than 2 ** 31\n r and p must be positive numbers between 1 and 255\n Salt must be a byte string 1-16 bytes long.\n\n If no salt is given, a random salt of 128+ bits is used. 
(Recommended.)\n \"\"\"\n if (prefix != SCRYPT_MCF_PREFIX_s1 and prefix != SCRYPT_MCF_PREFIX_ANY):\n return mcf_mod.scrypt_mcf(scrypt, password, salt, N, r, p, prefix)\n if isinstance(password, unicode):\n password = password.encode('utf8')\n elif not isinstance(password, bytes):\n raise TypeError('password must be a unicode or byte string')\n if salt is None:\n salt = os.urandom(16)\n elif not (1 <= len(salt) <= 16):\n raise ValueError('salt must be 1-16 bytes')\n if N > 2**31:\n raise ValueError('N > 2**31 not supported')\n if b'\\0' in password:\n raise ValueError('scrypt_mcf password must not contain zero bytes')\n\n hash = scrypt(password, salt, N, r, p)\n\n h64 = base64.b64encode(hash)\n s64 = base64.b64encode(salt)\n\n out = ctypes.create_string_buffer(125)\n ret = _libscrypt_mcf(N, r, p, s64, h64, out)\n if not ret:\n raise ValueError\n\n out = out.raw.strip(b'\\0')\n # XXX: Hack to support old libscrypt (like in Ubuntu 14.04)\n if len(out) == 123:\n out = out + b'='\n\n return out"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning True if the password matches the given MCF hash", "response": "def scrypt_mcf_check(mcf, password):\n \"\"\"Returns True if the password matches the given MCF hash\"\"\"\n if not isinstance(mcf, bytes):\n raise TypeError('MCF must be a byte string')\n if isinstance(password, unicode):\n password = password.encode('utf8')\n elif not isinstance(password, bytes):\n raise TypeError('password must be a unicode or byte string')\n if len(mcf) != 124 or b'\\0' in password:\n return mcf_mod.scrypt_mcf_check(scrypt, mcf, password)\n\n mcfbuf = ctypes.create_string_buffer(mcf)\n ret = _libscrypt_check(mcfbuf, password)\n if ret < 0:\n return mcf_mod.scrypt_mcf_check(scrypt, mcf, password)\n\n return bool(ret)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef parse_mixed_delim_str(line):\n arrs = [[], [], []]\n for group in line.split(' 
'):\n for col, coord in enumerate(group.split('/')):\n if coord:\n arrs[col].append(int(coord))\n\n return [tuple(arr) for arr in arrs]", "response": "Turns an .obj face index string line into [verts, texcoords, normals] numeric tuples."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef read_objfile(fname):\n verts = defaultdict(list)\n obj_props = []\n with open(fname) as f:\n lines = f.read().splitlines()\n\n for line in lines:\n if line:\n split_line = line.strip().split(' ', 1)\n if len(split_line) < 2:\n continue\n\n prefix, value = split_line[0], split_line[1]\n if prefix == 'o':\n obj_props.append({})\n obj = obj_props[-1]\n obj['f'] = []\n obj[prefix] = value\n # For files without an 'o' statement\n elif prefix == 'v' and len(obj_props) < 1:\n obj_props.append({})\n obj = obj_props[-1]\n obj['f'] = []\n obj['o'] = fname\n if obj_props:\n if prefix[0] == 'v':\n verts[prefix].append([float(val) for val in value.split(' ')])\n elif prefix == 'f':\n obj[prefix].append(parse_mixed_delim_str(value))\n else:\n obj[prefix] = value\n\n\n # Reindex vertices to be in face index order, then remove face indices.\n verts = {key: np.array(value) for key, value in iteritems(verts)}\n for obj in obj_props:\n obj['f'] = tuple(np.array(verts) if verts[0] else tuple() for verts in zip(*obj['f']))\n for idx, vertname in enumerate(['v' ,'vt', 'vn']):\n if vertname in verts:\n obj[vertname] = verts[vertname][obj['f'][idx].flatten() - 1, :]\n else:\n obj[vertname] = tuple()\n del obj['f']\n\n geoms = {obj['o']:obj for obj in obj_props}\n\n return geoms", "response": "Takes an .
obj filename and returns dict of object properties for each object in file."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef read_wavefront(fname_obj):\n fname_mtl = ''\n geoms = read_objfile(fname_obj)\n for line in open(fname_obj):\n if line:\n split_line = line.strip().split(' ', 1)\n if len(split_line) < 2:\n continue\n\n prefix, data = split_line[0], split_line[1]\n if 'mtllib' in prefix:\n fname_mtl = data\n break\n\n if fname_mtl:\n materials = read_mtlfile(path.join(path.dirname(fname_obj), fname_mtl))\n\n for geom in geoms.values():\n geom['material'] = materials[geom['usemtl']]\n\n return geoms", "response": "Reads a wavefront file and returns a dictionary along with their material dictionary."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ndisables or enable a disk.", "response": "def disableEnableDisk(rh, userid, vaddr, option):\n \"\"\"\n Disable or enable a disk.\n\n Input:\n Request Handle:\n owning userid\n virtual address\n option ('-e': enable, '-d': disable)\n\n Output:\n Dictionary containing the following:\n overallRC - overall return code, 0: success, non-zero: failure\n rc - rc from the chccwdev command or IUCV transmission.\n rs - rs from the chccwdev command or IUCV transmission.\n results - possible error message from the IUCV transmission.\n \"\"\"\n\n rh.printSysLog(\"Enter vmUtils.disableEnableDisk, userid: \" + userid +\n \" addr: \" + vaddr + \" option: \" + option)\n\n results = {\n 'overallRC': 0,\n 'rc': 0,\n 'rs': 0,\n 'response': ''\n }\n\n \"\"\"\n Can't guarantee the success of online/offline disk, need to wait\n Until it's done because we may detach the disk after -d option\n or use the disk after the -e option\n \"\"\"\n for secs in [0.1, 0.4, 1, 1.5, 3, 7, 15, 32, 30, 30,\n 60, 60, 60, 60, 60]:\n strCmd = \"sudo /sbin/chccwdev \" + option + \" \" + vaddr + \" 2>&1\"\n results = execCmdThruIUCV(rh, userid, strCmd)\n if 
results['overallRC'] == 0:\n break\n elif (results['overallRC'] == 2 and results['rc'] == 8 and\n results['rs'] == 1 and option == '-d'):\n # Linux does not know about the disk being disabled.\n # Ok, nothing to do. Treat this as a success.\n results = {'overallRC': 0, 'rc': 0, 'rs': 0, 'response': ''}\n break\n time.sleep(secs)\n\n rh.printSysLog(\"Exit vmUtils.disableEnableDisk, rc: \" +\n str(results['overallRC']))\n return results"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef execCmdThruIUCV(rh, userid, strCmd, hideInLog=[]):\n if len(hideInLog) == 0:\n rh.printSysLog(\"Enter vmUtils.execCmdThruIUCV, userid: \" +\n userid + \" cmd: \" + strCmd)\n else:\n logCmd = strCmd.split(' ')\n for i in hideInLog:\n logCmd[i] = ''\n rh.printSysLog(\"Enter vmUtils.execCmdThruIUCV, userid: \" +\n userid + \" cmd: \" + ' '.join(logCmd))\n\n iucvpath = '/opt/zthin/bin/IUCV/'\n results = {\n 'overallRC': 0,\n 'rc': 0,\n 'rs': 0,\n 'errno': 0,\n 'response': [],\n }\n\n cmd = ['sudo',\n iucvpath + \"iucvclnt\",\n userid,\n strCmd]\n try:\n results['response'] = subprocess.check_output(\n cmd,\n stderr=subprocess.STDOUT,\n close_fds=True)\n if isinstance(results['response'], bytes):\n results['response'] = bytes.decode(results['response'])\n except CalledProcessError as e:\n msg = []\n results['overallRC'] = 2\n results['rc'] = e.returncode\n output = bytes.decode(e.output)\n\n match = re.search('Return code (.+?),', output)\n if match:\n try:\n results['rc'] = int(match.group(1))\n except ValueError:\n # Return code in response from IUCVCLNT is not an int.\n msg = msgs.msg['0311'][1] % (modId, userid, strCmd,\n results['rc'], match.group(1), output)\n\n if not msg:\n # We got the rc. 
Now, get the rs.\n match = re.search('Reason code (.+?)\\.', output)\n if match:\n try:\n results['rs'] = int(match.group(1))\n except ValueError:\n # Reason code in response from IUCVCLNT is not an int.\n msg = msgs.msg['0312'][1] % (modId, userid, strCmd,\n results['rc'], match.group(1), output)\n\n if msg:\n # Already produced an error message.\n pass\n elif results['rc'] == 1:\n # Command was not authorized or a generic Linux error.\n msg = msgs.msg['0313'][1] % (modId, userid, strCmd,\n results['rc'], results['rs'], output)\n elif results['rc'] == 2:\n # IUCV client parameter error.\n msg = msgs.msg['0314'][1] % (modId, userid, strCmd,\n results['rc'], results['rs'], output)\n elif results['rc'] == 4:\n # IUCV socket error\n msg = msgs.msg['0315'][1] % (modId, userid, strCmd,\n results['rc'], results['rs'], output)\n elif results['rc'] == 8:\n # Executed command failed\n msg = msgs.msg['0316'][1] % (modId, userid, strCmd,\n results['rc'], results['rs'], output)\n elif results['rc'] == 16:\n # File Transport failed\n msg = msgs.msg['0317'][1] % (modId, userid, strCmd,\n results['rc'], results['rs'], output)\n elif results['rc'] == 32:\n # IUCV server file was not found on this system.\n msg += msgs.msg['0318'][1] % (modId, userid, strCmd,\n results['rc'], results['rs'], output)\n else:\n # Unrecognized IUCV client error\n msg = msgs.msg['0319'][1] % (modId, userid, strCmd,\n results['rc'], results['rs'], output)\n results['response'] = msg\n\n except Exception as e:\n # Other exceptions from this system (i.e. 
not the managed system).\n results = msgs.msg['0421'][0]\n msg = msgs.msg['0421'][1] % (modId, strCmd,\n type(e).__name__, str(e))\n results['response'] = msg\n\n rh.printSysLog(\"Exit vmUtils.execCmdThruIUCV, rc: \" +\n str(results['rc']))\n return results", "response": "This routine sends a command to a virtual machine using IUCV."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget the performance information for a userid Input: Request Handle Userid to query <- may change this to a list later. Output: Dictionary containing the following: overallRC - overall return code, 0: success, non-zero: failure rc - RC returned from SMCLI if overallRC = 0. rs - RS returned from SMCLI if overallRC = 0. errno - Errno returned from SMCLI if overallRC = 0. response - Stripped and reformatted output of the SMCLI command.", "response": "def getPerfInfo(rh, useridlist):\n \"\"\"\n Get the performance information for a userid\n\n Input:\n Request Handle\n Userid to query <- may change this to a list later.\n\n Output:\n Dictionary containing the following:\n overallRC - overall return code, 0: success, non-zero: failure\n rc - RC returned from SMCLI if overallRC = 0.\n rs - RS returned from SMCLI if overallRC = 0.\n errno - Errno returned from SMCLI if overallRC = 0.\n response - Stripped and reformatted output of the SMCLI command.\n \"\"\"\n rh.printSysLog(\"Enter vmUtils.getPerfInfo, userid: \" + useridlist)\n parms = [\"-T\", rh.userid,\n \"-c\", \"1\"]\n results = invokeSMCLI(rh, \"Image_Performance_Query\", parms)\n if results['overallRC'] != 0:\n # SMCLI failed.\n rh.printLn(\"ES\", results['response'])\n rh.printSysLog(\"Exit vmUtils.getPerfInfo, rc: \" +\n str(results['overallRC']))\n return results\n\n lines = results['response'].split(\"\\n\")\n usedTime = 0\n totalCpu = 0\n totalMem = 0\n usedMem = 0\n try:\n for line in lines:\n if \"Used CPU time:\" in line:\n usedTime = line.split()[3].strip('\"')\n # Value is in us, need make it 
seconds\n usedTime = int(usedTime) / 1000000\n if \"Guest CPUs:\" in line:\n totalCpu = line.split()[2].strip('\"')\n if \"Max memory:\" in line:\n totalMem = line.split()[2].strip('\"')\n # Value is in Kb, need to make it Mb\n totalMem = int(totalMem) / 1024\n if \"Used memory:\" in line:\n usedMem = line.split()[2].strip('\"')\n usedMem = int(usedMem) / 1024\n except Exception as e:\n msg = msgs.msg['0412'][1] % (modId, type(e).__name__,\n str(e), results['response'])\n rh.printLn(\"ES\", msg)\n results['overallRC'] = 4\n results['rc'] = 4\n results['rs'] = 412\n\n if results['overallRC'] == 0:\n memstr = \"Total Memory: %iM\\n\" % totalMem\n usedmemstr = \"Used Memory: %iM\\n\" % usedMem\n procstr = \"Processors: %s\\n\" % totalCpu\n timestr = \"CPU Used Time: %i sec\\n\" % usedTime\n results['response'] = memstr + usedmemstr + procstr + timestr\n rh.printSysLog(\"Exit vmUtils.getPerfInfo, rc: \" +\n str(results['rc']))\n return results"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef installFS(rh, vaddr, mode, fileSystem, diskType):\n\n rh.printSysLog(\"Enter vmUtils.installFS, userid: \" + rh.userid +\n \", vaddr: \" + str(vaddr) + \", mode: \" + mode + \", file system: \" +\n fileSystem + \", disk type: \" + diskType)\n\n results = {\n 'overallRC': 0,\n 'rc': 0,\n 'rs': 0,\n 'errno': 0,\n }\n\n out = ''\n diskAccessed = False\n\n # Get access to the disk.\n cmd = [\"sudo\",\n \"/opt/zthin/bin/linkdiskandbringonline\",\n rh.userid,\n vaddr,\n mode]\n strCmd = ' '.join(cmd)\n rh.printSysLog(\"Invoking: \" + strCmd)\n try:\n out = subprocess.check_output(cmd, close_fds=True)\n if isinstance(out, bytes):\n out = bytes.decode(out)\n diskAccessed = True\n except CalledProcessError as e:\n rh.printLn(\"ES\", msgs.msg['0415'][1] % (modId, strCmd,\n e.returncode, e.output))\n results = msgs.msg['0415'][0]\n results['rs'] = e.returncode\n rh.updateResults(results)\n except Exception as e:\n # All other 
exceptions.\n results = msgs.msg['0421'][0]\n rh.printLn(\"ES\", msgs.msg['0421'][1] % (modId, strCmd,\n type(e).__name__, str(e)))\n\n if results['overallRC'] == 0:\n \"\"\"\n sample output:\n linkdiskandbringonline maint start time: 2017-03-03-16:20:48.011\n Success: Userid maint vdev 193 linked at ad35 device name dasdh\n linkdiskandbringonline exit time: 2017-03-03-16:20:52.150\n \"\"\"\n match = re.search('Success:(.+?)\\n', out)\n if match:\n parts = match.group(1).split()\n if len(parts) > 9:\n device = \"/dev/\" + parts[9]\n else:\n strCmd = ' '.join(cmd)\n rh.printLn(\"ES\", msgs.msg['0416'][1] % (modId,\n 'Success:', 10, strCmd, out))\n results = msgs.msg['0416'][0]\n rh.updateResults(results)\n else:\n strCmd = ' '.join(cmd)\n rh.printLn(\"ES\", msgs.msg['0417'][1] % (modId,\n 'Success:', strCmd, out))\n results = msgs.msg['0417'][0]\n rh.updateResults(results)\n\n if results['overallRC'] == 0 and diskType == \"3390\":\n # dasdfmt the disk\n cmd = [\"sudo\",\n \"/sbin/dasdfmt\",\n \"-y\",\n \"-b\", \"4096\",\n \"-d\", \"cdl\",\n \"-f\", device]\n strCmd = ' '.join(cmd)\n rh.printSysLog(\"Invoking: \" + strCmd)\n try:\n out = subprocess.check_output(cmd, close_fds=True)\n if isinstance(out, bytes):\n out = bytes.decode(out)\n except CalledProcessError as e:\n rh.printLn(\"ES\", msgs.msg['0415'][1] % (modId, strCmd,\n e.returncode, e.output))\n results = msgs.msg['0415'][0]\n results['rs'] = e.returncode\n rh.updateResults(results)\n except Exception as e:\n # All other exceptions.\n strCmd = \" \".join(cmd)\n rh.printLn(\"ES\", msgs.msg['0421'][1] % (modId, strCmd,\n type(e).__name__, str(e)))\n results = msgs.msg['0421'][0]\n rh.updateResults(results)\n\n if results['overallRC'] == 0 and diskType == \"3390\":\n # Settle the devices so we can do the partition.\n strCmd = (\"which udevadm &> /dev/null && \" +\n \"udevadm settle || udevsettle\")\n rh.printSysLog(\"Invoking: \" + strCmd)\n try:\n subprocess.check_output(\n strCmd,\n 
stderr=subprocess.STDOUT,\n close_fds=True,\n shell=True)\n if isinstance(out, bytes):\n out = bytes.decode(out)\n except CalledProcessError as e:\n rh.printLn(\"ES\", msgs.msg['0415'][1] % (modId, strCmd,\n e.returncode, e.output))\n results = msgs.msg['0415'][0]\n results['rs'] = e.returncode\n rh.updateResults(results)\n except Exception as e:\n # All other exceptions.\n strCmd = \" \".join(cmd)\n rh.printLn(\"ES\", msgs.msg['0421'][1] % (modId, strCmd,\n type(e).__name__, str(e)))\n results = msgs.msg['0421'][0]\n rh.updateResults(results)\n\n if results['overallRC'] == 0 and diskType == \"3390\":\n # Prepare the partition with fdasd\n cmd = [\"sudo\", \"/sbin/fdasd\", \"-a\", device]\n strCmd = ' '.join(cmd)\n rh.printSysLog(\"Invoking: \" + strCmd)\n try:\n out = subprocess.check_output(cmd,\n stderr=subprocess.STDOUT, close_fds=True)\n if isinstance(out, bytes):\n out = bytes.decode(out)\n except CalledProcessError as e:\n rh.printLn(\"ES\", msgs.msg['0415'][1] % (modId, strCmd,\n e.returncode, e.output))\n results = msgs.msg['0415'][0]\n results['rs'] = e.returncode\n rh.updateResults(results)\n except Exception as e:\n # All other exceptions.\n rh.printLn(\"ES\", msgs.msg['0421'][1] % (modId, strCmd,\n type(e).__name__, str(e)))\n results = msgs.msg['0421'][0]\n rh.updateResults(results)\n\n if results['overallRC'] == 0 and diskType == \"9336\":\n # Delete the existing partition in case the disk already\n # has a partition in it.\n cmd = \"sudo /sbin/fdisk \" + device + \" << EOF\\nd\\nw\\nEOF\"\n rh.printSysLog(\"Invoking: /sbin/fdsik \" + device +\n \" << EOF\\\\nd\\\\nw\\\\nEOF \")\n try:\n out = subprocess.check_output(cmd,\n stderr=subprocess.STDOUT,\n close_fds=True,\n shell=True)\n if isinstance(out, bytes):\n out = bytes.decode(out)\n except CalledProcessError as e:\n rh.printLn(\"ES\", msgs.msg['0415'][1] % (modId, cmd,\n e.returncode, e.output))\n results = msgs.msg['0415'][0]\n results['rs'] = e.returncode\n rh.updateResults(results)\n except 
Exception as e:\n # All other exceptions.\n rh.printLn(\"ES\", msgs.msg['0421'][1] % (modId, cmd,\n type(e).__name__, str(e)))\n results = msgs.msg['0421'][0]\n rh.updateResults(results)\n\n if results['overallRC'] == 0 and diskType == \"9336\":\n # Prepare the partition with fdisk\n cmd = \"sudo /sbin/fdisk \" + device + \" << EOF\\nn\\np\\n1\\n\\n\\nw\\nEOF\"\n rh.printSysLog(\"Invoking: sudo /sbin/fdisk \" + device +\n \" << EOF\\\\nn\\\\np\\\\n1\\\\n\\\\n\\\\nw\\\\nEOF\")\n try:\n out = subprocess.check_output(cmd,\n stderr=subprocess.STDOUT,\n close_fds=True,\n shell=True)\n if isinstance(out, bytes):\n out = bytes.decode(out)\n except CalledProcessError as e:\n rh.printLn(\"ES\", msgs.msg['0415'][1] % (modId, cmd,\n e.returncode, e.output))\n results = msgs.msg['0415'][0]\n results['rs'] = e.returncode\n rh.updateResults(results)\n except Exception as e:\n # All other exceptions.\n rh.printLn(\"ES\", msgs.msg['0421'][1] % (modId, cmd,\n type(e).__name__, str(e)))\n results = msgs.msg['0421'][0]\n rh.updateResults(results)\n\n if results['overallRC'] == 0:\n # Settle the devices so we can do the partition.\n strCmd = (\"which udevadm &> /dev/null && \" +\n \"udevadm settle || udevsettle\")\n rh.printSysLog(\"Invoking: \" + strCmd)\n try:\n subprocess.check_output(\n strCmd,\n stderr=subprocess.STDOUT,\n close_fds=True,\n shell=True)\n if isinstance(out, bytes):\n out = bytes.decode(out)\n except CalledProcessError as e:\n rh.printLn(\"ES\", msgs.msg['0415'][1] % (modId, strCmd,\n e.returncode, e.output))\n results = msgs.msg['0415'][0]\n results['rs'] = e.returncode\n rh.updateResults(results)\n except Exception as e:\n # All other exceptions.\n strCmd = \" \".join(cmd)\n rh.printLn(\"ES\", msgs.msg['0421'][1] % (modId, strCmd,\n type(e).__name__, str(e)))\n results = msgs.msg['0421'][0]\n rh.updateResults(results)\n\n if results['overallRC'] == 0:\n # Install the file system into the disk.\n device = device + \"1\" # Point to first partition\n if fileSystem 
!= 'swap':\n if fileSystem == 'xfs':\n cmd = [\"sudo\", \"mkfs.xfs\", \"-f\", device]\n else:\n cmd = [\"sudo\", \"mkfs\", \"-F\", \"-t\", fileSystem, device]\n strCmd = ' '.join(cmd)\n rh.printSysLog(\"Invoking: \" + strCmd)\n try:\n out = subprocess.check_output(cmd,\n stderr=subprocess.STDOUT, close_fds=True)\n if isinstance(out, bytes):\n out = bytes.decode(out)\n rh.printLn(\"N\", \"File system: \" + fileSystem +\n \" is installed.\")\n except CalledProcessError as e:\n rh.printLn(\"ES\", msgs.msg['0415'][1] % (modId, strCmd,\n e.returncode, e.output))\n results = msgs.msg['0415'][0]\n results['rs'] = e.returncode\n rh.updateResults(results)\n except Exception as e:\n # All other exceptions.\n rh.printLn(\"ES\", msgs.msg['0421'][1] % (modId, strCmd,\n type(e).__name__, str(e)))\n results = msgs.msg['0421'][0]\n rh.updateResults(results)\n else:\n rh.printLn(\"N\", \"File system type is swap. No need to install \" +\n \"a filesystem.\")\n\n if diskAccessed:\n # Give up the disk.\n cmd = [\"sudo\", \"/opt/zthin/bin/offlinediskanddetach\",\n rh.userid,\n vaddr]\n strCmd = ' '.join(cmd)\n rh.printSysLog(\"Invoking: \" + strCmd)\n try:\n out = subprocess.check_output(cmd, close_fds=True)\n if isinstance(out, bytes):\n out = bytes.decode(out)\n except CalledProcessError as e:\n rh.printLn(\"ES\", msgs.msg['0415'][1] % (modId, strCmd,\n e.returncode, e.output))\n results = msgs.msg['0415'][0]\n results['rs'] = e.returncode\n rh.updateResults(results)\n except Exception as e:\n # All other exceptions.\n rh.printLn(\"ES\", msgs.msg['0421'][1] % (modId, strCmd,\n type(e).__name__, str(e)))\n results = msgs.msg['0421'][0]\n rh.updateResults(results)\n\n rh.printSysLog(\"Exit vmUtils.installFS, rc: \" + str(results['rc']))\n return results", "response": "This function installs a filesystem on a virtual machine."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef invokeSMCLI(rh, api, parms, hideInLog=[]):\n if 
len(hideInLog) == 0:\n rh.printSysLog(\"Enter vmUtils.invokeSMCLI, userid: \" +\n rh.userid + \", function: \" + api +\n \", parms: \" + str(parms))\n else:\n logParms = parms\n for i in hideInLog:\n logParms[i] = ''\n rh.printSysLog(\"Enter vmUtils.invokeSMCLI, userid: \" +\n rh.userid + \", function: \" + api +\n \", parms: \" + str(logParms))\n goodHeader = False\n\n results = {\n 'overallRC': 0,\n 'rc': 0,\n 'rs': 0,\n 'errno': 0,\n 'response': [],\n 'strError': '',\n }\n\n cmd = []\n cmd.append('sudo')\n cmd.append('/opt/zthin/bin/smcli')\n cmd.append(api)\n cmd.append('--addRCheader')\n\n try:\n smcliResp = subprocess.check_output(cmd + parms,\n close_fds=True)\n if isinstance(smcliResp, bytes):\n smcliResp = bytes.decode(smcliResp, errors='replace')\n\n smcliResp = smcliResp.split('\\n', 1)\n results['response'] = smcliResp[1]\n results['overallRC'] = 0\n results['rc'] = 0\n\n except CalledProcessError as e:\n strCmd = \" \".join(cmd + parms)\n\n # Break up the RC header into its component parts.\n if e.output == '':\n smcliResp = ['']\n else:\n smcliResp = bytes.decode(e.output).split('\\n', 1)\n\n # Split the header into its component pieces.\n rcHeader = smcliResp[0].split('(details)', 1)\n if len(rcHeader) == 0:\n rcHeader = ['', '']\n elif len(rcHeader) == 1:\n # No data after the details tag. Add empty [1] value.\n rcHeader.append('')\n codes = rcHeader[0].split(' ')\n\n # Validate the rc, rs, and errno.\n if len(codes) < 3:\n # Unexpected number of codes. 
Need at least 3.\n results = msgs.msg['0301'][0]\n results['response'] = msgs.msg['0301'][1] % (modId, api,\n strCmd, rcHeader[0], rcHeader[1])\n else:\n goodHeader = True\n # Convert the first word (overall rc from SMAPI) to an int\n # and set the SMT overall rc based on this value.\n orcError = False\n try:\n results['overallRC'] = int(codes[0])\n if results['overallRC'] not in [8, 24, 25]:\n orcError = True\n except ValueError:\n goodHeader = False\n orcError = True\n if orcError:\n results['overallRC'] = 25 # SMCLI Internal Error\n results = msgs.msg['0302'][0]\n results['response'] = msgs.msg['0302'][1] % (modId,\n api, codes[0], strCmd, rcHeader[0], rcHeader[1])\n\n # Convert the second word to an int and save as rc.\n try:\n results['rc'] = int(codes[1])\n except ValueError:\n goodHeader = False\n results = msgs.msg['0303'][0]\n results['response'] = msgs.msg['0303'][1] % (modId,\n api, codes[1], strCmd, rcHeader[0], rcHeader[1])\n\n # Convert the second word to an int and save it as either\n # the rs or errno.\n try:\n word3 = int(codes[2])\n if results['overallRC'] == 8:\n results['rs'] = word3 # Must be an rs\n elif results['overallRC'] == 25:\n results['errno'] = word3 # Must be the errno\n # We ignore word 3 for everyone else and default to 0.\n except ValueError:\n goodHeader = False\n results = msgs.msg['0304'][0]\n results['response'] = msgs.msg['0304'][1] % (modId,\n api, codes[1], strCmd, rcHeader[0], rcHeader[1])\n\n results['strError'] = rcHeader[1].lstrip()\n\n if goodHeader:\n # Produce a message that provides the error info.\n results['response'] = msgs.msg['0300'][1] % (modId,\n api, results['overallRC'], results['rc'],\n results['rs'], results['errno'],\n strCmd, smcliResp[1])\n\n except Exception as e:\n # All other exceptions.\n strCmd = \" \".join(cmd + parms)\n results = msgs.msg['0305'][0]\n results['response'] = msgs.msg['0305'][1] % (modId, strCmd,\n type(e).__name__, str(e))\n\n rh.printSysLog(\"Exit vmUtils.invokeSMCLI, rc: \" +\n 
str(results['overallRC']))\n return results", "response": "Invoke the SMCLI command and return the results."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ndetermines whether a virtual machine with the specified userid is logged on.", "response": "def isLoggedOn(rh, userid):\n \"\"\"\n Determine whether a virtual machine is logged on.\n\n Input:\n Request Handle:\n userid being queried\n\n Output:\n Dictionary containing the following:\n overallRC - overall return code, 0: success, non-zero: failure\n rc - 0: if we got status. Otherwise, it is the\n error return code from the commands issued.\n rs - Based on rc value. For rc==0, rs is:\n 0: if we determined it is logged on.\n 1: if we determined it is logged off.\n \"\"\"\n\n rh.printSysLog(\"Enter vmUtils.isLoggedOn, userid: \" + userid)\n\n results = {\n 'overallRC': 0,\n 'rc': 0,\n 'rs': 0,\n }\n\n cmd = [\"sudo\", \"/sbin/vmcp\", \"query\", \"user\", userid]\n strCmd = ' '.join(cmd)\n rh.printSysLog(\"Invoking: \" + strCmd)\n try:\n subprocess.check_output(\n cmd,\n close_fds=True,\n stderr=subprocess.STDOUT)\n except CalledProcessError as e:\n search_pattern = '(^HCP\\w\\w\\w045E|^HCP\\w\\w\\w361E)'.encode()\n match = re.search(search_pattern, e.output)\n if match:\n # Not logged on\n results['rs'] = 1\n else:\n # Abnormal failure\n rh.printLn(\"ES\", msgs.msg['0415'][1] % (modId, strCmd,\n e.returncode, e.output))\n results = msgs.msg['0415'][0]\n results['rs'] = e.returncode\n except Exception as e:\n # All other exceptions.\n results = msgs.msg['0421'][0]\n rh.printLn(\"ES\", msgs.msg['0421'][1] % (modId, strCmd,\n type(e).__name__, str(e)))\n\n rh.printSysLog(\"Exit vmUtils.isLoggedOn, overallRC: \" +\n str(results['overallRC']) + \" rc: \" + str(results['rc']) +\n \" rs: \" + str(results['rs']))\n return results"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef 
punch2reader(rh, userid, fileLoc, spoolClass):\n\n rh.printSysLog(\"Enter punch2reader.punchFile\")\n results = {}\n\n # Setting rc to time out rc code as default and its changed during runtime\n results['rc'] = 9\n\n # Punch to the current user intially and then change the spool class.\n cmd = [\"sudo\", \"/usr/sbin/vmur\", \"punch\", \"-r\", fileLoc]\n strCmd = ' '.join(cmd)\n for secs in [1, 2, 3, 5, 10]:\n rh.printSysLog(\"Invoking: \" + strCmd)\n try:\n results['response'] = subprocess.check_output(cmd,\n close_fds=True,\n stderr=subprocess.STDOUT)\n if isinstance(results['response'], bytes):\n results['response'] = bytes.decode(results['response'])\n results['rc'] = 0\n rh.updateResults(results)\n break\n except CalledProcessError as e:\n results['response'] = e.output\n # Check if we have concurrent instance of vmur active\n to_find = \"A concurrent instance of vmur is already active\"\n to_find = to_find.encode()\n if results['response'].find(to_find) == -1:\n # Failure in VMUR punch update the rc\n results['rc'] = 7\n break\n else:\n # if concurrent vmur is active try after sometime\n rh.printSysLog(\"Punch in use. 
Retrying after \" +\n str(secs) + \" seconds\")\n time.sleep(secs)\n except Exception as e:\n # All other exceptions.\n rh.printLn(\"ES\", msgs.msg['0421'][1] % (modId, strCmd,\n type(e).__name__, str(e)))\n results = msgs.msg['0421'][0]\n rh.updateResults(results)\n\n if results['rc'] == 7:\n # Failure while issuing vmur command (For eg: invalid file given)\n msg = msgs.msg['0401'][1] % (modId, fileLoc, userid,\n results['response'])\n rh.printLn(\"ES\", msg)\n rh.updateResults(msgs.msg['0401'][0])\n\n elif results['rc'] == 9:\n # Failure due to vmur timeout\n msg = msgs.msg['0406'][1] % (modId, fileLoc)\n rh.printLn(\"ES\", msg)\n rh.updateResults(msgs.msg['0406'][0])\n\n if rh.results['overallRC'] == 0:\n # On VMUR success change the class of the spool file\n spoolId = re.findall(r'\\d+', str(results['response']))\n cmd = [\"sudo\", \"vmcp\", \"change\", \"rdr\", str(spoolId[0]), \"class\",\n spoolClass]\n strCmd = \" \".join(cmd)\n rh.printSysLog(\"Invoking: \" + strCmd)\n try:\n results['response'] = subprocess.check_output(cmd,\n close_fds=True,\n stderr=subprocess.STDOUT)\n if isinstance(results['response'], bytes):\n results['response'] = bytes.decode(results['response'])\n rh.updateResults(results)\n except CalledProcessError as e:\n msg = msgs.msg['0404'][1] % (modId,\n spoolClass,\n e.output)\n rh.printLn(\"ES\", msg)\n rh.updateResults(msgs.msg['0404'][0])\n # Class change failed\n # Delete the punched file from current userid\n cmd = [\"sudo\", \"vmcp\", \"purge\", \"rdr\", spoolId[0]]\n strCmd = \" \".join(cmd)\n rh.printSysLog(\"Invoking: \" + strCmd)\n try:\n results['response'] = subprocess.check_output(cmd,\n close_fds=True,\n stderr=subprocess.STDOUT)\n if isinstance(results['response'], bytes):\n results['response'] = bytes.decode(results['response'])\n # We only need to issue the printLn.\n # Don't need to change return/reason code values\n except CalledProcessError as e:\n msg = msgs.msg['0403'][1] % (modId,\n spoolId[0],\n e.output)\n 
rh.printLn(\"ES\", msg)\n except Exception as e:\n # All other exceptions related to purge.\n # We only need to issue the printLn.\n # Don't need to change return/reason code values\n rh.printLn(\"ES\", msgs.msg['0421'][1] % (modId, strCmd,\n type(e).__name__, str(e)))\n except Exception as e:\n # All other exceptions related to change rdr.\n results = msgs.msg['0421'][0]\n rh.printLn(\"ES\", msgs.msg['0421'][1] % (modId, strCmd,\n type(e).__name__, str(e)))\n rh.updateResults(msgs.msg['0421'][0])\n\n if rh.results['overallRC'] == 0:\n # Transfer the file from current user to specified user\n cmd = [\"sudo\", \"vmcp\", \"transfer\", \"*\", \"rdr\", str(spoolId[0]), \"to\",\n userid, \"rdr\"]\n strCmd = \" \".join(cmd)\n rh.printSysLog(\"Invoking: \" + strCmd)\n try:\n results['response'] = subprocess.check_output(cmd,\n close_fds=True,\n stderr=subprocess.STDOUT)\n if isinstance(results['response'], bytes):\n results['response'] = bytes.decode(results['response'])\n rh.updateResults(results)\n except CalledProcessError as e:\n msg = msgs.msg['0424'][1] % (modId,\n fileLoc,\n userid, e.output)\n rh.printLn(\"ES\", msg)\n rh.updateResults(msgs.msg['0424'][0])\n\n # Transfer failed so delete the punched file from current userid\n cmd = [\"sudo\", \"vmcp\", \"purge\", \"rdr\", spoolId[0]]\n strCmd = \" \".join(cmd)\n rh.printSysLog(\"Invoking: \" + strCmd)\n try:\n results['response'] = subprocess.check_output(cmd,\n close_fds=True,\n stderr=subprocess.STDOUT)\n if isinstance(results['response'], bytes):\n results['response'] = bytes.decode(results['response'])\n # We only need to issue the printLn.\n # Don't need to change return/reason code values\n except CalledProcessError as e:\n msg = msgs.msg['0403'][1] % (modId,\n spoolId[0],\n e.output)\n rh.printLn(\"ES\", msg)\n except Exception as e:\n # All other exceptions related to purge.\n rh.printLn(\"ES\", msgs.msg['0421'][1] % (modId, strCmd,\n type(e).__name__, str(e)))\n except Exception as e:\n # All other 
exceptions related to transfer.\n results = msgs.msg['0421'][0]\n rh.printLn(\"ES\", msgs.msg['0421'][1] % (modId, strCmd,\n type(e).__name__, str(e)))\n rh.updateResults(msgs.msg['0421'][0])\n\n rh.printSysLog(\"Exit vmUtils.punch2reader, rc: \" +\n str(rh.results['overallRC']))\n return rh.results['overallRC']", "response": "Punch a file to a virtual reader of the specified virtual machine."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef waitForOSState(rh, userid, desiredState, maxQueries=90, sleepSecs=5):\n\n rh.printSysLog(\"Enter vmUtils.waitForOSState, userid: \" + userid +\n \" state: \" + desiredState +\n \" maxWait: \" + str(maxQueries) +\n \" sleepSecs: \" + str(sleepSecs))\n\n results = {}\n\n strCmd = \"echo 'ping'\"\n stateFnd = False\n\n for i in range(1, maxQueries + 1):\n results = execCmdThruIUCV(rh, rh.userid, strCmd)\n if results['overallRC'] == 0:\n if desiredState == 'up':\n stateFnd = True\n break\n else:\n if desiredState == 'down':\n stateFnd = True\n break\n\n if i < maxQueries:\n time.sleep(sleepSecs)\n\n if stateFnd is True:\n results = {\n 'overallRC': 0,\n 'rc': 0,\n 'rs': 0,\n }\n else:\n maxWait = maxQueries * sleepSecs\n rh.printLn(\"ES\", msgs.msg['0413'][1] % (modId, userid,\n desiredState, maxWait))\n results = msgs.msg['0413'][0]\n\n rh.printSysLog(\"Exit vmUtils.waitForOSState, rc: \" +\n str(results['overallRC']))\n return results", "response": "This function waits for the virtual OS to go into the desired state."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef waitForVMState(rh, userid, desiredState, maxQueries=90, sleepSecs=5):\n\n rh.printSysLog(\"Enter vmUtils.waitForVMState, userid: \" + userid +\n \" state: \" + desiredState +\n \" maxWait: \" + str(maxQueries) +\n \" sleepSecs: \" + str(sleepSecs))\n\n results = {}\n\n cmd = [\"sudo\", \"/sbin/vmcp\", \"query\", \"user\", userid]\n 
strCmd = \" \".join(cmd)\n stateFnd = False\n\n for i in range(1, maxQueries + 1):\n rh.printSysLog(\"Invoking: \" + strCmd)\n try:\n out = subprocess.check_output(\n cmd,\n close_fds=True,\n stderr=subprocess.STDOUT)\n if isinstance(out, bytes):\n out = bytes.decode(out)\n if desiredState == 'on':\n stateFnd = True\n break\n except CalledProcessError as e:\n match = re.search('(^HCP\\w\\w\\w045E|^HCP\\w\\w\\w361E)', e.output)\n if match:\n # Logged off\n if desiredState == 'off':\n stateFnd = True\n break\n else:\n # Abnormal failure\n out = e.output\n rh.printLn(\"ES\", msgs.msg['0415'][1] % (modId, strCmd,\n e.returncode, out))\n results = msgs.msg['0415'][0]\n results['rs'] = e.returncode\n break\n except Exception as e:\n # All other exceptions.\n rh.printLn(\"ES\", msgs.msg['0421'][1] % (modId, strCmd,\n type(e).__name__, str(e)))\n results = msgs.msg['0421'][0]\n\n if i < maxQueries:\n # Sleep a bit before looping.\n time.sleep(sleepSecs)\n\n if stateFnd is True:\n results = {\n 'overallRC': 0,\n 'rc': 0,\n 'rs': 0,\n }\n else:\n maxWait = maxQueries * sleepSecs\n rh.printLn(\"ES\", msgs.msg['0414'][1] % (modId, userid,\n desiredState, maxWait))\n results = msgs.msg['0414'][0]\n\n rh.printSysLog(\"Exit vmUtils.waitForVMState, rc: \" +\n str(results['overallRC']))\n return results", "response": "This function waits for the virtual machine to go into the desired state."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nmatch lines prefixed with a hash mark that don t look like text.", "response": "def match(self, context, line):\n\t\t\"\"\"Match lines prefixed with a hash (\"#\") mark that don't look like text.\"\"\"\n\t\tstripped = line.stripped\n\t\treturn stripped.startswith('#') and not stripped.startswith('#{')"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn a flattened version of a cinje chunk stream.", "response": "def flatten(input, file=None, encoding=None, 
errors='strict'):\n\t\"\"\"Return a flattened representation of a cinje chunk stream.\n\t\n\tThis has several modes of operation. If no `file` argument is given, output will be returned as a string.\n\tThe type of string will be determined by the presence of an `encoding`; if one is given the returned value is a\n\tbinary string, otherwise the native unicode representation. If a `file` is present, chunks will be written\n\titeratively through repeated calls to `file.write()`, and the amount of data (characters or bytes) written\n\treturned. The type of string written will be determined by `encoding`, just as the return value is when not\n\twriting to a file-like object. The `errors` argument is passed through when encoding.\n\t\n\tWe can highly recommend using the various streaming IO containers available in the\n\t[`io`](https://docs.python.org/3/library/io.html) module, though\n\t[`tempfile`](https://docs.python.org/3/library/tempfile.html) classes are also quite useful.\n\t\"\"\"\n\t\n\tinput = stream(input, encoding, errors)\n\t\n\tif file is None: # Exit early if we're not writing to a file.\n\t\treturn b''.join(input) if encoding else ''.join(input)\n\t\n\tcounter = 0\n\t\n\tfor chunk in input:\n\t\tfile.write(chunk)\n\t\tcounter += len(chunk)\n\t\n\treturn counter"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ntranslate a template fragment into a callable function.", "response": "def fragment(string, name=\"anonymous\", **context):\n\t\"\"\"Translate a template fragment into a callable function.\n\t\n\t**Note:** Use of this function is discouraged everywhere except tests, as no caching is implemented at this time.\n\t\n\tOnly one function may be declared, either manually, or automatically. If automatic definition is chosen the\n\tresulting function takes no arguments. 
Additional keyword arguments are passed through as global variables.\n\t\"\"\"\n\t\n\tif isinstance(string, bytes):\n\t\tstring = string.decode('utf-8')\n\t\n\tif \": def\" in string or \":def\" in string:\n\t\tcode = string.encode('utf8').decode('cinje')\n\t\tname = None\n\telse:\n\t\tcode = \": def {name}\\n\\n{string}\".format(\n\t\t\t\tname = name,\n\t\t\t\tstring = string,\n\t\t\t).encode('utf8').decode('cinje')\n\t\n\tenviron = dict(context)\n\t\n\texec(code, environ)\n\t\n\tif name is None: # We need to dig it out of the `__tmpl__` list.\n\t\tif __debug__ and not environ.get('__tmpl__', None):\n\t\t\traise RuntimeError(\"Template fragment does not contain a function: \" + repr(environ.get('__tmpl__', None)) + \\\n\t\t\t\t\t\"\\n\\n\" + code)\n\t\t\n\t\treturn environ[environ['__tmpl__'][-1]] # Super secret sauce: you _can_ define more than one function...\n\t\n\treturn environ[name]"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\niterating over an iterable and track progress.", "response": "def iterate(obj):\n\t\"\"\"Loop over an iterable and track progress, including first and last state.\n\t\n\tOn each iteration yield an Iteration named tuple with the first and last flags, current element index, total\n\titerable length (if possible to acquire), and value, in that order.\n\t\n\t\tfor iteration in iterate(something):\n\t\t\titeration.value # Do something.\n\t\n\tYou can unpack these safely:\n\t\n\t\tfor first, last, index, total, value in iterate(something):\n\t\t\tpass\n\t\n\tIf you want to unpack the values you are iterating across, you can by wrapping the nested unpacking in parenthesis:\n\t\n\t\tfor first, last, index, total, (foo, bar, baz) in iterate(something):\n\t\t\tpass\n\t\n\tEven if the length of the iterable can't be reliably determined this function will still capture the \"last\" state\n\tof the final loop iteration. 
(Basically: this works with generators.)\n\t\n\tThis process is about 10x slower than simple enumeration on CPython 3.4, so only use it where you actually need to\n\ttrack state. Use `enumerate()` elsewhere.\n\t\"\"\"\n\t\n\tglobal next, Iteration\n\tnext = next\n\tIteration = Iteration\n\t\n\ttotal = len(obj) if isinstance(obj, Sized) else None\n\titerator = iter(obj)\n\tfirst = True\n\tlast = False\n\ti = 0\n\t\n\ttry:\n\t\tvalue = next(iterator)\n\texcept StopIteration:\n\t\treturn\n\t\n\twhile True:\n\t\ttry:\n\t\t\tnext_value = next(iterator)\n\t\texcept StopIteration:\n\t\t\tlast = True\n\t\t\n\t\tyield Iteration(first, last, i, total, value)\n\t\tif last: return\n\t\t\n\t\tvalue = next_value\n\t\ti += 1\n\t\tfirst = False"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef chunk(line, mapping={None: 'text', '${': 'escape', '#{': 'bless', '&{': 'args', '%{': 'format', '@{': 'json'}):\n\t\n\tskipping = 0 # How many closing parenthesis will we need to skip?\n\tstart = None # Starting position of current match.\n\tlast = 0\n\t\n\ti = 0\n\t\n\ttext = line.line\n\t\n\twhile i < len(text):\n\t\tif start is not None:\n\t\t\tif text[i] == '{':\n\t\t\t\tskipping += 1\n\t\t\t\n\t\t\telif text[i] == '}':\n\t\t\t\tif skipping:\n\t\t\t\t\tskipping -= 1\n\t\t\t\telse:\n\t\t\t\t\tyield line.clone(kind=mapping[text[start-2:start]], line=text[start:i])\n\t\t\t\t\tstart = None\n\t\t\t\t\tlast = i = i + 1\n\t\t\t\t\tcontinue\n\t\t\n\t\telif text[i:i+2] in mapping:\n\t\t\tif last is not None and last != i:\n\t\t\t\tyield line.clone(kind=mapping[None], line=text[last:i])\n\t\t\t\tlast = None\n\t\t\t\n\t\t\tstart = i = i + 2\n\t\t\tcontinue\n\t\t\n\t\ti += 1\n\t\n\tif last < len(text):\n\t\tyield line.clone(kind=mapping[None], line=text[last:])", "response": "Yields a block of text into plain text and code sections."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the 
documentation\ndef prepare(self):\n\t\tself.scope = 0\n\t\tself.mapping = deque([0])\n\t\tself._handler = [i() for i in sorted(self.handlers, key=lambda handler: handler.priority)]", "response": "Prepare the ordered list of transformers and reset context state to initial."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef stream(self):\n\t\t\n\t\tif 'init' not in self.flag:\n\t\t\troot = True\n\t\t\tself.prepare()\n\t\telse:\n\t\t\troot = False\n\t\t\n\t\t# Track which lines were generated in response to which lines of source code.\n\t\t# The end result is that there is one entry here for every line emitted, each integer representing the source\n\t\t# line number that triggered it. If any lines are returned with missing line numbers, they're inferred from\n\t\t# the last entry already in the list.\n\t\t# Fun fact: this list is backwards; we optimize by using a deque and appending to the left edge. this updates\n\t\t# the head of a linked list; the whole thing needs to be reversed to make sense.\n\t\tmapping = self.mapping\n\t\t\n\t\tfor line in self.input:\n\t\t\thandler = self.classify(line)\n\t\t\t\n\t\t\tif line.kind == 'code' and line.stripped == 'end': # Exit the current child scope.\n\t\t\t\treturn\n\t\t\t\n\t\t\tassert handler, \"Unable to identify handler for line; this should be impossible!\"\n\t\t\t\n\t\t\tself.input.push(line) # Put it back so it can be consumed by the handler.\n\t\t\t\n\t\t\tfor line in handler(self): # This re-indents the code to match, if missing explicit scope.\n\t\t\t\tif root: mapping.appendleft(line.number or mapping[0]) # Track source line number.\n\t\t\t\t\n\t\t\t\tif line.scope is None:\n\t\t\t\t\tline = line.clone(scope=self.scope)\n\t\t\t\t\n\t\t\t\tyield line", "response": "This method is used to generate a list of source code lines from the input lines."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nidentify the 
correct handler for a given line of input.", "response": "def classify(self, line):\n\t\t\"\"\"Identify the correct handler for a given line of input.\"\"\"\n\t\t\n\t\tfor handler in self._handler:\n\t\t\tif handler.match(self, line):\n\t\t\t\treturn handler"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef red(numbers):\n\t\n\tline = 0\n\tdeltas = []\n\t\n\tfor value in numbers:\n\t\tdeltas.append(value - line)\n\t\tline = value\n\t\n\treturn b64encode(compress(b''.join(chr(i).encode('latin1') for i in deltas))).decode('latin1')", "response": "Encode the deltas to reduce entropy."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nmatch code lines prefixed with a variety of keywords.", "response": "def match(self, context, line):\n\t\t\"\"\"Match code lines prefixed with a variety of keywords.\"\"\"\n\t\t\n\t\treturn line.kind == 'code' and line.partitioned[0] in self._both"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _get_field_by_name(model_class, field_name):\n field = model_class._meta.get_field(field_name)\n return (\n field, # field\n field.model, # model\n not field.auto_created or field.concrete, # direct\n field.many_to_many # m2m\n )", "response": "Returns the field object for the given field name."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn the remote_field or related_field if it exists.", "response": "def _get_remote_field(field):\n \"\"\"\n Compatible with Django 1.8~1.10 ('related' was renamed to 'remote_field')\n \"\"\"\n if hasattr(field, 'remote_field'):\n return field.remote_field\n elif hasattr(field, 'related'):\n return field.related\n else:\n return None"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _get_all_field_names(model):\n return list(set(chain.from_iterable(\n 
(field.name, field.attname) if hasattr(field, 'attname') else (field.name,)\n for field in model._meta.get_fields()\n # For complete backwards compatibility, you may want to exclude\n # GenericForeignKey from the results.\n if not (field.many_to_one and field.related_model is None)\n )))", "response": "Returns a list of all the field names for the given model."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngetting related fields from a model class.", "response": "def get_relation_fields_from_model(model_class):\n \"\"\" Get related fields (m2m, FK, and reverse FK) \"\"\"\n relation_fields = []\n all_fields_names = _get_all_field_names(model_class)\n for field_name in all_fields_names:\n field, model, direct, m2m = _get_field_by_name(model_class, field_name)\n # get_all_field_names will return the same field\n # both with and without _id. Ignore the duplicate.\n if field_name[-3:] == '_id' and field_name[:-3] in all_fields_names:\n continue\n if m2m or not direct or _get_remote_field(field):\n field.field_name_override = field_name\n relation_fields += [field]\n return relation_fields"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nget the direct fields from a model class.", "response": "def get_direct_fields_from_model(model_class):\n \"\"\" Direct, not m2m, not FK \"\"\"\n direct_fields = []\n all_fields_names = _get_all_field_names(model_class)\n for field_name in all_fields_names:\n field, model, direct, m2m = _get_field_by_name(model_class, field_name)\n if direct and not m2m and not _get_remote_field(field):\n direct_fields += [field]\n return direct_fields"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn a model class for a related model based on a path string.", "response": "def get_model_from_path_string(root_model, path):\n \"\"\" Return a model class for a related model\n root_model is the class of the initial model\n path is like foo__bar where bar is 
related to foo\n \"\"\"\n for path_section in path.split('__'):\n if path_section:\n try:\n field, model, direct, m2m = _get_field_by_name(root_model, path_section)\n except FieldDoesNotExist:\n return root_model\n if direct:\n if _get_remote_field(field):\n try:\n root_model = _get_remote_field(field).parent_model()\n except AttributeError:\n root_model = _get_remote_field(field).model\n else:\n if hasattr(field, 'related_model'):\n root_model = field.related_model\n else:\n root_model = field.model\n return root_model"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nget fields and meta data from a django model class.", "response": "def get_fields(model_class, field_name='', path=''):\n \"\"\" Get fields and meta data from a model\n\n :param model_class: A django model class\n :param field_name: The field name to get sub fields from\n :param path: path of our field in format\n field_name__second_field_name__ect__\n :returns: Returns fields and meta data about such fields\n fields: Django model fields\n properties: Any properties the model has\n path: Our new path\n :rtype: dict\n \"\"\"\n fields = get_direct_fields_from_model(model_class)\n app_label = model_class._meta.app_label\n\n if field_name != '':\n field, model, direct, m2m = _get_field_by_name(model_class, field_name)\n\n path += field_name\n path += '__'\n if direct: # Direct field\n try:\n new_model = _get_remote_field(field).parent_model\n except AttributeError:\n new_model = _get_remote_field(field).model\n else: # Indirect related field\n new_model = field.related_model\n\n fields = get_direct_fields_from_model(new_model)\n\n app_label = new_model._meta.app_label\n\n return {\n 'fields': fields,\n 'path': path,\n 'app_label': app_label,\n }"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_related_fields(model_class, field_name, path=\"\"):\n if field_name:\n field, model, direct, m2m = 
_get_field_by_name(model_class, field_name)\n if direct:\n # Direct field\n try:\n new_model = _get_remote_field(field).parent_model()\n except AttributeError:\n new_model = _get_remote_field(field).model\n else:\n # Indirect related field\n if hasattr(field, 'related_model'): # Django>=1.8\n new_model = field.related_model\n else:\n new_model = field.model()\n\n path += field_name\n path += '__'\n else:\n new_model = model_class\n\n new_fields = get_relation_fields_from_model(new_model)\n model_ct = ContentType.objects.get_for_model(new_model)\n\n return (new_fields, model_ct, path)", "response": "Get fields related to a given model."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef flush_template(context, declaration=None, reconstruct=True):\n\t\n\tif declaration is None:\n\t\tdeclaration = Line(0, '')\n\t\n\tif {'text', 'dirty'}.issubset(context.flag):\n\t\tyield declaration.clone(line='yield \"\".join(_buffer)')\n\t\t\n\t\tcontext.flag.remove('text') # This will force a new buffer to be constructed.\n\t\tcontext.flag.remove('dirty')\n\t\t\n\t\tif reconstruct:\n\t\t\tfor i in ensure_buffer(context):\n\t\t\t\tyield i\n\t\n\tif declaration.stripped == 'yield':\n\t\tyield declaration", "response": "Emit the code needed to flush the buffer."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _optimize(self, context, argspec):\n\t\t\n\t\targspec = argspec.strip()\n\t\toptimization = \", \".join(i + \"=\" + i for i in self.OPTIMIZE)\n\t\tsplit = None\n\t\tprefix = ''\n\t\tsuffix = ''\n\t\t\n\t\tif argspec:\n\t\t\tmatches = list(self.STARARGS.finditer(argspec))\n\t\t\t\n\t\t\tif matches:\n\t\t\t\tsplit = matches[-1].span()[1] # Inject after, a la \"*args>_<\", as we're positional-only arguments.\n\t\t\t\tif split != len(argspec):\n\t\t\t\t\tprefix = ', ' if argspec[split] == ',' else ''\n\t\t\t\t\tsuffix = '' if argspec[split] == ',' else ', ' 
\n\t\t\t\n\t\t\telse: # Ok, we can do this a different way\u2026\n\t\t\t\tmatches = list(self.STARSTARARGS.finditer(argspec))\n\t\t\t\tprefix = ', *, '\n\t\t\t\tsuffix = ', '\n\t\t\t\tif matches:\n\t\t\t\t\tsplit = matches[-1].span()[0] # Inject before, a la \">_<**kwargs\". We're positional-only arguments.\n\t\t\t\t\tif split == 0:\n\t\t\t\t\t\tprefix = '*, '\n\t\t\t\t\telse:\n\t\t\t\t\t\tsuffix = ''\n\t\t\t\telse:\n\t\t\t\t\tsplit = len(argspec)\n\t\t\t\t\tsuffix = ''\n\t\t\n\t\telse:\n\t\t\tprefix = '*, '\n\t\t\n\t\tif split is None:\n\t\t\treturn prefix + optimization + suffix\n\t\t\n\t\treturn argspec[:split] + prefix + optimization + suffix + argspec[split:]", "response": "Inject speedup shortcut bindings into the argument specification for a function."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _can_change_or_view(model, user):\n model_name = model._meta.model_name\n app_label = model._meta.app_label\n can_change = user.has_perm(app_label + '.change_' + model_name)\n can_view = user.has_perm(app_label + '.view_' + model_name)\n\n return can_change or can_view", "response": "Return True iff user has either change or view permission for model."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef report_to_list(queryset, display_fields, user):\n model_class = queryset.model\n objects = queryset\n message = \"\"\n\n if not _can_change_or_view(model_class, user):\n return [], 'Permission Denied'\n\n # Convert list of strings to DisplayField objects.\n new_display_fields = []\n\n for display_field in display_fields:\n field_list = display_field.split('__')\n field = field_list[-1]\n path = '__'.join(field_list[:-1])\n\n if path:\n path += '__' # Legacy format to append a __ here.\n\n df = DisplayField(path, field)\n new_display_fields.append(df)\n\n display_fields = new_display_fields\n\n # Display Values\n display_field_paths = 
[]\n\n for i, display_field in enumerate(display_fields):\n model = get_model_from_path_string(model_class, display_field.path)\n\n if not model or _can_change_or_view(model, user):\n display_field_key = display_field.path + display_field.field\n\n display_field_paths.append(display_field_key)\n\n else:\n message += 'Error: Permission denied on access to {0}.'.format(\n display_field.name\n )\n\n values_list = objects.values_list(*display_field_paths)\n values_and_properties_list = [list(row) for row in values_list]\n\n return values_and_properties_list, message", "response": "Convert a report to a list of objects."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef list_to_workbook(data, title='report', header=None, widths=None):\n wb = Workbook()\n title = re.sub(r'\\W+', '', title)[:30]\n\n if isinstance(data, dict):\n i = 0\n for sheet_name, sheet_data in data.items():\n if i > 0:\n wb.create_sheet()\n ws = wb.worksheets[i]\n build_sheet(\n sheet_data, ws, sheet_name=sheet_name, header=header)\n i += 1\n else:\n ws = wb.worksheets[0]\n build_sheet(data, ws, header=header, widths=widths)\n return wb", "response": "Create just a openpxl workbook from a list of data"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef build_xlsx_response(wb, title=\"report\"):\n title = generate_filename(title, '.xlsx')\n myfile = BytesIO()\n myfile.write(save_virtual_workbook(wb))\n response = HttpResponse(\n myfile.getvalue(),\n content_type='application/vnd.openxmlformats-officedocument.spreadsheetml.sheet')\n response['Content-Disposition'] = 'attachment; filename=%s' % title\n response['Content-Length'] = myfile.tell()\n return response", "response": "Take a workbook and return a xlsx file response"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef list_to_xlsx_response(data, title='report', header=None,\n 
widths=None):\n \"\"\" Make 2D list into a xlsx response for download\n data can be a 2d array or a dict of 2d arrays\n like {'sheet_1': [['A1', 'B1']]}\n \"\"\"\n wb = list_to_workbook(data, title, header, widths)\n return build_xlsx_response(wb, title=title)", "response": "Convert a list into a xlsx response"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nmaking a 2D list into a csv response for download data.", "response": "def list_to_csv_response(data, title='report', header=None, widths=None):\n \"\"\" Make 2D list into a csv response for download data.\n \"\"\"\n response = HttpResponse(content_type=\"text/csv; charset=UTF-8\")\n cw = csv.writer(response)\n for row in chain([header] if header else [], data):\n cw.writerow([force_text(s).encode(response.charset) for s in row])\n return response"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nwrapping a stream of lines in armour.", "response": "def wrap(scope, lines, format=BARE_FORMAT):\n\t\t\"\"\"Wrap a stream of lines in armour.\n\t\t\n\t\tTakes a stream of lines, for example, the following single line:\n\t\t\n\t\t\tLine(1, \"Lorem ipsum dolor.\")\n\t\t\n\t\tOr the following multiple lines:\n\t\t\n\t\t\tLine(1, \"Lorem ipsum\")\n\t\t\tLine(2, \"dolor\")\n\t\t\tLine(3, \"sit amet.\")\n\t\t\n\t\tProvides a generator of wrapped lines. 
For a single line, the following format is utilized:\n\t\t\n\t\t\t{format.single.prefix}{line.stripped}{format.single.suffix}\n\t\t\n\t\tIn the above multi-line example, the following format would be utilized:\n\t\t\n\t\t\t{format.multiple.prefix}{line[1].stripped}{format.intra.suffix}\n\t\t\t{format.intra.prefix}{line[2].stripped}{format.intra.suffix}\n\t\t\t{format.intra.prefix}{line[3].stripped}{format.multiple.suffix}\n\t\t\"\"\"\n\t\t\n\t\tfor line in iterate(lines):\n\t\t\tprefix = suffix = ''\n\t\t\t\n\t\t\tif line.first and line.last:\n\t\t\t\tprefix = format.single.prefix\n\t\t\t\tsuffix = format.single.suffix\n\t\t\telse:\n\t\t\t\tprefix = format.multiple.prefix if line.first else format.intra.prefix\n\t\t\t\tsuffix = format.multiple.suffix if line.last else format.intra.suffix\n\t\t\t\n\t\t\tyield line.value.clone(line=prefix + line.value.stripped + suffix, scope=scope + (0 if line.first else format.indent))"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef gather(input):\n\t\t\n\t\ttry:\n\t\t\tline = input.next()\n\t\texcept StopIteration:\n\t\t\treturn\n\t\t\n\t\tlead = True\n\t\tbuffer = []\n\t\t\n\t\t# Gather contiguous (uninterrupted) lines of template text.\n\t\twhile line.kind == 'text':\n\t\t\tvalue = line.line.rstrip().rstrip('\\\\') + ('' if line.continued else '\\n')\n\t\t\t\n\t\t\tif lead and line.stripped:\n\t\t\t\tyield Line(line.number, value)\n\t\t\t\tlead = False\n\t\t\t\n\t\t\telif not lead:\n\t\t\t\tif line.stripped:\n\t\t\t\t\tfor buf in buffer:\n\t\t\t\t\t\tyield buf\n\t\t\t\t\t\n\t\t\t\t\tbuffer = []\n\t\t\t\t\tyield Line(line.number, value)\n\t\t\t\t\n\t\t\t\telse:\n\t\t\t\t\tbuffer.append(Line(line.number, value))\n\t\t\t\n\t\t\ttry:\n\t\t\t\tline = input.next()\n\t\t\texcept StopIteration:\n\t\t\t\tline = None\n\t\t\t\tbreak\n\t\t\t\n\t\tif line:\n\t\t\tinput.push(line)", "response": "Yields all contiguous lines of text preserving line numbers."} {"SOURCE": 
"codesearchnet", "instruction": "Can you write a function in Python 3 where it\nprocesses the lines into static and dynamic parts.", "response": "def process(self, context, lines):\n\t\t\"\"\"Chop up individual lines into static and dynamic parts.\n\t\t\n\t\tApplies light optimizations, such as empty chunk removal, and calls out to other methods to process different\n\t\tchunk types.\n\t\t\n\t\tThe processor protocol here requires the method to accept values by yielding resulting lines while accepting\n\t\tsent chunks. Deferral of multiple chunks is possible by yielding None. The processor will be sent None to\n\t\tbe given a chance to yield a final line and perform any clean-up.\n\t\t\"\"\"\n\t\t\n\t\thandler = None\n\t\t\n\t\tfor line in lines:\n\t\t\tfor chunk in chunk_(line):\n\t\t\t\tif 'strip' in context.flag:\n\t\t\t\t\tchunk.line = chunk.stripped\n\t\t\t\t\n\t\t\t\tif not chunk.line: continue # Eliminate empty chunks, i.e. trailing text segments, ${}, etc.\n\t\t\t\t\n\t\t\t\tif not handler or handler[0] != chunk.kind:\n\t\t\t\t\tif handler:\n\t\t\t\t\t\ttry:\n\t\t\t\t\t\t\tresult = next(handler[1])\n\t\t\t\t\t\texcept StopIteration:\n\t\t\t\t\t\t\tresult = None\n\t\t\t\t\t\t\n\t\t\t\t\t\tif result: yield result\n\t\t\t\t\t\n\t\t\t\t\thandler = getattr(self, 'process_' + chunk.kind, self.process_generic)(chunk.kind, context)\n\t\t\t\t\thandler = (chunk.kind, handler)\n\t\t\t\t\t\n\t\t\t\t\ttry:\n\t\t\t\t\t\tnext(handler[1]) # We fast-forward to the first yield.\n\t\t\t\t\texcept StopIteration:\n\t\t\t\t\t\treturn\n\t\t\t\t\n\t\t\t\tresult = handler[1].send(chunk) # Send the handler the next contiguous chunk.\n\t\t\t\tif result: yield result\n\t\t\t\t\n\t\t\t\tif __debug__: # In development mode we skip the contiguous chunk compaction optimization.\n\t\t\t\t\thandler = (None, handler[1])\n\t\t\n\t\t# Clean up the final iteration.\n\t\tif handler:\n\t\t\ttry:\n\t\t\t\tresult = next(handler[1])\n\t\t\texcept StopIteration:\n\t\t\t\treturn\n\t\t\t\n\t\t\tif 
result: yield result"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef process_text(self, kind, context):\n\t\t\n\t\tresult = None\n\t\t\n\t\twhile True:\n\t\t\tchunk = yield None\n\t\t\t\n\t\t\tif chunk is None:\n\t\t\t\tif result:\n\t\t\t\t\tyield result.clone(line=repr(result.line))\n\t\t\t\t\n\t\t\t\treturn\n\t\t\t\n\t\t\tif not result:\n\t\t\t\tresult = chunk\n\t\t\t\tcontinue\n\t\t\t\n\t\t\tresult.line += chunk.line", "response": "Combine multiple lines of bare text and emit as a Python string literal."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef process_generic(self, kind, context):\n\t\t\n\t\tresult = None\n\t\t\n\t\twhile True:\n\t\t\tchunk = yield result\n\t\t\t\n\t\t\tif chunk is None:\n\t\t\t\treturn\n\t\t\t\n\t\t\tresult = chunk.clone(line='_' + kind + '(' + chunk.line + ')')", "response": "Transform otherwise unhandled kinds of chunks by calling an underscore prefixed function by that name."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef process_format(self, kind, context):\n\t\t\n\t\tresult = None\n\t\t\n\t\twhile True:\n\t\t\tchunk = yield result\n\t\t\t\n\t\t\tif chunk is None:\n\t\t\t\treturn\n\t\t\t\n\t\t\t# We need to split the expression defining the format string from the values to pass when formatting.\n\t\t\t# We want to allow any Python expression, so we'll need to piggyback on Python's own parser in order\n\t\t\t# to exploit the currently available syntax. Apologies, this is probably the scariest thing in here.\n\t\t\tsplit = -1\n\t\t\tline = chunk.line\n\t\t\t\n\t\t\ttry:\n\t\t\t\tast.parse(line)\n\t\t\texcept SyntaxError as e: # We expect this, and catch it. 
It'll have exploded after the first expr.\n\t\t\t\tsplit = line.rfind(' ', 0, e.offset)\n\t\t\t\n\t\t\tresult = chunk.clone(line='_bless(' + line[:split].rstrip() + ').format(' + line[split:].lstrip() + ')')", "response": "Handle transforming format string + arguments into Python code."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef find_player(self, username: str = None):\n if username is not None:\n return next((player for player in self.players if player.name == username), None)\n else:\n return None", "response": "Find the Player with the given username, or None if no match is found."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nfind the team with the given properties Returns the team whose attributes match the given properties or None if no match is found.", "response": "def find_team(self, color: str = None):\n \"\"\"Find the :class:`~.Team` with the given properties\n\n Returns the team whose attributes match the given properties, or\n ``None`` if no match is found.\n\n :param color: The :class:`~.Team.Color` of the Team\n \"\"\"\n if color is not None:\n if color is Team.Color.BLUE:\n return self.blue_team\n else:\n return self.orange_team\n else:\n return None"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _geocode(self, pq):\n #: List of desired output fields\n #: See `ESRI docs _` for details\n outFields = ('Loc_name',\n # 'Shape',\n 'Score',\n 'Match_addr', # based on address standards for the country\n # 'Address', # returned by default\n # 'Country' # 3-digit ISO 3166-1 code for a country. 
Example: Canada = \"CAN\"\n # 'Admin',\n # 'DepAdmin',\n # 'SubAdmin',\n # 'Locality',\n # 'Postal',\n # 'PostalExt',\n 'Addr_type',\n # 'Type',\n # 'Rank',\n 'AddNum',\n 'StPreDir',\n 'StPreType',\n 'StName',\n 'StType',\n 'StDir',\n # 'Side',\n # 'AddNumFrom',\n # 'AddNumTo',\n # 'AddBldg',\n 'City',\n 'Subregion',\n 'Region',\n 'Postal',\n 'Country',\n # 'Ymax',\n # 'Ymin',\n # 'Xmin',\n # 'Xmax',\n # 'X',\n # 'Y',\n 'DisplayX',\n 'DisplayY',\n # 'LangCode',\n # 'Status',\n )\n outFields = ','.join(outFields)\n query = dict(f='json', # default HTML. Other options are JSON and KMZ.\n outFields=outFields,\n # outSR=WKID, defaults to 4326\n maxLocations=20, # default 1; max is 20\n )\n\n # Postal-code only searches work in the single-line but not multipart geocoder\n # Remember that with the default postprocessors, postcode-level results will be eliminated\n if pq.query == pq.address == '' and pq.postal != '':\n pq.query = pq.postal\n\n if pq.query == '': # multipart\n query = dict(query,\n Address=pq.address, # commonly represents the house number and street name of a complete address\n Neighborhood=pq.neighborhood,\n City=pq.city,\n Subregion=pq.subregion,\n Region=pq.state,\n Postal=pq.postal,\n # PostalExt=\n CountryCode=pq.country, # full country name or ISO 3166-1 2- or 3-digit country code\n )\n else: # single-line\n magic_key = pq.key if hasattr(pq, 'key') else ''\n query = dict(query,\n singleLine=pq.query, # This can be a street address, place name, postal code, or POI.\n sourceCountry=pq.country, # full country name or ISO 3166-1 2- or 3-digit country code\n )\n if magic_key:\n query['magicKey'] = magic_key # This is a lookup key returned from the suggest endpoint.\n\n if pq.bounded and pq.viewbox is not None:\n query = dict(query, searchExtent=pq.viewbox.to_esri_wgs_json())\n\n if self._authenticated:\n if self._token is None or self._token_expiration < datetime.utcnow():\n expiration = timedelta(hours=2)\n self._token = self.get_token(expiration)\n 
self._token_expiration = datetime.utcnow() + expiration\n\n query['token'] = self._token\n\n if getattr(pq, 'for_storage', False):\n query['forStorage'] = 'true'\n\n endpoint = self._endpoint + '/findAddressCandidates'\n response_obj = self._get_json_obj(endpoint, query)\n returned_candidates = [] # this will be the list returned\n try:\n locations = response_obj['candidates']\n for location in locations:\n c = Candidate()\n attributes = location['attributes']\n c.match_addr = attributes['Match_addr']\n c.locator = attributes['Loc_name']\n c.locator_type = attributes['Addr_type']\n c.score = attributes['Score']\n c.x = attributes['DisplayX'] # represents the actual location of the address.\n c.y = attributes['DisplayY']\n c.wkid = response_obj['spatialReference']['wkid']\n c.geoservice = self.__class__.__name__\n\n # Optional address component fields.\n for in_key, out_key in [('City', 'match_city'), ('Subregion', 'match_subregion'),\n ('Region', 'match_region'), ('Postal', 'match_postal'),\n ('Country', 'match_country')]:\n setattr(c, out_key, attributes.get(in_key, ''))\n setattr(c, 'match_streetaddr', self._street_addr_from_response(attributes))\n returned_candidates.append(c)\n except KeyError:\n pass\n return returned_candidates", "response": "This method is used to geocode the given PlaceQuery object and return a list of location Candidates."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _street_addr_from_response(self, attributes):\n # The exact ordering of the address component fields that should be\n # used to reconstruct the full street address is not specified in the\n # Esri documentation, but the examples imply that it is this.\n ordered_fields = ['AddNum', 'StPreDir', 'StPreType', 'StName', 'StType', 'StDir']\n result = []\n for field in ordered_fields:\n result.append(attributes.get(field, ''))\n if any(result):\n return ' '.join([s for s in result if s]) # Filter out empty 
strings.\n else:\n return ''", "response": "Construct a street address from a geocoder response."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_token(self, expires=None):\n endpoint = 'https://www.arcgis.com/sharing/rest/oauth2/token/'\n query = {'client_id': self._client_id,\n 'client_secret': self._client_secret,\n 'grant_type': 'client_credentials'}\n\n if expires is not None:\n if not isinstance(expires, timedelta):\n raise Exception('If expires is provided it must be a timedelta instance')\n query['expiration'] = int(expires.total_seconds() / 60)\n\n response_obj = self._get_json_obj(endpoint, query, is_post=True)\n\n return response_obj['access_token']", "response": "Returns a token suitable for use with the Esri geocoding API."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreplaces truncated address range with number of entries in the PlaceQuery instance pq.", "response": "def process(self, pq):\n \"\"\"\n :arg PlaceQuery pq: PlaceQuery instance\n :returns: PlaceQuery instance with truncated address range / number\n \"\"\"\n pq.query = self.replace_range(pq.query)\n pq.address = self.replace_range(pq.address)\n return pq"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nprocessing the PlaceQuery instance with the given query and return the corresponding elements.", "response": "def process(self, pq):\n \"\"\"\n :arg PlaceQuery pq: PlaceQuery instance\n :returns: PlaceQuery instance with :py:attr:`query`\n converted to individual elements\n \"\"\"\n if pq.query != '':\n postcode = address = city = '' # define the vars we'll use\n\n # global regex postcode search, pop off last result\n postcode_matches = self.re_UK_postcode.findall(pq.query)\n if len(postcode_matches) > 0:\n postcode = postcode_matches[-1]\n\n query_parts = [part.strip() for part in pq.query.split(',')]\n\n if postcode is not '' and re.search(postcode, 
query_parts[0]):\n # if postcode is in the first part of query_parts, there are probably no commas\n # get just the part before the postcode\n part_before_postcode = query_parts[0].split(postcode)[0].strip()\n if self.re_blank.search(part_before_postcode) is None:\n address = part_before_postcode\n else:\n address = query_parts[0] # perhaps it isn't really a postcode (apt num, etc)\n else:\n address = query_parts[0] # no postcode to worry about\n\n for part in query_parts[1:]:\n part = part.strip()\n if postcode is not '' and re.search(postcode, part) is not None:\n part = part.replace(postcode, '').strip() # if postcode is in part, remove it\n\n if self.re_unit_numbered.search(part) is not None:\n # test to see if part is secondary address, like \"Ste 402\"\n address = self._comma_join(address, part)\n elif self.re_unit_not_numbered.search(part) is not None:\n # ! might cause problems if 'Lower' or 'Upper' is in the city name\n # test to see if part is secondary address, like \"Basement\"\n address = self._comma_join(address, part)\n else:\n city = self._comma_join(city, part) # it's probably a city (or \"City, County\")\n # set pq parts if they aren't already set (we don't want to overwrite explicit params)\n pq.postal = pq.postal or postcode\n pq.address = pq.address or address\n pq.city = pq.city or city\n\n return pq"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef process(self, pq):\n # Map country, but don't let map overwrite\n if pq.country not in self.acceptable_countries and pq.country in self.country_map:\n pq.country = self.country_map[pq.country]\n if pq.country != '' and \\\n self.acceptable_countries != [] and \\\n pq.country not in self.acceptable_countries:\n return False\n return pq", "response": "Process a PlaceQuery instance and return a new PlaceQuery instance with the new ones."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef 
process(self, pq):\n if pq.country.strip() == '':\n if self.default_country == '':\n return False\n else:\n pq.country = self.default_country\n return pq", "response": "Fill in the default country on the PlaceQuery when none is set, or return False if no default country is configured."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nconstruct a street address from a geocoder response.", "response": "def _street_addr_from_response(self, match):\n \"\"\"Construct a street address (no city, region, etc.) from a geocoder response.\n\n :param match: The match object returned by the geocoder.\n \"\"\"\n # Same caveat as above regarding the ordering of these fields; the\n # documentation is not explicit about the correct ordering for\n # reconstructing a full address, but implies that this is the ordering.\n ordered_fields = ['preQualifier', 'preDirection', 'preType', 'streetName',\n 'suffixType', 'suffixDirection', 'suffixQualifier']\n result = []\n # The address components only contain a from and to address, not the\n # actual number of the address that was matched, so we need to cheat a\n # bit and extract it from the full address string. 
This is likely to\n # miss some edge cases (hopefully only a few since this is a US-only\n # geocoder).\n addr_num_re = re.match(r'([0-9]+)', match['matchedAddress'])\n if not addr_num_re: # Give up\n return ''\n result.append(addr_num_re.group(0))\n for field in ordered_fields:\n result.append(match['addressComponents'].get(field, ''))\n if any(result):\n return ' '.join([s for s in result if s]) # Filter out empty strings.\n else:\n return ''"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning a shallow copy of the request and environment object.", "response": "def copy(self, *args, **kwargs):\n \"\"\"\n Copy the request and environment object.\n\n This only does a shallow copy, except of wsgi.input\n \"\"\"\n self.make_body_seekable()\n env = self.environ.copy()\n new_req = self.__class__(env, *args, **kwargs)\n new_req.copy_body()\n new_req.identity = self.identity\n return new_req"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef add_source(self, source):\n geocode_service = self._get_service_by_name(source[0])\n self._sources.append(geocode_service(**source[1]))", "response": "Add a geocoding service to this instance."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef remove_source(self, source):\n geocode_service = self._get_service_by_name(source[0])\n self._sources.remove(geocode_service(**source[1]))", "response": "Remove a geocoding service from this instance."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nsetting the sources for this geocoder", "response": "def set_sources(self, sources):\n \"\"\"\n Creates GeocodeServiceConfigs from each str source\n \"\"\"\n if len(sources) == 0:\n raise Exception('Must declare at least one source for a geocoder')\n self._sources = []\n for source in sources: # iterate through a list of sources\n self.add_source(source)"} {"SOURCE": "codesearchnet", 
"instruction": "Given the following Python 3 function, write the documentation\ndef geocode(self, pq, waterfall=None, force_stats_logging=False):\n\n waterfall = self.waterfall if waterfall is None else waterfall\n if isinstance(pq, str):\n pq = PlaceQuery(pq)\n processed_pq = copy.copy(pq)\n\n for p in self._preprocessors: # apply universal address preprocessing\n processed_pq = p.process(processed_pq)\n\n upstream_response_info_list = []\n processed_candidates = []\n for gs in self._sources: # iterate through each GeocodeService\n candidates, upstream_response_info = gs.geocode(processed_pq)\n if upstream_response_info is not None:\n upstream_response_info_list.append(upstream_response_info)\n processed_candidates += candidates # merge lists\n if waterfall is False and len(processed_candidates) > 0:\n break # if >= 1 good candidate, don't go to next geocoder\n\n for p in self._postprocessors: # apply univ. candidate postprocessing\n if processed_candidates == []:\n break # avoid post-processing empty list\n processed_candidates = p.process(processed_candidates)\n\n result = dict(candidates=processed_candidates,\n upstream_response_info=upstream_response_info_list)\n stats_dict = self.convert_geocode_result_to_nested_dicts(result)\n stats_dict = dict(stats_dict, original_pq=pq.__dict__)\n try:\n stats_logger.info(stats_dict)\n except Exception as exception:\n logger.error('Encountered exception while logging stats %s:\\n%s', stats_dict, exception)\n if force_stats_logging:\n raise exception\n return result", "response": "This method will geocode the universal address of the given PlaceQuery object and return a dictionary containing the results of the geocoding."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ntraverse a schema and perform the updates it describes to params.", "response": "def _apply_param_actions(self, params, schema_params):\n \"\"\"Traverse a schema and perform the updates it describes to 
params.\"\"\"\n\n for key, val in schema_params.items():\n if key not in params:\n continue\n\n if isinstance(val, dict):\n self._apply_param_actions(params[key], schema_params[key])\n elif isinstance(val, ResourceId):\n resource_id = val\n\n # Callers can provide ints as ids.\n # We normalize them to strings so that actions don't get confused.\n params[key] = str(params[key])\n\n resource_id.action(params, schema_params, key,\n resource_id, self.state, self.options)\n else:\n logger.error(\"Invalid value in schema params: %r. schema_params: %r and params: %r\",\n val, schema_params, params)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef validator(self):\n tstart = datetime.now()\n v = validator.Validator(self.vcf_file)\n std = v.run()\n if std == 0:\n self.is_validated = True\n\n tend = datetime.now()\n execution_time = tend - tstart", "response": "Validate the VCF file and set self.is_validated to True."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nruns a sanity check on the current VCF file.", "response": "def sanitycheck(self):\n \"\"\"\n Search and Remove variants with [0/0, ./.]\n Search and Replace chr from the beginning of the chromosomes to get positioning. 
\n Sort VCF by 1...22, X, Y, MT and nothing else\n #Discard other variants\n \"\"\"\n # logging.info('Starting Sanity Check...')\n tstart = datetime.now()\n\n # command = 'python %s/sanity_check.py -i %s' % (scripts_dir, self.vcf_file)\n # self.shell(command)\n\n sc = sanity_check.Sanity_check(self.vcf_file)\n std = sc.run()\n\n tend = datetime.now()\n execution_time = tend - tstart"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef snpeff(self):\n # calculate time thread took to finish\n # logging.info('Starting snpEff')\n tstart = datetime.now()\n\n se = snpeff.Snpeff(self.vcf_file)\n std = se.run()\n\n tend = datetime.now()\n execution_time = tend - tstart", "response": "run SNPEFF and return the output"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef vep(self):\n\n # calculate time thread took to finish\n # logging.info('Starting VEP ')\n tstart = datetime.now()\n\n vep_obj = vep.Vep(self.vcf_file)\n std = vep_obj.run()\n\n # command = 'python %s/vep.py -i sanity_check/checked.vcf' % (scripts_dir)\n # self.shell(command)\n\n # command = 'mv vep/vep.log log/'\n # os.system(command)\n\n # command = 'mv vep/vep.output.vcf_summary.html reports/vep_summary.html'\n # os.system(command)\n\n tend = datetime.now()\n execution_time = tend - tstart", "response": "start the VEP thread"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef decipher(self):\n\n # calculate time thread took to finish\n # logging.info('Starting HI score')\n tstart = datetime.now()\n\n decipher_obj = decipher.Decipher(self.vcf_file)\n decipher_obj.run()\n\n tend = datetime.now()\n execution_time = tend - tstart", "response": "Decipher the VCF file and return the HI score"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef hgmd(self):\n\n # calculate time 
thread took to finish\n # logging.info('Starting HI score')\n tstart = datetime.now()\n\n\n if os.path.isfile(settings.hgmd_file): \n\n hgmd_obj = hgmd.HGMD(self.vcf_file)\n hgmd_obj.run()\n\n tend = datetime.now()\n execution_time = tend - tstart", "response": "Run HGMD on the vcf file and return the score"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nruns SNP - Sift on the vcf file", "response": "def snpsift(self):\n \"\"\"SnpSift\"\"\"\n\n tstart = datetime.now()\n\n # command = 'python %s/snpsift.py -i sanity_check/checked.vcf 2>log/snpsift.log' % (scripts_dir)\n # self.shell(command)\n\n ss = snpsift.SnpSift(self.vcf_file)\n ss.run()\n\n tend = datetime.now()\n execution_time = tend - tstart"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nrunning DBNSFP on the vcf file.", "response": "def dbnsfp(self):\n \"\"\"dbnsfp\"\"\"\n\n tstart = datetime.now()\n\n # python ../scripts/annotate_vcfs.py -i mm13173_14.ug.target1.vcf -r 1000genomes dbsnp138 clinvar esp6500 -a ../data/1000genomes/ALL.wgs.integrated_phase1_v3.20101123.snps_indels_sv.sites.vcf.gz ../data/dbsnp138/00-All.vcf.gz ../data/dbsnp138/clinvar_00-latest.vcf.gz ../data/ESP6500/ESP6500.vcf.gz\n\n # command = 'python %s/cadd_dann.py -n %s -i sanity_check/checked.vcf 2>log/cadd_dann.log' % (scripts_dir, cadd_vest_cores)\n # self.shell(command)\n db = dbnsfp.Dbnsfp(self.vcf_file, settings.dbnsfp_cores)\n db.run()\n\n tend = datetime.now()\n execution_time = tend - tstart"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nmanage Morp application services", "response": "def cli(ctx, settings, app):\n \"\"\"Manage Morp application services\"\"\"\n if app is None and settings is None:\n print('Either --app or --settings must be supplied')\n ctx.ensure_object(dict)\n ctx.obj['app'] = app\n ctx.obj['settings'] = settings"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nchecks if 
all required settings exist and returns True if so returns False if not.", "response": "def _settings_checker(self, required_settings=None, accept_none=True):\n \"\"\"\n Take a list of required _settings dictionary keys\n and make sure they are set. This can be added to a custom\n constructor in a subclass and tested to see if it returns ``True``.\n\n :arg list required_settings: A list of required keys to look for.\n :arg bool accept_none: Boolean set to True if None is an acceptable\n setting. Set to False if None is not an\n acceptable setting.\n\n :returns: * bool ``True`` if all required settings exist, OR\n * str for the first key missing from _settings.\n \"\"\"\n if required_settings is not None:\n for keyname in required_settings:\n if keyname not in self._settings:\n return keyname\n if accept_none is False and self._settings[keyname] is None:\n return keyname\n return True"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns response or False in event of failure", "response": "def _get_response(self, endpoint, query, is_post=False):\n \"\"\"Returns response or False in event of failure\"\"\"\n timeout_secs = self._settings.get('timeout', 10)\n headers = self._settings.get('request_headers', {})\n try:\n if is_post:\n response = requests.post(\n endpoint, data=query, headers=headers, timeout=timeout_secs)\n else:\n response = requests.get(\n endpoint, params=query, headers=headers, timeout=timeout_secs)\n except requests.exceptions.Timeout as e:\n raise Exception(\n 'API request timed out after %s seconds.' % timeout_secs)\n except Exception as e:\n raise e\n\n if response.status_code != 200:\n raise Exception('Received status code %s from %s. 
Content is:\\n%s'\n % (response.status_code,\n self.get_service_name(),\n response.text))\n return response"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _get_json_obj(self, endpoint, query, is_post=False):\n response = self._get_response(endpoint, query, is_post=is_post)\n content = response.text\n try:\n return loads(content)\n except ValueError:\n raise Exception('Could not decode content to JSON:\\n%s'\n % self.__class__.__name__, content)", "response": "Returns a JSON object from the response."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nget a minidom Document.", "response": "def _get_xml_doc(self, endpoint, query, is_post=False):\n \"\"\"\n Return False if connection could not be made.\n Otherwise, return a minidom Document.\n \"\"\"\n response = self._get_response(endpoint, query, is_post=is_post)\n return minidom.parse(response.text)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef geocode(self, pq):\n processed_pq = copy.copy(pq)\n for p in self._preprocessors:\n processed_pq = p.process(processed_pq)\n if not processed_pq:\n return [], None\n upstream_response_info = UpstreamResponseInfo(self.get_service_name(),\n processed_pq)\n try:\n start = datetime.now()\n candidates = self._geocode(processed_pq)\n end = datetime.now()\n response_time_sec = (end - start).total_seconds()\n upstream_response_info.set_response_time(1000 * response_time_sec)\n except Exception:\n upstream_response_info.set_success(False)\n upstream_response_info.errors.append(format_exc())\n return [], upstream_response_info\n if len(candidates) > 0:\n for p in self._postprocessors: # apply universal candidate postprocessing\n candidates = p.process(candidates) # merge lists\n return candidates, upstream_response_info", "response": "This method will geocode the specified instance of a resource and return a list of Candidate objects and 
upstreamResponseInfo object if the API call was made."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nprocessing the list of Candidates.", "response": "def process(self, candidates):\n \"\"\"\n :arg list candidates: list of Candidate instances\n \"\"\"\n for c in candidates[:]:\n if c.locator not in self.good_locators:\n # TODO: search string, i.e. find \"EU_Street_Name\" in \"EU_Street_Name.GBR_StreetName\"\n candidates.remove(c)\n\n return candidates"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nprocessing the list of Candidate instances in order of locators and order of locators.", "response": "def process(self, unordered_candidates):\n \"\"\"\n :arg list candidates: list of Candidate instances\n\n \"\"\"\n ordered_candidates = []\n # make a new list of candidates in order of ordered_locators\n for locator in self.ordered_locators:\n for uc in unordered_candidates[:]:\n if uc.locator == locator:\n ordered_candidates.append(uc)\n unordered_candidates.remove(uc)\n # add all the candidates that are still left\n # (whose locator values are not in ordered_locators)\n # and return the new list\n return ordered_candidates + unordered_candidates"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nprocesses the list of Candidates by comparing the score of the Candidates with the minimum score.", "response": "def process(self, candidates):\n \"\"\"\n :arg list candidates: list of Candidates\n :returns: list of Candidates where score is at least min_score,\n if and only if one or more Candidates have at least min_score.\n Otherwise, returns original list of Candidates.\n \"\"\"\n high_score_candidates = [c for c in candidates if c.score >= self.min_score]\n if high_score_candidates != []:\n return high_score_candidates\n return candidates"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef process(self, candidates):\n 
return sorted(candidates, key=attrgetter('score'), reverse=self.reverse)", "response": "Sort a list of Candidates by score."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _get_distance(self, pnt1, pnt2):\n lat1, lon1 = pnt1\n lat2, lon2 = pnt2\n radius = 6356752 # meters (polar radius of the Earth)\n dlat = math.radians(lat2 - lat1)\n dlon = math.radians(lon2 - lon1)\n a = math.sin(dlat / 2) * math.sin(dlat / 2) + math.cos(math.radians(lat1)) \\\n * math.cos(math.radians(lat2)) * math.sin(dlon / 2) * math.sin(dlon / 2)\n c = 2 * math.atan2(math.sqrt(a), math.sqrt(1 - a))\n d = radius * c\n return d", "response": "Get distance in meters between two lat / long points"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _points_within_distance(self, pnt1, pnt2):\n if self._get_distance(pnt1, pnt2) <= self.distance:\n return True\n return False", "response": "Returns true if lat / lon points are within given distance in metres."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _make_candidate_from_result(self, result):\n candidate = Candidate()\n candidate.match_addr = result['formatted_address']\n candidate.x = result['geometry']['location']['lng']\n candidate.y = result['geometry']['location']['lat']\n candidate.locator = self.LOCATOR_MAPPING.get(result['geometry']['location_type'], '')\n candidate.entity_types = result['types']\n candidate.partial_match = result.get('partial_match', False)\n\n component_lookups = {\n 'city': {'type': 'locality', 'key': 'long_name'},\n 'subregion': {'type': 'administrative_area_level_2', 'key': 'long_name'},\n 'region': {'type': 'administrative_area_level_1', 'key': 'short_name'},\n 'postal': {'type': 'postal_code', 'key': 'long_name'},\n 'country': {'type': 'country', 'key': 'short_name'},\n }\n for (field, lookup) in component_lookups.items():\n setattr(candidate, 'match_' + field, 
self._get_component_from_result(result, lookup))\n candidate.geoservice = self.__class__.__name__\n return candidate", "response": "Make a Candidate from a Google geocoder results dictionary."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _get_component_from_result(self, result, lookup):\n for component in result['address_components']:\n if lookup['type'] in component['types']:\n return component.get(lookup['key'], '')\n return ''", "response": "Helper function to get a particular address component from a Google result dictionary."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nsearch based on specified rulez query", "response": "def search(self, query: Optional[dict] = None, offset: Optional[int] = None,\n limit: Optional[int] = None,\n order_by: Union[None, list, tuple] = None) -> Sequence['IModel']:\n \"\"\"return search result based on specified rulez query\"\"\"\n raise NotImplementedError"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn aggregation result based on specified rulez query and group", "response": "def aggregate(self, query: Optional[dict] = None,\n group: Optional[dict] = None,\n order_by: Union[None, list, tuple] = None) -> list:\n \"\"\"return aggregation result based on specified rulez query and group\"\"\"\n raise NotImplementedError"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreceiving and store a blob object", "response": "def put_blob(self, field: str, fileobj: BinaryIO,\n filename: str,\n mimetype: Optional[str] = None,\n size: Optional[int] = None,\n encoding: Optional[str] = None) -> IBlob:\n \"\"\"Receive and store blob object\"\"\"\n raise NotImplementedError"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef before_blobput(self, field: str, fileobj: BinaryIO,\n filename: str,\n mimetype: Optional[str] = None,\n size: 
Optional[int] = None,\n encoding: Optional[str] = None) -> None:\n \"\"\"Triggered before BLOB is stored\"\"\"", "response": "Triggered before BLOB is stored"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef search(self, query: Optional[dict] = None,\n offset: int = 0,\n limit: Optional[int] = None,\n order_by: Optional[tuple] = None,\n secure: bool = False) -> List[IModel]:\n \"\"\"Search for models\n\n Filtering is done through ``rulez`` based JSON/dict query, which\n defines boolean statements in JSON/dict structure.\n\n : param query: Rulez based query\n : param offset: Result offset\n : param limit: Maximum number of result\n : param order_by: Tuple of ``(field, order)`` where ``order`` is\n ``'asc'`` or ``'desc'``\n : param secure: When set to True, this will filter out any object which\n current logged in user is not allowed to see\n\n : todo: ``order_by`` need to allow multiple field ordering\n \"\"\"\n raise NotImplementedError", "response": "Search for models in the database."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngets aggregated results from rulez based query", "response": "def aggregate(self, query: Optional[dict] = None,\n group: Optional[dict] = None,\n order_by: Optional[tuple] = None) -> List[IModel]:\n \"\"\"Get aggregated results\n\n : param query: Rulez based query\n : param group: Grouping structure\n : param order_by: Tuple of ``(field, order)`` where ``order`` is\n ``'asc'`` or ``'desc'``\n\n : todo: Grouping structure need to be documented\n\n \"\"\"\n raise NotImplementedError"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nfetching the raw JSON game state data from EchoVR s session API.", "response": "def fetch_state_data(self):\n \"\"\"Fetch the raw JSON game state data from EchoVR's ``/session`` API\n\n This method could be useful if you want to retrieve some API data not\n directly exposed by this Python 
wrapper. Otherwise, you should probably\n use :meth:`fetch_state` instead.\n\n :returns:\n An object (probably a :class:`dict`) representing the raw JSON\n response returned by the EchoVR client.\n\n :raises requests.exceptions.ConnectionError:\n This exception will be thrown if the API is unavaible. This might\n indicate that the user is not currently in a match, or that they\n didn't launch Echo VR with the `-http` option.\n\n :raises json.decoder.JSONDecodeError:\n This exception will be thrown if the data returned by the API is not\n valid JSON. Likely indicates a bug in Echo VR or in this library.\n \"\"\"\n response = requests.get(self._gamestate_url)\n response_text = response.text.rstrip('\\0')\n return json.loads(response_text)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef to_bing_str(self):\n vb = self.convert_srs(4326)\n return '%s,%s,%s,%s' % (vb.bottom, vb.left, vb.top, vb.right)", "response": "Convert the viewbox object to a string that can be used by Bing\n as a query parameter."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef to_pelias_dict(self):\n vb = self.convert_srs(4326)\n return {\n 'boundary.rect.min_lat': vb.bottom,\n 'boundary.rect.min_lon': vb.left,\n 'boundary.rect.max_lat': vb.top,\n 'boundary.rect.max_lon': vb.right\n }", "response": "Convert the viewbox object to a dictionary that can be used by Pelias\n as a query parameter."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef to_google_str(self):\n vb = self.convert_srs(4326)\n return '%s,%s|%s,%s' % (vb.bottom, vb.left, vb.top, vb.right)", "response": "Convert the current object to Google s bounds format"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef to_mapquest_str(self):\n vb = self.convert_srs(4326)\n return '%s,%s,%s,%s' % (vb.left, 
vb.top, vb.right, vb.bottom)", "response": "Convert Viewbox object to a string that can be used by MapQuest API"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nconverting the Viewbox object to a JSON string that can be used by the ESRI World Geocoding Service as a parameter.", "response": "def to_esri_wgs_json(self):\n \"\"\"\n Convert Viewbox object to a JSON string that can be used\n by the ESRI World Geocoding Service as a parameter.\n \"\"\"\n try:\n return ('{ \"xmin\" : %s, '\n '\"ymin\" : %s, '\n '\"xmax\" : %s, '\n '\"ymax\" : %s, '\n '\"spatialReference\" : {\"wkid\" : %d} }'\n % (self.left,\n self.bottom,\n self.right,\n self.top,\n self.wkid))\n except ValueError:\n raise Exception('One or more values could not be cast to a number. '\n 'Four bounding points must be real numbers. '\n 'WKID must be an integer.')"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nvalidate the username and password and create the user.", "response": "def register(context, request, load):\n \"\"\"Validate the username and password and create the user.\"\"\"\n data = request.json\n res = validate(data, dataclass_to_jsl(\n RegistrationSchema).get_schema())\n if res:\n @request.after\n def set_error(response):\n response.status = 422\n return {\n 'status': 'error',\n 'field_errors': [{'message': res[x]} for x in res.keys()]\n }\n\n if data['password'] != data['password_validate']:\n @request.after\n def adjust_response(response):\n response.status = 422\n\n return {'status': 'error',\n 'message': 'Password confirmation does not match'}\n\n if 'state' not in data.keys() or not data['state']:\n data['state'] = request.app.settings.application.new_user_state\n del data['password_validate']\n obj = context.create(data)\n return {'status': 'success'}"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nauthenticate username and password and log in user.", "response": "def process_login(context, 
request):\n \"\"\"Authenticate username and password and log in user\"\"\"\n username = request.json['username']\n password = request.json['password']\n\n # Do the password validation.\n user = context.authenticate(username, password)\n if not user:\n @request.after\n def adjust_status(response):\n response.status = 401\n return {\n 'status': 'error',\n 'error': {\n 'code': 401,\n 'message': 'Invalid Username / Password'\n }\n }\n\n @request.after\n def remember(response):\n \"\"\"Remember the identity of the user logged in.\"\"\"\n # We pass the extra info to the identity object.\n response.headers.add('Access-Control-Expose-Headers', 'Authorization')\n identity = user.identity\n request.app.remember_identity(response, request, identity)\n\n return {\n 'status': 'success'\n }"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nlog out the user.", "response": "def logout(context, request):\n \"\"\"Log out the user.\"\"\"\n\n @request.after\n def forget(response):\n request.app.forget_identity(response, request)\n\n return {\n 'status': 'success'\n }"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef request(self, url, method, data=None):\n method = getattr(requests, method)\n response = method(\n url,\n headers=self.headers,\n data=self.process_json_for_cloudflare(data) if data else None\n )\n content = response.json()\n if response.status_code != 200:\n print(content)\n raise requests.HTTPError(content['message'])\n return content", "response": "The requester shortcut to submit a http request to CloutFlare"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_record(self, dns_type, name):\n try:\n record = [record for record in self.dns_records\n if record['type'] == dns_type and record['name'] == name][0]\n except IndexError:\n raise RecordNotFound(\n 'Cannot find the specified dns record in domain {domain}'\n 
.format(domain=name))\n return record", "response": "Get a dns record in the domain."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef create_record(self, dns_type, name, content, **kwargs):\n data = {\n 'type': dns_type,\n 'name': name,\n 'content': content\n }\n if kwargs.get('ttl') and kwargs['ttl'] != 1:\n data['ttl'] = kwargs['ttl']\n if kwargs.get('proxied') is True:\n data['proxied'] = True\n else:\n data['proxied'] = False\n content = self.request(\n self.api_url + self.zone['id'] + '/dns_records',\n 'post',\n data=data\n )\n print('DNS record successfully created')\n return content['result']", "response": "Create a DNS record."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nupdate a DNS record.", "response": "def update_record(self, dns_type, name, content, **kwargs):\n \"\"\"\n Update dns record\n :param dns_type:\n :param name:\n :param content:\n :param kwargs:\n :return:\n \"\"\"\n record = self.get_record(dns_type, name)\n data = {\n 'type': dns_type,\n 'name': name,\n 'content': content\n }\n if kwargs.get('ttl') and kwargs['ttl'] != 1:\n data['ttl'] = kwargs['ttl']\n if kwargs.get('proxied') is True:\n data['proxied'] = True\n else:\n data['proxied'] = False\n content = self.request(\n urllib.parse.urljoin(self.api_url, self.zone['id'] + '/dns_records/' + record['id']),\n 'put',\n data=data\n )\n print('DNS record successfully updated')\n return content['result']"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef create_or_update_record(self, dns_type, name, content, **kwargs):\n try:\n return self.update_record(dns_type, name, content, **kwargs)\n except RecordNotFound:\n return self.create_record(dns_type, name, content, **kwargs)", "response": "Create or update a dns record."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ndeletes a dns record.", "response": "def 
delete_record(self, dns_type, name):\n \"\"\"\n Delete a dns record\n :param dns_type:\n :param name:\n :return:\n \"\"\"\n record = self.get_record(dns_type, name)\n content = self.request(\n urllib.parse.urljoin(self.api_url, self.zone['id'] + '/dns_records/' + record['id']),\n 'delete'\n )\n return content['result']['id']"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ninitialize the object with the passed in arguments.", "response": "def _init_helper(self, vars_):\n \"\"\"Overwrite defaults (if they exist) with arguments passed to constructor\"\"\"\n for k in vars_:\n if k == 'kwargs':\n for kwarg in vars_[k]:\n setattr(self, kwarg, vars_[k][kwarg])\n elif k != 'self':\n setattr(self, k, vars_[k])"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef list_members(context, request):\n members = context.members()\n return {\n 'users': [{\n 'username': m.identifier,\n 'userid': m.userid,\n 'roles': context.get_member_roles(m.userid),\n 'links': [rellink(m, request)]\n } for m in members]\n }", "response": "Return the list of users in the group."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ngrant member roles in the group.", "response": "def grant_member(context, request):\n \"\"\"Grant member roles in the group.\"\"\"\n mapping = request.json['mapping']\n for entry in mapping:\n user = entry['user']\n roles = entry['roles']\n username = user.get('username', None)\n userid = user.get('userid', None)\n if userid:\n u = context.get_user_by_userid(userid)\n elif username:\n u = context.get_user_by_username(username)\n else:\n u = None\n if u is None:\n raise UnprocessableError(\n 'User %s does not exists' % (userid or username))\n for rolename in roles:\n context.grant_member_role(u.userid, rolename)\n return {'status': 'success'}"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nrevokes member 
roles in the group.", "response": "def revoke_member(context, request):\n \"\"\"Revoke member roles in the group.\"\"\"\n\n mapping = request.json['mapping']\n for entry in mapping:\n user = entry['user']\n roles = entry['roles']\n username = user.get('username', None)\n userid = user.get('userid', None)\n if userid:\n u = context.get_user_by_userid(userid)\n elif username:\n u = context.get_user_by_username(username)\n else:\n u = None\n if u is None:\n raise UnprocessableError(\n 'User %s does not exists' % (userid or username))\n for rolename in roles:\n context.revoke_member_role(u.userid, rolename)\n return {'status': 'success'}"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef call_MediaInfo(file_name, mediainfo_path=None):\n if mediainfo_path is None:\n mediainfo_path = find_MediaInfo()\n\n result = subprocess.check_output(\n [mediainfo_path, \"-f\", file_name], universal_newlines=True\n )\n D = collections.defaultdict(dict)\n for line in result.splitlines():\n line = line.split(':', 1)\n # Skip separators\n if line[0] == '':\n continue\n # Start a new section\n elif len(line) == 1:\n section = line[0].strip()\n # Record section key, value pairs\n else:\n k = line[0].strip()\n v = line[1].strip()\n if k not in D[section]:\n D[section][k] = v\n\n return D", "response": "Returns a dictionary of dictionaries with the output of\n MediaInfo - f file_name"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef check_video(file_name, mediainfo_path=None):\n D = call_MediaInfo(file_name, mediainfo_path)\n err_msg = \"Could not determine all video paramters\"\n\n if (\"General\" not in D) or (\"Video\" not in D):\n raise MediaInfoError(err_msg)\n\n general_keys = (\"Count of audio streams\", \"File size\", \"Overall bit rate\")\n if any(k not in D[\"General\"] for k in general_keys):\n raise MediaInfoError(err_msg)\n\n video_keys = (\n \"Format profile\",\n 
\"Commercial name\",\n \"Frame rate\",\n \"Height\",\n \"Scan type\",\n )\n if any(k not in D[\"Video\"] for k in video_keys):\n raise MediaInfoError(err_msg)\n\n return D", "response": "Checks the given file with MediaInfo and returns the video and audio codec\n information if all the required parameters were found."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nchecking that the given file with MediaInfo and returns the picture information if all the required parameters were found.", "response": "def check_picture(file_name, mediainfo_path=None):\n \"\"\"\n Scans the given file with MediaInfo and returns the picture\n information if all the required parameters were found.\n \"\"\"\n D = call_MediaInfo(file_name, mediainfo_path)\n # Check that the file analyzed was a valid movie\n if (\n (\"Image\" not in D) or\n (\"Width\" not in D[\"Image\"]) or\n (\"Height\" not in D[\"Image\"])\n ):\n raise MediaInfoError(\"Could not determine all picture paramters\")\n\n return D"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef md5_checksum(file_path, chunk_bytes=4194304):\n\n with open(file_path, \"rb\") as infile:\n checksum = hashlib.md5()\n while 1:\n data = infile.read(chunk_bytes)\n if not data:\n break\n checksum.update(data)\n\n return checksum.hexdigest()", "response": "Return the MD5 checksum of the file"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef trim(docstring):\n if not docstring:\n return ''\n # Convert tabs to spaces (following the normal Python rules)\n # and split into a list of lines:\n lines = docstring.expandtabs().splitlines()\n # Determine minimum indentation (first line doesn't count):\n indent = sys.maxsize\n for line in lines[1:]:\n stripped = line.lstrip()\n if stripped:\n indent = min(indent, len(line) - len(stripped))\n # Remove indentation (first line is special):\n trimmed = [lines[0].strip()]\n 
if indent < sys.maxsize:\n for line in lines[1:]:\n trimmed.append(line[indent:].rstrip())\n # Strip off trailing and leading blank lines:\n while trimmed and not trimmed[-1]:\n trimmed.pop()\n while trimmed and not trimmed[0]:\n trimmed.pop(0)\n # Return a single string:\n res = '\\n'.join(trimmed)\n if not PY3 and not isinstance(res, unicode):\n res = res.decode('utf8')\n return res", "response": "Trim the given docstring into a single string."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn the schema s children filtered by location.", "response": "def _get_attributes(schema, location):\n \"\"\"Return the schema's children, filtered by location.\"\"\"\n schema = DottedNameResolver(__name__).maybe_resolve(schema)\n\n def _filter(attr):\n if not hasattr(attr, \"location\"):\n valid_location = 'body' in location\n else:\n valid_location = attr.location in to_list(location)\n return valid_location\n\n return list(filter(_filter, schema().children))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nstart a multi - process server with the given context manager.", "response": "def start_server(worker_actxmgr: AsyncServerContextManager,\n main_ctxmgr: Optional[ServerMainContextManager] = None,\n extra_procs: Iterable[Callable] = tuple(),\n stop_signals: Iterable[signal.Signals] = (\n signal.SIGINT,\n signal.SIGTERM),\n num_workers: int = 1,\n use_threading: bool = False,\n args: Iterable[Any] = tuple()):\n '''\n Starts a multi-process server where each process has their own individual\n asyncio event loop. 
Their lifecycles are automantically managed -- if the\n main program receives one of the signals specified in ``stop_signals`` it\n will initiate the shutdown routines on each worker that stops the event\n loop gracefully.\n\n Args:\n worker_actxmgr: An asynchronous context manager that dicates the\n initialization and shutdown steps of each worker.\n It should accept the following three arguments:\n\n * **loop**: the asyncio event loop created and set\n by aiotools\n * **pidx**: the 0-based index of the worker\n (use this for per-worker logging)\n * **args**: a concatenated tuple of values yielded by\n **main_ctxmgr** and the user-defined arguments in\n **args**.\n\n aiotools automatically installs an interruption handler\n that calls ``loop.stop()`` to the given event loop,\n regardless of using either threading or\n multiprocessing.\n\n main_ctxmgr: An optional context manager that performs global\n initialization and shutdown steps of the whole program.\n It may yield one or more values to be passed to worker\n processes along with **args** passed to this function.\n There is no arguments passed to those functions since\n you can directly access ``sys.argv`` to parse command\n line arguments and/or read user configurations.\n\n extra_procs: An iterable of functions that consist of extra processes\n whose lifecycles are synchronized with other workers.\n\n You should write the shutdown steps of them differently\n depending on the value of **use_threading** argument.\n\n If it is ``False`` (default), they will get\n a :class:`BaseException` depending on the received stop signal\n number, either :class:`KeyboardInterrupt` (for SIGINT),\n :class:`SystemExit` (for SIGTERM), or\n :class:`InterruptedBySignal` (otherwise).\n\n If it is ``True``, they should check their **intr_event**\n argument periodically because there is no way to install\n signal handlers in Python threads (only the main thread\n can install signal handlers).\n\n It should accept the following 
three arguments:\n\n * **intr_event**: :class:`threading.Event` object that\n signals the interruption of the main thread (only\n available when **use_threading** is ``True``; otherwise\n it is set to ``None``)\n * **pidx**: same to **worker_actxmgr** argument\n * **args**: same to **worker_actxmgr** argument\n\n stop_signals: A list of UNIX signals that the main program to\n recognize as termination signals.\n\n num_workers: The number of children workers.\n\n use_threading: Use :mod:`threading` instead of :mod:`multiprocessing`.\n In this case, the GIL may become the performance\n bottleneck. Set this ``True`` only when you know what\n you are going to do. Note that this changes the way\n to write user-defined functions passed as **extra_procs**.\n\n args: The user-defined arguments passed to workers and extra\n processes. If **main_ctxmgr** yields one or more values,\n they are *prepended* to this user arguments when passed to\n workers and extra processes.\n\n Returns:\n None\n\n .. versionchanged:: 0.3.2\n\n The name of argument **num_proc** is changed to **num_workers**.\n Even if **num_workers** is 1, a child is created instead of\n doing everything at the main thread.\n\n .. versionadded:: 0.3.2\n\n The argument ``extra_procs`` and ``main_ctxmgr``.\n\n .. versionadded:: 0.4.0\n\n Now supports use of threading instead of multiprocessing via\n **use_threading** option.\n\n .. 
versionchanged:: 0.8.0\n\n Now **worker_actxmgr** must be an instance of\n :class:`AsyncServerContextManager` or async generators decorated by\n ``@aiotools.server``.\n\n Now **main_ctxmgr** must be an instance of :class:`ServerMainContextManager`\n or plain generators decorated by ``@aiotools.main``.\n\n The usage is same to asynchronous context managers, but optionally you can\n distinguish the received stop signal by retrieving the return value of the\n ``yield`` statement.\n\n In **extra_procs** in non-threaded mode, stop signals are converted into\n either one of :class:`KeyboardInterrupt`, :class:`SystemExit`, or\n :class:`InterruptedBySignal` exception.\n '''\n\n @_main_ctxmgr\n def noop_main_ctxmgr():\n yield\n\n def create_child(*args, **kwargs):\n if use_threading:\n return threading.Thread(*args, **kwargs)\n else:\n return mp.Process(*args, **kwargs)\n\n assert stop_signals\n\n if main_ctxmgr is None:\n main_ctxmgr = noop_main_ctxmgr\n\n children = []\n _children_ctxs.clear()\n _children_loops.clear()\n intr_event = threading.Event()\n sigblock_mask = frozenset(stop_signals)\n main_ctx = main_ctxmgr()\n\n # temporarily block signals and register signal handlers to mainloop\n signal.pthread_sigmask(signal.SIG_BLOCK, sigblock_mask)\n\n old_loop = asyncio.get_event_loop()\n mainloop = asyncio.new_event_loop()\n asyncio.set_event_loop(mainloop)\n\n # to make subprocess working in child threads\n try:\n asyncio.get_child_watcher()\n except NotImplementedError:\n pass # for uvloop\n\n # build a main-to-worker interrupt channel using signals\n def handle_stop_signal(signum):\n main_ctx.yield_return = signum\n if use_threading:\n with _children_lock:\n for c in _children_ctxs:\n c.yield_return = signum\n for l in _children_loops:\n l.call_soon_threadsafe(l.stop)\n intr_event.set()\n else:\n os.killpg(0, signum)\n mainloop.stop()\n\n for signum in stop_signals:\n mainloop.add_signal_handler(\n signum,\n functools.partial(handle_stop_signal, signum))\n\n # build a 
reliable worker-to-main interrupt channel using a pipe\n # (workers have no idea whether the main interrupt is enabled/disabled)\n def handle_child_interrupt(fd):\n child_idx = struct.unpack('i', os.read(fd, 4))[0] # noqa\n log.debug(f'Child {child_idx} has interrupted the main program.')\n # self-interrupt to initiate the main-to-worker interrupts\n signal.pthread_sigmask(signal.SIG_UNBLOCK, {signal.SIGINT})\n os.kill(0, signal.SIGINT)\n\n if use_threading:\n child_intr_pipe = os.pipe()\n rfd = child_intr_pipe[0]\n else:\n child_intr_pipe = mp.Pipe()\n rfd = child_intr_pipe[0].fileno()\n mainloop.add_reader(rfd, handle_child_interrupt, rfd)\n\n # start\n with main_ctx as main_args:\n\n # retrieve args generated by the user-defined main\n if main_args is None:\n main_args = tuple()\n if not isinstance(main_args, tuple):\n main_args = (main_args, )\n\n # spawn managed async workers\n for i in range(num_workers):\n p = create_child(target=_worker_main, daemon=True,\n args=(worker_actxmgr, use_threading, stop_signals,\n child_intr_pipe[1], i,\n main_args + args))\n p.start()\n children.append(p)\n\n # spawn extra workers\n for i, f in enumerate(extra_procs):\n p = create_child(target=_extra_main, daemon=True,\n args=(f, use_threading, stop_signals,\n intr_event, num_workers + i,\n main_args + args))\n p.start()\n children.append(p)\n try:\n # unblock the stop signals for user/external interrupts.\n signal.pthread_sigmask(signal.SIG_UNBLOCK, sigblock_mask)\n\n # run!\n mainloop.run_forever()\n\n # if interrupted, wait for workers to finish.\n for child in children:\n child.join()\n finally:\n mainloop.close()\n asyncio.set_event_loop(old_loop)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncheck if the lock is acquired", "response": "def is_acquired(self):\n \"\"\"Check if the lock is acquired\"\"\"\n values = self.client.get(self.key)\n return six.b(self._uuid) in values"} {"SOURCE": "codesearchnet", "instruction": "Can you 
tell what is the following Python 3 function doing\ndef client(host='localhost', port=2379,\n ca_cert=None, cert_key=None, cert_cert=None,\n timeout=None, protocol=\"http\"):\n \"\"\"Return an instance of an Etcd3Client.\"\"\"\n return Etcd3Client(host=host,\n port=port,\n ca_cert=ca_cert,\n cert_key=cert_key,\n cert_cert=cert_cert,\n timeout=timeout,\n protocol=protocol)", "response": "Return an instance of an Etcd3Client."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nconstructs a full url to the v3alpha API given a specific path", "response": "def get_url(self, path):\n \"\"\"Construct a full url to the v3alpha API given a specific path\n\n :param path:\n :return: url\n \"\"\"\n host = ('[' + self.host + ']' if (self.host.find(':') != -1)\n else self.host)\n base_url = self.protocol + '://' + host + ':' + str(self.port)\n return base_url + '/v3alpha/' + path.lstrip(\"/\")"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef lease(self, ttl=DEFAULT_TIMEOUT):\n result = self.post(self.get_url(\"/lease/grant\"),\n json={\"TTL\": ttl, \"ID\": 0})\n return Lease(int(result['ID']), client=self)", "response": "Create a Lease object given a timeout."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncreates a Lock object given an ID and a TTL", "response": "def lock(self, id=str(uuid.uuid4()), ttl=DEFAULT_TIMEOUT):\n \"\"\"Create a Lock object given an ID and timeout\n\n :param id: ID for the lock, creates a new uuid if not provided\n :param ttl: timeout\n :return: Lock object\n \"\"\"\n return Lock(id, ttl=ttl, client=self)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef create(self, key, value):\n base64_key = _encode(key)\n base64_value = _encode(value)\n txn = {\n 'compare': [{\n 'key': base64_key,\n 'result': 'EQUAL',\n 'target': 'CREATE',\n 'create_revision': 0\n }],\n 'success': 
[{\n 'request_put': {\n 'key': base64_key,\n 'value': base64_value,\n }\n }],\n 'failure': []\n }\n result = self.transaction(txn)\n if 'succeeded' in result:\n return result['succeeded']\n return False", "response": "Atomically create the given key only if the key doesn t exist."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef put(self, key, value, lease=None):\n payload = {\n \"key\": _encode(key),\n \"value\": _encode(value)\n }\n if lease:\n payload['lease'] = lease.id\n self.post(self.get_url(\"/kv/put\"), json=payload)\n return True", "response": "Put the given key into the key - value store."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get(self, key, metadata=False, sort_order=None,\n sort_target=None, **kwargs):\n \"\"\"Range gets the keys in the range from the key-value store.\n\n :param key:\n :param metadata:\n :param sort_order: 'ascend' or 'descend' or None\n :param sort_target: 'key' or 'version' or 'create' or 'mod' or 'value'\n :param kwargs:\n :return:\n \"\"\"\n try:\n order = 0\n if sort_order:\n order = _SORT_ORDER.index(sort_order)\n except ValueError:\n raise ValueError('sort_order must be one of \"ascend\" or \"descend\"')\n\n try:\n target = 0\n if sort_target:\n target = _SORT_TARGET.index(sort_target)\n except ValueError:\n raise ValueError('sort_target must be one of \"key\", '\n '\"version\", \"create\", \"mod\" or \"value\"')\n\n payload = {\n \"key\": _encode(key),\n \"sort_order\": order,\n \"sort_target\": target,\n }\n payload.update(kwargs)\n result = self.post(self.get_url(\"/kv/range\"),\n json=payload)\n if 'kvs' not in result:\n return []\n\n if metadata:\n def value_with_metadata(item):\n item['key'] = _decode(item['key'])\n value = _decode(item.pop('value'))\n return value, item\n\n return [value_with_metadata(item) for item in result['kvs']]\n else:\n return [_decode(item['value']) for item in 
result['kvs']]", "response": "Get the keys in the key - value store."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngetting all keys currently stored in etcd.", "response": "def get_all(self, sort_order=None, sort_target='key'):\n \"\"\"Get all keys currently stored in etcd.\n\n :returns: sequence of (value, metadata) tuples\n \"\"\"\n return self.get(\n key=_encode(b'\\0'),\n metadata=True,\n sort_order=sort_order,\n sort_target=sort_target,\n range_end=_encode(b'\\0'),\n )"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_prefix(self, key_prefix, sort_order=None, sort_target=None):\n return self.get(key_prefix,\n metadata=True,\n range_end=_encode(_increment_last_byte(key_prefix)),\n sort_order=sort_order,\n sort_target=sort_target)", "response": "Get a range of keys with a prefix."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ndeletes the given key - value store range.", "response": "def delete(self, key, **kwargs):\n \"\"\"DeleteRange deletes the given range from the key-value store.\n\n A delete request increments the revision of the key-value store and\n generates a delete event in the event history for every deleted key.\n\n :param key:\n :param kwargs:\n :return:\n \"\"\"\n payload = {\n \"key\": _encode(key),\n }\n payload.update(kwargs)\n\n result = self.post(self.get_url(\"/kv/deleterange\"),\n json=payload)\n if 'deleted' in result:\n return True\n return False"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef delete_prefix(self, key_prefix):\n return self.delete(\n key_prefix, range_end=_encode(_increment_last_byte(key_prefix)))", "response": "Delete a range of keys with a prefix in etcd."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef transaction(self, txn):\n return 
self.post(self.get_url(\"/kv/txn\"),\n data=json.dumps(txn))", "response": "Processes multiple requests in a single transaction."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef watch(self, key, **kwargs):\n event_queue = queue.Queue()\n\n def callback(event):\n event_queue.put(event)\n\n w = watch.Watcher(self, key, callback, **kwargs)\n canceled = threading.Event()\n\n def cancel():\n canceled.set()\n event_queue.put(None)\n w.stop()\n\n def iterator():\n while not canceled.is_set():\n event = event_queue.get()\n if event is None:\n canceled.set()\n if not canceled.is_set():\n yield event\n\n return iterator(), cancel", "response": "Watch a key for changes and return an iterator of events and cancel."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef watch_prefix(self, key_prefix, **kwargs):\n kwargs['range_end'] = \\\n _increment_last_byte(key_prefix)\n return self.watch(key_prefix, **kwargs)", "response": "The same as watch but watches a range of keys with a prefix."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef watch_once(self, key, timeout=None, **kwargs):\n event_queue = queue.Queue()\n\n def callback(event):\n event_queue.put(event)\n\n w = watch.Watcher(self, key, callback, **kwargs)\n try:\n return event_queue.get(timeout=timeout)\n except queue.Empty:\n raise exceptions.WatchTimedOut()\n finally:\n w.stop()", "response": "Watch a key and stops after the first event."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nwatch a range of keys with a prefix.", "response": "def watch_prefix_once(self, key_prefix, timeout=None, **kwargs):\n \"\"\"Watches a range of keys with a prefix, similar to watch_once\"\"\"\n kwargs['range_end'] = \\\n _increment_last_byte(key_prefix)\n return self.watch_once(key_prefix, timeout=timeout, 
**kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nschedules a timer with the given callable and the interval in seconds.", "response": "def create_timer(cb: Callable[[float], None], interval: float,\n delay_policy: TimerDelayPolicy = TimerDelayPolicy.DEFAULT,\n loop: Optional[asyncio.BaseEventLoop] = None) -> asyncio.Task:\n '''\n Schedule a timer with the given callable and the interval in seconds.\n The interval value is also passed to the callable.\n If the callable takes longer than the timer interval, all accumulated\n callable's tasks will be cancelled when the timer is cancelled.\n\n Args:\n cb: TODO - fill argument descriptions\n\n Returns:\n You can stop the timer by cancelling the returned task.\n '''\n if not loop:\n loop = asyncio.get_event_loop()\n\n async def _timer():\n fired_tasks = []\n try:\n while True:\n if delay_policy == TimerDelayPolicy.CANCEL:\n for t in fired_tasks:\n if not t.done():\n t.cancel()\n await t\n fired_tasks.clear()\n else:\n fired_tasks[:] = [t for t in fired_tasks if not t.done()]\n t = loop.create_task(cb(interval=interval))\n fired_tasks.append(t)\n await asyncio.sleep(interval)\n except asyncio.CancelledError:\n for t in fired_tasks:\n t.cancel()\n await asyncio.gather(*fired_tasks)\n\n return loop.create_task(_timer())"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef ttl(self):\n result = self.client.post(self.client.get_url(\"/kv/lease/timetolive\"),\n json={\"ID\": self.id})\n return int(result['TTL'])", "response": "LeaseTimeToLive retrieves lease information."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngetting the keys associated with this lease.", "response": "def keys(self):\n \"\"\"Get the keys associated with this lease.\n\n :return:\n \"\"\"\n result = self.client.post(self.client.get_url(\"/kv/lease/timetolive\"),\n json={\"ID\": self.id,\n \"keys\": True})\n 
keys = result['keys'] if 'keys' in result else []\n return [_decode(key) for key in keys]"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef apartial(coro, *args, **kwargs):\n '''\n Wraps a coroutine function with pre-defined arguments (including keyword\n arguments). It is an asynchronous version of :func:`functools.partial`.\n '''\n\n @functools.wraps(coro)\n async def wrapped(*cargs, **ckwargs):\n return await coro(*args, *cargs, **kwargs, **ckwargs)\n\n return wrapped", "response": "A coroutine function that returns a single object."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _get(self, version, method, url_or_urls, **kwargs):\n if not url_or_urls:\n raise ValueError('%s requires a url or a list of urls given: %s' %\n (method.title(), url_or_urls))\n\n # a flag we can use instead of calling isinstance() all the time\n multi = isinstance(url_or_urls, list)\n\n # throw an error early for too many URLs\n if multi and len(url_or_urls) > 20:\n raise ValueError('Embedly accepts only 20 urls at a time. Url '\n 'Count:%s' % len(url_or_urls))\n\n query = ''\n\n key = kwargs.get('key', self.key)\n\n # make sure that a key was set on the client or passed in\n if not key:\n raise ValueError('Requires a key. 
None given: %s' % key)\n\n kwargs['key'] = key\n\n query += urlencode(kwargs)\n\n if multi:\n query += '&urls=%s&' % ','.join([quote(url) for url in url_or_urls])\n else:\n query += '&url=%s' % quote(url_or_urls)\n\n url = 'http://api.embed.ly/%s/%s?%s' % (version, method, query)\n\n http = httplib2.Http(timeout=self.timeout)\n\n headers = {'User-Agent': self.user_agent,\n 'Connection': 'close'}\n\n resp, content = http.request(url, headers=headers)\n\n if resp['status'] == '200':\n data = json.loads(content.decode('utf-8'))\n\n if kwargs.get('raw', False):\n data['raw'] = content\n else:\n data = {'type': 'error',\n 'error': True,\n 'error_code': int(resp['status'])}\n\n if multi:\n return map(lambda url, data: Url(data, method, url),\n url_or_urls, data)\n\n return Url(data, method, url_or_urls)", "response": "_get makes the actual call to api.embed.ly"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef python_2_unicode_compatible(klass):\n if sys.version_info[0] == 2:\n if '__str__' not in klass.__dict__:\n raise ValueError(\"@python_2_unicode_compatible cannot be applied \"\n \"to %s because it doesn't define __str__().\" %\n klass.__name__)\n klass.__unicode__ = klass.__str__\n klass.__str__ = lambda self: self.__unicode__().encode('utf-8')\n return klass", "response": "A class decorator that can be used to make a class unicode compatible."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nencoding the given data using base-64", "response": "def _encode(data):\n \"\"\"Encode the given data using base-64\n\n :param data:\n :return: base-64 encoded string\n \"\"\"\n if not isinstance(data, bytes_types):\n data = six.b(str(data))\n return base64.b64encode(data).decode(\"utf-8\")"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _decode(data):\n if not isinstance(data, bytes_types):\n data = six.b(str(data))\n 
return base64.b64decode(data.decode(\"utf-8\"))", "response": "Decode the base-64 encoded string."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _increment_last_byte(data):\n if not isinstance(data, bytes_types):\n if isinstance(data, six.string_types):\n data = data.encode('utf-8')\n else:\n data = six.b(str(data))\n s = bytearray(data)\n s[-1] = s[-1] + 1\n return bytes(s)", "response": "Get the last byte in the array and increment it."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef metadata(self):\n self.metadata_path = os.path.join(self.path, 'metadata.rb')\n if not os.path.isfile(self.metadata_path):\n raise ValueError(\"Cookbook needs metadata.rb, %s\"\n % self.metadata_path)\n\n if not self._metadata:\n self._metadata = MetadataRb(open(self.metadata_path, 'r+'))\n\n return self._metadata", "response": "Return dict representation of this cookbook's metadata.rb."}
{"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef berksfile(self):\n self.berks_path = os.path.join(self.path, 'Berksfile')\n if not self._berksfile:\n if not os.path.isfile(self.berks_path):\n raise ValueError(\"No Berksfile found at %s\"\n % self.berks_path)\n self._berksfile = Berksfile(open(self.berks_path, 'r+'))\n return self._berksfile", "response": "Return this cookbook's Berksfile instance."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef from_dict(cls, dictionary):\n cookbooks = set()\n # put these in order\n groups = [cookbooks]\n\n for key, val in dictionary.items():\n if key == 'depends':\n cookbooks.update({cls.depends_statement(cbn, meta)\n for cbn, meta in val.items()})\n\n body = ''\n for group in groups:\n if group:\n body += '\\n'\n body += '\\n'.join(group)\n return cls.from_string(body)", "response": "Create a MetadataRb instance from a dictionary."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning a valid Ruby depends statement for the metadata.rb file.", "response": "def depends_statement(cookbook_name, metadata=None):\n \"\"\"Return a valid Ruby 'depends' statement for the metadata.rb file.\"\"\"\n line = \"depends '%s'\" % cookbook_name\n if metadata:\n if not isinstance(metadata, dict):\n raise TypeError(\"Stencil dependency options for %s \"\n \"should be a dict of options, not %s.\"\n % (cookbook_name, metadata))\n if metadata:\n line = \"%s '%s'\" % (line, \"', '\".join(metadata))\n return line"}
{"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nparse the metadata.rb into a dict.", "response": "def parse(self):\n \"\"\"Parse the metadata.rb into a dict.\"\"\"\n data = utils.ruby_lines(self.readlines())\n data = [tuple(j.strip() for j in line.split(None, 1))\n for line in data]\n depends = {}\n for line in data:\n if not len(line) == 2:\n continue\n key, value = line\n if key == 'depends':\n value = value.split(',')\n lib = utils.ruby_strip(value[0])\n detail = [utils.ruby_strip(j) for j in value[1:]]\n depends[lib] = detail\n datamap = {key: utils.ruby_strip(val) for key, val in data}\n if depends:\n datamap['depends'] = depends\n self.seek(0)\n return datamap"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nmerging two MetadataRbs into this one.", "response": "def merge(self, other):\n \"\"\"Add requirements from 'other' metadata.rb into this one.\"\"\"\n if not isinstance(other, MetadataRb):\n raise TypeError(\"MetadataRb to merge should be a 'MetadataRb' \"\n \"instance, not %s.\", type(other))\n current = self.to_dict()\n new = other.to_dict()\n\n # compare and gather cookbook dependencies\n meta_writelines = ['%s\\n' % self.depends_statement(cbn, meta)\n for cbn, meta in new.get('depends', {}).items()\n if cbn not in current.get('depends', {})]\n\n self.write_statements(meta_writelines)\n return self.to_dict()"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nparses this Berksfile into a dict.", "response": "def parse(self):\n \"\"\"Parse this Berksfile into a dict.\"\"\"\n self.flush()\n self.seek(0)\n data = utils.ruby_lines(self.readlines())\n data = [tuple(j.strip() for j in line.split(None, 1))\n for line in data]\n datamap = {}\n for line in data:\n if len(line) == 1:\n datamap[line[0]] = True\n elif len(line) == 2:\n key, value = line\n if key == 'cookbook':\n datamap.setdefault('cookbook', {})\n value = [utils.ruby_strip(v) for v in value.split(',')]\n lib, detail = value[0], value[1:]\n datamap['cookbook'].setdefault(lib, {})\n # if there 
is additional dependency data but its\n # not the ruby hash, its the version constraint\n if detail and not any(\"\".join(detail).startswith(o)\n for o in self.berks_options):\n constraint, detail = detail[0], detail[1:]\n datamap['cookbook'][lib]['constraint'] = constraint\n if detail:\n for deet in detail:\n opt, val = [\n utils.ruby_strip(i)\n for i in deet.split(':', 1)\n ]\n if not any(opt == o for o in self.berks_options):\n raise ValueError(\n \"Cookbook detail '%s' does not specify \"\n \"one of '%s'\" % (opt, self.berks_options))\n else:\n datamap['cookbook'][lib][opt.strip(':')] = (\n utils.ruby_strip(val))\n elif key == 'source':\n datamap.setdefault(key, [])\n datamap[key].append(utils.ruby_strip(value))\n elif key:\n datamap[key] = utils.ruby_strip(value)\n self.seek(0)\n return datamap"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef from_dict(cls, dictionary):\n cookbooks = set()\n sources = set()\n other = set()\n # put these in order\n groups = [sources, cookbooks, other]\n\n for key, val in dictionary.items():\n if key == 'cookbook':\n cookbooks.update({cls.cookbook_statement(cbn, meta)\n for cbn, meta in val.items()})\n elif key == 'source':\n sources.update({\"source '%s'\" % src for src in val})\n elif key == 'metadata':\n other.add('metadata')\n\n body = ''\n for group in groups:\n if group:\n body += '\\n'\n body += '\\n'.join(group)\n return cls.from_string(body)", "response": "Create a Berksfile instance from a dictionary."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns a valid Ruby cookbook statement for the Berksfile.", "response": "def cookbook_statement(cookbook_name, metadata=None):\n \"\"\"Return a valid Ruby 'cookbook' statement for the Berksfile.\"\"\"\n line = \"cookbook '%s'\" % cookbook_name\n if metadata:\n if not isinstance(metadata, dict):\n raise TypeError(\"Berksfile dependency hash for %s \"\n \"should be a dict of options, 
not %s.\"\n % (cookbook_name, metadata))\n # not like the others...\n if 'constraint' in metadata:\n line += \", '%s'\" % metadata.pop('constraint')\n for opt, spec in metadata.items():\n line += \", %s: '%s'\" % (opt, spec)\n return line"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef merge(self, other):\n if not isinstance(other, Berksfile):\n raise TypeError(\"Berksfile to merge should be a 'Berksfile' \"\n \"instance, not %s.\", type(other))\n current = self.to_dict()\n new = other.to_dict()\n\n # compare and gather cookbook dependencies\n berks_writelines = ['%s\\n' % self.cookbook_statement(cbn, meta)\n for cbn, meta in new.get('cookbook', {}).items()\n if cbn not in current.get('cookbook', {})]\n\n # compare and gather 'source' requirements\n berks_writelines.extend([\"source '%s'\\n\" % src for src\n in new.get('source', [])\n if src not in current.get('source', [])])\n\n self.write_statements(berks_writelines)\n return self.to_dict()", "response": "Merge two Berksfiles into this one."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning a Stencil instance given a stencil name.", "response": "def get_stencil(self, stencil_name, **options):\n \"\"\"Return a Stencil instance given a stencil name.\"\"\"\n if stencil_name not in self.manifest.get('stencils', {}):\n raise ValueError(\"Stencil '%s' not declared in StencilSet \"\n \"manifest.\" % stencil_name)\n stencil = copy.deepcopy(self.manifest)\n allstencils = stencil.pop('stencils')\n stencil.pop('default_stencil', None)\n override = allstencils[stencil_name]\n utils.deepupdate(stencil, override)\n\n # merge options, prefer **options (probably user-supplied)\n for opt, data in stencil.get('options', {}).items():\n if opt not in options:\n options[opt] = data.get('default', '')\n stencil['options'] = options\n\n name = stencil['options'].get('name')\n files = stencil['files'].copy()\n for fil, templ in 
files.items():\n if '' in fil:\n # check for the option b/c there are\n # cases in which it may not exist\n if not name:\n raise ValueError(\"Stencil does not include a name option\")\n\n stencil['files'].pop(fil)\n fil = fil.replace('', name)\n stencil['files'][fil] = templ\n\n return stencil"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _determine_selected_stencil(stencil_set, stencil_definition):\n if 'stencil' not in stencil_definition:\n selected_stencil_name = stencil_set.manifest.get('default_stencil')\n else:\n selected_stencil_name = stencil_definition.get('stencil')\n\n if not selected_stencil_name:\n raise ValueError(\"No stencil name, within stencil set %s, specified.\"\n % stencil_definition['name'])\n\n return selected_stencil_name", "response": "Determine the name of the selected stencil within a set."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nbuilding a map of variables for this generated cookbook and stencil.", "response": "def _build_template_map(cookbook, cookbook_name, stencil):\n \"\"\"Build a map of variables for this generated cookbook and stencil.\n\n Get template variables from stencil option values, adding the default ones\n like cookbook and cookbook year.\n \"\"\"\n template_map = {\n 'cookbook': {\"name\": cookbook_name},\n 'options': stencil['options']\n }\n\n # Cookbooks may not yet have metadata, so we pass an empty dict if so\n try:\n template_map['cookbook'] = cookbook.metadata.to_dict().copy()\n except ValueError:\n # ValueError may be returned if this cookbook does not yet have any\n # metadata.rb written by a stencil. 
This is okay, as everyone should\n # be using the base stencil first, and then we'll try to call\n # cookbook.metadata again in this method later down.\n pass\n\n template_map['cookbook']['year'] = datetime.datetime.now().year\n return template_map"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _render_binaries(files, written_files):\n for source_path, target_path in files.items():\n needdir = os.path.dirname(target_path)\n assert needdir, \"Target should have valid parent dir\"\n try:\n os.makedirs(needdir)\n except OSError as err:\n if err.errno != errno.EEXIST:\n raise\n\n if os.path.isfile(target_path):\n if target_path in written_files:\n LOG.warning(\"Previous stencil has already written file %s.\",\n target_path)\n else:\n print(\"Skipping existing file %s\" % target_path)\n LOG.info(\"Skipping existing file %s\", target_path)\n continue\n\n print(\"Writing rendered file %s\" % target_path)\n LOG.info(\"Writing rendered file %s\", target_path)\n\n shutil.copy(source_path, target_path)\n if os.path.exists(target_path):\n written_files.append(target_path)", "response": "Write binary contents from filetable into files."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nrenders all the templates in the filetable into a list of files.", "response": "def _render_templates(files, filetable, written_files, force, open_mode='w'):\n \"\"\"Write template contents from filetable into files.\n\n Using filetable for the rendered templates, and the list of files, render\n all the templates into actual files on disk, forcing to overwrite the file\n as appropriate, and using the given open mode for the file.\n \"\"\"\n for tpl_path, content in filetable:\n target_path = files[tpl_path]\n needdir = os.path.dirname(target_path)\n assert needdir, \"Target should have valid parent dir\"\n try:\n os.makedirs(needdir)\n except OSError as err:\n if err.errno != errno.EEXIST:\n 
raise\n\n if os.path.isfile(target_path):\n if force:\n LOG.warning(\"Forcing overwrite of existing file %s.\",\n target_path)\n elif target_path in written_files:\n LOG.warning(\"Previous stencil has already written file %s.\",\n target_path)\n else:\n print(\"Skipping existing file %s\" % target_path)\n LOG.info(\"Skipping existing file %s\", target_path)\n continue\n\n with open(target_path, open_mode) as newfile:\n print(\"Writing rendered file %s\" % target_path)\n LOG.info(\"Writing rendered file %s\", target_path)\n newfile.write(content)\n written_files.append(target_path)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef build_cookbook(build_config, templatepack_path,\n cookbooks_home, force=False):\n \"\"\"Build a cookbook from a fastfood.json file.\n\n Can build on an existing cookbook, otherwise this will\n create a new cookbook for you based on your templatepack.\n \"\"\"\n with open(build_config) as cfg:\n cfg = json.load(cfg)\n\n cookbook_name = cfg['name']\n template_pack = pack.TemplatePack(templatepack_path)\n\n written_files = []\n cookbook = create_new_cookbook(cookbook_name, cookbooks_home)\n\n for stencil_definition in cfg['stencils']:\n\n selected_stencil_set_name = stencil_definition.get('stencil_set')\n stencil_set = template_pack.load_stencil_set(selected_stencil_set_name)\n\n selected_stencil_name = _determine_selected_stencil(\n stencil_set,\n stencil_definition\n )\n\n stencil = stencil_set.get_stencil(selected_stencil_name,\n **stencil_definition)\n\n updated_cookbook = process_stencil(\n cookbook,\n cookbook_name, # in case no metadata.rb yet\n template_pack,\n force,\n stencil_set,\n stencil,\n written_files\n )\n\n return written_files, updated_cookbook", "response": "Build a new cookbook from a fastfood.json file."}
{"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef process_stencil(cookbook, cookbook_name, template_pack,\n force_argument, stencil_set, stencil, written_files):\n \"\"\"Process the stencil requested, writing any missing files as needed.\n\n The stencil named 'stencilset_name' should be one of\n templatepack's stencils.\n \"\"\"\n # force can be passed on the command line or forced in a stencil's options\n force = force_argument or stencil['options'].get('force', False)\n\n stencil['files'] = stencil.get('files') or {}\n files = {\n # files.keys() are template paths, files.values() are target paths\n # {path to template: rendered target path, ... }\n os.path.join(stencil_set.path, tpl): os.path.join(cookbook.path, tgt)\n for tgt, tpl in stencil['files'].items()\n }\n\n stencil['partials'] = stencil.get('partials') or {}\n partials = {\n # files.keys() are template paths, files.values() are target paths\n # {path to template: rendered target path, ... }\n os.path.join(stencil_set.path, tpl): os.path.join(cookbook.path, tgt)\n for tgt, tpl in stencil['partials'].items()\n }\n\n stencil['binaries'] = stencil.get('binaries') or {}\n binaries = {\n # files.keys() are binary paths, files.values() are target paths\n # {path to binary: rendered target path, ... 
}\n os.path.join(stencil_set.path, tpl): os.path.join(cookbook.path, tgt)\n for tgt, tpl in stencil['binaries'].items()\n }\n\n template_map = _build_template_map(cookbook, cookbook_name, stencil)\n\n filetable = templating.render_templates(*files.keys(), **template_map)\n _render_templates(files, filetable, written_files, force)\n\n parttable = templating.render_templates(*partials.keys(), **template_map)\n _render_templates(partials, parttable, written_files, force, open_mode='a')\n\n # no templating needed for binaries, just pass off to the copy method\n _render_binaries(binaries, written_files)\n\n # merge metadata.rb dependencies\n stencil_metadata_deps = {'depends': stencil.get('dependencies', {})}\n stencil_metadata = book.MetadataRb.from_dict(stencil_metadata_deps)\n cookbook.metadata.merge(stencil_metadata)\n\n # merge Berksfile dependencies\n stencil_berks_deps = {'cookbook': stencil.get('berks_dependencies', {})}\n stencil_berks = book.Berksfile.from_dict(stencil_berks_deps)\n cookbook.berksfile.merge(stencil_berks)\n\n return cookbook", "response": "Processes the stencil requested writing any missing files as needed."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncreate a new cookbook.", "response": "def create_new_cookbook(cookbook_name, cookbooks_home):\n \"\"\"Create a new cookbook.\n\n :param cookbook_name: Name of the new cookbook.\n :param cookbooks_home: Target dir for new cookbook.\n \"\"\"\n cookbooks_home = utils.normalize_path(cookbooks_home)\n\n if not os.path.exists(cookbooks_home):\n raise ValueError(\"Target cookbook dir %s does not exist.\"\n % os.path.relpath(cookbooks_home))\n\n target_dir = os.path.join(cookbooks_home, cookbook_name)\n LOG.debug(\"Creating dir -> %s\", target_dir)\n try:\n os.makedirs(target_dir)\n except OSError as err:\n if err.errno != errno.EEXIST:\n raise\n else:\n LOG.info(\"Skipping existing directory %s\", target_dir)\n\n cookbook_path = os.path.join(cookbooks_home, 
cookbook_name)\n cookbook = book.CookBook(cookbook_path)\n\n return cookbook"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ntidy up lines from a file honor comment lines.", "response": "def ruby_lines(text):\n \"\"\"Tidy up lines from a file, honor # comments.\n\n Does not honor ruby block comments (yet).\n \"\"\"\n if isinstance(text, basestring):\n text = text.splitlines()\n elif not isinstance(text, list):\n raise TypeError(\"text should be a list or a string, not %s\"\n % type(text))\n return [l.strip() for l in text if l.strip() and not\n l.strip().startswith('#')]"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef deepupdate(original, update, levels=5):\n if not isinstance(update, dict):\n update = dict(update)\n if not levels > 0:\n original.update(update)\n else:\n for key, val in update.items():\n if isinstance(original.get(key), dict):\n # might need a force=True to override this\n if not isinstance(val, dict):\n raise TypeError(\"Trying to update dict %s with \"\n \"non-dict %s\" % (original[key], val))\n deepupdate(original[key], val, levels=levels-1)\n else:\n original.update({key: val})", "response": "Update original dict with items from iterable update."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef write_statements(self, statements):\n self.seek(0)\n original_content_lines = self.readlines()\n new_content_lines = copy.copy(original_content_lines)\n # ignore blanks and sort statements to be written\n statements = sorted([stmnt for stmnt in statements if stmnt])\n\n # find all the insert points for each statement\n uniqs = {stmnt.split(None, 1)[0] for stmnt in statements}\n insert_locations = {}\n for line in reversed(original_content_lines):\n if not uniqs:\n break\n if not line:\n continue\n for word in uniqs.copy():\n if line.startswith(word):\n index = original_content_lines.index(line) + 1\n 
insert_locations[word] = index\n uniqs.remove(word)\n\n for statement in statements:\n print(\"writing to %s : %s\" % (self, statement))\n startswith = statement.split(None, 1)[0]\n # insert new statement with similar OR at the end of the file\n new_content_lines.insert(\n insert_locations.get(startswith, len(new_content_lines)),\n statement)\n\n if new_content_lines != original_content_lines:\n self.seek(0)\n self.writelines(new_content_lines)\n self.flush()", "response": "Insert the statements into the file neatly."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _fastfood_build(args):\n written_files, cookbook = food.build_cookbook(\n args.config_file, args.template_pack,\n args.cookbooks, args.force)\n\n if len(written_files) > 0:\n print(\"%s: %s files written\" % (cookbook,\n len(written_files)))\n else:\n print(\"%s up to date\" % cookbook)\n\n return written_files, cookbook", "response": "Run on fastfood build."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nrunning on fastfood list.", "response": "def _fastfood_list(args):\n \"\"\"Run on `fastfood list`.\"\"\"\n template_pack = pack.TemplatePack(args.template_pack)\n if args.stencil_set:\n stencil_set = template_pack.load_stencil_set(args.stencil_set)\n print(\"Available Stencils for %s:\" % args.stencil_set)\n for stencil in stencil_set.stencils:\n print(\" %s\" % stencil)\n else:\n print('Available Stencil Sets:')\n for name, vals in template_pack.stencil_sets.items():\n print(\" %12s - %12s\" % (name, vals['help']))"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nruns on fastfood show.", "response": "def _fastfood_show(args):\n \"\"\"Run on `fastfood show`.\"\"\"\n template_pack = pack.TemplatePack(args.template_pack)\n if args.stencil_set:\n stencil_set = template_pack.load_stencil_set(args.stencil_set)\n print(\"Stencil Set %s:\" % args.stencil_set)\n print(' 
Stencils:')\n for stencil in stencil_set.stencils:\n print(\" %s\" % stencil)\n print(' Options:')\n for opt, vals in stencil_set.manifest['options'].items():\n print(\" %s - %s\" % (opt, vals['help']))"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _release_info():\n pypi_url = 'http://pypi.python.org/pypi/fastfood/json'\n headers = {\n 'Accept': 'application/json',\n }\n request = urllib.Request(pypi_url, headers=headers)\n response = urllib.urlopen(request).read().decode('utf_8')\n data = json.loads(response)\n return data", "response": "Check latest fastfood release info from PyPI."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef getenv(option_name, default=None):\n env = \"%s_%s\" % (NAMESPACE.upper(), option_name.upper())\n return os.environ.get(env, default)", "response": "Return the option from the environment in the FASTFOOD namespace."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef main(argv=None):\n # pylint: disable=missing-docstring\n import argparse\n import traceback\n\n class HelpfulParser(argparse.ArgumentParser):\n def error(self, message, print_help=False):\n if 'too few arguments' in message:\n sys.argv.insert(0, os.path.basename(sys.argv.pop(0)))\n message = (\"%s. 
Try getting help with `%s -h`\"\n % (message, \" \".join(sys.argv)))\n if print_help:\n self.print_help()\n sys.stderr.write('\\nerror: %s\\n' % message)\n sys.exit(2)\n\n parser = HelpfulParser(\n prog=NAMESPACE,\n description=__doc__.splitlines()[0],\n epilog=\"\\n\".join(__doc__.splitlines()[1:]),\n formatter_class=argparse.ArgumentDefaultsHelpFormatter,\n )\n\n version_string = 'version %s' % fastfood.__version__\n parser.description = '%s ( %s )' % (parser.description, version_string)\n\n # version_group = subparsers.add_group()\n version_group = parser.add_argument_group(\n title='version info',\n description='Use these arguments to get version info.')\n\n vers_arg = version_group.add_argument(\n '-V', '--version', action='version',\n help=\"Return the current fastfood version.\",\n version='%s %s' % (parser.prog, version_string))\n\n class LatestVersionAction(vers_arg.__class__):\n def __call__(self, prsr, *args, **kw):\n info = _release_info()\n vers = info['info']['version']\n release = info['releases'][vers][0]\n uploaded = datetime.strptime(\n release['upload_time'], '%Y-%m-%dT%H:%M:%S')\n sym = EXCLAIM if vers != fastfood.__version__ else CHECK\n message = u\"{} fastfood version {} uploaded {}\\n\"\n message = message.format(sym, vers, uploaded.ctime())\n prsr.exit(message=message)\n\n version_group.add_argument(\n '-L', '--latest', action=LatestVersionAction,\n help=\"Lookup the latest relase from PyPI.\")\n\n verbose = parser.add_mutually_exclusive_group()\n verbose.add_argument('-v', dest='loglevel', action='store_const',\n const=logging.INFO,\n help=\"Set log-level to INFO.\")\n verbose.add_argument('-vv', dest='loglevel', action='store_const',\n const=logging.DEBUG,\n help=\"Set log-level to DEBUG.\")\n parser.set_defaults(loglevel=logging.WARNING)\n home = os.getenv('HOME') or os.path.expanduser('~') or os.getcwd()\n parser.add_argument(\n '--template-pack', help='template pack location',\n default=getenv(\n 'template_pack', os.path.join(home, 
'.fastfood')))\n parser.add_argument(\n '--cookbooks', help='cookbooks directory',\n default=getenv(\n 'cookbooks', os.path.join(home, 'cookbooks')))\n\n subparsers = parser.add_subparsers(\n dest='_subparsers', title='fastfood commands',\n description='operations...',\n help='...')\n\n #\n # `fastfood list`\n #\n list_parser = subparsers.add_parser(\n 'list', help='List available stencils',\n formatter_class=argparse.ArgumentDefaultsHelpFormatter)\n list_parser.add_argument('stencil_set', nargs='?',\n help=\"Stencil set to list stencils from\")\n list_parser.set_defaults(func=_fastfood_list)\n\n #\n # `fastfood show `\n #\n show_parser = subparsers.add_parser(\n 'show', help='Show stencil set information',\n formatter_class=argparse.ArgumentDefaultsHelpFormatter)\n show_parser.add_argument('stencil_set',\n help=\"Stencil set to list stencils from\")\n show_parser.set_defaults(func=_fastfood_show)\n\n #\n # `fastfood build`\n #\n build_parser = subparsers.add_parser(\n 'build', help='Create or update a cookbook using a config',\n formatter_class=argparse.ArgumentDefaultsHelpFormatter)\n build_parser.add_argument('config_file',\n help=\"JSON config file\")\n build_parser.add_argument('--force', '-f', action='store_true',\n default=False, help=\"Overwrite existing files.\")\n\n build_parser.set_defaults(func=_fastfood_build)\n\n setattr(_LOCAL, 'argparser', parser)\n if not argv:\n argv = None\n args = parser.parse_args(args=argv)\n if hasattr(args, 'options'):\n args.options = {k: v for k, v in args.options}\n\n logging.basicConfig(level=args.loglevel)\n\n try:\n args.func(args)\n except exc.FastfoodError as err:\n title = exc.get_friendly_title(err)\n print('%s %s: %s' % (RED_X, title, str(err)),\n file=sys.stderr)\n sys.stderr.flush()\n sys.exit(1)\n except Exception as err:\n print('%s Unexpected error. 
Please report this traceback.'\n % INTERROBANG,\n file=sys.stderr)\n traceback.print_exc()\n # todo: traceback in -v or -vv mode?\n sys.stderr.flush()\n sys.exit(1)\n except KeyboardInterrupt:\n sys.exit(\"\\nStahp\")", "response": "This is the fastfood command line interface."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nturns class instance or name into an eyeball-friendly title.", "response": "def get_friendly_title(err):\n \"\"\"Turn class, instance, or name (str) into an eyeball-friendly title.\n\n E.g. FastfoodStencilSetNotListed --> 'Stencil Set Not Listed'\n \"\"\"\n if isinstance(err, basestring):\n string = err\n else:\n try:\n string = err.__name__\n except AttributeError:\n string = err.__class__.__name__\n split = _SPLITCASE_RE.findall(string)\n if not split:\n split.append(string)\n if len(split) > 1 and split[0] == 'Fastfood':\n split.pop(0)\n\n return \" \".join(split)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _validate(self, key, cls=None):\n if key not in self.manifest:\n raise ValueError(\"Manifest %s requires '%s'.\"\n % (self.manifest_path, key))\n if cls:\n if not isinstance(self.manifest[key], cls):\n raise TypeError(\"Manifest value '%s' should be %s, not %s\"\n % (key, cls, type(self.manifest[key])))", "response": "Verify the manifest schema."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nlists the stencil sets.", "response": "def stencil_sets(self):\n \"\"\"List of stencil sets.\"\"\"\n if not self._stencil_sets:\n self._stencil_sets = self.manifest['stencil_sets']\n return self._stencil_sets"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nloads a Stencil Set from this template pack.", "response": "def load_stencil_set(self, stencilset_name):\n \"\"\"Return the Stencil Set from this template pack.\"\"\"\n if stencilset_name not in 
self._stencil_sets:\n if stencilset_name not in self.manifest['stencil_sets'].keys():\n raise exc.FastfoodStencilSetNotListed(\n \"Stencil set '%s' not listed in %s under stencil_sets.\"\n % (stencilset_name, self.manifest_path))\n stencil_path = os.path.join(\n self.path, 'stencils', stencilset_name)\n self._stencil_sets[stencilset_name] = (\n stencil_module.StencilSet(stencil_path))\n return self._stencil_sets[stencilset_name]"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef qstring(option):\n if (re.match(NODE_ATTR_RE, option) is None and\n re.match(CHEF_CONST_RE, option) is None):\n return \"'%s'\" % option\n else:\n return option", "response": "Custom quoting method for jinja."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef render_templates_generator(*files, **template_map):\n for path in files:\n if not os.path.isfile(path):\n raise ValueError(\"Template file %s not found\"\n % os.path.relpath(path))\n else:\n try:\n with codecs.open(path, encoding='utf-8') as f:\n text = f.read()\n template = JINJA_ENV.from_string(text)\n except jinja2.TemplateSyntaxError as err:\n msg = (\"Error rendering jinja2 template for file %s \"\n \"on line %s. 
Error: %s\"\n % (path, err.lineno, err.message))\n raise type(err)(\n msg, err.lineno, filename=os.path.basename(path))\n\n result = template.render(**template_map)\n if not result.endswith('\\n'):\n result += '\\n'\n yield path, result", "response": "Yields the contents of the given file and returns a generator that yields the contents of the file."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef generate(self, model, outfolder):\n _logger.info('Generating code to {!r}.'.format(outfolder))\n\n for task in self.tasks:\n for element in task.filtered_elements(model):\n task.run(element, outfolder)", "response": "Generate code for given model."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef run(self, element, outfolder):\n filepath = self.relative_path_for_element(element)\n if outfolder and not os.path.isabs(filepath):\n filepath = os.path.join(outfolder, filepath)\n\n _logger.debug('{!r} --> {!r}'.format(element, filepath))\n\n self.ensure_folder(filepath)\n self.generate_file(element, filepath)", "response": "Apply this task to model element."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef create_template_context(self, element, **kwargs):\n context = dict(element=element, **kwargs)\n if self.global_context:\n context.update(**self.global_context)\n return context", "response": "Create a dictionary that can be used to create a template context."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _validated(self, value):\n if value is None:\n return None\n if isinstance(value, bson.ObjectId):\n return value\n try:\n return bson.ObjectId(value)\n except (ValueError, AttributeError):\n self.fail('invalid_object_id')", "response": "Format the value or raise a : exc : ValidationException."} {"SOURCE": 
"codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef create_environment(self, **kwargs):\n return jinja2.Environment(\n loader=jinja2.FileSystemLoader(self.templates_path),\n **kwargs\n )", "response": "Create a Jinja2 Environment object."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns array of pairs of adjacent elements in a.", "response": "def pairs(a):\n \"\"\"Return array of pairs of adjacent elements in a.\n\n >>> pairs([1, 2, 3, 4])\n array([[1, 2],\n [2, 3],\n [3, 4]])\n\n \"\"\"\n a = np.asarray(a)\n return as_strided(a, shape=(a.size - 1, 2), strides=a.strides * 2)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef transcript_sort_key(transcript):\n return (\n -len(transcript.protein_sequence),\n -len(transcript.sequence),\n transcript.name\n )", "response": "Returns the key used to sort transcripts in a sequence."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef best_transcript(transcripts):\n assert len(transcripts) > 0\n sorted_list = sorted(transcripts, key=transcript_sort_key)\n return sorted_list[0]", "response": "Given a set of coding transcripts choose the best transcript"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef predict_epitopes_from_args(args):\n mhc_model = mhc_binding_predictor_from_args(args)\n variants = variant_collection_from_args(args)\n gene_expression_dict = rna_gene_expression_dict_from_args(args)\n transcript_expression_dict = rna_transcript_expression_dict_from_args(args)\n\n predictor = TopiaryPredictor(\n mhc_model=mhc_model,\n padding_around_mutation=args.padding_around_mutation,\n ic50_cutoff=args.ic50_cutoff,\n percentile_cutoff=args.percentile_cutoff,\n min_transcript_expression=args.rna_min_transcript_expression,\n min_gene_expression=args.rna_min_gene_expression,\n 
only_novel_epitopes=args.only_novel_epitopes,\n raise_on_error=not args.skip_variant_errors)\n return predictor.predict_from_variants(\n variants=variants,\n transcript_expression_dict=transcript_expression_dict,\n gene_expression_dict=gene_expression_dict)", "response": "Predicts epitopes from the given commandline arguments."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_random_word(dictionary, min_word_length=3, max_word_length=8):\n\n while True:\n # Choose a random word\n word = choice(dictionary)\n # Stop looping as soon as we have a valid candidate\n if len(word) >= min_word_length and len(word) <= max_word_length:\n break\n\n return word", "response": "Returns a random word from the dictionary"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngenerates a random password from the ISO.", "response": "def pw(min_word_length=3, max_word_length=8, max_int_value=1000, number_of_elements=4, no_special_characters=False):\n \"\"\"\n Generate a password\n \"\"\"\n\n # Set the position of the integer\n int_position = set_int_position(number_of_elements)\n\n # Load dictionary\n dictionary = load_dictionary()\n\n password = ''\n for i in range(number_of_elements):\n # Add word or integer\n if i == int_position:\n password += str(get_random_int(max_int_value))\n else:\n password += get_random_word(dictionary,\n min_word_length,\n max_word_length).title()\n\n # Add separator\n if i != number_of_elements - 1:\n password += get_random_separator(no_special_characters)\n\n return password"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef protein_subsequences_around_mutations(effects, padding_around_mutation):\n protein_subsequences = {}\n protein_subsequence_start_offsets = {}\n for effect in effects:\n protein_sequence = effect.mutant_protein_sequence\n # some effects will lack a mutant protein sequence since\n # they are 
either silent or unpredictable\n if protein_sequence:\n mutation_start = effect.aa_mutation_start_offset\n mutation_end = effect.aa_mutation_end_offset\n seq_start_offset = max(\n 0,\n mutation_start - padding_around_mutation)\n # some pseudogenes have stop codons in the reference sequence,\n # if we try to use them for epitope prediction we should trim\n # the sequence to not include the stop character '*'\n first_stop_codon_index = protein_sequence.find(\"*\")\n if first_stop_codon_index < 0:\n first_stop_codon_index = len(protein_sequence)\n\n seq_end_offset = min(\n first_stop_codon_index,\n mutation_end + padding_around_mutation)\n subsequence = protein_sequence[seq_start_offset:seq_end_offset]\n protein_subsequences[effect] = subsequence\n protein_subsequence_start_offsets[effect] = seq_start_offset\n return protein_subsequences, protein_subsequence_start_offsets", "response": "Given a list of effects get a dictionary of subsequences and a dictionary of subsequence start offsets."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef check_padding_around_mutation(given_padding, epitope_lengths):\n min_required_padding = max(epitope_lengths) - 1\n if not given_padding:\n return min_required_padding\n else:\n require_integer(given_padding, \"Padding around mutation\")\n if given_padding < min_required_padding:\n raise ValueError(\n \"Padding around mutation %d cannot be less than %d \"\n \"for epitope lengths %s\" % (\n given_padding,\n min_required_padding,\n epitope_lengths))\n return given_padding", "response": "Checks that the given padding is less than the minimum required padding around the mutation."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning the interval of the mutation in the peptide.", "response": "def peptide_mutation_interval(\n peptide_start_in_protein,\n peptide_length,\n mutation_start_in_protein,\n mutation_end_in_protein):\n \"\"\"\n Half-open 
interval of mutated residues in the peptide, determined\n from the mutation interval in the original protein sequence.\n\n Parameters\n ----------\n peptide_start_in_protein : int\n Position of the first peptide residue within the protein\n (starting from 0)\n\n peptide_length : int\n\n mutation_start_in_protein : int\n Position of the first mutated residue starting from 0. In the case of a\n deletion, the position where the first residue had been.\n\n mutation_end_in_protein : int\n Position of the last mutated residue in the mutant protein. In the case\n of a deletion, this is equal to the mutation_start_in_protein.\n )\n \"\"\"\n if peptide_start_in_protein > mutation_end_in_protein:\n raise ValueError(\"Peptide starts after mutation\")\n elif peptide_start_in_protein + peptide_length < mutation_start_in_protein:\n raise ValueError(\"Peptide ends before mutation\")\n\n # need a half-open start/end interval\n peptide_mutation_start_offset = min(\n peptide_length,\n max(0, mutation_start_in_protein - peptide_start_in_protein))\n peptide_mutation_end_offset = min(\n peptide_length,\n max(0, mutation_end_in_protein - peptide_start_in_protein))\n return (peptide_mutation_start_offset, peptide_mutation_end_offset)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncreate an Indicator object from a dictionary.", "response": "def from_dict(cls, indicator):\n \"\"\"\n Create an indicator object from a dictionary.\n\n :param indicator: The dictionary.\n :return: The indicator object.\n \"\"\"\n\n tags = indicator.get('tags')\n if tags is not None:\n tags = [Tag.from_dict(tag) for tag in tags]\n\n return Indicator(value=indicator.get('value'),\n type=indicator.get('indicatorType'),\n priority_level=indicator.get('priorityLevel'),\n correlation_count=indicator.get('correlationCount'),\n whitelisted=indicator.get('whitelisted'),\n weight=indicator.get('weight'),\n reason=indicator.get('reason'),\n first_seen=indicator.get('firstSeen'),\n 
last_seen=indicator.get('lastSeen'),\n sightings=indicator.get('sightings'),\n source=indicator.get('source'),\n notes=indicator.get('notes'),\n tags=tags,\n enclave_ids=indicator.get('enclaveIds'))"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn a dictionary representation of the object.", "response": "def to_dict(self, remove_nones=False):\n \"\"\"\n Creates a dictionary representation of the indicator.\n\n :param remove_nones: Whether ``None`` values should be filtered out of the dictionary. Defaults to ``False``.\n :return: A dictionary representation of the indicator.\n \"\"\"\n\n if remove_nones:\n return super().to_dict(remove_nones=True)\n\n tags = None\n if self.tags is not None:\n tags = [tag.to_dict(remove_nones=remove_nones) for tag in self.tags]\n\n return {\n 'value': self.value,\n 'indicatorType': self.type,\n 'priorityLevel': self.priority_level,\n 'correlationCount': self.correlation_count,\n 'whitelisted': self.whitelisted,\n 'weight': self.weight,\n 'reason': self.reason,\n 'firstSeen': self.first_seen,\n 'lastSeen': self.last_seen,\n 'source': self.source,\n 'notes': self.notes,\n 'tags': tags,\n 'enclaveIds': self.enclave_ids\n }"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef filter_false_positive(df, process_time):\n result = []\n track = []\n count = 0\n for o in df['alerts']:\n count += 1\n if 'closedState' in o:\n if o['closedState'] != 'False Positive':\n if 'distinguishers' in o:\n try:\n if 'virus' in o['distinguishers']:\n if o['distinguishers']['virus'] != 'fetestevent':\n result.append(o)\n else:\n track.append(o) # Track fetestevents that are skipped\n else:\n result.append(o)\n except TypeError:\n result.append(o)\n else:\n result.append(o)\n else:\n track.append(o) # Track false positives that are skipped\n elif 'distinguishers' in o:\n try:\n if 'virus' in o['distinguishers']:\n if o['distinguishers']['virus'] != 'fetestevent':\n 
result.append(o)\n else:\n track.append(o) # Track fetestevents that are skipped\n else:\n result.append(o)\n except TypeError:\n result.append(o)\n\n trackfile = open('tracking_fetest_' + process_time + '.txt', 'w')\n numskip = 1\n for item in track:\n trackfile.write(\"\\n\\n**** {:d}: Display ID {} ****\\n\\n{}\".format(numskip, item['displayId'], item))\n numskip += 1\n return result", "response": "method that takes in FireEye ( FE ) alerts and filters FE - tests and False Positives\n "} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef process_alert(file_name):\n\n processed_line = open(file_name, 'r').read()\n char_pos = processed_line.find(\"}\")\n new_line = \"{\" + processed_line[char_pos + 2:]\n return new_line", "response": "A function that removes the alerts property from the FireEye alert and transforms the data into a JSON ready format"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef apply_filter(\n filter_fn,\n collection,\n result_fn=None,\n filter_name=\"\",\n collection_name=\"\"):\n \"\"\"\n Apply filter to effect collection and print number of dropped elements\n\n Parameters\n ----------\n \"\"\"\n n_before = len(collection)\n filtered = [x for x in collection if filter_fn(x)]\n n_after = len(filtered)\n if not collection_name:\n collection_name = collection.__class__.__name__\n logging.info(\n \"%s filtering removed %d/%d entries of %s\",\n filter_name,\n (n_before - n_after),\n n_before,\n collection_name)\n return result_fn(filtered) if result_fn else collection.__class__(filtered)", "response": "Apply filter to effect collection and print number of dropped elements"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef filter_silent_and_noncoding_effects(effects):\n return apply_filter(\n filter_fn=lambda effect: isinstance(effect, NonsilentCodingMutation),\n 
collection=effects,\n result_fn=effects.clone_with_new_elements,\n filter_name=\"Silent mutation\")", "response": "Filter out non - coding variants which result in modified proteins."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef apply_variant_expression_filters(\n variants,\n gene_expression_dict,\n gene_expression_threshold,\n transcript_expression_dict,\n transcript_expression_threshold):\n \"\"\"\n Filter a collection of variants by gene and transcript expression thresholds\n\n Parameters\n ----------\n variants : varcode.VariantCollection\n\n gene_expression_dict : dict\n\n gene_expression_threshold : float\n\n transcript_expression_dict : dict\n\n transcript_expression_threshold : float\n \"\"\"\n if gene_expression_dict:\n variants = apply_filter(\n lambda variant: any(\n gene_expression_dict.get(gene_id, 0.0) >=\n gene_expression_threshold\n for gene_id in variant.gene_ids\n ),\n variants,\n result_fn=variants.clone_with_new_elements,\n filter_name=\"Variant gene expression (min=%0.4f)\" % gene_expression_threshold)\n if transcript_expression_dict:\n variants = apply_filter(\n lambda variant: any(\n transcript_expression_dict.get(transcript_id, 0.0) >=\n transcript_expression_threshold\n for transcript_id in variant.transcript_ids\n ),\n variants,\n result_fn=variants.clone_with_new_elements,\n filter_name=(\n \"Variant transcript expression (min=%0.4f)\" % (\n transcript_expression_threshold,)))\n return variants", "response": "Filter a collection of variants by gene and transcript expression thresholds"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef apply_effect_expression_filters(\n effects,\n gene_expression_dict,\n gene_expression_threshold,\n transcript_expression_dict,\n transcript_expression_threshold):\n \"\"\"\n Filter collection of varcode effects by given gene\n and transcript expression thresholds.\n\n Parameters\n ----------\n 
effects : varcode.EffectCollection\n\n gene_expression_dict : dict\n\n gene_expression_threshold : float\n\n transcript_expression_dict : dict\n\n transcript_expression_threshold : float\n \"\"\"\n if gene_expression_dict:\n effects = apply_filter(\n lambda effect: (\n gene_expression_dict.get(effect.gene_id, 0.0) >=\n gene_expression_threshold),\n effects,\n result_fn=effects.clone_with_new_elements,\n filter_name=\"Effect gene expression (min = %0.4f)\" % gene_expression_threshold)\n\n if transcript_expression_dict:\n effects = apply_filter(\n lambda effect: (\n transcript_expression_dict.get(effect.transcript_id, 0.0) >=\n transcript_expression_threshold\n ),\n effects,\n result_fn=effects.clone_with_new_elements,\n filter_name=(\n \"Effect transcript expression (min=%0.4f)\" % (\n transcript_expression_threshold,)))\n return effects", "response": "Filter collection of varcode effects by given gene and transcript expression thresholds."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef submit_indicators(self, indicators, enclave_ids=None, tags=None):\n\n if enclave_ids is None:\n enclave_ids = self.enclave_ids\n\n if tags is not None:\n tags = [tag.to_dict() for tag in tags]\n\n body = {\n \"enclaveIds\": enclave_ids,\n \"content\": [indicator.to_dict() for indicator in indicators],\n \"tags\": tags\n }\n self._client.post(\"indicators\", data=json.dumps(body))", "response": "Submit indicators to the specified enclave."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_indicators(self, from_time=None, to_time=None, enclave_ids=None,\n included_tag_ids=None, excluded_tag_ids=None,\n start_page=0, page_size=None):\n \"\"\"\n Creates a generator from the |get_indicators_page| method that returns each successive indicator as an\n |Indicator| object containing values for the 'value' and 'type' attributes only; all\n other |Indicator| object attributes will 
contain Null values.\n\n :param int from_time: start of time window in milliseconds since epoch (defaults to 7 days ago).\n :param int to_time: end of time window in milliseconds since epoch (defaults to current time).\n :param list(string) enclave_ids: a list of enclave IDs from which to get indicators from. \n :param list(string) included_tag_ids: only indicators containing ALL of these tag GUIDs will be returned.\n :param list(string) excluded_tag_ids: only indicators containing NONE of these tags GUIDs be returned. \n :param int start_page: see 'page_size' explanation.\n :param int page_size: Passing the integer 1000 as the argument to this parameter should result in your script \n making fewer API calls because it returns the largest quantity of indicators with each API call. An API call \n has to be made to fetch each |Page|. \n :return: A generator of |Indicator| objects containing values for the \"value\" and \"type\" attributes only.\n All other attributes of the |Indicator| object will contain Null values. 
\n \n \"\"\"\n indicators_page_generator = self._get_indicators_page_generator(\n from_time=from_time,\n to_time=to_time,\n enclave_ids=enclave_ids,\n included_tag_ids=included_tag_ids,\n excluded_tag_ids=excluded_tag_ids,\n page_number=start_page,\n page_size=page_size\n )\n\n indicators_generator = Page.get_generator(page_generator=indicators_page_generator)\n\n return indicators_generator", "response": "Returns a generator that returns each successive indicator for the given time range."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _get_indicators_page_generator(self, from_time=None, to_time=None, page_number=0, page_size=None,\n enclave_ids=None, included_tag_ids=None, excluded_tag_ids=None):\n \"\"\"\n Creates a generator from the |get_indicators_page| method that returns each successive page.\n\n :param int from_time: start of time window in milliseconds since epoch (defaults to 7 days ago)\n :param int to_time: end of time window in milliseconds since epoch (defaults to current time)\n :param int page_number: the page number\n :param int page_size: the page size\n :param list(string) enclave_ids: a list of enclave IDs to filter by\n :param list(string) included_tag_ids: only indicators containing ALL of these tags will be returned\n :param list(string) excluded_tag_ids: only indicators containing NONE of these tags will be returned\n :return: a |Page| of |Indicator| objects\n \"\"\"\n\n get_page = functools.partial(\n self.get_indicators_page,\n from_time=from_time,\n to_time=to_time,\n page_number=page_number,\n page_size=page_size,\n enclave_ids=enclave_ids,\n included_tag_ids=included_tag_ids,\n excluded_tag_ids=excluded_tag_ids\n )\n return Page.get_page_generator(get_page, page_number, page_size)", "response": "Creates a generator from the get_indicators_page method that returns each successive page."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in 
Python 3 that\ngets a page of indicators matching the provided filters.", "response": "def get_indicators_page(self, from_time=None, to_time=None, page_number=None, page_size=None,\n enclave_ids=None, included_tag_ids=None, excluded_tag_ids=None):\n \"\"\"\n Get a page of indicators matching the provided filters.\n\n :param int from_time: start of time window in milliseconds since epoch (defaults to 7 days ago)\n :param int to_time: end of time window in milliseconds since epoch (defaults to current time)\n :param int page_number: the page number\n :param int page_size: the page size\n :param list(string) enclave_ids: a list of enclave IDs to filter by\n :param list(string) included_tag_ids: only indicators containing ALL of these tags will be returned\n :param list(string) excluded_tag_ids: only indicators containing NONE of these tags will be returned\n :return: a |Page| of indicators\n \"\"\"\n\n params = {\n 'from': from_time,\n 'to': to_time,\n 'pageSize': page_size,\n 'pageNumber': page_number,\n 'enclaveIds': enclave_ids,\n 'tagIds': included_tag_ids,\n 'excludedTagIds': excluded_tag_ids\n }\n\n resp = self._client.get(\"indicators\", params=params)\n\n page_of_indicators = Page.from_dict(resp.json(), content_type=Indicator)\n\n return page_of_indicators"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nsearching for indicators in the specified enclave.", "response": "def search_indicators(self, search_term=None,\n enclave_ids=None,\n from_time=None,\n to_time=None,\n indicator_types=None,\n tags=None,\n excluded_tags=None):\n \"\"\"\n Uses the |search_indicators_page| method to create a generator that returns each successive indicator.\n\n :param str search_term: The term to search for. If empty, no search term will be applied. 
Otherwise, must\n be at least 3 characters.\n :param list(str) enclave_ids: list of enclave ids used to restrict indicators to specific enclaves (optional - by\n default indicators from all of user's enclaves are returned)\n :param int from_time: start of time window in milliseconds since epoch (optional)\n :param int to_time: end of time window in milliseconds since epoch (optional)\n :param list(str) indicator_types: a list of indicator types to filter by (optional)\n :param list(str) tags: Name (or list of names) of tag(s) to filter indicators by. Only indicators containing\n ALL of these tags will be returned. (optional)\n :param list(str) excluded_tags: Indicators containing ANY of these tags will be excluded from the results.\n :return: The generator.\n \"\"\"\n\n return Page.get_generator(page_generator=self._search_indicators_page_generator(search_term, enclave_ids,\n from_time, to_time,\n indicator_types, tags,\n excluded_tags))"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _search_indicators_page_generator(self, search_term=None,\n enclave_ids=None,\n from_time=None,\n to_time=None,\n indicator_types=None,\n tags=None,\n excluded_tags=None,\n start_page=0,\n page_size=None):\n \"\"\"\n Creates a generator from the |search_indicators_page| method that returns each successive page.\n\n :param str search_term: The term to search for. If empty, no search term will be applied. 
Otherwise, must\n be at least 3 characters.\n :param list(str) enclave_ids: list of enclave ids used to restrict indicators to specific enclaves (optional - by\n default indicators from all of user's enclaves are returned)\n :param int from_time: start of time window in milliseconds since epoch (optional)\n :param int to_time: end of time window in milliseconds since epoch (optional)\n :param list(str) indicator_types: a list of indicator types to filter by (optional)\n :param list(str) tags: Name (or list of names) of tag(s) to filter indicators by. Only indicators containing\n ALL of these tags will be returned. (optional)\n :param list(str) excluded_tags: Indicators containing ANY of these tags will be excluded from the results.\n :param int start_page: The page to start on.\n :param page_size: The size of each page.\n :return: The generator.\n \"\"\"\n\n get_page = functools.partial(self.search_indicators_page, search_term, enclave_ids,\n from_time, to_time, indicator_types, tags, excluded_tags)\n return Page.get_page_generator(get_page, start_page, page_size)", "response": "Creates a generator from the |search_indicators_page| method that returns each successive page."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef search_indicators_page(self, search_term=None,\n enclave_ids=None,\n from_time=None,\n to_time=None,\n indicator_types=None,\n tags=None,\n excluded_tags=None,\n page_size=None,\n page_number=None):\n \"\"\"\n Search for indicators containing a search term.\n\n :param str search_term: The term to search for. If empty, no search term will be applied. 
Otherwise, must\n be at least 3 characters.\n :param list(str) enclave_ids: list of enclave ids used to restrict to indicators found in reports in specific\n enclaves (optional - by default reports from all of the user's enclaves are used)\n :param int from_time: start of time window in milliseconds since epoch (optional)\n :param int to_time: end of time window in milliseconds since epoch (optional)\n :param list(str) indicator_types: a list of indicator types to filter by (optional)\n :param list(str) tags: Name (or list of names) of tag(s) to filter indicators by. Only indicators containing\n ALL of these tags will be returned. (optional)\n :param list(str) excluded_tags: Indicators containing ANY of these tags will be excluded from the results.\n :param int page_number: the page number to get.\n :param int page_size: the size of the page to be returned.\n :return: a |Page| of |Indicator| objects.\n \"\"\"\n\n body = {\n 'searchTerm': search_term\n }\n\n params = {\n 'enclaveIds': enclave_ids,\n 'from': from_time,\n 'to': to_time,\n 'entityTypes': indicator_types,\n 'tags': tags,\n 'excludedTags': excluded_tags,\n 'pageSize': page_size,\n 'pageNumber': page_number\n }\n\n resp = self._client.post(\"indicators/search\", params=params, data=json.dumps(body))\n\n return Page.from_dict(resp.json(), content_type=Indicator)", "response": "Search for indicators containing a search term."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_related_indicators(self, indicators=None, enclave_ids=None):\n\n return Page.get_generator(page_generator=self._get_related_indicators_page_generator(indicators, enclave_ids))", "response": "Uses the |get_related_indicators_page| method to create a generator that returns each successive report."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning the metadata associated with a single indicator.", "response": "def 
get_indicator_metadata(self, value):\n \"\"\"\n Provide metadata associated with a single indicators, including value, indicatorType, noteCount,\n sightings, lastSeen, enclaveIds, and tags. The metadata is determined based on the enclaves the user making the\n request has READ access to.\n\n :param value: an indicator value to query.\n :return: A dict containing three fields: 'indicator' (an |Indicator| object), 'tags' (a list of |Tag|\n objects), and 'enclaveIds' (a list of enclave IDs that the indicator was found in).\n\n .. warning:: This method is deprecated. Please use |get_indicators_metadata| instead.\n \"\"\"\n\n result = self.get_indicators_metadata([Indicator(value=value)])\n if len(result) > 0:\n indicator = result[0]\n return {\n 'indicator': indicator,\n 'tags': indicator.tags,\n 'enclaveIds': indicator.enclave_ids\n }\n else:\n return None"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_indicators_metadata(self, indicators):\n\n data = [{\n 'value': i.value,\n 'indicatorType': i.type\n } for i in indicators]\n\n resp = self._client.post(\"indicators/metadata\", data=json.dumps(data))\n\n return [Indicator.from_dict(x) for x in resp.json()]", "response": "This method returns the metadata associated with a list of indicators."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_indicator_details(self, indicators, enclave_ids=None):\n\n # if the indicators parameter is a string, make it a singleton\n if isinstance(indicators, string_types):\n indicators = [indicators]\n\n params = {\n 'enclaveIds': enclave_ids,\n 'indicatorValues': indicators\n }\n resp = self._client.get(\"indicators/details\", params=params)\n\n return [Indicator.from_dict(indicator) for indicator in resp.json()]", "response": "This method returns details for all indicators in the specified enclave."} {"SOURCE": "codesearchnet", "instruction": "Can you create a 
Python 3 function that\nadds a list of terms to the user s company s whitelist.", "response": "def add_terms_to_whitelist(self, terms):\n \"\"\"\n Add a list of terms to the user's company's whitelist.\n\n :param terms: The list of terms to whitelist.\n :return: The list of extracted |Indicator| objects that were whitelisted.\n \"\"\"\n\n resp = self._client.post(\"whitelist\", json=terms)\n return [Indicator.from_dict(indicator) for indicator in resp.json()]"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef delete_indicator_from_whitelist(self, indicator):\n\n params = indicator.to_dict()\n self._client.delete(\"whitelist\", params=params)", "response": "Delete an indicator from the user s company s whitelist."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn a list of indicators that are trending in the community.", "response": "def get_community_trends(self, indicator_type=None, days_back=None):\n \"\"\"\n Find indicators that are trending in the community.\n\n :param indicator_type: A type of indicator to filter by. If ``None``, will get all types of indicators except\n for MALWARE and CVEs (this convention is for parity with the corresponding view on the Dashboard).\n :param days_back: The number of days back to search. 
Any integer between 1 and 30 is allowed.\n :return: A list of |Indicator| objects.\n \"\"\"\n\n params = {\n 'type': indicator_type,\n 'daysBack': days_back\n }\n\n resp = self._client.get(\"indicators/community-trending\", params=params)\n body = resp.json()\n\n # parse items in response as indicators\n return [Indicator.from_dict(indicator) for indicator in body]"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_whitelist_page(self, page_number=None, page_size=None):\n\n params = {\n 'pageNumber': page_number,\n 'pageSize': page_size\n }\n resp = self._client.get(\"whitelist\", params=params)\n return Page.from_dict(resp.json(), content_type=Indicator)", "response": "Gets a paginated list of indicators that the user s company has whitelisted."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nfinds all reports that contain any of the given indicators and returns correlated indicators from those reports.", "response": "def get_related_indicators_page(self, indicators=None, enclave_ids=None, page_size=None, page_number=None):\n \"\"\"\n Finds all reports that contain any of the given indicators and returns correlated indicators from those reports.\n\n :param indicators: list of indicator values to search for\n :param enclave_ids: list of IDs of enclaves to search in\n :param page_size: number of results per page\n :param page_number: page to start returning results on\n :return: A |Page| of |Report| objects.\n \"\"\"\n\n params = {\n 'indicators': indicators,\n 'enclaveIds': enclave_ids,\n 'pageNumber': page_number,\n 'pageSize': page_size\n }\n\n resp = self._client.get(\"indicators/related\", params=params)\n\n return Page.from_dict(resp.json(), content_type=Indicator)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _get_indicators_for_report_page_generator(self, report_id, 
start_page=0, page_size=None):\n\n get_page = functools.partial(self.get_indicators_for_report_page, report_id=report_id)\n return Page.get_page_generator(get_page, start_page, page_size)", "response": "Creates a generator that returns each successive page."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _get_related_indicators_page_generator(self, indicators=None, enclave_ids=None, start_page=0, page_size=None):\n\n get_page = functools.partial(self.get_related_indicators_page, indicators, enclave_ids)\n return Page.get_page_generator(get_page, start_page, page_size)", "response": "Creates a generator from the |get_related_indicators_page| method that returns each\n successive page."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ncreating a generator from the |get_whitelist_page| method that returns each successive page.", "response": "def _get_whitelist_page_generator(self, start_page=0, page_size=None):\n \"\"\"\n Creates a generator from the |get_whitelist_page| method that returns each successive page.\n\n :param int start_page: The page to start on.\n :param int page_size: The size of each page.\n :return: The generator.\n \"\"\"\n\n return Page.get_page_generator(self.get_whitelist_page, start_page, page_size)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ndisplay the system as points.", "response": "def points(self, size=1.0, highlight=None, colorlist=None, opacity=1.0):\n \"\"\"Display the system as points.\n\n :param float size: the size of the points.\n\n\n \"\"\"\n if colorlist is None:\n colorlist = [get_atom_color(t) for t in self.topology['atom_types']]\n if highlight is not None:\n if isinstance(highlight, int):\n colorlist[highlight] = 0xff0000\n if isinstance(highlight, (list, np.ndarray)):\n for i in highlight:\n colorlist[i] = 0xff0000\n\n sizes = [size] * len(self.topology['atom_types'])\n\n points = 
self.add_representation('points', {'coordinates': self.coordinates.astype('float32'),\n 'colors': colorlist,\n 'sizes': sizes,\n 'opacity': opacity})\n # Update closure\n def update(self=self, points=points):\n self.update_representation(points, {'coordinates': self.coordinates.astype('float32')})\n\n self.update_callbacks.append(update)\n self.autozoom(self.coordinates)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef labels(self, text=None, coordinates=None, colorlist=None, sizes=None, fonts=None, opacity=1.0):\n '''Display atomic labels for the system'''\n if coordinates is None:\n coordinates=self.coordinates\n l=len(coordinates)\n if text is None:\n if len(self.topology.get('atom_types'))==l:\n text=[self.topology['atom_types'][i]+str(i+1) for i in range(l)]\n else:\n text=[str(i+1) for i in range(l)]\n\n text_representation = self.add_representation('text', {'coordinates': coordinates,\n 'text': text,\n 'colors': colorlist,\n 'sizes': sizes,\n 'fonts': fonts,\n 'opacity': opacity})\n def update(self=self, text_representation=text_representation):\n self.update_representation(text_representation, {'coordinates': coordinates})\n\n self.update_callbacks.append(update)", "response": "Display atomic labels for the system"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nremoves all atomic labels from the system", "response": "def remove_labels(self):\n '''Remove all atomic labels from the system'''\n for rep_id in self.representations.keys():\n if self.representations[rep_id]['rep_type']=='text' and rep_id not in self._axes_reps:\n self.remove_representation(rep_id)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef toggle_axes(self, parameters = None):\n '''Toggle axes [x,y,z] on and off for the current representation\n Parameters: dictionary of parameters to control axes:\n position/p: 
origin of axes\n length/l: length of axis\n offset/o: offset to place axis labels\n axis_colors/ac: axis colors\n text_colors/tc: label colors\n radii/r: axis radii\n text/t: label text\n sizes/s: label sizes\n fonts/f: label fonts'''\n\n if len(self._axes_reps)>0:\n for rep_id in self._axes_reps:\n self.remove_representation(rep_id)\n self._axes_reps = []\n else:\n if not isinstance(parameters,dict):\n parameters={}\n\n def defaults(pdict,keys,default,length=3,instance=(int,float)):\n '''Helper function to generate default values and handle errors'''\n for k in keys:\n val=pdict.get(k)\n if val!=None:\n break\n if val==None:\n val=default\n elif isinstance(val,instance) and length>1:\n val = [val]*length\n elif isinstance(val,(list,np.generic,np.ndarray)) and length>1:\n if not all([isinstance(v,instance) for v in val]):\n raise RuntimeError(\"Invalid type {t} for parameter {p}. Use {i}.\".format(t=type(val),p=val,i=instance))\n elif not isinstance(val,instance):\n raise RuntimeError(\"Invalid type {t} for parameter {p}. 
Use {i}.\".format(t=type(val),p=val,i=instance))\n return val\n\n p = defaults(parameters,['positions','position','p'],np.average(self.coordinates,0))\n l = defaults(parameters,['lengths','length','l'],max([np.linalg.norm(x-p) for x in self.coordinates]),1)\n o = defaults(parameters,['offsets','offset','o'],l*1.05,1)\n ac = defaults(parameters,[a+c for a in ['axis_','a',''] for c in ['colors','colours','color','colour','c']],[0xff0000,0x00ff00,0x0000ff],3,(int,hex))\n tc = defaults(parameters,[a+c for a in ['text_','t',''] for c in ['colors','colours','color','colour','c']],[0xff0000,0x00ff00,0x0000ff],3,(int,hex))\n r = defaults(parameters,['radii','radius','r'],[0.005]*3,3)\n t = defaults(parameters,['text','labels','t'],['X','Y','Z'],3,str)\n s = defaults(parameters,['sizes','size','s'],[32]*3,3)\n f = defaults(parameters,['fonts','font','f'],['Arial']*3,3,str)\n\n starts=np.array([p,p,p],float)\n ends=np.array([p+[l,0,0],p+[0,l,0],p+[0,0,l]],float)\n axis_labels_coords=np.array([p+[o,0,0],p+[0,o,0],p+[0,0,o]],float)\n\n a_rep=self.add_representation('cylinders',{\"startCoords\":starts,\n \"endCoords\":ends,\n \"colors\":ac,\n \"radii\":r})\n\n t_rep=self.add_representation('text',{\"coordinates\":axis_labels_coords,\n \"text\":t,\n \"colors\":tc,\n \"sizes\":s,\n \"fonts\":f})\n self._axes_reps = [a_rep, t_rep]", "response": "Toggle the axes on and off for the current representation of the object."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef lines(self):\n '''Display the system bonds as lines.\n\n '''\n if \"bonds\" not in self.topology:\n return\n\n bond_start, bond_end = zip(*self.topology['bonds'])\n bond_start = np.array(bond_start)\n bond_end = np.array(bond_end)\n\n color_array = np.array([get_atom_color(t) for t in self.topology['atom_types']])\n lines = self.add_representation('lines', {'startCoords': self.coordinates[bond_start],\n 'endCoords': self.coordinates[bond_end],\n 
'startColors': color_array[bond_start].tolist(),\n 'endColors': color_array[bond_end].tolist()})\n\n def update(self=self, lines=lines):\n bond_start, bond_end = zip(*self.topology['bonds'])\n bond_start = np.array(bond_start)\n bond_end = np.array(bond_end)\n\n self.update_representation(lines, {'startCoords': self.coordinates[bond_start],\n 'endCoords': self.coordinates[bond_end]})\n self.update_callbacks.append(update)\n self.autozoom(self.coordinates)", "response": "Display the system bonds as lines.\n "} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ndisplay atoms as points and bonds as lines.", "response": "def wireframe(self, pointsize=0.2, opacity=1.0):\n '''Display atoms as points of size *pointsize* and bonds as lines.'''\n self.points(pointsize, opacity=opacity)\n self.lines()"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef ball_and_sticks(self, ball_radius=0.05, stick_radius=0.02, colorlist=None, opacity=1.0):\n\n # Add the spheres\n\n if colorlist is None:\n colorlist = [get_atom_color(t) for t in self.topology['atom_types']]\n sizes = [ball_radius] * len(self.topology['atom_types'])\n\n spheres = self.add_representation('spheres', {'coordinates': self.coordinates.astype('float32'),\n 'colors': colorlist,\n 'radii': sizes,\n 'opacity': opacity})\n\n def update(self=self, spheres=spheres):\n self.update_representation(spheres, {'coordinates': self.coordinates.astype('float32')})\n\n self.update_callbacks.append(update)\n\n # Add the cylinders\n\n if 'bonds' in self.topology and self.topology['bonds'] is not None:\n start_idx, end_idx = zip(*self.topology['bonds'])\n # Added this so bonds don't go through atoms when opacity<1.0\n new_start_coords = []\n new_end_coords = []\n for bond_ind, bond in enumerate(self.topology['bonds']):\n trim_amt = (ball_radius**2 - stick_radius**2)**0.5 if ball_radius>stick_radius else 0\n start_coord = self.coordinates[bond[0]]\n 
end_coord = self.coordinates[bond[1]]\n vec = (end_coord-start_coord)/np.linalg.norm(end_coord-start_coord)\n new_start_coords.append(start_coord+vec*trim_amt)\n new_end_coords.append(end_coord-vec*trim_amt)\n\n cylinders = self.add_representation('cylinders', {'startCoords': np.array(new_start_coords,dtype='float32'),\n 'endCoords': np.array(new_end_coords,dtype='float32'),\n 'colors': [0xcccccc] * len(new_start_coords),\n 'radii': [stick_radius] * len(new_start_coords),\n 'opacity': opacity})\n # Update closure\n def update(self=self, rep=cylinders, start_idx=start_idx, end_idx=end_idx):\n self.update_representation(rep, {'startCoords': self.coordinates[list(start_idx)],\n 'endCoords': self.coordinates[list(end_idx)]})\n\n self.update_callbacks.append(update)\n self.autozoom(self.coordinates)", "response": "Display the system using a ball and stick representation."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef line_ribbon(self):\n '''Display the protein secondary structure as a white lines that passes through the\n backbone chain.\n\n '''\n # Control points are the CA (C alphas)\n backbone = np.array(self.topology['atom_names']) == 'CA'\n smoothline = self.add_representation('smoothline', {'coordinates': self.coordinates[backbone],\n 'color': 0xffffff})\n\n def update(self=self, smoothline=smoothline):\n self.update_representation(smoothline, {'coordinates': self.coordinates[backbone]})\n self.update_callbacks.append(update)\n\n self.autozoom(self.coordinates)", "response": "Display the protein secondary structure as a white lines that passes through the\n backbone chain."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef cylinder_and_strand(self):\n '''Display the protein secondary structure as a white,\n solid tube and the alpha-helices as yellow cylinders.\n\n '''\n top = self.topology\n # We build a mini-state machine to find the\n 
# start end of helices and such\n in_helix = False\n helices_starts = []\n helices_ends = []\n coils = []\n\n coil = []\n for i, typ in enumerate(top['secondary_structure']):\n if typ == 'H':\n if in_helix == False:\n # We become helices\n helices_starts.append(top['residue_indices'][i][0])\n in_helix = True\n\n # We end the previous coil\n coil.append(top['residue_indices'][i][0])\n else:\n if in_helix == True:\n # We stop being helices\n helices_ends.append(top['residue_indices'][i][0])\n\n # We start a new coil\n coil = []\n coils.append(coil)\n in_helix = False\n\n # We add control points\n coil.append(top['residue_indices'][i][0])\n [coil.append(j) for j in top['residue_indices'][i] if top['atom_names'][j] == 'CA']\n\n # We add the coils\n coil_representations = []\n for control_points in coils:\n rid = self.add_representation('smoothtube', {'coordinates': self.coordinates[control_points],\n 'radius': 0.05,\n 'resolution': 4,\n 'color': 0xffffff})\n coil_representations.append(rid)\n\n\n start_idx, end_idx = helices_starts, helices_ends\n cylinders = self.add_representation('cylinders', {'startCoords': self.coordinates[list(start_idx)],\n 'endCoords': self.coordinates[list(end_idx)],\n 'colors': [0xffff00] * len(self.coordinates),\n 'radii': [0.15] * len(self.coordinates)})\n def update(self=self, cylinders=cylinders, coils=coils,\n coil_representations=coil_representations,\n start_idx=start_idx, end_idx=end_idx, control_points=control_points):\n for i, control_points in enumerate(coils):\n rid = self.update_representation(coil_representations[i],\n {'coordinates': self.coordinates[control_points]})\n\n self.update_representation(cylinders, {'startCoords': self.coordinates[list(start_idx)],\n 'endCoords': self.coordinates[list(end_idx)]})\n\n self.update_callbacks.append(update)\n self.autozoom(self.coordinates)", "response": "Display the protein secondary structure as a white solid tube and the alpha - helices as yellow cylinders."} {"SOURCE": 
"codesearchnet", "instruction": "Implement a function in Python 3 to\ndisplay a protein secondary structure as a pymol - like cartoon representation.", "response": "def cartoon(self, cmap=None):\n '''Display a protein secondary structure as a pymol-like cartoon representation.\n\n :param cmap: A dictionary that maps the secondary type\n (H=helix, E=sheet, C=coil) to a hexadecimal color (0xffffff for white)\n '''\n # Parse secondary structure\n top = self.topology\n\n geom = gg.GeomProteinCartoon(gg.Aes(xyz=self.coordinates,\n types=top['atom_names'],\n secondary_type=top['secondary_structure']),\n cmap=cmap)\n\n primitives = geom.produce(gg.Aes())\n ids = [self.add_representation(r['rep_type'], r['options']) for r in primitives]\n\n def update(self=self, geom=geom, ids=ids):\n primitives = geom.produce(gg.Aes(xyz=self.coordinates))\n [self.update_representation(id_, rep_options)\n for id_, rep_options in zip(ids, primitives)]\n\n self.update_callbacks.append(update)\n self.autozoom(self.coordinates)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef add_isosurface(self, function, isolevel=0.3, resolution=32, style=\"wireframe\", color=0xffffff):\n '''Add an isosurface to the current scene.\n\n :param callable function: A function that takes x, y, z coordinates as input and is broadcastable using numpy. Typically simple\n functions that involve standard arithmetic operations and functions\n such as ``x**2 + y**2 + z**2`` or ``np.exp(x**2 + y**2 + z**2)`` will work. If not sure, you can first\n pass the function through ``numpy.vectorize``.\n\n Example: ``mv.add_isosurface(np.vectorize(f))``\n :param float isolevel: The value for which the function should be constant.\n :param int resolution: The number of grid points to use for the surface. 
A high value will give better quality but lower performance.\n :param str style: The surface style, choose between ``solid``, ``wireframe`` and ``transparent``.\n :param int color: The color given as a hexadecimal integer. Example: ``0xffffff`` is white.\n\n '''\n\n avail_styles = ['wireframe', 'solid', 'transparent']\n if style not in avail_styles:\n raise ValueError('style must be in ' + str(avail_styles))\n\n # We want to make a container that contains the whole molecule\n # and surface\n area_min = self.coordinates.min(axis=0) - 0.2\n area_max = self.coordinates.max(axis=0) + 0.2\n\n x = np.linspace(area_min[0], area_max[0], resolution)\n y = np.linspace(area_min[1], area_max[1], resolution)\n z = np.linspace(area_min[2], area_max[2], resolution)\n\n xv, yv, zv = np.meshgrid(x, y, z)\n spacing = np.array((area_max - area_min)/resolution)\n\n if isolevel >= 0:\n triangles = marching_cubes(function(xv, yv, zv), isolevel)\n else: # Wrong triangle unwinding order -- god only knows why\n triangles = marching_cubes(-function(xv, yv, zv), -isolevel)\n\n if len(triangles) == 0:\n ## NO surface\n return\n\n faces = []\n verts = []\n for i, t in enumerate(triangles):\n faces.append([i * 3, i * 3 + 1, i * 3 + 2])\n verts.extend(t)\n\n faces = np.array(faces)\n verts = area_min + spacing/2 + np.array(verts)*spacing\n rep_id = self.add_representation('surface', {'verts': verts.astype('float32'),\n 'faces': faces.astype('int32'),\n 'style': style,\n 'color': color})\n self.autozoom(verts)", "response": "Add an isosurface to the current scene."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nadding an isosurface to the current scene using pre - computed data on a grid", "response": "def add_isosurface_grid_data(self, data, origin, extent, resolution,\n isolevel=0.3, scale=10,\n style=\"wireframe\", color=0xffffff):\n \"\"\"\n Add an isosurface to current scene using pre-computed data on a grid\n \"\"\"\n spacing = 
np.array(extent/resolution)/scale\n if isolevel >= 0:\n triangles = marching_cubes(data, isolevel)\n else:\n triangles = marching_cubes(-data, -isolevel)\n faces = []\n verts = []\n for i, t in enumerate(triangles):\n faces.append([i * 3, i * 3 +1, i * 3 + 2])\n verts.extend(t)\n faces = np.array(faces)\n verts = origin + spacing/2 + np.array(verts)*spacing\n rep_id = self.add_representation('surface', {'verts': verts.astype('float32'),\n 'faces': faces.astype('int32'),\n 'style': style,\n 'color': color})\n\n self.autozoom(verts)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _refresh_token(self):\n\n # use basic auth with API key and secret\n client_auth = requests.auth.HTTPBasicAuth(self.api_key, self.api_secret)\n\n # make request\n post_data = {\"grant_type\": \"client_credentials\"}\n response = requests.post(self.auth, auth=client_auth, data=post_data, proxies=self.proxies)\n self.last_response = response\n\n # raise exception if status code indicates an error\n if 400 <= response.status_code < 600:\n message = \"{} {} Error (Trace-Id: {}): {}\".format(response.status_code,\n \"Client\" if response.status_code < 500 else \"Server\",\n self._get_trace_id(response),\n \"unable to get token\")\n raise HTTPError(message, response=response)\n\n # set token property to the received token\n self.token = response.json()[\"access_token\"]", "response": "Refreshes the token for the current user."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncreate headers dictionary for a request.", "response": "def _get_headers(self, is_json=False):\n \"\"\"\n Create headers dictionary for a request.\n\n :param boolean is_json: Whether the request body is a json.\n :return: The headers dictionary.\n \"\"\"\n\n headers = {\"Authorization\": \"Bearer \" + self._get_token()}\n\n if self.client_type is not None:\n headers[\"Client-Type\"] = self.client_type\n\n if self.client_version is not 
None:\n headers[\"Client-Version\"] = self.client_version\n\n if self.client_metatag is not None:\n headers[\"Client-Metatag\"] = self.client_metatag\n\n if is_json:\n headers['Content-Type'] = 'application/json'\n\n return headers"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ndetermining whether the given response indicates that the token is expired.", "response": "def _is_expired_token_response(cls, response):\n \"\"\"\n Determine whether the given response indicates that the token is expired.\n\n :param response: The response object.\n :return: True if the response indicates that the token is expired.\n \"\"\"\n\n EXPIRED_MESSAGE = \"Expired oauth2 access token\"\n INVALID_MESSAGE = \"Invalid oauth2 access token\"\n\n if response.status_code == 400:\n try:\n body = response.json()\n if str(body.get('error_description')) in [EXPIRED_MESSAGE, INVALID_MESSAGE]:\n return True\n except:\n pass\n return False"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef request(self, method, path, headers=None, params=None, data=None, **kwargs):\n\n retry = self.retry\n attempted = False\n while not attempted or retry:\n\n # get headers and merge with headers from method parameter if it exists\n base_headers = self._get_headers(is_json=method in [\"POST\", \"PUT\"])\n if headers is not None:\n base_headers.update(headers)\n\n url = \"{}/{}\".format(self.base, path)\n\n # make request\n response = requests.request(method=method,\n url=url,\n headers=base_headers,\n verify=self.verify,\n params=params,\n data=data,\n proxies=self.proxies,\n **kwargs)\n self.last_response = response\n attempted = True\n\n # log request\n self.logger.debug(\"%s %s. Trace-Id: %s. 
Params: %s\", method, url, response.headers.get('Trace-Id'), params)\n\n # refresh token if expired\n if self._is_expired_token_response(response):\n self._refresh_token()\n\n # if \"too many requests\" status code received, wait until next request will be allowed and retry\n elif retry and response.status_code == 429:\n wait_time = ceil(response.json().get('waitTime') / 1000)\n self.logger.debug(\"Waiting %d seconds until next request allowed.\" % wait_time)\n\n # if wait time exceeds max wait time, allow the exception to be thrown\n if wait_time <= self.max_wait_time:\n time.sleep(wait_time)\n else:\n retry = False\n\n # request cycle is complete\n else:\n retry = False\n\n # raise exception if status code indicates an error\n if 400 <= response.status_code < 600:\n\n # get response json body, if one exists\n resp_json = None\n try:\n resp_json = response.json()\n except:\n pass\n\n # get message from json body, if one exists\n if resp_json is not None and 'message' in resp_json:\n reason = resp_json['message']\n else:\n reason = \"unknown cause\"\n\n # construct error message\n message = \"{} {} Error (Trace-Id: {}): {}\".format(response.status_code,\n \"Client\" if response.status_code < 500 else \"Server\",\n self._get_trace_id(response),\n reason)\n # raise HTTPError\n raise HTTPError(message, response=response)\n\n return response", "response": "Wrapper around requests. 
request that handles API calls."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef isosurface_from_data(data, isolevel, origin, spacing):\n if isolevel >= 0:\n triangles = marching_cubes(data, isolevel)\n else: # Wrong triangle unwinding order -- god only knows why\n triangles = marching_cubes(-data, -isolevel)\n faces = []\n verts = []\n for i, t in enumerate(triangles):\n faces.append([i * 3, i * 3 + 1, i * 3 + 2])\n verts.extend(t)\n\n faces = np.array(faces)\n verts = origin + spacing/2 + np.array(verts)*spacing\n return verts, faces", "response": "This function takes a numpy array of data and returns the vertices and faces of the corresponding isosurface."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nloads a GTF file generated by StringTie which contains transcript - level quantification of abundance. Returns a dictionary mapping transcript IDs of transcripts to FPKM values.", "response": "def load_transcript_fpkm_dict_from_gtf(\n gtf_path,\n transcript_id_column_name=\"reference_id\",\n fpkm_column_name=\"FPKM\",\n feature_column_name=\"feature\"):\n \"\"\"\n Load a GTF file generated by StringTie which contains transcript-level\n quantification of abundance. 
Returns a dictionary mapping Ensembl\n IDs of transcripts to FPKM values.\n \"\"\"\n df = gtfparse.read_gtf(\n gtf_path,\n column_converters={fpkm_column_name: float})\n transcript_ids = _get_gtf_column(transcript_id_column_name, gtf_path, df)\n fpkm_values = _get_gtf_column(fpkm_column_name, gtf_path, df)\n features = _get_gtf_column(feature_column_name, gtf_path, df)\n logging.info(\"Loaded %d rows from %s\" % (len(transcript_ids), gtf_path))\n logging.info(\"Found %s transcript entries\" % sum(\n feature == \"transcript\" for feature in features))\n result = {\n transcript_id: float(fpkm)\n for (transcript_id, fpkm, feature)\n in zip(transcript_ids, fpkm_values, features)\n if (\n (transcript_id is not None) and\n (len(transcript_id) > 0) and\n (feature == \"transcript\")\n )\n }\n logging.info(\"Keeping %d transcript rows with reference IDs\" % (\n len(result),))\n return result"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\npredict the amino acid sequences in the specified named sequences.", "response": "def predict_from_named_sequences(\n self, name_to_sequence_dict):\n \"\"\"\n Parameters\n ----------\n name_to_sequence_dict : (str->str) dict\n Dictionary mapping sequence names to amino acid sequences\n\n Returns pandas.DataFrame with the following columns:\n - source_sequence_name\n - peptide\n - peptide_offset\n - peptide_length\n - allele\n - affinity\n - percentile_rank\n - prediction_method_name\n \"\"\"\n df = self.mhc_model.predict_subsequences_dataframe(name_to_sequence_dict)\n return df.rename(\n columns={\n \"length\": \"peptide_length\",\n \"offset\": \"peptide_offset\"})"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef predict_from_sequences(self, sequences):\n # make each sequence its own unique ID\n sequence_dict = {\n seq: seq\n for seq in sequences\n }\n df = self.predict_from_named_sequences(sequence_dict)\n return 
df.rename(columns={\"source_sequence_name\": \"source_sequence\"})", "response": "Predict MHC ligands for sub - sequences of each input sequence."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef predict_from_mutation_effects(\n self,\n effects,\n transcript_expression_dict=None,\n gene_expression_dict=None):\n \"\"\"Given a Varcode.EffectCollection of predicted protein effects,\n return predicted epitopes around each mutation.\n\n Parameters\n ----------\n effects : Varcode.EffectCollection\n\n transcript_expression_dict : dict\n Dictionary mapping transcript IDs to RNA expression estimates. Used\n both for transcript expression filtering and for selecting the\n most abundant transcript for a particular variant. If omitted then\n transcript selection is done using priority of variant effects and\n transcript length.\n\n gene_expression_dict : dict, optional\n Dictionary mapping gene IDs to RNA expression estimates\n\n Returns DataFrame with the following columns:\n - variant\n - gene\n - gene_id\n - transcript_id\n - transcript_name\n - effect\n - effect_type\n - peptide\n - peptide_offset\n - peptide_length\n - allele\n - affinity\n - percentile_rank\n - prediction_method_name\n - contains_mutant_residues\n - mutation_start_in_peptide\n - mutation_end_in_peptide\n\n Optionally will also include the following columns if corresponding\n expression dictionary inputs are provided:\n - gene_expression\n - transcript_expression\n \"\"\"\n\n # we only care about effects which impact the coding sequence of a\n # protein\n effects = filter_silent_and_noncoding_effects(effects)\n\n effects = apply_effect_expression_filters(\n effects,\n transcript_expression_dict=transcript_expression_dict,\n transcript_expression_threshold=self.min_transcript_expression,\n gene_expression_dict=gene_expression_dict,\n gene_expression_threshold=self.min_gene_expression)\n\n # group by variants, so that we end up with only 
one mutant\n # sequence per mutation\n variant_effect_groups = effects.groupby_variant()\n\n if len(variant_effect_groups) == 0:\n logging.warn(\"No candidates for MHC binding prediction\")\n return []\n\n if transcript_expression_dict:\n # if expression data is available, then for each variant\n # keep the effect annotation for the most abundant transcript\n top_effects = [\n variant_effects.top_expression_effect(\n transcript_expression_dict)\n for variant_effects in variant_effect_groups.values()\n ]\n else:\n # if no transcript abundance data is available, then\n # for each variant keep the effect with the most significant\n # predicted effect on the protein sequence, along with using\n # transcript/CDS length as a tie-breaker for effects with the same\n # priority.\n top_effects = [\n variant_effects.top_priority_effect()\n for variant_effects in variant_effect_groups.values()\n ]\n\n # 1) dictionary mapping varcode effect objects to subsequences\n # around each mutation\n # 2) dictionary mapping varcode effect to start offset of subsequence\n # within the full mutant protein sequence\n effect_to_subsequence_dict, effect_to_offset_dict = \\\n protein_subsequences_around_mutations(\n effects=top_effects,\n padding_around_mutation=self.padding_around_mutation)\n\n # since we know that each set of variant effects has been\n # reduced to a single 'top priority' effect, we can uniquely\n # identify each variant sequence by its original genomic variant\n variant_string_to_effect_dict = {\n effect.variant.short_description: effect\n for effect in effect_to_subsequence_dict.keys()\n }\n variant_string_to_subsequence_dict = {\n effect.variant.short_description: subseq\n for (effect, subseq) in effect_to_subsequence_dict.items()\n }\n variant_string_to_offset_dict = {\n effect.variant.short_description: subseq_offset\n for (effect, subseq_offset) in effect_to_offset_dict.items()\n }\n df = self.predict_from_named_sequences(variant_string_to_subsequence_dict)\n 
logging.info(\"MHC predictor returned %d peptide binding predictions\" % (\n len(df)))\n\n # since we used variant descriptions as the name of each sequence\n # let's rename that column to be more informative\n df = df.rename(columns={\"source_sequence_name\": \"variant\"})\n\n # adjust offset to be relative to start of protein, rather\n # than whatever subsequence we used for prediction\n def compute_peptide_offset_relative_to_protein(row):\n subsequence_offset = variant_string_to_offset_dict[row.variant]\n return row.peptide_offset + subsequence_offset\n\n df[\"peptide_offset\"] = df.apply(\n compute_peptide_offset_relative_to_protein,\n axis=1)\n\n if self.ic50_cutoff:\n df = df[df.affinity <= self.ic50_cutoff]\n logging.info(\"Kept %d predictions after filtering affinity <= %f\" % (\n len(df), self.ic50_cutoff))\n\n if self.percentile_cutoff:\n df = df[df.percentile_rank <= self.percentile_cutoff]\n logging.info(\"Kept %d predictions after filtering percentile <= %f\" % (\n len(df), self.percentile_cutoff))\n\n extra_columns = OrderedDict([\n ('gene', []),\n ('gene_id', []),\n ('transcript_id', []),\n ('transcript_name', []),\n ('effect', []),\n ('effect_type', []),\n ('contains_mutant_residues', []),\n ('mutation_start_in_peptide', []),\n ('mutation_end_in_peptide', []),\n ])\n if gene_expression_dict is not None:\n extra_columns[\"gene_expression\"] = []\n if transcript_expression_dict is not None:\n extra_columns[\"transcript_expression\"] = []\n\n for _, row in df.iterrows():\n effect = variant_string_to_effect_dict[row.variant]\n mutation_start_in_protein = effect.aa_mutation_start_offset\n mutation_end_in_protein = effect.aa_mutation_end_offset\n peptide_length = len(row.peptide)\n is_mutant = contains_mutant_residues(\n peptide_start_in_protein=row.peptide_offset,\n peptide_length=peptide_length,\n mutation_start_in_protein=mutation_start_in_protein,\n mutation_end_in_protein=mutation_end_in_protein)\n if is_mutant:\n mutation_start_in_peptide, 
mutation_end_in_peptide = peptide_mutation_interval(\n peptide_start_in_protein=row.peptide_offset,\n peptide_length=peptide_length,\n mutation_start_in_protein=mutation_start_in_protein,\n mutation_end_in_protein=mutation_end_in_protein)\n else:\n mutation_start_in_peptide = mutation_end_in_peptide = None\n\n extra_columns[\"gene\"].append(effect.gene_name)\n gene_id = effect.gene_id\n extra_columns[\"gene_id\"].append(gene_id)\n if gene_expression_dict is not None:\n extra_columns[\"gene_expression\"].append(\n gene_expression_dict.get(gene_id, 0.0))\n\n transcript_id = effect.transcript_id\n extra_columns[\"transcript_id\"].append(transcript_id)\n extra_columns[\"transcript_name\"].append(effect.transcript_name)\n if transcript_expression_dict is not None:\n extra_columns[\"transcript_expression\"].append(\n transcript_expression_dict.get(transcript_id, 0.0))\n\n extra_columns[\"effect\"].append(effect.short_description)\n extra_columns[\"effect_type\"].append(effect.__class__.__name__)\n\n extra_columns[\"contains_mutant_residues\"].append(is_mutant)\n extra_columns[\"mutation_start_in_peptide\"].append(mutation_start_in_peptide)\n extra_columns[\"mutation_end_in_peptide\"].append(mutation_end_in_peptide)\n\n for col, values in extra_columns.items():\n df[col] = values\n\n # TODO: add extra boolean field\n # novel = is_mutant | not_in_reference\n # Requires keeping a quick lookup structure for all peptides in\n # the reference proteome\n if self.only_novel_epitopes:\n df = df[df.contains_mutant_residues]\n\n return df", "response": "Given a Varcode. 
EffectCollection of predicted protein effects, return predicted epitopes around each mutation."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef predict_from_variants(\n self,\n variants,\n transcript_expression_dict=None,\n gene_expression_dict=None):\n \"\"\"\n Predict epitopes from a Variant collection, filtering options, and\n optional gene and transcript expression data.\n\n Parameters\n ----------\n variants : varcode.VariantCollection\n\n transcript_expression_dict : dict\n Maps from Ensembl transcript IDs to FPKM expression values.\n\n gene_expression_dict : dict, optional\n Maps from Ensembl gene IDs to FPKM expression values.\n\n Returns DataFrame with the following columns:\n - variant\n - gene\n - gene_id\n - transcript_id\n - transcript_name\n - effect\n - effect_type\n - peptide\n - peptide_offset\n - peptide_length\n - allele\n - affinity\n - percentile_rank\n - prediction_method_name\n - contains_mutant_residues\n - mutation_start_in_peptide\n - mutation_end_in_peptide\n\n Optionally will also include the following columns if corresponding\n expression dictionary inputs are provided:\n - gene_expression\n - transcript_expression\n \"\"\"\n # pre-filter variants by checking if any of the genes or\n # transcripts they overlap have sufficient expression.\n # I'm tolerating the redundancy of this code since it's much cheaper\n # to filter a variant *before* trying to predict its impact/effect\n # on the protein sequence.\n variants = apply_variant_expression_filters(\n variants,\n transcript_expression_dict=transcript_expression_dict,\n transcript_expression_threshold=self.min_transcript_expression,\n gene_expression_dict=gene_expression_dict,\n gene_expression_threshold=self.min_gene_expression)\n\n effects = variants.effects(raise_on_error=self.raise_on_error)\n\n return self.predict_from_mutation_effects(\n effects=effects,\n transcript_expression_dict=transcript_expression_dict,\n 
gene_expression_dict=gene_expression_dict)", "response": "Predict epitopes from a VariantCollection."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nscripts entry - point to predict neo - epitopes from genomic variants using Topiary.", "response": "def main(args_list=None):\n \"\"\"\n Script entry-point to predict neo-epitopes from genomic variants using\n Topiary.\n \"\"\"\n args = parse_args(args_list)\n print(\"Topiary commandline arguments:\")\n print(args)\n df = predict_epitopes_from_args(args)\n write_outputs(df, args)\n print(\"Total count: %d\" % len(df))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef render_povray(scene, filename='ipython', width=600, height=600,\n antialiasing=0.01, extra_opts={}):\n '''Render the scene with povray for publication.\n\n :param dict scene: The scene to render\n :param string filename: Output filename or 'ipython' to render in the notebook.\n :param int width: Width in pixels.\n :param int height: Height in pixels.\n :param dict extra_opts: Dictionary to merge/override with the passed scene.\n '''\n if not vapory_available:\n raise Exception(\"To render with povray, you need to have the vapory\"\n \" package installed.\")\n\n # Adding extra options\n scene = normalize_scene(scene)\n scene.update(extra_opts)\n\n # Camera target\n aspect = scene['camera']['aspect']\n up = np.dot(rmatrixquaternion(scene['camera']['quaternion']), [0, 1, 0])\n v_fov = scene['camera']['vfov'] / 180.0 * np.pi\n h_fov = 2.0 * np.arctan(np.tan(v_fov/2.0) * aspect) / np.pi * 180\n # Setup camera position\n camera = vp.Camera( 'location', scene['camera']['location'],\n 'direction', [0, 0, -1],\n 'sky', up,\n 'look_at', scene['camera']['target'],\n 'angle', h_fov )\n\n global_settings = []\n # Setup global illumination\n if scene.get('radiosity', False):\n # Global Illumination\n radiosity = vp.Radiosity(\n 'brightness', 2.0,\n 
'count', 100,\n 'error_bound', 0.15,\n 'gray_threshold', 0.0,\n 'low_error_factor', 0.2,\n 'minimum_reuse', 0.015,\n 'nearest_count', 10,\n 'recursion_limit', 1, #Docs say 1 is enough\n 'adc_bailout', 0.01,\n 'max_sample', 0.5,\n 'media off',\n 'normal off',\n 'always_sample', 1,\n 'pretrace_start', 0.08,\n 'pretrace_end', 0.01)\n\n light_sources = []\n global_settings.append(radiosity)\n else:\n # Lights\n light_sources = [\n vp.LightSource( np.array([2,4,-3]) * 1000, 'color', [1,1,1] ),\n vp.LightSource( np.array([-2,-4,3]) * 1000, 'color', [1,1,1] ),\n vp.LightSource( np.array([-1,2,3]) * 1000, 'color', [1,1,1] ),\n vp.LightSource( np.array([1,-2,-3]) * 1000, 'color', [1,1,1] )\n ]\n\n # Background -- white for now\n background = vp.Background([1, 1, 1])\n\n # Things to display\n stuff = _generate_objects(scene['representations'])\n\n scene = vp.Scene( camera, objects = light_sources + stuff + [background],\n global_settings=global_settings)\n\n return scene.render(filename, width=width, height=height,\n antialiasing = antialiasing)", "response": "Render the scene with povray for publication."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef rmatrixquaternion(q):\n assert np.allclose(math.sqrt(np.dot(q,q)), 1.0)\n\n x, y, z, w = q\n\n xx = x*x\n xy = x*y\n xz = x*z\n xw = x*w\n yy = y*y\n yz = y*z\n yw = y*w\n zz = z*z\n zw = z*w\n\n r00 = 1.0 - 2.0 * (yy + zz)\n r01 = 2.0 * (xy - zw)\n r02 = 2.0 * (xz + yw)\n\n r10 = 2.0 * (xy + zw)\n r11 = 1.0 - 2.0 * (xx + zz)\n r12 = 2.0 * (yz - xw)\n\n r20 = 2.0 * (xz - yw)\n r21 = 2.0 * (yz + xw)\n r22 = 1.0 - 2.0 * (xx + yy)\n\n R = np.array([[r00, r01, r02],\n [r10, r11, r12],\n [r20, r21, r22]], float)\n\n assert np.allclose(np.linalg.det(R), 1.0)\n return R", "response": "Create a rotation matrix from a quaternion rotation."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn a dictionary mapping Ensembl transcript IDs to 
FPKM expression values or None if neither Cufflinks tracking file nor StringTie GTF file were specified.", "response": "def rna_transcript_expression_dict_from_args(args):\n \"\"\"\n Returns a dictionary mapping Ensembl transcript IDs to FPKM expression\n values or None if neither Cufflinks tracking file nor StringTie GTF file\n were specified.\n \"\"\"\n if args.rna_transcript_fpkm_tracking_file:\n return load_cufflinks_fpkm_dict(args.rna_transcript_fpkm_tracking_file)\n elif args.rna_transcript_fpkm_gtf_file:\n return load_transcript_fpkm_dict_from_gtf(\n args.rna_transcript_fpkm_gtf_file)\n else:\n return None"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_total_pages(self):\n\n if self.total_elements is None or self.page_size is None:\n return None\n\n return math.ceil(float(self.total_elements) / float(self.page_size))", "response": "Returns the total number of pages on the server."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef has_more_pages(self):\n\n # if has_next property exists, it represents whether more pages exist\n if self.has_next is not None:\n return self.has_next\n\n # otherwise, try to compute whether or not more pages exist\n total_pages = self.get_total_pages()\n if self.page_number is None or total_pages is None:\n return None\n else:\n return self.page_number + 1 < total_pages", "response": "Returns True if there are more pages available on the server."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncreates a |Page| object from a dictionary.", "response": "def from_dict(page, content_type=None):\n \"\"\"\n Create a |Page| object from a dictionary. 
This method is intended for internal use, to construct a\n |Page| object from the body of a response json from a paginated endpoint.\n\n :param page: The dictionary.\n :param content_type: The class that the contents should be deserialized into.\n :return: The resulting |Page| object.\n \"\"\"\n\n result = Page(items=page.get('items'),\n page_number=page.get('pageNumber'),\n page_size=page.get('pageSize'),\n total_elements=page.get('totalElements'),\n has_next=page.get('hasNext'))\n\n if content_type is not None:\n if not issubclass(content_type, ModelBase):\n raise ValueError(\"'content_type' must be a subclass of ModelBase.\")\n\n result.items = [content_type.from_dict(item) for item in result.items]\n\n return result"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncreating a dictionary representation of the page.", "response": "def to_dict(self, remove_nones=False):\n \"\"\"\n Creates a dictionary representation of the page.\n\n :param remove_nones: Whether ``None`` values should be filtered out of the dictionary. Defaults to ``False``.\n :return: A dictionary representation of the page.\n \"\"\"\n\n items = []\n\n # attempt to replace each item with its dictionary representation if possible\n for item in self.items:\n if hasattr(item, 'to_dict'):\n items.append(item.to_dict(remove_nones=remove_nones))\n else:\n items.append(item)\n\n return {\n 'items': items,\n 'pageNumber': self.page_number,\n 'pageSize': self.page_size,\n 'totalElements': self.total_elements,\n 'hasNext': self.has_next\n }"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning a generator that returns each successive page from a paginated endpoint.", "response": "def get_page_generator(func, start_page=0, page_size=None):\n \"\"\"\n Constructs a generator for retrieving pages from a paginated endpoint. 
This method is intended for internal\n use.\n\n :param func: Should take parameters ``page_number`` and ``page_size`` and return the corresponding |Page| object.\n :param start_page: The page to start on.\n :param page_size: The size of each page.\n :return: A generator that generates each successive page.\n \"\"\"\n\n # initialize starting values\n page_number = start_page\n more_pages = True\n\n # continuously request the next page as long as more pages exist\n while more_pages:\n\n # get next page\n page = func(page_number=page_number, page_size=page_size)\n\n yield page\n\n # determine whether more pages exist\n more_pages = page.has_more_pages()\n page_number += 1"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns a dictionary representation of the object.", "response": "def to_dict(self, remove_nones=False):\n \"\"\"\n Creates a dictionary representation of the object.\n\n :param remove_nones: Whether ``None`` values should be filtered out of the dictionary. 
Defaults to ``False``.\n :return: The dictionary representation.\n \"\"\"\n\n if remove_nones:\n return {k: v for k, v in self.to_dict().items() if v is not None}\n else:\n raise NotImplementedError()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef to_dict(self, remove_nones=False):\n\n if remove_nones:\n report_dict = super().to_dict(remove_nones=True)\n else:\n report_dict = {\n 'title': self.title,\n 'reportBody': self.body,\n 'timeBegan': self.time_began,\n 'externalUrl': self.external_url,\n 'distributionType': self._get_distribution_type(),\n 'externalTrackingId': self.external_id,\n 'enclaveIds': self.enclave_ids,\n 'created': self.created,\n 'updated': self.updated,\n }\n\n # id field might not be present\n if self.id is not None:\n report_dict['id'] = self.id\n else:\n report_dict['id'] = None\n\n return report_dict", "response": "Returns a dictionary representation of the object."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncreates a report object from a dictionary.", "response": "def from_dict(cls, report):\n \"\"\"\n Create a report object from a dictionary. This method is intended for internal use, to construct a\n :class:`Report` object from the body of a response json. 
It expects the keys of the dictionary to match those\n of the json that would be found in a response to an API call such as ``GET /report/{id}``.\n\n :param report: The dictionary.\n :return: The report object.\n \"\"\"\n\n # determine distribution type\n distribution_type = report.get('distributionType')\n if distribution_type is not None:\n is_enclave = distribution_type.upper() != DistributionType.COMMUNITY\n else:\n is_enclave = None\n\n return Report(id=report.get('id'),\n title=report.get('title'),\n body=report.get('reportBody'),\n time_began=report.get('timeBegan'),\n external_id=report.get('externalTrackingId'),\n external_url=report.get('externalUrl'),\n is_enclave=is_enclave,\n enclave_ids=report.get('enclaveIds'),\n created=report.get('created'),\n updated=report.get('updated'))"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nmakes a json - serializable dictionary from input dictionary by converting non - serializable data types such as numpy arrays.", "response": "def serialize_to_dict(dictionary):\n '''Make a json-serializable dictionary from input dictionary by converting\n non-serializable data types such as numpy arrays.'''\n retval = {}\n \n for k, v in dictionary.items():\n if isinstance(v, dict):\n retval[k] = serialize_to_dict(v)\n else:\n # This is when custom serialization happens\n if isinstance(v, np.ndarray):\n if v.dtype == 'float64':\n # We don't support float64 on js side\n v = v.astype('float32')\n\n retval[k] = encode_numpy(v)\n else:\n retval[k] = v\n \n return retval"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns a dictionary of information about pkg and its recursive deps.", "response": "def make_graph(pkg):\n \"\"\"Returns a dictionary of information about pkg & its recursive deps.\n\n Given a string, which can be parsed as a requirement specifier, return a\n dictionary where each key is the name of pkg or one of its recursive\n 
dependencies, and each value is a dictionary returned by research_package.\n (No, it's not really a graph.)\n \"\"\"\n ignore = ['argparse', 'pip', 'setuptools', 'wsgiref']\n pkg_deps = recursive_dependencies(pkg_resources.Requirement.parse(pkg))\n\n dependencies = {key: {} for key in pkg_deps if key not in ignore}\n installed_packages = pkg_resources.working_set\n versions = {package.key: package.version for package in installed_packages}\n for package in dependencies:\n try:\n dependencies[package]['version'] = versions[package]\n except KeyError:\n warnings.warn(\"{} is not installed so we cannot compute \"\n \"resources for its dependencies.\".format(package),\n PackageNotInstalledWarning)\n dependencies[package]['version'] = None\n\n for package in dependencies:\n package_data = research_package(package, dependencies[package]['version'])\n dependencies[package].update(package_data)\n\n return OrderedDict(\n [(package, dependencies[package]) for package in sorted(dependencies.keys())]\n )"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nretrieving a report by its ID. Internal and external IDs are both allowed.", "response": "def get_report_details(self, report_id, id_type=None):\n \"\"\"\n Retrieves a report by its ID. Internal and external IDs are both allowed.\n\n :param str report_id: The ID of the incident report.\n :param str id_type: Indicates whether ID is internal or external.\n\n :return: The retrieved |Report| object.\n\n Example:\n\n >>> report = ts.get_report_details(\"1a09f14b-ef8c-443f-b082-9643071c522a\")\n >>> print(report)\n {\n \"id\": \"1a09f14b-ef8c-443f-b082-9643071c522a\",\n \"created\": 1515571633505,\n \"updated\": 1515620420062,\n \"reportBody\": \"Employee reported suspect email. 
We had multiple reports of suspicious email overnight ...\",\n \"title\": \"Phishing Incident\",\n \"enclaveIds\": [\n \"ac6a0d17-7350-4410-bc57-9699521db992\"\n ],\n \"distributionType\": \"ENCLAVE\",\n \"timeBegan\": 1479941278000\n }\n\n \"\"\"\n\n params = {'idType': id_type}\n resp = self._client.get(\"reports/%s\" % report_id, params=params)\n return Report.from_dict(resp.json())"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_reports_page(self, is_enclave=None, enclave_ids=None, tag=None, excluded_tags=None,\n from_time=None, to_time=None):\n \"\"\"\n Retrieves a page of reports, filtering by time window, distribution type, enclave association, and tag.\n The results are sorted by updated time.\n This method does not take ``page_number`` and ``page_size`` parameters. Instead, each successive page must be\n found by adjusting the ``from_time`` and ``to_time`` parameters.\n\n Note: This endpoint will only return reports from a time window of maximum size of 2 weeks. If you give a\n time window larger than 2 weeks, it will pull reports starting at 2 weeks before the \"to\" date, through the\n \"to\" date.\n\n :param boolean is_enclave: restrict reports to specific distribution type (optional - by default all accessible\n reports are returned).\n :param list(str) enclave_ids: list of enclave ids used to restrict reports to specific enclaves (optional - by\n default reports from all of user's enclaves are returned)\n :param list(str) tag: Name (or list of names) of tag(s) to filter reports by. 
Only reports containing\n ALL of these tags will be returned.\n :param list(str) excluded_tags: Reports containing ANY of these tags will be excluded from the results.\n :param int from_time: start of time window in milliseconds since epoch (optional)\n :param int to_time: end of time window in milliseconds since epoch (optional)\n\n :return: A |Page| of |Report| objects.\n\n \"\"\"\n\n distribution_type = None\n\n # explicitly compare to True and False to distinguish from None (which is treated as False in a conditional)\n if is_enclave:\n distribution_type = DistributionType.ENCLAVE\n elif not is_enclave:\n distribution_type = DistributionType.COMMUNITY\n\n if enclave_ids is None:\n enclave_ids = self.enclave_ids\n\n params = {\n 'from': from_time,\n 'to': to_time,\n 'distributionType': distribution_type,\n 'enclaveIds': enclave_ids,\n 'tags': tag,\n 'excludedTags': excluded_tags\n }\n resp = self._client.get(\"reports\", params=params)\n result = Page.from_dict(resp.json(), content_type=Report)\n\n # create a Page object from the dict\n return result", "response": "This endpoint returns a page of reports from the server."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef submit_report(self, report):\n\n # make distribution type default to \"enclave\"\n if report.is_enclave is None:\n report.is_enclave = True\n\n if report.enclave_ids is None:\n # use configured enclave_ids by default if distribution type is ENCLAVE\n if report.is_enclave:\n report.enclave_ids = self.enclave_ids\n # if distribution type is COMMUNITY, API still expects non-null list of enclaves\n else:\n report.enclave_ids = []\n\n if report.is_enclave and len(report.enclave_ids) == 0:\n raise Exception(\"Cannot submit a report of distribution type 'ENCLAVE' with an empty set of enclaves.\")\n\n # default time began is current time\n if report.time_began is None:\n report.set_time_began(datetime.now())\n\n data = 
json.dumps(report.to_dict())\n resp = self._client.post(\"reports\", data=data, timeout=60)\n\n # get report id from response body\n report_id = resp.content\n\n if isinstance(report_id, bytes):\n report_id = report_id.decode('utf-8')\n\n report.id = report_id\n\n return report", "response": "Submits a report to the user."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef update_report(self, report):\n\n # default to internal ID type if ID field is present\n if report.id is not None:\n id_type = IdType.INTERNAL\n report_id = report.id\n # if no ID field is present, but external ID field is, default to external ID type\n elif report.external_id is not None:\n id_type = IdType.EXTERNAL\n report_id = report.external_id\n # if no ID fields exist, raise exception\n else:\n raise Exception(\"Cannot update report without either an ID or an external ID.\")\n\n # not allowed to update value of 'reportId', so remove it\n report_dict = {k: v for k, v in report.to_dict().items() if k != 'reportId'}\n\n params = {'idType': id_type}\n\n data = json.dumps(report.to_dict())\n self._client.put(\"reports/%s\" % report_id, data=data, params=params)\n\n return report", "response": "Updates the report identified by the report. 
id field."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ndeletes the report with the given ID.", "response": "def delete_report(self, report_id, id_type=None):\n \"\"\"\n Deletes the report with the given ID.\n\n :param report_id: the ID of the report to delete\n :param id_type: indicates whether the ID is internal or an external ID provided by the user\n :return: the response object\n\n Example:\n\n >>> response = ts.delete_report(\"4d1fcaee-5009-4620-b239-2b22c3992b80\")\n \"\"\"\n\n params = {'idType': id_type}\n self._client.delete(\"reports/%s\" % report_id, params=params)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngets a list of the IDs of all TruSTAR reports that contain the searched indicators.", "response": "def get_correlated_report_ids(self, indicators):\n \"\"\"\n DEPRECATED!\n Retrieves a list of the IDs of all TruSTAR reports that contain the searched indicators.\n\n :param indicators: A list of indicator values to retrieve correlated reports for.\n :return: The list of IDs of reports that correlated.\n\n Example:\n\n >>> report_ids = ts.get_correlated_report_ids([\"wannacry\", \"www.evil.com\"])\n >>> print(report_ids)\n [\"e3bc6921-e2c8-42eb-829e-eea8da2d3f36\", \"4d04804f-ff82-4a0b-8586-c42aef2f6f73\"]\n \"\"\"\n\n params = {'indicators': indicators}\n resp = self._client.get(\"reports/correlate\", params=params)\n return resp.json()"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_correlated_reports_page(self, indicators, enclave_ids=None, is_enclave=True,\n page_size=None, page_number=None):\n \"\"\"\n Retrieves a page of all TruSTAR reports that contain the searched indicators.\n\n :param indicators: A list of indicator values to retrieve correlated reports for.\n :param enclave_ids: The enclaves to search in.\n :param is_enclave: Whether to search enclave reports or 
community reports.\n :param int page_number: the page number to get.\n :param int page_size: the size of the page to be returned.\n :return: The list of IDs of reports that correlated.\n\n Example:\n\n >>> reports = ts.get_correlated_reports_page([\"wannacry\", \"www.evil.com\"]).items\n >>> print([report.id for report in reports])\n [\"e3bc6921-e2c8-42eb-829e-eea8da2d3f36\", \"4d04804f-ff82-4a0b-8586-c42aef2f6f73\"]\n \"\"\"\n\n if is_enclave:\n distribution_type = DistributionType.ENCLAVE\n else:\n distribution_type = DistributionType.COMMUNITY\n\n params = {\n 'indicators': indicators,\n 'enclaveIds': enclave_ids,\n 'distributionType': distribution_type,\n 'pageNumber': page_number,\n 'pageSize': page_size\n }\n resp = self._client.get(\"reports/correlated\", params=params)\n\n return Page.from_dict(resp.json(), content_type=Report)", "response": "Retrieves a page of all TruSTAR reports that contain the searched indicators."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nsearch for reports containing a search term.", "response": "def search_reports_page(self, search_term=None,\n enclave_ids=None,\n from_time=None,\n to_time=None,\n tags=None,\n excluded_tags=None,\n page_size=None,\n page_number=None):\n \"\"\"\n Search for reports containing a search term.\n\n :param str search_term: The term to search for. If empty, no search term will be applied. Otherwise, must\n be at least 3 characters.\n :param list(str) enclave_ids: list of enclave ids used to restrict reports to specific enclaves (optional - by\n default reports from all of user's enclaves are returned)\n :param int from_time: start of time window in milliseconds since epoch (optional)\n :param int to_time: end of time window in milliseconds since epoch (optional)\n :param list(str) tags: Name (or list of names) of tag(s) to filter reports by. Only reports containing\n ALL of these tags will be returned. 
(optional)\n :param list(str) excluded_tags: Reports containing ANY of these tags will be excluded from the results.\n :param int page_number: the page number to get. (optional)\n :param int page_size: the size of the page to be returned.\n :return: a |Page| of |Report| objects. *NOTE*: The bodies of these reports will be ``None``.\n \"\"\"\n\n body = {\n 'searchTerm': search_term\n }\n\n params = {\n 'enclaveIds': enclave_ids,\n 'from': from_time,\n 'to': to_time,\n 'tags': tags,\n 'excludedTags': excluded_tags,\n 'pageSize': page_size,\n 'pageNumber': page_number\n }\n\n resp = self._client.post(\"reports/search\", params=params, data=json.dumps(body))\n page = Page.from_dict(resp.json(), content_type=Report)\n\n return page"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn a generator that returns each successive page.", "response": "def _get_reports_page_generator(self, is_enclave=None, enclave_ids=None, tag=None, excluded_tags=None,\n from_time=None, to_time=None):\n \"\"\"\n Creates a generator from the |get_reports_page| method that returns each successive page.\n\n :param boolean is_enclave: restrict reports to specific distribution type (optional - by default all accessible\n reports are returned).\n :param list(str) enclave_ids: list of enclave ids used to restrict reports to specific\n enclaves (optional - by default reports from all enclaves are returned)\n :param str tag: name of tag to filter reports by. if a tag with this name exists in more than one enclave\n indicated in ``enclave_ids``, the request will fail. 
handle this by making separate requests for each\n enclave ID if necessary.\n :param int from_time: start of time window in milliseconds since epoch\n :param int to_time: end of time window in milliseconds since epoch (optional, defaults to current time)\n :return: The generator.\n \"\"\"\n\n get_page = functools.partial(self.get_reports_page, is_enclave, enclave_ids, tag, excluded_tags)\n return get_time_based_page_generator(\n get_page=get_page,\n get_next_to_time=lambda x: x.items[-1].updated if len(x.items) > 0 else None,\n from_time=from_time,\n to_time=to_time\n )"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nuse the |get_reports_page| method to create a generator that returns each successive report as a trustar report object. :param boolean is_enclave: restrict reports to specific distribution type (optional - by default all accessible reports are returned). :param list(str) enclave_ids: list of enclave ids used to restrict reports to specific enclaves (optional - by default reports from all enclaves are returned) :param list(str) tag: a list of tags; only reports containing ALL of these tags will be returned. If a tag with this name exists in more than one enclave in the list passed as the ``enclave_ids`` argument, the request will fail. Handle this by making separate requests for each enclave ID if necessary. :param list(str) excluded_tags: a list of tags; reports containing ANY of these tags will not be returned. :param int from_time: start of time window in milliseconds since epoch (optional) :param int to_time: end of time window in milliseconds since epoch (optional) :return: A generator of Report objects. Note: If a report contains all of the tags in the list passed as argument to the 'tag' parameter and also contains any (1 or more) of the tags in the list passed as argument to the 'excluded_tags' parameter, that report will not be returned by this function. 
Example: >>> page = ts.get_reports(is_enclave=True, tag=\"malicious\", from_time=1425695711000, to_time=1514185311000) >>> for report in reports: print(report.id) '661583cb-a6a7-4cbd-8a90-01578fa4da89' 'da131660-2708-4c8a-926e-f91fb5dbbc62' '2e3400d6-fa37-4a8c-bc2f-155aaa02ae5a' '38064828-d3db-4fff-8ab8-e0e3b304ff44' 'dbf26104-cee5-4ca4-bdbf-a01d0178c007'", "response": "def get_reports(self, is_enclave=None, enclave_ids=None, tag=None, excluded_tags=None, from_time=None, to_time=None):\n \"\"\"\n Uses the |get_reports_page| method to create a generator that returns each successive report as a trustar\n report object.\n\n :param boolean is_enclave: restrict reports to specific distribution type (optional - by default all accessible\n reports are returned).\n :param list(str) enclave_ids: list of enclave ids used to restrict reports to specific\n enclaves (optional - by default reports from all enclaves are returned)\n :param list(str) tag: a list of tags; only reports containing ALL of these tags will be returned. \n If a tag with this name exists in more than one enclave in the list passed as the ``enclave_ids``\n argument, the request will fail. Handle this by making separate requests for each\n enclave ID if necessary.\n :param list(str) excluded_tags: a list of tags; reports containing ANY of these tags will not be returned. \n :param int from_time: start of time window in milliseconds since epoch (optional)\n :param int to_time: end of time window in milliseconds since epoch (optional)\n :return: A generator of Report objects.\n\n Note: If a report contains all of the tags in the list passed as argument to the 'tag' parameter and also \n contains any (1 or more) of the tags in the list passed as argument to the 'excluded_tags' parameter, that \n report will not be returned by this function. 
\n \n Example:\n\n >>> page = ts.get_reports(is_enclave=True, tag=\"malicious\", from_time=1425695711000, to_time=1514185311000)\n >>> for report in reports: print(report.id)\n '661583cb-a6a7-4cbd-8a90-01578fa4da89'\n 'da131660-2708-4c8a-926e-f91fb5dbbc62'\n '2e3400d6-fa37-4a8c-bc2f-155aaa02ae5a'\n '38064828-d3db-4fff-8ab8-e0e3b304ff44'\n 'dbf26104-cee5-4ca4-bdbf-a01d0178c007'\n\n \"\"\"\n\n return Page.get_generator(page_generator=self._get_reports_page_generator(is_enclave, enclave_ids, tag,\n excluded_tags, from_time, to_time))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _get_correlated_reports_page_generator(self, indicators, enclave_ids=None, is_enclave=True,\n start_page=0, page_size=None):\n \"\"\"\n Creates a generator from the |get_correlated_reports_page| method that returns each\n successive page.\n\n :param indicators: A list of indicator values to retrieve correlated reports for.\n :param enclave_ids:\n :param is_enclave:\n :return: The generator.\n \"\"\"\n\n get_page = functools.partial(self.get_correlated_reports_page, indicators, enclave_ids, is_enclave)\n return Page.get_page_generator(get_page, start_page, page_size)", "response": "Creates a generator from the |get_correlated_reports_page| method that returns each\n successive page."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_correlated_reports(self, indicators, enclave_ids=None, is_enclave=True):\n\n return Page.get_generator(page_generator=self._get_correlated_reports_page_generator(indicators,\n enclave_ids,\n is_enclave))", "response": "Uses the |get_correlated_reports_page| method to create a generator that returns each successive report."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncreates a generator from the |search_reports_page| method that returns each successive page.", "response": "def 
_search_reports_page_generator(self, search_term=None,\n enclave_ids=None,\n from_time=None,\n to_time=None,\n tags=None,\n excluded_tags=None,\n start_page=0,\n page_size=None):\n \"\"\"\n Creates a generator from the |search_reports_page| method that returns each successive page.\n\n :param str search_term: The term to search for. If empty, no search term will be applied. Otherwise, must\n be at least 3 characters.\n :param list(str) enclave_ids: list of enclave ids used to restrict reports to specific enclaves (optional - by\n default reports from all of user's enclaves are returned)\n :param int from_time: start of time window in milliseconds since epoch (optional)\n :param int to_time: end of time window in milliseconds since epoch (optional)\n :param list(str) tags: Name (or list of names) of tag(s) to filter reports by. Only reports containing\n ALL of these tags will be returned. (optional)\n :param list(str) excluded_tags: Reports containing ANY of these tags will be excluded from the results.\n :param int start_page: The page to start on.\n :param page_size: The size of each page.\n :return: The generator.\n \"\"\"\n\n get_page = functools.partial(self.search_reports_page, search_term, enclave_ids, from_time, to_time, tags,\n excluded_tags)\n return Page.get_page_generator(get_page, start_page, page_size)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nsearches for reports in the user s enclave.", "response": "def search_reports(self, search_term=None,\n enclave_ids=None,\n from_time=None,\n to_time=None,\n tags=None,\n excluded_tags=None):\n \"\"\"\n Uses the |search_reports_page| method to create a generator that returns each successive report.\n\n :param str search_term: The term to search for. If empty, no search term will be applied. 
Otherwise, must\n be at least 3 characters.\n :param list(str) enclave_ids: list of enclave ids used to restrict reports to specific enclaves (optional - by\n default reports from all of user's enclaves are returned)\n :param int from_time: start of time window in milliseconds since epoch (optional)\n :param int to_time: end of time window in milliseconds since epoch (optional)\n :param list(str) tags: Name (or list of names) of tag(s) to filter reports by. Only reports containing\n ALL of these tags will be returned. (optional)\n :param list(str) excluded_tags: Reports containing ANY of these tags will be excluded from the results.\n :return: The generator of Report objects. Note that the body attributes of these reports will be ``None``.\n \"\"\"\n\n return Page.get_generator(page_generator=self._search_reports_page_generator(search_term, enclave_ids,\n from_time, to_time, tags,\n excluded_tags))"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef parse_boolean(value):\n\n if value is None:\n return None\n\n if isinstance(value, bool):\n return value\n\n if isinstance(value, string_types):\n value = value.lower()\n if value == 'false':\n return False\n if value == 'true':\n return True\n\n raise ValueError(\"Could not convert value to boolean: {}\".format(value))", "response": "Coerce a value to boolean."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef config_from_file(config_file_path, config_role):\n\n # read config file depending on filetype, parse into dictionary\n ext = os.path.splitext(config_file_path)[-1]\n if ext in ['.conf', '.ini']:\n config_parser = configparser.RawConfigParser()\n config_parser.read(config_file_path)\n roles = dict(config_parser)\n elif ext in ['.json', '.yml', '.yaml']:\n with open(config_file_path, 'r') as f:\n roles = yaml.safe_load(f)\n else:\n raise IOError(\"Unrecognized filetype for config file '%s'\" % 
config_file_path)\n\n # ensure that config file has indicated role\n if config_role in roles:\n config = dict(roles[config_role])\n else:\n raise KeyError(\"Could not find role %s\" % config_role)\n\n # parse enclave ids\n if 'enclave_ids' in config:\n # if id has all numeric characters, will be parsed as an int, so convert to string\n if isinstance(config['enclave_ids'], int):\n config['enclave_ids'] = str(config['enclave_ids'])\n # split comma separated list if necessary\n if isinstance(config['enclave_ids'], string_types):\n config['enclave_ids'] = config['enclave_ids'].split(',')\n elif not isinstance(config['enclave_ids'], list):\n raise Exception(\"'enclave_ids' must be a list or a comma-separated list\")\n # strip out whitespace\n config['enclave_ids'] = [str(x).strip() for x in config['enclave_ids'] if x is not None]\n else:\n # default to empty list\n config['enclave_ids'] = []\n\n return config", "response": "Create a configuration dictionary from a config file."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_version(self):\n\n result = self._client.get(\"version\").content\n\n if isinstance(result, bytes):\n result = result.decode('utf-8')\n\n return result.strip('\\n')", "response": "Get the version number of the API."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngets the list of user enclaves that the user has access to.", "response": "def get_user_enclaves(self):\n \"\"\"\n Gets the list of enclaves that the user has access to.\n\n :return: A list of |EnclavePermissions| objects, each representing an enclave and whether the requesting user\n has read, create, and update access to it.\n \"\"\"\n\n resp = self._client.get(\"enclaves\")\n return [EnclavePermissions.from_dict(enclave) for enclave in resp.json()]"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_request_quotas(self):\n\n resp = 
self._client.get(\"request-quotas\")\n        return [RequestQuota.from_dict(quota) for quota in resp.json()]", "response": "Gets the request quotas for the user's company."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ninitializes logging configuration to defaults.", "response": "def configure_logging():\n    \"\"\"\n    Initialize logging configuration to defaults. If the environment variable DISABLE_TRUSTAR_LOGGING is set to true,\n    this will be ignored.\n    \"\"\"\n\n    if not parse_boolean(os.environ.get('DISABLE_TRUSTAR_LOGGING')):\n\n        # configure\n        dictConfig(DEFAULT_LOGGING_CONFIG)\n\n        # construct error logger\n        error_logger = logging.getLogger(\"error\")\n\n        # log all uncaught exceptions\n        def log_exception(exc_type, exc_value, exc_traceback):\n            error_logger.error(\"Uncaught exception\", exc_info=(exc_type, exc_value, exc_traceback))\n\n        # register logging function as exception hook\n        sys.excepthook = log_exception"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef from_dict(cls, enclave):\n\n        return Enclave(id=enclave.get('id'),\n                       name=enclave.get('name'),\n                       type=EnclaveType.from_string(enclave.get('type')))", "response": "Create an enclave object from a dictionary."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef to_dict(self, remove_nones=False):\n\n        if remove_nones:\n            return super().to_dict(remove_nones=True)\n\n        return {\n            'id': self.id,\n            'name': self.name,\n            'type': self.type\n        }", "response": "Returns a dictionary representation of the object."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef from_dict(cls, d):\n\n        enclave = super(cls, EnclavePermissions).from_dict(d)\n        enclave_permissions = cls.from_enclave(enclave)\n\n        enclave_permissions.read = d.get('read')\n        enclave_permissions.create = d.get('create')\n        enclave_permissions.update = d.get('update')\n\n        return enclave_permissions", "response": "Create an EnclavePermissions object from a dictionary."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef to_dict(self, remove_nones=False):\n\n        d = super().to_dict(remove_nones=remove_nones)\n\n        d.update({\n            'read': self.read,\n            'create': self.create,\n            'update': self.update\n        })\n\n        return d", "response": "Returns a dictionary representation of the enclave."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef from_enclave(cls, enclave):\n\n        return EnclavePermissions(id=enclave.id,\n                                  name=enclave.name,\n                                  type=enclave.type)", "response": "Create an |EnclavePermissions| object from an |Enclave| object."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nretrieves all enclave tags present in a specific report.", "response": "def get_enclave_tags(self, report_id, id_type=None):\n        \"\"\"\n        Retrieves all enclave tags present in a specific report.\n\n        :param report_id: the ID of the report\n        :param id_type: indicates whether the ID is internal or an external ID provided by the user\n        :return: A list of |Tag| objects.\n        \"\"\"\n\n        params = {'idType': id_type}\n        resp = self._client.get(\"reports/%s/tags\" % report_id, params=params)\n        return [Tag.from_dict(indicator) for indicator in resp.json()]"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nadding a tag to a specific report for a specific enclave.", "response": "def add_enclave_tag(self, report_id, name, enclave_id, id_type=None):\n        \"\"\"\n        Adds a tag to a specific report, for a specific enclave.\n\n        :param report_id: The ID of the report\n        :param name: The name of the tag to be added\n        :param enclave_id: ID of the enclave where the tag will be added\n        :param id_type: indicates whether the ID is internal or an external ID provided by the user\n        :return: The 
ID of the tag that was created.\n \"\"\"\n\n params = {\n 'idType': id_type,\n 'name': name,\n 'enclaveId': enclave_id\n }\n resp = self._client.post(\"reports/%s/tags\" % report_id, params=params)\n return str(resp.content)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef delete_enclave_tag(self, report_id, tag_id, id_type=None):\n\n params = {\n 'idType': id_type\n }\n self._client.delete(\"reports/%s/tags/%s\" % (report_id, tag_id), params=params)", "response": "Deletes a tag from a specific report in a specific enclave."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_all_enclave_tags(self, enclave_ids=None):\n\n params = {'enclaveIds': enclave_ids}\n resp = self._client.get(\"reports/tags\", params=params)\n return [Tag.from_dict(indicator) for indicator in resp.json()]", "response": "Retrieves all tags present in the given enclaves."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef add_indicator_tag(self, indicator_value, name, enclave_id):\n\n data = {\n 'value': indicator_value,\n 'tag': {\n 'name': name,\n 'enclaveId': enclave_id\n }\n }\n\n resp = self._client.post(\"indicators/tags\", data=json.dumps(data))\n return Tag.from_dict(resp.json())", "response": "Adds a tag to a specific indicator for a specific enclave."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef delete_indicator_tag(self, indicator_value, tag_id):\n\n params = {\n 'value': indicator_value\n }\n\n self._client.delete(\"indicators/tags/%s\" % tag_id, params=params)", "response": "Deletes a tag from an indicator in a specific enclave."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncreates a Tag object from a dictionary.", "response": "def from_dict(cls, tag):\n 
\"\"\"\n Create a tag object from a dictionary. This method is intended for internal use, to construct a\n :class:`Tag` object from the body of a response json. It expects the keys of the dictionary to match those\n of the json that would be found in a response to an API call such as ``GET /enclave-tags``.\n\n :param tag: The dictionary.\n :return: The :class:`Tag` object.\n \"\"\"\n\n return Tag(name=tag.get('name'),\n id=tag.get('guid'),\n enclave_id=tag.get('enclaveId'))"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef to_dict(self, remove_nones=False):\n\n if remove_nones:\n d = super().to_dict(remove_nones=True)\n else:\n d = {\n 'name': self.name,\n 'id': self.id,\n 'enclaveId': self.enclave_id\n }\n\n return d", "response": "Creates a dictionary representation of the tag."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ninfer the most likely delimiter from a file.", "response": "def infer_delimiter(filename, comment_char=\"#\", n_lines=3):\n \"\"\"\n Given a file which contains data separated by one of the following:\n - commas\n - tabs\n - spaces\n Return the most likely separator by sniffing the first few lines\n of the file's contents.\n \"\"\"\n lines = []\n with open(filename, \"r\") as f:\n for line in f:\n if line.startswith(comment_char):\n continue\n if len(lines) < n_lines:\n lines.append(line)\n else:\n break\n if len(lines) < n_lines:\n raise ValueError(\n \"Not enough lines in %s to infer delimiter\" % filename)\n candidate_delimiters = [\"\\t\", \",\", \"\\s+\"]\n for candidate_delimiter in candidate_delimiters:\n counts = [len(re.split(candidate_delimiter, line)) for line in lines]\n first_line_count = counts[0]\n if all(c == first_line_count for c in counts) and first_line_count > 1:\n return candidate_delimiter\n raise ValueError(\"Could not determine delimiter for %s\" % filename)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function 
in Python 3, explain what it does\ndef check_required_columns(df, filename, required_columns):\n available_columns = set(df.columns)\n for column_name in required_columns:\n if column_name not in available_columns:\n raise ValueError(\"FPKM tracking file %s missing column '%s'\" % (\n filename,\n column_name))", "response": "Checks that all required columns are present in the given dataframe."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef topology_mdtraj(traj):\n '''Generate topology spec for the MolecularViewer from mdtraj.\n\n :param mdtraj.Trajectory traj: the trajectory\n :return: A chemview-compatible dictionary corresponding to the topology defined in mdtraj.\n\n '''\n import mdtraj as md\n\n top = {}\n top['atom_types'] = [a.element.symbol for a in traj.topology.atoms]\n top['atom_names'] = [a.name for a in traj.topology.atoms]\n top['bonds'] = [(a.index, b.index) for a, b in traj.topology.bonds]\n top['secondary_structure'] = md.compute_dssp(traj[0])[0]\n top['residue_types'] = [r.name for r in traj.topology.residues ]\n top['residue_indices'] = [ [a.index for a in r.atoms] for r in traj.topology.residues ]\n\n return top", "response": "Generate the MolecularViewer topology from the mdtraj. Trajectory object."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nencode a numpy array as a base64 encoded string to be JSON serialized.", "response": "def encode_numpy(array):\n '''Encode a numpy array as a base64 encoded string, to be JSON serialized. 
\n\n :return: a dictionary containing the fields:\n - *data*: the base64 string\n - *type*: the array type\n - *shape*: the array shape\n\n '''\n return {'data' : base64.b64encode(array.data).decode('utf8'),\n 'type' : array.dtype.name,\n 'shape': array.shape}"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nloads a Cufflinks tracking file and returns a DataFrame containing the FPKM data.", "response": "def load_cufflinks_dataframe(\n filename,\n id_column=ID_COLUMN,\n fpkm_column=FPKM_COLUMN,\n status_column=STATUS_COLUMN,\n locus_column=LOCUS_COLUMN,\n gene_names_column=GENE_NAMES_COLUMN,\n drop_failed=True,\n drop_lowdata=False,\n drop_hidata=True,\n replace_hidata_fpkm_value=None,\n drop_nonchromosomal_loci=False,\n drop_novel=False,\n sep=None):\n \"\"\"\n Loads a Cufflinks tracking file, which contains expression levels\n (in FPKM: Fragments Per Kilobase of transcript per Million fragments)\n for transcript isoforms or whole genes. These transcripts/genes may be\n previously known (in which case they have an Ensembl ID) or a novel\n assembly from the RNA-Seq data (in which case their IDs look like \"CUFF.1\")\n\n Parameters\n ----------\n\n filename : str\n Filename of tracking file e.g. \"genes.tracking_fpkm\"\n\n id_column : str, optional\n\n fpkm_column : str, optional\n\n status_column : str, optional\n Name of column which indicates the FPKM estimate status. The column\n name is typically \"FPKM_status\". 
Possible values contained within this column\n        will be OK, FAIL, LOWDATA, HIDATA.\n\n    locus_column : str, optional\n\n    gene_names_column : str, optional\n\n    drop_failed : bool, optional\n        Drop rows whose FPKM status is \"FAIL\" (default=True)\n\n    drop_lowdata : bool, optional\n        Drop rows whose FPKM status is \"LOWDATA\", meaning that Cufflinks thought\n        there were too few reads to accurately estimate the FPKM (default=False)\n\n    drop_hidata : bool, optional\n        Drop rows whose FPKM status is \"HIDATA\", meaning that too many\n        fragments aligned to a feature for Cufflinks to process. Dropping\n        the most expressed genes seems like a stupid idea, so consider\n        passing False (the signature default is True).\n\n    replace_hidata_fpkm_value : float, optional\n        If drop_hidata=False, the HIDATA entries will still have an FPKM=0.0,\n        this argument lets you replace the FPKM with some known constant.\n\n    drop_nonchromosomal_loci : bool, optional\n        Drop rows whose location isn't on a canonical chromosome\n        i.e. doesn't start with \"chr\" (default=False)\n\n    drop_novel : bool, optional\n        Drop genes or isoforms that aren't found in Ensembl (default = False)\n\n    sep : str, optional\n        Separator between data fields in the FPKM tracking file\n        (default is to infer whether the file uses comma or whitespace)\n\n    Returns DataFrame with columns:\n        id : str\n        novel : bool\n        fpkm : float\n        chr : str\n        start : int\n        end : int\n        gene_names : str list\n    \"\"\"\n    if sep is None:\n        sep = infer_delimiter(filename)\n\n    df = pd.read_csv(filename, sep=sep, engine=\"c\")\n\n    required_columns = {\n        status_column,\n        locus_column,\n        id_column,\n        gene_names_column,\n        fpkm_column\n    }\n    check_required_columns(df, filename, required_columns)\n\n    for flag, status_value in [\n            (drop_failed, \"FAIL\"),\n            (drop_lowdata, \"LOWDATA\"),\n            (drop_hidata, \"HIDATA\")]:\n        mask = df[status_column] == status_value\n        mask_count = mask.sum()\n        total_count = len(df)\n        if flag and mask_count > 0:\n            verb_str = \"Dropping\"\n            df = df[~mask]\n        else:\n            verb_str = \"Keeping\"\n        
logging.info(\n \"%s %d/%d entries from %s with status=%s\",\n verb_str,\n mask_count,\n total_count,\n filename,\n status_value)\n\n if drop_nonchromosomal_loci:\n loci = df[locus_column]\n chromosomal_loci = loci.str.startswith(\"chr\")\n n_dropped = (~chromosomal_loci).sum()\n if n_dropped > 0:\n logging.info(\"Dropping %d/%d non-chromosomal loci from %s\" % (\n n_dropped, len(df), filename))\n df = df[chromosomal_loci]\n\n if replace_hidata_fpkm_value:\n hidata_mask = df[status_column] == \"HIDATA\"\n n_hidata = hidata_mask.sum()\n logging.info(\n \"Setting FPKM=%s for %d/%d entries with status=HIDATA\",\n replace_hidata_fpkm_value,\n n_hidata,\n len(df))\n df[fpkm_column][hidata_mask] = replace_hidata_fpkm_value\n\n if len(df) == 0:\n raise ValueError(\"Empty FPKM tracking file: %s\" % filename)\n\n ids = df[id_column]\n known = ids.str.startswith(\"ENS\")\n\n if known.sum() == 0:\n raise ValueError(\"No Ensembl IDs found in %s\" % filename)\n\n if drop_novel:\n n_dropped = (~known).sum()\n if n_dropped > 0:\n logging.info(\n \"Dropping %d/%d novel entries from %s\",\n n_dropped,\n len(df),\n filename)\n df = df[known]\n known = np.ones(len(df), dtype='bool')\n\n loci = df[locus_column]\n chromosomes, starts, ends = parse_locus_column(df[locus_column])\n\n # gene names are given either as \"-\" or a comma separated list\n # e.g. 
\"BRAF1,PFAM2\"\n gene_names_strings = df[gene_names_column].copy()\n gene_names_strings[gene_names_strings == \"-\"] = \"\"\n # split each entry into a list of zero or more strings\n gene_names_lists = gene_names_strings.str.split(\",\")\n\n return pd.DataFrame({\n \"id\": df[id_column],\n \"novel\": ~known,\n \"fpkm\": df[fpkm_column],\n \"chr\": chromosomes,\n \"start\": starts,\n \"end\": ends,\n \"gene_names\": gene_names_lists\n })"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nloads a Cufflinks table into a dictionary.", "response": "def load_cufflinks_dict(*args, **kwargs):\n \"\"\"\n Returns dictionary mapping feature identifier (either transcript or gene ID)\n to a DataFrame row with fields:\n id : str\n novel : bool\n fpkm : float\n chr : str\n start : int\n end : int\n gene_names : str list\n \"\"\"\n return {\n row.id: row\n for (_, row)\n in load_cufflinks_dataframe(*args, **kwargs).iterrows()\n }"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef load_cufflinks_fpkm_dict(*args, **kwargs):\n return {\n row.id: row.fpkm\n for (_, row)\n in load_cufflinks_dataframe(*args, **kwargs).iterrows()\n }", "response": "Load a Cufflinks FPKM expression value dictionary."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef enable_notebook(verbose=0):\n libs = ['objexporter.js',\n 'ArcballControls.js', 'filesaver.js',\n 'base64-arraybuffer.js', 'context.js',\n 'chemview.js', 'three.min.js', 'jquery-ui.min.js',\n 'context.standalone.css', 'chemview_widget.js',\n 'trajectory_controls_widget.js', \"layout_widget.js\",\n \"components/jquery-fullscreen/jquery.fullscreen.js\",\n 'scales.js']\n fns = [resource_filename('chemview', os.path.join('static', f)) for f in libs]\n\n [install_nbextension(fn, verbose=verbose, overwrite=True, user=True) for fn in fns]", "response": "Enable IPython notebook widgets to be 
displayed."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nextract text from a pdf file", "response": "def extract_pdf(file_name):\n \"\"\"\n Extract text from a pdf file\n :param file_name path to pdf to read\n :return text from pdf\n \"\"\"\n\n rsrcmgr = pdfminer.pdfinterp.PDFResourceManager()\n sio = StringIO()\n laparams = LAParams()\n device = TextConverter(rsrcmgr, sio, codec='utf-8', laparams=laparams)\n interpreter = pdfminer.pdfinterp.PDFPageInterpreter(rsrcmgr, device)\n\n # Extract text from pdf file\n with open(file_name, 'rb') as fp:\n for page in PDFPage.get_pages(fp, maxpages=20):\n interpreter.process_page(page)\n\n text = sio.getvalue()\n\n # Cleanup\n device.close()\n sio.close()\n\n return text"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nextract text from a file", "response": "def process_file(source_file):\n \"\"\"\n Extract text from a file (pdf, txt, eml, csv, json)\n :param source_file path to file to read\n :return text from file\n \"\"\"\n if source_file.endswith(('.pdf', '.PDF')):\n txt = extract_pdf(source_file)\n elif source_file.endswith(('.txt', '.eml', '.csv', '.json')):\n with open(source_file, 'r') as f:\n txt = f.read()\n else:\n logger.info(\"Unsupported file extension for file {}\".format(source_file))\n return \"\"\n return txt"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef from_string(cls, string):\n\n # find enum value\n for attr in dir(cls):\n value = getattr(cls, attr)\n if value == string:\n return value\n\n # if not found, log warning and return the value passed in\n logger.warning(\"{} is not a valid enum value for {}.\".format(string, cls.__name__))\n return string", "response": "Returns the value passed in as string."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nvalidates the given parameter value value.", "response": "def validate(self, value):\n \"\"\"\n 
Validate (and possibly typecast) the given parameter value.\n\n        :param value: Parameter value\n        :return: Typecast parameter value\n        :raises ValidationErrors: if there were validation errors\n        \"\"\"\n        errors = []\n        value = self._validate_type(value, errors)\n        self._validate_value(value, errors)\n\n        if errors:\n            raise ValidationErrors(errors)\n\n        return value"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef format_cli(self, value):\n        if value is None or (self.type == 'flag' and not value):\n            return None\n        pass_as_bits = text_type(self.pass_as or self.default_pass_as).split()\n        env = dict(name=self.name, value=value, v=value)\n        return [bit.format(**env) for bit in pass_as_bits]", "response": "Build a single parameter argument that can be used as a CLI string."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nwrapping the given value into a list.", "response": "def listify(value):\n    \"\"\"\n    Wrap the given value into a list, with the below provisions:\n\n    * If the value is a list or a tuple, it's coerced into a new list.\n    * If the value is None, an empty list is returned.\n    * Otherwise, a single-element list is returned, containing the value.\n\n    :param value: A value.\n    :return: a list!\n    :rtype: list\n    \"\"\"\n    if value is None:\n        return []\n    if isinstance(value, (list, tuple)):\n        return list(value)\n    return [value]"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nbuild a list of shell commands from a command line.", "response": "def build_command(command, parameter_map):\n    \"\"\"\n    Build command line(s) using the given parameter map.\n\n    Even if passed a single `command`, this function will return a list\n    of shell commands. 
It is the caller's responsibility to concatenate them,\n likely using the semicolon or double ampersands.\n\n :param command: The command to interpolate params into.\n :type command: str|list[str]\n :param parameter_map: A ParameterMap object containing parameter knowledge.\n :type parameter_map: valohai_yaml.objs.parameter_map.ParameterMap\n\n :return: list of commands\n :rtype: list[str]\n \"\"\"\n\n if isinstance(parameter_map, list): # Partially emulate old (pre-0.7) API for this function.\n parameter_map = LegacyParameterMap(parameter_map)\n\n out_commands = []\n for command in listify(command):\n # Only attempt formatting if the string smells like it should be formatted.\n # This allows the user to include shell syntax in the commands, if required.\n # (There's still naturally the chance for false-positives, so guard against\n # those value errors and warn about them.)\n\n if interpolable_re.search(command):\n try:\n command = interpolable_re.sub(\n lambda match: _replace_interpolation(parameter_map, match),\n command,\n )\n except ValueError as exc: # pragma: no cover\n warnings.warn(\n 'failed to interpolate into %r: %s' % (command, exc),\n CommandInterpolationWarning\n )\n out_commands.append(command.strip())\n return out_commands"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef validate(yaml, raise_exc=True):\n data = read_yaml(yaml)\n validator = get_validator()\n # Nb: this uses a list instead of being a generator function in order to be\n # easier to call correctly. 
(Were it a generator function, a plain\n # `validate(..., raise_exc=True)` would not do anything.\n errors = list(validator.iter_errors(data))\n if errors and raise_exc:\n raise ValidationErrors(errors)\n return errors", "response": "Validate the given YAML document and return a list of errors encountered."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nbuilds the CLI command line from the parameter values.", "response": "def build_parameters(self):\n \"\"\"\n Build the CLI command line from the parameter values.\n\n :return: list of CLI strings -- not escaped!\n :rtype: list[str]\n \"\"\"\n param_bits = []\n for name in self.parameters:\n param_bits.extend(self.build_parameter_by_name(name) or [])\n return param_bits"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef parse(cls, data):\n parsers = {\n 'step': ([], Step.parse),\n 'endpoint': ([], Endpoint.parse),\n }\n for datum in data:\n assert isinstance(datum, dict)\n for type, (items, parse) in parsers.items():\n if type in datum:\n items.append(parse(datum[type]))\n break\n else:\n raise ValueError('No parser for {0}'.format(datum))\n inst = cls(\n steps=parsers['step'][0],\n endpoints=parsers['endpoint'][0],\n )\n inst._original_data = data\n return inst", "response": "Parse a Config structure out of a Python dict."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_step_by(self, **kwargs):\n if not kwargs:\n return None\n for index, step in enumerate(self.steps.values()):\n extended_step = dict(step.serialize(), index=index)\n # check if kwargs is a subset of extended_step\n if all(item in extended_step.items() for item in kwargs.items()):\n return step\n return None", "response": "Get the first step that matches all the passed named arguments."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nparses the given YAML data 
into a Config object.", "response": "def parse(yaml, validate=True):\n \"\"\"\n Parse the given YAML data into a `Config` object, optionally validating it first.\n\n :param yaml: YAML data (either a string, a stream, or pre-parsed Python dict/list)\n :type yaml: list|dict|str|file\n :param validate: Whether to validate the data before attempting to parse it.\n :type validate: bool\n :return: Config object\n :rtype: valohai_yaml.objs.Config\n \"\"\"\n data = read_yaml(yaml)\n if validate: # pragma: no branch\n from .validation import validate\n validate(data, raise_exc=True)\n return Config.parse(data)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nget a dict mapping parameter names to their default values.", "response": "def get_parameter_defaults(self, include_flags=True):\n \"\"\"\n Get a dict mapping parameter names to their defaults (if set).\n :rtype: dict[str, object]\n \"\"\"\n return {\n name: parameter.default\n for (name, parameter)\n in self.parameters.items()\n if parameter.default is not None and (include_flags or parameter.type != 'flag')\n }"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef build_command(self, parameter_values, command=None):\n command = (command or self.command)\n\n # merge defaults with passed values\n # ignore flag default values as they are special\n # undefined flag will remain undefined regardless of default value\n values = dict(self.get_parameter_defaults(include_flags=False), **parameter_values)\n\n parameter_map = ParameterMap(parameters=self.parameters, values=values)\n return build_command(command, parameter_map)", "response": "Builds a command for this step using the given parameter values."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nvalidate & lint file_path and return a LintResult object.", "response": "def lint_file(file_path):\n \"\"\"\n Validate & lint `file_path` and return a 
LintResult.\n\n :param file_path: YAML filename\n :type file_path: str\n :return: LintResult object\n \"\"\"\n\n with open(file_path, 'r') as yaml:\n try:\n return lint(yaml)\n except Exception as e:\n lr = LintResult()\n lr.add_error('could not parse YAML: %s' % e, exception=e)\n return lr"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _re_flatten(p):\n ''' Turn all capturing groups in a regular expression pattern into\n non-capturing groups. '''\n if '(' not in p: return p\n return re.sub(r'(\\\\*)(\\(\\?P<[^>]+>|\\((?!\\?))',\n lambda m: m.group(0) if len(m.group(1)) % 2 else m.group(1) + '(?:', p)", "response": "Turn all capturing groups in a regular expression pattern into\n non - capturing groups."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\naborts execution and causes a 303 or 302 redirect.", "response": "def redirect(url, code=None):\n \"\"\" Aborts execution and causes a 303 or 302 redirect, depending on\n the HTTP protocol version. \"\"\"\n if not code:\n code = 303 if request.get('SERVER_PROTOCOL') == \"HTTP/1.1\" else 302\n res = response.copy(cls=HTTPResponse)\n res.status = code\n res.body = \"\"\n res.set_header('Location', urljoin(request.url, url))\n raise res"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _file_iter_range(fp, offset, bytes, maxread=1024*1024):\n ''' Yield chunks from a range in a file. 
No chunk is bigger than maxread.'''\n fp.seek(offset)\n while bytes > 0:\n part = fp.read(min(bytes, maxread))\n if not part: break\n bytes -= len(part)\n yield part", "response": "Yields a range of bytes from a file."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef static_file(filename, root, mimetype='auto', download=False, charset='UTF-8'):\n\n root = os.path.abspath(root) + os.sep\n filename = os.path.abspath(os.path.join(root, filename.strip('/\\\\')))\n headers = dict()\n\n if not filename.startswith(root):\n return HTTPError(403, \"Access denied.\")\n if not os.path.exists(filename) or not os.path.isfile(filename):\n return HTTPError(404, \"File does not exist.\")\n if not os.access(filename, os.R_OK):\n return HTTPError(403, \"You do not have permission to access this file.\")\n\n if mimetype == 'auto':\n mimetype, encoding = mimetypes.guess_type(filename)\n if encoding: headers['Content-Encoding'] = encoding\n\n if mimetype:\n if mimetype[:5] == 'text/' and charset and 'charset' not in mimetype:\n mimetype += '; charset=%s' % charset\n headers['Content-Type'] = mimetype\n\n if download:\n download = os.path.basename(filename if download == True else download)\n headers['Content-Disposition'] = 'attachment; filename=\"%s\"' % download\n\n stats = os.stat(filename)\n headers['Content-Length'] = clen = stats.st_size\n lm = time.strftime(\"%a, %d %b %Y %H:%M:%S GMT\", time.gmtime(stats.st_mtime))\n headers['Last-Modified'] = lm\n\n ims = request.environ.get('HTTP_IF_MODIFIED_SINCE')\n if ims:\n ims = parse_date(ims.split(\";\")[0].strip())\n if ims is not None and ims >= int(stats.st_mtime):\n headers['Date'] = time.strftime(\"%a, %d %b %Y %H:%M:%S GMT\", time.gmtime())\n return HTTPResponse(status=304, **headers)\n\n body = '' if request.method == 'HEAD' else open(filename, 'rb')\n\n headers[\"Accept-Ranges\"] = \"bytes\"\n ranges = request.environ.get('HTTP_RANGE')\n if 'HTTP_RANGE' in request.environ:\n ranges 
= list(parse_range_header(request.environ['HTTP_RANGE'], clen))\n if not ranges:\n return HTTPError(416, \"Requested Range Not Satisfiable\")\n offset, end = ranges[0]\n headers[\"Content-Range\"] = \"bytes %d-%d/%d\" % (offset, end-1, clen)\n headers[\"Content-Length\"] = str(end-offset)\n if body: body = _file_iter_range(body, offset, end-offset)\n return HTTPResponse(body, status=206, **headers)\n return HTTPResponse(body, **headers)", "response": "Open a file in a safe way and return a response object."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nchanging the debug level.", "response": "def debug(mode=True):\n \"\"\" Change the debug level.\n There is only one debug level supported at the moment.\"\"\"\n global DEBUG\n if mode: warnings.simplefilter('default')\n DEBUG = bool(mode)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nparse RFC 2617 HTTP authentication header string and return user and pass tuple or None", "response": "def parse_auth(header):\n \"\"\" Parse rfc2617 HTTP authentication header string (basic) and return (user,pass) tuple or None\"\"\"\n try:\n method, data = header.split(None, 1)\n if method.lower() == 'basic':\n user, pwd = touni(base64.b64decode(tob(data))).split(':',1)\n return user, pwd\n except (KeyError, ValueError):\n return None"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nparses a range header.", "response": "def parse_range_header(header, maxlen=0):\n ''' Yield (start, end) ranges parsed from a HTTP Range header. Skip\n unsatisfiable ranges. 
The end index is non-inclusive.'''\n if not header or header[:6] != 'bytes=': return\n ranges = [r.split('-', 1) for r in header[6:].split(',') if '-' in r]\n for start, end in ranges:\n try:\n if not start: # bytes=-100 -> last 100 bytes\n start, end = max(0, maxlen-int(end)), maxlen\n elif not end: # bytes=100- -> all but the first 99 bytes\n start, end = int(start), maxlen\n else: # bytes=100-200 -> bytes 100-200 (inclusive)\n start, end = int(start), min(int(end)+1, maxlen)\n if 0 <= start < end <= maxlen:\n yield start, end\n except ValueError:\n pass"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nadds a new rule or replace the target for an existing rule.", "response": "def add(self, rule, method, target, name=None):\n ''' Add a new rule or replace the target for an existing rule. '''\n anons = 0 # Number of anonymous wildcards found\n keys = [] # Names of keys\n pattern = '' # Regular expression pattern with named groups\n filters = [] # Lists of wildcard input filters\n builder = [] # Data structure for the URL builder\n is_static = True\n\n for key, mode, conf in self._itertokens(rule):\n if mode:\n is_static = False\n if mode == 'default': mode = self.default_filter\n mask, in_filter, out_filter = self.filters[mode](conf)\n if not key:\n pattern += '(?:%s)' % mask\n key = 'anon%d' % anons\n anons += 1\n else:\n pattern += '(?P<%s>%s)' % (key, mask)\n keys.append(key)\n if in_filter: filters.append((key, in_filter))\n builder.append((key, out_filter or str))\n elif key:\n pattern += re.escape(key)\n builder.append((None, key))\n\n self.builder[rule] = builder\n if name: self.builder[name] = builder\n\n if is_static and not self.strict_order:\n self.static.setdefault(method, {})\n self.static[method][self.build(rule)] = (target, None)\n return\n\n try:\n re_pattern = re.compile('^(%s)$' % pattern)\n re_match = re_pattern.match\n except re.error:\n raise RouteSyntaxError(\"Could not add Route: %s (%s)\" % (rule, 
_e()))\n\n if filters:\n def getargs(path):\n url_args = re_match(path).groupdict()\n for name, wildcard_filter in filters:\n try:\n url_args[name] = wildcard_filter(url_args[name])\n except ValueError:\n raise HTTPError(400, 'Path has wrong format.')\n return url_args\n elif re_pattern.groupindex:\n def getargs(path):\n return re_match(path).groupdict()\n else:\n getargs = None\n\n flatpat = _re_flatten(pattern)\n whole_rule = (rule, flatpat, target, getargs)\n\n if (flatpat, method) in self._groups:\n if DEBUG:\n msg = 'Route <%s %s> overwrites a previously defined route'\n warnings.warn(msg % (method, rule), RuntimeWarning)\n self.dyna_routes[method][self._groups[flatpat, method]] = whole_rule\n else:\n self.dyna_routes.setdefault(method, []).append(whole_rule)\n self._groups[flatpat, method] = len(self.dyna_routes[method]) - 1\n\n self._compile(method)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nbuild a URL by filling the wildcards in a rule.", "response": "def build(self, _name, *anons, **query):\n ''' Build an URL by filling the wildcards in a rule. '''\n builder = self.builder.get(_name)\n if not builder: raise RouteBuildError(\"No route with that name.\", _name)\n try:\n for i, value in enumerate(anons): query['anon%d'%i] = value\n url = ''.join([f(query.pop(n)) if n else f for (n,f) in builder])\n return url if not query else url+'?'+urlencode(query)\n except KeyError:\n raise RouteBuildError('Missing URL argument: %r' % _e().args[0])"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef match(self, environ):\n ''' Return a (target, url_agrs) tuple or raise HTTPError(400/404/405). 
'''\n verb = environ['REQUEST_METHOD'].upper()\n path = environ['PATH_INFO'] or '/'\n target = None\n if verb == 'HEAD':\n methods = ['PROXY', verb, 'GET', 'ANY']\n else:\n methods = ['PROXY', verb, 'ANY']\n\n for method in methods:\n if method in self.static and path in self.static[method]:\n target, getargs = self.static[method][path]\n return target, getargs(path) if getargs else {}\n elif method in self.dyna_regexes:\n for combined, rules in self.dyna_regexes[method]:\n match = combined(path)\n if match:\n target, getargs = rules[match.lastindex - 1]\n return target, getargs(path) if getargs else {}\n\n # No matching route found. Collect alternative methods for 405 response\n allowed = set([])\n nocheck = set(methods)\n for method in set(self.static) - nocheck:\n if path in self.static[method]:\n allowed.add(verb)\n for method in set(self.dyna_regexes) - allowed - nocheck:\n for combined, rules in self.dyna_regexes[method]:\n match = combined(path)\n if match:\n allowed.add(method)\n if allowed:\n allow_header = \",\".join(sorted(allowed))\n raise HTTPError(405, \"Method not allowed.\", Allow=allow_header)\n\n # No matching route and no alternative method found. We give up\n raise HTTPError(404, \"Not found: \" + repr(path))", "response": "Return a tuple of target url_agrs if the given path matches the current route."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_undecorated_callback(self):\n ''' Return the callback. If the callback is a decorated function, try to\n recover the original function. '''\n func = self.callback\n func = getattr(func, '__func__' if py3k else 'im_func', func)\n closure_attr = '__closure__' if py3k else 'func_closure'\n while hasattr(func, closure_attr) and getattr(func, closure_attr):\n func = getattr(func, closure_attr)[0].cell_contents\n return func", "response": "Return the original callback. 
If the callback is a decorated function try to\n recover the original function."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef add_hook(self, name, func):\n ''' Attach a callback to a hook. Three hooks are currently implemented:\n\n before_request\n Executed once before each request. The request context is\n available, but no routing has happened yet.\n after_request\n Executed once after each request regardless of its outcome.\n app_reset\n Called whenever :meth:`Bottle.reset` is called.\n '''\n if name in self.__hook_reversed:\n self._hooks[name].insert(0, func)\n else:\n self._hooks[name].append(func)", "response": "Attach a callback to a hook."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef remove_hook(self, name, func):\n ''' Remove a callback from a hook. '''\n if name in self._hooks and func in self._hooks[name]:\n self._hooks[name].remove(func)\n return True", "response": "Remove a callback from a hook."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ntriggers a hook and return a list of results.", "response": "def trigger_hook(self, __name, *args, **kwargs):\n ''' Trigger a hook and return a list of results. '''\n return [hook(*args, **kwargs) for hook in self._hooks[__name][:]]"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef hook(self, name):\n def decorator(func):\n self.add_hook(name, func)\n return func\n return decorator", "response": "Returns a decorator that attaches a callback to a hook. See\n . add_hook for details."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef mount(self, prefix, app, **options):\n ''' Mount an application (:class:`Bottle` or plain WSGI) to a specific\n URL prefix. 
Example::\n\n root_app.mount('/admin/', admin_app)\n\n :param prefix: path prefix or `mount-point`. If it ends in a slash,\n that slash is mandatory.\n :param app: an instance of :class:`Bottle` or a WSGI application.\n\n All other parameters are passed to the underlying :meth:`route` call.\n '''\n if isinstance(app, basestring):\n depr('Parameter order of Bottle.mount() changed.', True) # 0.10\n\n segments = [p for p in prefix.split('/') if p]\n if not segments: raise ValueError('Empty path prefix.')\n path_depth = len(segments)\n\n def mountpoint_wrapper():\n try:\n request.path_shift(path_depth)\n rs = HTTPResponse([])\n def start_response(status, headerlist, exc_info=None):\n if exc_info:\n try:\n _raise(*exc_info)\n finally:\n exc_info = None\n rs.status = status\n for name, value in headerlist: rs.add_header(name, value)\n return rs.body.append\n body = app(request.environ, start_response)\n if body and rs.body: body = itertools.chain(rs.body, body)\n rs.body = body or rs.body\n return rs\n finally:\n request.path_shift(-path_depth)\n\n options.setdefault('skip', True)\n options.setdefault('method', 'PROXY')\n options.setdefault('mountpoint', {'prefix': prefix, 'target': app})\n options['callback'] = mountpoint_wrapper\n\n self.route('/%s/<:re:.*>' % '/'.join(segments), **options)\n if not prefix.endswith('/'):\n self.route('/' + '/'.join(segments), **options)", "response": "Mount an application to a specific path prefix."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nmerges the routes of another Bottle application or a list of Bottle.", "response": "def merge(self, routes):\n ''' Merge the routes of another :class:`Bottle` application or a list of\n :class:`Route` objects into this application. The routes keep their\n 'owner', meaning that the :data:`Route.app` attribute is not\n changed. 
'''\n if isinstance(routes, Bottle):\n routes = routes.routes\n for route in routes:\n self.add_route(route)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef reset(self, route=None):\n ''' Reset all routes (force plugins to be re-applied) and clear all\n caches. If an ID or route object is given, only that specific route\n is affected. '''\n if route is None: routes = self.routes\n elif isinstance(route, Route): routes = [route]\n else: routes = [self.routes[route]]\n for route in routes: route.reset()\n if DEBUG:\n for route in routes: route.prepare()\n self.trigger_hook('app_reset')", "response": "Reset all routes and clear all the caches."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nadding a route object but do not change the : data : Route. app attribute.", "response": "def add_route(self, route):\n ''' Add a route object, but do not change the :data:`Route.app`\n attribute.'''\n self.routes.append(route)\n self.router.add(route.rule, route.method, route, name=route.name)\n if DEBUG: route.prepare()"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _cast(self, out, peek=None):\n\n # Empty output is done here\n if not out:\n if 'Content-Length' not in response:\n response['Content-Length'] = 0\n return []\n # Join lists of byte or unicode strings. 
Mixed lists are NOT supported\n if isinstance(out, (tuple, list))\\\n and isinstance(out[0], (bytes, unicode)):\n out = out[0][0:0].join(out) # b'abc'[0:0] -> b''\n # Encode unicode strings\n if isinstance(out, unicode):\n out = out.encode(response.charset)\n # Byte Strings are just returned\n if isinstance(out, bytes):\n if 'Content-Length' not in response:\n response['Content-Length'] = len(out)\n return [out]\n # HTTPError or HTTPException (recursive, because they may wrap anything)\n # TODO: Handle these explicitly in handle() or make them iterable.\n if isinstance(out, HTTPError):\n out.apply(response)\n out = self.error_handler.get(out.status_code, self.default_error_handler)(out)\n return self._cast(out)\n if isinstance(out, HTTPResponse):\n out.apply(response)\n return self._cast(out.body)\n\n # File-like objects.\n if hasattr(out, 'read'):\n if 'wsgi.file_wrapper' in request.environ:\n return request.environ['wsgi.file_wrapper'](out)\n elif hasattr(out, 'close') or not hasattr(out, '__iter__'):\n return WSGIFileWrapper(out)\n\n # Handle Iterables. 
We peek into them to detect their inner type.\n try:\n iout = iter(out)\n first = next(iout)\n while not first:\n first = next(iout)\n except StopIteration:\n return self._cast('')\n except HTTPResponse:\n first = _e()\n except (KeyboardInterrupt, SystemExit, MemoryError):\n raise\n except Exception:\n if not self.catchall: raise\n first = HTTPError(500, 'Unhandled exception', _e(), format_exc())\n\n # These are the inner types allowed in iterator or generator objects.\n if isinstance(first, HTTPResponse):\n return self._cast(first)\n elif isinstance(first, bytes):\n new_iter = itertools.chain([first], iout)\n elif isinstance(first, unicode):\n encoder = lambda x: x.encode(response.charset)\n new_iter = imap(encoder, itertools.chain([first], iout))\n else:\n msg = 'Unsupported response type: %s' % type(first)\n return self._cast(HTTPError(500, msg))\n if hasattr(out, 'close'):\n new_iter = _closeiter(new_iter, out.close)\n return new_iter", "response": "Try to convert the output into something WSGI compatible and set\nAttributeNames correct HTTP headers when possible."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef wsgi(self, environ, start_response):\n try:\n out = self._cast(self._handle(environ))\n # rfc2616 section 4.3\n if response._status_code in (100, 101, 204, 304)\\\n or environ['REQUEST_METHOD'] == 'HEAD':\n if hasattr(out, 'close'): out.close()\n out = []\n start_response(response._status_line, response.headerlist)\n return out\n except (KeyboardInterrupt, SystemExit, MemoryError):\n raise\n except Exception:\n if not self.catchall: raise\n err = '
<h1>Critical error while processing request: %s</h1>' \\\n % html_escape(environ.get('PATH_INFO', '/'))\n if DEBUG:\n err += '<h2>Error:</h2>\\n<pre>\\n%s\\n</pre>\\n' \\\n '<h2>Traceback:</h2>\\n<pre>\\n%s\\n</pre>\\n' \\\n % (html_escape(repr(_e())), html_escape(format_exc()))\n environ['wsgi.errors'].write(err)\n headers = [('Content-Type', 'text/html; charset=UTF-8')]\n start_response('500 INTERNAL SERVER ERROR', headers, sys.exc_info())\n return [tob(err)]", "response": "The bottle WSGI - interface."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef forms(self):\n forms = FormsDict()\n for name, item in self.POST.allitems():\n if not isinstance(item, FileUpload):\n forms[name] = item\n return forms", "response": "Returns a FormsDict containing all the form values parsed from an url - encoded POST or PUT request body."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef params(self):\n params = FormsDict()\n for key, value in self.query.allitems():\n params[key] = value\n for key, value in self.forms.allitems():\n params[key] = value\n return params", "response": "A FormsDict with the combined values of query and forms. File uploads are stored in files."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef files(self):\n files = FormsDict()\n for name, item in self.POST.allitems():\n if isinstance(item, FileUpload):\n files[name] = item\n return files", "response": "Returns a FormsDict containing the files parsed from the POST or PUT\n request body."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef json(self):\n ''' If the ``Content-Type`` header is ``application/json``, this\n property holds the parsed content of the request body. Only requests\n smaller than :attr:`MEMFILE_MAX` are processed to avoid memory\n exhaustion. 
'''\n ctype = self.environ.get('CONTENT_TYPE', '').lower().split(';')[0]\n if ctype == 'application/json':\n b = self._get_body_string()\n if not b:\n return None\n return json_loads(b)\n return None", "response": "Returns the json object of the current object."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns a copy of self.", "response": "def copy(self, cls=None):\n ''' Returns a copy of self. '''\n cls = cls or BaseResponse\n assert issubclass(cls, BaseResponse)\n copy = cls()\n copy.status = self.status\n copy._headers = dict((k, v[:]) for (k, v) in self._headers.items())\n if self._cookies:\n copy._cookies = SimpleCookie()\n copy._cookies.load(self._cookies.output(header=''))\n return copy"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef set_header(self, name, value):\n ''' Create a new response header, replacing any previously defined\n headers with the same name. '''\n self._headers[_hkey(name)] = [str(value)]", "response": "Create a new response header with the given name and value."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef charset(self, default='UTF-8'):\n if 'charset=' in self.content_type:\n return self.content_type.split('charset=')[-1].split(';')[0].strip()\n return default", "response": "Return charset specified in the content - type header."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef set_cookie(self, name, value, secret=None, **options):\n ''' Create a new cookie or replace an old one. 
If the `secret` parameter is\n set, create a `Signed Cookie` (described below).\n\n :param name: the name of the cookie.\n :param value: the value of the cookie.\n :param secret: a signature key required for signed cookies.\n\n Additionally, this method accepts all RFC 2109 attributes that are\n supported by :class:`cookie.Morsel`, including:\n\n :param max_age: maximum age in seconds. (default: None)\n :param expires: a datetime object or UNIX timestamp. (default: None)\n :param domain: the domain that is allowed to read the cookie.\n (default: current domain)\n :param path: limits the cookie to a given path (default: current path)\n :param secure: limit the cookie to HTTPS connections (default: off).\n :param httponly: prevents client-side javascript to read this cookie\n (default: off, requires Python 2.6 or newer).\n\n If neither `expires` nor `max_age` is set (default), the cookie will\n expire at the end of the browser session (as soon as the browser\n window is closed).\n\n Signed cookies may store any pickle-able object and are\n cryptographically signed to prevent manipulation. Keep in mind that\n cookies are limited to 4kb in most browsers.\n\n Warning: Signed cookies are not encrypted (the client can still see\n the content) and not copy-protected (the client can restore an old\n cookie). 
The main intention is to make pickling and unpickling\n save, not to store secret information at client side.\n '''\n if not self._cookies:\n self._cookies = SimpleCookie()\n\n if secret:\n value = touni(cookie_encode((name, value), secret))\n elif not isinstance(value, basestring):\n raise TypeError('Secret key missing for non-string Cookie.')\n\n if len(value) > 4096: raise ValueError('Cookie value to long.')\n self._cookies[name] = value\n\n for key, value in options.items():\n if key == 'max_age':\n if isinstance(value, timedelta):\n value = value.seconds + value.days * 24 * 3600\n if key == 'expires':\n if isinstance(value, (datedate, datetime)):\n value = value.timetuple()\n elif isinstance(value, (int, float)):\n value = time.gmtime(value)\n value = time.strftime(\"%a, %d %b %Y %H:%M:%S GMT\", value)\n self._cookies[name][key.replace('_', '-')] = value", "response": "Create a new cookie or replace an old one. If the `secret` parameter is\n set, create a `Signed Cookie` (described below).\n\n :param name: the name of the cookie.\n :param value: the value of the cookie.\n :param secret: a signature key required for signed cookies.\n\n Additionally, this method accepts all RFC 2109 attributes that are\n supported by :class:`cookie.Morsel`, including:\n\n :param max_age: maximum age in seconds. (default: None)\n :param expires: a datetime object or UNIX timestamp. 
(default: None)\n :param domain: the domain that is allowed to read the cookie.\n (default: current domain)\n :param path: limits the cookie to a given path (default: current path)\n :param secure: limit the cookie to HTTPS connections (default: off).\n :param httponly: prevents client-side javascript to read this cookie\n (default: off, requires Python 2.6 or newer).\n\n If neither `expires` nor `max_age` is set (default), the cookie will\n expire at the end of the browser session (as soon as the browser\n window is closed).\n\n Signed cookies may store any pickle-able object and are\n cryptographically signed to prevent manipulation. Keep in mind that\n cookies are limited to 4kb in most browsers.\n\n Warning: Signed cookies are not encrypted (the client can still see\n the content) and not copy-protected (the client can restore an old\n cookie). The main intention is to make pickling and unpickling\n save, not to store secret information at client side."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns a copy with all keys and values de - or recoded to match .", "response": "def decode(self, encoding=None):\n ''' Returns a copy with all keys and values de- or recoded to match\n :attr:`input_encoding`. Some libraries (e.g. WTForms) want a\n unicode dictionary. '''\n copy = FormsDict()\n enc = copy.input_encoding = encoding or self.input_encoding\n copy.recode_unicode = False\n for key, value in self.allitems():\n copy.append(self._fix(key, enc), self._fix(value, enc))\n return copy"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef getunicode(self, name, default=None, encoding=None):\n ''' Return the value as a unicode string, or the default. 
'''\n try:\n return self._fix(self[name], encoding)\n except (UnicodeError, KeyError):\n return default", "response": "Return the value as a unicode string or the default."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nloads values from an. ini style config file.", "response": "def load_config(self, filename):\n ''' Load values from an *.ini style config file.\n\n If the config file contains sections, their names are used as\n namespaces for the values within. The two special sections\n ``DEFAULT`` and ``bottle`` refer to the root namespace (no prefix).\n '''\n conf = ConfigParser()\n conf.read(filename)\n for section in conf.sections():\n for key, value in conf.items(section):\n if section not in ('DEFAULT', 'bottle'):\n key = section + '.' + key\n self[key] = value\n return self"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef load_dict(self, source, namespace='', make_namespaces=False):\n ''' Import values from a dictionary structure. Nesting can be used to\n represent namespaces.\n\n >>> ConfigDict().load_dict({'name': {'space': {'key': 'value'}}})\n {'name.space.key': 'value'}\n '''\n stack = [(namespace, source)]\n while stack:\n prefix, source = stack.pop()\n if not isinstance(source, dict):\n raise TypeError('Source is not a dict (r)' % type(key))\n for key, value in source.items():\n if not isinstance(key, str):\n raise TypeError('Key is not a string (%r)' % type(key))\n full_key = prefix + '.' + key if prefix else key\n if isinstance(value, dict):\n stack.append((full_key, value))\n if make_namespaces:\n self[full_key] = self.Namespace(self, full_key)\n else:\n self[full_key] = value\n return self", "response": "Import values from a dictionary structure. 
Nesting can be used to represent namespaces."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn the value of a meta field for a key.", "response": "def meta_get(self, key, metafield, default=None):\n ''' Return the value of a meta field for a key. '''\n return self._meta.get(key, {}).get(metafield, default)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef meta_set(self, key, metafield, value):\n ''' Set the meta field for a key to a new value. This triggers the\n on-change handler for existing keys. '''\n self._meta.setdefault(key, {})[metafield] = value\n if key in self:\n self[key] = self[key]", "response": "Set the meta field for a key to a new value. This triggers the the\n on - change handler for existing keys."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef lookup(self, name):\n ''' Search for a resource and return an absolute file path, or `None`.\n\n The :attr:`path` list is searched in order. The first match is\n returend. Symlinks are followed. The result is cached to speed up\n future lookups. '''\n if name not in self.cache or DEBUG:\n for path in self.path:\n fpath = os.path.join(path, name)\n if os.path.isfile(fpath):\n if self.cachemode in ('all', 'found'):\n self.cache[name] = fpath\n return fpath\n if self.cachemode == 'all':\n self.cache[name] = None\n return self.cache[name]", "response": "Search for a resource and return an absolute file path or None."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nfinds a resource and return a file object.", "response": "def open(self, name, mode='r', *args, **kwargs):\n ''' Find a resource and return a file object, or raise IOError. 
'''\n fname = self.lookup(name)\n if not fname: raise IOError(\"Resource %r not found.\" % name)\n return self.opener(fname, mode=mode, *args, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nsaving file to disk or copy its content to an open file - like object.", "response": "def save(self, destination, overwrite=False, chunk_size=2**16):\n ''' Save file to disk or copy its content to an open file(-like) object.\n If *destination* is a directory, :attr:`filename` is added to the\n path. Existing files are not overwritten by default (IOError).\n\n :param destination: File path, directory or file(-like) object.\n :param overwrite: If True, replace existing files. (default: False)\n :param chunk_size: Bytes to read at a time. (default: 64kb)\n '''\n if isinstance(destination, basestring): # Except file-likes here\n if os.path.isdir(destination):\n destination = os.path.join(destination, self.filename)\n if not overwrite and os.path.exists(destination):\n raise IOError('File exists.')\n with open(destination, 'wb') as fp:\n self._copy_file(fp, chunk_size)\n else:\n self._copy_file(destination, chunk_size)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nrender the template using keyword arguments as local variables.", "response": "def render(self, *args, **kwargs):\n \"\"\" Render the template using keyword arguments as local variables. 
\"\"\"\n env = {}; stdout = []\n for dictarg in args: env.update(dictarg)\n env.update(kwargs)\n self.execute(stdout, env)\n return ''.join(stdout)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nconverts the price of a currency to another currency using exchange rates", "response": "def convert_value(value, source_currency, target_currency):\n \"\"\"Converts the price of a currency to another one using exchange rates\n\n :param price: the price value\n :param type: decimal\n\n :param source_currency: source ISO-4217 currency code\n :param type: str\n\n :param target_currency: target ISO-4217 currency code\n :param type: str\n\n :returns: converted price instance\n :rtype: ``Price``\n\n \"\"\"\n # If price currency and target currency is same\n # return given currency as is\n if source_currency == target_currency:\n return value\n\n rate = get_rate(source_currency, target_currency)\n\n return value * rate"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef import_class(class_path):\n try:\n from django.utils.importlib import import_module\n module_name = '.'.join(class_path.split(\".\")[:-1])\n mod = import_module(module_name)\n return getattr(mod, class_path.split(\".\")[-1])\n except Exception, detail:\n raise ImportError(detail)", "response": "imports and returns given class string."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ninserts list of Django objects in one SQL query.", "response": "def insert_many(objects, using=\"default\"):\n \"\"\"Insert list of Django objects in one SQL query. Objects must be\n of the same Django model. 
Note that save is not called and signals\n on the model are not raised.\n\n Mostly from: http://people.iola.dk/olau/python/bulkops.py\n \"\"\"\n if not objects:\n return\n\n import django.db.models\n from django.db import connections\n from django.db import transaction\n con = connections[using]\n\n model = objects[0].__class__\n fields = [f for f in model._meta.fields\n if not isinstance(f, django.db.models.AutoField)]\n parameters = []\n for o in objects:\n params = tuple(f.get_db_prep_save(f.pre_save(o, True), connection=con)\n for f in fields)\n parameters.append(params)\n\n table = model._meta.db_table\n column_names = \",\".join(con.ops.quote_name(f.column) for f in fields)\n placeholders = \",\".join((\"%s\",) * len(fields))\n con.cursor().executemany(\"insert into %s (%s) values (%s)\"\n % (table, column_names, placeholders), parameters)\n transaction.commit_unless_managed(using=using)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nupdating a list of Django objects in one SQL query optionally only overwrite the given fields.", "response": "def update_many(objects, fields=[], using=\"default\"):\n \"\"\"Update list of Django objects in one SQL query, optionally only\n overwrite the given fields (as names, e.g. fields=[\"foo\"]).\n Objects must be of the same Django model. 
Note that save is not\n called and signals on the model are not raised.\n\n Mostly from: http://people.iola.dk/olau/python/bulkops.py\n \"\"\"\n if not objects:\n return\n\n import django.db.models\n from django.db import connections\n from django.db import transaction\n con = connections[using]\n\n names = fields\n meta = objects[0]._meta\n fields = [f for f in meta.fields\n if not isinstance(f, django.db.models.AutoField)\n and (not names or f.name in names)]\n\n if not fields:\n raise ValueError(\"No fields to update, field names are %s.\" % names)\n\n fields_with_pk = fields + [meta.pk]\n parameters = []\n for o in objects:\n parameters.append(tuple(f.get_db_prep_save(f.pre_save(o, True),\n connection=con) for f in fields_with_pk))\n\n table = meta.db_table\n assignments = \",\".join((\"%s=%%s\" % con.ops.quote_name(f.column))\n for f in fields)\n con.cursor().executemany(\"update %s set %s where %s=%%s\"\n % (table, assignments,\n con.ops.quote_name(meta.pk.column)),\n parameters)\n transaction.commit_unless_managed(using=using)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef memoize(ttl=None):\n def decorator(obj):\n cache = obj.cache = {}\n\n @functools.wraps(obj)\n def memoizer(*args, **kwargs):\n now = datetime.now()\n key = str(args) + str(kwargs)\n if key not in cache:\n cache[key] = (obj(*args, **kwargs), now)\n value, last_update = cache[key]\n if ttl and (now - last_update) > ttl:\n cache[key] = (obj(*args, **kwargs), now)\n return cache[key][0]\n return memoizer\n return decorator", "response": "A decorator that caches the result of the function call with given args for until ttl seconds expires."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef round_to_quarter(start_time, end_time):\n # We don't care about the date (only about the time) but Python\n # can substract only datetime objects, not time ones\n today = datetime.date.today()\n 
start_date = datetime.datetime.combine(today, start_time)\n end_date = datetime.datetime.combine(today, end_time)\n\n difference_minutes = (end_date - start_date).seconds / 60\n remainder = difference_minutes % 15\n # Round up\n difference_minutes += 15 - remainder if remainder > 0 else 0\n\n return (\n start_date + datetime.timedelta(minutes=difference_minutes)\n ).time()", "response": "Return the end time, adjusted so that the duration between start_time and end_time rounds up to a multiple of 15 minutes."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script that\ncalls a method on all the timesheets, aggregates the return values in a list and returns it.", "response": "def _timesheets_callback(self, callback):\n \"\"\"\n Call a method on all the timesheets, aggregate the return values in a\n list and return it.\n \"\"\"\n def call(*args, **kwargs):\n return_values = []\n\n for timesheet in self:\n attr = getattr(timesheet, callback)\n\n if callable(attr):\n result = attr(*args, **kwargs)\n else:\n result = attr\n\n return_values.append(result)\n\n return return_values\n\n return call"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nloading a collection of timesheets from the given file pattern.", "response": "def load(cls, file_pattern, nb_previous_files=1, parser=None):\n \"\"\"\n Load a collection of timesheets from the given `file_pattern`. `file_pattern` is a path to a timesheet file that\n will be expanded with :func:`datetime.date.strftime` and the current date. `nb_previous_files` is the number of\n other timesheets to load, depending on `file_pattern` this will result in either the timesheet from the\n previous month or from the previous year to be loaded. 
If `parser` is not set, a default\n :class:`taxi.timesheet.parser.TimesheetParser` will be used.\n \"\"\"\n if not parser:\n parser = TimesheetParser()\n\n timesheet_files = cls.get_files(file_pattern, nb_previous_files)\n timesheet_collection = cls()\n\n for file_path in timesheet_files:\n try:\n timesheet = Timesheet.load(\n file_path, parser=parser, initial=lambda: timesheet_collection.get_new_timesheets_contents()\n )\n except ParseError as e:\n e.file = file_path\n raise\n\n timesheet_collection.timesheets.append(timesheet)\n\n # Fix `add_date_to_bottom` attribute of timesheet entries based on\n # previous timesheets. When a new timesheet is started it won't have\n # any direction defined, so we take the one from the previous\n # timesheet, if any\n if parser.add_date_to_bottom is None:\n previous_timesheet = None\n for timesheet in timesheet_collection.timesheets:\n if previous_timesheet:\n previous_timesheet_top_down = previous_timesheet.entries.is_top_down()\n\n if previous_timesheet_top_down is not None:\n timesheet.entries.parser.add_date_to_bottom = previous_timesheet_top_down\n previous_timesheet = timesheet\n\n return timesheet_collection"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn an OrderedSet of file paths expanded from filename with a maximum of nb_previous_files.", "response": "def get_files(cls, file_pattern, nb_previous_files, from_date=None):\n \"\"\"\n Return an :class:`~taxi.utils.structures.OrderedSet` of file paths expanded from `filename`, with a maximum of\n `nb_previous_files`. See :func:`taxi.utils.file.expand_date` for more information about filename expansion. 
If\n `from_date` is set, it will be used as a starting date instead of the current date.\n \"\"\"\n date_units = ['m', 'Y']\n smallest_unit = None\n\n if not from_date:\n from_date = datetime.date.today()\n\n for date_unit in date_units:\n if ('%' + date_unit) in file_pattern:\n smallest_unit = date_unit\n break\n\n if smallest_unit is None:\n return OrderedSet([file_pattern])\n\n files = OrderedSet()\n\n for i in range(nb_previous_files, -1, -1):\n if smallest_unit == 'm':\n file_date = months_ago(from_date, i)\n elif smallest_unit == 'Y':\n file_date = from_date.replace(day=1, year=from_date.year - i)\n\n files.add(file_utils.expand_date(file_pattern, file_date))\n\n return files"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns the initial text to be inserted in new timesheets.", "response": "def get_new_timesheets_contents(self):\n \"\"\"\n Return the initial text to be inserted in new timesheets.\n \"\"\"\n popular_aliases = self.get_popular_aliases()\n template = ['# Recently used aliases:']\n\n if popular_aliases:\n contents = '\\n'.join(template + ['# ' + entry for entry, usage in popular_aliases])\n else:\n contents = ''\n\n return contents"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn the entries of all timesheets in the collection.", "response": "def entries(self):\n \"\"\"\n Return the entries (as a {date: entries} dict) of all timesheets in the\n collection.\n \"\"\"\n entries_list = self._timesheets_callback('entries')()\n\n return reduce(lambda x, y: x + y, entries_list)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn the aggregated results of Timesheet.
get_popular_aliases.", "response": "def get_popular_aliases(self, *args, **kwargs):\n \"\"\"\n Return the aggregated results of :meth:`Timesheet.get_popular_aliases`.\n \"\"\"\n aliases_count_total = defaultdict(int)\n aliases_counts = self._timesheets_callback('get_popular_aliases')(*args, **kwargs)\n\n for aliases_count in aliases_counts:\n for alias, count in aliases_count:\n aliases_count_total[alias] += count\n\n sorted_aliases_count_total = sorted(aliases_count_total.items(), key=lambda item: item[1], reverse=True)\n\n return sorted_aliases_count_total"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef show(ctx, search):\n matches = {'aliases': [], 'mappings': [], 'projects': []}\n projects_db = ctx.obj['projects_db']\n\n matches = get_alias_matches(search, matches)\n matches = get_mapping_matches(search, matches, projects_db)\n matches = get_project_matches(search, matches, projects_db)\n\n ctx.obj['view'].show_command_results(search, matches, projects_db)", "response": "Show the contents of the base object."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nshows the status of what s going to be committed to the server.", "response": "def status(ctx, date, f, pushed):\n \"\"\"\n Shows the summary of what's going to be committed to the server.\n \"\"\"\n try:\n timesheet_collection = get_timesheet_collection_for_context(ctx, f)\n except ParseError as e:\n ctx.obj['view'].err(e)\n else:\n ctx.obj['view'].show_status(\n timesheet_collection.entries.filter(\n date, regroup=ctx.obj['settings']['regroup_entries'],\n pushed=False if not pushed else None\n )\n )"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef edit(ctx, file_to_edit, previous_file):\n timesheet_collection = None\n autofill = not bool(file_to_edit) and previous_file == 0\n if not file_to_edit:\n file_to_edit = 
ctx.obj['settings'].get_entries_file_path(False)\n\n # If the file was not specified and if it's the current file, autofill it\n if autofill:\n try:\n timesheet_collection = get_timesheet_collection_for_context(\n ctx, file_to_edit\n )\n except ParseError:\n pass\n else:\n t = timesheet_collection.latest()\n\n if ctx.obj['settings']['auto_add'] != Settings.AUTO_ADD_OPTIONS['NO']:\n auto_fill_days = ctx.obj['settings']['auto_fill_days']\n if auto_fill_days:\n t.prefill(auto_fill_days, limit=None)\n\n t.save()\n\n # Get the path to the file we should open in the editor\n timesheet_files = list(reversed(TimesheetCollection.get_files(file_to_edit, previous_file)))\n if previous_file >= len(timesheet_files):\n ctx.fail(\"Couldn't find the requested previous file for `%s`.\" % file_to_edit)\n\n expanded_file_to_edit = list(timesheet_files)[previous_file]\n\n editor = ctx.obj['settings']['editor']\n edit_kwargs = {\n 'filename': expanded_file_to_edit,\n 'extension': '.tks'\n }\n if editor:\n edit_kwargs['editor'] = editor\n\n click.edit(**edit_kwargs)\n\n try:\n # Show the status only for the given file if it was specified with the\n # --file option, or for the files specified in the settings otherwise\n timesheet_collection = get_timesheet_collection_for_context(\n ctx, file_to_edit\n )\n except ParseError as e:\n ctx.obj['view'].err(e)\n else:\n ctx.obj['view'].show_status(\n timesheet_collection.entries.filter(regroup=True, pushed=False)\n )", "response": "Edit a timesheet file in your favourite editor."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning the reversed aliases dict.", "response": "def get_reversed_aliases(self):\n \"\"\"\n Return the reversed aliases dict. 
Instead of being in the form\n {'alias': mapping}, the dict is in the form {mapping: 'alias'}.\n \"\"\"\n return dict((v, k) for k, v in six.iteritems(self.aliases))"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning a list of mappings that exactly correspond to the given mapping.", "response": "def filter_from_mapping(self, mapping, backend=None):\n \"\"\"\n Return mappings that either exactly correspond to the given `mapping`\n tuple, or, if the second item of `mapping` is `None`, include mappings\n that only match the first item of `mapping` (useful to show all\n mappings for a given project).\n \"\"\"\n def mapping_filter(key_item):\n key, item = key_item\n\n return (\n (mapping is None or item.mapping == mapping or\n (mapping[1] is None and item.mapping is not None and item.mapping[0] == mapping[0])) and\n (backend is None or item.backend == backend)\n )\n\n items = [item for item in six.iteritems(self) if mapping_filter(item)]\n\n aliases = collections.OrderedDict(\n sorted(items, key=lambda alias: alias[1].mapping\n if alias[1] is not None else (0, 0))\n )\n\n return aliases"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef filter_from_alias(self, alias, backend=None):\n def alias_filter(key_item):\n key, item = key_item\n\n return ((alias is None or alias in key) and\n (backend is None or item.backend == backend))\n\n items = six.moves.filter(alias_filter, six.iteritems(self))\n\n aliases = collections.OrderedDict(sorted(items, key=lambda a: a[0].lower()))\n\n return aliases", "response": "Return aliases that start with the given alias optionally filtered by backend."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef clean_aliases(ctx, force_yes):\n inactive_aliases = []\n\n for (alias, mapping) in six.iteritems(aliases_database):\n # Ignore local aliases\n if mapping.mapping is None:\n continue\n\n 
project = ctx.obj['projects_db'].get(mapping.mapping[0],\n mapping.backend)\n\n if (project is None or not project.is_active() or\n (mapping.mapping[1] is not None\n and project.get_activity(mapping.mapping[1]) is None)):\n inactive_aliases.append(((alias, mapping), project))\n\n if not inactive_aliases:\n ctx.obj['view'].msg(\"No inactive aliases found.\")\n return\n\n if not force_yes:\n confirm = ctx.obj['view'].clean_inactive_aliases(inactive_aliases)\n\n if force_yes or confirm:\n ctx.obj['settings'].remove_aliases(\n [item[0] for item in inactive_aliases]\n )\n ctx.obj['settings'].write_config()\n ctx.obj['view'].msg(\"%d inactive aliases have been successfully\"\n \" cleaned.\" % len(inactive_aliases))", "response": "Removes aliases from your config file that point to inactive projects."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nfetching information about the given plugin on PyPI and return it as a dict.", "response": "def get_plugin_info(plugin):\n \"\"\"\n Fetch information about the given package on PyPI and return it as a dict.\n If the package cannot be found on PyPI, :exc:`NameError` will be raised.\n \"\"\"\n url = 'https://pypi.python.org/pypi/{}/json'.format(plugin)\n\n try:\n resp = request.urlopen(url)\n except HTTPError as e:\n if e.code == 404:\n raise NameError(\"Plugin {} could not be found.\".format(plugin))\n else:\n raise ValueError(\n \"Checking plugin status on {} returned HTTP code {}\".format(\n url, resp.getcode()\n )\n )\n\n try:\n json_resp = json.loads(resp.read().decode())\n # Catch ValueError instead of JSONDecodeError which is only available in\n # Python 3.5+\n except ValueError:\n raise ValueError(\n \"Could not decode JSON info for plugin at {}\".format(url)\n )\n\n return json_resp['info']"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_installed_plugins():\n return {\n # Strip the first five characters from the plugin name 
since all\n # plugins are expected to start with `taxi-`\n backend.dist.project_name[5:]: backend.dist.version\n for backend in backends_registry._entry_points.values()\n if backend.dist.project_name != 'taxi'\n }", "response": "Return a dict of installed plugins."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef install(ctx, plugin):\n ensure_inside_venv(ctx)\n\n plugin_name = get_plugin_name(plugin)\n try:\n info = get_plugin_info(plugin_name)\n except NameError:\n echo_error(\"Plugin {} could not be found.\".format(plugin))\n sys.exit(1)\n except ValueError as e:\n echo_error(\"Unable to retrieve plugin info. \"\n \"Error was:\\n\\n {}\".format(e))\n sys.exit(1)\n\n try:\n installed_version = pkg_resources.get_distribution(plugin_name).version\n except pkg_resources.DistributionNotFound:\n installed_version = None\n\n if installed_version is not None and info['version'] == installed_version:\n click.echo(\"You already have the latest version of {} ({}).\".format(\n plugin, info['version']\n ))\n return\n\n pinned_plugin = '{0}=={1}'.format(plugin_name, info['version'])\n try:\n run_command([sys.executable, '-m', 'pip', 'install', pinned_plugin])\n except subprocess.CalledProcessError as e:\n echo_error(\"Error when trying to install plugin {}. 
\"\n \"Error was:\\n\\n {}\".format(plugin, e))\n sys.exit(1)\n\n echo_success(\"Plugin {} {} installed successfully.\".format(\n plugin, info['version']\n ))", "response": "Install the given plugin."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef uninstall(ctx, plugin):\n ensure_inside_venv(ctx)\n\n if plugin not in get_installed_plugins():\n echo_error(\"Plugin {} does not seem to be installed.\".format(plugin))\n sys.exit(1)\n\n plugin_name = get_plugin_name(plugin)\n try:\n run_command([sys.executable, '-m', 'pip', 'uninstall', '-y',\n plugin_name])\n except subprocess.CalledProcessError as e:\n echo_error(\n \"Error when trying to uninstall plugin {}. Error message \"\n \"was:\\n\\n{}\".format(plugin, e.output.decode())\n )\n sys.exit(1)\n else:\n echo_success(\"Plugin {} uninstalled successfully.\".format(plugin))", "response": "Uninstalls the given plugin."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef hours(self):\n if not isinstance(self.duration, tuple):\n return self.duration\n\n if self.duration[1] is None:\n return 0\n\n time_start = self.get_start_time()\n\n # This can happen if the previous entry has a non-tuple duration\n # and the current entry has a tuple duration without a start time\n if time_start is None:\n return 0\n\n now = datetime.datetime.now()\n time_start = now.replace(\n hour=time_start.hour,\n minute=time_start.minute, second=0\n )\n time_end = now.replace(\n hour=self.duration[1].hour,\n minute=self.duration[1].minute, second=0\n )\n total_time = time_end - time_start\n total_hours = total_time.seconds / 3600.0\n\n return total_hours", "response": "Return the number of hours this entry has lasted."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning the start time of the current entry as a : class : datetime. 
time object.", "response": "def get_start_time(self):\n \"\"\"\n Return the start time of the entry as a :class:`datetime.time` object.\n If the start time is `None`, the end time of the previous entry will be\n returned instead. If the current entry doesn't have a duration in the\n form of a tuple, if there's no previous entry or if the previous entry\n has no end time, the value `None` will be returned.\n \"\"\"\n if not isinstance(self.duration, tuple):\n return None\n\n if self.duration[0] is not None:\n return self.duration[0]\n else:\n if (self.previous_entry and\n isinstance(self.previous_entry.duration, tuple) and\n self.previous_entry.duration[1] is not None):\n return self.previous_entry.duration[1]\n\n return None"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning a unique hash of the entry.", "response": "def hash(self):\n \"\"\"\n Return a value that's used to uniquely identify an entry in a date so we can regroup all entries that share the\n same hash.\n \"\"\"\n return u''.join([\n self.alias,\n self.description,\n str(self.ignored),\n str(self.flags),\n ])"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nadding flag to the flags attribute.", "response": "def add_flag(self, flag):\n \"\"\"\n Add flag to the flags and memorize this attribute has changed so we can\n regenerate it when outputting text.\n \"\"\"\n super(Entry, self).add_flag(flag)\n self._changed_attrs.add('flags')"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nremove flag from the flags attribute.", "response": "def remove_flag(self, flag):\n \"\"\"\n Remove flag to the flags and memorize this attribute has changed so we\n can regenerate it when outputting text.\n \"\"\"\n super(Entry, self).remove_flag(flag)\n self._changed_attrs.add('flags')"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef add_entry(self, date, 
entry):\n in_date = False\n insert_at = 0\n\n for (lineno, line) in enumerate(self.lines):\n # Search for the date of the entry\n if isinstance(line, DateLine) and line.date == date:\n in_date = True\n # Insert here if there is no existing Entry for this date\n insert_at = lineno\n continue\n\n if in_date:\n if isinstance(line, Entry):\n insert_at = lineno\n elif isinstance(line, DateLine):\n break\n\n self.lines.insert(insert_at + 1, entry)\n\n # If there's no other Entry in the current date, add a blank line\n # between the date and the entry\n if not isinstance(self.lines[insert_at], Entry):\n self.lines.insert(insert_at + 1, TextLine(''))", "response": "Add the given entry to the textual representation."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nremove the given entries from the textual representation.", "response": "def delete_entries(self, entries):\n \"\"\"\n Remove the given entries from the textual representation.\n \"\"\"\n self.lines = trim([\n line for line in self.lines\n if not isinstance(line, Entry) or line not in entries\n ])"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef delete_date(self, date):\n self.lines = [\n line for line in self.lines\n if not isinstance(line, DateLine) or line.date != date\n ]\n\n self.lines = trim(self.lines)", "response": "Removes the date line from the textual representation. This doesn t remove any entry line. This doesn t remove any entry line. 
This doesn t remove any entry line."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef add_date(self, date):\n self.lines = self.parser.add_date(date, self.lines)", "response": "Add the given date to the textual representation."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ninitializes the structured and textual data based on a string containing the entries.", "response": "def init_from_str(self, entries):\n \"\"\"\n Initialize the structured and textual data based on a string\n representing the entries. For detailed information about the format of\n this string, refer to the\n :func:`~taxi.timesheet.parser.parse_text` function.\n \"\"\"\n self.lines = self.parser.parse_text(entries)\n\n for line in self.lines:\n if isinstance(line, DateLine):\n current_date = line.date\n self[current_date] = self.default_factory(self, line.date)\n elif isinstance(line, Entry):\n if len(self[current_date]) > 0:\n line.previous_entry = self[current_date][-1]\n self[current_date][-1].next_entry = line\n\n self[current_date].append(line)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns the entries in the list that match the given date.", "response": "def filter(self, date=None, regroup=False, ignored=None, pushed=None, unmapped=None, current_workday=None):\n \"\"\"\n Return the entries as a dict of {:class:`datetime.date`: :class:`~taxi.timesheet.lines.Entry`}\n items.\n\n `date` can either be a single :class:`datetime.date` object to filter only entries from the given date,\n or a tuple of :class:`datetime.date` objects representing `(from, to)`. `filter_callback` is a function that,\n given a :class:`~taxi.timesheet.lines.Entry` object, should return True to include that line, or False to\n exclude it. If `regroup` is set to True, similar entries (ie. 
having the same\n :meth:`~taxi.timesheet.lines.Entry.hash`) will be regrouped intro a single\n :class:`~taxi.timesheet.entry.AggregatedTimesheetEntry`.\n \"\"\"\n def entry_filter(entry_date, entry):\n if ignored is not None and entry.ignored != ignored:\n return False\n\n if pushed is not None and entry.pushed != pushed:\n return False\n\n if unmapped is not None and entry.mapped == unmapped:\n return False\n\n if current_workday is not None:\n today = datetime.date.today()\n yesterday = date_utils.get_previous_working_day(today)\n is_current_workday = entry_date in (today, yesterday) and entry_date.strftime('%w') not in [6, 0]\n\n if current_workday != is_current_workday:\n return False\n\n return True\n\n # Date can either be a single date (only 1 day) or a tuple for a\n # date range\n if date is not None and not isinstance(date, tuple):\n date = (date, date)\n\n filtered_entries = collections.defaultdict(list)\n\n for (entries_date, entries) in six.iteritems(self):\n if (date is not None and (\n (date[0] is not None and entries_date < date[0])\n or (date[1] is not None and entries_date > date[1]))):\n continue\n\n entries_for_date = []\n\n if regroup:\n # This is a mapping between entries hashes and their\n # position in the entries_for_date list\n aggregated_entries = {}\n id = 0\n\n for entry in entries:\n if not entry_filter(entries_date, entry):\n continue\n\n # Common case: the entry is not yet referenced in the\n # aggregated_entries dict\n if entry.hash not in aggregated_entries:\n # In that case, put it normally in the entries_for_date\n # list. 
It will get replaced by an AggregatedEntry\n # later if necessary\n entries_for_date.append(entry)\n aggregated_entries[entry.hash] = id\n id += 1\n else:\n # Get the first occurence of the entry in the\n # entries_for_date list\n existing_entry = entries_for_date[\n aggregated_entries[entry.hash]\n ]\n\n # The entry could already have been replaced by an\n # AggregatedEntry if there's more than 2 occurences\n if isinstance(existing_entry, Entry):\n # Create the AggregatedEntry, put the first\n # occurence of Entry in it and the current one\n aggregated_entry = AggregatedTimesheetEntry()\n aggregated_entry.entries.append(existing_entry)\n aggregated_entry.entries.append(entry)\n entries_for_date[\n aggregated_entries[entry.hash]\n ] = aggregated_entry\n else:\n # The entry we found is already an\n # AggregatedEntry, let's just append the\n # current entry to it\n aggregated_entry = existing_entry\n aggregated_entry.entries.append(entry)\n else:\n entries_for_date = [\n entry for entry in entries if entry_filter(entries_date, entry)\n ]\n\n if entries_for_date:\n filtered_entries[entries_date].extend(entries_for_date)\n\n return filtered_entries"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nappends the given element to the list and synchronize the textual representation.", "response": "def append(self, x):\n \"\"\"\n Append the given element to the list and synchronize the textual\n representation.\n \"\"\"\n super(EntriesList, self).append(x)\n\n if self.entries_collection is not None:\n self.entries_collection.add_entry(self.date, x)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef projects_database_update_success(self, aliases_after_update,\n projects_db):\n \"\"\"\n Display the results of the projects/aliases database update. 
We need\n the projects db to extract the name of the projects / activities.\n \"\"\"\n def show_aliases(aliases):\n \"\"\"\n Display the given list of aliases in the following form:\n\n my_alias\n project name / activity name\n\n The aliases parameter is just a list of aliases, the mapping is\n extracted from the aliases_after_update parameter, and the\n project/activity names are looked up in the projects db.\n \"\"\"\n for alias in aliases:\n mapping = aliases_after_update[alias]\n (project, activity) = projects_db.mapping_to_project(mapping)\n\n self.msg(\"%s\\n\\t%s / %s\" % (\n alias, project.name if project else \"?\",\n activity.name if activity else \"?\"\n ))\n\n self.msg(\"Projects database updated successfully.\")\n\n deleted_aliases = (set(aliases_database.aliases.keys()) -\n set(aliases_after_update.keys()))\n added_aliases = (set(aliases_after_update.keys()) -\n set(aliases_database.aliases.keys()))\n\n modified_aliases = set()\n for alias, mapping in six.iteritems(aliases_after_update):\n if (alias in aliases_database\n and aliases_database[alias][:2] != mapping[:2]):\n modified_aliases.add(alias)\n\n if added_aliases:\n self.msg(\"\\nThe following shared aliases have been added:\\n\")\n show_aliases(added_aliases)\n\n if deleted_aliases:\n self.msg(\"\\nThe following shared aliases have been removed:\\n\")\n for alias in deleted_aliases:\n self.msg(alias)\n\n if modified_aliases:\n self.msg(\"\\nThe following shared aliases have been updated:\\n\")\n show_aliases(modified_aliases)", "response": "Show the results of the projects database update."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef is_top_down(lines):\n date_lines = [\n line for line in lines if hasattr(line, 'is_date_line') and line.is_date_line\n ]\n\n if len(date_lines) < 2 or date_lines[0].date == date_lines[1].date:\n return None\n else:\n return date_lines[1].date > date_lines[0].date", "response": "Return True if dates in 
the given lines go in an ascending order or False otherwise."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nremoving lines that are not text lines and don t have any text.", "response": "def trim(lines):\n \"\"\"\n Remove lines at the start and at the end of the given `lines` that are :class:`~taxi.timesheet.lines.TextLine`\n instances and don't have any text.\n \"\"\"\n trim_top = None\n trim_bottom = None\n _lines = lines[:]\n\n for (lineno, line) in enumerate(_lines):\n if hasattr(line, 'is_text_line') and line.is_text_line and not line.text.strip():\n trim_top = lineno\n else:\n break\n\n for (lineno, line) in enumerate(reversed(_lines)):\n if hasattr(line, 'is_text_line') and line.is_text_line and not line.text.strip():\n trim_bottom = lineno\n else:\n break\n\n if trim_top is not None:\n _lines = _lines[trim_top + 1:]\n\n if trim_bottom is not None:\n trim_bottom = len(_lines) - trim_bottom - 1\n _lines = _lines[:trim_bottom]\n\n return _lines"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncommit your work to the server.", "response": "def commit(ctx, f, force_yes, date):\n \"\"\"\n Commits your work to the server. The [date] option can be used either as a\n single date (eg. 20.01.2014), as a range (eg. 20.01.2014-22.01.2014), or as\n a range with one of the dates omitted (eg. 
-22.01.2014).\n \"\"\"\n timesheet_collection = get_timesheet_collection_for_context(ctx, f)\n\n if not date and not force_yes:\n non_workday_entries = timesheet_collection.entries.filter(ignored=False, pushed=False, current_workday=False)\n\n if non_workday_entries:\n if not ctx.obj['view'].confirm_commit_entries(non_workday_entries):\n ctx.obj['view'].msg(\"Ok then.\")\n return\n\n ctx.obj['view'].pushing_entries()\n backends_entries = defaultdict(list)\n\n try:\n # Push entries\n for timesheet in timesheet_collection.timesheets:\n entries_to_push = get_entries_to_push(\n timesheet, date, ctx.obj['settings']['regroup_entries']\n )\n\n for (entries_date, entries) in entries_to_push.items():\n for entry in entries:\n backend_name = aliases_database[entry.alias].backend\n backend = plugins_registry.get_backend(backend_name)\n backends_entries[backend].append(entry)\n\n try:\n additional_info = backend.push_entry(entries_date, entry)\n except KeyboardInterrupt:\n entry.push_error = (\"Interrupted, check status in\"\n \" backend\")\n raise\n except Exception as e:\n additional_info = None\n entry.push_error = six.text_type(e)\n else:\n entry.push_error = None\n finally:\n ctx.obj['view'].pushed_entry(entry, additional_info)\n\n # Call post_push_entries on backends\n backends_post_push(backends_entries)\n except KeyboardInterrupt:\n pass\n finally:\n comment_timesheets_entries(timesheet_collection, date)\n\n ignored_entries = timesheet_collection.entries.filter(date=date, ignored=True, unmapped=False, pushed=False)\n ignored_entries_list = []\n for (entries_date, entries) in six.iteritems(ignored_entries):\n ignored_entries_list.extend(entries)\n\n all_entries = itertools.chain(*backends_entries.values())\n ctx.obj['view'].pushed_entries_summary(list(all_entries),\n ignored_entries_list)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\nasync def _request(self, path):\n url = '{}{}'.format(self.base_url, 
path)\n data = None\n\n try:\n async with self.websession.get(url, auth=self._auth, timeout=self._timeout) as response:\n if response.status == 200:\n if response.headers['content-type'] == 'application/json':\n data = await response.json()\n else:\n data = await response.text()\n\n except (asyncio.TimeoutError, aiohttp.ClientError) as error:\n _LOGGER.error('Failed to communicate with IP Webcam: %s', error)\n self._available = False\n return\n\n self._available = True\n if isinstance(data, str):\n return data.find(\"Ok\") != -1\n return data", "response": "Make the actual request and return the parsed response."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nfetch the latest data from IP Webcam.", "response": "async def update(self):\n \"\"\"Fetch the latest data from IP Webcam.\"\"\"\n status_data = await self._request('/status.json?show_avail=1')\n\n if status_data:\n self.status_data = status_data\n\n sensor_data = await self._request('/sensors.json')\n if sensor_data:\n self.sensor_data = sensor_data"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef current_settings(self):\n settings = {}\n if not self.status_data:\n return settings\n\n for (key, val) in self.status_data.get('curvals', {}).items():\n try:\n val = float(val)\n except ValueError:\n val = val\n\n if val in ('on', 'off'):\n val = (val == 'on')\n\n settings[key] = val\n\n return settings", "response": "Return dict with all config include."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning dict of lists with all available config settings.", "response": "def available_settings(self):\n \"\"\"Return dict of lists with all available config settings.\"\"\"\n available = {}\n if not self.status_data:\n return available\n\n for (key, val) in self.status_data.get('avail', {}).items():\n available[key] = []\n for subval in val:\n try:\n subval = float(subval)\n except ValueError:\n subval = 
subval\n\n if subval in ('on', 'off'):\n subval = (subval == 'on')\n\n available[key].append(subval)\n\n return available"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef export_sensor(self, sensor):\n value = None\n unit = None\n try:\n container = self.sensor_data.get(sensor)\n unit = container.get('unit')\n data_point = container.get('data', [[0, [0.0]]])\n if data_point and data_point[0]:\n value = data_point[0][-1][0]\n except (ValueError, KeyError, AttributeError):\n pass\n\n return (value, unit)", "response": "Return ( value unit ) from a sensor node."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nchange a setting. Return a coroutine.", "response": "def change_setting(self, key, val):\n \"\"\"Change a setting.\n\n Return a coroutine.\n \"\"\"\n if isinstance(val, bool):\n payload = 'on' if val else 'off'\n else:\n payload = val\n return self._request('/settings/{}?set={}'.format(key, payload))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef record(self, record=True, tag=None):\n path = '/startvideo?force=1' if record else '/stopvideo?force=1'\n if record and tag is not None:\n path = '/startvideo?force=1&tag={}'.format(URL(tag).raw_path)\n\n return self._request(path)", "response": "Enable or disable recording."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nset the video orientation.", "response": "def set_orientation(self, orientation='landscape'):\n \"\"\"Set the video orientation.\n\n Return a coroutine.\n \"\"\"\n if orientation not in ALLOWED_ORIENTATIONS:\n _LOGGER.debug('%s is not a valid orientation', orientation)\n return False\n return self.change_setting('orientation', orientation)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef set_scenemode(self, scenemode='auto'):\n if scenemode not in
self.available_settings['scenemode']:\n _LOGGER.debug('%s is not a valid scenemode', scenemode)\n return False\n return self.change_setting('scenemode', scenemode)", "response": "Set the video scene mode."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_timesheet_collection_for_context(ctx, entries_file=None):\n if not entries_file:\n entries_file = ctx.obj['settings'].get_entries_file_path(False)\n\n parser = TimesheetParser(\n date_format=ctx.obj['settings']['date_format'],\n add_date_to_bottom=ctx.obj['settings'].get_add_to_bottom(),\n flags_repr=ctx.obj['settings'].get_flags(),\n )\n\n return TimesheetCollection.load(entries_file, ctx.obj['settings']['nb_previous_files'], parser)", "response": "Returns a TimesheetCollection object with the current timesheets."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncreates main configuration file if it doesn t exist.", "response": "def create_config_file(filename):\n \"\"\"\n Create main configuration file if it doesn't exist.\n \"\"\"\n import textwrap\n from six.moves.urllib import parse\n\n if not os.path.exists(filename):\n old_default_config_file = os.path.join(os.path.dirname(filename),\n '.tksrc')\n if os.path.exists(old_default_config_file):\n upgrade = click.confirm(\"\\n\".join(textwrap.wrap(\n \"It looks like you recently updated Taxi. Some \"\n \"configuration changes are required. 
You can either let \"\n \"me upgrade your configuration file or do it \"\n \"manually.\")) + \"\\n\\nProceed with automatic configuration \"\n \"file upgrade?\", default=True\n )\n\n if upgrade:\n settings = Settings(old_default_config_file)\n settings.convert_to_4()\n with open(filename, 'w') as config_file:\n settings.config.write(config_file)\n os.remove(old_default_config_file)\n return\n else:\n print(\"Ok then.\")\n sys.exit(0)\n\n welcome_msg = \"Welcome to Taxi!\"\n click.secho(welcome_msg, fg='green', bold=True)\n click.secho('=' * len(welcome_msg) + '\\n', fg='green', bold=True)\n\n click.echo(click.wrap_text(\n \"It looks like this is the first time you run Taxi. You will need \"\n \"a configuration file ({}) in order to proceed. Please answer a \"\n \"few questions to create your configuration file.\".format(\n filename\n )\n ) + '\\n')\n\n config = pkg_resources.resource_string('taxi', 'etc/taxirc.sample').decode('utf-8')\n context = {}\n available_backends = plugins_registry.get_available_backends()\n\n context['backend'] = click.prompt(\n \"Backend you want to use (choices are %s)\" %\n ', '.join(available_backends),\n type=click.Choice(available_backends)\n )\n context['username'] = click.prompt(\"Username or token\")\n context['password'] = parse.quote(\n click.prompt(\"Password (leave empty if you're using\"\n \" a token)\", hide_input=True, default=''),\n safe=''\n )\n # Password can be empty in case of token auth so the ':' separator\n # is not included in the template config, so we add it if the user\n # has set a password\n if context['password']:\n context['password'] = ':' + context['password']\n\n context['hostname'] = click.prompt(\n \"Hostname of the backend (eg. 
timesheets.example.com)\",\n type=Hostname()\n )\n\n editor = Editor().get_editor()\n context['editor'] = click.prompt(\n \"Editor command to edit your timesheets\", default=editor\n )\n\n templated_config = config.format(**context)\n\n directory = os.path.dirname(filename)\n if not os.path.exists(directory):\n os.makedirs(directory)\n\n with open(filename, 'w') as f:\n f.write(templated_config)\n else:\n settings = Settings(filename)\n conversions = settings.needed_conversions\n\n if conversions:\n for conversion in conversions:\n conversion()\n\n settings.write_config()"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef date_options(func):\n @click.option(\n '--until', type=Date(), help=\"Only show entries until the given date.\"\n )\n @click.option(\n '--since', type=Date(), help=\"Only show entries starting at the given date.\",\n )\n @click.option(\n '--today/--not-today', default=None, help=\"Only include today's entries (same as --since=today --until=today)\"\n \" or ignore today's entries (same as --until=yesterday)\"\n )\n @functools.wraps(func)\n def wrapper(*args, **kwargs):\n since, until, today = kwargs.pop('since'), kwargs.pop('until'), kwargs.pop('today')\n\n if today is not None:\n if today:\n date = datetime.date.today()\n else:\n date = (None, datetime.date.today() - datetime.timedelta(days=1))\n elif since is not None or until is not None:\n date = (since, until)\n else:\n date = None\n\n kwargs['date'] = date\n\n return func(*args, **kwargs)\n\n return wrapper", "response": "Decorator to add support for date options to the given command."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns the given date with nb_months substracted from it.", "response": "def months_ago(date, nb_months=1):\n \"\"\"\n Return the given `date` with `nb_months` substracted from it.\n \"\"\"\n nb_years = nb_months // 12\n nb_months = nb_months % 12\n\n month_diff = 
date.month - nb_months\n\n if month_diff > 0:\n new_month = month_diff\n else:\n new_month = 12 + month_diff\n nb_years += 1\n\n return date.replace(day=1, month=new_month, year=date.year - nb_years)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nparses a date and return it as a datetime. date object.", "response": "def time_ago_to_date(value):\n \"\"\"\n Parse a date and return it as ``datetime.date`` objects. Examples of valid dates:\n\n * Relative: 2 days ago, today, yesterday, 1 week ago\n * Absolute: 25.10.2017\n \"\"\"\n today = datetime.date.today()\n\n if value == 'today':\n return today\n elif value == 'yesterday':\n return today - datetime.timedelta(days=1)\n\n time_ago = re.match(r'(\\d+) (days?|weeks?|months?|years?) ago', value)\n\n if time_ago:\n duration, unit = int(time_ago.group(1)), time_ago.group(2)\n\n if 'day' in unit:\n return today - datetime.timedelta(days=duration)\n elif 'week' in unit:\n return today - datetime.timedelta(weeks=duration)\n elif 'month' in unit:\n return months_ago(today, duration)\n elif 'year' in unit:\n return today.replace(year=today.year - duration)\n\n return datetime.datetime.strptime(value, '%d.%m.%Y').date()"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef list_(ctx, search_string, reverse, backend, used, inactive):\n if not reverse:\n list_aliases(ctx, search_string, backend, used, inactive=inactive)\n else:\n show_mapping(ctx, search_string, backend)", "response": "List configured aliases. Aliases in red belong to inactive projects and trying to push entries to these aliases\n will probably result in an error."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef add(ctx, alias, mapping, backend):\n if not backend:\n backends_list = ctx.obj['settings'].get_backends()\n if len(backends_list) > 1:\n raise click.UsageError(\n \"You're using more than 1 backend. 
Please set the backend to \"\n \"add the alias to with the --backend option (choices are %s)\" %\n \", \".join(dict(backends_list).keys())\n )\n\n add_mapping(ctx, alias, mapping, backend)", "response": "Add a new alias to your configuration file."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nconvert a pre - 4. 0 configuration file to a 4. 0 configuration file.", "response": "def convert_to_4(self):\n \"\"\"\n Convert a pre-4.0 configuration file to a 4.0 configuration file.\n \"\"\"\n from six.moves.urllib import parse\n\n if not self.config.has_section('backends'):\n self.config.add_section('backends')\n\n site = parse.urlparse(self.get('site', default_value=''))\n backend_uri = 'zebra://{username}:{password}@{hostname}'.format(\n username=self.get('username', default_value=''),\n password=parse.quote(self.get('password', default_value=''),\n safe=''),\n hostname=site.hostname\n )\n self.config.set('backends', 'default', backend_uri)\n\n self.config.remove_option('default', 'username')\n self.config.remove_option('default', 'password')\n self.config.remove_option('default', 'site')\n\n if not self.config.has_section('default_aliases'):\n self.config.add_section('default_aliases')\n\n if not self.config.has_section('default_shared_aliases'):\n self.config.add_section('default_shared_aliases')\n\n if self.config.has_section('wrmap'):\n for alias, mapping in self.config.items('wrmap'):\n self.config.set('default_aliases', alias, mapping)\n\n self.config.remove_section('wrmap')\n\n if self.config.has_section('shared_wrmap'):\n for alias, mapping in self.config.items('shared_wrmap'):\n self.config.set('default_shared_aliases', alias, mapping)\n\n self.config.remove_section('shared_wrmap')"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef autofill(ctx, f):\n auto_fill_days = ctx.obj['settings']['auto_fill_days']\n\n if not auto_fill_days:\n ctx.obj['view'].view.err(\"The 
parameter `auto_fill_days` must be set \"\n \"to use this command.\")\n return\n\n today = datetime.date.today()\n last_day = calendar.monthrange(today.year, today.month)\n last_date = datetime.date(today.year, today.month, last_day[1])\n\n timesheet_collection = get_timesheet_collection_for_context(\n ctx, f\n )\n t = timesheet_collection.latest()\n t.prefill(auto_fill_days, last_date)\n t.save()\n\n ctx.obj['view'].msg(\"Your entries file has been filled.\")", "response": "Fills your timesheet up to today for the specified auto_fill_days."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef list_(ctx, search, backend):\n projects = ctx.obj['projects_db'].search(search, backend=backend)\n projects = sorted(projects, key=lambda project: project.name.lower())\n ctx.obj['view'].search_results(projects)", "response": "List all projects in the database."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef alias(ctx, search, backend):\n projects = ctx.obj['projects_db'].search(search, active_only=True)\n projects = sorted(projects, key=lambda project: project.name)\n\n if len(projects) == 0:\n ctx.obj['view'].msg(\n \"No active project matches your search string '%s'.\" %\n ''.join(search)\n )\n return\n\n ctx.obj['view'].projects_list(projects, True)\n\n try:\n number = ctx.obj['view'].select_project(projects)\n except CancelException:\n return\n\n project = projects[number]\n ctx.obj['view'].project_with_activities(project, numbered_activities=True)\n\n try:\n number = ctx.obj['view'].select_activity(project.activities)\n except CancelException:\n return\n\n retry = True\n while retry:\n try:\n alias = ctx.obj['view'].select_alias()\n except CancelException:\n return\n\n if alias in aliases_database:\n mapping = aliases_database[alias]\n overwrite = ctx.obj['view'].overwrite_alias(alias, mapping)\n\n if not overwrite:\n return\n elif overwrite:\n retry 
= False\n # User chose \"retry\"\n else:\n retry = True\n else:\n retry = False\n\n activity = project.activities[number]\n mapping = Mapping(mapping=(project.id, activity.id),\n backend=project.backend)\n ctx.obj['settings'].add_alias(alias, mapping)\n ctx.obj['settings'].write_config()\n\n ctx.obj['view'].alias_added(alias, (project.id, activity.id))", "response": "Adds an alias for the given project."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef show(ctx, project_id, backend):\n try:\n project = ctx.obj['projects_db'].get(project_id, backend)\n except IOError:\n raise Exception(\"Error: the projects database file doesn't exist. \"\n \"Please run `taxi update` to create it\")\n\n if project is None:\n ctx.obj['view'].err(\n \"Could not find project `%s`\" % (project_id)\n )\n else:\n ctx.obj['view'].project_with_activities(project)", "response": "Show the details of the given project id."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef stop(ctx, description, f):\n description = ' '.join(description)\n try:\n timesheet_collection = get_timesheet_collection_for_context(ctx, f)\n current_timesheet = timesheet_collection.latest()\n current_timesheet.continue_entry(\n datetime.date.today(),\n datetime.datetime.now().time(),\n description\n )\n except ParseError as e:\n ctx.obj['view'].err(e)\n except NoActivityInProgressError as e:\n ctx.obj['view'].err(e)\n except StopInThePastError as e:\n ctx.obj['view'].err(e)\n else:\n current_timesheet.save()", "response": "Stop the current task."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef start(ctx, alias, description, f):\n today = datetime.date.today()\n\n try:\n timesheet_collection = get_timesheet_collection_for_context(ctx, f)\n except ParseError as e:\n ctx.obj['view'].err(e)\n return\n\n t = timesheet_collection.latest()\n\n # If 
there's a previous entry on the same date, check if we can use its\n # end time as a start time for the newly started entry\n today_entries = t.entries.filter(date=today)\n if(today in today_entries and today_entries[today]\n and isinstance(today_entries[today][-1].duration, tuple)\n and today_entries[today][-1].duration[1] is not None):\n new_entry_start_time = today_entries[today][-1].duration[1]\n else:\n new_entry_start_time = datetime.datetime.now()\n\n description = ' '.join(description) if description else '?'\n duration = (new_entry_start_time, None)\n\n e = Entry(alias, duration, description)\n t.entries[today].append(e)\n t.save()", "response": "Start a new entry in the entries file."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef create_time_from_text(text):\n text = text.replace(':', '')\n\n if not re.match('^\\d{3,}$', text):\n raise ValueError(\"Time must be numeric\")\n\n minutes = int(text[-2:])\n hours = int(text[0:2] if len(text) > 3 else text[0])\n\n return datetime.time(hours, minutes)", "response": "Parse a time in the form hhmm or hmm or even hmm or hmm and return a datetime. 
time object."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef duration_to_text(self, duration):\n if isinstance(duration, tuple):\n start = (duration[0].strftime(self.ENTRY_DURATION_FORMAT)\n if duration[0] is not None\n else '')\n\n end = (duration[1].strftime(self.ENTRY_DURATION_FORMAT)\n if duration[1] is not None\n else '?')\n\n duration = '%s-%s' % (start, end)\n else:\n duration = six.text_type(duration)\n\n return duration", "response": "Return the textual representation of the given duration."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning the textual representation of the given line.", "response": "def to_text(self, line):\n \"\"\"\n Return the textual representation of the given `line`.\n \"\"\"\n return getattr(self, self.ENTRY_TRANSFORMERS[line.__class__])(line)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn the textual representation of the given date line.", "response": "def date_line_to_text(self, date_line):\n \"\"\"\n Return the textual representation of the given :class:`~taxi.timesheet.lines.DateLine` instance. 
The date\n format is set by the `date_format` parameter given when instanciating the parser instance.\n \"\"\"\n # Changing the date in a dateline is not supported yet, but if it gets implemented someday this will need to be\n # changed\n if date_line._text is not None:\n return date_line._text\n else:\n return date_utils.unicode_strftime(date_line.date, self.date_format)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef entry_line_to_text(self, entry):\n line = []\n\n # The entry is new, it didn't come from an existing line, so let's just return a simple text representation of\n # it\n if not entry._text:\n flags_text = self.flags_to_text(entry.flags)\n duration_text = self.duration_to_text(entry.duration)\n\n return ''.join(\n (flags_text, ' ' if flags_text else '', entry.alias, ' ', duration_text, ' ', entry.description)\n )\n\n for i, text in enumerate(entry._text):\n # If this field is mapped to an attribute, check if it has changed\n # and, if so, regenerate its text. The only fields that are not\n # mapped to attributes are spacing fields\n if i in self.ENTRY_ATTRS_POSITION:\n if self.ENTRY_ATTRS_POSITION[i] in entry._changed_attrs:\n attr_name = self.ENTRY_ATTRS_POSITION[i]\n attr_value = getattr(entry, self.ENTRY_ATTRS_POSITION[i])\n\n # Some attributes need to be transformed to their textual representation, such as flags or duration\n if attr_name in self.ENTRY_ATTRS_TRANSFORMERS:\n attr_value = getattr(self, self.ENTRY_ATTRS_TRANSFORMERS[attr_name])(attr_value)\n else:\n attr_value = text\n\n line.append(attr_value)\n else:\n # If the length of the field has changed, do whatever we can to keep the current formatting (ie. 
number\n # of whitespaces)\n if len(line[i-1]) != len(entry._text[i-1]):\n text = ' ' * max(1, (len(text) - (len(line[i-1]) - len(entry._text[i-1]))))\n\n line.append(text)\n\n return ''.join(line).strip()", "response": "Converts an entry line into a textual representation of an entry."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef create_entry_line_from_text(self, text):\n split_line = re.match(self.entry_line_regexp, text)\n\n if not split_line:\n raise ParseError(\"Line must have an alias, a duration and a description\")\n\n alias = split_line.group('alias')\n start_time = end_time = None\n\n if split_line.group('start_time') is not None:\n if split_line.group('start_time'):\n try:\n start_time = create_time_from_text(split_line.group('start_time'))\n except ValueError:\n raise ParseError(\"Start time is not a valid time, it must be in format hh:mm or hhmm\")\n else:\n start_time = None\n\n if split_line.group('end_time') is not None:\n if split_line.group('end_time') == '?':\n end_time = None\n else:\n try:\n end_time = create_time_from_text(split_line.group('end_time'))\n except ValueError:\n raise ParseError(\"End time is not a valid time, it must be in format hh:mm or hhmm\")\n\n if split_line.group('duration') is not None:\n duration = float(split_line.group('duration'))\n elif start_time or end_time:\n duration = (start_time, end_time)\n else:\n duration = (None, None)\n\n description = split_line.group('description')\n\n # Parse and set line flags\n if split_line.group('flags'):\n try:\n flags = self.extract_flags_from_text(split_line.group('flags'))\n # extract_flags_from_text will raise `KeyError` if one of the flags is not recognized. 
This should never\n # happen though as the list of accepted flags is bundled in self.entry_line_regexp\n except KeyError as e:\n raise ParseError(*e.args)\n else:\n flags = set()\n\n # Backwards compatibility with previous notation that allowed to end the alias with a `?` to ignore it\n if alias.endswith('?'):\n flags.add(Entry.FLAG_IGNORED)\n alias = alias[:-1]\n\n if description == '?':\n flags.add(Entry.FLAG_IGNORED)\n\n line = (\n split_line.group('flags') or '',\n split_line.group('spacing1') or '',\n split_line.group('alias'),\n split_line.group('spacing2'),\n split_line.group('time'),\n split_line.group('spacing3'),\n split_line.group('description'),\n )\n\n entry_line = Entry(alias, duration, description, flags=flags, text=line)\n\n return entry_line", "response": "Parses the given text line and returns an Entry object."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nparses a text in the form dd mm dd or yyyy or dd or yyyy or dd or yyyy or dd or yyyy.", "response": "def create_date_from_text(self, text):\n \"\"\"\n Parse a text in the form dd/mm/yyyy, dd/mm/yy or yyyy/mm/dd and return a corresponding :class:`datetime.date`\n object. 
If no date can be extracted from the given text, a :exc:`ValueError` will be raised.\n \"\"\"\n # Try to match dd/mm/yyyy format\n date_matches = re.match(self.DATE_LINE_REGEXP, text)\n\n # If no match, try with yyyy/mm/dd format\n if date_matches is None:\n date_matches = re.match(self.US_DATE_LINE_REGEXP, text)\n\n if date_matches is None:\n raise ValueError(\"No date could be extracted from the given value\")\n\n # yyyy/mm/dd\n if len(date_matches.group(1)) == 4:\n return datetime.date(int(date_matches.group(1)), int(date_matches.group(2)), int(date_matches.group(3)))\n\n # dd/mm/yy\n if len(date_matches.group(3)) == 2:\n current_year = datetime.date.today().year\n current_millennium = current_year - (current_year % 1000)\n year = current_millennium + int(date_matches.group(3))\n # dd/mm/yyyy\n else:\n year = int(date_matches.group(3))\n\n return datetime.date(year, int(date_matches.group(2)), int(date_matches.group(1)))"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nextracts the flags from the given text and return a set of flag values.", "response": "def extract_flags_from_text(self, text):\n \"\"\"\n Extract the flags from the given text and return a :class:`set` of flag values. 
See\n :class:`~taxi.timesheet.lines.Entry` for a list of existing flags.\n \"\"\"\n flags = set()\n reversed_flags_repr = {v: k for k, v in self.flags_repr.items()}\n for flag_repr in text:\n if flag_repr not in reversed_flags_repr:\n raise KeyError(\"Flag '%s' is not recognized\" % flag_repr)\n else:\n flags.add(reversed_flags_repr[flag_repr])\n\n return flags"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef parse_text(self, text):\n text = text.strip()\n lines = text.splitlines()\n parsed_lines = []\n encountered_date = False\n\n for (lineno, line) in enumerate(lines, 1):\n try:\n parsed_line = self.parse_line(line)\n\n if isinstance(parsed_line, DateLine):\n encountered_date = True\n elif isinstance(parsed_line, Entry) and not encountered_date:\n raise ParseError(\"Entries must be defined inside a date section\")\n except ParseError as e:\n # Update exception with some more information\n e.line_number = lineno\n e.line = line\n raise\n else:\n parsed_lines.append(parsed_line)\n\n return parsed_lines", "response": "Parse the given text and return a list of : class:`~taxi. timesheet. lines. DateLine and ~taxi. timesheet. lines. Entry and ~taxi. timesheet. lines. TextLine objects."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef parse_line(self, text):\n text = text.strip().replace('\\t', ' ' * 4)\n\n # The logic is: if the line starts with a #, consider it's a comment (TextLine), otherwise try to parse it as a\n # date and if this fails, try to parse it as an entry. 
If this fails too, the line is not valid\n if len(text) == 0 or text.startswith('#'):\n parsed_line = TextLine(text)\n else:\n try:\n date = self.create_date_from_text(text)\n except ValueError:\n parsed_line = self.create_entry_line_from_text(text)\n else:\n parsed_line = DateLine(date, text)\n\n return parsed_line", "response": "Parses the given text and returns either a DateLine or an EntryLine."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef add_date(self, date, lines):\n _lines = lines[:]\n _lines = trim(_lines)\n\n if self.add_date_to_bottom is None:\n add_date_to_bottom = is_top_down(lines)\n else:\n add_date_to_bottom = self.add_date_to_bottom\n\n if add_date_to_bottom:\n _lines.append(TextLine(''))\n _lines.append(DateLine(date))\n else:\n _lines.insert(0, TextLine(''))\n _lines.insert(0, DateLine(date))\n\n return trim(_lines)", "response": "Return the given lines with the date added in the right place."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef update(ctx):\n ctx.obj['view'].updating_projects_database()\n\n projects = []\n\n for backend_name, backend_uri in ctx.obj['settings'].get_backends():\n backend = plugins_registry.get_backend(backend_name)\n backend_projects = backend.get_projects()\n\n for project in backend_projects:\n project.backend = backend_name\n\n projects += backend_projects\n\n ctx.obj['projects_db'].update(projects)\n\n # Put the shared aliases in the config file\n shared_aliases = {}\n backends_to_clear = set()\n for project in projects:\n for alias, activity_id in six.iteritems(project.aliases):\n mapping = Mapping(mapping=(project.id, activity_id),\n backend=project.backend)\n shared_aliases[alias] = mapping\n backends_to_clear.add(project.backend)\n\n for backend in backends_to_clear:\n ctx.obj['settings'].clear_shared_aliases(backend)\n\n # The user can have local aliases with additional information (eg. 
role definition). If these aliases also exist on\n # the remote, then they probably need to be cleared out locally to make sure they don't unintentionally use an\n # alias with a wrong role\n current_aliases = ctx.obj['settings'].get_aliases()\n removed_aliases = [\n (alias, mapping)\n for alias, mapping in current_aliases.items()\n if (alias in shared_aliases and shared_aliases[alias].backend == mapping.backend\n and mapping.mapping[:2] != shared_aliases[alias].mapping[:2])\n ]\n\n if removed_aliases:\n ctx.obj['settings'].remove_aliases(removed_aliases)\n\n for alias, mapping in shared_aliases.items():\n ctx.obj['settings'].add_shared_alias(alias, mapping)\n\n aliases_after_update = ctx.obj['settings'].get_aliases()\n\n ctx.obj['settings'].write_config()\n\n ctx.obj['view'].projects_database_update_success(\n aliases_after_update, ctx.obj['projects_db']\n )", "response": "Synchronizes your project database with the server and updates the shared aliases."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef attributes(self):\n if not self._attribute_pages:\n self.fetch_attributes()\n result = {}\n for page in self._attribute_pages.values():\n result.update(page.attributes)\n return result", "response": "A dictionary mapping names of attributes to BiomartAttribute instances."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn a list of backends that are instances of the given backend_class.", "response": "def get_backends_by_class(self, backend_class):\n \"\"\"\n Return a list of backends that are instances of the given `backend_class`.\n \"\"\"\n return [backend for backend in self._backends_registry.values() if isinstance(backend, backend_class)]"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef populate_backends(self, backends, context):\n for name, uri in backends.items():\n self._backends_registry[name] 
= self._load_backend(uri, context)", "response": "Populate the internal dictionary with the given backends."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _load_backend(self, backend_uri, context):\n parsed = parse.urlparse(backend_uri)\n options = dict(parse.parse_qsl(parsed.query))\n\n try:\n backend = self._entry_points[self.BACKENDS_ENTRY_POINT][parsed.scheme].load()\n except KeyError:\n raise BackendNotFoundError(\n \"The requested backend `%s` could not be found in the \"\n \"registered entry points. Perhaps you forgot to install the \"\n \"corresponding backend package?\" % parsed.scheme\n )\n\n password = (parse.unquote(parsed.password)\n if parsed.password\n else parsed.password)\n\n return backend(\n username=parsed.username, password=password,\n hostname=parsed.hostname, port=parsed.port,\n path=parsed.path, options=options, context=context,\n )", "response": "Loads the backend identified by the given backend_uri."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef register_commands(self):\n for command in self._entry_points[self.COMMANDS_ENTRY_POINT].values():\n command.load()", "response": "Load entry points for custom commands."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _add_or_remove_flag(self, flag, add):\n meth = self.add_flag if add else self.remove_flag\n meth(flag)", "response": "Add or remove the given flag if add is True."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_application_configuration(name):\n _check()\n rc = _ec.get_application_configuration(name)\n if rc is False:\n raise ValueError(\"Application configuration {0} not found.\".format(name))\n return rc", "response": "Get an application configuration."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation 
for the following Python 3 function\ndef _submit(primitive, port_index, tuple_):\n args = (_get_opc(primitive), port_index, tuple_)\n _ec._submit(args)", "response": "Internal method to submit a tuple"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nsets the current value of the metric.", "response": "def value(self, value):\n \"\"\"\n Set the current value of the metric.\n \"\"\"\n args = (self.__ptr, int(value))\n _ec.metric_set(args)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning a set of tags associated with the resource.", "response": "def resource_tags(self):\n \"\"\"Resource tags for this processing logic.\n\n Tags are a mechanism for differentiating and identifying resources that have different physical characteristics or logical uses. For example a resource (host) that has external connectivity for public data sources may be tagged `ingest`.\n\n Processing logic can be associated with one or more tags to require \n running on suitably tagged resources. For example\n adding tags `ingest` and `db` requires that the processing element\n containing the callable that created the stream runs on a host\n tagged with both `ingest` and `db`.\n\n A :py:class:`~streamsx.topology.topology.Stream` that was not created directly with a Python callable\n cannot have tags associated with it. For example a stream that\n is a :py:meth:`~streamsx.topology.topology.Stream.union` of multiple streams cannot be tagged.\n In this case this method returns an empty `frozenset` which\n cannot be modified.\n\n See https://www.ibm.com/support/knowledgecenter/en/SSCRJU_4.2.1/com.ibm.streams.admin.doc/doc/tags.html for more details of tags within IBM Streams.\n\n Returns:\n set: Set of resource tags, initially empty.\n\n .. warning:: If no resources exist with the required tags then job submission will fail.\n \n .. versionadded:: 1.7\n .. 
versionadded:: 1.9 Support for :py:class:`Sink` and :py:class:`~streamsx.spl.op.Invoke`.\n \n \"\"\"\n try:\n plc = self._op()._placement\n if not 'resourceTags' in plc:\n plc['resourceTags'] = set()\n return plc['resourceTags']\n except TypeError:\n return frozenset()"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nsubmits a topology to an application using the specified context type.", "response": "def submit(ctxtype, graph, config=None, username=None, password=None):\n \"\"\"\n Submits a `Topology` (application) using the specified context type.\n\n Used to submit an application for compilation into a Streams application and\n execution within an Streaming Analytics service or IBM Streams instance.\n\n `ctxtype` defines how the application will be submitted, see :py:class:`ContextTypes`.\n\n The parameters `username` and `password` are only required when submitting to an\n IBM Streams instance and it is required to access the Streams REST API from the\n code performing the submit. Accessing data from views created by\n :py:meth:`~streamsx.topology.topology.Stream.view` requires access to the Streams REST API.\n\n Args:\n ctxtype(str): Type of context the application will be submitted to. A value from :py:class:`ContextTypes`.\n graph(Topology): The application topology to be submitted.\n config(dict): Configuration for the submission.\n username(str): Username for the Streams REST api.\n password(str): Password for `username`.\n\n Returns:\n SubmissionResult: Result of the submission. 
For details of what is contained see the :py:class:`ContextTypes`\n constant passed as `ctxtype`.\n \"\"\"\n streamsx._streams._version._mismatch_check(__name__)\n graph = graph.graph\n\n if not graph.operators:\n raise ValueError(\"Topology {0} does not contain any streams.\".format(graph.topology.name))\n\n context_submitter = _SubmitContextFactory(graph, config, username, password).get_submit_context(ctxtype)\n sr = SubmissionResult(context_submitter.submit())\n sr._submitter = context_submitter\n return sr"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _vcap_from_service_definition(service_def):\n if 'credentials' in service_def:\n credentials = service_def['credentials']\n else:\n credentials = service_def\n\n service = {}\n service['credentials'] = credentials\n service['name'] = _name_from_service_definition(service_def)\n vcap = {'streaming-analytics': [service]}\n return vcap", "response": "Turn a service definition into a vcap services\n containing a single service."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\npassing the VCAP through the environment to the java submission", "response": "def _get_java_env(self):\n \"Pass the VCAP through the environment to the java submission\"\n env = super(_StreamingAnalyticsSubmitter, self)._get_java_env()\n vcap = streamsx.rest._get_vcap_services(self._vcap_services)\n env['VCAP_SERVICES'] = json.dumps(vcap)\n return env"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nset env vars from connection if set", "response": "def _get_java_env(self):\n \"Set env vars from connection if set\"\n env = super(_DistributedSubmitter, self)._get_java_env()\n if self._streams_connection is not None:\n # Need to sure the environment matches the connection.\n sc = self._streams_connection\n if isinstance(sc._delegator, streamsx.rest_primitives._StreamsRestDelegator):\n env.pop('STREAMS_DOMAIN_ID', None)\n 
env.pop('STREAMS_INSTANCE_ID', None)\n else:\n env['STREAMS_DOMAIN_ID'] = sc.get_domains()[0].id\n if not ConfigParams.SERVICE_DEFINITION in self._config():\n env['STREAMS_REST_URL'] = sc.resource_url\n env['STREAMS_USERNAME'] = sc.session.auth[0]\n env['STREAMS_PASSWORD'] = sc.session.auth[1]\n return env"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef from_overlays(overlays):\n jc = JobConfig()\n jc.comment = overlays.get('comment')\n if 'jobConfigOverlays' in overlays:\n if len(overlays['jobConfigOverlays']) >= 1:\n jco = copy.deepcopy(overlays['jobConfigOverlays'][0])\n\n # Now extract the logical information\n if 'jobConfig' in jco:\n _jc = jco['jobConfig']\n jc.job_name = _jc.pop('jobName', None)\n jc.job_group = _jc.pop('jobGroup', None)\n jc.preload = _jc.pop('preloadApplicationBundles', False)\n jc.data_directory = _jc.pop('dataDirectory', None)\n jc.tracing = _jc.pop('tracing', None)\n\n for sp in _jc.pop('submissionParameters', []):\n jc.submission_parameters[sp['name']] = sp['value']\n\n if not _jc:\n del jco['jobConfig']\n if 'deploymentConfig' in jco:\n _dc = jco['deploymentConfig']\n if 'manual' == _dc.get('fusionScheme'):\n if 'fusionTargetPeCount' in _dc:\n jc.target_pe_count = _dc.pop('fusionTargetPeCount')\n if len(_dc) == 1:\n del jco['deploymentConfig']\n\n if jco:\n jc.raw_overlay = jco\n return jc", "response": "Create a JobConfig instance from a full job configuration overlays object."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nadds this as a jobConfigOverlays JSON to config.", "response": "def _add_overlays(self, config):\n \"\"\"\n Add this as a jobConfigOverlays JSON to config.\n \"\"\"\n if self._comment:\n config['comment'] = self._comment\n\n jco = {}\n config[\"jobConfigOverlays\"] = [jco]\n\n if self._raw_overlay:\n jco.update(self._raw_overlay)\n\n jc = jco.get('jobConfig', {})\n\n if self.job_name is not 
None:\n jc[\"jobName\"] = self.job_name\n if self.job_group is not None:\n jc[\"jobGroup\"] = self.job_group\n if self.data_directory is not None:\n jc[\"dataDirectory\"] = self.data_directory\n if self.preload:\n jc['preloadApplicationBundles'] = True\n if self.tracing is not None:\n jc['tracing'] = self.tracing\n\n if self.submission_parameters:\n sp = jc.get('submissionParameters', [])\n for name in self.submission_parameters:\n sp.append({'name': str(name), 'value': self.submission_parameters[name]})\n jc['submissionParameters'] = sp\n\n if jc:\n jco[\"jobConfig\"] = jc\n\n if self.target_pe_count is not None and self.target_pe_count >= 1:\n deployment = jco.get('deploymentConfig', {})\n deployment.update({'fusionScheme' : 'manual', 'fusionTargetPeCount' : self.target_pe_count})\n jco[\"deploymentConfig\"] = deployment\n return config"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef job(self):\n if self._submitter and hasattr(self._submitter, '_job_access'):\n return self._submitter._job_access()\n return None", "response": "Returns the REST binding for the running job associated with the submitted build."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ndisplay a button that will cancel the submission of the current job.", "response": "def cancel_job_button(self, description=None):\n \"\"\"Display a button that will cancel the submitted job.\n\n Used in a Jupyter IPython notebook to provide an interactive\n mechanism to cancel a job submitted from the notebook.\n\n Once clicked the button is disabled unless the cancel fails.\n\n A job may be cancelled directly using::\n\n submission_result = submit(ctx_type, topology, config)\n submission_result.job.cancel()\n\n Args:\n\n description(str): Text used as the button description, defaults to value based upon the job name.\n\n .. warning::\n Behavior when called outside a notebook is undefined.\n\n .. 
versionadded:: 1.12\n \"\"\"\n if not hasattr(self, 'jobId'):\n return\n \n try:\n import ipywidgets as widgets\n if not description:\n description = 'Cancel job: '\n description += self.name if hasattr(self, 'name') else self.job.name\n button = widgets.Button(description=description,\n button_style='danger',\n layout=widgets.Layout(width='40%'))\n out = widgets.Output()\n vb = widgets.VBox([button, out])\n @out.capture(clear_output=True)\n def _cancel_job_click(b):\n b.disabled=True\n print('Cancelling job: id=' + str(self.job.id) + ' ...\\n', flush=True)\n try:\n rc = self.job.cancel()\n out.clear_output()\n if rc:\n print('Cancelled job: id=' + str(self.job.id) + ' : ' + self.job.name + '\\n', flush=True)\n else:\n print('Job already cancelled: id=' + str(self.job.id) + ' : ' + self.job.name + '\\n', flush=True)\n except:\n b.disabled=False\n out.clear_output()\n raise\n \n button.on_click(_cancel_job_click)\n display(vb)\n except:\n pass"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nadds a default value and a default type for a key.", "response": "def add_default(self, key: str, value: Optional[str], default_type: type = str) -> None:\n \"\"\"\n Adds a default value and a default type for a key.\n\n :param key: Key\n :param value: *Serialized* default value, i.e. 
a string or ``None``.\n :param default_type: The type to unserialize values for this key to, defaults to ``str``.\n \"\"\"\n self.defaults[key] = HierarkeyDefault(value, default_type)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef add_type(self, type: type, serialize: Callable[[Any], str], unserialize: Callable[[str], Any]) -> None:\n self.types.append(HierarkeyType(type=type, serialize=serialize, unserialize=unserialize))", "response": "Adds serialization support for a new type."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef add(self, cache_namespace: str = None, parent_field: str = None) -> type:\n if isinstance(cache_namespace, type):\n raise ImproperlyConfigured('Incorrect decorator usage, you need to use .add() instead of .add')\n\n def wrapper(model):\n if not issubclass(model, models.Model):\n raise ImproperlyConfigured('Hierarkey.add() can only be invoked on a Django model')\n\n _cache_namespace = cache_namespace or ('%s_%s' % (model.__name__, self.attribute_name))\n\n attrs = self._create_attrs(model)\n attrs['object'] = models.ForeignKey(model, related_name='_%s_objects' % self.attribute_name,\n on_delete=models.CASCADE)\n model_name = '%s_%sStore' % (model.__name__, self.attribute_name.title())\n kv_model = self._create_model(model_name, attrs)\n\n setattr(sys.modules[model.__module__], model_name, kv_model)\n\n hierarkey = self\n\n def prop(iself):\n from .proxy import HierarkeyProxy\n\n attrname = '_hierarkey_proxy_{}_{}'.format(_cache_namespace, self.attribute_name)\n cached = getattr(iself, attrname, None)\n if not cached:\n try:\n parent = getattr(iself, parent_field) if parent_field else None\n except models.ObjectDoesNotExist: # pragma: no cover\n parent = None\n\n if not parent and hierarkey.global_class:\n parent = hierarkey.global_class()\n\n cached = HierarkeyProxy._new(\n iself,\n type=kv_model,\n hierarkey=hierarkey,\n 
parent=parent,\n cache_namespace=_cache_namespace\n )\n setattr(iself, attrname, cached)\n return cached\n\n\n setattr(model, self.attribute_name, property(prop))\n\n return model\n\n return wrapper", "response": "A class decorator that creates a key-value store for the given hierarchy."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn value converted to a SPL expression if necessary, otherwise value.", "response": "def _as_spl_expr(value):\n \"\"\" Return value converted to an SPL expression if\n needed, otherwise value.\n \"\"\"\n import streamsx._streams._numpy\n\n if hasattr(value, 'spl_json'):\n return value\n \n if isinstance(value, Enum):\n value = streamsx.spl.op.Expression.expression(value.name)\n\n npcnv = streamsx._streams._numpy.as_spl_expr(value)\n if npcnv is not None:\n return npcnv\n \n return value"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngenerates a unique identifier for the current node.", "response": "def _unique_id(self, prefix):\n \"\"\"\n Generate a unique (within the graph) identifier\n internal to graph generation.\n \"\"\"\n _id = self._id_gen\n self._id_gen += 1\n return prefix + str(_id)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ncreating a unique name for an operator or a stream.", "response": "def _requested_name(self, name, action=None, func=None):\n \"\"\"Create a unique name for an operator or a stream.\n \"\"\"\n if name is not None:\n if name in self._used_names:\n # start at 2 for the \"second\" one of this name\n n = 2\n while True:\n pn = name + '_' + str(n)\n if pn not in self._used_names:\n self._used_names.add(pn)\n return pn\n n += 1\n else:\n self._used_names.add(name)\n return name\n\n if func is not None:\n if hasattr(func, '__name__'):\n name = func.__name__\n if name == '<lambda>':\n # Avoid use of <> characters in name\n # as they are converted to unicode\n # escapes in SPL identifier\n 
name = action + '_lambda'\n elif hasattr(func, '__class__'):\n name = func.__class__.__name__\n\n if name is None:\n if action is not None:\n name = action\n else:\n name = self.name\n\n # Recurse once to get unique version of name\n return self._requested_name(name)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef main_composite(kind, toolkits=None, name=None):\n if '::' in kind:\n ns, name = kind.rsplit('::', 1)\n ns += '._spl'\n else:\n raise ValueError('Main composite requires a namespace qualified name: ' + str(kind))\n topo = streamsx.topology.topology.Topology(name=name, namespace=ns)\n if toolkits:\n for tk_path in toolkits:\n streamsx.spl.toolkit.add_toolkit(topo, tk_path)\n return topo, Invoke(topo, kind, name=name)", "response": "Wrap a main composite invocation as a Topology."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef attribute(self, stream, name):\n if stream not in self._inputs:\n raise ValueError(\"Stream is not an input of this operator.\")\n if len(self._inputs) == 1:\n return Expression('attribute', name)\n else:\n iport = self._op().inputPorts[self._inputs.index(stream)]\n return Expression('attribute', iport._alias + '.' + name)", "response": "Returns an expression representing the input attribute."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncreating an output port assignment expression.", "response": "def output(self, stream, value):\n \"\"\"SPL output port assignment expression.\n\n Arguments:\n stream(Stream): Output stream the assignment is for.\n value(str): SPL expression used for an output assignment. 
This can be a string, a constant, or an :py:class:`Expression`.\n\n Returns:\n Expression: Output assignment expression that is valid as a the context of this operator.\n \"\"\"\n if stream not in self.outputs:\n raise ValueError(\"Stream is not an output of this operator.\")\n e = self.expression(value)\n e._stream = stream\n return e"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef output(self, value):\n return super(Source, self).output(self.stream, value)", "response": "SPL output port assignment expression."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef output(self, value):\n return super(Map, self).output(self.stream, value)", "response": "SPL output port assignment expression."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef expression(value):\n if isinstance(value, Expression):\n # Clone the expression to allow it to\n # be used in multiple contexts\n return Expression(value._type, value._value)\n if hasattr(value, 'spl_json'):\n sj = value.spl_json()\n return Expression(sj['type'], sj['value'])\n return Expression('splexpr', value)", "response": "Create an SPL expression from a value."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nextract a single object from the toolkit.", "response": "def _extract_from_toolkit(args):\n \"\"\"\n Look at all the modules in opt/python/streams (opt/python/streams/*.py)\n and extract any spl decorated function as an operator.\n \"\"\"\n\n extractor = _Extractor(args)\n if extractor._cmd_args.verbose:\n print(\"spl-python-extract:\", __version__)\n print(\"Topology toolkit location:\", _topology_tk_dir())\n\n tk_dir = extractor._tk_dir\n\n tk_streams = os.path.join(tk_dir, 'opt', 'python', 'streams')\n if not os.path.isdir(tk_streams) or not fnmatch.filter(os.listdir(tk_streams), 
'*.py'):\n # Nothing to do for Python extraction\n extractor._make_toolkit()\n return\n\n lf = os.path.join(tk_streams, '.lockfile')\n with open(lf, 'w') as lfno:\n fcntl.flock(lfno, fcntl.LOCK_EX)\n\n tk_idx = os.path.join(tk_dir, 'toolkit.xml')\n tk_time = os.path.getmtime(tk_idx) if os.path.exists(tk_idx) else None\n changed = False if tk_time else True\n if tk_time:\n for mf in glob.glob(os.path.join(tk_streams, '*.py')):\n if os.path.getmtime(mf) >= tk_time:\n changed = True\n break\n\n if changed:\n path_items = _setup_path(tk_dir, tk_streams)\n\n for mf in glob.glob(os.path.join(tk_streams, '*.py')):\n print('Checking ', mf, 'for operators')\n name = inspect.getmodulename(mf)\n dynm = imp.load_source(name, mf)\n streams_python_file = inspect.getsourcefile(dynm)\n extractor._process_operators(dynm, name, streams_python_file, inspect.getmembers(dynm, inspect.isfunction))\n extractor._process_operators(dynm, name, streams_python_file, inspect.getmembers(dynm, inspect.isclass))\n\n langList = extractor._copy_globalization_resources()\n if extractor._cmd_args.verbose:\n print(\"Available languages for TopologySplpy resource:\", langList)\n extractor._setup_info_xml(langList)\n\n extractor._make_toolkit()\n\n _reset_path(path_items)\n fcntl.flock(lfno, fcntl.LOCK_UN)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _copy_globalization_resources(self):\n '''Copy the language resource files for python api functions\n \n This function copies the TopologySplpy Resource files from Topology toolkit directory\n into the impl/nl folder of the project.\n Returns: the list with the copied locale strings'''\n rootDir = os.path.join(_topology_tk_dir(), \"impl\", \"nl\")\n languageList = []\n for dirName in os.listdir(rootDir):\n srcDir = os.path.join(_topology_tk_dir(), \"impl\", \"nl\", dirName)\n if (os.path.isdir(srcDir)) and (dirName != \"include\"):\n dstDir = os.path.join(self._tk_dir, \"impl\", 
\"nl\", dirName)\n try:\n print(\"Copy globalization resources \" + dirName)\n os.makedirs(dstDir)\n except OSError as e:\n if (e.errno == 17) and (os.path.isdir(dstDir)):\n if self._cmd_args.verbose:\n print(\"Directory\", dstDir, \"exists\")\n else:\n raise\n srcFile = os.path.join(srcDir, \"TopologySplpyResource.xlf\")\n if os.path.isfile(srcFile):\n res = shutil.copy2(srcFile, dstDir)\n languageList.append(dirName)\n if self._cmd_args.verbose:\n print(\"Written: \" + res)\n return languageList", "response": "Copy the language resource files for python api functions\n \n This function copies the TopologySplpy Resource files from Topology toolkit directory\n \n into the impl / nl folder of the project."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\noutputting information about `streamsx` and the environment. Useful for support to get key information for use of `streamsx` and Python in IBM Streams.", "response": "def main(args=None):\n \"\"\" Output information about `streamsx` and the environment.\n \n Useful for support to get key information for use of `streamsx`\n and Python in IBM Streams.\n \"\"\"\n _parse_args(args)\n streamsx._streams._version._mismatch_check('streamsx.topology.context')\n srp = pkg_resources.working_set.find(pkg_resources.Requirement.parse('streamsx'))\n if srp is not None:\n srv = srp.parsed_version\n location = srp.location\n spkg = 'package'\n else:\n srv = streamsx._streams._version.__version__\n location = os.path.dirname(streamsx._streams._version.__file__)\n location = os.path.dirname(location)\n location = os.path.dirname(location)\n tk_path = (os.path.join('com.ibm.streamsx.topology', 'opt', 'python', 'packages'))\n spkg = 'toolkit' if location.endswith(tk_path) else 'unknown'\n\n print('streamsx==' + str(srv) + ' (' + spkg + ')')\n print(' location: ' + str(location))\n print('Python version:' + str(sys.version))\n print('PYTHONHOME=' + str(os.environ.get('PYTHONHOME', 'unset')))\n print('PYTHONPATH=' 
+ str(os.environ.get('PYTHONPATH', 'unset')))\n print('PYTHONWARNINGS=' + str(os.environ.get('PYTHONWARNINGS', 'unset')))\n print('STREAMS_INSTALL=' + str(os.environ.get('STREAMS_INSTALL', 'unset')))\n print('JAVA_HOME=' + str(os.environ.get('JAVA_HOME', 'unset')))\n return 0"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef operator_driven(drain_timeout=_DEFAULT_DRAIN, reset_timeout=_DEFAULT_RESET, max_consecutive_attempts=_DEFAULT_ATTEMPTS):\n\n return ConsistentRegionConfig(trigger=ConsistentRegionConfig.Trigger.OPERATOR_DRIVEN, drain_timeout=drain_timeout, reset_timeout=reset_timeout, max_consecutive_attempts=max_consecutive_attempts)", "response": "Define an operator - driven consistent region configuration."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncreates a periodic consistent region configuration.", "response": "def periodic(period, drain_timeout=_DEFAULT_DRAIN, reset_timeout=_DEFAULT_RESET, max_consecutive_attempts=_DEFAULT_ATTEMPTS):\n \"\"\"Create a periodic consistent region configuration.\n The IBM Streams runtime will trigger a drain and checkpoint\n the region periodically at the time interval specified by `period`.\n \n Args:\n period: The trigger period. This may be either a :py:class:`datetime.timedelta` value or the number of seconds as a `float`.\n drain_timeout: The drain timeout, as either a :py:class:`datetime.timedelta` value or the number of seconds as a `float`. If not specified, the default value is 180 seconds.\n reset_timeout: The reset timeout, as either a :py:class:`datetime.timedelta` value or the number of seconds as a `float`. If not specified, the default value is 180 seconds.\n max_consecutive_attempts(int): The maximum number of consecutive attempts to reset the region. This must be an integer value between 1 and 2147483647, inclusive. 
If not specified, the default value is 5.\n\n Returns:\n ConsistentRegionConfig: the configuration.\n \"\"\"\n\n return ConsistentRegionConfig(trigger=ConsistentRegionConfig.Trigger.PERIODIC, period=period, drain_timeout=drain_timeout, reset_timeout=reset_timeout, max_consecutive_attempts=max_consecutive_attempts)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _get_timestamp_tuple(ts):\n if isinstance(ts, datetime.datetime): \n return Timestamp.from_datetime(ts).tuple()\n elif isinstance(ts, Timestamp): \n return ts\n raise TypeError('Timestamp or datetime.datetime required')", "response": "Internal method to get a timestamp tuple from a value."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nconverts a datetime to an SPL Timestamp.", "response": "def from_datetime(dt, machine_id=0):\n \"\"\"\n Convert a datetime to an SPL `Timestamp`.\n \n Args:\n dt(datetime.datetime): Datetime to be converted.\n machine_id(int): Machine identifier.\n\n Returns:\n Timestamp: Datetime converted to Timestamp.\n \"\"\"\n td = dt - Timestamp._EPOCH\n seconds = td.days * 3600 * 24\n seconds += td.seconds\n return Timestamp(seconds, td.microseconds*1000, machine_id)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef add_toolkit(topology, location):\n import streamsx.topology.topology\n assert isinstance(topology, streamsx.topology.topology.Topology)\n tkinfo = dict()\n tkinfo['root'] = os.path.abspath(location)\n topology.graph._spl_toolkits.append(tkinfo)", "response": "Add a SPL toolkit to a topology."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nadd a version dependency on a SPL toolkit to a topology.", "response": "def add_toolkit_dependency(topology, name, version):\n \"\"\"Add a version dependency on an SPL toolkit to a topology.\n\n To specify a range of versions for the dependent toolkits,\n use brackets 
(``[]``) or parentheses. Use brackets to represent an\n inclusive range and parentheses to represent an exclusive range.\n The following examples describe how to specify a dependency on a range of toolkit versions:\n\n * ``[1.0.0, 2.0.0]`` represents a dependency on toolkit versions 1.0.0 - 2.0.0, both inclusive.\n * ``[1.0.0, 2.0.0)`` represents a dependency on toolkit versions 1.0.0 or later, but not including 2.0.0.\n * ``(1.0.0, 2.0.0]`` represents a dependency on toolkits versions later than 1.0.0 and less than or equal to 2.0.0.\n * ``(1.0.0, 2.0.0)`` represents a dependency on toolkit versions 1.0.0 - 2.0.0, both exclusive.\n\n Args:\n topology(Topology): Topology to include toolkit in.\n name(str): Toolkit name.\n version(str): Toolkit version dependency.\n\n .. seealso::\n\n `Toolkit information model file `_\n\n .. versionadded:: 1.12\n \"\"\"\n import streamsx.topology.topology\n assert isinstance(topology, streamsx.topology.topology.Topology)\n tkinfo = dict()\n tkinfo['name'] = name\n tkinfo['version'] = version\n topology.graph._spl_toolkits.append(tkinfo)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef submit(args=None):\n streamsx._streams._version._mismatch_check('streamsx.topology.context')\n cmd_args = _parse_args(args)\n if cmd_args.topology is not None:\n app = _get_topology_app(cmd_args)\n elif cmd_args.main_composite is not None:\n app = _get_spl_app(cmd_args)\n elif cmd_args.bundle is not None:\n app = _get_bundle(cmd_args)\n _job_config_args(cmd_args, app)\n sr = _submit(cmd_args, app)\n if 'return_code' not in sr:\n sr['return_code'] = 1;\n print(sr)\n return sr", "response": "Submits the current streamsx. 
Topology to the streamsx server."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _define_jco_args(cmd_parser):\n jo_group = cmd_parser.add_argument_group('Job options', 'Job configuration options')\n\n jo_group.add_argument('--job-name', help='Job name')\n jo_group.add_argument('--preload', action='store_true', help='Preload job onto all resources in the instance')\n jo_group.add_argument('--trace', choices=['error', 'warn', 'info', 'debug', 'trace'], help='Application trace level')\n\n jo_group.add_argument('--submission-parameters', '-p', nargs='+', action=_SubmitParamArg, help=\"Submission parameters as name=value pairs\")\n\n jo_group.add_argument('--job-config-overlays', help=\"Path to file containing job configuration overlays JSON. Overrides any job configuration set by the application.\" , metavar='file')\n\n return jo_group,", "response": "Define job configuration arguments."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nsubmitting a Python topology to the service.", "response": "def _submit_topology(cmd_args, app):\n \"\"\"Submit a Python topology to the service.\n This includes an SPL main composite wrapped in a Python topology.\n \"\"\"\n cfg = app.cfg\n if cmd_args.create_bundle:\n ctxtype = ctx.ContextTypes.BUNDLE\n elif cmd_args.service_name:\n cfg[ctx.ConfigParams.FORCE_REMOTE_BUILD] = True\n cfg[ctx.ConfigParams.SERVICE_NAME] = cmd_args.service_name\n ctxtype = ctx.ContextTypes.STREAMING_ANALYTICS_SERVICE\n sr = ctx.submit(ctxtype, app.app, cfg)\n return sr"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _submit_bundle(cmd_args, app):\n sac = streamsx.rest.StreamingAnalyticsConnection(service_name=cmd_args.service_name)\n sas = sac.get_streaming_analytics()\n sr = sas.submit_job(bundle=app.app, job_config=app.cfg[ctx.ConfigParams.JOB_CONFIG])\n if 'exception' in sr:\n rc = 1\n elif 'status_code' in 
sr:\n try:\n rc = 0 if int(sr['status_code'] == 200) else 1\n except:\n rc = 1\n elif 'id' in sr or 'jobId' in sr:\n rc = 0\n sr['return_code'] = rc\n return sr", "response": "Submit an existing bundle to the service"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _define_fixed(wrapped, callable_):\n is_class = inspect.isclass(wrapped)\n style = callable_._splpy_style if hasattr(callable_, '_splpy_style') else wrapped._splpy_style\n\n if style == 'dictionary':\n return -1\n\n fixed_count = 0\n if style == 'tuple':\n sig = _inspect.signature(callable_)\n pmds = sig.parameters\n itpmds = iter(pmds)\n # Skip 'self' for classes\n if is_class:\n next(itpmds)\n\n for pn in itpmds:\n param = pmds[pn]\n if param.kind == _inspect.Parameter.POSITIONAL_OR_KEYWORD:\n fixed_count += 1\n if param.kind == _inspect.Parameter.VAR_POSITIONAL: # *args\n fixed_count = -1\n break\n if param.kind == _inspect.Parameter.VAR_KEYWORD:\n break\n return fixed_count", "response": "For the callable see how many positional parameters are required"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncreates an SPL operator with a single input port and no output ports.", "response": "def sink(wrapped):\n \"\"\"Creates an SPL operator with a single input port.\n\n A SPL operator with a single input port and no output ports.\n For each tuple on the input port the decorated function\n is called passing the contents of the tuple.\n\n .. 
deprecated:: 1.8\n Recommended to use :py:class:`@spl.for_each ` instead.\n \"\"\"\n if not inspect.isfunction(wrapped):\n raise TypeError('A function is required')\n\n return _wrapforsplop(_OperatorType.Sink, wrapped, 'position', False)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef submit(self, port_id, tuple_):\n port_index = self._splpy_output_ports[port_id]\n ec._submit(self, port_index, tuple_)", "response": "Submit a tuple to the output port."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncreates a function that converts tuples to be submitted as dict objects into Python tuples with the value by position.", "response": "def _splpy_convert_tuple(attributes):\n \"\"\"Create a function that converts tuples to\n be submitted as dict objects into Python tuples\n with the value by position.\n Return function handles tuple,dict,list[tuple|dict|None],None\n \"\"\"\n\n def _to_tuples(tuple_):\n if isinstance(tuple_, tuple):\n return tuple_\n if isinstance(tuple_, dict):\n return tuple(tuple_.get(name, None) for name in attributes)\n if isinstance(tuple_, list):\n lt = list()\n for ev in tuple_:\n if isinstance(ev, dict):\n ev = tuple(ev.get(name, None) for name in attributes)\n lt.append(ev)\n return lt\n return tuple_\n return _to_tuples"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nconvert the list of class input functions to be instance functions against obj.", "response": "def _splpy_primitive_input_fns(obj):\n \"\"\"Convert the list of class input functions to be\n instance functions against obj.\n Used by @spl.primitive_operator SPL cpp template.\n \"\"\"\n ofns = list()\n for fn in obj._splpy_input_ports:\n ofns.append(getattr(obj, fn.__name__))\n return ofns"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncalling all_ports_ready for a primitive operator.", "response": "def 
_splpy_all_ports_ready(callable_):\n    \"\"\"Call all_ports_ready for a primitive operator.\"\"\"\n    if hasattr(type(callable_), 'all_ports_ready'):\n        try:\n            return callable_.all_ports_ready()\n        except:\n            ei = sys.exc_info()\n            if streamsx._streams._runtime._call_exit(callable_, ei):\n                return None\n            raise ei[1]\n    return None"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef set_trace(a_frame=None):\n    if not ikpdb:\n        return \"Error: IKP3db must be launched before calling ikpdb.set_trace().\"\n\n    if a_frame is None:\n        a_frame = sys._getframe().f_back\n    ikpdb._line_tracer(a_frame)\n    return None", "response": "Sets trace for the current IKP3db object."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nbreaks on a traceback and sends all execution information to the debugger client. If the interpreter is handling an exception at this traceback, exception information is sent to _line_tracer() which will transmit it to the debugging client. Caller can also pass an *exc_info* that will be used to extract exception information. If passed, exc_info has precedence over traceback. This method is useful for integrating with systems that manage exceptions. Using it, you can setup a developer mode where unhandled exceptions are sent to the developer. Once user resumes execution, control is returned to caller. IKP3db is just used to \"pretty\" display the execution environment. To call post_mortem() use: .. code-block:: python import ikp3db ... ikp3db.postmortem(any_traceback) :param trace_back: The traceback at which to break on. :type trace_back: traceback :param exc_info: Complete description of the raised Exception as returned by sys.exc_info. :type exc_info: tuple :return: An error message or None if everything went fine. 
:rtype: str or None", "response": "def post_mortem(trace_back=None, exc_info=None):\n \"\"\" Breaks on a traceback and sends all execution information to the debugger \n client. If the interpreter is handling an exception at this traceback, \n exception information is sent to _line_tracer() which will transmit it to \n the debugging client.\n Caller can also pass an *exc_info* that will be used to extract exception\n information. If passed, exc_info has precedence over trace_back.\n\n This method is useful for integrating with systems that manage exceptions. \n Using it, you can set up a developer mode where unhandled exceptions \n are sent to the developer.\n \n Once the user resumes execution, control is returned to the caller. IKP3db is \n just used to \"pretty\" display the execution environment.\n \n To call post_mortem() use:\n \n .. code-block:: python\n\n import ikp3db\n ...\n ikp3db.post_mortem(any_traceback) \n \n \n :param trace_back: The traceback at which to break.\n :type trace_back: traceback\n \n :param exc_info: Complete description of the raised Exception as \n returned by sys.exc_info.\n :type exc_info: tuple\n \n :return: An error message or None if everything went fine.\n :rtype: str or None\n \"\"\"\n if not ikpdb:\n return \"Error: IKP3db must be launched before calling ikpdb.post_mortem().\"\n \n if exc_info:\n trace_back = exc_info[2]\n elif trace_back and not exc_info:\n if sys.exc_info()[2] == trace_back:\n exc_info = sys.exc_info()\n else:\n return \"missing parameter trace_back or exc_info\"\n\n pm_traceback = trace_back\n while pm_traceback.tb_next:\n pm_traceback = pm_traceback.tb_next \n ikpdb._line_tracer(pm_traceback.tb_frame, exc_info=exc_info)\n _logger.g_info(\"Post mortem processing finished.\")\n return None"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef setup(cls, ikpdb_log_arg):\n if not ikpdb_log_arg:\n return\n \n IKPdbLogger.enabled = True\n 
logging_configuration_string = ikpdb_log_arg.lower()\n for letter in logging_configuration_string:\n if letter in IKPdbLogger.DOMAINS:\n IKPdbLogger.DOMAINS[letter] = 10", "response": "This function sets up the IKPdbLogger class."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nsend a message to the debugger.", "response": "def send(self, command, _id=None, result={}, frames=[], threads=None,\n error_messages=[], warning_messages=[], info_messages=[],\n exception=None):\n \"\"\" Build a message from parameters and send it to the debugger.\n \n :param command: The command sent to the debugger client.\n :type command: str\n \n :param _id: Unique id of the sent message. Right now, it's always `None`\n for messages sent by the debugger to the client.\n :type _id: int\n \n :param result: Used to send `exit_code` and updated `executionStatus` \n to the debugger client.\n :type result: dict\n \n :param frames: contains the complete stack frames when the debugger sends\n the `programBreak` message.\n :type frames: list\n\n :param error_messages: A list of error messages the debugger client must\n display to the user.\n :type error_messages: list of str\n \n :param warning_messages: A list of warning messages the debugger client\n must display to the user.\n :type warning_messages: list of str\n \n :param info_messages: A list of info messages the debugger client must\n display to the user.\n :type info_messages: list of str\n\n :param exception: If the debugger encounters an exception, this dict contains\n 2 keys: `type` and `info` (the latter is the message).\n :type exception: dict\n \"\"\"\n with self._connection_lock:\n payload = {\n '_id': _id,\n 'command': command,\n 'result': result,\n 'commandExecStatus': 'ok',\n 'frames': frames,\n 'info_messages': info_messages,\n 'warning_messages': warning_messages,\n 'error_messages': error_messages,\n 'exception': exception\n }\n if threads:\n payload['threads'] = threads\n msg = self.encode(payload)\n if self._connection:\n 
msg_bytes = bytearray(msg, 'utf-8')\n send_bytes_count = self._connection.sendall(msg_bytes)\n self.log_sent(msg)\n return send_bytes_count\n raise IKPdbConnectionError(\"Connection lost!\")"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef reply(self, obj, result, command_exec_status='ok', info_messages=[], \n warning_messages=[], error_messages=[]):\n \"\"\"Build a response from a previously received command message, send it\n and return number of sent bytes.\n \n :param result: Used to send back the result of the command execution to \n the debugger client.\n :type result: dict\n\n See send() above for the other parameters' definitions.\n \"\"\"\n with self._connection_lock:\n # TODO: add a parameter to remove args from messages ?\n if True:\n del obj['args']\n obj['result'] = result\n obj['commandExecStatus'] = command_exec_status\n obj['info_messages'] = info_messages\n obj['warning_messages'] = warning_messages\n obj['error_messages'] = error_messages\n msg_str = self.encode(obj)\n msg_bytes = bytearray(msg_str, 'utf-8')\n send_bytes_count = self._connection.sendall(msg_bytes)\n self.log_sent(msg_bytes)\n return send_bytes_count", "response": "Builds a response from a previously received command message and sends it to the debugger client."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef receive(self, ikpdb):\n # with self._connection_lock:\n while self._network_loop:\n _logger.n_debug(\"Enter socket.recv(%s) with self._received_data = %s\", \n self.SOCKET_BUFFER_SIZE, \n self._received_data)\n try:\n # We may land here with a full packet already in self.received_data\n # In that case we must not enter recv() \n if self.SOCKET_BUFFER_SIZE:\n data = self._connection.recv(self.SOCKET_BUFFER_SIZE)\n else:\n data = b''\n _logger.n_debug(\"Socket.recv(%s) => %s\", self.SOCKET_BUFFER_SIZE, data)\n\n except socket.timeout:\n _logger.n_debug(\"socket.timeout with 
ikpdb.status=%s\", ikpdb.status)\n if ikpdb.status == 'terminated':\n _logger.n_debug(\"breaking IKPdbConnectionHandler.receive() \"\n \"network loop as ikpdb state is 'terminated'.\")\n return { \n 'command': '_InternalQuit',\n 'args':{}\n }\n continue\n\n except socket.error as socket_err:\n if ikpdb.status == 'terminated':\n return {'command': '_InternalQuit', \n 'args':{'socket_error_number': socket_err.errno,\n 'socket_error_str': socket_err.strerror}}\n continue\n \n except Exception as exc:\n _logger.g_error(\"Unexpected Error: '%s' in IKPdbConnectionHandler\"\n \".command_loop.\", exc)\n _logger.g_error(traceback.format_exc())\n print(\"\".join(traceback.format_stack()))\n return { \n 'command': '_InternalQuit',\n 'args':{\n \"error\": exc.__class__.__name__,\n \"message\": str(exc)\n }\n }\n \n # received data is utf8 encoded\n self._received_data += data.decode('utf-8')\n \n # have we received a MAGIC_CODE\n try:\n magic_code_idx = self._received_data.index(self.MAGIC_CODE)\n except ValueError:\n continue\n \n # Have we received a 'length='\n try:\n length_idx = self._received_data.index(u'length=')\n except ValueError:\n continue\n \n # extract length content from received data\n json_length = int(self._received_data[length_idx + 7:magic_code_idx])\n message_length = magic_code_idx + len(self.MAGIC_CODE) + json_length\n if message_length <= len(self._received_data):\n full_message = self._received_data[:message_length]\n self._received_data = self._received_data[message_length:]\n if len(self._received_data) > 0:\n self.SOCKET_BUFFER_SIZE = 0\n else:\n self.SOCKET_BUFFER_SIZE = 4096\n break\n else:\n self.SOCKET_BUFFER_SIZE = message_length - len(self._received_data)\n\n self.log_received(full_message)\n obj = self.decode(full_message)\n return obj", "response": "Waits for a message from the debugger and returns it as a dict."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef 
clear(self):\n del IKBreakpoint.breakpoints_by_file_and_line[self.file_name, self.line_number]\n IKBreakpoint.breakpoints_by_number[self.number] = None\n IKBreakpoint.breakpoints_files[self.file_name].remove(self.line_number)\n if len(IKBreakpoint.breakpoints_files[self.file_name]) == 0:\n del IKBreakpoint.breakpoints_files[self.file_name]\n IKBreakpoint.update_active_breakpoint_flag()", "response": "Clears a breakpoint by removing it from all lists."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef update_active_breakpoint_flag(cls):\n cls.any_active_breakpoint=any([bp.enabled for bp in cls.breakpoints_by_number if bp])", "response": "Checks all breakpoints to find whether at least one is active and updates any_active_breakpoint attribute."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef lookup_effective_breakpoint(cls, file_name, line_number, frame):\n bp = cls.breakpoints_by_file_and_line.get((file_name, line_number), None)\n if not bp:\n return None\n \n if not bp.enabled:\n return None\n \n if not bp.condition:\n return bp\n try:\n value = eval(bp.condition, frame.f_globals, frame.f_locals)\n return bp if value else None\n except:\n pass\n return None", "response": "Checks if there is an enabled breakpoint at the given file_name and line_number and whether its condition, if any, evaluates to True."} 
{"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_breakpoints_list(cls):\n breakpoints_list = []\n for bp in cls.breakpoints_by_number:\n if bp: # breakpoint #0 exists and is always None\n bp_dict = {\n 'breakpoint_number': bp.number,\n 'file_name': bp.file_name,\n 'line_number': bp.line_number,\n 'condition': bp.condition,\n 'enabled': bp.enabled,\n }\n breakpoints_list.append(bp_dict)\n return breakpoints_list", "response": "Returns a list of all breakpoints."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ndisable all breakpoints and update active_breakpoint_flag.", "response": "def disable_all_breakpoints(cls):\n \"\"\" Disable all breakpoints and update `active_breakpoint_flag`.\n \"\"\"\n for bp in cls.breakpoints_by_number:\n if bp: # breakpoint #0 exists and is always None\n bp.enabled = False\n cls.update_active_breakpoint_flag()\n return"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef backup_breakpoints_state(cls):\n all_breakpoints_state = []\n for bp in cls.breakpoints_by_number:\n if bp: \n all_breakpoints_state.append((bp.number, \n bp.enabled, \n bp.condition,))\n return all_breakpoints_state", "response": "Returns the state of all breakpoints in a list that can be used\n later to restore all breakpoints' state"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef restore_breakpoints_state(cls, breakpoints_state_list):\n for breakpoint_state in breakpoints_state_list:\n bp = cls.breakpoints_by_number[breakpoint_state[0]]\n if bp:\n bp.enabled = breakpoint_state[1]\n bp.condition = breakpoint_state[2]\n cls.update_active_breakpoint_flag()\n return", "response": "Restores the state of breakpoints given a list provided by \n backup_breakpoints_state(). 
If the list of breakpoints has changed \n since backup, missing or added breakpoints are ignored."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef canonic(self, file_name):\n if file_name == \"<\" + file_name[1:-1] + \">\":\n return file_name\n c_file_name = self.file_name_cache.get(file_name)\n if not c_file_name:\n c_file_name = os.path.abspath(file_name)\n c_file_name = os.path.normcase(c_file_name)\n self.file_name_cache[file_name] = c_file_name\n return c_file_name", "response": "Returns a canonical version of a file name."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ntranslates a file or module name received from the debugging client into an absolute file name.", "response": "def normalize_path_in(self, client_file_name):\n \"\"\"Translate a (possibly incomplete) file or module name received from the debugging client\n into an absolute file name.\n \"\"\"\n _logger.p_debug(\"normalize_path_in(%s) with os.getcwd()=>%s\", client_file_name, os.getcwd())\n \n # remove client CWD from file_path\n if client_file_name.startswith(self._CLIENT_CWD):\n file_name = client_file_name[len(self._CLIENT_CWD):]\n else:\n file_name = client_file_name\n \n # Try to find file using its absolute path\n if os.path.isabs(file_name) and os.path.exists(file_name):\n _logger.p_debug(\" => found absolute path: '%s'\", file_name)\n return file_name\n \n # Can we find the file relative to the launch CWD (useful with buildout)\n f = os.path.join(self._CWD, file_name) \n if os.path.exists(f):\n _logger.p_debug(\" => found path relative to self._CWD: '%s'\", f)\n return f\n\n # Can we find the file relative to the launch script\n f = os.path.join(sys.path[0], file_name) \n if os.path.exists(f) and self.canonic(f) == self.mainpyfile:\n _logger.p_debug(\" => found path relative to launch script: '%s'\", f)\n return f\n \n # Try as an absolute path after adding .py extension \n root, ext = 
os.path.splitext(file_name)\n if ext == '':\n f = file_name + '.py'\n if os.path.isabs(f):\n _logger.p_debug(\" => found absolute path after adding .py extension: '%s'\", f)\n return f\n \n # Can we find the file in system path\n for dir_name in sys.path:\n while os.path.islink(dir_name):\n dir_name = os.readlink(dir_name)\n f = os.path.join(dir_name, file_name)\n if os.path.exists(f):\n _logger.p_debug(\" => found path in sys.path: '%s'\", f)\n return f\n return None"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nnormalizes path sent to client", "response": "def normalize_path_out(self, path):\n \"\"\"Normalizes path sent to client\n :param path: path to normalize\n :return: normalized path\n \"\"\"\n if path.startswith(self._CWD):\n normalized_path = path[len(self._CWD):]\n else:\n normalized_path = path\n # For remote debugging preprend client CWD\n if self._CLIENT_CWD:\n normalized_path = os.path.join(self._CLIENT_CWD, normalized_path)\n _logger.p_debug(\"normalize_path_out('%s') => %s\", path, normalized_path)\n return normalized_path"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef object_properties_count(self, o):\n o_type = type(o)\n if isinstance(o, (dict, list, tuple, set)):\n return len(o)\n elif isinstance(o, (type(None), bool, float, \n str, int, \n bytes, types.ModuleType, \n types.MethodType, types.FunctionType)):\n return 0\n else:\n # Following lines are used to debug variables members browsing\n # and counting\n # if False and str(o_type) == \"\":\n # print \"@378\"\n # print dir(o)\n # print \"hasattr(o, '__dict__')=%s\" % hasattr(o,'__dict__')\n # count = 0\n # if hasattr(o, '__dict__'):\n # for m_name, m_value in o.__dict__.iteritems():\n # if m_name.startswith('__'):\n # print \" %s=>False\" % (m_name,)\n # continue\n # if type(m_value) in (types.ModuleType, types.MethodType, types.FunctionType,):\n # print \" %s=>False\" % (m_name,)\n # 
continue\n # print \" %s=>True\" % (m_name,)\n # count +=1\n # print \" %s => %s = %s\" % (o, count, dir(o),)\n # else:\n try:\n if hasattr(o, '__dict__'):\n count = len([m_name for m_name, m_value in o.__dict__.items()\n if not m_name.startswith('__') \n and not type(m_value) in (types.ModuleType, \n types.MethodType, \n types.FunctionType,) ])\n else:\n count = 0\n except:\n # Thank you werkzeug __getattr__ overloading!\n count = 0\n return count", "response": "Returns the number of user browsable properties of an object."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef extract_object_properties(self, o, limit_size=False):\n try:\n prop_str = repr(o)[:512]\n except:\n prop_str = \"Error while extracting value\"\n\n _logger.e_debug(\"extract_object_properties(%s)\", prop_str)\n var_list = []\n if isinstance(o, dict):\n a_var_name = None\n a_var_value = None\n for a_var_name in o:\n a_var_value = o[a_var_name]\n children_count = self.object_properties_count(a_var_value)\n v_name, v_value, v_type = self.extract_name_value_type(a_var_name, \n a_var_value, \n limit_size=limit_size)\n a_var_info = {\n 'id': id(a_var_value),\n 'name': v_name,\n 'type': \"%s%s\" % (v_type, \" [%s]\" % children_count if children_count else '',),\n 'value': v_value,\n 'children_count': children_count,\n }\n var_list.append(a_var_info)\n \n elif type(o) in (list, tuple, set,):\n MAX_CHILDREN_TO_RETURN = 256\n MAX_CHILDREN_MESSAGE = \"Truncated by ikpdb (don't hot change me !).\"\n a_var_name = None\n a_var_value = None\n do_truncate = len(o) > MAX_CHILDREN_TO_RETURN\n for idx, a_var_value in enumerate(o):\n children_count = self.object_properties_count(a_var_value)\n v_name, v_value, v_type = self.extract_name_value_type(idx, \n a_var_value, \n limit_size=limit_size)\n var_list.append({\n 'id': id(a_var_value),\n 'name': v_name,\n 'type': \"%s%s\" % (v_type, \" [%s]\" % children_count if children_count else '',),\n 'value': 
v_value,\n 'children_count': children_count,\n })\n if do_truncate and idx==MAX_CHILDREN_TO_RETURN-1:\n var_list.append({\n 'id': None,\n 'name': str(MAX_CHILDREN_TO_RETURN),\n 'type': '',\n 'value': MAX_CHILDREN_MESSAGE,\n 'children_count': 0,\n })\n break\n else:\n a_var_name = None\n a_var_value = None\n if hasattr(o, '__dict__'):\n for a_var_name, a_var_value in o.__dict__.items():\n if (not a_var_name.startswith('__') \n and not type(a_var_value) in (types.ModuleType, \n types.MethodType, \n types.FunctionType,)):\n children_count = self.object_properties_count(a_var_value)\n v_name, v_value, v_type = self.extract_name_value_type(a_var_name,\n a_var_value, \n limit_size=limit_size)\n var_list.append({\n 'id': id(a_var_value),\n 'name': v_name,\n 'type': \"%s%s\" % (v_type, \" [%s]\" % children_count if children_count else '',),\n 'value': v_value,\n 'children_count': children_count,\n })\n return var_list", "response": "Extracts all properties from an object and returns them as a list of variables."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef extract_name_value_type(self, name, value, limit_size=False):\n MAX_STRING_LEN_TO_RETURN = 487\n try:\n t_value = repr(value)\n except:\n t_value = \"Error while extracting value\"\n\n # convert all var names to string\n if isinstance(name, str):\n r_name = name\n else:\n r_name = repr(name)\n\n # truncate value to limit data flow between ikpdb and client\n if len(t_value) > MAX_STRING_LEN_TO_RETURN:\n r_value = \"%s ... 
(truncated by ikpdb)\" % (t_value[:MAX_STRING_LEN_TO_RETURN],) \n r_name = \"%s*\" % r_name # add a visual marker to truncated var's name\n else:\n r_value = t_value\n \n if isinstance(value, str):\n r_type = \"%s [%s]\" % (IKPdbRepr(value), len(value),)\n else:\n r_type = IKPdbRepr(value)\n \n return r_name, r_value, r_type", "response": "Extracts the value of any object, possibly reducing its size, and returns the name, truncated value and type."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ndumping the given frame in a representation suitable for serialization and remote client usage.", "response": "def dump_frames(self, frame):\n \"\"\" Dumps the frames chain in a representation suitable for serialization \n and remote (debugger) client usage.\n \"\"\"\n current_thread = threading.currentThread()\n frames = []\n frame_browser = frame\n \n # Browse the frame chain as far as we can\n _logger.f_debug(\"dump_frames(), frame analysis:\")\n spacer = \"\"\n while hasattr(frame_browser, 'f_back') and frame_browser.f_back != self.frame_beginning:\n spacer += \"=\"\n _logger.f_debug(\"%s>frame = %s, frame.f_code = %s, frame.f_back = %s, \"\n \"self.frame_beginning = %s\",\n spacer,\n hex(id(frame_browser)),\n frame_browser.f_code,\n hex(id(frame_browser.f_back)),\n hex(id(self.frame_beginning)))\n\n # At root frame, globals == locals so we dump only globals\n if hasattr(frame_browser.f_back, 'f_back')\\\n and frame_browser.f_back.f_back != self.frame_beginning:\n locals_vars_list = self.extract_object_properties(frame_browser.f_locals,\n limit_size=True)\n else:\n locals_vars_list = []\n \n globals_vars_list = self.extract_object_properties(frame_browser.f_globals,\n limit_size=True)\n # normalize path sent to debugging client\n file_path = self.normalize_path_out(frame_browser.f_code.co_filename)\n\n frame_name = \"%s() [%s]\" % (frame_browser.f_code.co_name, current_thread.name,)\n remote_frame = {\n 'id': id(frame_browser),\n 'name': frame_name,\n 
'line_number': frame_browser.f_lineno, # Warning 1 based\n 'file_path': file_path,\n 'f_locals': locals_vars_list + globals_vars_list,\n 'thread': current_thread.ident,\n 'thread_name': current_thread.name\n }\n frames.append(remote_frame)\n frame_browser = frame_browser.f_back\n return frames"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nevaluating expression in the context of the frame identified by frame_id or globally.", "response": "def evaluate(self, frame_id, expression, global_context=False, disable_break=False):\n \"\"\"Evaluates 'expression' in the context of the frame identified by\n 'frame_id' or globally.\n Breakpoints are disabled depending on 'disable_break' value.\n Returns a tuple of value and type both as str.\n Note that - depending on the CGI_ESCAPE_EVALUATE_OUTPUT attribute - value is \n escaped.\n \"\"\"\n if disable_break:\n breakpoints_backup = IKBreakpoint.backup_breakpoints_state()\n IKBreakpoint.disable_all_breakpoints()\n\n if frame_id and not global_context:\n eval_frame = ctypes.cast(frame_id, ctypes.py_object).value\n global_vars = eval_frame.f_globals\n local_vars = eval_frame.f_locals\n else:\n global_vars = None\n local_vars = None\n\n try: \n result = eval(expression, global_vars, local_vars)\n result_type = IKPdbRepr(result)\n result_value = repr(result)\n except SyntaxError:\n # eval() failed, try with exec to handle statements\n try:\n result = exec(expression, global_vars, local_vars)\n result_type = IKPdbRepr(result)\n result_value = repr(result)\n except Exception as e:\n t, result = sys.exc_info()[:2]\n if isinstance(t, str):\n result_type = t\n else: \n result_type = str(t.__name__)\n result_value = \"%s: %s\" % (result_type, result,)\n except:\n t, result = sys.exc_info()[:2]\n if isinstance(t, str):\n result_type = t\n else: \n result_type = t.__name__\n result_value = \"%s: %s\" % (result_type, result,)\n\n if disable_break:\n IKBreakpoint.restore_breakpoints_state(breakpoints_backup)\n\n 
_logger.e_debug(\"evaluate(%s) => result_value=%s, result_type=%s, result=%s\", \n expression, \n result_value, \n result_type, \n result)\n if self.CGI_ESCAPE_EVALUATE_OUTPUT:\n result_value = cgi.escape(result_value)\n \n # We must check that result is json.dump compatible so that it can be sent back to client.\n try:\n json.dumps(result_value)\n except:\n t, result = sys.exc_info()[:2]\n if isinstance(t, str):\n result_type = t\n else: \n result_type = t.__name__\n result_value = \"%s: IKP3db is unable to JSON encode result to send it to \"\\\n \"debugging client.\\n\"\\\n \" This typically occurs if you try to print a string that cannot be\"\\\n \" decoded to 'UTF-8'.\\n\"\\\n \" You should be able to evaluate result and inspect its content\"\\\n \" by removing the print statement.\" % result_type\n return result_value, result_type"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef let_variable(self, frame_id, var_name, expression_value):\n breakpoints_backup = IKBreakpoint.backup_breakpoints_state()\n IKBreakpoint.disable_all_breakpoints()\n\n let_expression = \"%s=%s\" % (var_name, expression_value,)\n\n eval_frame = ctypes.cast(frame_id, ctypes.py_object).value\n global_vars = eval_frame.f_globals\n local_vars = eval_frame.f_locals\n try:\n exec(let_expression, global_vars, local_vars)\n error_message=\"\"\n except Exception as e:\n t, result = sys.exc_info()[:2]\n if isinstance(t, str):\n result_type = t\n else: \n result_type = str(t.__name__)\n error_message = \"%s: %s\" % (result_type, result,)\n\n IKBreakpoint.restore_breakpoints_state(breakpoints_backup)\n\n _logger.e_debug(\"let_variable(%s) => %s\", \n let_expression, \n error_message or 'succeed')\n return error_message", "response": "Sets a frame's var to a value by building then evaluating a let \n expression with breakpoints disabled."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef setup_step_into(self, 
frame, pure=False):\n self.frame_calling = frame\n if pure:\n self.frame_stop = None\n else:\n self.frame_stop = frame\n self.frame_return = None\n self.frame_suspend = False\n self.pending_stop = True \n return", "response": "Sets up the debugger for a stepInto"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef setup_suspend(self):\n self.frame_calling = None\n self.frame_stop = None\n self.frame_return = None\n self.frame_suspend = True\n self.pending_stop = True\n self.enable_tracing()\n return", "response": "Sets up the debugger to suspend execution\n "} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef should_stop_here(self, frame):\n # TODO: Optimization => defines a set of modules / names where _tracer\n # is never registered. This will replace skip\n #if self.skip and self.is_skipped_module(frame.f_globals.get('__name__')):\n # return False\n\n # step into\n if self.frame_calling and self.frame_calling==frame.f_back:\n return True\n # step over\n if frame==self.frame_stop: # frame cannot be null\n return True\n # step out\n if frame==self.frame_return: # frame cannot be null\n return True\n # suspend\n if self.frame_suspend:\n return True\n\n return False", "response": "Checks if debugger should stop at the current location."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncheck if there is a breakpoint at this frame.", "response": "def should_break_here(self, frame):\n \"\"\"Check whether there is a breakpoint at this frame.\"\"\"\n # Next line commented out for performance\n #_logger.b_debug(\"should_break_here(filename=%s, lineno=%s) with breaks=%s\",\n # frame.f_code.co_filename,\n # frame.f_lineno,\n # IKBreakpoint.breakpoints_by_number)\n \n c_file_name = self.canonic(frame.f_code.co_filename)\n if not c_file_name in IKBreakpoint.breakpoints_files:\n return False\n bp = 
IKBreakpoint.lookup_effective_breakpoint(c_file_name, \n frame.f_lineno, \n frame)\n return True if bp else False"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning a dict of all threads and indicates thread being debugged.", "response": "def get_threads(self):\n \"\"\"Returns a dict of all threads and indicates thread being debugged.\n key is thread ident and values thread info.\n Information from this list can be used to swap thread being debugged.\n \"\"\"\n thread_list = {}\n for thread in threading.enumerate():\n thread_ident = thread.ident\n thread_list[thread_ident] = {\n \"ident\": thread_ident,\n \"name\": thread.name,\n \"is_debugger\": thread_ident == self.debugger_thread_ident,\n \"is_debugged\": thread_ident == self.debugged_thread_ident\n }\n return thread_list"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nallow to reset or set the thread to debug.", "response": "def set_debugged_thread(self, target_thread_ident=None):\n \"\"\" Allows to reset or set the thread to debug. 
\"\"\"\n if target_thread_ident is None:\n self.debugged_thread_ident = None\n self.debugged_thread_name = ''\n return {\n \"result\": self.get_threads(),\n \"error\": \"\"\n }\n \n thread_list = self.get_threads()\n if target_thread_ident not in thread_list:\n return {\n \"result\": None,\n \"error\": \"No thread with ident:%s.\" % target_thread_ident\n }\n \n if thread_list[target_thread_ident]['is_debugger']:\n return {\n \"result\": None,\n \"error\": \"Cannot debug IKPdb tracer (sadly...).\"\n }\n \n self.debugged_thread_ident = target_thread_ident\n self.debugged_thread_name = thread_list[target_thread_ident]['name']\n return {\n \"result\": self.get_threads(),\n \"error\": \"\"\n }"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef enable_tracing(self):\n _logger.x_debug(\"entering enable_tracing()\")\n # uncomment next line to get debugger tracing info\n #self.dump_tracing_state(\"before enable_tracing()\")\n \n if not self.tracing_enabled and self.execution_started:\n # Restore or set trace function on all existing frames apart from \n # debugger\n threading.settrace(self._tracer) # then enable on all threads to come\n for thr in threading.enumerate():\n if thr.ident != self.debugger_thread_ident: # skip debugger thread\n a_frame = sys._current_frames()[thr.ident]\n while a_frame:\n a_frame.f_trace = self._tracer\n a_frame = a_frame.f_back\n iksettrace3._set_trace_on(self._tracer, self.debugger_thread_ident)\n self.tracing_enabled = True\n \n #self.dump_tracing_state(\"after enable_tracing()\")\n return self.tracing_enabled", "response": "Enable tracing if it is disabled and debugged program is running."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ndisabling tracing if tracing is enabled and debugged program is running.", "response": "def disable_tracing(self):\n \"\"\" Disable tracing if it is enabled and debugged program is running, \n else do nothing.\n\n :return: False if 
tracing has been disabled, True otherwise.\n \"\"\"\n _logger.x_debug(\"disable_tracing()\")\n #self.dump_tracing_state(\"before disable_tracing()\")\n if self.tracing_enabled and self.execution_started:\n threading.settrace(None) # don't trace threads to come\n iksettrace3._set_trace_off()\n self.tracing_enabled = False\n #self.dump_tracing_state(\"after disable_tracing()\")\n return self.tracing_enabled"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef set_breakpoint(self, file_name, line_number, condition=None, enabled=True):\n c_file_name = self.canonic(file_name)\n import linecache\n line = linecache.getline(c_file_name, line_number)\n if not line:\n return \"Line %s:%d does not exist.\" % (c_file_name, line_number), None\n bp = IKBreakpoint(c_file_name, line_number, condition, enabled)\n if self.pending_stop or IKBreakpoint.any_active_breakpoint:\n self.enable_tracing()\n else:\n self.disable_tracing()\n return None, bp.number", "response": "Creates a breakpoint, registers it in the class's lists and returns an error message and the breakpoint number."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nchanges the state of a breakpoint or condition expression.", "response": "def change_breakpoint_state(self, bp_number, enabled, condition=None):\n \"\"\" Change breakpoint status or `condition` expression.\n \n :param bp_number: number of breakpoint to change \n :return: None or an error message (string)\n \"\"\"\n if not (0 <= bp_number < len(IKBreakpoint.breakpoints_by_number)):\n return \"Found no breakpoint numbered: %s\" % bp_number\n bp = IKBreakpoint.breakpoints_by_number[bp_number]\n if not bp:\n return \"Found no breakpoint numbered %s\" % bp_number\n _logger.b_debug(\" change_breakpoint_state(bp_number=%s, enabled=%s, \"\n \"condition=%s) found %s\", \n bp_number,\n enabled,\n repr(condition),\n bp)\n bp.enabled = enabled\n bp.condition = condition 
# update condition for conditional breakpoints\n IKBreakpoint.update_active_breakpoint_flag() # force flag refresh\n if self.pending_stop or IKBreakpoint.any_active_breakpoint:\n self.enable_tracing()\n else:\n self.disable_tracing()\n return None"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ndelete a breakpoint identified by its number.", "response": "def clear_breakpoint(self, breakpoint_number):\n \"\"\" Delete a breakpoint identified by its number. \n \n :param breakpoint_number: index of breakpoint to delete\n :type breakpoint_number: int\n :return: an error message or None\n \"\"\"\n if not (0 <= breakpoint_number < len(IKBreakpoint.breakpoints_by_number)):\n return \"Found no breakpoint numbered %s\" % breakpoint_number\n bp = IKBreakpoint.breakpoints_by_number[breakpoint_number]\n if not bp:\n return \"Found no breakpoint numbered: %s\" % breakpoint_number\n _logger.b_debug(\" clear_breakpoint(breakpoint_number=%s) found: %s\",\n breakpoint_number,\n bp)\n bp.clear()\n if self.pending_stop or IKBreakpoint.any_active_breakpoint:\n self.enable_tracing()\n else:\n self.disable_tracing()\n return None"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nlaunches debugged program execution using execfile.", "response": "def _runscript(self, filename):\n \"\"\" Launches debugged program execution using the execfile() builtin.\n \n We reset and setup the __main__ dict to allow the script to run\n in __main__ namespace. 
This is required for imports from __main__ to \n run correctly.\n \n Note that this has the effect to wipe IKP3db's vars created at this point.\n \"\"\"\n import __main__\n __main__.__dict__.clear()\n __main__.__dict__.update({\"__name__\" : \"__main__\",\n \"__file__\" : filename,\n \"__builtins__\": __builtins__,})\n\n self.mainpyfile = self.canonic(filename)\n #statement = 'execfile(%r)\\n' % filename\n statement = \"exec(compile(open('%s').read(), '%s', 'exec'))\" % (filename, filename,)\n \n globals = __main__.__dict__\n locals = globals\n\n # When IKP3db sets tracing, a number of call and line events happens\n # BEFORE debugger even reaches user's code (and the exact sequence of\n # events depends on python version). So we take special measures to\n # avoid stopping before we reach the main script (see reset(),\n # _tracer() and _line_tracer() methods for details).\n self.reset()\n self.execution_started = True\n self.status = 'running'\n # Turn on limited tracing by setting trace function for \n # current_thread only. This allow self.frame_beginning to be set at\n # first tracer \"call\" invocation.\n sys.settrace(self._tracer)\n\n try:\n exec(statement, globals, locals)\n except IKPdbQuit:\n pass\n finally:\n self.status = 'terminated'\n self.disable_tracing()"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef command_loop(self, run_script_event):\n while True:\n obj = remote_client.receive(self)\n command = obj[\"command\"] \n # TODO: ensure we always have a command if receive returns\n args = obj.get('args', {})\n \n if command == 'getBreakpoints':\n breakpoints_list = IKBreakpoint.get_breakpoints_list()\n remote_client.reply(obj, breakpoints_list)\n _logger.b_debug(\"getBreakpoints(%s) => %s\", args, breakpoints_list)\n \n elif command == \"setBreakpoint\":\n # Set a new breakpoint. 
If the lineno line doesn't exist for the\n # filename passed as argument, return an error message.\n # The filename should be in canonical form, as described in the \n # canonic() method.\n file_name = args['file_name']\n line_number = args['line_number']\n condition = args.get('condition', None)\n enabled = args.get('enabled', True)\n _logger.b_debug(\"setBreakpoint(file_name=%s, line_number=%s,\"\n \" condition=%s, enabled=%s) with CWD=%s\",\n file_name,\n line_number,\n condition,\n enabled,\n os.getcwd())\n \n error_messages = []\n result = {}\n\n c_file_name = self.normalize_path_in(file_name)\n if not c_file_name:\n err = \"Failed to find file '%s'\" % file_name\n _logger.g_error(\"setBreakpoint error: %s\", err)\n msg = \"IKP3db error: Failed to set a breakpoint at %s:%s \"\\\n \"(%s).\" % (file_name, line_number, err)\n error_messages = [msg]\n command_exec_status = 'error'\n else: \n err, bp_number = self.set_breakpoint(c_file_name, \n line_number, \n condition=condition,\n enabled=enabled)\n if err:\n _logger.g_error(\"setBreakpoint error: %s\", err)\n msg = \"IKP3db error: Failed to set a breakpoint at %s:%s \"\\\n \"(%s).\" % (file_name, line_number, err,)\n error_messages = [msg]\n command_exec_status = 'error'\n else:\n result = {'breakpoint_number': bp_number}\n command_exec_status = 'ok'\n remote_client.reply(obj, result, \n command_exec_status=command_exec_status,\n error_messages=error_messages)\n \n elif command == \"changeBreakpointState\":\n # Allows to:\n # - activate or deactivate breakpoint \n # - set or remove condition\n _logger.b_debug(\"changeBreakpointState(%s)\", args)\n bp_number = args.get('breakpoint_number', None)\n if bp_number is None:\n result = {}\n msg = \"changeBreakpointState() error: missing required \" \\\n \"breakpointNumber parameter.\"\n _logger.g_error(\" \"+msg)\n error_messages = [msg]\n command_exec_status = 'error'\n else:\n err = self.change_breakpoint_state(bp_number,\n args.get('enabled', False), \n 
condition=args.get('condition', ''))\n result = {}\n error_messages = []\n if err:\n msg = \"changeBreakpointState() error: \\\"%s\\\"\" % err\n _logger.g_error(\" \"+msg)\n error_messages = [msg]\n command_exec_status = 'error'\n else:\n command_exec_status = 'ok'\n remote_client.reply(obj, result, \n command_exec_status=command_exec_status,\n error_messages=error_messages)\n _logger.b_debug(\" command_exec_status => %s\", command_exec_status)\n\n elif command == \"clearBreakpoint\":\n _logger.b_debug(\"clearBreakpoint(%s)\", args)\n bp_number = args.get('breakpoint_number', None)\n if bp_number is None:\n result = {}\n msg = \"IKP3db error: Failed to delete breakpoint (Missing \"\\\n \"required breakpointNumber parameter).\"\n error_messages = [msg]\n command_exec_status = 'error'\n else:\n err = self.clear_breakpoint(args['breakpoint_number'])\n result = {}\n error_messages = []\n if err:\n msg = \"IKP3db error: Failed to delete breakpoint (%s).\" % err\n _logger.g_error(msg)\n error_messages = [msg]\n command_exec_status = 'error'\n else:\n command_exec_status = 'ok'\n remote_client.reply(obj, result, \n command_exec_status=command_exec_status,\n error_messages=error_messages)\n\n elif command == 'runScript':\n #TODO: handle a 'stopAtEntry' arg\n _logger.x_debug(\"runScript(%s)\", args)\n remote_client.reply(obj, {'executionStatus': 'running'})\n run_script_event.set()\n\n elif command == 'suspend':\n _logger.x_debug(\"suspend(%s)\", args)\n # We return a running status which is True at that point. 
Next \n # programBreak will change status to 'stopped'\n remote_client.reply(obj, {'executionStatus': 'running'})\n self.setup_suspend()\n \n elif command == 'resume':\n _logger.x_debug(\"resume(%s)\", args)\n remote_client.reply(obj, {'executionStatus': 'running'})\n self._command_q.put({'cmd':'resume'})\n\n elif command == 'stepOver': # <=> Pdb n(ext)\n _logger.x_debug(\"stepOver(%s)\", args)\n remote_client.reply(obj, {'executionStatus': 'running'})\n self._command_q.put({'cmd':'stepOver'})\n\n elif command == 'stepInto': # <=> Pdb s(tep)\n _logger.x_debug(\"stepInto(%s)\", args)\n remote_client.reply(obj, {'executionStatus': 'running'})\n self._command_q.put({'cmd':'stepInto'})\n\n elif command == 'stepOut': # <=> Pdb r(eturn)\n _logger.x_debug(\"stepOut(%s)\", args)\n remote_client.reply(obj, {'executionStatus': 'running'})\n self._command_q.put({'cmd':'stepOut'})\n\n elif command == 'evaluate':\n _logger.e_debug(\"evaluate(%s)\", args)\n if self.tracing_enabled and self.status == 'stopped':\n self._command_q.put({\n 'cmd':'evaluate',\n 'obj': obj,\n 'frame': args['frame'],\n 'expression': args['expression'],\n 'global': args['global'],\n 'disableBreak': args['disableBreak']\n })\n # reply will be done in _tracer() where result is available\n else:\n remote_client.reply(obj, {'value': None, 'type': None})\n \n elif command == 'getProperties':\n _logger.e_debug(\"getProperties(%s,%s)\", args, obj)\n if self.tracing_enabled and self.status == 'stopped':\n if args.get('id'):\n self._command_q.put({\n 'cmd':'getProperties',\n 'obj': obj,\n 'id': args['id']\n })\n # reply will be done in _tracer() when result is available\n else:\n result={}\n command_exec_status = 'error'\n error_messages = [\"IKP3db received getProperties command sent without target variable 'id'.\"]\n remote_client.reply(obj, result, \n command_exec_status=command_exec_status,\n error_messages=error_messages)\n\n else:\n remote_client.reply(obj, {'value': None, 'type': None})\n\n elif command == 
'setVariable':\n _logger.e_debug(\"setVariable(%s)\", args)\n if self.tracing_enabled and self.status == 'stopped':\n self._command_q.put({\n 'cmd':'setVariable',\n 'obj': obj,\n 'frame': args['frame'],\n 'name': args['name'], # TODO: Rework plugin to send var's id\n 'value': args['value']\n })\n # reply will be done in _tracer() when result is available\n else:\n remote_client.reply(obj, {'value': None, 'type': None})\n\n elif command == 'reconnect':\n _logger.n_debug(\"reconnect(%s)\", args)\n remote_client.reply(obj, {'executionStatus': self.status})\n \n elif command == 'getThreads':\n _logger.x_debug(\"getThreads(%s)\", args)\n threads_list = self.get_threads()\n remote_client.reply(obj, threads_list)\n \n elif command == 'setDebuggedThread':\n _logger.x_debug(\"setDebuggedThread(%s)\", args)\n ret_val = self.set_debugged_thread(args['ident'])\n if ret_val['error']:\n remote_client.reply(obj, \n {}, # result\n command_exec_status='error',\n error_messages=[ret_val['error']])\n\n else:\n remote_client.reply(obj, ret_val['result'])\n\n elif command == '_InternalQuit':\n # '_InternalQuit' is an IKP3db internal message, generated by \n # IKPdbConnectionHandler when a socket.error occured.\n # Usually this occurs when socket has been destroyed as \n # debugged program sys.exit()\n # So we leave the command loop to stop the debugger thread\n # in order to allow debugged program to shutdown correctly.\n # This message must NEVER be send by remote client.\n _logger.e_debug(\"_InternalQuit(%s)\", args)\n self._command_q.put({'cmd':'_InternalQuit'})\n return\n \n else: # unrecognized command ; just log and ignored\n _logger.g_critical(\"Unsupported command '%s' ignored.\", command)\n\n if IKPdbLogger.enabled:\n _logger.b_debug(\"Current breakpoints list [any_active_breakpoint=%s]:\", \n IKBreakpoint.any_active_breakpoint) \n _logger.b_debug(\" IKBreakpoint.breakpoints_by_file_and_line:\")\n if not IKBreakpoint.breakpoints_by_file_and_line:\n _logger.b_debug(\" 
<empty>\") \n for file_line, bp in list(IKBreakpoint.breakpoints_by_file_and_line.items()):\n _logger.b_debug(\" %s => #%s, enabled=%s, condition=%s, %s\", \n file_line,\n bp.number,\n bp.enabled,\n repr(bp.condition),\n bp)\n _logger.b_debug(\" IKBreakpoint.breakpoints_files = %s\", \n IKBreakpoint.breakpoints_files)\n _logger.b_debug(\" IKBreakpoint.breakpoints_by_number = %s\", \n IKBreakpoint.breakpoints_by_number)", "response": "This is the debugger command loop that processes client - specific requests."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngetting elements matching id or name", "response": "def _get_elements(self, url, key, eclass, id=None, name=None):\n \"\"\"Get elements matching `id` or `name`\n\n Args:\n url(str): url of children.\n key(str): key in the returned JSON.\n eclass(subclass type of :py:class:`_ResourceElement`): element class to create instances of.\n id(str, optional): only return resources whose `id` property matches the given `id`\n name(str, optional): only return resources whose `name` property matches the given `name`\n\n Returns:\n list(_ResourceElement): List of `eclass` instances\n\n Raises:\n ValueError: both `id` and `name` are specified together\n \"\"\"\n if id is not None and name is not None:\n raise ValueError(\"id and name cannot specified together\")\n\n json_elements = self.rest_client.make_request(url)[key]\n return [eclass(element, self.rest_client) for element in json_elements\n if _exact_resource(element, id) and _matching_resource(element, name)]"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _get_element_by_id(self, url, key, eclass, id):\n elements = self._get_elements(url, key, eclass, id=id)\n if not elements:\n raise ValueError(\"No resource matching: {0}\".format(id))\n if len(elements) == 1:\n return elements[0]\n raise ValueError(\"Multiple resources matching: {0}\".format(id))", "response": "Get a single element matching an 
id"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nget the Streams domain for the instance that owns this view.", "response": "def get_domain(self):\n \"\"\"Get the Streams domain for the instance that owns this view.\n\n Returns:\n Domain: Streams domain for the instance owning this view.\n \"\"\"\n if hasattr(self, 'domain'):\n return Domain(self.rest_client.make_request(self.domain), self.rest_client)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_instance(self):\n return Instance(self.rest_client.make_request(self.instance), self.rest_client)", "response": "Get the instance that owns this view."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn the Streams Job that owns this view.", "response": "def get_job(self):\n \"\"\"Get the Streams job that owns this view.\n\n Returns:\n Job: Streams Job owning this view.\n \"\"\"\n return Job(self.rest_client.make_request(self.job), self.rest_client)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nstop the thread that fetches data from the Streams view server.", "response": "def stop_data_fetch(self):\n \"\"\"Stops the thread that fetches data from the Streams view server.\n \"\"\"\n if self._data_fetcher:\n self._data_fetcher.stop.set()\n self._data_fetcher = None"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef start_data_fetch(self):\n self.stop_data_fetch()\n self._data_fetcher = _ViewDataFetcher(self, self._tuple_fn)\n t = threading.Thread(target=self._data_fetcher)\n t.start()\n return self._data_fetcher.items", "response": "Starts a thread that fetches data from the Streams view server."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef fetch_tuples(self, max_tuples=20, timeout=None):\n tuples = list()\n if timeout is None:\n while 
len(tuples) < max_tuples:\n fetcher = self._data_fetcher\n if not fetcher:\n break\n tuples.append(fetcher.items.get())\n return tuples\n\n timeout = float(timeout)\n end = time.time() + timeout\n while len(tuples) < max_tuples:\n qto = end - time.time()\n if qto <= 0:\n break\n try:\n fetcher = self._data_fetcher\n if not fetcher:\n break\n tuples.append(fetcher.items.get(timeout=qto))\n except queue.Empty:\n break\n return tuples", "response": "Fetch a number of tuples from this view."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ndisplaying a view within a Jupyter or IPython notebook cell.", "response": "def display(self, duration=None, period=2):\n \"\"\"Display a view within a Jupyter or IPython notebook.\n\n Provides an easy mechanism to visualize data on a stream\n using a view.\n\n Tuples are fetched from the view and displayed in a table\n within the notebook cell using a ``pandas.DataFrame``.\n The table is continually updated with the latest tuples from the view.\n\n This method calls :py:meth:`start_data_fetch` and will call\n :py:meth:`stop_data_fetch` when completed if `duration` is set.\n\n Args:\n duration(float): Number of seconds to fetch and display tuples. If ``None`` then the display will be updated until :py:meth:`stop_data_fetch` is called.\n period(float): Maximum update period.\n\n .. note::\n A view is a sampling of data on a stream so tuples that\n are on the stream may not appear in the view.\n\n .. note::\n Python modules `ipywidgets` and `pandas` must be installed\n in the notebook environment.\n \n .. warning::\n Behavior when called outside a notebook is undefined.\n\n .. 
versionadded:: 1.12\n \"\"\"\n import ipywidgets as widgets\n vn = widgets.Text(value=self.description, description=self.name, disabled=True)\n active = widgets.Valid(value=True, description='Fetching', readout='Stopped')\n out = widgets.Output(layout={'border': '1px solid black'})\n hb = widgets.HBox([vn, active])\n vb = widgets.VBox([hb, out])\n display(vb)\n self._display_thread = threading.Thread(target=lambda: self._display(out, duration, period, active))\n self._display_thread.start()"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets a list of ViewItem elements associated with this view.", "response": "def get_view_items(self):\n \"\"\"Get a list of :py:class:`ViewItem` elements associated with this view.\n\n Returns:\n list(ViewItem): List of ViewItem(s) associated with this view.\n \"\"\"\n view_items = [ViewItem(json_view_items, self.rest_client) for json_view_items\n in self.rest_client.make_request(self.viewItems)['viewItems']]\n logger.debug(\"Retrieved \" + str(len(view_items)) + \" items from view \" + self.name)\n return view_items"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nretrieving the application log and trace files of the job and saves them as a compressed tar file.", "response": "def retrieve_log_trace(self, filename=None, dir=None):\n \"\"\"Retrieves the application log and trace files of the job\n and saves them as a compressed tar file.\n\n An existing file with the same name will be overwritten.\n\n Args:\n filename (str): name of the created tar file. Defaults to `job_<id>_<timestamp>.tar.gz` where `id` is the job identifier and `timestamp` is the number of seconds since the Unix epoch, for example ``job_355_1511995995.tar.gz``.\n dir (str): a valid directory in which to save the archive. 
Defaults to the current directory.\n\n Returns:\n str: the path to the created tar file, or ``None`` if retrieving a job's logs is not supported in the version of IBM Streams to which the job is submitted.\n\n .. versionadded:: 1.8\n \"\"\"\n\n if hasattr(self, \"applicationLogTrace\") and self.applicationLogTrace is not None:\n logger.debug(\"Retrieving application logs from: \" + self.applicationLogTrace)\n if not filename:\n filename = _file_name('job', self.id, '.tar.gz')\n\n return self.rest_client._retrieve_file(self.applicationLogTrace, filename, dir, 'application/x-compressed')\n else:\n return None"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_views(self, name=None):\n return self._get_elements(self.views, 'views', View, name=name)", "response": "Get the list of views associated with this job."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nget the list of Operator elements associated with this job.", "response": "def get_operators(self, name=None):\n \"\"\"Get the list of :py:class:`Operator` elements associated with this job.\n\n Args:\n name(str): Only return operators matching `name`, where `name` can be a regular expression. If\n `name` is not supplied, then all operators for this job are returned.\n\n Returns:\n list(Operator): List of Operator elements associated with this job.\n\n Retrieving a list of operators whose name contains the string \"temperatureSensor\" could be performed as followed\n Example:\n >>> from streamsx import rest\n >>> sc = rest.StreamingAnalyticsConnection()\n >>> instances = sc.get_instances()\n >>> job = instances[0].get_jobs()[0]\n >>> operators = job.get_operators(name=\"*temperatureSensor*\")\n\n .. versionchanged:: 1.9 `name` parameter added.\n \"\"\"\n return self._get_elements(self.operators, 'operators', Operator, name=name)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncanceling this job. 
Args: force (bool, optional): Forcefully cancel this job. Returns: bool: True if the job was cancelled, otherwise False if an error occurred.", "response": "def cancel(self, force=False):\n \"\"\"Cancel this job.\n\n Args:\n force (bool, optional): Forcefully cancel this job.\n\n Returns:\n bool: True if the job was cancelled, otherwise False if an error occurred.\n \"\"\"\n return self.rest_client._sc._delegator._cancel_job(self, force)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngetting metrics for this operator.", "response": "def get_metrics(self, name=None):\n \"\"\"Get metrics for this operator.\n\n Args:\n name(str, optional): Only return metrics matching `name`, where `name` can be a regular expression. If\n `name` is not supplied, then all metrics for this operator are returned.\n\n Returns:\n list(Metric): List of matching metrics.\n\n Retrieving a list of metrics whose name contains the string \"temperatureSensor\" could be performed as followed\n Example:\n >>> from streamsx import rest\n >>> sc = rest.StreamingAnalyticsConnection()\n >>> instances = sc.get_instances()\n >>> operator = instances[0].get_operators()[0]\n >>> metrics = op.get_metrics(name='*temperatureSensor*')\n \"\"\"\n return self._get_elements(self.metrics, 'metrics', Metric, name=name)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_host(self):\n if hasattr(self, 'host') and self.host:\n return Host(self.rest_client.make_request(self.host), self.rest_client)", "response": "Get the resource this operator is currently executing in."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_pe(self):\n return PE(self.rest_client.make_request(self.pe), self.rest_client)", "response": "Get the processing element this operator is executing in."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nretrieve the application trace 
files for this PE and saves them as plain text files.", "response": "def retrieve_trace(self, filename=None, dir=None):\n \"\"\"Retrieves the application trace files for this PE\n and saves them as a plain text file.\n\n An existing file with the same name will be overwritten.\n\n Args:\n filename (str): name of the created file. Defaults to `pe_<id>_<timestamp>.trace` where `id` is the PE identifier and `timestamp` is the number of seconds since the Unix epoch, for example ``pe_83_1511995995.trace``.\n dir (str): a valid directory in which to save the file. Defaults to the current directory.\n\n Returns:\n str: the path to the created file, or None if retrieving a job's logs is not supported in the version of streams to which the job is submitted.\n\n .. versionadded:: 1.9\n \"\"\"\n if hasattr(self, \"applicationTrace\") and self.applicationTrace is not None:\n logger.debug(\"Retrieving PE trace: \" + self.applicationTrace)\n if not filename:\n filename = _file_name('pe', self.id, '.trace')\n return self.rest_client._retrieve_file(self.applicationTrace, filename, dir, 'text/plain')\n\n else:\n return None"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef retrieve_console_log(self, filename=None, dir=None):\n if hasattr(self, \"consoleLog\") and self.consoleLog is not None:\n logger.debug(\"Retrieving PE console log: \" + self.consoleLog)\n if not filename:\n filename = _file_name('pe', self.id, '.stdouterr')\n return self.rest_client._retrieve_file(self.consoleLog, filename, dir, 'text/plain')\n\n else:\n return None", "response": "Retrieves the application console log for this PE and saves them as plain text files."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_resource_allocation(self):\n if hasattr(self, 'resourceAllocation'):\n return ResourceAllocation(self.rest_client.make_request(self.resourceAllocation), self.rest_client)", "response": "Get 
the : py : class : ResourceAllocation element tance."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_resource(self):\n return Resource(self.rest_client.make_request(self.resource), self.rest_client)", "response": "Get the resource of the resource allocation."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_jobs(self, name=None):\n if self.applicationResource:\n return self._get_elements(self.jobs, 'jobs', Job, None, name)\n else:\n return []", "response": "Retrieves jobs running on this resource in its instance."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_operator_output_port(self):\n return OperatorOutputPort(self.rest_client.make_request(self.operatorOutputPort), self.rest_client)", "response": "Returns the OperatorOutputPort object for this exported stream."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nconnects to an IBM Streams service instance.", "response": "def of_service(config):\n \"\"\"Connect to an IBM Streams service instance running in IBM Cloud Private for Data.\n\n The instance is specified in `config`. The configuration may be code injected from the list of services\n in a Jupyter notebook running in ICPD or manually created. 
The code that selects a service instance by name is::\n\n # Two lines are code injected in a Jupyter notebook by selecting the service instance\n from icpd_core import ipcd_util\n cfg = icpd_util.get_service_details(name='instanceName')\n\n instance = Instance.of_service(cfg)\n\n SSL host verification is disabled by setting :py:const:`~streamsx.topology.context.ConfigParams.SSL_VERIFY`\n to ``False`` within `config` before calling this method::\n\n cfg[ConfigParams.SSL_VERIFY] = False\n instance = Instance.of_service(cfg)\n\n Args:\n config(dict): Configuration of IBM Streams service instance.\n\n Returns:\n Instance: Instance representing for IBM Streams service instance.\n\n .. note:: Only supported when running within the ICPD cluster,\n for example in a Jupyter notebook within a ICPD project.\n\n .. versionadded:: 1.12\n \"\"\"\n service = Instance._find_service_def(config)\n if not service:\n raise ValueError()\n endpoint = service['connection_info'].get('serviceRestEndpoint')\n resource_url, name = Instance._root_from_endpoint(endpoint)\n\n sc = streamsx.rest.StreamsConnection(resource_url=resource_url, auth=_ICPDAuthHandler(name, service['service_token']))\n if streamsx.topology.context.ConfigParams.SSL_VERIFY in config:\n sc.session.verify = config[streamsx.topology.context.ConfigParams.SSL_VERIFY]\n return sc.get_instance(name)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nretrieving a job matching the given id", "response": "def get_job(self, id):\n \"\"\"Retrieves a job matching the given `id`\n\n Args:\n id (str): Job `id` to match.\n\n Returns:\n Job: Job matching the given `id`\n\n Raises:\n ValueError: No resource matches given `id` or multiple resources matching given `id`\n \"\"\"\n return self._get_element_by_id(self.jobs, 'jobs', Job, str(id))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nget a list of currently published topics for this instance.", "response": "def 
get_published_topics(self):\n \"\"\"Get a list of published topics for this instance.\n\n Streams applications publish streams to a a topic that can be subscribed to by other\n applications. This allows a microservice approach where publishers\n and subscribers are independent of each other.\n\n A published stream has a topic and a schema. It is recommended that a\n topic is only associated with a single schema.\n\n Streams may be published and subscribed by applications regardless of the\n implementation language. For example a Python application can publish\n a stream of JSON tuples that are subscribed to by SPL and Java applications.\n\n Returns:\n list(PublishedTopic): List of currently published topics.\n \"\"\"\n published_topics = []\n # A topic can be published multiple times\n # (typically with the same schema) but the\n # returned list only wants to contain a topic,schema\n # pair once. I.e. the list of topics being published is\n # being returned, not the list of streams.\n seen_topics = {}\n for es in self.get_exported_streams():\n pt = es._as_published_topic()\n if pt is not None:\n if pt.topic in seen_topics:\n if pt.schema is None:\n continue\n if pt.schema in seen_topics[pt.topic]:\n continue\n seen_topics[pt.topic].append(pt.schema)\n else:\n seen_topics[pt.topic] = [pt.schema]\n published_topics.append(pt)\n\n return published_topics"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nretrieve application configurations for this instance.", "response": "def get_application_configurations(self, name=None):\n \"\"\"Retrieves application configurations for this instance.\n\n Args:\n name (str, optional): Only return application configurations containing property **name** that matches `name`. `name` can be a\n regular expression. 
If `name` is not supplied, then all application configurations are returned.\n\n Returns:\n list(ApplicationConfiguration): A list of application configurations matching the given `name`.\n \n .. versionadded 1.12\n \"\"\"\n if hasattr(self, 'applicationConfigurations'):\n return self._get_elements(self.applicationConfigurations, 'applicationConfigurations', ApplicationConfiguration, None, name)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef create_application_configuration(self, name, properties, description=None):\n if not hasattr(self, 'applicationConfigurations'):\n raise NotImplementedError()\n\n cv = ApplicationConfiguration._props(name, properties, description)\n\n res = self.rest_client.session.post(self.applicationConfigurations,\n headers = {'Accept' : 'application/json'},\n json=cv)\n _handle_http_errors(res)\n return ApplicationConfiguration(res.json(), self.rest_client)", "response": "Create an application configuration."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nsubmitting a new application bundle to this Streaming Analytics service.", "response": "def submit_job(self, bundle, job_config=None):\n \"\"\"Submit a Streams Application Bundle (sab file) to\n this Streaming Analytics service.\n \n Args:\n bundle(str): path to a Streams application bundle (sab file)\n containing the application to be submitted\n job_config(JobConfig): a job configuration overlay\n \n Returns:\n dict: JSON response from service containing 'name' field with unique\n job name assigned to submitted job, or, 'error_status' and\n 'description' fields if submission was unsuccessful.\n \"\"\"\n return self._delegator._submit_job(bundle=bundle, job_config=job_config)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncancel a running job.", "response": "def cancel_job(self, job_id=None, job_name=None):\n \"\"\"Cancel a running job.\n\n Args:\n job_id (str, 
optional): Identifier of job to be canceled.\n job_name (str, optional): Name of job to be canceled.\n\n Returns:\n dict: JSON response for the job cancel operation.\n \"\"\"\n return self._delegator.cancel_job(job_id=job_id, job_name = job_name)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _get_jobs_url(self):\n if self._jobs_url is None:\n self.get_instance_status()\n if self._jobs_url is None:\n raise ValueError(\"Cannot obtain jobs URL\")\n return self._jobs_url", "response": "Get & save jobs URL from the status call."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncancels a running job.", "response": "def cancel_job(self, job_id=None, job_name=None):\n \"\"\"Cancel a running job.\n\n Args:\n job_id (str, optional): Identifier of job to be canceled.\n job_name (str, optional): Name of job to be canceled.\n\n Returns:\n dict: JSON response for the job cancel operation.\n \"\"\"\n payload = {}\n if job_name is not None:\n payload['job_name'] = job_name\n if job_id is not None:\n payload['job_id'] = job_id\n\n jobs_url = self._get_url('jobs_path')\n res = self.rest_client.session.delete(jobs_url, params=payload)\n _handle_http_errors(res)\n return res.json()"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef start_instance(self):\n start_url = self._get_url('start_path')\n res = self.rest_client.session.put(start_url, json={})\n _handle_http_errors(res)\n return res.json()", "response": "Start the instance for this Streaming Analytics service."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nstops the instance for this Streaming Analytics service.", "response": "def stop_instance(self):\n \"\"\"Stop the instance for this Streaming Analytics service.\n\n Returns:\n dict: JSON response for the instance stop operation.\n \"\"\"\n stop_url = 
self._get_url('stop_path')\n res = self.rest_client.session.put(stop_url, json={})\n _handle_http_errors(res)\n return res.json()"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_instance_status(self):\n status_url = self._get_url('status_path')\n res = self.rest_client.session.get(status_url)\n _handle_http_errors(res)\n return res.json()", "response": "Get the status of the instance for this Streaming Analytics service."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nsubmits this Streams Application Bundle to its associated instance.", "response": "def submit_job(self, job_config=None):\n \"\"\"Submit this Streams Application Bundle (sab file) to\n its associated instance.\n \n Args:\n job_config(JobConfig): a job configuration overlay\n \n Returns:\n Job: Resulting job instance.\n \"\"\"\n job_id = self._delegator._submit_bundle(self, job_config)\n return self._instance.get_job(job_id)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncanceling the job using streamtool.", "response": "def _cancel_job(self, job, force):\n \"\"\"Cancel job using streamtool.\"\"\"\n import streamsx.st as st\n if st._has_local_install:\n return st._cancel_job(job.id, force,\n domain_id=job.get_instance().get_domain().id, instance_id=job.get_instance().id)\n return False"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nupdating this application configuration.", "response": "def update(self, properties=None, description=None):\n \"\"\"Update this application configuration.\n\n To create or update a property provide its key-value\n pair in `properties`.\n\n To delete a property provide its key with the value ``None``\n in properties.\n\n Args:\n properties (dict): Property values to be updated. If ``None`` the properties are unchanged.\n description (str): Description for the configuration. 
If ``None`` the description is unchanged.\n\n Returns:\n ApplicationConfiguration: self\n \"\"\"\n cv = ApplicationConfiguration._props(properties=properties, description=description)\n res = self.rest_client.session.patch(self.rest_self,\n headers = {'Accept' : 'application/json',\n 'Content-Type' : 'application/json'},\n json=cv)\n _handle_http_errors(res)\n self.json_rep = res.json()\n return self"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef delete(self):\n res = self.rest_client.session.delete(self.rest_self)\n _handle_http_errors(res)", "response": "Delete this application configuration."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef is_common(schema):\n if isinstance(schema, StreamSchema):\n return schema.schema() in _SCHEMA_COMMON\n if isinstance(schema, CommonSchema):\n return True\n if isinstance(schema, basestring):\n return is_common(StreamSchema(schema))\n return False", "response": "Is schema a common schema?"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nset a schema from another schema", "response": "def _set(self, schema):\n \"\"\"Set a schema from another schema\"\"\"\n if isinstance(schema, CommonSchema):\n self._spl_type = False\n self._schema = schema.schema()\n self._style = self._default_style()\n else:\n self._spl_type = schema._spl_type\n self._schema = schema._schema\n self._style = schema._style"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef as_tuple(self, named=None):\n if not named:\n return self._copy(tuple)\n\n if named == True or isinstance(named, basestring):\n return self._copy(self._make_named_tuple(name=named))\n\n return self._copy(tuple)", "response": "Create a structured schema that will pass stream tuples into a tuple."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nextend this schema by another 
schema.", "response": "def extend(self, schema):\n \"\"\"\n Extend a structured schema by another.\n\n For example extending ``tuple<rstring id, timestamp ts, float64 value>``\n with ``tuple<float32 score>`` results in ``tuple<rstring id, timestamp ts, float64 value, float32 score>``.\n\n Args:\n schema(StreamSchema): Schema to extend this schema by.\n\n Returns:\n StreamSchema: New schema that is an extension of this schema.\n \"\"\"\n if self._spl_type:\n raise TypeError(\"Not supported for declared SPL types\")\n base = self.schema()\n extends = schema.schema()\n new_schema = base[:-1] + ',' + extends[6:]\n return StreamSchema(new_schema)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nset an operator's parameter representing the style of this schema.", "response": "def _fnop_style(schema, op, name):\n \"\"\"Set an operator's parameter representing the style of this schema.\"\"\"\n if is_common(schema):\n if name in op.params:\n del op.params[name]\n return\n if _is_pending(schema):\n ntp = 'pending'\n elif schema.style is tuple:\n ntp = 'tuple'\n elif schema.style is _spl_dict:\n ntp = 'dict'\n elif _is_namedtuple(schema.style) and hasattr(schema.style, '_splpy_namedtuple'):\n ntp = 'namedtuple:' + schema.style._splpy_namedtuple\n else:\n return\n op.params[name] = ntp"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nsaving all changed values to the database.", "response": "def save(self) -> None:\n \"\"\"\n Saves all changed values to the database.\n \"\"\"\n for name, field in self.fields.items():\n value = self.cleaned_data[name]\n if isinstance(value, UploadedFile):\n # Delete old file\n fname = self._s.get(name, as_type=File)\n if fname:\n try:\n default_storage.delete(fname.name)\n except OSError: # pragma: no cover\n logger.error('Deleting file %s failed.' 
% fname.name)\n\n # Create new file\n newname = default_storage.save(self.get_new_filename(value.name), value)\n value._name = newname\n self._s.set(name, value)\n elif isinstance(value, File):\n # file is unchanged\n continue\n elif isinstance(field, forms.FileField):\n # file is deleted\n fname = self._s.get(name, as_type=File)\n if fname:\n try:\n default_storage.delete(fname.name)\n except OSError: # pragma: no cover\n logger.error('Deleting file %s failed.' % fname.name)\n del self._s[name]\n elif value is None:\n del self._s[name]\n elif self._s.get(name, as_type=type(value)) != value:\n self._s.set(name, value)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns the new filename based on the original filename of an uploaded file.", "response": "def get_new_filename(self, name: str) -> str:\n \"\"\"\n Returns the file name to use based on the original filename of an uploaded file.\n By default, the file name is constructed as::\n \n <model_name>-<attribute_name>/<primary_key>/<original_basename>.<random_nonce>.<extension>\n \"\"\"\n nonce = get_random_string(length=8)\n return '%s-%s/%s/%s.%s.%s' % (\n self.obj._meta.model_name, self.attribute_name,\n self.obj.pk, name, nonce, name.split('.')[-1]\n )"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _stop(sas, cmd_args):\n if not cmd_args.force:\n status = sas.get_instance_status()\n jobs = int(status['job_count'])\n if jobs:\n return status\n return sas.stop_instance()", "response": "Stop the service if no jobs are running unless force is set"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef main(args=None):\n streamsx._streams._version._mismatch_check('streamsx.topology.context')\n try:\n sr = run_cmd(args)\n sr['return_code'] = 0\n except:\n sr = {'return_code':1, 'error': sys.exc_info()}\n return sr", "response": "Runs an action against a 
Streaming Analytics service."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nget information from the user's code.", "response": "def _source_info():\n \"\"\"\n Get information from the user's code (two frames up)\n to leave breadcrumbs for file, line, class and function.\n \"\"\"\n ofi = inspect.getouterframes(inspect.currentframe())[2]\n try:\n calling_class = ofi[0].f_locals['self'].__class__\n except KeyError:\n calling_class = None\n # Tuple of file,line,calling_class,function_name\n return ofi[1], ofi[2], calling_class, ofi[3]"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef source(self, func, name=None):\n _name = name\n if inspect.isroutine(func):\n pass\n elif callable(func):\n pass\n else:\n if _name is None:\n _name = type(func).__name__\n func = streamsx.topology.runtime._IterableInstance(func)\n\n sl = _SourceLocation(_source_info(), \"source\")\n _name = self.graph._requested_name(_name, action='source', func=func)\n # source is always stateful\n op = self.graph.addOperator(self.opnamespace+\"::Source\", func, name=_name, sl=sl)\n op._layout(kind='Source', name=_name, orig_name=name)\n oport = op.addOutputPort(name=_name)\n return Stream(self, oport)._make_placeable()", "response": "Declare a source stream that introduces tuples from an external source."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nsubscribing to a topic.", "response": "def subscribe(self, topic, schema=streamsx.topology.schema.CommonSchema.Python, name=None, connect=None, buffer_capacity=None, buffer_full_policy=None):\n \"\"\"\n Subscribe to a topic published by other Streams applications.\n A Streams application may publish a stream to allow other\n Streams applications to subscribe to it. 
A subscriber matches a\n publisher if the topic and schema match.\n\n By default a stream is subscribed as :py:const:`~streamsx.topology.schema.CommonSchema.Python` objects\n which connects to streams published to topic by Python Streams applications.\n\n Structured schemas are subscribed to using an instance of\n :py:class:`StreamSchema`. A Streams application publishing\n structured schema streams may have been implemented in any\n programming language supported by Streams.\n\n JSON streams are subscribed to using schema :py:const:`~streamsx.topology.schema.CommonSchema.Json`.\n Each tuple on the returned stream will be a Python dictionary\n object created by ``json.loads(tuple)``.\n A Streams application publishing JSON streams may have been implemented in any programming language\n supported by Streams.\n \n String streams are subscribed to using schema :py:const:`~streamsx.topology.schema.CommonSchema.String`.\n Each tuple on the returned stream will be a Python string object.\n A Streams application publishing string streams may have been implemented in any programming language\n supported by Streams.\n\n Subscribers can ensure they do not slow down matching publishers\n by using a buffered connection with a buffer full policy\n that drops tuples.\n\n Args:\n topic(str): Topic to subscribe to.\n schema(~streamsx.topology.schema.StreamSchema): schema to subscribe to.\n name(str): Name of the subscribed stream, defaults to a generated name.\n connect(SubscribeConnection): How subscriber will be connected to matching publishers. Defaults to :py:const:`~SubscribeConnection.Direct` connection.\n buffer_capacity(int): Buffer capacity in tuples when `connect` is set to :py:const:`~SubscribeConnection.Buffered`. Defaults to 1000 when `connect` is `Buffered`. Ignored when `connect` is `None` or `Direct`.\n buffer_full_policy(~streamsx.types.CongestionPolicy): Policy when a published tuple arrives and the subscriber's buffer is full. 
Defaults to `Wait` when `connect` is `Buffered`. Ignored when `connect` is `None` or `Direct`.\n\n Returns:\n Stream: A stream whose tuples have been published to the topic by other Streams applications.\n\n .. versionchanged:: 1.9 `connect`, `buffer_capacity` and `buffer_full_policy` parameters added.\n\n .. seealso:`SubscribeConnection`\n \"\"\"\n schema = streamsx.topology.schema._normalize(schema)\n _name = self.graph._requested_name(name, 'subscribe')\n sl = _SourceLocation(_source_info(), \"subscribe\")\n # subscribe is never stateful\n op = self.graph.addOperator(kind=\"com.ibm.streamsx.topology.topic::Subscribe\", sl=sl, name=_name, stateful=False)\n oport = op.addOutputPort(schema=schema, name=_name)\n params = {'topic': topic, 'streamType': schema}\n if connect is not None and connect != SubscribeConnection.Direct:\n params['connect'] = connect\n if buffer_capacity:\n params['bufferCapacity'] = int(buffer_capacity)\n if buffer_full_policy:\n params['bufferFullPolicy'] = buffer_full_policy\n \n op.setParameters(params)\n op._layout_group('Subscribe', name if name else _name)\n return Stream(self, oport)._make_placeable()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef add_file_dependency(self, path, location):\n if location not in {'etc', 'opt'}:\n raise ValueError(location)\n\n if not os.path.isfile(path) and not os.path.isdir(path):\n raise ValueError(path)\n\n path = os.path.abspath(path)\n\n if location not in self._files:\n self._files[location] = [path]\n else:\n self._files[location].append(path)\n return location + '/' + os.path.basename(path)", "response": "Adds a file or directory dependency into an Streams application bundle."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nadding a Python package dependency to the internal list of packages that are required by pip.", "response": "def add_pip_package(self, requirement):\n \"\"\"\n Add a Python 
package dependency for this topology.\n\n If the package defined by the requirement specifier\n is not pre-installed on the build system then the\n package is installed using `pip` and becomes part\n of the Streams application bundle (`sab` file).\n The package is expected to be available from `pypi.org`.\n\n\n If the package is already installed on the build system\n then it is not added into the `sab` file.\n The assumption is that the runtime hosts for a Streams\n instance have the same Python packages installed as the\n build machines. This is always true for IBM Cloud\n Private for Data and the Streaming Analytics service on IBM Cloud.\n\n The project name extracted from the requirement\n specifier is added to :py:attr:`~exclude_packages`\n to avoid the package being added by the dependency\n resolver. Thus the package should be added before\n it is used in any stream transformation.\n\n When an application is run with trace level ``info``\n the available Python packages on the running system\n are listed to application trace. This includes\n any packages added by this method.\n\n Example::\n\n topo = Topology()\n # Add dependency on pint package\n # and astral at version 0.8.1\n topo.add_pip_package('pint')\n topo.add_pip_package('astral==0.8.1')\n \n Args:\n requirement(str): Package requirements specifier.\n\n .. warning::\n Only supported when using the build service with\n a Streams instance in IBM Cloud Private for Data\n or Streaming Analytics service on IBM Cloud.\n\n .. note::\n Installing packages through `pip` is preferred to\n the automatic dependency checking performed on local\n modules. This is because `pip` will perform a full\n install of the package including any dependent packages\n and additional files, such as shared libraries, that\n might be missed by dependency discovery.\n\n .. 
versionadded:: 1.9\n \"\"\"\n self._pip_packages.append(str(requirement))\n pr = pkg_resources.Requirement.parse(requirement) \n self.exclude_packages.add(pr.project_name)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef create_submission_parameter(self, name, default=None, type_=None):\n \n if name in self._submission_parameters:\n raise ValueError(\"Submission parameter {} already defined.\".format(name))\n sp = streamsx.topology.runtime._SubmissionParam(name, default, type_)\n self._submission_parameters[name] = sp\n return sp", "response": "Create a submission parameter."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _generate_requirements(self):\n if not self._pip_packages:\n return\n\n reqs = ''\n for req in self._pip_packages:\n reqs += \"{}\\n\".format(req)\n reqs_include = {\n 'contents': reqs,\n 'target':'opt/python/streams',\n 'name': 'requirements.txt'}\n\n if 'opt' not in self._files:\n self._files['opt'] = [reqs_include]\n else:\n self._files['opt'].append(reqs_include)", "response": "Generate the info to create requirements. 
txt in the toolkit."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _add_job_control_plane(self):\n if not self._has_jcp:\n jcp = self.graph.addOperator(kind=\"spl.control::JobControlPlane\", name=\"JobControlPlane\")\n jcp.viewable = False\n self._has_jcp = True", "response": "Add a JobControlPlane operator to the topology if one has not already been added."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncreate an alias of this stream with name equal to name.", "response": "def aliased_as(self, name):\n \"\"\"\n Create an alias of this stream.\n\n Returns an alias of this stream with name `name`.\n When invocation of an SPL operator requires an\n :py:class:`~streamsx.spl.op.Expression` against\n an input port this can be used to ensure expression\n matches the input port alias regardless of the name\n of the actual stream.\n\n Example use where the filter expression for a ``Filter`` SPL operator\n uses ``IN`` to access input tuple attribute ``seq``::\n\n s = ...\n s = s.aliased_as('IN')\n\n params = {'filter': op.Expression.expression('IN.seq % 4ul == 0ul')}\n f = op.Map('spl.relational::Filter', stream, params = params)\n\n Args:\n name(str): Name for returned stream.\n\n Returns:\n Stream: Alias of this stream with ``name`` equal to `name`.\n \n .. 
versionadded:: 1.9\n \"\"\"\n stream = copy.copy(self)\n stream._alias = name\n return stream"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncreate a view on a distributed topology.", "response": "def view(self, buffer_time = 10.0, sample_size = 10000, name=None, description=None, start=False):\n \"\"\"\n Defines a view on a stream.\n\n A view is a continually updated sampled buffer of a streams's tuples.\n Views allow visibility into a stream from external clients such\n as Jupyter Notebooks, the Streams console,\n `Microsoft Excel <https://www.ibm.com/support/knowledgecenter/SSCRJU_4.2.0/com.ibm.streams.excel.doc/doc/excel_overview.html>`_ or REST clients.\n\n The view created by this method can be used by external clients\n and through the returned :py:class:`~streamsx.topology.topology.View` object after the topology is submitted. For example a Jupyter Notebook can\n declare and submit an application with views, and then\n use the resultant `View` objects to visualize live data within the streams.\n\n When the stream contains Python objects then they are converted\n to JSON.\n\n Args:\n buffer_time: Specifies the buffer size to use measured in seconds.\n sample_size: Specifies the number of tuples to sample per second.\n name(str): Name of the view. Name must be unique within the topology. Defaults to a generated name.\n description: Description of the view.\n start(bool): Start buffering data when the job is submitted.\n If `False` then the view starts buffering data when the first\n remote client accesses it to retrieve data.\n \n Returns:\n streamsx.topology.topology.View: View object which can be used to access the data when the\n topology is submitted.\n\n .. 
note:: Views are only supported when submitting to distributed\n contexts including Streaming Analytics service.\n \"\"\"\n if name is None:\n name = ''.join(random.choice('0123456789abcdef') for x in range(16))\n\n if self.oport.schema == streamsx.topology.schema.CommonSchema.Python:\n if self._json_stream:\n view_stream = self._json_stream\n else:\n self._json_stream = self.as_json(force_object=False)._layout(hidden=True)\n view_stream = self._json_stream\n # colocate map operator with stream that is being viewed.\n if self._placeable:\n self._colocate(view_stream, 'view')\n else:\n view_stream = self\n\n port = view_stream.oport.name\n view_config = {\n 'name': name,\n 'port': port,\n 'description': description,\n 'bufferTime': buffer_time,\n 'sampleSize': sample_size}\n if start:\n view_config['activateOption'] = 'automatic'\n view_stream.oport.operator.addViewConfig(view_config)\n _view = View(name)\n self.topology.graph._views.append(_view)\n return _view"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nmapping each tuple from this stream into 0 or 1 stream tuples.", "response": "def map(self, func=None, name=None, schema=None):\n \"\"\"\n Maps each tuple from this stream into 0 or 1 stream tuples.\n\n For each tuple on this stream ``result = func(tuple)`` is called.\n If `result` is not `None` then the result will be submitted\n as a tuple on the returned stream. If `result` is `None` then\n no tuple submission will occur.\n\n By default the submitted tuple is ``result`` without modification\n resulting in a stream of picklable Python objects. 
Setting the\n `schema` parameter changes the type of the stream and\n modifies each ``result`` before submission.\n\n * ``object`` or :py:const:`~streamsx.topology.schema.CommonSchema.Python` - The default: `result` is submitted.\n * ``str`` type (``unicode`` 2.7) or :py:const:`~streamsx.topology.schema.CommonSchema.String` - A stream of strings: ``str(result)`` is submitted.\n * ``json`` or :py:const:`~streamsx.topology.schema.CommonSchema.Json` - A stream of JSON objects: ``result`` must be convertible to a JSON object using `json` package.\n * :py:const:`~streamsx.topology.schema.StreamSchema` - A structured stream. `result` must be a `dict` or (Python) `tuple`. When a `dict` is returned the outgoing stream tuple attributes are set by name, when a `tuple` is returned stream tuple attributes are set by position.\n * string value - Equivalent to passing ``StreamSchema(schema)``\n\n Args:\n func: A callable that takes a single parameter for the tuple.\n If not supplied then a function equivalent to ``lambda tuple_ : tuple_`` is used.\n name(str): Name of the mapped stream, defaults to a generated name.\n schema(StreamSchema|CommonSchema|str): Schema of the resulting stream.\n\n If invoking ``func`` for a tuple on the stream raises an exception\n then its processing element will terminate. By default the processing\n element will automatically restart though tuples may be lost.\n\n If ``func`` is a callable object then it may suppress exceptions\n by returning a true value from its ``__exit__`` method. When an\n exception is suppressed no tuple is submitted to the mapped\n stream corresponding to the input tuple that caused the exception.\n \n\n Returns:\n Stream: A stream containing tuples mapped by `func`.\n\n .. versionadded:: 1.7 `schema` argument added to allow conversion to\n a structured stream.\n .. versionadded:: 1.8 Support for submitting `dict` objects as stream tuples to a structured stream (in addition to existing support for `tuple` objects).\n .. 
versionchanged:: 1.11 `func` is optional.\n \"\"\"\n if schema is None:\n schema = streamsx.topology.schema.CommonSchema.Python\n if func is None:\n func = streamsx.topology.runtime._identity\n if name is None:\n name = 'identity'\n \n ms = self._map(func, schema=schema, name=name)._layout('Map')\n ms.oport.operator.sl = _SourceLocation(_source_info(), 'map')\n return ms"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning a Stream containing the flattened and mapped tuples.", "response": "def flat_map(self, func=None, name=None):\n \"\"\"\n Maps and flattens each tuple from this stream into 0 or more tuples.\n\n\n For each tuple on this stream ``func(tuple)`` is called.\n If the result is not `None` then the result is iterated over,\n with each value from the iterator that is not `None` being submitted\n to the returned stream.\n\n If the result is `None` or an empty iterable then no tuples are submitted to\n the returned stream.\n \n Args:\n func: A callable that takes a single parameter for the tuple.\n If not supplied then a function equivalent to ``lambda tuple_ : tuple_`` is used.\n This is suitable when each tuple on this stream is an iterable to be flattened.\n \n name(str): Name of the flattened stream, defaults to a generated name.\n\n If invoking ``func`` for a tuple on the stream raises an exception\n then its processing element will terminate. By default the processing\n element will automatically restart though tuples may be lost.\n\n If ``func`` is a callable object then it may suppress exceptions\n by returning a true value from its ``__exit__`` method. When an\n exception is suppressed no tuples are submitted to the flattened\n and mapped stream corresponding to the input tuple\n that caused the exception.\n\n Returns:\n Stream: A Stream containing flattened and mapped tuples.\n Raises:\n TypeError: if `func` does not return an iterator or None\n\n .. 
versionchanged:: 1.11 `func` is optional.\n \"\"\" \n if func is None:\n func = streamsx.topology.runtime._identity\n if name is None:\n name = 'flatten'\n \n sl = _SourceLocation(_source_info(), 'flat_map')\n _name = self.topology.graph._requested_name(name, action='flat_map', func=func)\n stateful = self._determine_statefulness(func)\n op = self.topology.graph.addOperator(self.topology.opnamespace+\"::FlatMap\", func, name=_name, sl=sl, stateful=stateful)\n op.addInputPort(outputPort=self.oport)\n streamsx.topology.schema.StreamSchema._fnop_style(self.oport.schema, op, 'pyStyle')\n oport = op.addOutputPort(name=_name)\n return Stream(self.topology, oport)._make_placeable()._layout('FlatMap', name=_name, orig_name=name)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef parallel(self, width, routing=Routing.ROUND_ROBIN, func=None, name=None):\n _name = name\n if _name is None:\n _name = self.name + '_parallel'\n \n _name = self.topology.graph._requested_name(_name, action='parallel', func=func)\n\n if routing is None or routing == Routing.ROUND_ROBIN or routing == Routing.BROADCAST:\n op2 = self.topology.graph.addOperator(\"$Parallel$\", name=_name)\n if name is not None:\n op2.config['regionName'] = _name\n op2.addInputPort(outputPort=self.oport)\n if routing == Routing.BROADCAST:\n oport = op2.addOutputPort(width, schema=self.oport.schema, routing=\"BROADCAST\", name=_name)\n else:\n oport = op2.addOutputPort(width, schema=self.oport.schema, routing=\"ROUND_ROBIN\", name=_name)\n\n \n return Stream(self.topology, oport)\n elif routing == Routing.HASH_PARTITIONED:\n\n if (func is None):\n if self.oport.schema == streamsx.topology.schema.CommonSchema.String:\n keys = ['string']\n parallel_input = self.oport\n elif self.oport.schema == streamsx.topology.schema.CommonSchema.Python:\n func = hash\n else:\n raise NotImplementedError(\"HASH_PARTITIONED for schema {0} requires a hash 
function.\".format(self.oport.schema))\n\n if func is not None:\n keys = ['__spl_hash']\n stateful = self._determine_statefulness(func)\n hash_adder = self.topology.graph.addOperator(self.topology.opnamespace+\"::HashAdder\", func, stateful=stateful)\n hash_adder._op_def['hashAdder'] = True\n hash_adder._layout(hidden=True)\n hash_schema = self.oport.schema.extend(streamsx.topology.schema.StreamSchema(\"tuple<int64 __spl_hash>\"))\n hash_adder.addInputPort(outputPort=self.oport)\n streamsx.topology.schema.StreamSchema._fnop_style(self.oport.schema, hash_adder, 'pyStyle')\n parallel_input = hash_adder.addOutputPort(schema=hash_schema)\n\n parallel_op = self.topology.graph.addOperator(\"$Parallel$\", name=_name)\n if name is not None:\n parallel_op.config['regionName'] = _name\n parallel_op.addInputPort(outputPort=parallel_input)\n parallel_op_port = parallel_op.addOutputPort(oWidth=width, schema=parallel_input.schema, partitioned_keys=keys, routing=\"HASH_PARTITIONED\")\n\n if func is not None:\n # use the Functor passthru operator to remove the hash attribute by removing it from output port schema\n hrop = self.topology.graph.addPassThruOperator()\n hrop._layout(hidden=True)\n hrop.addInputPort(outputPort=parallel_op_port)\n parallel_op_port = hrop.addOutputPort(schema=self.oport.schema)\n\n return Stream(self.topology, parallel_op_port)\n else :\n raise TypeError(\"Invalid routing type supplied to the parallel operator\")", "response": "Split the stream into channels and start a parallel region."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nending a parallel region by merging the channels into a single stream.", "response": "def end_parallel(self):\n \"\"\"\n Ends a parallel region by merging the channels into a single stream.\n\n Returns:\n Stream: Stream for which subsequent transformations are no longer parallelized.\n\n .. 
seealso:: :py:meth:`set_parallel`, :py:meth:`parallel`\n \"\"\"\n outport = self.oport\n if isinstance(self.oport.operator, streamsx.topology.graph.Marker):\n if self.oport.operator.kind == \"$Union$\":\n pto = self.topology.graph.addPassThruOperator()\n pto.addInputPort(outputPort=self.oport)\n outport = pto.addOutputPort(schema=self.oport.schema)\n op = self.topology.graph.addOperator(\"$EndParallel$\")\n op.addInputPort(outputPort=outport)\n oport = op.addOutputPort(schema=self.oport.schema)\n endP = Stream(self.topology, oport)\n return endP"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nsets this source stream to be split into multiple channels and start a parallel region.", "response": "def set_parallel(self, width, name=None):\n \"\"\"\n Set this source stream to be split into multiple channels\n as the start of a parallel region.\n\n Calling ``set_parallel`` on a stream created by\n :py:meth:`~Topology.source` results in the stream\n having `width` channels, each created by its own instance\n of the callable::\n\n s = topo.source(S())\n s.set_parallel(3)\n f = s.filter(F())\n e = f.end_parallel()\n\n Each channel has independent instances of ``S`` and ``F``. Tuples\n created by the instance of ``S`` in channel 0 are passed to the\n instance of ``F`` in channel 0, and so on for channels 1 and 2.\n\n Callable transforms instances within the channel can use\n the runtime functions\n :py:func:`~streamsx.ec.channel`, \n :py:func:`~streamsx.ec.local_channel`, \n :py:func:`~streamsx.ec.max_channels` &\n :py:func:`~streamsx.ec.local_max_channels`\n to adapt to being invoked in parallel. For example a\n source callable can use its channel number to determine\n which partition to read from in a partitioned external system.\n\n Calling ``set_parallel`` on a stream created by\n :py:meth:`~Topology.subscribe` results in the stream\n having `width` channels. 
Subscribe ensures that the\n stream will contain all published tuples matching the\n topic subscription and type. A published tuple will appear\n on one of the channels though the specific channel is not known\n in advance.\n\n A parallel region is terminated by :py:meth:`end_parallel`\n or :py:meth:`for_each`.\n\n The number of channels is set by `width` which may be an `int` greater\n than zero or a submission parameter created by\n :py:meth:`Topology.create_submission_parameter`.\n\n With IBM Streams 4.3 or later the number of channels can be\n dynamically changed at runtime.\n\n Parallel regions are started on non-source streams using\n :py:meth:`parallel`.\n\n Args:\n width: The degree of parallelism for the parallel region.\n name(str): Name of the parallel region. Defaults to the name of this stream.\n\n Returns:\n Stream: Returns this stream.\n\n .. seealso:: :py:meth:`parallel`, :py:meth:`end_parallel`\n\n .. versionadded:: 1.9\n .. versionchanged:: 1.11 `name` parameter added.\n \"\"\"\n self.oport.operator.config['parallel'] = True\n self.oport.operator.config['width'] = streamsx.topology.graph._as_spl_json(width, int)\n if name:\n name = self.topology.graph._requested_name(str(name), action='set_parallel')\n self.oport.operator.config['regionName'] = name\n return self"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef set_consistent(self, consistent_config):\n\n # add job control plane if needed\n self.topology._add_job_control_plane()\n self.oport.operator.consistent(consistent_config)\n return self._make_placeable()", "response": "Sets the consistent state of the current stream."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef last(self, size=1):\n win = Window(self, 'SLIDING')\n if isinstance(size, datetime.timedelta):\n win._evict_time(size)\n elif isinstance(size, int):\n win._evict_count(size)\n else:\n 
raise ValueError(size)\n return win", "response": "Declares a sliding window containing the most recent tuples on this stream."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef union(self, streamSet):\n if(not isinstance(streamSet,set)) :\n raise TypeError(\"The union operator parameter must be a set object\")\n if(len(streamSet) == 0):\n return self \n op = self.topology.graph.addOperator(\"$Union$\")\n op.addInputPort(outputPort=self.oport)\n for stream in streamSet:\n op.addInputPort(outputPort=stream.oport)\n oport = op.addOutputPort(schema=self.oport.schema)\n return Stream(self.topology, oport)", "response": "Returns a new stream that is a union of this stream and other streams."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef print(self, tag=None, name=None):\n _name = name\n if _name is None:\n _name = 'print'\n fn = streamsx.topology.functions.print_flush\n if tag is not None:\n tag = str(tag) + ': '\n fn = lambda v : streamsx.topology.functions.print_flush(tag + str(v))\n sp = self.for_each(fn, name=_name)\n sp._op().sl = _SourceLocation(_source_info(), 'print')\n return sp", "response": "Prints each tuple to stdout flushing after each tuple."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\npublishes this stream to a topic.", "response": "def publish(self, topic, schema=None, name=None):\n \"\"\"\n Publish this stream on a topic for other Streams applications to subscribe to.\n A Streams application may publish a stream to allow other\n Streams applications to subscribe to it. 
A subscriber\n matches a publisher if the topic and schema match.\n\n By default a stream is published using its schema.\n\n A stream of :py:const:`Python objects <streamsx.topology.schema.CommonSchema.Python>` can be subscribed to by other Streams Python applications.\n\n If a stream is published setting `schema` to\n :py:const:`~streamsx.topology.schema.CommonSchema.Json`\n then it is published as a stream of JSON objects.\n Other Streams applications may subscribe to it regardless\n of their implementation language.\n\n If a stream is published setting `schema` to\n :py:const:`~streamsx.topology.schema.CommonSchema.String`\n then it is published as strings\n Other Streams applications may subscribe to it regardless\n of their implementation language.\n\n Supported values of `schema` are only\n :py:const:`~streamsx.topology.schema.CommonSchema.Json`\n and\n :py:const:`~streamsx.topology.schema.CommonSchema.String`.\n\n Args:\n topic(str): Topic to publish this stream to.\n schema: Schema to publish. Defaults to the schema of this stream.\n name(str): Name of the publish operator, defaults to a generated name.\n Returns:\n streamsx.topology.topology.Sink: Stream termination.\n\n .. versionadded:: 1.6.1 `name` parameter.\n\n .. 
versionchanged:: 1.7\n Now returns a :py:class:`Sink` instance.\n \"\"\"\n sl = _SourceLocation(_source_info(), 'publish')\n schema = streamsx.topology.schema._normalize(schema)\n if schema is not None and self.oport.schema.schema() != schema.schema():\n nc = None\n if schema == streamsx.topology.schema.CommonSchema.Json:\n schema_change = self.as_json()\n elif schema == streamsx.topology.schema.CommonSchema.String:\n schema_change = self.as_string()\n else:\n raise ValueError(schema)\n \n if self._placeable:\n self._colocate(schema_change, 'publish')\n sp = schema_change.publish(topic, schema=schema, name=name)\n sp._op().sl = sl\n return sp\n\n _name = self.topology.graph._requested_name(name, action=\"publish\")\n # publish is never stateful\n op = self.topology.graph.addOperator(\"com.ibm.streamsx.topology.topic::Publish\", params={'topic': topic}, sl=sl, name=_name, stateful=False)\n op.addInputPort(outputPort=self.oport)\n op._layout_group('Publish', name if name else _name)\n sink = Sink(op)\n\n if self._placeable:\n self._colocate(sink, 'publish')\n return sink"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef autonomous(self):\n op = self.topology.graph.addOperator(\"$Autonomous$\")\n op.addInputPort(outputPort=self.oport)\n oport = op.addOutputPort(schema=self.oport.schema)\n return Stream(self.topology, oport)", "response": "Starts an autonomous region for downstream processing."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn a new SAS stream containing the string representations of each tuple on this stream.", "response": "def as_string(self, name=None):\n \"\"\"\n Declares a stream converting each tuple on this stream\n into a string using `str(tuple)`.\n\n The stream is typed as a :py:const:`string stream <streamsx.topology.schema.CommonSchema.String>`.\n\n If this stream is already typed as a string stream then it will\n be returned (with no additional 
processing against it and `name`\n is ignored).\n\n Args:\n name(str): Name of the resulting stream.\n When `None` defaults to a generated name.\n\n .. versionadded:: 1.6\n .. versionadded:: 1.6.1 `name` parameter added.\n\n Returns:\n Stream: Stream containing the string representations of tuples on this stream.\n \"\"\"\n sas = self._change_schema(streamsx.topology.schema.CommonSchema.String, 'as_string', name)._layout('AsString')\n sas.oport.operator.sl = _SourceLocation(_source_info(), 'as_string')\n return sas"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef as_json(self, force_object=True, name=None):\n func = streamsx.topology.runtime._json_force_object if force_object else None\n saj = self._change_schema(streamsx.topology.schema.CommonSchema.Json, 'as_json', name, func)._layout('AsJson')\n saj.oport.operator.sl = _SourceLocation(_source_info(), 'as_json')\n return saj", "response": "Returns a new stream that contains the JSON representation of the tuples on this stream."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _initialize_rest(self):\n if self._submit_context is None:\n raise ValueError(\"View has not been created.\")\n job = self._submit_context._job_access()\n self._view_object = job.get_views(name=self.name)[0]", "response": "Used to initialize the View object on first use."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef display(self, duration=None, period=2):\n self._initialize_rest()\n return self._view_object.display(duration, period)", "response": "Display a view within a Jupyter or IPython notebook cell."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncompletes the pending stream.", "response": "def complete(self, stream):\n \"\"\"Complete the pending stream.\n\n Any connections made to :py:attr:`stream` are connected to `stream` 
once\n this method returns.\n\n Args:\n stream(Stream): Stream that completes the connection.\n \"\"\"\n assert not self.is_complete()\n self._marker.addInputPort(outputPort=stream.oport)\n self.stream.oport.schema = stream.oport.schema\n # Update the pending schema to the actual schema\n # Any downstream filters that took the reference\n # will be automatically updated to the correct schema\n self._pending_schema._set(self.stream.oport.schema)\n\n # Mark the operator with the pending stream\n # a start point for graph travesal\n stream.oport.operator._start_op = True"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef trigger(self, when=1):\n tw = Window(self.stream, self._config['type'])\n tw._config['evictPolicy'] = self._config['evictPolicy']\n tw._config['evictConfig'] = self._config['evictConfig']\n if self._config['evictPolicy'] == 'TIME':\n tw._config['evictTimeUnit'] = 'MILLISECONDS'\n\n if isinstance(when, datetime.timedelta):\n tw._config['triggerPolicy'] = 'TIME'\n tw._config['triggerConfig'] = int(when.total_seconds() * 1000.0)\n tw._config['triggerTimeUnit'] = 'MILLISECONDS'\n elif isinstance(when, int):\n tw._config['triggerPolicy'] = 'COUNT'\n tw._config['triggerConfig'] = when\n else:\n raise ValueError(when)\n return tw", "response": "Declare a window with this window s size and a trigger policy."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef aggregate(self, function, name=None):\n schema = streamsx.topology.schema.CommonSchema.Python\n \n sl = _SourceLocation(_source_info(), \"aggregate\")\n _name = self.topology.graph._requested_name(name, action=\"aggregate\", func=function)\n stateful = self.stream._determine_statefulness(function)\n op = self.topology.graph.addOperator(self.topology.opnamespace+\"::Aggregate\", function, name=_name, sl=sl, stateful=stateful)\n op.addInputPort(outputPort=self.stream.oport, 
window_config=self._config)\n streamsx.topology.schema.StreamSchema._fnop_style(self.stream.oport.schema, op, 'pyStyle')\n oport = op.addOutputPort(schema=schema, name=_name)\n op._layout(kind='Aggregate', name=_name, orig_name=name)\n return Stream(self.topology, oport)._make_placeable()", "response": "Aggregates the contents of the window when the window is triggered."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_rest_api():\n assert _has_local_install\n\n url=[]\n ok = _run_st(['geturl', '--api'], lines=url)\n if not ok:\n raise ChildProcessError('streamtool geturl')\n return url[0]", "response": "Get the root URL for IBM Streams REST API."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _get_package_name(module):\n try:\n # if __package__ is defined, use it\n package_name = module.__package__\n except AttributeError:\n package_name = None \n \n if package_name is None:\n # if __path__ is defined, the package name is the module name\n package_name = module.__name__\n if not hasattr(module, '__path__'):\n # if __path__ is not defined, the package name is the\n # string before the last \".\" of the fully-qualified module name\n package_name = package_name.rpartition('.')[0]\n \n return package_name", "response": "Gets the package name given a module object"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngets the function s module name", "response": "def _get_module_name(function):\n \"\"\"\n Gets the function's module name\n Resolves the __main__ module to an actual module name\n Returns:\n str: the function's module name\n \"\"\"\n module_name = function.__module__\n if module_name == '__main__':\n # get the main module object of the function\n main_module = inspect.getmodule(function)\n # get the module name from __file__ by getting the base name and removing the .py extension\n # 
e.g. test1.py => test1\n if hasattr(main_module, '__file__'):\n module_name = os.path.splitext(os.path.basename(main_module.__file__))[0]\n return module_name"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncheck whether the given module is a builtin module.", "response": "def _is_builtin_module(module):\n \"\"\"Is builtin or part of standard library\n \"\"\"\n if (not hasattr(module, '__file__')) or module.__name__ in sys.builtin_module_names:\n return True\n if module.__name__ in _stdlib._STD_LIB_MODULES:\n return True\n amp = os.path.abspath(module.__file__)\n if 'site-packages' in amp:\n return False\n if amp.startswith(_STD_MODULE_DIR):\n return True\n if not '.' in module.__name__:\n return False\n mn_top = module.__name__.split('.')[0]\n return mn_top in _stdlib._STD_LIB_MODULES"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nfinds the set of dependent modules for used modules classes and routines.", "response": "def _find_dependent_modules(self, module):\n \"\"\"\n Return the set of dependent modules for used modules,\n classes and routines.\n \"\"\"\n dms = set()\n for um in inspect.getmembers(module, inspect.ismodule):\n dms.add(um[1])\n\n for uc in inspect.getmembers(module, inspect.isclass):\n self._add_obj_module(dms, uc[1])\n for ur in inspect.getmembers(module, inspect.isroutine):\n self._add_obj_module(dms, ur[1])\n\n return dms"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef add_dependencies(self, module):\n\n if module in self._processed_modules:\n return None\n\n if hasattr(module, \"__name__\"):\n mn = module.__name__\n else:\n mn = '<unknown>'\n\n _debug.debug(\"add_dependencies:module=%s\", module)\n\n # If the module in which the class/function is defined is __main__, don't add it. 
Just add its dependencies.\n if mn == \"__main__\":\n self._processed_modules.add(module)\n\n # add the module as a dependency\n elif not self._add_dependency(module, mn):\n _debug.debug(\"add_dependencies:not added:module=%s\", mn)\n return None\n\n _debug.debug(\"add_dependencies:ADDED:module=%s\", mn)\n\n # recursively get the module's imports and add those as dependencies\n for dm in self._find_dependent_modules(module):\n _debug.debug(\"add_dependencies:adding dependent module %s for %s\", dm, mn)\n self.add_dependencies(dm)", "response": "Adds a module and its dependencies to the list of dependencies."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning True if a module should be included or excluded based upon included_packages and excluded_packages.", "response": "def _include_module(self, module, mn):\n \"\"\" See if a module should be included or excluded based upon\n included_packages and excluded_packages.\n\n As some packages have the following format:\n\n scipy.special.specfun\n scipy.linalg\n\n Where the top-level package name is just a prefix to a longer package name,\n we don't want to do a direct comparison. Instead, we want to exclude packages\n which are either exactly \"<package_name>\", or start with \"<package_name>\".\n \"\"\"\n\n if mn in self.topology.include_packages:\n _debug.debug(\"_include_module:explicit using __include_packages: module=%s\", mn)\n return True\n if '.' in mn:\n for include_package in self.topology.include_packages:\n if mn.startswith(include_package + '.'):\n _debug.debug(\"_include_module:explicit pattern using __include_packages: module=%s pattern=%s\", mn, \\\n include_package + '.')\n return True\n\n if mn in self.topology.exclude_packages:\n _debug.debug(\"_include_module:explicit using __exclude_packages: module=%s\", mn)\n return False\n if '.' 
in mn:\n for exclude_package in self.topology.exclude_packages:\n if mn.startswith(exclude_package + '.'):\n _debug.debug(\"_include_module:explicit pattern using __exclude_packages: module=%s pattern=%s\", mn, \\\n exclude_package + '.')\n return False\n\n _debug.debug(\"_include_module:including: module=%s\", mn)\n return True"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nadding a module to the list of dependencies.", "response": "def _add_dependency(self, module, mn):\n \"\"\"\n Adds a module to the list of dependencies\n without handling the modules dependences.\n\n Modules in site-packages are excluded from being added into\n the toolkit. This mimics dill.\n \"\"\"\n _debug.debug(\"_add_dependency:module=%s\", mn)\n\n if _is_streamsx_module(module):\n _debug.debug(\"_add_dependency:streamsx module=%s\", mn)\n return False\n\n if _is_builtin_module(module):\n _debug.debug(\"_add_dependency:builtin module=%s\", mn)\n return False\n\n if not self._include_module(module, mn):\n #print (\"ignoring dependencies for {0} {1}\".format(module.__name__, module))\n return False\n\n package_name = _get_package_name(module)\n top_package_name = module.__name__.split('.')[0]\n\n if package_name and top_package_name in sys.modules:\n # module is part of a package\n # get the top-level package\n top_package = sys.modules[top_package_name]\n\n if \"__path__\" in top_package.__dict__:\n # for regular packages, there is one top-level directory\n # for namespace packages, there can be more than one.\n # they will be merged in the bundle\n seen_non_site_package = False\n for top_package_path in reversed(list(top_package.__path__)):\n top_package_path = os.path.abspath(top_package_path)\n if 'site-packages' in top_package_path:\n continue\n seen_non_site_package = True\n self._add_package(top_package_path)\n if not seen_non_site_package:\n _debug.debug(\"_add_dependency:site-packages path module=%s\", mn)\n return False\n elif hasattr(top_package, 
'__file__'):\n # package that is an individual python file with empty __path__\n if 'site-packages' in top_package.__file__:\n _debug.debug(\"_add_dependency:site-packages module=%s\", mn)\n return False\n self._add_package(os.path.abspath(top_package.__file__))\n elif hasattr(module, '__file__'):\n # individual Python module\n module_path = os.path.abspath(module.__file__)\n if 'site-packages' in module_path:\n _debug.debug(\"_add_dependency:site-packages module=%s\", mn)\n return False\n self._modules.add(module_path)\n \n self._processed_modules.add(module)\n return True"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef freeze(self) -> dict:\n settings = {}\n for key, v in self._h.defaults.items():\n settings[key] = self._unserialize(v.value, v.type)\n if self._parent:\n settings.update(getattr(self._parent, self._h.attribute_name).freeze())\n for key in self._cache():\n settings[key] = self.get(key)\n return settings", "response": "Returns a dictionary of all settings set for this object including any values of its parents or hardcoded defaults."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets a value from the object's settings object.", "response": "def get(self, key: str, default=None, as_type: type = None, binary_file=False):\n \"\"\"\n Get a setting specified by key ``key``. Normally, settings are strings, but\n if you put non-strings into the settings object, you can request unserialization\n by specifying ``as_type``. If the key does not have a hardcoded default type,\n omitting ``as_type`` always will get you a string.\n\n If the setting with the specified name does not exist on this object, any parent object\n up to the global settings layer (if configured) will be queried. If still no value is \n found, a default value set in the source code will be returned if one exists. 
\n If not, the value of the ``default`` argument of this method will be returned instead.\n \n If you receive a ``File`` object, it will already be opened. You can specify the ``binary_file`` \n flag to indicate that it should be opened in binary mode.\n \"\"\"\n if as_type is None and key in self._h.defaults:\n as_type = self._h.defaults[key].type\n\n if key in self._cache():\n value = self._cache()[key]\n else:\n value = None\n if self._parent:\n value = getattr(self._parent, self._h.attribute_name).get(key, as_type=str)\n if value is None and key in self._h.defaults:\n value = self._h.defaults[key].value\n if value is None and default is not None:\n value = default\n\n return self._unserialize(value, as_type, binary_file=binary_file)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nstoring a setting to the database of the object.", "response": "def set(self, key: str, value: Any) -> None:\n \"\"\"\n Stores a setting to the database of its object. \n \n The write to the database is performed immediately and the cache in the cache backend is flushed.\n The cache within this object will be updated correctly.\n \"\"\"\n wc = self._write_cache()\n if key in wc:\n s = wc[key]\n else:\n s = self._type(object=self._obj, key=key)\n s.value = self._serialize(value)\n s.save()\n self._cache()[key] = s.value\n wc[key] = s\n self._flush_external_cache()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ndelete a setting from this object s storage.", "response": "def delete(self, key: str) -> None:\n \"\"\"\n Deletes a setting from this object's storage.\n \n The write to the database is performed immediately and the cache in the cache backend is flushed.\n The cache within this object will be updated correctly.\n \"\"\"\n if key in self._write_cache():\n self._write_cache()[key].delete()\n del self._write_cache()[key]\n\n if key in self._cache():\n del self._cache()[key]\n\n self._flush_external_cache()"} {"SOURCE": 
"codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _get_vcap_services(vcap_services=None):\n vcap_services = vcap_services or os.environ.get('VCAP_SERVICES')\n if not vcap_services:\n raise ValueError(\n \"VCAP_SERVICES information must be supplied as a parameter or as environment variable 'VCAP_SERVICES'\")\n # If it was passed to config as a dict, simply return it\n if isinstance(vcap_services, dict):\n return vcap_services\n try:\n # Otherwise, if it's a string, try to load it as json\n vcap_services = json.loads(vcap_services)\n except json.JSONDecodeError:\n # If that doesn't work, attempt to open it as a file path to the json config.\n try:\n with open(vcap_services) as vcap_json_data:\n vcap_services = json.load(vcap_json_data)\n except:\n raise ValueError(\"VCAP_SERVICES information is not JSON or a file containing JSON:\", vcap_services)\n return vcap_services", "response": "Retrieves the VCAP Services information from the VCAP_SERVICES environment variable or from the config object."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nretrieves the credentials of the specified service_name.", "response": "def _get_credentials(vcap_services, service_name=None):\n \"\"\"Retrieves the credentials of the VCAP Service of the specified `service_name`. 
If\n `service_name` is not specified, it takes the information from STREAMING_ANALYTICS_SERVICE_NAME environment\n variable.\n\n Args:\n vcap_services (dict): A dict representation of the VCAP Services information.\n service_name (str): One of the service name stored in `vcap_services`\n\n Returns:\n dict: A dict representation of the credentials.\n\n Raises:\n ValueError: Cannot find `service_name` in `vcap_services`\n \"\"\"\n service_name = service_name or os.environ.get('STREAMING_ANALYTICS_SERVICE_NAME', None)\n # Get the service corresponding to the SERVICE_NAME\n services = vcap_services['streaming-analytics']\n creds = None\n for service in services:\n if service['name'] == service_name:\n creds = service['credentials']\n break\n\n # If no corresponding service is found, error\n if creds is None:\n raise ValueError(\"Streaming Analytics service \" + str(service_name) + \" was not found in VCAP_SERVICES\")\n return creds"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nretrieve the Streams REST API URL from the provided credentials.", "response": "def _get_rest_api_url_from_creds(session, credentials):\n \"\"\"Retrieves the Streams REST API URL from the provided credentials.\n Args:\n session (:py:class:`requests.Session`): A Requests session object for making REST calls\n credentials (dict): A dict representation of the credentials.\n Returns:\n str: The remote Streams REST API URL.\n \"\"\"\n resources_url = credentials['rest_url'] + credentials['resources_path']\n try:\n response_raw = session.get(resources_url, auth=(credentials['userid'], credentials['password']))\n response = response_raw.json()\n except:\n logger.error(\"Error while retrieving rest REST url from: \" + resources_url)\n raise\n\n response_raw.raise_for_status()\n\n rest_api_url = response['streams_rest_url'] + '/resources'\n return rest_api_url"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef 
_get_iam_rest_api_url_from_creds(rest_client, credentials):\n res = rest_client.make_request(credentials[_IAMConstants.V2_REST_URL])\n base = res['streams_self']\n end = base.find('/instances')\n return base[:end] + '/resources'", "response": "Retrieves the remote Streams REST API URL from the provided credentials using IAM authentication."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef resource_url(self):\n self._resource_url = self._resource_url or st.get_rest_api()\n return self._resource_url", "response": "str - Root URL for IBM Streams REST API"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget a single element matching an id", "response": "def _get_element_by_id(self, resource_name, eclass, id):\n \"\"\"Get a single element matching an id\"\"\"\n elements = self._get_elements(resource_name, eclass, id=id)\n if not elements:\n raise ValueError(\"No resource matching: {0}\".format(id))\n if len(elements) == 1:\n return elements[0]\n raise ValueError(\"Multiple resources matching: {0}\".format(id))"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nretrieves the available domains.", "response": "def get_domains(self):\n \"\"\"Retrieves available domains.\n\n Returns:\n :py:obj:`list` of :py:class:`~.rest_primitives.Domain`: List of available domains\n \"\"\"\n # Domains are fixed and actually only one per REST api.\n if self._domains is None:\n self._domains = self._get_elements('domains', Domain)\n return self._domains"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nretrieving a list of all known Streams high - level REST resources.", "response": "def get_resources(self):\n \"\"\"Retrieves a list of all known Streams high-level REST resources.\n\n Returns:\n :py:obj:`list` of :py:class:`~.rest_primitives.RestResource`: List of all Streams high-level REST resources.\n \"\"\"\n 
json_resources = self.rest_client.make_request(self.resource_url)['resources']\n return [RestResource(resource, self.rest_client) for resource in json_resources]"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef of_definition(service_def):\n vcap_services = streamsx.topology.context._vcap_from_service_definition(service_def)\n service_name = streamsx.topology.context._name_from_service_definition(service_def)\n return StreamingAnalyticsConnection(vcap_services, service_name)", "response": "Create a Streaming Analytics connection to a single service."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef resource_url(self):\n if self._iam:\n self._resource_url = self._resource_url or _get_iam_rest_api_url_from_creds(self.rest_client, self.credentials)\n else:\n self._resource_url = self._resource_url or _get_rest_api_url_from_creds(self.session, self.credentials)\n return self._resource_url", "response": "str - Returns the root URL for the IBM Streams REST API."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef is_authenticated(self, request, **kwargs):\n log.info(\"OAuth20Authentication\")\n try:\n key = request.GET.get('oauth_consumer_key')\n if not key:\n for header in ['Authorization', 'HTTP_AUTHORIZATION']:\n auth_header_value = request.META.get(header)\n if auth_header_value:\n key = auth_header_value.split(' ', 1)[1]\n break\n if not key and request.method == 'POST':\n if request.META.get('CONTENT_TYPE') == 'application/json':\n decoded_body = request.body.decode('utf8')\n key = json.loads(decoded_body)['oauth_consumer_key']\n if not key:\n log.info('OAuth20Authentication. 
No consumer_key found.')\n return None\n \"\"\"\n If verify_access_token() does not pass, it will raise an error\n \"\"\"\n token = self.verify_access_token(key, request, **kwargs)\n\n # If OAuth authentication is successful, set the request user to\n # the token user for authorization\n request.user = token.user\n\n # If OAuth authentication is successful, set oauth_consumer_key on\n # request in case we need it later\n request.META['oauth_consumer_key'] = key\n return True\n except KeyError:\n log.exception(\"Error in OAuth20Authentication.\")\n request.user = AnonymousUser()\n return False\n except Exception:\n log.exception(\"Error in OAuth20Authentication.\")\n return False", "response": "Verify 2 - legged OAuth request."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef check_scope(self, token, request):\n http_method = request.method\n if not hasattr(self, http_method):\n raise OAuthError(\"HTTP method is not recognized\")\n required_scopes = getattr(self, http_method)\n # a None scope means always allowed\n if required_scopes is None:\n return True\n \"\"\"\n The required scope is either a string or an iterable. If string,\n check if it is allowed for our access token otherwise, iterate through\n the required_scopes to see which scopes are allowed\n \"\"\"\n # for non iterable types\n if isinstance(required_scopes, six.string_types):\n if token.allow_scopes(required_scopes.split()):\n return [required_scopes]\n return []\n allowed_scopes = []\n try:\n for scope in required_scopes:\n if token.allow_scopes(scope.split()):\n allowed_scopes.append(scope)\n except:\n raise Exception('Invalid required scope values')\n else:\n return allowed_scopes", "response": "Checks if the token is allowed for our access token."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nchecking if input matches type renderer. 
eightbit.", "response": "def _is_args_eightbit(*args) -> bool:\n \"\"\"\n Check if input matches type: renderer.eightbit.\n \"\"\"\n if not args:\n return False\n\n elif args[0] == 0:\n return True\n\n elif isinstance(args[0], int):\n return True\n else:\n return False"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nexport color register as dict.", "response": "def as_dict(self) -> Dict[str, str]:\n \"\"\"\n Export color register as dict.\n \"\"\"\n items: Dict[str, str] = {}\n\n for k, v in self.items():\n\n if type(v) is str:\n items.update({k: v})\n\n return items"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef as_namedtuple(self):\n d = self.as_dict()\n return namedtuple('ColorRegister', d.keys())(*d.values())", "response": "Export color register as namedtuple."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _extract_attrs(x, n):\n try:\n return extract_attrs(x, n)\n\n except (ValueError, IndexError):\n\n if PANDOCVERSION < '1.16':\n # Look for attributes attached to the image path, as occurs with\n # image references for pandoc < 1.16 (pandoc-fignos Issue #14).\n # See http://pandoc.org/MANUAL.html#images for the syntax.\n # Note: This code does not handle the \"optional title\" for\n # image references (search for link_attributes in pandoc's docs).\n assert x[n-1]['t'] == 'Image'\n image = x[n-1]\n s = image['c'][-1][0]\n if '%20%7B' in s:\n path = s[:s.index('%20%7B')]\n attrs = unquote(s[s.index('%7B'):])\n image['c'][-1][0] = path # Remove attr string from the path\n return PandocAttributes(attrs.strip(), 'markdown').to_pandoc()\n raise", "response": "Extracts attributes from an image list x."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nprocess the figure. 
Returns a dict containing the figure properties.", "response": "def _process_figure(value, fmt):\n \"\"\"Processes the figure. Returns a dict containing figure properties.\"\"\"\n\n # pylint: disable=global-statement\n global Nreferences # Global references counter\n global has_unnumbered_figures # Flags unnumbered figures were found\n global cursec # Current section\n\n # Parse the image\n attrs, caption = value[0]['c'][:2]\n\n # Initialize the return value\n fig = {'is_unnumbered': False,\n 'is_unreferenceable': False,\n 'is_tagged': False,\n 'attrs': attrs}\n\n # Bail out if the label does not conform\n if not LABEL_PATTERN.match(attrs[0]):\n has_unnumbered_figures = True\n fig['is_unnumbered'] = True\n fig['is_unreferenceable'] = True\n return fig\n\n # Process unreferenceable figures\n if attrs[0] == 'fig:': # Make up a unique description\n attrs[0] = attrs[0] + str(uuid.uuid4())\n fig['is_unreferenceable'] = True\n unreferenceable.append(attrs[0])\n\n # For html, hard-code in the section numbers as tags\n kvs = PandocAttributes(attrs, 'pandoc').kvs\n if numbersections and fmt in ['html', 'html5'] and 'tag' not in kvs:\n if kvs['secno'] != cursec:\n cursec = kvs['secno']\n Nreferences = 1\n kvs['tag'] = cursec + '.' 
+ str(Nreferences)\n Nreferences += 1\n\n # Save to the global references tracker\n fig['is_tagged'] = 'tag' in kvs\n if fig['is_tagged']:\n # Remove any surrounding quotes\n if kvs['tag'][0] == '\"' and kvs['tag'][-1] == '\"':\n kvs['tag'] = kvs['tag'].strip('\"')\n elif kvs['tag'][0] == \"'\" and kvs['tag'][-1] == \"'\":\n kvs['tag'] = kvs['tag'].strip(\"'\")\n references[attrs[0]] = kvs['tag']\n else:\n Nreferences += 1\n references[attrs[0]] = Nreferences\n\n # Adjust caption depending on the output format\n if fmt in ['latex', 'beamer']: # Append a \\label if this is referenceable\n if not fig['is_unreferenceable']:\n value[0]['c'][1] += [RawInline('tex', r'\\label{%s}'%attrs[0])]\n else: # Hard-code in the caption name and number/tag\n if isinstance(references[attrs[0]], int): # Numbered reference\n value[0]['c'][1] = [RawInline('html', r'<span>'),\n Str(captionname), Space(),\n Str('%d:'%references[attrs[0]]),\n RawInline('html', r'</span>')] \\\n if fmt in ['html', 'html5'] else \\\n [Str(captionname), Space(), Str('%d:'%references[attrs[0]])]\n value[0]['c'][1] += [Space()] + list(caption)\n else: # Tagged reference\n assert isinstance(references[attrs[0]], STRTYPES)\n text = references[attrs[0]]\n if text.startswith('$') and text.endswith('$'): # Math\n math = text.replace(' ', r'\\ ')[1:-1]\n els = [Math({\"t\":\"InlineMath\", \"c\":[]}, math), Str(':')]\n else: # Text\n els = [Str(text+':')]\n value[0]['c'][1] = \\\n [RawInline('html', r'<span>'), Str(captionname), Space()] + \\\n els + [RawInline('html', r'</span>')] \\\n if fmt in ['html', 'html5'] else \\\n [Str(captionname), Space()] + els\n value[0]['c'][1] += [Space()] + list(caption)\n\n return fig"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef main():\n\n # pylint: disable=global-statement\n global PANDOCVERSION\n global Image\n\n # Get the output format and document\n fmt = args.fmt\n doc = json.loads(STDIN.read())\n\n # Initialize 
pandocxnos\n # pylint: disable=too-many-function-args\n PANDOCVERSION = pandocxnos.init(args.pandocversion, doc)\n\n # Element primitives\n if PANDOCVERSION < '1.16':\n Image = elt('Image', 2)\n\n # Chop up the doc\n meta = doc['meta'] if PANDOCVERSION >= '1.18' else doc[0]['unMeta']\n blocks = doc['blocks'] if PANDOCVERSION >= '1.18' else doc[1:]\n\n # Process the metadata variables\n process(meta)\n\n # First pass\n attach_attrs_image = attach_attrs_factory(Image,\n extract_attrs=_extract_attrs)\n detach_attrs_image = detach_attrs_factory(Image)\n insert_secnos = insert_secnos_factory(Image)\n delete_secnos = delete_secnos_factory(Image)\n filters = [insert_secnos, process_figures, delete_secnos] \\\n if PANDOCVERSION >= '1.16' else \\\n [attach_attrs_image, insert_secnos, process_figures,\n delete_secnos, detach_attrs_image]\n altered = functools.reduce(lambda x, action: walk(x, action, fmt, meta),\n filters, blocks)\n\n # Second pass\n process_refs = process_refs_factory(references.keys())\n replace_refs = replace_refs_factory(references,\n use_cleveref_default, False,\n plusname if not capitalize else\n [name.title() for name in plusname],\n starname, 'figure')\n altered = functools.reduce(lambda x, action: walk(x, action, fmt, meta),\n [repair_refs, process_refs, replace_refs],\n altered)\n\n # Insert supporting TeX\n if fmt == 'latex':\n\n rawblocks = []\n\n if has_unnumbered_figures:\n rawblocks += [RawBlock('tex', TEX0),\n RawBlock('tex', TEX1),\n RawBlock('tex', TEX2)]\n\n if captionname != 'Figure':\n rawblocks += [RawBlock('tex', TEX3 % captionname)]\n\n insert_rawblocks = insert_rawblocks_factory(rawblocks)\n\n altered = functools.reduce(lambda x, action: walk(x, action, fmt, meta),\n [insert_rawblocks], altered)\n\n # Update the doc\n if PANDOCVERSION >= '1.18':\n doc['blocks'] = altered\n else:\n doc = doc[:1] + altered\n\n # Dump the results\n json.dump(doc, STDOUT)\n\n # Flush stdout\n STDOUT.flush()", "response": "Filters the document AST and 
returns the resulting tree."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef i18n_javascript(self, request):\n if settings.USE_I18N:\n from django.views.i18n import javascript_catalog\n else:\n from django.views.i18n import null_javascript_catalog as javascript_catalog\n return javascript_catalog(request, packages=['media_tree'])", "response": "Displays the i18n JavaScript that the Django admin requires."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef file_links(items, opts=None):\n\tresult = []\n\tkwargs = get_kwargs_for_file_link(opts)\n\tfor item in items:\n\t\tif isinstance(item, FileNode):\n\t\t\tresult.append(get_file_link(item, **kwargs))\n\t\telse:\n\t\t\tresult.append(file_links(item, kwargs))\n\treturn result", "response": "Turns a list of FileNode objects into a list of \n\tstrings linking to the associated files."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef forwards(self, orm):\n \"Write your forwards methods here.\"\n for node in orm.FileNode.objects.all():\n node.save()", "response": "Write your forwards methods here."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_media_backend(fail_silently=True, handles_media_types=None, \n handles_file_extensions=None, supports_thumbnails=None):\n \"\"\"\n Returns the MediaBackend subclass that is configured for use with \n media_tree.\n \"\"\"\n backends = app_settings.MEDIA_TREE_MEDIA_BACKENDS\n if not len(backends):\n if not fail_silently:\n raise ImproperlyConfigured('There is no media backend configured.' 
\\\n + ' Please define `MEDIA_TREE_MEDIA_BACKENDS` in your settings.')\n else:\n return False\n \n for path in backends:\n # Traverse backends until there is one supporting what's requested:\n backend = get_module_attr(path)\n if (not handles_media_types or backend.handles_media_types(handles_media_types)) \\\n and (not handles_file_extensions or backend.handles_file_extensions(handles_file_extensions)) \\\n and (not supports_thumbnails or backend.supports_thumbnails()):\n return backend\n \n if not fail_silently:\n raise ImproperlyConfigured('There is no media backend configured to handle' \\\n ' the specified file types.')\n return False", "response": "Returns the MediaBackend subclass that is configured for use with \n media_tree."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef thumbnail_size(parser, token):\n \n args = token.split_contents()\n tag = args.pop(0)\n if len(args) >= 2 and args[-2] == 'as':\n context_name = args[-1]\n args = args[:-2]\n else:\n context_name = None\n\n if len(args) > 1:\n raise template.TemplateSyntaxError(\"Invalid syntax. 
Expected \"\n \"'{%% %s [\\\"size_name\\\"] [as context_var] %%}'\" % tag)\n elif len(args) == 1:\n size_name = args[0]\n else:\n size_name = None\n\n return ThumbnailSizeNode(size_name=size_name, context_name=context_name)", "response": "Returns a pre - configured thumbnail size or assigns it to a context variable."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef request(method, path, params=None, data=None, auto_retry=True):\n body = None\n if data is not None:\n body = json.dumps(data, cls=utils.DateTimeEncoder)\n\n headers = {\n 'accept': 'application/json',\n 'content-type': 'application/json',\n 'user-agent': analyzere.user_agent,\n }\n resp = request_raw(method, path, params=params, body=body, headers=headers,\n auto_retry=auto_retry)\n content = resp.text\n if content:\n try:\n content = json.loads(content, cls=utils.DateTimeDecoder)\n except ValueError:\n raise errors.ServerError('Unable to parse JSON response returned '\n 'from server.', resp, resp.status_code)\n return content", "response": "This function is used to make a request to the server."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef file_length(file_obj):\n file_obj.seek(0, 2)\n length = file_obj.tell()\n file_obj.seek(0)\n return length", "response": "Returns the length in bytes of a given file object."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nparses an Analyze Re href into collection name and ID", "response": "def parse_href(href):\n \"\"\"Parses an Analyze Re href into collection name and ID\"\"\"\n url = urlparse(href)\n path = url.path.split('/')\n collection_name = path[1]\n id_ = path[2]\n return collection_name, id_"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ntake a value or list of values and returns a single result joined by \",\".", "response": "def vectorize(values):\n 
\"\"\"\n Takes a value or list of values and returns a single result, joined by \",\"\n if necessary.\n \"\"\"\n if isinstance(values, list):\n return ','.join(str(v) for v in values)\n return values"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nuploads data to the tus server.", "response": "def upload_data(self, file_or_str, chunk_size=analyzere.upload_chunk_size,\n poll_interval=analyzere.upload_poll_interval,\n upload_callback=lambda x: None,\n commit_callback=lambda x: None):\n \"\"\"\n Accepts a file-like object or string and uploads it. Files are\n automatically uploaded in chunks. The default chunk size is 16MiB and\n can be overwritten by specifying the number of bytes in the\n ``chunk_size`` variable.\n Accepts an optional poll_interval for temporarily overriding the\n default value `analyzere.upload_poll_interval`.\n Implements the tus protocol.\n Takes optional callbacks that return the percentage complete for the\n given \"phase\" of upload: upload/commit.\n Callback values are returned as 10.0 for 10%\n \"\"\"\n if not callable(upload_callback):\n raise Exception('provided upload_callback is not callable')\n if not callable(commit_callback):\n raise Exception('provided commit_callback is not callable')\n\n file_obj = StringIO(file_or_str) if isinstance(\n file_or_str, six.string_types) else file_or_str\n\n # Upload file with known entity size if file object supports random\n # access.\n length = None\n if hasattr(file_obj, 'seek'):\n length = utils.file_length(file_obj)\n\n # Initiate upload session\n request_raw('post', self._data_path,\n headers={'Entity-Length': str(length)})\n else:\n request_raw('post', self._data_path)\n\n # Upload chunks\n for chunk, offset in utils.read_in_chunks(file_obj, chunk_size):\n headers = {'Offset': str(offset),\n 'Content-Type': 'application/offset+octet-stream'}\n request_raw('patch', self._data_path, headers=headers, body=chunk)\n # if there is a known size, and an 
upload callback, call it\n if length:\n upload_callback(offset * 100.0 / length)\n\n upload_callback(100.0)\n # Commit the session\n request_raw('post', self._commit_path)\n\n # Block until data has finished processing\n while True:\n resp = self.upload_status\n if (resp.status == 'Processing Successful' or resp.status == 'Processing Failed'):\n commit_callback(100.0)\n return resp\n else:\n commit_callback(float(resp.commit_progress))\n time.sleep(poll_interval)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ndeletes orphaned files from storage.", "response": "def delete_orphaned_files(modeladmin, request, queryset=None):\n \"\"\"\n Deletes orphaned files, i.e. media files existing in storage that are not in the database.\n \"\"\"\n \n execute = request.POST.get('execute')\n \n storage = get_media_storage()\n broken_node_links = []\n orphaned_files_choices = []\n\n broken_nodes, orphaned_files = get_broken_media()\n\n for node in broken_nodes:\n link = mark_safe('<a href=\"%s\">%s</a>' % (node.get_admin_url(), node.__unicode__()))\n broken_node_links.append(link)\n\n for storage_name in orphaned_files:\n file_path = storage.path(storage_name)\n link = mark_safe('<a href=\"%s\">%s</a>' % (\n storage.url(storage_name), file_path))\n orphaned_files_choices.append((storage_name, link))\n\n if not len(orphaned_files_choices) and not len(broken_node_links):\n messages.success(request, message=_('There are no orphaned files.'))\n return HttpResponseRedirect('')\n\n if execute:\n form = DeleteOrphanedFilesForm(queryset, orphaned_files_choices, request.POST)\n if form.is_valid():\n form.save()\n node = FileNode.get_top_node()\n messages.success(request, message=ungettext('Deleted %i file from storage.', 'Deleted %i files from storage.', len(form.success_files)) % len(form.success_files))\n if form.error_files:\n messages.error(request, message=_('The following files could not be deleted from storage:')+' 
'+repr(form.error_files))\n return HttpResponseRedirect(node.get_admin_url())\n\n if not execute:\n if len(orphaned_files_choices) > 0:\n form = DeleteOrphanedFilesForm(queryset, orphaned_files_choices)\n else:\n form = None\n\n c = get_actions_context(modeladmin)\n c.update({\n 'title': _('Orphaned files'),\n 'submit_label': _('Delete selected files'),\n 'form': form,\n 'select_all': 'selected_files',\n 'node_list_title': _('The following files in the database do not exist in storage. You should fix these media objects:'),\n 'node_list': broken_node_links,\n })\n return render_to_response('admin/media_tree/filenode/actions_form.html', c, context_instance=RequestContext(request))"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nrebuilds whole tree in database using parent link.", "response": "def rebuild_tree(modeladmin, request, queryset=None):\n \"\"\"\n Rebuilds whole tree in database using `parent` link.\n \"\"\"\n tree = FileNode.tree.rebuild()\n messages.success(request, message=_('The node tree was rebuilt.'))\n return HttpResponseRedirect('')"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef clear_cache(modeladmin, request, queryset=None):\n \n execute = request.POST.get('execute')\n \n files_in_storage = []\n storage = get_media_storage()\n cache_files_choices = []\n for storage_name in get_cache_files():\n link = mark_safe('<a href=\"%s\">%s</a>' % (\n storage.url(storage_name), storage_name))\n cache_files_choices.append((storage_name, link))\n\n if not len(cache_files_choices):\n messages.warning(request, message=_('There are no cache files.'))\n return HttpResponseRedirect('')\n\n if execute:\n form = DeleteCacheFilesForm(queryset, cache_files_choices, request.POST)\n if form.is_valid():\n form.save()\n node = FileNode.get_top_node()\n message = ungettext('Deleted %i cache file.', 'Deleted %i cache files.', len(form.success_files)) % 
len(form.success_files)\n if len(form.success_files) == len(cache_files_choices):\n message = '%s %s' % (_('The cache was cleared.'), message)\n messages.success(request, message=message)\n if form.error_files:\n messages.error(request, message=_('The following files could not be deleted:')+' '+repr(form.error_files))\n return HttpResponseRedirect(node.get_admin_url())\n\n if not execute:\n if len(cache_files_choices) > 0:\n form = DeleteCacheFilesForm(queryset, cache_files_choices)\n else:\n form = None\n\n c = get_actions_context(modeladmin)\n c.update({\n 'title': _('Clear cache'),\n 'submit_label': _('Delete selected files'),\n 'form': form,\n 'select_all': 'selected_files',\n })\n return render_to_response('admin/media_tree/filenode/actions_form.html', c, context_instance=RequestContext(request))\n\n return HttpResponseRedirect('')", "response": "Clear media cache files."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef south_field_triple(self):\n \"Returns a suitable description of this field for South.\"\n from south.modelsinspector import introspector\n field_class = \"django.db.models.fields.CharField\"\n args, kwargs = introspector(self)\n return (field_class, args, kwargs)", "response": "Returns a suitable description of this field for South."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef multi_splitext(basename):\n groups = list(RE_SPLITEXT.match(basename).groups())\n if not groups[2]:\n groups[2] = groups[1]\n return groups", "response": "Returns a list of three elements that are the same size and the third being the full extension."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef join_formatted(text, new_text, glue_format_if_true = u'%s%s', glue_format_if_false = u'%s%s', condition=None, format = u'%s', escape=False):\n if condition is None:\n 
condition = text and new_text\n add_text = new_text\n if escape:\n add_text = conditional_escape(add_text)\n if add_text:\n add_text = format % add_text\n glue_format = glue_format_if_true if condition else glue_format_if_false\n return glue_format % (text, add_text)", "response": "Join two strings optionally escaping the second and using one of the two string formats for glueing them together."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef widthratio(value, max_value, max_width):\n ratio = float(value) / float(max_value)\n return int(round(ratio * max_width))", "response": "Returns the width ratio of a single value."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ninitializes the internal state of the object.", "response": "def _initialize(self, path):\n \"\"\" Find metadata tree root, detect format version \"\"\"\n # Find the tree root\n root = os.path.abspath(path)\n try:\n while \".fmf\" not in next(os.walk(root))[1]:\n if root == \"/\":\n raise utils.RootError(\n \"Unable to find tree root for '{0}'.\".format(\n os.path.abspath(path)))\n root = os.path.abspath(os.path.join(root, os.pardir))\n except StopIteration:\n raise utils.FileError(\"Invalid directory path: {0}\".format(root))\n log.info(\"Root directory found: {0}\".format(root))\n self.root = root\n # Detect format version\n try:\n with open(os.path.join(self.root, \".fmf\", \"version\")) as version:\n self.version = int(version.read())\n log.info(\"Format version detected: {0}\".format(self.version))\n except IOError as error:\n raise utils.FormatError(\n \"Unable to detect format version: {0}\".format(error))\n except ValueError:\n raise utils.FormatError(\"Invalid version format\")"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef merge(self, parent=None):\n # Check parent, append source files\n if parent is None:\n parent 
= self.parent\n if parent is None:\n return\n self.sources = parent.sources + self.sources\n # Merge child data with parent data\n data = copy.deepcopy(parent.data)\n for key, value in sorted(self.data.items()):\n # Handle attribute adding\n if key.endswith('+'):\n key = key.rstrip('+')\n if key in data:\n # Use dict.update() for merging dictionaries\n if type(data[key]) == type(value) == dict:\n data[key].update(value)\n continue\n try:\n value = data[key] + value\n except TypeError as error:\n raise utils.MergeError(\n \"MergeError: Key '{0}' in {1} ({2}).\".format(\n key, self.name, str(error)))\n # And finally update the value\n data[key] = value\n self.data = data", "response": "Merge the current object with the parent object."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\napply inheritance to this object and all its children.", "response": "def inherit(self):\n \"\"\" Apply inheritance \"\"\"\n # Preserve original data and merge parent\n # (original data needed for custom inheritance extensions)\n self.original_data = self.data\n self.merge()\n log.debug(\"Data for '{0}' inherited.\".format(self))\n log.data(pretty(self.data))\n # Apply inheritance to all children\n for child in self.children.values():\n child.inherit()"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef update(self, data):\n # Nothing to do if no data\n if data is None:\n return\n for key, value in sorted(data.items()):\n # Handle child attributes\n if key.startswith('/'):\n name = key.lstrip('/')\n # Handle deeper nesting (e.g. 
keys like /one/two/three) by\n # extracting only the first level of the hierarchy as name\n match = re.search(\"([^/]+)(/.*)\", name)\n if match:\n name = match.groups()[0]\n value = {match.groups()[1]: value}\n # Update existing child or create a new one\n self.child(name, value)\n # Update regular attributes\n else:\n self.data[key] = value\n log.debug(\"Data for '{0}' updated.\".format(self))\n log.data(pretty(self.data))", "response": "Update metadata for this object by parsing the data dictionary."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nget attribute value or return default if no attribute specified.", "response": "def get(self, name=None, default=None):\n \"\"\"\n Get attribute value or return default\n\n Whole data dictionary is returned when no attribute provided.\n Supports direct values retrieval from deep dictionaries as well.\n Dictionary path should be provided as list. The following two\n examples are equal:\n\n tree.data['hardware']['memory']['size']\n tree.get(['hardware', 'memory', 'size'])\n\n However the latter approach will also correctly handle providing\n default value when any of the dictionary keys does not exist.\n\n \"\"\"\n # Return the whole dictionary if no attribute specified\n if name is None:\n return self.data\n if not isinstance(name, list):\n name = [name]\n data = self.data\n try:\n for key in name:\n data = data[key]\n except KeyError:\n return default\n return data"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef child(self, name, data, source=None):\n try:\n if isinstance(data, dict):\n self.children[name].update(data)\n else:\n self.children[name].grow(data)\n except KeyError:\n self.children[name] = Tree(data, name, parent=self)\n # Save source file\n if source is not None:\n self.children[name].sources.append(source)", "response": "Create or update a child with given data."} {"SOURCE": 
"codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef grow(self, path):\n if path is None:\n return\n path = path.rstrip(\"/\")\n log.info(\"Walking through directory {0}\".format(\n os.path.abspath(path)))\n dirpath, dirnames, filenames = next(os.walk(path))\n # Investigate main.fmf as the first file (for correct inheritance)\n filenames = sorted(\n [filename for filename in filenames if filename.endswith(SUFFIX)])\n try:\n filenames.insert(0, filenames.pop(filenames.index(MAIN)))\n except ValueError:\n pass\n # Check every metadata file and load data (ignore hidden)\n for filename in filenames:\n if filename.startswith(\".\"):\n continue\n fullpath = os.path.abspath(os.path.join(dirpath, filename))\n log.info(\"Checking file {0}\".format(fullpath))\n try:\n with open(fullpath) as datafile:\n data = yaml.load(datafile, Loader=FullLoader)\n except yaml.scanner.ScannerError as error:\n raise(utils.FileError(\"Failed to parse '{0}'\\n{1}\".format(\n fullpath, error)))\n log.data(pretty(data))\n # Handle main.fmf as data for self\n if filename == MAIN:\n self.sources.append(fullpath)\n self.update(data)\n # Handle other *.fmf files as children\n else:\n self.child(os.path.splitext(filename)[0], data, fullpath)\n # Explore every child directory (ignore hidden dirs and subtrees)\n for dirname in sorted(dirnames):\n if dirname.startswith(\".\"):\n continue\n # Ignore metadata subtrees\n if os.path.isdir(os.path.join(path, dirname, SUFFIX)):\n log.debug(\"Ignoring metadata tree '{0}'.\".format(dirname))\n continue\n self.child(dirname, os.path.join(path, dirname))\n # Remove empty children (ignore directories without metadata)\n for name in list(self.children.keys()):\n child = self.children[name]\n if not child.data and not child.children:\n del(self.children[name])\n log.debug(\"Empty tree '{0}' removed.\".format(child.name))\n # Apply inheritance when all scattered data are gathered.\n # This is done only once, from the top 
parent object.\n if self.parent is None:\n self.inherit()", "response": "Grow the tree for the given path."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef climb(self, whole=False):\n if whole or not self.children:\n yield self\n for name, child in self.children.items():\n for node in child.climb(whole):\n yield node", "response": "Iterate through the tree and yield all the nodes that are leaf."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef find(self, name):\n for node in self.climb(whole=True):\n if node.name == name:\n return node\n return None", "response": "Find a node with given name"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\npruning the tree nodes based on given criteria", "response": "def prune(self, whole=False, keys=[], names=[], filters=[]):\n \"\"\" Filter tree nodes based on given criteria \"\"\"\n for node in self.climb(whole):\n # Select only nodes with key content\n if not all([key in node.data for key in keys]):\n continue\n # Select nodes with name matching regular expression\n if names and not any(\n [re.search(name, node.name) for name in names]):\n continue\n # Apply advanced filters if given\n try:\n if not all([utils.filter(filter, node.data, regexp=True)\n for filter in filters]):\n continue\n # Handle missing attribute as if filter failed\n except utils.FilterError:\n continue\n # All criteria met, thus yield the node\n yield node"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nshow the current state of the object.", "response": "def show(self, brief=False, formatting=None, values=[]):\n \"\"\" Show metadata \"\"\"\n # Show nothing if there's nothing\n if not self.data:\n return None\n\n # Custom formatting\n if formatting is not None:\n formatting = re.sub(\"\\\\\\\\n\", \"\\n\", formatting)\n name = self.name\n data = self.data\n root = self.root\n 
sources = self.sources\n evaluated = []\n for value in values:\n evaluated.append(eval(value))\n return formatting.format(*evaluated)\n\n # Show the name\n output = utils.color(self.name, 'red')\n if brief:\n return output + \"\\n\"\n # List available attributes\n for key, value in sorted(self.data.items()):\n output += \"\\n{0}: \".format(utils.color(key, 'green'))\n if isinstance(value, type(\"\")):\n output += value\n elif isinstance(value, list) and all(\n [isinstance(item, type(\"\")) for item in value]):\n output += utils.listed(value)\n else:\n output += pretty(value)\n return output + \"\\n\""} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns a label for the given object.", "response": "def label_from_instance(self, obj):\n \"\"\"\n Creates labels which represent the tree level of each node when\n generating option labels.\n \"\"\"\n level = getattr(obj, obj._mptt_meta.level_attr)\n level_indicator = mark_safe(conditional_escape(self.level_indicator) * level)\n return mark_safe(u'%s %s' % (level_indicator, conditional_escape(smart_unicode(obj))))"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef save(self):\n try:\n self.node.move_to(self.cleaned_data['target'],\n self.cleaned_data['position'])\n return self.node\n except InvalidMove as e:\n self.errors[NON_FIELD_ERRORS] = ErrorList(e)\n raise", "response": "Saves the current state of the node to the specified target and position."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\niterating a set of dirs under the static root, this method tries to find a file named like one of the names and file ext passed, and returns the storage path to the first file it encounters. 
Usage this method makes it possible to override static files (such as icon sets) in a similar way like templates in different locations can override others that have the same file name.", "response": "def find(names, dirs, file_ext):\n \"\"\"\n Iterating a set of dirs under the static root, this method tries to find\n a file named like one of the names and file ext passed, and returns the\n storage path to the first file it encounters.\n \n Usage this method makes it possible to override static files (such as \n icon sets) in a similar way like templates in different locations can\n override others that have the same file name.\n \"\"\"\n if not isinstance(names, (list, tuple)):\n names = (names,)\n for dir_name in dirs:\n for name in names:\n path = os.path.join(dir_name, name + file_ext)\n if not path in EXISTING_PATHS:\n # check on file system, then cache\n EXISTING_PATHS[path] = STATIC_STORAGE.exists(path)\n if EXISTING_PATHS[path]:\n return path"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef find(file_node, dirs=ICON_DIRS, default_name=None, file_ext='.png'):\n names = []\n for attr_name in ('extension', 'mimetype', 'mime_supertype'):\n attr = getattr(file_node, attr_name)\n if attr:\n names.append(attr)\n if default_name:\n names.append(default_name)\n icon_path = StaticPathFinder.find(names, dirs, file_ext)\n if icon_path:\n return StaticIconFile(file_node, icon_path)", "response": "Find a file in the given directory and return a StaticIconFile object."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning a flat tree of results", "response": "def result_tree_flat(context, cl, request):\n \"\"\"\n Added 'filtered' param, so the template's js knows whether the results have\n been affected by a GET param or not. 
Only when the results are not filtered\n you can drag and sort the tree\n \"\"\"\n\n return {\n #'filtered': is_filtered_cl(cl, request),\n 'results': (th_for_result(cl, res) for res in list(cl.result_list)),\n }"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_object(self, queryset=None):\n # Use a custom queryset if provided; this is required for subclasses\n # like DateDetailView\n if queryset is None:\n queryset = self.get_queryset()\n\n path = self.kwargs.get('path', None)\n\n # Next, try looking up by path.\n if path is not None:\n queryset = queryset.filter(**FileNode.objects.get_filter_args_with_path(\n for_self=True, path=path))\n try:\n obj = queryset.get()\n except FileNode.DoesNotExist:\n raise Http404(_(u\"No %(verbose_name)s found matching the query\") %\n {'verbose_name': queryset.model._meta.verbose_name})\n return obj\n\n return super(FileNodeDetailView, self).get_object(queryset)", "response": "Returns the object that the view is displaying."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_detail_view(self, request, object, opts=None):\n view = self.get_view(request, self.view_class, opts)\n view.object = object\n return view", "response": "Instantiates and returns the view class that will generate the actual\n context for this plugin."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_listing_view(self, request, queryset, opts=None):\n view = self.get_view(request, self.view_class, opts)\n view.queryset = queryset\n return view", "response": "Instantiates and returns the view class that will generate the\n actual context for this plugin."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nadd the formatting options to the parser", "response": "def options_formatting(self):\n \"\"\" Formating options \"\"\"\n group = 
self.parser.add_argument_group(\"Format\")\n group.add_argument(\n \"--format\", dest=\"formatting\", default=None,\n help=\"Custom output format using the {} expansion\")\n group.add_argument(\n \"--value\", dest=\"values\", action=\"append\", default=[],\n help=\"Values for the custom formatting string\")"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef options_utils(self):\n group = self.parser.add_argument_group(\"Utils\")\n group.add_argument(\n \"--path\", action=\"append\", dest=\"paths\",\n help=\"Path to the metadata tree (default: current directory)\")\n group.add_argument(\n \"--verbose\", action=\"store_true\",\n help=\"Print information about parsed files to stderr\")\n group.add_argument(\n \"--debug\", action=\"store_true\",\n help=\"Turn on debugging output, do not catch exceptions\")", "response": "Options for the utils command"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nlist names of available objects", "response": "def command_ls(self):\n \"\"\" List names \"\"\"\n self.parser = argparse.ArgumentParser(\n description=\"List names of available objects\")\n self.options_select()\n self.options_utils()\n self.options = self.parser.parse_args(self.arguments[2:])\n self.show(brief=True)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef command_show(self):\n self.parser = argparse.ArgumentParser(\n description=\"Show metadata of available objects\")\n self.options_select()\n self.options_formatting()\n self.options_utils()\n self.options = self.parser.parse_args(self.arguments[2:])\n self.show(brief=False)", "response": "Show metadata of available objects"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef command_init(self):\n self.parser = argparse.ArgumentParser(\n description=\"Initialize a new metadata tree\")\n 
self.options_utils()\n self.options = self.parser.parse_args(self.arguments[2:])\n # For each path create an .fmf directory and version file\n for path in self.options.paths or [\".\"]:\n root = os.path.abspath(os.path.join(path, \".fmf\"))\n if os.path.exists(root):\n raise utils.FileError(\"{0} '{1}' already exists.\".format(\n \"Directory\" if os.path.isdir(root) else \"File\", root))\n os.makedirs(root)\n with open(os.path.join(root, \"version\"), \"w\") as version:\n version.write(\"{0}\\n\".format(utils.VERSION))\n print(\"Metadata tree '{0}' successfully initialized.\".format(root))", "response": "Initialize a new metadata tree"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nshow metadata for each path given", "response": "def show(self, brief=False):\n \"\"\" Show metadata for each path given \"\"\"\n output = []\n for path in self.options.paths or [\".\"]:\n if self.options.verbose:\n utils.info(\"Checking {0} for metadata.\".format(path))\n tree = fmf.Tree(path)\n for node in tree.prune(\n self.options.whole, self.options.keys, self.options.names,\n self.options.filters):\n if brief:\n show = node.show(brief=True)\n else:\n show = node.show(\n brief=False,\n formatting=self.options.formatting,\n values=self.options.values)\n # List source files when in debug mode\n if self.options.debug:\n for source in node.sources:\n show += utils.color(\"{0}\\n\".format(source), \"blue\")\n if show is not None:\n output.append(show)\n\n # Print output and summary\n if brief or self.options.formatting:\n joined = \"\".join(output)\n else:\n joined = \"\\n\".join(output)\n try: # pragma: no cover\n print(joined, end=\"\")\n except UnicodeEncodeError: # pragma: no cover\n print(joined.encode('utf-8'), end=\"\")\n if self.options.verbose:\n utils.info(\"Found {0}.\".format(\n utils.listed(len(output), \"object\")))\n self.output = joined"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nfilter the 
data dictionary against a filter expression in disjunctive normal form.", "response": "def filter(filter, data, sensitive=True, regexp=False):\n    \"\"\"\n    Return true if provided filter matches given dictionary of values\n\n    Filter supports disjunctive normal form with '|' used for OR, '&'\n    for AND and '-' for negation. Individual values are prefixed with\n    'value:', leading/trailing white-space is stripped. For example::\n\n        tag: Tier1 | tag: Tier2 | tag: Tier3\n        category: Sanity, Security & tag: -destructive\n\n    Note that multiple comma-separated values can be used as a syntactic\n    sugar to shorten the filter notation::\n\n        tag: A, B, C ---> tag: A | tag: B | tag: C\n\n    Values should be provided as a dictionary of lists each describing\n    the values against which the filter is to be matched. For example::\n\n        data = {tag: [\"Tier1\", \"TIPpass\"], category: [\"Sanity\"]}\n\n    Other types of dictionary values are converted into a string.\n    A FilterError exception is raised when a dimension parsed from the\n    filter is not found in the data dictionary. Set option 'sensitive'\n    to False to enable case-insensitive matching. If 'regexp' option is\n    True, regular expressions can be used in the filter values as well.\n    \"\"\"\n\n    def match_value(pattern, text):\n        \"\"\" Match value against data (simple or regexp) \"\"\"\n        if regexp:\n            return re.match(\"^{0}$\".format(pattern), text)\n        else:\n            return pattern == text\n\n    def check_value(dimension, value):\n        \"\"\" Check whether the value matches data \"\"\"\n        # E.g. 
value = 'A, B' or value = \"C\" or value = \"-D\"\n # If there are multiple values, at least one must match\n for atom in re.split(\"\\s*,\\s*\", value):\n # Handle negative values (check the whole data for non-presence)\n if atom.startswith(\"-\"):\n atom = atom[1:]\n # Check each value for given dimension\n for dato in data[dimension]:\n if match_value(atom, dato):\n break\n # Pattern not found ---> good\n else:\n return True\n # Handle positive values (return True upon first successful match)\n else:\n # Check each value for given dimension\n for dato in data[dimension]:\n if match_value(atom, dato):\n # Pattern found ---> good\n return True\n # No value matched the data\n return False\n\n def check_dimension(dimension, values):\n \"\"\" Check whether all values for given dimension match data \"\"\"\n # E.g. dimension = 'tag', values = ['A, B', 'C', '-D']\n # Raise exception upon unknown dimension\n if dimension not in data:\n raise FilterError(\"Invalid filter '{0}'\".format(dimension))\n # Every value must match at least one value for data\n return all([check_value(dimension, value) for value in values])\n\n def check_clause(clause):\n \"\"\" Split into literals and check whether all match \"\"\"\n # E.g. clause = 'tag: A, B & tag: C & tag: -D'\n # Split into individual literals by dimension\n literals = dict()\n for literal in re.split(\"\\s*&\\s*\", clause):\n # E.g. 
literal = 'tag: A, B'\n            # Make sure the literal matches dimension:value format\n            matched = re.match(\"^(.*)\s*:\s*(.*)$\", literal)\n            if not matched:\n                raise FilterError(\"Invalid filter '{0}'\".format(literal))\n            dimension, value = matched.groups()\n            values = [value]\n            # Append the literal value(s) to corresponding dimension list\n            literals.setdefault(dimension, []).extend(values)\n        # For each dimension all literals must match given data\n        return all([check_dimension(dimension, values)\n                    for dimension, values in literals.items()])\n\n    # Default to True if no filter given, bail out if weird data given\n    if filter is None or filter == \"\": return True\n    if not isinstance(data, dict):\n        raise FilterError(\"Invalid data type '{0}'\".format(type(data)))\n\n    # Make sure that data dictionary contains lists of strings\n    data = copy.deepcopy(data)\n    try: # pragma: no cover\n        for key in data:\n            if isinstance(data[key], list):\n                data[key] = [unicode(item) for item in data[key]]\n            else:\n                data[key] = [unicode(data[key])]\n    except NameError: # pragma: no cover\n        for key in data:\n            if isinstance(data[key], list):\n                data[key] = [str(item) for item in data[key]]\n            else:\n                data[key] = [str(data[key])]\n    # Turn all data into lowercase if sensitivity is off\n    if not sensitive:\n        filter = filter.lower()\n        lowered = dict()\n        for key, values in data.items():\n            lowered[key.lower()] = [value.lower() for value in values]\n        data = lowered\n\n    # At least one clause must be true\n    return any([check_clause(clause)\n                for clause in re.split(\"\s*\|\s*\", filter)])"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef color(text, color=None, background=None, light=False, enabled=\"auto\"):\n    colors = {\"black\": 30, \"red\": 31, \"green\": 32, \"yellow\": 33,\n              \"blue\": 34, \"magenta\": 35, \"cyan\": 36, \"white\": 37}\n    # Nothing to do if coloring disabled\n    if enabled == \"auto\":\n        enabled = Coloring().enabled()\n    if not enabled:\n        return 
text\n # Prepare colors (strip 'light' if present in color)\n if color and color.startswith(\"light\"):\n light = True\n color = color[5:]\n color = color and \";{0}\".format(colors[color]) or \"\"\n background = background and \";{0}\".format(colors[background] + 10) or \"\"\n light = light and 1 or 0\n # Starting and finishing sequence\n start = \"\\033[{0}{1}{2}m\".format(light, color, background)\n finish = \"\\033[1;m\"\n return \"\".join([start, text, finish])", "response": "Color a text in the specified color if coloring is enabled."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncreating a logger with the given name and level.", "response": "def _create_logger(name='fmf', level=None):\n \"\"\" Create fmf logger \"\"\"\n # Create logger, handler and formatter\n logger = logging.getLogger(name)\n handler = logging.StreamHandler()\n handler.setFormatter(Logging.ColoredFormatter())\n logger.addHandler(handler)\n # Save log levels in the logger itself (backward compatibility)\n for level in Logging.LEVELS:\n setattr(logger, level, getattr(logging, level))\n # Additional logging constants and methods for cache and xmlrpc\n logger.DATA = LOG_DATA\n logger.CACHE = LOG_CACHE\n logger.ALL = LOG_ALL\n logger.cache = lambda message: logger.log(LOG_CACHE, message) # NOQA\n logger.data = lambda message: logger.log(LOG_DATA, message) # NOQA\n logger.all = lambda message: logger.log(LOG_ALL, message) # NOQA\n return logger"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nworking just like the default Manager's :func:`filter` method, but you can pass an additional keyword argument named ``path`` specifying the full **path of the folder whose immediate child objects** you want to retrieve, e.g. 
``\"path/to/folder\"``.", "response": "def filter(self, *args, **kwargs):\n \"\"\"\n Works just like the default Manager's :func:`filter` method, but\n you can pass an additional keyword argument named ``path`` specifying\n the full **path of the folder whose immediate child objects** you\n want to retrieve, e.g. ``\"path/to/folder\"``.\n \"\"\"\n if 'path' in kwargs:\n kwargs = self.get_filter_args_with_path(False, **kwargs)\n return super(FileNodeManager, self).filter(*args, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nexclude objects from the cache.", "response": "def exclude(self, *args, **kwargs):\n \"\"\"\n Works just like the default Manager's :func:`exclude` method, but\n you can pass an additional keyword argument named ``path`` specifying\n the full **path of the folder whose immediate child objects** you\n want to exclude, e.g. ``\"path/to/folder\"``.\n \"\"\"\n if 'path' in kwargs:\n kwargs = self.get_filter_args_with_path(False, **kwargs)\n return super(FileNodeManager, self).exclude(*args, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get(self, *args, **kwargs):\n if 'path' in kwargs:\n kwargs = self.get_filter_args_with_path(True, **kwargs)\n return super(FileNodeManager, self).get(\n *args, **kwargs)", "response": "This method is a wrapper for the base class get method."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef validate(self, value, model_instance):\n if not self.editable:\n # Skip validation for non-editable fields.\n return\n if self._choices and value:\n print(value)\n l = value\n if type(value) != list:\n l = [ value ]\n for v in value:\n for option_key, option_value in self.choices:\n if isinstance(option_value, (list, tuple)):\n # This is an optgroup, so look inside the group for options.\n for optgroup_key, optgroup_value in option_value:\n if v == 
optgroup_key:\n                                return\n                    elif v == option_key:\n                        return\n            raise ValidationError(self.error_messages['invalid_choice'] % {'value': value})\n\n        if value is None and not self.null:\n            raise ValidationError(self.error_messages['null'])\n\n        if not self.blank and value in validators.EMPTY_VALUES:\n            raise ValidationError(self.error_messages['blank'])\n\n        return super(MultipleChoiceCommaSeparatedIntegerField, self).validate(value, model_instance)", "response": "Validates the value and raises ValidationError if it is invalid."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_qualified_file_url(self, field_name='file'):\n        url = getattr(self, field_name).url\n        if '://' in url:\n            # `MEDIA_URL` already contains domain\n            return url\n        protocol = getattr(settings, 'PROTOCOL', 'http')\n        domain = Site.objects.get_current().domain\n        port = getattr(settings, 'PORT', '')\n        return '%(protocol)s://%(domain)s%(port)s%(url)s' % {\n            'protocol': protocol,\n            'domain': domain.rstrip('/'),\n            'port': ':'+port if port else '',\n            'url': url,\n        }", "response": "Returns a fully qualified URL for the file field, including the protocol, domain and port."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_admin_url(self, query_params=None, use_path=False):\n\n        if not query_params:\n            query_params = {}\n\n        url = ''\n        if self.is_top_node():\n            url = reverse('admin:media_tree_filenode_changelist')\n        elif use_path and (self.is_folder() or self.pk):\n            url = reverse('admin:media_tree_filenode_open_path', args=(self.get_path(),))\n        elif self.is_folder():\n            url = reverse('admin:media_tree_filenode_changelist')\n            query_params['folder_id'] = self.pk\n        elif self.pk:\n            return reverse('admin:media_tree_filenode_change', args=(self.pk,))\n\n        if len(query_params):\n            params = ['%s=%s' % (key, value) for key, value in query_params.items()]\n            url = '%s?%s' % (url, \"&\".join(params))\n\n        return url", 
"response": "Returns the URL for viewing a FileNode in the admin."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns the object metadata that has been selected to be displayed users compiled as a string.", "response": "def get_metadata_display(self, field_formats = {}, escape=True):\n \"\"\"Returns object metadata that has been selected to be displayed to\n users, compiled as a string.\n \"\"\"\n def field_format(field):\n if field in field_formats:\n return field_formats[field]\n return u'%s'\n t = join_formatted('', self.title, format=field_format('title'), escape=escape)\n t = join_formatted(t, self.description, u'%s: %s', escape=escape)\n if self.publish_author:\n t = join_formatted(t, self.author, u'%s' + u' \u2013 ' + u'Author: %s', u'%s' + u'Author: %s', escape=escape)\n if self.publish_copyright:\n t = join_formatted(t, self.copyright, u'%s, %s', escape=escape)\n if self.publish_date_time and self.date_time:\n date_time_formatted = dateformat.format(self.date_time, get_format('DATE_FORMAT'))\n t = join_formatted(t, date_time_formatted, u'%s (%s)', '%s%s', escape=escape)\n return t"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn object metadata that has been selected to be displayed as a string including default formatting for example.", "response": "def get_caption_formatted(self, field_formats = app_settings.MEDIA_TREE_METADATA_FORMATS, escape=True):\n \"\"\"Returns object metadata that has been selected to be displayed to\n users, compiled as a string including default formatting, for example\n bold titles.\n\n You can use this method in templates where you want to output image\n captions.\n \"\"\"\n if self.override_caption != '':\n return self.override_caption\n else:\n return mark_safe(self.get_metadata_display(field_formats, escape=escape))"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef alt(self):\n if 
self.override_alt != '' and self.override_alt is not None:\n            return self.override_alt\n        elif self.override_caption != '' and self.override_caption is not None:\n            return self.override_caption\n        else:\n            return self.get_metadata_display()", "response": "Returns object metadata suitable for use as the HTML alt attribute."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nmoves the selected nodes to the selected target and position.", "response": "def save(self):\n        \"\"\"\n        Attempts to move the nodes using the selected target and\n        position.\n\n        If an invalid move is attempted, the related error message will\n        be added to the form's non-field errors and the error will be\n        re-raised. Callers should attempt to catch ``InvalidMove`` to\n        redisplay the form with the error, should it occur.\n        \"\"\"\n        self.success_count = 0\n        for node in self.get_selected_nodes():\n            self.move_node(node, self.cleaned_data['target_node'])"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ndelete the selected files from storage, saving the results to self.success_files and self.
error_files.", "response": "def save(self):\n        \"\"\"\n        Deletes the selected files from storage\n        \"\"\"\n        storage = get_media_storage()\n        for storage_name in self.cleaned_data['selected_files']:\n            full_path = storage.path(storage_name)\n            try:\n                storage.delete(storage_name)\n                self.success_files.append(full_path)\n            except OSError:\n                self.error_files.append(full_path)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_merged_filenode_list(nodes, filter_media_types=None, exclude_media_types=None, filter=None, ordering=None, processors=None, max_depth=None, max_nodes=None):\n    return __get_filenode_list(nodes, filter_media_types=filter_media_types, exclude_media_types=exclude_media_types,\n        filter=filter, ordering=ordering, processors=processors, list_method='extend', max_depth=max_depth, max_nodes=max_nodes)", "response": "Returns a single merged list of file nodes."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn a formatted HTML link tag to the FileNode's file optionally including some meta information about the file.", "response": "def get_file_link(node, use_metadata=False, include_size=False, include_extension=False, include_icon=False, href=None, extra_class='', extra=''):\n    \"\"\"\n    Returns a formatted HTML link tag to the FileNode's file, optionally including some meta information about the file.\n    \"\"\"\n    link_text = None\n    if use_metadata:\n        link_text = node.get_metadata_display()\n    if not link_text:\n        link_text = node.__unicode__()\n    if node.node_type != media_types.FOLDER:\n        if include_extension:\n            if extra != '':\n                extra += ' '\n            extra += '<span class=\"file-extension\">%s</span>' % node.extension.upper()\n        if include_size:\n            if extra != '':\n                extra += ', '\n            extra += '<span class=\"file-size\">%s</span>' % filesizeformat(node.size)\n        if extra:\n            extra = ' <span class=\"details\">(%s)</span>' % extra\n        link_class = 'file %s' % 
node.extension\n else:\n link_class = 'folder'\n if extra_class:\n link_class = '%s %s' % (link_class, extra_class)\n\n if node.node_type != media_types.FOLDER and not href:\n href = node.file.url\n\n icon = ''\n if include_icon:\n icon_file = node.get_icon_file()\n if icon_file:\n icon = '<span class=\"icon\"><img src=\"%s\" alt=\"%s\" /></span>' % (\n icon_file.url, node.alt)\n\n if href:\n link = u'<a class=\"%s\" href=\"%s\">%s%s</a>%s' % (\n link_class, href, icon, link_text, extra)\n else:\n link = u'<span class=\"%s\">%s%s</span>%s' % (\n link_class, icon, link_text, extra)\n\n return force_unicode(mark_safe(link))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ndeleting selected items from the tree.", "response": "def delete_selected_tree(self, modeladmin, request, queryset):\n \"\"\"\n Deletes multiple instances and makes sure the MPTT fields get recalculated properly.\n (Because merely doing a bulk delete doesn't trigger the post_delete hooks.)\n \"\"\"\n # If the user has not yet confirmed the deletion, call the regular delete\n # action that will present a confirmation page\n if not request.POST.get('post'):\n return actions.delete_selected(modeladmin, request, queryset)\n # Otherwise, delete objects one by one\n n = 0\n for obj in queryset:\n obj.delete()\n n += 1\n self.message_user(request, _(\"Successfully deleted %s items.\" % n))"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef norm_name(build_module: str, target_name: str):\n if ':' not in target_name:\n raise ValueError(\n \"Must provide fully-qualified target name (with `:') to avoid \"\n \"possible ambiguity - `{}' not valid\".format(target_name))\n\n mod, name = split(target_name)\n return '{}:{}'.format(\n PurePath(norm_proj_path(mod, build_module)).as_posix().strip('.'),\n validate_name(name))", "response": "Return a normalized canonical target name for the target_name build_module."} {"SOURCE": 
"codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef expand_target_selector(target_selector: str, conf: Config):\n if target_selector == '**:*':\n return target_selector\n if ':' not in target_selector:\n target_selector += ':*'\n build_module, target_name = split(target_selector)\n build_module = normpath(join(conf.get_rel_work_dir(), build_module))\n return '{}:{}'.format(PurePath(build_module).as_posix().strip('.'),\n validate_name(target_name))", "response": "Expand a target selector into a normalized target name."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef hashify_targets(targets: list, build_context) -> list:\n return sorted(build_context.targets[target_name].hash(build_context)\n for target_name in listify(targets))", "response": "Return sorted hashes of targets."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn mapping from file path to file hash.", "response": "def hashify_files(files: list) -> dict:\n \"\"\"Return mapping from file path to file hash.\"\"\"\n return {filepath.replace('\\\\', '/'): hash_tree(filepath)\n for filepath in listify(files)}"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning a cachable representation of the prop value given its type.", "response": "def process_prop(prop_type: PT, value, build_context):\n \"\"\"Return a cachable representation of the prop `value` given its type.\"\"\"\n if prop_type in (PT.Target, PT.TargetList):\n return hashify_targets(value, build_context)\n elif prop_type in (PT.File, PT.FileList):\n return hashify_files(value)\n return value"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncomputes and store a JSON serialization of this target for caching purposes.", "response": "def compute_json(self, build_context):\n \"\"\"Compute and store a JSON serialization of this 
target for caching\n purposes.\n\n The serialization includes:\n - The build flavor\n - The builder name\n - Target tags\n - Hashes of target dependencies & buildenv\n - Processed props (where target props are replaced with their hashes,\n and file props are replaced with mapping from file name to its hash)\n\n It specifically does NOT include:\n - Artifacts produced by the target\n\n The target name is currently included, although it would be better off\n to leave it out, and allow targets to be renamed without affecting\n their caching status (if it's just a rename).\n It is currently included because it's the easy way to account for the\n fact that when cached artifacts are restored, their path may be a\n function of the target name in non-essential ways (such as a workspace\n dir name).\n \"\"\"\n props = {}\n test_props = {}\n for prop in self.props:\n if prop in self._prop_json_blacklist:\n continue\n sig_spec = Plugin.builders[self.builder_name].sig.get(prop)\n if sig_spec is None:\n continue\n if prop in self._prop_json_testlist:\n test_props[prop] = process_prop(sig_spec.type,\n self.props[prop],\n build_context)\n else:\n props[prop] = process_prop(sig_spec.type, self.props[prop],\n build_context)\n json_dict = dict(\n # TODO: avoid including the name in the hashed json...\n name=self.name,\n builder_name=self.builder_name,\n deps=hashify_targets(self.deps, build_context),\n props=props,\n buildenv=hashify_targets(self.buildenv, build_context),\n tags=sorted(list(self.tags)),\n flavor=build_context.conf.flavor, # TODO: any other conf args?\n # yabt_version=__version__, # TODO: is this needed?\n )\n json_test_dict = dict(\n props=test_props,\n )\n\n self._json = json.dumps(json_dict, sort_keys=True, indent=4)\n self._test_json = json.dumps(json_test_dict, sort_keys=True, indent=4)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef json(self, build_context) -> str:\n if self._json is None:\n 
self.compute_json(build_context)\n return self._json", "response": "Return the JSON representation of this target for caching purposes."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncompute and store the hash of this target for caching purposes.", "response": "def compute_hash(self, build_context):\n \"\"\"Compute and store the hash of this target for caching purposes.\n\n The hash is computed over the target JSON representation.\n \"\"\"\n m = md5()\n m.update(self.json(build_context).encode('utf8'))\n self._hash = m.hexdigest()\n m = md5()\n m.update(self.test_json(build_context).encode('utf8'))\n self._test_hash = m.hexdigest()"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef hash(self, build_context) -> str:\n if self._hash is None:\n self.compute_hash(build_context)\n return self._hash", "response": "Return the hash of this target for caching purposes."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nhandle Docker image build cache.", "response": "def handle_build_cache(\n conf: Config, name: str, tag: str, icb: ImageCachingBehavior):\n \"\"\"Handle Docker image build cache.\n\n Return image ID if image is cached, and there's no need to redo the build.\n Return None if need to build the image (whether cached locally or not).\n Raise RuntimeError if not allowed to build the image because of state of\n local cache.\n\n TODO(itamar): figure out a better name for this function, that reflects\n what it returns (e.g. 
`get_cached_image_id`),\n without \"surprising\" the caller with the potential of long\n and non-trivial operations that are not usually expected from functions\n with such names.\n \"\"\"\n if icb.pull_if_cached or (icb.pull_if_not_cached and\n get_cached_image_id(icb.remote_image) is None):\n try:\n pull_docker_image(icb.remote_image, conf.docker_pull_cmd)\n except CalledProcessError:\n pass\n local_image = '{}:{}'.format(name, tag)\n if (icb.skip_build_if_cached and\n get_cached_image_id(icb.remote_image) is not None):\n tag_docker_image(icb.remote_image, local_image)\n return get_cached_image_id(local_image)\n if ((not icb.allow_build_if_not_cached) and\n get_cached_image_id(icb.remote_image) is None):\n raise RuntimeError('No cached image for {}'.format(local_image))\n return None"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef build_docker_image(\n build_context, name: str, tag: str, base_image, deps: list=None,\n env: dict=None, work_dir: str=None,\n entrypoint: list=None, cmd: list=None, full_path_cmd: bool=False,\n distro: dict=None, image_caching_behavior: dict=None,\n runtime_params: dict=None, ybt_bin_path: str=None,\n build_user: str=None, run_user: str=None, labels: dict=None,\n no_artifacts: bool=False):\n \"\"\"Build Docker image, and return a (image_id, image_name:tag) tuple of\n built image, if built successfully.\n\n Notes:\n Using the given image name & tag as they are, but using the global host\n Docker image namespace (as opposed to a private-project-workspace),\n so collisions between projects are possible (and very likely, e.g., when\n used in a CI environment, or shared machine use-case).\n Trying to address this issue to some extent by using the image ID after\n it is built, which is unique.\n There's a race condition between \"build\" and \"get ID\" - ignoring this at\n the moment.\n Also, I'm not sure about Docker's garbage collection...\n If I use the image ID in other 
places, and someone else \"grabbed\" my image\n name and tag (so now my image ID is floating), is it still safe to use\n the ID? Or is it going to be garbage collected and cleaned up sometime?\n From my experiments, the \"floating image ID\" was left alone (usable),\n but prone to \"manual cleanups\".\n Also ignoring this at the moment...\n Thought about an alternative approach based on first building an image\n with a randomly generated tag, so I can use that safely later, and tag it\n to the requested tag.\n Decided against it, seeing that every run increases the local Docker\n images spam significantly with a bunch of random tags, making it even less\n useful.\n Documenting it here to remember it was considered, and to discuss it\n further in case anyone thinks it's a better idea than what I went with.\n \"\"\"\n docker_image = '{}:{}'.format(name, tag)\n # create directory for this target under a private builder workspace\n workspace_dir = build_context.get_workspace('DockerBuilder', docker_image)\n # generate Dockerfile and build it\n dockerfile_path = join(workspace_dir, 'Dockerfile')\n dockerfile = [\n 'FROM {}\\n'.format(format_qualified_image_name(base_image)),\n 'ARG DEBIAN_FRONTEND=noninteractive\\n',\n ]\n if build_user:\n dockerfile.append('USER {}\\n'.format(build_user))\n workspace_src_dir = join(workspace_dir, 'src')\n rmtree(workspace_src_dir)\n num_linked = 0\n apt_repo_deps = []\n effective_env = {}\n effective_labels = {}\n KNOWN_RUNTIME_PARAMS = frozenset((\n 'ports', 'volumes', 'container_name', 'daemonize', 'interactive',\n 'term', 'auto_it', 'rm', 'env', 'work_dir', 'impersonate'))\n if runtime_params is None:\n runtime_params = {}\n runtime_params['ports'] = listify(runtime_params.get('ports'))\n runtime_params['volumes'] = listify(runtime_params.get('volumes'))\n runtime_params['env'] = dict(runtime_params.get('env', {}))\n env_manipulations = {}\n packaging_layers = []\n\n def add_package(pkg_type, pkg_spec):\n \"\"\"Add package 
specification of a certain package type.\n\n        Uses last layer if matches package type, otherwise opens a new layer.\n\n        This can result in \"Docker layer fragmentation\", by opening and closing\n        many layers.\n        No optimization is performed on detecting opportunities to merge layers\n        that were split just because of arbitrary topological sort decision\n        (tie breaker), and not a real topology in the target graph.\n        Such an optimization could be done here by inspecting the graph\n        directly, but decided not to go into it at this stage, since it's not\n        clear it's beneficial overall (e.g. better to have more layers so\n        some of them can remain cached if others change).\n        A better optimization (also not implemented) could be to do topological\n        sort tie breaking based on Docker-cache optimization - e.g., move new\n        things to new layers in order to keep old things in cached layers.\n        \"\"\"\n        if len(packaging_layers) == 0:\n            layer = (pkg_type, list())\n            packaging_layers.append(layer)\n        else:\n            layer = packaging_layers[-1]\n            if pkg_type != layer[0]:\n                layer = (pkg_type, list())\n                packaging_layers.append(layer)\n        if isinstance(pkg_spec, list):\n            layer[1].extend(pkg_spec)\n        else:\n            layer[1].append(pkg_spec)\n\n    def check_env_overrides(new_vars: set, op_kind: str, vars_source: str):\n        overridden_vars = new_vars.intersection(effective_env.keys())\n        if overridden_vars:\n            raise ValueError(\n                'Following env vars {} from {} override previously set vars '\n                'during build of Docker image \"{}\": {}'.format(\n                    op_kind, vars_source, docker_image,\n                    ', '.join(overridden_vars)))\n        if op_kind == 'set':\n            overridden_vars = new_vars.intersection(env_manipulations.keys())\n            if overridden_vars:\n                raise ValueError(\n                    'Following env vars {} from {} override previous '\n                    'manipulations during build of Docker image \"{}\": {}'\n                    .format(op_kind, vars_source, docker_image,\n                            ', '.join(overridden_vars)))\n\n    def check_label_overrides(new_labels: set, labels_source: str):\n        overridden_labels = 
new_labels.intersection(effective_labels.keys())\n        if overridden_labels:\n            raise ValueError(\n                'Following labels set from {} override previously set labels '\n                'during build of Docker image \"{}\": {}'.format(\n                    labels_source, docker_image, ', '.join(overridden_labels)))\n\n    def update_runtime_params(new_rt_param: dict, params_source: str):\n        invalid_keys = set(\n            new_rt_param.keys()).difference(KNOWN_RUNTIME_PARAMS)\n        if invalid_keys:\n            raise ValueError(\n                'Unknown keys in runtime params of {}: {}'.format(\n                    params_source, ', '.join(invalid_keys)))\n        # TODO(itamar): check for invalid values and inconsistencies\n        runtime_params['ports'].extend(listify(new_rt_param.get('ports')))\n        runtime_params['volumes'].extend(listify(new_rt_param.get('volumes')))\n        runtime_params['env'].update(dict(new_rt_param.get('env', {})))\n        for param in ('container_name', 'daemonize', 'interactive', 'term',\n                      'auto_it', 'rm', 'work_dir', 'impersonate'):\n            if param in new_rt_param:\n                # TODO(itamar): check conflicting overrides\n                runtime_params[param] = new_rt_param[param]\n\n    if deps is None:\n        deps = []\n    # Get all base image deps, so when building this image we can skip adding\n    # deps that already exist in the base image.\n    base_image_deps = set(build_context.generate_dep_names(base_image))\n    for dep in deps:\n        if not distro and 'distro' in dep.props:\n            distro = dep.props.distro\n        if 'runtime_params' in dep.props:\n            update_runtime_params(dep.props.runtime_params,\n                                  'dependency {}'.format(dep.name))\n\n        if dep.name in base_image_deps:\n            logger.debug('Skipping base image dep {}', dep.name)\n            continue\n        if not no_artifacts:\n            num_linked += dep.artifacts.link_for_image(\n                workspace_src_dir, build_context.conf)\n\n        PACKAGING_PARAMS = frozenset(\n            ('set_env', 'semicolon_join_env', 'set_label'))\n        invalid_keys = set(\n            dep.props.packaging_params.keys()).difference(PACKAGING_PARAMS)\n        if invalid_keys:\n            raise ValueError(\n                'Unknown keys in packaging params of target \"{}\": {}'.format(\n
dep.name, ', '.join(invalid_keys)))\n if 'set_env' in dep.props.packaging_params:\n dep_env = dep.props.packaging_params['set_env']\n check_env_overrides(\n set(dep_env.keys()), 'set', 'dependency {}'.format(dep.name))\n effective_env.update(dep_env)\n if 'semicolon_join_env' in dep.props.packaging_params:\n append_env = dep.props.packaging_params['semicolon_join_env']\n check_env_overrides(set(append_env.keys()), 'manipulations',\n 'dependency {}'.format(dep.name))\n for key, value in append_env.items():\n env_manip = env_manipulations.setdefault(\n key, ['${{{}}}'.format(key)])\n if value not in env_manip:\n env_manip.append(value)\n if 'set_label' in dep.props.packaging_params:\n dep_labels = dep.props.packaging_params['set_label']\n check_label_overrides(\n set(dep_labels.keys()), 'dependency {}'.format(dep.name))\n effective_labels.update(dep_labels)\n\n if 'apt-repository' in dep.tags:\n apt_repo_deps.append(dep)\n if 'apt-installable' in dep.tags:\n add_package('apt', format_apt_specifier(dep))\n if 'pip-installable' in dep.tags:\n add_package(dep.props.pip, format_pypi_specifier(dep))\n if 'custom-installer' in dep.tags:\n add_package('custom', get_installer_desc(build_context, dep))\n if 'npm-installable' in dep.tags:\n if dep.props.global_install:\n add_package('npm-global', format_npm_specifier(dep))\n else:\n add_package('npm-local', format_npm_specifier(dep))\n if 'gem-installable' in dep.tags:\n add_package('gem', format_gem_specifier(dep))\n\n # Add environment variables (one layer)\n if env:\n check_env_overrides(set(env.keys()), 'set', 'the target')\n effective_env.update(env)\n for key, value in env_manipulations.items():\n effective_env[key] = ':'.join(value)\n if effective_env:\n dockerfile.append(\n 'ENV {}\\n'.format(\n ' '.join('{}=\"{}\"'.format(key, value)\n for key, value in sorted(effective_env.items()))))\n\n apt_key_cmds = []\n apt_repositories = []\n for dep in apt_repo_deps:\n source_line, apt_key_cmd = parse_apt_repository(\n 
build_context, dep, distro)\n apt_repositories.append(source_line)\n if apt_key_cmd:\n apt_key_cmds.append(apt_key_cmd)\n # Handle apt keys (one layer for all)\n if apt_key_cmds:\n dockerfile.append(\n 'RUN {}\\n'.format(' && '.join(apt_key_cmds)))\n # Handle apt repositories (one layer for all)\n if apt_repositories:\n list_name = '{}.list'.format(name)\n apt_src_file = join(workspace_dir, list_name)\n if make_apt_sources_list(apt_repositories, apt_src_file):\n dockerfile.append(\n 'COPY {} /etc/apt/sources.list.d/\\n'.format(list_name))\n\n custom_cnt = 0\n pip_req_cnt = defaultdict(int)\n\n def install_npm(npm_packages: list, global_install: bool):\n if npm_packages:\n if not global_install:\n dockerfile.append('WORKDIR /usr/src\\n')\n dockerfile.append(\n 'RUN npm install {} {}\\n'.format(\n ' '.join(npm_packages),\n '--global' if global_install else '&& npm dedupe'))\n\n for layer in packaging_layers:\n pkg_type, packages = layer\n\n if pkg_type == 'apt':\n dockerfile.append(\n 'RUN apt-get update -y && apt-get install '\n '--no-install-recommends -y {} '\n '&& rm -rf /var/lib/apt/lists/*\\n'.format(\n ' '.join(sorted(packages))))\n\n elif pkg_type == 'custom':\n # Handle custom installers (2 layers per occurrence)\n custom_cnt += 1\n packages_dir = 'packages{}'.format(custom_cnt)\n tmp_install = '/tmp/install{}'.format(custom_cnt)\n workspace_packages_dir = join(workspace_dir, packages_dir)\n rmtree(workspace_packages_dir)\n os.makedirs(workspace_packages_dir)\n run_installers = []\n for custom_installer_desc in packages:\n target_name, install_script, package = custom_installer_desc\n package_tar = basename(package)\n link_node(package, join(workspace_packages_dir, package_tar))\n run_installers.extend([\n 'tar -xf {0}/{1} -C {0}'.format(tmp_install, package_tar),\n 'cd {}/{}'.format(tmp_install, target_name),\n 'cat {} | tr -d \\'\\\\r\\' | bash'.format(install_script),\n ])\n dockerfile.extend([\n 'COPY {} {}\\n'.format(packages_dir, tmp_install),\n 'RUN 
{} && cd / && rm -rf {}\\n'.format(\n ' && '.join(run_installers), tmp_install),\n ])\n\n elif pkg_type.startswith('pip'):\n # Handle pip packages (2 layers per occurrence)\n req_fname = 'requirements_{}_{}.txt'.format(\n pkg_type, pip_req_cnt[pkg_type] + 1)\n pip_req_file = join(workspace_dir, req_fname)\n if make_pip_requirements(packages, pip_req_file):\n upgrade_pip = (\n '{pip} install --no-cache-dir --upgrade pip && '\n .format(pip=pkg_type)\n if pip_req_cnt[pkg_type] == 0 else '')\n dockerfile.extend([\n 'COPY {} /usr/src/\\n'.format(req_fname),\n 'RUN {upgrade_pip}'\n '{pip} install --no-cache-dir -r /usr/src/{reqs}\\n'\n .format(upgrade_pip=upgrade_pip, pip=pkg_type,\n reqs=req_fname)\n ])\n pip_req_cnt[pkg_type] += 1\n\n elif pkg_type == 'npm-global':\n # Handle npm global packages (1 layer per occurrence)\n install_npm(packages, True)\n\n elif pkg_type == 'npm-local':\n # Handle npm local packages (1 layer per occurrence)\n install_npm(packages, False)\n\n elif pkg_type == 'gem':\n # Handle gem (ruby) packages (1 layer per occurrence)\n dockerfile.append(\n 'RUN gem install {}\\n'.format(' '.join(packages)))\n\n if work_dir:\n dockerfile.append('WORKDIR {}\\n'.format(work_dir))\n\n if num_linked > 0:\n dockerfile.append('COPY src /usr/src\\n')\n\n # Add labels (one layer)\n if labels:\n check_label_overrides(set(labels.keys()), 'the target')\n effective_labels.update(labels)\n if effective_labels:\n dockerfile.append(\n 'LABEL {}\\n'.format(\n ' '.join('\"{}\"=\"{}\"'.format(key, value)\n for key, value in sorted(effective_labels.items()))))\n\n def format_docker_cmd(docker_cmd):\n return ('\"{}\"'.format(cmd) for cmd in docker_cmd)\n\n if run_user:\n dockerfile.append('USER {}\\n'.format(run_user))\n\n # Add ENTRYPOINT (one layer)\n if entrypoint:\n # TODO(itamar): Consider adding tini as entrypoint also if given\n # Docker CMD without a Docker ENTRYPOINT?\n entrypoint[0] = PurePath(entrypoint[0]).as_posix()\n if full_path_cmd:\n entrypoint[0] = 
(PurePath('/usr/src/app') /\n entrypoint[0]).as_posix()\n if build_context.conf.with_tini_entrypoint:\n entrypoint = ['tini', '--'] + entrypoint\n dockerfile.append(\n 'ENTRYPOINT [{}]\\n'.format(\n ', '.join(format_docker_cmd(entrypoint))))\n\n # Add CMD (one layer)\n if cmd:\n cmd[0] = PurePath(cmd[0]).as_posix()\n if full_path_cmd:\n cmd[0] = (PurePath('/usr/src/app') / cmd[0]).as_posix()\n dockerfile.append(\n 'CMD [{}]\\n'.format(', '.join(format_docker_cmd(cmd))))\n\n # TODO(itamar): write only if changed?\n with open(dockerfile_path, 'w') as dockerfile_f:\n dockerfile_f.writelines(dockerfile)\n docker_build_cmd = ['docker', 'build']\n if build_context.conf.no_docker_cache:\n docker_build_cmd.append('--no-cache')\n docker_build_cmd.extend(['-t', docker_image, workspace_dir])\n logger.info('Building docker image \"{}\" using command {}',\n docker_image, docker_build_cmd)\n run(docker_build_cmd, check=True)\n # TODO(itamar): race condition here\n image_id = get_cached_image_id(docker_image)\n metadata = {\n 'image_id': image_id,\n 'images': [{\n 'name': docker_image,\n 'pushed': False,\n }],\n }\n icb = ImageCachingBehavior(name, tag, image_caching_behavior)\n if icb.push_image_after_build:\n tag_docker_image(image_id, icb.remote_image)\n push_docker_image(icb.remote_image, build_context.conf.docker_push_cmd)\n metadata['images'].append({\n 'name': icb.remote_image,\n 'pushed': True,\n })\n # Generate ybt_bin scripts\n if ybt_bin_path:\n if ybt_bin_path.startswith('//'):\n ybt_bin_path = join(build_context.conf.get_bin_path(),\n ybt_bin_path[2:])\n # Make sure ybt_bin's are created only under bin_path\n assert (build_context.conf.get_bin_path() ==\n commonpath([build_context.conf.get_bin_path(), ybt_bin_path]))\n\n def format_docker_run_params(params: dict):\n param_strings = []\n if 'container_name' in params:\n param_strings.extend(['--name', params['container_name']])\n if params.get('interactive'):\n param_strings.append('-i')\n if params.get('term'):\n 
param_strings.append('-t')\n if params.get('rm'):\n param_strings.append('--rm')\n if params.get('daemonize'):\n param_strings.append('-d')\n if params.get('impersonate') and platform.system() == 'Linux':\n param_strings.extend([\n '-u', '$( id -u ):$( id -g )',\n '-v', '/etc/passwd:/etc/passwd:ro',\n '-v', '/etc/group:/etc/group:ro'])\n for port in params['ports']:\n param_strings.extend(['-p', port])\n for volume in params['volumes']:\n param_strings.extend(['-v', volume])\n if params.get('work_dir'):\n param_strings.extend(['-w', params['work_dir']])\n for var, value in params['env'].items():\n param_strings.extend(['-e', '{}=\"{}\"'.format(var, value)])\n return ' '.join(param_strings)\n\n with open(join(dirname(abspath(__file__)),\n 'ybtbin.sh.tmpl'), 'r') as tmpl_f:\n ybt_bin = tmpl_f.read().format(\n image_name=docker_image, image_id=image_id,\n docker_opts=format_docker_run_params(runtime_params),\n default_opts='$IT' if runtime_params.get('auto_it') else '')\n with open(ybt_bin_path, 'w') as ybt_bin_f:\n ybt_bin_f.write(ybt_bin)\n os.chmod(ybt_bin_path, 0o755)\n metadata['ybt_bin'] = ybt_bin_path\n return metadata", "response": "Build a Docker image and return a tuple of\n "} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nadd a brace - handler using stream to logger.", "response": "def add_stream_handler(logger, stream):\n \"\"\"Add a brace-handler stream-handler using `stream` to `logger`.\"\"\"\n handler = logging.StreamHandler(stream=stream)\n # Using Brace Formatter (see\n # https://docs.python.org/3.5/howto/logging-cookbook.html#use-of-alternative-formatting-styles)\n formatter = logging.Formatter(\n '{asctime} {name:24s} {levelname:8s} {message}', style='{')\n handler.setFormatter(formatter)\n logger.addHandler(handler)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ninitializing and configure logging.", "response": "def configure_logging(conf):\n \"\"\"Initialize and configure logging.\"\"\"\n 
root_logger = logging.getLogger()\n root_logger.setLevel(getattr(logging, conf.loglevel.upper()))\n if conf.logtostderr:\n add_stream_handler(root_logger, sys.stderr)\n if conf.logtostdout:\n add_stream_handler(root_logger, sys.stdout)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn an iterator which yields the paths matching a pathname pattern.", "response": "def iglob(pathname, *, recursive=False):\n \"\"\"Return an iterator which yields the paths matching a pathname pattern.\n\n The pattern may contain simple shell-style wildcards a la\n fnmatch. However, unlike fnmatch, filenames starting with a\n dot are special cases that are not matched by '*' and '?'\n patterns.\n\n If recursive is true, the pattern '**' will match any files and\n zero or more directories and subdirectories.\n \"\"\"\n it = _iglob(pathname, recursive)\n if recursive and _isrecursive(pathname):\n s = next(it) # skip empty string\n assert not s\n return it"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef register_sig(self, builder_name: str, sig: list, docstring: str,\n cachable: bool=True, attempts=1):\n \"\"\"Register a builder signature & docstring for `builder_name`.\n\n The input for the builder signature is a list of \"sig-spec\"s\n representing the builder function arguments.\n\n Each sig-spec in the list can be:\n 1. A string. This represents a simple untyped positional argument name,\n with no default value.\n 2. A 1-tuple with one string element. Same as #1.\n 3. A 2-tuple with ('arg-name', arg_type). This represents a typed\n positional argument, if arg_type is an instance of PropType enum.\n 4. A 2-tuple with ('arg-name', default_value). This represents an\n un-typed keyword argument with a default value.\n 5. A 3-tuple with ('arg-name', arg_type, default_value). 
This\n represents a typed keyword argument with a default value,\n if arg_type is an instance of PropType enum.\n\n In addition to the args specified in the `sig` list, there are several\n *injected* args:\n 1. A positional arg `name` of type TargetName is always the first arg.\n 2. A keyword arg `deps` of type TargetList and default value `None`\n (or empty list) is always the first after all builder args.\n 3. A keyword arg `cachable` of type bool and default value taken from\n the signature registration call (`cachable` arg).\n 4. A keyword arg `license` of type StrList and default value [].\n 5. A keyword arg `policies` of type StrList and default value [].\n 6. A keyword arg `packaging_params` of type dict and default value {}\n (empty dict).\n 7. A keyword arg `runtime_params` of type dict and default value {}\n (empty dict).\n 8. A keyword arg `build_params` of type dict and default value {}\n (empty dict).\n 9. A keyword arg `attempts` of type int and default value 1.\n \"\"\"\n if self.sig is not None:\n raise KeyError('{} already registered a signature!'\n .format(builder_name))\n self.sig = OrderedDict(name=ArgSpec(PropType.TargetName, Empty))\n self.docstring = docstring\n kwargs_section = False\n for arg_spec in listify(sig):\n arg_name, sig_spec = evaluate_arg_spec(arg_spec)\n if arg_name in self.sig or arg_name in INJECTED_ARGS:\n raise SyntaxError(\n \"duplicate argument '{}' in function definition\"\n .format(arg_name))\n self.sig[arg_name] = sig_spec\n if sig_spec.default == Empty:\n if kwargs_section:\n # TODO(itamar): how to give syntax error source annotation?\n # (see: http://stackoverflow.com/questions/33717804)\n raise SyntaxError(\n 'non-default argument follows default argument')\n self.min_positional_args += 1\n else:\n kwargs_section = True\n self.sig['deps'] = ArgSpec(PropType.TargetList, None)\n self.sig['cachable'] = ArgSpec(PropType.bool, cachable)\n self.sig['license'] = ArgSpec(PropType.StrList, None)\n self.sig['policies'] = 
ArgSpec(PropType.StrList, None)\n self.sig['packaging_params'] = ArgSpec(PropType.dict, None)\n self.sig['runtime_params'] = ArgSpec(PropType.dict, None)\n self.sig['build_params'] = ArgSpec(PropType.dict, None)\n self.sig['attempts'] = ArgSpec(PropType.numeric, 1)", "response": "Register a builder signature & docstring for the builder_name."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef remove_builder(cls, builder_name: str):\n cls.builders.pop(builder_name, None)\n for hook_spec in cls.hooks.values():\n hook_spec.pop(builder_name, None)", "response": "Remove a registered builder."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef to_build_module(build_file_path: str, conf: Config) -> str:\n build_file = Path(build_file_path)\n root = Path(conf.project_root)\n return build_file.resolve().relative_to(root).parent.as_posix().strip('.')", "response": "Return a normalized build module name for build_file_path."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nprinting out version information about YABT and detected builders.", "response": "def cmd_version(unused_conf):\n \"\"\"Print out version information about YABT and detected builders.\"\"\"\n import pkg_resources\n print('This is {} version {}, imported from {}'\n .format(__oneliner__, __version__, __file__))\n if len(Plugin.builders) > 0:\n print('setuptools registered builders:')\n for entry_point in pkg_resources.iter_entry_points('yabt.builders'):\n print(' {0.module_name}.{0.name} (dist {0.dist})'.format(entry_point))"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nprint out information on loaded builders and hooks.", "response": "def cmd_list(unused_conf: Config):\n \"\"\"Print out information on loaded builders and hooks.\"\"\"\n for name, builder in sorted(Plugin.builders.items()):\n if 
builder.func:\n print('+- {0:16s} implemented in {1.__module__}.{1.__name__}()'\n .format(name, builder.func))\n else:\n print('+- {0:16s} loaded with no builder function'.format(name))\n for hook_name, hook_func in sorted(Plugin.get_hooks_for_builder(name)):\n print(' +- {0} hook implemented in '\n '{1.__module__}.{1.__name__}()'\n .format(hook_name, hook_func))"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nbuild requested targets and their dependencies.", "response": "def cmd_build(conf: Config, run_tests: bool=False):\n \"\"\"Build requested targets, and their dependencies.\"\"\"\n build_context = BuildContext(conf)\n populate_targets_graph(build_context, conf)\n build_context.build_graph(run_tests=run_tests)\n build_context.write_artifacts_metadata()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nprinting out a neat targets dependency tree based on requested targets.", "response": "def cmd_dot(conf: Config):\n \"\"\"Print out a neat targets dependency tree based on requested targets.\n\n Use graphviz to render the dot file, e.g.:\n\n > ybt dot :foo :bar | dot -Tpng -o graph.png\n \"\"\"\n build_context = BuildContext(conf)\n populate_targets_graph(build_context, conf)\n if conf.output_dot_file is None:\n write_dot(build_context, conf, sys.stdout)\n else:\n with open(conf.output_dot_file, 'w') as out_file:\n write_dot(build_context, conf, out_file)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef cmd_tree(conf: Config):\n build_context = BuildContext(conf)\n populate_targets_graph(build_context, conf)\n\n def print_target_with_deps(target, depth=2):\n print('{: >{}}{}'.format('+-', depth, target.name))\n for dep in sorted(\n build_context.target_graph.neighbors(target.name)):\n print_target_with_deps(build_context.targets[dep], depth + 2)\n\n if conf.targets:\n for target_name in sorted(parse_target_selectors(conf.targets, 
conf)):\n mod, name = split(target_name)\n if name == '*':\n for target_name in sorted(\n build_context.targets_by_module[mod]):\n print_target_with_deps(build_context.targets[target_name])\n else:\n print_target_with_deps(build_context.targets[target_name])\n else:\n for _, target in sorted(build_context.targets.items()):\n print_target_with_deps(target)", "response": "Prints out a neat targets dependency tree based on requested targets."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef main():\n conf = init_and_get_conf()\n logger = make_logger(__name__)\n logger.info('YaBT version {}', __version__)\n handlers = {\n 'build': YabtCommand(func=cmd_build, requires_project=True),\n 'dot': YabtCommand(func=cmd_dot, requires_project=True),\n 'test': YabtCommand(func=cmd_test, requires_project=True),\n 'tree': YabtCommand(func=cmd_tree, requires_project=True),\n 'version': YabtCommand(func=cmd_version, requires_project=False),\n 'list-builders': YabtCommand(func=cmd_list, requires_project=False),\n }\n command = handlers[conf.cmd]\n if command.requires_project and not conf.in_yabt_project():\n fatal('Not a YABT project (or any of the parent directories): {}',\n BUILD_PROJ_FILE)\n try:\n command.func(conf)\n except Exception as ex:\n fatal('{}', ex)", "response": "Main console script entry point - run YABT from command - line."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef cpp_app_builder(build_context, target):\n yprint(build_context.conf, 'Build CppApp', target)\n if target.props.executable and target.props.main:\n raise KeyError(\n '`main` and `executable` arguments are mutually exclusive')\n if target.props.executable:\n if target.props.executable not in target.artifacts.get(AT.app):\n target.artifacts.add(AT.app, target.props.executable)\n entrypoint = [target.props.executable]\n elif target.props.main:\n prog = build_context.targets[target.props.main]\n binary = 
list(prog.artifacts.get(AT.binary).keys())[0]\n entrypoint = ['/usr/src/bin/' + binary]\n else:\n raise KeyError('Must specify either `main` or `executable` argument')\n build_app_docker_and_bin(\n build_context, target, entrypoint=entrypoint)", "response": "Pack a C ++ binary as a Docker image with its runtime dependencies."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning a pre - build hook function for C ++ builders.", "response": "def make_pre_build_hook(extra_compiler_config_params):\n \"\"\"Return a pre-build hook function for C++ builders.\n\n When called, during graph build, it computes and stores the compiler-config\n object on the target, as well as adding it to the internal_dict prop for\n hashing purposes.\n \"\"\"\n\n def pre_build_hook(build_context, target):\n target.compiler_config = CompilerConfig(\n build_context, target, extra_compiler_config_params)\n target.props._internal_dict_['compiler_config'] = (\n target.compiler_config.as_dict())\n\n return pre_build_hook"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncompile list of C ++ source files in a buildenv image and return list of generated object file.", "response": "def compile_cc(build_context, compiler_config, buildenv, sources,\n workspace_dir, buildenv_workspace, cmd_env):\n \"\"\"Compile list of C++ source files in a buildenv image\n and return list of generated object file.\n \"\"\"\n objects = []\n for src in sources:\n obj_rel_path = '{}.o'.format(splitext(src)[0])\n obj_file = join(buildenv_workspace, obj_rel_path)\n include_paths = [buildenv_workspace] + compiler_config.include_path\n compile_cmd = (\n [compiler_config.compiler, '-o', obj_file, '-c'] +\n compiler_config.compile_flags +\n ['-I{}'.format(path) for path in include_paths] +\n [join(buildenv_workspace, src)])\n # TODO: capture and transform error messages from compiler so file\n # paths match host paths for smooth(er) editor / IDE integration\n 
build_context.run_in_buildenv(buildenv, compile_cmd, cmd_env)\n objects.append(\n join(relpath(workspace_dir, build_context.conf.project_root),\n obj_rel_path))\n return objects"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef link_cpp_artifacts(build_context, target, workspace_dir,\n include_objects: bool):\n \"\"\"Link required artifacts from dependencies under target workspace dir.\n Return list of object files of dependencies (if `include_objects`).\n\n Includes:\n - Generated code from proto dependencies\n - Header files from all dependencies\n - Generated header files from all dependencies\n - If `include_objects` is True, also object files from all dependencies\n (these will be returned without linking)\n \"\"\"\n # include the source & header files of the current target\n # add objects of all dependencies (direct & transitive), if needed\n source_files = target.props.sources + target.props.headers\n generated_srcs = {}\n objects = []\n\n # add headers of dependencies\n for dep in build_context.generate_all_deps(target):\n source_files.extend(dep.props.get('headers', []))\n\n link_files(source_files, workspace_dir, None, build_context.conf)\n\n # add generated headers and collect objects of dependencies\n for dep in build_context.generate_all_deps(target):\n dep.artifacts.link_types(workspace_dir, [AT.gen_h], build_context.conf)\n if include_objects:\n objects.extend(dep.artifacts.get(AT.object).values())\n\n # add generated code from proto dependencies\n for proto_dep_name in target.props.protos:\n proto_dep = build_context.targets[proto_dep_name]\n proto_dep.artifacts.link_types(workspace_dir, [AT.gen_cc],\n build_context.conf)\n\n return objects", "response": "Link required artifacts from dependencies under target workspace dir."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn list of source files for target.", "response": "def 
get_source_files(target, build_context) -> list:\n \"\"\"Return list of source files for `target`.\"\"\"\n all_sources = list(target.props.sources)\n for proto_dep_name in target.props.protos:\n proto_dep = build_context.targets[proto_dep_name]\n all_sources.extend(proto_dep.artifacts.get(AT.gen_cc).keys())\n return all_sources"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncompile and link a C ++ binary for target.", "response": "def build_cpp(build_context, target, compiler_config, workspace_dir):\n \"\"\"Compile and link a C++ binary for `target`.\"\"\"\n rmtree(workspace_dir)\n binary = join(*split(target.name))\n objects = link_cpp_artifacts(build_context, target, workspace_dir, True)\n buildenv_workspace = build_context.conf.host_to_buildenv_path(\n workspace_dir)\n objects.extend(compile_cc(\n build_context, compiler_config, target.props.in_buildenv,\n get_source_files(target, build_context), workspace_dir,\n buildenv_workspace, target.props.cmd_env))\n bin_file = join(buildenv_workspace, binary)\n link_cmd = (\n [compiler_config.linker, '-o', bin_file] +\n objects + compiler_config.link_flags)\n build_context.run_in_buildenv(\n target.props.in_buildenv, link_cmd, target.props.cmd_env)\n target.artifacts.add(AT.binary, relpath(join(workspace_dir, binary),\n build_context.conf.project_root), binary)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nbuilding a C ++ binary executable", "response": "def cpp_prog_builder(build_context, target):\n \"\"\"Build a C++ binary executable\"\"\"\n yprint(build_context.conf, 'Build CppProg', target)\n workspace_dir = build_context.get_workspace('CppProg', target.name)\n build_cpp(build_context, target, target.compiler_config, workspace_dir)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nbuilding C ++ object files", "response": "def cpp_lib_builder(build_context, target):\n \"\"\"Build C++ object files\"\"\"\n 
yprint(build_context.conf, 'Build CppLib', target)\n workspace_dir = build_context.get_workspace('CppLib', target.name)\n workspace_src_dir = join(workspace_dir, 'src')\n rmtree(workspace_src_dir)\n link_cpp_artifacts(build_context, target, workspace_src_dir, False)\n buildenv_workspace = build_context.conf.host_to_buildenv_path(\n workspace_src_dir)\n objects = compile_cc(\n build_context, target.compiler_config, target.props.in_buildenv,\n get_source_files(target, build_context), workspace_src_dir,\n buildenv_workspace, target.props.cmd_env)\n for obj_file in objects:\n target.artifacts.add(AT.object, obj_file)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns the value of param according to priority and expansion.", "response": "def get(self, param, config, target, fallback):\n \"\"\"Return the value of `param`, according to priority / expansion.\n\n First priority - the target itself.\n Second priority - the project config.\n Third priority - a global default (\"fallback\").\n\n In list-params, a '$*' term processed as \"expansion term\", meaning\n it is replaced with all terms from the config-level.\n \"\"\"\n target_val = target.props.get(param)\n config_val = config.get(param, fallback)\n if not target_val:\n return config_val\n if isinstance(target_val, list):\n val = []\n for el in target_val:\n if el == '$*':\n val.extend(listify(config_val))\n else:\n val.append(el)\n return val\n return target_val"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncreate an argument parser for the given project - specific config file.", "response": "def make_parser(project_config_file: str) -> configargparse.ArgumentParser:\n \"\"\"Return the argument parser.\n\n :param project_config_file: Absolute path to project-specific config file.\n\n If cached parser already exists - return it immediately.\n Otherwise, initialize a new `ConfigArgParser` that is able to take default\n 
values from a hierarchy of config files and environment variables, as well\n as standard ArgParse command-line parsing behavior.\n\n We take default values from configuration files:\n\n - System-wide (see code for location)\n - User-level overrides (see code for location, hopefully under home dir)\n - If a project-specific config file is available, it will override both\n of the above.\n\n Environment variables will override all configuration files.\n For an option `--foo-bar`, if an environment variable named `YBT_FOO_VAR`\n exists, the option value will be taken from there.\n\n Of course, options specified directly on the command-line always win.\n \"\"\"\n global PARSER # pylint: disable=global-statement\n if PARSER is None:\n config_files = ['/etc/yabt.conf', '~/.yconfig']\n if project_config_file:\n config_files.append(project_config_file)\n PARSER = configargparse.getArgumentParser(\n # Support loading default values from system-wide or\n # user-specific config files (user-level overrides system-wide)\n default_config_files=config_files,\n # Show default values in help message\n formatter_class=configargparse.DefaultsFormatter,\n auto_env_var_prefix='ybt_',\n args_for_setting_config_path=['--config'],\n args_for_writing_out_config_file=['--write-out-config-file'])\n # PARSER.add('--config', is_config_file=True, help='Config file path')\n PARSER.add('--artifacts-metadata-file',\n help='Output file to write artifacts metadata to')\n PARSER.add('--continue-after-fail', default=False, action='store_true',\n help='If a target fails continue independent targets')\n PARSER.add('--bin-output-dir', default='ybt_bin')\n PARSER.add('--build-file-name', default='YBuild')\n PARSER.add('--build-base-images', action='store_true')\n PARSER.add('--builders-workspace-dir', default='yabtwork')\n PARSER.add('--default-target-name', default='@default')\n PARSER.add('--docker-pull-cmd', default='docker pull',\n help='Command to use for pulling images from registries')\n 
PARSER.add('--docker-push-cmd', default='docker push',\n help='Command to use for pushing images to registries')\n PARSER.add('--docker-volume',\n help='Use the specified docker volume as buildenv /project')\n PARSER.add('-f', '--flavor', help='Choose build flavor (AKA profile)')\n PARSER.add('--force-pull', action='store_true')\n PARSER.add('-j', '--jobs', type=int, default=1)\n # TODO(itamar): support auto-detection of interactivity-mode\n PARSER.add('--non-interactive', action='store_true')\n PARSER.add('--offline', action='store_true')\n PARSER.add('--output-dot-file', default=None,\n help='Output file for dot graph (default: stdin)')\n # TODO(itamar): this flag should come from the builder, not from here\n PARSER.add('--push', action='store_true')\n PARSER.add('--scm-provider')\n PARSER.add('--no-build-cache', action='store_true',\n help='Disable YBT build cache')\n PARSER.add('--no-docker-cache', action='store_true',\n help='Disable YBT Docker cache')\n PARSER.add('--no-policies', action='store_true')\n PARSER.add('--no-test-cache', action='store_true',\n help='Disable YBT test cache')\n PARSER.add('--test-attempts', type=int, default=1)\n PARSER.add('-v', '--verbose', action='store_true',\n help='More verbose output to STDOUT')\n PARSER.add('--with-tini-entrypoint', action='store_true')\n # Logging flags\n PARSER.add('--logtostderr', action='store_true',\n help='Whether to log to STDERR')\n PARSER.add('--logtostdout', action='store_true',\n help='Whether to log to STDOUT')\n PARSER.add('--loglevel', default='INFO', choices=LOG_LEVELS_CHOICES,\n help='Log level threshold')\n PARSER.add('--show-buildenv-deps', type=bool, default=False,\n help='When running dot, if set to True then the buildenv '\n 'targets are printed to the graph too')\n PARSER.add('cmd', choices=['build', 'dot', 'test', 'tree', 'version'])\n PARSER.add('targets', nargs='*')\n return PARSER"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning absolute 
path to project - specific config file if it exists.", "response": "def find_project_config_file(project_root: str) -> str:\n \"\"\"Return absolute path to project-specific config file, if it exists.\n\n :param project_root: Absolute path to project root directory.\n\n A project config file is a file named `YCONFIG_FILE` found at the top\n level of the project root dir.\n\n Return `None` if project root dir is not specified,\n or if no such file is found.\n \"\"\"\n if project_root:\n project_config_file = os.path.join(project_root, YCONFIG_FILE)\n if os.path.isfile(project_config_file):\n return project_config_file"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning the project - specific user settings module if it exists.", "response": "def get_user_settings_module(project_root: str):\n \"\"\"Return project-specific user settings module, if it exists.\n\n :param project_root: Absolute path to project root directory.\n\n A project settings file is a file named `YSETTINGS_FILE` found at the top\n level of the project root dir.\n\n Return `None` if project root dir is not specified,\n or if no such file is found.\n\n Raise an exception if a file is found, but not importable.\n\n The YSettings file can define 2 special module-level functions that\n interact with the YABT CLI & config system:\n 1. `extend_cli`, if defined, takes the YABT `parser` object and may extend\n it, to add custom command-line flags for the project.\n (careful not to collide with YABT flags...)\n 2. 
`extend_config`, if defined, takes the YABT `config` object and the\n parsed `args` object (returned by the parser), and may extend the\n config - should be used to reflect custom project CLI flags in the\n config object.\n\n Beyond that, the settings module is available in YBuild's under\n `conf.settings` (except for the 2 special functions that are removed).\n \"\"\"\n if project_root:\n project_settings_file = os.path.join(project_root, YSETTINGS_FILE)\n if os.path.isfile(project_settings_file):\n settings_loader = SourceFileLoader(\n 'settings', project_settings_file)\n return settings_loader.load_module()"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncalls a user - supplied settings function and clean it up afterwards.", "response": "def call_user_func(settings_module, func_name, *args, **kwargs):\n \"\"\"Call a user-supplied settings function and clean it up afterwards.\n\n settings_module may be None, or the function may not exist.\n If the function exists, it is called with the specified *args and **kwargs,\n and the result is returned.\n \"\"\"\n if settings_module:\n if hasattr(settings_module, func_name):\n func = getattr(settings_module, func_name)\n try:\n return func(*args, **kwargs)\n finally:\n # cleanup user function\n delattr(settings_module, func_name)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_build_flavor(settings_module, args):\n known_flavors = listify(call_user_func(settings_module, 'known_flavors'))\n if args.flavor:\n if args.flavor not in known_flavors:\n raise ValueError('Unknown build flavor: {}'.format(args.flavor))\n else:\n args.flavor = call_user_func(settings_module, 'default_flavor')\n if args.flavor and args.flavor not in known_flavors:\n raise ValueError(\n 'Unknown default build flavor: {}'.format(args.flavor))", "response": "Update the flavor arg based on the settings API"} {"SOURCE": "codesearchnet", 
"instruction": "Can you write a function in Python 3 where it\ninitializes a YABT CLI environment and return a Config instance.", "response": "def init_and_get_conf(argv: list=None) -> Config:\n \"\"\"Initialize a YABT CLI environment and return a Config instance.\n\n :param argv: Manual override of command-line params to parse (for tests).\n \"\"\"\n colorama.init()\n work_dir = os.path.abspath(os.curdir)\n project_root = search_for_parent_dir(work_dir,\n with_files=set([BUILD_PROJ_FILE]))\n parser = make_parser(find_project_config_file(project_root))\n settings_module = get_user_settings_module(project_root)\n call_user_func(settings_module, 'extend_cli', parser)\n argcomplete.autocomplete(parser)\n args = parser.parse(argv)\n get_build_flavor(settings_module, args)\n config = Config(args, project_root, work_dir, settings_module)\n config.common_conf = call_user_func(\n config.settings, 'get_common_config', config, args)\n config.flavor_conf = call_user_func(\n config.settings, 'get_flavored_config', config, args)\n call_user_func(config.settings, 'extend_config', config, args)\n if not args.no_policies:\n config.policies = listify(call_user_func(\n config.settings, 'get_policies', config))\n return config"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn a list of nodes in the topological sort order.", "response": "def stable_reverse_topological_sort(graph):\n \"\"\"Return a list of nodes in topological sort order.\n\n This topological sort is a **unique** permutation of the nodes\n such that an edge from u to v implies that u appears before v in the\n topological sort order.\n\n Parameters\n ----------\n graph : NetworkX digraph\n A directed graph\n\n Raises\n ------\n NetworkXError\n Topological sort is defined for directed graphs only. 
If the\n graph G is undirected, a NetworkXError is raised.\n NetworkXUnfeasible\n If G is not a directed acyclic graph (DAG) no topological sort\n exists and a NetworkXUnfeasible exception is raised.\n\n Notes\n -----\n - This algorithm is based on a description and proof in\n The Algorithm Design Manual [1]_ .\n - This implementation is modified from networkx 1.11 implementation [2]_\n to achieve stability, support only reverse (allows yielding instead of\n returning a list), and remove the `nbunch` argument (had no use for it).\n\n See also\n --------\n is_directed_acyclic_graph\n\n References\n ----------\n .. [1] Skiena, S. S. The Algorithm Design Manual (Springer-Verlag, 1998).\n http://www.amazon.com/exec/obidos/ASIN/0387948600/ref=ase_thealgorithmrepo/\n .. [2] networkx on GitHub\n https://github.com/networkx/networkx/blob/8358afac209c00b7feb3e81c901098852a9413b3/networkx/algorithms/dag.py#L88-L168\n \"\"\"\n if not graph.is_directed():\n raise networkx.NetworkXError(\n 'Topological sort not defined on undirected graphs.')\n\n # nonrecursive version\n seen = set()\n explored = set()\n\n for v in sorted(graph.nodes()):\n if v in explored:\n continue\n fringe = [v] # nodes yet to look at\n while fringe:\n w = fringe[-1] # depth first search\n if w in explored: # already looked down this branch\n fringe.pop()\n continue\n seen.add(w) # mark as seen\n # Check successors for cycles and for new nodes\n new_nodes = []\n for n in sorted(graph[w]):\n if n not in explored:\n if n in seen: # CYCLE!! 
OH NOOOO!!\n raise networkx.NetworkXUnfeasible(\n 'Graph contains a cycle.')\n new_nodes.append(n)\n if new_nodes: # Add new_nodes to fringe\n fringe.extend(new_nodes)\n else: # No new nodes so w is fully explored\n explored.add(w)\n yield w\n fringe.pop()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nraise error about unresolved targets during graph parsing.", "response": "def raise_unresolved_targets(build_context, conf, unknown_seeds, seed_refs):\n \"\"\"Raise error about unresolved targets during graph parsing.\"\"\"\n\n def format_target(target_name):\n # TODO: suggest similar known target names\n build_module = split_build_module(target_name)\n return '{} (in {})'.format(target_name,\n conf.get_build_file_path(build_module))\n\n def format_unresolved(seed):\n if seed not in seed_refs:\n return seed\n seed_ref = seed_refs[seed]\n reasons = []\n if seed_ref.on_cli:\n reasons.append('seen on command line')\n if seed_ref.from_default:\n reasons.append('specified as default target in {}'\n .format(conf.get_project_build_file))\n if seed_ref.dep_of:\n reasons.append(\n 'dependency of ' +\n ', '.join(format_target(target_name)\n for target_name in sorted(seed_ref.dep_of)))\n if seed_ref.buildenv_of:\n reasons.append(\n 'buildenv of ' +\n ', '.join(format_target(target_name)\n for target_name in sorted(seed_ref.buildenv_of)))\n\n return '{} - {}'.format(seed, ', '.join(reasons))\n\n unresolved_str = '\\n'.join(format_unresolved(target_name)\n for target_name in sorted(unknown_seeds))\n num_target_str = '{} target'.format(len(unknown_seeds))\n if len(unknown_seeds) > 1:\n num_target_str += 's'\n raise ValueError('Could not resolve {}:\\n{}'\n .format(num_target_str, unresolved_str))"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef register_scm_provider(scm_name: str):\n\n def register_decorator(scm_class: SourceControl):\n \"\"\"Decorator for registering SCM provider.\"\"\"\n if scm_name in 
ScmManager.providers:\n raise KeyError('{} already registered!'.format(scm_name))\n ScmManager.providers[scm_name] = scm_class\n SourceControl.register(scm_class)\n logger.debug('Registered {0} SCM from {1.__module__}.{1.__name__}',\n scm_name, scm_class)\n return scm_class\n return register_decorator", "response": "Returns a decorator for registering a SCM provider named scm_name."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nload and return named SCM provider instance.", "response": "def get_provider(cls, scm_name: str, conf) -> SourceControl:\n \"\"\"Load and return named SCM provider instance.\n\n :param conf: A yabt.config.Config object used to initialize the SCM\n provider instance.\n\n :raises KeyError: If no SCM provider with name `scm_name` registered.\n \"\"\"\n for entry_point in pkg_resources.iter_entry_points('yabt.scm',\n scm_name):\n entry_point.load()\n logger.debug('Loaded SCM provider {0.name} from {0.module_name} '\n '(dist {0.dist})', entry_point)\n logger.debug('Loaded {} SCM providers', len(cls.providers))\n if scm_name not in cls.providers:\n raise KeyError('Unknown SCM identifier {}'.format(scm_name))\n return cls.providers[scm_name](conf)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nwrites build graph in dot format to out_f file - like object.", "response": "def write_dot(build_context, conf: Config, out_f):\n \"\"\"Write build graph in dot format to `out_f` file-like object.\"\"\"\n not_buildenv_targets = get_not_buildenv_targets(build_context)\n prebuilt_targets = get_prebuilt_targets(build_context)\n out_f.write('strict digraph {\\n')\n for node in build_context.target_graph.nodes:\n if conf.show_buildenv_deps or node in not_buildenv_targets:\n cached = node in prebuilt_targets\n fillcolor = 'fillcolor=\"grey\",style=filled' if cached else ''\n color = TARGETS_COLORS.get(\n build_context.targets[node].builder_name, 'black')\n out_f.write(' \"{}\" 
[color=\"{}\",{}];\\n'.format(node, color,\n fillcolor))\n out_f.writelines(' \"{}\" -> \"{}\";\\n'.format(u, v)\n for u, v in build_context.target_graph.edges\n if conf.show_buildenv_deps or\n (u in not_buildenv_targets and v in not_buildenv_targets))\n out_f.write('}\\n\\n')"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning a path to a private workspace dir.", "response": "def get_workspace(self, *parts) -> str:\n \"\"\"Return a path to a private workspace dir.\n Create sub-tree of dirs using strings from `parts` inside workspace,\n and return full path to innermost directory.\n\n Upon returning successfully, the directory will exist (potentially\n changed to a safe FS name), even if it didn't exist before, including\n any intermediate parent directories.\n \"\"\"\n workspace_dir = os.path.join(self.conf.get_workspace_path(),\n *(get_safe_path(part) for part in parts))\n if not os.path.isdir(workspace_dir):\n # exist_ok=True in case of concurrent creation of the same dir\n os.makedirs(workspace_dir, exist_ok=True)\n return workspace_dir"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn a path to the binaries dir for a build module.", "response": "def get_bin_dir(self, build_module: str) -> str:\n \"\"\"Return a path to the binaries dir for a build module dir.\n Create sub-tree of missing dirs as needed, and return full path\n to innermost directory.\n \"\"\"\n bin_dir = os.path.join(self.conf.get_bin_path(), build_module)\n if not os.path.isdir(bin_dir):\n # exist_ok=True in case of concurrent creation of the same dir\n os.makedirs(bin_dir, exist_ok=True)\n return bin_dir"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef walk_target_deps_topological_order(self, target: Target):\n all_deps = get_descendants(self.target_graph, target.name)\n for dep_name in topological_sort(self.target_graph):\n if dep_name in 
all_deps:\n yield self.targets[dep_name]", "response": "Generate all dependencies of target by topological sort order."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ngenerate only direct dependencies of target.", "response": "def generate_direct_deps(self, target: Target):\n \"\"\"Generate only direct dependencies of `target`.\"\"\"\n yield from (self.targets[dep_name] for dep_name in sorted(target.deps))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef generate_dep_names(self, target: Target):\n yield from sorted(get_descendants(self.target_graph, target.name))", "response": "Generate names of all dependencies of target."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef generate_all_deps(self, target: Target):\n yield from (self.targets[dep_name]\n for dep_name in self.generate_dep_names(target))", "response": "Generate all dependencies of target."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef register_target(self, target: Target):\n if target.name in self.targets:\n first = self.targets[target.name]\n raise NameError(\n 'Target with name \"{0.name}\" ({0.builder_name} from module '\n '\"{1}\") already exists - defined first as '\n '{2.builder_name} in module \"{3}\"'.format(\n target, split_build_module(target.name),\n first, split_build_module(first.name)))\n self.targets[target.name] = target\n self.targets_by_module[split_build_module(target.name)].add(\n target.name)", "response": "Register a target instance in this build context."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef remove_target(self, target_name: str):\n if target_name in self.targets:\n del self.targets[target_name]\n build_module = split_build_module(target_name)\n if build_module in 
self.targets_by_module:\n self.targets_by_module[build_module].remove(target_name)", "response": "Removes a target from this build context."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_target_extraction_context(self, build_file_path: str) -> dict:\n extraction_context = {}\n for name, builder in Plugin.builders.items():\n extraction_context[name] = extractor(name, builder,\n build_file_path, self)\n return extraction_context", "response": "Return a build file parser target extraction context."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning a graph induced by buildenv nodes.", "response": "def get_buildenv_graph(self):\n \"\"\"Return a graph induced by buildenv nodes\"\"\"\n # This implementation first obtains all subsets of nodes that all\n # buildenvs depend on, and then builds a subgraph induced by the union\n # of these subsets. This can be very non-optimal.\n # TODO(itamar): Reimplement efficient algo, or redesign buildenvs\n buildenvs = set(target.buildenv for target in self.targets.values()\n if target.buildenv)\n return nx.DiGraph(self.target_graph.subgraph(reduce(\n lambda x, y: x | set(y),\n (get_descendants(self.target_graph, buildenv)\n for buildenv in buildenvs), buildenvs)))"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngenerating ready targets from the graph_copy.", "response": "def ready_nodes_iter(self, graph_copy):\n \"\"\"Generate ready targets from the graph `graph_copy`.\n\n The input graph is mutated by this method, so it has to be a mutable\n copy of the graph (e.g. 
not original copy, or read-only view).\n\n Caller **must** call `done()` after processing every generated\n target, so additional ready targets can be added to the queue.\n\n The invariant: a target may be yielded from this generator only\n after all its descendant targets were notified \"done\".\n \"\"\"\n\n def is_ready(target_name):\n \"\"\"Return True if the node `target_name` is \"ready\" in the graph\n `graph_copy`.\n\n \"Ready\" means that the graph doesn't contain any more nodes that\n `target_name` depends on (e.g. it has no successors).\n \"\"\"\n try:\n next(graph_copy.successors(target_name))\n except StopIteration:\n return True\n return False\n\n ready_nodes = deque(sorted(\n target_name for target_name in graph_copy.nodes\n if is_ready(target_name)))\n produced_event = threading.Event()\n failed_event = threading.Event()\n\n def make_done_callback(target: Target):\n \"\"\"Return a callable \"done\" notifier to\n report a target as processed.\"\"\"\n\n def done_notifier():\n \"\"\"Mark target as done, adding new ready nodes to queue\"\"\"\n if graph_copy.has_node(target.name):\n affected_nodes = list(sorted(\n graph_copy.predecessors(target.name)))\n graph_copy.remove_node(target.name)\n ready_nodes.extend(\n target_name for target_name in affected_nodes\n if is_ready(target_name))\n produced_event.set()\n\n return done_notifier\n\n def make_retry_callback(target: Target):\n \"\"\"Return a callable \"retry\" notifier to\n report a target as in need of retry.\n Currently for tests we rebuild the target\n when it's not necessary.\"\"\"\n\n def retry_notifier():\n \"\"\"Mark target as retry, re-entering node to end of queue\"\"\"\n if graph_copy.has_node(target.name):\n ready_nodes.append(target.name)\n produced_event.set()\n\n return retry_notifier\n\n def make_fail_callback(target: Target):\n \"\"\"Return a callable \"fail\" notifier to\n report a target as failed after all retries.\"\"\"\n\n def fail_notifier(ex):\n \"\"\"Mark target as failed, taking 
it and ancestors\n out of the queue\"\"\"\n # TODO(Dana) separate \"failed to build target\" errors from\n # \"failed to run\" errors.\n # see: https://github.com/resonai/ybt/issues/124\n if isinstance(ex, CalledProcessError):\n sys.stdout.write(ex.stdout.decode('utf-8'))\n sys.stderr.write(ex.stderr.decode('utf-8'))\n if graph_copy.has_node(target.name):\n self.failed_nodes[target.name] = ex\n # removing all ancestors (nodes that depend on this one)\n affected_nodes = get_ancestors(graph_copy, target.name)\n graph_copy.remove_node(target.name)\n for affected_node in affected_nodes:\n if affected_node in self.skipped_nodes:\n continue\n if graph_copy.has_node(affected_node):\n self.skipped_nodes.append(affected_node)\n graph_copy.remove_node(affected_node)\n if self.conf.continue_after_fail:\n logger.info('Failed target: {} due to error: {}',\n target.name, ex)\n produced_event.set()\n else:\n failed_event.set()\n fatal('`{}\\': {}', target.name, ex)\n\n return fail_notifier\n\n while True:\n while len(ready_nodes) == 0:\n if graph_copy.order() == 0:\n return\n if failed_event.is_set():\n return\n produced_event.wait(0.5)\n produced_event.clear()\n next_node = ready_nodes.popleft()\n node = self.targets[next_node]\n node.done = make_done_callback(node)\n # TODO(bergden) retry assumes no need to update predecessors:\n # This means we don't support retries for targets that are\n # prerequisites of other targets (builds, installs)\n node.retry = make_retry_callback(node)\n node.fail = make_fail_callback(node)\n yield node"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nrunning a command in a named BuildEnv Docker image.", "response": "def run_in_buildenv(\n self, buildenv_target_name: str, cmd: list, cmd_env: dict=None,\n work_dir: str=None, auto_uid: bool=True, runtime: str=None,\n **kwargs):\n \"\"\"Run a command in a named BuildEnv Docker image.\n\n :param buildenv_target_name: A named Docker image target in which the\n command should 
be run.\n :param cmd: The command to run, as you'd pass to subprocess.run()\n :param cmd_env: A dictionary of environment variables for the command.\n :param work_dir: A different work dir to run in.\n Either absolute path, or relative to project root.\n :param auto_uid: Whether to run as the active uid:gid, or as root.\n :param kwargs: Extra keyword arguments that are passed to the\n subprocess.run() call that runs the BuildEnv container\n (for, e.g. timeout arg, stdout/err redirection, etc.)\n\n :raises KeyError: If named BuildEnv is not a registered BuildEnv image\n \"\"\"\n buildenv_target = self.targets[buildenv_target_name]\n # TODO(itamar): Assert that buildenv_target is up to date\n redirection = any(\n stream_key in kwargs\n for stream_key in ('stdin', 'stdout', 'stderr', 'input'))\n docker_run = ['docker', 'run']\n # if not self.conf.non_interactive:\n # docker_run.append('-i')\n if not redirection:\n docker_run.append('-t')\n project_vol = (self.conf.docker_volume if self.conf.docker_volume else\n self.conf.project_root)\n container_work_dir = PurePath('/project')\n if work_dir:\n container_work_dir /= work_dir\n if runtime:\n docker_run.extend([\n '--runtime', runtime,\n ])\n\n docker_run.extend([\n '--rm',\n '-v', project_vol + ':/project',\n # TODO: windows containers?\n '-w', container_work_dir.as_posix(),\n ])\n if cmd_env:\n for key, value in cmd_env.items():\n # TODO(itamar): escaping\n docker_run.extend(['-e', '{}={}'.format(key, value)])\n if platform.system() == 'Linux' and auto_uid:\n # Fix permissions for bind-mounted project dir\n # The fix is not needed when using Docker For Mac / Windows,\n # because it is somehow taken care of by the sharing mechanics\n docker_run.extend([\n '-u', '{}:{}'.format(os.getuid(), os.getgid()),\n '-v', '/etc/shadow:/etc/shadow:ro',\n '-v', '/etc/group:/etc/group:ro',\n '-v', '/etc/passwd:/etc/passwd:ro',\n '-v', '/etc/sudoers:/etc/sudoers:ro',\n ])\n 
docker_run.append(format_qualified_image_name(buildenv_target))\n docker_run.extend(cmd)\n logger.info('Running command in build env \"{}\" using command {}',\n buildenv_target_name, docker_run)\n # TODO: Consider changing the PIPEs to temp files.\n if 'stderr' not in kwargs:\n kwargs['stderr'] = PIPE\n if 'stdout' not in kwargs:\n kwargs['stdout'] = PIPE\n result = run(docker_run, check=True, **kwargs)\n\n # TODO(Dana): Understand what is the right encoding and remove the\n # try except\n if kwargs['stdout'] is PIPE:\n try:\n sys.stdout.write(result.stdout.decode('utf-8'))\n except UnicodeEncodeError as e:\n sys.stderr.write('tried writing the stdout of {},\\n but it '\n 'has a problematic character:\\n {}\\n'\n 'hex dump of stdout:\\n{}\\n'\n .format(docker_run, str(e), codecs.encode(\n result.stdout, 'hex').decode('utf8')))\n if kwargs['stderr'] is PIPE:\n try:\n sys.stderr.write(result.stderr.decode('utf-8'))\n except UnicodeEncodeError as e:\n sys.stderr.write('tried writing the stderr of {},\\n but it '\n 'has a problematic character:\\n {}\\n'\n 'hex dump of stderr:\\n{}\\n'\n .format(docker_run, str(e), codecs.encode(\n result.stderr, 'hex').decode('utf8')))\n return result"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef build_target(self, target: Target):\n builder = Plugin.builders[target.builder_name]\n if builder.func:\n logger.debug('About to invoke the {} builder function for {}',\n target.builder_name, target.name)\n builder.func(self, target)\n else:\n logger.debug('Skipping {} builder function for target {} (no '\n 'function registered)', target.builder_name, target)", "response": "Invoke the builder function for a target."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nregistering the artifact metadata dictionary for a built target.", "response": "def register_target_artifact_metadata(self, target: Target, metadata: dict):\n \"\"\"Register the 
artifact metadata dictionary for a built target.\"\"\"\n with self.context_lock:\n self.artifacts_metadata[target.name] = metadata"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nwriting out a JSON file with all built targets artifact metadata and any other metadata.", "response": "def write_artifacts_metadata(self):\n \"\"\"Write out a JSON file with all built targets artifact metadata,\n if such output file is specified.\"\"\"\n if self.conf.artifacts_metadata_file:\n logger.info('Writing artifacts metadata to file \"%s\"',\n self.conf.artifacts_metadata_file)\n with open(self.conf.artifacts_metadata_file, 'w') as fp:\n json.dump(self.artifacts_metadata, fp)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning True if target can be used in cache.", "response": "def can_use_cache(self, target: Target) -> bool:\n \"\"\"Return True if should attempt to load `target` from cache.\n Return False if `target` has to be built, regardless of its cache\n status (because cache is disabled, or dependencies are dirty).\n \"\"\"\n # if caching is disabled for this execution, then all targets are dirty\n if self.conf.no_build_cache:\n return False\n # if the target's `cachable` prop is falsy, then it is dirty\n if not target.props.cachable:\n return False\n # if any dependency of the target is dirty, then the target is dirty\n if any(self.targets[dep].is_dirty for dep in target.deps):\n return False\n # if the target has a dirty buildenv then it's also dirty\n if target.buildenv and self.targets[target.buildenv].is_dirty:\n return False\n return True"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_build_file_path(self, build_module) -> str:\n project_root = Path(self.project_root)\n build_module = norm_proj_path(build_module, '')\n return str(project_root / build_module /\n (BUILD_PROJ_FILE if '' == build_module\n else self.build_file_name))", 
"response": "Return a full path to the build file of build_module."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef guess_uri_type(uri: str, hint: str=None):\n # TODO(itamar): do this better\n if hint:\n return hint\n norm_uri = uri.lower()\n parsed_uri = urlparse(norm_uri)\n if parsed_uri.path.endswith('.git'):\n return 'git'\n if parsed_uri.scheme in ('http', 'https'):\n ext = splitext(parsed_uri.path)[-1]\n if ext in KNOWN_ARCHIVES:\n return 'archive'\n return 'single'\n return 'local'", "response": "Return a guess for the URI type based on the URI string uri."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef git_handler(unused_build_context, target, fetch, package_dir, tar):\n target_name = split_name(target.name)\n # clone the repository under a private builder workspace\n repo_dir = join(package_dir, fetch.name) if fetch.name else package_dir\n try:\n repo = git.Repo(repo_dir)\n except (InvalidGitRepositoryError, NoSuchPathError):\n repo = git.Repo.clone_from(fetch.uri, repo_dir)\n assert repo.working_tree_dir == repo_dir\n tar.add(package_dir, arcname=target_name, filter=gitfilter)", "response": "Handle remote Git repository URI."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef archive_handler(unused_build_context, target, fetch, package_dir, tar):\n package_dest = join(package_dir, basename(urlparse(fetch.uri).path))\n package_content_dir = join(package_dir, 'content')\n extract_dir = (join(package_content_dir, fetch.name)\n if fetch.name else package_content_dir)\n fetch_url(fetch.uri, package_dest, package_dir)\n\n # TODO(itamar): Avoid repetition of splitting extension here and above\n # TODO(itamar): Don't use `extractall` on potentially untrusted archives\n ext = splitext(package_dest)[-1].lower()\n if ext in ('.gz', '.bz2', '.tgz'):\n with 
tarfile.open(package_dest, 'r:*') as src_tar:\n src_tar.extractall(extract_dir)\n elif ext in ('.zip',):\n with ZipFile(package_dest, 'r') as zipf:\n zipf.extractall(extract_dir)\n else:\n raise ValueError('Unsupported extension {}'.format(ext))\n tar.add(package_content_dir, arcname=split_name(target.name))", "response": "Handle remote downloadable archive URI."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef fetch_file_handler(unused_build_context, target, fetch, package_dir, tar):\n dl_dir = join(package_dir, fetch.name) if fetch.name else package_dir\n fetch_url(fetch.uri,\n join(dl_dir, basename(urlparse(fetch.uri).path)),\n dl_dir)\n tar.add(package_dir, arcname=split_name(target.name))", "response": "Handle remote downloadable file URI."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_installer_desc(build_context, target) -> tuple:\n workspace_dir = build_context.get_workspace('CustomInstaller', target.name)\n target_name = split_name(target.name)\n script_name = basename(target.props.script)\n package_tarball = '{}.tar.gz'.format(join(workspace_dir, target_name))\n return target_name, script_name, package_tarball", "response": "Return a target_name script_name package_tarball tuple for target"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_prebuilt_targets(build_context):\n logger.info('Scanning for cached base images')\n # deps that are part of cached based images\n contained_deps = set()\n # deps that are needed by images that are going to be built,\n # but are not part of their base images\n required_deps = set()\n # mapping from target name to set of all its deps (descendants)\n cached_descendants = CachedDescendants(build_context.target_graph)\n\n for target_name, target in build_context.targets.items():\n if 'image_caching_behavior' not in target.props:\n continue\n 
image_name = get_image_name(target)\n image_tag = target.props.image_tag\n icb = ImageCachingBehavior(image_name, image_tag,\n target.props.image_caching_behavior)\n target.image_id = handle_build_cache(build_context.conf, image_name,\n image_tag, icb)\n if target.image_id:\n # mark deps of cached base image as \"contained\"\n image_deps = cached_descendants.get(target_name)\n contained_deps.update(image_deps)\n contained_deps.add(target.name)\n else:\n # mark deps of image that is going to be built\n # (and are not deps of its base image) as \"required\"\n image_deps = cached_descendants.get(target_name)\n base_image_deps = cached_descendants.get(target.props.base_image)\n required_deps.update(image_deps - base_image_deps)\n return contained_deps - required_deps", "response": "Return the set of target names that are contained in cached base images and are not required by images that still need to be built."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef write_summary(summary: dict, cache_dir: str):\n # update the summary last-accessed timestamp\n summary['accessed'] = time()\n with open(join(cache_dir, 'summary.json'), 'w') as summary_file:\n summary_file.write(json.dumps(summary, indent=4, sort_keys=True))", "response": "Write the summary JSON to cache_dir."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nload target from build cache", "response": "def load_target_from_cache(target: Target, build_context) -> (bool, bool):\n \"\"\"Load `target` from build cache, restoring cached artifacts & summary.\n Return (build_cached, test_cached) tuple.\n\n `build_cached` is True if target restored successfully.\n `test_cached` is True if build is cached and test_time metadata is valid.\n \"\"\"\n cache_dir = build_context.conf.get_cache_dir(target, build_context)\n if not isdir(cache_dir):\n logger.debug('No 
cache dir found for target {}', target.name)\n return False, False\n # read summary file and restore relevant fields into target\n with open(join(cache_dir, 'summary.json'), 'r') as summary_file:\n summary = json.loads(summary_file.read())\n for field in ('build_time', 'test_time', 'created', 'accessed'):\n target.summary[field] = summary.get(field)\n # compare artifacts hash\n if (hash_tree(join(cache_dir, 'artifacts.json')) !=\n summary.get('artifacts_hash', 'no hash')):\n return False, False\n # read cached artifacts metadata\n with open(join(cache_dir, 'artifacts.json'), 'r') as artifacts_meta_file:\n artifact_desc = json.loads(artifacts_meta_file.read())\n # restore all artifacts\n for type_name, artifact_list in artifact_desc.items():\n artifact_type = getattr(AT, type_name)\n for artifact in artifact_list:\n # restore artifact to its expected src path\n if artifact_type not in _NO_CACHE_TYPES:\n if not restore_artifact(\n artifact['src'], artifact['hash'], build_context.conf):\n target.artifacts.reset()\n return False, False\n if artifact_type in (AT.docker_image,):\n # \"restore\" docker image from local registry\n image_id = artifact['src']\n image_full_name = artifact['dst']\n try:\n tag_docker_image(image_id, image_full_name)\n except:\n logger.debug('Docker image with ID {} not found locally',\n image_id)\n target.artifacts.reset()\n return False, False\n target.image_id = image_id\n target.artifacts.add(\n artifact_type, artifact['src'], artifact['dst'])\n write_summary(summary, cache_dir)\n # check that the testing cache exists.\n if not isfile(join(cache_dir, 'tested.json')):\n logger.debug('No testing cache found for target {}', target.name)\n return True, False\n # read the testing cache.\n with open(join(cache_dir, 'tested.json'), 'r') as tested_file:\n target.tested = json.loads(tested_file.read())\n test_key = target.test_hash(build_context)\n return True, (target.tested.get(test_key) is not None)"} {"SOURCE": "codesearchnet", "instruction": 
"Can you implement a function in Python 3 that\ncopies the artifact at src_path with hash artifact_hash to artifacts cache dir.", "response": "def copy_artifact(src_path: str, artifact_hash: str, conf: Config):\n \"\"\"Copy the artifact at `src_path` with hash `artifact_hash` to artifacts\n cache dir.\n\n If an artifact already exists at that location, it is assumed to be\n identical (since it's based on hash), and the copy is skipped.\n\n TODO: pruning policy to limit cache size.\n \"\"\"\n cache_dir = conf.get_artifacts_cache_dir()\n if not isdir(cache_dir):\n makedirs(cache_dir)\n cached_artifact_path = join(cache_dir, artifact_hash)\n if isfile(cached_artifact_path) or isdir(cached_artifact_path):\n logger.debug('Skipping copy of existing cached artifact {} -> {}',\n src_path, cached_artifact_path)\n return\n abs_src_path = join(conf.project_root, src_path)\n logger.debug('Caching artifact {} under {}',\n abs_src_path, cached_artifact_path)\n shutil.copy(abs_src_path, cached_artifact_path)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef restore_artifact(src_path: str, artifact_hash: str, conf: Config):\n cache_dir = conf.get_artifacts_cache_dir()\n if not isdir(cache_dir):\n return False\n cached_artifact_path = join(cache_dir, artifact_hash)\n if isfile(cached_artifact_path) or isdir(cached_artifact_path):\n # verify cached item hash matches expected hash\n actual_hash = hash_tree(cached_artifact_path)\n if actual_hash != artifact_hash:\n logger.warning(\n 'Cached artifact {} expected hash {} != actual hash {}',\n src_path, artifact_hash, actual_hash)\n rmnode(cached_artifact_path)\n return False\n # if something exists in src_path, check if it matches the cached item\n abs_src_path = join(conf.project_root, src_path)\n if isfile(abs_src_path) or isdir(abs_src_path):\n existing_hash = hash_tree(src_path)\n if existing_hash == artifact_hash:\n logger.debug('Existing artifact {} matches cached hash {}',\n 
src_path, artifact_hash)\n return True\n logger.debug('Replacing existing artifact {} with cached one',\n src_path)\n rmnode(abs_src_path)\n logger.debug('Restoring cached artifact {} to {}',\n artifact_hash, src_path)\n shutil.copy(cached_artifact_path, abs_src_path)\n return True\n logger.debug('No cached artifact for {} with hash {}',\n src_path, artifact_hash)\n return False", "response": "Restore the artifact whose hash is artifact_hash to src_path."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef save_target_in_cache(target: Target, build_context):\n cache_dir = build_context.conf.get_cache_dir(target, build_context)\n if isdir(cache_dir):\n rmtree(cache_dir)\n makedirs(cache_dir)\n logger.debug('Saving target metadata in cache under {}', cache_dir)\n # write target metadata\n with open(join(cache_dir, 'target.json'), 'w') as meta_file:\n meta_file.write(target.json(build_context))\n # copy artifacts to artifact cache by hash\n artifacts = target.artifacts.get_all()\n artifact_hashes = {}\n for artifact_type, artifact_map in artifacts.items():\n if artifact_type in (AT.docker_image,):\n continue\n for dst_path, src_path in artifact_map.items():\n artifact_hashes[dst_path] = hash_tree(src_path)\n # not caching \"app\" artifacts, since they're part\n # of the source tree\n if artifact_type not in _NO_CACHE_TYPES:\n copy_artifact(src_path, artifact_hashes[dst_path],\n build_context.conf)\n # serialize target artifacts metadata + hashes\n artifacts_desc = {\n artifact_type.name:\n [{'dst': dst_path, 'src': src_path,\n 'hash': artifact_hashes.get(dst_path)}\n for dst_path, src_path in artifact_map.items()]\n for artifact_type, artifact_map in artifacts.items()\n }\n with open(join(cache_dir, 'artifacts.json'), 'w') as artifacts_meta_file:\n artifacts_meta_file.write(json.dumps(artifacts_desc, indent=4,\n sort_keys=True))\n # copying the summary dict so I can modify it without mutating the target\n 
summary = dict(target.summary)\n summary['name'] = target.name\n summary['artifacts_hash'] = hash_tree(join(cache_dir, 'artifacts.json'))\n if summary.get('created') is None:\n summary['created'] = time()\n write_summary(summary, cache_dir)", "response": "Save target to build cache for future reuse."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn set of descendants of node named key in target_graph.", "response": "def get(self, key):\n \"\"\"Return set of descendants of node named `key` in `target_graph`.\n\n Returns from cached dict if exists, otherwise compute over the graph\n and cache results in the dict.\n \"\"\"\n if key not in self:\n self[key] = set(get_descendants(self._target_graph, key))\n return self[key]"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef fatal(msg, *args, **kwargs):\n exc_str = format_exc()\n if exc_str.strip() != 'NoneType: None':\n logger.info('{}', format_exc())\n fatal_noexc(msg, *args, **kwargs)", "response": "Print a red msg to STDERR and exit."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nprint a red msg to STDERR and exit.", "response": "def fatal_noexc(msg, *args, **kwargs):\n \"\"\"Print a red `msg` to STDERR and exit.\n\n The message is formatted with `args` & `kwargs`.\n \"\"\"\n print(Fore.RED + 'Fatal: ' + msg.format(*args, **kwargs) + Style.RESET_ALL,\n file=sys.stderr)\n sys.exit(1)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef link_node(abs_src: str, abs_dest: str, force: bool=False):\n dest_parent_dir = split(abs_dest)[0]\n if not isdir(dest_parent_dir):\n # exist_ok=True in case of concurrent creation of the same\n # parent dir\n os.makedirs(dest_parent_dir, exist_ok=True)\n if isfile(abs_src):\n # sync file by linking it to dest\n link_func(abs_src, abs_dest, force)\n elif isdir(abs_src):\n # sync dir by recursively linking 
files under it to dest\n shutil.copytree(abs_src, abs_dest,\n copy_function=functools.partial(link_func,\n force=force),\n ignore=shutil.ignore_patterns('.git'))\n else:\n raise FileNotFoundError(abs_src)", "response": "Sync source node to destination path using hard links."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef link_files(files: set, workspace_src_dir: str,\n common_parent: str, conf):\n \"\"\"Sync the list of files and directories in `files` to destination\n directory specified by `workspace_src_dir`.\n\n \"Sync\" in the sense that every file given in `files` will be\n hard-linked under `workspace_src_dir` after this function returns, and no\n other files will exist under `workspace_src_dir`.\n\n For directories in `files`, hard-links of contained files are\n created recursively.\n\n All paths in `files`, and the `workspace_src_dir`, must be relative\n to `conf.project_root`.\n\n If `common_parent` is given, and it is a common parent directory of all\n `files`, then the `common_parent` part is truncated from the\n sync'ed files destination path under `workspace_src_dir`.\n\n :raises FileNotFoundError: If `files` contains files or directories\n that do not exist.\n\n :raises ValueError: If `common_parent` is given (not `None`), but is *NOT*\n a common parent of all `files`.\n \"\"\"\n norm_dir = normpath(workspace_src_dir)\n base_dir = ''\n if common_parent:\n common_parent = normpath(common_parent)\n base_dir = commonpath(list(files) + [common_parent])\n if base_dir != common_parent:\n raise ValueError('{} is not the common parent of all target '\n 'sources and data'.format(common_parent))\n logger.debug(\n 'Rebasing files in image relative to common parent dir {}',\n base_dir)\n num_linked = 0\n for src in files:\n abs_src = join(conf.project_root, src)\n abs_dest = join(conf.project_root, workspace_src_dir,\n relpath(src, base_dir))\n link_node(abs_src, abs_dest, conf.builders_workspace_dir in src)\n 
num_linked += 1\n return num_linked", "response": "Sync the list of files and directories in files to destination\n directory specified by workspace_src_dir."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn a normalized path for the path observed in build_module.", "response": "def norm_proj_path(path, build_module):\n \"\"\"Return a normalized path for the `path` observed in `build_module`.\n\n The normalized path is \"normalized\" (in the `os.path.normpath` sense),\n relative from the project root directory, and OS-native.\n\n Supports making references from project root directory by prefixing the\n path with \"//\".\n\n :raises ValueError: If path references outside the project sandbox.\n \"\"\"\n if path == '//':\n return ''\n\n if path.startswith('//'):\n norm = normpath(path[2:])\n if norm[0] in ('.', '/', '\\\\'):\n raise ValueError(\"Invalid path: `{}'\".format(path))\n return norm\n\n if path.startswith('/'):\n raise ValueError(\"Invalid path: `{}' - use '//' to start from \"\n \"project root\".format(path))\n\n if build_module == '//':\n build_module = ''\n norm = normpath(join(build_module, path))\n if norm.startswith('..'):\n raise ValueError(\n \"Invalid path `{}' - must remain inside project sandbox\"\n .format(path))\n return norm.strip('.')"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef search_for_parent_dir(start_at: str=None, with_files: set=None,\n with_dirs: set=None) -> str:\n \"\"\"Return absolute path of first parent directory of `start_at` that\n contains all files `with_files` and all dirs `with_dirs`\n (including `start_at`).\n\n If `start_at` not specified, start at current working directory.\n\n :param start_at: Initial path for searching for the project build file.\n\n Returns `None` upon reaching FS root without finding a project buildfile.\n \"\"\"\n if not start_at:\n start_at = os.path.abspath(os.curdir)\n if not with_files:\n with_files = 
set()\n if not with_dirs:\n with_dirs = set()\n exp_hits = len(with_files) + len(with_dirs)\n while start_at:\n num_hits = 0\n for entry in scandir(start_at):\n if ((entry.is_file() and entry.name in with_files) or\n (entry.is_dir() and entry.name in with_dirs)):\n num_hits += 1\n if num_hits == exp_hits:\n return start_at\n cur_level = start_at\n start_at = os.path.split(cur_level)[0]\n if os.path.realpath(cur_level) == os.path.realpath(start_at):\n # looped on root once\n break", "response": "Search for the first parent directory of start_at that contains all files and all dirs with_files and with_dirs."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\naccumulating content of file at filepath in hasher.", "response": "def acc_hash(filepath: str, hasher):\n \"\"\"Accumulate content of file at `filepath` in `hasher`.\"\"\"\n with open(filepath, 'rb') as f:\n while True:\n chunk = f.read(_BUF_SIZE)\n if not chunk:\n break\n hasher.update(chunk)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns the hexdigest MD5 hash of content of file at filepath.", "response": "def hash_file(filepath: str) -> str:\n \"\"\"Return the hexdigest MD5 hash of content of file at `filepath`.\"\"\"\n md5 = hashlib.md5()\n acc_hash(filepath, md5)\n return md5.hexdigest()"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn the hexdigest MD5 hash of file or directory at filepath.", "response": "def hash_tree(filepath: str) -> str:\n \"\"\"Return the hexdigest MD5 hash of file or directory at `filepath`.\n\n If file - just hash file content.\n If directory - walk the directory, and accumulate hashes of all the\n relative paths + contents of files under the directory.\n \"\"\"\n if isfile(filepath):\n return hash_file(filepath)\n if isdir(filepath):\n base_dir = filepath\n md5 = hashlib.md5()\n for root, dirs, files in walk(base_dir):\n dirs.sort()\n for fname in 
sorted(files):\n filepath = join(root, fname)\n # consistent hashing between POSIX & Windows\n md5.update(relpath(filepath, base_dir)\n .replace('\\\\', '/').encode('utf8'))\n acc_hash(filepath, md5)\n return md5.hexdigest()\n return None"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nadds an artifact of type artifact_type at src_path to the set of dependent objects.", "response": "def add(self, artifact_type: ArtifactType, src_path: str,\n dst_path: str=None):\n \"\"\"Add an artifact of type `artifact_type` at `src_path`.\n\n `src_path` should be the path of the file relative to project root.\n `dst_path`, if given, is the desired path of the artifact in dependent\n targets, relative to its base path (by type).\n \"\"\"\n if dst_path is None:\n dst_path = src_path\n other_src_path = self._artifacts[artifact_type].setdefault(\n dst_path, src_path)\n if src_path != other_src_path:\n raise RuntimeError(\n '{} artifact with dest path {} exists with different src '\n 'path: {} != {}'.format(artifact_type, dst_path, src_path,\n other_src_path))"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nadding all src_paths as artifact of type artifact_type.", "response": "def extend(self, artifact_type: ArtifactType, src_paths: list):\n \"\"\"Add all `src_paths` as artifact of type `artifact_type`.\"\"\"\n for src_path in src_paths:\n self.add(artifact_type, src_path, src_path)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nlinks all artifacts with types under base_dir and return the number of linked artifacts.", "response": "def link_types(self, base_dir: str, types: list, conf: Config) -> int:\n \"\"\"Link all artifacts with types `types` under `base_dir` and return\n the number of linked artifacts.\"\"\"\n num_linked = 0\n for kind in types:\n artifact_map = self._artifacts.get(kind)\n if not artifact_map:\n continue\n num_linked += self._link(join(base_dir, 
self.type_to_dir[kind]),\n artifact_map, conf)\n return num_linked"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef link_for_image(self, base_dir: str, conf: Config) -> int:\n return self.link_types(\n base_dir,\n [ArtifactType.app, ArtifactType.binary, ArtifactType.gen_py],\n conf)", "response": "Link all artifacts required for a Docker image under base_dir and\n return the number of linked artifacts."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nlink all artifacts in artifact_map under base_dir and return the number of artifacts linked.", "response": "def _link(self, base_dir: str, artifact_map: dict, conf: Config):\n \"\"\"Link all artifacts in `artifact_map` under `base_dir` and return\n the number of artifacts linked.\"\"\"\n num_linked = 0\n for dst, src in artifact_map.items():\n abs_src = join(conf.project_root, src)\n abs_dest = join(conf.project_root, base_dir, dst)\n link_node(abs_src, abs_dest)\n num_linked += 1\n return num_linked"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_readme():\n base_dir = path.abspath(path.dirname(__file__))\n with open(path.join(base_dir, 'README.md'), encoding='utf-8') as readme_f:\n return readme_f.read()", "response": "Read and return the content of the project README file."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nconverts build file args and kwargs to target props.", "response": "def args_to_props(target: Target, builder: Builder, args: list, kwargs: dict):\n \"\"\"Convert build file `args` and `kwargs` to `target` props.\n\n Use builder signature to validate builder usage in build-file, raising\n appropriate exceptions on signature-mismatches.\n\n Use builder signature default values to assign props values to args that\n were not passed in the build-file call.\n\n This function handles only the arg/kwargs-to-prop 
assignment, including\n default values when necessary. When it returns, if no exception was raised,\n it is guaranteed that `target.props` contains all args defined in the\n builder registered signature, with values taken either from the build-file\n call, or from default values provided in the signature.\n\n Specifically, this function DOES NOT do anything about the arg types\n defined in the builder signature.\n\n :raise TypeError: On signature-call mismatch.\n \"\"\"\n if len(args) > len(builder.sig):\n # too many positional arguments supplied - say how many we can take\n raise TypeError('{}() takes {}, but {} were given'\n .format(target.builder_name,\n format_num_positional_arguments(builder),\n len(args)))\n # read given args into the matching props according to the signature\n for arg_name, value in zip(builder.sig.keys(), args):\n target.props[arg_name] = value\n # read given kwargs into the named props, asserting matching sig arg names\n for arg_name, value in kwargs.items():\n if arg_name not in builder.sig:\n raise TypeError(\"{}() got an unexpected keyword argument '{}'\"\n .format(target.builder_name, arg_name))\n if arg_name in target.props:\n raise TypeError(\"{}() got multiple values for argument '{}'\"\n .format(target.builder_name, arg_name))\n target.props[arg_name] = value\n # go over signature args, assigning default values to anything that wasn't\n # assigned from args / kwargs, making sure no positional args are missing\n missing_args = []\n for arg_name, sig_spec in builder.sig.items():\n if arg_name not in target.props:\n if sig_spec.default == Empty:\n missing_args.append(arg_name)\n else:\n target.props[arg_name] = sig_spec.default\n if missing_args:\n # not enough positional arguments supplied - say which\n # TODO(itamar): match Python's error more closely (last \"and \"):\n # foo() missing 3 required positional arguments: 'a', 'b', and 'c'\n # TODO(itamar): use inflect\n raise TypeError('{}() missing {} required positional argument{}: 
{}'\n .format(target.builder_name, len(missing_args),\n 's' if len(missing_args) > 1 else '',\n ', '.join(\"'{}'\".format(arg)\n for arg in missing_args)))\n logger.debug('Got props for target: {}', target)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn a target extraction function for a specific builder and a specific build file.", "response": "def extractor(\n builder_name: str, builder: Builder, build_file_path: str,\n build_context) -> types.FunctionType:\n \"\"\"Return a target extraction function for a specific builder and a\n specific build file.\"\"\"\n build_module = to_build_module(build_file_path, build_context.conf)\n\n def extract_target(*args, **kwargs):\n \"\"\"The actual target extraction function that is executed when any\n builder function is called in a build file.\"\"\"\n target = Target(builder_name=builder_name)\n # convert args/kwargs to target.props and handle arg types\n args_to_props(target, builder, args, kwargs)\n raw_name = target.props.name\n handle_typed_args(target, builder, build_module)\n logger.debug('Extracting target: {}', target)\n # promote the `name` and `deps` from props to the target instance\n target.name = target.props.pop('name')\n target.deps = target.props.pop('deps', [])\n if target.deps:\n logger.debug('Got deps for target \"{0.name}\": {0.deps}', target)\n # invoke builder hooks on extracted target\n for hook_name, hook in Plugin.get_hooks_for_builder(builder_name):\n logger.debug('About to invoke hook {} on target {}',\n hook_name, target)\n hook(build_context, target)\n # save the target in the build context\n build_context.register_target(target)\n logger.debug('Registered {}', target)\n\n return extract_target"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ndefaults rdopkg action runner including rdopkg action modules", "response": "def rdopkg_runner():\n \"\"\"\n default rdopkg action runner including rdopkg action modules\n 
\"\"\"\n aman = ActionManager()\n # assume all actions.* modules are action modules\n aman.add_actions_modules(actions)\n aman.fill_aliases()\n # additional rdopkg action module logic should go here\n return ActionRunner(action_manager=aman)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nexecute rdopkg action with specified arguments and return shell friendly exit code.", "response": "def rdopkg(*cargs):\n \"\"\"\n rdopkg CLI interface\n\n Execute rdopkg action with specified arguments and return\n shell friendly exit code.\n\n This is the default high level way to interact with rdopkg.\n\n py> rdopkg('new-version', '1.2.3')\n\n is equivalent to\n\n $> rdopkg new-version 1.2.3\n \"\"\"\n runner = rdopkg_runner()\n return shell.run(runner,\n cargs=cargs,\n prog='rdopkg',\n version=__version__)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef getDynDnsClientForConfig(config, plugins=None):\n initparams = {}\n if \"interval\" in config:\n initparams[\"detect_interval\"] = config[\"interval\"]\n\n if plugins is not None:\n initparams[\"plugins\"] = plugins\n\n if \"updater\" in config:\n for updater_name, updater_options in config[\"updater\"]:\n initparams[\"updater\"] = get_updater_class(updater_name)(**updater_options)\n\n # find class and instantiate the detector:\n if \"detector\" in config:\n detector_name, detector_opts = config[\"detector\"][-1]\n try:\n klass = get_detector_class(detector_name)\n except KeyError as exc:\n LOG.warning(\"Invalid change detector configuration: '%s'\",\n detector_name, exc_info=exc)\n return None\n thedetector = klass(**detector_opts)\n initparams[\"detector\"] = thedetector\n\n return DynDnsClient(**initparams)", "response": "Instantiate and return a complete and working dyndns client."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef sync(self):\n detected_ip = self.detector.detect()\n if 
detected_ip is None:\n LOG.debug(\"Couldn't detect the current IP using detector %r\", self.detector.names()[-1])\n # we don't have a value to set it to, so don't update! Still shouldn't happen though\n elif self.dns.detect() != detected_ip:\n LOG.info(\"%s: dns IP '%s' does not match detected IP '%s', updating\",\n self.updater.hostname, self.dns.get_current_value(), detected_ip)\n self.status = self.updater.update(detected_ip)\n self.plugins.after_remote_ip_update(detected_ip, self.status)\n else:\n self.status = 0\n LOG.debug(\"%s: nothing to do, dns '%s' equals detection '%s'\",\n self.updater.hostname,\n self.dns.get_current_value(),\n self.detector.get_current_value())", "response": "Synchronize the registered IP with the detected IP."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns True if the online state change has changed.", "response": "def has_state_changed(self):\n \"\"\"\n Detect changes in offline detector and real DNS value.\n\n Detect a change either in the offline detector or a\n difference between the real DNS value and what the online\n detector last got.\n This is efficient, since it only generates minimal dns traffic\n for online detectors and no traffic at all for offline detectors.\n\n :rtype: boolean\n \"\"\"\n self.lastcheck = time.time()\n # prefer offline state change detection:\n if self.detector.can_detect_offline():\n self.detector.detect()\n elif not self.dns.detect() == self.detector.get_current_value():\n # The following produces traffic, but probably less traffic\n # overall than the detector\n self.detector.detect()\n\n if self.detector.has_changed():\n LOG.debug(\"detector changed\")\n return True\n elif self.dns.has_changed():\n LOG.debug(\"dns changed\")\n return True\n\n return False"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef needs_check(self):\n if self.lastcheck is None:\n return True\n return 
time.time() - self.lastcheck >= self.ipchangedetection_sleep", "response": "Check if enough time has elapsed to perform a check."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef needs_sync(self):\n if self.lastforce is None:\n self.lastforce = time.time()\n return time.time() - self.lastforce >= self.forceipchangedetection_sleep", "response": "Check if enough time has elapsed to perform a sync."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nchecking if the detector has changed and call sync if it has changed.", "response": "def check(self):\n \"\"\"\n Check if the detector changed and call sync() accordingly.\n\n If the sleep time has elapsed, this method will see if the attached\n detector has had a state change and call sync() accordingly.\n \"\"\"\n if self.needs_check():\n if self.has_state_changed():\n LOG.debug(\"state changed, syncing...\")\n self.sync()\n elif self.needs_sync():\n LOG.debug(\"forcing sync after %s seconds\",\n self.forceipchangedetection_sleep)\n self.lastforce = time.time()\n self.sync()\n else:\n # nothing to be done\n pass"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _parser_jsonip(text):\n import json\n try:\n return str(json.loads(text).get(\"ip\"))\n except ValueError as exc:\n LOG.debug(\"Text '%s' could not be parsed\", exc_info=exc)\n return None", "response": "Parse response text like the one returned by http://jsonip.com/."}
{"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ntrying to contact a remote webservice and parse the returned output and return the IP address.", "response": "def detect(self):\n \"\"\"\n Try to contact a remote webservice and parse the returned output.\n\n Determine the IP address from the parsed output and return.\n \"\"\"\n if self.opts_url and self.opts_parser:\n url = self.opts_url\n parser = self.opts_parser\n else:\n url, parser = choice(self.urls) # noqa: S311\n parser = globals().get(\"_parser_\" + parser)\n theip = _get_ip_from_url(url, parser)\n if theip is None:\n LOG.info(\"Could not detect IP using webcheck! Offline?\")\n self.set_current_value(theip)\n return theip"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef add_plugin(self, plugin, call):\n meth = getattr(plugin, call, None)\n if meth is not None:\n self.plugins.append((plugin, meth))", "response": "Add a plugin to the list of plugins."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef listcall(self, *arg, **kw):\n final_result = None\n for _, meth in self.plugins:\n result = meth(*arg, **kw)\n if final_result is None and result is not None:\n final_result = result\n return final_result", "response": "Call each plugin sequentially."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef add_plugin(self, plugin):\n # allow plugins loaded via entry points to override builtin plugins\n new_name = self.plugin_name(plugin)\n self._plugins[:] = [p for p in self._plugins\n if self.plugin_name(p) != new_name]\n self._plugins.append(plugin)", "response": "Add the given plugin to the internal list of plugins."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nconfiguring the set of plugins with the given args.", "response": "def configure(self, args):\n \"\"\"Configure the 
set of plugins with the given args.\n\n After configuration, disabled plugins are removed from the plugins list.\n \"\"\"\n for plug in self._plugins:\n plug_name = self.plugin_name(plug)\n plug.enabled = getattr(args, \"plugin_%s\" % plug_name, False)\n if plug.enabled and getattr(plug, \"configure\", None):\n if callable(getattr(plug, \"configure\", None)):\n plug.configure(args)\n LOG.debug(\"Available plugins: %s\", self._plugins)\n self.plugins = [plugin for plugin in self._plugins if getattr(plugin, \"enabled\", False)]\n LOG.debug(\"Enabled plugins: %s\", self.plugins)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef options(self, parser, env):\n def get_help(plug):\n \"\"\"Extract the help docstring from the given plugin.\"\"\"\n import textwrap\n if plug.__class__.__doc__:\n # doc sections are often indented; compress the spaces\n return textwrap.dedent(plug.__class__.__doc__)\n return \"(no help available)\"\n for plug in self._plugins:\n env_opt = ENV_PREFIX + self.plugin_name(plug).upper()\n env_opt = env_opt.replace(\"-\", \"_\")\n parser.add_argument(\"--with-%s\" % self.plugin_name(plug),\n action=\"store_true\",\n dest=\"plugin_%s\" % self.plugin_name(plug),\n default=env.get(env_opt),\n help=\"Enable plugin %s: %s [%s]\" %\n (plug.__class__.__name__, get_help(plug), env_opt))", "response": "Register commandline options with the given parser."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nloads plugins from entry point ( s ).", "response": "def load_plugins(self):\n \"\"\"Load plugins from entry point(s).\"\"\"\n from pkg_resources import iter_entry_points\n seen = set()\n for entry_point in self.entry_points:\n for ep in iter_entry_points(entry_point):\n if ep.name in seen:\n continue\n seen.add(ep.name)\n try:\n plugincls = ep.load()\n except Exception as exc:\n # never let a plugin load kill us\n warn(\"Unable to load plugin %s: 
%s\" % (ep, exc),\n RuntimeWarning)\n continue\n plugin = plugincls()\n self.add_plugin(plugin)\n super(EntryPointPluginManager, self).load_plugins()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nload plugins from dyndnsc. plugins. builtin.", "response": "def load_plugins(self):\n \"\"\"Load plugins from `dyndnsc.plugins.builtin`.\"\"\"\n from dyndnsc.plugins.builtin import PLUGINS\n for plugin in PLUGINS:\n self.add_plugin(plugin())\n super(BuiltinPluginManager, self).load_plugins()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef update(self, ip):\n return self.handler.update_record(name=self._recordname,\n address=ip)", "response": "Update the IP on the remote service."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef load(self):\n try:\n with open(self.config_file) as configfile:\n self.config = yaml.load(configfile)\n except TypeError:\n # no config file (use environment variables)\n pass\n if self.config:\n self.prefix = self.config.get('config_prefix', None)\n if not self.prefix:\n if os.getenv(self.config_prefix):\n self.prefix = os.getenv(self.config_prefix)\n else:\n for path in [\n os.path.join(self.basepath, self.default_file),\n os.path.join(self.config_root, self.default_file)\n ]:\n if os.path.exists(path):\n with open(path) as conf:\n config = yaml.load(conf)\n prefix = config.get(\n self.config_prefix.lower(), None\n )\n if prefix:\n self.prefix = prefix\n break", "response": "Load config file and set prefix."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get(self, var, section=None, **kwargs):\n # default is not a specified keyword argument so we can distinguish\n # between a default set to None and no default sets\n if not section and self.section:\n section = self.section\n default = kwargs.get('default', 
None)\n env_var = \"{}{}{}\".format(\n _suffix(self.prefix) if self.prefix else '',\n _suffix(alphasnake(section)) if section else '',\n alphasnake(str(var))\n ).upper()\n config = self.config.get(section, {}) if section else self.config\n result = config.get(var, default)\n result = os.getenv(env_var, default=result)\n # no default keyword supplied (and no result)\n # use is None to allow empty lists etc\n if result is None and 'default' not in kwargs:\n msg = \"Could not find '{}'\".format(var)\n if section:\n msg = \"{} in section '{}'.\".format(\n msg, section\n )\n msg = \"{} Checked environment variable: {}\".format(msg, env_var)\n if self.config_file:\n msg = \"{} and file: {}\".format(msg, self.config_file)\n raise ConfigError(msg)\n return result", "response": "Retrieve a config var."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef keys(self, section=None):\n if not section and self.section:\n section = self.section\n config = self.config.get(section, {}) if section else self.config\n return config.keys()", "response": "Provide dict like keys method"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nprovide dict like items method", "response": "def items(self, section=None):\n \"\"\"Provide dict like items method\"\"\"\n if not section and self.section:\n section = self.section\n config = self.config.get(section, {}) if section else self.config\n return config.items()"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef values(self, section=None):\n if not section and self.section:\n section = self.section\n config = self.config.get(section, {}) if section else self.config\n return config.values()", "response": "Provide dict like values method"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngets filepath of the config file.", "response": "def _get_filepath(self, filename=None, 
config_dir=None):\n \"\"\"\n Get config file.\n\n :param filename: name of config file (not path)\n :param config_dir: dir name prepended to file name.\n\n Note: we use e.g. GBR_CONFIG_DIR here, this is the default\n value in GBR but it is actually self.env_prefix + '_DIR' etc.\n\n If config_dir is not supplied it will be set to the value of the\n environment variable GBR_CONFIG_DIR or None.\n\n If filename is not supplied and the environment variable GBR_CONFIG\n is set and contains a path, its value will be tested to see if a file\n exists, if so that is returned as the config file otherwise filename\n will be set to GBR_CONFIG, if it exists, otherwise 'config.yaml'.\n\n If a filename is supplied or GBR_CONFIG is not an existing file:\n\n If the environment variable GBR_CONFIG_PATH exists the path\n GBR_CONFIG_PATH/config_dir/filename is checked.\n\n If it doesn't exist config/CONFIG_DIR/filename is checked\n (relative to the root of the (GBR) repo)\n finally GBR_CONFIG_DEFAULT/CONFIG_DIR/filename is tried\n\n If no file is found None will be returned.\n \"\"\"\n # pylint: disable=no-self-use\n config_file = None\n config_dir_env_var = self.env_prefix + '_DIR'\n if not filename:\n # Check env vars for config\n filename = os.getenv(self.env_prefix, default=self.default_file)\n # contains path so try directly\n if os.path.dirname(filename) and os.path.exists(filename):\n config_file = filename\n if not config_file:\n # Cannot contain path\n filename = os.path.basename(filename)\n if not config_dir:\n config_dir = os.getenv(config_dir_env_var, default='')\n for path in [self.basepath, self.config_root]:\n filepath = os.path.join(path, config_dir, filename)\n if os.path.exists(filepath):\n config_file = filepath\n break\n return config_file"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef update(self, ip):\n timeout = 60\n LOG.debug(\"Updating '%s' to '%s' at service '%s'\", self.hostname, ip, 
self._updateurl)\n params = {\"domains\": self.hostname.partition(\".\")[0], \"token\": self.__token}\n if ip is None:\n params[\"ip\"] = \"\"\n else:\n params[\"ip\"] = ip\n # LOG.debug(\"Update params: %r\", params)\n req = requests.get(self._updateurl, params=params, headers=constants.REQUEST_HEADERS_DEFAULT,\n timeout=timeout)\n LOG.debug(\"status %i, %s\", req.status_code, req.text)\n # duckdns response codes seem undocumented...\n if req.status_code == 200:\n if req.text.startswith(\"OK\"):\n return ip\n return req.text\n return \"invalid http status code: %s\" % req.status_code", "response": "Update the IP on the remote service."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef resolve(hostname, family=AF_UNSPEC):\n af_ok = (AF_INET, AF_INET6)\n if family != AF_UNSPEC and family not in af_ok:\n raise ValueError(\"Invalid family '%s'\" % family)\n ips = ()\n try:\n addrinfo = socket.getaddrinfo(hostname, None, family)\n except socket.gaierror as exc:\n # EAI_NODATA and EAI_NONAME are expected if this name is not (yet)\n # present in DNS\n if exc.errno not in (socket.EAI_NODATA, socket.EAI_NONAME):\n LOG.debug(\"socket.getaddrinfo() raised an exception\", exc_info=exc)\n else:\n if family == AF_UNSPEC:\n ips = tuple({item[4][0] for item in addrinfo if item[0] in af_ok})\n else:\n ips = tuple({item[4][0] for item in addrinfo})\n return ips", "response": "Resolve hostname to one or more IP addresses through the operating system."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef detect(self):\n theip = next(iter(resolve(self.opts_hostname, self.opts_family)), None)\n self.set_current_value(theip)\n return theip", "response": "Resolve the hostname to an IP address through the operating system."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef list_presets(cfg, 
out=sys.stdout):\n    for section in cfg.sections():\n        if section.startswith(\"preset:\"):\n            out.write((section.replace(\"preset:\", \"\")) + os.linesep)\n            for k, v in cfg.items(section):\n                out.write(\"\\t%s = %s\" % (k, v) + os.linesep)", "response": "Write a human readable list of available presets to out."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nrunning an endless loop across the given dynamic dns clients.", "response": "def run_forever(dyndnsclients):\n    \"\"\"\n    Run an endless loop across the given dynamic dns clients.\n\n    :param dyndnsclients: list of DynDnsClients\n    \"\"\"\n    while True:\n        try:\n            # Do small sleeps in the main loop, needs_check() is cheap and does\n            # the rest.\n            time.sleep(15)\n            for dyndnsclient in dyndnsclients:\n                dyndnsclient.check()\n        except (KeyboardInterrupt,):\n            break\n        except (Exception,) as exc:\n            LOG.critical(\"An exception occurred in the dyndns loop\", exc_info=exc)\n    return 0"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nruns the main CLI program. 
Initializes the stack, parses command line arguments, and fires requested logic.", "response": "def main():\n \"\"\"\n Run the main CLI program.\n\n Initializes the stack, parses command line arguments, and fires requested\n logic.\n \"\"\"\n plugins = DefaultPluginManager()\n plugins.load_plugins()\n\n parser, _ = create_argparser()\n # add the updater protocol options to the CLI:\n for kls in updater_classes():\n kls.register_arguments(parser)\n\n for kls in detector_classes():\n kls.register_arguments(parser)\n\n # add the plugin options to the CLI:\n from os import environ\n plugins.options(parser, environ)\n\n args = parser.parse_args()\n\n if args.debug:\n args.verbose_count = 5 # some high number\n\n log_level = max(int(logging.WARNING / 10) - args.verbose_count, 0) * 10\n # print(log_level)\n logging.basicConfig(level=log_level, format=\"%(levelname)s %(message)s\")\n # logging.debug(\"args %r\", args)\n\n if args.version:\n from . import __version__\n print(\"dyndnsc %s\" % __version__) # noqa\n return 0\n\n # silence 'requests' logging\n requests_log = logging.getLogger(\"requests\")\n requests_log.setLevel(logging.WARNING)\n\n logging.debug(parser)\n cfg = get_configuration(args.config)\n\n if args.listpresets:\n list_presets(cfg)\n return 0\n\n if args.config:\n collected_configs = collect_config(cfg)\n else:\n parsed_args = parse_cmdline_args(args, updater_classes().union(detector_classes()))\n logging.debug(\"parsed_args %r\", parsed_args)\n\n collected_configs = {\n \"cmdline\": {\n \"interval\": int(args.sleeptime)\n }\n }\n collected_configs[\"cmdline\"].update(parsed_args)\n\n plugins.configure(args)\n plugins.initialize()\n\n logging.debug(\"collected_configs: %r\", collected_configs)\n dyndnsclients = []\n for thisconfig in collected_configs:\n logging.debug(\"Initializing client for '%s'\", thisconfig)\n # done with options, bring on the dancing girls\n dyndnsclient = getDynDnsClientForConfig(\n collected_configs[thisconfig], plugins=plugins)\n 
if dyndnsclient is None:\n return 1\n # do an initial synchronization, before going into endless loop:\n dyndnsclient.sync()\n dyndnsclients.append(dyndnsclient)\n\n run_forever_callable = partial(run_forever, dyndnsclients)\n\n if args.daemon:\n import daemonocle\n daemon = daemonocle.Daemon(worker=run_forever_callable)\n daemon.do_action(\"start\")\n args.loop = True\n\n if args.loop:\n run_forever_callable()\n\n return 0"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nmake sure a git repo ssh:// url has a user set", "response": "def tidy_ssh_user(url=None, user=None):\n \"\"\"make sure a git repo ssh:// url has a user set\"\"\"\n if url and url.startswith('ssh://'):\n # is there a user already ?\n match = re.compile('ssh://([^@]+)@.+').match(url)\n if match:\n ssh_user = match.group(1)\n if user and ssh_user != user:\n # assume prevalence of argument\n url = url.replace(re.escape(ssh_user) + '@',\n user + '@')\n elif user:\n url = 'ssh://' + \\\n user + '@' + \\\n url[len('ssh://'):]\n return url"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef config_get(self, param, default=None):\n '''Return the value of a git configuration option. This will\n return the value of the default parameter (which defaults to\n None) if the given option does not exist.'''\n\n try:\n return self(\"config\", \"--get\", param,\n log_fail=False, log_cmd=False)\n except exception.CommandFailed:\n return default", "response": "Return the value of a git configuration option. 
This will\n return the value of the default parameter."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns an initialized ConfigParser.", "response": "def get_configuration(config_file=None):\n \"\"\"Return an initialized ConfigParser.\n\n If no config filename is presented, `DEFAULT_USER_INI` is used if present.\n Also reads the built-in presets.\n\n :param config_file: string path\n \"\"\"\n parser = configparser.ConfigParser()\n if config_file is None:\n # fallback to default user config file\n config_file = os.path.join(os.getenv(\"HOME\"), DEFAULT_USER_INI)\n if not os.path.isfile(config_file):\n config_file = None\n else:\n if not os.path.isfile(config_file):\n raise ValueError(\"%s is not a file\" % config_file)\n\n configs = [get_filename(PRESETS_INI)]\n if config_file:\n configs.append(config_file)\n LOG.debug(\"Attempting to read configuration from %r\", configs)\n read_configs = parser.read(configs)\n LOG.debug(\"Successfully read configuration from %r\", read_configs)\n LOG.debug(\"config file sections: %r\", parser.sections())\n return parser"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _iraw_client_configs(cfg):\n client_names = cfg.get(\"dyndnsc\", \"configs\").split(\",\")\n _preset_prefix = \"preset:\"\n _use_preset = \"use_preset\"\n for client_name in (x.strip() for x in client_names if x.strip()):\n client_cfg_dict = dict(cfg.items(client_name))\n if cfg.has_option(client_name, _use_preset):\n prf = dict(\n cfg.items(_preset_prefix + cfg.get(client_name, _use_preset)))\n prf.update(client_cfg_dict)\n client_cfg_dict = prf\n else:\n # raw config with NO preset in use, so no updating of dict\n pass\n logging.debug(\"raw config for '%s': %r\", client_name, client_cfg_dict)\n if _use_preset in client_cfg_dict:\n del client_cfg_dict[_use_preset]\n yield client_name, client_cfg_dict", "response": "Generator for client_name client_cfg_dict"} 
{"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncollect the configuration dictionary from configparser. ConfigParser and returns a dictionary containing the relevant configuration values.", "response": "def collect_config(cfg):\n \"\"\"\n Construct configuration dictionary from configparser.\n\n Resolves presets and returns a dictionary containing:\n\n .. code-block:: bash\n\n {\n \"client_name\": {\n \"detector\": (\"detector_name\", detector_opts),\n \"updater\": [\n (\"updater_name\", updater_opts),\n ...\n ]\n },\n ...\n }\n\n :param cfg: ConfigParser\n \"\"\"\n collected_configs = {}\n _updater_str = \"updater\"\n _detector_str = \"detector\"\n _dash = \"-\"\n for client_name, client_cfg_dict in _iraw_client_configs(cfg):\n detector_name = None\n detector_options = {}\n updater_name = None\n updater_options = {}\n collected_config = {}\n for k in client_cfg_dict:\n if k.startswith(_detector_str + _dash):\n detector_options[\n k.replace(_detector_str + _dash, \"\")] = client_cfg_dict[k]\n elif k == _updater_str:\n updater_name = client_cfg_dict.get(k)\n elif k == _detector_str:\n detector_name = client_cfg_dict.get(k)\n elif k.startswith(_updater_str + _dash):\n updater_options[\n k.replace(_updater_str + _dash, \"\")] = client_cfg_dict[k]\n else:\n # options passed \"as is\" to the dyndnsc client\n collected_config[k] = client_cfg_dict[k]\n\n collected_config[_detector_str] = [(detector_name, detector_options)]\n collected_config[_updater_str] = [(updater_name, updater_options)]\n\n collected_configs[client_name] = collected_config\n return collected_configs"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef detect(self):\n if PY3: # py23\n import subprocess # noqa: S404 @UnresolvedImport pylint: disable=import-error\n else:\n import commands as subprocess # @UnresolvedImport pylint: disable=import-error\n try:\n theip = subprocess.getoutput(self.opts_command) # 
noqa: S605\n except Exception:\n theip = None\n self.set_current_value(theip)\n return theip", "response": "Detect and return the IP address."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef find_ip(family=AF_INET, flavour=\"opendns\"):\n flavours = {\n \"opendns\": {\n AF_INET: {\n \"@\": (\"resolver1.opendns.com\", \"resolver2.opendns.com\"),\n \"qname\": \"myip.opendns.com\",\n \"rdtype\": \"A\",\n },\n AF_INET6: {\n \"@\": (\"resolver1.ipv6-sandbox.opendns.com\", \"resolver2.ipv6-sandbox.opendns.com\"),\n \"qname\": \"myip.opendns.com\",\n \"rdtype\": \"AAAA\",\n },\n },\n }\n\n flavour = flavours[\"opendns\"]\n resolver = dns.resolver.Resolver()\n # specify the custom nameservers to be used (as IPs):\n resolver.nameservers = [next(iter(resolve(h, family=family))) for h in flavour[family][\"@\"]]\n\n answers = resolver.query(qname=flavour[family][\"qname\"], rdtype=flavour[family][\"rdtype\"])\n for rdata in answers:\n return rdata.address\n return None", "response": "Find the publicly visible IP address of the current system."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ndetect the WAN IP of the current process through DNS.", "response": "def detect(self):\n \"\"\"\n Detect the WAN IP of the current process through DNS.\n\n Depending on the 'family' option, either ipv4 or ipv6 resolution is\n carried out.\n\n :return: ip address\n \"\"\"\n theip = find_ip(family=self.opts_family)\n self.set_current_value(theip)\n return theip"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nsets the detected IP in the current run ( if any.", "response": "def set_current_value(self, value):\n \"\"\"Set the detected IP in the current run (if any).\"\"\"\n self._oldvalue = self.get_current_value()\n self._currentvalue = value\n if self._oldvalue != value:\n # self.notify_observers(\"new_ip_detected\", {\"ip\": value})\n LOG.debug(\"%s.set_current_value(%s)\", 
self.__class__.__name__, value)\n return value"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef parse_cmdline_args(args, classes):\n if args is None:\n raise ValueError(\"args must not be None\")\n parsed_args = {}\n for kls in classes:\n prefix = kls.configuration_key_prefix()\n name = kls.configuration_key\n if getattr(args, \"%s_%s\" % (prefix, name), False):\n logging.debug(\n \"Gathering initargs for '%s.%s'\", prefix, name)\n initargs = {}\n for arg_name in kls.init_argnames():\n val = getattr(args, \"%s_%s_%s\" %\n (prefix, name, arg_name))\n if val is not None:\n initargs[arg_name] = val\n if prefix not in parsed_args:\n parsed_args[prefix] = []\n parsed_args[prefix].append((name, initargs))\n return parsed_args", "response": "Parse all updater and detector related arguments from args."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nregistering command line options.", "response": "def register_arguments(cls, parser):\n \"\"\"Register command line options.\n\n Implement this method for normal options behavior with protection from\n OptionConflictErrors. 
If you override this method and want the default\n --$name option(s) to be registered, be sure to call super().\n \"\"\"\n if hasattr(cls, \"_dont_register_arguments\"):\n return\n prefix = cls.configuration_key_prefix()\n cfgkey = cls.configuration_key\n parser.add_argument(\"--%s-%s\" % (prefix, cfgkey),\n action=\"store_true\",\n dest=\"%s_%s\" % (prefix, cfgkey),\n default=False,\n help=\"%s: %s\" %\n (cls.__name__, cls.help()))\n args = cls.init_argnames()\n defaults = cls._init_argdefaults()\n for arg in args[0:len(args) - len(defaults)]:\n parser.add_argument(\"--%s-%s-%s\" % (prefix, cfgkey, arg),\n dest=\"%s_%s_%s\" % (prefix, cfgkey, arg),\n help=\"\")\n for i, arg in enumerate(args[len(args) - len(defaults):]):\n parser.add_argument(\"--%s-%s-%s\" % (prefix, cfgkey, arg),\n dest=\"%s_%s_%s\" % (prefix, cfgkey, arg),\n default=defaults[i],\n help=\"default: %(default)s\")"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef tag_patches_branch(package, local_patches_branch, patches_branch,\n force=False, push=False):\n \"\"\" Tag the local_patches_branch with this package's NVR. \"\"\"\n vr = specfile.Spec().get_vr(epoch=False)\n nvr_tag = package + '-' + vr\n tag_cmd = ['tag', nvr_tag, local_patches_branch]\n if force:\n tag_cmd.append('-f')\n git(*tag_cmd)\n if push:\n patches_remote = patches_branch.partition('/')[0]\n git('push', patches_remote, nvr_tag)\n else:\n print('Not pushing tag. Run \"git push patches %s\" by hand.' 
% nvr_tag)", "response": "Tag the local_patches_branch with this package's NVR."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef load_class(module_name, class_name):\n    try:\n        plugmod = import_module(module_name)\n    except Exception as exc:\n        warn(\"Importing built-in plugin %s.%s raised an exception: %r\" %\n             (module_name, class_name, repr(exc)), ImportWarning)\n        return None\n    else:\n        return getattr(plugmod, class_name)", "response": "Load a class from a module."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef find_class(name, classes):\n    name = name.lower()\n    cls = next((c for c in classes if c.configuration_key == name), None)\n    if cls is None:\n        raise ValueError(\"No class named '%s' could be found\" % name)\n    return cls", "response": "Return class in classes identified by configuration key name."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef alphasnake(string):\n    if string:\n        string = \" \".join(\n            [re.sub(r'\\W+', '', word) for word in string.split()]\n        )\n        string = decamel_to_snake(string)\n    return string", "response": "Convert to snakecase, removing non-alphanumeric characters from each word."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nsplits CamelCased words. CamelCase -> Camel Case, dromedaryCase -> dromedary Case", "response": "def decamel(string):\n    \"\"\"Split CamelCased words.\n\n    CamelCase -> Camel Case, dromedaryCase -> dromedary Case.\n    \"\"\"\n    regex = re.compile(r'(\\B[A-Z][a-z]*)')\n    return regex.sub(r' \\1', string)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nconverting to lower case, joining camel case with underscore.", "response": "def decamel_to_snake(string):\n    \"\"\"Convert to lower case, join camel case with underscore.\n    CamelCase -> camel_case. 
Camel Case -> camel_case.\n \"\"\"\n strings = [decamel(word) if not word.isupper() else word.lower()\n for word in string.split()]\n return \"_\".join([snake(dstring)for dstring in strings])"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns default distroinfo info file", "response": "def info_file(distro=None):\n \"\"\"Return default distroinfo info file\"\"\"\n if not distro:\n distro = cfg['DISTRO']\n info_file_conf = distro.upper() + 'INFO_FILE'\n try:\n return cfg[info_file_conf]\n except KeyError:\n raise exception.InvalidUsage(\n why=\"Couldn't find config option %s for distro: %s\"\n % (info_file_conf, distro))"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_distroinfo(distro=None):\n if not distro:\n distro = cfg['DISTRO']\n _info_file = info_file(distro)\n # prefer git fetcher if available\n git_info_url_conf = distro.upper() + 'INFO_REPO'\n try:\n remote_git_info = cfg[git_info_url_conf]\n return DistroInfo(_info_file, remote_git_info=remote_git_info)\n except KeyError:\n pass\n # try raw remote fetcher\n remote_info_url_conf = distro.upper() + 'INFO_RAW_URL'\n try:\n remote_info = cfg[remote_info_url_conf]\n return DistroInfo(_info_file, remote_info=remote_info)\n except KeyError:\n raise exception.InvalidUsage(\n why=\"Couldn't find config option %s or %s for distro: %s\"\n % (git_info_url_conf, remote_info_url_conf, distro))", "response": "Get DistroInfo initialized from configuration"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef split_filename(filename):\n\n # Remove .rpm suffix\n if filename.endswith('.rpm'):\n filename = filename.split('.rpm')[0]\n\n # is there an epoch?\n components = filename.split(':')\n if len(components) > 1:\n epoch = components[0]\n else:\n epoch = ''\n\n # Arch is the last item after .\n arch = filename.rsplit('.')[-1]\n remaining = filename.rsplit('.%s' 
% arch)[0]\n    release = remaining.rsplit('-')[-1]\n    version = remaining.rsplit('-')[-2]\n    name = '-'.join(remaining.rsplit('-')[:-2])\n\n    return name, version, release, epoch, arch", "response": "Split an rpm filename into name, version, release, epoch and arch."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns a tuple of (epoch, version, release) from a version string", "response": "def string_to_version(verstring):\n    \"\"\"\n    Return a tuple of (epoch, version, release) from a version string\n\n    This function replaces rpmUtils.miscutils.stringToVersion, see\n    https://bugzilla.redhat.com/1364504\n    \"\"\"\n    # is there an epoch?\n    components = verstring.split(':')\n    if len(components) > 1:\n        epoch = components[0]\n    else:\n        epoch = 0\n\n    remaining = components[:2][-1].split('-')\n    version = remaining[0]\n    release = remaining[1]\n\n    return (epoch, version, release)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef spec_fn(spec_dir='.'):\n    specs = [f for f in os.listdir(spec_dir)\n             if os.path.isfile(f) and f.endswith('.spec')]\n    if not specs:\n        raise exception.SpecFileNotFound()\n    if len(specs) != 1:\n        raise exception.MultipleSpecFilesFound()\n    return specs[0]", "response": "Return the filename for a .spec file in this directory."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef version_parts(version):\n    m = re.match(r'(\\d+(?:\\.\\d+)*)([.%]|$)(.*)', version)\n    if m:\n        numver = m.group(1)\n        rest = m.group(2) + m.group(3)\n        return numver, rest\n    else:\n        return version, ''", "response": "Split a version string into numeric X.Y.
Z part and the rest."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nsplits RPM Release string into number milestone and rest.", "response": "def release_parts(version):\n \"\"\"\n Split RPM Release string into (numeric X.Y.Z part, milestone, rest).\n\n :returns: a three-element tuple (number, milestone, rest). If we cannot\n determine the \"milestone\" or \"rest\", those will be an empty\n string.\n \"\"\"\n numver, tail = version_parts(version)\n if numver and not re.match(r'\\d', numver):\n # entire release is macro a la %{release}\n tail = numver\n numver = ''\n m = re.match(r'(\\.?(?:%\\{\\?milestone\\}|[^%.]+))(.*)$', tail)\n if m:\n milestone = m.group(1)\n rest = m.group(2)\n else:\n milestone = ''\n rest = tail\n return numver, milestone, rest"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn a value of name = value comment in spec or None.", "response": "def get_magic_comment(self, name, expand_macros=False):\n \"\"\"Return a value of # name=value comment in spec or None.\"\"\"\n match = re.search(r'^#\\s*?%s\\s?=\\s?(\\S+)' % re.escape(name),\n self.txt, flags=re.M)\n if not match:\n return None\n\n val = match.group(1)\n if expand_macros and has_macros(val):\n # don't parse using rpm unless required\n val = self.expand_macro(val)\n return val"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_patches_base(self, expand_macros=False):\n match = re.search(r'(?<=patches_base=)[\\w.+?%{}]+', self.txt)\n if not match:\n return None, 0\n\n patches_base = match.group()\n if expand_macros and has_macros(patches_base):\n # don't parse using rpm unless required\n patches_base = self.expand_macro(patches_base)\n patches_base_ref, _, n_commits = patches_base.partition('+')\n\n try:\n n_commits = int(n_commits)\n except ValueError:\n n_commits = 0\n return patches_base_ref, n_commits", "response": "Return a tuple of version number_of_commits that 
are parsed\n from the patches_base in the specfile."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn a string representing a regex for filtering out patches", "response": "def get_patches_ignore_regex(self):\n \"\"\"Returns a string representing a regex for filtering out patches\n\n This string is parsed from a comment in the specfile that contains the\n word filter-out followed by an equal sign.\n\n For example, a comment as such:\n # patches_ignore=(regex)\n\n would mean this method returns the string '(regex)'\n\n Only a very limited subset of characters are accepted so no fancy stuff\n like matching groups etc.\n \"\"\"\n match = re.search(r'# *patches_ignore=([\\w *.+?[\\]|{,}\\-_]+)',\n self.txt)\n if not match:\n return None\n regex_string = match.group(1)\n try:\n return re.compile(regex_string)\n except Exception:\n return None"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef recognized_release(self):\n _, _, rest = self.get_release_parts()\n # If \"rest\" is not a well-known value here, then this package is\n # using a Release value pattern we cannot recognize.\n if rest == '' or re.match(r'%{\\??dist}', rest):\n return True\n return False", "response": "Check if this release value is something we can parse."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nget VR string from. 
spec Version Release and Epoch", "response": "def get_vr(self, epoch=None):\n \"\"\"get VR string from .spec Version, Release and Epoch\n\n epoch is None: prefix epoch if present (default)\n epoch is True: prefix epoch even if not present (0:)\n epoch is False: omit epoch even if present\n \"\"\"\n version = self.get_tag('Version', expand_macros=True)\n e = None\n if epoch is None or epoch:\n try:\n e = self.get_tag('Epoch')\n except exception.SpecFileParseError:\n pass\n if epoch is None and e:\n epoch = True\n if epoch:\n if not e:\n e = '0'\n version = '%s:%s' % (e, version)\n release = self.get_tag('Release')\n release = re.sub(r'%\\{?\\??dist\\}?$', '', release)\n release = self.expand_macro(release)\n if release:\n return '%s-%s' % (version, release)\n return version"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngetting NVR string from. spec Name Version Release and Epoch", "response": "def get_nvr(self, epoch=None):\n \"\"\"get NVR string from .spec Name, Version, Release and Epoch\"\"\"\n name = self.get_tag('Name', expand_macros=True)\n vr = self.get_vr(epoch=epoch)\n return '%s-%s' % (name, vr)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nwrites the textual content to the. spec file.", "response": "def save(self):\n \"\"\" Write the textual content (self._txt) to .spec file (self.fn). 
\"\"\"\n if not self.txt:\n # no changes\n return\n if not self.fn:\n raise exception.InvalidAction(\n \"Can't save .spec file without its file name specified.\")\n f = codecs.open(self.fn, 'w', encoding='utf-8')\n f.write(self.txt)\n f.close()\n self._rpmspec = None"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ndetects the IP address of a specific kind of IPv4 or IPv6 address.", "response": "def detect_ip(kind):\n \"\"\"\n Detect IP address.\n\n kind can be:\n IPV4 - returns IPv4 address\n IPV6_ANY - returns any IPv6 address (no preference)\n IPV6_PUBLIC - returns public IPv6 address\n IPV6_TMP - returns temporary IPV6 address (privacy extensions)\n\n This function either returns an IP address (str) or\n raises a GetIpException.\n \"\"\"\n if kind not in (IPV4, IPV6_PUBLIC, IPV6_TMP, IPV6_ANY):\n raise ValueError(\"invalid kind specified\")\n\n # We create an UDP socket and connect it to a public host.\n # We query the OS to know what our address is.\n # No packet will really be sent since we are using UDP.\n af = socket.AF_INET if kind == IPV4 else socket.AF_INET6\n s = socket.socket(af, socket.SOCK_DGRAM)\n try:\n if kind in [IPV6_PUBLIC, IPV6_TMP, ]:\n # caller wants some specific kind of IPv6 address (not IPV6_ANY)\n try:\n if kind == IPV6_PUBLIC:\n preference = socket.IPV6_PREFER_SRC_PUBLIC\n elif kind == IPV6_TMP:\n preference = socket.IPV6_PREFER_SRC_TMP\n s.setsockopt(socket.IPPROTO_IPV6,\n socket.IPV6_ADDR_PREFERENCES, preference)\n except socket.error as e:\n if e.errno == errno.ENOPROTOOPT:\n raise GetIpException(\"Kernel doesn't support IPv6 address preference\")\n else:\n raise GetIpException(\"Unable to set IPv6 address preference: %s\" % e)\n\n try:\n outside_ip = OUTSIDE_IPV4 if kind == IPV4 else OUTSIDE_IPV6\n s.connect((outside_ip, 9))\n except (socket.error, socket.gaierror) as e:\n raise GetIpException(str(e))\n\n ip = s.getsockname()[0]\n finally:\n s.close()\n return ip"} {"SOURCE": 
"codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef setup_kojiclient(profile):\n    opts = koji.read_config(profile)\n    for k, v in opts.items():\n        opts[k] = os.path.expanduser(v) if isinstance(v, str) else v\n    kojiclient = koji.ClientSession(opts['server'], opts=opts)\n    kojiclient.ssl_login(opts['cert'], None, opts['serverca'])\n    return kojiclient", "response": "Set up a koji client session."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef retrieve_sources():\n    spectool = find_executable('spectool')\n    if not spectool:\n        log.warn('spectool is not installed')\n        return\n    try:\n        specfile = spec_fn()\n    except Exception:\n        return\n\n    cmd = [spectool, \"-g\", specfile]\n    output = subprocess.check_output(' '.join(cmd), shell=True)\n    log.warn(output)", "response": "Retrieve sources using spectool."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef create_srpm(dist='el7'):\n    if not RPM_AVAILABLE:\n        raise RpmModuleNotAvailable()\n    path = os.getcwd()\n    try:\n        specfile = spec_fn()\n        spec = Spec(specfile)\n    except Exception:\n        return\n\n    rpmdefines = [\"--define 'dist .{}'\".format(dist),\n                  \"--define '_sourcedir {}'\".format(path),\n                  \"--define '_srcrpmdir {}'\".format(path)]\n    rpm.addMacro('_sourcedir', '.{}'.format(dist))\n    # FIXME: needs to be fixed in Spec\n    rpm.addMacro('dist', '.{}'.format(dist))\n    module_name = spec.get_tag('Name', True)\n    version = spec.get_tag('Version', True)\n    release = spec.get_tag('Release', True)\n    srpm = os.path.join(path,\n                        \"{}-{}-{}.src.rpm\".format(module_name,\n                                                  version,\n                                                  release))\n\n    # See if we need to build the srpm\n    if os.path.exists(srpm):\n        log.warn('Srpm found, rewriting it.')\n\n    cmd = ['rpmbuild']\n    cmd.extend(rpmdefines)\n\n    cmd.extend(['--nodeps', '-bs', specfile])\n    output = subprocess.check_output(' '.join(cmd), shell=True)\n    log.warn(output)\n    srpm = output.split()[1]\n    
return srpm", "response": "Create an srpm file for the current package."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef compute_auth_key(userid, password):\n import sys\n if sys.version_info >= (3, 0):\n return hashlib.sha1(b\"|\".join((userid.encode(\"ascii\"), # noqa: S303\n password.encode(\"ascii\")))).hexdigest()\n return hashlib.sha1(\"|\".join((userid, password))).hexdigest()", "response": "Compute the authentication key for freedns. afraid. org."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nyield the dynamic DNS records associated with this account.", "response": "def records(credentials, url=\"https://freedns.afraid.org/api/\"):\n \"\"\"\n Yield the dynamic DNS records associated with this account.\n\n :param credentials: an AfraidCredentials instance\n :param url: the service URL\n \"\"\"\n params = {\"action\": \"getdyndns\", \"sha\": credentials.sha}\n req = requests.get(\n url, params=params, headers=constants.REQUEST_HEADERS_DEFAULT, timeout=60)\n for record_line in (line.strip() for line in req.text.splitlines()\n if len(line.strip()) > 0):\n yield AfraidDynDNSRecord(*record_line.split(\"|\"))"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nupdate the remote DNS record by requesting its special endpoint URL.", "response": "def update(url):\n \"\"\"\n Update remote DNS record by requesting its special endpoint URL.\n\n This automatically picks the IP address using the HTTP connection: it is not\n possible to specify the IP address explicitly.\n\n :param url: URL to retrieve for triggering the update\n :return: IP address\n \"\"\"\n req = requests.get(\n url, headers=constants.REQUEST_HEADERS_DEFAULT, timeout=60)\n req.close()\n # Response must contain an IP address, or else we can't parse it.\n # Also, the IP address in the response is the newly assigned IP address.\n ipregex = 
re.compile(r\"\\b(?P<ip>(?:[0-9]{1,3}\\.){3}[0-9]{1,3})\\b\")\n ipmatch = ipregex.search(req.text)\n if ipmatch:\n return str(ipaddress(ipmatch.group(\"ip\")))\n LOG.error(\"couldn't parse the server's response '%s'\", req.text)\n return None"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns sha of the user s user.", "response": "def sha(self):\n \"\"\"Return sha, lazily compute if not done yet.\"\"\"\n if self._sha is None:\n self._sha = compute_auth_key(self.userid, self.password)\n return self._sha"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nupdating the IP on the remote service.", "response": "def update(self, *args, **kwargs):\n \"\"\"Update the IP on the remote service.\"\"\"\n # first find the update_url for the provided account + hostname:\n update_url = next((r.update_url for r in\n records(self._credentials, self._url)\n if r.hostname == self.hostname), None)\n if update_url is None:\n LOG.warning(\"Could not find hostname '%s' at '%s'\",\n self.hostname, self._url)\n return None\n return update(update_url)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef register_observer(self, observer, events=None):\n if events is not None and not isinstance(events, (tuple, list)):\n events = (events,)\n\n if observer in self._observers:\n LOG.warning(\"Observer '%r' already registered, overwriting for events\"\n \" %r\", observer, events)\n self._observers[observer] = events", "response": "Register a listener function."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef notify_observers(self, event=None, msg=None):\n for observer, events in list(self._observers.items()):\n # LOG.debug(\"trying to notify the observer\")\n if events is None or event is None or event in events:\n try:\n observer(self, event, msg)\n except (Exception,) as ex: # pylint: 
disable=broad-except\n self.unregister_observer(observer)\n errmsg = \"Exception in message dispatch: Handler '{0}' unregistered for event '{1}' \".format(\n observer.__class__.__name__, event)\n LOG.error(errmsg, exc_info=ex)", "response": "Notify observers of a specific event."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ndetects the IP address.", "response": "def detect(self):\n \"\"\"Detect the IP address.\"\"\"\n if self.opts_family == AF_INET6:\n kind = IPV6_PUBLIC\n else: # 'INET':\n kind = IPV4\n theip = None\n try:\n theip = detect_ip(kind)\n except GetIpException:\n LOG.exception(\"socket detector raised an exception:\")\n self.set_current_value(theip)\n return theip"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nusing the netifaces module to detect ifconfig information.", "response": "def _detect(self):\n \"\"\"Use the netifaces module to detect ifconfig information.\"\"\"\n theip = None\n try:\n if self.opts_family == AF_INET6:\n addrlist = netifaces.ifaddresses(self.opts_iface)[netifaces.AF_INET6]\n else:\n addrlist = netifaces.ifaddresses(self.opts_iface)[netifaces.AF_INET]\n except ValueError as exc:\n LOG.error(\"netifaces choked while trying to get network interface\"\n \" information for interface '%s'\", self.opts_iface,\n exc_info=exc)\n else: # now we have a list of addresses as returned by netifaces\n for pair in addrlist:\n try:\n detip = ipaddress(pair[\"addr\"])\n except (TypeError, ValueError) as exc:\n LOG.debug(\"Found invalid IP '%s' on interface '%s'!?\",\n pair[\"addr\"], self.opts_iface, exc_info=exc)\n continue\n if self.netmask is not None:\n if detip in self.netmask:\n theip = pair[\"addr\"]\n else:\n continue\n else:\n theip = pair[\"addr\"]\n break # we use the first IP found\n # theip can still be None at this point!\n self.set_current_value(theip)\n return theip"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncleaning 
up temporary test dirs for passed tests.", "response": "def clean_tempdir(context, scenario):\n \"\"\"\n Clean up temporary test dirs for passed tests.\n\n Leave failed test dirs for manual inspection.\n\n \"\"\"\n tempdir = getattr(context, 'tempdir', None)\n if tempdir and scenario.status == 'passed':\n shutil.rmtree(tempdir)\n del(context.tempdir)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nchecks if the given ip address is in a reserved ipv4 address space.", "response": "def is_reserved_ip(self, ip):\n \"\"\"Check if the given ip address is in a reserved ipv4 address space.\n\n :param ip: ip address\n :return: boolean\n \"\"\"\n theip = ipaddress(ip)\n for res in self._reserved_netmasks:\n if theip in ipnetwork(res):\n return True\n return False"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef random_public_ip(self):\n randomip = random_ip()\n while self.is_reserved_ip(randomip):\n randomip = random_ip()\n return randomip", "response": "Return a randomly generated public IPv4 address."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ndetects IP and return it.", "response": "def detect(self):\n \"\"\"Detect IP and return it.\"\"\"\n for theip in self.rips:\n LOG.debug(\"detected %s\", str(theip))\n self.set_current_value(str(theip))\n return str(theip)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nupdate the IP on the remote service.", "response": "def update(self, ip):\n \"\"\"Update the IP on the remote service.\"\"\"\n timeout = 60\n LOG.debug(\"Updating '%s' to '%s' at service '%s'\", self.hostname, ip, self._updateurl)\n params = {\"myip\": ip, \"hostname\": self.hostname}\n req = requests.get(self._updateurl, params=params, headers=constants.REQUEST_HEADERS_DEFAULT,\n auth=(self.__userid, self.__password), timeout=timeout)\n LOG.debug(\"status %i, %s\", req.status_code, req.text)\n if 
req.status_code == 200:\n # responses can also be \"nohost\", \"abuse\", \"911\", \"notfqdn\"\n if req.text.startswith(\"good \") or req.text.startswith(\"nochg\"):\n return ip\n return req.text\n return \"invalid http status code: %s\" % req.status_code"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef patches_base_ref(default=exception.CantGuess):\n ref = None\n try:\n spec = specfile.Spec()\n ref, _ = spec.get_patches_base(expand_macros=True)\n if ref:\n ref, _ = tag2version(ref)\n else:\n ref = spec.get_tag('Version', expand_macros=True)\n milestone = spec.get_milestone()\n if milestone:\n ref += milestone\n if not ref:\n raise exception.CantGuess(msg=\"got empty .spec Version\")\n except Exception as ex:\n if default is exception.CantGuess:\n raise exception.CantGuess(\n what=\"current package version\",\n why=str(ex))\n else:\n return default\n tag_style = version_tag_style(ref)\n return version2tag(ref, tag_style=tag_style)", "response": "Return a git reference to the patches branch base."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef display_listitems(items, url):\n '''Displays a list of items along with the index to enable a user\n to select an item.\n '''\n if (len(items) == 2 and items[0].get_label() == '..'\n and items[1].get_played()):\n display_video(items)\n else:\n label_width = get_max_len(item.get_label() for item in items)\n num_width = len(str(len(items)))\n output = []\n for i, item in enumerate(items):\n output.append('[%s] %s (%s)' % (\n str(i).rjust(num_width),\n item.get_label().ljust(label_width),\n item.get_path()))\n\n line_width = get_max_len(output)\n output.append('-' * line_width)\n\n header = [\n '',\n '=' * line_width,\n 'Current URL: %s' % url,\n '-' * line_width,\n '%s %s Path' % ('#'.center(num_width + 2),\n 'Label'.ljust(label_width)),\n '-' * line_width,\n ]\n print '\\n'.join(header + 
output)", "response": "Displays a list of items along with the index to enable a user\n to select an item."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nprinting a message for a playing video and displays the parent listitem.", "response": "def display_video(items):\n '''Prints a message for a playing video and displays the parent\n listitem.\n '''\n parent_item, played_item = items\n\n title_line = 'Playing Media %s (%s)' % (played_item.get_label(),\n played_item.get_path())\n parent_line = '[0] %s (%s)' % (parent_item.get_label(),\n parent_item.get_path())\n line_width = get_max_len([title_line, parent_line])\n\n output = [\n '-' * line_width,\n title_line,\n '-' * line_width,\n parent_line,\n ]\n print '\\n'.join(output)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_user_choice(items):\n '''Returns the selected item from provided items or None if 'q' was\n entered for quit.\n '''\n choice = raw_input('Choose an item or \"q\" to quit: ')\n while choice != 'q':\n try:\n item = items[int(choice)]\n print # Blank line for readability between interactive views\n return item\n except ValueError:\n # Passed something that cound't be converted with int()\n choice = raw_input('You entered a non-integer. Choice must be an'\n ' integer or \"q\": ')\n except IndexError:\n # Passed an integer that was out of range of the list of urls\n choice = raw_input('You entered an invalid integer. 
Choice must be'\n ' from above url list or \"q\": ')\n return None", "response": "Returns the selected item from provided items or None if the user has entered quit."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ndecoding data into a single object.", "response": "def decode(data):\n \"\"\"\n Decode data employing some charset detection and including unicode BOM\n stripping.\n \"\"\"\n\n if isinstance(data, unicode):\n return data\n\n # Detect standard unicode BOMs.\n for bom, encoding in UNICODE_BOMS:\n if data.startswith(bom):\n return data[len(bom):].decode(encoding, errors='ignore')\n\n # Try straight UTF-8.\n try:\n return data.decode('utf-8')\n except UnicodeDecodeError:\n pass\n\n # Test for various common encodings.\n for encoding in COMMON_ENCODINGS:\n try:\n return data.decode(encoding)\n except UnicodeDecodeError:\n pass\n\n # Anything else gets filtered.\n return NON_ASCII_FILTER.sub('', data).decode('ascii', errors='replace')"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_context(self, line=1, column=0):\n 'Returns a tuple containing the context for a line'\n\n line -= 1 # The line is one-based\n\n # If there is no data in the file, there can be no context.\n datalen = len(self.data)\n if datalen <= line:\n return None\n\n build = [self.data[line]]\n\n # Add surrounding lines if they're available. 
There must always be\n # three elements in the context.\n if line > 0:\n build.insert(0, self.data[line - 1])\n else:\n build.insert(0, None)\n\n if line < datalen - 1:\n build.append(self.data[line + 1])\n else:\n build.append(None)\n\n leading_counts = []\n\n # Count whitespace to determine how much needs to be stripped.\n lstrip_count = INFINITY\n for line in build:\n # Don't count empty/whitespace-only lines.\n if line is None or not line.strip():\n leading_counts.append(lstrip_count)\n continue\n\n # Isolate the leading whitespace.\n ws_count = len(line) - len(line.lstrip())\n leading_counts.append(ws_count)\n if ws_count < lstrip_count:\n lstrip_count = ws_count\n\n # If all of the lines were skipped over, it means everything was\n # whitespace.\n if lstrip_count == INFINITY:\n return ('', '', '')\n\n for lnum in range(3):\n # Skip edge lines.\n if not build[lnum]:\n continue\n\n line = build[lnum].strip()\n\n # Empty lines stay empty.\n if not line:\n build[lnum] = ''\n continue\n\n line = self._format_line(line, column=column, rel_line=lnum)\n line = '%s%s' % (' ' * (leading_counts[lnum] - lstrip_count), line)\n\n build[lnum] = line\n\n # Return the final output as a tuple.\n return tuple(build)", "response": "Returns a tuple containing the context for a line"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _format_line(self, data, column=0, rel_line=1):\n 'Formats a line from the data to be the appropriate length'\n line_length = len(data)\n\n if line_length > 140:\n if rel_line == 0:\n # Trim from the beginning\n data = '... %s' % data[-140:]\n elif rel_line == 1:\n # Trim surrounding the error position\n if column < 70:\n data = '%s ...' % data[:140]\n elif column > line_length - 70:\n data = '... %s' % data[-140:]\n else:\n data = '... %s ...' % data[column - 70:column + 70]\n\n elif rel_line == 2:\n # Trim from the end\n data = '%s ...' 
% data[:140]\n\n data = unicodehelper.decode(data)\n return data", "response": "Formats a line from the data to be the appropriate length"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn the line number that the given string position is found on", "response": "def get_line(self, position):\n 'Returns the line number that the given string position is found on'\n\n datalen = len(self.data)\n count = len(self.data[0])\n line = 1\n while count < position:\n if line >= datalen:\n break\n count += len(self.data[line]) + 1\n line += 1\n\n return line"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nparsing an addon id from the given addon. xml filename.", "response": "def get_addon_id(addonxml):\n '''Parses an addon id from the given addon.xml filename.'''\n xml = parse(addonxml)\n addon_node = xml.getElementsByTagName('addon')[0]\n return addon_node.getAttribute('id')"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nparse an addon name from the given addon. xml filename.", "response": "def get_addon_name(addonxml):\n '''Parses an addon name from the given addon.xml filename.'''\n xml = parse(addonxml)\n addon_node = xml.getElementsByTagName('addon')[0]\n return addon_node.getAttribute('name')"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncreates necessary directories for the given path or does nothing.", "response": "def _create_dir(path):\n '''Creates necessary directories for the given path or does nothing\n if the directories already exist.\n '''\n try:\n os.makedirs(path)\n except OSError, exc:\n if exc.errno == errno.EEXIST:\n pass\n else:\n raise"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ntranslate a path into a valid XBMC file.", "response": "def translatePath(path):\n '''Creates folders in the OS's temp directory. Doesn't touch any\n possible XBMC installation on the machine. 
Attempting to do as\n little work as possible to enable this function to work seamlessly.\n '''\n valid_dirs = ['xbmc', 'home', 'temp', 'masterprofile', 'profile',\n 'subtitles', 'userdata', 'database', 'thumbnails', 'recordings',\n 'screenshots', 'musicplaylists', 'videoplaylists', 'cdrips', 'skin',\n ]\n\n assert path.startswith('special://'), 'Not a valid special:// path.'\n parts = path.split('/')[2:]\n assert len(parts) > 1, 'Need at least a single root directory'\n assert parts[0] in valid_dirs, '%s is not a valid root dir.' % parts[0]\n\n # We don't want to swallow any potential IOErrors here, so only makedir for\n # the root dir, the user is responsible for making any further child dirs\n _create_dir(os.path.join(TEMP_DIR, parts[0]))\n\n return os.path.join(TEMP_DIR, *parts)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _parse_request(self, url=None, handle=None):\n '''Handles setup of the plugin state, including request\n arguments, handle, mode.\n\n This method never needs to be called directly. For testing, see\n plugin.test()\n '''\n # To accomdate self.redirect, we need to be able to parse a full url as\n # well\n if url is None:\n url = sys.argv[0]\n if len(sys.argv) == 3:\n url += sys.argv[2]\n if handle is None:\n handle = sys.argv[1]\n return Request(url, handle)", "response": "Handles setup of the plugin state including request\n arguments handle mode."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef register_module(self, module, url_prefix):\n '''Registers a module with a plugin. Requires a url_prefix that\n will then enable calls to url_for.\n\n :param module: Should be an instance `xbmcswift2.Module`.\n :param url_prefix: A url prefix to use for all module urls,\n e.g. 
'/mymodule'\n '''\n module._plugin = self\n module._url_prefix = url_prefix\n for func in module._register_funcs:\n func(self, url_prefix)", "response": "Registers a module with a plugin. Requires a url_prefix that is used to register all urls for that module."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef cached_route(self, url_rule, name=None, options=None, TTL=None):\n '''A decorator to add a route to a view and also apply caching. The\n url_rule, name and options arguments are the same arguments for the\n route function. The TTL argument if given will passed along to the\n caching decorator.\n '''\n route_decorator = self.route(url_rule, name=name, options=options)\n if TTL:\n cache_decorator = self.cached(TTL)\n else:\n cache_decorator = self.cached()\n\n def new_decorator(func):\n return route_decorator(cache_decorator(func))\n return new_decorator", "response": "A decorator to add a route to a view and also apply caching."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef route(self, url_rule, name=None, options=None):\n '''A decorator to add a route to a view. name is used to\n differentiate when there are multiple routes for a given view.'''\n # TODO: change options kwarg to defaults\n def decorator(f):\n view_name = name or f.__name__\n self.add_url_rule(url_rule, f, name=view_name, options=options)\n return f\n return decorator", "response": "A decorator to add a route to a view. name is used to\n differentiate when there are multiple routes for a given view. 
options is used to\n differentiate when there are multiple routes for a given view."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn a valid XBMC plugin URL for the given endpoint name.", "response": "def url_for(self, endpoint, **items):\n '''Returns a valid XBMC plugin URL for the given endpoint name.\n endpoint can be the literal name of a function, or it can\n correspond to the name keyword arguments passed to the route\n decorator.\n\n Raises AmbiguousUrlException if there is more than one possible\n view for the given endpoint name.\n '''\n try:\n rule = self._view_functions[endpoint]\n except KeyError:\n try:\n rule = (rule for rule in self._view_functions.values() if rule.view_func == endpoint).next()\n except StopIteration:\n raise NotFoundException(\n '%s doesn\\'t match any known patterns.' % endpoint)\n\n # rule can be None since values of None are allowed in the\n # _view_functions dict. This signifies more than one view function is\n # tied to the same name.\n if not rule:\n # TODO: Make this a regular exception\n raise AmbiguousUrlException\n\n pathqs = rule.make_path_qs(items)\n return 'plugin://%s%s' % (self._addon_id, pathqs)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef redirect(self, url):\n '''Used when you need to redirect to another view, and you only\n have the final plugin:// url.'''\n # TODO: Should we be overriding self.request with the new request?\n new_request = self._parse_request(url=url, handle=self.request.handle)\n log.debug('Redirecting %s to %s', self.request.path, new_request.path)\n return self._dispatch(new_request.path)", "response": "Used when you need to redirect to another view and you only have the final plugin:// url."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef run(self, test=False):\n '''The main entry point for a plugin.'''\n self._request = 
self._parse_request()\n log.debug('Handling incoming request for %s', self.request.path)\n items = self._dispatch(self.request.path)\n\n # Close any open storages which will persist them to disk\n if hasattr(self, '_unsynced_storages'):\n for storage in self._unsynced_storages.values():\n log.debug('Saving a %s storage to disk at \"%s\"',\n storage.file_format, storage.filename)\n storage.close()\n\n return items", "response": "The main entry point for a plugin."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef main():\n '''The entry point for the console script xbmcswift2.\n\n The 'xbcmswift2' script is command bassed, so the second argument is always\n the command to execute. Each command has its own parser options and usages.\n If no command is provided or the -h flag is used without any other\n commands, the general help message is shown.\n '''\n parser = OptionParser()\n if len(sys.argv) == 1:\n parser.set_usage(USAGE)\n parser.error('At least one command is required.')\n\n # spy sys.argv[1] in order to use correct opts/args\n command = sys.argv[1]\n\n if command == '-h':\n parser.set_usage(USAGE)\n opts, args = parser.parse_args()\n\n if command not in COMMANDS.keys():\n parser.error('Invalid command')\n\n # We have a proper command, set the usage and options list according to the\n # specific command\n manager = COMMANDS[command]\n if hasattr(manager, 'option_list'):\n for args, kwargs in manager.option_list:\n parser.add_option(*args, **kwargs)\n if hasattr(manager, 'usage'):\n parser.set_usage(manager.usage)\n\n opts, args = parser.parse_args()\n\n # Since we are calling a specific comamnd's manager, we no longer need the\n # actual command in sys.argv so we slice from position 1\n manager.run(opts, args[1:])", "response": "The main function of xbcmswift2."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nadding escape sequences to colorize text and make it beautiful.", "response": 
"def colorize_text(self, text):\n \"\"\"Adds escape sequences to colorize text and make it\n beautiful. To colorize text, prefix the text you want to color\n with the color (capitalized) wrapped in double angle brackets\n (i.e.: <<GREEN>>). End your string with <<NORMAL>>. If you\n don't, it will be done for you (assuming you used a color code\n in your string.\"\"\"\n\n # Take note of where the escape sequences are.\n rnormal = text.rfind('<<NORMAL')\n rany = text.rfind('<<')\n\n # Put in the escape sequences.\n for color, code in self.colors.items():\n text = text.replace('<<%s>>' % color, code)\n\n # Make sure that the last sequence is a NORMAL sequence.\n if rany > -1 and rnormal < rany:\n text += self.colors['NORMAL']\n\n return text"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nusing curses to print in the fanciest way possible.", "response": "def write(self, text):\n 'Uses curses to print in the fanciest way possible.'\n\n # Add color to the terminal.\n if not self.no_color:\n text = self.colorize_text(text)\n else:\n pattern = re.compile('\\<\\<[A-Z]*?\\>\\>')\n text = pattern.sub('', text)\n\n text += '\\n'\n\n self.buffer.write(text)\n\n return self"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef say(self, input=None, **kwargs):\n params = self._get_params(input, kwargs)\n try:\n reply = self.session.get(\n self.url, params=params, timeout=self.timeout)\n except requests.Timeout:\n raise Timeout(self.timeout)\n else:\n try:\n data = reply.json()\n except ValueError as error:\n raise DecodeError(error)\n else:\n if reply.status_code == 200:\n self.data = data\n return data.get('output')\n else:\n raise APIError(data.get('error'), data.get('status'))", "response": "Talk to Cleverbot.\n\n Arguments:\n input: The input argument is what you want to say to Cleverbot,\n such as \"hello\".\n tweak1-3: Changes Cleverbot's mood.\n **kwargs: Keyword arguments to update the request 
parameters with.\n\n Returns:\n Cleverbot's reply.\n\n Raises:\n APIError: A Cleverbot API error occurred.\n DecodeError: An error occurred while reading the reply.\n Timeout: The request timed out."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning an int found in the named file if there is one and if there is a running process with that process id.", "response": "def live_pidfile(pidfile): # pragma: no cover\n \"\"\"(pidfile:str) -> int | None\n Returns an int found in the named file, if there is one,\n and if there is a running process with that process id.\n Return None if no such process exists.\n \"\"\"\n pid = read_pidfile(pidfile)\n if pid:\n try:\n kill(int(pid), 0)\n return pid\n except OSError as e:\n if e.errno == errno.EPERM:\n return pid\n return None"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _turn_sigterm_into_systemexit(): # pragma: no cover\n try:\n import signal\n except ImportError:\n return\n def handle_term(signo, frame):\n raise SystemExit\n signal.signal(signal.SIGTERM, handle_term)", "response": "Turn a SIGTERM exception into a SystemExit exception."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef wsgiref_server_runner(wsgi_app, global_conf, **kw): # pragma: no cover\n from wsgiref.simple_server import make_server, WSGIServer\n\n host = kw.get('host', '0.0.0.0')\n port = int(kw.get('port', 8080))\n threaded = asbool(kw.get('wsgiref.threaded', False))\n\n server_class = WSGIServer\n certfile = kw.get('wsgiref.certfile')\n keyfile = kw.get('wsgiref.keyfile')\n scheme = 'http'\n\n if certfile and keyfile:\n \"\"\"\n based on code from nullege:\n description='Dropbox REST API Client with more consistent responses.',\n author='Rick van Hattem',\n author_email='Rick@Wol.ph',\n url='http://wol.ph/',\n \"\"\"\n import ssl\n class SecureWSGIServer(WSGIServer):\n\n def 
get_request(self):\n socket, client_address = WSGIServer.get_request(self)\n socket = ssl.wrap_socket(socket,\n server_side=True,\n certfile=certfile,\n keyfile=keyfile)\n return socket, client_address\n\n port = int(kw.get('port', 4443))\n server_class = SecureWSGIServer\n\n if threaded:\n from SocketServer import ThreadingMixIn\n class GearboxWSGIServer(ThreadingMixIn, server_class): pass\n server_type = 'Threaded'\n else:\n class GearboxWSGIServer(server_class): pass\n server_type = 'Standard'\n\n server = make_server(host, port, wsgi_app, server_class=GearboxWSGIServer)\n if certfile and keyfile:\n server_type += ' Secure'\n scheme += 's'\n ServeCommand.out('Starting %s HTTP server on %s://%s:%s' % (server_type, scheme, host, port))\n server.serve_forever()", "response": "This function is the main function for the wsgiref server."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nparses the given variables like a = b c = d into a dictionary of key = value.", "response": "def parse_vars(self, args):\n \"\"\"\n Given variables like ``['a=b', 'c=d']`` turns it into ``{'a':\n 'b', 'c': 'd'}``\n \"\"\"\n result = {}\n for arg in args:\n if '=' not in arg:\n raise ValueError(\n 'Variable assignment %r invalid (no \"=\")'\n % arg)\n name, value = arg.split('=', 1)\n result[name] = value\n return result"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget proper arguments for re - running the command.", "response": "def get_fixed_argv(self): # pragma: no cover\n \"\"\"Get proper arguments for re-running the command.\n\n This is primarily for fixing some issues under Windows.\n\n First, there was a bug in Windows when running an executable\n located at a path with a space in it. 
This has become a\n non-issue with current versions of Python and Windows,\n so we don't take measures like adding quotes or calling\n win32api.GetShortPathName() as was necessary in former times.\n\n Second, depending on whether gearbox was installed as an egg\n or a wheel under Windows, it is run as a .py or an .exe stub.\n In the first case, we need to run it through the interpreter.\n On other operating systems, we can re-run the command as is.\n\n \"\"\"\n argv = sys.argv[:]\n if sys.platform == 'win32' and argv[0].endswith('.py'):\n argv.insert(0, sys.executable)\n return argv"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef package_contents(self):\n 'Returns a dictionary of file information'\n\n if self.contents_cache:\n return self.contents_cache\n\n # Get a list of ZipInfo objects.\n files = self.zf.infolist()\n out_files = {}\n\n # Iterate through each file in the XPI.\n for file_ in files:\n\n file_doc = {'name': file_.filename,\n 'size': file_.file_size,\n 'name_lower': file_.filename.lower()}\n\n file_doc['extension'] = file_doc['name_lower'].split('.')[-1]\n\n out_files[file_.filename] = file_doc\n\n self.contents_cache = out_files\n return out_files", "response": "Returns a dictionary of file information"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nwrite a blob of data to the XPI manager.", "response": "def write(self, name, data):\n \"\"\"Write a blob of data to the XPI manager.\"\"\"\n if isinstance(data, StringIO):\n self.zf.writestr(name, data.getvalue())\n else:\n self.zf.writestr(name, to_utf8(data))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef write_file(self, name, path=None):\n\n if path is None:\n path = name\n\n self.zf.write(path, name)", "response": "Write the contents of a file from the disk to the XPI."} {"SOURCE": "codesearchnet", "instruction": 
"Write a Python 3 function that can\nappend key - val pairs to the end of a URL. Useful for passing arbitrary HTTP headers to XBMC.", "response": "def xbmc_url(url, **options):\n '''Appends key/val pairs to the end of a URL. Useful for passing arbitrary\n HTTP headers to XBMC to be used when fetching a media resource, e.g.\n cookies.\n '''\n optionstring = urllib.urlencode(options)\n if optionstring:\n return url + '|' + optionstring\n return url"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn a dict where items with a None value are removed", "response": "def clean_dict(dct):\n '''Returns a dict where items with a None value are removed'''\n return dict((key, val) for key, val in dct.items() if val is not None)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning a new dictionary where values which aren t instances of basestring are pickled. Also a new key _pickled contains a comma separated list of keys corresponding to the pickled values. Also a new key _pickled contains a comma separated list of keys corresponding to the pickled values.", "response": "def pickle_dict(items):\n '''Returns a new dictionary where values which aren't instances of\n basestring are pickled. Also, a new key '_pickled' contains a comma\n separated list of keys corresponding to the pickled values.\n '''\n ret = {}\n pickled_keys = []\n for key, val in items.items():\n if isinstance(val, basestring):\n ret[key] = val\n else:\n pickled_keys.append(key)\n ret[key] = pickle.dumps(val)\n if pickled_keys:\n ret['_pickled'] = ','.join(pickled_keys)\n return ret"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef unpickle_args(items):\n '''Takes a dict and unpickles values whose keys are found in\n '_pickled' key.\n\n >>> unpickle_args({'_pickled': ['foo']. 'foo': ['I3%0A.']})\n {'foo': 3}\n '''\n # Technically there can be more than one _pickled value. 
At this point\n # we'll just use the first one\n pickled = items.pop('_pickled', None)\n if pickled is None:\n return items\n\n pickled_keys = pickled[0].split(',')\n ret = {}\n for key, vals in items.items():\n if key in pickled_keys:\n ret[key] = [pickle.loads(val) for val in vals]\n else:\n ret[key] = vals\n return ret", "response": "Takes a dict and unpickles values whose keys are found in\n _pickled key."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn a dict pickled with pickle_dict", "response": "def unpickle_dict(items):\n '''Returns a dict pickled with pickle_dict'''\n pickled_keys = items.pop('_pickled', '').split(',')\n ret = {}\n for key, val in items.items():\n if key in pickled_keys:\n ret[key] = pickle.loads(val)\n else:\n ret[key] = val\n return ret"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef download_page(url, data=None):\n '''Returns the response for the given url. The optional data argument is\n passed directly to urlopen.'''\n conn = urllib2.urlopen(url, data)\n resp = conn.read()\n conn.close()\n return resp", "response": "Returns the response for the given url. 
The optional data argument is\n passed directly to urlopen."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef unhex(inp):\n '''unquote(r'abc\\x20def') -> 'abc def'.'''\n res = inp.split(r'\\x')\n for i in xrange(1, len(res)):\n item = res[i]\n try:\n res[i] = _hextochr[item[:2]] + item[2:]\n except KeyError:\n res[i] = '%' + item\n except UnicodeDecodeError:\n res[i] = unichr(int(item[:2], 16)) + item[2:]\n return ''.join(res)", "response": "unquote ( r \\ x20def ) -> abc def."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef load_commands(self, namespace):\n for ep in pkg_resources.iter_entry_points(namespace):\n LOG.debug('found command %r', ep.name)\n cmd_name = (ep.name.replace('_', ' ')\n if self.convert_underscores\n else ep.name)\n self.commands[cmd_name] = ep\n return", "response": "Load all the commands from an entrypoint"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef find_command(self, argv):\n search_args = argv[:]\n name = ''\n while search_args:\n if search_args[0].startswith('-'):\n name = '%s %s' % (name, search_args[0])\n raise ValueError('Invalid command %r' % name)\n next_val = search_args.pop(0)\n name = '%s %s' % (name, next_val) if name else next_val\n if name in self.commands:\n cmd_ep = self.commands[name]\n if hasattr(cmd_ep, 'resolve'):\n cmd_factory = cmd_ep.resolve()\n else:\n # NOTE(dhellmann): Some fake classes don't take\n # require as an argument. 
Yay?\n arg_spec = inspect.getargspec(cmd_ep.load)\n if 'require' in arg_spec[0]:\n cmd_factory = cmd_ep.load(require=False)\n else:\n cmd_factory = cmd_ep.load()\n return (cmd_factory, name, search_args)\n else:\n raise ValueError('Unknown command %r' % next(iter(argv), ''))", "response": "Given an argument list find a command and\n return the processor and any remaining arguments."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef setup_logging(config_uri, fileConfig=fileConfig,\n configparser=configparser):\n \"\"\"\n Set up logging via the logging module's fileConfig function with the\n filename specified via ``config_uri`` (a string in the form\n ``filename#sectionname``).\n\n ConfigParser defaults are specified for the special ``__file__``\n and ``here`` variables, similar to PasteDeploy config loading.\n \"\"\"\n path, _ = _getpathsec(config_uri, None)\n parser = configparser.ConfigParser()\n parser.read([path])\n if parser.has_section('loggers'):\n config_file = os.path.abspath(path)\n config_options = dict(\n __file__=config_file,\n here=os.path.dirname(config_file)\n )\n\n fileConfig(config_file, config_options,\n disable_existing_loggers=False)", "response": "Setup logging via the logging module s fileConfig function with the specified config_uri."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\npreparing a file - based package for validation.", "response": "def prepare_package(err, path, expectation=0, for_appversions=None,\n timeout=-1):\n \"\"\"Prepares a file-based package for validation.\n\n timeout is the number of seconds before validation is aborted.\n If timeout is -1 then no timeout checking code will run.\n \"\"\"\n\n package = None\n try:\n # Test that the package actually exists. 
I consider this Tier 0\n # since we may not even be dealing with a real file.\n if not os.path.isfile(path):\n err.error(('main', 'prepare_package', 'not_found'),\n 'The package could not be found')\n return\n\n # Pop the package extension.\n package_extension = os.path.splitext(path)[1]\n package_extension = package_extension.lower()\n\n def timeout_handler(signum, frame):\n raise validator.ValidationTimeout(timeout)\n\n if timeout != -1:\n signal.signal(signal.SIGALRM, timeout_handler)\n signal.setitimer(signal.ITIMER_REAL, timeout)\n\n if package_extension == '.xml':\n test_search(err, path, expectation)\n elif package_extension not in ('.xpi', '.jar'):\n err.error(('main', 'prepare_package', 'unrecognized'),\n 'The package is not of a recognized type.')\n else:\n package = open(path, 'rb')\n test_package(err, package, path, expectation, for_appversions)\n\n err.metadata['is_extension'] = err.detected_type == PACKAGE_EXTENSION\n\n except validator.ValidationTimeout:\n err.system_error(\n msg_id='validation_timeout',\n message='Validation has timed out',\n signing_severity='high',\n description=('Validation was unable to complete in the allotted '\n 'time. 
This is most likely due to the size or '\n 'complexity of your add-on.',\n 'This timeout has been logged, but please consider '\n 'filing an issue report here: '\n 'https://bit.ly/1POrYYU'),\n exc_info=sys.exc_info())\n\n except Exception:\n err.system_error(exc_info=sys.exc_info())\n\n finally:\n # Remove timers and signal handlers regardless of whether\n # we've completed successfully or the timer has fired.\n if timeout != -1:\n signal.setitimer(signal.ITIMER_REAL, 0)\n signal.signal(signal.SIGALRM, signal.SIG_DFL)\n\n if package:\n package.close()\n\n decorator.cleanup()"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef populate_chrome_manifest(err, xpi_package):\n \"Loads the chrome.manifest if it's present\"\n\n if 'chrome.manifest' in xpi_package:\n chrome_data = xpi_package.read('chrome.manifest')\n chrome = ChromeManifest(chrome_data, 'chrome.manifest')\n\n chrome_recursion_buster = set()\n\n # Handle the case of manifests linked from the manifest.\n def get_linked_manifest(path, from_path, from_chrome, from_triple):\n\n if path in chrome_recursion_buster:\n err.warning(\n err_id=('submain', 'populate_chrome_manifest',\n 'recursion'),\n warning='Linked manifest recursion detected.',\n description='A chrome registration file links back to '\n 'itself. 
This can cause a multitude of '\n 'issues.',\n filename=path)\n return\n\n # Make sure the manifest is properly linked\n if path not in xpi_package:\n err.notice(\n err_id=('submain', 'populate_chrome_manifest', 'linkerr'),\n notice='Linked manifest could not be found.',\n description=('A linked manifest file could not be found '\n 'in the package.',\n 'Path: %s' % path),\n filename=from_path,\n line=from_triple['line'],\n context=from_chrome.context)\n return\n\n chrome_recursion_buster.add(path)\n\n manifest = ChromeManifest(xpi_package.read(path), path)\n for triple in manifest.triples:\n yield triple\n\n if triple['subject'] == 'manifest':\n subpath = triple['predicate']\n # If the path is relative, make it relative to the current\n # file.\n if not subpath.startswith('/'):\n subpath = '%s/%s' % (\n '/'.join(path.split('/')[:-1]), subpath)\n\n subpath = subpath.lstrip('/')\n\n for subtriple in get_linked_manifest(\n subpath, path, manifest, triple):\n yield subtriple\n\n chrome_recursion_buster.discard(path)\n\n chrome_recursion_buster.add('chrome.manifest')\n\n # Search for linked manifests in the base manifest.\n for extra_manifest in chrome.get_triples(subject='manifest'):\n # When one is found, add its triples to our own.\n for triple in get_linked_manifest(extra_manifest['predicate'],\n 'chrome.manifest', chrome,\n extra_manifest):\n chrome.triples.append(triple)\n\n chrome_recursion_buster.discard('chrome.manifest')\n\n # Create a reference so we can get the chrome manifest later, but make\n # it pushable so we don't run chrome manifests in JAR files.\n err.save_resource('chrome.manifest', chrome, pushable=True)\n # Create a non-pushable reference for tests that need to access the\n # chrome manifest from within JAR files.\n err.save_resource('chrome.manifest_nopush', chrome, pushable=False)", "response": "Loads the chrome. 
manifest if it s present"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef detect_type(err, install_rdf=None, xpi_package=None):\n\n # The types in the install.rdf don't pair up 1:1 with the type\n # system that we're using for expectations and the like. This is\n # to help translate between the two.\n translated_types = {'2': PACKAGE_EXTENSION,\n '4': PACKAGE_THEME,\n '8': PACKAGE_LANGPACK,\n '32': PACKAGE_MULTI,\n '64': PACKAGE_DICTIONARY,\n # New \"experiment\" types: see bug 1220097 and\n # https://github.com/mozilla/addons-server/issues/3315\n '128': PACKAGE_EXTENSION,\n '256': PACKAGE_EXTENSION,}\n\n # If we're missing our install.rdf file, we can try to make some\n # assumptions.\n if install_rdf is None:\n types = {'xpi': PACKAGE_DICTIONARY}\n\n err.notice(('typedetection',\n 'detect_type',\n 'missing_install_rdf'),\n 'install.rdf was not found.',\n 'The type should be determined by install.rdf if present. '\n \"If it isn't, we still need to know the type.\")\n\n # If we know what the file type might be, return it.\n if xpi_package.extension in types:\n return types[xpi_package.extension]\n # Otherwise, we're out of luck :(\n else:\n return None\n\n # Attempt to locate the <em:type> node in the RDF doc.\n type_uri = install_rdf.uri('type')\n type_ = install_rdf.get_object(None, type_uri)\n\n # Dictionaries are weird too, they might not have the obligatory\n # em:type. We can assume that if they have a /dictionaries/ folder,\n # they are a dictionary because even if they aren't, dictionaries\n # have an extraordinarily strict set of rules and file filters that\n # must be passed. 
It's so crazy secure that it's cool if we use it\n # as kind of a fallback.\n\n if any(file_ for file_ in xpi_package if\n file_.startswith('dictionaries/')):\n if type_ != '64':\n err.error(('typedetection',\n 'dictionary_valid_type',\n 'invalid_em_type'),\n 'Invalid <em:type> value.',\n 'The package appears to be a dictionary but does not have '\n 'the correct <em:type> set in the install manifest.')\n return PACKAGE_DICTIONARY\n\n if type_ is not None:\n if type_ in translated_types:\n err.save_resource('is_multipackage', type_ == '32', pushable=True)\n # Make sure we translate back to the normalized version\n return translated_types[type_]\n else:\n err.error(('typedetection',\n 'detect_type',\n 'invalid_em_type'),\n 'Invalid <em:type> value.',\n 'The only valid values for <em:type> are 2, 4, 8, and '\n '32. Any other values are either invalid or deprecated.',\n 'install.rdf')\n return\n else:\n err.notice(\n err_id=('typedetection',\n 'detect_type',\n 'no_em:type'),\n notice='No <em:type> element found in install.rdf',\n description=\"It isn't always required, but it is the most reliable \"\n 'method for determining add-on type.',\n filename='install.rdf')\n\n # There's no type element, so the spec says that it's either a\n # theme or an extension. At this point, we know that it isn't\n # a dictionary, language pack, or multiple extension pack.\n extensions = {'jar': '4',\n 'xpi': '2'}\n\n # If the package's extension is listed in the [tiny] extension\n # dictionary, then just return that. We'll validate against that\n # add-on type's layout later. Better to false positive than to false\n # negative.\n if xpi_package.extension in extensions:\n # Make sure it gets translated back to the normalized version\n install_rdf_type = extensions[xpi_package.extension]\n return translated_types[install_rdf_type]", "response": "Detects the type of add - on being validated based on the install. 
rdf file extension and other properties."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ndetecting parse and validate an OpenSearch provider", "response": "def detect_opensearch(err, package, listed=False):\n 'Detect, parse, and validate an OpenSearch provider'\n\n # Parse the file.\n try:\n # Check if it is a file object.\n if hasattr(package, 'read'):\n srch_prov = parse(package)\n else:\n # It's not a file object; open it (the XML parser is bad at this).\n with open(package, 'rb') as package_file:\n srch_prov = parse(package_file)\n except DefusedXmlException:\n url = 'https://pypi.python.org/pypi/defusedxml/0.3#attack-vectors'\n err.error(\n err_id=('opensearch', 'security_error'),\n error='OpenSearch: XML Security Error',\n description='The OpenSearch extension could not be parsed due to '\n 'a security error in the XML. See {url} for more '\n 'info.'.format(url=url))\n return err\n except ExpatError:\n err.error(\n err_id=('opensearch', 'parse_error'),\n error='OpenSearch: XML Parse Error',\n description='The OpenSearch extension could not be parsed due to '\n 'a syntax error in the XML.')\n return err\n\n # Make sure that the root element is OpenSearchDescription.\n if srch_prov.documentElement.tagName != 'OpenSearchDescription':\n err.error(\n err_id=('opensearch', 'invalid_document_root'),\n error='OpenSearch: Invalid Document Root',\n description='The root element of the OpenSearch provider is not '\n \"'OpenSearchDescription'.\")\n\n # Per bug 617822\n if not srch_prov.documentElement.hasAttribute('xmlns'):\n err.error(\n err_id=('opensearch', 'no_xmlns'),\n error='OpenSearch: Missing XMLNS attribute',\n description='The XML namespace attribute is missing from the '\n 'OpenSearch document.')\n\n if ('xmlns' not in srch_prov.documentElement.attributes.keys() or\n srch_prov.documentElement.attributes['xmlns'].value not in (\n 'http://a9.com/-/spec/opensearch/1.0/',\n 'http://a9.com/-/spec/opensearch/1.1/',\n 
'http://a9.com/-/spec/opensearchdescription/1.1/',\n 'http://a9.com/-/spec/opensearchdescription/1.0/')):\n err.error(\n err_id=('opensearch', 'invalid_xmlns'),\n error='OpenSearch: Bad XMLNS attribute',\n description='The XML namespace attribute contains an '\n 'invalid value.')\n\n # Make sure that there is exactly one ShortName.\n sn = srch_prov.documentElement.getElementsByTagName('ShortName')\n if not sn:\n err.error(\n err_id=('opensearch', 'missing_shortname'),\n error='OpenSearch: Missing <ShortName> elements',\n description='ShortName elements are mandatory OpenSearch provider '\n 'elements.')\n elif len(sn) > 1:\n err.error(\n err_id=('opensearch', 'extra_shortnames'),\n error='OpenSearch: Too many <ShortName> elements',\n description='Too many ShortName elements exist in the OpenSearch '\n 'provider.')\n else:\n sn_children = sn[0].childNodes\n short_name = 0\n for node in sn_children:\n if node.nodeType == node.TEXT_NODE:\n short_name += len(node.data)\n if short_name > 16:\n err.error(\n err_id=('opensearch', 'big_shortname'),\n error='OpenSearch: <ShortName> element too long',\n description='The ShortName element must contain fewer than '\n 'seventeen characters.')\n\n # Make sure that there is exactly one Description.\n if len(srch_prov.documentElement.getElementsByTagName('Description')) != 1:\n err.error(\n err_id=('opensearch', 'missing_description'),\n error='OpenSearch: Invalid number of <Description> elements',\n description='There are too many or too few Description elements '\n 'in the OpenSearch provider.')\n\n # Grab the URLs and make sure that there is at least one.\n urls = srch_prov.documentElement.getElementsByTagName('Url')\n if not urls:\n err.error(\n err_id=('opensearch', 'missing_url'),\n error='OpenSearch: Missing <Url> elements',\n description='The OpenSearch provider is missing a Url element.')\n\n if listed and any(url.hasAttribute('rel') and\n url.attributes['rel'].value == 'self' for\n url in urls):\n err.error(\n 
err_id=('opensearch', 'rel_self'),\n error='OpenSearch: <Url> elements may not be rel=self',\n description='Per AMO guidelines, OpenSearch providers cannot '\n \"contain <Url /> elements with a 'rel' attribute \"\n \"pointing to the URL's current location. It must be \"\n 'removed before posting this provider to AMO.')\n\n acceptable_mimes = ('text/html', 'application/xhtml+xml')\n acceptable_urls = [url for url in urls if url.hasAttribute('type') and\n url.attributes['type'].value in acceptable_mimes]\n\n # At least one Url must be text/html\n if not acceptable_urls:\n err.error(\n err_id=('opensearch', 'missing_url_texthtml'),\n error=\"OpenSearch: Missing <Url> element with 'text/html' type\",\n description='OpenSearch providers must have at least one Url '\n \"element with a type attribute set to 'text/html'.\")\n\n # Make sure that each Url has the require attributes.\n for url in acceptable_urls:\n\n if url.hasAttribute('rel') and url.attributes['rel'].value == 'self':\n continue\n\n if url.hasAttribute('method') and \\\n url.attributes['method'].value.upper() not in ('GET', 'POST'):\n err.error(\n err_id=('opensearch', 'missing_method'),\n error=\"OpenSearch: <Url> element with invalid 'method'\",\n description='A Url element in the OpenSearch provider lists a '\n 'method attribute, but the value is not GET or '\n 'POST.')\n\n # Test for attribute presence.\n if not url.hasAttribute('template'):\n err.error(\n err_id=('opensearch', 'missing_template'),\n error='OpenSearch: <Url> element missing template attribute',\n description='<Url> elements of OpenSearch providers must '\n 'include a template attribute.')\n else:\n url_template = url.attributes['template'].value\n if url_template[:4] != 'http':\n err.error(\n err_id=('opensearch', 'invalid_template'),\n error='OpenSearch: `<Url>` element with invalid '\n '`template`',\n description='A `<Url>` element in the OpenSearch '\n 'provider lists a template attribute, but '\n 'the value is not a valid HTTP 
URL.')\n\n # Make sure that there is a {searchTerms} placeholder in the\n # URL template.\n found_template = url_template.count('{searchTerms}') > 0\n\n # If we didn't find it in a simple parse of the template=\"\"\n # attribute, look deeper at the <Param /> elements.\n if not found_template:\n for param in url.getElementsByTagName('Param'):\n # As long as we're in here and dependent on the\n # attributes, we'd might as well validate them.\n attribute_keys = param.attributes.keys()\n if 'name' not in attribute_keys or \\\n 'value' not in attribute_keys:\n err.error(\n err_id=('opensearch', 'param_missing_attrs'),\n error='OpenSearch: `<Param>` element missing '\n 'name/value',\n description='Param elements in the OpenSearch '\n 'provider must include a name and a '\n 'value attribute.')\n\n param_value = (param.attributes['value'].value if\n 'value' in param.attributes.keys() else\n '')\n if param_value.count('{searchTerms}'):\n found_template = True\n\n # Since we're in a validating spirit, continue\n # looking for more errors and don't break\n\n # If the template still hasn't been found...\n if not found_template:\n tpl = url.attributes['template'].value\n err.error(\n err_id=('opensearch', 'template_not_found'),\n error='OpenSearch: <Url> element missing template '\n 'placeholder',\n description=('`<Url>` elements of OpenSearch providers '\n 'must include a template attribute or '\n 'specify a placeholder with '\n '`{searchTerms}`.',\n 'Missing template: %s' % tpl))\n\n # Make sure there are no updateURL elements\n if srch_prov.getElementsByTagName('updateURL'):\n err.error(\n err_id=('opensearch', 'banned_updateurl'),\n error='OpenSearch: <updateURL> elements are banned in OpenSearch '\n 'providers.',\n description='OpenSearch providers may not contain <updateURL> '\n 'elements.')\n\n # The OpenSearch provider is valid!\n return err"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef copy_dir(source, 
dest, vars, verbosity=1, simulate=False, indent=0,\n sub_vars=True, interactive=False, overwrite=True,\n template_renderer=None, out_=sys.stdout):\n \"\"\"\n Copies the ``source`` directory to the ``dest`` directory.\n\n ``vars``: A dictionary of variables to use in any substitutions.\n\n ``verbosity``: Higher numbers will show more about what is happening.\n\n ``simulate``: If true, then don't actually *do* anything.\n\n ``indent``: Indent any messages by this amount.\n\n ``sub_vars``: If true, variables in ``_tmpl`` files and ``+var+``\n in filenames will be substituted.\n\n ``overwrite``: If false, then don't ever overwrite anything.\n\n ``interactive``: If you are overwriting a file and interactive is\n true, then ask before overwriting.\n\n ``template_renderer``: This is a function for rendering templates (if you\n don't want to use string.Template). It should have the signature\n ``template_renderer(content_as_string, vars_as_dict,\n filename=filename)``.\n \"\"\"\n def out(msg):\n out_.write(msg)\n out_.write('\\n')\n out_.flush()\n # This allows you to use a leading +dot+ in filenames which would\n # otherwise be skipped because leading dots make the file hidden:\n vars.setdefault('dot', '.')\n vars.setdefault('plus', '+')\n use_pkg_resources = isinstance(source, tuple)\n if use_pkg_resources:\n names = sorted(pkg_resources.resource_listdir(source[0], source[1]))\n else:\n names = sorted(os.listdir(source))\n pad = ' '*(indent*2)\n if not os.path.exists(dest):\n if verbosity >= 1:\n out('%sCreating %s/' % (pad, dest))\n if not simulate:\n makedirs(dest, verbosity=verbosity, pad=pad)\n elif verbosity >= 2:\n out('%sDirectory %s exists' % (pad, dest))\n for name in names:\n if use_pkg_resources:\n full = '/'.join([source[1], name])\n else:\n full = os.path.join(source, name)\n reason = should_skip_file(name)\n if reason:\n if verbosity >= 2:\n reason = pad + reason % {'filename': full}\n out(reason)\n continue # pragma: no cover\n if sub_vars:\n dest_full = 
os.path.join(dest, substitute_filename(name, vars))\n sub_file = False\n if dest_full.endswith('_tmpl'):\n dest_full = dest_full[:-5]\n sub_file = sub_vars\n if use_pkg_resources and pkg_resources.resource_isdir(source[0], full):\n if verbosity:\n out('%sRecursing into %s' % (pad, os.path.basename(full)))\n copy_dir((source[0], full), dest_full, vars, verbosity, simulate,\n indent=indent+1,\n sub_vars=sub_vars, interactive=interactive,\n template_renderer=template_renderer, out_=out_)\n continue\n elif not use_pkg_resources and os.path.isdir(full):\n if verbosity:\n out('%sRecursing into %s' % (pad, os.path.basename(full)))\n copy_dir(full, dest_full, vars, verbosity, simulate,\n indent=indent+1,\n sub_vars=sub_vars, interactive=interactive,\n template_renderer=template_renderer, out_=out_)\n continue\n elif use_pkg_resources:\n content = pkg_resources.resource_string(source[0], full)\n else:\n f = open(full, 'rb')\n content = f.read()\n f.close()\n if sub_file:\n try:\n content = substitute_content(\n content, vars, filename=full,\n template_renderer=template_renderer\n )\n except SkipTemplate:\n continue # pragma: no cover\n if content is None:\n continue # pragma: no cover\n already_exists = os.path.exists(dest_full)\n if already_exists:\n f = open(dest_full, 'rb')\n old_content = f.read()\n f.close()\n if old_content == content:\n if verbosity:\n out('%s%s already exists (same content)' %\n (pad, dest_full))\n continue # pragma: no cover\n if interactive:\n if not query_interactive(\n native_(full, fsenc), native_(dest_full, fsenc),\n native_(content, fsenc), native_(old_content, fsenc),\n simulate=simulate, out_=out_):\n continue\n elif not overwrite:\n continue # pragma: no cover\n if verbosity and use_pkg_resources:\n out('%sCopying %s to %s' % (pad, full, dest_full))\n elif verbosity:\n out(\n '%sCopying %s to %s' % (pad, os.path.basename(full),\n dest_full))\n if not simulate:\n f = open(dest_full, 'wb')\n f.write(content)\n f.close()", "response": "Copy a 
directory from source to dest."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncheck if a file should be skipped based on its name. Returns the reason if it should be skipped.", "response": "def should_skip_file(name):\n \"\"\"\n Checks if a file should be skipped based on its name.\n\n If it should be skipped, returns the reason, otherwise returns\n None.\n \"\"\"\n if name.startswith('.'):\n return 'Skipping hidden file %(filename)s'\n if name.endswith('~') or name.endswith('.bak'):\n return 'Skipping backup file %(filename)s'\n if name.endswith('.pyc') or name.endswith('.pyo'):\n return 'Skipping %s file ' % os.path.splitext(name)[1] + '%(filename)s'\n if name.endswith('$py.class'):\n return 'Skipping $py.class file %(filename)s'\n if name in ('CVS', '_darcs'):\n return 'Skipping version control directory %(filename)s'\n return None"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ntaking any actions necessary based on command line options", "response": "def setup_options(opts):\n '''Takes any actions necessary based on command line options'''\n if opts.quiet:\n logger.log.setLevel(logging.WARNING)\n logger.GLOBAL_LOG_LEVEL = logging.WARNING\n\n if opts.verbose:\n logger.log.setLevel(logging.DEBUG)\n logger.GLOBAL_LOG_LEVEL = logging.DEBUG"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_addon_module_name(addonxml_filename):\n '''Attempts to extract a module name for the given addon's addon.xml file.\n Looks for the 'xbmc.python.pluginsource' extension node and returns the\n addon's filename without the .py suffix.\n '''\n try:\n xml = ET.parse(addonxml_filename).getroot()\n except IOError:\n sys.exit('Cannot find an addon.xml file in the current working '\n 'directory. 
Please run this command from the root directory '\n 'of an addon.')\n\n try:\n plugin_source = (ext for ext in xml.findall('extension') if\n ext.get('point') == 'xbmc.python.pluginsource').next()\n except StopIteration:\n sys.exit('ERROR, no pluginsource in addonxml')\n\n return plugin_source.get('library').split('.')[0]", "response": "Attempts to extract a module name for the given addon s addon. xml file and returns the addon s filename without the. py suffix."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef patch_plugin(plugin, path, handle=None):\n '''Patches a few attributes of a plugin instance to enable a new call to\n plugin.run()\n '''\n if handle is None:\n handle = plugin.request.handle\n patch_sysargv(path, handle)\n plugin._end_of_directory = False", "response": "Patches a few attributes of a plugin instance to enable a new call to\n plugin. run"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nperforms a breadth - first crawl of all possible routes from the starting path. Will only visit a URL once even if it is referenced multiple times in a plugin. Requires user interaction in between each fetch.", "response": "def crawl(plugin):\n '''Performs a breadth-first crawl of all possible routes from the\n starting path. Will only visit a URL once, even if it is referenced\n multiple times in a plugin. 
Requires user interaction in between each\n fetch.\n '''\n # TODO: use OrderedSet?\n paths_visited = set()\n paths_to_visit = set(item.get_path() for item in once(plugin))\n\n while paths_to_visit and continue_or_quit():\n path = paths_to_visit.pop()\n paths_visited.add(path)\n\n # Run the new listitem\n patch_plugin(plugin, path)\n new_paths = set(item.get_path() for item in once(plugin))\n\n # Filter new items by checking against urls_visited and\n # urls_tovisit\n paths_to_visit.update(path for path in new_paths\n if path not in paths_visited)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef run(opts, args):\n '''The run method for the 'run' command. Executes a plugin from the\n command line.\n '''\n setup_options(opts)\n\n mode = Modes.ONCE\n if len(args) > 0 and hasattr(Modes, args[0].upper()):\n _mode = args.pop(0).upper()\n mode = getattr(Modes, _mode)\n\n url = None\n if len(args) > 0:\n # A url was specified\n url = args.pop(0)\n\n plugin_mgr = PluginManager.load_plugin_from_addonxml(mode, url)\n plugin_mgr.run()", "response": "The run method for the run command. Executes a plugin from the\n command line."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef load_plugin_from_addonxml(cls, mode, url):\n '''Attempts to import a plugin's source code and find an instance of\n :class:`~xbmcswif2.Plugin`. 
Returns an instance of PluginManager if\n succesful.\n '''\n cwd = os.getcwd()\n sys.path.insert(0, cwd)\n module_name = get_addon_module_name(os.path.join(cwd, 'addon.xml'))\n addon = __import__(module_name)\n\n # Find the first instance of xbmcswift2.Plugin\n try:\n plugin = (attr_value for attr_value in vars(addon).values()\n if isinstance(attr_value, Plugin)).next()\n except StopIteration:\n sys.exit('Could\\'t find a Plugin instance in %s.py' % module_name)\n\n return cls(plugin, mode, url)", "response": "Attempts to import a plugin s source code and find an instance of\n :class ~xbmcswif2. Plugin."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef run(self):\n '''This method runs the the plugin in the appropriate mode parsed from\n the command line options.\n '''\n handle = 0\n handlers = {\n Modes.ONCE: once,\n Modes.CRAWL: crawl,\n Modes.INTERACTIVE: interactive,\n }\n handler = handlers[self.mode]\n patch_sysargv(self.url or 'plugin://%s/' % self.plugin.id, handle)\n return handler(self.plugin)", "response": "This method runs the plugin in the appropriate mode parsed from\n the command line options."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _setup_config(self, dist, filename, section, vars, verbosity):\n modules = [line.strip() for line in dist.get_metadata_lines('top_level.txt')\n if line.strip() and not line.strip().startswith('#')]\n\n if not modules:\n print('No modules are listed in top_level.txt')\n print('Try running python setup.py egg_info to regenerate that file')\n\n for mod_name in modules:\n mod_name = mod_name + '.websetup'\n try:\n mod = self._import_module(mod_name)\n except ImportError as e:\n print(e)\n desc = getattr(e, 'args', ['No module named websetup'])[0]\n if not desc.startswith('No module named websetup'):\n raise\n mod = None\n\n if mod is None:\n continue\n\n if hasattr(mod, 'setup_app'):\n 
if verbosity:\n print('Running setup_app() from %s' % mod_name)\n self._call_setup_app(mod.setup_app, filename, section, vars)\n elif hasattr(mod, 'setup_config'):\n if verbosity:\n print('Running setup_config() from %s' % mod_name)\n mod.setup_config(None, filename, section, vars)\n else:\n print('No setup_app() or setup_config() function in %s (%s)' % (mod.__name__, mod.__file__))", "response": "This function is called by the application setup. py to generate a new configuration file."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef say(self, input=None, **kwargs):\n params = self._get_params(input, kwargs)\n try:\n reply = yield from self.session.get(\n self.url, params=params, timeout=self.timeout)\n except asyncio.TimeoutError:\n raise Timeout(self.timeout)\n else:\n try:\n data = yield from reply.json()\n except ValueError as error:\n raise DecodeError(error)\n else:\n if reply.status == 200:\n self.data = data\n return data.get('output')\n else:\n raise APIError(data.get('error'), data.get('status'))", "response": "Talk to Cleverbot.\n\n Arguments:\n input: The input argument is what you want to say to Cleverbot,\n such as \"hello\".\n tweak1-3: Changes Cleverbot's mood.\n **kwargs: Keyword arguments to update the request parameters with.\n\n Returns:\n Cleverbot's reply.\n\n Raises:\n APIError: A Cleverbot API error occurred.\n DecodeError: An error occurred while reading the reply.\n Timeout: The request timed out."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef conversation(self, name=None, **kwargs):\n convo = Conversation(self, **kwargs)\n super().conversation(name, convo)\n return convo", "response": "Make a new conversation."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef validate(path, format='json',\n approved_applications=None,\n determined=True,\n listed=True,\n 
expectation=PACKAGE_ANY,\n for_appversions=None,\n overrides=None,\n timeout=-1,\n compat_test=False,\n **kw):\n \"\"\"\n Perform validation in one easy step!\n\n `path`:\n *Required*\n A file system path to the package to be validated.\n `format`:\n The format to return the results in. Defaults to \"json\". Currently, any\n other format will simply return the error bundle.\n `approved_applications`:\n Path to the list of approved application versions\n `determined`:\n If set to `False`, validation will halt at the end of the first tier\n that raises errors.\n `listed`:\n Whether the app is headed for the app marketplace or AMO. Defaults to\n `True`.\n `expectation`:\n The type of package that should be expected. Must be a symbolic\n constant from validator.constants (i.e.:\n validator.constants.PACKAGE_*). Defaults to PACKAGE_ANY.\n `for_appversions`:\n A dict of app GUIDs referencing lists of versions. Determines which\n version-dependent tests should be run.\n `timeout`:\n Number of seconds before aborting addon validation, or -1 to\n run with no timeout.\n `compat_test`:\n A flag to signal the validator to skip tests which should not be run\n during compatibility bumps. 
Defaults to `False`.\n \"\"\"\n\n bundle = ErrorBundle(listed=listed, determined=determined,\n overrides=overrides, for_appversions=for_appversions)\n bundle.save_resource('is_compat_test', compat_test)\n\n if approved_applications is None:\n approved_applications = os.path.join(os.path.dirname(__file__),\n 'app_versions.json')\n\n if isinstance(approved_applications, types.StringTypes):\n # Load up the target applications if the approved applications is a\n # path (string).\n with open(approved_applications) as approved_apps:\n apps = json.load(approved_apps)\n elif isinstance(approved_applications, dict):\n # If the lists of approved applications are already in a dict, just use\n # that instead of trying to pull from a file.\n apps = approved_applications\n else:\n raise ValueError('Unknown format for `approved_applications`.')\n\n constants.APPROVED_APPLICATIONS.clear()\n constants.APPROVED_APPLICATIONS.update(apps)\n\n submain.prepare_package(bundle, path, expectation,\n for_appversions=for_appversions,\n timeout=timeout)\n\n return format_result(bundle, format)", "response": "Validate a file system path."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef system_error(self, msg_id=None, message=None, description=None,\n validation_timeout=False, exc_info=None, **kw):\n \"\"\"Add an error message for an unexpected exception in validator\n code, and move it to the front of the error message list. 
If\n `exc_info` is supplied, the error will be logged.\n\n If the error is a validation timeout, it is re-raised unless\n `msg_id` is \"validation_timeout\".\"\"\"\n\n if exc_info:\n if (isinstance(exc_info[1], validator.ValidationTimeout) and\n msg_id != 'validation_timeout'):\n # These should always propagate to the top-level exception\n # handler, and be reported only once.\n raise exc_info[1]\n\n log.error('Unexpected error during validation: %s: %s'\n % (exc_info[0].__name__, exc_info[1]),\n exc_info=exc_info)\n\n full_id = ('validator', 'unexpected_exception')\n if msg_id:\n full_id += (msg_id,)\n\n self.error(full_id,\n message or 'An unexpected error has occurred.',\n description or\n ('Validation was unable to complete successfully due '\n 'to an unexpected error.',\n 'The error has been logged, but please consider '\n 'filing an issue report here: '\n 'https://bit.ly/1POrYYU'),\n tier=1, **kw)\n\n # Move the error message to the beginning of the list.\n self.errors.insert(0, self.errors.pop())", "response": "Add an error message for an unexpected exception in validator\n code and move it to the front of the error message list."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef drop_message(self, message):\n\n for type_ in 'errors', 'warnings', 'notices':\n list_ = getattr(self, type_)\n if message in list_:\n list_.remove(message)\n if 'signing_severity' in message:\n self.signing_summary[message['signing_severity']] -= 1\n return True\n\n return False", "response": "Drop the given message object from the appropriate message list. 
Returns True if the message was found otherwise False."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef set_tier(self, tier):\n 'Updates the tier and ending tier'\n self.tier = tier\n if tier > self.ending_tier:\n self.ending_tier = tier", "response": "Updates the tier and ending tier"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nstore a message in the appropriate message stack.", "response": "def _save_message(self, stack, type_, message, context=None,\n from_merge=False):\n 'Stores a message in the appropriate message stack.'\n\n uid = uuid.uuid4().hex\n message['uid'] = uid\n\n # Get the context for the message (if there's a context available)\n if context is not None:\n if isinstance(context, tuple):\n message['context'] = context\n else:\n message['context'] = (\n context.get_context(line=message['line'],\n column=message['column']))\n else:\n message['context'] = None\n\n if self.package_stack:\n if not isinstance(message['file'], list):\n message['file'] = [message['file']]\n\n message['file'] = self.package_stack + message['file']\n\n # Test that if for_appversions is set that we're only applying to\n # supported add-ons. 
THIS IS THE LAST FILTER BEFORE THE MESSAGE IS\n # ADDED TO THE STACK!\n if message['for_appversions']:\n if not self.supports_version(message['for_appversions']):\n if self.instant:\n print '(Instant error discarded)'\n self._print_message(type_, message, verbose=True)\n return\n elif self.version_requirements:\n # If there was no for_appversions but there were version\n # requirements detailed in the decorator, use the ones from the\n # decorator.\n message['for_appversions'] = self.version_requirements\n\n # Save the message to the stack.\n stack.append(message)\n\n # Mark the tier that the error occurred at.\n if message['tier'] is None:\n message['tier'] = self.tier\n\n # Build out the compatibility summary if possible.\n if message['compatibility_type'] and not from_merge:\n self.compat_summary['%ss' % message['compatibility_type']] += 1\n\n # Build out the message tree entry.\n if message['id']:\n tree = self.message_tree\n last_id = None\n for eid in message['id']:\n if last_id is not None:\n tree = tree[last_id]\n if eid not in tree:\n tree[eid] = {'__errors': 0,\n '__warnings': 0,\n '__notices': 0,\n '__messages': []}\n tree[eid]['__%s' % type_] += 1\n last_id = eid\n\n tree[last_id]['__messages'].append(uid)\n\n # If instant mode is turned on, output the message immediately.\n if self.instant:\n self._print_message(type_, message, verbose=True)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef failed(self, fail_on_warnings=True):\n\n return bool(self.errors) or (fail_on_warnings and bool(self.warnings))", "response": "Returns a boolean value describing whether the validation\n succeeded or not."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nretrieving an object that has been stored by another test.", "response": "def get_resource(self, name):\n 'Retrieves an object that has been stored by another test.'\n\n if name in self.resources:\n return self.resources[name]\n 
elif name in self.pushable_resources:\n return self.pushable_resources[name]\n else:\n return False"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nsaves an object such that it can be used by other tests.", "response": "def save_resource(self, name, resource, pushable=False):\n 'Saves an object such that it can be used by other tests.'\n\n if pushable:\n self.pushable_resources[name] = resource\n else:\n self.resources[name] = resource"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef push_state(self, new_file=''):\n 'Saves the current error state to parse subpackages'\n\n self.subpackages.append({'detected_type': self.detected_type,\n 'message_tree': self.message_tree,\n 'resources': self.pushable_resources,\n 'metadata': self.metadata})\n\n self.message_tree = {}\n self.pushable_resources = {}\n self.metadata = {'requires_chrome': False,\n 'listed': self.metadata.get('listed'),\n 'validator_version': validator.__version__}\n\n self.package_stack.append(new_file)", "response": "Saves the current error state to parse subpackages"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nretrieving the last saved state and restores it.", "response": "def pop_state(self):\n 'Retrieves the last saved state and restores it.'\n\n # Save a copy of the current state.\n state = self.subpackages.pop()\n metadata = self.metadata\n # We only rebuild message_tree anyway. 
No need to restore.\n\n # Copy the existing state back into place\n self.detected_type = state['detected_type']\n self.message_tree = state['message_tree']\n self.pushable_resources = state['resources']\n self.metadata = state['metadata']\n\n name = self.package_stack.pop()\n\n self.metadata.setdefault('sub_packages', {})[name] = metadata"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns a JSON summary of the validation operation.", "response": "def render_json(self):\n 'Returns a JSON summary of the validation operation.'\n\n types = {0: 'unknown',\n 1: 'extension',\n 2: 'theme',\n 3: 'dictionary',\n 4: 'langpack',\n 5: 'search',\n 8: 'webapp'}\n output = {'detected_type': types[self.detected_type],\n 'ending_tier': self.ending_tier,\n 'success': not self.failed(),\n 'messages': [],\n 'errors': len(self.errors),\n 'warnings': len(self.warnings),\n 'notices': len(self.notices),\n 'message_tree': self.message_tree,\n 'compatibility_summary': self.compat_summary,\n 'signing_summary': self.signing_summary,\n 'metadata': self.metadata}\n\n messages = output['messages']\n\n # Copy messages to the JSON output\n for error in self.errors:\n error['type'] = 'error'\n messages.append(error)\n\n for warning in self.warnings:\n warning['type'] = 'warning'\n messages.append(warning)\n\n for notice in self.notices:\n notice['type'] = 'notice'\n messages.append(notice)\n\n # Output the JSON.\n return json.dumps(output)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nprinting a summary of the validation process so far.", "response": "def print_summary(self, verbose=False, no_color=False):\n 'Prints a summary of the validation process so far.'\n\n types = {0: 'Unknown',\n 1: 'Extension/Multi-Extension',\n 2: 'Full Theme',\n 3: 'Dictionary',\n 4: 'Language Pack',\n 5: 'Search Provider',\n 7: 'Subpackage',\n 8: 'App'}\n detected_type = types[self.detected_type]\n\n buffer = StringIO()\n self.handler = 
OutputHandler(buffer, no_color)\n\n # Make a neat little printout.\n self.handler.write('\\n<<GREEN>>Summary:') \\\n .write('-' * 30) \\\n .write('Detected type: <<BLUE>>%s' % detected_type) \\\n .write('-' * 30)\n\n if self.failed():\n self.handler.write('<<BLUE>>Test failed! Errors:')\n\n # Print out all the errors/warnings:\n for error in self.errors:\n self._print_message('<<RED>>Error:<<NORMAL>>\\t',\n error, verbose)\n for warning in self.warnings:\n self._print_message('<<YELLOW>>Warning:<<NORMAL>> ',\n warning, verbose)\n else:\n self.handler.write('<<GREEN>>All tests succeeded!')\n\n if self.notices:\n for notice in self.notices:\n self._print_message(prefix='<<WHITE>>Notice:<<NORMAL>>\\t',\n message=notice,\n verbose=verbose)\n\n if 'is_jetpack' in self.metadata and verbose:\n self.handler.write('\\n')\n self.handler.write('<<GREEN>>Jetpack add-on detected.<<NORMAL>>\\n'\n 'Identified files:')\n if 'jetpack_identified_files' in self.metadata:\n for filename, data in \\\n self.metadata['jetpack_identified_files'].items():\n self.handler.write((' %s\\n' % filename) +\n (' %s : %s' % data))\n\n if 'jetpack_unknown_files' in self.metadata:\n self.handler.write('Unknown files:')\n for filename in self.metadata['jetpack_unknown_files']:\n self.handler.write(' %s' % filename)\n\n self.handler.write('\\n')\n if self.unfinished:\n self.handler.write('<<RED>>Validation terminated early')\n self.handler.write('Errors during validation are preventing '\n 'the validation process from completing.')\n self.handler.write('Use the <<YELLOW>>--determined<<NORMAL>> '\n 'flag to ignore these errors.')\n self.handler.write('\\n')\n\n return buffer.getvalue()"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _flatten_list(self, data):\n 'Flattens nested lists into strings.'\n\n if data is None:\n return ''\n if isinstance(data, types.StringTypes):\n return data\n elif isinstance(data, (list, tuple)):\n return 
'\\n'.join(self._flatten_list(x) for x in data)", "response": "Flattens nested lists into strings."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nprinting a message and takes care of all sorts of nasty code", "response": "def _print_message(self, prefix, message, verbose=True):\n 'Prints a message and takes care of all sorts of nasty code'\n\n # Load up the standard output.\n output = ['\\n', prefix, message['message']]\n\n # We have some extra stuff for verbose mode.\n if verbose:\n verbose_output = []\n\n # Detailed problem description.\n if message['description']:\n verbose_output.append(\n self._flatten_list(message['description']))\n\n if message.get('signing_severity'):\n verbose_output.append(\n ('\\tAutomated signing severity: %s' %\n message['signing_severity']))\n\n if message.get('signing_help'):\n verbose_output.append(\n '\\tSuggestions for passing automated signing:')\n verbose_output.append(\n self._flatten_list(message['signing_help']))\n\n # Show the user what tier we're on\n verbose_output.append('\\tTier:\\t%d' % message['tier'])\n\n # If file information is available, output that as well.\n files = message['file']\n if files is not None and files != '':\n fmsg = '\\tFile:\\t%s'\n\n # Nested files (subpackes) are stored in a list.\n if type(files) is list:\n if files[-1] == '':\n files[-1] = '(none)'\n verbose_output.append(fmsg % ' > '.join(files))\n else:\n verbose_output.append(fmsg % files)\n\n # If there is a line number, that gets put on the end.\n if message['line']:\n verbose_output.append('\\tLine:\\t%s' % message['line'])\n if message['column'] and message['column'] != 0:\n verbose_output.append('\\tColumn:\\t%d' % message['column'])\n\n if message.get('context'):\n verbose_output.append('\\tContext:')\n verbose_output.extend([('\\t> %s' % x\n if x is not None\n else '\\t>' + ('-' * 20))\n for x\n in message['context']])\n\n # Stick it in with the standard items.\n output.append('\\n')\n 
output.append('\\n'.join(verbose_output))\n\n # Send the final output to the handler to be rendered.\n self.handler.write(u''.join(map(unicodehelper.decode, output)))"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns whether a GUID set in for_appversions format is compatible with the current supported applications list.", "response": "def supports_version(self, guid_set):\n \"\"\"\n Returns whether a GUID set in for_appversions format is compatible with\n the current supported applications list.\n \"\"\"\n\n # Don't let the test run if we haven't parsed install.rdf yet.\n if self.supported_versions is None:\n raise Exception('Early compatibility test run before install.rdf '\n 'was parsed.')\n\n return self._compare_version(requirements=guid_set,\n support=self.supported_versions)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _compare_version(self, requirements, support):\n\n for guid in requirements:\n # If we support any of the GUIDs in the guid_set, test if any of\n # the provided versions for the GUID are supported.\n if (guid in support and\n any((detected_version in requirements[guid]) for\n detected_version in support[guid])):\n return True", "response": "Compare the version of the application and the set of supported applications."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ndeleting unused messages from errors, warnings, and notices.", "response": "def discard_unused_messages(self, ending_tier):\n \"\"\"\n Delete messages from errors, warnings, and notices whose tier is\n greater than the ending tier.\n \"\"\"\n\n stacks = [self.errors, self.warnings, self.notices]\n for stack in stacks:\n for message in stack:\n if message['tier'] > ending_tier:\n stack.remove(message)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ntweaks will be lost for Cleverbot and its 
conversations.", "response": "def migrator(state):\n \"\"\"Tweaks will be lost for Cleverbot and its conversations.\"\"\"\n for tweak in ('tweak1', 'tweak2', 'tweak3'):\n del state[0][tweak]\n for convo in state[1]:\n if tweak in convo:\n del convo[tweak]\n return state"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nmigrate the state of the state machine into a Cleverbot object.", "response": "def migrator(state):\n \"\"\"Nameless conversations will be lost.\"\"\"\n cleverbot_kwargs, convos_kwargs = state\n cb = Cleverbot(**cleverbot_kwargs)\n for convo_kwargs in convos_kwargs:\n cb.conversation(**convo_kwargs)\n return cb"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ninitialize with a UID", "response": "def init_with_uid(self, uid):\n \"\"\"Initialize with a UID\n \"\"\"\n self._uid = uid\n self._brain = None\n self._catalog = None\n self._instance = None"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef init_with_brain(self, brain):\n self._uid = api.get_uid(brain)\n self._brain = brain\n self._catalog = self.get_catalog_for(brain)\n self._instance = None", "response": "Initialize with a catalog brain"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef init_with_instance(self, instance):\n self._uid = api.get_uid(instance)\n self._brain = None\n self._catalog = self.get_catalog_for(instance)\n self._instance = instance", "response": "Initialize with an instance object"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef process_value(self, value):\n # UID -> SuperModel\n if api.is_uid(value):\n return self.to_super_model(value)\n # Content -> SuperModel\n elif api.is_object(value):\n return self.to_super_model(value)\n # String -> Unicode\n elif isinstance(value, basestring):\n return 
safe_unicode(value).encode(\"utf-8\")\n # DateTime -> DateTime\n elif isinstance(value, DateTime):\n return value\n # Process list values\n elif isinstance(value, (LazyMap, list, tuple)):\n return map(self.process_value, value)\n # Process dict values\n elif isinstance(value, (dict)):\n return {k: self.process_value(v) for k, v in value.iteritems()}\n # Process function\n elif safe_callable(value):\n return self.process_value(value())\n # Always return the unprocessed value last\n return value", "response": "Process the value of the key - value pair."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef instance(self):\n if self._instance is None:\n logger.debug(\"SuperModel::instance: *Wakup object*\")\n self._instance = api.get_object(self.brain)\n return self._instance", "response": "Return the content instance of the wrapped object"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn the catalog brain of the wrapped object", "response": "def brain(self):\n \"\"\"Catalog brain of the wrapped object\n \"\"\"\n if self._brain is None:\n logger.debug(\"SuperModel::brain: *Fetch catalog brain*\")\n self._brain = self.get_brain_by_uid(self.uid)\n return self._brain"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef catalog(self):\n if self._catalog is None:\n logger.debug(\"SuperModel::catalog: *Fetch catalog*\")\n self._catalog = self.get_catalog_for(self.brain)\n return self._catalog", "response": "Primary registered catalog for the wrapped portal type"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning the primary catalog for the given brain or object", "response": "def get_catalog_for(self, brain_or_object):\n \"\"\"Return the primary catalog for the given brain or object\n \"\"\"\n if not api.is_object(brain_or_object):\n raise TypeError(\"Invalid object type %r\" % brain_or_object)\n 
catalogs = api.get_catalogs_for(brain_or_object, default=\"uid_catalog\")\n return catalogs[0]"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nwrap an object into a Super Model", "response": "def to_super_model(self, thing):\n \"\"\"Wraps an object into a Super Model\n \"\"\"\n if api.is_uid(thing):\n return SuperModel(thing)\n if not api.is_object(thing):\n raise TypeError(\"Expected a portal object, got '{}'\"\n .format(type(thing)))\n return thing"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef stringify(self, value):\n # SuperModel -> UID\n if ISuperModel.providedBy(value):\n return str(value)\n # DateTime -> ISO8601 format\n elif isinstance(value, (DateTime)):\n return value.ISO8601()\n # Image/Files -> filename\n elif safe_hasattr(value, \"filename\"):\n return value.filename\n # Dict -> convert_value_to_string\n elif isinstance(value, dict):\n return {k: self.stringify(v) for k, v in value.iteritems()}\n # List -> convert_value_to_string\n if isinstance(value, (list, tuple, LazyMap)):\n return map(self.stringify, value)\n # Callables\n elif safe_callable(value):\n return self.stringify(value())\n elif isinstance(value, unicode):\n value = value.encode(\"utf8\")\n try:\n return str(value)\n except (AttributeError, TypeError, ValueError):\n logger.warn(\"Could not convert {} to string\".format(repr(value)))\n return None", "response": "Convert value to string"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef to_dict(self, converter=None):\n if converter is None:\n converter = self.stringify\n out = dict()\n for k, v in self.iteritems():\n out[k] = converter(v)\n return out", "response": "Returns a copy dict of the current object."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef route(self, url_rule, name=None, options=None):\n '''A 
decorator to add a route to a view. name is used to\n differentiate when there are multiple routes for a given view.'''\n def decorator(func):\n '''Adds a url rule for the provided function'''\n view_name = name or func.__name__\n self.add_url_rule(url_rule, func, name=view_name, options=options)\n return func\n return decorator", "response": "A decorator to add a route to a view. name is used to\n differentiate when there are multiple routes for a given view."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef url_for(self, endpoint, explicit=False, **items):\n '''Returns a valid XBMC plugin URL for the given endpoint name.\n endpoint can be the literal name of a function, or it can\n correspond to the name keyword arguments passed to the route\n decorator.\n\n Currently, view names must be unique across all plugins and\n modules. There are no namespace prefixes for modules.\n '''\n # TODO: Enable items to be passed with keywords of other var names\n # such as endpoint and explicit\n # TODO: Figure out how to handle the case where a module wants to\n # call a parent plugin view.\n if not explicit and not endpoint.startswith(self._namespace):\n endpoint = '%s.%s' % (self._namespace, endpoint)\n return self._plugin.url_for(endpoint, **items)", "response": "Returns a valid XBMC plugin URL for the given endpoint name."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef add_url_rule(self, url_rule, view_func, name, options=None):\n '''This method adds a URL rule for routing purposes. The\n provided name can be different from the view function name if\n desired. 
The provided name is what is used in url_for to build\n a URL.\n\n The route decorator provides the same functionality.\n '''\n name = '%s.%s' % (self._namespace, name)\n\n def register_rule(plugin, url_prefix):\n '''Registers a url rule for the provided plugin and\n url_prefix.\n '''\n full_url_rule = url_prefix + url_rule\n plugin.add_url_rule(full_url_rule, view_func, name, options)\n\n # Delay actual registration of the url rule until this module is\n # registered with a plugin\n self._register_funcs.append(register_rule)", "response": "This method adds a url rule for routing purposes."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\noverriding to add command options.", "response": "def get_parser(self, prog_name):\n \"\"\"Override to add command options.\"\"\"\n parser = argparse.ArgumentParser(description=self.get_description(),\n prog=prog_name, add_help=False)\n return parser"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning True if the provided value is a valid plugin id", "response": "def validate_pluginid(value):\n '''Returns True if the provided value is a valid plugin id'''\n valid = string.ascii_letters + string.digits + '.'\n return all(c in valid for c in value)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ndisplay the provided prompt and get input from the user.", "response": "def get_valid_value(prompt, validator, default=None):\n '''Displays the provided prompt and gets input from the user. This behavior\n loops indefinitely until the provided validator returns True for the user\n input. 
If a default value is provided, it will be used only if the user\n hits Enter and does not provide a value.\n\n If the validator callable has an error_message attribute, it will be\n displayed for an invalid value, otherwise a generic message is used.\n '''\n ans = get_value(prompt, default)\n while not validator(ans):\n try:\n print validator.error_message\n except AttributeError:\n print 'Invalid value.'\n ans = get_value(prompt, default)\n\n return ans"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_value(prompt, default=None, hidden=False):\n '''Displays the provided prompt and returns the input from the user. If the\n user hits Enter and there is a default value provided, the default is\n returned.\n '''\n _prompt = '%s : ' % prompt\n if default:\n _prompt = '%s [%s]: ' % (prompt, default)\n\n if hidden:\n ans = getpass(_prompt)\n else:\n ans = raw_input(_prompt)\n\n # If user hit Enter and there is a default value\n if not ans and default:\n ans = default\n return ans", "response": "Displays the provided prompt and returns the input from the user."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef update_file(filename, items):\n '''Edits the given file in place, replacing any instances of {key} with the\n appropriate value from the provided items dict. If the given filename ends\n with \".xml\" values will be quoted and escaped for XML.\n '''\n # TODO: Implement something in the templates to denote whether the value\n # being replaced is an XML attribute or a value. 
Perhaps move to dynamic\n # XML tree building rather than string replacement.\n should_escape = filename.endswith('addon.xml')\n\n with open(filename, 'r') as inp:\n text = inp.read()\n\n for key, val in items.items():\n if should_escape:\n val = saxutils.quoteattr(val)\n text = text.replace('{%s}' % key, val)\n output = text\n\n with open(filename, 'w') as out:\n out.write(output)", "response": "Edits the given file in place, replacing any instances of {key} with the\nappropriate value from the provided items dict."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef create_new_project():\n '''Creates a new XBMC Addon directory based on user input'''\n readline.parse_and_bind('tab: complete')\n\n print \\\n'''\n xbmcswift2 - A micro-framework for creating XBMC plugins.\n xbmc@jonathanbeluch.com\n --\n'''\n print 'I\\'m going to ask you a few questions to get this project' \\\n ' started.'\n\n opts = {}\n\n # Plugin Name\n opts['plugin_name'] = get_valid_value(\n 'What is your plugin name?',\n validate_nonblank\n )\n\n # Plugin ID\n opts['plugin_id'] = get_valid_value(\n 'Enter your plugin id.',\n validate_pluginid,\n 'plugin.video.%s' % (opts['plugin_name'].lower().replace(' ', ''))\n )\n\n # Parent Directory\n opts['parent_dir'] = get_valid_value(\n 'Enter parent folder (where to create project)',\n validate_isfolder,\n getcwd()\n )\n opts['plugin_dir'] = os.path.join(opts['parent_dir'], opts['plugin_id'])\n assert not os.path.isdir(opts['plugin_dir']), \\\n 'A folder named %s already exists in %s.' 
% (opts['plugin_id'],\n opts['parent_dir'])\n\n # Provider\n opts['provider_name'] = get_valid_value(\n 'Enter provider name',\n validate_nonblank,\n )\n\n # Create the project folder by copying over skel\n copytree(SKEL, opts['plugin_dir'], ignore=ignore_patterns('*.pyc'))\n\n # Walk through all the new files and fill in with out options\n for root, dirs, files in os.walk(opts['plugin_dir']):\n for filename in files:\n update_file(os.path.join(root, filename), opts)\n\n print 'Projects successfully created in %s.' % opts['plugin_dir']\n print 'Done.'", "response": "Create a new project in the current directory."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns the list of supported applications.", "response": "def get_applications(self):\n \"\"\"Return the list of supported applications.\"\"\"\n if ('applications' not in self.data or\n 'gecko' not in self.data['applications']):\n return []\n app = self.data['applications']['gecko']\n min_version = app.get('strict_min_version', u'42.0')\n max_version = app.get('strict_max_version', u'*')\n return [{u'guid': FIREFOX_GUID,\n u'min_version': min_version,\n u'max_version': max_version}]"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning the first triple value matching the given subject predicate and object.", "response": "def get_value(self, subject=None, predicate=None, object_=None):\n \"\"\"Returns the first triple value matching the given subject,\n predicate, and/or object\"\"\"\n\n for triple in self.triples:\n\n # Filter out non-matches\n if (subject and triple['subject'] != subject) or \\\n (predicate and triple['predicate'] != predicate) or \\\n (object_ and triple['object'] != object_): # pragma: no cover\n continue\n\n # Return the first found.\n return triple\n\n return None"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_objects(self, 
subject=None, predicate=None):\n\n for triple in self.triples:\n\n # Filter out non-matches\n if ((subject and triple['subject'] != subject) or\n (predicate and triple['predicate'] != predicate)):\n continue\n\n yield triple['object']", "response": "Returns a generator of objects that correspond to the\n specified subjects and predicates."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_triples(self, subject=None, predicate=None, object_=None):\n\n for triple in self.triples:\n\n # Filter out non-matches\n if subject is not None and triple['subject'] != subject:\n continue\n if predicate is not None and triple['predicate'] != predicate:\n continue\n if object_ is not None and triple['object'] != object_:\n continue\n\n yield triple", "response": "Returns a generator of triples that correspond to the specified subject predicate and object."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns a set of overlays that are applicable to the current package or subpackage.", "response": "def get_applicable_overlays(self, error_bundle):\n \"\"\"\n Given an error bundle, a list of overlays that are present in the\n current package or subpackage are returned.\n \"\"\"\n\n content_paths = self.get_triples(subject='content')\n if not content_paths:\n return set()\n\n # Create some variables that will store where the applicable content\n # instruction path references and where it links to.\n chrome_path = ''\n content_root_path = '/'\n\n # Look through each of the listed packages and paths.\n for path in content_paths:\n chrome_name = path['predicate']\n if not path['object']:\n continue\n path_location = path['object'].strip().split()[0]\n\n # Handle jarred paths differently.\n if path_location.startswith('jar:'):\n if not error_bundle.is_nested_package:\n continue\n\n # Parse out the JAR and it's location within the chrome.\n split_jar_url = path_location[4:].split('!', 
2)\n # Ignore invalid/unsupported JAR URLs.\n if len(split_jar_url) != 2:\n continue\n\n # Unpack the JAR URL.\n jar_path, package_path = split_jar_url\n\n # Ignore the instruction if the JAR it points to doesn't match\n # up with the current subpackage tree.\n if jar_path != error_bundle.package_stack[0]:\n continue\n chrome_path = self._url_chunk_join(chrome_name, package_path)\n # content_root_path stays at the default: /\n\n break\n else:\n # If we're in a subpackage, a content instruction referring to\n # the root of the package obviously doesn't apply.\n if error_bundle.is_nested_package:\n continue\n\n chrome_path = self._url_chunk_join(chrome_name, 'content')\n content_root_path = '/%s/' % path_location.strip('/')\n break\n\n if not chrome_path:\n return set()\n\n applicable_overlays = set()\n chrome_path = 'chrome://%s' % self._url_chunk_join(chrome_path + '/')\n\n for overlay in self.get_triples(subject='overlay'):\n if not overlay['object']:\n error_bundle.error(\n err_id=('chromemanifest', 'get_applicable_overalys',\n 'object'),\n error='Overlay instruction missing a property.',\n description='When overlays are registered in a chrome '\n 'manifest file, they require a namespace and '\n 'a chrome URL at minimum.',\n filename=overlay['filename'],\n line=overlay['line'],\n context=self.context) #TODO(basta): Update this!\n continue\n overlay_url = overlay['object'].split()[0]\n if overlay_url.startswith(chrome_path):\n overlay_relative_path = overlay_url[len(chrome_path):]\n applicable_overlays.add('/%s' %\n self._url_chunk_join(content_root_path,\n overlay_relative_path))\n\n return applicable_overlays"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef reverse_lookup(self, state, path):\n\n # Make sure the path starts with a forward slash.\n if not path.startswith('/'):\n path = '/%s' % path\n\n # If the state is an error bundle, extract the package stack.\n if not isinstance(state, list):\n state = 
state.package_stack\n\n content_paths = self.get_triples(subject='content')\n for content_path in content_paths:\n chrome_name = content_path['predicate']\n if not content_path['object']:\n continue\n path_location = content_path['object'].split()[0]\n\n if path_location.startswith('jar:'):\n if not state:\n continue\n\n # Parse out the JAR and it's location within the chrome.\n split_jar_url = path_location[4:].split('!', 2)\n # Ignore invalid/unsupported JAR URLs.\n if len(split_jar_url) != 2:\n continue\n\n # Unpack the JAR URL.\n jar_path, package_path = split_jar_url\n\n if jar_path != state[0]:\n continue\n\n return 'chrome://%s' % self._url_chunk_join(chrome_name,\n package_path,\n path)\n else:\n if state:\n continue\n\n path_location = '/%s/' % path_location.strip('/')\n rel_path = os.path.relpath(path, path_location)\n\n if rel_path.startswith('../') or rel_path == '..':\n continue\n\n return 'chrome://%s' % self._url_chunk_join(chrome_name,\n rel_path)\n\n return None", "response": "Returns a chrome URL for a given path given the current package depth\n in an error bundle."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _url_chunk_join(self, *args):\n # Strip slashes from either side of each path piece.\n pathlets = map(lambda s: s.strip('/'), args)\n # Remove empty pieces.\n pathlets = filter(None, pathlets)\n url = '/'.join(pathlets)\n # If this is a directory, add a trailing slash.\n if args[-1].endswith('/'):\n url = '%s/' % url\n return url", "response": "Join the arguments together to form a predictable URL chunk."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef run(self, argv):\n try:\n self.options, remainder = self.parser.parse_known_args(argv)\n self._configure_logging()\n\n if self.options.relative_plugins:\n curdir = os.getcwd()\n sys.path.insert(0, curdir)\n pkg_resources.working_set.add_entry(curdir)\n\n\n try:\n 
self._load_commands_for_current_dir()\n except pkg_resources.DistributionNotFound as e:\n try:\n error_msg = repr(e)\n except:\n error_msg = 'Unknown Error'\n\n log.error('Failed to load project commands with error '\n '``%s``, have you installed your project?' % error_msg)\n\n except Exception as err:\n if hasattr(self, 'options'):\n debug = self.options.debug\n else:\n debug = True\n\n if debug:\n log.exception(err)\n else:\n log.error(err)\n\n return 1\n\n return self._run_subcommand(remainder)", "response": "Main entry point for the application."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns a URIRef object for use with the RDF document.", "response": "def uri(self, element, namespace=None):\n 'Returns a URIRef object for use with the RDF document.'\n\n if namespace is None:\n namespace = self.namespace\n\n return URIRef('%s#%s' % (namespace, element))"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_root_subject(self):\n 'Returns the BNode which describes the topmost subject of the graph.'\n\n manifest = URIRef(self.manifest)\n\n if list(self.rdf.triples((manifest, None, None))):\n return manifest\n else:\n return self.rdf.subjects(None, self.manifest).next()", "response": "Returns the BNode which describes the topmost subject of the graph."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_object(self, subject=None, predicate=None):\n\n # Get the result of the search\n results = self.rdf.objects(subject, predicate)\n as_list = list(results)\n\n # Don't raise exceptions, value test!\n if not as_list:\n return None\n\n return as_list[0]", "response": "Eliminates some of the glue code for searching RDF. 
Pass\n in a URIRef object for either of the the\n parameters."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_applications(self):\n applications = []\n\n # Isolate all of the bnodes referring to target applications\n for target_app in self.get_objects(None,\n self.uri('targetApplication')):\n applications.append({\n 'guid': self.get_object(target_app, self.uri('id')),\n 'min_version': self.get_object(target_app,\n self.uri('minVersion')),\n 'max_version': self.get_object(target_app,\n self.uri('maxVersion'))})\n return applications", "response": "Return the list of supported applications."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef add_context_menu_items(self, items, replace_items=False):\n '''Adds context menu items. If replace_items is True all\n previous context menu items will be removed.\n '''\n for label, action in items:\n assert isinstance(label, basestring)\n assert isinstance(action, basestring)\n if replace_items:\n self._context_menu_items = []\n self._context_menu_items.extend(items)\n self._listitem.addContextMenuItems(items, replace_items)", "response": "Adds context menu items."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef set_icon(self, icon):\n '''Sets the listitem's icon image'''\n self._icon = icon\n return self._listitem.setIconImage(icon)", "response": "Sets the listitem s icon image"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nset the listitem s thumbnail image", "response": "def set_thumbnail(self, thumbnail):\n '''Sets the listitem's thumbnail image'''\n self._thumbnail = thumbnail\n return self._listitem.setThumbnailImage(thumbnail)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef set_path(self, path):\n '''Sets the listitem's 
path'''\n self._path = path\n return self._listitem.setPath(path)", "response": "Sets the listitem s path"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nsetting the listitem s playable flag", "response": "def set_is_playable(self, is_playable):\n '''Sets the listitem's playable flag'''\n value = 'false'\n if is_playable:\n value = 'true'\n self.set_property('isPlayable', value)\n self.is_folder = not is_playable"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef cached(self, TTL=60 * 24):\n '''A decorator that will cache the output of the wrapped function. The\n key used for the cache is the function name as well as the `*args` and\n `**kwargs` passed to the function.\n\n :param TTL: time to live in minutes\n\n .. note:: For route caching, you should use\n :meth:`xbmcswift2.Plugin.cached_route`.\n '''\n def decorating_function(function):\n # TODO test this method\n storage = self.get_storage(self._function_cache_name, file_format='pickle',\n TTL=TTL)\n kwd_mark = 'f35c2d973e1bbbc61ca60fc6d7ae4eb3'\n\n @wraps(function)\n def wrapper(*args, **kwargs):\n key = (function.__name__, kwd_mark,) + args\n if kwargs:\n key += (kwd_mark,) + tuple(sorted(kwargs.items()))\n\n try:\n result = storage[key]\n log.debug('Storage hit for function \"%s\" with args \"%s\" '\n 'and kwargs \"%s\"', function.__name__, args,\n kwargs)\n except KeyError:\n log.debug('Storage miss for function \"%s\" with args \"%s\" '\n 'and kwargs \"%s\"', function.__name__, args,\n kwargs)\n result = function(*args, **kwargs)\n storage[key] = result\n storage.sync()\n return result\n return wrapper\n return decorating_function", "response": "A decorator that caches the output of a function."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn a list of existing stores. 
The returned names can then be used to call get_storage.", "response": "def list_storages(self):\n '''Returns a list of existing stores. The returned names can then be\n used to call get_storage().\n '''\n # Filter out any storages used by xbmcswift2 so caller doesn't corrupt\n # them.\n return [name for name in os.listdir(self.storage_path)\n if not name.startswith('.')]"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns a storage for the given name.", "response": "def get_storage(self, name='main', file_format='pickle', TTL=None):\n '''Returns a storage for the given name. The returned storage is a\n fully functioning python dictionary and is designed to be used that\n way. It is usually not necessary for the caller to load or save the\n storage manually. If the storage does not already exist, it will be\n created.\n\n .. seealso:: :class:`xbmcswift2.TimedStorage` for more details.\n\n :param name: The name of the storage to retrieve.\n :param file_format: Choices are 'pickle', 'csv', and 'json'. Pickle is\n recommended as it supports python objects.\n\n .. note:: If a storage already exists for the given\n name, the file_format parameter is\n ignored. The format will be determined by\n the existing storage file.\n :param TTL: The time to live for storage items specified in minutes or None\n for no expiration. Since storage items aren't expired until a\n storage is loaded from disk, it is possible to call\n get_storage() with a different TTL than when the storage was\n created. 
The currently specified TTL is always honored.\n '''\n\n if not hasattr(self, '_unsynced_storages'):\n self._unsynced_storages = {}\n filename = os.path.join(self.storage_path, name)\n try:\n storage = self._unsynced_storages[filename]\n log.debug('Loaded storage \"%s\" from memory', name)\n except KeyError:\n if TTL:\n TTL = timedelta(minutes=TTL)\n\n try:\n storage = TimedStorage(filename, file_format, TTL)\n except ValueError:\n # Thrown when the storage file is corrupted and can't be read.\n # Prompt user to delete storage.\n choices = ['Clear storage', 'Cancel']\n ret = xbmcgui.Dialog().select('A storage file is corrupted. It'\n ' is recommended to clear it.',\n choices)\n if ret == 0:\n os.remove(filename)\n storage = TimedStorage(filename, file_format, TTL)\n else:\n raise Exception('Corrupted storage file at %s' % filename)\n\n self._unsynced_storages[filename] = storage\n log.debug('Loaded storage \"%s\" from disk', name)\n return storage"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn the localized string from strings. xml for the given stringid.", "response": "def get_string(self, stringid):\n '''Returns the localized string from strings.xml for the given\n stringid.\n '''\n stringid = int(stringid)\n if not hasattr(self, '_strings'):\n self._strings = {}\n if not stringid in self._strings:\n self._strings[stringid] = self.addon.getLocalizedString(stringid)\n return self._strings[stringid]"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn the value of the setting with the provided id.", "response": "def get_setting(self, key, converter=None, choices=None):\n '''Returns the settings value for the provided key.\n If converter is str, unicode, bool or int the settings value will be\n returned converted to the provided type.\n If choices is an instance of list or tuple its item at position of the\n settings value be returned.\n .. 
note:: It is suggested to always use unicode for text-settings\n because else xbmc returns utf-8 encoded strings.\n\n :param key: The id of the setting defined in settings.xml.\n :param converter: (Optional) Choices are str, unicode, bool and int.\n :param converter: (Optional) Choices are instances of list or tuple.\n\n Examples:\n * ``plugin.get_setting('per_page', int)``\n * ``plugin.get_setting('password', unicode)``\n * ``plugin.get_setting('force_viewmode', bool)``\n * ``plugin.get_setting('content', choices=('videos', 'movies'))``\n '''\n #TODO: allow pickling of settings items?\n # TODO: STUB THIS OUT ON CLI\n value = self.addon.getSetting(id=key)\n if converter is str:\n return value\n elif converter is unicode:\n return value.decode('utf-8')\n elif converter is bool:\n return value == 'true'\n elif converter is int:\n return int(value)\n elif isinstance(choices, (list, tuple)):\n return choices[int(value)]\n elif converter is None:\n log.warning('No converter provided, unicode should be used, '\n 'but returning str value')\n return value\n else:\n raise TypeError('Acceptable converters are str, unicode, bool and '\n 'int. Acceptable choices are instances of list '\n ' or tuple.')"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nadds the provided list of items to the specified playlist.", "response": "def add_to_playlist(self, items, playlist='video'):\n '''Adds the provided list of items to the specified playlist.\n Available playlists include *video* and *music*.\n '''\n playlists = {'music': 0, 'video': 1}\n assert playlist in playlists.keys(), ('Playlist \"%s\" is invalid.' 
%\n playlist)\n selected_playlist = xbmc.PlayList(playlists[playlist])\n\n _items = []\n for item in items:\n if not hasattr(item, 'as_xbmc_listitem'):\n if 'info_type' in item.keys():\n log.warning('info_type key has no affect for playlist '\n 'items as the info_type is inferred from the '\n 'playlist type.')\n # info_type has to be same as the playlist type\n item['info_type'] = playlist\n item = xbmcswift2.ListItem.from_dict(**item)\n _items.append(item)\n selected_playlist.add(item.get_path(), item.as_xbmc_listitem())\n return _items"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_view_mode_id(self, view_mode):\n '''Attempts to return a view_mode_id for a given view_mode\n taking into account the current skin. If no view_mode_id can\n be found, None is returned. 'thumbnail' is currently the only\n supported view_mode.\n '''\n view_mode_ids = VIEW_MODES.get(view_mode.lower())\n if view_mode_ids:\n return view_mode_ids.get(xbmc.getSkinDir())\n return None", "response": "Attempts to return a view_mode_id for a given view_mode."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef keyboard(self, default=None, heading=None, hidden=False):\n '''Displays the keyboard input window to the user. If the user does not\n cancel the modal, the value entered by the user will be returned.\n\n :param default: The placeholder text used to prepopulate the input field.\n :param heading: The heading for the window. Defaults to the current\n addon's name. If you require a blank heading, pass an\n empty string.\n :param hidden: Whether or not the input field should be masked with\n stars, e.g. 
a password field.\n '''\n if heading is None:\n heading = self.addon.getAddonInfo('name')\n if default is None:\n default = ''\n keyboard = xbmc.Keyboard(default, heading, hidden)\n keyboard.doModal()\n if keyboard.isConfirmed():\n return keyboard.getText()", "response": "Displays the keyboard input window to the user."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef notify(self, msg='', title=None, delay=5000, image=''):\n '''Displays a temporary notification message to the user. If\n title is not provided, the plugin name will be used. To have a\n blank title, pass '' for the title argument. The delay argument\n is in milliseconds.\n '''\n if not msg:\n log.warning('Empty message for notification dialog')\n if title is None:\n title = self.addon.getAddonInfo('name')\n xbmc.executebuiltin('XBMC.Notification(\"%s\", \"%s\", \"%s\", \"%s\")' %\n (msg, title, delay, image))", "response": "Displays a temporary notification message to the user."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _listitemify(self, item):\n '''Creates an xbmcswift2.ListItem if the provided value for item is a\n dict. If item is already a valid xbmcswift2.ListItem, the item is\n returned unmodified.\n '''\n info_type = self.info_type if hasattr(self, 'info_type') else 'video'\n\n # Create ListItems for anything that is not already an instance of\n # ListItem\n if not hasattr(item, 'as_tuple'):\n if 'info_type' not in item.keys():\n item['info_type'] = info_type\n item = xbmcswift2.ListItem.from_dict(**item)\n return item", "response": "Creates an xbmcswift2. 
ListItem if the provided value for item is a\n dict."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nadding subtitles to playing video.", "response": "def _add_subtitles(self, subtitles):\n '''Adds subtitles to playing video.\n\n :param subtitles: A URL to a remote subtitles file or a local filename\n for a subtitles file.\n\n .. warning:: You must start playing a video before calling this method\n or it will loop for an indefinite length.\n '''\n # This method is named with an underscore to suggest that callers pass\n # the subtitles argument to set_resolved_url instead of calling this\n # method directly. This is to ensure a video is played before calling\n # this method.\n player = xbmc.Player()\n for _ in xrange(30):\n if player.isPlaying():\n break\n time.sleep(1)\n else:\n raise Exception('No video playing. Aborted after 30 seconds.')\n\n player.setSubtitles(subtitles)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nset the resolved url of a playable list item.", "response": "def set_resolved_url(self, item=None, subtitles=None):\n '''Takes a url or a listitem to be played. Used in conjunction with a\n playable list item with a path that calls back into your addon.\n\n :param item: A playable list item or url. Pass None to alert XBMC of a\n failure to resolve the item.\n\n .. warning:: When using set_resolved_url you should ensure\n the initial playable item (which calls back\n into your addon) doesn't have a trailing\n slash in the URL. Otherwise it won't work\n reliably with XBMC's PlayMedia().\n :param subtitles: A URL to a remote subtitles file or a local filename\n for a subtitles file to be played along with the\n item.\n '''\n if self._end_of_directory:\n raise Exception('Current XBMC handle has been removed. 
Either '\n 'set_resolved_url(), end_of_directory(), or '\n 'finish() has already been called.')\n self._end_of_directory = True\n\n succeeded = True\n if item is None:\n # None item indicates the resolve url failed.\n item = {}\n succeeded = False\n\n if isinstance(item, basestring):\n # caller is passing a url instead of an item dict\n item = {'path': item}\n\n item = self._listitemify(item)\n item.set_played(True)\n xbmcplugin.setResolvedUrl(self.handle, succeeded,\n item.as_xbmc_listitem())\n\n # call to _add_subtitles must be after setResolvedUrl\n if subtitles:\n self._add_subtitles(subtitles)\n return [item]"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nadding ListItems to the XBMC interface.", "response": "def add_items(self, items):\n '''Adds ListItems to the XBMC interface. Each item in the\n provided list should either be instances of xbmcswift2.ListItem,\n or regular dictionaries that will be passed to\n xbmcswift2.ListItem.from_dict. Returns the list of ListItems.\n\n :param items: An iterable of items where each item is either a\n dictionary with keys/values suitable for passing to\n :meth:`xbmcswift2.ListItem.from_dict` or an instance of\n :class:`xbmcswift2.ListItem`.\n '''\n _items = [self._listitemify(item) for item in items]\n tuples = [item.as_tuple() for item in _items]\n xbmcplugin.addDirectoryItems(self.handle, tuples, len(tuples))\n\n # We need to keep track internally of added items so we can return them\n # all at the end for testing purposes\n self.added_items.extend(_items)\n\n # Possibly need an if statement if only for debug mode\n return _items"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nwrap for xbmcplugin. endOfDirectory. Records state in self. _end_of_directory.", "response": "def end_of_directory(self, succeeded=True, update_listing=False,\n cache_to_disc=True):\n '''Wrapper for xbmcplugin.endOfDirectory. 
Records state in\n self._end_of_directory.\n\n Typically it is not necessary to call this method directly, as\n calling :meth:`~xbmcswift2.Plugin.finish` will call this method.\n '''\n self._update_listing = update_listing\n if not self._end_of_directory:\n self._end_of_directory = True\n # Finalize the directory items\n return xbmcplugin.endOfDirectory(self.handle, succeeded,\n update_listing, cache_to_disc)\n assert False, 'Already called endOfDirectory.'"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef add_sort_method(self, sort_method, label2_mask=None):\n '''A wrapper for `xbmcplugin.addSortMethod()\n <http://mirrors.xbmc.org/docs/python-docs/xbmcplugin.html#-addSortMethod>`_.\n You can use ``dir(xbmcswift2.SortMethod)`` to list all available sort\n methods.\n\n :param sort_method: A valid sort method. You can provided the constant\n from xbmcplugin, an attribute of SortMethod, or a\n string name. For instance, the following method\n calls are all equivalent:\n\n * ``plugin.add_sort_method(xbmcplugin.SORT_METHOD_TITLE)``\n * ``plugin.add_sort_metohd(SortMethod.TITLE)``\n * ``plugin.add_sort_method('title')``\n :param label2_mask: A mask pattern for label2. See the `XBMC\n documentation\n <http://mirrors.xbmc.org/docs/python-docs/xbmcplugin.html#-addSortMethod>`_\n for more information.\n '''\n try:\n # Assume it's a string and we need to get the actual int value\n sort_method = SortMethod.from_string(sort_method)\n except AttributeError:\n # sort_method was already an int (or a bad value)\n pass\n\n if label2_mask:\n xbmcplugin.addSortMethod(self.handle, sort_method, label2_mask)\n else:\n xbmcplugin.addSortMethod(self.handle, sort_method)", "response": "A wrapper for xbmcplugin. 
addSortMethod that adds a sort method to the current sort hierarchy."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nfinish the list items in the XBMC interface.", "response": "def finish(self, items=None, sort_methods=None, succeeded=True,\n update_listing=False, cache_to_disc=True, view_mode=None):\n '''Adds the provided items to the XBMC interface.\n\n :param items: an iterable of items where each item is either a\n dictionary with keys/values suitable for passing to\n :meth:`xbmcswift2.ListItem.from_dict` or an instance of\n :class:`xbmcswift2.ListItem`.\n :param sort_methods: a list of valid XBMC sort_methods. Each item in\n the list can either be a sort method or a tuple of\n ``sort_method, label2_mask``. See\n :meth:`add_sort_method` for\n more detail concerning valid sort_methods.\n\n Example call with sort_methods::\n\n sort_methods = ['label', 'title', ('date', '%D')]\n plugin.finish(items, sort_methods=sort_methods)\n\n :param view_mode: can either be an integer (or parseable integer\n string) corresponding to a view_mode or the name of a type of view.\n Currently the only view type supported is 'thumbnail'.\n :returns: a list of all ListItems added to the XBMC interface.\n '''\n # If we have any items, add them. 
Items are optional here.\n if items:\n self.add_items(items)\n if sort_methods:\n for sort_method in sort_methods:\n if not isinstance(sort_method, basestring) and hasattr(sort_method, '__len__'):\n self.add_sort_method(*sort_method)\n else:\n self.add_sort_method(sort_method)\n\n # Attempt to set a view_mode if given\n if view_mode is not None:\n # First check if we were given an integer or parseable integer\n try:\n view_mode_id = int(view_mode)\n except ValueError:\n # Attempt to lookup a view mode\n view_mode_id = self.get_view_mode_id(view_mode)\n\n if view_mode_id is not None:\n self.set_view_mode(view_mode_id)\n\n # Finalize the directory items\n self.end_of_directory(succeeded, update_listing, cache_to_disc)\n\n # Return the cached list of all the list items that were added\n return self.added_items"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef register_cleanup(cleanup):\n\n if not callable(cleanup):\n # Allow decorating a class with a `cleanup` classm ethod.\n cleanup = cleanup.cleanup\n\n CLEANUP_FUNCTIONS.append(cleanup.cleanup)\n return cleanup", "response": "Register a cleanup function to be called at the end of every validation\n task. Takes either a callable that is called at the end of every validation\n task. 
Takes either a class with a __call_ method or a class with a cleanup class method."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef version_range(guid, version, before=None, app_versions=None):\n\n if app_versions is None:\n app_versions = validator.constants.APPROVED_APPLICATIONS\n app_key = None\n\n # Support for shorthand instead of full GUIDs.\n for app_guid, app_name in APPLICATIONS.items():\n if app_name == guid:\n guid = app_guid\n break\n\n for key in app_versions.keys():\n if app_versions[key]['guid'] == guid:\n app_key = key\n break\n\n if not app_key or version not in app_versions[app_key]['versions']:\n raise Exception('Bad GUID or version provided for version range: %s'\n % version)\n\n all_versions = app_versions[app_key]['versions']\n version_pos = all_versions.index(version)\n before_pos = None\n if before is not None and before in all_versions:\n before_pos = all_versions.index(before)\n\n return all_versions[version_pos:before_pos]", "response": "Returns all values after and including the version for the app guid"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef match(self, path):\n '''Attempts to match a url to the given path. If successful, a tuple is\n returned. 
The first item is the matched function and the second item is\n a dictionary containing items to be passed to the function parsed from\n the provided path.\n\n If the provided path does not match this url rule then a\n NotFoundException is raised.\n '''\n m = self._regex.search(path)\n if not m:\n raise NotFoundException\n\n # urlunencode the values\n items = dict((key, unquote_plus(val))\n for key, val in m.groupdict().items())\n\n # unpickle any items if present\n items = unpickle_dict(items)\n\n # We need to update our dictionary with default values provided in\n # options if the keys don't already exist.\n [items.setdefault(key, val) for key, val in self._options.items()]\n return self._view_func, items", "response": "Attempts to match a url to the given path."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _make_path(self, items):\n '''Returns a relative path for the given dictionary of items.\n\n Uses this url rule's url pattern and replaces instances of <var_name>\n with the appropriate value from the items dict.\n '''\n for key, val in items.items():\n if not isinstance(val, basestring):\n raise TypeError, ('Value \"%s\" for key \"%s\" must be an instance'\n ' of basestring' % (val, key))\n items[key] = quote_plus(val)\n\n try:\n path = self._url_format.format(**items)\n except AttributeError:\n # Old version of python\n path = self._url_format\n for key, val in items.items():\n path = path.replace('{%s}' % key, val)\n return path", "response": "Returns a relative path for the given dictionary of items."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef make_path_qs(self, items):\n '''Returns a relative path complete with query string for the given\n dictionary of items.\n\n Any items with keys matching this rule's url pattern will be inserted\n into the path. 
Any remaining items will be appended as query string\n parameters.\n\n All items will be urlencoded. Any items which are not instances of\n basestring, or int/long will be pickled before being urlencoded.\n\n .. warning:: The pickling of items only works for key/value pairs which\n will be in the query string. This behavior should only be\n used for the simplest of python objects. It causes the\n URL to get very lengthy (and unreadable) and XBMC has a\n hard limit on URL length. See the caching section if you\n need to persist a large amount of data between requests.\n '''\n # Convert any ints and longs to strings\n for key, val in items.items():\n if isinstance(val, (int, long)):\n items[key] = str(val)\n\n # First use our defaults passed when registering the rule\n url_items = dict((key, val) for key, val in self._options.items()\n if key in self._keywords)\n\n # Now update with any items explicitly passed to url_for\n url_items.update((key, val) for key, val in items.items()\n if key in self._keywords)\n\n # Create the path\n path = self._make_path(url_items)\n\n # Extra arguments get tacked on to the query string\n qs_items = dict((key, val) for key, val in items.items()\n if key not in self._keywords)\n qs = self._make_qs(qs_items)\n\n if qs:\n return '?'.join([path, qs])\n return path", "response": "Returns a relative path complete with query string for the given items."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nwrites the dict to disk", "response": "def sync(self):\n '''Write the dict to disk'''\n if self.flag == 'r':\n return\n filename = self.filename\n tempname = filename + '.tmp'\n fileobj = open(tempname, 'wb' if self.file_format == 'pickle' else 'w')\n try:\n self.dump(fileobj)\n except Exception:\n os.remove(tempname)\n raise\n finally:\n fileobj.close()\n shutil.move(tempname, self.filename) # atomic commit\n if self.mode is not None:\n os.chmod(self.filename, self.mode)"} {"SOURCE": "codesearchnet", 
"instruction": "Create a Python 3 function to\nhandle the writing of the dict to the file object", "response": "def dump(self, fileobj):\n '''Handles the writing of the dict to the file object'''\n if self.file_format == 'csv':\n csv.writer(fileobj).writerows(self.raw_dict().items())\n elif self.file_format == 'json':\n json.dump(self.raw_dict(), fileobj, separators=(',', ':'))\n elif self.file_format == 'pickle':\n pickle.dump(dict(self.raw_dict()), fileobj, 2)\n else:\n raise NotImplementedError('Unknown format: ' +\n repr(self.file_format))"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef load(self, fileobj):\n '''Load the dict from the file object'''\n # try formats from most restrictive to least restrictive\n for loader in (pickle.load, json.load, csv.reader):\n fileobj.seek(0)\n try:\n return self.initial_update(loader(fileobj))\n except Exception as e:\n pass\n raise ValueError('File not in a supported format')", "response": "Load the dict from the file object fileobj"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ninitial update of the underlying dictionary with keys values and timestamps.", "response": "def initial_update(self, mapping):\n '''Initially fills the underlying dictionary with keys, values and\n timestamps.\n '''\n for key, val in mapping.items():\n _, timestamp = val\n if not self.TTL or (datetime.utcnow() -\n datetime.utcfromtimestamp(timestamp) < self.TTL):\n self.__setitem__(key, val, raw=True)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn a logging instance for the provided name.", "response": "def setup_log(name):\n '''Returns a logging instance for the provided name. The returned\n object is an instance of logging.Logger. 
Logged messages will be\n printed to stderr when running in the CLI, or forwarded to XBMC's\n log when running in XBMC mode.\n '''\n _log = logging.getLogger(name)\n _log.setLevel(GLOBAL_LOG_LEVEL)\n handler = logging.StreamHandler()\n formatter = logging.Formatter(\n '%(asctime)s - %(levelname)s - [%(name)s] %(message)s')\n handler.setFormatter(formatter)\n _log.addHandler(handler)\n _log.addFilter(XBMCFilter('[%s] ' % name))\n return _log"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef filter(self, record):\n '''Returns True for all records if running in the CLI, else returns\n True.\n\n When running inside XBMC it calls the xbmc.log() method and prevents\n the message from being double printed to STDOUT.\n '''\n\n # When running in XBMC, any logged statements will be double printed\n # since we are calling xbmc.log() explicitly. Therefore we return False\n # so every log message is filtered out and not printed again.\n if CLI_MODE:\n return True\n else:\n # Must not be imported until here because of import order issues\n # when running in CLI\n from xbmcswift2 import xbmc\n xbmc_level = XBMCFilter.xbmc_levels.get(\n XBMCFilter.python_to_xbmc.get(record.levelname))\n xbmc.log('%s%s' % (self.prefix, record.getMessage()), xbmc_level)\n return False", "response": "Returns True if all records if running in CLI else returns False."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ntesting a manifest name value for trademarks.", "response": "def validate_name(err, value, source):\n 'Tests a manifest name value for trademarks.'\n\n ff_pattern = re.compile('(mozilla|firefox)', re.I)\n\n err.metadata['name'] = value\n\n if ff_pattern.search(value):\n err.warning(\n ('metadata_helpers', '_test_name', 'trademark'),\n 'Add-on has potentially illegal name.',\n 'Add-on names cannot contain the Mozilla or Firefox '\n 'trademarks. 
These names should not be contained in add-on '\n 'names if at all possible.',\n source)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef validate_id(err, value, source):\n 'Tests a manifest UUID value'\n\n field_name = '<em:id>' if source == 'install.rdf' else 'id'\n\n id_pattern = re.compile(\n '('\n # UUID format.\n '\\{[0-9a-f]{8}-([0-9a-f]{4}-){3}[0-9a-f]{12}\\}'\n '|'\n # \"email\" format.\n '[a-z0-9-\\.\\+_]*\\@[a-z0-9-\\._]+'\n ')',\n re.I)\n\n err.metadata['id'] = value\n\n # Must be a valid UUID string.\n if not id_pattern.match(value):\n err.error(\n ('metadata_helpers', '_test_id', 'invalid'),\n 'The value of {name} is invalid'.format(name=field_name),\n ['The values supplied for {name} in the {source} file is not a '\n 'valid UUID string or email address.'.format(\n name=field_name, source=source),\n 'For help, please consult: '\n 'https://developer.mozilla.org/en/install_manifests#id'],\n source)", "response": "Tests a manifest UUID value"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef validate_version(err, value, source):\n 'Tests a manifest version number'\n\n\n field_name = '<em:version>' if source == 'install.rdf' else 'version'\n\n err.metadata['version'] = value\n\n # May not be longer than 32 characters\n if len(value) > 32:\n err.error(\n ('metadata_helpers', '_test_version', 'too_long'),\n 'The value of {name} is too long'.format(name=field_name),\n 'Values supplied for {name} in the {source} file must be 32 '\n 'characters or less.'.format(name=field_name, source=source),\n source)\n\n # Must be a valid version number.\n if not VERSION_PATTERN.match(value):\n err.error(\n ('metadata_helpers', '_test_version', 'invalid_format'),\n 'The value of {name} is invalid'.format(name=field_name),\n 'The values supplied for version in the {source} file is not a '\n 'valid version string. 
It can only contain letters, numbers, and '\n 'the symbols +*.-_.'.format(name=field_name, source=source),\n source)", "response": "Tests a manifest version number"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nquery the status of a specific pin or all configured pins", "response": "def get_device(self, pin=None):\n \"\"\" Query the status of a specific pin (or all configured pins if pin is ommitted) \"\"\"\n url = self.base_url + '/device'\n try:\n r = requests.get(url, params={'pin': pin}, timeout=10)\n return r.json()\n except RequestException as err:\n raise Client.ClientError(err)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_status(self):\n url = self.base_url + '/status'\n try:\n r = requests.get(url, timeout=10)\n return r.json()\n except RequestException as err:\n raise Client.ClientError(err)", "response": "Query the device status. Returns the device internal state."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nactuates a device pin", "response": "def put_device(self, pin, state, momentary=None, times=None, pause=None):\n \"\"\" Actuate a device pin \"\"\"\n url = self.base_url + '/device'\n\n payload = {\n \"pin\": pin,\n \"state\": state\n }\n\n if momentary is not None:\n payload[\"momentary\"] = momentary\n\n if times is not None:\n payload[\"times\"] = times\n\n if pause is not None:\n payload[\"pause\"] = pause\n\n try:\n r = requests.put(url, json=payload, timeout=10)\n return r.json()\n except RequestException as err:\n raise Client.ClientError(err)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef put_settings(self, sensors=[], actuators=[], auth_token=None,\n endpoint=None, blink=None, discovery=None,\n dht_sensors=[], ds18b20_sensors=[]):\n \"\"\" Sync settings to the Konnected device \"\"\"\n url = self.base_url + '/settings'\n\n payload = 
{\n \"sensors\": sensors,\n \"actuators\": actuators,\n \"dht_sensors\": dht_sensors,\n \"ds18b20_sensors\": ds18b20_sensors,\n \"token\": auth_token,\n \"apiUrl\": endpoint\n }\n\n if blink is not None:\n payload['blink'] = blink\n\n if discovery is not None:\n payload['discovery'] = discovery\n\n try:\n r = requests.put(url, json=payload, timeout=10)\n return r.ok\n except RequestException as err:\n raise Client.ClientError(err)", "response": "Syncs the settings to the Konnected device"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef parse_mime_type(mime_type):\n full_type, params = cgi.parse_header(mime_type)\n # Java URLConnection class sends an Accept header that includes a\n # single '*'. Turn it into a legal wildcard.\n if full_type == '*':\n full_type = '*/*'\n\n type_parts = full_type.split('/') if '/' in full_type else None\n if not type_parts or len(type_parts) > 2:\n raise MimeTypeParseException(\n \"Can't parse type \\\"{}\\\"\".format(full_type))\n\n (type, subtype) = type_parts\n\n return (type.strip(), subtype.strip(), params)", "response": "Parses a mime - type into its component parts."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nparse a media - range into its component parts.", "response": "def parse_media_range(range):\n \"\"\"Parse a media-range into its component parts.\n\n Carves up a media range and returns a tuple of the (type, subtype,\n params) where 'params' is a dictionary of all the parameters for the media\n range. 
For example, the media range 'application/*;q=0.5' would get parsed\n into:\n\n ('application', '*', {'q', '0.5'})\n\n In addition this function also guarantees that there is a value for 'q'\n in the params dictionary, filling it in with a proper default if\n necessary.\n\n :rtype: (str,str,dict)\n \"\"\"\n (type, subtype, params) = parse_mime_type(range)\n params.setdefault('q', params.pop('Q', None)) # q is case insensitive\n try:\n if not params['q'] or not 0 <= float(params['q']) <= 1:\n params['q'] = '1'\n except ValueError: # from float()\n params['q'] = '1'\n\n return (type, subtype, params)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef quality_and_fitness_parsed(mime_type, parsed_ranges):\n best_fitness = -1\n best_fit_q = 0\n (target_type, target_subtype, target_params) = \\\n parse_media_range(mime_type)\n\n for (type, subtype, params) in parsed_ranges:\n\n # check if the type and the subtype match\n type_match = (\n type in (target_type, '*') or\n target_type == '*'\n )\n subtype_match = (\n subtype in (target_subtype, '*') or\n target_subtype == '*'\n )\n\n # if they do, assess the \"fitness\" of this mime_type\n if type_match and subtype_match:\n\n # 100 points if the type matches w/o a wildcard\n fitness = type == target_type and 100 or 0\n\n # 10 points if the subtype matches w/o a wildcard\n fitness += subtype == target_subtype and 10 or 0\n\n # 1 bonus point for each matching param besides \"q\"\n param_matches = sum([\n 1 for (key, value) in target_params.items()\n if key != 'q' and key in params and value == params[key]\n ])\n fitness += param_matches\n\n # finally, add the target's \"q\" param (between 0 and 1)\n fitness += float(target_params.get('q', 1))\n\n if fitness > best_fitness:\n best_fitness = fitness\n best_fit_q = params['q']\n\n return float(best_fit_q), best_fitness", "response": "Find the best match for a given mime - type amongst parsed media - ranges. 
Returns a tuple of the fitness value and the value of the q quality parameter of the best match."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef quality(mime_type, ranges):\n parsed_ranges = [parse_media_range(r) for r in ranges.split(',')]\n\n return quality_parsed(mime_type, parsed_ranges)", "response": "Return the quality of a mime - type against a list of media - ranges."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef best_match(supported, header):\n split_header = _filter_blank(header.split(','))\n parsed_header = [parse_media_range(r) for r in split_header]\n weighted_matches = []\n pos = 0\n for mime_type in supported:\n weighted_matches.append((\n quality_and_fitness_parsed(mime_type, parsed_header),\n pos,\n mime_type\n ))\n pos += 1\n weighted_matches.sort()\n\n return weighted_matches[-1][0][0] and weighted_matches[-1][2] or ''", "response": "Return mime - type with the highest quality ('q') from list of candidates."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef makeXsl(filename):\n pkg = 'cnxml2html'\n package = ''.join(['.' + x for x in __name__.split('.')[:-1]])[1:]\n if package != '':\n pkg = package + '.' 
+ pkg\n path = pkg_resources.resource_filename(pkg, filename)\n xml = etree.parse(path)\n return etree.XSLT(xml)", "response": "Helper that creates a XSLT stylesheet from a file."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngives a collxml file this returns an HTML version of it", "response": "def transform_collxml(collxml_file):\n \"\"\" Given a collxml file (collection.xml) this returns an HTML version of it\n (including \"include\" anchor links to the modules) \"\"\"\n\n xml = etree.parse(collxml_file)\n xslt = makeXsl('collxml2xhtml.xsl')\n xml = xslt(xml)\n return xml"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngiving a module cnxml file this returns an HTML version of it", "response": "def transform_cnxml(cnxml_file):\n \"\"\" Given a module cnxml file (index.cnxml) this returns an HTML version of it \"\"\"\n\n xml = etree.parse(cnxml_file)\n xslt = makeXsl('cnxml2xhtml.xsl')\n xml = xslt(xml)\n return xml"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ngive an unzipped collection generate a giant HTML file representing the entire collection", "response": "def transform_collection(collection_dir):\n \"\"\" Given an unzipped collection generate a giant HTML file representing\n the entire collection (including loading and converting individual modules) \"\"\"\n\n collxml_file = open(os.path.join(collection_dir, 'collection.xml'))\n collxml_html = transform_collxml(collxml_file)\n\n # For each included module, parse and convert it\n for node in INCLUDE_XPATH(collxml_html):\n href = node.attrib['href']\n module = href.split('@')[0]\n # version = None # We don't care about version\n module_dir = os.path.join(collection_dir, module)\n\n # By default, use the index_auto_generated.cnxml file for the module\n module_path = os.path.join(module_dir, 'index_auto_generated.cnxml')\n if not os.path.exists(module_path):\n module_path = os.path.join(module_dir, 
'index.cnxml')\n\n module_html = transform_cnxml(module_path)\n\n # Replace the include link with the body of the module\n module_body = MODULE_BODY_XPATH(module_html)\n\n node.getparent().replace(node, module_body[0])\n return collxml_html"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncreate an instance of the class instance from a dictionary.", "response": "def from_dict(cls, d: dict, force_snake_case: bool=True, force_cast: bool=False, restrict: bool=True) -> T:\n \"\"\"From dict to instance\n\n :param d: Dict\n :param force_snake_case: Keys are transformed to snake case in order to compliant PEP8 if True\n :param force_cast: Cast forcibly if True\n :param restrict: Prohibit extra parameters if True\n :return: Instance\n\n Usage:\n\n >>> from owlmixin.samples import Human, Food, Japanese\n >>> human: Human = Human.from_dict({\n ... \"id\": 1,\n ... \"name\": \"Tom\",\n ... \"favorites\": [\n ... {\"name\": \"Apple\", \"names_by_lang\": {\"en\": \"Apple\", \"de\": \"Apfel\"}},\n ... {\"name\": \"Orange\"}\n ... ]\n ... })\n >>> human.id\n 1\n >>> human.name\n 'Tom'\n >>> human.favorites[0].name\n 'Apple'\n >>> human.favorites[0].names_by_lang.get()[\"de\"]\n 'Apfel'\n\n You can use default value\n\n >>> taro: Japanese = Japanese.from_dict({\n ... \"name\": 'taro'\n ... }) # doctest: +NORMALIZE_WHITESPACE\n >>> taro.name\n 'taro'\n >>> taro.language\n 'japanese'\n\n If you don't set `force_snake=False` explicitly, keys are transformed to snake case as following.\n\n >>> human: Human = Human.from_dict({\n ... \"--id\": 1,\n ... \"<name>\": \"Tom\",\n ... \"favorites\": [\n ... {\"name\": \"Apple\", \"namesByLang\": {\"en\": \"Apple\"}}\n ... ]\n ... })\n >>> human.id\n 1\n >>> human.name\n 'Tom'\n >>> human.favorites[0].names_by_lang.get()[\"en\"]\n 'Apple'\n\n You can allow extra parameters (like ``hogehoge``) if you set `restrict=False`.\n\n >>> apple: Food = Food.from_dict({\n ... \"name\": \"Apple\",\n ... 
\"hogehoge\": \"ooooooooooooooooooooo\",\n ... }, restrict=False)\n >>> apple.to_dict()\n {'name': 'Apple'}\n\n You can prohibit extra parameters (like ``hogehoge``) if you set `restrict=True` (which is default).\n\n >>> human = Human.from_dict({\n ... \"id\": 1,\n ... \"name\": \"Tom\",\n ... \"hogehoge1\": \"ooooooooooooooooooooo\",\n ... \"hogehoge2\": [\"aaaaaaaaaaaaaaaaaa\", \"iiiiiiiiiiiiiiiii\"],\n ... \"favorites\": [\n ... {\"name\": \"Apple\", \"namesByLang\": {\"en\": \"Apple\", \"de\": \"Apfel\"}},\n ... {\"name\": \"Orange\"}\n ... ]\n ... }) # doctest: +NORMALIZE_WHITESPACE\n Traceback (most recent call last):\n ...\n owlmixin.errors.UnknownPropertiesError:\n . \u2227,,_\u2227 ,___________________\n \u2282 ( \uff65\u03c9\uff65 )\u3064- < Unknown properties error\n \uff0f\uff0f/ /::/ `-------------------\n |::|/\u2282\u30fd\u30ce|::|\u300d\n \uff0f\uffe3\uffe3\u65e6\uffe3\uffe3\uffe3\uff0f|\n \uff3f\uff3f\uff3f\uff3f\uff3f\uff3f\uff0f | |\n |------\u30fc----\u30fc|\uff0f\n <BLANKLINE>\n `owlmixin.samples.Human` has unknown properties ['hogehoge1', 'hogehoge2']!!\n <BLANKLINE>\n * If you want to allow unknown properties, set `restrict=False`\n * If you want to disallow unknown properties, add `hogehoge1` and `hogehoge2` to owlmixin.samples.Human\n <BLANKLINE>\n\n If you specify wrong type...\n\n >>> human: Human = Human.from_dict({\n ... \"id\": 1,\n ... \"name\": \"ichiro\",\n ... \"favorites\": [\"apple\", \"orange\"]\n ... }) # doctest: +NORMALIZE_WHITESPACE\n Traceback (most recent call last):\n ...\n owlmixin.errors.InvalidTypeError:\n . 
\u2227,,_\u2227 ,___________________\n \u2282 ( \uff65\u03c9\uff65 )\u3064- < Invalid Type error\n \uff0f\uff0f/ /::/ `-------------------\n |::|/\u2282\u30fd\u30ce|::|\u300d\n \uff0f\uffe3\uffe3\u65e6\uffe3\uffe3\uffe3\uff0f|\n \uff3f\uff3f\uff3f\uff3f\uff3f\uff3f\uff0f | |\n |------\u30fc----\u30fc|\uff0f\n <BLANKLINE>\n `owlmixin.samples.Human#favorites.0 = apple` doesn't match expected types.\n Expected type is one of ['Food', 'dict'], but actual type is `str`\n <BLANKLINE>\n * If you want to force cast, set `force_cast=True`\n * If you don't want to force cast, specify value which has correct type\n <BLANKLINE>\n\n If you don't specify required params... (ex. name\n\n >>> human: Human = Human.from_dict({\n ... \"id\": 1\n ... }) # doctest: +NORMALIZE_WHITESPACE\n Traceback (most recent call last):\n ...\n owlmixin.errors.RequiredError:\n . \u2227,,_\u2227 ,___________________\n \u2282 ( \uff65\u03c9\uff65 )\u3064- < Required error\n \uff0f\uff0f/ /::/ `-------------------\n |::|/\u2282\u30fd\u30ce|::|\u300d\n \uff0f\uffe3\uffe3\u65e6\uffe3\uffe3\uffe3\uff0f|\n \uff3f\uff3f\uff3f\uff3f\uff3f\uff3f\uff0f | |\n |------\u30fc----\u30fc|\uff0f\n <BLANKLINE>\n `owlmixin.samples.Human#name: str` is empty!!\n <BLANKLINE>\n * If `name` is certainly required, specify anything.\n * If `name` is optional, change type from `str` to `TOption[str]`\n <BLANKLINE>\n \"\"\"\n if isinstance(d, cls):\n return d\n\n instance: T = cls()\n d = util.replace_keys(d, {\"self\": \"_self\"}, force_snake_case)\n\n properties = cls.__annotations__.items()\n\n if restrict:\n assert_extra(properties, d, cls)\n\n for n, t in properties:\n f = cls._methods_dict.get(f'_{cls.__name__}___{n}')\n arg_v = f(d.get(n)) if f else d.get(n)\n def_v = getattr(instance, n, None)\n setattr(instance, n,\n traverse(\n type_=t,\n name=n,\n value=def_v if arg_v is None else arg_v,\n cls=cls,\n force_snake_case=force_snake_case,\n force_cast=force_cast,\n restrict=restrict\n ))\n\n return instance"} {"SOURCE": 
"codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef from_optional_dict(cls, d: Optional[dict],\n force_snake_case: bool=True,force_cast: bool=False, restrict: bool=True) -> TOption[T]:\n \"\"\"From dict to optional instance.\n\n :param d: Dict\n :param force_snake_case: Keys are transformed to snake case in order to compliant PEP8 if True\n :param force_cast: Cast forcibly if True\n :param restrict: Prohibit extra parameters if True\n :return: Instance\n\n Usage:\n\n >>> from owlmixin.samples import Human\n >>> Human.from_optional_dict(None).is_none()\n True\n >>> Human.from_optional_dict({}).get() # doctest: +NORMALIZE_WHITESPACE\n Traceback (most recent call last):\n ...\n owlmixin.errors.RequiredError:\n . \u2227,,_\u2227 ,___________________\n \u2282 ( \uff65\u03c9\uff65 )\u3064- < Required error\n \uff0f\uff0f/ /::/ `-------------------\n |::|/\u2282\u30fd\u30ce|::|\u300d\n \uff0f\uffe3\uffe3\u65e6\uffe3\uffe3\uffe3\uff0f|\n \uff3f\uff3f\uff3f\uff3f\uff3f\uff3f\uff0f | |\n |------\u30fc----\u30fc|\uff0f\n <BLANKLINE>\n `owlmixin.samples.Human#id: int` is empty!!\n <BLANKLINE>\n * If `id` is certainly required, specify anything.\n * If `id` is optional, change type from `int` to `TOption[int]`\n <BLANKLINE>\n \"\"\"\n return TOption(cls.from_dict(d,\n force_snake_case=force_snake_case,\n force_cast=force_cast,\n restrict=restrict) if d is not None else None)", "response": "From dict to optional instance."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef from_optional_dicts(cls, ds: Optional[List[dict]],\n force_snake_case: bool=True, force_cast: bool=False, restrict: bool=True) -> TOption[TList[T]]:\n \"\"\"From list of dict to optional list of instance.\n\n :param ds: List of dict\n :param force_snake_case: Keys are transformed to snake case in order to compliant PEP8 if True\n :param force_cast: Cast forcibly if True\n :param restrict: Prohibit extra parameters if 
True\n :return: List of instance\n\n Usage:\n\n >>> from owlmixin.samples import Human\n >>> Human.from_optional_dicts(None).is_none()\n True\n >>> Human.from_optional_dicts([]).get()\n []\n \"\"\"\n return TOption(cls.from_dicts(ds,\n force_snake_case=force_snake_case,\n force_cast=force_cast,\n restrict=restrict) if ds is not None else None)", "response": "From list of dict to optional list of instance."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef from_dicts_by_key(cls, ds: dict,\n force_snake_case: bool=True, force_cast: bool=False, restrict: bool=True) -> TDict[T]:\n \"\"\"From dict of dict to dict of instance\n\n :param ds: Dict of dict\n :param force_snake_case: Keys are transformed to snake case in order to compliant PEP8 if True\n :param force_cast: Cast forcibly if True\n :param restrict: Prohibit extra parameters if True\n :return: Dict of instance\n\n Usage:\n\n >>> from owlmixin.samples import Human\n >>> humans_by_name: TDict[Human] = Human.from_dicts_by_key({\n ... 'Tom': {\"id\": 1, \"name\": \"Tom\", \"favorites\": [{\"name\": \"Apple\"}]},\n ... 'John': {\"id\": 2, \"name\": \"John\", \"favorites\": [{\"name\": \"Orange\"}]}\n ... 
})\n >>> humans_by_name['Tom'].name\n 'Tom'\n >>> humans_by_name['John'].name\n 'John'\n \"\"\"\n return TDict({k: cls.from_dict(v,\n force_snake_case=force_snake_case,\n force_cast=force_cast,\n restrict=restrict) for k, v in ds.items()})", "response": "From dict of dict to dict of instance"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef from_optional_dicts_by_key(cls, ds: Optional[dict],\n force_snake_case=True, force_cast: bool=False, restrict: bool=True) -> TOption[TDict[T]]:\n \"\"\"From dict of dict to optional dict of instance.\n\n :param ds: Dict of dict\n :param force_snake_case: Keys are transformed to snake case in order to compliant PEP8 if True\n :param force_cast: Cast forcibly if True\n :param restrict: Prohibit extra parameters if True\n :return: Dict of instance\n\n Usage:\n\n >>> from owlmixin.samples import Human\n >>> Human.from_optional_dicts_by_key(None).is_none()\n True\n >>> Human.from_optional_dicts_by_key({}).get()\n {}\n \"\"\"\n return TOption(cls.from_dicts_by_key(ds,\n force_snake_case=force_snake_case,\n force_cast=force_cast,\n restrict=restrict) if ds is not None else None)", "response": "From dict of dict of dict of instance."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef from_jsonf(cls, fpath: str, encoding: str='utf8',\n force_snake_case=True, force_cast: bool=False, restrict: bool=False) -> T:\n \"\"\"From json file path to instance\n\n :param fpath: Json file path\n :param encoding: Json file encoding\n :param force_snake_case: Keys are transformed to snake case in order to compliant PEP8 if True\n :param force_cast: Cast forcibly if True\n :param restrict: Prohibit extra parameters if True\n :return: Instance\n \"\"\"\n return cls.from_dict(util.load_jsonf(fpath, encoding),\n force_snake_case=force_snake_case,\n force_cast=force_cast,\n restrict=restrict)", "response": "From json file path to 
instance\n "} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef from_json_to_list(cls, data: str,\n force_snake_case=True, force_cast: bool=False, restrict: bool=False) -> TList[T]:\n \"\"\"From json string to list of instance\n\n :param data: Json string\n :param force_snake_case: Keys are transformed to snake case in order to compliant PEP8 if True\n :param force_cast: Cast forcibly if True\n :param restrict: Prohibit extra parameters if True\n :return: List of instance\n\n Usage:\n\n >>> from owlmixin.samples import Human\n >>> humans: TList[Human] = Human.from_json_to_list('''[\n ... {\"id\": 1, \"name\": \"Tom\", \"favorites\": [{\"name\": \"Apple\"}]},\n ... {\"id\": 2, \"name\": \"John\", \"favorites\": [{\"name\": \"Orange\"}]}\n ... ]''')\n >>> humans[0].name\n 'Tom'\n >>> humans[1].name\n 'John'\n \"\"\"\n return cls.from_dicts(util.load_json(data),\n force_snake_case=force_snake_case,\n force_cast=force_cast,\n restrict=restrict)", "response": "From json string to list of instance of class T"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef from_jsonf_to_list(cls, fpath: str, encoding: str='utf8',\n force_snake_case=True, force_cast: bool=False, restrict: bool=False) -> TList[T]:\n \"\"\"From json file path to list of instance\n\n :param fpath: Json file path\n :param encoding: Json file encoding\n :param force_snake_case: Keys are transformed to snake case in order to compliant PEP8 if True\n :param force_cast: Cast forcibly if True\n :param restrict: Prohibit extra parameters if True\n :return: List of instance\n \"\"\"\n return cls.from_dicts(util.load_jsonf(fpath, encoding),\n force_snake_case=force_snake_case,\n force_cast=force_cast,\n restrict=restrict)", "response": "From json file path to list of instance containing keys and values."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 
3 function doing\ndef from_yaml(cls, data: str, force_snake_case=True, force_cast: bool=False, restrict: bool=True) -> T:\n return cls.from_dict(util.load_yaml(data),\n force_snake_case=force_snake_case,\n force_cast=force_cast,\n restrict=restrict)", "response": "From yaml string to instance"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef from_csvf(cls, fpath: str, fieldnames: Optional[Sequence[str]]=None, encoding: str='utf8',\n force_snake_case: bool=True, restrict: bool=True) -> TList[T]:\n \"\"\"From csv file path to list of instance\n\n :param fpath: Csv file path\n :param fieldnames: Specify csv header names if not included in the file\n :param encoding: Csv file encoding\n :param force_snake_case: Keys are transformed to snake case in order to compliant PEP8 if True\n :param restrict: Prohibit extra parameters if True\n :return: List of Instance\n \"\"\"\n return cls.from_dicts(util.load_csvf(fpath, fieldnames, encoding),\n force_snake_case=force_snake_case,\n force_cast=True,\n restrict=restrict)", "response": "From csv file path to list of InstanceTrees"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get(self, index: int) -> TOption[T]:\n return TOption(self[index]) if len(self) > index else TOption(None)", "response": "Get the object at the specified index."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef head_while(self, func: Callable[[T], bool]) -> 'TList[T]':\n r = TList()\n for x in self:\n if not func(x):\n return r\n else:\n r.append(x)\n return r", "response": "Returns a copy of the list with elements from the head of the list while a truth test is true."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef uniq(self) -> 'TList[T]':\n rs = TList()\n for e in self:\n if e 
not in rs:\n rs.append(e)\n return rs", "response": "Return a new list of unique entries in the list."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning a new list with unique elements from the list.", "response": "def uniq_by(self, func: Callable[[T], Any]) -> 'TList[T]':\n \"\"\"\n Usage:\n\n >>> TList([1, 2, 3, -2, -1]).uniq_by(lambda x: x**2)\n [1, 2, 3]\n \"\"\"\n rs = TList()\n for e in self:\n if func(e) not in rs.map(func):\n rs.append(e)\n return rs"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn a dictionary of the set of keys and values grouped by to_key", "response": "def group_by(self, to_key):\n \"\"\"\n :param to_key:\n :type to_key: T -> unicode\n :rtype: TDict[TList[T]]\n\n Usage:\n\n >>> TList([1, 2, 3, 4, 5]).group_by(lambda x: x % 2).to_json()\n '{\"0\": [2,4],\"1\": [1,3,5]}'\n \"\"\"\n ret = TDict()\n for v in self:\n k = to_key(v)\n ret.setdefault(k, TList())\n ret[k].append(v)\n return ret"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef key_by(self, to_key: Callable[[T], str]) -> 'TDict[T]':\n return TDict({to_key(x): x for x in self})", "response": "Return a copy of the list with keys mapped to the given function."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nsorting the list by a function.", "response": "def order_by(self, func, reverse=False):\n \"\"\"\n :param func:\n :type func: T -> any\n :param reverse: Sort by descend order if True, else by ascend\n :type reverse: bool\n :rtype: TList[T]\n\n Usage:\n\n >>> TList([12, 25, 31, 40, 57]).order_by(lambda x: x % 10)\n [40, 31, 12, 25, 57]\n >>> TList([12, 25, 31, 40, 57]).order_by(lambda x: x % 10, reverse=True)\n [57, 25, 12, 31, 40]\n \"\"\"\n return TList(sorted(self, key=func, reverse=reverse))"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef find(self, 
func: Callable[[T], bool]) -> TOption[T]:\n for x in self:\n if func(x):\n return TOption(x)\n return TOption(None)", "response": "Find the first entry in the list that satisfies the condition func."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get(self, key: K) -> TOption[T]:\n return TOption(self[key]) if key in self else TOption(None)", "response": "Get the value of the key in the object."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef map(self, func):\n return TList([func(k, v) for k, v in self.items()])", "response": "Return a list of U objects by applying func to each element of the list."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nmapping a function to the values of the object.", "response": "def map_values(self, func):\n \"\"\"\n :param func:\n :type func: T -> U\n :rtype: TDict[U]\n\n Usage:\n\n >>> TDict(k1=1, k2=2, k3=3).map_values(lambda x: x*2) == {\n ... \"k1\": 2,\n ... \"k2\": 4,\n ... \"k3\": 6\n ... 
}\n True\n \"\"\"\n return TDict({k: func(v) for k, v in self.items()})"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\n
def map_values2(self, func):\n return TDict({k: func(k, v) for k, v in self.items()})", "response": "Apply func to each key-value pair and return a TDict of the results, keyed by the original keys."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\n
def filter(self, func):\n return TList([v for k, v in self.items() if func(k, v)])", "response": "Return a TList of the values whose key-value pairs satisfy the given function."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\n
returns a TList of the values whose key-value pairs do not satisfy the condition func.", "response": "def reject(self, func):\n \"\"\"\n :param func:\n :type func: (K, T) -> bool\n :rtype: TList[T]\n\n Usage:\n\n >>> TDict(k1=1, k2=2, k3=3).reject(lambda k, v: v < 3)\n [3]\n \"\"\"\n return TList([v for k, v in self.items() if not func(k, v)])"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\n
returns the first entry in the dictionary that satisfies the given function.", "response": "def find(self, func: Callable[[K, T], bool]) -> TOption[T]:\n \"\"\"\n Usage:\n\n >>> TDict(k1=1, k2=2, k3=3).find(lambda k, v: v == 2)\n Option --> 2\n >>> TDict(k1=1, k2=2, k3=3).find(lambda k, v: v == 4)\n Option --> None\n \"\"\"\n for k, v in self.items():\n if func(k, v):\n return TOption(v)\n return TOption(None)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\n
def all(self, func):\n return all([func(k, v) for k, v in self.items()])", "response": "Tests whether func returns True for every key-value pair in the dictionary."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\n
def any(self, func):\n return any([func(k, v) for k, v in self.items()])", "response": "Tests whether func returns True for any key-value pair in the dictionary."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\n
return a copy of the dictionary with only the entries that satisfy the given function.", "response": "def pick_by(self, func: Callable[[K, T], bool]) -> 'TDict[T]':\n \"\"\"\n Usage:\n\n >>> TDict(k1=1, k2=2, k3=3).pick_by(lambda k, v: v > 2)\n {'k3': 3}\n \"\"\"\n return TDict({k: v for k, v in self.items() if func(k, v)})"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\n
def storeSectionState(self, level):\n\n # self.document.append(\"<!-- storeSectionState(): \" + str(len(self.header_stack)) + \" open section tags. \" + str(self.header_stack) + \"-->\\n\")\n\n try:\n # special case. we are not processing an OOo XML start tag which we\n # are going to insert <section> before. we have reached a point\n # where all sections need to be closed. EG </office:body> or </text:section>,\n # both of which are hierarchical => scope closure for all open <section> tags\n bClosedAllSections = ( level == u'0' )\n if bClosedAllSections:\n # have reached a point where all sections need to be closed\n iSectionsClosed = len(self.header_stack)\n while len(self.header_stack) > 0:\n del(self.header_stack[-1])\n return iSectionsClosed\n\n if len(self.header_stack) == 0:\n # no open section tags\n iSectionsClosed = 0\n self.header_stack.append(level)\n else:\n iLastLevel = self.header_stack[-1]\n if level > iLastLevel:\n # open sections tags AND no sections need closing\n iSectionsClosed = 0\n self.header_stack.append(level)\n elif level == iLastLevel:\n # open sections tags AND need to closed one of the sections\n iSectionsClosed = 1\n # imagine deleting the last level and then re-adding it\n elif level < iLastLevel:\n # open sections tags AND need to closed some of the sections\n del(self.header_stack[-1])\n iSectionsClosed = 1\n iSectionsClosed += self.storeSectionState(level)\n\n return iSectionsClosed\n\n except IndexError:\n print(level)\n raise", "response": "Stores the state of the section at the given level."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\n
close all sections of level >= sectnum. Defaults to closing all open sections.", "response": "def endSections(self, level=u'0'):\n \"\"\"Closes all sections of level >= sectnum. Defaults to closing all open sections\"\"\"\n\n iSectionsClosed = self.storeSectionState(level)\n self.document.append(\"</section>\\n\" * iSectionsClosed)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\n
def to_dict(self, ignore_none: bool=True, force_value: bool=True, ignore_empty: bool=False) -> dict:\n return traverse_dict(self._dict, ignore_none, force_value, ignore_empty)", "response": "From instance to dict"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\n
def to_yaml(self, ignore_none: bool=True, ignore_empty: bool=False) -> str:\n return util.dump_yaml(traverse(self, ignore_none, force_value=True, ignore_empty=ignore_empty))", "response": "From instance to yaml string"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\n
def to_csv(self, fieldnames: Sequence[str], with_header: bool=False, crlf: bool=False, tsv: bool=False) -> str:\n return util.dump_csv(traverse(self, force_value=True), fieldnames, with_header, crlf, tsv)", "response": "From instance to csv string"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\n
def _post_tidy(html):\n tree = etree.fromstring(html)\n ems = tree.xpath(\n \"//xh:em[@class='underline']|//xh:em[contains(@class, ' underline ')]\",\n namespaces={'xh': 'http://www.w3.org/1999/xhtml'})\n for el in ems:\n c = el.attrib.get('class', '').split()\n c.remove('underline')\n el.tag = '{http://www.w3.org/1999/xhtml}u'\n if c:\n            
el.attrib['class'] = ' '.join(c)\n elif 'class' in el.attrib:\n del(el.attrib['class'])\n\n return tree", "response": "This method transforms post tidy. Will go away when tidy goes away."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _tidy2xhtml5(html):\n html = _io2string(html)\n html = _pre_tidy(html) # Pre-process\n xhtml5, errors =\\\n tidy_document(html,\n options={\n # do not merge nested div elements\n # - preserve semantic block structrues\n 'merge-divs': 0,\n # create xml output\n 'output-xml': 1,\n # Don't use indent, adds extra linespace or linefeed\n # which are big problems\n 'indent': 0,\n # No tidy meta tag in output\n 'tidy-mark': 0,\n # No wrapping\n 'wrap': 0,\n # Help ensure validation\n 'alt-text': '',\n # No sense in transitional for tool-generated markup\n 'doctype': 'strict',\n # May not get what you expect,\n # but you will get something\n 'force-output': 1,\n # remove HTML entities like e.g. 
nbsp\n 'numeric-entities': 1,\n # remove\n 'clean': 1,\n 'bare': 1,\n 'word-2000': 1,\n 'drop-proprietary-attributes': 1,\n # enclose text in body always with <p>...</p>\n 'enclose-text': 1,\n # transforms <i> and <b> to <em> and <strong>\n 'logical-emphasis': 1,\n # do not tidy all MathML elements!\n # List of MathML 3.0 elements from\n # http://www.w3.org/TR/MathML3/appendixi.html#index.elem\n 'new-inline-tags': 'abs, and, annotation, '\n 'annotation-xml, apply, approx, arccos, arccosh, '\n 'arccot, arccoth, arccsc, arccsch, arcsec, arcsech, '\n 'arcsin, arcsinh, arctan, arctanh, arg, bind, bvar, '\n 'card, cartesianproduct, cbytes, ceiling, cerror, '\n 'ci, cn, codomain, complexes, compose, condition, '\n 'conjugate, cos, cosh, cot, coth, cs, csc, csch, '\n 'csymbol, curl, declare, degree, determinant, diff, '\n 'divergence, divide, domain, domainofapplication, '\n 'el, emptyset, eq, equivalent, eulergamma, exists, '\n 'exp, exponentiale, factorial, factorof, false, '\n 'floor, fn, forall, gcd, geq, grad, gt, ident, '\n 'image, imaginary, imaginaryi, implies, in, '\n 'infinity, int, integers, intersect, interval, '\n 'inverse, lambda, laplacian, lcm, leq, limit, list, '\n 'ln, log, logbase, lowlimit, lt, maction, malign, '\n 'maligngroup, malignmark, malignscope, math, '\n 'matrix, matrixrow, max, mean, median, menclose, '\n 'merror, mfenced, mfrac, mfraction, mglyph, mi, '\n 'min, minus, mlabeledtr, mlongdiv, mmultiscripts, '\n 'mn, mo, mode, moment, momentabout, mover, mpadded, '\n 'mphantom, mprescripts, mroot, mrow, ms, mscarries, '\n 'mscarry, msgroup, msline, mspace, msqrt, msrow, '\n 'mstack, mstyle, msub, msubsup, msup, mtable, mtd, '\n 'mtext, mtr, munder, munderover, naturalnumbers, '\n 'neq, none, not, notanumber, note, notin, '\n 'notprsubset, notsubset, or, otherwise, '\n 'outerproduct, partialdiff, pi, piece, piecewise, '\n 'plus, power, primes, product, prsubset, quotient, '\n 'rationals, real, reals, reln, rem, root, '\n 'scalarproduct, sdev, 
sec, sech, selector, '\n 'semantics, sep, set, setdiff, share, sin, sinh, '\n 'subset, sum, tan, tanh, tendsto, times, transpose, '\n 'true, union, uplimit, variance, vector, '\n 'vectorproduct, xor',\n 'doctype': 'html5',\n })\n\n # return xhtml5\n # return the tree itself, there is another modification below to avoid\n # another parse\n return _post_tidy(xhtml5)", "response": "Tidy up a html4. 5 soup to a parsable valid XHTML5."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _make_xsl(filename):\n path = pkg_resources.resource_filename('rhaptos.cnxmlutils.xsl', filename)\n xml = etree.parse(path)\n return etree.XSLT(xml)", "response": "Helper that creates a XSLT stylesheet"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ntransform the xml using the specifiec xsl file.", "response": "def _transform(xsl_filename, xml, **kwargs):\n \"\"\"Transforms the xml using the specifiec xsl file.\"\"\"\n xslt = _make_xsl(xsl_filename)\n xml = xslt(xml, **kwargs)\n return xml"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef cnxml_to_html(cnxml_source):\n source = _string2io(cnxml_source)\n xml = etree.parse(source)\n # Run the CNXML to HTML transform\n xml = _transform('cnxml-to-html5.xsl', xml,\n version='\"{}\"'.format(version))\n xml = XHTML_MODULE_BODY_XPATH(xml)\n return etree.tostring(xml[0])", "response": "Transform the CNXML source to HTML"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef aloha_to_etree(html_source):\n xml = _tidy2xhtml5(html_source)\n for i, transform in enumerate(ALOHA2HTML_TRANSFORM_PIPELINE):\n xml = transform(xml)\n return xml", "response": "Converts HTML5 from Aloha editor output to lxml etree."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nconverts HTML5 from Aloha to a more structured HTML5", "response": "def 
aloha_to_html(html_source):\n \"\"\"Converts HTML5 from Aloha to a more structured HTML5\"\"\"\n xml = aloha_to_etree(html_source)\n return etree.tostring(xml, pretty_print=True)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef html_to_cnxml(html_source, cnxml_source):\n source = _string2io(html_source)\n xml = etree.parse(source)\n cnxml = etree.parse(_string2io(cnxml_source))\n # Run the HTML to CNXML transform on it\n xml = _transform('html5-to-cnxml.xsl', xml)\n # Replace the original content element with the transformed one.\n namespaces = {'c': 'http://cnx.rice.edu/cnxml'}\n xpath = etree.XPath('//c:content', namespaces=namespaces)\n replaceable_node = xpath(cnxml)[0]\n replaceable_node.getparent().replace(replaceable_node, xml.getroot())\n # Set the content into the existing cnxml source\n return etree.tostring(cnxml)", "response": "Transform the HTML to CNXML."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ntransforms the HTML to valid CNXML.", "response": "def html_to_valid_cnxml(html_source):\n \"\"\"Transform the HTML to valid CNXML (used for OERPUB).\n No original CNXML is needed. 
If HTML is from Aloha please use\n aloha_to_html before using this method\n \"\"\"\n source = _string2io(html_source)\n xml = etree.parse(source)\n return etree_to_valid_cnxml(xml, pretty_print=True)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngetting or return the current value of the object.", "response": "def get_or(self, default: T) -> T:\n \"\"\"\n Usage:\n\n >>> TOption(3).get_or(999)\n 3\n >>> TOption(0).get_or(999)\n 0\n >>> TOption(None).get_or(999)\n 999\n \"\"\"\n return default if self.value is None else self.value"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef map(self, func: Callable[[T], U]) -> 'TOption[T]':\n return self if self.is_none() else TOption(func(self.value))", "response": "Returns a new instance of the class with the value mapped to the given function."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns a new TOption instance with the values mapped to the specified function.", "response": "def flat_map(self, func: Callable[[T], 'TOption[T]']) -> 'TOption[T]':\n \"\"\"\n Usage:\n\n >>> TOption(3).flat_map(lambda x: TOption(x+1)).get()\n 4\n >>> TOption(3).flat_map(lambda x: TOption(None)).get_or(999)\n 999\n >>> TOption(None).flat_map(lambda x: TOption(x+1)).get_or(999)\n 999\n \"\"\"\n return self if self.is_none() else TOption(func(self.value).get())"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef replace(text):\n for hex, value in UNICODE_DICTIONARY.items():\n num = int(hex[3:-1], 16)\n #uni = unichr(num)\n decimal = '&#' + str(num) + ';'\n for key in [ hex, decimal ]: #uni\n text = text.replace(key, value)\n return text", "response": "Replace both the hex and decimal versions of symbols in an XML string"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreplaces keys in a dictionary with the values in 
keymap.", "response": "def replace_keys(d, keymap, force_snake_case):\n \"\"\"\n :param dict d:\n :param Dict[unicode, unicode] keymap:\n :param bool force_snake_case:\n :rtype: Dict[unicode, unicode]\n \"\"\"\n return {\n to_snake(keymap.get(k, k)) if force_snake_case else keymap.get(k, k):\n v for k, v in d.items()\n }"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nloading a single node from a JSON file.", "response": "def load_jsonf(fpath, encoding):\n \"\"\"\n :param unicode fpath:\n :param unicode encoding:\n :rtype: dict | list\n \"\"\"\n with codecs.open(fpath, encoding=encoding) as f:\n return json.load(f)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nloading a single node from a yaml file.", "response": "def load_yamlf(fpath, encoding):\n \"\"\"\n :param unicode fpath:\n :param unicode encoding:\n :rtype: dict | list\n \"\"\"\n with codecs.open(fpath, encoding=encoding) as f:\n return yaml.safe_load(f)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nload a single ndata tree from a CSV file.", "response": "def load_csvf(fpath, fieldnames, encoding):\n \"\"\"\n :param unicode fpath:\n :param Optional[list[unicode]] fieldnames:\n :param unicode encoding:\n :rtype: List[dict]\n \"\"\"\n with open(fpath, mode='r', encoding=encoding) as f:\n snippet = f.read(8192)\n f.seek(0)\n\n dialect = csv.Sniffer().sniff(snippet)\n dialect.skipinitialspace = True\n return list(csv.DictReader(f, fieldnames=fieldnames, dialect=dialect))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ndump a list of dicts into a CSV file.", "response": "def dump_csv(data: List[dict], fieldnames: Sequence[str], with_header: bool = False, crlf: bool = False,\n tsv: bool = False) -> str:\n \"\"\"\n :param data:\n :param fieldnames:\n :param with_header:\n :param crlf:\n :param tsv:\n :return: unicode\n \"\"\"\n\n def force_str(v):\n # XXX: Double quotation behaves strangely... 
so replace (why?)\n return dump_json(v).replace('\"', \"'\") if isinstance(v, (dict, list)) else v\n\n with io.StringIO() as sio:\n dialect = get_dialect_name(crlf, tsv)\n writer = csv.DictWriter(sio, fieldnames=fieldnames, dialect=dialect, extrasaction='ignore')\n if with_header:\n writer.writeheader()\n for x in data:\n writer.writerow({k: force_str(v) for k, v in x.items()})\n sio.seek(0)\n return sio.read()"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef save_csvf(data: list, fieldnames: Sequence[str], fpath: str, encoding: str, with_header: bool = False,\n crlf: bool = False, tsv: bool = False) -> str:\n \"\"\"\n :param data:\n :param fieldnames:\n :param fpath: write path\n :param encoding: encoding\n :param with_header:\n :param crlf:\n :param tsv:\n :return: written path\n \"\"\"\n with codecs.open(fpath, mode='w', encoding=encoding) as f:\n f.write(dump_csv(data, fieldnames, with_header=with_header, crlf=crlf, tsv=tsv))\n return fpath", "response": "Save data to a CSV file."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ndump a list | dict to JSON.", "response": "def dump_json(data, indent=None):\n \"\"\"\n :param list | dict data:\n :param Optional[int] indent:\n :rtype: unicode\n \"\"\"\n return json.dumps(data,\n indent=indent,\n ensure_ascii=False,\n sort_keys=True,\n separators=(',', ': '))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nsave a list of dicts to a JSON file.", "response": "def save_jsonf(data: Union[list, dict], fpath: str, encoding: str, indent=None) -> str:\n \"\"\"\n :param data: list | dict data\n :param fpath: write path\n :param encoding: encoding\n :param indent:\n :rtype: written path\n \"\"\"\n with codecs.open(fpath, mode='w', encoding=encoding) as f:\n f.write(dump_json(data, indent))\n return fpath"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ndumping a list | dict to YAML", "response": 
"def dump_yaml(data):\n \"\"\"\n :param list | dict data:\n :rtype: unicode\n \"\"\"\n return yaml.dump(data,\n indent=2,\n encoding=None,\n allow_unicode=True,\n default_flow_style=False,\n Dumper=MyDumper)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nsave a list of dicts to a file in YAML format.", "response": "def save_yamlf(data: Union[list, dict], fpath: str, encoding: str) -> str:\n \"\"\"\n :param data: list | dict data\n :param fpath: write path\n :param encoding: encoding\n :rtype: written path\n \"\"\"\n with codecs.open(fpath, mode='w', encoding=encoding) as f:\n f.write(dump_yaml(data))\n return fpath"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ndumping a list of dictionaries into a string.", "response": "def dump_table(data: List[dict], fieldnames: Sequence[str]) -> str:\n \"\"\"\n :param data:\n :param fieldnames:\n :return: Table string\n \"\"\"\n\n def min3(num: int) -> int:\n return 3 if num < 4 else num\n\n width_by_col: Dict[str, int] = {\n f: min3(max([string_width(str(d.get(f))) for d in data] + [string_width(f)])) for f in fieldnames\n }\n\n def fill_spaces(word: str, width: int, center=False):\n \"\"\" aaa, 4 => ' aaa ' \"\"\"\n to_fills: int = width - string_width(word)\n return f\" {' ' * floor(to_fills / 2)}{word}{' ' * ceil(to_fills / 2)} \" if center \\\n else f\" {word}{' ' * to_fills} \"\n\n def to_record(r: dict) -> str:\n return f\"|{'|'.join([fill_spaces(str(r.get(f)), width_by_col.get(f)) for f in fieldnames])}|\"\n\n return f\"\"\"\n|{'|'.join([fill_spaces(x, width_by_col.get(x), center=True) for x in fieldnames])}|\n|{'|'.join([fill_spaces(width_by_col.get(x) * \"-\", width_by_col.get(x)) for x in fieldnames])}|\n{os.linesep.join([to_record(x) for x in data])}\n\"\"\".lstrip()"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning the width of the word in the alphabet", "response": "def string_width(word: str) -> int:\n 
\"\"\"\n :param word:\n :return: Widths of word\n\n Usage:\n\n >>> string_width('abc')\n 3\n >>> string_width('\uff21b\u3057\u30fc')\n 7\n >>> string_width('')\n 0\n \"\"\"\n return sum(map(lambda x: 2 if east_asian_width(x) in 'FWA' else 1, word))"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nwrite out XML file.", "response": "def writeXMLFile(filename, content):\n \"\"\" Used only for debugging to write out intermediate files\"\"\"\n xmlfile = open(filename, 'w')\n # pretty print\n content = etree.tostring(content, pretty_print=True)\n xmlfile.write(content)\n xmlfile.close()"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef transform(odtfile, debug=False, parsable=False, outputdir=None):\n # Store mapping of images extracted from the ODT file (and their bits)\n images = {}\n # Log of Errors and Warnings generated\n # For example, the text produced by XSLT should be:\n # {'level':'WARNING',\n # 'msg' :'Headings without text between them are not allowed',\n # 'id' :'import-auto-id2376'}\n # That way we can put a little * near all the cnxml where issues arose\n errors = []\n\n zip = zipfile.ZipFile(odtfile, 'r')\n content = zip.read('content.xml')\n xml = etree.fromstring(content)\n\n def appendLog(xslDoc):\n if hasattr(xslDoc, 'error_log'):\n for entry in xslDoc.error_log:\n # Entries are of the form:\n # {'level':'ERROR','id':'id1234','msg':'Descriptive message'}\n text = entry.message\n try:\n dict = json.loads(text)\n errors.append(dict)\n except ValueError:\n errors.append({\n u'level':u'CRITICAL',\n u'id' :u'(none)',\n u'msg' :unicode(text) })\n \n def injectStyles(xml):\n # HACK - need to find the object location from the manifest ...\n strStyles = zip.read('styles.xml')\n \n parser = etree.XMLParser()\n parser.feed(strStyles)\n stylesXml = parser.close()\n \n for i, obj in enumerate(STYLES_XPATH(stylesXml)):\n xml.append(obj)\n\n return xml\n\n\n # All 
MathML is stored in separate files \"Object #/content.xml\"\n # This converter includes the MathML by looking up the file in the zip\n def mathIncluder(xml):\n for i, obj in enumerate(MATH_XPATH(xml)):\n strMathPath = MATH_HREF_XPATH(obj)[0] # Or obj.get('{%s}href' % XLINK_NS)\n if strMathPath[0] == '#':\n strMathPath = strMathPath[1:]\n # Remove leading './' Zip doesn't like it\n if strMathPath[0] == '.':\n strMathPath = strMathPath[2:]\n\n # HACK - need to find the object location from the manifest ...\n strMathPath = os.path.join(strMathPath, 'content.xml')\n strMath = zip.read(strMathPath)\n \n #parser = etree.XMLParser(encoding='utf-8')\n #parser.feed(strMath)\n #math = parser.close()\n math = etree.parse(StringIO(strMath)).getroot()\n \n # Replace the reference to the Math with the actual MathML\n obj.getparent().replace(obj, math)\n return xml\n\n def imagePuller(xml):\n for i, obj in enumerate(IMAGE_XPATH(xml)):\n strPath = IMAGE_HREF_XPATH(obj)[0]\n strName = IMAGE_NAME_XPATH(obj)[0]\n\n fileNeedEnding = ( strName.find('.') == -1 )\n if fileNeedEnding:\n strName = strName + strPath[strPath.index('.'):]\n\n if strPath[0] == '#':\n strPath = strPath[1:]\n # Remove leading './' Zip doesn't like it\n if strPath[0] == '.':\n strPath = strPath[2:]\n\n image = zip.read(strPath)\n images[strName] = image\n \n # Later on, an XSL pass will convert the draw:frame to a c:image and \n # set the @src correctly\n\n return xml\n \n def drawPuller(xml):\n styles = DRAW_STYLES_XPATH(xml)\n \n empty_odg_dirname = os.path.join(dirname, 'empty_odg_template')\n \n temp_dirname = tempfile.mkdtemp()\n \n for i, obj in enumerate(DRAW_XPATH(xml)):\n # Copy everything except content.xml from the empty ODG (OOo Draw) template into a new zipfile\n \n odg_filename = DRAW_FILENAME_PREFIX + str(i) + '.odg'\n png_filename = DRAW_FILENAME_PREFIX + str(i) + '.png'\n\n # add PNG filename as attribute to parent node. The good thing is: The child (obj) will get lost! 
:-)\n parent = obj.getparent()\n parent.attrib['ooo_drawing'] = png_filename\n \n odg_zip = zipfile.ZipFile(os.path.join(temp_dirname, odg_filename), 'w', zipfile.ZIP_DEFLATED)\n for root, dirs, files in os.walk(empty_odg_dirname):\n for name in files:\n if name not in ('content.xml', 'styles.xml'): # copy everything inside ZIP except content.xml or styles.xml\n sourcename = os.path.join(root, name)\n # http://stackoverflow.com/a/1193171/756056 \n arcname = os.path.join(root[len(empty_odg_dirname):], name) # Path name inside the ZIP file, empty_odg_template is the root folder\n odg_zip.write(sourcename, arcname)\n \n content = etree.parse(os.path.join(empty_odg_dirname, 'content.xml'))\n \n # Inject content styles in empty OOo Draw content.xml\n content_style_xpath = etree.XPath('/office:document-content/office:automatic-styles', namespaces=NAMESPACES)\n content_styles = content_style_xpath(content) \n for style in styles:\n content_styles[0].append(deepcopy(style))\n \n # Inject drawing in empty OOo Draw content.xml\n content_page_xpath = etree.XPath('/office:document-content/office:body/office:drawing/draw:page', namespaces=NAMESPACES)\n content_page = content_page_xpath(content)\n content_page[0].append(obj)\n \n # write modified content.xml\n odg_zip.writestr('content.xml', etree.tostring(content, xml_declaration=True, encoding='UTF-8'))\n \n # copy styles.xml from odt to odg without modification\n styles_xml = zip.read('styles.xml')\n odg_zip.writestr('styles.xml', styles_xml)\n\n odg_zip.close()\n \n # TODO: Better error handling in the future.\n try:\n # convert every odg to png\n command = '/usr/bin/soffice -headless -nologo -nofirststartwizard \"macro:///Standard.Module1.SaveAsPNG(%s,%s)\"' % (os.path.join(temp_dirname, odg_filename),os.path.join(temp_dirname, png_filename))\n os.system(command)\n\n # save every image to memory \n image = open(os.path.join(temp_dirname, png_filename), 'r').read()\n images[png_filename] = image\n \n if outputdir is not 
None:\n                shutil.copy (os.path.join(temp_dirname, odg_filename), os.path.join(outputdir, odg_filename))\n                shutil.copy (os.path.join(temp_dirname, png_filename), os.path.join(outputdir, png_filename))\n        except:\n            pass\n        \n        # delete temporary directory\n        shutil.rmtree(temp_dirname)\n        \n        return xml\n\n    # Reparse after XSL because the RED-escape pass injects arbitrary XML\n    def redParser(xml):\n        xsl = makeXsl('pass1_odt2red-escape.xsl')\n        result = xsl(xml)\n        appendLog(xsl)\n        try:\n            xml = etree.fromstring(etree.tostring(result))\n        except etree.XMLSyntaxError as e:\n            msg = str(e)\n            xml = makeXsl('pass1_odt2red-failed.xsl')(xml, message=\"'%s'\" % msg.replace(\"'\", '\"'))\n            xml = xml.getroot()\n        return xml\n\n    def replaceSymbols(xml):\n        xmlstr = etree.tostring(xml)\n        xmlstr = symbols.replace(xmlstr)\n        return etree.fromstring(xmlstr)\n\n    PIPELINE = [\n        drawPuller, # gets OOo Draw objects out of odt and generate odg (OOo Draw) files\n        replaceSymbols,\n        injectStyles, # include the styles.xml file because it contains list numbering info\n        makeXsl('pass2_odt-normalize.xsl'), # This needs to be done 2x to fix headings \n        makeXsl('pass2_odt-normalize.xsl'), # In the worst case all headings are 9 \n                                            # and need to be 1. See (testbed) southwood__Lesson_2.doc\n        makeXsl('pass2_odt-collapse-spans.xsl'), # Collapse adjacent spans (for RED)\n        redParser, # makeXsl('pass1_odt2red-escape.xsl'),\n        makeXsl('pass4_odt-headers.xsl'),\n        imagePuller, # Need to run before math because both have a <draw:image> (see xpath)\n        mathIncluder,\n        makeXsl('pass7_odt2cnxml.xsl'),\n        makeXsl('pass8_cnxml-cleanup.xsl'),\n        makeXsl('pass8.5_cnxml-cleanup.xsl'),\n        makeXsl('pass9_id-generation.xsl'),\n        makeXsl('pass10_processing-instruction-logger.xsl'),\n    ]\n\n    # \"xml\" variable gets replaced during each iteration\n    passNum = 0\n    for xslDoc in PIPELINE:\n        if debug: errors.append(\"DEBUG: Starting pass %d\" % passNum)\n        xml = xslDoc(xml)\n\n        appendLog(xslDoc)\n        if outputdir is not None: writeXMLFile(os.path.join(outputdir, 'pass%d.xml' % passNum), xml)\n        passNum += 1\n\n    # In most cases (EIP) Invalid XML is preferable over valid but Escaped XML\n    if not parsable:\n        xml = (makeXsl('pass11_red-unescape.xsl'))(xml)\n\n    return (xml, images, errors)", "response": "This function takes an ODT file and returns a tuple of (cnxml, a dictionary of filename -> image data, a list of errors)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncreating an instance from a unique symbol.", "response": "def from_value(cls, value: str) -> T:\n    \"\"\"Create instance from symbol\n    :param value: unique symbol\n    :return: This instance\n\n    Usage:\n\n        >>> from owlmixin.samples import Animal\n        >>> Animal.from_value('cat').crow()\n        mewing\n    \"\"\"\n    return [x for x in cls.__members__.values() if x.value[0] == value][0]"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef seed(vault_client, opt):\n    if opt.thaw_from:\n        opt.secrets = tempfile.mkdtemp('aomi-thaw')\n        auto_thaw(vault_client, opt)\n\n    Context.load(get_secretfile(opt), opt) \\\n           .fetch(vault_client) \\\n           .sync(vault_client, opt)\n\n    if opt.thaw_from:\n        rmtree(opt.secrets)", "response": "Will provision 
vault based on the definition within a Secretfile"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nrendering any provided template. This includes the Secretfile Vault policies and inline AWS roles.", "response": "def render(directory, opt):\n \"\"\"Render any provided template. This includes the Secretfile,\n Vault policies, and inline AWS roles\"\"\"\n if not os.path.exists(directory) and not os.path.isdir(directory):\n os.mkdir(directory)\n\n a_secretfile = render_secretfile(opt)\n s_path = \"%s/Secretfile\" % directory\n LOG.debug(\"writing Secretfile to %s\", s_path)\n open(s_path, 'w').write(a_secretfile)\n ctx = Context.load(yaml.safe_load(a_secretfile), opt)\n for resource in ctx.resources():\n if not resource.present:\n continue\n\n if issubclass(type(resource), Policy):\n if not os.path.isdir(\"%s/policy\" % directory):\n os.mkdir(\"%s/policy\" % directory)\n\n filename = \"%s/policy/%s\" % (directory, resource.path)\n open(filename, 'w').write(resource.obj())\n LOG.debug(\"writing %s to %s\", resource, filename)\n elif issubclass(type(resource), AWSRole):\n if not os.path.isdir(\"%s/aws\" % directory):\n os.mkdir(\"%s/aws\" % directory)\n\n if 'policy' in resource.obj():\n filename = \"%s/aws/%s\" % (directory,\n os.path.basename(resource.path))\n r_obj = resource.obj()\n if 'policy' in r_obj:\n LOG.debug(\"writing %s to %s\", resource, filename)\n open(filename, 'w').write(r_obj['policy'])"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef export(vault_client, opt):\n ctx = Context.load(get_secretfile(opt), opt) \\\n .fetch(vault_client)\n for resource in ctx.resources():\n resource.export(opt.directory)", "response": "Export contents of a Secretfile from the Vault server into a specified directory."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nnormalizes JSON or YAML derived values as they pertain to Vault 
resources and comparison operations", "response": "def normalize_val(val):\n \"\"\"Normalize JSON/YAML derived values as they pertain\n to Vault resources and comparison operations \"\"\"\n if is_unicode(val) and val.isdigit():\n return int(val)\n elif isinstance(val, list):\n return ','.join(val)\n elif val is None:\n return ''\n\n return val"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef details_dict(obj, existing, ignore_missing, opt):\n existing = dict_unicodeize(existing)\n obj = dict_unicodeize(obj)\n for ex_k, ex_v in iteritems(existing):\n new_value = normalize_val(obj.get(ex_k))\n og_value = normalize_val(ex_v)\n if ex_k in obj and og_value != new_value:\n print(maybe_colored(\"-- %s: %s\" % (ex_k, og_value),\n 'red', opt))\n print(maybe_colored(\"++ %s: %s\" % (ex_k, new_value),\n 'green', opt))\n\n if (not ignore_missing) and (ex_k not in obj):\n print(maybe_colored(\"-- %s: %s\" % (ex_k, og_value),\n 'red', opt))\n\n for ob_k, ob_v in iteritems(obj):\n val = normalize_val(ob_v)\n if ob_k not in existing:\n print(maybe_colored(\"++ %s: %s\" % (ob_k, val),\n 'green', opt))\n\n return", "response": "Output the changes if any for a dict"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nprints out detailed version of the specified Vault resource", "response": "def maybe_details(resource, opt):\n \"\"\"At the first level of verbosity this will print out detailed\n change information on for the specified Vault resource\"\"\"\n\n if opt.verbose == 0:\n return\n\n if not resource.present:\n return\n\n obj = None\n existing = None\n if isinstance(resource, Resource):\n obj = resource.obj()\n existing = resource.existing\n elif isinstance(resource, VaultBackend):\n obj = resource.config\n existing = resource.existing\n\n if not obj:\n return\n\n if is_unicode(existing) and is_unicode(obj):\n a_diff = difflib.unified_diff(existing.splitlines(),\n obj.splitlines(),\n 
lineterm='')\n for line in a_diff:\n if line.startswith('+++') or line.startswith('---'):\n continue\n if line[0] == '+':\n print(maybe_colored(\"++ %s\" % line[1:], 'green', opt))\n elif line[0] == '-':\n print(maybe_colored(\"-- %s\" % line[1:], 'red', opt))\n else:\n print(line)\n elif isinstance(existing, dict):\n ignore_missing = isinstance(resource, VaultBackend)\n details_dict(obj, existing, ignore_missing, opt)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef diff_a_thing(thing, opt):\n changed = thing.diff()\n if changed == ADD:\n print(\"%s %s\" % (maybe_colored(\"+\", \"green\", opt), str(thing)))\n elif changed == DEL:\n print(\"%s %s\" % (maybe_colored(\"-\", \"red\", opt), str(thing)))\n elif changed == CHANGED:\n print(\"%s %s\" % (maybe_colored(\"~\", \"yellow\", opt), str(thing)))\n elif changed == OVERWRITE:\n print(\"%s %s\" % (maybe_colored(\"+\", \"yellow\", opt), str(thing)))\n elif changed == CONFLICT:\n print(\"%s %s\" % (maybe_colored(\"!\", \"red\", opt), str(thing)))\n\n if changed != OVERWRITE and changed != NOOP:\n maybe_details(thing, opt)", "response": "Handle the diff action for a single thing"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef diff(vault_client, opt):\n if opt.thaw_from:\n opt.secrets = tempfile.mkdtemp('aomi-thaw')\n auto_thaw(vault_client, opt)\n\n ctx = Context.load(get_secretfile(opt), opt) \\\n .fetch(vault_client)\n\n for backend in ctx.mounts():\n diff_a_thing(backend, opt)\n\n for resource in ctx.resources():\n diff_a_thing(resource, opt)\n\n if opt.thaw_from:\n rmtree(opt.secrets)", "response": "Derive a comparison between what is represented in the Secretfile\n and what is actually live on a Vault instance"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nhandling display of help and whatever diagnostics", "response": "def help_me(parser, opt):\n \"\"\"Handle display 
of help and whatever diagnostics\"\"\"\n print(\"aomi v%s\" % version)\n print('Get started with aomi'\n ' https://autodesk.github.io/aomi/quickstart')\n if opt.verbose == 2:\n tf_str = 'Token File,' if token_file() else ''\n app_str = 'AppID File,' if appid_file() else ''\n approle_str = 'Approle File,' if approle_file() else ''\n tfe_str = 'Token Env,' if 'VAULT_TOKEN' in os.environ else ''\n appre_str = 'App Role Env,' if 'VAULT_ROLE_ID' in os.environ and \\\n 'VAULT_SECRET_ID' in os.environ else ''\n appe_str = 'AppID Env,' if 'VAULT_USER_ID' in os.environ and \\\n 'VAULT_APP_ID' in os.environ else ''\n\n LOG.info((\"Auth Hints Present : %s%s%s%s%s%s\" %\n (tf_str, app_str, approle_str, tfe_str,\n appre_str, appe_str))[:-1])\n LOG.info(\"Vault Server %s\" %\n os.environ['VAULT_ADDR']\n if 'VAULT_ADDR' in os.environ else '??')\n\n parser.print_help()\n sys.exit(0)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef extract_file_args(subparsers):\n extract_parser = subparsers.add_parser('extract_file',\n help='Extract a single secret from'\n 'Vault to a local file')\n extract_parser.add_argument('vault_path',\n help='Full path (including key) to secret')\n extract_parser.add_argument('destination',\n help='Location of destination file')\n base_args(extract_parser)", "response": "Add the command line options for the extract_file operation"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef mapping_args(parser):\n parser.add_argument('--add-prefix',\n dest='add_prefix',\n help='Specify a prefix to use when '\n 'generating secret key names')\n parser.add_argument('--add-suffix',\n dest='add_suffix',\n help='Specify a suffix to use when '\n 'generating secret key names')\n parser.add_argument('--merge-path',\n dest='merge_path',\n action='store_true',\n default=True,\n help='merge vault path and key name')\n 
parser.add_argument('--no-merge-path',\n dest='merge_path',\n action='store_false',\n default=True,\n help='do not merge vault path and key name')\n parser.add_argument('--key-map',\n dest='key_map',\n action='append',\n type=str,\n default=[])", "response": "Add various variable mapping command line options to the parser"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef aws_env_args(subparsers):\n env_parser = subparsers.add_parser('aws_environment')\n env_parser.add_argument('vault_path',\n help='Full path(s) to the AWS secret')\n export_arg(env_parser)\n base_args(env_parser)", "response": "Add command line options for the aws_environment operation"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef environment_args(subparsers):\n env_parser = subparsers.add_parser('environment')\n env_parser.add_argument('vault_paths',\n help='Full path(s) to secret',\n nargs='+')\n env_parser.add_argument('--prefix',\n dest='prefix',\n help='Old style prefix to use when'\n 'generating secret key names')\n export_arg(env_parser)\n mapping_args(env_parser)\n base_args(env_parser)", "response": "Add command line options for the environment operation"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef template_args(subparsers):\n template_parser = subparsers.add_parser('template')\n template_parser.add_argument('template',\n help='Template source',\n nargs='?')\n template_parser.add_argument('destination',\n help='Path to write rendered template',\n nargs='?')\n template_parser.add_argument('vault_paths',\n help='Full path(s) to secret',\n nargs='*')\n template_parser.add_argument('--builtin-list',\n dest='builtin_list',\n help='Display a list of builtin templates',\n action='store_true',\n default=False)\n template_parser.add_argument('--builtin-info',\n dest='builtin_info',\n help='Display information on a 
'\n 'particular builtin template')\n vars_args(template_parser)\n mapping_args(template_parser)\n base_args(template_parser)", "response": "Add command line options for the template operation"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef secretfile_args(parser):\n parser.add_argument('--secrets',\n dest='secrets',\n help='Path where secrets are stored',\n default=os.path.join(os.getcwd(), \".secrets\"))\n parser.add_argument('--policies',\n dest='policies',\n help='Path where policies are stored',\n default=os.path.join(os.getcwd(), \"vault\", \"\"))\n parser.add_argument('--secretfile',\n dest='secretfile',\n help='Secretfile to use',\n default=os.path.join(os.getcwd(), \"Secretfile\"))\n parser.add_argument('--tags',\n dest='tags',\n help='Tags of things to seed',\n default=[],\n type=str,\n action='append')\n parser.add_argument('--include',\n dest='include',\n help='Specify paths to include',\n default=[],\n type=str,\n action='append')\n parser.add_argument('--exclude',\n dest='exclude',\n help='Specify paths to exclude',\n default=[],\n type=str,\n action='append')", "response": "Add Secretfile management command line arguments to parser"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef base_args(parser):\n generic_args(parser)\n parser.add_argument('--monochrome',\n dest='monochrome',\n help='Whether or not to use colors',\n action='store_true')\n parser.add_argument('--metadata',\n dest='metadata',\n help='A series of key=value pairs for token metadata.',\n default='')\n parser.add_argument('--lease',\n dest='lease',\n help='Lease time for intermediary token.',\n default='10s')\n parser.add_argument('--reuse-token',\n dest='reuse_token',\n help='Whether to reuse the existing token. 
Note'\n ' this will cause metadata to not be preserved',\n action='store_true')", "response": "Add the generic command line options"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nadding command line options for the export operation", "response": "def export_args(subparsers):\n \"\"\"Add command line options for the export operation\"\"\"\n export_parser = subparsers.add_parser('export')\n export_parser.add_argument('directory',\n help='Path where secrets will be exported into')\n secretfile_args(export_parser)\n vars_args(export_parser)\n base_args(export_parser)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nadds command line options for the render operation", "response": "def render_args(subparsers):\n \"\"\"Add command line options for the render operation\"\"\"\n render_parser = subparsers.add_parser('render')\n render_parser.add_argument('directory',\n help='Path where Secretfile and accoutrement'\n ' will be rendered into')\n secretfile_args(render_parser)\n vars_args(render_parser)\n base_args(render_parser)\n thaw_from_args(render_parser)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nadd command line options for the diff operation", "response": "def diff_args(subparsers):\n \"\"\"Add command line options for the diff operation\"\"\"\n diff_parser = subparsers.add_parser('diff')\n secretfile_args(diff_parser)\n vars_args(diff_parser)\n base_args(diff_parser)\n thaw_from_args(diff_parser)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nadd command line options for the seed operation", "response": "def seed_args(subparsers):\n \"\"\"Add command line options for the seed operation\"\"\"\n seed_parser = subparsers.add_parser('seed')\n secretfile_args(seed_parser)\n vars_args(seed_parser)\n seed_parser.add_argument('--mount-only',\n dest='mount_only',\n help='Only mount paths if needed',\n default=False,\n action='store_true')\n 
thaw_from_args(seed_parser)\n seed_parser.add_argument('--remove-unknown',\n dest='remove_unknown',\n action='store_true',\n help='Remove mountpoints that are not '\n 'defined in the Secretfile')\n base_args(seed_parser)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef thaw_from_args(parser):\n parser.add_argument('--thaw-from',\n dest='thaw_from',\n help='Thaw an ICE file containing secrets')\n parser.add_argument('--gpg-password-path',\n dest='gpg_pass_path',\n help='Vault path of GPG passphrase location')", "response": "Adds command line options for things related to inline thawing\n of ICE files"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nadding command line options for the thaw operation", "response": "def thaw_args(subparsers):\n \"\"\"Add command line options for the thaw operation\"\"\"\n thaw_parser = subparsers.add_parser('thaw')\n thaw_parser.add_argument('--gpg-password-path',\n dest='gpg_pass_path',\n help='Vault path of GPG passphrase location')\n thaw_parser.add_argument('--ignore-missing',\n dest='ignore_missing',\n help='Warn when secrets are missing from icefiles '\n 'instead of exiting',\n action='store_true',\n default=False)\n secretfile_args(thaw_parser)\n archive_args(thaw_parser)\n vars_args(thaw_parser)\n base_args(thaw_parser)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef freeze_args(subparsers):\n freeze_parser = subparsers.add_parser('freeze')\n freeze_parser.add_argument('--icefile-prefix',\n dest='icefile_prefix',\n help='Prefix of icefilename')\n secretfile_args(freeze_parser)\n archive_args(freeze_parser)\n vars_args(freeze_parser)\n base_args(freeze_parser)", "response": "Add command line options for the freeze operation"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef password_args(subparsers):\n 
password_parser = subparsers.add_parser('set_password')\n password_parser.add_argument('vault_path',\n help='Path which contains password '\n 'secret to be updated')\n base_args(password_parser)", "response": "Add command line options for the set_password operation"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nadding various command line options for external vars", "response": "def vars_args(parser):\n \"\"\"Add various command line options for external vars\"\"\"\n parser.add_argument('--extra-vars',\n dest='extra_vars',\n help='Extra template variables',\n default=[],\n type=str,\n action='append')\n parser.add_argument('--extra-vars-file',\n dest='extra_vars_file',\n help='YAML files full of variables',\n default=[],\n type=str,\n action='append')"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef parser_factory(fake_args=None):\n parser = ArgumentParser(description='aomi')\n subparsers = parser.add_subparsers(dest='operation',\n help='Specify the data '\n 'or extraction operation')\n extract_file_args(subparsers)\n environment_args(subparsers)\n aws_env_args(subparsers)\n seed_args(subparsers)\n render_args(subparsers)\n diff_args(subparsers)\n freeze_args(subparsers)\n thaw_args(subparsers)\n template_args(subparsers)\n password_args(subparsers)\n token_args(subparsers)\n help_args(subparsers)\n export_args(subparsers)\n\n if fake_args is None:\n return parser, parser.parse_args()\n\n return parser, parser.parse_args(fake_args)", "response": "Return an ArgumentParser along with the parsed arguments"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nexecute template related operations", "response": "def template_runner(client, parser, args):\n \"\"\"Executes template related operations\"\"\"\n if args.builtin_list:\n aomi.template.builtin_list()\n elif args.builtin_info:\n aomi.template.builtin_info(args.builtin_info)\n elif args.template 
and args.destination and args.vault_paths:\n aomi.render.template(client, args.template,\n args.destination,\n args.vault_paths,\n args)\n else:\n parser.print_usage()\n sys.exit(2)\n\n sys.exit(0)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef ux_actions(parser, args):\n # cryptorito uses native logging (as aomi should tbh)\n normal_fmt = '%(message)s'\n if hasattr(args, 'verbose') and args.verbose and args.verbose >= 2:\n logging.basicConfig(level=logging.DEBUG)\n elif hasattr(args, 'verbose') and args.verbose >= 1:\n logging.basicConfig(level=logging.INFO, format=normal_fmt)\n else:\n logging.basicConfig(level=logging.WARN, format=normal_fmt)\n\n if args.operation == 'help':\n help_me(parser, args)", "response": "Handle some human-triggered actions"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nexecutes the thaw operation pulling in an actual Vault client if necessary", "response": "def do_thaw(client, args):\n \"\"\"Execute the thaw operation, pulling in an actual Vault\n client if necessary\"\"\"\n vault_client = None\n if args.gpg_pass_path:\n vault_client = client.connect(args)\n\n aomi.filez.thaw(vault_client, args.icefile, args)\n sys.exit(0)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef action_runner(parser, args):\n\n ux_actions(parser, args)\n client = aomi.vault.Client(args)\n\n if args.operation == 'extract_file':\n aomi.render.raw_file(client.connect(args),\n args.vault_path, args.destination, args)\n sys.exit(0)\n elif args.operation == 'environment':\n aomi.render.env(client.connect(args),\n args.vault_paths, args)\n sys.exit(0)\n elif args.operation == 'aws_environment':\n aomi.render.aws(client.connect(args),\n args.vault_path, args)\n sys.exit(0)\n elif args.operation == 'seed':\n aomi.validation.gitignore(args)\n aomi.seed_action.seed(client.connect(args), args)\n 
sys.exit(0)\n elif args.operation == 'render':\n aomi.seed_action.render(args.directory, args)\n sys.exit(0)\n elif args.operation == 'export':\n aomi.seed_action.export(client.connect(args), args)\n sys.exit(0)\n elif args.operation == 'diff':\n aomi.seed_action.diff(client.connect(args), args)\n sys.exit(0)\n elif args.operation == 'template':\n template_runner(client.connect(args), parser, args)\n elif args.operation == 'token':\n print(client.connect(args).token)\n sys.exit(0)\n elif args.operation == 'set_password':\n aomi.util.password(client.connect(args), args.vault_path)\n sys.exit(0)\n elif args.operation == 'freeze':\n aomi.filez.freeze(args.icefile, args)\n sys.exit(0)\n elif args.operation == 'thaw':\n do_thaw(client, args)\n\n parser.print_usage()\n sys.exit(2)", "response": "Run appropriate action or throw help"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nparsing the TTL information from a secret dictionary", "response": "def grok_ttl(secret):\n \"\"\"Parses the TTL information\"\"\"\n ttl_obj = {}\n lease_msg = ''\n if 'lease' in secret:\n ttl_obj['lease'] = secret['lease']\n lease_msg = \"lease:%s\" % (ttl_obj['lease'])\n\n if 'lease_max' in secret:\n ttl_obj['lease_max'] = secret['lease_max']\n elif 'lease' in ttl_obj:\n ttl_obj['lease_max'] = ttl_obj['lease']\n\n if 'lease_max' in ttl_obj:\n lease_msg = \"%s lease_max:%s\" % (lease_msg, ttl_obj['lease_max'])\n\n return ttl_obj, lease_msg"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef my_version():\n if os.path.exists(resource_filename(__name__, 'version')):\n return resource_string(__name__, 'version')\n\n return open(os.path.join(os.path.dirname(__file__),\n \"..\", \"version\")).read()", "response": "Return the version of the current package"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn what is hopefully a OS independent path.", "response": "def 
abspath(raw):\n \"\"\"Return what is hopefully a OS independent path.\"\"\"\n path_bits = []\n if raw.find('/') != -1:\n path_bits = raw.split('/')\n elif raw.find('\\\\') != -1:\n path_bits = raw.split('\\\\')\n else:\n path_bits = [raw]\n\n return os.path.abspath(os.sep.join(path_bits))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef hard_path(path, prefix_dir):\n relative = abspath(\"%s/%s\" % (prefix_dir, path))\n a_path = abspath(path)\n if os.path.exists(relative):\n LOG.debug(\"using relative path %s (%s)\", relative, path)\n return relative\n\n LOG.debug(\"using absolute path %s\", a_path)\n return a_path", "response": "Returns an absolute path to either the relative or absolute file."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef is_tagged(required_tags, has_tags):\n if not required_tags and not has_tags:\n return True\n elif not required_tags:\n return False\n\n found_tags = []\n for tag in required_tags:\n if tag in has_tags:\n found_tags.append(tag)\n\n return len(found_tags) == len(required_tags)", "response": "Checks if the list of tags is tagged with the list of tags."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nparse out a hash from a list of key = value strings", "response": "def cli_hash(list_of_kv):\n \"\"\"Parse out a hash from a list of key=value strings\"\"\"\n ev_obj = {}\n for extra_var in list_of_kv:\n ev_list = extra_var.split('=')\n key = ev_list[0]\n val = '='.join(ev_list[1:]) # b64 and other side effects\n ev_obj[key] = val\n\n return ev_obj"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_tty_password(confirm):\n LOG.debug(\"Reading password from TTY\")\n new_password = getpass('Enter Password: ', stream=sys.stderr)\n if not new_password:\n raise aomi.exceptions.AomiCommand(\"Must specify a 
password\")\n\n if not confirm:\n return new_password\n\n confirm_password = getpass('Again, Please: ', stream=sys.stderr)\n if confirm_password != new_password:\n raise aomi.exceptions.AomiCommand(\"Passwords do not match\")\n\n return new_password", "response": "When returning a password from a TTY we assume a user is entering it on a keyboard so we ask for confirmation."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef path_pieces(vault_path):\n path_bits = vault_path.split('/')\n path = '/'.join(path_bits[0:len(path_bits) - 1])\n key = path_bits[len(path_bits) - 1]\n return path, key", "response": "Will return a two-part tuple comprising the vault path\n and the key within the stored object"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn the mountpoint for this path", "response": "def mount_for_path(path, client):\n \"\"\"Returns the mountpoint for this path\"\"\"\n backend_data = client.list_secret_backends()['data']\n backends = [mnt for mnt in backend_data.keys()]\n path_bits = path.split('/')\n if len(path_bits) == 1:\n vault_path = \"%s/\" % path\n if vault_path in backends:\n return vault_path[0:len(vault_path) - 1]\n else:\n for i in range(1, len(path_bits) + 1):\n vault_path = \"%s/\" % '/'.join(path_bits[0:i])\n if vault_path in backends:\n return vault_path[0:len(vault_path) - 1]\n\n return None"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef backend_type(path, client):\n backends = client.list_secret_backends()['data']\n vault_path = \"%s/\" % path\n return backends[vault_path]['type']", "response": "Returns the type of backend at the given mountpoint"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef load_word_file(filename):\n words_file = resource_filename(__name__, \"words/%s\" % filename)\n handle = 
open(words_file, 'r')\n words = handle.readlines()\n handle.close()\n return words", "response": "Loads a words file as a list of lines"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef choose_one(things):\n choice = SystemRandom().randint(0, len(things) - 1)\n return things[choice].strip()", "response": "Returns a random entry from a list of things"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns a file path relative to another path.", "response": "def subdir_path(directory, relative):\n \"\"\"Returns a file path relative to another path.\"\"\"\n item_bits = directory.split(os.sep)\n relative_bits = relative.split(os.sep)\n for i, _item in enumerate(item_bits):\n if i == len(relative_bits) - 1:\n return os.sep.join(item_bits[i:])\n else:\n if item_bits[i] != relative_bits[i]:\n return None\n\n return None"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef open_maybe_binary(filename):\n if sys.version_info >= (3, 0):\n data = open(filename, 'rb').read()\n try:\n return data.decode('utf-8')\n except UnicodeDecodeError:\n return data\n\n return open(filename, 'r').read()", "response": "Opens something that might be binary but also might be plain text."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nensure a directory exists", "response": "def ensure_dir(path):\n \"\"\"Ensures a directory exists\"\"\"\n if not (os.path.exists(path) and\n os.path.isdir(path)):\n os.mkdir(path)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef clean_tmpdir(path):\n if os.path.exists(path) and \\\n os.path.isdir(path):\n rmtree(path)", "response": "Called atexit this removes our tmpdir"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef dict_unicodeize(some_dict):\n\n # some python 2/3 
compat\n if isinstance(some_dict, (\"\".__class__, u\"\".__class__)):\n if sys.version_info >= (3, 0):\n return some_dict\n\n return some_dict.decode('utf-8')\n elif isinstance(some_dict, collections.Mapping):\n return dict(map(dict_unicodeize, iteritems(some_dict)))\n elif isinstance(some_dict, collections.Iterable):\n return type(some_dict)(map(dict_unicodeize, some_dict))\n\n return some_dict", "response": "Ensure that every string in a dict is properly represented by unicode strings"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef diff_dict(dict1, dict2, ignore_missing=False):\n unidict1 = dict_unicodeize(dict1)\n unidict2 = dict_unicodeize(dict2)\n if ((not ignore_missing) and (len(unidict1) != len(unidict2))) or \\\n (ignore_missing and (len(unidict1) >= len(unidict2))):\n return True\n\n for comp_k, comp_v in iteritems(unidict1):\n if comp_k not in unidict2:\n return True\n else:\n if comp_v != unidict2[comp_k]:\n return True\n\n return False", "response": "Performs a base type comparison between two dicts"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nfilter a context by only the resources that are actually available for use.", "response": "def filtered_context(context):\n \"\"\"Filters a context\n This will return a new context with only the resources that\n are actually available for use. 
Uses tags and command line\n options to make determination.\"\"\"\n\n ctx = Context(context.opt)\n for resource in context.resources():\n if resource.child:\n continue\n\n if resource.filtered():\n ctx.add(resource)\n\n return ctx"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef ensure_backend(resource, backend, backends, opt, managed=True):\n existing_mount = find_backend(resource.mount, backends)\n if not existing_mount:\n new_mount = backend(resource, opt, managed=managed)\n backends.append(new_mount)\n return new_mount\n\n return existing_mount", "response": "Ensure the backend for a resource is properly in context"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef find_model(config, obj, mods):\n for mod in mods:\n if mod[0] != config:\n continue\n\n if len(mod) == 2:\n return mod[1]\n\n if len(mod) == 3 and mod[1] in obj:\n return mod[2]\n\n return None", "response": "Given a list of mods attempts to find a model that fits one of the models"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef py_resources():\n aomi_mods = [m for\n m, _v in iteritems(sys.modules)\n if m.startswith('aomi.model')]\n mod_list = []\n mod_map = []\n for amod in [sys.modules[m] for m in aomi_mods]:\n for _mod_bit, model in inspect.getmembers(amod):\n if str(model) in mod_list:\n continue\n\n if model == Mount:\n mod_list.append(str(model))\n mod_map.append((model.config_key, model))\n elif (inspect.isclass(model) and\n issubclass(model, Resource) and\n model.config_key):\n mod_list.append(str(model))\n if model.resource_key:\n mod_map.append((model.config_key,\n model.resource_key,\n model))\n elif model.config_key != 'secrets':\n mod_map.append((model.config_key, model))\n\n return mod_map", "response": "Discovers all aomi Vault resource models. 
This includes all aomi Vault resource models extending aomi. model. Mount or aomi. model. Resource."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef load(config, opt):\n ctx = Context(opt)\n seed_map = py_resources()\n seed_keys = sorted(set([m[0] for m in seed_map]), key=resource_sort)\n for config_key in seed_keys:\n if config_key not in config:\n continue\n for resource_config in config[config_key]:\n mod = find_model(config_key, resource_config, seed_map)\n if not mod:\n LOG.warning(\"unable to find mod for %s\", resource_config)\n continue\n\n ctx.add(mod(resource_config, opt))\n\n for config_key in config.keys():\n if config_key != 'pgp_keys' and \\\n config_key not in seed_keys:\n LOG.warning(\"missing model for %s\", config_key)\n\n return filtered_context(ctx)", "response": "Loads and returns a full context object based on the Secretfile"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef thaw(self, tmp_dir):\n for resource in self.resources():\n if resource.present:\n resource.thaw(tmp_dir)", "response": "Will thaw every secret into an appropriate temporary location"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nfreezes every resource within a context", "response": "def freeze(self, dest_dir):\n \"\"\"Freezes every resource within a context\"\"\"\n for resource in self.resources():\n if resource.present:\n resource.freeze(dest_dir)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns a list of vault resources within context", "response": "def resources(self):\n \"\"\"Vault resources within context\"\"\"\n res = []\n for resource in self._resources:\n res = res + resource.resources()\n\n return res"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nadd a resource to the context", "response": "def add(self, resource):\n 
\"\"\"Add a resource to the context\"\"\"\n if isinstance(resource, Resource):\n if isinstance(resource, Secret) and \\\n resource.mount != 'cubbyhole':\n ensure_backend(resource,\n SecretBackend,\n self._mounts,\n self.opt,\n False)\n elif isinstance(resource, Mount):\n ensure_backend(resource, SecretBackend, self._mounts, self.opt)\n elif isinstance(resource, Auth):\n ensure_backend(resource, AuthBackend, self._auths, self.opt)\n elif isinstance(resource, AuditLog):\n ensure_backend(resource, LogBackend, self._logs, self.opt)\n\n self._resources.append(resource)\n else:\n msg = \"Unknown resource %s being \" \\\n \"added to context\" % resource.__class__\n raise aomi_excep.AomiError(msg)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef remove(self, resource):\n if isinstance(resource, Resource):\n self._resources.remove(resource)", "response": "Removes a resource from the context"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nsynchronizing auth mount wrappers.", "response": "def sync_auth(self, vault_client, resources):\n \"\"\"Synchronizes auth mount wrappers. These happen\n early in the cycle, to ensure that user backends\n are proper. 
They may also be used to set mount\n tuning\"\"\"\n for auth in self.auths():\n auth.sync(vault_client)\n\n auth_resources = [x for x in resources\n if isinstance(x, (LDAP, UserPass))]\n for resource in auth_resources:\n resource.sync(vault_client)\n\n return [x for x in resources\n if not isinstance(x, (LDAP, UserPass, AuditLog))]"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef actually_mount(self, vault_client, resource, active_mounts):\n a_mounts = list(active_mounts)\n if isinstance(resource, Secret) and resource.mount == 'cubbyhole':\n return a_mounts\n\n active_mount = find_backend(resource.mount, active_mounts)\n if not active_mount:\n actual_mount = find_backend(resource.mount, self._mounts)\n a_mounts.append(actual_mount)\n actual_mount.sync(vault_client)\n\n return a_mounts", "response": "Handle the actual mounting of a secret backend."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef sync_mounts(self, active_mounts, resources, vault_client):\n # Create a resource set that is only explicit mounts\n # and sort so removals are first\n mounts = [x for x in resources\n if isinstance(x, (Mount, AWS))]\n\n s_resources = sorted(mounts, key=absent_sort)\n # Iterate over explicit mounts only\n for resource in s_resources:\n active_mounts = self.actually_mount(vault_client,\n resource,\n active_mounts)\n\n # OK Now iterate over everything but make sure it is clear\n # that ad-hoc mountpoints are deprecated as per\n # https://github.com/Autodesk/aomi/issues/110\n for resource in [x for x in resources\n if isinstance(x, Secret)]:\n n_mounts = self.actually_mount(vault_client,\n resource,\n active_mounts)\n if len(n_mounts) != len(active_mounts):\n LOG.warning(\"Ad-Hoc mount with %s. 
Please specify\"\n \" explicit mountpoints.\", resource)\n\n active_mounts = n_mounts\n\n return active_mounts, [x for x in resources\n if not isinstance(x, (Mount))]", "response": "Synchronizes the active mount points. Removes things before adding new."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef sync(self, vault_client, opt):\n active_mounts = []\n for audit_log in self.logs():\n audit_log.sync(vault_client)\n\n # Handle policies only on the first pass. This allows us\n # to ensure that ACL's are in place prior to actually\n # making any changes.\n not_policies = self.sync_policies(vault_client)\n # Handle auth wrapper resources on the next path. The resources\n # may update a path on their own. They may also provide mount\n # tuning information.\n not_auth = self.sync_auth(vault_client, not_policies)\n # Handle mounts only on the next pass. This allows us to\n # ensure that everything is in order prior to actually\n # provisioning secrets. Note we handle removals before\n # anything else, allowing us to address mount conflicts.\n active_mounts, not_mounts = self.sync_mounts(active_mounts,\n not_auth,\n vault_client)\n # Now handle everything else. 
If \"best practices\" are being\n # adhered to then every generic mountpoint should exist by now.\n # We handle \"child\" resources after the first batch\n sorted_resources = sorted(not_mounts, key=childless_first)\n for resource in sorted_resources:\n resource.sync(vault_client)\n\n for mount in self.mounts():\n if not find_backend(mount.path, active_mounts):\n mount.unmount(vault_client)\n\n if opt.remove_unknown:\n self.prune(vault_client)", "response": "Synchronizes the context to the Vault server."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nupdate the context based on the contents of the Vault server.", "response": "def fetch(self, vault_client):\n \"\"\"Updates the context based on the contents of the Vault\n server. Note that some resources can not be read after\n they have been written to and it is up to those classes\n to handle that case properly.\"\"\"\n backends = [(self.mounts, SecretBackend),\n (self.auths, AuthBackend),\n (self.logs, LogBackend)]\n for b_list, b_class in backends:\n backend_list = b_list()\n if backend_list:\n existing = getattr(vault_client, b_class.list_fun)()\n for backend in backend_list:\n backend.fetch(vault_client, existing)\n\n for rsc in self.resources():\n if issubclass(type(rsc), Secret):\n nc_exists = (rsc.mount != 'cubbyhole' and\n find_backend(rsc.mount, self._mounts).existing)\n if nc_exists or rsc.mount == 'cubbyhole':\n rsc.fetch(vault_client)\n elif issubclass(type(rsc), Auth):\n if find_backend(rsc.mount, self._auths).existing:\n rsc.fetch(vault_client)\n elif issubclass(type(rsc), Mount):\n rsc.existing = find_backend(rsc.mount,\n self._mounts).existing\n else:\n rsc.fetch(vault_client)\n\n return self"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef secret_key_name(path, key, opt):\n value = key\n if opt.merge_path:\n norm_path = [x for x in path.split('/') if x]\n value = \"%s_%s\" % ('_'.join(norm_path), key)\n\n if 
opt.add_prefix:\n value = \"%s%s\" % (opt.add_prefix, value)\n\n if opt.add_suffix:\n value = \"%s%s\" % (value, opt.add_suffix)\n\n return value", "response": "Renders a Secret key name appropriately"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ndetermines the real deal template file", "response": "def grok_template_file(src):\n \"\"\"Determine the real deal template file\"\"\"\n if not src.startswith('builtin:'):\n return abspath(src)\n\n builtin = src.split(':')[1]\n builtin = \"templates/%s.j2\" % builtin\n return resource_filename(__name__, builtin)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef blend_vars(secrets, opt):\n base_obj = load_vars(opt)\n merged = merge_dicts(base_obj, secrets)\n template_obj = dict((k, v) for k, v in iteritems(merged) if v)\n # give templates something to iterate over\n template_obj['aomi_items'] = template_obj.copy()\n return template_obj", "response": "Blends secret and static variables together"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef template(client, src, dest, paths, opt):\n key_map = cli_hash(opt.key_map)\n obj = {}\n for path in paths:\n response = client.read(path)\n if not response:\n raise aomi.exceptions.VaultData(\"Unable to retrieve %s\" % path)\n if is_aws(response['data']) and 'sts' not in path:\n renew_secret(client, response, opt)\n\n for s_k, s_v in response['data'].items():\n o_key = s_k\n if s_k in key_map:\n o_key = key_map[s_k]\n\n k_name = secret_key_name(path, o_key, opt) \\\n .lower() \\\n .replace('-', '_')\n obj[k_name] = s_v\n\n template_obj = blend_vars(obj, opt)\n output = render(grok_template_file(src),\n template_obj)\n write_raw_file(output, abspath(dest))", "response": "Writes a template using variables from a vault path"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef 
write_raw_file(secret, dest):\n secret_file = None\n secret_filename = abspath(dest)\n if sys.version_info >= (3, 0):\n if not isinstance(secret, str):\n secret_file = open(secret_filename, 'wb')\n\n if not secret_file:\n secret_file = open(secret_filename, 'w')\n\n secret_file.write(secret)\n secret_file.close()\n os.chmod(secret_filename, 0o600)", "response": "Writes an actual secret out to a file"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nwrite the contents of a vault path to a file.", "response": "def raw_file(client, src, dest, opt):\n \"\"\"Write the contents of a vault path/key to a file. Is\n smart enough to attempt and handle binary files that are\n base64 encoded.\"\"\"\n path, key = path_pieces(src)\n resp = client.read(path)\n if not resp:\n client.revoke_self_token()\n raise aomi.exceptions.VaultData(\"Unable to retrieve %s\" % path)\n else:\n if 'data' in resp and key in resp['data']:\n secret = resp['data'][key]\n if is_base64(secret):\n LOG.debug('decoding base64 entry')\n secret = portable_b64decode(secret)\n\n if is_aws(resp['data']) and 'sts' not in path:\n renew_secret(client, resp, opt)\n\n write_raw_file(secret, dest)\n else:\n client.revoke_self_token()\n e_msg = \"Key %s not found in %s\" % (key, path)\n raise aomi.exceptions.VaultData(e_msg)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nrenders a shell snippet based on paths in a Secretfile", "response": "def env(client, paths, opt):\n \"\"\"Renders a shell snippet based on paths in a Secretfile\"\"\"\n old_prefix = False\n old_prefix = opt.prefix and not (opt.add_prefix or\n opt.add_suffix or\n not opt.merge_path)\n if old_prefix:\n LOG.warning(\"the prefix option is deprecated \"\n \"please use\"\n \"--no-merge-path --add-prefix $OLDPREFIX_ instead\")\n elif opt.prefix:\n LOG.warning(\"the prefix option is deprecated\"\n \"please use\"\n \"--no-merge-path --add-prefix $OLDPREFIX_ instead\")\n key_map = 
cli_hash(opt.key_map)\n for path in paths:\n secrets = client.read(path)\n if secrets and 'data' in secrets:\n if is_aws(secrets['data']) and 'sts' not in path:\n renew_secret(client, secrets, opt)\n\n for s_key, s_val in secrets['data'].items():\n o_key = s_key\n if s_key in key_map:\n o_key = key_map[s_key]\n\n # see https://github.com/Autodesk/aomi/issues/40\n env_name = None\n if old_prefix:\n env_name = (\"%s_%s\" % (opt.prefix, o_key)).upper()\n else:\n env_name = secret_key_name(path, o_key, opt).upper()\n\n print(\"%s=\\\"%s\\\"\" % (env_name, s_val))\n if opt.export:\n print(\"export %s\" % env_name)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef aws(client, path, opt):\n\n try:\n creds = client.read(path)\n except (hvac.exceptions.InternalServerError) as vault_exception:\n # this is how old vault behaves\n if vault_exception.errors[0].find('unsupported path') > 0:\n emsg = \"Invalid AWS path. Did you forget the\" \\\n \" credential type and role?\"\n raise aomi.exceptions.AomiFile(emsg)\n else:\n raise\n\n # this is how new vault behaves\n if not creds:\n emsg = \"Invalid AWS path. 
Did you forget the\" \\\n \" credential type and role?\"\n raise aomi.exceptions.AomiFile(emsg)\n\n renew_secret(client, creds, opt)\n\n if creds and 'data' in creds:\n print(\"AWS_ACCESS_KEY_ID=\\\"%s\\\"\" % creds['data']['access_key'])\n print(\"AWS_SECRET_ACCESS_KEY=\\\"%s\\\"\" % creds['data']['secret_key'])\n if 'security_token' in creds['data'] \\\n and creds['data']['security_token']:\n token = creds['data']['security_token']\n print(\"AWS_SECURITY_TOKEN=\\\"%s\\\"\" % token)\n else:\n client.revoke_self_token()\n e_msg = \"Unable to generate AWS credentials from %s\" % path\n raise aomi.exceptions.VaultData(e_msg)\n\n if opt.export:\n print(\"export AWS_ACCESS_KEY_ID\")\n print(\"export AWS_SECRET_ACCESS_KEY\")\n if 'security_token' in creds['data'] \\\n and creds['data']['security_token']:\n print(\"export AWS_SECURITY_TOKEN\")", "response": "Renders a shell environment snippet with AWS information"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef generated_key(key):\n key_name = key['name']\n if key['method'] == 'uuid':\n LOG.debug(\"Setting %s to a uuid\", key_name)\n return str(uuid4())\n elif key['method'] == 'words':\n LOG.debug(\"Setting %s to random words\", key_name)\n return random_word()\n elif key['method'] == 'static':\n if 'value' not in key.keys():\n raise aomi.exceptions.AomiData(\"Missing static value\")\n\n LOG.debug(\"Setting %s to a static value\", key_name)\n return key['value']\n else:\n raise aomi.exceptions.AomiData(\"Unexpected generated secret method %s\"\n % key['method'])", "response": "Create the proper generated key value"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef generate_obj(self):\n secret_obj = {}\n if self.existing:\n secret_obj = deepcopy(self.existing)\n\n for key in self.keys:\n key_name = key['name']\n if self.existing and \\\n key_name in self.existing and \\\n not 
key.get('overwrite'):\n LOG.debug(\"Not overwriting %s/%s\", self.path, key_name)\n continue\n else:\n secret_obj[key_name] = generated_key(key)\n\n return secret_obj", "response": "Generates the secret object according to existing information\n and user specified options"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef unhandled(exception, opt):\n exmod = type(exception).__module__\n name = \"%s.%s\" % (exmod, type(exception).__name__)\n # this is a Vault error\n if exmod == 'aomi.exceptions' or exmod == 'cryptorito':\n # This may be set for Validation or similar errors\n if hasattr(exception, 'source'):\n output(exception.message, opt, extra=exception.source)\n else:\n output(exception.message, opt)\n\n else:\n output(\"Unexpected error: %s\" % name, opt)\n\n sys.exit(1)", "response": "Handle uncaught errors and be polite about it"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nensure that we are returning just seconds", "response": "def grok_seconds(lease):\n \"\"\"Ensures that we are returning just seconds\"\"\"\n if lease.endswith('s'):\n return int(lease[0:-1])\n elif lease.endswith('m'):\n return int(lease[0:-1]) * 60\n elif lease.endswith('h'):\n return int(lease[0:-1]) * 3600\n\n return None"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef renew_secret(client, creds, opt):\n if opt.reuse_token:\n return\n\n seconds = grok_seconds(opt.lease)\n if not seconds:\n raise aomi.exceptions.AomiCommand(\"invalid lease %s\" % opt.lease)\n\n renew = None\n if client.version:\n v_bits = client.version.split('.')\n if int(v_bits[0]) == 0 and \\\n int(v_bits[1]) <= 8 and \\\n int(v_bits[2]) <= 0:\n r_obj = {\n 'increment': seconds\n }\n r_path = \"v1/sys/renew/{0}\".format(creds['lease_id'])\n # Pending discussion on https://github.com/ianunruh/hvac/issues/148\n # pylint: disable=protected-access\n renew = 
client._post(r_path, json=r_obj).json()\n\n if not renew:\n renew = client.renew_secret(creds['lease_id'], seconds)\n\n # sometimes it takes a bit for vault to respond\n # if we are within 5s then we are fine\n if not renew or (seconds - renew['lease_duration'] >= 5):\n client.revoke_self_token()\n e_msg = 'Unable to renew with desired lease'\n raise aomi.exceptions.VaultConstraint(e_msg)", "response": "Renews a secret. This will occur unless the user has\n specified on the command line that it is not necessary"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn a vault token based on the role and secret id", "response": "def approle_token(vault_client, role_id, secret_id):\n \"\"\"Returns a vault token based on the role and secret id\"\"\"\n resp = vault_client.auth_approle(role_id, secret_id)\n if 'auth' in resp and 'client_token' in resp['auth']:\n return resp['auth']['client_token']\n else:\n raise aomi.exceptions.AomiCredentials('invalid approle')"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn a vault token based on the app and user id.", "response": "def app_token(vault_client, app_id, user_id):\n \"\"\"Returns a vault token based on the app and user id.\"\"\"\n resp = vault_client.auth_app_id(app_id, user_id)\n if 'auth' in resp and 'client_token' in resp['auth']:\n return resp['auth']['client_token']\n else:\n raise aomi.exceptions.AomiCredentials('invalid apptoken')"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngenerating metadata for a token", "response": "def token_meta(opt):\n \"\"\"Generates metadata for a token\"\"\"\n meta = {\n 'via': 'aomi',\n 'operation': opt.operation,\n 'hostname': socket.gethostname()\n }\n if 'USER' in os.environ:\n meta['unix_user'] = os.environ['USER']\n\n if opt.metadata:\n meta_bits = opt.metadata.split(',')\n for meta_bit in meta_bits:\n key, value = meta_bit.split('=')\n\n if key not in meta:\n meta[key] = value\n\n 
for key, value in meta.items():\n LOG.debug(\"Token metadata %s %s\", key, value)\n\n return meta"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns mountpoint details for a backend", "response": "def get_backend(backend, path, backends):\n \"\"\"Returns mountpoint details for a backend\"\"\"\n m_norm = normalize_vault_path(path)\n for mount_name, values in backends.items():\n b_norm = normalize_vault_path(mount_name)\n if (m_norm == b_norm) and values['type'] == backend:\n return values\n\n return None"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef wrap_hvac(msg):\n # pylint: disable=missing-docstring\n def wrap_call(func):\n # pylint: disable=missing-docstring\n def func_wrapper(self, vault_client):\n try:\n return func(self, vault_client)\n except (hvac.exceptions.InvalidRequest,\n hvac.exceptions.Forbidden) as vault_exception:\n if vault_exception.errors[0] == 'permission denied':\n emsg = \"Permission denied %s from %s\" % (msg, self.path)\n raise aomi.exceptions.AomiCredentials(emsg)\n else:\n raise\n\n return func_wrapper\n return wrap_call", "response": "A decorator that wraps API interactions with Vault API."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nattempting to determine the version of Vault that a server is running. Some actions will change on older Vault deployments.", "response": "def server_version(self):\n \"\"\"Attempts to determine the version of Vault that a\n server is running. 
Some actions will change on older\n Vault deployments.\"\"\"\n health_url = \"%s/v1/sys/health\" % self.vault_addr\n resp = self.session.request('get', health_url, **self._kwargs)\n if resp.status_code == 200 or resp.status_code == 429:\n blob = resp.json()\n if 'version' in blob:\n return blob['version']\n else:\n raise aomi.exceptions.VaultProblem('Health check failed')\n\n return None"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef connect(self, opt):\n if not self._kwargs['verify']:\n LOG.warning('Skipping SSL Validation!')\n\n self.version = self.server_version()\n self.token = self.init_token()\n my_token = self.lookup_token()\n if not my_token or 'data' not in my_token:\n raise aomi.exceptions.AomiCredentials('initial token')\n\n display_name = my_token['data']['display_name']\n vsn_string = \"\"\n if self.version:\n vsn_string = \", v%s\" % self.version\n else:\n LOG.warning(\"Unable to determine Vault version. Not all \"\n \"functionality is supported\")\n\n LOG.info(\"Connected to %s as %s%s\",\n self._url,\n display_name,\n vsn_string)\n\n if opt.reuse_token:\n LOG.debug(\"Not creating operational token\")\n self.initial_token = self.token\n self.operational_token = self.token\n else:\n self.initial_token = self.token\n self.operational_token = self.op_token(display_name, opt)\n if not self.is_authenticated():\n raise aomi.exceptions.AomiCredentials('operational token')\n\n self.token = self.operational_token\n\n return self", "response": "This sets up the tokens we expect to see in a way\n that hvac also expects."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngenerating our first token based on workstation configuration", "response": "def init_token(self):\n \"\"\"Generate our first token based on workstation configuration\"\"\"\n\n app_filename = appid_file()\n token_filename = token_file()\n approle_filename = approle_file()\n token = None\n if 
'VAULT_ROLE_ID' in os.environ and \\\n 'VAULT_SECRET_ID' in os.environ and \\\n os.environ['VAULT_ROLE_ID'] and os.environ['VAULT_SECRET_ID']:\n token = approle_token(self,\n os.environ['VAULT_ROLE_ID'],\n os.environ['VAULT_SECRET_ID'])\n LOG.debug(\"Token derived from VAULT_ROLE_ID and VAULT_SECRET_ID\")\n elif 'VAULT_TOKEN' in os.environ and os.environ['VAULT_TOKEN']:\n LOG.debug('Token derived from VAULT_TOKEN environment variable')\n token = os.environ['VAULT_TOKEN'].strip()\n elif 'VAULT_USER_ID' in os.environ and \\\n 'VAULT_APP_ID' in os.environ and \\\n os.environ['VAULT_USER_ID'] and os.environ['VAULT_APP_ID']:\n LOG.debug(\"Token derived from VAULT_APP_ID and VAULT_USER_ID\")\n token = app_token(self,\n os.environ['VAULT_APP_ID'].strip(),\n os.environ['VAULT_USER_ID'].strip())\n elif approle_filename:\n creds = yaml.safe_load(open(approle_filename).read().strip())\n if 'role_id' in creds and 'secret_id' in creds:\n LOG.debug(\"Token derived from approle file\")\n token = approle_token(self,\n creds['role_id'],\n creds['secret_id'])\n elif token_filename:\n LOG.debug(\"Token derived from %s\", token_filename)\n try:\n token = open(token_filename, 'r').read().strip()\n except IOError as os_exception:\n if os_exception.errno == 21:\n raise aomi.exceptions.AomiFile('Bad Vault token file')\n\n raise\n elif app_filename:\n token = yaml.safe_load(open(app_filename).read().strip())\n if 'app_id' in token and 'user_id' in token:\n LOG.debug(\"Token derived from %s\", app_filename)\n token = app_token(self,\n token['app_id'],\n token['user_id'])\n else:\n raise aomi.exceptions.AomiCredentials('unknown method')\n\n return token"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef op_token(self, display_name, opt):\n args = {\n 'lease': opt.lease,\n 'display_name': display_name,\n 'meta': token_meta(opt)\n }\n try:\n token = self.create_token(**args)\n except (hvac.exceptions.InvalidRequest,\n 
hvac.exceptions.Forbidden) as vault_exception:\n if vault_exception.errors[0] == 'permission denied':\n emsg = \"Permission denied creating operational token\"\n raise aomi.exceptions.AomiCredentials(emsg)\n else:\n raise\n\n LOG.debug(\"Created operational token with lease of %s\", opt.lease)\n return token['auth']['client_token']", "response": "Return a properly annotated token for our use."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nwrapping the hvac read call using the right token for cubbyhole interactions.", "response": "def read(self, path, wrap_ttl=None):\n \"\"\"Wrap the hvac read call, using the right token for\n cubbyhole interactions.\"\"\"\n path = sanitize_mount(path)\n if path.startswith('cubbyhole'):\n self.token = self.initial_token\n val = super(Client, self).read(path, wrap_ttl)\n self.token = self.operational_token\n return val\n\n return super(Client, self).read(path, wrap_ttl)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nwraps the hvac write call using the right token for cubbyhole interactions.", "response": "def write(self, path, wrap_ttl=None, **kwargs):\n \"\"\"Wrap the hvac write call, using the right token for\n cubbyhole interactions.\"\"\"\n path = sanitize_mount(path)\n val = None\n if path.startswith('cubbyhole'):\n self.token = self.initial_token\n val = super(Client, self).write(path, wrap_ttl=wrap_ttl, **kwargs)\n self.token = self.operational_token\n else:\n super(Client, self).write(path, wrap_ttl=wrap_ttl, **kwargs)\n\n return val"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nwraps the hvac delete call using the right token for cubbyhole interactions.", "response": "def delete(self, path):\n \"\"\"Wrap the hvac delete call, using the right token for\n cubbyhole interactions.\"\"\"\n path = sanitize_mount(path)\n val = None\n if path.startswith('cubbyhole'):\n self.token = self.initial_token\n val = 
super(Client, self).delete(path)\n self.token = self.operational_token\n else:\n super(Client, self).delete(path)\n\n return val"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef from_keybase(username):\n public_key = key_from_keybase(username)\n fingerprint = public_key['fingerprint'][-8:].upper().encode('ascii')\n key = public_key['bundle'].encode('ascii')\n if not has_gpg_key(fingerprint):\n LOG.debug(\"Importing gpg key for %s\", username)\n if not import_gpg_key(key):\n raise aomi.exceptions.KeybaseAPI(\"import key for %s\" % username)\n\n return fingerprint", "response": "Will attempt to retrieve a GPG public key from Keybase if necessary"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef grok_keys(config):\n key_ids = []\n for key in config['pgp_keys']:\n if key.startswith('keybase:'):\n key_id = from_keybase(key[8:])\n LOG.debug(\"Encrypting for keybase user %s\", key[8:])\n else:\n if not has_gpg_key(key):\n raise aomi.exceptions.GPG(\"Do not actually have key %s\" % key)\n\n LOG.debug(\"Encrypting for gpg id %s\", key)\n key_id = key\n\n validate_gpg_fingerprint(key_id)\n key_ids.append(key_id)\n\n return key_ids", "response": "Will retrieve a GPG key from either Keybase or GPG directly"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngenerates a ZIP file of secrets", "response": "def freeze_archive(tmp_dir, dest_prefix):\n \"\"\"Generates a ZIP file of secrets\"\"\"\n zip_filename = \"%s/aomi-blah.zip\" % tmp_dir\n archive = zipfile.ZipFile(zip_filename, 'w')\n for root, _dirnames, filenames in os.walk(dest_prefix):\n for filename in filenames:\n relative_path = subdir_path(root, dest_prefix).split(os.sep)[1:]\n relative_path = os.sep.join(relative_path)\n archive.write(\"%s/%s\" % (root, filename),\n \"%s/%s\" % (relative_path, filename))\n\n archive.close()\n return zip_filename"} 
{"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nencrypting the zip file", "response": "def freeze_encrypt(dest_dir, zip_filename, config, opt):\n \"\"\"Encrypts the zip file\"\"\"\n pgp_keys = grok_keys(config)\n icefile_prefix = \"aomi-%s\" % \\\n os.path.basename(os.path.dirname(opt.secretfile))\n if opt.icefile_prefix:\n icefile_prefix = opt.icefile_prefix\n\n timestamp = time.strftime(\"%H%M%S-%m-%d-%Y\",\n datetime.datetime.now().timetuple())\n ice_file = \"%s/%s-%s.ice\" % (dest_dir, icefile_prefix, timestamp)\n if not encrypt(zip_filename, ice_file, pgp_keys):\n raise aomi.exceptions.GPG(\"Unable to encrypt zipfile\")\n\n return ice_file"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef freeze(dest_dir, opt):\n tmp_dir = ensure_tmpdir()\n dest_prefix = \"%s/dest\" % tmp_dir\n ensure_dir(dest_dir)\n ensure_dir(dest_prefix)\n config = get_secretfile(opt)\n Context.load(config, opt) \\\n .freeze(dest_prefix)\n zip_filename = freeze_archive(tmp_dir, dest_prefix)\n ice_file = freeze_encrypt(dest_dir, zip_filename, config, opt)\n shutil.rmtree(tmp_dir)\n LOG.debug(\"Generated file is %s\", ice_file)", "response": "Freeze the secrets in the Secretfile and encrypt it."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef thaw_decrypt(vault_client, src_file, tmp_dir, opt):\n\n if not os.path.isdir(opt.secrets):\n LOG.info(\"Creating secret directory %s\", opt.secrets)\n os.mkdir(opt.secrets)\n\n zip_file = \"%s/aomi.zip\" % tmp_dir\n\n if opt.gpg_pass_path:\n gpg_path_bits = opt.gpg_pass_path.split('/')\n gpg_path = '/'.join(gpg_path_bits[0:len(gpg_path_bits) - 1])\n gpg_field = gpg_path_bits[len(gpg_path_bits) - 1]\n resp = vault_client.read(gpg_path)\n gpg_pass = None\n if resp and 'data' in resp and gpg_field in resp['data']:\n gpg_pass = resp['data'][gpg_field]\n if not gpg_pass:\n raise aomi.exceptions.GPG(\"Unable to retrieve GPG 
password\")\n\n LOG.debug(\"Retrieved GPG password from Vault\")\n if not decrypt(src_file, zip_file, passphrase=gpg_pass):\n raise aomi.exceptions.GPG(\"Unable to gpg\")\n\n else:\n raise aomi.exceptions.VaultData(\"Unable to retrieve GPG password\")\n else:\n if not decrypt(src_file, zip_file):\n raise aomi.exceptions.GPG(\"Unable to gpg\")\n\n return zip_file", "response": "Decrypts the encrypted ice file"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngives the combination of a Secretfile and the output of a freeze operation, will restore secrets to usable locations", "response": "def thaw(vault_client, src_file, opt):\n \"\"\"Given the combination of a Secretfile and the output of\n a freeze operation, will restore secrets to usable locations\"\"\"\n if not os.path.exists(src_file):\n raise aomi.exceptions.AomiFile(\"%s does not exist\" % src_file)\n\n tmp_dir = ensure_tmpdir()\n zip_file = thaw_decrypt(vault_client, src_file, tmp_dir, opt)\n archive = zipfile.ZipFile(zip_file, 'r')\n for archive_file in archive.namelist():\n archive.extract(archive_file, tmp_dir)\n os.chmod(\"%s/%s\" % (tmp_dir, archive_file), 0o640)\n LOG.debug(\"Extracted %s from archive\", archive_file)\n\n LOG.info(\"Thawing secrets into %s\", opt.secrets)\n config = get_secretfile(opt)\n Context.load(config, opt) \\\n .thaw(tmp_dir)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ndetermining if changes are needed for the Vault backend", "response": "def diff(self):\n \"\"\"Determines if changes are needed for the Vault backend\"\"\"\n\n if not self.present:\n if self.existing:\n return DEL\n\n return NOOP\n\n is_diff = NOOP\n if self.present and self.existing:\n a_obj = self.config.copy()\n if self.config and diff_dict(a_obj, self.existing, True):\n is_diff = CHANGED\n\n if self.description != self.existing.get('description'):\n is_diff = CONFLICT\n\n elif self.present and not self.existing:\n is_diff = ADD\n\n 
return is_diff"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef sync(self, vault_client):\n if self.present:\n if not self.existing:\n LOG.info(\"Mounting %s backend on %s\",\n self.backend, self.path)\n self.actually_mount(vault_client)\n else:\n LOG.info(\"%s backend already mounted on %s\",\n self.backend, self.path)\n else:\n if self.existing:\n LOG.info(\"Unmounting %s backend on %s\",\n self.backend, self.path)\n self.unmount(vault_client)\n else:\n LOG.info(\"%s backend already unmounted on %s\",\n self.backend, self.path)\n\n if self.present and vault_client.version:\n self.sync_tunables(vault_client)", "response": "Synchronizes the local and remote Vault resources. Has the net\n effect of adding backend if needed."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nsynchronize any tunables we have set", "response": "def sync_tunables(self, vault_client):\n \"\"\"Synchronizes any tunables we have set\"\"\"\n if not self.config:\n return\n\n a_prefix = self.tune_prefix\n if self.tune_prefix:\n a_prefix = \"%s/\" % self.tune_prefix\n\n v_path = \"sys/mounts/%s%s/tune\" % (a_prefix, self.path)\n a_obj = self.config.copy()\n if 'description' in a_obj:\n del a_obj['description']\n\n t_resp = vault_client.write(v_path, **a_obj)\n if t_resp and 'errors' in t_resp and t_resp['errors']:\n e_msg = \"Unable to update tuning info for %s\" % self\n raise aomi_excep.VaultData(e_msg)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef fetch(self, vault_client, backends):\n if not is_mounted(self.backend, self.path, backends) or \\\n self.tune_prefix is None:\n return\n\n backend_details = get_backend(self.backend, self.path, backends)\n self.existing = backend_details['config']\n if backend_details['description']:\n self.existing['description'] = backend_details['description']\n\n if vault_client.version is None:\n 
return\n\n if not self.managed:\n return\n\n a_prefix = self.tune_prefix\n if self.tune_prefix:\n a_prefix = \"%s/\" % self.tune_prefix\n\n v_path = \"sys/mounts/%s%s/tune\" % (a_prefix, self.path)\n t_resp = vault_client.read(v_path)\n if 'data' not in t_resp:\n e_msg = \"Unable to retrieve tuning info for %s\" % self\n raise aomi_excep.VaultData(e_msg)\n\n e_obj = t_resp['data']\n e_obj['description'] = None\n n_path = normalize_vault_path(self.path)\n if n_path in backends:\n a_mount = backends[n_path]\n if 'description' in a_mount and a_mount['description']:\n e_obj['description'] = a_mount['description']\n\n self.existing = e_obj", "response": "Updates the local resource with context on whether this backend is actually mounted and available"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef unmount(self, client):\n getattr(client, self.unmount_fun)(mount_point=self.path)", "response": "Unmounts a backend within Vault"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef thaw(self, tmp_dir):\n for sfile in self.secrets():\n src_file = \"%s/%s\" % (tmp_dir, sfile)\n err_msg = \"%s secret missing from icefile\" % (self)\n if not os.path.exists(src_file):\n if hasattr(self.opt, 'ignore_missing') and \\\n self.opt.ignore_missing:\n LOG.warning(err_msg)\n continue\n else:\n raise aomi_excep.IceFile(err_msg)\n\n dest_file = \"%s/%s\" % (self.opt.secrets, sfile)\n dest_dir = os.path.dirname(dest_file)\n if not os.path.exists(dest_dir):\n os.mkdir(dest_dir)\n\n shutil.copy(src_file, dest_file)\n LOG.debug(\"Thawed %s %s\", self, sfile)", "response": "Will perform some validation and copy the decrypted secret to its final location"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef tunable(self, obj):\n self.tune = dict()\n if 'tune' in obj:\n for tunable in MOUNT_TUNABLES:\n tunable_key = 
tunable[0]\n map_val(self.tune, obj['tune'], tunable_key)\n if tunable_key in self.tune and \\\n is_vault_time(self.tune[tunable_key]):\n vault_time_s = vault_time_to_s(self.tune[tunable_key])\n self.tune[tunable_key] = vault_time_s\n\n if 'description'in obj:\n self.tune['description'] = obj['description']", "response": "A tunable resource maps against a backend..."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngets a filehandle for exporting", "response": "def export_handle(self, directory):\n \"\"\"Get a filehandle for exporting\"\"\"\n filename = getattr(self, 'filename')\n dest_file = \"%s/%s\" % (directory, filename)\n dest_dir = os.path.dirname(dest_file)\n if not os.path.isdir(dest_dir):\n os.mkdir(dest_dir, 0o700)\n\n return open(dest_file, 'w')"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef export(self, directory):\n if not self.existing or not hasattr(self, 'filename'):\n return\n\n secret_h = self.export_handle(directory)\n obj = self.existing\n if isinstance(obj, str):\n secret_h.write(obj)\n elif isinstance(obj, dict):\n secret_h.write(yaml.safe_dump(obj))", "response": "Export the current object to a directory."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef freeze(self, tmp_dir):\n for sfile in self.secrets():\n src_file = hard_path(sfile, self.opt.secrets)\n if not os.path.exists(src_file):\n raise aomi_excep.IceFile(\"%s secret not found at %s\" %\n (self, src_file))\n\n dest_file = \"%s/%s\" % (tmp_dir, sfile)\n dest_dir = os.path.dirname(dest_file)\n if not os.path.isdir(dest_dir):\n os.mkdir(dest_dir, 0o700)\n\n shutil.copy(src_file, dest_file)\n LOG.debug(\"Froze %s %s\", self, sfile)", "response": "Copies a secret into a particular location"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef grok_state(self, obj):\n if 'state' in obj:\n my_state = 
obj['state'].lower()\n if my_state != 'absent' and my_state != 'present':\n raise aomi_excep \\\n .Validation('state must be either \"absent\" or \"present\"')\n\n self.present = obj.get('state', 'present').lower() == 'present'", "response": "Determine the desired state of this resource based on data present"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef validate(self, obj):\n if 'tags' in obj and not isinstance(obj['tags'], list):\n raise aomi_excep.Validation('tags must be a list')\n\n if self.present:\n check_obj(self.required_fields, self.name(), obj)", "response": "Base validation method. Will inspect class attributes\n to determine just what should be present"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef diff(self, obj=None):\n if self.no_resource:\n return NOOP\n\n if not self.present:\n if self.existing:\n return DEL\n\n return NOOP\n\n if not obj:\n obj = self.obj()\n\n is_diff = NOOP\n if self.present and self.existing:\n if isinstance(self.existing, dict):\n current = dict(self.existing)\n if 'refresh_interval' in current:\n del current['refresh_interval']\n\n if diff_dict(current, obj):\n is_diff = CHANGED\n elif is_unicode(self.existing):\n if self.existing != obj:\n is_diff = CHANGED\n\n elif self.present and not self.existing:\n is_diff = ADD\n\n return is_diff", "response": "Determine if something has changed or not"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef fetch(self, vault_client):\n result = self.read(vault_client)\n if result:\n if isinstance(result, dict) and 'data' in result:\n self.existing = result['data']\n else:\n self.existing = result\n else:\n self.existing = None", "response": "Populate internal representation of remote\n Vault resource contents"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nupdating 
and removing Vault resource contents if needed", "response": "def sync(self, vault_client):\n \"\"\"Update remove Vault resource contents if needed\"\"\"\n if self.present and not self.existing:\n LOG.info(\"Writing new %s to %s\",\n self.secret_format, self)\n self.write(vault_client)\n elif self.present and self.existing:\n if self.diff() == CHANGED or self.diff() == OVERWRITE:\n LOG.info(\"Updating %s in %s\",\n self.secret_format, self)\n self.write(vault_client)\n elif not self.present and not self.existing:\n LOG.info(\"No %s to remove from %s\",\n self.secret_format, self)\n elif not self.present and self.existing:\n LOG.info(\"Removing %s from %s\",\n self.secret_format, self)\n self.delete(vault_client)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef filtered(self):\n if not is_tagged(self.tags, self.opt.tags):\n LOG.info(\"Skipping %s as it does not have requested tags\",\n self.path)\n return False\n\n if not specific_path_check(self.path, self.opt):\n LOG.info(\"Skipping %s as it does not match specified paths\",\n self.path)\n return False\n\n return True", "response": "Determines whether or not the resource is filtered based on the tags and paths specified in the command line options."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef read(self, client):\n val = None\n if self.no_resource:\n return val\n\n LOG.debug(\"Reading from %s\", self)\n try:\n val = client.read(self.path)\n except hvac.exceptions.InvalidRequest as vault_exception:\n if str(vault_exception).startswith('no handler for route'):\n val = None\n\n return val", "response": "Read from Vault while handling non-surprising errors."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nwrite to Vault while handling non-surprising errors.", "response": "def write(self, client):\n \"\"\"Write to Vault while handling 
non-surprising errors.\"\"\"\n val = None\n if not self.no_resource:\n val = client.write(self.path, **self.obj())\n\n return val"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef find_file(name, directory):\n path_bits = directory.split(os.sep)\n for i in range(0, len(path_bits) - 1):\n check_path = path_bits[0:len(path_bits) - i]\n check_file = \"%s%s%s\" % (os.sep.join(check_path), os.sep, name)\n if os.path.exists(check_file):\n return abspath(check_file)\n\n return None", "response": "Searches up from a directory looking for a file"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nlook in a file for a string.", "response": "def in_file(string, search_file):\n \"\"\"Looks in a file for a string.\"\"\"\n handle = open(search_file, 'r')\n for line in handle.readlines():\n if string in line:\n return True\n\n return False"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef gitignore(opt):\n directory = os.path.dirname(abspath(opt.secretfile))\n gitignore_file = find_file('.gitignore', directory)\n if gitignore_file:\n secrets_path = subdir_path(abspath(opt.secrets), gitignore_file)\n if secrets_path:\n if not in_file(secrets_path, gitignore_file):\n e_msg = \"The path %s was not found in %s\" \\\n % (secrets_path, gitignore_file)\n raise aomi.exceptions.AomiFile(e_msg)\n else:\n LOG.debug(\"Using a non-relative secret directory\")\n\n else:\n raise aomi.exceptions.AomiFile(\"You should really have a .gitignore\")", "response": "Will check directories upwards from the Secretfile in order\n to ensure the gitignore file is set properly"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef specific_path_check(path, opt):\n if opt.exclude:\n if path in opt.exclude:\n return False\n\n if opt.include:\n if path not in opt.include:\n return False\n\n return 
True", "response": "Will make checks against include and exclude to determine if we care about the path in question."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ndo basic validation on an object", "response": "def check_obj(keys, name, obj):\n \"\"\"Do basic validation on an object\"\"\"\n msg = validate_obj(keys, obj)\n\n if msg:\n raise aomi.exceptions.AomiData(\"object check : %s in %s\" % (msg, name))"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef sanitize_mount(mount):\n sanitized_mount = mount\n if sanitized_mount.startswith('/'):\n sanitized_mount = sanitized_mount[1:]\n\n if sanitized_mount.endswith('/'):\n sanitized_mount = sanitized_mount[:-1]\n\n sanitized_mount = sanitized_mount.replace('//', '/')\n return sanitized_mount", "response": "Returns a quote-unquote sanitized mount path"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef gpg_fingerprint(key):\n if (len(key) == 8 and re.match(r'^[0-9A-F]{8}$', key)) or \\\n (len(key) == 40 and re.match(r'^[0-9A-F]{40}$', key)):\n return\n\n raise aomi.exceptions.Validation('Invalid GPG Fingerprint')", "response": "Validates a GPG key fingerprint"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nvalidates that we are some kinda unicode string", "response": "def is_unicode_string(string):\n \"\"\"Validates that we are some kinda unicode string\"\"\"\n try:\n if sys.version_info >= (3, 0):\n # isn't a python 3 str actually unicode\n if not isinstance(string, str):\n string.decode('utf-8')\n\n else:\n string.decode('utf-8')\n except UnicodeError:\n raise aomi.exceptions.Validation('Not a unicode string')"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef is_unicode(string):\n str_type = str(type(string))\n\n if str_type.find('str') > 0 or 
str_type.find('unicode') > 0:\n return True\n\n return False", "response": "Validates that the object itself is some kinda string"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nsafe version of the modulo operation (%) of strings Parameters ---------- s: str string to apply the modulo operation with meta: dict or tuple meta informations to insert (usually via ``s % meta``) checked: {'KEY', 'VALUE'}, optional Security parameter for the recursive structure of this function. It can be set to 'VALUE' if an error shall be raised when facing a TypeError or ValueError or to 'KEY' if an error shall be raised when facing a KeyError. This parameter is mainly for internal processes. print_warning: bool If True and a key is not existent in `s`, a warning is raised stacklevel: int The stacklevel for the :func:`warnings.warn` function Examples -------- The effects are demonstrated by this example:: >>> from docrep import safe_modulo >>> s = \"That's %(one)s string %(with)s missing 'with' and %s key\" >>> s % {'one': 1} # raises KeyError because of missing 'with' Traceback (most recent call last): File \"<stdin>\", line 1, in <module> KeyError: 'with' >>> s % {'one': 1, 'with': 2} # raises TypeError because of '%s' Traceback (most recent call last): File \"<stdin>\", line 1, in <module> TypeError: not enough arguments for format string >>> safe_modulo(s, {'one': 1}) \"That's 1 string %(with)s missing 'with' and %s key\"", "response": "def safe_modulo(s, meta, checked='', print_warning=True, stacklevel=2):\n \"\"\"Safe version of the modulo operation (%) of strings\n\n Parameters\n ----------\n s: str\n string to apply the modulo operation with\n meta: dict or tuple\n meta informations to insert (usually via ``s % meta``)\n checked: {'KEY', 'VALUE'}, optional\n Security parameter for the recursive structure of this function. 
It can\n be set to 'VALUE' if an error shall be raised when facing a TypeError\n or ValueError or to 'KEY' if an error shall be raised when facing a\n KeyError. This parameter is mainly for internal processes.\n print_warning: bool\n If True and a key is not existent in `s`, a warning is raised\n stacklevel: int\n The stacklevel for the :func:`warnings.warn` function\n\n\n Examples\n --------\n The effects are demonstrated by this example::\n\n >>> from docrep import safe_modulo\n >>> s = \"That's %(one)s string %(with)s missing 'with' and %s key\"\n >>> s % {'one': 1} # raises KeyError because of missing 'with'\n Traceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\n KeyError: 'with'\n >>> s % {'one': 1, 'with': 2} # raises TypeError because of '%s'\n Traceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\n TypeError: not enough arguments for format string\n >>> safe_modulo(s, {'one': 1})\n \"That's 1 string %(with)s missing 'with' and %s key\"\n \"\"\"\n try:\n return s % meta\n except (ValueError, TypeError, KeyError):\n # replace the missing fields by %%\n keys = substitution_pattern.finditer(s)\n for m in keys:\n key = m.group('key')\n if not isinstance(meta, dict) or key not in meta:\n if print_warning:\n warn(\"%r is not a valid key!\" % key, SyntaxWarning,\n stacklevel)\n full = m.group()\n s = s.replace(full, '%' + full)\n if 'KEY' not in checked:\n return safe_modulo(s, meta, checked=checked + 'KEY',\n print_warning=print_warning,\n stacklevel=stacklevel)\n if not isinstance(meta, dict) or 'VALUE' in checked:\n raise\n s = re.sub(r\"\"\"(?<!%)(%%)*%(?!%) # uneven number of %\n \\s*(\\w|$) # format strings\"\"\", '%\\g<0>', s,\n flags=re.VERBOSE)\n return safe_modulo(s, meta, checked=checked + 'VALUE',\n print_warning=print_warning, stacklevel=stacklevel)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_sections(self, s, base,\n sections=['Parameters', 'Other 
Parameters']):\n \"\"\"\n Method that extracts the specified sections out of the given string if\n (and only if) the docstring follows the numpy documentation guidelines\n [1]_. Note that the section either must appear in the\n :attr:`param_like_sections` or the :attr:`text_sections` attribute.\n\n Parameters\n ----------\n s: str\n Docstring to split\n base: str\n base to use in the :attr:`sections` attribute\n sections: list of str\n sections to look for. Each section must be followed by a newline\n character ('\\\\n') and a bar of '-' (following the numpy (napoleon)\n docstring conventions).\n\n Returns\n -------\n str\n The replaced string\n\n References\n ----------\n .. [1] https://github.com/numpy/numpy/blob/master/doc/HOWTO_DOCUMENT.rst.txt\n\n See Also\n --------\n delete_params, keep_params, delete_types, keep_types, delete_kwargs:\n For manipulating the docstring sections\n save_docstring:\n for saving an entire docstring\n \"\"\"\n params = self.params\n # Remove the summary and dedent the rest\n s = self._remove_summary(s)\n for section in sections:\n key = '%s.%s' % (base, section.lower().replace(' ', '_'))\n params[key] = self._get_section(s, section)\n return s", "response": "Method that extracts the specified sections out of the given string s."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef dedent(self, func):\n doc = func.__doc__ and self.dedents(func.__doc__, stacklevel=4)\n return self._set_object_doc(func, doc)", "response": "Dedent the docstring of a function and substitute with params"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef dedents(self, s, stacklevel=3):\n s = dedents(s)\n return safe_modulo(s, self.params, stacklevel=stacklevel)", "response": "Dedent a string and substitute with the params attribute and return the string."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning a 
function that replaces the docstring of a function with indented versions of the object s parameters with the indented version of the object s parameters.", "response": "def with_indent(self, indent=0):\n \"\"\"\n Substitute in the docstring of a function with indented :attr:`params`\n\n Parameters\n ----------\n indent: int\n The number of spaces that the substitution should be indented\n\n Returns\n -------\n function\n Wrapper that takes a function as input and substitutes it's\n ``__doc__`` with the indented versions of :attr:`params`\n\n See Also\n --------\n with_indents, dedent\"\"\"\n def replace(func):\n doc = func.__doc__ and self.with_indents(\n func.__doc__, indent=indent, stacklevel=4)\n return self._set_object_doc(func, doc)\n return replace"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nsubstitute a string with the indented string with the indented params.", "response": "def with_indents(self, s, indent=0, stacklevel=3):\n \"\"\"\n Substitute a string with the indented :attr:`params`\n\n Parameters\n ----------\n s: str\n The string in which to substitute\n indent: int\n The number of spaces that the substitution should be indented\n stacklevel: int\n The stacklevel for the warning raised in :func:`safe_module` when\n encountering an invalid key in the string\n\n Returns\n -------\n str\n The substituted string\n\n See Also\n --------\n with_indent, dedents\"\"\"\n # we make a new dictionary with objects that indent the original\n # strings if necessary. 
Note that the first line is not indented\n d = {key: _StrWithIndentation(val, indent)\n for key, val in six.iteritems(self.params)}\n return safe_modulo(s, d, stacklevel=stacklevel)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ndeleting the given parameters from a string s without the descriptions of the parameters.", "response": "def delete_params_s(s, params):\n \"\"\"\n Delete the given parameters from a string\n\n Same as :meth:`delete_params` but does not use the :attr:`params`\n dictionary\n\n Parameters\n ----------\n s: str\n The string of the parameters section\n params: list of str\n The names of the parameters to delete\n\n Returns\n -------\n str\n The modified string `s` without the descriptions of `params`\n \"\"\"\n patt = '(?s)' + '|'.join(\n '(?<=\\n)' + s + '\\s*:.+?\\n(?=\\S+|$)' for s in params)\n return re.sub(patt, '', '\\n' + s.strip() + '\\n').strip()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef delete_kwargs(self, base_key, args=None, kwargs=None):\n if not args and not kwargs:\n warn(\"Neither args nor kwargs are given. 
I do nothing for %s\" % (\n base_key))\n return\n ext = '.no' + ('_args' if args else '') + ('_kwargs' if kwargs else '')\n self.params[base_key + ext] = self.delete_kwargs_s(\n self.params[base_key], args, kwargs)", "response": "Delete the kwargs part of the parameters section of the parameters section of the current object."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ndeleting the args or kwargs part from the parameters section s", "response": "def delete_kwargs_s(cls, s, args=None, kwargs=None):\n \"\"\"\n Deletes the ``*args`` or ``**kwargs`` part from the parameters section\n\n Either `args` or `kwargs` must not be None.\n\n Parameters\n ----------\n s: str\n The string to delete the args and kwargs from\n args: None or str\n The string for the args to delete\n kwargs: None or str\n The string for the kwargs to delete\n\n Notes\n -----\n The type name of `args` in `s` has to be like ````*<args>```` (i.e. the\n `args` argument preceeded by a ``'*'`` and enclosed by double ``'`'``).\n Similarily, the type name of `kwargs` in `s` has to be like\n ````**<kwargs>````\"\"\"\n if not args and not kwargs:\n return s\n types = []\n if args is not None:\n types.append('`?`?\\*%s`?`?' % args)\n if kwargs is not None:\n types.append('`?`?\\*\\*%s`?`?' 
% kwargs)\n return cls.delete_types_s(s, types)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ndelete the given types from a string s.", "response": "def delete_types_s(s, types):\n \"\"\"\n Delete the given types from a string\n\n Same as :meth:`delete_types` but does not use the :attr:`params`\n dictionary\n\n Parameters\n ----------\n s: str\n The string of the returns like section\n types: list of str\n The type identifiers to delete\n\n Returns\n -------\n str\n The modified string `s` without the descriptions of `types`\n \"\"\"\n patt = '(?s)' + '|'.join(\n '(?<=\\n)' + s + '\\n.+?\\n(?=\\S+|$)' for s in types)\n return re.sub(patt, '', '\\n' + s.strip() + '\\n',).strip()"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nkeeps the given parameters from a string s with only the descriptions of params.", "response": "def keep_params_s(s, params):\n \"\"\"\n Keep the given parameters from a string\n\n Same as :meth:`keep_params` but does not use the :attr:`params`\n dictionary\n\n Parameters\n ----------\n s: str\n The string of the parameters like section\n params: list of str\n The parameter names to keep\n\n Returns\n -------\n str\n The modified string `s` with only the descriptions of `params`\n \"\"\"\n patt = '(?s)' + '|'.join(\n '(?<=\\n)' + s + '\\s*:.+?\\n(?=\\S+|$)' for s in params)\n return ''.join(re.findall(patt, '\\n' + s.strip() + '\\n')).rstrip()"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef keep_types(self, base_key, out_key, *types):\n self.params['%s.%s' % (base_key, out_key)] = self.keep_types_s(\n self.params[base_key], types)", "response": "Method to keep only specific parameters from a parameter documentation."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nkeeps the given types from a string s", "response": "def keep_types_s(s, types):\n \"\"\"\n Keep 
the given types from a string\n\n Same as :meth:`keep_types` but does not use the :attr:`params`\n dictionary\n\n Parameters\n ----------\n s: str\n The string of the returns like section\n types: list of str\n The type identifiers to keep\n\n Returns\n -------\n str\n The modified string `s` with only the descriptions of `types`\n \"\"\"\n patt = '|'.join('(?<=\\n)' + s + '\\n(?s).+?\\n(?=\\S+|$)' for s in types)\n return ''.join(re.findall(patt, '\\n' + s.strip() + '\\n')).rstrip()"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nsaving a docstring from a function that returns a dict of the key and the value of the docstring.", "response": "def save_docstring(self, key):\n \"\"\"\n Descriptor method to save a docstring from a function\n\n Like the :meth:`get_sectionsf` method this method serves as a\n descriptor for functions but saves the entire docstring\"\"\"\n def func(f):\n self.params[key] = f.__doc__ or ''\n return f\n return func"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_summary(self, s, base=None):\n summary = summary_patt.search(s).group()\n if base is not None:\n self.params[base + '.summary'] = summary\n return summary", "response": "This method extracts the summary of the given docstring s which is the key under which the summary is stored in the given base."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns a function that returns the summary of the specified function.", "response": "def get_summaryf(self, *args, **kwargs):\n \"\"\"\n Extract the summary from a function docstring\n\n Parameters\n ----------\n ``*args`` and ``**kwargs``\n See the :meth:`get_summary` method. 
Note, that the first argument\n will be the docstring of the specified function\n\n Returns\n -------\n function\n Wrapper that takes a function as input and registers its summary\n via the :meth:`get_summary` method\"\"\"\n def func(f):\n doc = f.__doc__\n self.get_summary(doc or '', *args, **kwargs)\n return f\n return func"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets the extended summary from a docstring", "response": "def get_extended_summary(self, s, base=None):\n \"\"\"Get the extended summary from a docstring\n\n This here is the extended summary\n\n Parameters\n ----------\n s: str\n The docstring to use\n base: str or None\n A key under which the summary shall be stored in the :attr:`params`\n attribute. If not None, the summary will be stored in\n ``base + '.summary_ext'``. Otherwise, it will not be stored at\n all\n\n Returns\n -------\n str\n The extracted extended summary\"\"\"\n # Remove the summary and dedent\n s = self._remove_summary(s)\n ret = ''\n if not self._all_sections_patt.match(s):\n m = self._extended_summary_patt.match(s)\n if m is not None:\n ret = m.group().strip()\n if base is not None:\n self.params[base + '.summary_ext'] = ret\n return ret"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_extended_summaryf(self, *args, **kwargs):\n def func(f):\n doc = f.__doc__\n self.get_extended_summary(doc or '', *args, **kwargs)\n return f\n return func", "response": "Returns a function that extracts the extended summary of a function docstring"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_full_description(self, s, base=None):\n summary = self.get_summary(s)\n extended_summary = self.get_extended_summary(s)\n ret = (summary + '\\n\\n' + extended_summary).strip()\n if base is not None:\n self.params[base + '.full_desc'] = ret\n return ret", "response": "Get the full 
description from a docstring"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns a function that extracts the full description of a function docstring", "response": "def get_full_descriptionf(self, *args, **kwargs):\n \"\"\"Extract the full description from a function docstring\n\n This function can be used as a decorator to extract the full\n descriptions of a function docstring (similar to\n :meth:`get_sectionsf`).\n\n Parameters\n ----------\n ``*args`` and ``**kwargs``\n See the :meth:`get_full_description` method. Note, that the first\n argument will be the docstring of the specified function\n\n Returns\n -------\n function\n Wrapper that takes a function as input and registers its summary\n via the :meth:`get_full_description` method\"\"\"\n def func(f):\n doc = f.__doc__\n self.get_full_description(doc or '', *args, **kwargs)\n return f\n return func"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef make_mergable_if_possible(cls, data, context):\n if isinstance(data, dict):\n return MergableDict(data=data, context=context)\n elif isiterable(data):\n return MergableList(\n data=[cls.make_mergable_if_possible(i, context) for i in data],\n context=context\n )\n else:\n return data", "response": "Makes an object mergable if possible. 
Returns the virgin object if the object is not mergable."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nmerges this instance with new instances in-place.", "response": "def merge(self, *args):\n \"\"\"\n Merges this instance with new instances, in-place.\n\n :param \\\\*args: Configuration values to merge with current instance.\n :type \\\\*args: iterable\n\n \"\"\"\n for data in args:\n if isinstance(data, str):\n to_merge = load_string(data, self.context)\n if not to_merge:\n continue\n else:\n to_merge = data\n\n if not self.can_merge(to_merge):\n raise TypeError(\n 'Cannot merge myself:%s with %s. data: %s' \\\n % (type(self), type(data), data)\n )\n\n self._merge(to_merge)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef load_file(self, filename):\n if not path.exists(filename):\n raise FileNotFoundError(filename)\n\n loaded_yaml = load_yaml(filename, self.context)\n if loaded_yaml:\n self.merge(loaded_yaml)", "response": "Load a file which contains yaml configuration entries and merge it into the\n current instance"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ninitializes the configuration manager.", "response": "def initialize(self, init_value, context=None, force=False):\n \"\"\"\n Initialize the configuration manager\n\n :param force: force initialization even if it's already initialized\n :return:\n \"\"\"\n\n if not force and self._instance is not None:\n raise ConfigurationAlreadyInitializedError(\n 'Configuration manager object is already initialized.'\n )\n\n self.__class__._instance = Root(init_value, context=context)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nextracts the name which may be embedded for a Jinja2 filter node", "response": "def grok_filter_name(element):\n \"\"\"Extracts the name, which may be embedded, for a Jinja2\n filter node\"\"\"\n e_name = None\n if 
element.name == 'default':\n if isinstance(element.node, jinja2.nodes.Getattr):\n e_name = element.node.node.name\n else:\n e_name = element.node.name\n\n return e_name"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef grok_if_node(element, default_vars):\n if isinstance(element.test, jinja2.nodes.Filter) and \\\n element.test.name == 'default':\n default_vars.append(element.test.node.name)\n\n return default_vars + grok_vars(element)", "response": "Properly parses a If element"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef grok_vars(elements):\n default_vars = []\n iterbody = None\n if hasattr(elements, 'body'):\n iterbody = elements.body\n elif hasattr(elements, 'nodes'):\n iterbody = elements.nodes\n\n for element in iterbody:\n if isinstance(element, jinja2.nodes.Output):\n default_vars = default_vars + grok_vars(element)\n elif isinstance(element, jinja2.nodes.Filter):\n e_name = grok_filter_name(element)\n if e_name not in default_vars:\n default_vars.append(e_name)\n elif isinstance(element, jinja2.nodes.For):\n default_vars = grok_for_node(element, default_vars)\n elif isinstance(element, jinja2.nodes.If):\n default_vars = grok_if_node(element, default_vars)\n elif isinstance(element, jinja2.nodes.Assign):\n default_vars.append(element.target.name)\n elif isinstance(element, jinja2.nodes.FromImport):\n for from_var in element.names:\n default_vars.append(from_var)\n\n return default_vars", "response": "Returns a list of vars for which the value is being appropriately set\n "} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef jinja_env(template_path):\n fs_loader = FileSystemLoader(os.path.dirname(template_path))\n env = Environment(loader=fs_loader,\n autoescape=True,\n trim_blocks=True,\n lstrip_blocks=True)\n env.filters['b64encode'] = portable_b64encode\n 
env.filters['b64decode'] = f_b64decode\n return env", "response": "Sets up our Jinja environment loading the few filters we have"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncheck if the user has required variables in a template.", "response": "def missing_vars(template_vars, parsed_content, obj):\n \"\"\"If we find missing variables when rendering a template\n we want to give the user a friendly error\"\"\"\n missing = []\n default_vars = grok_vars(parsed_content)\n for var in template_vars:\n if var not in default_vars and var not in obj:\n missing.append(var)\n\n if missing:\n e_msg = \"Missing required variables %s\" % \\\n ','.join(missing)\n raise aomi_excep.AomiData(e_msg)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef render(filename, obj):\n template_path = abspath(filename)\n env = jinja_env(template_path)\n template_base = os.path.basename(template_path)\n try:\n parsed_content = env.parse(env\n .loader\n .get_source(env, template_base))\n template_vars = meta.find_undeclared_variables(parsed_content)\n if template_vars:\n missing_vars(template_vars, parsed_content, obj)\n\n LOG.debug(\"rendering %s with %s vars\",\n template_path, len(template_vars))\n return env \\\n .get_template(template_base) \\\n .render(**obj)\n except jinja2.exceptions.TemplateSyntaxError as exception:\n template_trace = traceback.format_tb(sys.exc_info()[2])\n # Different error context depending on whether it is the\n # pre-render variable scan or not\n if exception.filename:\n template_line = template_trace[len(template_trace) - 1]\n raise aomi_excep.Validation(\"Bad template %s %s\" %\n (template_line,\n str(exception)))\n\n template_str = ''\n if isinstance(exception.source, tuple):\n # PyLint seems confused about whether or not this is a tuple\n # pylint: disable=locally-disabled, unsubscriptable-object\n template_str = \"Embedded Template\\n%s\" % exception.source[0]\n\n 
raise aomi_excep.Validation(\"Bad template %s\" % str(exception),\n source=template_str)\n\n except jinja2.exceptions.UndefinedError as exception:\n template_traces = [x.strip()\n for x in traceback.format_tb(sys.exc_info()[2])\n if 'template code' in x]\n raise aomi_excep.Validation(\"Missing template variable %s\" %\n ' '.join(template_traces))", "response": "Render a template with the given object."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nload variable from cli and var files passing in cli options as a seed.", "response": "def load_vars(opt):\n \"\"\"Loads variable from cli and var files, passing in cli options\n as a seed (although they can be overwritten!).\n Note, turn this into an object so it's a nicer \"cache\".\"\"\"\n if not hasattr(opt, '_vars_cache'):\n cli_opts = cli_hash(opt.extra_vars)\n setattr(opt, '_vars_cache',\n merge_dicts(load_var_files(opt, cli_opts), cli_opts))\n\n return getattr(opt, '_vars_cache')"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef load_var_files(opt, p_obj=None):\n obj = {}\n if p_obj:\n obj = p_obj\n\n for var_file in opt.extra_vars_file:\n LOG.debug(\"loading vars from %s\", var_file)\n obj = merge_dicts(obj.copy(), load_var_file(var_file, obj))\n\n return obj", "response": "Load variable files merge and return contents"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nloads a varible file processing it as a template", "response": "def load_var_file(filename, obj):\n \"\"\"Loads a varible file, processing it as a template\"\"\"\n rendered = render(filename, obj)\n ext = os.path.splitext(filename)[1][1:]\n v_obj = dict()\n if ext == 'json':\n v_obj = json.loads(rendered)\n elif ext == 'yaml' or ext == 'yml':\n v_obj = yaml.safe_load(rendered)\n else:\n LOG.warning(\"assuming yaml for unrecognized extension %s\",\n ext)\n v_obj = yaml.safe_load(rendered)\n\n return v_obj"} {"SOURCE": 
"codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef load_template_help(builtin):\n\n help_file = \"templates/%s-help.yml\" % builtin\n help_file = resource_filename(__name__, help_file)\n help_obj = {}\n if os.path.exists(help_file):\n help_data = yaml.safe_load(open(help_file))\n if 'name' in help_data:\n help_obj['name'] = help_data['name']\n\n if 'help' in help_data:\n help_obj['help'] = help_data['help']\n\n if 'args' in help_data:\n help_obj['args'] = help_data['args']\n\n return help_obj", "response": "Loads the help for a given template"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nshows a listing of all our builtin templates", "response": "def builtin_list():\n \"\"\"Show a listing of all our builtin templates\"\"\"\n for template in resource_listdir(__name__, \"templates\"):\n builtin, ext = os.path.splitext(os.path.basename(abspath(template)))\n if ext == '.yml':\n continue\n\n help_obj = load_template_help(builtin)\n if 'name' in help_obj:\n print(\"%-*s %s\" % (20, builtin, help_obj['name']))\n else:\n print(\"%s\" % builtin)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef builtin_info(builtin):\n help_obj = load_template_help(builtin)\n if help_obj.get('name') and help_obj.get('help'):\n print(\"The %s template\" % (help_obj['name']))\n print(help_obj['help'])\n else:\n print(\"No help for %s\" % builtin)\n\n if help_obj.get('args'):\n for arg, arg_help in iteritems(help_obj['args']):\n print(\" %-*s %s\" % (20, arg, arg_help))", "response": "Show information on a particular builtin template"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nrender and returns the Secretfile construct", "response": "def render_secretfile(opt):\n \"\"\"Renders and returns the Secretfile construct\"\"\"\n LOG.debug(\"Using Secretfile %s\", opt.secretfile)\n secretfile_path = 
abspath(opt.secretfile)\n obj = load_vars(opt)\n return render(secretfile_path, obj)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef update_user_password(client, userpass):\n vault_path = ''\n user = ''\n user_path_bits = userpass.split('/')\n if len(user_path_bits) == 1:\n user = user_path_bits[0]\n vault_path = \"auth/userpass/users/%s/password\" % user\n LOG.debug(\"Updating password for user %s at the default path\", user)\n elif len(user_path_bits) == 2:\n mount = user_path_bits[0]\n user = user_path_bits[1]\n vault_path = \"auth/%s/users/%s/password\" % (mount, user)\n LOG.debug(\"Updating password for user %s at path %s\", user, mount)\n else:\n client.revoke_self_token()\n raise aomi.exceptions.AomiCommand(\"invalid user path\")\n\n new_password = get_password()\n obj = {\n 'user': user,\n 'password': new_password\n }\n client.write(vault_path, **obj)", "response": "Will update the password for a userpass user"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef update_generic_password(client, path):\n vault_path, key = path_pieces(path)\n mount = mount_for_path(vault_path, client)\n if not mount:\n client.revoke_self_token()\n raise aomi.exceptions.VaultConstraint('invalid path')\n\n if backend_type(mount, client) != 'generic':\n client.revoke_self_token()\n raise aomi.exceptions.AomiData(\"Unsupported backend type\")\n\n LOG.debug(\"Updating generic password at %s\", path)\n existing = client.read(vault_path)\n if not existing or 'data' not in existing:\n LOG.debug(\"Nothing exists yet at %s!\", vault_path)\n existing = {}\n else:\n LOG.debug(\"Updating %s at %s\", key, vault_path)\n existing = existing['data']\n\n new_password = get_password()\n if key in existing and existing[key] == new_password:\n client.revoke_self_token()\n raise aomi.exceptions.AomiData(\"Password is same as existing\")\n\n existing[key] = new_password\n 
client.write(vault_path, **existing)", "response": "Will update a single key in a generic secret backend as\n    though it was a password"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef password(client, path):\n    if path.startswith('user:'):\n        update_user_password(client, path[5:])\n    else:\n        update_generic_password(client, path)", "response": "Will attempt to update a password in Vault"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef vault_file(env, default):\n    home = os.environ['HOME'] if 'HOME' in os.environ else \\\n        os.environ['USERPROFILE']\n    filename = os.environ.get(env, os.path.join(home, default))\n    filename = abspath(filename)\n    if os.path.exists(filename):\n        return filename\n\n    return None", "response": "Returns the path to a misc Vault file"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nconvert a time string into an integer representation of seconds", "response": "def vault_time_to_s(time_string):\n    \"\"\"Will convert a time string, as recognized by other Vault\n    tooling, into an integer representation of seconds\"\"\"\n    if not time_string or len(time_string) < 2:\n        raise aomi.exceptions \\\n            .AomiData(\"Invalid timestring %s\" % time_string)\n\n    last_char = time_string[len(time_string) - 1]\n    if last_char == 's':\n        return int(time_string[0:len(time_string) - 1])\n    elif last_char == 'm':\n        cur = int(time_string[0:len(time_string) - 1])\n        return cur * 60\n    elif last_char == 'h':\n        cur = int(time_string[0:len(time_string) - 1])\n        return cur * 3600\n    elif last_char == 'd':\n        cur = int(time_string[0:len(time_string) - 1])\n        return cur * 86400\n    else:\n        raise aomi.exceptions \\\n            .AomiData(\"Invalid time scale %s\" % last_char)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nexecute a command and return the output of the command.", "response": "def run(*args, **kwargs):\n    
\"\"\"Execute a command.\n\n    Command can be passed as several arguments, each being a string\n    or a list of strings; lists are flattened.\n    If opts.verbose is True, output of the command is shown.\n    If the command exits with non-zero, print an error message and exit.\n    If keyword argument get_output is True, output is returned.\n    Additionally, non-zero exit code with empty output is ignored.\n    \"\"\"\n\n    capture = kwargs.get(\"get_output\", False)\n    args = [arg for arglist in args for arg in (arglist if isinstance(arglist, list) else [arglist])]\n\n    if opts.verbose:\n        print(\"Running {}\".format(\" \".join(args)))\n\n    live_output = opts.verbose and not capture\n    runner = subprocess.check_call if live_output else subprocess.check_output\n\n    try:\n        output = runner(args, stderr=subprocess.STDOUT)\n    except subprocess.CalledProcessError as exception:\n        if capture and not exception.output.strip():\n            # Ignore errors if output is empty.\n            return \"\"\n\n        if not live_output:\n            sys.stdout.write(exception.output.decode(default_encoding, \"ignore\"))\n\n        sys.exit(\"Error: got exitcode {} from command {}\".format(\n            exception.returncode, \" \".join(args)))\n    except OSError:\n        sys.exit(\"Error: couldn't run {}: is {} in PATH?\".format(\" \".join(args), args[0]))\n\n    if opts.verbose and capture:\n        sys.stdout.write(output.decode(default_encoding, \"ignore\"))\n\n    return capture and output.decode(default_encoding, \"ignore\").strip()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef set_executing(on: bool):\n\n    my_thread = threading.current_thread()\n\n    if isinstance(my_thread, threads.CauldronThread):\n        my_thread.is_executing = on", "response": "Sets the current thread's is_executing flag."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_file_contents(source_path: str) -> str:\n\n    open_funcs = [\n        functools.partial(codecs.open, source_path, encoding='utf-8'),\n        
functools.partial(open, source_path, 'r')\n ]\n\n for open_func in open_funcs:\n try:\n with open_func() as f:\n return f.read()\n except Exception:\n pass\n\n return (\n 'raise IOError(\"Unable to load step file at: {}\")'\n .format(source_path)\n )", "response": "Loads the contents of the source file into a string for execution using multiple\n loading methods."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nloading the source for a step file at the given path and renders it in a template.", "response": "def load_step_file(source_path: str) -> str:\n \"\"\"\n Loads the source for a step file at the given path location and then\n renders it in a template to add additional footer data.\n\n The footer is used to force the display to flush the print buffer and\n breathe the step to open things up for resolution. This shouldn't be\n necessary, but it seems there's an async race condition with print\n buffers that is hard to reproduce and so this is in place to fix the\n problem.\n \"\"\"\n\n return templating.render_template(\n template_name='embedded-step.py.txt',\n source_contents=get_file_contents(source_path)\n )"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncreating a artificial module that encompasses code execution for the specified project.", "response": "def create_module(\n project: 'projects.Project',\n step: 'projects.ProjectStep'\n):\n \"\"\"\n Creates an artificial module that will encompass the code execution for\n the specified step. 
The target module is populated with the standard dunder\n attributes like __file__ to simulate the normal way that Python populates\n values when loading a module.\n\n :param project:\n The currently open project.\n :param step:\n The step whose code will be run inside the target_module.\n :return\n The created and populated module for the given step.\n \"\"\"\n\n module_name = step.definition.name.rsplit('.', 1)[0]\n target_module = types.ModuleType(module_name)\n\n dunders = dict(\n __file__=step.source_path,\n __package__='.'.join(\n [project.id.replace('.', '-')] +\n step.filename.rsplit('.', 1)[0].split(os.sep)\n )\n )\n\n for key, value in dunders.items():\n setattr(target_module, key, value)\n\n return target_module"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef run(\n project: 'projects.Project',\n step: 'projects.ProjectStep',\n) -> dict:\n \"\"\"\n Carries out the execution of the step python source file by loading it into\n an artificially created module and then executing that module and returning\n the result.\n\n :param project:\n The currently open project.\n :param step:\n The project step for which the run execution will take place.\n :return:\n A dictionary containing the results of the run execution, which\n indicate whether or not the run was successful. 
If the run failed for\n any reason, the dictionary will contain error information for display.\n \"\"\"\n\n target_module = create_module(project, step)\n source_code = load_step_file(step.source_path)\n\n try:\n code = InspectLoader.source_to_code(source_code, step.source_path)\n except SyntaxError as error:\n return render_syntax_error(project, error)\n\n def exec_test():\n step.test_locals = dict()\n step.test_locals.update(target_module.__dict__)\n exec(code, step.test_locals)\n\n try:\n set_executing(True)\n threads.abort_thread()\n\n if environ.modes.has(environ.modes.TESTING):\n exec_test()\n else:\n exec(code, target_module.__dict__)\n out = {\n 'success': True,\n 'stop_condition': projects.StopCondition(False, False)\n }\n except threads.ThreadAbortError:\n # Raised when a user explicitly aborts the running of the step through\n # a user-interface action.\n out = {\n 'success': False,\n 'stop_condition': projects.StopCondition(True, True)\n }\n except UserAbortError as error:\n # Raised when a user explicitly aborts the running of the step using\n # a cd.step.stop(). 
This behavior should be considered a successful\n # outcome as it was intentional on the part of the user that the step\n # abort running early.\n out = {\n 'success': True,\n 'stop_condition': projects.StopCondition(True, error.halt)\n }\n except Exception as error:\n out = render_error(project, error)\n\n set_executing(False)\n return out", "response": "Runs the python source file of the current project and returns the result of the execution."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef render_syntax_error(\n project: 'projects.Project',\n error: SyntaxError\n) -> dict:\n \"\"\"\n Renders a SyntaxError, which has a shallow, custom stack trace derived\n from the data included in the error, instead of the standard stack trace\n pulled from the exception frames.\n\n :param project:\n Currently open project.\n :param error:\n The SyntaxError to be rendered to html and text for display.\n :return:\n A dictionary containing the error response with rendered display\n messages for both text and html output.\n \"\"\"\n\n return render_error(\n project=project,\n error=error,\n stack=[dict(\n filename=getattr(error, 'filename'),\n location=None,\n line_number=error.lineno,\n line=error.text.rstrip()\n )]\n )", "response": "Renders a SyntaxError to html and text for both text and html output."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nrendering an exception to an error response that includes rendered text and html error messages for display.", "response": "def render_error(\n project: 'projects.Project',\n error: Exception,\n stack: typing.List[dict] = None\n) -> dict:\n \"\"\"\n Renders an Exception to an error response that includes rendered text and\n html error messages for display.\n\n :param project:\n Currently open project.\n :param error:\n The SyntaxError to be rendered to html and text for display.\n :param stack:\n Optionally specify a parsed stack. 
If this value is None the standard\n Cauldron stack frames will be rendered.\n :return:\n A dictionary containing the error response with rendered display\n messages for both text and html output.\n \"\"\"\n\n data = dict(\n type=error.__class__.__name__,\n message='{}'.format(error),\n stack=(\n stack\n if stack is not None else\n render_stack.get_formatted_stack_frame(project)\n )\n )\n\n return dict(\n success=False,\n error=error,\n message=templating.render_template('user-code-error.txt', **data),\n html_message=templating.render_template('user-code-error.html', **data)\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef execute(asynchronous: bool = False):\n r = Response()\n r.update(server=server_runner.get_server_data())\n\n cmd, args = parse_command_args(r)\n if r.failed:\n return flask.jsonify(r.serialize())\n\n try:\n commander.execute(cmd, args, r)\n if not r.thread:\n return flask.jsonify(r.serialize())\n\n if not asynchronous:\n r.thread.join()\n\n server_runner.active_execution_responses[r.thread.uid] = r\n\n # Watch the thread for a bit to see if the command finishes in\n # that time. If it does the command result will be returned directly\n # to the caller. 
Otherwise, a waiting command will be issued\n count = 0\n while count < 5:\n count += 1\n r.thread.join(0.25)\n if not r.thread.is_alive():\n break\n\n if r.thread.is_alive():\n return flask.jsonify(\n Response()\n .update(\n run_log=r.get_thread_log(),\n run_status='running',\n run_uid=r.thread.uid,\n step_changes=server_runner.get_running_step_changes(True),\n server=server_runner.get_server_data()\n )\n .serialize()\n )\n\n del server_runner.active_execution_responses[r.thread.uid]\n r.update(\n run_log=r.get_thread_log(),\n run_status='complete',\n run_multiple_updates=False,\n run_uid=r.thread.uid\n )\n except Exception as err:\n r.fail(\n code='KERNEL_EXECUTION_FAILURE',\n message='Unable to execute command',\n cmd=cmd,\n args=args,\n error=err\n )\n\n return flask.jsonify(r.serialize())", "response": "Executes a command and returns the result of the command."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\naborting the current server execution process.", "response": "def abort():\n \"\"\"...\"\"\"\n uid_list = list(server_runner.active_execution_responses.keys())\n\n while len(uid_list) > 0:\n uid = uid_list.pop()\n\n response = server_runner.active_execution_responses.get(uid)\n if not response:\n continue\n\n try:\n del server_runner.active_execution_responses[uid]\n except Exception:\n pass\n\n if not response.thread or not response.thread.is_alive():\n continue\n\n # Try to stop the thread gracefully\n response.thread.abort = True\n response.thread.join(2)\n\n try:\n # Force stop the thread explicitly\n if response.thread.is_alive():\n response.thread.abort_running()\n except Exception:\n pass\n\n project = cd.project.internal_project\n\n if project and project.current_step:\n step = project.current_step\n if step.is_running:\n step.is_running = False\n step.progress = 0\n step.progress_message = None\n step.dumps()\n\n # Make sure this is called prior to printing response information to\n # the console or that will come 
along for the ride\n redirection.disable(step)\n\n # Make sure no print redirection will survive the abort process regardless\n # of whether an active step was found or not (prevents race conditions)\n redirection.restore_default_configuration()\n\n project_data = project.kernel_serialize() if project else None\n\n return flask.jsonify(\n Response()\n .update(project=project_data)\n .serialize()\n )"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nremove the specified key from the cauldron configs object if the key exists.", "response": "def remove_key(key: str, persists: bool = True):\n \"\"\"\n Removes the specified key from the cauldron configs if the key exists\n\n :param key:\n The key in the cauldron configs object to remove\n :param persists:\n \"\"\"\n\n environ.configs.remove(key, include_persists=persists)\n environ.configs.save()\n\n environ.log(\n '[REMOVED]: \"{}\" from configuration settings'.format(key)\n )"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef set_key(key: str, value: typing.List[str], persists: bool = True):\n\n if key.endswith('_path') or key.endswith('_paths'):\n for index in range(len(value)):\n value[index] = environ.paths.clean(value[index])\n\n if len(value) == 1:\n value = value[0]\n\n environ.configs.put(**{key: value}, persists=persists)\n environ.configs.save()\n environ.log('[SET]: \"{}\" to \"{}\"'.format(key, value))", "response": "Sets the specified key to the specified value."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef save(self) -> 'Configuration':\n\n data = self.load().persistent\n if data is None:\n return self\n\n directory = os.path.dirname(self._source_path)\n if not os.path.exists(directory):\n os.makedirs(directory)\n\n path = self._source_path\n with open(path, 'w+') as f:\n json.dump(data, f)\n\n return self", "response": "Saves the current configuration settings object to 
the current user's home\n    directory."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _get_userinfo(self):\n        if not hasattr(self, \"_userinfo\"):\n            userinfo = {\n                \"name\": self.user_name,\n                \"email\": self.user_email,\n                \"url\": self.user_url\n            }\n            if self.user_id:\n                u = self.user\n                if u.email:\n                    userinfo[\"email\"] = u.email\n\n                # If the user has a full name, use that for the user name.\n                # However, a given user_name overrides the raw user.username,\n                # so only use that if this comment has no associated name.\n                if u.get_full_name():\n                    userinfo[\"name\"] = self.user.get_full_name()\n                elif not self.user_name:\n                    userinfo[\"name\"] = u.get_username()\n            self._userinfo = userinfo\n        return self._userinfo", "response": "Get a dictionary that pulls together information about the poster\n        safely for both authenticated and non-authenticated comments."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_as_text(self):\n        d = {\n            'user': self.user or self.name,\n            'date': self.submit_date,\n            'comment': self.comment,\n            'domain': self.site.domain,\n            'url': self.get_absolute_url()\n        }\n        return _('Posted by %(user)s at %(date)s\\n\\n%(comment)s\\n\\nhttp://%(domain)s%(url)s') % d", "response": "Return this comment as plain text. Useful for emails."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef entry_from_dict(\n    data: dict\n) -> typing.Union[FILE_WRITE_ENTRY, FILE_COPY_ENTRY]:\n    \"\"\"\n    Converts the given data dictionary into either a file write or file copy\n    entry depending on the keys in the dictionary. 
The dictionary should\n contain either ('path', 'contents') keys for file write entries or\n ('source', 'destination') keys for file copy entries.\n \"\"\"\n if 'contents' in data:\n return FILE_WRITE_ENTRY(**data)\n return FILE_COPY_ENTRY(**data)", "response": "Converts the given dictionary into a file write or file copy entry depending on the keys in the dictionary."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn a list of files that are deployed to the virtual machine.", "response": "def deploy(files_list: typing.List[tuple]):\n \"\"\"\n Iterates through the specified files_list and copies or writes each entry\n depending on whether its a file copy entry or a file write entry.\n\n :param files_list:\n A list of file write entries and file copy entries\n \"\"\"\n def deploy_entry(entry):\n if not entry:\n return\n\n if hasattr(entry, 'source') and hasattr(entry, 'destination'):\n return copy(entry)\n\n if hasattr(entry, 'path') and hasattr(entry, 'contents'):\n return write(entry)\n\n raise ValueError('Unrecognized deployment entry {}'.format(entry))\n\n return [deploy_entry(f) for f in files_list]"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncreate the output directory for the specified output path.", "response": "def make_output_directory(output_path: str) -> str:\n \"\"\"\n Creates the parent directory or directories for the specified output path\n if they do not already exist to prevent incomplete directory path errors\n during copying/writing operations.\n\n :param output_path:\n The path of the destination file or directory that will be written.\n :return:\n The absolute path to the output directory that was created if missing\n or already existed.\n \"\"\"\n\n output_directory = os.path.dirname(environ.paths.clean(output_path))\n\n if not os.path.exists(output_directory):\n os.makedirs(output_directory)\n\n return output_directory"} {"SOURCE": "codesearchnet", 
"instruction": "Create a Python 3 function for\ncopying the specified file to its destination location.", "response": "def copy(copy_entry: FILE_COPY_ENTRY):\n \"\"\"\n Copies the specified file from its source location to its destination\n location.\n \"\"\"\n source_path = environ.paths.clean(copy_entry.source)\n output_path = environ.paths.clean(copy_entry.destination)\n copier = shutil.copy2 if os.path.isfile(source_path) else shutil.copytree\n make_output_directory(output_path)\n\n for i in range(3):\n try:\n copier(source_path, output_path)\n return\n except Exception:\n time.sleep(0.5)\n\n raise IOError('Unable to copy \"{source}\" to \"{destination}\"'.format(\n source=source_path,\n destination=output_path\n ))"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nwrites the contents of the specified file entry to its destination path.", "response": "def write(write_entry: FILE_WRITE_ENTRY):\n \"\"\"\n Writes the contents of the specified file entry to its destination path.\n \"\"\"\n output_path = environ.paths.clean(write_entry.path)\n make_output_directory(output_path)\n writer.write_file(output_path, write_entry.contents)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef enable(step: 'projects.ProjectStep'):\n\n # Prevent anything unusual from causing buffer issues\n restore_default_configuration()\n\n stdout_interceptor = RedirectBuffer(sys.stdout)\n sys.stdout = stdout_interceptor\n step.report.stdout_interceptor = stdout_interceptor\n\n stderr_interceptor = RedirectBuffer(sys.stderr)\n sys.stderr = stderr_interceptor\n step.report.stderr_interceptor = stderr_interceptor\n\n stdout_interceptor.active = True\n stderr_interceptor.active = True", "response": "Enable print equivalent function that also writes the output to the current project page."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following 
Python 3 function does\ndef restore_default_configuration():\n\n    def restore(target, default_value):\n        if target == default_value:\n            return default_value\n\n        if not isinstance(target, RedirectBuffer):\n            return target\n\n        try:\n            target.active = False\n            target.close()\n        except Exception:\n            pass\n\n        return default_value\n\n    sys.stdout = restore(sys.stdout, sys.__stdout__)\n    sys.stderr = restore(sys.stderr, sys.__stderr__)", "response": "Restores sys.stdout and sys.stderr streams to their default\nvalues without regard to what step has currently overridden their values."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncreate a FILE_WRITE_ENTRY for the rendered HTML file for the given project.", "response": "def create(\n    project: 'projects.Project',\n    destination_directory,\n    destination_filename: str = None\n) -> file_io.FILE_WRITE_ENTRY:\n    \"\"\"\n    Creates a FILE_WRITE_ENTRY for the rendered HTML file for the given\n    project that will be saved in the destination directory with the given\n    filename.\n\n    :param project:\n        The project for which the rendered HTML file will be created\n    :param destination_directory:\n        The absolute path to the folder where the HTML file will be saved\n    :param destination_filename:\n        The name of the HTML file to be written in the destination directory.\n        Defaults to the project uuid.\n    :return:\n        A FILE_WRITE_ENTRY for the project's HTML file output\n    \"\"\"\n\n    template_path = environ.paths.resources('web', 'project.html')\n    with open(template_path, 'r') as f:\n        dom = f.read()\n\n    dom = dom.replace(\n        '<!-- CAULDRON:EXPORT -->',\n        templating.render_template(\n            'notebook-script-header.html',\n            uuid=project.uuid,\n            version=environ.version\n        )\n    )\n\n    filename = (\n        destination_filename\n        if destination_filename else\n        '{}.html'.format(project.uuid)\n    )\n\n    html_out_path = os.path.join(destination_directory, filename)\n\n    return file_io.FILE_WRITE_ENTRY(\n        path=html_out_path,\n        contents=dom\n    
)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef run_container():\n\n os.chdir(my_directory)\n\n cmd = [\n 'docker', 'run',\n '-it', '--rm',\n '-v', '{}:/cauldron'.format(my_directory),\n '-p', '5010:5010',\n 'cauldron_app',\n '/bin/bash'\n ]\n\n return os.system(' '.join(cmd))", "response": "Runs an interactive container"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nexecute the Cauldron container command", "response": "def run():\n \"\"\"Execute the Cauldron container command\"\"\"\n\n command = sys.argv[1].strip().lower()\n print('[COMMAND]:', command)\n\n if command == 'test':\n return run_test()\n elif command == 'build':\n return run_build()\n elif command == 'up':\n return run_container()\n elif command == 'serve':\n import cauldron\n cauldron.run_server(port=5010, public=True)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ninspect the data and structure of the source dictionary object and adds the results to the display.", "response": "def inspect(source: dict):\n \"\"\"\n Inspects the data and structure of the source dictionary object and\n adds the results to the display for viewing.\n\n :param source:\n A dictionary object to be inspected.\n :return:\n \"\"\"\n r = _get_report()\n r.append_body(render.inspect(source))"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef header(header_text: str, level: int = 1, expand_full: bool = False):\n r = _get_report()\n r.append_body(render.header(\n header_text,\n level=level,\n expand_full=expand_full\n ))", "response": "Adds a text header to the display with the specified level."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nadds text to the available language.", "response": "def text(value: str, preformatted: bool = False):\n \"\"\"\n Adds text to the display. 
If the text is not preformatted, it will be\n displayed in paragraph format. Preformatted text will be displayed\n inside a pre tag with a monospace font.\n\n :param value:\n The text to display.\n :param preformatted:\n Whether or not to preserve the whitespace display of the text.\n \"\"\"\n if preformatted:\n result = render_texts.preformatted_text(value)\n else:\n result = render_texts.text(value)\n r = _get_report()\n r.append_body(result)\n r.stdout_interceptor.write_source(\n '{}\\n'.format(textwrap.dedent(value))\n )"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nrender the specified source string or source file using markdown and adds the resulting HTML to the notebook display.", "response": "def markdown(\n source: str = None,\n source_path: str = None,\n preserve_lines: bool = False,\n font_size: float = None,\n **kwargs\n):\n \"\"\"\n Renders the specified source string or source file using markdown and \n adds the resulting HTML to the notebook display.\n\n :param source:\n A markdown formatted string.\n :param source_path:\n A file containing markdown text.\n :param preserve_lines:\n If True, all line breaks will be treated as hard breaks. Use this\n for pre-formatted markdown text where newlines should be retained\n during rendering.\n :param font_size:\n Specifies a relative font size adjustment. The default value is 1.0,\n which preserves the inherited font size values. 
Set it to a value\n below 1.0 for smaller font-size rendering and greater than 1.0 for\n larger font size rendering.\n :param kwargs:\n Any variable replacements to make within the string using Jinja2\n templating syntax.\n \"\"\"\n r = _get_report()\n\n result = render_texts.markdown(\n source=source,\n source_path=source_path,\n preserve_lines=preserve_lines,\n font_size=font_size,\n **kwargs\n )\n r.library_includes += result['library_includes']\n\n r.append_body(result['body'])\n r.stdout_interceptor.write_source(\n '{}\\n'.format(textwrap.dedent(result['rendered']))\n )"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nadding the specified data to the output display window with the specified key and value.", "response": "def json(**kwargs):\n \"\"\"\n Adds the specified data to the the output display window with the\n specified key. This allows the user to make available arbitrary\n JSON-compatible data to the display for runtime use.\n\n :param kwargs:\n Each keyword argument is added to the CD.data object with the\n specified key and value.\n \"\"\"\n r = _get_report()\n r.append_body(render.json(**kwargs))\n r.stdout_interceptor.write_source(\n '{}\\n'.format(_json_io.dumps(kwargs, indent=2))\n )"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef plotly(\n data: typing.Union[dict, list] = None,\n layout: dict = None,\n scale: float = 0.5,\n figure: dict = None,\n static: bool = False\n):\n \"\"\"\n Creates a Plotly plot in the display with the specified data and\n layout.\n\n :param data:\n The Plotly trace data to be plotted.\n :param layout:\n The layout data used for the plot.\n :param scale:\n The display scale with units of fractional screen height. A value\n of 0.5 constrains the output to a maximum height equal to half the\n height of browser window when viewed. 
Values below 1.0 are usually\n recommended so the entire output can be viewed without scrolling.\n :param figure:\n In cases where you need to create a figure instead of separate data\n and layout information, you can pass the figure here and leave the\n data and layout values as None.\n :param static:\n If true, the plot will be created without interactivity.\n This is useful if you have a lot of plots in your notebook.\n \"\"\"\n r = _get_report()\n\n if not figure and not isinstance(data, (list, tuple)):\n data = [data]\n\n if 'plotly' not in r.library_includes:\n r.library_includes.append('plotly')\n\n r.append_body(render.plotly(\n data=data,\n layout=layout,\n scale=scale,\n figure=figure,\n static=static\n ))\n r.stdout_interceptor.write_source('[ADDED] Plotly plot\\n')", "response": "Creates a Plotly trace in the notebook with the specified data and layout."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef table(\n data_frame,\n scale: float = 0.7,\n include_index: bool = False,\n max_rows: int = 500\n):\n \"\"\"\n Adds the specified data frame to the display in a nicely formatted\n scrolling table.\n\n :param data_frame:\n The pandas data frame to be rendered to a table.\n :param scale:\n The display scale with units of fractional screen height. A value\n of 0.5 constrains the output to a maximum height equal to half the\n height of browser window when viewed. Values below 1.0 are usually\n recommended so the entire output can be viewed without scrolling.\n :param include_index:\n Whether or not the index column should be included in the displayed\n output. The index column is not included by default because it is\n often unnecessary extra information in the display of the data.\n :param max_rows:\n This argument exists to prevent accidentally writing very large data\n frames to a table, which can cause the notebook display to become\n sluggish or unresponsive. 
If you want to display large tables, you need\n only increase the value of this argument.\n \"\"\"\n r = _get_report()\n r.append_body(render.table(\n data_frame=data_frame,\n scale=scale,\n include_index=include_index,\n max_rows=max_rows\n ))\n r.stdout_interceptor.write_source('[ADDED] Table\\n')", "response": "Adds the specified data frame to the notebook display in a nicely formatted table."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef svg(svg_dom: str, filename: str = None):\n r = _get_report()\n r.append_body(render.svg(svg_dom))\n r.stdout_interceptor.write_source('[ADDED] SVG\\n')\n\n if not filename:\n return\n\n if not filename.endswith('.svg'):\n filename += '.svg'\n\n r.files[filename] = svg_dom", "response": "Adds the SVG string to the display."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nrenders the specified Jinja2 template to HTML and adds the output to the display.", "response": "def jinja(path: str, **kwargs):\n \"\"\"\n Renders the specified Jinja2 template to HTML and adds the output to the\n display.\n\n :param path:\n The fully-qualified path to the template to be rendered.\n :param kwargs:\n Any keyword arguments that will be use as variable replacements within\n the template.\n \"\"\"\n r = _get_report()\n r.append_body(render.jinja(path, **kwargs))\n r.stdout_interceptor.write_source('[ADDED] Jinja2 rendered HTML\\n')"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nadds the specified number of lines of whitespace to the current page.", "response": "def whitespace(lines: float = 1.0):\n \"\"\"\n Adds the specified number of lines of whitespace.\n\n :param lines:\n The number of lines of whitespace to show.\n \"\"\"\n r = _get_report()\n r.append_body(render.whitespace(lines))\n r.stdout_interceptor.write_source('\\n')"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script 
for\nadding an image to the notebook display.", "response": "def image(\n filename: str,\n width: int = None,\n height: int = None,\n justify: str = 'left'\n):\n \"\"\"\n Adds an image to the display. The image must be located within the\n assets directory of the Cauldron notebook's folder.\n\n :param filename:\n Name of the file within the assets directory,\n :param width:\n Optional width in pixels for the image.\n :param height:\n Optional height in pixels for the image.\n :param justify:\n One of 'left', 'center' or 'right', which specifies how the image\n is horizontally justified within the notebook display.\n \"\"\"\n r = _get_report()\n path = '/'.join(['reports', r.project.uuid, 'latest', 'assets', filename])\n r.append_body(render.image(path, width, height, justify))\n r.stdout_interceptor.write_source('[ADDED] Image\\n')"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef html(dom: str):\n r = _get_report()\n r.append_body(render.html(dom))\n r.stdout_interceptor.write_source('[ADDED] HTML\\n')", "response": "Adds a valid HTML snippet to the display."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nadds a list of the shared variables currently stored in the project workspace.", "response": "def workspace(show_values: bool = True, show_types: bool = True):\n \"\"\"\n Adds a list of the shared variables currently stored in the project\n workspace.\n\n :param show_values:\n When true the values for each variable will be shown in addition to\n their name.\n :param show_types:\n When true the data types for each shared variable will be shown in\n addition to their name.\n \"\"\"\n r = _get_report()\n\n data = {}\n for key, value in r.project.shared.fetch(None).items():\n if key.startswith('__cauldron_'):\n continue\n data[key] = value\n\n r.append_body(render.status(data, values=show_values, types=show_types))"} {"SOURCE": "codesearchnet", 
"instruction": "Can you generate the documentation for the following Python 3 function\ndef pyplot(\n figure=None,\n scale: float = 0.8,\n clear: bool = True,\n aspect_ratio: typing.Union[list, tuple] = None\n):\n \"\"\"\n Creates a matplotlib plot in the display for the specified figure. The size\n of the plot is determined automatically to best fit the notebook.\n\n :param figure:\n The matplotlib figure to plot. If omitted, the currently active\n figure will be used.\n :param scale:\n The display scale with units of fractional screen height. A value\n of 0.5 constrains the output to a maximum height equal to half the\n height of browser window when viewed. Values below 1.0 are usually\n recommended so the entire output can be viewed without scrolling.\n :param clear:\n Clears the figure after it has been rendered. This is useful to\n prevent persisting old plot data between repeated runs of the\n project files. This can be disabled if the plot is going to be\n used later in the project files.\n :param aspect_ratio:\n The aspect ratio for the displayed plot as a two-element list or\n tuple. The first element is the width and the second element the\n height. The units are \"inches,\" which is an important consideration\n for the display of text within the figure. 
If no aspect ratio is\n specified, the currently assigned values to the plot will be used\n instead.\n \"\"\"\n r = _get_report()\n r.append_body(render_plots.pyplot(\n figure,\n scale=scale,\n clear=clear,\n aspect_ratio=aspect_ratio\n ))\n r.stdout_interceptor.write_source('[ADDED] PyPlot plot\\n')", "response": "Creates a matplotlib plot in the display for the specified figure."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nadds a Bokeh plot object to the notebook display.", "response": "def bokeh(model, scale: float = 0.7, responsive: bool = True):\n \"\"\"\n Adds a Bokeh plot object to the notebook display.\n\n :param model:\n The plot object to be added to the notebook display.\n :param scale:\n How tall the plot should be in the notebook as a fraction of screen\n height. A number between 0.1 and 1.0. The default value is 0.7.\n :param responsive:\n Whether or not the plot should responsively scale to fill the width\n of the notebook. The default is True.\n \"\"\"\n r = _get_report()\n\n if 'bokeh' not in r.library_includes:\n r.library_includes.append('bokeh')\n\n r.append_body(render_plots.bokeh_plot(\n model=model,\n scale=scale,\n responsive=responsive\n ))\n r.stdout_interceptor.write_source('[ADDED] Bokeh plot\\n')"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef listing(\n source: list,\n ordered: bool = False,\n expand_full: bool = False\n):\n \"\"\"\n An unordered or ordered list of the specified *source* iterable where\n each element is converted to a string representation for display.\n\n :param source:\n The iterable to display as a list.\n :param ordered:\n Whether or not the list should be ordered. 
If False, which is the\n default, an unordered bulleted list is created.\n :param expand_full:\n Whether or not the list should expand to fill the screen horizontally.\n When defaulted to False, the list is constrained to the center view\n area of the screen along with other text. This can be useful to keep\n lists aligned with the text flow.\n \"\"\"\n r = _get_report()\n r.append_body(render.listing(\n source=source,\n ordered=ordered,\n expand_full=expand_full\n ))\n r.stdout_interceptor.write_source('[ADDED] Listing\\n')", "response": "Displays a list of the specified source iterable."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ndisplays a multi-column list of the specified source iterable.", "response": "def list_grid(\n source: list,\n expand_full: bool = False,\n column_count: int = 2,\n row_spacing: float = 1.0\n):\n \"\"\"\n A multi-column list of the specified *source* iterable where\n each element is converted to a string representation for display.\n\n :param source:\n The iterable to display as a list.\n :param expand_full:\n Whether or not the list should expand to fill the screen horizontally.\n When defaulted to False, the list is constrained to the center view\n area of the screen along with other text. This can be useful to keep\n lists aligned with the text flow.\n :param column_count:\n The number of columns to display. The specified count is applicable to\n high-definition screens. For lower definition screens the actual count\n displayed may be fewer as the layout responds to less available\n horizontal screen space.\n :param row_spacing:\n The number of lines of whitespace to include between each row in the\n grid. 
Set this to 0 for tightly displayed lists.\n \"\"\"\n r = _get_report()\n r.append_body(render.list_grid(\n source=source,\n expand_full=expand_full,\n column_count=column_count,\n row_spacing=row_spacing\n ))\n r.stdout_interceptor.write_source('[ADDED] List grid\\n')"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef latex(source: str):\n r = _get_report()\n if 'katex' not in r.library_includes:\n r.library_includes.append('katex')\n\n r.append_body(render_texts.latex(source.replace('@', '\\\\')))\n r.stdout_interceptor.write_source('[ADDED] Latex equation\\n')", "response": "Adds a mathematical equation in latex math - mode to the display."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef head(source, count: int = 5):\n r = _get_report()\n r.append_body(render_texts.head(source, count=count))\n r.stdout_interceptor.write_source('[ADDED] Head\\n')", "response": "Displays a specified number of elements in a source object of many\n different possible types."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ndisplay the status of the current task in the current project.", "response": "def status(\n message: str = None,\n progress: float = None,\n section_message: str = None,\n section_progress: float = None,\n):\n \"\"\"\n Updates the status display, which is only visible while a step is running.\n This is useful for providing feedback and information during long-running\n steps.\n\n A section progress is also available for cases where long running tasks\n consist of multiple tasks and you want to display sub-progress messages\n within the context of the larger status.\n\n Note: this is only supported when running in the Cauldron desktop\n application.\n\n :param message:\n The status message you want to display. If left blank the previously\n set status message will be retained. 
Should you desire to remove an\n existing message, specify a blank string for this argument.\n :param progress:\n A number between zero and one that indicates the overall progress for\n the current status. If no value is specified, the previously assigned\n progress will be retained.\n :param section_message:\n The status message you want to display for a particular task within a\n long-running step. If left blank the previously set section message\n will be retained. Should you desire to remove an existing message,\n specify a blank string for this argument.\n :param section_progress:\n A number between zero and one that indicates the progress for the\n current section status. If no value is specified, the previously\n assigned section progress value will be retained.\n \"\"\"\n environ.abort_thread()\n step = _cd.project.get_internal_project().current_step\n\n if message is not None:\n step.progress_message = message\n if progress is not None:\n step.progress = max(0.0, min(1.0, progress))\n if section_message is not None:\n step.sub_progress_message = section_message\n if section_progress is not None:\n step.sub_progress = section_progress"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef code_block(\n code: str = None,\n path: str = None,\n language_id: str = None,\n title: str = None,\n caption: str = None\n):\n \"\"\"\n Adds a block of syntax highlighted code to the display from either\n the supplied code argument, or from the code file specified\n by the path argument.\n\n :param code:\n A string containing the code to be added to the display\n :param path:\n A path to a file containing code to be added to the display\n :param language_id:\n The language identifier that indicates what language should\n be used by the syntax highlighter. 
Valid values are any of the\n languages supported by the Pygments highlighter.\n :param title:\n If specified, the code block will include a title bar with the\n value of this argument\n :param caption:\n If specified, the code block will include a caption box below the code\n that contains the value of this argument\n \"\"\"\n environ.abort_thread()\n r = _get_report()\n r.append_body(render.code_block(\n block=code,\n path=path,\n language=language_id,\n title=title,\n caption=caption\n ))\n r.stdout_interceptor.write_source('{}\\n'.format(code))", "response": "Adds a code block to the display from either the code file specified by the path argument or the language identifier that is used by the Pygments highlighter."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef elapsed():\n environ.abort_thread()\n step = _cd.project.get_internal_project().current_step\n r = _get_report()\n r.append_body(render.elapsed_time(step.elapsed_time))\n\n result = '[ELAPSED]: {}\\n'.format(timedelta(seconds=step.elapsed_time))\n r.stdout_interceptor.write_source(result)", "response": "Displays the elapsed time since the current step started running."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nretrieve the loaded module with the given name or returns None if no such module has been loaded.", "response": "def get_module(name: str) -> typing.Union[types.ModuleType, None]:\n \"\"\"\n Retrieves the loaded module for the given module name or returns None if\n no such module has been loaded.\n\n :param name:\n The name of the module to be retrieved\n :return:\n Either the loaded module with the specified name, or None if no such\n module has been imported.\n \"\"\"\n return sys.modules.get(name)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_module_name(module: types.ModuleType) -> str:\n try:\n return module.__spec__.name\n 
except AttributeError:\n return module.__name__", "response": "Returns the name of the specified module in a bunch of ways to prevent incompatibility issues."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef do_reload(module: types.ModuleType, newer_than: int) -> bool:\n path = getattr(module, '__file__')\n directory = getattr(module, '__path__', [None])[0]\n\n if path is None and directory:\n path = os.path.join(directory, '__init__.py')\n\n last_modified = os.path.getmtime(path)\n\n if last_modified < newer_than:\n return False\n\n try:\n importlib.reload(module)\n return True\n except ImportError:\n return False", "response": "Executes the reload of the specified module if the source file that it was loaded from was modified more recently than the specified time."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreloads all imported children of the specified parent module object.", "response": "def reload_children(parent_module: types.ModuleType, newer_than: int) -> bool:\n \"\"\"\n Reloads all imported children of the specified parent module object\n\n :param parent_module:\n A module object whose children should be refreshed if their\n currently loaded versions are out of date.\n :param newer_than:\n An integer time in seconds for comparison. 
Any children modules that\n were modified more recently than this time will be reloaded.\n :return:\n Whether or not any children were reloaded\n \"\"\"\n if not hasattr(parent_module, '__path__'):\n return False\n\n parent_name = get_module_name(parent_module)\n\n children = filter(\n lambda item: item[0].startswith(parent_name),\n sys.modules.items()\n )\n\n return any([do_reload(item[1], newer_than) for item in children])"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreload the specified module.", "response": "def reload_module(\n module: typing.Union[str, types.ModuleType],\n recursive: bool,\n force: bool\n) -> bool:\n \"\"\"\n Reloads the specified module, which can either be a module object or\n a string name of a module. Will not reload a module that has not been\n imported\n\n :param module:\n A module object or string module name that should be refreshed if its\n currently loaded version is out of date or the action is forced.\n :param recursive:\n When true, any imported sub-modules of this module will also be\n refreshed if they have been updated.\n :param force:\n When true, all modules will be refreshed even if it doesn't appear\n that they have been updated.\n :return:\n \"\"\"\n\n if isinstance(module, str):\n module = get_module(module)\n\n if module is None or not isinstance(module, types.ModuleType):\n return False\n\n try:\n step = session.project.get_internal_project().current_step\n modified = step.last_modified if step else None\n except AttributeError:\n modified = 0\n\n if modified is None:\n # If the step has no modified time it hasn't been run yet and\n # a reload won't be needed\n return False\n\n newer_than = modified if not force and modified else 0\n\n if recursive:\n children_reloaded = reload_children(module, newer_than)\n else:\n children_reloaded = False\n reloaded = do_reload(module, newer_than)\n\n return reloaded or children_reloaded"} {"SOURCE": "codesearchnet", "instruction": "How would you 
code a function in Python 3 to\nreload the specified modules and return True if any of them were refreshed, False otherwise.", "response": "def refresh(\n *modules: typing.Union[str, types.ModuleType],\n recursive: bool = False,\n force: bool = False\n) -> bool:\n \"\"\"\n Checks the specified module or modules for changes and reloads them if\n they have been changed since the module was first imported or last\n refreshed.\n\n :param modules:\n One or more module objects that should be refreshed if their\n currently loaded versions are out of date. The package name for\n modules can also be used.\n :param recursive:\n When true, any imported sub-modules of this module will also be\n refreshed if they have been updated.\n :param force:\n When true, all modules will be refreshed even if it doesn't appear\n that they have been updated.\n :return:\n True or False depending on whether any modules were refreshed by this\n call.\n \"\"\"\n\n out = []\n for module in modules:\n out.append(reload_module(module, recursive, force))\n\n return any(out)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn the index of the \"*args\" parameter if such a parameter exists in the function arguments or -1 otherwise.", "response": "def get_args_index(target) -> int:\n \"\"\"\n Returns the index of the \"*args\" parameter if such a parameter exists in\n the function arguments or -1 otherwise.\n\n :param target:\n The target function for which the args index should be determined\n :return:\n The arguments index if it exists or -1 if not\n \"\"\"\n\n code = target.__code__\n\n if not bool(code.co_flags & inspect.CO_VARARGS):\n return -1\n\n return code.co_argcount + code.co_kwonlyargcount"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_kwargs_index(target) -> int:\n\n code = target.__code__\n\n if not bool(code.co_flags & inspect.CO_VARKEYWORDS):\n return -1\n\n return (\n code.co_argcount +\n 
code.co_kwonlyargcount +\n (1 if code.co_flags & inspect.CO_VARARGS else 0)\n )", "response": "Returns the index of the kwargs parameter if such a parameter exists in the\n function arguments, or -1 otherwise."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_arg_names(target) -> typing.List[str]:\n\n code = getattr(target, '__code__')\n\n if code is None:\n return []\n\n arg_count = code.co_argcount\n kwarg_count = code.co_kwonlyargcount\n args_index = get_args_index(target)\n kwargs_index = get_kwargs_index(target)\n\n arg_names = list(code.co_varnames[:arg_count])\n if args_index != -1:\n arg_names.append(code.co_varnames[args_index])\n arg_names += list(code.co_varnames[arg_count:(arg_count + kwarg_count)])\n if kwargs_index != -1:\n arg_names.append(code.co_varnames[kwargs_index])\n if len(arg_names) > 0 and arg_names[0] in ['self', 'cls']:\n arg_count -= 1\n arg_names.pop(0)\n\n return arg_names", "response": "Gets the list of named arguments for the target function"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ncreating a dictionary representation of the parameter with the given name and description.", "response": "def create_argument(target, name, description: str = '') -> dict:\n \"\"\"\n Creates a dictionary representation of the parameter\n\n :param target:\n The function object in which the parameter resides\n :param name:\n The name of the parameter\n :param description:\n The documentation description for the parameter\n \"\"\"\n\n arg_names = get_arg_names(target)\n annotations = getattr(target, '__annotations__', {})\n\n out = dict(\n name=name,\n index=arg_names.index(name),\n description=description,\n type=conversions.arg_type_to_string(annotations.get(name, 'Any'))\n )\n out.update(get_optional_data(target, name, arg_names))\n return out"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef explode_line(argument_line: str) -> typing.Tuple[str, str]:\n\n parts 
= tuple(argument_line.split(' ', 1)[-1].split(':', 1))\n return parts if len(parts) > 1 else (parts[0], '')", "response": "Explode a line of a parameter line into a tuple containing the parameter name and the description parsed\n from the given argument line."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef update_base_image(path: str):\n with open(path, 'r') as file_handle:\n contents = file_handle.read()\n\n regex = re.compile('from\\s+(?P<source>[^\\s]+)', re.IGNORECASE)\n matches = regex.findall(contents)\n\n if not matches:\n return None\n\n match = matches[0]\n os.system('docker pull {}'.format(match))\n return match", "response": "Pulls the latest version of the base image"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nbuild the container from the specified docker file path", "response": "def build(path: str) -> dict:\n \"\"\"Builds the container from the specified docker file path\"\"\"\n update_base_image(path)\n match = file_pattern.search(os.path.basename(path))\n build_id = match.group('id')\n tags = [\n '{}:{}-{}'.format(HUB_PREFIX, version, build_id),\n '{}:latest-{}'.format(HUB_PREFIX, build_id),\n '{}:current-{}'.format(HUB_PREFIX, build_id)\n ]\n if build_id == 'standard':\n tags.append('{}:latest'.format(HUB_PREFIX))\n\n command = 'docker build --file \"{}\" {} .'.format(\n path,\n ' '.join(['-t {}'.format(t) for t in tags])\n )\n\n print('[BUILDING]:', build_id)\n os.system(command)\n\n return dict(\n id=build_id,\n path=path,\n command=command,\n tags=tags\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef publish(build_entry: dict):\n for tag in build_entry['tags']:\n print('[PUSHING]:', tag)\n os.system('docker push {}'.format(tag))", "response": "Publishes the specified build entry to docker hub"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following 
Python 3 function\ndef parse() -> dict:\n parser = ArgumentParser()\n parser.add_argument('-p', '--publish', action='store_true', default=False)\n return vars(parser.parse_args())", "response": "Parse command line arguments"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef run():\n args = parse()\n build_results = [build(p) for p in glob.iglob(glob_path)]\n\n if not args['publish']:\n return\n\n for entry in build_results:\n publish(entry)", "response": "Execute the build process"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ndisplays the elapsed time since the current step started running.", "response": "def elapsed_time(seconds: float) -> str:\n \"\"\"Displays the elapsed time since the current step started running.\"\"\"\n environ.abort_thread()\n parts = (\n '{}'.format(timedelta(seconds=seconds))\n .rsplit('.', 1)\n )\n hours, minutes, seconds = parts[0].split(':')\n return templating.render_template(\n 'elapsed_time.html',\n hours=hours.zfill(2),\n minutes=minutes.zfill(2),\n seconds=seconds.zfill(2),\n microseconds=parts[-1] if len(parts) > 1 else ''\n )"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef image(\n rendered_path: str,\n width: int = None,\n height: int = None,\n justify: str = None\n) -> str:\n \"\"\"Renders an image block\"\"\"\n environ.abort_thread()\n return templating.render_template(\n 'image.html',\n path=rendered_path,\n width=width,\n height=height,\n justification=(justify or 'left').lower()\n )", "response": "Renders an image block"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns a dictionary containing information about the Cauldron and its Python interpreter and Python environment.", "response": "def get_environment_info() -> dict:\n \"\"\"\n Information about Cauldron and its Python interpreter.\n\n 
:return:\n A dictionary containing information about the Cauldron and its\n Python environment. This information is useful when providing feedback\n and bug reports.\n \"\"\"\n data = _environ.systems.get_system_data()\n data['cauldron'] = _environ.package_settings.copy()\n return data"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef run_server(port=5010, debug=False, **kwargs):\n from cauldron.cli.server import run\n run.execute(port=port, debug=debug, **kwargs)", "response": "Run the cauldron http server used to interact with cauldron from a remote\n host."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nrun a single project as a single command directly within the current Python interpreter.", "response": "def run_project(\n project_directory: str,\n output_directory: str = None,\n logging_path: str = None,\n reader_path: str = None,\n reload_project_libraries: bool = False,\n **kwargs\n) -> ExecutionResult:\n \"\"\"\n Runs a project as a single command directly within the current Python\n interpreter.\n\n :param project_directory:\n The fully-qualified path to the directory where the Cauldron project is\n located\n :param output_directory:\n The fully-qualified path to the directory where the results will be\n written. All of the results files will be written within this\n directory. If the directory does not exist, it will be created.\n :param logging_path:\n The fully-qualified path to a file that will be used for logging. If a\n directory is specified instead of a file, a file will be created using\n the default filename of cauldron_run.log. If a file already exists at\n that location it will be removed and a new file created in its place.\n :param reader_path:\n Specifies a path where a reader file will be saved after the project\n has finished running. If no path is specified, no reader file will be\n saved. 
If the path is a directory, a reader file will be saved in that\n directory with the project name as the file name.\n :param reload_project_libraries:\n Whether or not to reload all project libraries prior to execution of\n the project. By default this is False, but can be enabled in cases\n where refreshing the project libraries before execution is needed.\n :param kwargs:\n Any variables to be available in the cauldron.shared object during\n execution of the project can be specified here as keyword arguments.\n :return:\n A response object that contains information about the run process\n and the shared data from the final state of the project.\n \"\"\"\n from cauldron.cli import batcher\n return batcher.run_project(\n project_directory=project_directory,\n output_directory=output_directory,\n log_path=logging_path,\n reader_path=reader_path,\n reload_project_libraries=reload_project_libraries,\n shared_data=kwargs\n )"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef join(self, timeout: float = None) -> bool:\n try:\n self.thread.join(timeout)\n return True\n except AttributeError:\n return False", "response": "Joins the thread associated with the response."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef deserialize(serial_data: dict) -> 'Response':\n\n r = Response(serial_data.get('id'))\n r.data.update(serial_data.get('data', {}))\n r.ended = serial_data.get('ended', False)\n r.failed = not serial_data.get('success', True)\n\n def load_messages(message_type: str):\n messages = [\n ResponseMessage(**data)\n for data in serial_data.get(message_type, [])\n ]\n setattr(r, message_type, getattr(r, message_type) + messages)\n\n load_messages('errors')\n load_messages('warnings')\n load_messages('messages')\n\n return r", "response": "Converts a serialized dictionary response to a Response object."} {"SOURCE": "codesearchnet", "instruction": "Can you 
implement a function in Python 3 that\nspecifies whether or not the thread is running.", "response": "def is_running(self) -> bool:\n \"\"\"Specifies whether or not the thread is running\"\"\"\n return (\n self._has_started and\n self.is_alive() or\n self.completed_at is None or\n (datetime.utcnow() - self.completed_at).total_seconds() < 0.5\n )"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nexecuting the Cauldron command in a thread and returns the result.", "response": "def run(self):\n \"\"\"\n Executes the Cauldron command in a thread to prevent long-running\n computations from locking the main Cauldron thread, which is needed\n to serve and print status information.\n \"\"\"\n\n async def run_command():\n try:\n self.result = self.command(\n context=self.context,\n **self.kwargs\n )\n except Exception as error:\n self.exception = error\n print(error)\n import traceback\n traceback.print_exc()\n import sys\n self.context.response.fail(\n code='COMMAND_EXECUTION_ERROR',\n message='Failed to execute command due to internal error',\n error=error\n ).console(\n whitespace=1\n )\n\n self._has_started = True\n self._loop = asyncio.new_event_loop()\n self._loop.run_until_complete(run_command())\n self._loop.close()\n self._loop = None\n self.completed_at = datetime.utcnow()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef abort_running(self) -> bool:\n\n if not self._loop:\n return False\n\n try:\n self._loop.stop()\n return True\n except Exception:\n return False\n finally:\n self.completed_at = datetime.utcnow()", "response": "Aborts the running command by stopping the event loop."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nfetch the arguments for the current Flask application request", "response": "def from_request(request=None) -> dict:\n \"\"\" Fetches the arguments for the current Flask application request \"\"\"\n\n request = 
request if request else flask_request\n\n try:\n json_args = request.get_json(silent=True)\n except Exception:\n json_args = None\n\n try:\n get_args = request.values\n except Exception:\n get_args = None\n\n arg_sources = list(filter(\n lambda arg: arg is not None,\n [json_args, get_args, {}]\n ))\n\n return arg_sources[0]"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef print_rule(rule, feature_names=None, category_names=None, label=\"label\", support=None):\n # type: (Rule, List[Union[str, None]], List[Union[List[str], None]], str, List[int]) -> str\n \"\"\"\n print the rule in a nice way\n :param rule: An instance of Rule\n :param feature_names: a list of n_features feature names,\n :param category_names: the names of the categories of each feature.\n :param label: Can take two values, 'label': just print the label as output, or 'prob': print the prob distribution.\n :param support: optional values of the data that support the rule\n :return: str\n \"\"\"\n pred_label = np.argmax(rule.output)\n if label == \"label\":\n output = str(pred_label) + \" ({})\".format(rule.output[pred_label])\n elif label == \"prob\":\n output = \"[{}]\".format(\", \".join([\"{:.4f}\".format(prob) for prob in rule.output]))\n else:\n raise ValueError(\"Unknown label {}\".format(label))\n output = \"{}: \".format(label) + output\n\n default = rule.is_default()\n if default:\n s = \"DEFAULT \" + output\n else:\n if feature_names is None:\n _feature_names = [\"X\" + str(idx) for idx, _ in rule.clauses]\n else:\n _feature_names = [feature_names[idx] for idx, _ in rule.clauses]\n\n categories = []\n for feature_idx, category in rule.clauses:\n if category_names is None:\n _category_names = None\n else:\n _category_names = category_names[feature_idx]\n if _category_names is None:\n categories.append(\" = \" + str(category))\n else:\n categories.append(\" in \" + str(_category_names[category]))\n\n s = \"IF \"\n # conditions\n 
conditions = [\"({}{})\".format(feature, category) for feature, category in zip(_feature_names, categories)]\n s += \" AND \".join(conditions)\n # results\n s += \" THEN \" + output\n\n if support is not None:\n support_list = [(\"+\" if i == pred_label else \"-\") + str(supp) for i, supp in enumerate(support)]\n s += \" [\" + \"/\".join(support_list) + \"]\"\n return s", "response": "Print a rule in a nice way."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef rule_str2rule(rule_str, prob):\n # type: (str, float) -> Rule\n \"\"\"\n A helper function that converts the resulting string returned from C function to the Rule object\n\n :param rule_str: a string representing the rule\n :param prob: the output probability\n :return: a Rule object\n \"\"\"\n if rule_str == \"default\":\n return Rule([], prob)\n\n raw_rules = rule_str[1:-1].split(\",\")\n clauses = []\n for raw_rule in raw_rules:\n idx = raw_rule.find(\"=\")\n if idx == -1:\n raise ValueError(\"No \\\"=\\\" find in the rule!\")\n clauses.append(Clause(int(raw_rule[1:idx]), int(raw_rule[(idx + 1):])))\n return Rule(clauses, prob)", "response": "A helper function that converts the resulting string returned from C function to the Rule object\n "} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef is_satisfied(self, x_cat):\n satisfied = []\n if self.is_default():\n return np.ones(x_cat.shape[0], dtype=bool)\n for idx, cat in self.clauses:\n satisfied.append(x_cat[:, idx] == cat)\n return reduce(np.logical_and, satisfied)", "response": "Returns a 2D array of pure categorical data representing whether the rule is fired by each input data."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef fit(self, x, y):\n verbose = self.verbose\n # Create temporary files\n data_file = tempfile.NamedTemporaryFile(\"w+b\", delete=False)\n label_file = 
tempfile.NamedTemporaryFile(\"w+b\", delete=False)\n\n start = time.time()\n raw_rules = categorical2pysbrl_data(x, y, data_file.name, label_file.name, supp=self.min_support,\n zmin=self.min_rule_len, zmax=self.max_rule_len, method=self.fim_method)\n if verbose > 1:\n print(\"Info: sbrl data files saved to %s and %s temporarily\" % (data_file.name, label_file.name))\n data_file.close()\n label_file.close()\n cat_time = time.time() - start\n if verbose:\n print(\"Info: time for rule mining: %.4fs\" % cat_time)\n n_labels = int(np.max(y)) + 1\n start = time.time()\n _model = train_sbrl(data_file.name, label_file.name, self.lambda_, eta=self.eta,\n max_iters=self.iters, n_chains=self.n_chains, seed=self.seed,\n alpha=self.alpha, verbose=verbose)\n train_time = time.time() - start\n if verbose:\n print(\"Info: training time: %.4fs\" % train_time)\n\n # update model parameters\n self._n_classes = n_labels\n self._n_features = x.shape[1]\n\n # convert the raw parameters to rules\n self.from_raw(_model[0], _model[1], raw_rules)\n self._supports = self.compute_support(x, y)\n\n # Close the temp files\n os.unlink(data_file.name)\n os.unlink(label_file.name)", "response": "Fits the SBrl model on the given data and returns the model parameters and labels."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef from_raw(self, rule_ids, outputs, raw_rules):\n self._rule_pool = [([], [])] + raw_rules\n self._rule_list = []\n for i, idx in enumerate(rule_ids):\n rule = Rule([Clause(f, c) for f, c in zip(*self._rule_pool[idx])], outputs[i])\n self._rule_list.append(rule)\n # self._rule_list.append(rule_str2rule(_rule_name, outputs[i]))\n self._rule_ids = rule_ids\n self._rule_outputs = outputs", "response": "A helper function that converts the results returned from C function\n to the equivalent Python object."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncomputes the 
support for the rules.", "response": "def compute_support(self, x, y):\n # type: (np.ndarray, np.ndarray) -> np.ndarray\n \"\"\"\n Calculate the support for the rules.\n The support of each rule is a list of `n_classes` integers: [l1, l2, ...].\n Each integer represents the number of data of label i that is caught by this rule\n :param x: 2D np.ndarray (n_instances, n_features) should be categorical data, must be of type int\n :param y: 1D np.ndarray (n_instances, ) labels\n :return: a np.ndarray of shape (n_rules, n_classes)\n \"\"\"\n caught_matrix = self.caught_matrix(x)\n if np.sum(caught_matrix.astype(np.int)) != x.shape[0]:\n raise RuntimeError(\"The sum of the support should equal to the number of instances!\")\n support_summary = np.zeros((self.n_rules, self.n_classes), dtype=np.int)\n for i, support in enumerate(caught_matrix):\n support_labels = y[support]\n unique_labels, unique_counts = np.unique(support_labels, return_counts=True)\n if len(unique_labels) > 0 and np.max(unique_labels) > support_summary.shape[1]:\n # There occurs labels that have not seen in training\n pad_len = np.max(unique_labels) - support_summary.shape[1]\n support_summary = np.hstack((support_summary, np.zeros((self.n_rules, pad_len), dtype=np.int)))\n support_summary[i, unique_labels] = unique_counts\n return support_summary"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef caught_matrix(self, x):\n # type: (np.ndarray) -> np.ndarray\n \"\"\"\n compute the caught matrix of x\n Each rule has an array of bools, showing whether each instances is caught by this rule\n :param x: 2D np.ndarray (n_instances, n_features) should be categorical data, must be of type int\n :return:\n a bool np.ndarray of shape (n_rules, n_instances)\n \"\"\"\n un_satisfied = np.ones((x.shape[0],), dtype=np.bool)\n supports = np.zeros((self.n_rules, x.shape[0]), dtype=np.bool)\n for i, rule in enumerate(self._rule_list):\n is_satisfied = rule.is_satisfied(x)\n 
satisfied = np.logical_and(is_satisfied, un_satisfied)\n # marking new satisfied instances as satisfied\n un_satisfied = np.logical_xor(satisfied, un_satisfied)\n supports[i, :] = satisfied\n return supports", "response": "compute the caught matrix of x"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncompute the decision path of the rule list on x", "response": "def decision_path(self, x):\n # type: (np.ndarray) -> np.ndarray\n \"\"\"\n compute the decision path of the rule list on x\n :param x: x should be already transformed\n :return:\n return a np.ndarray of shape [n_rules, n_instances] of type bool,\n representing whether an instance has consulted a rule or not\n \"\"\"\n un_satisfied = np.ones([x.shape[0]], dtype=np.bool)\n paths = np.zeros((self.n_rules, x.shape[0]), dtype=np.bool)\n for i, rule in enumerate(self._rule_list):\n paths[i, :] = un_satisfied\n satisfied = rule.is_satisfied(x)\n # marking new satisfied instances as satisfied\n un_satisfied = np.logical_and(np.logical_not(satisfied), un_satisfied)\n return paths"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef put(self, *args, **kwargs) -> 'SharedCache':\n\n environ.abort_thread()\n\n index = 0\n while index < (len(args) - 1):\n key = args[index]\n value = args[index + 1]\n self._shared_cache_data[key] = value\n index += 2\n\n for key, value in kwargs.items():\n if value is None and key in self._shared_cache_data:\n del self._shared_cache_data[key]\n else:\n self._shared_cache_data[key] = value\n\n return self", "response": "Adds one or more variables to the cache."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns a tuple containing multiple values from the cache specified by the keys arguments", "response": "def grab(\n self,\n *keys: typing.List[str],\n default_value=None\n ) -> typing.Tuple:\n \"\"\"\n Returns a tuple containing 
multiple values from the cache specified by\n the keys arguments\n\n :param keys:\n One or more variable names stored in the cache that should be\n returned by the grab function. The order of these arguments is\n preserved by the returned tuple.\n :param default_value:\n If one or more of the keys is not found within the cache, this\n value will be returned as the missing value.\n :return:\n A tuple containing values for each of the keys specified in the\n arguments\n \"\"\"\n\n return tuple([self.fetch(k, default_value) for k in keys])"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nretrieve the value of the specified variable from the cache.", "response": "def fetch(self, key: typing.Union[str, None], default_value=None):\n \"\"\"\n Retrieves the value of the specified variable from the cache\n\n :param key:\n The name of the variable for which the value should be returned\n :param default_value:\n The value to return if the variable does not exist in the cache\n :return:\n The value of the specified key if it exists in the cache or the\n default_value if it does not\n \"\"\"\n\n environ.abort_thread()\n\n if key is None:\n return self._shared_cache_data\n\n return self._shared_cache_data.get(key, default_value)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ntrain an SBRL rule list using the SBRL model.", "response": "def train_sbrl(data_file, label_file, lambda_=20, eta=2, max_iters=300000, n_chains=20, alpha=1, seed=None, verbose=0):\n \"\"\"\n The basic training function of the scalable Bayesian rule list.\n Users are encouraged to use SBRL instead of this function.\n It takes the paths of the pre-processed data and label files as input,\n and returns the parameters of the trained rule list.\n\n Check pysbrl.utils:categorical2pysbrl_data to see how to convert categorical data to the required format\n :param data_file: The data file\n :param label_file: The label file\n :param lambda_: A hyper
parameter, the prior representing the expected length of the rule list\n :param eta: A hyper parameter, the prior representing the expected length of each rule\n :param max_iters: The maximum number of iterations of the algorithm\n :param n_chains: The number of Markov chains to run\n :param alpha: The prior of the output probability distribution, see the paper for more detail.\n :param seed: The random seed; None means no fixed seed\n :param verbose: The verbosity level\n :return: A tuple of (`rule_ids`, `outputs`, `rule_strings`)\n `rule_ids`: the list of ids of rules\n `outputs`: the outputs matrix (prob distribution as a vector per rule)\n `rule_strings`: the whole list of rules in the format of strings like `u'{c2=x,c4=o,c5=b}'`.\n\n \"\"\"\n if isinstance(alpha, int):\n alphas = np.array([alpha], dtype=np.int32)\n elif isinstance(alpha, list):\n for a in alpha:\n assert isinstance(a, int)\n alphas = np.array(alpha, dtype=np.int32)\n else:\n raise ValueError('the argument alpha can only be int or List[int]')\n if seed is None:\n seed = -1\n if not os.path.isfile(data_file):\n raise FileNotFoundError('data file %s does not exist!' % data_file)\n if not os.path.isfile(label_file):\n raise FileNotFoundError('label file %s does not exist!' 
% label_file)\n return _train(data_file, label_file, lambda_, eta, max_iters, n_chains, alphas, seed, verbose)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef parse(self, line):\n tree = list(self.parser.raw_parse(line))[0]\n tree = tree[0]\n return tree", "response": "Parses a line into a tree object"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef pack_chunk(source_data: bytes) -> str:\n if not source_data:\n return ''\n\n chunk_compressed = zlib.compress(source_data)\n return binascii.b2a_base64(chunk_compressed).decode('utf-8')", "response": "Compresses the binary source data with zlib and packs it into a base64-encoded string for transmission."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef unpack_chunk(chunk_data: str) -> bytes:\n\n if not chunk_data:\n return b''\n\n chunk_compressed = binascii.a2b_base64(chunk_data.encode('utf-8'))\n return zlib.decompress(chunk_compressed)", "response": "Unpacks previously packed chunk data back into the original\n bytes representation."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ndetermining the number of chunks needed to send the file for the given chunk size", "response": "def get_file_chunk_count(\n file_path: str,\n chunk_size: int = DEFAULT_CHUNK_SIZE\n) -> int:\n \"\"\"\n Determines the number of chunks necessary to send the file for the given\n chunk size\n\n :param file_path:\n The absolute path to the file that will be synchronized in chunks\n :param chunk_size:\n The maximum size of each chunk in bytes\n :return:\n The number of chunks necessary to send the entire contents of the\n specified file for the given chunk size\n \"\"\"\n if not os.path.exists(file_path):\n return 0\n\n file_size = os.path.getsize(file_path)\n return max(1, int(math.ceil(file_size / chunk_size)))"} {"SOURCE": 
"codesearchnet", "instruction": "Create a Python 3 function for\nreading the specified file in chunks and returns a generator where each returned chunk is a base64 encoded string for sync transmission", "response": "def read_file_chunks(\n file_path: str,\n chunk_size: int = DEFAULT_CHUNK_SIZE\n) -> bytes:\n \"\"\"\n Reads the specified file in chunks and returns a generator where\n each returned chunk is a compressed base64 encoded string for sync\n transmission\n\n :param file_path:\n The path to the file to read in chunks\n :param chunk_size:\n The size, in bytes, of each chunk. The final chunk will be less than\n or equal to this size as the remainder.\n \"\"\"\n chunk_count = get_file_chunk_count(file_path, chunk_size)\n\n if chunk_count < 1:\n return ''\n\n with open(file_path, mode='rb') as fp:\n for chunk_index in range(chunk_count):\n source = fp.read(chunk_size)\n chunk = pack_chunk(source)\n yield chunk"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nwrite the specified chunk to the given file path.", "response": "def write_file_chunk(\n file_path: str,\n packed_chunk: str,\n append: bool = True,\n offset: int = -1\n):\n \"\"\"\n Write or append the specified chunk data to the given file path, unpacking\n the chunk before writing. If the file does not yet exist, it will be\n created. Set the append argument to False if you do not want the chunk\n to be appended to an existing file.\n\n :param file_path:\n The file where the chunk will be written or appended\n :param packed_chunk:\n The packed chunk data to write to the file. It will be unpacked before\n the file is written.\n :param append:\n Whether or not the chunk should be appended to the existing file. If\n False the chunk data will overwrite the existing file.\n :param offset:\n The byte offset in the file where the chunk should be written.\n If the value is less than zero, the chunk will be written or appended\n based on the `append` argument. 
Note that if you indicate an append\n write mode and an offset, the mode will be forced to write instead of\n append.\n \"\"\"\n mode = 'ab' if append else 'wb'\n contents = unpack_chunk(packed_chunk)\n writer.write_file(file_path, contents, mode=mode, offset=offset)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef add_output_path(path: str = None) -> str:\n cleaned = paths.clean(path or os.getcwd())\n if cleaned not in _logging_paths:\n _logging_paths.append(cleaned)\n return cleaned", "response": "Adds the specified path to the output logging paths if it is not already in the list of paths."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nremove the specified path from the output logging paths if it is in the listed paths.", "response": "def remove_output_path(path: str = None) -> str:\n \"\"\"\n Removes the specified path from the output logging paths if it is\n in the listed paths.\n\n :param path:\n The path to remove from the logging output paths. 
If the path is empty\n or no path is given, the current working directory will be used\n instead.\n \"\"\"\n cleaned = paths.clean(path or os.getcwd())\n if cleaned in _logging_paths:\n _logging_paths.remove(cleaned)\n return cleaned"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nlog a message to the console.", "response": "def log(\n message: typing.Union[str, typing.List[str]],\n whitespace: int = 0,\n whitespace_top: int = 0,\n whitespace_bottom: int = 0,\n indent_by: int = 0,\n trace: bool = True,\n file_path: str = None,\n append_to_file: bool = True,\n **kwargs\n) -> str:\n \"\"\"\n Logs a message to the console with formatting support beyond a simple\n print statement or logger statement.\n\n :param message:\n The primary log message for the entry\n :param whitespace:\n The number of lines of whitespace to append to the beginning and end\n of the log message when printed to the console\n :param whitespace_top:\n The number of lines of whitespace to append to the beginning only of\n the log message when printed to the console. If whitespace_top and\n whitespace are both specified, the larger of the two values will be\n used.\n :param whitespace_bottom:\n The number of lines of whitespace to append to the end of the log\n message when printed to the console. 
If whitespace_bottom and\n whitespace are both specified, the larger of the two values will be\n used.\n :param indent_by:\n The number of spaces that each line of text should be indented\n :param trace:\n Whether or not to trace the output to the console\n :param file_path:\n A path to a logging file where the output should be written\n :param append_to_file:\n Whether or not the log entry should be overwritten or appended to the\n log file specified in the file_path argument\n :param kwargs:\n Additional key-value pairs that will be appended to the message as\n 'key: value' lines\n \"\"\"\n m = add_to_message(message)\n for key, value in kwargs.items():\n m.append('{key}: {value}'.format(key=key, value=value))\n\n pre_whitespace = int(max(whitespace, whitespace_top))\n post_whitespace = int(max(whitespace, whitespace_bottom))\n\n if pre_whitespace:\n m.insert(0, max(0, pre_whitespace - 1) * '\\n')\n if post_whitespace:\n m.append(max(0, post_whitespace - 1) * '\\n')\n\n message = indent('\\n'.join(m), ' ' * indent_by)\n raw(\n message=message,\n trace=trace,\n file_path=file_path,\n append_to_file=append_to_file\n )\n return message"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nadd data to the message object", "response": "def add_to_message(data, indent_level=0) -> list:\n \"\"\"Adds data to the message object\"\"\"\n message = []\n\n if isinstance(data, str):\n message.append(indent(\n dedent(data.strip('\\n')).strip(),\n indent_level * ' '\n ))\n return message\n\n for line in data:\n offset = 0 if isinstance(line, str) else 1\n message += add_to_message(line, indent_level + offset)\n return message"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nadding the specified mode identifier to the list of currently active modes and returning a copy of the currently active modes list.", "response": "def add(mode_id: str) -> list:\n \"\"\"\n Adds the specified mode identifier to the list of active modes and returns\n a copy of the currently active modes list.\n \"\"\"\n\n if not has(mode_id):\n 
_current_modes.append(mode_id)\n return _current_modes.copy()"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef remove(mode_id: str) -> bool:\n\n had_mode = has(mode_id)\n\n if had_mode:\n _current_modes.remove(mode_id)\n\n return had_mode", "response": "Removes the specified mode identifier from the currently active modes and returns True if the operation was carried out."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncloning the redis client to be slowlog - compatible", "response": "def clone(client):\n '''Clone the redis client to be slowlog-compatible'''\n kwargs = client.redis.connection_pool.connection_kwargs\n kwargs['parser_class'] = redis.connection.PythonParser\n pool = redis.connection.ConnectionPool(**kwargs)\n return redis.Redis(connection_pool=pool)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef start(self):\n '''Get ready for a profiling run'''\n self._configs = self._client.config_get('slow-*')\n self._client.config_set('slowlog-max-len', 100000)\n self._client.config_set('slowlog-log-slower-than', 0)\n self._client.execute_command('slowlog', 'reset')", "response": "Get ready for a profiling run"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef stop(self):\n '''Set everything back to normal and collect our data'''\n for key, value in self._configs.items():\n self._client.config_set(key, value)\n logs = self._client.execute_command('slowlog', 'get', 100000)\n current = {\n 'name': None, 'accumulated': defaultdict(list)\n }\n for _, _, duration, request in logs:\n command = request[0]\n if command == 'slowlog':\n continue\n if 'eval' in command.lower():\n subcommand = request[3]\n self._timings['qless-%s' % subcommand].append(duration)\n if current['name']:\n if current['name'] not in self._commands:\n 
self._commands[current['name']] = defaultdict(list)\n for key, values in current['accumulated'].items():\n self._commands[current['name']][key].extend(values)\n current = {\n 'name': subcommand, 'accumulated': defaultdict(list)\n }\n else:\n self._timings[command].append(duration)\n if current['name']:\n current['accumulated'][command].append(duration)\n # Include the last\n if current['name']:\n if current['name'] not in self._commands:\n self._commands[current['name']] = defaultdict(list)\n for key, values in current['accumulated'].items():\n self._commands[current['name']][key].extend(values)", "response": "Stop the slowlog and collect our data"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nprinting the results of this profiling", "response": "def display(self):\n '''Print the results of this profiling'''\n self.pretty(self._timings, 'Raw Redis Commands')\n print()\n for key, value in self._commands.items():\n self.pretty(value, 'Qless \"%s\" Command' % key)\n print()"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\npopulating the sub parser with the shell arguments", "response": "def add_shell_action(sub_parser: ArgumentParser) -> ArgumentParser:\n \"\"\"Populates the sub parser with the shell arguments\"\"\"\n\n sub_parser.add_argument(\n '-p', '--project',\n dest='project_directory',\n type=str,\n default=None\n )\n\n sub_parser.add_argument(\n '-l', '--log',\n dest='logging_path',\n type=str,\n default=None\n )\n\n sub_parser.add_argument(\n '-o', '--output',\n dest='output_directory',\n type=str,\n default=None\n )\n\n sub_parser.add_argument(\n '-s', '--shared',\n dest='shared_data_path',\n type=str,\n default=None\n )\n\n return sub_parser"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef parse(args: list = None) -> dict:\n\n parser = ArgumentParser(description='Cauldron command')\n parser.add_argument(\n 'command',\n nargs='?',\n 
default='shell',\n help='The Cauldron command action to execute'\n )\n\n parser.add_argument(\n '-v', '--version',\n dest='show_version_info',\n default=False,\n action='store_true',\n help='show Cauldron version and exit'\n )\n\n sub_parsers = parser.add_subparsers(\n dest='command',\n title='Sub-Command Actions',\n description='The actions you can execute with the cauldron command',\n )\n\n sub_parsers.add_parser('shell', aliases=['version'])\n add_shell_action(sub_parsers.add_parser('shell'))\n add_kernel_action(sub_parsers.add_parser('kernel', aliases=['serve']))\n\n arguments = vars(parser.parse_args(args=args))\n arguments['parser'] = parser\n return arguments", "response": "Parses the command line arguments and returns a dictionary containing the parsed arguments."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef running(self, offset=0, count=25):\n '''Return all the currently-running jobs'''\n return self.client('jobs', 'running', self.name, offset, count)", "response": "Return all currently - running jobs"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning all currently - stalled jobs", "response": "def stalled(self, offset=0, count=25):\n '''Return all the currently-stalled jobs'''\n return self.client('jobs', 'stalled', self.name, offset, count)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn all currently - scheduled jobs", "response": "def scheduled(self, offset=0, count=25):\n '''Return all the currently-scheduled jobs'''\n return self.client('jobs', 'scheduled', self.name, offset, count)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef depends(self, offset=0, count=25):\n '''Return all the currently dependent jobs'''\n return self.client('jobs', 'depends', self.name, offset, count)", "response": "Return all the currently dependent jobs"} {"SOURCE": "codesearchnet", 
"instruction": "Can you implement a function in Python 3 that\nreturns all the recurring jobs", "response": "def recurring(self, offset=0, count=25):\n '''Return all the recurring jobs'''\n return self.client('jobs', 'recurring', self.name, offset, count)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn a string representative of the class", "response": "def class_string(self, klass):\n '''Return a string representative of the class'''\n if isinstance(klass, string_types):\n return klass\n return klass.__module__ + '.' + klass.__name__"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nputs a job into the provided queue.", "response": "def put(self, klass, data, priority=None, tags=None, delay=None,\n retries=None, jid=None, depends=None):\n '''Either create a new job in the provided queue with the provided\n attributes, or move that job into that queue. If the job is being\n serviced by a worker, subsequent attempts by that worker to either\n `heartbeat` or `complete` the job should fail and return `false`.\n\n The `priority` argument should be negative to be run sooner rather\n than later, and positive if it's less important. 
The `tags` argument\n should be a JSON array of the tags associated with the instance and\n the `valid after` argument should be in how many seconds the instance\n should be considered actionable.'''\n return self.client('put', self.worker_name,\n self.name,\n jid or uuid.uuid4().hex,\n self.class_string(klass),\n json.dumps(data),\n delay or 0,\n 'priority', priority or 0,\n 'tags', json.dumps(tags or []),\n 'retries', retries or 5,\n 'depends', json.dumps(depends or [])\n )"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef recur(self, klass, data, interval, offset=0, priority=None, tags=None,\n retries=None, jid=None):\n '''Place a recurring job in this queue'''\n return self.client('recur', self.name,\n jid or uuid.uuid4().hex,\n self.class_string(klass),\n json.dumps(data),\n 'interval', interval, offset,\n 'priority', priority or 0,\n 'tags', json.dumps(tags or []),\n 'retries', retries or 5\n )", "response": "Place a recurring job in this queue"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\npass in the queue from which to pull items the current time the current time when the locks for these items should expire and the number of items to be popped off.", "response": "def pop(self, count=None):\n '''Passing in the queue from which to pull items, the current time,\n when the locks for these returned items should expire, and the number\n of items to be popped off.'''\n results = [Job(self.client, **job) for job in json.loads(\n self.client('pop', self.name, self.worker_name, count or 1))]\n if count is None:\n return (len(results) and results[0]) or None\n return results"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef stats(self, date=None):\n '''Return the current statistics for a given queue on a given date.\n The results are returned are a JSON blob::\n\n {\n 'total' : ...,\n 'mean' : ...,\n 'variance' : ...,\n 
'histogram': [\n ...\n ]\n }\n\n The histogram's data points are at the second resolution for the first\n minute, the minute resolution for the first hour, the 15-minute\n resolution for the first day, the hour resolution for the first 3\n days, and then at the day resolution from there on out. The\n `histogram` key is a list of those values.'''\n return json.loads(\n self.client('stats', self.name, date or repr(time.time())))", "response": "Return the current statistics for a given queue on a given date."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef run(\n project: 'projects.Project',\n step: 'projects.ProjectStep'\n) -> dict:\n \"\"\"\n Runs the markdown file and renders the contents to the notebook display\n\n :param project:\n :param step:\n :return:\n A run response dictionary containing\n \"\"\"\n\n with open(step.source_path, 'r') as f:\n code = f.read()\n\n try:\n cauldron.display.markdown(code, **project.shared.fetch(None))\n return {'success': True}\n except Exception as err:\n return dict(\n success=False,\n html_message=templating.render_template(\n 'markdown-error.html',\n error=err\n )\n )", "response": "Runs the markdown file and renders the contents to the notebook display"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn a dictionary containing all of the available Cauldron commands currently registered.", "response": "def fetch(reload: bool = False) -> dict:\n \"\"\"\n Returns a dictionary containing all of the available Cauldron commands\n currently registered. This data is cached for performance. 
Unless the\n reload argument is set to True, the command list will only be generated\n the first time this function is called.\n\n :param reload:\n Whether or not to disregard any cached command data and generate a\n new dictionary of available commands.\n\n :return:\n A dictionary where the keys are the name of the commands and the\n values are the modules for the command .\n \"\"\"\n\n if len(list(COMMANDS.keys())) > 0 and not reload:\n return COMMANDS\n\n COMMANDS.clear()\n\n for key in dir(commands):\n e = getattr(commands, key)\n if e and hasattr(e, 'NAME') and hasattr(e, 'DESCRIPTION'):\n COMMANDS[e.NAME] = e\n\n return dict(COMMANDS.items())"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_command_from_module(\n command_module,\n remote_connection: environ.RemoteConnection\n):\n \"\"\"\n Returns the execution command to use for the specified module, which may\n be different depending upon remote connection\n\n :param command_module:\n :param remote_connection:\n :return:\n \"\"\"\n\n use_remote = (\n remote_connection.active and\n hasattr(command_module, 'execute_remote')\n )\n return (\n command_module.execute_remote\n if use_remote else\n command_module.execute\n )", "response": "Returns the execution command to use for the specified module"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef show_help(command_name: str = None, raw_args: str = '') -> Response:\n\n response = Response()\n\n cmds = fetch()\n if command_name and command_name in cmds:\n parser, result = parse.get_parser(\n cmds[command_name],\n parse.explode_line(raw_args),\n dict()\n )\n\n if parser is not None:\n out = parser.format_help()\n return response.notify(\n kind='INFO',\n code='COMMAND_DESCRIPTION'\n ).kernel(\n commands=out\n ).console(\n out,\n whitespace=1\n ).response\n\n environ.log_header('Available Commands')\n response.consume(print_module_help())\n\n return response.fail(\n 
code='NO_SUCH_COMMAND',\n message='Failed to show command help for \"{}\"'.format(command_name)\n ).console(\n \"\"\"\n For more information on the various commands, enter help on the\n specific command:\n\n help [COMMAND]\n \"\"\",\n whitespace_bottom=1\n ).response", "response": "Prints the basic command help to the console"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nimport a class and check its modification time", "response": "def _import(klass):\n '''1) Get a reference to the module\n 2) Check the file that module's imported from\n 3) If that file's been updated, force a reload of that module\n return it'''\n mod = __import__(klass.rpartition('.')[0])\n for segment in klass.split('.')[1:-1]:\n mod = getattr(mod, segment)\n\n # Alright, now check the file associated with it. Note that clases\n # defined in __main__ don't have a __file__ attribute\n if klass not in BaseJob._loaded:\n BaseJob._loaded[klass] = time.time()\n if hasattr(mod, '__file__'):\n try:\n mtime = os.stat(mod.__file__).st_mtime\n if BaseJob._loaded[klass] < mtime:\n mod = reload_module(mod)\n except OSError:\n logger.warn('Could not check modification time of %s',\n mod.__file__)\n\n return getattr(mod, klass.rpartition('.')[2])"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nloading the module containing your class and run the appropriate method.", "response": "def process(self):\n '''Load the module containing your class, and run the appropriate\n method. 
For example, if this job was popped from the queue\n ``testing``, then this would invoke the ``testing`` staticmethod of\n your class.'''\n try:\n method = getattr(self.klass, self.queue_name,\n getattr(self.klass, 'process', None))\n except Exception as exc:\n # We failed to import the module containing this class\n logger.exception('Failed to import %s', self.klass_name)\n return self.fail(self.queue_name + '-' + exc.__class__.__name__,\n 'Failed to import %s' % self.klass_name)\n\n if method:\n if isinstance(method, types.FunctionType):\n try:\n logger.info('Processing %s in %s',\n self.jid, self.queue_name)\n method(self)\n logger.info('Completed %s in %s',\n self.jid, self.queue_name)\n except Exception as exc:\n # Make error type based on exception type\n logger.exception('Failed %s in %s: %s',\n self.jid, self.queue_name, repr(method))\n self.fail(self.queue_name + '-' + exc.__class__.__name__,\n traceback.format_exc())\n else:\n # Or fail with a message to that effect\n logger.error('Failed %s in %s : %s is not static',\n self.jid, self.queue_name, repr(method))\n self.fail(self.queue_name + '-method-type',\n repr(method) + ' is not static')\n else:\n # Or fail with a message to that effect\n logger.error(\n 'Failed %s : %s is missing a method \"%s\" or \"process\"',\n self.jid, self.klass_name, self.queue_name)\n self.fail(self.queue_name + '-method-missing', self.klass_name +\n ' is missing a method \"' + self.queue_name + '\" or \"process\"')"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef move(self, queue, delay=0, depends=None):\n '''Move this job out of its existing state and into another queue. If\n a worker has been given this job, then that worker's attempts to\n heartbeat that job will fail. 
Like ``Queue.put``, this accepts a\n delay, and dependencies'''\n logger.info('Moving %s to %s from %s',\n self.jid, queue, self.queue_name)\n return self.client('put', self.worker_name,\n queue, self.jid, self.klass_name,\n json.dumps(self.data), delay, 'depends', json.dumps(depends or [])\n )", "response": "Move this job into another queue."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef complete(self, nextq=None, delay=None, depends=None):\n '''Turn this job in as complete, optionally advancing it to another\n queue. Like ``Queue.put`` and ``move``, it accepts a delay, and\n dependencies'''\n if nextq:\n logger.info('Advancing %s to %s from %s',\n self.jid, nextq, self.queue_name)\n return self.client('complete', self.jid, self.client.worker_name,\n self.queue_name, json.dumps(self.data), 'next', nextq,\n 'delay', delay or 0, 'depends',\n json.dumps(depends or [])) or False\n else:\n logger.info('Completing %s', self.jid)\n return self.client('complete', self.jid, self.client.worker_name,\n self.queue_name, json.dumps(self.data)) or False", "response": "Turn this job in as complete optionally advancing it to another\n queue. Like Queue. 
put and move, it accepts a delay and dependencies."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef heartbeat(self):\n '''Renew the heartbeat, if possible, and optionally update the job's\n user data.'''\n logger.debug('Heartbeating %s (ttl = %s)', self.jid, self.ttl)\n try:\n self.expires_at = float(self.client('heartbeat', self.jid,\n self.client.worker_name, json.dumps(self.data)) or 0)\n except QlessException:\n raise LostLockException(self.jid)\n logger.debug('Heartbeated %s (ttl = %s)', self.jid, self.ttl)\n return self.expires_at", "response": "Renew the heartbeat if possible and optionally update the job's\n user data."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef fail(self, group, message):\n '''Mark the particular job as failed, with the provided type, and a\n more specific message. By `type`, we mean some phrase that might be\n one of several categorical modes of failure. The `message` is\n something more job-specific, like perhaps a traceback.\n\n This method should __not__ be used to note that a job has been dropped\n or has failed in a transient way. This method __should__ be used to\n note that a job has something really wrong with it that must be\n remedied.\n\n The motivation behind the `type` is so that similar errors can be\n grouped together. Optionally, updated data can be provided for the job.\n A job in any state can be marked as failed. If it has been given to a\n worker as a job, then its subsequent requests to heartbeat or complete\n that job will fail. Failed jobs are kept until they are canceled or\n completed. 
__Returns__ the id of the failed job if successful, or\n `False` on failure.'''\n logger.warn('Failing %s (%s): %s', self.jid, group, message)\n return self.client('fail', self.jid, self.client.worker_name, group,\n message, json.dumps(self.data)) or False", "response": "Mark the particular job as failed and return the id of the failed job if successful, or False on failure."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nretries this job in a little bit in the same queue.", "response": "def retry(self, delay=0, group=None, message=None):\n '''Retry this job in a little bit, in the same queue. This is meant\n for the times when you detect a transient failure yourself'''\n args = ['retry', self.jid, self.queue_name, self.worker_name, delay]\n if group is not None and message is not None:\n args.append(group)\n args.append(message)\n return self.client(*args)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef undepend(self, *args, **kwargs):\n '''Remove specific (or all) job dependencies from this job:\n\n job.remove(jid1, jid2)\n job.remove(all=True)'''\n if kwargs.get('all', False):\n return self.client('depends', self.jid, 'off', 'all') or False\n else:\n return self.client('depends', self.jid, 'off', *args) or False", "response": "Removes specific job dependencies from this job"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncheck to see if the given file path ends with any of the specified file_extensions.", "response": "def has_extension(file_path: str, *args: typing.Tuple[str]) -> bool:\n \"\"\"\n Checks to see if the given file path ends with any of the specified file\n extensions. If a file extension does not begin with a '.' 
it will be added\n automatically\n\n :param file_path:\n The path on which the extensions will be tested for a match\n :param args:\n One or more extensions to test for a match with the file_path argument\n :return:\n Whether or not the file_path argument ended with one or more of the\n specified extensions\n \"\"\"\n\n def add_dot(extension):\n return (\n extension\n if extension.startswith('.') else\n '.{}'.format(extension)\n )\n\n return any([\n file_path.endswith(add_dot(extension))\n for extension in args\n ])"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn the documentation string for the given object.", "response": "def get_docstring(target) -> str:\n \"\"\"\n Retrieves the documentation string from the target object and returns it\n after removing insignificant whitespace\n\n :param target:\n The object for which the doc string should be retrieved\n :return:\n The cleaned documentation string for the target. If no doc string\n exists an empty string will be returned instead.\n \"\"\"\n\n raw = getattr(target, '__doc__')\n\n if raw is None:\n return ''\n\n return textwrap.dedent(raw)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_doc_entries(target: typing.Callable) -> list:\n\n raw = get_docstring(target)\n\n if not raw:\n return []\n\n raw_lines = [\n line.strip()\n for line in raw.replace('\\r', '').split('\\n')\n ]\n\n def compactify(compacted: list, entry: str) -> list:\n chars = entry.strip()\n\n if not chars:\n return compacted\n\n if len(compacted) < 1 or chars.startswith(':'):\n compacted.append(entry.rstrip())\n else:\n compacted[-1] = '{}\\n{}'.format(compacted[-1], entry.rstrip())\n\n return compacted\n\n return [\n textwrap.dedent(block).strip()\n for block in functools.reduce(compactify, raw_lines, [])\n ]", "response": "Returns the lines of documentation from the given target which are formatted by the user."} {"SOURCE": "codesearchnet", 
"instruction": "Create a Python 3 function to\nparse the documentation for a function.", "response": "def parse_function(\n name: str,\n target: typing.Callable\n) -> typing.Union[None, dict]:\n \"\"\"\n Parses the documentation for a function, which is specified by the name of\n the function and the function itself.\n\n :param name:\n Name of the function to parse\n :param target:\n The function to parse into documentation\n :return:\n A dictionary containing documentation for the specified function, or\n None if the target was not a function.\n \"\"\"\n\n if not hasattr(target, '__code__'):\n return None\n\n lines = get_doc_entries(target)\n docs = ' '.join(filter(lambda line: not line.startswith(':'), lines))\n params = parse_params(target, lines)\n returns = parse_returns(target, lines)\n\n return dict(\n name=getattr(target, '__name__'),\n doc=docs,\n params=params,\n returns=returns\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef variable(name: str, target: property) -> typing.Union[None, dict]:\n\n if hasattr(target, 'fget'):\n doc = parse_function(name, target.fget)\n if doc:\n doc['read_only'] = bool(target.fset is None)\n return doc\n\n return dict(\n name=name,\n description=get_docstring(target)\n )", "response": "Returns a dictionary with the value of a variable."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreads the current state of the buffer and returns a string that is the current state of the print buffer.", "response": "def read_all(self) -> str:\n \"\"\"\n Reads the current state of the buffer and returns a string those\n contents\n\n :return:\n A string for the current state of the print buffer contents\n \"\"\"\n try:\n buffered_bytes = self.bytes_buffer.getvalue()\n if buffered_bytes is None:\n return ''\n\n return buffered_bytes.decode(self.source_encoding)\n except Exception as err:\n return 'Redirect Buffer Error: 
{}'.format(err)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef flush_all(self) -> str:\n\n # self.bytes_buffer.seek(0)\n # contents = self.bytes_buffer.read()\n # self.bytes_buffer.truncate(0)\n # self.bytes_buffer.seek(0)\n\n # if contents is None:\n # return ''\n\n contents = self.bytes_buffer.getvalue()\n self.bytes_buffer.truncate(0)\n self.bytes_buffer.seek(0)\n\n return (\n ''\n if not contents else\n contents.decode(self.source_encoding)\n )", "response": "Flushes all the bytes in the buffer and returns the string."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncreating the data object that stores the step information in the notebook results JavaScript file.", "response": "def create_data(step: 'projects.ProjectStep') -> STEP_DATA:\n \"\"\"\n Creates the data object that stores the step information in the notebook\n results JavaScript file.\n\n :param step:\n Project step for which to create the data\n :return:\n Step data tuple containing scaffold data structure for the step output.\n The dictionary must then be populated with data from the step to\n correctly reflect the current state of the step.\n\n This is essentially a \"blank\" step dictionary, which is what the step\n would look like if it had not yet run\n \"\"\"\n\n return STEP_DATA(\n name=step.definition.name,\n status=step.status(),\n has_error=False,\n body=None,\n data=dict(),\n includes=[],\n cauldron_version=list(environ.version_info),\n file_writes=[]\n )"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_cached_data(\n step: 'projects.ProjectStep'\n) -> typing.Union[None, STEP_DATA]:\n \"\"\"\n Attempts to load and return the cached step data for the specified step. 
If\n no cached data exists, or the cached data is corrupt, a None value is\n returned instead.\n\n :param step:\n The step for which the cached data should be loaded\n\n :return:\n Either a step data structure containing the cached step data or None\n if no cached data exists for the step\n \"\"\"\n\n cache_path = step.report.results_cache_path\n if not os.path.exists(cache_path):\n return None\n\n out = create_data(step)\n\n try:\n with open(cache_path, 'r') as f:\n cached_data = json.load(f)\n except Exception:\n return None\n\n file_writes = [\n file_io.entry_from_dict(fw)\n for fw in cached_data['file_writes']\n ]\n\n return out \\\n ._replace(**cached_data) \\\n ._replace(file_writes=file_writes)", "response": "Loads and returns the cached data for the specified step."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ninitialize the logging path for running the project.", "response": "def initialize_logging_path(path: str = None) -> str:\n \"\"\"\n Initializes the logging path for running the project. If no logging path\n is specified, the current directory will be used instead.\n\n :param path:\n Path to initialize for logging. Can be either a path to a file or\n a path to a directory. 
If a directory is specified, the log file\n written will be called \"cauldron_run.log\".\n :return:\n The absolute path to the log file that will be used when this project\n is executed.\n \"\"\"\n path = environ.paths.clean(path if path else '.')\n\n if os.path.isdir(path) and os.path.exists(path):\n path = os.path.join(path, 'cauldron_run.log')\n elif os.path.exists(path):\n os.remove(path)\n\n directory = os.path.dirname(path)\n if not os.path.exists(directory):\n os.makedirs(directory)\n\n return path"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef run_project(\n project_directory: str,\n output_directory: str = None,\n log_path: str = None,\n shared_data: dict = None,\n reader_path: str = None,\n reload_project_libraries: bool = False\n) -> ExecutionResult:\n \"\"\"\n Opens, executes and closes a Cauldron project in a single command in\n production mode (non-interactive).\n\n :param project_directory:\n Directory where the project to run is located\n :param output_directory:\n Directory where the project display data will be saved\n :param log_path:\n Path to a file or directory where logging information will be\n written\n :param shared_data:\n Data to load into the cauldron.shared object prior to executing the\n project\n :param reader_path:\n Specifies a path where a reader file will be saved after the project\n has finished running. If no path is specified, no reader file will be\n saved. If the path is a directory, a reader file will be saved in that\n directory with the project name as the file name.\n :param reload_project_libraries:\n Whether or not to reload all project libraries prior to execution of\n the project. 
By default this is False, but can be enabled in cases\n where refreshing the project libraries before execution is needed.\n :return:\n The response result from the project execution\n \"\"\"\n log_path = initialize_logging_path(log_path)\n logger.add_output_path(log_path)\n\n def on_complete(\n command_response: environ.Response,\n project_data: SharedCache = None,\n message: str = None\n ) -> ExecutionResult:\n environ.modes.remove(environ.modes.SINGLE_RUN)\n if message:\n logger.log(message)\n logger.remove_output_path(log_path)\n return ExecutionResult(\n command_response=command_response,\n project_data=project_data or SharedCache()\n )\n\n environ.modes.add(environ.modes.SINGLE_RUN)\n\n open_response = open_command.execute(\n context=cli.make_command_context(open_command.NAME),\n path=project_directory,\n results_path=output_directory\n )\n if open_response.failed:\n return on_complete(\n command_response=open_response,\n message='[ERROR]: Aborted trying to open project'\n )\n\n project = cauldron.project.get_internal_project()\n project.shared.put(**(shared_data if shared_data is not None else dict()))\n\n commander.preload()\n run_response = run_command.execute(\n context=cli.make_command_context(run_command.NAME),\n skip_library_reload=not reload_project_libraries\n )\n\n project_cache = SharedCache().put(\n **project.shared._shared_cache_data\n )\n\n if run_response.failed:\n return on_complete(\n command_response=run_response,\n project_data=project_cache,\n message='[ERROR]: Aborted trying to run project steps'\n )\n\n if reader_path:\n save_command.execute(\n context=cli.make_command_context(save_command.NAME),\n path=reader_path\n )\n\n close_response = close_command.execute(\n context=cli.make_command_context(close_command.NAME)\n )\n if close_response.failed:\n return on_complete(\n command_response=close_response,\n project_data=project_cache,\n message='[ERROR]: Failed to close project cleanly after run'\n )\n\n return on_complete(\n 
command_response=run_response,\n project_data=project_cache,\n message='Project execution complete'\n )", "response": "Executes a Cauldron project in a single non-interactive run."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nstops all the workers and then waits for them to finish.", "response": "def stop(self, sig=signal.SIGINT):\n '''Stop all the workers, and then wait for them'''\n for cpid in self.sandboxes:\n logger.warn('Stopping %i...' % cpid)\n try:\n os.kill(cpid, sig)\n except OSError: # pragma: no cover\n logger.exception('Error stopping %s...' % cpid)\n\n # While we still have children running, wait for them\n # We edit the dictionary during the loop, so we need to copy its keys\n for cpid in list(self.sandboxes):\n try:\n logger.info('Waiting for %i...' % cpid)\n pid, status = os.waitpid(cpid, 0)\n logger.warn('%i stopped with status %i' % (pid, status >> 8))\n except OSError: # pragma: no cover\n logger.exception('Error waiting for %i...' % cpid)\n finally:\n self.sandboxes.pop(cpid, None)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning a new worker for a child process", "response": "def spawn(self, **kwargs):\n '''Return a new worker for a child process'''\n copy = dict(self.kwargs)\n copy.update(kwargs)\n # Apparently there's an issue with importing gevent in the parent\n # process and then using it in the child. 
This is meant to relieve that\n # problem by allowing `klass` to be specified as a string.\n if isinstance(self.klass, string_types):\n self.klass = util.import_class(self.klass)\n return self.klass(self.queues, self.client, **copy)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nsignaling handler for this process", "response": "def handler(self, signum, frame): # pragma: no cover\n '''Signal handler for this process'''\n if signum in (signal.SIGTERM, signal.SIGINT, signal.SIGQUIT):\n self.stop(signum)\n os._exit(0)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_stack_frames(error_stack: bool = True) -> list:\n\n cauldron_path = environ.paths.package()\n resources_path = environ.paths.resources()\n frames = (\n list(traceback.extract_tb(sys.exc_info()[-1]))\n if error_stack else\n traceback.extract_stack()\n ).copy()\n\n def is_cauldron_code(test_filename: str) -> bool:\n if not test_filename or not test_filename.startswith(cauldron_path):\n return False\n\n if test_filename.startswith(resources_path):\n return False\n\n return True\n\n while len(frames) > 1 and is_cauldron_code(frames[0].filename):\n frames.pop(0)\n\n return frames", "response": "Returns a list of the current stack frames."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef format_stack_frame(stack_frame, project: 'projects.Project') -> dict:\n\n filename = stack_frame.filename\n if filename.startswith(project.source_directory):\n filename = filename[len(project.source_directory) + 1:]\n\n location = stack_frame.name\n if location == '<module>':\n location = None\n\n return dict(\n filename=filename,\n location=location,\n line_number=stack_frame.lineno,\n line=stack_frame.line\n )", "response": "Formats a raw stack frame into a dictionary for templating and enriched with information from the currently open project."} {"SOURCE": "codesearchnet", "instruction": "Here 
you have a function in Python 3, explain what it does\ndef get_formatted_stack_frame(\n project: 'projects.Project',\n error_stack: bool = True\n) -> list:\n \"\"\"\n Returns a list of the stack frames formatted for user display that has\n been enriched by the project-specific data. \n \n :param project:\n The currently open project used to enrich the stack data.\n :param error_stack:\n Whether or not to return the error stack. When True the stack of the\n last exception will be returned. If no such exception exists, an empty\n list will be returned instead. When False the current execution stack\n trace will be returned.\n \"\"\"\n\n return [\n format_stack_frame(f, project)\n for f in get_stack_frames(error_stack=error_stack)\n ]", "response": "Returns a list of the stack frames formatted for user display that has been enriched by the project - specific data."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nconverting the argument type to a string representation of the argument type. Multiple return types are", "response": "def arg_type_to_string(arg_type) -> str:\n \"\"\"\n Converts the argument type to a string\n\n :param arg_type:\n :return:\n String representation of the argument type. 
Multiple return types are\n turned into a comma delimited list of type names\n \"\"\"\n\n union_params = (\n getattr(arg_type, '__union_params__', None) or\n getattr(arg_type, '__args__', None)\n )\n\n if union_params and isinstance(union_params, (list, tuple)):\n return ', '.join([arg_type_to_string(item) for item in union_params])\n\n try:\n return arg_type.__name__\n except AttributeError:\n return '{}'.format(arg_type)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nmerge multiple COMPONENT instances into a single COMPONENT instance.", "response": "def merge_components(\n *components: typing.List[typing.Union[list, tuple, COMPONENT]]\n) -> COMPONENT:\n \"\"\"\n Merges multiple COMPONENT instances into a single one by merging the\n lists of includes and files. Has support for elements of the components\n arguments list to be lists or tuples of COMPONENT instances as well.\n\n :param components:\n :return:\n \"\"\"\n\n flat_components = functools.reduce(flatten_reducer, components, [])\n return COMPONENT(\n includes=functools.reduce(\n functools.partial(combine_lists_reducer, 'includes'),\n flat_components,\n []\n ),\n files=functools.reduce(\n functools.partial(combine_lists_reducer, 'files'),\n flat_components,\n []\n )\n )"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef flatten_reducer(\n flattened_list: list,\n entry: typing.Union[list, tuple, COMPONENT]\n) -> list:\n \"\"\"\n Flattens a list of COMPONENT instances to remove any lists or tuples\n of COMPONENTS contained within the list\n\n :param flattened_list:\n The existing flattened list that has been populated from previous\n calls of this reducer function\n :param entry:\n An entry to be reduced. 
Either a COMPONENT instance or a list/tuple\n of COMPONENT instances\n :return:\n The flattened list with the entry flatly added to it\n \"\"\"\n\n if hasattr(entry, 'includes') and hasattr(entry, 'files'):\n flattened_list.append(entry)\n elif entry:\n flattened_list.extend(entry)\n return flattened_list", "response": "Flattens a list of COMPONENT instances to remove any lists or tuples of COMPONENTS contained within the list"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef combine_lists_reducer(\n key: str,\n merged_list: list,\n component: COMPONENT\n) -> list:\n \"\"\"\n Reducer function to combine the lists for the specified key into a\n single, flat list\n\n :param key:\n The key on the COMPONENT instances to operate upon\n :param merged_list:\n The accumulated list of values populated by previous calls to this\n reducer function\n :param component:\n The COMPONENT instance from which to append values to the\n merged_list\n :return:\n The updated merged_list with the values for the COMPONENT added\n onto it\n \"\"\"\n\n merged_list.extend(getattr(component, key))\n return merged_list", "response": "Reducer function to combine the lists for the specified key into a single list of values populated by previous calls to the this\n ."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef listen(self):\n '''Listen for events as they come in'''\n try:\n self._pubsub.subscribe(self._channels)\n for message in self._pubsub.listen():\n if message['type'] == 'message':\n yield message\n finally:\n self._channels = []", "response": "Listen for events as they come in"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef thread(self):\n '''Run in a thread'''\n thread = threading.Thread(target=self.listen)\n thread.start()\n try:\n yield self\n finally:\n self.unlisten()\n thread.join()", "response": "Run in a thread"} 
{"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef on(self, evt, func):\n '''Set a callback handler for a pubsub event'''\n if evt not in self._callbacks:\n raise NotImplementedError('callback \"%s\"' % evt)\n else:\n self._callbacks[evt] = func", "response": "Set a callback handler for a pubsub event"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get(self, option, default=None):\n '''Get a particular option, or the default if it's missing'''\n val = self[option]\n return (val is None and default) or val", "response": "Get a particular option or the default if it s missing"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef pop(self, option, default=None):\n '''Just like `dict.pop`'''\n val = self[option]\n del self[option]\n return (val is None and default) or val", "response": "Just like dict. pop"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef touch_project():\n r = Response()\n project = cd.project.get_internal_project()\n\n if project:\n project.refresh()\n else:\n r.fail(\n code='NO_PROJECT',\n message='No open project to refresh'\n )\n\n return r.update(\n sync_time=sync_status.get('time', 0)\n ).flask_serialize()", "response": "Touches the project to trigger refreshing its cauldron. 
json state."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef fetch_synchronize_status():\n r = Response()\n project = cd.project.get_internal_project()\n\n if not project:\n r.fail(\n code='NO_PROJECT',\n message='No open project on which to retrieve status'\n )\n else:\n with open(project.source_path, 'r') as f:\n definition = json.load(f)\n\n result = status.of_project(project)\n r.update(\n sync_time=sync_status.get('time', 0),\n source_directory=project.source_directory,\n remote_source_directory=project.remote_source_directory,\n status=result,\n definition=definition\n )\n\n return r.flask_serialize()", "response": "Returns the synchronization status information for the currently opened\n project"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ndownload the specified file if it exists", "response": "def download_file(filename: str):\n \"\"\" downloads the specified project file if it exists \"\"\"\n\n project = cd.project.get_internal_project()\n source_directory = project.source_directory if project else None\n\n if not filename or not project or not source_directory:\n return '', 204\n\n path = os.path.realpath(os.path.join(\n source_directory,\n '..',\n '__cauldron_downloads',\n filename\n ))\n\n if not os.path.exists(path):\n return '', 204\n\n return flask.send_file(path, mimetype=mimetypes.guess_type(path)[0])"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_project_source_path(path: str) -> str:\n\n path = environ.paths.clean(path)\n\n if not path.endswith('cauldron.json'):\n return os.path.join(path, 'cauldron.json')\n\n return path", "response": "Converts the given path into a project source path to the cauldron. json file."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nload the cauldron. 
json project definition file for the given path.", "response": "def load_project_definition(path: str) -> dict:\n \"\"\"\n Load the cauldron.json project definition file for the given path. The\n path can be either a source path to the cauldron.json file or the source\n directory where a cauldron.json file resides.\n\n :param path:\n The source path or directory where the definition file will be loaded\n \"\"\"\n\n source_path = get_project_source_path(path)\n if not os.path.exists(source_path):\n raise FileNotFoundError('Missing project file: {}'.format(source_path))\n\n with open(source_path, 'r') as f:\n out = json.load(f)\n\n project_folder = os.path.split(os.path.dirname(source_path))[-1]\n if 'id' not in out or not out['id']:\n out['id'] = project_folder\n\n return out"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nsimplifies a path by replacing path prefixes with the values specified in the path_prefixes list.", "response": "def simplify_path(path: str, path_prefixes: list = None) -> str:\n \"\"\"\n Simplifies package paths by replacing path prefixes with values specified\n in the replacements list\n\n :param path:\n :param path_prefixes:\n :return:\n \"\"\"\n\n test_path = '{}'.format(path if path else '')\n replacements = (path_prefixes if path_prefixes else []).copy()\n replacements.append(('~', os.path.expanduser('~')))\n\n for key, value in replacements:\n if test_path.startswith(value):\n return '{}{}'.format(key, test_path[len(value):])\n\n return test_path"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nconvert a module entry into a package data dictionary with information about the module.", "response": "def module_to_package_data(\n name: str,\n entry,\n path_prefixes: list = None\n) -> typing.Union[dict, None]:\n \"\"\"\n Converts a module entry into a package data dictionary with information\n about the module, 
including version and location on disk\n\n :param name:\n :param entry:\n :param path_prefixes:\n :return:\n \"\"\"\n\n if name.find('.') > -1:\n # Not interested in sub-packages, only root ones\n return None\n\n version = getattr(entry, '__version__', None)\n version = version if not hasattr(version, 'version') else version.version\n location = getattr(entry, '__file__', sys.exec_prefix)\n\n if version is None or location.startswith(sys.exec_prefix):\n # Not interested in core packages. They obviously are standard and\n # don't need to be included in an output.\n return None\n\n return dict(\n name=name,\n version=version,\n location=simplify_path(location, path_prefixes)\n )"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_system_data() -> typing.Union[None, dict]:\n\n site_packages = get_site_packages()\n path_prefixes = [('[SP]', p) for p in site_packages]\n path_prefixes.append(('[CORE]', sys.exec_prefix))\n\n packages = [\n module_to_package_data(name, entry, path_prefixes)\n for name, entry in list(sys.modules.items())\n ]\n\n python_data = dict(\n version=list(sys.version_info),\n executable=simplify_path(sys.executable),\n directory=simplify_path(sys.exec_prefix),\n site_packages=[simplify_path(sp) for sp in site_packages]\n )\n\n return dict(\n python=python_data,\n packages=[p for p in packages if p is not None]\n )", "response": "Returns a dictionary containing information about the Cauldron system."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef remove(path: str, max_retries: int = 3) -> bool:\n if not path:\n return False\n\n if not os.path.exists(path):\n return True\n\n remover = os.remove if os.path.isfile(path) else shutil.rmtree\n\n for attempt in range(max_retries):\n try:\n remover(path)\n return True\n except Exception:\n # Pause briefly in case there's a race condition on lock\n # for the target.\n time.sleep(0.02)\n\n 
return False", "response": "Removes the specified path from the local filesystem if it exists."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef end(code: int):\n\n print('\\n')\n if code != 0:\n log('Failed with status code: {}'.format(code), whitespace=1)\n sys.exit(code)", "response": "Ends the application with the specified error code."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef folder(self) -> typing.Union[str, None]:\n\n if 'folder' in self.data:\n return self.data.get('folder')\n elif self.project_folder:\n if callable(self.project_folder):\n return self.project_folder()\n else:\n return self.project_folder\n return None", "response": "Returns the folder of the file that is used to store the file."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef project_exists(response: 'environ.Response', path: str) -> bool:\n\n if os.path.exists(path):\n return True\n\n response.fail(\n code='PROJECT_NOT_FOUND',\n message='The project path does not exist',\n path=path\n ).console(\n \"\"\"\n [ERROR]: Unable to open project. 
The specified path does not exist:\n\n {path}\n \"\"\".format(path=path)\n )\n return False", "response": "Determines whether or not a project exists at the specified path."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef update_recent_paths(response, path):\n\n try:\n recent_paths = environ.configs.fetch('recent_paths', [])\n\n if path in recent_paths:\n recent_paths.remove(path)\n\n recent_paths.insert(0, path)\n environ.configs.put(recent_paths=recent_paths[:10], persists=True)\n environ.configs.save()\n except Exception as error: # pragma: no cover\n response.warn(\n code='FAILED_RECENT_UPDATE',\n message='Unable to update recently opened projects',\n error=str(error)\n ).console(whitespace=1)\n\n return True", "response": "Updates the recent paths of the current project."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef split_line(line: str) -> typing.Tuple[str, str]:\n\n index = line.find(' ')\n if index == -1:\n return line.lower(), ''\n\n return line[:index].lower(), line[index:].strip()", "response": "Splits the raw line into two strings."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef render_stop_display(step: 'projects.ProjectStep', message: str):\n stack = render_stack.get_formatted_stack_frame(\n project=step.project,\n error_stack=False\n )\n\n try:\n names = [frame['filename'] for frame in stack]\n index = names.index(os.path.realpath(__file__))\n frame = stack[index - 1]\n except Exception:\n frame = {}\n\n stop_message = (\n '{}'.format(message)\n if message else\n 'This step was explicitly stopped prior to its completion'\n )\n\n dom = templating.render_template(\n 'step-stop.html',\n message=stop_message,\n frame=frame\n )\n step.report.append_body(dom)", "response": "Renders a stop action to the Cauldron display."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 
function for\nidentifying for the project.", "response": "def id(self) -> typing.Union[str, None]:\n \"\"\"Identifier for the project.\"\"\"\n return self._project.id if self._project else None"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef display(self) -> typing.Union[None, report.Report]:\n return (\n self._project.current_step.report\n if self._project and self._project.current_step else\n None\n )", "response": "The display report for the current project."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef shared(self) -> typing.Union[None, SharedCache]:\n return self._project.shared if self._project else None", "response": "The shared display object associated with this project."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef settings(self) -> typing.Union[None, SharedCache]:\n return self._project.settings if self._project else None", "response": "The settings associated with this project."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef title(self, value: typing.Union[None, str]):\n if not self._project:\n raise RuntimeError('Failed to assign title to an unloaded project')\n self._project.title = value", "response": "Sets the title of the project."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef load(self, project: typing.Union[projects.Project, None]):\n self._project = project", "response": "Connects this object to the specified source project."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncreating an absolute path to the specified file or directory within the project source directory.", "response": "def path(self, *args: typing.List[str]) -> typing.Union[None, str]:\n \"\"\"\n Creates an absolute path in the project source directory from 
the\n relative path components.\n\n :param args:\n Relative components for creating a path within the project source\n directory\n :return:\n An absolute path to the specified file or directory within the\n project source directory.\n \"\"\"\n if not self._project:\n return None\n\n return environ.paths.clean(os.path.join(\n self._project.source_directory,\n *args\n ))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nstopping the execution of the project at the current step immediately holding the action.", "response": "def stop(self, message: str = None, silent: bool = False):\n \"\"\"\n Stops the execution of the project at the current step immediately\n without raising an error. Use this to abort running the project in\n situations where some critical branching action should prevent the\n project from continuing to run.\n\n :param message:\n A custom display message to include in the display for the stop\n action. This message is ignored if silent is set to True.\n :param silent:\n When True nothing will be shown in the notebook display when the\n step is stopped. When False, the notebook display will include\n information relating to the stopped action.\n \"\"\"\n me = self.get_internal_project()\n if not me or not me.current_step:\n return\n\n if not silent:\n render_stop_display(me.current_step, message)\n raise UserAbortError(halt=True)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_internal_project(\n self,\n timeout: float = 1\n ) -> typing.Union['projects.Project', None]:\n \"\"\"\n Attempts to return the internally loaded project. 
This function\n prevents race condition issues where projects are loaded via threads\n because the internal loop will try to continuously load the internal\n project until it is available or until the timeout is reached.\n\n :param timeout:\n Maximum number of seconds to wait before giving up and returning\n None.\n \"\"\"\n count = int(timeout / 0.1)\n for _ in range(count):\n project = self.internal_project\n if project:\n return project\n time.sleep(0.1)\n\n return self.internal_project", "response": "Attempts to return the internal project."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nstop execution of the current node in the current notebook.", "response": "def stop(\n self,\n message: str = None,\n silent: bool = False,\n halt: bool = False\n ):\n \"\"\"\n Stops the execution of the current step immediately without raising\n an error. Use this to abort the step running process if you want\n to return early.\n\n :param message:\n A custom display message to include in the display for the stop\n action. This message is ignored if silent is set to True.\n :param silent:\n When True nothing will be shown in the notebook display when the\n step is stopped. When False, the notebook display will include\n information relating to the stopped action.\n :param halt:\n Whether or not to keep running other steps in the project after\n this step has been stopped. By default this is False and after this\n stops running, future steps in the project will continue running\n if they've been queued to run. If you want stop execution entirely,\n set this value to True and the current run command will be aborted\n entirely.\n \"\"\"\n step = self._step\n if not step:\n return\n\n if not silent:\n render_stop_display(step, message)\n raise UserAbortError(halt=halt)"}