{"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef assert_json_type(value: JsonValue, expected_type: JsonCheckType) -> None:\n\n def type_name(t: Union[JsonCheckType, Type[None]]) -> str:\n if t is None:\n return \"None\"\n if isinstance(t, JList):\n return \"list\"\n return t.__name__\n\n if expected_type is None:\n if value is None:\n return\n elif expected_type == float:\n if isinstance(value, float) or isinstance(value, int):\n return\n elif expected_type in [str, int, bool, list, dict]:\n if isinstance(value, expected_type): # type: ignore\n return\n elif isinstance(expected_type, JList):\n if isinstance(value, list):\n for v in value:\n assert_json_type(v, expected_type.value_type)\n return\n else:\n raise TypeError(\"unsupported type\")\n raise TypeError(\"wrong JSON type {} != {}\".format(\n type_name(expected_type), type_name(type(value))))", "response": "Check that a value has a certain JSON type."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngetting a JSON value by path optionally checking its type.", "response": "def json_get(json: JsonValue, path: str, expected_type: Any = ANY) -> Any:\n \"\"\"Get a JSON value by path, optionally checking its type.\n\n >>> j = {\"foo\": {\"num\": 3.4, \"s\": \"Text\"}, \"arr\": [10, 20, 30]}\n >>> json_get(j, \"/foo/num\")\n 3.4\n >>> json_get(j, \"/arr[1]\")\n 20\n\n Raise ValueError if the path is not found:\n\n >>> json_get(j, \"/foo/unknown\")\n Traceback (most recent call last):\n ...\n ValueError: JSON path '/foo/unknown' not found\n\n Raise TypeError if the path contains a non-object element:\n\n >>> json_get(j, \"/foo/num/bar\")\n Traceback (most recent call last):\n ...\n TypeError: JSON path '/foo/num' is not an object\n\n Or a non-array element:\n\n >>> json_get(j, \"/foo[2]\")\n Traceback (most recent call last):\n ...\n TypeError: JSON path '/foo' is not an array\n\n Raise an IndexError if the array index is 
out of bounds:\n\n >>> json_get(j, \"/arr[10]\")\n Traceback (most recent call last):\n ...\n IndexError: JSON array '/arr' too small (3 <= 10)\n\n Recognized types are: str, int, float, bool, list, dict, and None.\n TypeError is raised if the type does not match.\n\n >>> json_get(j, \"/foo/num\", str)\n Traceback (most recent call last):\n ...\n TypeError: wrong JSON type str != float\n\n float will match any number, int will only match numbers without a\n fractional part.\n\n >>> json_get(j, \"/foo/num\", float)\n 3.4\n >>> json_get(j, \"/foo/num\", int)\n Traceback (most recent call last):\n ...\n TypeError: wrong JSON type int != float\n \"\"\"\n\n elements = _parse_json_path(path)\n\n current = json\n current_path = \"\"\n for i, element in enumerate(elements):\n if isinstance(element, str):\n if not isinstance(current, dict):\n msg = \"JSON path '{}' is not an object\".format(current_path)\n raise TypeError(msg) from None\n if element not in current:\n raise ValueError(\"JSON path '{}' not found\".format(path))\n current_path += \"/\" + element\n current = current[element]\n else:\n if not isinstance(current, list):\n msg = \"JSON path '{}' is not an array\".format(current_path)\n raise TypeError(msg) from None\n if element >= len(current):\n msg = \"JSON array '{}' too small ({} <= {})\".format(\n current_path, len(current), element)\n raise IndexError(msg)\n current_path += \"[{}]\".format(i)\n current = current[element]\n if expected_type != ANY:\n assert_json_type(current, cast(JsonType, expected_type))\n return current"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef json_get_default(json: JsonValue, path: str,\n default: Any, expected_type: Any = ANY) -> Any:\n \"\"\"Get a JSON value by path, optionally checking its type.\n\n This works exactly like json_get(), but instead of raising\n ValueError or IndexError when a path part is not found, return\n the provided default value:\n\n >>> 
json_get_default({}, \"/foo\", \"I am a default value\")\n 'I am a default value'\n\n TypeErrors will be raised as in json_get() if an expected_type\n is provided:\n\n >>> json_get_default({\"foo\": \"bar\"}, \"/foo\", 123, int)\n Traceback (most recent call last):\n ...\n TypeError: wrong JSON type int != str\n \"\"\"\n try:\n return json_get(json, path, expected_type)\n except (ValueError, IndexError):\n return default", "response": "Get a JSON value by path optionally checking its type."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef load(cls, fh):\n dat = fh.read()\n try:\n ret = cls.from_json(dat)\n except:\n ret = cls.from_yaml(dat)\n return ret", "response": "Load a new object from a file handle."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef from_json(cls, fh):\n if isinstance(fh, str):\n return cls(json.loads(fh))\n else:\n return cls(json.load(fh))", "response": "Load a new object from a json file handle."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning the intersection of two items in the composite object.", "response": "def intersection(self, other, recursive=True):\n \"\"\"\n Recursively compute intersection of data. For dictionaries, items\n for specific keys will be reduced to unique items. For lists, items\n will be reduced to unique items. 
This method is meant to be analogous\n to set.intersection for composite objects.\n\n Args:\n other (composite): Other composite object to intersect with.\n recursive (bool): Whether or not to perform the operation recursively,\n for all nested composite objects.\n \"\"\"\n if not isinstance(other, composite):\n raise AssertionError('Cannot intersect composite and {} types'.format(type(other)))\n \n if self.meta_type != other.meta_type:\n return composite({})\n\n if self.meta_type == 'list':\n keep = []\n for item in self._list:\n if item in other._list:\n if recursive and isinstance(item, composite):\n keep.extend(item.intersection(other.index(item), recursive=True))\n else:\n keep.append(item)\n return composite(keep)\n elif self.meta_type == 'dict':\n keep = {}\n for key in self._dict:\n item = self._dict[key]\n if key in other._dict:\n if recursive and \\\n isinstance(item, composite) and \\\n isinstance(other.get(key), composite):\n keep[key] = item.intersection(other.get(key), recursive=True)\n elif item == other[key]:\n keep[key] = item\n return composite(keep)\n return"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn a new object with the union of the two items.", "response": "def union(self, other, recursive=True, overwrite=False):\n \"\"\"\n Recursively compute union of data. For dictionaries, items\n for specific keys will be combined into a list, depending on the\n status of the overwrite= parameter. For lists, items will be appended\n and reduced to unique items. This method is meant to be analogous\n to set.union for composite objects.\n\n Args:\n other (composite): Other composite object to union with.\n recursive (bool): Whether or not to perform the operation recursively,\n for all nested composite objects.\n overwrite (bool): Whether or not to overwrite entries with the same\n key in a nested dictionary. 
\n \"\"\"\n if not isinstance(other, composite):\n raise AssertionError('Cannot union composite and {} types'.format(type(other)))\n \n if self.meta_type != other.meta_type:\n return composite([self, other])\n\n if self.meta_type == 'list':\n keep = []\n for item in self._list:\n keep.append(item)\n for item in other._list:\n if item not in self._list:\n keep.append(item)\n return composite(keep)\n elif self.meta_type == 'dict':\n keep = {}\n for key in list(set(list(self._dict.keys()) + list(other._dict.keys()))):\n left = self._dict.get(key)\n right = other._dict.get(key)\n if recursive and \\\n isinstance(left, composite) and \\\n isinstance(right, composite):\n keep[key] = left.union(right, recursive=recursive, overwrite=overwrite)\n elif left == right:\n keep[key] = left\n elif left is None:\n keep[key] = right\n elif right is None:\n keep[key] = left\n elif overwrite:\n keep[key] = right\n else:\n keep[key] = composite([left, right])\n return composite(keep)\n return"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nupdating internal dictionary object.", "response": "def update(self, other):\n \"\"\"\n Update internal dictionary object. 
This is meant to be an\n analog for dict.update().\n \"\"\"\n if self.meta_type == 'list':\n raise AssertionError('Cannot update object of `list` base type!')\n elif self.meta_type == 'dict':\n self._dict = dict(self + composite(other))\n return"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef items(self):\n if self.meta_type == 'list':\n return self._list\n elif self.meta_type == 'dict':\n return self._dict.items()", "response": "Return a list of keys for object."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef values(self):\n if self.meta_type == 'list':\n return self._list\n elif self.meta_type == 'dict':\n return self._dict.values()", "response": "Return a list of all the keys for the object."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nappending to object if object is list.", "response": "def append(self, item):\n \"\"\"\n Append to object, if object is list.\n \"\"\"\n if self.meta_type == 'dict':\n raise AssertionError('Cannot append to object of `dict` base type!')\n if self.meta_type == 'list':\n self._list.append(item)\n return"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nextend list from object.", "response": "def extend(self, item):\n \"\"\"\n Extend list from object, if object is list.\n \"\"\"\n if self.meta_type == 'dict':\n raise AssertionError('Cannot extend to object of `dict` base type!')\n if self.meta_type == 'list':\n self._list.extend(item)\n return"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn JSON representation of object.", "response": "def json(self):\n \"\"\"\n Return JSON representation of object.\n \"\"\"\n if self.meta_type == 'list':\n ret = []\n for dat in self._list:\n if not isinstance(dat, composite):\n ret.append(dat)\n else:\n ret.append(dat.json())\n return ret\n\n elif 
self.meta_type == 'dict':\n ret = {}\n for key in self._dict:\n if not isinstance(self._dict[key], composite):\n ret[key] = self._dict[key]\n else:\n ret[key] = self._dict[key].json()\n return ret"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nwrites composite object to file handle in JSON format.", "response": "def write_json(self, fh, pretty=True):\n \"\"\"\n Write composite object to file handle in JSON format.\n\n Args:\n fh (file): File handle to write to.\n pretty (bool): Sort keys and indent in output.\n \"\"\"\n sjson = json.JSONEncoder().encode(self.json())\n if pretty:\n json.dump(json.loads(sjson), fh, sort_keys=True, indent=4)\n else:\n json.dump(json.loads(sjson), fh)\n return"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nwrite the current set of keys to fh.", "response": "def write(self, fh, pretty=True):\n \"\"\"\n API niceness defaulting to composite.write_json().\n \"\"\"\n return self.write_json(fh, pretty=pretty)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn JSON representation of object.", "response": "def json(self):\n \"\"\"\n Return JSON representation of object.\n \"\"\"\n data = {}\n for item in self._data:\n if isinstance(self._data[item], filetree):\n data[item] = self._data[item].json()\n else:\n data[item] = self._data[item]\n return data"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns a list of files in filetree.", "response": "def filelist(self):\n \"\"\"\n Return list of files in filetree.\n \"\"\"\n if len(self._filelist) == 0:\n for item in self._data:\n if isinstance(self._data[item], filetree):\n self._filelist.extend(self._data[item].filelist())\n else:\n self._filelist.append(self._data[item])\n return self._filelist"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nprune leaves of the filetree according to specified regular 
expression.", "response": "def prune(self, regex=r\".*\"):\n \"\"\"\n Prune leaves of filetree according to specified\n regular expression.\n\n Args:\n regex (str): Regular expression to use in pruning tree.\n \"\"\"\n return filetree(self.root, ignore=self.ignore, regex=regex)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef deref(self, ctx):\n if self in ctx.call_nodes:\n raise CyclicReferenceError(ctx, self)\n\n if self in ctx.cached_results:\n return ctx.cached_results[self]\n\n try:\n ctx.call_nodes.add(self)\n ctx.call_stack.append(self)\n\n result = self.evaluate(ctx)\n ctx.cached_results[self] = result\n return result\n except:\n if ctx.exception_call_stack is None:\n ctx.exception_call_stack = list(ctx.call_stack)\n raise\n finally:\n ctx.call_stack.pop()\n ctx.call_nodes.remove(self)", "response": "Returns the value of this reference pointing to."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef getModel(self):\n\t\t'''\n\t\t\tgetModel - get the IndexedRedisModel associated with this list. 
If one was not provided in constructor,\n\t\t\t it will be inferred from the first item in the list (if present)\n\n\t\t\t @return - None if none could be found, otherwise the IndexedRedisModel type of the items in this list.\n\n\t\t\t@raises ValueError if first item is not the expected type.\n\t\t'''\n\t\tif not self.mdl and len(self) > 0:\n\t\t\tmdl = self[0].__class__\n\t\t\tself.__validate_model(mdl)\n\n\t\t\tself.mdl = mdl\n\n\t\treturn self.mdl", "response": "get the IndexedRedisModel associated with this list"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ndelete all objects in this list.", "response": "def delete(self):\n\t\t'''\n\t\t\tdelete - Delete all objects in this list.\n\n\t\t\t@return - Number of objects deleted\n\t\t'''\n\t\tif len(self) == 0:\n\t\t\treturn 0\n\t\tmdl = self.getModel()\n\t\treturn mdl.deleter.deleteMultiple(self)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nsaves all objects in this list", "response": "def save(self):\n\t\t'''\n\t\t\tsave - Save all objects in this list\n\t\t'''\n\t\tif len(self) == 0:\n\t\t\treturn []\n\t\tmdl = self.getModel()\n\t\treturn mdl.saver.save(self)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreloading all objects in this list.", "response": "def reload(self):\n\t\t'''\n\t\t\treload - Reload all objects in this list. \n\t\t\t\tUpdates in-place. 
To just fetch all these objects again, use \"refetch\"\n\n\t\t\t@return - List (same order as current objects) of either exception (KeyError) if operation failed,\n\t\t\t or a dict of fields changed -> (old, new)\n\t\t'''\n\t\tif len(self) == 0:\n\t\t\treturn []\n\n\t\tret = []\n\t\tfor obj in self:\n\t\t\tres = None\n\t\t\ttry:\n\t\t\t\tres = obj.reload()\n\t\t\texcept Exception as e:\n\t\t\t\tres = e\n\n\t\t\tret.append(res)\n\n\t\treturn ret"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn a fresh copy of all items in this list.", "response": "def refetch(self):\n\t\t'''\n\t\t\trefetch - Fetch a fresh copy of all items in this list.\n\t\t\t\tReturns a new list. To update in-place, use \"reload\".\n\n\t\t\t@return IRQueryableList - List of fetched items\n\t\t'''\n\n\t\tif len(self) == 0:\n\t\t\treturn IRQueryableList()\n\n\t\tmdl = self.getModel()\n\t\tpks = [item._id for item in self if item._id]\n\n\t\treturn mdl.objects.getMultiple(pks)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef output(self, to=None, *args, **kwargs):\n '''Outputs to a stream (like a file or request)'''\n for blok in self:\n blok.output(to, *args, **kwargs)\n return self", "response": "Outputs to a stream ( like a file or request )"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nrenders as a str", "response": "def render(self, *args, **kwargs):\n '''Renders as a str'''\n render_to = StringIO()\n self.output(render_to, *args, **kwargs)\n return render_to.getvalue()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef output(self, to=None, formatted=False, indent=0, indentation=' ', *args, **kwargs):\n '''Outputs to a stream (like a file or request)'''\n if formatted and self.blox:\n self.blox[0].output(to=to, formatted=True, indent=indent, indentation=indentation, *args, 
**kwargs)\n for blok in self.blox[1:]:\n to.write('\\n')\n to.write(indent * indentation)\n blok.output(to=to, formatted=True, indent=indent, indentation=indentation, *args, **kwargs)\n if not indent:\n to.write('\\n')\n else:\n for blok in self.blox:\n blok.output(to=to, *args, **kwargs)\n\n return self", "response": "Outputs to a stream ( like a file or request )"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef start_tag(self):\n '''Returns the elements HTML start tag'''\n direct_attributes = (attribute.render(self) for attribute in self.render_attributes)\n attributes = ()\n if hasattr(self, '_attributes'):\n attributes = ('{0}=\"{1}\"'.format(key, value)\n for key, value in self.attributes.items() if value)\n\n rendered_attributes = \" \".join(filter(bool, chain(direct_attributes, attributes)))\n return '<{0}{1}{2}{3}>'.format(self.tag, ' ' if rendered_attributes else '',\n rendered_attributes, ' /' if self.tag_self_closes else \"\")", "response": "Returns the elements HTML start tag"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef output(self, to=None, *args, **kwargs):\n '''Outputs to a stream (like a file or request)'''\n to.write(self.start_tag)\n if not self.tag_self_closes:\n to.write(self.end_tag)", "response": "Outputs to a stream ( like a file or request )"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\noutputs to a stream ( like a file or request )", "response": "def output(self, to=None, formatted=False, indent=0, indentation=' ', *args, **kwargs):\n '''Outputs to a stream (like a file or request)'''\n if formatted:\n to.write(self.start_tag)\n to.write('\\n')\n if not self.tag_self_closes:\n for blok in self.blox:\n to.write(indentation * (indent + 1))\n blok.output(to=to, indent=indent + 1, formatted=True, indentation=indentation, *args, **kwargs)\n 
to.write('\\n')\n\n to.write(indentation * indent)\n to.write(self.end_tag)\n if not indentation:\n to.write('\\n')\n else:\n to.write(self.start_tag)\n if not self.tag_self_closes:\n for blok in self.blox:\n blok.output(to=to, *args, **kwargs)\n to.write(self.end_tag)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn a repr of an object and falls back to a minimal representation of type and ID.", "response": "def safe_repr(obj):\n \"\"\"Returns a repr of an object and falls back to a minimal representation of type and ID if the call to repr raised\n an error.\n\n :param obj: object to safe repr\n :returns: repr string or '(type repr error)' string\n :rtype: str\n \"\"\"\n try:\n obj_repr = repr(obj)\n except:\n obj_repr = \"({0}<{1}> repr error)\".format(type(obj), id(obj))\n return obj_repr"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\noutput the set text", "response": "def output(self, to=None, formatted=False, *args, **kwargs):\n '''Outputs the set text'''\n to.write(''.format(self.type))"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef match_to_clinvar(genome_file, clin_file):\n clin_curr_line = _next_line(clin_file)\n genome_curr_line = _next_line(genome_file)\n\n # Ignores all the lines that start with a hashtag\n while clin_curr_line.startswith('#'):\n clin_curr_line = _next_line(clin_file)\n while genome_curr_line.startswith('#'):\n genome_curr_line = _next_line(genome_file)\n\n # Advance through both files simultaneously to find matches\n while clin_curr_line and genome_curr_line:\n\n # Advance a file when positions aren't equal.\n clin_curr_pos = VCFLine.get_pos(clin_curr_line)\n genome_curr_pos = VCFLine.get_pos(genome_curr_line)\n try:\n if clin_curr_pos['chrom'] > genome_curr_pos['chrom']:\n genome_curr_line = _next_line(genome_file)\n continue\n elif clin_curr_pos['chrom'] < genome_curr_pos['chrom']:\n clin_curr_line = 
_next_line(clin_file)\n continue\n if clin_curr_pos['pos'] > genome_curr_pos['pos']:\n genome_curr_line = _next_line(genome_file)\n continue\n elif clin_curr_pos['pos'] < genome_curr_pos['pos']:\n clin_curr_line = _next_line(clin_file)\n continue\n except StopIteration:\n break\n\n # If we get here, start positions match.\n # Look for allele matching.\n genome_vcf_line = GenomeVCFLine(vcf_line=genome_curr_line,\n skip_info=True)\n # We can skip if genome has no allele information for this point.\n if not genome_vcf_line.genotype_allele_indexes:\n genome_curr_line = _next_line(genome_file)\n continue\n\n # Match only if ClinVar and Genome ref_alleles match.\n clinvar_vcf_line = ClinVarVCFLine(vcf_line=clin_curr_line)\n if not genome_vcf_line.ref_allele == clinvar_vcf_line.ref_allele:\n try:\n genome_curr_line = _next_line(genome_file)\n clin_curr_line = _next_line(clin_file)\n continue\n except StopIteration:\n break\n\n # Determine genome alleles and zygosity. Zygosity is assumed to be one\n # of: heterozygous, homozygous, or hemizygous.\n genotype_allele_indexes = genome_vcf_line.genotype_allele_indexes\n genome_alleles = [genome_vcf_line.alleles[x] for\n x in genotype_allele_indexes]\n if len(genome_alleles) == 1:\n zygosity = 'Hem'\n elif len(genome_alleles) == 2:\n if genome_alleles[0].sequence == genome_alleles[1].sequence:\n zygosity = 'Hom'\n genome_alleles = [genome_alleles[0]]\n else:\n zygosity = 'Het'\n else:\n raise ValueError('This code only expects to work on genomes ' +\n 'with one or two alleles called at each ' +\n 'location. 
The following line violates this:' +\n str(genome_vcf_line))\n\n # Look for matches to ClinVar alleles.\n for genome_allele in genome_alleles:\n for allele in clinvar_vcf_line.alleles:\n if genome_allele.sequence == allele.sequence:\n # The 'records' attribute is specific to ClinVarAlleles.\n if hasattr(allele, 'records'):\n yield (genome_vcf_line, allele, zygosity)\n\n # Done matching, move on.\n try:\n genome_curr_line = _next_line(genome_file)\n clin_curr_line = _next_line(clin_file)\n except StopIteration:\n break", "response": "Match a genome VCF to a ClinVar VCF file."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn Allele data as dict object.", "response": "def as_dict(self):\n \"\"\"Return Allele data as dict object.\"\"\"\n self_as_dict = dict()\n self_as_dict['sequence'] = self.sequence\n if hasattr(self, 'frequency'):\n self_as_dict['frequency'] = self.frequency\n return self_as_dict"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _parse_allele_data(self):\n return [Allele(sequence=x) for x in\n [self.ref_allele] + self.alt_alleles]", "response": "Create list of Alleles from VCF line data"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _parse_info(self, info_field):\n info = dict()\n for item in info_field.split(';'):\n # Info fields may be \"foo=bar\" or just \"foo\".\n # For the first case, store key \"foo\" with value \"bar\"\n # For the second case, store key \"foo\" with value True.\n info_item_data = item.split('=')\n # If length is one, just store as a key with value = true.\n if len(info_item_data) == 1:\n info[info_item_data[0]] = True\n elif len(info_item_data) == 2:\n info[info_item_data[0]] = info_item_data[1]\n return info", "response": "Parse the VCF info field and return a dictionary of the relevant data."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn a 
dictionary representation of parsed VCF data.", "response": "def as_dict(self):\n \"\"\"Dict representation of parsed VCF data\"\"\"\n self_as_dict = {'chrom': self.chrom,\n 'start': self.start,\n 'ref_allele': self.ref_allele,\n 'alt_alleles': self.alt_alleles,\n 'alleles': [x.as_dict() for x in self.alleles]}\n try:\n self_as_dict['info'] = self.info\n except AttributeError:\n pass\n return self_as_dict"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_pos(vcf_line):\n if not vcf_line:\n return None\n vcf_data = vcf_line.strip().split('\\t')\n return_data = dict()\n return_data['chrom'] = CHROM_INDEX[vcf_data[0]]\n return_data['pos'] = int(vcf_data[1])\n return return_data", "response": "Very lightweight parsing of a vcf line to get position."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nconverts the value to a string representation for storing.", "response": "def _toStorage(self, value):\n\t\t'''\n\t\t\t_toStorage - Convert the value to a string representation for storage.\n\n\t\t\t@param value - The value of the item to convert\n\t\t\t@return A string value suitable for storing.\n\t\t'''\n\n\t\tfor chainedField in self.chainedFields:\n\t\t\tvalue = chainedField.toStorage(value)\n\n\t\treturn value"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _fromStorage(self, value):\n\t\t'''\n\t\t\t_fromStorage - Convert the value from storage (string) to the value type.\n\n\t\t\t@return - The converted value, or \"irNull\" if no value was defined (and field type is not default/string)\n\t\t'''\n\n\t\tfor chainedField in reversed(self.chainedFields):\n\t\t\tvalue = chainedField._fromStorage(value)\n\n\t\treturn value", "response": "Convert the value from storage to the value type."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _checkCanIndex(self):\n\t\t'''\n\t\t\t_checkCanIndex - 
Check if we CAN index (if all fields are indexable).\n\t\t\t\tAlso checks the right-most field for \"hashIndex\" - if it needs to hash we will hash.\n\t\t'''\n\n\t\t# NOTE: We can't just check the right-most field. For types like pickle that don't support indexing, they don't\n\t\t# support it because python2 and python3 have different results for pickle.dumps on the same object.\n\t\t# So if we have a field chain like Pickle, Compressed then we will have two different results.\n\t\tif not self.chainedFields:\n\t\t\treturn (False, False)\n\n\t\tfor chainedField in self.chainedFields:\n\t\t\tif chainedField.CAN_INDEX is False:\n\t\t\t\treturn (False, False)\n\n\t\treturn (True, self.chainedFields[-1].hashIndex)", "response": "Checks if we can index the right - most field."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn the total byte length of all the items in the archive", "response": "def byte_length(self):\n \"\"\"\n :returns: sum of lengthes of all ranges or None if one of the ranges is open\n :rtype: int, float or None\n \"\"\"\n sum = 0\n for r in self:\n if r.is_open():\n return None\n sum += r.byte_length()\n return sum"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn True if one or more range in the list overlaps with another.", "response": "def has_overlaps(self):\n \"\"\"\n :returns: True if one or more range in the list overlaps with another\n :rtype: bool\n \"\"\"\n sorted_list = sorted(self)\n for i in range(0, len(sorted_list) - 1):\n if sorted_list[i].overlaps(sorted_list[i + 1]):\n return True\n return False"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef max_stop(self):\n m = 0\n for r in self:\n if r.is_open():\n return None\n m = max(m, r.stop)\n return m", "response": "returns the maximum stop in list or None"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 
3, explain what it does\ndef byte_offset(self, bytes):\n remaining_bytes = bytes\n for r in self:\n if r.is_open() or r.byte_length() >= remaining_bytes:\n return r.start + remaining_bytes\n else:\n remaining_bytes -= r.byte_length()\n assert False, \"requested byte offset {0!r} is outside the range list {1!r}\".format(bytes, self)", "response": "Maps the given byte length to a sequence s offset."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nparsing command line argument and output appropriate file type (csv or JSON)", "response": "def main():\n \"\"\"\n Parse command line argument and\n output appropriate file type (csv or JSON)\n \"\"\"\n parser = ArgumentParser()\n\n parser.add_argument(\n \"-c\", \"--clinvarfile\", dest=\"clinvarfile\",\n help=\"ClinVar VCF file (either this or -C must be specified)\",\n metavar=\"CLINVARFILE\")\n parser.add_argument(\n \"-C\", \"--clinvardir\", dest=\"clinvardir\",\n help=\"ClinVar VCF directory (either this or -c must be specified). \" +\n \"This option will use vcf2clinvar.clinvar_update to automatically \" +\n \"check and import the most recent ClinVar file to this directory.\",\n metavar=\"CLINVARDIR\")\n parser.add_argument(\n \"-i\", \"--input\", dest=\"inputfile\",\n help=\"Input VCF file ['.vcf', '.vcf.gz', '.vcf.bz2']. \" +\n \"Uncompressed genome data is also accepted via stdin.\",\n metavar=\"INPUT\")\n parser.add_argument(\n \"-t\", \"--type\", dest=\"type\", default='csv',\n help=\"Output report type ('csv' or 'json'). Defaults to csv. \" +\n \"CSV Report: Reports all genome variants matching ClinVar records, \" +\n \"and some summary ClinVar data from these records. Header lines \" +\n \"with metadata begin with '##'.\\n\" +\n \"JSON Report: Reports genome variants matching ClinVar records \" +\n \"(no record information is included).\",\n metavar=\"TYPE\")\n parser.add_argument(\n \"-n\", \"--notes\", dest=\"notes\",\n help=\"Notes (JSON format) to include in report. 
(JSON report only)\",\n metavar=\"NOTES\")\n parser.add_argument(\n \"-g\", \"--genome-build\", dest=\"build\",\n help=\"Genome build to include in report ('b37' or 'b38').\",\n metavar=\"GENOMEBUILD\")\n options = parser.parse_args()\n\n version = os.popen(\"python setup.py --version\").read().strip()\n\n if options.inputfile:\n if options.inputfile.endswith('.vcf'):\n input_genome_file = open(options.inputfile)\n elif options.inputfile.endswith('.vcf.gz'):\n input_genome_file = gzip.open(options.inputfile)\n elif options.inputfile.endswith('.vcf.bz2'):\n input_genome_file = bz2.BZ2File(options.inputfile)\n else:\n raise IOError(\"Genome filename expected to end with ''.vcf',\" +\n \" '.vcf.gz', or '.vcf.bz2'.\")\n elif not sys.stdin.isatty():\n input_genome_file = sys.stdin\n else:\n sys.stderr.write(\"Provide input VCF file\\n\")\n parser.print_help()\n sys.exit(1)\n\n if options.build and options.build in ['b37', 'b38']:\n build = options.build\n else:\n raise IOError(\"Input VCF genome build must be 'b37' or 'b38'.\")\n\n if (not (options.clinvarfile or options.clinvardir) or\n (options.clinvarfile and options.clinvardir)):\n sys.stderr.write(\"Please provide either a ClinVar file or directory.\")\n parser.print_help()\n sys.exit(1)\n if options.clinvarfile:\n clinvarfilename = options.clinvarfile\n elif options.clinvardir:\n clinvarfilename = get_latest_vcf_file(target_dir=options.clinvardir,\n build=build)\n if clinvarfilename.endswith('.vcf'):\n input_clinvar_file = open(options.clinvarfile)\n elif clinvarfilename.endswith('.vcf.gz'):\n input_clinvar_file = gzip.open(clinvarfilename)\n elif clinvarfilename.endswith('.vcf.bz2'):\n input_clinvar_file = bz2.BZ2File(clinvarfilename)\n else:\n raise IOError(\"ClinVar filename expected to end with '.vcf',\" +\n \" '.vcf.gz', or '.vcf.bz2'.\")\n\n if options.type not in ['csv', 'json']:\n raise IOError(\"Not a valid report type, must be 'csv' or 'json'.\")\n if options.type == \"csv\":\n 
csv_report(input_genome_file=input_genome_file,\n input_clinvar_file=input_clinvar_file,\n build=build,\n version=version)\n elif options.type == \"json\":\n notes_json = {}\n if options.notes:\n notes_json[\"parameter\"] = options.notes\n try:\n notes_json = json.loads(options.notes)\n except:\n sys.stderr.write(\"Could not parse JSON notes field\\n\")\n json_report(input_genome_file=input_genome_file,\n input_clinvar_file=input_clinvar_file,\n build=build,\n notes=notes_json,\n version=version)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nnavigates an open ftplib. FTP to appropriate directory for ClinVar VCF files.", "response": "def nav_to_vcf_dir(ftp, build):\n \"\"\"\n Navigate an open ftplib.FTP to appropriate directory for ClinVar VCF files.\n\n Args:\n ftp: (type: ftplib.FTP) an open connection to ftp.ncbi.nlm.nih.gov\n build: (type: string) genome build, either 'b37' or 'b38'\n \"\"\"\n if build == 'b37':\n ftp.cwd(DIR_CLINVAR_VCF_B37)\n elif build == 'b38':\n ftp.cwd(DIR_CLINVAR_VCF_B38)\n else:\n raise IOError(\"Genome build not recognized.\")"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns ClinVarAllele data as dict object.", "response": "def as_dict(self, *args, **kwargs):\n \"\"\"Return ClinVarAllele data as dict object.\"\"\"\n self_as_dict = super(ClinVarAllele, self).as_dict(*args, **kwargs)\n self_as_dict['hgvs'] = self.hgvs\n self_as_dict['clnalleleid'] = self.clnalleleid\n self_as_dict['clnsig'] = self.clnsig\n self_as_dict['clndn'] = self.clndn\n self_as_dict['clndisdb'] = self.clndisdb\n self_as_dict['clnvi'] = self.clnvi\n return self_as_dict"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nparsing frequency data in ClinVar VCF", "response": "def _parse_frequencies(self):\n \"\"\"Parse frequency data in ClinVar VCF\"\"\"\n frequencies = OrderedDict([\n ('EXAC', 'Unknown'),\n ('ESP', 'Unknown'),\n ('TGP', 'Unknown')])\n 
pref_freq = 'Unknown'\n for source in frequencies.keys():\n freq_key = 'AF_' + source\n if freq_key in self.info:\n frequencies[source] = self.info[freq_key]\n if pref_freq == 'Unknown':\n pref_freq = frequencies[source]\n return pref_freq, frequencies"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _parse_allele_data(self):\n\n # Get allele frequencies if they exist.\n pref_freq, frequencies = self._parse_frequencies()\n\n info_clnvar_single_tags = ['ALLELEID', 'CLNSIG', 'CLNHGVS']\n cln_data = {x.lower(): self.info[x] if x in self.info else None\n for x in info_clnvar_single_tags}\n cln_data.update(\n {'clndisdb': [x.split(',') for x in\n self.info['CLNDISDB'].split('|')]\n if 'CLNDISDB' in self.info else []})\n cln_data.update({'clndn': self.info['CLNDN'].split('|') if\n 'CLNDN' in self.info else []})\n cln_data.update({'clnvi': self.info['CLNVI'].split(',')\n if 'CLNVI' in self.info else []})\n\n try:\n sequence = self.alt_alleles[0]\n except IndexError:\n sequence = self.ref_allele\n\n allele = ClinVarAllele(frequency=pref_freq, sequence=sequence,\n **cln_data)\n\n # A few ClinVar variants are only reported as a combination with\n # other variants, and no single-variant effect is proposed. 
Skip these.\n if not cln_data['clnsig']:\n return []\n\n return [allele]", "response": "Parse the alleles for ClinVar VCF overrides parent method."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef add(self, *names):\n '''Returns back a class decorator that enables registering Blox to this factory'''\n def decorator(blok):\n for name in names or (blok.__name__, ):\n self[name] = blok\n return blok\n return decorator", "response": "Returns a class decorator that enables registering Blox to this factory"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef clearRedisPools():\n\t'''\n\t\tclearRedisPools - Disconnect all managed connection pools, \n\t\t and clear the connectiobn_pool attribute on all stored managed connection pools.\n\n\t\t A \"managed\" connection pool is one where REDIS_CONNECTION_PARAMS does not define the \"connection_pool\" attribute.\n\t\t If you define your own pools, IndexedRedis will use them and leave them alone.\n\n\t\t This method will be called automatically after calling setDefaultRedisConnectionParams.\n\n\t\t Otherwise, you shouldn't have to call it.. 
Maybe as some sort of disaster-recovery call..\n\t'''\n\tglobal RedisPools\n\tglobal _redisManagedConnectionParams\n\n\tfor pool in RedisPools.values():\n\t\ttry:\n\t\t\tpool.disconnect()\n\t\texcept:\n\t\t\tpass\n\t\n\tfor paramsList in _redisManagedConnectionParams.values():\n\t\tfor params in paramsList:\n\t\t\tif 'connection_pool' in params:\n\t\t\t\tdel params['connection_pool']\n\t\n\tRedisPools.clear()\n\t_redisManagedConnectionParams.clear()", "response": "This method disconnects all managed connection pools and clears the connection_pool attribute on all stored managed connection pools."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef getRedisPool(params):\n\t'''\n\t\tgetRedisPool - Returns and possibly also creates a Redis connection pool\n\t\t\tbased on the REDIS_CONNECTION_PARAMS passed in.\n\n\t\t\tThe goal of this method is to keep a small connection pool rolling\n\t\t\tto each unique Redis instance, otherwise during network issues etc\n\t\t\tpython-redis will leak connections and in short-order can exhaust\n\t\t\tall the ports on a system.
There's probably also some minor\n\t\t\tperformance gain in sharing Pools.\n\n\t\t\tWill modify \"params\", if \"host\" and/or \"port\" are missing, will fill\n\t\t\tthem in with defaults, and prior to return will set \"connection_pool\"\n\t\t\ton params, which will allow immediate return on the next call,\n\t\t\tand allow access to the pool directly from the model object.\n\n\t\t\t@param params - REDIS_CONNECTION_PARAMS - kwargs to redis.Redis\n\n\t\t\t@return redis.ConnectionPool corrosponding to this unique server.\n\t'''\n\tglobal RedisPools\n\tglobal _defaultRedisConnectionParams\n\tglobal _redisManagedConnectionParams\n\n\tif not params:\n\t\tparams = _defaultRedisConnectionParams\n\t\tisDefaultParams = True\n\telse:\n\t\tisDefaultParams = bool(params is _defaultRedisConnectionParams)\n\n\tif 'connection_pool' in params:\n\t\treturn params['connection_pool']\n\n\thashValue = hashDictOneLevel(params)\n\n\tif hashValue in RedisPools:\n\t\tparams['connection_pool'] = RedisPools[hashValue]\n\t\treturn RedisPools[hashValue]\n\t\n\t# Copy the params, so that we don't modify the original dict\n\tif not isDefaultParams:\n\t\torigParams = params\n\t\tparams = copy.copy(params)\n\telse:\n\t\torigParams = params\n\n\tcheckAgain = False\n\tif 'host' not in params:\n\t\tif not isDefaultParams and 'host' in _defaultRedisConnectionParams:\n\t\t\tparams['host'] = _defaultRedisConnectionParams['host']\n\t\telse:\n\t\t\tparams['host'] = '127.0.0.1'\n\t\tcheckAgain = True\n\tif 'port' not in params:\n\t\tif not isDefaultParams and 'port' in _defaultRedisConnectionParams:\n\t\t\tparams['port'] = _defaultRedisConnectionParams['port']\n\t\telse:\n\t\t\tparams['port'] = 6379\n\t\tcheckAgain = True\n\t\n\tif 'db' not in params:\n\t\tif not isDefaultParams and 'db' in _defaultRedisConnectionParams:\n\t\t\tparams['db'] = _defaultRedisConnectionParams['db']\n\t\telse:\n\t\t\tparams['db'] = 0\n\t\tcheckAgain = True\n\n\n\tif not isDefaultParams:\n\t\totherGlobalKeys = 
set(_defaultRedisConnectionParams.keys()) - set(params.keys())\n\t\tfor otherKey in otherGlobalKeys:\n\t\t\tif otherKey == 'connection_pool':\n\t\t\t\tcontinue\n\t\t\tparams[otherKey] = _defaultRedisConnectionParams[otherKey]\n\t\t\tcheckAgain = True\n\n\tif checkAgain:\n\t\thashValue = hashDictOneLevel(params)\n\t\tif hashValue in RedisPools:\n\t\t\tparams['connection_pool'] = RedisPools[hashValue]\n\t\t\treturn RedisPools[hashValue]\n\n\tconnectionPool = redis.ConnectionPool(**params)\n\torigParams['connection_pool'] = params['connection_pool'] = connectionPool\n\tRedisPools[hashValue] = connectionPool\n\n\t# Add the original as a \"managed\" redis connection (they did not provide their own pool)\n\t# such that if the defaults change, we make sure to re-inherit any keys, and can disconnect\n\t# from clearRedisPools\n\torigParamsHash = hashDictOneLevel(origParams)\n\tif origParamsHash not in _redisManagedConnectionParams:\n\t\t_redisManagedConnectionParams[origParamsHash] = [origParams]\n\telif origParams not in _redisManagedConnectionParams[origParamsHash]:\n\t\t_redisManagedConnectionParams[origParamsHash].append(origParams)\n\n\n\treturn connectionPool", "response": "Returns a Redis connection pool corresponding to the given parameters."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef asDict(self, includeMeta=False, forStorage=False, strKeys=False):\n\t\t'''\n\t\t\ttoDict / asDict - Get a dictionary representation of this model.\n\n\t\t\t@param includeMeta - Include metadata in return. For now, this is only pk stored as \"_id\"\n\n\t\t\t@param convertValueTypes - default True. 
If False, fields with fieldValue defined will be converted to that type.\n\t\t\t\tUse True when saving, etc, as native type is always either str or bytes.\n\n\t\t\t@param strKeys Default False - If True, just the string value of the field name will be used as the key.\n\t\t\t\tOtherwise, the IRField itself will be (although represented and indexed by string)\n\n\t\t\t@return - Dictionary representation of this object and all fields\n\t\t'''\n\t\tret = {}\n\t\tfor thisField in self.FIELDS:\n\t\t\tval = object.__getattribute__(self, thisField)\n\n\t\t\tif forStorage is True:\n\t\t\t\tval = thisField.toStorage(val)\n\n\t\t\tif strKeys:\n\t\t\t\tret[str(thisField)] = val\n\t\t\telse:\n\t\t\t\tret[thisField] = val\n\n\n\t\tif includeMeta is True:\n\t\t\tret['_id'] = getattr(self, '_id', None)\n\t\treturn ret", "response": "Returns a dictionary representation of this object and all fields."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef pprint(self, stream=None):\n\t\t'''\n\t\t\tpprint - Pretty-print a dict representation of this object.\n\n\t\t\t@param stream - Either a stream to output, or None to default to sys.stdout\n\t\t'''\n\t\tpprint.pprint(self.asDict(includeMeta=True, forStorage=False, strKeys=True), stream=stream)", "response": "Pretty-print the object."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns True if any unsaved changes are present in this model.", "response": "def hasUnsavedChanges(self, cascadeObjects=False):\n\t\t'''\n\t\t\thasUnsavedChanges - Check if any unsaved changes are present in this model, or if it has never been saved.\n\n\t\t\t@param cascadeObjects default False, if True will check if any foreign linked objects themselves have unsaved changes (recursively).\n\t\t\t\tOtherwise, will just check if the pk has changed.\n\n\t\t\t@return - True if any fields have changed since last fetch, or if never saved.
Otherwise, False\n\t\t'''\n\t\tif not self._id or not self._origData:\n\t\t\treturn True\n\n\t\tfor thisField in self.FIELDS:\n\t\t\tthisVal = object.__getattribute__(self, thisField)\n\t\t\tif self._origData.get(thisField, '') != thisVal:\n\t\t\t\treturn True\n\n\t\t\tif cascadeObjects is True and issubclass(thisField.__class__, IRForeignLinkFieldBase):\n\t\t\t\tif thisVal.objHasUnsavedChanges():\n\t\t\t\t\treturn True\n\n\t\treturn False"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns a dictionary of fields that have been updated.", "response": "def getUpdatedFields(self, cascadeObjects=False):\n\t\t'''\n\t\t\tgetUpdatedFields - See changed fields.\n\t\t\t\n\t\t\t@param cascadeObjects default False, if True will check if any foreign linked objects themselves have unsaved changes (recursively).\n\t\t\t\tOtherwise, will just check if the pk has changed.\n\n\t\t\t@return - a dictionary of fieldName : tuple(old, new).\n\n\t\t\tfieldName may be a string or may implement IRField (which implements string, and can be used just like a string)\n\t\t'''\n\t\tupdatedFields = {}\n\t\tfor thisField in self.FIELDS:\n\t\t\tthisVal = object.__getattribute__(self, thisField)\n\t\t\tif self._origData.get(thisField, '') != thisVal:\n\t\t\t\tupdatedFields[thisField] = (self._origData[thisField], thisVal)\n\n\t\t\tif cascadeObjects is True and issubclass(thisField.__class__, IRForeignLinkFieldBase) and thisVal.objHasUnsavedChanges():\n\t\t\t\tupdatedFields[thisField] = (self._origData[thisField], thisVal)\n\t\t\t\t\t\n\t\treturn updatedFields"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef diff(firstObj, otherObj, includeMeta=False):\n\t\t'''\n\t\t\tdiff - Compare the field values on two IndexedRedisModels.\n\n\t\t\t@param firstObj - First object (or self)\n\n\t\t\t@param otherObj - Second object\n\n\t\t\t@param includeMeta - If meta information (like pk) should be in the 
diff results.\n\n\n\t\t\t@return - Dict of 'field' : ( value_firstObjForField, value_otherObjForField ).\n\t\t\t\t\n\t\t\t\tKeys are names of fields with different values.\n\t\t\t\tValue is a tuple of ( value_firstObjForField, value_otherObjForField )\n\n\t\t\tCan be called statically, like: IndexedRedisModel.diff ( obj1, obj2 )\n\n\t\t\t or in reference to an obj : obj1.diff(obj2)\n\t\t'''\n\n\t\tif not isIndexedRedisModel(firstObj):\t\n\t\t\traise ValueError('Type < %s > does not extend IndexedRedisModel.' %( type(firstObj).__name__ , ) )\n\t\tif not isIndexedRedisModel(otherObj):\t\n\t\t\traise ValueError('Type < %s > does not extend IndexedRedisModel.' %( type(otherObj).__name__ , ) )\n\n\t\tfirstObj.validateModel()\n\t\totherObj.validateModel()\n\n\t\t# Types may not match, but could be subclass, special copy class (like connectAlt), etc.\n\t\t# So check if FIELDS matches, and if so, we can continue.\n\t\tif getattr(firstObj, 'FIELDS') != getattr(otherObj, 'FIELDS'):\n\t\t\t# NOTE: Maybe we should iterate here and compare just that field types and names match?\n\t\t\t# In case a copy changes a default or something, we would still be able to diff..\n\t\t\traise ValueError('Cannot compare < %s > and < %s > . Must be same model OR have equal FIELDS.' 
%( firstObj.__class__, otherObj.__class__) )\n\t\t\n\t\tdiffFields = {}\n\n\t\tfor thisField in firstObj.FIELDS:\n\n\t\t\tthisFieldStr = str(thisField)\n\n\t\t\tfirstVal = object.__getattribute__( firstObj, thisFieldStr )\n\t\t\totherVal = object.__getattribute__( otherObj, thisFieldStr )\n\n\t\t\tif firstVal != otherVal:\n\t\t\t\tdiffFields[ thisFieldStr ] = ( (firstVal, otherVal) )\n\n\t\tif includeMeta:\n\t\t\tfirstPk = firstObj.getPk()\n\t\t\totherPk = otherObj.getPk()\n\t\t\tif firstPk != otherPk:\n\t\t\t\tdiffFields['_id'] = ( firstPk, otherPk )\n\n\t\treturn diffFields", "response": "Return the diff between two IndexedRedisModels."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef save(self, cascadeSave=True):\n\t\t'''\n\t\t\tsave - Save this object.\n\t\t\t\n\t\t\tWill perform an \"insert\" if this object had not been saved before,\n\t\t\t otherwise will update JUST the fields changed on THIS INSTANCE of the model.\n\n\t\t\t i.e. If you have two processes fetch the same object and change different fields, they will not overwrite\n\t\t\t eachother, but only save the ones each process changed.\n\n\t\t\tIf you want to save multiple objects of type MyModel in a single transaction,\n\t\t\tand you have those objects in a list, myObjs, you can do the following:\n\n\t\t\t\tMyModel.saver.save(myObjs)\n\n\t\t\t@param cascadeSave Default True - If True, any Foreign models linked as attributes that have been altered\n\t\t\t or created will be saved with this object. 
If False, only this object (and the reference to an already-saved foreign model) will be saved.\n\n\t\t\t@see #IndexedRedisSave.save\n\n\t\t\t@return - Single element list, id of saved object (if successful)\n\t\t'''\n\t\tsaver = IndexedRedisSave(self.__class__)\n\t\treturn saver.save(self, cascadeSave=cascadeSave)", "response": "save - Save this object.\n\t\t\t\n\t\t\tWill perform an \"insert\" if this object had not been saved before,\n\t\t\t otherwise will update JUST the fields changed on THIS INSTANCE of the model.\n\n\t\t\t i.e. If you have two processes fetch the same object and change different fields, they will not overwrite\n\t\t\t each other, but only save the ones each process changed.\n\n\t\t\tIf you want to save multiple objects of type MyModel in a single transaction,\n\t\t\tand you have those objects in a list, myObjs, you can do the following:\n\n\t\t\t\tMyModel.saver.save(myObjs)\n\n\t\t\t@param cascadeSave Default True - If True, any Foreign models linked as attributes that have been altered\n\t\t\t or created will be saved with this object. If False, only this object (and the reference to an already-saved foreign model) will be saved.\n\n\t\t\t@see #IndexedRedisSave.save\n\n\t\t\t@return - Single element list, id of saved object (if successful)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nresets - Remove all stored data associated with this model (i.e. all objects of this type), and then save all the provided objects in #newObjs , all in one atomic transaction. Use this method to move from one complete set of objects to another, where any querying applications will only see the complete before or complete after. @param newObjs list - A list of objects that will replace the current dataset To just replace a specific subset of objects in a single transaction, you can do MyModel.saver.save(objs) and just the objs in \"objs\" will be inserted/updated in one atomic step.
This method, on the other hand, will delete all previous objects and add the newly provided objects in a single atomic step, and also reset the primary key ID generator @return list - The new primary keys associated with each object (same order as provided #newObjs list)", "response": "def reset(cls, newObjs):\n\t\t'''\n\t\t\treset - Remove all stored data associated with this model (i.e. all objects of this type),\n\t\t\t\tand then save all the provided objects in #newObjs , all in one atomic transaction.\n\n\t\t\tUse this method to move from one complete set of objects to another, where any querying applications\n\t\t\twill only see the complete before or complete after.\n\n\t\t\t@param newObjs list - A list of objects that will replace the current dataset\n\n\t\t\tTo just replace a specific subset of objects in a single transaction, you can do MyModel.saver.save(objs)\n\t\t\t and just the objs in \"objs\" will be inserted/updated in one atomic step.\n\n\t\t\tThis method, on the other hand, will delete all previous objects and add the newly provided objects in a single atomic step,\n\t\t\t and also reset the primary key ID generator\n\n\t\t\t@return list - The new primary keys associated with each object (same order as provided #newObjs list)\n\t\t'''\n\t\tconn = cls.objects._get_new_connection()\n\n\t\ttransaction = conn.pipeline()\n\t\ttransaction.eval(\"\"\"\n\t\tlocal matchingKeys = redis.call('KEYS', '%s*')\n\n\t\tfor _,key in ipairs(matchingKeys) do\n\t\t\tredis.call('DEL', key)\n\t\tend\n\t\t\"\"\" %( ''.join([INDEXED_REDIS_PREFIX, cls.KEY_NAME, ':']), ), 0)\n\t\tsaver = IndexedRedisSave(cls)\n\t\tnextID = 1\n\t\tfor newObj in newObjs:\n\t\t\tsaver.save(newObj, False, forceID=nextID, conn=transaction)\n\t\t\tnextID += 1\n\t\ttransaction.set(saver._get_next_id_key(), nextID)\n\t\ttransaction.execute()\n\n\t\treturn list( range( 1, nextID, 1) )"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef 
hasSameValues(self, other, cascadeObject=True):\n\t\t'''\n\t\t\thasSameValues - Check if this and another model have the same fields and values.\n\n\t\t\tThis does NOT include id, so the models can have the same values but be different objects in the database.\n\n\t\t\t@param other - Another model\n\n\t\t\t@param cascadeObject default True - If True, foreign link values with changes will be considered a difference.\n\t\t\t\tOtherwise, only the immediate values are checked.\n\n\t\t\t@return - True if all fields have the same value, otherwise False\n\t\t'''\n\t\tif self.FIELDS != other.FIELDS:\n\t\t\treturn False\n\n\t\toga = object.__getattribute__\n\n\t\tfor field in self.FIELDS:\n\t\t\tthisVal = oga(self, field)\n\t\t\totherVal = oga(other, field)\n\t\t\tif thisVal != otherVal:\n\t\t\t\treturn False\n\n\t\t\tif cascadeObject is True and issubclass(field.__class__, IRForeignLinkFieldBase):\n\t\t\t\tif thisVal and thisVal.isFetched():\n\t\t\t\t\tif otherVal and otherVal.isFetched():\n\t\t\t\t\t\ttheseForeign = thisVal.getObjs()\n\t\t\t\t\t\tothersForeign = otherVal.getObjs()\n\t\t\t\t\t\t \n\t\t\t\t\t\tfor i in range(len(theseForeign)):\n\t\t\t\t\t\t\tif not theseForeign[i].hasSameValues(othersForeign[i]):\n\t\t\t\t\t\t\t\treturn False\n\t\t\t\t\telse:\n\t\t\t\t\t\ttheseForeign = thisVal.getObjs()\n\n\t\t\t\t\t\tfor i in range(len(theseForeign)):\n\t\t\t\t\t\t\tif theseForeign[i].hasUnsavedChanges(cascadeObjects=True):\n\t\t\t\t\t\t\t\treturn False\n\t\t\t\telse:\n\t\t\t\t\tif otherVal and otherVal.isFetched():\n\t\t\t\t\t\tothersForeign = otherVal.getObjs()\n\n\t\t\t\t\t\tfor i in range(len(othersForeign)):\n\t\t\t\t\t\t\tif othersForeign[i].hasUnsavedChanges(cascadeObjects=True):\n\t\t\t\t\t\t\t\treturn False\n\n\t\t\t\t\t\t\t\n\t\t\t\t\n\n\t\treturn True", "response": "Check if this model has the same values and values."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef copy(self, copyPrimaryKey=False, 
copyValues=False):\n\t\t'''\n copy - Copies this object.\n\n @param copyPrimaryKey default False - If True, any changes to the copy will save over-top the existing entry in Redis.\n If False, only the data is copied, and nothing is saved.\n\n\t\t @param copyValues default False - If True, every field value on this object will be explicitly copied. If False,\n\t\t an object will be created with the same values, and depending on the type may share the same reference.\n\t\t \n\t\t This is the difference between a copy and a deepcopy.\n\n\t @return - Copy of this object, per above\n\n\t\t If you need a copy that IS linked, @see IndexedRedisModel.copy\n\t\t'''\n\t\tcpy = self.__class__(**self.asDict(copyPrimaryKey, forStorage=False))\n\t\tif copyValues is True:\n\t\t\tfor fieldName in cpy.FIELDS:\n\t\t\t\tsetattr(cpy, fieldName, copy.deepcopy(getattr(cpy, fieldName)))\n\t\treturn cpy", "response": "Returns a copy of this object."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef saveToExternal(self, redisCon):\n\t\t'''\n\t\t\tsaveToExternal - Saves this object to a different Redis than that specified by REDIS_CONNECTION_PARAMS on this model.\n\n\t\t\t@param redisCon - Either a dict of connection params, a la REDIS_CONNECTION_PARAMS, or an existing Redis connection.\n\t\t\t\tIf you are doing a lot of bulk copies, it is recommended that you create a Redis connection and pass it in rather than establish a new\n\t\t\t\tconnection with each call.\n\n\t\t\t@note - You will generate a new primary key relative to the external Redis environment. 
If you need to reference a \"shared\" primary key, it is better\n\t\t\t\t\tto use an indexed field than the internal pk.\n\n\t\t'''\n\t\tif type(redisCon) == dict:\n\t\t\tconn = redis.Redis(**redisCon)\n\t\telif hasattr(redisCon, '__class__') and issubclass(redisCon.__class__, redis.Redis):\n\t\t\tconn = redisCon\n\t\telse:\n\t\t\traise ValueError('saveToExternal \"redisCon\" param must either be a dictionary of connection parameters, or redis.Redis, or extension thereof')\n\n\t\tsaver = self.saver\n\n\t\t# Fetch next PK from external\n\t\tforceID = saver._getNextID(conn) # Redundant because of changes in save method\n\t\tmyCopy = self.copy(False)\n\n\t\treturn saver.save(myCopy, usePipeline=True, forceID=forceID, conn=conn)", "response": "Save this object to an external Redis connection."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef reload(self, cascadeObjects=True):\n\t\t'''\n reload - Reload this object from the database, overriding any local changes and merging in any updates.\n\n\n\t\t @param cascadeObjects Default True. If True, foreign-linked objects will be reloaded if their values have changed\n\t\t since last save/fetch. If False, only if the pk changed will the foreign linked objects be reloaded.\n\n @raises KeyError - if this object has not been saved (no primary key)\n\n @return - Dict with the keys that were updated. Key is field name that was updated,\n\t\t and value is tuple of (old value, new value). \n\n\t\t NOTE: Currently, this will cause a fetch of all Foreign Link objects, one level\n\n\t\t'''\n\t\t_id = self._id\n\t\tif not _id:\n\t\t\traise KeyError('Object has never been saved!
Cannot reload.')\n\n\t\tcurrentData = self.asDict(False, forStorage=False)\n\n\t\t# Get the object, and compare the unconverted \"asDict\" repr.\n\t\t# If any changes, we will apply the already-convered value from\n\t\t# the object, but we compare the unconverted values (what's in the DB).\n\t\tnewDataObj = self.objects.get(_id)\n\t\tif not newDataObj:\n\t\t\traise KeyError('Object with id=%d is not in database. Cannot reload.' %(_id,))\n\n\t\tnewData = newDataObj.asDict(False, forStorage=False)\n\t\tif currentData == newData and not self.foreignFields:\n\t\t\treturn []\n\n\t\tupdatedFields = {}\n\t\tfor thisField, newValue in newData.items():\n\t\t\tdefaultValue = thisField.getDefaultValue()\n\n\t\t\tcurrentValue = currentData.get(thisField, defaultValue)\n\n\t\t\tfieldIsUpdated = False\n\n\t\t\tif currentValue != newValue:\n\t\t\t\tfieldIsUpdated = True\n\t\t\telif cascadeObjects is True and issubclass(thisField.__class__, IRForeignLinkFieldBase):\n\t\t\t\t# If we are cascading objects, and at this point the pk is the same\n\n\t\t\t\tif currentValue.isFetched():\n\t\t\t\t\t# If we have fetched the current set, we might need to update (pks already match)\n\t\t\t\t\toldObjs = currentValue.getObjs()\n\t\t\t\t\tnewObjs = newValue.getObjs()\n\n\t\t\t\t\tif oldObjs != newObjs: # This will check using __eq__, so one-level including pk\n\t\t\t\t\t\tfieldIsUpdated = True\n\t\t\t\t\telse:\n\t\t\t\t\t\t# Use hasSameValues with cascadeObjects=True to scan past one level\n\t\t\t\t\t\tfor i in range(len(oldObjs)):\n\t\t\t\t\t\t\tif not oldObjs[i].hasSameValues(newObjs[i], cascadeObjects=True):\n\t\t\t\t\t\t\t\tfieldIsUpdated = True\n\t\t\t\t\t\t\t\tbreak\n\n\t\t\tif fieldIsUpdated is True:\n\t\t\t\t# Use \"converted\" values in the updatedFields dict, and apply on the object.\n\t\t\t\tupdatedFields[thisField] = ( currentValue, newValue) \n\t\t\t\tsetattr(self, thisField, newValue)\n\t\t\t\tself._origData[thisField] = newDataObj._origData[thisField]\n\n\n\t\treturn 
updatedFields", "response": "Reloads this object from the database overriding any local changes and merging in any updates."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn a copy of this model and returns that copy.", "response": "def copyModel(mdl):\n\t\t'''\n\t\t\tcopyModel - Copy this model, and return that copy.\n\n\t\t\t The copied model will have all the same data, but will have a fresh instance of the FIELDS array and all members,\n\t\t\t and the INDEXED_FIELDS array.\n\t\t\t \n\t\t\t This is useful for converting, like changing field types or whatever, where you can load from one model and save into the other.\n\n\t\t\t@return - A copy class of this model class with a unique name.\n\t\t'''\n\t\t\t \n\t\tcopyNum = _modelCopyMap[mdl]\n\t\t_modelCopyMap[mdl] += 1\n\t\tmdlCopy = type(mdl.__name__ + '_Copy' + str(copyNum), mdl.__bases__, copy.deepcopy(dict(mdl.__dict__)))\n\n\t\tmdlCopy.FIELDS = [field.copy() for field in mdl.FIELDS]\n\t\t\n\t\tmdlCopy.INDEXED_FIELDS = [str(idxField) for idxField in mdl.INDEXED_FIELDS] # Make sure they didn't do INDEXED_FIELDS = FIELDS or something wacky,\n\t\t\t\t\t\t\t\t\t\t\t # so do a comprehension of str on these to make sure we only get names\n\n\t\tmdlCopy.validateModel()\n\n\t\treturn mdlCopy"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef validateModel(model):\n\t\t'''\n\t\t\tvalidateModel - Class method that validates a given model is implemented correctly. 
Will only be validated once, on first model instantiation.\n\n\t\t\t@param model - Implicit of own class\n\n\t\t\t@return - True\n\n\t\t\t@raises - InvalidModelException if there is a problem with the model, and the message contains relevant information.\n\t\t'''\n\t\tif model == IndexedRedisModel:\n\t\t\timport re\n\t\t\tif re.match('.*pydoc(|[\\d]|[\\d][\\.][\\d])([\\.]py(|[co])){0,1}$', sys.argv[0]):\n\t\t\t\treturn\n\t\t\traise ValueError('Cannot use IndexedRedisModel directly. You must implement this class for your model.')\n\n\t\tglobal validatedModels\n\t\tkeyName = model.KEY_NAME\n\t\tif not keyName:\n\t\t\traise InvalidModelException('\"%s\" has no KEY_NAME defined.' %(str(model.__name__), ) )\n\n\t\tif model in validatedModels:\n\t\t\treturn True\n\n\t\tfailedValidationStr = '\"%s\" Failed Model Validation:' %(str(model.__name__), ) \n\n\t\t# Convert items in model to set\n\t\t#model.FIELDS = set(model.FIELDS)\n\n\t\t\n\t\tfieldSet = set(model.FIELDS)\n\t\tindexedFieldSet = set(model.INDEXED_FIELDS)\n\n\t\tif not fieldSet:\n\t\t\traise InvalidModelException('%s No fields defined. Please populate the FIELDS array with a list of field names' %(failedValidationStr,))\n\n\n\t\tif hasattr(model, 'BASE64_FIELDS'):\n\t\t\traise InvalidModelException('BASE64_FIELDS is no longer supported since IndexedRedis 5.0.0 . Use IndexedRedis.fields.IRBase64Field in the FIELDS array for the same functionality.')\n\n\t\tif hasattr(model, 'BINARY_FIELDS'):\n\t\t\traise InvalidModelException('BINARY_FIELDS is no longer supported since IndexedRedis 5.0.0 . Use IndexedRedis.fields.IRBytesField in the FIELDS array for the same functionality, use IRBytesField for same functionality. Use IRField(valueType=bytes) for python-3 only support. 
Use IRRawField to perform no conversion at all.')\n\n\t\tnewFields = []\n\t\tupdatedFields = []\n\t\tmustUpdateFields = False\n\n\t\tforeignFields = []\n\n\t\tfor thisField in fieldSet:\n\t\t\tif thisField == '_id':\n\t\t\t\traise InvalidModelException('%s You cannot have a field named _id, it is reserved for the primary key.' %(failedValidationStr,))\n\n\t\t\t# XXX: Is this ascii requirement still needed since all is unicode now?\n\t\t\ttry:\n\t\t\t\tcodecs.ascii_encode(thisField)\n\t\t\texcept UnicodeDecodeError as e:\n\t\t\t\traise InvalidModelException('%s All field names must be ascii-encodable. \"%s\" was not. Error was: %s' %(failedValidationStr, to_unicode(thisField), str(e)))\n\n\t\t\tif issubclass(thisField.__class__, IRForeignLinkFieldBase):\n\t\t\t\tforeignFields.append(thisField)\n\n\t\t\t# If a classic string field, convert to IRClassicField\n\t\t\tif issubclass(thisField.__class__, IRField):\n\t\t\t\tnewFields.append(thisField)\n\t\t\telse:\n\t\t\t\tmustUpdateFields = True\n\t\t\t\tnewField = IRClassicField(thisField)\n\t\t\t\tnewFields.append(newField)\n\t\t\t\tupdatedFields.append(thisField)\n\n\t\t\t\tthisField = newField\n\n\t\t\tif str(thisField) == '':\n\t\t\t\traise InvalidModelException('%s Field defined without a name, or name was an empty string. Type=%s Field is: %s' %(failedValidationStr, str(type(thisField)), repr(thisField) ) )\n\n\t\t\tif thisField in indexedFieldSet and thisField.CAN_INDEX is False:\n\t\t\t\traise InvalidModelException('%s Field Type %s - (%s) cannot be indexed.' %(failedValidationStr, str(thisField.__class__.__name__), repr(thisField)))\n\n\t\t\tif hasattr(IndexedRedisModel, thisField) is True:\n\t\t\t\traise InvalidModelException('%s Field name %s is a reserved attribute on IndexedRedisModel.' %(failedValidationStr, str(thisField)))\n\n\n\n\t\tif mustUpdateFields is True:\n\t\t\tmodel.FIELDS = newFields\n\t\t\tdeprecatedMessage('Model \"%s\" contains plain-string fields. 
These have been converted to IRClassicField objects to retain the same functionality. plain-string fields will be removed in a future version. The converted fields are: %s' %(model.__name__, repr(updatedFields)), 'UPDATED_FIELDS_' + model.__name__)\n\n\t\tmodel.FIELDS = KeyList(model.FIELDS)\n\n\t\tif bool(indexedFieldSet - fieldSet):\n\t\t\traise InvalidModelException('%s All INDEXED_FIELDS must also be present in FIELDS. %s exist only in INDEXED_FIELDS' %(failedValidationStr, str(list(indexedFieldSet - fieldSet)), ) )\n\n\t\tmodel.foreignFields = foreignFields\n\t\t\n\t\tvalidatedModels.add(model)\n\t\treturn True", "response": "Validate a given model."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef connectAlt(cls, redisConnectionParams):\n\t\t'''\n\t\t\tconnectAlt - Create a class of this model which will use an alternate connection than the one specified by REDIS_CONNECTION_PARAMS on this model.\n\n\t\t\t@param redisConnectionParams - Dictionary of arguments to redis.Redis, same as REDIS_CONNECTION_PARAMS.\n\n\t\t\t@return - A class that can be used in all the same ways as the existing IndexedRedisModel, but that connects to a different instance.\n\n\t\t\t The fields and key will be the same here, but the connection will be different. 
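The INDEXED_FIELDS-must-be-in-FIELDS check in the validation record above is plain set arithmetic. A standalone sketch of that one rule follows — the helper name and field names are made up for illustration, and this is not part of the library:

```python
def check_indexed_subset(fields, indexed_fields):
    """Mirror the `indexedFieldSet - fieldSet` check from the validation above.

    Raises ValueError naming any indexed field that is not also declared in FIELDS.
    """
    missing = set(indexed_fields) - set(fields)
    if missing:
        raise ValueError(
            'All INDEXED_FIELDS must also be present in FIELDS. '
            '%s exist only in INDEXED_FIELDS' % (sorted(missing),))
    return True
```

`check_indexed_subset(['name', 'age'], ['name'])` passes, while indexing an undeclared field raises immediately, the same way validation fails the whole model.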
use #copyModel if you want an independent class for the model\n\t\t'''\n\t\tif not isinstance(redisConnectionParams, dict):\n\t\t\traise ValueError('redisConnectionParams must be a dictionary!')\n\n\t\thashVal = hashDictOneLevel(redisConnectionParams)\n\n\t\tmodelDictCopy = copy.deepcopy(dict(cls.__dict__))\n\t\tmodelDictCopy['REDIS_CONNECTION_PARAMS'] = redisConnectionParams\n\n\t\tConnectedIndexedRedisModel = type('AltConnect' + cls.__name__ + str(hashVal), cls.__bases__, modelDictCopy)\n\n\t\treturn ConnectedIndexedRedisModel", "response": "Create a new class based on this model which will connect to an alternate Redis connection than the one specified by REDIS_CONNECTION_PARAMS on this model."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngetting a new connection", "response": "def _get_new_connection(self):\n\t\t'''\n\t\t\t_get_new_connection - Get a new connection\n\t\t\tinternal\n\t\t'''\n\t\tpool = getRedisPool(self.mdl.REDIS_CONNECTION_PARAMS)\n\t\treturn redis.Redis(connection_pool=pool)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nadding primary key to the list of keys in the table", "response": "def _add_id_to_keys(self, pk, conn=None):\n\t\t'''\n\t\t\t_add_id_to_keys - Adds primary key to table\n\t\t\tinternal\n\t\t'''\n\t\tif conn is None:\n\t\t\tconn = self._get_connection()\n\t\tconn.sadd(self._get_ids_key(), pk)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _rem_id_from_keys(self, pk, conn=None):\n\t\t'''\n\t\t\t_rem_id_from_keys - Remove primary key from table\n\t\t\tinternal\n\t\t'''\n\t\tif conn is None:\n\t\t\tconn = self._get_connection()\n\t\tconn.srem(self._get_ids_key(), pk)", "response": "Remove the given primary key from this table's set of ids. Internal."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nadd an id to an index", "response": "def _add_id_to_index(self, indexedField, pk, val, 
conn=None):\n\t\t'''\n\t\t\t_add_id_to_index - Adds an id to an index\n\t\t\tinternal\n\t\t'''\n\t\tif conn is None:\n\t\t\tconn = self._get_connection()\n\t\tconn.sadd(self._get_key_for_index(indexedField, val), pk)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _rem_id_from_index(self, indexedField, pk, val, conn=None):\n\t\t'''\n\t\t\t_rem_id_from_index - Removes an id from an index\n\t\t\tinternal\n\t\t'''\n\t\tif conn is None:\n\t\t\tconn = self._get_connection()\n\t\tconn.srem(self._get_key_for_index(indexedField, val), pk)", "response": "Removes an id from the index entry for the given indexed field and value."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _compat_get_str_key_for_index(self, indexedField, val):\n\t\t'''\n\t\t\t_compat_get_str_key_for_index - Return the key name as a string, even if it is a hashed index field.\n\t\t\t This is used in converting unhashed fields to a hashed index (called by _compat_rem_str_id_from_index which is called by compat_convertHashedIndexes)\n\n\t\t\t@param indexedField - string of field name\n\t\t\t@param val - Value of field\n\n\t\t\t@return - Key name string, always a string regardless of hash\n\t\t'''\n\t\treturn ''.join([INDEXED_REDIS_PREFIX, self.keyName, ':idx:', indexedField, ':', getattr(indexedField, 'toStorage', to_unicode)(val)])", "response": "This function returns the key name string for the given indexedField and value"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _peekNextID(self, conn=None):\n\t\t'''\n\t\t\t_peekNextID - Look at, but don't increment the primary key for this model.\n\t\t\t\tInternal.\n\n\t\t\t@return int - next pk\n\t\t'''\n\t\tif conn is None:\n\t\t\tconn = self._get_connection()\n\t\treturn to_unicode(conn.get(self._get_next_id_key()) or 0)", "response": "Internal method to peek the next ID in the 
table."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nfilter - Add filters based on INDEXED_FIELDS having or not having a value. Note, no objects are actually fetched until .all() is called Use the field name [ model.objects.filter(some_field='value')] to filter on items containing that value. Use the field name suffixed with '__ne' for a negation filter [ model.objects.filter(some_field__ne='value') ] Example: query = Model.objects.filter(field1='value', field2='othervalue') objs1 = query.filter(something__ne='value').all() objs2 = query.filter(something__ne=7).all() @returns - A copy of this object, with the additional filters. If you want to work inline on this object instead, use the filterInline method.", "response": "def filter(self, **kwargs):\n\t\t'''\n\t\t\tfilter - Add filters based on INDEXED_FIELDS having or not having a value.\n\t\t\t Note, no objects are actually fetched until .all() is called\n\n\t\t\t\tUse the field name [ model.objects.filter(some_field='value')] to filter on items containing that value.\n\t\t\t\tUse the field name suffixed with '__ne' for a negation filter [ model.objects.filter(some_field__ne='value') ]\n\n\t\t\tExample:\n\t\t\t\tquery = Model.objects.filter(field1='value', field2='othervalue')\n\n\t\t\t\tobjs1 = query.filter(something__ne='value').all()\n\t\t\t\tobjs2 = query.filter(something__ne=7).all()\n\n\n\t\t\t@returns - A copy of this object, with the additional filters. 
If you want to work inline on this object instead, use the filterInline method.\n\t\t'''\n\t\tselfCopy = self.__copy__()\n\t\treturn IndexedRedisQuery._filter(selfCopy, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nadding a filter to the filterObj.", "response": "def _filter(filterObj, **kwargs):\n\t\t'''\n\t\t\tInternal for handling filters; the guts of .filter and .filterInline\n\t\t'''\n\t\tfor key, value in kwargs.items():\n\t\t\tif key.endswith('__ne'):\n\t\t\t\tnotFilter = True\n\t\t\t\tkey = key[:-4]\n\t\t\telse:\n\t\t\t\tnotFilter = False\n\t\t\tif key not in filterObj.indexedFields:\n\t\t\t\traise ValueError('Field \"' + key + '\" is not in INDEXED_FIELDS array. Filtering is only supported on indexed fields.')\n\n\t\t\tif notFilter is False:\n\t\t\t\tfilterObj.filters.append( (key, value) )\n\t\t\telse:\n\t\t\t\tfilterObj.notFilters.append( (key, value) )\n\n\t\treturn filterObj"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nget the number of records matching the filter criteria", "response": "def count(self):\n\t\t'''\n\t\t\tcount - gets the number of records matching the filter criteria\n\n\t\t\tExample:\n\t\t\t\ttheCount = Model.objects.filter(field1='value').count()\n\t\t'''\n\t\tconn = self._get_connection()\n\t\t\n\t\tnumFilters = len(self.filters)\n\t\tnumNotFilters = len(self.notFilters)\n\t\tif numFilters + numNotFilters == 0:\n\t\t\treturn conn.scard(self._get_ids_key())\n\n\t\tif numNotFilters == 0:\n\t\t\tif numFilters == 1:\n\t\t\t\t(filterFieldName, filterValue) = self.filters[0]\n\t\t\t\treturn conn.scard(self._get_key_for_index(filterFieldName, filterValue))\n\t\t\tindexKeys = [self._get_key_for_index(filterFieldName, filterValue) for filterFieldName, filterValue in self.filters]\n\n\t\t\treturn len(conn.sinter(indexKeys))\n\n\t\tnotIndexKeys = [self._get_key_for_index(filterFieldName, filterValue) for filterFieldName, filterValue in self.notFilters]\n\t\tif 
numFilters == 0:\n\t\t\treturn len(conn.sdiff(self._get_ids_key(), *notIndexKeys))\n\n\t\tindexKeys = [self._get_key_for_index(filterFieldName, filterValue) for filterFieldName, filterValue in self.filters]\n\t\t\n\t\ttempKey = self._getTempKey()\n\t\tpipeline = conn.pipeline()\n\t\tpipeline.sinterstore(tempKey, *indexKeys)\n\t\tpipeline.sdiff(tempKey, *notIndexKeys)\n\t\tpipeline.delete(tempKey)\n\t\tpks = pipeline.execute()[1] # sdiff\n\n\t\treturn len(pks)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef exists(self, pk):\n\t\t'''\n\t\t\texists - Tests whether a record holding the given primary key exists.\n\n\t\t\t@param pk - Primary key (see getPk method)\n\n\t\t\tExample usage: Waiting for an object to be deleted without fetching the object or running a filter. \n\n\t\t\tThis is a very cheap operation.\n\n\t\t\t@return - True if object with given pk exists, otherwise False\n\t\t'''\n\t\tconn = self._get_connection()\n\t\tkey = self._get_key_for_id(pk)\n\t\treturn conn.exists(key)", "response": "Tests whether a record with the given pk exists."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef getPrimaryKeys(self, sortByAge=False):\n\t\t'''\n\t\t\tgetPrimaryKeys - Returns all primary keys matching current filterset.\n\n\t\t\t@param sortByAge - If False, return will be a set and may not be ordered.\n\t\t\t\tIf True, return will be a list and is guaranteed to represent objects oldest->newest\n\n\t\t\t@return - A set of all primary keys associated with current filters.\n\t\t'''\n\t\tconn = self._get_connection()\n\t\t# Apply filters, and return object\n\t\tnumFilters = len(self.filters)\n\t\tnumNotFilters = len(self.notFilters)\n\n\t\tif numFilters + numNotFilters == 0:\n\t\t\t# No filters, get all.\n\t\t\tconn = self._get_connection()\n\t\t\tmatchedKeys = conn.smembers(self._get_ids_key())\n\n\t\telif numNotFilters == 0:\n\t\t\t# Only 
Inclusive\n\t\t\tif numFilters == 1:\n\t\t\t\t# Only one filter, get members of that index key\n\t\t\t\t(filterFieldName, filterValue) = self.filters[0]\n\t\t\t\tmatchedKeys = conn.smembers(self._get_key_for_index(filterFieldName, filterValue))\n\t\t\telse:\n\t\t\t\t# Several filters, intersect the index keys\n\t\t\t\tindexKeys = [self._get_key_for_index(filterFieldName, filterValue) for filterFieldName, filterValue in self.filters]\n\t\t\t\tmatchedKeys = conn.sinter(indexKeys)\n\n\t\telse:\n\t\t\t# Some negative filters present\n\t\t\tnotIndexKeys = [self._get_key_for_index(filterFieldName, filterValue) for filterFieldName, filterValue in self.notFilters]\n\t\t\tif numFilters == 0:\n\t\t\t\t# Only negative, diff against all keys\n\t\t\t\tmatchedKeys = conn.sdiff(self._get_ids_key(), *notIndexKeys)\n\t\t\telse:\n\t\t\t\t# Negative and positive. Use pipeline, find all positive intersections, and remove negative matches\n\t\t\t\tindexKeys = [self._get_key_for_index(filterFieldName, filterValue) for filterFieldName, filterValue in self.filters]\n\t\t\t\t\n\t\t\t\ttempKey = self._getTempKey()\n\t\t\t\tpipeline = conn.pipeline()\n\t\t\t\tpipeline.sinterstore(tempKey, *indexKeys)\n\t\t\t\tpipeline.sdiff(tempKey, *notIndexKeys)\n\t\t\t\tpipeline.delete(tempKey)\n\t\t\t\tmatchedKeys = pipeline.execute()[1] # sdiff\n\n\n\t\tmatchedKeys = [ int(_key) for _key in matchedKeys ]\n\n\t\tif sortByAge is False:\n\t\t\treturn list(matchedKeys)\n\t\telse:\n\t\t\tmatchedKeys = list(matchedKeys)\n\t\t\tmatchedKeys.sort()\n\n\t\t\treturn matchedKeys", "response": "Returns all primary keys associated with current filterset."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef all(self, cascadeFetch=False):\n\t\t'''\n\t\t\tall - Get the underlying objects which match the filter criteria.\n\n\t\t\tExample: objs = Model.objects.filter(field1='value', field2='value2').all()\n\n\t\t\t@param cascadeFetch Default False, If True, all 
Foreign objects associated with this model\n\t\t\t will be fetched immediately. If False, foreign objects will be fetched on-access.\n\n\t\t\t@return - Objects of the Model instance associated with this query.\n\t\t'''\n\t\tmatchedKeys = self.getPrimaryKeys()\n\t\tif matchedKeys:\n\t\t\treturn self.getMultiple(matchedKeys, cascadeFetch=cascadeFetch)\n\n\t\treturn IRQueryableList([], mdl=self.mdl)", "response": "Returns a list of all Model instances that match the filter criteria."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning a list of all Model instances that match the filter criteria ordered oldest - > newest - > oldest", "response": "def allByAge(self, cascadeFetch=False):\n\t\t'''\n\t\t\tallByAge - Get the underlying objects which match the filter criteria, ordered oldest -> newest\n\t\t\t\tIf you are doing a queue or just need the head/tail, consider .first() and .last() instead.\n\n\n\t\t\t@param cascadeFetch Default False, If True, all Foreign objects associated with this model\n\t\t\t will be fetched immediately. If False, foreign objects will be fetched on-access.\n\n\t\t\t@return - Objects of the Model instance associated with this query, sorted oldest->newest\n\t\t'''\n\t\tmatchedKeys = self.getPrimaryKeys(sortByAge=True)\n\t\tif matchedKeys:\n\t\t\treturn self.getMultiple(matchedKeys, cascadeFetch=cascadeFetch)\n\n\t\treturn IRQueryableList([], mdl=self.mdl)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef allOnlyFields(self, fields, cascadeFetch=False):\n\t\t'''\n\t\t\tallOnlyFields - Get the objects which match the filter criteria, only fetching given fields.\n\n\t\t\t@param fields - List of fields to fetch\n\n\t\t\t@param cascadeFetch Default False, If True, all Foreign objects associated with this model\n\t\t\t will be fetched immediately. 
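Across filter(), count(), and getPrimaryKeys() above, the query engine reduces to three Redis set operations: SMEMBERS when there are no filters, SINTER across the positive index keys, and SDIFF to strip negative matches. Below is a pure-Python sketch of how those operations combine — plain dicts of sets stand in for Redis keys, and the function name and data are illustrative, not part of the library:

```python
def matched_pks(all_ids, index, filters, not_filters):
    """Combine positive and negative index filters the way the query logic above does.

    all_ids     - set of every primary key (the 'ids' key)
    index       - dict mapping (field, value) -> set of pks (one Redis set per index key)
    filters     - [(field, value), ...] that must match       -> SINTER
    not_filters - [(field, value), ...] that must NOT match   -> SDIFF
    """
    if filters:
        # SINTER across all positive index keys
        matched = set.intersection(*(index.get(f, set()) for f in filters))
    else:
        # No positive filters: start from every id (SMEMBERS of the ids key)
        matched = set(all_ids)
    for f in not_filters:
        # SDIFF away each negative index key
        matched = matched - index.get(f, set())
    return matched
```

For example, with `index = {('color', 'red'): {1, 2, 3}, ('size', 'big'): {2, 3, 4}}`, filtering on both keys intersects to `{2, 3}`, while a lone `__ne` filter subtracts its set from all ids — mirroring the sinterstore/sdiff pipeline in count() above.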
If False, foreign objects will be fetched on-access.\n\n\n\t\t\t@return - Partial objects with only the given fields fetched\n\t\t'''\n\t\tmatchedKeys = self.getPrimaryKeys()\n\t\tif matchedKeys:\n\t\t\treturn self.getMultipleOnlyFields(matchedKeys, fields, cascadeFetch=cascadeFetch)\n\n\t\treturn IRQueryableList([], mdl=self.mdl)", "response": "Returns the list of objects with only the given fields."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef allOnlyIndexedFields(self):\n\t\t'''\n\t\t\tallOnlyIndexedFields - Get the objects which match the filter criteria, only fetching indexed fields.\n\n\t\t\t@return - Partial objects with only the indexed fields fetched\n\t\t'''\n\t\tmatchedKeys = self.getPrimaryKeys()\n\t\tif matchedKeys:\n\t\t\treturn self.getMultipleOnlyIndexedFields(matchedKeys)\n\n\t\treturn IRQueryableList([], mdl=self.mdl)", "response": "Returns partial objects matching the filter criteria, with only the indexed fields fetched."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef first(self, cascadeFetch=False):\n\t\t'''\n\t\t\tFirst - Returns the oldest record (lowest primary key) with current filters.\n\t\t\t\tThis makes an efficient queue, as it only fetches a single object.\n\t\t\n\n\t\t\t@param cascadeFetch Default False, If True, all Foreign objects associated with this model\n\t\t\t will be fetched immediately. 
If False, foreign objects will be fetched on-access.\n\n\t\t\t@return - Instance of Model object, or None if no items match current filters\n\t\t'''\n\t\tobj = None\n\n\t\tmatchedKeys = self.getPrimaryKeys(sortByAge=True)\n\t\tif matchedKeys:\n\t\t\t# Loop so we don't return None when there are items, if item is deleted between getting key and getting obj\n\t\t\twhile matchedKeys and obj is None:\n\t\t\t\tobj = self.get(matchedKeys.pop(0), cascadeFetch=cascadeFetch)\n\n\t\treturn obj", "response": "Returns the oldest record (lowest primary key) matching the current filters, or None."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef random(self, cascadeFetch=False):\n\t\t'''\n\t\t\tRandom - Returns a random record in current filterset.\n\n\n\t\t\t@param cascadeFetch Default False, If True, all Foreign objects associated with this model\n\t\t\t will be fetched immediately. If False, foreign objects will be fetched on-access.\n\n\t\t\t@return - Instance of Model object, or None if no items match current filters\n\t\t'''\n\t\tmatchedKeys = list(self.getPrimaryKeys())\n\t\tobj = None\n\t\t# Loop so we don't return None when there are items, if item is deleted between getting key and getting obj\n\t\twhile matchedKeys and not obj:\n\t\t\tkey = matchedKeys.pop(random.randint(0, len(matchedKeys)-1))\n\t\t\tobj = self.get(key, cascadeFetch=cascadeFetch)\n\n\t\treturn obj", "response": "Returns a random record in current filterset."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ndeletes all entries matching the filter criteria, or destroys the whole model if no filters are set.", "response": "def delete(self):\n\t\t'''\n\t\t\tdelete - Deletes all entries matching the filter criteria\n\n\t\t'''\n\t\tif self.filters or self.notFilters:\n\t\t\treturn self.mdl.deleter.deleteMultiple(self.allOnlyIndexedFields())\n\t\treturn self.mdl.deleter.destroyModel()"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the 
following Python 3 function doing\ndef get(self, pk, cascadeFetch=False):\n\t\t'''\n\t\t\tget - Get a single value with the internal primary key.\n\n\n\t\t\t@param cascadeFetch Default False, If True, all Foreign objects associated with this model\n\t\t\t will be fetched immediately. If False, foreign objects will be fetched on-access.\n\n\t\t\t@param pk - internal primary key (can be found via .getPk() on an item)\n\t\t'''\n\t\tconn = self._get_connection()\n\t\tkey = self._get_key_for_id(pk)\n\t\tres = conn.hgetall(key)\n\t\tif type(res) != dict or not len(res.keys()):\n\t\t\treturn None\n\t\tres['_id'] = pk\n\n\t\tret = self._redisResultToObj(res)\n\t\tif cascadeFetch is True:\n\t\t\tself._doCascadeFetch(ret)\n\t\treturn ret", "response": "Get a single value with the internal primary key."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef reindex(self):\n\t\t'''\n\t\t\treindex - Reindexes the objects matching current filterset. Use this if you add/remove a field to INDEXED_FIELDS.\n\n\t\t\t NOTE - This will NOT remove entries from the old index if you change index type, or change decimalPlaces on a\n\t\t\t IRFixedPointField. 
To correct these indexes, you'll need to run:\n\n\t\t\t Model.reset(Model.objects.all())\n\n\t\t\tIf you change the value of \"hashIndex\" on a field, you need to call #compat_convertHashedIndexes instead.\n\t\t'''\n\t\tobjs = self.all()\n\t\tsaver = IndexedRedisSave(self.mdl)\n\t\tsaver.reindex(objs)", "response": "Reindexes the objects matching the current filterset."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef compat_convertHashedIndexes(self, fetchAll=True):\n\t\t'''\n\t\t\tcompat_convertHashedIndexes - Reindex fields, used for when you change the property \"hashIndex\" on one or more fields.\n\n\t\t\tFor each field, this will delete both the hash and unhashed keys to an object, \n\t\t\t and then save a hashed or unhashed value, depending on that field's value for \"hashIndex\".\n\n\t\t\tFor an IndexedRedisModel class named \"MyModel\", call as \"MyModel.objects.compat_convertHashedIndexes()\"\n\n\t\t\tNOTE: This works one object at a time (regardless of #fetchAll), so that an unhashable object does not trash all data.\n\n\t\t\tThis method is intended to be used while your application is offline,\n\t\t\t as it doesn't make sense to be changing your model while applications are actively using it.\n\n\t\t\t@param fetchAll , Default True - If True, all objects will be fetched first, then converted.\n\t\t\t This is generally what you want to do, as it is more efficient. If you are memory constrained,\n\t\t\t you can set this to \"False\", and it will fetch one object at a time, convert it, and save it back.\n\n\t\t'''\n\n\t\tsaver = IndexedRedisSave(self.mdl)\n\n\t\tif fetchAll is True:\n\t\t\tobjs = self.all()\n\t\t\tsaver.compat_convertHashedIndexes(objs)\n\t\telse:\n\t\t\tdidWarnOnce = False\n\n\t\t\tpks = self.getPrimaryKeys()\n\t\t\tfor pk in pks:\n\t\t\t\tobj = self.get(pk)\n\t\t\t\tif not obj:\n\t\t\t\t\tif didWarnOnce is False:\n\t\t\t\t\t\tsys.stderr.write('WARNING(once)! 
An object (type=%s , pk=%d) disappeared while ' \\\n\t\t\t\t\t\t\t'running compat_convertHashedIndexes! This probably means an application ' \\\n\t\t\t\t\t\t\t'is using the model while converting indexes. This is a very BAD IDEA (tm).' %( self.mdl.__name__, pk, ) )\n\t\t\t\t\t\t\n\t\t\t\t\t\tdidWarnOnce = True\n\t\t\t\t\tcontinue\n\t\t\t\tsaver.compat_convertHashedIndexes([obj])", "response": "Reindex fields after the hashIndex property has been changed on one or more fields, deleting stale hashed and unhashed index entries and saving the correct form for each object."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef save(self, obj, usePipeline=True, forceID=False, cascadeSave=True, conn=None):\n\t\t'''\n\t\t\tsave - Save an object / objects associated with this model. \n\t\t\t\n\t\t\tYou probably want to just do object.save() instead of this, but to save multiple objects at once in a single transaction, \n\t\t\t you can use:\n\t\t\t\t\n\t\t\t\tMyModel.saver.save(myObjs)\n\n\t\t\t@param obj - The object to save, or a list of objects to save\n\n\t\t\t@param usePipeline - Use a pipeline for saving. You should always want this, unless you are calling this function from within an existing pipeline.\n\n\t\t\t@param forceID - if not False, force ID to this. If obj is list, this is also list. Forcing IDs also forces insert. Up to you to ensure ID will not clash.\n\t\t\t@param cascadeSave Default True - If True, any Foreign models linked as attributes that have been altered\n\t\t\t or created will be saved with this object. 
If False, only this object (and the reference to an already-saved foreign model) will be saved.\n\n\t\t\t@param conn - A connection or None\n\n\t\t\t@note - if no ID is specified, the object is treated as an insert and the next available ID is assigned to it\n\n\t\t\t@return - List of pks\n\t\t'''\n\t\tif conn is None:\n\t\t\tconn = self._get_connection()\n\n\t\t# If we are in a pipeline, we need an external connection to fetch any potential IDs for inserts.\n\t\tif usePipeline is True:\n\t\t\tidConn = conn\n\t\telse:\n\t\t\tidConn = self._get_new_connection()\n\n\t\tif issubclass(obj.__class__, (list, tuple)):\n\t\t\tobjs = obj\n\t\telse:\n\t\t\tobjs = [obj]\n\n\n\t\tif usePipeline is True:\n\t\t\tpipeline = conn.pipeline()\n\t\telse:\n\t\t\tpipeline = conn\n\n\t\toga = object.__getattribute__\n\n\t\tif cascadeSave is True:\n\n\t\t\t# TODO: Confirm that this pipeline logic works even when doPipeline is False\n\t\t\t# (i.e. that cascading works through calls to reset)\n#\t\t\tforeignPipelines = OrderedDict()\n\t\t\tforeignSavers = {}\n\n\t\t\tfor thisObj in objs:\n\t\t\t\tif not thisObj.foreignFields:\n\t\t\t\t\tcontinue\n\n\t\t\t\tforeignFields = thisObj.foreignFields\n\t\t\t\tfor foreignField in foreignFields:\n\n\t\t\t\t\trawObj = oga(thisObj, str(foreignField))\n\n\t\t\t\t\tif rawObj in (None, irNull) or not rawObj.isFetched():\n\t\t\t\t\t\tcontinue\n\n\t\t\t\t\tforeignObjects = oga(thisObj, str(foreignField)).getObjs()\n\n\t\t\t\t\tfor foreignObject in foreignObjects:\n\t\t\t\t\t\tdoSaveForeign = False\n\t\t\t\t\t\tif getattr(foreignObject, '_id', None):\n\t\t\t\t\t\t\tif foreignObject.hasUnsavedChanges(cascadeObjects=True):\n\t\t\t\t\t\t\t\tdoSaveForeign = True\n\t\t\t\t\t\telse:\n\t\t\t\t\t\t\tdoSaveForeign = True\n\n\t\t\t\t\t\t# OLD:\n\t\t\t\t\t\t# Assemble each level of Foreign fields into an ordered pipeline. 
Based on semi-recursion,\n\t\t\t\t\t\t# we will save the deepest level first in a pipeline, then the next up, on until we complete any subs\n\n\t\t\t\t\t\t# NEW:\n\t\t\t\t\t\t# Assemble all foreign fields into current pipeline and execute all in one block\n\t\t\t\t\t\tif doSaveForeign is True:\n\t\t\t\t\t\t\tif foreignField not in foreignSavers:\n#\t\t\t\t\t\t\t\tforeignPipelines[foreignField] = self._get_new_connection().pipeline()\n\t\t\t\t\t\t\t\tforeignSavers[foreignField] = IndexedRedisSave(foreignObject.__class__)\n\n\t\t\t\t\t\t\t#foreignSavers[foreignField].save(foreignObject, usePipeline=False, cascadeSave=True, conn=foreignPipelines[foreignField])\n\t\t\t\t\t\t\tforeignSavers[foreignField].save(foreignObject, usePipeline=False, cascadeSave=True, conn=pipeline)\n\n#\t\t\tif foreignPipelines:\n#\t\t\t\tfor foreignPipeline in foreignPipelines.values():\n#\t\t\t\t\tforeignPipeline.execute()\n\n\t\t\t\t\n\n\t\tobjsLen = len(objs)\n\n\t\tif forceID is not False:\n\t\t\t# Compat with old poor design.. 
:(\n\t\t\tif isinstance(forceID, (list, tuple)):\n\t\t\t\tforceIDs = forceID\n\t\t\telse:\n\t\t\t\tforceIDs = [forceID]\n\t\t\tisInserts = [] \n\t\t\ti = 0\n\t\t\twhile i < objsLen:\n\t\t\t\tif forceIDs[i] is not False:\n\t\t\t\t\tobjs[i]._id = forceIDs[i]\n\t\t\t\t\tisInserts.append(True)\n\t\t\t\telse:\n\t\t\t\t\tisInsert = not bool(getattr(objs[i], '_id', None))\n\t\t\t\t\tif isInsert is True:\n\t\t\t\t\t\tobjs[i]._id = self._getNextID(idConn)\n\t\t\t\t\tisInserts.append(isInsert)\n\t\t\t\ti += 1\n\t\telse:\n\t\t\tisInserts = []\n\t\t\tfor obj in objs:\n\t\t\t\tisInsert = not bool(getattr(obj, '_id', None))\n\t\t\t\tif isInsert is True:\n\t\t\t\t\tobj._id = self._getNextID(idConn)\n\t\t\t\tisInserts.append(isInsert)\n\t\t\t\t\n\n\t\tids = [] # Note ids can be derived with all information above..\n\t\ti = 0\n\t\twhile i < objsLen:\n\t\t\tself._doSave(objs[i], isInserts[i], conn, pipeline)\n\t\t\tids.append(objs[i]._id)\n\t\t\ti += 1\n\n\t\tif usePipeline is True:\n\t\t\tpipeline.execute()\n\n\t\treturn ids", "response": "Save an object or a list of objects associated with this model."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _doSave(self, obj, isInsert, conn, pipeline=None):\n\t\t'''\n\t\t\t_doSave - Internal function to save a single object. Don't call this directly. \n\t\t\t Use \"save\" instead.\n\n\t\t\t If a pipeline is provided, the operations (setting values, updating indexes, etc)\n\t\t\t will be queued into that pipeline.\n\t\t\t Otherwise, everything will be executed right away.\n\n\t\t\t @param obj - Object to save\n\t\t\t @param isInsert - Bool, if insert or update. Either way, obj._id is expected to be set.\n\t\t\t @param conn - Redis connection\n\t\t\t @param pipeline - Optional pipeline, if present the items will be queued onto it. 
Otherwise, go directly to conn.\n\t\t'''\n\n\t\tif pipeline is None:\n\t\t\tpipeline = conn\n\n\t\tnewDict = obj.asDict(forStorage=True)\n\t\tkey = self._get_key_for_id(obj._id)\n\n\t\tif isInsert is True:\n\t\t\tfor thisField in self.fields:\n\n\t\t\t\tfieldValue = newDict.get(thisField, thisField.getDefaultValue())\n\n\t\t\t\tpipeline.hset(key, thisField, fieldValue)\n\n\t\t\t\t# Update origData with the new data\n\t\t\t\tif fieldValue == IR_NULL_STR:\n\t\t\t\t\tobj._origData[thisField] = irNull\n\t\t\t\telse:\n\t\t\t\t\tobj._origData[thisField] = object.__getattribute__(obj, str(thisField))\n\n\t\t\tself._add_id_to_keys(obj._id, pipeline)\n\n\t\t\tfor indexedField in self.indexedFields:\n\t\t\t\tself._add_id_to_index(indexedField, obj._id, obj._origData[indexedField], pipeline)\n\t\telse:\n\t\t\tupdatedFields = obj.getUpdatedFields()\n\t\t\tfor thisField, fieldValue in updatedFields.items():\n\t\t\t\t(oldValue, newValue) = fieldValue\n\n\t\t\t\toldValueForStorage = thisField.toStorage(oldValue)\n\t\t\t\tnewValueForStorage = thisField.toStorage(newValue)\n\n\t\t\t\tpipeline.hset(key, thisField, newValueForStorage)\n\n\t\t\t\tif thisField in self.indexedFields:\n\t\t\t\t\tself._rem_id_from_index(thisField, obj._id, oldValueForStorage, pipeline)\n\t\t\t\t\tself._add_id_to_index(thisField, obj._id, newValueForStorage, pipeline)\n\n\t\t\t\t# Update origData with the new data\n\t\t\t\tobj._origData[thisField] = newValue", "response": "Internal function to save a single object."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef reindex(self, objs, conn=None):\n\t\t'''\n\t\t\treindex - Reindexes a given list of objects. 
Probably you want to do Model.objects.reindex() instead of this directly.\n\n\t\t\t@param objs list - List of objects to reindex\n\t\t\t@param conn - Specific Redis connection or None to reuse\n\t\t'''\n\t\tif conn is None:\n\t\t\tconn = self._get_connection()\n\n\t\tpipeline = conn.pipeline()\n\n\t\tobjDicts = [obj.asDict(True, forStorage=True) for obj in objs]\n\n\t\tfor indexedFieldName in self.indexedFields:\n\t\t\tfor objDict in objDicts:\n\t\t\t\tself._rem_id_from_index(indexedFieldName, objDict['_id'], objDict[indexedFieldName], pipeline)\n\t\t\t\tself._add_id_to_index(indexedFieldName, objDict['_id'], objDict[indexedFieldName], pipeline)\n\n\t\tpipeline.execute()", "response": "Reindexes a list of objects in the index."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ndelete one object from the database.", "response": "def deleteOne(self, obj, conn=None):\n\t\t'''\n\t\t\tdeleteOne - Delete one object\n\n\t\t\t@param obj - object to delete\n\t\t\t@param conn - Connection to reuse, or None\n\n\t\t\t@return - number of items deleted (0 or 1)\n\t\t'''\n\t\tif not getattr(obj, '_id', None):\n\t\t\treturn 0\n\n\t\tif conn is None:\n\t\t\tconn = self._get_connection()\n\t\t\tpipeline = conn.pipeline()\n\t\t\texecuteAfter = True\n\t\telse:\n\t\t\tpipeline = conn # In this case, we are inheriting a pipeline\n\t\t\texecuteAfter = False\n\t\t\n\t\tpipeline.delete(self._get_key_for_id(obj._id))\n\t\tself._rem_id_from_keys(obj._id, pipeline)\n\t\tfor indexedFieldName in self.indexedFields:\n\t\t\tself._rem_id_from_index(indexedFieldName, obj._id, obj._origData[indexedFieldName], pipeline)\n\n\t\tobj._id = None\n\n\t\tif executeAfter is True:\n\t\t\tpipeline.execute()\n\n\t\treturn 1"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef deleteByPk(self, pk):\n\t\t'''\n\t\t\tdeleteByPk - Delete object associated with given primary key\n\t\t'''\n\t\tobj = self.mdl.objects.getOnlyIndexedFields(pk)\n\t\tif 
not obj:\n\t\t\treturn 0\n\t\treturn self.deleteOne(obj)", "response": "Delete object associated with given primary key."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef deleteMultiple(self, objs):\n\t\t'''\n\t\t\tdeleteMultiple - Delete multiple objects\n\n\t\t\t@param objs - List of objects\n\n\t\t\t@return - Number of objects deleted\n\t\t'''\n\t\tconn = self._get_connection()\n\t\tpipeline = conn.pipeline()\n\n\t\tnumDeleted = 0\n\n\t\tfor obj in objs:\n\t\t\tnumDeleted += self.deleteOne(obj, pipeline)\n\n\t\tpipeline.execute()\n\n\t\treturn numDeleted", "response": "Delete multiple objects in the database."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ndeleting multiple objects given their primary keys.", "response": "def deleteMultipleByPks(self, pks):\n\t\t'''\n\t\t\tdeleteMultipleByPks - Delete multiple objects given their primary keys\n\n\t\t\t@param pks - List of primary keys\n\n\t\t\t@return - Number of objects deleted\n\t\t'''\n\t\tif type(pks) == set:\n\t\t\tpks = list(pks)\n\n\t\tif len(pks) == 1:\n\t\t\treturn self.deleteByPk(pks[0])\n\n\t\tobjs = self.mdl.objects.getMultipleOnlyIndexedFields(pks)\n\t\treturn self.deleteMultiple(objs)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef string(html, start_on=None, ignore=(), use_short=True, **queries):\n '''Returns a blox template from an html string'''\n if use_short:\n html = grow_short(html)\n return _to_template(fromstring(html), start_on=start_on,\n ignore=ignore, **queries)", "response": "Returns a blox template from an html string"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning a blox template from a file stream object", "response": "def file(file_object, start_on=None, ignore=(), use_short=True, **queries):\n '''Returns a blox template from a file stream object'''\n return string(file_object.read(), start_on=start_on, 
ignore=ignore, use_short=use_short, **queries)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef filename(file_name, start_on=None, ignore=(), use_short=True, **queries):\n '''Returns a blox template from a valid file path'''\n with open(file_name) as template_file:\n return file(template_file, start_on=start_on, ignore=ignore, use_short=use_short, **queries)", "response": "Returns a blox template from a valid file path"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef create_from_string(self, string, context=EMPTY_CONTEXT, *args, **kwargs):\n if not PY2 and not isinstance(string, bytes):\n raise TypeError(\"string should be an instance of bytes in Python 3\")\n\n io = StringIO(string)\n instance = self.create_from_stream(io, context, *args, **kwargs)\n io.close()\n return instance", "response": "Deserializes a new instance from a string."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef require(method):\n def decorator(func):\n @wraps(func)\n def wrapper(*args, **kwargs):\n # throw exception if input class doesn't have requirement\n if not hasattr(args[0], method):\n raise AssertionError('{} class has no method {}()'.format(args[0].__class__.__name__, method))\n\n # create property to record that method has been called\n callmethod = method + '_called'\n if not hasattr(args[0], callmethod):\n setattr(args[0], callmethod, False)\n\n # call the method if it hasn't yet been called\n if not getattr(args[0], callmethod):\n getattr(args[0], method)()\n setattr(args[0], callmethod, True)\n return func(*args, **kwargs)\n return wrapper\n return decorator", "response": "Decorator for managing chained dependencies of different class\n properties."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef exception(exception):\n def decorator(func):\n 
@wraps(func)\n def wrapper(*args, **kwargs):\n try:\n return func(*args, **kwargs)\n except Exception as exe:\n raise raise_with_traceback(exception(exe))\n return wrapper\n return decorator", "response": "Decorator for any exception occurs during function execution."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\naccumulate all dictionary and named arguments as keyword argument dictionary. This is generally useful for functions that try to automatically resolve inputs. Examples: >>> @keywords >>> def test(*args, **kwargs): >>> return kwargs >>> >>> print test({'one': 1}, two=2) {'one': 1, 'two': 2}", "response": "def keywords(func):\n \"\"\"\n Accumulate all dictionary and named arguments as\n keyword argument dictionary. This is generally useful for\n functions that try to automatically resolve inputs.\n\n Examples:\n >>> @keywords\n >>> def test(*args, **kwargs):\n >>> return kwargs\n >>>\n >>> print test({'one': 1}, two=2)\n {'one': 1, 'two': 2}\n \"\"\"\n @wraps(func)\n def decorator(*args, **kwargs):\n idx = 0 if inspect.ismethod(func) else 1\n if len(args) > idx:\n if isinstance(args[idx], (dict, composite)):\n for key in args[idx]:\n kwargs[key] = args[idx][key]\n args = args[:idx]\n return func(*args, **kwargs)\n return decorator"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn the module used for compression on this field.", "response": "def getCompressMod(self):\n\t\t'''\n\t\t\tgetCompressMod - Return the module used for compression on this field\n\n\t\t\t@return - The module for compression\n\t\t'''\n\t\tif self.compressMode == COMPRESS_MODE_ZLIB:\n\t\t\treturn zlib\n\t\tif self.compressMode == COMPRESS_MODE_BZ2:\n\t\t\treturn bz2\n\t\tif self.compressMode == COMPRESS_MODE_LZMA:\n\t\t\t# Since lzma is not provided by python core in python2, search out some common alternatives.\n\t\t\t# Throw exception if we can find no lzma implementation.\n\t\t\tglobal _lzmaMod\n\t\t\tif _lzmaMod is not 
None:\n\t\t\t\treturn _lzmaMod\n\t\t\ttry:\n\t\t\t\timport lzma\n\t\t\t\t_lzmaMod = lzma\n\t\t\t\treturn _lzmaMod\n\t\t\texcept:\n\t\t\t\t# Python2 does not provide \"lzma\" module, search for common alternatives\n\t\t\t\ttry:\n\t\t\t\t\tfrom backports import lzma\n\t\t\t\t\t_lzmaMod = lzma\n\t\t\t\t\treturn _lzmaMod\n\t\t\t\texcept:\n\t\t\t\t\tpass\n\t\t\t\ttry:\n\t\t\t\t\timport lzmaffi as lzma\n\t\t\t\t\t_lzmaMod = lzma\n\t\t\t\t\treturn _lzmaMod\n\t\t\t\texcept:\n\t\t\t\t\tpass\n\t\t\t\traise ImportError(\"Requested compress mode is lzma and could not find a module providing lzma support. Tried: 'lzma', 'backports.lzma', 'lzmaffi' and none of these were available. Please install one of these, or to use an unlisted implementation, set IndexedRedis.fields.compressed._lzmaMod to the module (must implement standard python compression interface)\")"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef output(self, to=None, *args, **kwargs):\n '''Outputs the set text'''\n to.write(str(self._value))", "response": "Outputs the set text to the given file object."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef toBytes(self, value):\n\t\t'''\n\t\t\ttoBytes - Convert a value to bytes using the encoding specified on this field\n\n\t\t\t@param value - The field to convert to bytes\n\n\t\t\t@return - The object encoded using the codec specified on this field.\n\n\t\t\tNOTE: This method may go away.\n\t\t'''\n\t\tif type(value) == bytes:\n\t\t\treturn value\n\t\treturn value.encode(self.getEncoding())", "response": "Convert a value to bytes using the specified encoding"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef ConstField(name, value, marshal=None):\n if marshal is None:\n marshal = value\n if isinstance(marshal, Struct):\n marshal = type(marshal)\n elif not isinstance(marshal, 
Marshal):\n raise InstructError(\"don't know how to serialize const field %s value %s (consider adding a marshal argument)\" %\n (name, value))\n return OrigConstField(name, marshal, value)", "response": "This function creates a constant field with the given name and value."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nlike functools. partial but keeps the old ones.", "response": "def keep_kwargs_partial(func, *args, **keywords):\n \"\"\"Like functools.partial but instead of using the new kwargs, keeps the old ones.\"\"\"\n def newfunc(*fargs, **fkeywords):\n newkeywords = fkeywords.copy()\n newkeywords.update(keywords)\n return func(*(args + fargs), **newkeywords)\n newfunc.func = func\n newfunc.args = args\n newfunc.keywords = keywords\n return newfunc"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nserialize this NoteItem to a byte - stream and writes it to the file - like object stream.", "response": "def dump(self, stream, contentType=None, version=None):\n '''\n Serializes this NoteItem to a byte-stream and writes it to the\n file-like object `stream`. 
`contentType` and `version` must be one\n of the supported content-types, and if not specified, will default\n to ``text/plain``.\n '''\n if contentType is None or contentType == constants.TYPE_TEXT_PLAIN:\n stream.write(self.body)\n return\n if contentType == constants.TYPE_SIF_NOTE:\n root = ET.Element('note')\n # TODO: check `version`...\n ET.SubElement(root, 'SIFVersion').text = '1.1'\n if self.name is not None:\n ET.SubElement(root, 'Subject').text = self.name\n if self.body is not None:\n ET.SubElement(root, 'Body').text = self.body\n for name, values in self.extensions.items():\n for value in values:\n ET.SubElement(root, name).text = value\n ET.ElementTree(root).write(stream)\n return\n raise common.InvalidContentType('cannot serialize NoteItem to \"%s\"' % (contentType,))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncreate a NoteItem from the specified file - like stream.", "response": "def load(cls, stream, contentType=None, version=None):\n '''\n Reverses the effects of the :meth:`dump` method, creating a NoteItem\n from the specified file-like `stream` object.\n '''\n if contentType is None or contentType == constants.TYPE_TEXT_PLAIN:\n data = stream.read()\n name = data.split('\\n')[0]\n # todo: localize?!...\n name = re.compile(r'^(title|name):\\s*', re.IGNORECASE).sub('', name).strip()\n return NoteItem(name=name, body=data)\n if contentType == constants.TYPE_SIF_NOTE:\n data = ET.parse(stream).getroot()\n ret = NoteItem(name=data.findtext('Subject'), body=data.findtext('Body'))\n for child in data:\n if child.tag in ('SIFVersion', 'Subject', 'Body'):\n continue\n ret.addExtension(child.tag, child.text)\n return ret\n raise common.InvalidContentType('cannot de-serialize NoteItem from \"%s\"' % (contentType,))"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncalls at the start of the notebook execution to setup the environment.", "response": "def setup_notebook(debug=False):\n 
\"\"\"Called at the start of notebook execution to setup the environment.\n\n This will configure bokeh, and setup the logging library to be\n reasonable.\"\"\"\n output_notebook(INLINE, hide_banner=True)\n if debug:\n _setup_logging(logging.DEBUG)\n logging.debug('Running notebook in debug mode.')\n else:\n _setup_logging(logging.WARNING)\n\n # If JUPYTERHUB_SERVICE_PREFIX environment variable isn't set,\n # this means that you're running JupyterHub not with Hub in k8s,\n # and not using run_local.sh (which sets it to empty).\n if 'JUPYTERHUB_SERVICE_PREFIX' not in os.environ:\n global jupyter_proxy_url\n jupyter_proxy_url = 'localhost:8888'\n logging.info('Setting jupyter proxy to local mode.')"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef overview():\n range_search = RangeSearch()\n ranges = range_search.get_ranges()\n if ranges:\n formatted_ranges = []\n tags_lookup = {}\n for r in ranges:\n formatted_ranges.append({'mask': r.range})\n tags_lookup[r.range] = r.tags\n search = Host.search()\n search = search.filter('term', status='up')\n search.aggs.bucket('hosts', 'ip_range', field='address', ranges=formatted_ranges)\n response = search.execute()\n print_line(\"{0:<18} {1:<6} {2}\".format(\"Range\", \"Count\", \"Tags\"))\n print_line(\"-\" * 60)\n for entry in response.aggregations.hosts.buckets:\n print_line(\"{0:<18} {1:<6} {2}\".format(entry.key, entry.doc_count, tags_lookup[entry.key]))\n else:\n print_error(\"No ranges defined.\")", "response": "Creates an overview of the hosts per range."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nsplit a resource in filename directory tuple with taking care of external resources", "response": "def resource_qualifier(resource):\n \"\"\" Split a resource in (filename, directory) tuple with taking care of external resources\n\n :param resource: A file path or a URI\n :return: (Filename, Directory) for files, (URI, None) for 
URI\n \"\"\"\n if resource.startswith(\"//\") or resource.startswith(\"http\"):\n return resource, None\n else:\n return reversed(op.split(resource))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef create_hierarchy(hierarchy, level):\n if level not in hierarchy:\n hierarchy[level] = OrderedDict()\n return hierarchy[level]", "response": "Create an OrderedDict that is a dictionary of nested dicts."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngroups line reference together with their human readable version", "response": "def line_chunker(text, getreffs, lines=30):\n \"\"\" Groups line reference together\n\n :param text: Text object\n :type text: MyCapytains.resources.text.api\n :param getreffs: Callback function to retrieve text\n :type getreffs: function(level)\n :param lines: Number of lines to use by group\n :type lines: int\n :return: List of grouped urn references with their human readable version\n :rtype: [(str, str)]\n \"\"\"\n level = len(text.citation)\n source_reffs = [reff.split(\":\")[-1] for reff in getreffs(level=level)]\n reffs = []\n i = 0\n while i + lines - 1 < len(source_reffs):\n reffs.append(tuple([source_reffs[i]+\"-\"+source_reffs[i+lines-1], source_reffs[i]]))\n i += lines\n if i < len(source_reffs):\n reffs.append(tuple([source_reffs[i]+\"-\"+source_reffs[len(source_reffs)-1], source_reffs[i]]))\n return reffs"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef level_chunker(text, getreffs, level=1):\n references = getreffs(level=level)\n return [(ref.split(\":\")[-1], ref.split(\":\")[-1]) for ref in references]", "response": "Chunk a text at the passage level"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef teff(cluster):\n b_vs, _ = cluster.stars()\n teffs = []\n for b_v in b_vs:\n b_v -= 
cluster.eb_v\n if b_v > -0.04:\n x = (14.551 - b_v) / 3.684\n else:\n x = (3.402 - math.sqrt(0.515 + 1.376 * b_v)) / 0.688\n\n teffs.append(math.pow(10, x))\n return teffs", "response": "Calculate Teff for main sequence stars ranging from Teff 3500K - 8000K. Use\n [ Fe / H ] of the cluster."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns a list of color descriptions of the given teffs.", "response": "def color(teffs):\n \"\"\"\n Conventional color descriptions of stars.\n Source: https://en.wikipedia.org/wiki/Stellar_classification\n \"\"\"\n colors = []\n for t in teffs:\n if t >= 7500:\n colors.append('blue_white') # RGB:CAE1FF\n elif t >= 6000:\n colors.append('white') # RGB:F6F6F6\n elif t >= 5200:\n colors.append('yellowish_white') # RGB:FFFEB2\n elif t >= 3700:\n colors.append('pale_yellow_orange') # RGB:FFB28B\n else:\n colors.append('light_orange_red') # RGB:FF9966\n return colors"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ncreating a numpy. 
ndarray with all observed fields and computed teff and luminosity values.", "response": "def table(cluster):\n \"\"\"\n Create a numpy.ndarray with all observed fields and\n computed teff and luminosity values.\n \"\"\"\n teffs = teff(cluster)\n lums = luminosity(cluster)\n arr = cluster.to_array()\n i = 0\n for row in arr:\n row['lum'][0] = np.array([lums[i]], dtype='f')\n row['temp'][0] = np.array([teffs[i]], dtype='f')\n i += 1\n arr = round_arr_teff_luminosity(arr)\n return arr"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef round_arr_teff_luminosity(arr):\n arr['temp'] = np.around(arr['temp'], -1)\n arr['lum'] = np.around(arr['lum'], 3)\n return arr", "response": "Return the numpy array with rounded teff and luminosity columns."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef brutefore_passwords(ip, url, credentials, service):\n auth_requests = []\n for credential in credentials:\n split = credential.strip().split(':')\n username = split[0]\n password = ''\n if len(split) > 1:\n password = split[1]\n auth_requests.append(grequests.get(url, auth=(username, password)))\n results = grequests.map(auth_requests)\n for result in results:\n if result and result.status_code == 200:\n creds = result.request.headers['Authorization'].split(' ')[1]\n creds = base64.b64decode(creds).decode('utf-8')\n creds = creds.split(':')\n print_success(\"Found a password for tomcat: {0}:{1} at: {2}\".format(\n creds[0], creds[1], url))\n credential = Credential(secret=creds[1], username=creds[0], type='plaintext', access_level='administrator', service_id=service.id, host_ip=ip, description='Tomcat')\n credential.save()", "response": "Bruteforce function will try all the credentials at the same time"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nchecking the arguments to brutefore and spawns 
greenlets to perform the bruteforcing.", "response": "def main():\n \"\"\"\n Checks the arguments to brutefore and spawns greenlets to perform the bruteforcing.\n \"\"\"\n services = ServiceSearch()\n argparse = services.argparser\n argparse.add_argument('-f', '--file', type=str, help=\"File\")\n arguments = argparse.parse_args()\n\n if not arguments.file:\n print_error(\"Please provide a file with credentials seperated by ':'\")\n sys.exit()\n\n services = services.get_services(search=[\"Tomcat\"], up=True, tags=['!tomcat_brute'])\n\n credentials = []\n with open(arguments.file, 'r') as f:\n credentials = f.readlines()\n\n for service in services:\n print_notification(\"Checking ip:{} port {}\".format(service.address, service.port))\n url = 'http://{}:{}/manager/html'\n gevent.spawn(brutefore_passwords, service.address, url.format(service.address, service.port), credentials, service)\n service.add_tag('tomcat_brute')\n service.update(tags=service.tags)\n\n gevent.wait()\n # TODO fix stats\n Logger().log(\"tomcat_brute\", \"Performed tomcat bruteforce scan\", {'scanned_services': len(services)})"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _diagram(plot_figure, source=None, color='black', line_color='#444444',\n xaxis_label='B-V [mag]', yaxis_label='V [mag]', name=None):\n \"\"\"Use a :class:`~bokeh.plotting.figure.Figure` and x and y collections\n to create an H-R diagram.\n \"\"\"\n plot_figure.circle(x='x', y='y', source=source,\n size=5, color=color, alpha=1, name=name,\n line_color=line_color, line_width=0.5)\n plot_figure.xaxis.axis_label = xaxis_label\n plot_figure.yaxis.axis_label = yaxis_label\n plot_figure.yaxis.formatter = NumeralTickFormatter()", "response": "Create a H - R diagram."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncreate a H - R diagram using the cluster_name ; then show it.", "response": "def cc_diagram(cluster_name):\n \"\"\"Create a 
:class:`~bokeh.plotting.figure.Figure` to create an H-R\n diagram using the cluster_name; then show it.\n \"\"\"\n x, y = get_hr_data(cluster_name)\n y_range = [max(y) + 0.5, min(y) - 0.25]\n pf = figure(y_range=y_range, title=cluster_name)\n _diagram(x, y, pf)\n show_with_bokeh_server(pf)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef hr_diagram(cluster_name, output=None):\n cluster = get_hr_data(cluster_name)\n pf = hr_diagram_figure(cluster)\n show_with_bokeh_server(pf)", "response": "Create a H - R\n diagram using the cluster_name and show it."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef skyimage_figure(cluster):\n pf_image = figure(x_range=(0, 1), y_range=(0, 1),\n title='Image of {0}'.format(cluster.name))\n pf_image.image_url(url=[cluster.image_path],\n x=0, y=0, w=1, h=1, anchor='bottom_left')\n pf_image.toolbar_location = None\n pf_image.axis.visible = False\n return pf_image", "response": "Given a cluster create a Bokeh plot figure using the cluster s image."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef round_teff_luminosity(cluster):\n temps = [round(t, -1) for t in teff(cluster)]\n lums = [round(l, 3) for l in luminosity(cluster)]\n return temps, lums", "response": "Returns rounded teff and luminosity lists."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef hr_diagram_figure(cluster):\n temps, lums = round_teff_luminosity(cluster)\n x, y = temps, lums\n colors, color_mapper = hr_diagram_color_helper(temps)\n x_range = [max(x) + max(x) * 0.05, min(x) - min(x) * 0.05]\n source = ColumnDataSource(data=dict(x=x, y=y, color=colors))\n\n pf = figure(y_axis_type='log', x_range=x_range, name='hr',\n tools='box_select,lasso_select,reset,hover',\n title='H-R Diagram for {0}'.format(cluster.name))\n 
pf.select(BoxSelectTool).select_every_mousemove = False\n pf.select(LassoSelectTool).select_every_mousemove = False\n hover = pf.select(HoverTool)[0]\n hover.tooltips = [(\"Temperature (Kelvin)\", \"@x{0}\"),\n (\"Luminosity (solar units)\", \"@y{0.00}\")]\n _diagram(source=source, plot_figure=pf, name='hr',\n color={'field': 'color', 'transform': color_mapper},\n xaxis_label='Temperature (Kelvin)',\n yaxis_label='Luminosity (solar units)')\n return pf", "response": "Given a cluster create a Bokeh plot figure creating an H - R diagram."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncalculating the ranges of the H - R diagram.", "response": "def calculate_diagram_ranges(data):\n \"\"\"\n Given a numpy array calculate what the ranges of the H-R\n diagram should be.\n \"\"\"\n data = round_arr_teff_luminosity(data)\n temps = data['temp']\n x_range = [1.05 * np.amax(temps), .95 * np.amin(temps)]\n lums = data['lum']\n y_range = [.50 * np.amin(lums), 2 * np.amax(lums)]\n return (x_range, y_range)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ngive a numpy array create a Bokeh plot figure creating an H - R diagram.", "response": "def hr_diagram_from_data(data, x_range, y_range):\n \"\"\"\n Given a numpy array create a Bokeh plot figure creating an\n H-R diagram.\n \"\"\"\n _, color_mapper = hr_diagram_color_helper([])\n data_dict = {\n 'x': list(data['temperature']),\n 'y': list(data['luminosity']),\n 'color': list(data['color'])\n }\n source = ColumnDataSource(data=data_dict)\n pf = figure(y_axis_type='log', x_range=x_range, y_range=y_range)\n _diagram(source=source, plot_figure=pf,\n color={'field': 'color', 'transform': color_mapper},\n xaxis_label='Temperature (Kelvin)',\n yaxis_label='Luminosity (solar units)')\n show_with_bokeh_server(pf)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef cluster_text_input(cluster, title=None):\n if not title:\n title = 'Type 
in the name of your cluster and press Enter/Return:'\n return TextInput(value=cluster.name, title=title)", "response": "Create a TextInput with the cluster s name and title."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef hr_diagram_selection(cluster_name):\n cluster = get_hr_data(cluster_name)\n temps, lums = round_teff_luminosity(cluster)\n x, y = temps, lums\n colors, color_mapper = hr_diagram_color_helper(temps)\n x_range = [max(x) + max(x) * 0.05, min(x) - min(x) * 0.05]\n source = ColumnDataSource(data=dict(x=x, y=y, color=colors), name='hr')\n source_selected = ColumnDataSource(data=dict(x=[], y=[], color=[]),\n name='hr')\n pf = figure(y_axis_type='log', x_range=x_range,\n tools='lasso_select,reset',\n title='H-R Diagram for {0}'.format(cluster.name))\n _diagram(source=source, plot_figure=pf, name='hr', color={'field':\n 'color', 'transform': color_mapper},\n xaxis_label='Temperature (Kelvin)',\n yaxis_label='Luminosity (solar units)')\n pf_selected = figure(y_axis_type='log', y_range=pf.y_range,\n x_range=x_range,\n tools='reset',\n title='H-R Diagram for {0}'.format(cluster.name))\n _diagram(source=source_selected, plot_figure=pf_selected, name='hr',\n color={'field': 'color', 'transform': color_mapper},\n xaxis_label='Temperature (Kelvin)',\n yaxis_label='Luminosity (solar units)')\n source.callback = CustomJS(args=dict(source_selected=source_selected),\n code=\"\"\"\n var inds = cb_obj.selected['1d'].indices;\n var d1 = cb_obj.data;\n var d2 = source_selected.data;\n console.log(inds);\n d2['x'] = []\n d2['y'] = []\n d2['color'] = []\n for (i = 0; i < inds.length; i++) {\n d2['x'].push(d1['x'][inds[i]])\n d2['y'].push(d1['y'][inds[i]])\n d2['color'].push(d1['color'][inds[i]])\n }\n source_selected.change.emit();\n \"\"\")\n show_with_bokeh_server(row(pf, pf_selected))", "response": "Create two Bokeh diagrams based H - R selection. 
Given a cluster, create two linked H-R diagrams where points selected in the first are shown in the second."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _filter_cluster_data(self):\n min_temp = self.temperature_range_slider.value[0]\n max_temp = self.temperature_range_slider.value[1]\n temp_mask = np.logical_and(\n self.cluster.catalog['temperature'] >= min_temp,\n self.cluster.catalog['temperature'] <= max_temp\n )\n\n min_lum = self.luminosity_range_slider.value[0]\n max_lum = self.luminosity_range_slider.value[1]\n lum_mask = np.logical_and(\n self.cluster.catalog['luminosity'] >= min_lum,\n self.cluster.catalog['luminosity'] <= max_lum\n )\n\n selected_mask = np.isin(self.cluster.catalog['id'], self.selection_ids)\n\n filter_mask = temp_mask & lum_mask & selected_mask\n self.filtered_data = self.cluster.catalog[filter_mask].data\n\n self.source.data = {\n 'id': list(self.filtered_data['id']),\n 'temperature': list(self.filtered_data['temperature']),\n 'luminosity': list(self.filtered_data['luminosity']),\n 'color': list(self.filtered_data['color'])\n }\n\n logging.debug(\"Selected data is now: %s\", self.filtered_data)", "response": "Filter the cluster data catalog into the filtered_data catalog, which is what is shown in the H-R diagram."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncreate a tempfile, start the editor, and return the data afterwards.", "response": "def modify_data(data):\n \"\"\"\n Creates a tempfile and starts the given editor, returns the data afterwards.\n \"\"\"\n with tempfile.NamedTemporaryFile('w') as f:\n for entry in data:\n f.write(json.dumps(entry.to_dict(\n include_meta=True),\n default=datetime_handler))\n f.write('\\n')\n f.flush()\n print_success(\"Starting editor\")\n subprocess.call(['nano', '-', f.name])\n with open(f.name, 'r') as f:\n return f.readlines()"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nperform a bruteforce for the 
given users password domain on the given host.", "response": "def bruteforce(users, domain, password, host):\n \"\"\"\n Performs a bruteforce for the given users, password, domain on the given host.\n \"\"\"\n cs = CredentialSearch(use_pipe=False)\n\n print_notification(\"Connecting to {}\".format(host))\n\n s = Server(host)\n c = Connection(s)\n\n for user in users:\n if c.rebind(user=\"{}\\\\{}\".format(domain, user.username), password=password, authentication=NTLM):\n print_success('Success for: {}:{}'.format(user.username, password))\n credential = cs.find_object(\n user.username, password, domain=domain, host_ip=host)\n if not credential:\n credential = Credential(username=user.username, secret=password,\n domain=domain, host_ip=host, type=\"plaintext\", port=389)\n credential.add_tag(tag)\n credential.save()\n\n # Add a tag to the user object, so we dont have to bruteforce it again.\n user.add_tag(tag)\n user.save()\n else:\n print_error(\"Fail for: {}:{}\".format(user.username, password))"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef Adapter(self, **kw):\n '''\n .. TODO:: move this documentation into model/adapter.py?...\n\n The Adapter constructor supports the following parameters:\n\n :param devID:\n\n sets the local adapter\\'s device identifier. For servers, this\n should be the externally accessible URL that launches the SyncML\n transaction, and for clients this should be a unique ID, such as\n the IMEI number (for mobile phones). If not specified, it will\n be defaulted to the `devID` of the `devinfo` object. If it\n cannot be loaded from the database or from the `devinfo`, then\n it must be provided before any synchronization can begin.\n\n :param name:\n\n sets the local adapter\\'s device name - usually a human-friendly\n description of this SyncML\\'s function.\n\n :param devinfo:\n\n sets the local adapter :class:`pysyncml.devinfo.DeviceInfo`. 
If\n not specified, it will be auto-loaded from the database. If it\n cannot be loaded from the database, then it must be provided\n before any synchronization can begin.\n\n :param peer:\n\n TODO: document...\n\n :param maxGuidSize:\n\n TODO: document...\n\n :param maxMsgSize:\n\n TODO: document...\n\n :param maxObjSize:\n\n TODO: document...\n\n :param conflictPolicy:\n\n sets the default conflict handling policy for this adapter,\n and can be overriden on a per-store basis (applies only when\n operating as the server role).\n\n '''\n try:\n ret = self._model.Adapter.q(isLocal=True).one()\n for k, v in kw.items():\n setattr(ret, k, v)\n except NoResultFound:\n ret = self._model.Adapter(**kw)\n ret.isLocal = True\n self._model.session.add(ret)\n if ret.devID is not None:\n self._model.session.flush()\n ret.context = self\n # todo: is this really the best place to do this?...\n ret.router = self.router or router.Router(ret)\n ret.protocol = self.protocol or protocol.Protocol(ret)\n ret.synchronizer = self.synchronizer or synchronizer.Synchronizer(ret)\n ret.codec = self.codec or 'xml'\n if isinstance(ret.codec, basestring):\n ret.codec = codec.Codec.factory(ret.codec)\n if ret.devID is not None:\n peers = ret.getKnownPeers()\n if len(peers) == 1 and peers[0].url is not None:\n ret._peer = peers[0]\n return ret", "response": "Creates a new local adapter object."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef RemoteAdapter(self, **kw):\n '''\n .. TODO:: move this documentation into model/adapter.py?...\n\n The RemoteAdapter constructor supports the following parameters:\n\n :param url:\n\n specifies the URL that this remote SyncML server can be reached\n at. 
The URL must be a fully-qualified URL.\n\n :param auth:\n\n set what kind of authentication scheme to use, which generally is\n one of the following values:\n\n **None**:\n\n indicates no authentication is required.\n\n **pysyncml.NAMESPACE_AUTH_BASIC**:\n\n specifies to use \"Basic-Auth\" authentication scheme.\n\n **pysyncml.NAMESPACE_AUTH_MD5**:\n\n specifies to use MD5 \"Digest-Auth\" authentication scheme.\n NOTE: this may not be implemented yet...\n\n :param username:\n\n if the `auth` is not ``None``, then the username to authenticate\n as must be provided via this parameter.\n\n :param password:\n\n if the `auth` is not ``None``, then the password to authenticate\n with must be provided via this parameter.\n\n '''\n # TODO: is this really the right way?...\n ret = self._model.Adapter(isLocal=False, **kw)\n self._model.session.add(ret)\n if ret.devID is not None:\n self._model.session.flush()\n return ret", "response": "Creates a new RemoteAdapter object and returns it."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nprinting information about a pathlib. 
Path object.", "response": "def pprint_path(path):\n \"\"\"\n print information of a pathlib / os.DirEntry() instance with all \"is_*\" functions.\n \"\"\"\n print(\"\\n*** %s\" % path)\n for attrname in sorted(dir(path)):\n if attrname.startswith(\"is_\"):\n value = getattr(path, attrname)\n print(\"%20s: %s\" % (attrname, value))\n print()"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef utime(self, *args, **kwargs):\n os.utime(self.extended_path, *args, **kwargs)", "response": "Set the access and modified times of the file specified by path."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncreating a new instance of WindowsPath2 from command line arguments.", "response": "def _from_parts(cls, args, init=True):\n \"\"\"\n Strip \\\\?\\ prefix in init phase\n \"\"\"\n if args:\n args = list(args)\n if isinstance(args[0], WindowsPath2):\n args[0] = args[0].path\n elif args[0].startswith(\"\\\\\\\\?\\\\\"):\n args[0] = args[0][4:]\n args = tuple(args)\n return super(WindowsPath2, cls)._from_parts(args, init)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef extended_path(self):\n if self.is_absolute() and not self.path.startswith(\"\\\\\\\\\"):\n return \"\\\\\\\\?\\\\%s\" % self.path\n return self.path", "response": "Return the extended path of the resource."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef path(self):\n path = super(WindowsPath2, self).path\n if path.startswith(\"\\\\\\\\?\\\\\"):\n return path[4:]\n return path", "response": "Return the path always without the \\\\? 
prefix."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn the path that is relative to other.", "response": "def relative_to(self, other):\n \"\"\"\n Important here is, that both are always the same:\n both with \\\\?\\ prefix or both without it.\n \"\"\"\n return super(WindowsPath2, Path2(self.path)).relative_to(Path2(other).path)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef format():\n argparser = argparse.ArgumentParser(description='Formats a json object in a certain way. Use with pipes.')\n argparser.add_argument('format', metavar='format', help='How to format the json for example \"{address}:{port}\".', nargs='?')\n arguments = argparser.parse_args()\n service_style = \"{address:15} {port:7} {protocol:5} {service:15} {state:10} {banner} {tags}\"\n host_style = \"{address:15} {tags}\"\n ranges_style = \"{range:18} {tags}\"\n users_style = \"{username}\"\n if arguments.format:\n format_input(arguments.format)\n else:\n doc_mapper = DocMapper()\n if doc_mapper.is_pipe:\n for obj in doc_mapper.get_pipe():\n style = ''\n if isinstance(obj, Range):\n style = ranges_style\n elif isinstance(obj, Host):\n style = host_style\n elif isinstance(obj, Service):\n style = service_style\n elif isinstance(obj, User):\n style = users_style\n print_line(fmt.format(style, **obj.to_dict(include_meta=True)))\n else:\n print_error(\"Please use this script with pipes\")", "response": "Formats the output of another tool in a given way."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef print_line(text):\n try:\n signal.signal(signal.SIGPIPE, signal.SIG_DFL)\n except ValueError:\n pass\n\n try:\n sys.stdout.write(text)\n if not text.endswith('\\n'):\n sys.stdout.write('\\n')\n sys.stdout.flush()\n except IOError:\n sys.exit(0)", "response": "Print the given line to stdout"} {"SOURCE": "codesearchnet", "instruction": "Create a 
Python 3 function for\ndrawing a ncurses interface based on the given object list and callback.", "response": "def draw_interface(objects, callback, callback_text):\n \"\"\"\n Draws a ncurses interface. Based on the given object list, every object should have a \"string\" key, this is whats displayed on the screen, callback is called with the selected object.\n Rest of the code is modified from:\n https://stackoverflow.com/a/30834868\n \"\"\"\n screen = curses.initscr()\n height, width = screen.getmaxyx()\n curses.noecho()\n curses.cbreak()\n curses.start_color()\n screen.keypad( 1 )\n curses.init_pair(1,curses.COLOR_BLACK, curses.COLOR_CYAN)\n highlightText = curses.color_pair( 1 )\n normalText = curses.A_NORMAL\n screen.border( 0 )\n curses.curs_set( 0 )\n max_row = height - 15 # max number of rows\n box = curses.newwin( max_row + 2, int(width - 2), 1, 1 )\n box.box()\n fmt = PartialFormatter()\n\n row_num = len( objects )\n\n pages = int( ceil( row_num / max_row ) )\n position = 1\n page = 1\n for i in range( 1, max_row + 1 ):\n if row_num == 0:\n box.addstr( 1, 1, \"There aren't strings\", highlightText )\n else:\n if (i == position):\n box.addstr( i, 2, str( i ) + \" - \" + objects[ i - 1 ]['string'], highlightText )\n else:\n box.addstr( i, 2, str( i ) + \" - \" + objects[ i - 1 ]['string'], normalText )\n if i == row_num:\n break\n\n screen.refresh()\n box.refresh()\n\n x = screen.getch()\n while x != 27:\n if x == curses.KEY_DOWN:\n if page == 1:\n if position < i:\n position = position + 1\n else:\n if pages > 1:\n page = page + 1\n position = 1 + ( max_row * ( page - 1 ) )\n elif page == pages:\n if position < row_num:\n position = position + 1\n else:\n if position < max_row + ( max_row * ( page - 1 ) ):\n position = position + 1\n else:\n page = page + 1\n position = 1 + ( max_row * ( page - 1 ) )\n if x == curses.KEY_UP:\n if page == 1:\n if position > 1:\n position = position - 1\n else:\n if position > ( 1 + ( max_row * ( page - 1 ) ) ):\n position = 
position - 1\n else:\n page = page - 1\n position = max_row + ( max_row * ( page - 1 ) )\n\n screen.erase()\n if x == ord( \"\\n\" ) and row_num != 0:\n screen.erase()\n screen.border( 0 )\n service = objects[position -1]\n text = fmt.format(callback_text, **service)\n screen.addstr( max_row + 4, 3, text)\n text = callback(service)\n count = 0\n for line in text:\n screen.addstr( max_row + 5 + count, 3, line)\n count += 1\n\n box.erase()\n screen.border( 0 )\n box.border( 0 )\n\n for i in range( 1 + ( max_row * ( page - 1 ) ), max_row + 1 + ( max_row * ( page - 1 ) ) ):\n if row_num == 0:\n box.addstr( 1, 1, \"There aren't strings\", highlightText )\n else:\n if ( i + ( max_row * ( page - 1 ) ) == position + ( max_row * ( page - 1 ) ) ):\n box.addstr( i - ( max_row * ( page - 1 ) ), 2, str( i ) + \" - \" + objects[ i - 1 ]['string'], highlightText )\n else:\n box.addstr( i - ( max_row * ( page - 1 ) ), 2, str( i ) + \" - \" + objects[ i - 1 ]['string'], normalText )\n if i == row_num:\n break\n\n screen.refresh()\n box.refresh()\n x = screen.getch()\n\n curses.endwin()\n exit()"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nget the IP from the inet interfaces.", "response": "def get_own_ip():\n \"\"\"\n Gets the IP from the inet interfaces.\n \"\"\"\n own_ip = None\n interfaces = psutil.net_if_addrs()\n for _, details in interfaces.items():\n for detail in details:\n if detail.family == socket.AF_INET:\n ip_address = ipaddress.ip_address(detail.address)\n if not (ip_address.is_link_local or ip_address.is_loopback):\n own_ip = str(ip_address)\n break\n return own_ip"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncreate a pandas DataFrame from a numpy ndarray. By default use temp and lum with max rows of 32 and precision of 2. arr - An numpy.ndarray. columns - The columns to include in the pandas DataFrame. Defaults to temperature and luminosity. 
names - The column names for the pandas DataFrame. Defaults to Temperature and Luminosity. max_rows - If max_rows is an integer then set the pandas display.max_rows option to that value. If max_rows is True then set display.max_rows option to 1000. precision - An integer to set the pandas precision option.", "response": "def pprint(arr, columns=('temperature', 'luminosity'),\n names=('Temperature (Kelvin)', 'Luminosity (solar units)'),\n max_rows=32, precision=2):\n \"\"\"\n Create a pandas DataFrame from a numpy ndarray.\n\n By default use temp and lum with max rows of 32 and precision of 2.\n\n arr - An numpy.ndarray.\n columns - The columns to include in the pandas DataFrame. Defaults to\n temperature and luminosity.\n names - The column names for the pandas DataFrame. Defaults to\n Temperature and Luminosity.\n max_rows - If max_rows is an integer then set the pandas\n display.max_rows option to that value. If max_rows\n is True then set display.max_rows option to 1000.\n precision - An integer to set the pandas precision option.\n \"\"\"\n if max_rows is True:\n pd.set_option('display.max_rows', 1000)\n elif type(max_rows) is int:\n pd.set_option('display.max_rows', max_rows)\n pd.set_option('precision', precision)\n df = pd.DataFrame(arr.flatten(), index=arr['id'].flatten(),\n columns=columns)\n df.columns = names\n return df.style.format({names[0]: '{:.0f}',\n names[1]: '{:.2f}'})"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef strip_labels(filename):\n labels = []\n with open(filename) as f, open('processed_labels.txt', 'w') as f1:\n for l in f:\n if l.startswith('#'):\n next\n l = l.replace(\" .\", '')\n l = l.replace(\">\\tskos:prefLabel\\t\", ' ')\n l = l.replace(\"<\", '')\n l = l.replace(\">\\trdfs:label\\t\", ' ')\n f1.write(l)", "response": "Strips labels from the file."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef 
remove_namespace(doc, namespace):\n '''Remove namespace in the passed document in place.'''\n ns = u'{%s}' % namespace\n nsl = len(ns)\n for elem in doc.getiterator():\n if elem.tag.startswith(ns):\n elem.tag = elem.tag[nsl:]\n elem.attrib['oxmlns'] = namespace", "response": "Remove namespace in the passed document in place."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nresolve a Resource identified by URI", "response": "def resolve(self, uri):\n \"\"\" Resolve a Resource identified by URI\n :param uri: The URI of the resource to be resolved\n :type uri: str\n :return: the contents of the resource as a string\n :rtype: str\n \"\"\"\n for r in self.__retrievers__:\n if r.match(uri):\n return r\n raise UnresolvableURIError()"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nretrieves the contents of the resource with the given URI", "response": "def read(self, uri):\n \"\"\" Retrieve the contents of the resource\n\n :param uri: the URI of the resource to be retrieved\n :type uri: str\n :return: the contents of the resource\n :rtype: str\n \"\"\"\n req = request(\"GET\", uri)\n return req.content, req.headers['Content-Type']"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef match(self, uri):\n absolute_uri = self.__absolute__(uri)\n\n return absolute_uri.startswith(self.__path__) and op.exists(absolute_uri)", "response": "Check to see if this URI is retrievable by this Retriever implementation\n\n ."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef read(self, uri):\n uri = self.__absolute__(uri)\n mime, _ = guess_type(uri)\n if \"image\" in mime:\n return send_file(uri), mime\n else:\n with open(uri, \"r\") as f:\n file = f.read()\n return file, mime", "response": "Retrieve the contents of the resource with the given URI."} {"SOURCE": "codesearchnet", "instruction": "Create a 
Python 3 function for\nretrieving the contents of the resource with the given URI", "response": "def read(self, uri):\n \"\"\" Retrieve the contents of the resource\n\n :param uri: the URI of the resource to be retrieved\n :type uri: str\n :return: the contents of the resource\n :rtype: str\n \"\"\"\n return self.__resolver__.getTextualNode(uri).export(Mimetypes.XML.TEI), \"text/xml\""} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef addHook(self, name, callable):\n '''\n Subscribes `callable` to listen to events of `name` type. The\n parameters passed to `callable` are dependent on the specific\n event being triggered.\n '''\n if name not in self._hooks:\n self._hooks[name] = []\n self._hooks[name].append(callable)", "response": "Adds a hook to be called when a specific event is triggered."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _makeAdapter(self):\n '''\n Creates a tuple of ( Context, Adapter ) based on the options\n specified by `self.options`. The Context is the pysyncml.Context created for\n the storage location specified in `self.options`, and the Adapter is a newly\n created Adapter if a previously created one was not found.\n '''\n\n self._callHooks('adapter.create.init')\n\n # create a new pysyncml.Context. the main function that this provides is\n # to give the Adapter a storage engine to store state information across\n # synchronizations.\n context = pysyncml.Context(storage='sqlite:///%ssyncml.db' % (self.dataDir,),\n owner=None, autoCommit=True)\n\n self._callHooks('adapter.create.context', context)\n\n # create an Adapter from the current context. this will either create\n # a new adapter, or load the current local adapter for the specified\n # context storage location. 
if it is new, then lots of required\n # information (such as device info) will not be set, so we need to\n # check that and specify it if missing.\n\n adapter = context.Adapter()\n\n if hasattr(self, 'serverConf') and self.serverConf.policy is not None:\n adapter.conflictPolicy = self.serverConf.policy\n\n if self.options.name is not None or self.appDisplay is not None:\n adapter.name = self.options.name or self.appDisplay\n\n # TODO: stop ignoring ``self.options.remoteUri``... (the router must first support\n # manual routes...)\n # if self.options.remoteUri is not None:\n # adapter.router.addRoute(self.agent.uri, self.options.remoteUri)\n\n if adapter.devinfo is None:\n log.info('adapter has no device info - registering new device')\n else:\n if self.options.devid is not None and self.options.devid != adapter.devinfo.devID:\n log.info('adapter has different device ID - overwriting with new device info')\n adapter.devinfo = None\n\n if adapter.devinfo is None:\n # setup some information about the local device, most importantly the\n # device ID, which the remote peer will use to uniquely identify this peer\n devinfoParams = dict(\n devID = self.options.devid or self.defaultDevID,\n devType = pysyncml.DEVTYPE_SERVER if self.options.server else \\\n pysyncml.DEVTYPE_WORKSTATION,\n manufacturerName = 'pysyncml',\n modelName = self.appLabel,\n softwareVersion = pysyncml.version,\n hierarchicalSync = self.agent.hierarchicalSync if self.agent is not None else False,\n )\n if self.devinfoParams is not None:\n devinfoParams.update(self.devinfoParams)\n adapter.devinfo = context.DeviceInfo(**devinfoParams)\n\n self._callHooks('adapter.create.adapter', context, adapter)\n\n if not self.options.server:\n\n # servers don't have a fixed peer; i.e. 
the SyncML message itself\n # defines which peer is connecting.\n\n if adapter.peer is None:\n if self.options.remote is None:\n self.options.remote = raw_input('SyncML remote URL: ')\n if self.options.username is None:\n self.options.username = raw_input('SyncML remote username (leave empty if none): ')\n if len(self.options.username) <= 0:\n self.options.username = None\n log.info('adapter has no remote info - registering new remote adapter')\n else:\n if self.options.remote is not None:\n if self.options.remote != adapter.peer.url \\\n or self.options.username != adapter.peer.username \\\n or self.options.password != adapter.peer.password:\n #or self.options.password is not None:\n log.info('adapter has invalid or rejected remote info - overwriting with new remote info')\n adapter.peer = None\n\n if adapter.peer is None:\n auth = None\n if self.options.username is not None:\n auth = pysyncml.NAMESPACE_AUTH_BASIC\n if self.options.password is None:\n self.options.password = getpass.getpass('SyncML remote password: ')\n # setup the remote connection parameters, if not already stored in\n # the adapter sync tables or the URL has changed.\n adapter.peer = context.RemoteAdapter(\n url = self.options.remote,\n auth = auth,\n username = self.options.username,\n password = self.options.password,\n )\n\n self._callHooks('adapter.create.peer', context, adapter, adapter.peer)\n\n # add a datastore attached to the URI \"note\". the actual value of\n # the URI is irrelevant - it is only an identifier for this item\n # synchronization channel. it must be unique within this adapter\n # and must stay consistent across synchronizations.\n\n # TODO: this check should be made redundant... (ie. 
once the\n # implementation of Store.merge() is fixed this will\n # become a single \"addStore()\" call without the check first).\n uri = self.storeParams.get('uri', self.appLabel)\n if uri in adapter.stores:\n store = adapter.stores[uri]\n store.agent = self.agent\n else:\n storeParams = dict(\n uri = uri,\n displayName = self.options.name or self.appDisplay,\n agent = self.agent,\n # TODO: adding this for funambol-compatibility...\n maxObjSize = None)\n if self.storeParams is not None:\n storeParams.update(self.storeParams)\n store = adapter.addStore(context.Store(**storeParams))\n\n self._callHooks('adapter.create.store', context, adapter, store)\n\n if self.options.local:\n def locprint(msg):\n print msg\n else:\n locprint = log.info\n def showChanges(changes, prefix):\n for c in changes:\n if c.state != pysyncml.ITEM_DELETED:\n item = self.agent.getItem(c.itemID)\n else:\n item = 'Item ID %s' % (c.itemID,)\n locprint('%s - %s: %s' % (prefix, item, pysyncml.state2string(c.state)))\n if self.options.server:\n peers = adapter.getKnownPeers()\n if len(peers) > 0:\n locprint('Pending changes to propagate:')\n else:\n locprint('No pending changes to propagate (no peers yet)')\n for peer in peers:\n for puri, pstore in peer.stores.items():\n if pstore.binding is None or pstore.binding.uri != store.uri:\n continue\n changes = list(pstore.getRegisteredChanges())\n if len(changes) <= 0:\n locprint(' Registered to peer \"%s\" URI \"%s\": (none)' % (peer.devID, puri))\n else:\n locprint(' Registered to peer \"%s\" URI \"%s\":' % (peer.devID, puri))\n showChanges(changes, ' ')\n else:\n if store.peer is None:\n locprint('No pending local changes (not associated yet).')\n else:\n changes = list(store.peer.getRegisteredChanges())\n if len(changes) <= 0:\n locprint('No pending local changes to synchronize.')\n else:\n locprint('Pending local changes:')\n showChanges(changes, '')\n\n self._callHooks('adapter.create.term', context, adapter)\n\n return (context, adapter)", 
"response": "Create a new context and Adapter based on the options."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef configure(self, argv=None):\n '''\n Configures this engine based on the options array passed into\n `argv`. If `argv` is ``None``, then ``sys.argv`` is used instead.\n During configuration, the command line options are merged with\n previously stored values. Then the logging subsystem and the\n database model are initialized, and all storable settings are\n serialized to configurations files.\n '''\n self._setupOptions()\n self._parseOptions(argv)\n self._setupLogging()\n self._setupModel()\n self.dbsession.commit()\n return self", "response": "Configure this engine based on the command line options passed into sys. argv."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef run(self, stdout=sys.stdout, stderr=sys.stderr):\n '''\n Runs this SyncEngine by executing one of the following functions\n (as controlled by command-line options or stored parameters):\n\n * Display local pending changes.\n * Describe local configuration.\n * Run an HTTP server and engage server-side mode.\n * Connect to a remote SyncML peer and engage client-side mode.\n\n NOTE: when running in the first two modes, all database interactions\n are rolled back in order to keep the SyncEngine idempotent.\n '''\n if self.options.local or self.options.describe:\n context, adapter = self._makeAdapter()\n if self.options.describe:\n self.describe(stdout)\n adapter.describe(stdout)\n self.dbsession.rollback()\n return 0\n if self.options.server:\n return self._runServer(stdout, stderr)\n return self._runClient(stdout, stderr)", "response": "Runs this SyncEngine by executing one of the functions in the command - line."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _assemble_with_columns(self, sql_str, 
columns, *args, **kwargs):\n\n # Handle any aliased columns we get (e.g. table_alias.column)\n qcols = []\n for col in columns:\n if '.' in col:\n # Explodeded it\n wlist = col.split('.')\n\n # Reassemble into string and drop it into the list\n qcols.append(sql.SQL('.').join([sql.Identifier(x) for x in wlist]))\n else:\n qcols.append(sql.Identifier(col))\n\n # sql.SQL(', ').join([sql.Identifier(x) for x in columns]),\n \n query_string = sql.SQL(sql_str).format(\n sql.SQL(', ').join(qcols),\n *[sql.Literal(a) for a in args]\n )\n \n return query_string", "response": "Assemble a select statement with specific columns."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nassembles SELECT statement into a table.", "response": "def _assemble_select(self, sql_str, columns, *args, **kwargs):\n \"\"\" Alias for _assemble_with_columns\n \"\"\"\n warnings.warn(\"_assemble_select has been depreciated for _assemble_with_columns. It will be removed in a future version.\", DeprecationWarning)\n return self._assemble_with_columns(sql_str, columns, *args, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _assemble_simple(self, sql_str, *args, **kwargs):\n \n query_string = sql.SQL(sql_str).format(\n *[sql.Literal(a) for a in args]\n )\n\n return query_string", "response": "Assemble a simple select statement with specific columns."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nexecutes a query and returns a list of rows.", "response": "def _execute(self, query, commit=False, working_columns=None):\n \"\"\" \n Execute a query with provided parameters \n\n Parameters\n :query: SQL string with parameter placeholders\n :commit: If True, the query will commit\n :returns: List of rows\n \"\"\"\n\n log.debug(\"RawlBase._execute()\")\n\n result = []\n\n if working_columns is None:\n working_columns = self.columns\n\n with 
RawlConnection(self.dsn) as conn:\n\n query_id = random.randrange(9999)\n\n curs = conn.cursor()\n\n try:\n log.debug(\"Executing(%s): %s\" % (query_id, query.as_string(curs)))\n except:\n log.exception(\"LOGGING EXCEPTION LOL\")\n\n curs.execute(query)\n\n log.debug(\"Executed\")\n\n if commit == True:\n log.debug(\"COMMIT(%s)\" % query_id)\n conn.commit()\n \n log.debug(\"curs.rowcount: %s\" % curs.rowcount)\n \n if curs.rowcount > 0:\n #result = curs.fetchall()\n # Process the results into a dict and stuff it in a RawlResult\n # object. Then append that object to result\n result_rows = curs.fetchall()\n for row in result_rows:\n \n i = 0\n row_dict = {}\n for col in working_columns:\n try:\n #log.debug(\"row_dict[%s] = row[%s] which is %s\" % (col, i, row[i]))\n # For aliased columns, we need to get rid of the dot\n col = col.replace('.', '_')\n row_dict[col] = row[i]\n except IndexError: pass\n i += 1\n \n log.debug(\"Appending dict to result: %s\" % row_dict)\n \n rr = RawlResult(working_columns, row_dict)\n result.append(rr)\n \n curs.close()\n\n return result"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nhandle provided columns and if necessary convert them to a list for internal strage.", "response": "def process_columns(self, columns):\n \"\"\" \n Handle provided columns and if necessary, convert columns to a list for \n internal strage.\n\n :columns: A sequence of columns for the table. 
Can be list, comma\n -delimited string, or IntEnum.\n \"\"\"\n if type(columns) == list:\n self.columns = columns\n elif type(columns) == str:\n self.columns = [c.strip() for c in columns.split()]\n elif type(columns) == IntEnum:\n self.columns = [str(c) for c in columns]\n else:\n raise RawlException(\"Unknown format for columns\")"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef query(self, sql_string, *args, **kwargs):\n commit = None\n columns = None\n if kwargs.get('commit') is not None:\n commit = kwargs.pop('commit')\n if kwargs.get('columns') is not None:\n columns = kwargs.pop('columns')\n query = self._assemble_simple(sql_string, *args, **kwargs)\n return self._execute(query, commit=commit, working_columns=columns)", "response": "Execute a simple SQL query on the database and return the result."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef select(self, sql_string, cols, *args, **kwargs):\n working_columns = None\n if kwargs.get('columns') is not None:\n working_columns = kwargs.pop('columns')\n query = self._assemble_select(sql_string, cols, *args, *kwargs)\n return self._execute(query, working_columns=working_columns)", "response": "Execute a SELECT statement in the database."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef insert_dict(self, value_dict, commit=False):\n\n # Sanity check the value_dict\n for key in value_dict.keys():\n if key not in self.columns:\n raise ValueError(\"Column %s does not exist\" % key)\n\n # These lists will make up the columns and values of the INSERT\n insert_cols = []\n value_set = []\n\n # Go through all the possible columns and look for that column in the\n # dict. 
If available, we need to add it to our col/val sets\n for col in self.columns:\n if col in value_dict:\n #log.debug(\"Inserting with column %s\" % col)\n insert_cols.append(col)\n value_set.append(value_dict[col])\n\n # Create SQL statement placeholders for the dynamic values\n placeholders = ', '.join([\"{%s}\" % x for x in range(1, len(value_set) + 1)])\n\n # TODO: Maybe don't trust table_name ane pk_name? Shouldn't really be \n # user input, but who knows.\n query = self._assemble_with_columns('''\n INSERT INTO \"''' + self.table + '''\" ({0}) \n VALUES (''' + placeholders + ''') \n RETURNING ''' + self.pk + '''\n ''', insert_cols, *value_set)\n\n result = self._execute(query, commit=commit)\n\n # Inca\n if len(result) > 0:\n # Return the pk if we can\n if hasattr(result[0], self.pk):\n return getattr(result[0], self.pk)\n # Otherwise, the full result\n else:\n return result[0]\n else:\n return None", "response": "Insert a dictionary into the table_name table."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get(self, pk):\n\n if type(pk) == str:\n # Probably an int, give it a shot\n try:\n pk = int(pk)\n except ValueError: pass\n\n return self.select(\n \"SELECT {0} FROM \" + self.table + \" WHERE \" + self.pk + \" = {1};\",\n self.columns, pk)", "response": "Returns a single record from the table."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef dump(self, stream, contentType=None, version=None):\n '''\n Serializes this FileItem to a byte-stream and writes it to the\n file-like object `stream`. 
`contentType` and `version` must be one\n of the supported content-types, and if not specified, will default\n to ``application/vnd.omads-file``.\n '''\n if contentType is None:\n contentType = constants.TYPE_OMADS_FILE\n if ctype.getBaseType(contentType) != constants.TYPE_OMADS_FILE:\n raise common.InvalidContentType('cannot serialize FileItem to \"%s\"' % (contentType,))\n if version is None:\n version = '1.2'\n if version != '1.2':\n raise common.InvalidContentType('invalid file serialization version \"%s\"' % (version,))\n root = ET.Element('File')\n if self.name is not None:\n ET.SubElement(root, 'name').text = self.name\n # todo: do anything with \"parent\"?...\n for attr in ('created', 'modified', 'accessed'):\n if getattr(self, attr) is None:\n continue\n ET.SubElement(root, attr).text = common.ts_iso(getattr(self, attr))\n if self.contentType is not None:\n ET.SubElement(root, 'cttype').text = self.contentType\n attrs = [attr\n for attr in ('hidden', 'system', 'archived', 'delete', 'writable', 'readable', 'executable')\n if getattr(self, attr) is not None]\n if len(attrs) > 0:\n xa = ET.SubElement(root, 'attributes')\n for attr in attrs:\n ET.SubElement(xa, attr[0]).text = 'true' if getattr(self, attr) else 'false'\n if self.body is not None:\n ET.SubElement(root, 'body').text = self.body\n if self.body is None and self.size is not None:\n ET.SubElement(root, 'size').text = str(self.size)\n if len(self.extensions) > 0:\n xe = ET.SubElement(root, 'Ext')\n for name, values in self.extensions.items():\n ET.SubElement(xe, 'XNam').text = name\n for value in values:\n ET.SubElement(xe, 'XVal').text = value\n ET.ElementTree(root).write(stream)\n return (constants.TYPE_OMADS_FILE + '+xml', '1.2')", "response": "Serializes this FileItem to a byte - stream and writes it to the file - like object stream."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef load(cls, stream, contentType=None, 
version=None):\n '''\n Reverses the effects of the :meth:`dump` method, creating a FileItem\n from the specified file-like `stream` object.\n '''\n if contentType is None:\n contentType = constants.TYPE_OMADS_FILE\n if ctype.getBaseType(contentType) == constants.TYPE_OMADS_FOLDER:\n from .folder import FolderItem\n return FolderItem.load(stream, contentType, version)\n if ctype.getBaseType(contentType) != constants.TYPE_OMADS_FILE:\n raise common.InvalidContentType('cannot de-serialize FileItem from \"%s\"' % (contentType,))\n if version is None:\n version = '1.2'\n if version != '1.2':\n raise common.InvalidContentType('invalid FileItem de-serialization version \"%s\"' % (version,))\n ret = FileItem()\n data = stream.read()\n xdoc = ET.fromstring(data)\n if xdoc.tag != 'File':\n raise common.InvalidContent('root of application/vnd.omads-file XML must be \"File\" not \"%s\"'\n % (xdoc.tag,))\n ret.name = xdoc.findtext('name')\n ret.body = xdoc.findtext('body')\n ret.size = xdoc.findtext('size')\n if ret.body is not None:\n ret.size = len(ret.body)\n elif ret.size is not None:\n ret.size = int(ret.size)\n # todo: do anything with \"parent\"?...\n # load the date attributes\n for attr in ('created', 'modified', 'accessed'):\n val = xdoc.findtext(attr)\n if val is not None:\n setattr(ret, attr, int(common.parse_ts_iso(val)))\n # load the boolean attributes\n for attr in ('hidden', 'system', 'archived', 'delete',\n 'writable', 'readable', 'executable'):\n val = xdoc.findtext('attributes/' + attr[0])\n if val is not None:\n setattr(ret, attr, val.lower() == 'true')\n return ret", "response": "Creates a FileItem from the specified file - like stream object."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ncreating the final payload based on the x86 and x64 meterpreters.", "response": "def create_payload(self, x86_file, x64_file, payload_file):\n \"\"\"\n Creates the final payload based on the x86 and x64 meterpreters.\n \"\"\"\n sc_x86 = 
open(os.path.join(self.datadir, x86_file), 'rb').read()\n sc_x64 = open(os.path.join(self.datadir, x64_file), 'rb').read()\n\n fp = open(os.path.join(self.datadir, payload_file), 'wb')\n fp.write(b'\\x31\\xc0\\x40\\x0f\\x84' + pack(' 0:\n best_match = request.accept_languages[0][0][:2]\n else:\n return self.__default_lang__\n lang = self.__default_lang__\n if best_match == \"de\":\n lang = \"ger\"\n elif best_match == \"fr\":\n lang = \"fre\"\n elif best_match == \"en\":\n lang = \"eng\"\n elif best_match == \"la\":\n lang = \"lat\"\n return lang"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef transform(self, work, xml, objectId, subreference=None):\n # We check first that we don't have\n if str(objectId) in self._transform:\n func = self._transform[str(objectId)]\n else:\n func = self._transform[\"default\"]\n\n # If we have a string, it means we get a XSL filepath\n if isinstance(func, str):\n with open(func) as f:\n xslt = etree.XSLT(etree.parse(f))\n return etree.tostring(\n xslt(xml),\n encoding=str, method=\"html\",\n xml_declaration=None, pretty_print=False, with_tail=True, standalone=None\n )\n\n # If we have a function, it means we return the result of the function\n elif isinstance(func, Callable):\n return func(work, xml, objectId, subreference)\n # If we have None, it means we just give back the xml\n elif func is None:\n return etree.tostring(xml, encoding=str)", "response": "Transform the xml to a resource."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nrequests the api endpoint to retrieve information about the inventory", "response": "def get_inventory(self):\n \"\"\" Request the api endpoint to retrieve information about the inventory\n\n :return: Main Collection\n :rtype: Collection\n \"\"\"\n if self._inventory is not None:\n return self._inventory\n\n self._inventory = self.resolver.getMetadata()\n return self._inventory"} 
{"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_reffs(self, objectId, subreference=None, collection=None, export_collection=False):\n if collection is not None:\n text = collection\n else:\n text = self.get_collection(objectId)\n reffs = self.chunk(\n text,\n lambda level: self.resolver.getReffs(objectId, level=level, subreference=subreference)\n )\n if export_collection is True:\n return text, reffs\n return reffs", "response": "Retrieve and transform a list of references."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_passage(self, objectId, subreference):\n passage = self.resolver.getTextualNode(\n textId=objectId,\n subreference=subreference,\n metadata=True\n )\n return passage", "response": "Retrieve the object bearing metadata and its textual node."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_siblings(self, objectId, subreference, passage):\n reffs = [reff for reff, _ in self.get_reffs(objectId)]\n if subreference in reffs:\n index = reffs.index(subreference)\n # Not the first item and not the last one\n if 0 < index < len(reffs) - 1:\n return reffs[index-1], reffs[index+1]\n elif index == 0 and index < len(reffs) - 1:\n return None, reffs[1]\n elif index > 0 and index == len(reffs) - 1:\n return reffs[index-1], None\n else:\n return None, None\n else:\n return passage.siblingsId", "response": "Get the siblings of a browsed subreference."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngenerating a SEO friendly string for given collection.", "response": "def semantic(self, collection, parent=None):\n \"\"\" Generates a SEO friendly string for given collection\n\n :param collection: Collection object to generate string for\n :param parent: Current collection parent\n :return: SEO/URL Friendly string\n \"\"\"\n if parent is not None:\n collections = 
parent.parents[::-1] + [parent, collection]\n else:\n collections = collection.parents[::-1] + [collection]\n\n return filters.slugify(\"--\".join([item.get_label() for item in collections if item.get_label()]))"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef make_coins(self, collection, text, subreference=\"\", lang=None):\n if lang is None:\n lang = self.__default_lang__\n return \"url_ver=Z39.88-2004\"\\\n \"&ctx_ver=Z39.88-2004\"\\\n \"&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Abook\"\\\n \"&rft_id={cid}\"\\\n \"&rft.genre=bookitem\"\\\n \"&rft.btitle={title}\"\\\n \"&rft.edition={edition}\"\\\n \"&rft.au={author}\"\\\n \"&rft.atitle={pages}\"\\\n \"&rft.language={language}\"\\\n \"&rft.pages={pages}\".format(\n title=quote(str(text.get_title(lang))), author=quote(str(text.get_creator(lang))),\n cid=url_for(\".r_collection\", objectId=collection.id, _external=True),\n language=collection.lang, pages=quote(subreference), edition=quote(str(text.get_description(lang)))\n )", "response": "Creates a CoINS Title string from a text object containing the coins."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nbuilding an ancestor or descendant dict view based on selected information.", "response": "def expose_ancestors_or_children(self, member, collection, lang=None):\n \"\"\" Build an ancestor or descendant dict view based on selected information\n\n :param member: Current Member to build for\n :param collection: Collection from which we retrieved it\n :param lang: Language to express data in\n :return:\n \"\"\"\n x = {\n \"id\": member.id,\n \"label\": str(member.get_label(lang)),\n \"model\": str(member.model),\n \"type\": str(member.type),\n \"size\": member.size,\n \"semantic\": self.semantic(member, parent=collection)\n }\n if isinstance(member, ResourceCollection):\n x[\"lang\"] = str(member.lang)\n return x"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function 
for\nbuilding list of basic objects for given collection", "response": "def make_members(self, collection, lang=None):\n \"\"\" Build member list for given collection\n\n :param collection: Collection to build dict view of for its members\n :param lang: Language to express data in\n :return: List of basic objects\n \"\"\"\n objects = sorted([\n self.expose_ancestors_or_children(member, collection, lang=lang)\n for member in collection.members\n if member.get_label()\n ],\n key=itemgetter(\"label\")\n )\n return objects"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef make_parents(self, collection, lang=None):\n return [\n {\n \"id\": member.id,\n \"label\": str(member.get_label(lang)),\n \"model\": str(member.model),\n \"type\": str(member.type),\n \"size\": member.size\n }\n for member in collection.parents\n if member.get_label()\n ]", "response": "Build list of basic objects for given collection and language"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nretrieving the top collections of the inventory", "response": "def r_collections(self, lang=None):\n \"\"\" Retrieve the top collections of the inventory\n\n :param lang: Lang in which to express main data\n :type lang: str\n :return: Collections information and template\n :rtype: {str: Any}\n \"\"\"\n collection = self.resolver.getMetadata()\n return {\n \"template\": \"main::collection.html\",\n \"current_label\": collection.get_label(lang),\n \"collections\": {\n \"members\": self.make_members(collection, lang=lang)\n }\n }"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef r_collection(self, objectId, lang=None):\n collection = self.resolver.getMetadata(objectId)\n return {\n \"template\": \"main::collection.html\",\n \"collections\": {\n \"current\": {\n \"label\": str(collection.get_label(lang)),\n \"id\": collection.id,\n \"model\": 
str(collection.model),\n \"type\": str(collection.type),\n },\n \"members\": self.make_members(collection, lang=lang),\n \"parents\": self.make_parents(collection, lang=lang)\n },\n }", "response": "Route function for collection content browsing route function"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef r_first_passage(self, objectId):\n collection, reffs = self.get_reffs(objectId=objectId, export_collection=True)\n first, _ = reffs[0]\n return redirect(\n url_for(\".r_passage_semantic\", objectId=objectId, subreference=first, semantic=self.semantic(collection))\n )", "response": "Provides a redirect to the first passage of given objectId"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef r_passage(self, objectId, subreference, lang=None):\n collection = self.get_collection(objectId)\n if isinstance(collection, CtsWorkMetadata):\n editions = [t for t in collection.children.values() if isinstance(t, CtsEditionMetadata)]\n if len(editions) == 0:\n raise UnknownCollection(\"This work has no default edition\")\n return redirect(url_for(\".r_passage\", objectId=str(editions[0].id), subreference=subreference))\n text = self.get_passage(objectId=objectId, subreference=subreference)\n passage = self.transform(text, text.export(Mimetypes.PYTHON.ETREE), objectId)\n prev, next = self.get_siblings(objectId, subreference, text)\n return {\n \"template\": \"main::text.html\",\n \"objectId\": objectId,\n \"subreference\": subreference,\n \"collections\": {\n \"current\": {\n \"label\": collection.get_label(lang),\n \"id\": collection.id,\n \"model\": str(collection.model),\n \"type\": str(collection.type),\n \"author\": text.get_creator(lang),\n \"title\": text.get_title(lang),\n \"description\": text.get_description(lang),\n \"citation\": collection.citation,\n \"coins\": self.make_coins(collection, text, subreference, lang=lang)\n },\n 
\"parents\": self.make_parents(collection, lang=lang)\n },\n \"text_passage\": Markup(passage),\n \"prev\": prev,\n \"next\": next\n }", "response": "Retrieve the text of the passage of the object."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef r_assets(self, filetype, asset):\n if filetype in self.assets and asset in self.assets[filetype] and self.assets[filetype][asset]:\n return send_from_directory(\n directory=self.assets[filetype][asset],\n filename=asset\n )\n abort(404)", "response": "Route for specific assets."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nmerge and register assets both as routes and dictionary", "response": "def register_assets(self):\n \"\"\" Merge and register assets, both as routes and dictionary\n\n :return: None\n \"\"\"\n self.blueprint.add_url_rule(\n # Register another path to ensure assets compatibility\n \"{0}.secondary/<filetype>/<asset>\".format(self.static_url_path),\n view_func=self.r_assets,\n endpoint=\"secondary_assets\",\n methods=[\"GET\"]\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncreates a blueprint of the current nemo app and register rules", "response": "def create_blueprint(self):\n \"\"\" Create blueprint and register rules\n\n :return: Blueprint of the current nemo app\n :rtype: flask.Blueprint\n \"\"\"\n self.register_plugins()\n\n self.blueprint = Blueprint(\n self.name,\n \"nemo\",\n url_prefix=self.prefix,\n template_folder=self.template_folder,\n static_folder=self.static_folder,\n static_url_path=self.static_url_path\n )\n\n for url, name, methods, instance in self._urls:\n self.blueprint.add_url_rule(\n url,\n view_func=self.view_maker(name, instance),\n endpoint=_plugin_endpoint_rename(name, instance),\n methods=methods\n )\n\n for url, name, methods, instance in self._semantic_url:\n self.blueprint.add_url_rule(\n url,\n view_func=self.view_maker(name, instance),\n
endpoint=_plugin_endpoint_rename(name, instance)+\"_semantic\",\n methods=methods\n )\n\n self.register_assets()\n self.register_filters()\n\n # We extend the loading list by the instance value\n self.__templates_namespaces__.extend(self.__instance_templates__)\n # We generate a template loader\n for namespace, directory in self.__templates_namespaces__[::-1]:\n if namespace not in self.__template_loader__:\n self.__template_loader__[namespace] = []\n self.__template_loader__[namespace].append(\n jinja2.FileSystemLoader(op.abspath(directory))\n )\n self.blueprint.jinja_loader = jinja2.PrefixLoader(\n {namespace: jinja2.ChoiceLoader(paths) for namespace, paths in self.__template_loader__.items()},\n \"::\"\n )\n\n if self.cache is not None:\n for func, instance in self.cached:\n setattr(instance, func.__name__, self.cache.memoize()(func))\n\n return self.blueprint"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncreates a view function which makes use of Nemo context.", "response": "def view_maker(self, name, instance=None):\n \"\"\" Create a view\n\n :param name: Name of the route function to use for the view.\n :type name: str\n :return: Route function which makes use of Nemo context (such as menu informations)\n :rtype: function\n \"\"\"\n if instance is None:\n instance = self\n sig = \"lang\" in [\n parameter.name\n for parameter in inspect.signature(getattr(instance, name)).parameters.values()\n ]\n\n def route(**kwargs):\n if sig and \"lang\" not in kwargs:\n kwargs[\"lang\"] = self.get_locale()\n if \"semantic\" in kwargs:\n del kwargs[\"semantic\"]\n return self.route(getattr(instance, name), **kwargs)\n return route"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef main_collections(self, lang=None):\n return sorted([\n {\n \"id\": member.id,\n \"label\": str(member.get_label(lang=lang)),\n \"model\": str(member.model),\n \"type\": str(member.type),\n 
\"size\": member.size\n }\n for member in self.resolver.getMetadata().members\n ], key=itemgetter(\"label\"))", "response": "Retrieve main parent collections of a repository\n "} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef make_cache_keys(self, endpoint, kwargs):\n keys = sorted(kwargs.keys())\n i18n_cache_key = endpoint+\"|\"+\"|\".join([kwargs[k] for k in keys])\n if \"lang\" in keys:\n cache_key = endpoint+\"|\" + \"|\".join([kwargs[k] for k in keys if k != \"lang\"])\n else:\n cache_key = i18n_cache_key\n return i18n_cache_key, cache_key", "response": "This function is used to provide cache keys for templates. It is used to provide cache keys for i18n dependant and i18n ignoring cache keys for templates."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nrender a route template and adds information to the current route.", "response": "def render(self, template, **kwargs):\n \"\"\" Render a route template and adds information to this route.\n\n :param template: Template name.\n :type template: str\n :param kwargs: dictionary of named arguments used to be passed to the template\n :type kwargs: dict\n :return: Http Response with rendered template\n :rtype: flask.Response\n \"\"\"\n\n kwargs[\"cache_key\"] = \"%s\" % kwargs[\"url\"].values()\n kwargs[\"lang\"] = self.get_locale()\n kwargs[\"assets\"] = self.assets\n kwargs[\"main_collections\"] = self.main_collections(kwargs[\"lang\"])\n kwargs[\"cache_active\"] = self.cache is not None\n kwargs[\"cache_time\"] = 0\n kwargs[\"cache_key\"], kwargs[\"cache_key_i18n\"] = self.make_cache_keys(request.endpoint, kwargs[\"url\"])\n kwargs[\"template\"] = template\n\n for plugin in self.__plugins_render_views__:\n kwargs.update(plugin.render(**kwargs))\n\n return render_template(kwargs[\"template\"], **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef 
route(self, fn, **kwargs):\n new_kwargs = fn(**kwargs)\n\n # If there is no templates, we assume that the response is finalized :\n if not isinstance(new_kwargs, dict):\n return new_kwargs\n\n new_kwargs[\"url\"] = kwargs\n return self.render(**new_kwargs)", "response": "Route helper that applies fn function but keep the calling object as kwargs"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nregisters the app using Blueprint Taxonomy", "response": "def register(self):\n \"\"\" Register the app using Blueprint\n\n :return: Nemo blueprint\n :rtype: flask.Blueprint\n \"\"\"\n if self.app is not None:\n if not self.blueprint:\n self.blueprint = self.create_blueprint()\n self.app.register_blueprint(self.blueprint)\n if self.cache is None:\n # We register a fake cache extension.\n setattr(self.app.jinja_env, \"_fake_cache_extension\", self)\n self.app.jinja_env.add_extension(FakeCacheExtension)\n return self.blueprint\n return None"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nregister filters for Jinja to use", "response": "def register_filters(self):\n \"\"\" Register filters for Jinja to use\n\n .. 
note:: Extends the dictionary filters of jinja_env using self._filters list\n \"\"\"\n for _filter, instance in self._filters:\n if not instance:\n self.app.jinja_env.filters[\n _filter.replace(\"f_\", \"\")\n ] = getattr(flask_nemo.filters, _filter)\n else:\n self.app.jinja_env.filters[\n _filter.replace(\"f_\", \"\")\n ] = getattr(instance, _filter.replace(\"_{}\".format(instance.name), \"\"))"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nregisters plugins in Nemo instance", "response": "def register_plugins(self):\n \"\"\" Register plugins in Nemo instance\n\n - Clear routes first if asked by one plugin\n - Clear assets if asked by one plugin and replace by the last plugin registered static_folder\n - Register each plugin\n - Append plugin routes to registered routes\n - Append plugin filters to registered filters\n - Append templates directory to given namespaces\n - Append assets (CSS, JS, statics) to given resources \n - Append render view (if exists) to Nemo.render stack\n \"\"\"\n if len([plugin for plugin in self.__plugins__.values() if plugin.clear_routes]) > 0: # Clear current routes\n self._urls = list()\n self.cached = list()\n\n clear_assets = [plugin for plugin in self.__plugins__.values() if plugin.clear_assets]\n if len(clear_assets) > 0 and not self.prevent_plugin_clearing_assets: # Clear current Assets\n self.__assets__ = copy(type(self).ASSETS)\n static_path = [plugin.static_folder for plugin in clear_assets if plugin.static_folder]\n if len(static_path) > 0:\n self.static_folder = static_path[-1]\n\n for plugin in self.__plugins__.values():\n self._urls.extend([(url, function, methods, plugin) for url, function, methods in plugin.routes])\n self._filters.extend([(filt, plugin) for filt in plugin.filters])\n self.__templates_namespaces__.extend(\n [(namespace, directory) for namespace, directory in plugin.templates.items()]\n )\n for asset_type in self.__assets__:\n for key, value in 
plugin.assets[asset_type].items():\n self.__assets__[asset_type][key] = value\n if plugin.augment:\n self.__plugins_render_views__.append(plugin)\n\n if hasattr(plugin, \"CACHED\"):\n for func in plugin.CACHED:\n self.cached.append((getattr(plugin, func), plugin))\n plugin.register_nemo(self)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nhandles a list of references depending on the text identifier.", "response": "def chunk(self, text, reffs):\n \"\"\" Handle a list of references depending on the text identifier using the chunker dictionary.\n\n :param text: Text object from which comes the references\n :type text: MyCapytains.resources.texts.api.Text\n :param reffs: List of references to transform\n :type reffs: References\n :return: Transformed list of references\n :rtype: [str]\n \"\"\"\n if str(text.id) in self.chunker:\n return self.chunker[str(text.id)](text, reffs)\n return self.chunker[\"default\"](text, reffs)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef update(self, fieldname, localValue, remoteValue):\n '''\n Returns the appropriate current value, based on the changes\n recorded by this ChangeTracker, the value stored by the server\n (`localValue`), and the value stored by the synchronizing client\n (`remoteValue`). If `remoteValue` conflicts with changes stored\n locally, then a `pysyncml.ConflictError` is raised.\n\n If a change needs to be applied because `remoteValue` has been\n updated, then the new value will be returned, and this\n ChangeTracker will be updated such that a call to\n :meth:`getChangeSpec` will incorporate the change.\n\n :param fieldname:\n\n The name of the fieldname being evaluated.\n\n :param localValue:\n\n The value of the field as stored by the server, usually the one that\n also stored the current change-spec. 
If `localValue` is ``None``,\n then it is assumed that the field was potentially added (this will\n first be verified against the stored change-spec).\n\n :param remoteValue:\n\n The new value being presented that may or may not be a source of\n conflict. If `remoteValue` is ``None``, then it is assumed that\n the field was potentially deleted (this will first be verified\n against the stored change-spec).\n\n '''\n if localValue == remoteValue:\n return localValue\n ct = constants.ITEM_DELETED if remoteValue is None else constants.ITEM_MODIFIED\n if localValue is None:\n ct = constants.ITEM_ADDED\n\n # todo: i should probably trap irep errors. for example, if this\n # cspec has a field \"x\" marked as deleted, then `localValue`\n # must be None... etc.\n\n # TODO: i think this kind of handling would break in ListChangeTracker!...\n\n changed = self.isChange(fieldname, ct, remoteValue)\n if changed is None:\n return localValue\n self.append(changed, ct, initialValue=localValue, isMd5=False)\n return remoteValue", "response": "Updates the value of the field in the current change - tree."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn True if the change object for the given fieldname is changed.", "response": "def isChange(self, fieldname, changeType, newValue=None, isMd5=False):\n '''\n Implements as specified in :meth:`.ChangeTracker.isChange` where\n the `changeObject` is simply the fieldname that needs to be\n updated with the `newValue`. 
Currently, this is always equal to\n `fieldname`.\n '''\n # todo: this seems inefficient...\n changes = self._collapseChanges(self.baseline, self.current)\n if fieldname not in changes:\n return fieldname\n cur = changes[fieldname]\n if changeType == constants.ITEM_DELETED:\n if cur.op == constants.ITEM_ADDED or cur.op == constants.ITEM_DELETED:\n # the field is deleted because it hasn't been added yet\n # (the check for cur.op == constants.ITEM_DELETED should\n # never be true, so just here for paranoia...)\n return None\n # we are requiring that the current/new values are different,\n # thus there is a collision between the added values\n raise ConflictError('conflicting deletion of field \"%s\"'\n % (fieldname,))\n\n # the `newValue` is different than the current value (otherwise\n # this method should not have been called) -- either it was added\n # or modified.\n\n # if it appears to be \"added\", then it may be because it was\n # deleted in this tracker.\n\n # if it appears to be \"modified\", then it may be because it\n # was modified in this tracker.\n\n # in either case, check to see if it is equal to the initial\n # value, and if it was, then there was actually no change.\n\n if isMd5Equal(newValue, isMd5, cur.ival, cur.md5):\n # the new value is equal to the initial value, so this\n # field was not changed (but has local changes)\n return None\n\n # the new value is not equal to the initial value, which means\n # that they were both changed and/or added.\n raise ConflictError(\n 'conflicting addition or modification of field \"%s\"' % (fieldname,))"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef append(self, listIndex, changeType, initialValue=None, isMd5=False):\n '''\n Adds a change spec to the current list of changes. 
The `listIndex`\n represents the line number (in multi-line mode) or word number (in\n single-line mode), and must be **INCLUSIVE** of both additions and\n deletions.\n '''\n if not isMd5 and initialValue is not None and len(initialValue) > 32:\n initialValue = hashlib.md5(initialValue).hexdigest()\n isMd5 = True\n cur = adict(index = int(listIndex),\n op = changeType,\n ival = initialValue,\n md5 = isMd5)\n for idx, val in enumerate(self.current):\n if val.index < cur.index:\n continue\n if val.index > cur.index:\n self.current.insert(idx, cur)\n break\n # todo: this should never happen... (there should not be a change\n # reported for the same line without a `pushChangeSpec()` between)\n # todo: perhaps attempt a merging?...\n raise InvalidChangeSpec('conflicting changes for index %d' % (cur.index,))\n else:\n self.current.append(cur)", "response": "Adds a change spec to the current list of changes."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns True if the list index is changed.", "response": "def isChange(self, listIndex, changeType, newValue=None, isMd5=False, token=None):\n '''\n Implements as specified in :meth:`.ChangeTracker.isChange` where\n the `changeObject` is a two-element tuple. The first element is\n the index at which the change should be applied, and the second\n element is an abstract token that should be passed back into this\n method at every iteration.\n\n IMPORTANT: unlike the AttributeChangeTracker, the\n ListChangeTracker's `isChange()` method is sensitive to order\n (which is why it uses the `changeObject` and `token`\n mechanisms. 
Therefore, it is important to call `isChange()`\n sequentially with all changes in the order that they occur in the\n change list.\n '''\n\n # THE INDEX PASSED TO ListChangeTracker.isChange() DOES NOT INCLUDE:\n # - local deletions\n # - remote additions\n\n adjust = 0 # tracks local deletes\n token = token # tracks consecutive addition adjustments\n index = int(listIndex)\n ret = index\n\n # todo: this should reduce complexity later on, but something\n # went wrong...\n # if changeType != constants.ITEM_ADDED:\n # token = None\n # else:\n # if token is None or token[0] != index:\n # token = (ret, 0)\n # token = (ret, token[1] + 1)\n\n # todo: this seems inefficient...\n changes = self._collapseChanges(self.baseline, self.current)\n\n for cur in changes:\n if cur.index > index:\n if changeType != constants.ITEM_ADDED:\n return (ret, None)\n if token is None or token[0] != index - adjust:\n token = (ret, 0)\n token = (ret, token[1] + 1)\n return (ret, token)\n\n if cur.index != index:\n if cur.op == constants.ITEM_DELETED:\n index += 1\n adjust += 1\n continue\n\n if token is not None and token[0] == index - adjust:\n index += token[1]\n continue\n\n if changeType == constants.ITEM_DELETED:\n if cur.op == constants.ITEM_ADDED:\n # the field is deleted because it hasn't been added yet\n return (None, None)\n # we are requiring that the current/new values are different,\n # thus there is a collision between the added values\n raise ConflictError(\n 'conflicting deletion of list index %r' % (index,))\n\n if changeType == constants.ITEM_ADDED:\n if token is None:\n token = (ret, 0)\n token = (ret, token[1] + 1)\n if cur.op == constants.ITEM_DELETED:\n if isMd5Equal(newValue, isMd5, cur.ival, cur.md5):\n return (None, token)\n # todo: this *could* be a del-mod *conflict*... but not\n # *NECESSARILY* so, since it could be a\n # del-adjacent-add, which is not a problem. 
in the\n # conflict case, the resolution will cause the\n # modified line to silently win.\n # TODO: perhaps i should err on the side of safety and\n # issue a ConflictError?...\n return (ret, token)\n\n if cur.op == constants.ITEM_DELETED:\n index += 1\n adjust += 1\n continue\n\n # changeType = mod, op = add/mod\n\n if cur.op == constants.ITEM_ADDED:\n # todo: i'm not sure if this case is even possible...\n raise ConflictError(\n 'conflicting addition of list index %r' % (index,))\n\n # mod/mod - check initvalue\n\n if isMd5Equal(newValue, isMd5, cur.ival, cur.md5):\n # the new value is equal to the initial value, so this\n # line was not changed (but has local changes)\n return (None, None)\n # the new value is not equal to the initial value, which means\n # that they were both changed and/or added.\n raise ConflictError(\n 'conflicting modification of list index %r' % (index,))\n\n if changeType != constants.ITEM_ADDED:\n return (ret, None)\n if token is None or token[0] != index - adjust:\n token = (ret, 0)\n token = (ret, token[1] + 1)\n return (ret, token)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef add_tag():\n if len(sys.argv) > 1:\n tag = sys.argv[1]\n doc_mapper = DocMapper()\n if doc_mapper.is_pipe:\n count = 0\n for obj in doc_mapper.get_pipe():\n obj.add_tag(tag)\n obj.update(tags=obj.tags)\n count += 1\n print_success(\"Added tag '{}' to {} object(s)\".format(tag, count))\n else:\n print_error(\"Please use this script with pipes\")\n else:\n print_error(\"Usage: jk-add-tag <tag>\")\n sys.exit()", "response": "Adds a tag to the object list."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef manual_configure():\n print(\"Manual configuring jackal\")\n mapping = { '1': 'y', '0': 'n'}\n config = Config()\n # Host\n host = input_with_default(\"What is the Elasticsearch host?\", config.get('jackal', 'host'))\n
config.set('jackal', 'host', host)\n\n # SSL\n if input_with_default(\"Use SSL?\", mapping[config.get('jackal', 'use_ssl')]) == 'y':\n config.set('jackal', 'use_ssl', '1')\n if input_with_default(\"Setup custom server cert?\", 'y') == 'y':\n ca_certs = input_with_default(\"Server certificate location?\", config.get('jackal', 'ca_certs'))\n config.set('jackal', 'ca_certs', ca_certs)\n else:\n config.set('jackal', 'ca_certs', '')\n else:\n config.set('jackal', 'use_ssl', '0')\n\n if input_with_default(\"Setup client certificates?\", mapping[config.get('jackal', 'client_certs')]) == 'y':\n config.set('jackal', 'client_certs', '1')\n client_cert = input_with_default(\"Client cert location?\", config.get('jackal', 'client_cert'))\n config.set('jackal', 'client_cert', client_cert)\n client_key = input_with_default(\"Client key location?\", config.get('jackal', 'client_key'))\n config.set('jackal', 'client_key', client_key)\n else:\n config.set('jackal', 'client_certs', '0')\n\n # Index\n index = input_with_default(\"What index prefix should jackal use?\", config.get('jackal', 'index'))\n config.set('jackal', 'index', index)\n initialize_indices = (input_with_default(\"Do you want to initialize the indices?\", 'y').lower() == 'y')\n\n # Nmap\n nmap_dir = input_with_default(\"What directory do you want to place the nmap results in?\", config.get('nmap', 'directory'))\n if not os.path.exists(nmap_dir):\n os.makedirs(nmap_dir)\n config.set('nmap', 'directory', nmap_dir)\n nmap_options = input_with_default(\"What nmap options do you want to set for 'custom' (for example '-p 22,445')?\", config.get('nmap', 'options'))\n config.set('nmap', 'options', nmap_options)\n\n # Nessus\n configure_nessus = (input_with_default(\"Do you want to setup nessus?\", 'n').lower() == 'y')\n if configure_nessus:\n nessus_host = input_with_default(\"What is the nessus host?\", config.get('nessus', 'host'))\n nessus_template = input_with_default(\"What template should jackal use?\", 
config.get('nessus', 'template_name'))\n nessus_access = input_with_default(\"What api access key should jackal use?\", config.get('nessus', 'access_key'))\n nessus_secret = input_with_default(\"What api secret key should jackal use?\", config.get('nessus', 'secret_key'))\n config.set('nessus', 'host', nessus_host)\n config.set('nessus', 'template_name', nessus_template)\n config.set('nessus', 'access_key', nessus_access)\n config.set('nessus', 'secret_key', nessus_secret)\n\n # Named pipes\n configure_pipes = (input_with_default(\"Do you want to setup named pipes?\", 'n').lower() == 'y')\n if configure_pipes:\n directory = input_with_default(\"What directory do you want to place the named pipes in?\", config.get('pipes', 'directory'))\n config.set('pipes', 'directory', directory)\n config_file = input_with_default(\"What is the name of the named pipe config?\", config.get('pipes', 'config_file'))\n config.set('pipes', 'config_file', config_file)\n if not os.path.exists(directory):\n create = (input_with_default(\"Do you want to create the directory?\", 'n').lower() == 'y')\n if create:\n os.makedirs(directory)\n if not os.path.exists(os.path.join(config.config_dir, config_file)):\n f = open(os.path.join(config.config_dir, config_file), 'a')\n f.close()\n\n config.write_config(initialize_indices)", "response": "Function to manually configure jackal."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef set(self, section, key, value):\n if not section in self.config:\n self.config.add_section(section)\n self.config.set(section, key, value)", "response": "Sets the value of the key in the specified section."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns the configuration directory that contains the user s data.", "response": "def config_dir(self):\n \"\"\"\n Returns the configuration directory\n \"\"\"\n home = expanduser('~')\n config_dir = os.path.join(home, '.jackal')\n 
return config_dir"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef write_config(self, initialize_indices=False):\n if not os.path.exists(self.config_dir):\n os.mkdir(self.config_dir)\n\n with open(self.config_file, 'w') as configfile:\n self.config.write(configfile)\n\n if initialize_indices:\n index = self.get('jackal', 'index')\n from jackal import Host, Range, Service, User, Credential, Log\n from jackal.core import create_connection\n create_connection(self)\n Host.init(index=\"{}-hosts\".format(index))\n Range.init(index=\"{}-ranges\".format(index))\n Service.init(index=\"{}-services\".format(index))\n User.init(index=\"{}-users\".format(index))\n Credential.init(index=\"{}-creds\".format(index))\n Log.init(index=\"{}-log\".format(index))", "response": "Write the current config to disk to store them."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nensure that the specified remote branch is not already tracked.", "response": "def ensure_remote_branch_is_tracked(branch):\n \"\"\"Track the specified remote branch if it is not already tracked.\"\"\"\n if branch == MASTER_BRANCH:\n # We don't need to explicitly track the master branch, so we're done.\n return\n\n # Ensure the specified branch is in the local branch list.\n output = subprocess.check_output(['git', 'branch', '--list'])\n for line in output.split('\\n'):\n if line.strip() == branch:\n # We are already tracking the remote branch\n break\n else:\n # We are not tracking the remote branch, so track it.\n try:\n sys.stdout.write(subprocess.check_output(\n ['git', 'checkout', '--track', 'origin/%s' % branch]))\n except subprocess.CalledProcessError:\n # Bail gracefully.\n raise SystemExit(1)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef main(branch):\n try:\n # Ensure that we're in a git repository. 
This command is silent unless\n # you're not actually in a git repository, in which case, you receive a\n # \"Not a git repository\" error message.\n output = subprocess.check_output(['git', 'rev-parse']).decode('utf-8')\n sys.stdout.write(output)\n except subprocess.CalledProcessError:\n # Bail if we're not in a git repository.\n return\n\n # This behavior ensures a better user experience for those that aren't\n # intimately familiar with git.\n ensure_remote_branch_is_tracked(branch)\n\n # Switch to the specified branch and update it.\n subprocess.check_call(['git', 'checkout', '--quiet', branch])\n\n # Pulling is always safe here, because we never commit to this branch.\n subprocess.check_call(['git', 'pull', '--quiet'])\n\n # Checkout the top commit in the branch, effectively going \"untracked.\"\n subprocess.check_call(['git', 'checkout', '--quiet', '%s~0' % branch])\n\n # Clean up the repository of Python cruft. Because we've just switched\n # branches and compiled Python files should not be version controlled,\n # there are likely leftover compiled Python files sitting on disk which may\n # confuse some tools, such as sqlalchemy-migrate.\n subprocess.check_call(['find', '.', '-name', '\"*.pyc\"', '-delete'])\n\n # For the sake of user experience, give some familiar output.\n print('Your branch is up to date with branch \\'origin/%s\\'.' 
% branch)", "response": "Main function for the branch command."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_interface_name():\n interface_name = ''\n interfaces = psutil.net_if_addrs()\n for name, details in interfaces.items():\n for detail in details:\n if detail.family == socket.AF_INET:\n ip_address = ipaddress.ip_address(detail.address)\n if not (ip_address.is_link_local or ip_address.is_loopback):\n interface_name = name\n break\n return interface_name", "response": "Returns the first not link local and not loopback interface."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef load_targets(self):\n ldap_services = []\n if self.ldap:\n ldap_services = self.search.get_services(ports=[389], up=True)\n\n self.ldap_strings = [\"ldap://{}\".format(service.address) for service in ldap_services]\n self.services = self.search.get_services(tags=['smb_signing_disabled'])\n self.ips = [str(service.address) for service in self.services]", "response": "Load the services with smb signing disabled and ldap enabled."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef write_targets(self):\n if len(self.ldap_strings) == 0 and len(self.ips) == 0:\n print_notification(\"No targets left\")\n if self.auto_exit:\n if self.notifier:\n self.notifier.stop()\n self.terminate_processes()\n\n with open(self.targets_file, 'w') as f:\n f.write('\\n'.join(self.ldap_strings + self.ips))", "response": "write the contents of ldap_strings and ips to the targets_file."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef start_processes(self):\n self.relay = subprocess.Popen(['ntlmrelayx.py', '-6', '-tf', self.targets_file, '-w', '-l', self.directory, '-of', self.output_file], cwd=self.directory)\n self.responder = 
subprocess.Popen(['responder', '-I', self.interface_name])", "response": "Starts the ntlmrelayx. py and responder processes."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef callback(self, event):\n # IN_CLOSE_WRITE -> 0x00000008\n if event.mask == 0x00000008:\n if event.name.endswith('.json'):\n print_success(\"Ldapdomaindump file found\")\n if event.name in ['domain_groups.json', 'domain_users.json']:\n if event.name == 'domain_groups.json':\n self.domain_groups_file = event.pathname\n if event.name == 'domain_users.json':\n self.domain_users_file = event.pathname\n if self.domain_groups_file and self.domain_users_file:\n print_success(\"Importing users\")\n subprocess.Popen(['jk-import-domaindump', self.domain_groups_file, self.domain_users_file])\n elif event.name == 'domain_computers.json':\n print_success(\"Importing computers\")\n subprocess.Popen(['jk-import-domaindump', event.pathname])\n\n # Ldap has been dumped, so remove the ldap targets.\n self.ldap_strings = []\n self.write_targets()\n\n if event.name.endswith('_samhashes.sam'):\n host = event.name.replace('_samhashes.sam', '')\n # TODO import file.\n print_success(\"Secretsdump file, host ip: {}\".format(host))\n subprocess.Popen(['jk-import-secretsdump', event.pathname])\n\n # Remove this system from this ip list.\n self.ips.remove(host)\n self.write_targets()", "response": "Function that gets called on each event from pyinotify."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nwatches the directory for changes and processes them.", "response": "def watch(self):\n \"\"\"\n Watches directory for changes\n \"\"\"\n wm = pyinotify.WatchManager()\n self.notifier = pyinotify.Notifier(wm, default_proc_fun=self.callback)\n wm.add_watch(self.directory, pyinotify.ALL_EVENTS)\n try:\n self.notifier.loop()\n except (KeyboardInterrupt, AttributeError):\n print_notification(\"Stopping\")\n finally:\n 
self.notifier.stop()\n self.terminate_processes()"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef getAnnotations(self, targets, wildcard=\".\", include=None, exclude=None, limit=None, start=1, expand=False,\n **kwargs):\n \"\"\" Retrieve annotations from the query provider\n\n :param targets: The CTS URN(s) to query as the target of annotations\n :type targets: [MyCapytain.common.reference.URN], URN or None\n :param wildcard: Wildcard specifier for how to match the URN\n :type wildcard: str\n :param include: URI(s) of Annotation types to include in the results\n :type include: list(str)\n :param exclude: URI(s) of Annotation types to include in the results\n :type exclude: list(str)\n :param limit: The max number of results to return (Default is None for no limit)\n :type limit: int\n :param start: the starting record to return (Default is 1)\n :type start: int \n :param expand: Flag to state whether Annotations are expanded (Default is False)\n :type expand: bool\n \n :return: Tuple representing the query results. The first element\n The first element is the number of total Annotations found\n The second element is the list of Annotations\n :rtype: (int, list(Annotation)\n\n .. note::\n\n Wildcard should be one of the following value\n\n - '.' to match exact,\n - '.%' to match exact plus lower in the hierarchy\n - '%.' 
to match exact + higher in the hierarchy\n - '-' to match in the range\n - '%.%' to match all\n\n \"\"\"\n return 0, []", "response": "Retrieve the annotations from the query provider"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nrender the breadcrumbs for a specific resource.", "response": "def render(self, **kwargs):\n \"\"\" Make breadcrumbs for a route\n\n :param kwargs: dictionary of named arguments used to construct the view\n :type kwargs: dict\n :return: List of dict items the view can use to construct the link.\n :rtype: {str: list({ \"link\": str, \"title\", str, \"args\", dict})}\n \"\"\"\n breadcrumbs = []\n # this is the list of items we want to accumulate in the breadcrumb trail.\n # item[0] is the key into the kwargs[\"url\"] object and item[1] is the name of the route\n # setting a route name to None means that it's needed to construct the route of the next item in the list\n # but shouldn't be included in the list itself (this is currently the case for work --\n # at some point we probably should include work in the navigation)\n breadcrumbs = []\n if \"collections\" in kwargs:\n breadcrumbs = [{\n \"title\": \"Text Collections\",\n \"link\": \".r_collections\",\n \"args\": {}\n }]\n\n if \"parents\" in kwargs[\"collections\"]:\n breadcrumbs += [\n {\n \"title\": parent[\"label\"],\n \"link\": \".r_collection_semantic\",\n \"args\": {\n \"objectId\": parent[\"id\"],\n \"semantic\": f_slugify(parent[\"label\"]),\n },\n }\n for parent in kwargs[\"collections\"][\"parents\"]\n ][::-1]\n\n if \"current\" in kwargs[\"collections\"]:\n breadcrumbs.append({\n \"title\": kwargs[\"collections\"][\"current\"][\"label\"],\n \"link\": None,\n \"args\": {}\n })\n\n # don't link the last item in the trail\n if len(breadcrumbs) > 0:\n breadcrumbs[-1][\"link\"] = None\n\n return {\"breadcrumbs\": breadcrumbs}"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nretrieving the uuid of the 
given template name.", "response": "def get_template_uuid(self):\n \"\"\"\n Retrieves the uuid of the given template name.\n \"\"\"\n response = requests.get(self.url + 'editor/scan/templates', headers=self.headers, verify=False)\n templates = json.loads(response.text)\n for template in templates['templates']:\n if template['name'] == self.template_name:\n return template['uuid']"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncreate a new scan with the given host ips. Returns the scan id of the created object.", "response": "def create_scan(self, host_ips):\n \"\"\"\n Creates a scan with the given host ips\n Returns the scan id of the created object.\n \"\"\"\n now = datetime.datetime.now()\n data = {\n \"uuid\": self.get_template_uuid(),\n \"settings\": {\n \"name\": \"jackal-\" + now.strftime(\"%Y-%m-%d %H:%M\"),\n \"text_targets\": host_ips\n }\n }\n response = requests.post(self.url + 'scans', data=json.dumps(data), verify=False, headers=self.headers)\n if response:\n result = json.loads(response.text)\n return result['scan']['id']"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nstarting the scan identified by the scan_id.", "response": "def start_scan(self, scan_id):\n \"\"\"\n Starts the scan identified by the scan_id.\n \"\"\"\n requests.post(self.url + 'scans/{}/launch'.format(scan_id), verify=False, headers=self.headers)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef stable(rankings, A, B):\n partners = dict((a, (rankings[(a, 1)], 1)) for a in A)\n is_stable = False # whether the current pairing (given by `partners`) is stable\n while is_stable == False:\n is_stable = True\n for b in B:\n is_paired = False # whether b has a pair which b ranks <= to n\n for n in range(1, len(B) + 1):\n a = rankings[(b, n)]\n a_partner, a_n = partners[a]\n if a_partner == b:\n if is_paired:\n is_stable = False\n partners[a] = (rankings[(a, a_n 
+ 1)], a_n + 1)\n else:\n is_paired = True\n return sorted((a, b) for (a, (b, n)) in partners.items())", "response": "r Returns a list of rankings that are stable for A and B."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef cmpToDataStore_uri(base, ds1, ds2):\n '''Bases the comparison of the datastores on URI alone.'''\n ret = difflib.get_close_matches(base.uri, [ds1.uri, ds2.uri], 1, cutoff=0.5)\n if len(ret) <= 0:\n return 0\n if ret[0] == ds1.uri:\n return -1\n return 1", "response": "Bases the comparison of the datastores on URI alone."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef add_tag(self, tag):\n self.tags = list(set(self.tags or []) | set([tag]))", "response": "Adds a tag to the list of tags that are not already present in the result list."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef remove_tag(self, tag):\n self.tags = list(set(self.tags or []) - set([tag]))", "response": "Removes a tag from this object."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns the result as a dictionary", "response": "def to_dict(self, include_meta=False):\n \"\"\"\n Returns the result as a dictionary, provide the include_meta flag to als show information like index and doctype.\n \"\"\"\n result = super(JackalDoc, self).to_dict(include_meta=include_meta)\n if include_meta:\n source = result.pop('_source')\n return {**result, **source}\n else:\n return result"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef r_annotations(self):\n\n target = request.args.get(\"target\", None)\n wildcard = request.args.get(\"wildcard\", \".\", type=str)\n include = request.args.get(\"include\")\n exclude = request.args.get(\"exclude\")\n limit = 
request.args.get(\"limit\", None, type=int)\n start = request.args.get(\"start\", 1, type=int)\n expand = request.args.get(\"expand\", False, type=bool)\n\n if target:\n\n try:\n urn = MyCapytain.common.reference.URN(target)\n except ValueError:\n return \"invalid urn\", 400\n\n count, annotations = self.__queryinterface__.getAnnotations(urn, wildcard=wildcard, include=include,\n exclude=exclude, limit=limit, start=start,\n expand=expand)\n else:\n # Note that this implementation is not done for too much annotations\n # because we do not implement pagination here\n count, annotations = self.__queryinterface__.getAnnotations(None, limit=limit, start=start, expand=expand)\n mapped = []\n response = {\n \"@context\": type(self).JSONLD_CONTEXT,\n \"id\": url_for(\".r_annotations\", start=start, limit=limit),\n \"type\": \"AnnotationCollection\",\n \"startIndex\": start,\n \"items\": [\n ],\n \"total\": count\n }\n for a in annotations:\n mapped.append({\n \"id\": url_for(\".r_annotation\", sha=a.sha),\n \"body\": url_for(\".r_annotation_body\", sha=a.sha),\n \"type\": \"Annotation\",\n \"target\": a.target.to_json(),\n \"dc:type\": a.type_uri,\n \"owl:sameAs\": [a.uri],\n \"nemo:slug\": a.slug\n })\n response[\"items\"] = mapped\n response = jsonify(response)\n return response", "response": "Route to retrieve annotations by target_urn"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nroute to retrieve contents of an annotation resource", "response": "def r_annotation(self, sha):\n \"\"\" Route to retrieve contents of an annotation resource\n\n :param uri: The uri of the annotation resource\n :type uri: str\n :return: annotation contents\n :rtype: {str: Any}\n \"\"\"\n annotation = self.__queryinterface__.getResource(sha)\n if not annotation:\n return \"invalid resource uri\", 404\n response = {\n \"@context\": type(self).JSONLD_CONTEXT,\n \"id\": url_for(\".r_annotation\", sha=annotation.sha),\n \"body\": url_for(\".r_annotation_body\", 
sha=annotation.sha),\n \"type\": \"Annotation\",\n \"target\": annotation.target.to_json(),\n \"owl:sameAs\": [annotation.uri],\n \"dc:type\": annotation.type_uri,\n \"nemo:slug\": annotation.slug\n }\n return jsonify(response)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef r_annotation_body(self, sha):\n annotation = self.__queryinterface__.getResource(sha)\n if not annotation:\n return \"invalid resource uri\", 404\n # TODO this should inspect the annotation content\n # set appropriate Content-Type headers\n # and return the actual content\n content = annotation.read()\n if isinstance(content, Response):\n return content\n headers = {\"Content-Type\": annotation.mimetype}\n return Response(content, headers=headers)", "response": "Route to retrieve the contents of an annotation resource"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nprint the statistics of the current state of the virtual machine.", "response": "def describeStats(stats, stream, title=None, details=True, totals=True, gettext=None):\n '''\n Renders an ASCII-table of the synchronization statistics `stats`,\n example output:\n\n .. code-block::\n\n +----------------------------------------------------------------------------------+\n | TITLE |\n +----------+------+-------------------------+--------------------------+-----------+\n | | | Local | Remote | Conflicts |\n | Source | Mode | Add | Mod | Del | Err | Add | Mod | Del | Err | Col | Mrg |\n +----------+------+-------+-----+-----+-----+--------+-----+-----+-----+-----+-----+\n | contacts | <= | - | - | - | - | 10,387 | - | - | - | - | - |\n | note | SS | 1,308 | - | 2 | - | - | - | - | - | - | - |\n +----------+------+-------+-----+-----+-----+--------+-----+-----+-----+-----+-----+\n | 1,310 local changes and 10,387 remote changes. 
|\n +----------------------------------------------------------------------------------+\n\n :Parameters:\n\n stats : dict\n\n The synchronization stats returned by a call to Adapter.sync().\n\n stream : file-like-object\n\n An output file-like object that has at least a `write()` method,\n e.g. ``sys.stdout`` can be used.\n\n title : str, optional, default: null\n\n A title placed at the top of the table -- if omitted (the default),\n then no title is rendered.\n\n details : bool, optional, default: true\n\n If truthy, a per-datastore listing of changes will be displayed\n (as in the above example).\n\n totals : bool, optional, default: true\n\n If truthy, a summary of all changes will be displayed (as in the\n above example).\n\n gettext : callable, optional, @DEPRECATED(0.2.0), default: null\n\n A `gettext.gettext` compatible callable used for translating\n localized content (such as number formatting, etc.).\n\n NOTE: this parameter is deprecated, and will be replaced with\n a generalized i18n solution.\n '''\n\n from . 
import state\n modeStringLut = dict((\n (constants.SYNCTYPE_TWO_WAY, '<>'),\n (constants.SYNCTYPE_SLOW_SYNC, 'SS'),\n (constants.SYNCTYPE_ONE_WAY_FROM_CLIENT, '->'),\n (constants.SYNCTYPE_REFRESH_FROM_CLIENT, '=>'),\n (constants.SYNCTYPE_ONE_WAY_FROM_SERVER, '<-'),\n (constants.SYNCTYPE_REFRESH_FROM_SERVER, '<='),\n ))\n\n if gettext is not None:\n _ = gettext\n else:\n _ = lambda s: s\n\n # todo: this does not handle the case where the title is wider than the table.\n\n wSrc = len(_('Source'))\n wMode = len(_('Mode'))\n wCon = len(_('Conflicts'))\n wCol = len(_('Col'))\n wMrg = len(_('Mrg'))\n wHereAdd = wPeerAdd = len(_('Add'))\n wHereMod = wPeerMod = len(_('Mod'))\n wHereDel = wPeerDel = len(_('Del'))\n wHereErr = wPeerErr = len(_('Err'))\n\n totLoc = 0\n totRem = 0\n totErr = 0\n totCol = 0\n totMrg = 0\n\n for key in stats.keys():\n wSrc = max(wSrc, len(key))\n wMode = max(wMode, len(modeStringLut.get(stats[key].mode)))\n wCol = max(wCol, len(num2str(stats[key].conflicts)))\n wMrg = max(wMrg, len(num2str(stats[key].merged)))\n wHereAdd = max(wHereAdd, len(num2str(stats[key].hereAdd)))\n wPeerAdd = max(wPeerAdd, len(num2str(stats[key].peerAdd)))\n wHereMod = max(wHereMod, len(num2str(stats[key].hereMod)))\n wPeerMod = max(wPeerMod, len(num2str(stats[key].peerMod)))\n wHereDel = max(wHereDel, len(num2str(stats[key].hereDel)))\n wPeerDel = max(wPeerDel, len(num2str(stats[key].peerDel)))\n wHereErr = max(wHereErr, len(num2str(stats[key].hereErr)))\n wPeerErr = max(wPeerErr, len(num2str(stats[key].peerErr)))\n totLoc += stats[key].hereAdd + stats[key].hereMod + stats[key].hereDel\n totRem += stats[key].peerAdd + stats[key].peerMod + stats[key].peerDel\n totErr += stats[key].hereErr + stats[key].peerErr\n totCol += stats[key].conflicts\n totMrg += stats[key].merged\n\n # TODO: i'm 100% sure there is a python library that can do this for me...\n\n if wCon > wCol + 3 + wMrg:\n diff = wCon - ( wCol + 3 + wMrg )\n wCol += diff / 2\n wMrg = wCon - 3 - wCol\n else:\n wCon 
= wCol + 3 + wMrg\n\n if details:\n tWid = ( wSrc + 3 + wMode + 3\n + wHereAdd + wHereMod + wHereDel + wHereErr + 9 + 3\n + wPeerAdd + wPeerMod + wPeerDel + wPeerErr + 9 + 3\n + wCon )\n else:\n if title is None:\n tWid = 0\n else:\n tWid = len(title)\n\n if totals:\n # TODO: oh dear. from an i18n POV, this is *horrible*!...\n sumlist = []\n for val, singular, plural in [\n (totLoc, _('local change'), _('local changes')),\n (totRem, _('remote change'), _('remote changes')),\n (totErr, _('error'), _('errors')),\n ]:\n if val == 1:\n sumlist.append(num2str(val) + ' ' + singular)\n elif val > 1:\n sumlist.append(num2str(val) + ' ' + plural)\n if len(sumlist) <= 0:\n sumlist = _('No changes')\n elif len(sumlist) == 1:\n sumlist = sumlist[0]\n else:\n sumlist = ', '.join(sumlist[:-1]) + ' ' + _('and') + ' ' + sumlist[-1]\n if totMrg > 0 or totCol > 0:\n sumlist += ': '\n if totMrg == 1:\n sumlist += num2str(totMrg) + ' ' + _('merge')\n elif totMrg > 1:\n sumlist += num2str(totMrg) + ' ' + _('merges')\n if totMrg > 0 and totCol > 0:\n sumlist += ' ' + _('and') + ' '\n if totCol == 1:\n sumlist += num2str(totCol) + ' ' + _('conflict')\n elif totCol > 1:\n sumlist += num2str(totCol) + ' ' + _('conflicts')\n sumlist += '.'\n if len(sumlist) > tWid:\n wSrc += len(sumlist) - tWid\n tWid = len(sumlist)\n\n if title is not None:\n stream.write('+-' + '-' * tWid + '-+\\n')\n stream.write('| {0: ^{w}}'.format(title, w=tWid))\n stream.write(' |\\n')\n\n hline = '+-' \\\n + '-' * wSrc \\\n + '-+-' \\\n + '-' * wMode \\\n + '-+-' \\\n + '-' * ( wHereAdd + wHereMod + wHereDel + wHereErr + 9 ) \\\n + '-+-' \\\n + '-' * ( wPeerAdd + wPeerMod + wPeerDel + wPeerErr + 9 ) \\\n + '-+-' \\\n + '-' * wCon \\\n + '-+\\n'\n\n if details:\n\n stream.write(hline)\n\n stream.write('| ' + ' ' * wSrc)\n stream.write(' | ' + ' ' * wMode)\n stream.write(' | {0: ^{w}}'.format(_('Local'), w=( wHereAdd + wHereMod + wHereDel + wHereErr + 9 )))\n stream.write(' | {0: ^{w}}'.format(_('Remote'), w=( 
wPeerAdd + wPeerMod + wPeerDel + wPeerErr + 9 )))\n stream.write(' | {0: ^{w}}'.format(_('Conflicts'), w=wCon))\n stream.write(' |\\n')\n\n stream.write('| {0: >{w}}'.format(_('Source'), w=wSrc))\n stream.write(' | {0: >{w}}'.format(_('Mode'), w=wMode))\n stream.write(' | {0: ^{w}}'.format(_('Add'), w=wHereAdd))\n stream.write(' | {0: ^{w}}'.format(_('Mod'), w=wHereMod))\n stream.write(' | {0: ^{w}}'.format(_('Del'), w=wHereDel))\n stream.write(' | {0: ^{w}}'.format(_('Err'), w=wHereErr))\n stream.write(' | {0: ^{w}}'.format(_('Add'), w=wPeerAdd))\n stream.write(' | {0: ^{w}}'.format(_('Mod'), w=wPeerMod))\n stream.write(' | {0: ^{w}}'.format(_('Del'), w=wPeerDel))\n stream.write(' | {0: ^{w}}'.format(_('Err'), w=wPeerErr))\n stream.write(' | {0: ^{w}}'.format(_('Col'), w=wCol))\n stream.write(' | {0: ^{w}}'.format(_('Mrg'), w=wMrg))\n stream.write(' |\\n')\n\n hsline = '+-' + '-' * wSrc \\\n + '-+-' + '-' * wMode \\\n + '-+-' + '-' * wHereAdd \\\n + '-+-' + '-' * wHereMod \\\n + '-+-' + '-' * wHereDel \\\n + '-+-' + '-' * wHereErr \\\n + '-+-' + '-' * wPeerAdd \\\n + '-+-' + '-' * wPeerMod \\\n + '-+-' + '-' * wPeerDel \\\n + '-+-' + '-' * wPeerErr \\\n + '-+-' + '-' * wCol \\\n + '-+-' + '-' * wMrg \\\n + '-+\\n'\n\n stream.write(hsline)\n\n def numcol(val, wid):\n if val == 0:\n return ' | {0: ^{w}}'.format('-', w=wid)\n return ' | {0: >{w}}'.format(num2str(val), w=wid)\n\n for key in sorted(stats.keys(), key=lambda k: str(k).lower()):\n stream.write('| {0: >{w}}'.format(key, w=wSrc))\n stream.write(' | {0: ^{w}}'.format(modeStringLut.get(stats[key].mode), w=wMode))\n stream.write(numcol(stats[key].hereAdd, wHereAdd))\n stream.write(numcol(stats[key].hereMod, wHereMod))\n stream.write(numcol(stats[key].hereDel, wHereDel))\n stream.write(numcol(stats[key].hereErr, wHereErr))\n stream.write(numcol(stats[key].peerAdd, wPeerAdd))\n stream.write(numcol(stats[key].peerMod, wPeerMod))\n stream.write(numcol(stats[key].peerDel, wPeerDel))\n 
stream.write(numcol(stats[key].peerErr, wPeerErr))\n stream.write(numcol(stats[key].conflicts, wCol))\n stream.write(numcol(stats[key].merged, wMrg))\n stream.write(' |\\n')\n\n stream.write(hsline)\n\n if totals:\n if title is None and not details:\n stream.write('+-' + '-' * tWid + '-+\\n')\n stream.write('| {0: ^{w}}'.format(sumlist, w=tWid))\n stream.write(' |\\n')\n stream.write('+-' + '-' * tWid + '-+\\n')\n\n return"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef lookup(cls, key, get=False):\n if get:\n item = cls._item_dict.get(key)\n return item.name if item else key\n return cls._item_dict[key].name", "response": "Returns the label for a given Enum key"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef verbose(cls, key=False, default=''):\n if key is False:\n items = cls._item_dict.values()\n return [(x.key, x.value) for x in sorted(items, key=lambda x:x.sort or x.key)]\n\n item = cls._item_dict.get(key)\n return item.value if item else default", "response": "Returns the verbose name for a given enum value"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn the list of DNS servers configured with nmcli.", "response": "def get_configured_dns():\n \"\"\"\n Returns the configured DNS servers with the use of nmcli.\n \"\"\"\n ips = []\n try:\n output = subprocess.check_output(['nmcli', 'device', 'show'])\n output = output.decode('utf-8')\n\n for line in output.split('\\n'):\n if 'DNS' in line:\n pattern = r\"\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\"\n for hit in re.findall(pattern, line):\n ips.append(hit)\n except FileNotFoundError:\n pass\n return ips"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning the dns servers configured in the resolv. 
conf file", "response": "def get_resolv_dns():\n \"\"\"\n Returns the dns servers configured in /etc/resolv.conf\n \"\"\"\n result = []\n try:\n for line in open('/etc/resolv.conf', 'r'):\n if line.startswith('search'):\n result.append(line.strip().split(' ')[1])\n except FileNotFoundError:\n pass\n return result"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ntries to perform a zone transfer on a given address.", "response": "def zone_transfer(address, dns_name):\n \"\"\"\n Tries to perform a zone transfer.\n \"\"\"\n ips = []\n try:\n print_notification(\"Attempting dns zone transfer for {} on {}\".format(dns_name, address))\n z = dns.zone.from_xfr(dns.query.xfr(address, dns_name))\n except dns.exception.FormError:\n print_notification(\"Zone transfer not allowed\")\n return ips\n names = z.nodes.keys()\n print_success(\"Zone transfer successful for {}, found {} entries\".format(address, len(names)))\n for n in names:\n node = z[n]\n data = node.get_rdataset(dns.rdataclass.IN, dns.rdatatype.A)\n if data:\n # TODO add hostnames to entries.\n # hostname = n.to_text()\n for item in data.items:\n address = item.address\n ips.append(address)\n return ips"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nresolve the list of domains and return the ips.", "response": "def resolve_domains(domains, disable_zone=False):\n \"\"\"\n Resolves the list of domains and returns the ips.\n \"\"\"\n dnsresolver = dns.resolver.Resolver()\n\n ips = []\n\n for domain in domains:\n print_notification(\"Resolving {}\".format(domain))\n try:\n result = dnsresolver.query(domain, 'A')\n for a in result.response.answer[0]:\n ips.append(str(a))\n if not disable_zone:\n ips.extend(zone_transfer(str(a), domain))\n except dns.resolver.NXDOMAIN as e:\n print_error(e)\n return ips"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef parse_ips(ips, netmask, include_public):\n hs = 
HostSearch()\n rs = RangeSearch()\n ranges = []\n ips = list(set(ips))\n included_ips = []\n print_success(\"Found {} ips\".format(len(ips)))\n for ip in ips:\n ip_address = ipaddress.ip_address(ip)\n if include_public or ip_address.is_private:\n # To stop the screen filling with ranges.\n if len(ips) < 15:\n print_success(\"Found ip: {}\".format(ip))\n host = hs.id_to_object(ip)\n host.add_tag('dns_discover')\n host.save()\n r = str(ipaddress.IPv4Network(\"{}/{}\".format(ip, netmask), strict=False))\n ranges.append(r)\n included_ips.append(ip)\n else:\n print_notification(\"Excluding ip {}\".format(ip))\n\n ranges = list(set(ranges))\n print_success(\"Found {} ranges\".format(len(ranges)))\n for rng in ranges:\n # To stop the screen filling with ranges.\n if len(ranges) < 15:\n print_success(\"Found range: {}\".format(rng))\n r = rs.id_to_object(rng)\n r.add_tag('dns_discover')\n r.save()\n\n stats = {}\n stats['ips'] = included_ips\n stats['ranges'] = ranges\n return stats", "response": "Parses the list of ips into ranges based on the netmask given."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef create_connection(conf):\n host_config = {}\n host_config['hosts'] = [conf.get('jackal', 'host')]\n if int(conf.get('jackal', 'use_ssl')):\n host_config['use_ssl'] = True\n if conf.get('jackal', 'ca_certs'):\n host_config['ca_certs'] = conf.get('jackal', 'ca_certs')\n if int(conf.get('jackal', 'client_certs')):\n host_config['client_cert'] = conf.get('jackal', 'client_cert')\n host_config['client_key'] = conf.get('jackal', 'client_key')\n\n # Disable hostname checking for now.\n host_config['ssl_assert_hostname'] = False\n\n connections.create_connection(**host_config)", "response": "Creates a connection based upon the given configuration object."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nsearches the elasticsearch instance to retrieve the requested documents.", "response": 
"def search(self, number=None, *args, **kwargs):\n \"\"\"\n Searches the elasticsearch instance to retrieve the requested documents.\n \"\"\"\n search = self.create_search(*args, **kwargs)\n try:\n if number:\n response = search[0:number]\n else:\n args, _ = self.core_parser.parse_known_args()\n if args.number:\n response = search[0:args.number]\n else:\n response = search.scan()\n\n return [hit for hit in response]\n except NotFoundError:\n print_error(\"The index was not found, have you initialized the index?\")\n return []\n except (ConnectionError, TransportError):\n print_error(\"Cannot connect to elasticsearch\")\n return []"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nusing the command line arguments to fill the search function and call it.", "response": "def argument_search(self):\n \"\"\"\n Uses the command line arguments to fill the search function and call it.\n \"\"\"\n arguments, _ = self.argparser.parse_known_args()\n return self.search(**vars(arguments))"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef count(self, *args, **kwargs):\n search = self.create_search(*args, **kwargs)\n try:\n return search.count()\n except NotFoundError:\n print_error(\"The index was not found, have you initialized the index?\")\n except (ConnectionError, TransportError):\n print_error(\"Cannot connect to elasticsearch\")", "response": "Returns the number of results after filtering with the given arguments."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef argument_count(self):\n arguments, _ = self.argparser.parse_known_args()\n return self.count(**vars(arguments))", "response": "Returns the number of arguments for the current locale."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn a generator that maps the input of the pipe to an elasticsearch object.", "response": "def get_pipe(self, object_type):\n 
\"\"\"\n Returns a generator that maps the input of the pipe to an elasticsearch object.\n Will call id_to_object if it cannot serialize the data from json.\n \"\"\"\n for line in sys.stdin:\n try:\n data = json.loads(line.strip())\n obj = object_type(**data)\n yield obj\n except ValueError:\n yield self.id_to_object(line.strip())"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef id_to_object(self, line):\n result = Range.get(line, ignore=404)\n if not result:\n result = Range(range=line)\n result.save()\n return result", "response": "Resolves an ip address to a range object, creating it if it doesn't exist."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning an Argparser instance with search functionality specific for ranges.", "response": "def argparser(self):\n \"\"\"\n Argparser option with search functionality specific for ranges.\n \"\"\"\n core_parser = self.core_parser\n core_parser.add_argument('-r', '--range', type=str, help=\"The range to search for use\")\n return core_parser"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nconvert an object to an object ID.", "response": "def object_to_id(self, obj):\n \"\"\"\n Searches elasticsearch for objects with the same address, protocol, port and state.\n \"\"\"\n search = Service.search()\n search = search.filter(\"term\", address=obj.address)\n search = search.filter(\"term\", protocol=obj.protocol)\n search = search.filter(\"term\", port=obj.port)\n search = search.filter(\"term\", state=obj.state)\n if search.count():\n result = search[0].execute()[0]\n return result.meta.id\n else:\n return None"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nresolves the given id to a user object.", "response": "def id_to_object(self, line):\n \"\"\"\n Resolves the given id to a user object, if it doesn't exist it will be created.\n \"\"\"\n user = User.get(line, 
ignore=404)\n if not user:\n user = User(username=line)\n user.save()\n return user"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_users(self, *args, **kwargs):\n arguments, _ = self.argparser.parse_known_args()\n if self.is_pipe and self.use_pipe:\n return self.get_pipe(self.object_type)\n elif arguments.tags or arguments.group or arguments.search or arguments.domain:\n return self.argument_search()\n else:\n return self.search(*args, **kwargs)", "response": "Retrieves the users from elastic."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nretrieving the domains of the users from elastic.", "response": "def get_domains(self):\n \"\"\"\n Retrieves the domains of the users from elastic.\n \"\"\"\n search = User.search()\n search.aggs.bucket('domains', 'terms', field='domain', order={'_count': 'desc'}, size=100)\n response = search.execute()\n return [entry.key for entry in response.aggregations.domains.buckets]"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef find_object(self, username, secret, domain=None, host_ip=None, service_id=None):\n # Not sure yet if this is advisable... 
Older passwords can be overwritten...\n search = Credential.search()\n search = search.filter(\"term\", username=username)\n search = search.filter(\"term\", secret=secret)\n if domain:\n search = search.filter(\"term\", domain=domain)\n else:\n search = search.exclude(\"exists\", field=\"domain\")\n if host_ip:\n search = search.filter(\"term\", host_ip=host_ip)\n else:\n search = search.exclude(\"exists\", field=\"host_ip\")\n if service_id:\n search = search.filter(\"term\", service_id=service_id)\n else:\n search = search.exclude(\"exists\", field=\"service_id\")\n if search.count():\n result = search[0].execute()[0]\n return result\n else:\n return None", "response": "Searches elasticsearch for objects with the same username password optional domain host_ip and service_id."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef object_to_id(self, obj):\n # Not sure yet if this is advisable... Older passwords can be overwritten...\n search = Credential.search()\n search = search.filter(\"term\", username=obj.username)\n search = search.filter(\"term\", secret=obj.secret)\n if obj.domain:\n search = search.filter(\"term\", domain=obj.domain)\n else:\n search = search.exclude(\"exists\", field=\"domain\")\n if obj.host_ip:\n search = search.filter(\"term\", host_ip=obj.host_ip)\n else:\n search = search.exclude(\"exists\", field=\"host_ip\")\n if obj.service_id:\n search = search.filter(\"term\", service_id=obj.service_id)\n else:\n search = search.exclude(\"exists\", field=\"service_id\")\n if search.count():\n result = search[0].execute()[0]\n return result.meta.id\n else:\n return None", "response": "Converts an object to an object ID."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_credentials(self, *args, **kwargs):\n arguments, _ = self.argparser.parse_known_args()\n if self.is_pipe and self.use_pipe:\n return self.get_pipe(self.object_type)\n elif 
arguments.tags or arguments.type or arguments.search or arguments.password or arguments.cracked or arguments.range or arguments.domain:\n return self.argument_search()\n else:\n return self.search(*args, **kwargs)", "response": "Retrieves the users from elastic."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn a list that maps the input of the pipe to an elasticsearch object.", "response": "def get_pipe(self):\n \"\"\"\n Returns a list that maps the input of the pipe to an elasticsearch object.\n Will call id_to_object if it cannot serialize the data from json.\n \"\"\"\n lines = []\n for line in sys.stdin:\n try:\n lines.append(self.line_to_object(line.strip()))\n except ValueError:\n pass\n except KeyError:\n pass\n return lines"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nconsumes state. Command commands and converts them to an ET protocol tree.", "response": "def commands2tree(self, adapter, session, commands):\n '''Consumes state.Command commands and converts them to an ET protocol tree'''\n\n # todo: trap errors...\n\n hdrcmd = commands[0]\n commands = commands[1:]\n\n if hdrcmd.name != constants.CMD_SYNCHDR:\n raise common.InternalError('unexpected first command \"%s\" (expected \"%s\")'\n % (hdrcmd.name, constants.CMD_SYNCHDR))\n\n if hdrcmd.version != constants.SYNCML_VERSION_1_2:\n raise common.FeatureNotSupported('unsupported SyncML version \"%s\"' % (hdrcmd.version,))\n\n xsync = ET.Element(constants.NODE_SYNCML)\n xhdr = ET.SubElement(xsync, hdrcmd.name)\n if hdrcmd.version == constants.SYNCML_VERSION_1_2:\n ET.SubElement(xhdr, 'VerDTD').text = constants.SYNCML_DTD_VERSION_1_2\n ET.SubElement(xhdr, 'VerProto').text = hdrcmd.version\n\n ET.SubElement(xhdr, 'SessionID').text = hdrcmd.sessionID\n ET.SubElement(xhdr, 'MsgID').text = hdrcmd.msgID\n xsrc = ET.SubElement(xhdr, 'Source')\n ET.SubElement(xsrc, 'LocURI').text = hdrcmd.source\n if hdrcmd.sourceName is not 
None:\n ET.SubElement(xsrc, 'LocName').text = hdrcmd.sourceName\n xtgt = ET.SubElement(xhdr, 'Target')\n ET.SubElement(xtgt, 'LocURI').text = hdrcmd.target\n if hdrcmd.targetName is not None:\n ET.SubElement(xtgt, 'LocName').text = hdrcmd.targetName\n if hdrcmd.respUri is not None:\n ET.SubElement(xhdr, 'RespURI').text = hdrcmd.respUri\n\n if hdrcmd.auth is not None and not session.authAccepted:\n if hdrcmd.auth != constants.NAMESPACE_AUTH_BASIC:\n raise NotImplementedError('auth method \"%s\"' % (common.auth2string(hdrcmd.auth),))\n if hdrcmd.auth == constants.NAMESPACE_AUTH_BASIC:\n xcred = ET.SubElement(xhdr, 'Cred')\n xmeta = ET.SubElement(xcred, 'Meta')\n ET.SubElement(xmeta, 'Format', {'xmlns': constants.NAMESPACE_METINF}).text = 'b64'\n ET.SubElement(xmeta, 'Type', {'xmlns': constants.NAMESPACE_METINF}).text = hdrcmd.auth\n ET.SubElement(xcred, 'Data').text = base64.b64encode(\n '%s:%s' % (adapter.peer.username, adapter.peer.password))\n if hdrcmd.maxMsgSize is not None or hdrcmd.maxObjSize is not None:\n xmeta = ET.SubElement(xhdr, 'Meta')\n if hdrcmd.maxMsgSize is not None:\n ET.SubElement(xmeta, 'MaxMsgSize', {'xmlns': constants.NAMESPACE_METINF}).text = hdrcmd.maxMsgSize\n if hdrcmd.maxObjSize is not None:\n ET.SubElement(xmeta, 'MaxObjSize', {'xmlns': constants.NAMESPACE_METINF}).text = hdrcmd.maxObjSize\n\n xbody = ET.SubElement(xsync, constants.NODE_SYNCBODY)\n\n for cmdidx, cmd in enumerate(commands):\n\n xcmd = ET.SubElement(xbody, cmd.name)\n if cmd.cmdID is not None:\n ET.SubElement(xcmd, 'CmdID').text = cmd.cmdID\n\n if cmd.name == constants.CMD_ALERT:\n ET.SubElement(xcmd, 'Data').text = str(cmd.data)\n xitem = ET.SubElement(xcmd, 'Item')\n ET.SubElement(ET.SubElement(xitem, 'Source'), 'LocURI').text = cmd.source\n ET.SubElement(ET.SubElement(xitem, 'Target'), 'LocURI').text = cmd.target\n if cmd.lastAnchor is not None \\\n or cmd.nextAnchor is not None \\\n or cmd.maxObjSize is not None:\n xmeta = ET.SubElement(xitem, 'Meta')\n xanch = 
ET.SubElement(xmeta, 'Anchor', {'xmlns': constants.NAMESPACE_METINF})\n if cmd.lastAnchor is not None:\n ET.SubElement(xanch, 'Last').text = cmd.lastAnchor\n if cmd.nextAnchor is not None:\n ET.SubElement(xanch, 'Next').text = cmd.nextAnchor\n if cmd.maxObjSize is not None:\n ET.SubElement(xmeta, 'MaxObjSize', {'xmlns': constants.NAMESPACE_METINF}).text = cmd.maxObjSize\n continue\n\n if cmd.name == constants.CMD_STATUS:\n ET.SubElement(xcmd, 'MsgRef').text = cmd.msgRef\n ET.SubElement(xcmd, 'CmdRef').text = cmd.cmdRef\n ET.SubElement(xcmd, 'Cmd').text = cmd.statusOf\n if cmd.sourceRef is not None:\n ET.SubElement(xcmd, 'SourceRef').text = cmd.sourceRef\n if cmd.targetRef is not None:\n ET.SubElement(xcmd, 'TargetRef').text = cmd.targetRef\n ET.SubElement(xcmd, 'Data').text = cmd.statusCode\n if cmd.nextAnchor is not None or cmd.lastAnchor is not None:\n xdata = ET.SubElement(ET.SubElement(xcmd, 'Item'), 'Data')\n xanch = ET.SubElement(xdata, 'Anchor', {'xmlns': constants.NAMESPACE_METINF})\n if cmd.lastAnchor is not None:\n ET.SubElement(xanch, 'Last').text = cmd.lastAnchor\n if cmd.nextAnchor is not None:\n ET.SubElement(xanch, 'Next').text = cmd.nextAnchor\n # NOTE: this is NOT standard SyncML...\n if cmd.errorCode is not None or cmd.errorMsg is not None:\n xerr = ET.SubElement(xcmd, 'Error')\n if cmd.errorCode is not None:\n ET.SubElement(xerr, 'Code').text = cmd.errorCode\n if cmd.errorMsg is not None:\n ET.SubElement(xerr, 'Message').text = cmd.errorMsg\n if cmd.errorTrace is not None:\n ET.SubElement(xerr, 'Trace').text = cmd.errorTrace\n continue\n\n if cmd.name in [constants.CMD_GET, constants.CMD_PUT]:\n ET.SubElement(ET.SubElement(xcmd, 'Meta'), 'Type',\n {'xmlns': constants.NAMESPACE_METINF}).text = cmd.type\n if cmd.source is not None or cmd.target is not None or cmd.data:\n xitem = ET.SubElement(xcmd, 'Item')\n if cmd.source is not None:\n xsrc = ET.SubElement(xitem, 'Source')\n ET.SubElement(xsrc, 'LocURI').text = cmd.source\n ET.SubElement(xsrc, 
'LocName').text = cmd.source\n if cmd.target is not None:\n xtgt = ET.SubElement(xitem, 'Target')\n ET.SubElement(xtgt, 'LocURI').text = cmd.target\n ET.SubElement(xtgt, 'LocName').text = cmd.target\n if cmd.data is not None:\n if isinstance(cmd.data, basestring):\n ET.SubElement(xitem, 'Data').text = cmd.data\n else:\n ET.SubElement(xitem, 'Data').append(cmd.data)\n continue\n\n if cmd.name == constants.CMD_RESULTS:\n ET.SubElement(xcmd, 'MsgRef').text = cmd.msgRef\n ET.SubElement(xcmd, 'CmdRef').text = cmd.cmdRef\n ET.SubElement(ET.SubElement(xcmd, 'Meta'), 'Type',\n {'xmlns': constants.NAMESPACE_METINF}).text = cmd.type\n xitem = ET.SubElement(xcmd, 'Item')\n xsrc = ET.SubElement(xitem, 'Source')\n ET.SubElement(xsrc, 'LocURI').text = cmd.source\n ET.SubElement(xsrc, 'LocName').text = cmd.source\n if cmd.data is not None:\n if isinstance(cmd.data, basestring):\n ET.SubElement(xitem, 'Data').text = cmd.data\n else:\n ET.SubElement(xitem, 'Data').append(cmd.data)\n continue\n\n if cmd.name == constants.CMD_SYNC:\n ET.SubElement(ET.SubElement(xcmd, 'Source'), 'LocURI').text = cmd.source\n ET.SubElement(ET.SubElement(xcmd, 'Target'), 'LocURI').text = cmd.target\n if cmd.noc is not None:\n ET.SubElement(xcmd, 'NumberOfChanges').text = cmd.noc\n if cmd.data is not None:\n for scmd in cmd.data:\n xscmd = ET.SubElement(xcmd, scmd.name)\n if scmd.cmdID is not None:\n ET.SubElement(xscmd, 'CmdID').text = scmd.cmdID\n if scmd.type is not None or \\\n ( scmd.format is not None and scmd.format != constants.FORMAT_AUTO ):\n xsmeta = ET.SubElement(xscmd, 'Meta')\n # todo: implement auto encoding determination...\n # (the current implementation just lets XML encoding do it,\n # which is for most things good enough, but not so good\n # for sequences that need a large amount escaping such as\n # binary data...)\n if scmd.format is not None and scmd.format != constants.FORMAT_AUTO:\n ET.SubElement(xsmeta, 'Format', {'xmlns': constants.NAMESPACE_METINF}).text = scmd.format\n if 
scmd.type is not None:\n ET.SubElement(xsmeta, 'Type', {'xmlns': constants.NAMESPACE_METINF}).text = scmd.type\n xsitem = ET.SubElement(xscmd, 'Item')\n if scmd.source is not None:\n ET.SubElement(ET.SubElement(xsitem, 'Source'), 'LocURI').text = scmd.source\n if scmd.sourceParent is not None:\n ET.SubElement(ET.SubElement(xsitem, 'SourceParent'), 'LocURI').text = scmd.sourceParent\n if scmd.target is not None:\n ET.SubElement(ET.SubElement(xsitem, 'Target'), 'LocURI').text = scmd.target\n if scmd.targetParent is not None:\n ET.SubElement(ET.SubElement(xsitem, 'TargetParent'), 'LocURI').text = scmd.targetParent\n if scmd.data is not None:\n if isinstance(scmd.data, basestring):\n ET.SubElement(xsitem, 'Data').text = scmd.data\n else:\n ET.SubElement(xsitem, 'Data').append(scmd.data)\n continue\n\n if cmd.name == constants.CMD_MAP:\n ET.SubElement(ET.SubElement(xcmd, 'Source'), 'LocURI').text = cmd.source\n ET.SubElement(ET.SubElement(xcmd, 'Target'), 'LocURI').text = cmd.target\n if cmd.sourceItem is not None or cmd.targetItem is not None:\n xitem = ET.SubElement(xcmd, constants.CMD_MAPITEM)\n if cmd.sourceItem is not None:\n ET.SubElement(ET.SubElement(xitem, 'Source'), 'LocURI').text = cmd.sourceItem\n if cmd.targetItem is not None:\n ET.SubElement(ET.SubElement(xitem, 'Target'), 'LocURI').text = cmd.targetItem\n continue\n\n if cmd.name == constants.CMD_FINAL:\n if cmdidx + 1 < len(commands):\n raise common.InternalError('command \"%s\" not at tail end of commands' % (cmd.name,))\n continue\n\n raise common.InternalError('unexpected command \"%s\"' % (cmd.name,))\n\n return xsync"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nconsume an ET protocol tree and converts it to state. 
Command commands", "response": "def tree2commands(self, adapter, session, lastcmds, xsync):\n '''Consumes an ET protocol tree and converts it to state.Command commands'''\n\n # do some preliminary sanity checks...\n # todo: do i really want to be using assert statements?...\n\n assert xsync.tag == constants.NODE_SYNCML\n assert len(xsync) == 2\n assert xsync[0].tag == constants.CMD_SYNCHDR\n assert xsync[1].tag == constants.NODE_SYNCBODY\n\n version = xsync[0].findtext('VerProto')\n if version != constants.SYNCML_VERSION_1_2:\n raise common.FeatureNotSupported('unsupported SyncML version \"%s\" (expected \"%s\")' \\\n % (version, constants.SYNCML_VERSION_1_2))\n verdtd = xsync[0].findtext('VerDTD')\n if verdtd != constants.SYNCML_DTD_VERSION_1_2:\n raise common.FeatureNotSupported('unsupported SyncML DTD version \"%s\" (expected \"%s\")' \\\n % (verdtd, constants.SYNCML_DTD_VERSION_1_2))\n\n ret = self.initialize(adapter, session, xsync)\n hdrcmd = ret[0]\n\n if session.isServer:\n log.debug('received request SyncML message from \"%s\" (s%s.m%s)',\n hdrcmd.target, hdrcmd.sessionID, hdrcmd.msgID)\n else:\n log.debug('received response SyncML message from \"%s\" (s%s.m%s)',\n lastcmds[0].target, lastcmds[0].sessionID, lastcmds[0].msgID)\n\n try:\n return self._tree2commands(adapter, session, lastcmds, xsync, ret)\n except Exception, e:\n if not session.isServer:\n raise\n # TODO: make this configurable as to whether or not any error\n # is sent back to the peer as a SyncML \"standardized\" error\n # status...\n code = '%s.%s' % (e.__class__.__module__, e.__class__.__name__)\n msg = ''.join(traceback.format_exception_only(type(e), e)).strip()\n log.exception('failed while interpreting command tree: %s', msg)\n # TODO: for some reason, the active exception is not being logged...\n return [\n hdrcmd,\n state.Command(\n name = constants.CMD_STATUS,\n cmdID = '1',\n msgRef = session.pendingMsgID,\n cmdRef = 0,\n sourceRef = xsync[0].findtext('Source/LocURI'),\n targetRef 
= xsync[0].findtext('Target/LocURI'),\n statusOf = constants.CMD_SYNCHDR,\n statusCode = constants.STATUS_COMMAND_FAILED,\n errorCode = code,\n errorMsg = msg,\n errorTrace = ''.join(traceback.format_exception(type(e), e, sys.exc_info()[2])),\n ),\n state.Command(name=constants.CMD_FINAL)]"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ndump the object into a string representation.", "response": "def dumps(self, contentType=None, version=None):\n '''\n [OPTIONAL] Identical to :meth:`dump`, except the serialized form\n is returned as a string representation. As documented in\n :meth:`dump`, the return value can optionally be a three-element\n tuple of (contentType, version, data) if the provided content-type\n should be overridden or enhanced. The default implementation just\n wraps :meth:`dump`.\n '''\n buf = six.StringIO()\n ret = self.dump(buf, contentType, version)\n if ret is None:\n return buf.getvalue()\n return (ret[0], ret[1], buf.getvalue())"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef dumpsItem(self, item, contentType=None, version=None):\n '''\n [OPTIONAL] Identical to :meth:`dump`, except the serialized form\n is returned as a string representation. As documented in\n :meth:`dump`, the return value can optionally be a three-element\n tuple of (contentType, version, data) if the provided content-type\n should be overridden or enhanced. 
The default implementation just\n wraps :meth:`dump`.\n '''\n buf = six.StringIO()\n ret = self.dumpItem(item, buf, contentType, version)\n if ret is None:\n return buf.getvalue()\n return (ret[0], ret[1], buf.getvalue())", "response": "Dumps an item into a string or a three-element tuple."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nfind the specified item and return an item that matches it.", "response": "def matchItem(self, item):\n '''\n [OPTIONAL] Attempts to find the specified item and returns an item\n that describes the same object although its specific properties\n may be different. For example, a contact whose name is an\n identical match, but whose telephone number has changed would\n return the matched item. ``None`` should be returned if no match\n is found, otherwise the item that `item` matched should be\n returned.\n\n This is used primarily when a slow-sync is invoked and objects\n that exist in both peers should not be replicated.\n\n Note that **NO** merging of the items' properties should be done;\n that will be initiated via a separate call to :meth:`mergeItems`.\n\n This method by default will iterate over all items (by calling\n :meth:`getAllItems`) and compare them using ``cmp()``. This means\n that if the items managed by this agent implement the ``__eq__``\n or ``__cmp__`` methods, then matching items will be detected and\n returned. 
Otherwise, any items that exist in both peers will be\n duplicated on slow-sync.\n\n Sub-classes *should* implement a more efficient method of finding\n matching items.\n\n See :doc:`../merging` for details.\n '''\n for match in self.getAllItems():\n if cmp(match, item) == 0:\n return match\n return None"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef initialize_indices():\n Host.init()\n Range.init()\n Service.init()\n User.init()\n Credential.init()\n Log.init()", "response": "Initializes the indices of the host, range, service, user, credential and log objects."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns an ElementTree node representing this ContentTypeInfo.", "response": "def toSyncML(self, nodeName=None, uniqueVerCt=False):\n '''\n Returns an ElementTree node representing this ContentTypeInfo. If\n `nodeName` is not None, then it will be used as the containing\n element node name (this is useful, for example, to differentiate\n between a standard content-type and a preferred content-type). 
If\n `uniqueVerCt` is True, then an array of elements will be returned\n instead of a single element with multiple VerCT elements (for\n content-types that support multiple versions).\n '''\n if uniqueVerCt:\n ret = []\n for v in self.versions:\n tmp = ET.Element(nodeName or 'ContentType')\n ET.SubElement(tmp, 'CTType').text = self.ctype\n ET.SubElement(tmp, 'VerCT').text = v\n ret.append(tmp)\n return ret\n ret = ET.Element(nodeName or 'ContentType')\n ET.SubElement(ret, 'CTType').text = self.ctype\n for v in self.versions:\n ET.SubElement(ret, 'VerCT').text = v\n return ret"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nparse the entry into a Computer object.", "response": "def parse_single_computer(entry):\n \"\"\"\n Parse the entry into a computer object.\n \"\"\"\n computer = Computer(dns_hostname=get_field(entry, 'dNSHostName'), description=get_field(\n entry, 'description'), os=get_field(entry, 'operatingSystem'), group_id=get_field(entry, 'primaryGroupID'))\n try:\n ip = str(ipaddress.ip_address(get_field(entry, 'IPv4')))\n except ValueError:\n ip = ''\n\n if ip:\n computer.ip = ip\n elif computer.dns_hostname:\n computer.ip = resolve_ip(computer.dns_hostname)\n return computer"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nparse the file and extract the computers that resolve into jackal.", "response": "def parse_domain_computers(filename):\n \"\"\"\n Parse the file and extract the computers, import the computers that resolve into jackal.\n \"\"\"\n with open(filename) as f:\n data = json.loads(f.read())\n hs = HostSearch()\n count = 0\n entry_count = 0\n print_notification(\"Parsing {} entries\".format(len(data)))\n for system in data:\n entry_count += 1\n parsed = parse_single_computer(system)\n if parsed.ip:\n try:\n host = hs.id_to_object(parsed.ip)\n host.description.append(parsed.description)\n host.hostname.append(parsed.dns_hostname)\n if parsed.os:\n host.os = parsed.os\n 
host.domain_controller = parsed.dc\n host.add_tag('domaindump')\n host.save()\n count += 1\n except ValueError:\n pass\n sys.stdout.write('\\r')\n sys.stdout.write(\n \"[{}/{}] {} resolved\".format(entry_count, len(data), count))\n sys.stdout.flush()\n sys.stdout.write('\\r')\n return count"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef parse_user(entry, domain_groups):\n result = {}\n distinguished_name = get_field(entry, 'distinguishedName')\n result['domain'] = \".\".join(distinguished_name.split(',DC=')[1:])\n result['name'] = get_field(entry, 'name')\n result['username'] = get_field(entry, 'sAMAccountName')\n result['description'] = get_field(entry, 'description')\n result['sid'] = get_field(entry, 'objectSid').split('-')[-1]\n\n primary_group = get_field(entry, 'primaryGroupID')\n member_of = entry['attributes'].get('memberOf', [])\n groups = []\n for member in member_of:\n for e in member.split(','):\n if e.startswith('CN='):\n groups.append(e[3:])\n groups.append(domain_groups.get(primary_group, ''))\n result['groups'] = groups\n\n flags = []\n try:\n uac = int(get_field(entry, 'userAccountControl'))\n\n for flag, value in uac_flags.items():\n if uac & value:\n flags.append(flag)\n except ValueError:\n pass\n result['flags'] = flags\n return result", "response": "Parses a single user from the domaindump entry."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef parse_domain_users(domain_users_file, domain_groups_file):\n with open(domain_users_file) as f:\n users = json.loads(f.read())\n\n domain_groups = {}\n if domain_groups_file:\n with open(domain_groups_file) as f:\n groups = json.loads(f.read())\n for group in groups:\n sid = get_field(group, 'objectSid')\n domain_groups[int(sid.split('-')[-1])] = get_field(group, 'cn')\n\n user_search = UserSearch()\n count = 0\n total = len(users)\n print_notification(\"Importing {} 
users\".format(total))\n for entry in users:\n result = parse_user(entry, domain_groups)\n user = user_search.id_to_object(result['username'])\n user.name = result['name']\n user.domain.append(result['domain'])\n user.description = result['description']\n user.groups.extend(result['groups'])\n user.flags.extend(result['flags'])\n user.sid = result['sid']\n user.add_tag(\"domaindump\")\n user.save()\n count += 1\n sys.stdout.write('\\r')\n sys.stdout.write(\"[{}/{}]\".format(count, total))\n sys.stdout.flush()\n sys.stdout.write('\\r')\n return count", "response": "Parses the domain users and groups files."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nparse ldapdomaindump files and stores hosts and users in elasticsearch.", "response": "def import_domaindump():\n \"\"\"\n Parses ldapdomaindump files and stores hosts and users in elasticsearch.\n \"\"\"\n parser = argparse.ArgumentParser(\n description=\"Imports users, groups and computers result files from the ldapdomaindump tool, will resolve the names from domain_computers output for IPs\")\n parser.add_argument(\"files\", nargs='+',\n help=\"The domaindump files to import\")\n arguments = parser.parse_args()\n domain_users_file = ''\n domain_groups_file = ''\n computer_count = 0\n user_count = 0\n stats = {}\n for filename in arguments.files:\n if filename.endswith('domain_computers.json'):\n print_notification('Parsing domain computers')\n computer_count = parse_domain_computers(filename)\n if computer_count:\n stats['hosts'] = computer_count\n print_success(\"{} hosts imported\".format(computer_count))\n elif filename.endswith('domain_users.json'):\n domain_users_file = filename\n elif filename.endswith('domain_groups.json'):\n domain_groups_file = filename\n if domain_users_file:\n print_notification(\"Parsing domain users\")\n user_count = parse_domain_users(domain_users_file, domain_groups_file)\n if user_count:\n print_success(\"{} users imported\".format(user_count))\n 
stats['users'] = user_count\n Logger().log(\"import_domaindump\", 'Imported domaindump, found {} user, {} systems'.format(user_count, computer_count), stats)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nmake an autocomplete API request This can be used to find cities and/or hurricanes by name :param string query: city :param string country: restrict search to a specific country. Must be a two letter country code :param boolean hurricanes: whether to search for hurricanes or not :param boolean cities: whether to search for cities or not :param integer timeout: timeout of the api request :returns: result of the autocomplete API request :rtype: dict", "response": "def autocomplete(query, country=None, hurricanes=False, cities=True, timeout=5):\n \"\"\"Make an autocomplete API request\n\n This can be used to find cities and/or hurricanes by name\n\n :param string query: city\n :param string country: restrict search to a specific country. Must be a two letter country code\n :param boolean hurricanes: whether to search for hurricanes or not\n :param boolean cities: whether to search for cities or not\n :param integer timeout: timeout of the api request\n :returns: result of the autocomplete API request\n :rtype: dict\n\n \"\"\"\n data = {}\n data['query'] = quote(query)\n data['country'] = country or ''\n data['hurricanes'] = 1 if hurricanes else 0\n data['cities'] = 1 if cities else 0\n data['format'] = 'JSON'\n r = requests.get(AUTOCOMPLETE_URL.format(**data), timeout=timeout)\n results = json.loads(r.content)['RESULTS']\n return results"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nmakes an API request to get a list of features from the NCBI server.", "response": "def request(key, features, query, timeout=5):\n \"\"\"Make an API request\n\n :param string key: API key to use\n :param list features: features to request. 
It must be a subset of :data:`FEATURES`\n :param string query: query to send\n :param integer timeout: timeout of the request\n :returns: result of the API request\n :rtype: dict\n\n \"\"\"\n data = {}\n data['key'] = key\n data['features'] = '/'.join([f for f in features if f in FEATURES])\n data['query'] = quote(query)\n data['format'] = 'json'\n r = requests.get(API_URL.format(**data), timeout=timeout)\n results = json.loads(_unicode(r.content))\n return results"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _unicode(string):\n for encoding in ['utf-8', 'latin1']:\n try:\n result = unicode(string, encoding)\n return result\n except UnicodeDecodeError:\n pass\n result = unicode(string, 'utf-8', 'replace')\n return result", "response": "Try to convert a string to unicode using different encodings"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nhandles HTTP GET requests on an authentication endpoint.", "response": "def http_get_provider(provider,\n request_url, params, token_secret, token_cookie = None):\n '''Handle HTTP GET requests on an authentication endpoint.\n\n Authentication flow begins when ``params`` has a ``login`` key with a value\n of ``start``. 
For instance, ``/auth/twitter?login=start``.\n\n :param str provider: An provider to obtain a user ID from.\n :param str request_url: The authentication endpoint/callback.\n :param dict params: GET parameters from the query string.\n :param str token_secret: An app secret to encode/decode JSON web tokens.\n :param str token_cookie: The current JSON web token, if available.\n :return: A dict containing any of the following possible keys:\n\n ``status``: an HTTP status code the server should sent\n\n ``redirect``: where the client should be directed to continue the flow\n\n ``set_token_cookie``: contains a JSON web token and should be stored by\n the client and passed in the next call.\n\n ``provider_user_id``: the user ID from the login provider\n\n ``provider_user_name``: the user name from the login provider\n '''\n\n if not validate_provider(provider):\n raise InvalidUsage('Provider not supported')\n\n klass = getattr(socialauth.providers, provider.capitalize())\n provider = klass(request_url, params, token_secret, token_cookie)\n if provider.status == 302:\n ret = dict(status = 302, redirect = provider.redirect)\n tc = getattr(provider, 'set_token_cookie', None)\n if tc is not None:\n ret['set_token_cookie'] = tc\n\n return ret\n\n if provider.status == 200 and provider.user_id is not None:\n ret = dict(status = 200, provider_user_id = provider.user_id)\n if provider.user_name is not None:\n ret['provider_user_name'] = provider.user_name\n\n return ret\n\n raise InvalidUsage('Invalid request')"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreads the contents of the Annotation Resource", "response": "def read(self):\n \"\"\" Read the contents of the Annotation Resource\n\n :return: the contents of the resource\n :rtype: str or bytes or flask.response\n \"\"\"\n if not self.__content__:\n self.__retriever__ = self.__resolver__.resolve(self.uri)\n self.__content__, self.__mimetype__ = 
self.__retriever__.read(self.uri)\n return self.__content__"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef build_index_and_mapping(triples):\n ents = bidict()\n rels = bidict()\n ent_id = 0\n rel_id = 0\n\n collected = []\n for t in triples:\n for e in (t.head, t.tail):\n if e not in ents:\n ents[e] = ent_id\n ent_id += 1\n if t.relation not in rels:\n rels[t.relation] = rel_id\n rel_id += 1\n collected.append(kgedata.TripleIndex(ents[t.head], rels[t.relation], ents[t.tail]))\n\n return collected, ents, rels", "response": "index all triples into indexes and return their mappings"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef recover_triples_from_mapping(indexes, ents: bidict, rels: bidict):\n triples = []\n for t in indexes:\n triples.append(kgedata.Triple(ents.inverse[t.head], rels.inverse[t.relation], ents.inverse[t.tail]))\n return triples", "response": "recover triples from mapping."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef split_golden_set(triples, valid_ratio, test_ratio):\n assert valid_ratio >= 0.0\n assert test_ratio >= 0.0\n num_valid = int(len(triples) * valid_ratio)\n num_test = int(len(triples) * test_ratio)\n valid_set = triples[:num_valid]\n test_set = triples[num_valid:num_valid+num_test]\n train_set = triples[num_valid+num_test:]\n assert len(valid_set) + len(test_set) + len(train_set) == len(triples)\n\n return train_set, valid_set, test_set", "response": "Split the data into train valid and test sets."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _transform_triple_numpy(x):\n return np.array([x.head, x.relation, x.tail], dtype=np.int64)", "response": "Transform triple index into a 1 - D numpy array."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 
3 function does\ndef pack_triples_numpy(triples):\n if len(triples) == 0:\n return np.empty((0, 3), dtype=np.int64)\n return np.stack(list(map(_transform_triple_numpy, triples)), axis=0)", "response": "Packs a list of triple indexes into a 2D numpy array."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nremoves near-duplicate relations from triples, keeping only one relation of each near-duplicate set.", "response": "def remove_near_duplicate_relation(triples, threshold=0.97):\n \"\"\"If the entity pairs of a relation are as close as those of another relation, only keep one relation of such a set.\"\"\"\n logging.debug(\"remove duplicate\")\n\n _assert_threshold(threshold)\n\n duplicate_rel_counter = defaultdict(list)\n relations = set()\n for t in triples:\n duplicate_rel_counter[t.relation].append(f\"{t.head} {t.tail}\")\n relations.add(t.relation)\n relations = list(relations)\n\n num_triples = len(triples)\n removal_relation_set = set()\n\n for rel, values in duplicate_rel_counter.items():\n duplicate_rel_counter[rel] = Superminhash(values)\n for i in relations:\n for j in relations:\n if i == j or i in removal_relation_set or j in removal_relation_set: continue\n close_relations = [i]\n if _set_close_to(duplicate_rel_counter[i], duplicate_rel_counter[j], threshold):\n close_relations.append(j)\n if len(close_relations) > 1:\n close_relations.pop(np.random.randint(len(close_relations)))\n removal_relation_set |= set(close_relations)\n logging.info(\"Removing {} relations: {}\".format(len(removal_relation_set), str(removal_relation_set)))\n\n return list(filterfalse(lambda x: x.relation in removal_relation_set, triples))"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nremove all direct links in the training set.", "response": "def remove_direct_link_triples(train, valid, test):\n \"\"\"Remove direct links in the training sets.\"\"\"\n pairs = set()\n merged = valid + test\n for t in merged:\n pairs.add((t.head, t.tail))\n\n 
filtered = filterfalse(lambda t: (t.head, t.tail) in pairs or (t.tail, t.head) in pairs, train)\n return list(filtered)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nusing a union find to find segment.", "response": "def shrink_indexes_in_place(self, triples):\n \"\"\"Uses a union find to find segment.\"\"\"\n\n _ent_roots = self.UnionFind(self._ent_id)\n _rel_roots = self.UnionFind(self._rel_id)\n\n for t in triples:\n _ent_roots.add(t.head)\n _ent_roots.add(t.tail)\n _rel_roots.add(t.relation)\n\n for i, t in enumerate(triples):\n h = _ent_roots.find(t.head)\n r = _rel_roots.find(t.relation)\n t = _ent_roots.find(t.tail)\n triples[i] = kgedata.TripleIndex(h, r, t)\n\n ents = bidict()\n available_ent_idx = 0\n for previous_idx, ent_exist in enumerate(_ent_roots.roots()):\n if not ent_exist:\n self._ents.inverse.pop(previous_idx)\n else:\n ents[self._ents.inverse[previous_idx]] = available_ent_idx\n available_ent_idx += 1\n rels = bidict()\n available_rel_idx = 0\n for previous_idx, rel_exist in enumerate(_rel_roots.roots()):\n if not rel_exist:\n self._rels.inverse.pop(previous_idx)\n else:\n rels[self._rels.inverse[previous_idx]] = available_rel_idx\n available_rel_idx += 1\n self._ents = ents\n self._rels = rels\n self._ent_id = available_ent_idx\n self._rel_id = available_rel_idx"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef freeze(self):\n data = super(IndexBuilder, self).freeze()\n try:\n # Sphinx >= 1.5 format\n # Due to changes from github.com/sphinx-doc/sphinx/pull/2454\n base_file_names = data['docnames']\n except KeyError:\n # Sphinx < 1.5 format\n base_file_names = data['filenames']\n\n store = {}\n c = itertools.count()\n for prefix, items in iteritems(data['objects']):\n for name, (index, typeindex, _, shortanchor) in iteritems(items):\n objtype = data['objtypes'][typeindex]\n if objtype.startswith('cpp:'):\n split = name.rsplit('::', 1)\n if len(split) != 
2:\n warnings.warn(\"What's up with %s?\" % str((prefix, name, objtype)))\n continue\n prefix, name = split\n last_prefix = prefix.split('::')[-1]\n else:\n last_prefix = prefix.split('.')[-1]\n\n store[next(c)] = {\n 'filename': base_file_names[index],\n 'objtype': objtype,\n 'prefix': prefix,\n 'last_prefix': last_prefix,\n 'name': name,\n 'shortanchor': shortanchor,\n }\n\n data.update({'store': store})\n return data", "response": "Create a usable data structure for serializing."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nlog an entity creation", "response": "def log_entity_creation(entity, params=None):\n \"\"\"Logs an entity creation\n \"\"\"\n p = {'entity': entity}\n if params:\n p['params'] = params\n _log(TYPE_CODES.CREATE, p)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nlogs an entity deletion", "response": "def log_entity_deletion(entity, params=None):\n \"\"\"Logs an entity deletion\n \"\"\"\n p = {'entity': entity}\n if params:\n p['params'] = params\n _log(TYPE_CODES.DELETE, p)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef log_operation(entities, operation_name, params=None):\n if isinstance(entities, (list, tuple)):\n entities = list(entities)\n else:\n entities = [entities]\n\n p = {'name': operation_name, 'on': entities}\n if params:\n p['params'] = params\n _log(TYPE_CODES.OPERATION, p)", "response": "Logs an operation done on one or more entities, possibly with parameters"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nlogs a new state of an entity", "response": "def log_state(entity, state):\n \"\"\"Logs a new state of an entity\n \"\"\"\n p = {'on': entity, 'state': state}\n _log(TYPE_CODES.STATE, p)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nlogs an update done on an entity", "response": "def log_update(entity, 
update):\n \"\"\"Logs an update done on an entity\n \"\"\"\n p = {'on': entity, 'update': update}\n _log(TYPE_CODES.UPDATE, p)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nlog an error and result", "response": "def log_error(error, result):\n \"\"\"Logs an error\n \"\"\"\n p = {'error': error, 'result':result}\n _log(TYPE_CODES.ERROR, p)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef dict_cursor(func):\n\n @wraps(func)\n def wrapper(cls, *args, **kwargs):\n with (yield from cls.get_cursor(_CursorType.DICT)) as c:\n return (yield from func(cls, c, *args, **kwargs))\n\n return wrapper", "response": "Decorator that provides a dictionary cursor to the calling function\nWorkItem Adds the cursor as the second argument to the calling function\nWorkItem Returns a generator that yields a dictionary cursor"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nprovide a transacted cursor which will run in autocommit=false mode For any exception the transaction will be rolled back. 
Requires that the function being decorated is an instance of a class or object that yields a cursor from a get_cursor(cursor_type=CursorType.NAMEDTUPLE) coroutine or provides such an object as the first argument in its signature Yields: A client-side transacted named cursor", "response": "def transaction(func):\n \"\"\"\n Provides a transacted cursor which will run in autocommit=false mode\n\n For any exception the transaction will be rolled back.\n Requires that the function being decorated is an instance of a class or object\n that yields a cursor from a get_cursor(cursor_type=CursorType.NAMEDTUPLE) coroutine or provides such an object\n as the first argument in its signature\n\n Yields:\n A client-side transacted named cursor\n \"\"\"\n\n @wraps(func)\n def wrapper(cls, *args, **kwargs):\n with (yield from cls.get_cursor(_CursorType.NAMEDTUPLE)) as c:\n try:\n yield from c.execute('BEGIN')\n result = (yield from func(cls, c, *args, **kwargs))\n except Exception:\n yield from c.execute('ROLLBACK')\n else:\n yield from c.execute('COMMIT')\n return result\n\n return wrapper"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef connect(cls, database: str, user: str, password: str, host: str, port: int, *, use_pool: bool=True,\n enable_ssl: bool=False, minsize=1, maxsize=50, keepalives_idle=5, keepalives_interval=4, echo=False,\n **kwargs):\n \"\"\"\n Sets connection parameters\n For more information on the parameters that is accepts,\n see : http://www.postgresql.org/docs/9.2/static/libpq-connect.html\n \"\"\"\n cls._connection_params['database'] = database\n cls._connection_params['user'] = user\n cls._connection_params['password'] = password\n cls._connection_params['host'] = host\n cls._connection_params['port'] = port\n cls._connection_params['sslmode'] = 'prefer' if enable_ssl else 'disable'\n cls._connection_params['minsize'] = minsize\n cls._connection_params['maxsize'] = maxsize\n 
cls._connection_params['keepalives_idle'] = keepalives_idle\n cls._connection_params['keepalives_interval'] = keepalives_interval\n cls._connection_params['echo'] = echo\n cls._connection_params.update(kwargs)\n cls._use_pool = use_pool", "response": "Sets the PostgreSQL connection parameters."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_pool(cls) -> Pool:\n if len(cls._connection_params) < 5:\n raise ConnectionError('Please call SQLStore.connect before calling this method')\n if not cls._pool:\n cls._pool = yield from create_pool(**cls._connection_params)\n return cls._pool", "response": "Returns the connection pool, creating it on first use."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_cursor(cls, cursor_type=_CursorType.PLAIN) -> Cursor:\n _cur = None\n if cls._use_pool:\n _connection_source = yield from cls.get_pool()\n else:\n _connection_source = yield from aiopg.connect(echo=False, **cls._connection_params)\n\n if cursor_type == _CursorType.PLAIN:\n _cur = yield from _connection_source.cursor()\n if cursor_type == _CursorType.NAMEDTUPLE:\n _cur = yield from _connection_source.cursor(cursor_factory=psycopg2.extras.NamedTupleCursor)\n if cursor_type == _CursorType.DICT:\n _cur = yield from _connection_source.cursor(cursor_factory=psycopg2.extras.DictCursor)\n\n if not cls._use_pool:\n _cur = cursor_context_manager(_connection_source, _cur)\n\n return _cur", "response": "Yields a client-side cursor from the existing db connection pool"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning the number of records in a table", "response": "def count(cls, cur, table:str, where_keys: list=None):\n \"\"\"\n gives the number of records in the table\n\n Args:\n table: a string indicating the name of the table\n\n Returns:\n an integer indicating the number of records in the table\n\n \"\"\"\n\n if 
where_keys:\n where_clause, values = cls._get_where_clause_with_values(where_keys)\n query = cls._count_query_where.format(table, where_clause)\n q, t = query, values\n else:\n query = cls._count_query.format(table)\n q, t = query, ()\n yield from cur.execute(q, t)\n result = yield from cur.fetchone()\n return int(result[0])"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncreate an insert statement with only chosen fields", "response": "def insert(cls, cur, table: str, values: dict):\n \"\"\"\n Creates an insert statement with only chosen fields\n\n Args:\n table: a string indicating the name of the table\n values: a dict of fields and values to be inserted\n\n Returns:\n A 'Record' object with table columns as properties\n\n \"\"\"\n keys = cls._COMMA.join(values.keys())\n value_place_holder = cls._PLACEHOLDER * len(values)\n query = cls._insert_string.format(table, keys, value_place_holder[:-1])\n yield from cur.execute(query, tuple(values.values()))\n return (yield from cur.fetchone())"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef update(cls, cur, table: str, values: dict, where_keys: list) -> tuple:\n keys = cls._COMMA.join(values.keys())\n value_place_holder = cls._PLACEHOLDER * len(values)\n where_clause, where_values = cls._get_where_clause_with_values(where_keys)\n query = cls._update_string.format(table, keys, value_place_holder[:-1], where_clause)\n yield from cur.execute(query, (tuple(values.values()) + where_values))\n return (yield from cur.fetchall())", "response": "Creates an update query with only chosen fields\n and values to be inserted"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ndeleting the items in the specified table", "response": "def delete(cls, cur, table: str, where_keys: list):\n \"\"\"\n Creates a delete query with where keys\n Supports multiple where clause with and or or both\n\n Args:\n table: a string 
indicating the name of the table\n where_keys: list of dictionary\n example of where keys: [{'name': ('>', 'cip'), 'url': ('=', 'cip.com')}, {'type': ('<=', 'manufacturer')}]\n where_clause will look like ((name>%s and url=%s) or (type <= %s))\n items within each dictionary get 'AND'-ed and dictionaries themselves get 'OR'-ed\n\n Returns:\n an integer indicating count of rows deleted\n\n \"\"\"\n where_clause, values = cls._get_where_clause_with_values(where_keys)\n query = cls._delete_query.format(table, where_clause)\n yield from cur.execute(query, values)\n return cur.rowcount"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef select(cls, cur, table: str, order_by: str, columns: list=None, where_keys: list=None, limit=100,\n offset=0):\n \"\"\"\n Creates a select query for selective columns with where keys\n Supports multiple where clauses combined with AND, OR or both\n\n Args:\n table: a string indicating the name of the table\n order_by: a string indicating column name to order the results on\n columns: list of columns to select from\n where_keys: list of dictionary\n limit: the limit on the number of results\n offset: offset on the results\n\n example of where keys: [{'name': ('>', 'cip'), 'url': ('=', 'cip.com')}, {'type': ('<=', 'manufacturer')}]\n where_clause will look like ((name>%s and url=%s) or (type <= %s))\n items within each dictionary get 'AND'-ed and across dictionaries get 'OR'-ed\n\n Returns:\n A list of 'Record' objects with table columns as properties\n\n \"\"\"\n if columns:\n columns_string = cls._COMMA.join(columns)\n if where_keys:\n where_clause, values = cls._get_where_clause_with_values(where_keys)\n query = cls._select_selective_column_with_condition.format(columns_string, table, where_clause,\n order_by, limit, offset)\n q, t = query, values\n else:\n query = cls._select_selective_column.format(columns_string, table, order_by, limit, offset)\n q, t = query, ()\n else:\n if where_keys:\n 
where_clause, values = cls._get_where_clause_with_values(where_keys)\n query = cls._select_all_string_with_condition.format(table, where_clause, order_by, limit, offset)\n q, t = query, values\n else:\n query = cls._select_all_string.format(table, order_by, limit, offset)\n q, t = query, ()\n\n yield from cur.execute(q, t)\n return (yield from cur.fetchall())", "response": "Creates a select query for selective columns with where clauses"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef raw_sql(cls, cur, query: str, values: tuple):\n yield from cur.execute(query, values)\n return (yield from cur.fetchall())", "response": "Run a raw SQL query, returning a list of named tuples"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn a __repr__ method which outputs an ASCII representation of all the object's non-protected, non-callable attributes.", "response": "def make_repr(*args, **kwargs):\n \"\"\"Returns __repr__ method which returns ASCII\n representation of the object with given fields.\n\n Without arguments, ``make_repr`` generates a method\n which outputs all object's non-protected (non-underscored)\n attributes which are not callables.\n\n Accepts ``*args``, which should be names of object's\n attributes to be included in the output::\n\n __repr__ = make_repr('foo', 'bar')\n\n If you want to generate attribute's content on the fly,\n then you should use keyword arguments and pass a callable\n of one argument::\n\n __repr__ = make_repr(foo=lambda obj: obj.blah + 100500)\n\n \"\"\"\n\n def method(self):\n cls_name = self.__class__.__name__\n\n if args:\n field_names = args\n else:\n def undercored(name): return name.startswith('_')\n\n def is_method(name): return callable(getattr(self, name))\n\n def good_name(name):\n return not undercored(name) and not is_method(name)\n\n field_names = filter(good_name, dir(self))\n field_names = sorted(field_names)\n\n # at this stage, we turn field_names into\n # attribute getters\n field_getters = zip(field_names,\n map(attrgetter, field_names))\n\n # now process keyword args, they must\n # contain callables of one argument\n # and callable should return a field's value\n field_getters = chain(\n field_getters,\n kwargs.items())\n\n fields = ((name, format_value(getter(self)))\n for name, getter in field_getters)\n\n # prepare key strings\n fields = ((u'{0}='.format(name), value)\n for name, value in fields)\n\n # join values with their respective keys\n fields = list(starmap(serialize_text, fields))\n\n beginning = u'<{cls_name} '.format(\n cls_name=cls_name,\n )\n result = serialize_list(\n beginning,\n fields)\n\n # append closing bracket\n result += u'>'\n\n if ON_PYTHON2:\n # on python 2.x repr returns bytes, but on python3 - unicode strings\n result = result.encode('utf-8')\n\n return result\n\n return method"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef connect(self, host, port, minsize=5, maxsize=10, loop=None):\n if loop is None:\n loop = asyncio.get_event_loop()\n self._pool = yield from aioredis.create_pool((host, port), minsize=minsize, maxsize=maxsize, loop=loop)", "response": "Set up a connection to a redis server"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef set_key(self, key, value, namespace=None, expire=0):\n with (yield from self._pool) as redis:\n if namespace is not None:\n key = self._get_key(namespace, key)\n yield from redis.set(key, value, expire=expire)", "response": "Set a key in a cache."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ntraverse an element tree rooted at element yielding the nodes matching the query.", "response": "def traverse(element, query, deep=False):\n \"\"\"\n Helper function to traverse an element tree rooted at element, yielding nodes matching the query.\n \"\"\"\n # Grab the next part of the query (it will be chopped from 
the front each iteration).\n part = query[0]\n if not part:\n # If the part is blank, we encountered a //, meaning search all sub-nodes.\n query = query[1:]\n part = query[0]\n deep = True\n # Parse out any predicate (tag[pred]) from this part of the query.\n part, predicate = xpath_re.match(query[0]).groups()\n for c in element._children:\n if part in ('*', c.tagname) and c._match(predicate):\n # A potential matching branch: this child matches the next query part (and predicate).\n if len(query) == 1:\n # If this is the last part of the query, we found a matching element, yield it.\n yield c\n else:\n # Otherwise, check the children of this child against the next query part.\n for e in traverse(c, query[1:]):\n yield e\n if deep:\n # If we're searching all sub-nodes, traverse with the same query, regardless of matching.\n # This basically creates a recursion branch to search EVERYWHERE for anything after //.\n for e in traverse(c, query, deep=True):\n yield e"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef parse_query(query):\n parts = query.split('/')\n norm = []\n for p in parts:\n p = p.strip()\n if p:\n norm.append(p)\n elif '' not in norm:\n norm.append('')\n return norm", "response": "Given a simplified XPath query string returns an array of normalized query parts."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef parse(url_or_path, encoding=None, handler_class=DrillHandler):\n handler = handler_class()\n parser = expat.ParserCreate(encoding)\n parser.buffer_text = 1\n parser.StartElementHandler = handler.start_element\n parser.EndElementHandler = handler.end_element\n parser.CharacterDataHandler = handler.characters\n if isinstance(url_or_path, basestring):\n if '://' in url_or_path[:20]:\n with contextlib.closing(url_lib.urlopen(url_or_path)) as f:\n parser.ParseFile(f)\n elif url_or_path[:100].strip().startswith('<'):\n if 
isinstance(url_or_path, unicode):\n if encoding is None:\n encoding = 'utf-8'\n url_or_path = url_or_path.encode(encoding)\n parser.Parse(url_or_path, True)\n else:\n with open(url_or_path, 'rb') as f:\n parser.ParseFile(f)\n elif PY3 and isinstance(url_or_path, bytes):\n parser.ParseFile(bytes_io(url_or_path))\n else:\n parser.ParseFile(url_or_path)\n return handler.root", "response": "Parses a file - like object or a string containing XML elements."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nwrite an XML representation of this node to the specified file - like object.", "response": "def write(self, writer):\n \"\"\"\n Writes an XML representation of this node (including descendants) to the specified file-like object.\n\n :param writer: An :class:`XmlWriter` instance to write this node to\n \"\"\"\n multiline = bool(self._children)\n newline_start = multiline and not bool(self.data)\n writer.start(self.tagname, self.attrs, newline=newline_start)\n if self.data:\n writer.data(self.data, newline=bool(self._children))\n for c in self._children:\n c.write(writer)\n writer.end(self.tagname, indent=multiline)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef xml(self, **kwargs):\n s = bytes_io()\n writer = XmlWriter(s, **kwargs)\n self.write(writer)\n return s.getvalue()", "response": "Returns an XML representation of this node including descendants."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef append(self, name, attrs=None, data=None):\n elem = self.__class__(name, attrs, data, parent=self, index=len(self._children))\n self._children.append(elem)\n return elem", "response": "Called when the parser detects a start tag while in this node. 
Creates a new element and appends it to the end of the node s children."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef insert(self, before, name, attrs=None, data=None):\n if isinstance(before, self.__class__):\n if before.parent != self:\n raise ValueError('Cannot insert before an element with a different parent.')\n before = before.index\n # Make sure 0 <= before <= len(_children).\n before = min(max(0, before), len(self._children))\n elem = self.__class__(name, attrs, data, parent=self, index=before)\n self._children.insert(before, elem)\n # Re-index all the children.\n for idx, c in enumerate(self._children):\n c.index = idx\n return elem", "response": "Inserts a new element as a child of this element before the specified index or sibling."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _match(self, pred):\n if not pred:\n return True\n # Strip off the [ and ]\n pred = pred[1:-1]\n if pred.startswith('@'):\n # An attribute predicate checks the existence (and optionally value) of an attribute on this tag.\n pred = pred[1:]\n if '=' in pred:\n attr, value = pred.split('=', 1)\n if value[0] in ('\"', \"'\"):\n value = value[1:]\n if value[-1] in ('\"', \"'\"):\n value = value[:-1]\n return self.attrs.get(attr) == value\n else:\n return pred in self.attrs\n elif num_re.match(pred):\n # An index predicate checks whether we are the n-th child of our parent (0-based).\n index = int(pred)\n if index < 0:\n if self.parent:\n # For negative indexes, count from the end of the list.\n return self.index == (len(self.parent._children) + index)\n else:\n # If we're the root node, the only index we could be is 0.\n return index == 0\n else:\n return index == self.index\n else:\n if '=' in pred:\n tag, value = pred.split('=', 1)\n if value[0] in ('\"', \"'\"):\n value = value[1:]\n if value[-1] in ('\"', \"'\"):\n value = value[:-1]\n for c in self._children:\n if 
c.tagname == tag and c.data == value:\n return True\n else:\n # A plain [tag] predicate means we match if we have a child with tagname \"tag\".\n for c in self._children:\n if c.tagname == pred:\n return True\n return False", "response": "Helper function to determine if this node matches the given predicate."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef path(self, include_root=False):\n path = '%s[%d]' % (self.tagname, self.index or 0)\n p = self.parent\n while p is not None:\n if p.parent or include_root:\n path = '%s[%d]/%s' % (p.tagname, p.index or 0, path)\n p = p.parent\n return path", "response": "Returns a canonical path to this element relative to the root node."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef iter(self, name=None):\n for c in self._children:\n if name is None or c.tagname == name:\n yield c\n for gc in c.find(name):\n yield gc", "response": "Iterate over all children of this node with the given tag name."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning the last child of this node.", "response": "def last(self, name=None):\n \"\"\"\n Returns the last child of this node.\n\n :param name: If specified, only consider elements with this tag name\n :rtype: :class:`XmlElement`\n \"\"\"\n for c in self.children(name, reverse=True):\n return c"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nyields all parents of this element.", "response": "def parents(self, name=None):\n \"\"\"\n Yields all parents of this element, back to the root element.\n\n :param name: If specified, only consider elements with this tag name\n \"\"\"\n p = self.parent\n while p is not None:\n if name is None or p.tagname == name:\n yield p\n p = p.parent"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef 
siblings(self, name=None):\n if self.parent and self.index is not None:\n for c in self.parent._children:\n if c.index != self.index and (name is None or name == c.tagname):\n yield c", "response": "Yields all siblings of this node."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning the next sibling of this node.", "response": "def next(self, name=None):\n \"\"\"\n Returns the next sibling of this node.\n\n :param name: If specified, only consider elements with this tag name\n :rtype: :class:`XmlElement`\n \"\"\"\n if self.parent is None or self.index is None:\n return None\n for idx in xrange(self.index + 1, len(self.parent)):\n if name is None or self.parent[idx].tagname == name:\n return self.parent[idx]"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning the previous sibling of this node.", "response": "def prev(self, name=None):\n \"\"\"\n Returns the previous sibling of this node.\n\n :param name: If specified, only consider elements with this tag name\n :rtype: :class:`XmlElement`\n \"\"\"\n if self.parent is None or self.index is None:\n return None\n for idx in xrange(self.index - 1, -1, -1):\n if name is None or self.parent[idx].tagname == name:\n return self.parent[idx]"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nparse the HTML table into a list of dictionaries each of which represents a single observation.", "response": "def get_observations(self):\n \"\"\"\n Parses the HTML table into a list of dictionaries, each of which\n represents a single observation.\n \"\"\"\n if self.empty:\n return []\n rows = list(self.tbody)\n observations = []\n for row_observation, row_details in zip(rows[::2], rows[1::2]):\n data = {}\n cells = OBSERVATION_XPATH(row_observation)\n data['name'] = _clean_cell(cells[0])\n data['date'] = _clean_cell(cells[1])\n data['magnitude'] = _clean_cell(cells[3])\n data['obscode'] = _clean_cell(cells[6])\n cells = 
DETAILS_XPATH(row_details)\n data['comp1'] = _clean_cell(cells[0])\n data['chart'] = _clean_cell(cells[3]).replace('None', '')\n data['comment_code'] = _clean_cell(cells[4])\n data['notes'] = _clean_cell(cells[5])\n observations.append(data)\n return observations"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_cache_key(prefix, *args, **kwargs):\n hash_args_kwargs = hash(tuple(kwargs.items()) + args)\n return '{}_{}'.format(prefix, hash_args_kwargs)", "response": "Calculates cache key based on args and kwargs."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncaches result of function execution into the `self` object (mostly useful in models). Calculate cache key based on `args` and `kwargs` of the function (except `self`).", "response": "def cache_method(func=None, prefix=''):\n \"\"\"\n Cache result of function execution into the `self` object (mostly useful in models).\n Calculate cache key based on `args` and `kwargs` of the function (except `self`).\n \"\"\"\n def decorator(func):\n @wraps(func)\n def wrapper(self, *args, **kwargs):\n cache_key_prefix = prefix or '_cache_{}'.format(func.__name__)\n cache_key = get_cache_key(cache_key_prefix, *args, **kwargs)\n if not hasattr(self, cache_key):\n setattr(self, cache_key, func(self, *args, **kwargs))\n return getattr(self, cache_key)\n return wrapper\n if func is None:\n return decorator\n else:\n return decorator(func)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef cache_func(prefix, method=False):\n def decorator(func):\n @wraps(func)\n def wrapper(*args, **kwargs):\n cache_args = args\n if method:\n cache_args = args[1:]\n cache_key = get_cache_key(prefix, *cache_args, **kwargs)\n cached_value = cache.get(cache_key)\n if cached_value is None:\n cached_value = func(*args, **kwargs)\n cache.set(cache_key, cached_value)\n return 
cached_value\n return wrapper\n return decorator", "response": "Decorator for caching a function into the django cache backend."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nwrap around Django's ORM `get` functionality. Wrap anything that raises ObjectDoesNotExist exception and provide the default value if necessary. `default` by default is None. `default` can be any callable, if it is callable it will be called when ObjectDoesNotExist exception will be raised.", "response": "def get_or_default(func=None, default=None):\n \"\"\"\n Wrapper around Django's ORM `get` functionality.\n Wrap anything that raises ObjectDoesNotExist exception\n and provide the default value if necessary.\n `default` by default is None. `default` can be any callable,\n if it is callable it will be called when ObjectDoesNotExist\n exception will be raised.\n \"\"\"\n def decorator(func):\n @wraps(func)\n def wrapper(*args, **kwargs):\n try:\n return func(*args, **kwargs)\n except ObjectDoesNotExist:\n if callable(default):\n return default()\n else:\n return default\n return wrapper\n if func is None:\n return decorator\n else:\n return decorator(func)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _get_column_nums_from_args(columns):\n nums = []\n for c in columns:\n for p in c.split(','):\n p = p.strip()\n try:\n c = int(p)\n nums.append(c)\n except (TypeError, ValueError):\n start, ignore, end = p.partition('-')\n try:\n start = int(start)\n end = int(end)\n except (TypeError, ValueError):\n raise ValueError(\n 'Did not understand %r, expected digit-digit' % c\n )\n inc = 1 if start < end else -1\n nums.extend(range(start, end + inc, inc))\n # The user will pass us 1-based indexes, but we need to use\n # 0-based indexing with the row.\n return [n - 1 for n in nums]", "response": "Turn column inputs from user into list of simple numbers."} {"SOURCE": "codesearchnet", 
"instruction": "Explain what the following Python 3 code does\ndef _get_printable_columns(columns, row):\n if not columns:\n return row\n\n # Extract the column values, in the order specified.\n return tuple(row[c] for c in columns)", "response": "Return only the part of the row which should be printed."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nwrite a single observation to the output file.", "response": "def writerow(self, observation_data):\n \"\"\"\n Writes a single observation to the output file.\n\n If the ``observation_data`` parameter is a dictionary, it is\n converted to a list to keep a consisted field order (as described\n in format specification). Otherwise it is assumed that the data\n is a raw record ready to be written to file.\n\n :param observation_data: a single observation as a dictionary or list\n \"\"\"\n if isinstance(observation_data, (list, tuple)):\n row = observation_data\n else:\n row = self.dict_to_row(observation_data)\n self.writer.writerow(row)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef dict_to_row(cls, observation_data):\n row = []\n row.append(observation_data['name'])\n row.append(observation_data['date'])\n row.append(observation_data['magnitude'])\n comment_code = observation_data.get('comment_code', 'na')\n if not comment_code:\n comment_code = 'na'\n row.append(comment_code)\n comp1 = observation_data.get('comp1', 'na')\n if not comp1:\n comp1 = 'na'\n row.append(comp1)\n comp2 = observation_data.get('comp2', 'na')\n if not comp2:\n comp2 = 'na'\n row.append(comp2)\n chart = observation_data.get('chart', 'na')\n if not chart:\n chart = 'na'\n row.append(chart)\n notes = observation_data.get('notes', 'na')\n if not notes:\n notes = 'na'\n row.append(notes)\n return row", "response": "Takes a dictionary of observation data and converts it to a list of fields according to AAVSO visual format specification."} {"SOURCE": 
"codesearchnet", "instruction": "How would you code a function in Python 3 to\nconvert a raw input record to a dictionary of observation data.", "response": "def row_to_dict(cls, row):\n \"\"\"\n Converts a raw input record to a dictionary of observation data.\n\n :param cls: current class\n :param row: a single observation as a list or tuple\n \"\"\"\n comment_code = row[3]\n if comment_code.lower() == 'na':\n comment_code = ''\n comp1 = row[4]\n if comp1.lower() == 'na':\n comp1 = ''\n comp2 = row[5]\n if comp2.lower() == 'na':\n comp2 = ''\n chart = row[6]\n if chart.lower() == 'na':\n chart = ''\n notes = row[7]\n if notes.lower() == 'na':\n notes = ''\n return {\n 'name': row[0],\n 'date': row[1],\n 'magnitude': row[2],\n 'comment_code': comment_code,\n 'comp1': comp1,\n 'comp2': comp2,\n 'chart': chart,\n 'notes': notes,\n }"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_default_tag(app):\n '''Get the name of the view function used to prevent having to set the tag\n manually for every endpoint'''\n view_func = get_view_function(app, request.path, request.method)\n if view_func:\n return view_func.__name__", "response": "Get the name of the view function used to prevent having to set the tag\n manually for every endpoint"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_view_function(app, url, method):\n # pylint: disable=too-many-return-statements\n\n adapter = app.create_url_adapter(request)\n\n try:\n match = adapter.match(url, method=method)\n except RequestRedirect as ex:\n # recursively match redirects\n return get_view_function(app, ex.new_url, method)\n except (MethodNotAllowed, NotFound):\n # no match\n return None\n\n try:\n return app.view_functions[match[0]]\n except KeyError:\n # no view is associated with the endpoint\n return None", "response": "Match a url and return the view and arguments\n "} {"SOURCE": "codesearchnet", 
"instruction": "Here you have a function in Python 3, explain what it does\ndef download_observations(observer_code):\n page_number = 1\n observations = []\n while True:\n logger.info('Downloading page %d...', page_number)\n response = requests.get(WEBOBS_RESULTS_URL, params={\n 'obscode': observer_code,\n 'num_results': 200,\n 'obs_types': 'all',\n 'page': page_number,\n })\n logger.debug(response.request.url)\n parser = WebObsResultsParser(response.text)\n observations.extend(parser.get_observations())\n # kinda silly, but there's no need for lxml machinery here\n if '>Next' not in response.text:\n break\n page_number += 1\n return observations", "response": "Downloads all variable star observations by a given observer."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngenerating random filename for uploading file using uuid4 hashes", "response": "def get_random_filename(instance, filename):\n \"\"\"\n Generates random filename for uploading file using uuid4 hashes\n You need to define UPLOADS_ROOT in your django settings\n something like this\n UPLOADS_ROOT = rel(MEDIA_ROOT, 'uploads')\n \"\"\"\n folder = settings.UPLOADS_ROOT\n ext = filename.split('.')[-1]\n filename = '{}.{}'.format(str(uuid4()), ext)\n return os.path.join(folder, filename)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ngenerate likely unique image path using md5 hashes", "response": "def image_path(instance, filename):\n \"\"\"Generates likely unique image path using md5 hashes\"\"\"\n filename, ext = os.path.splitext(filename.lower())\n instance_id_hash = hashlib.md5(str(instance.id)).hexdigest()\n filename_hash = ''.join(random.sample(hashlib.md5(filename.encode('utf-8')).hexdigest(), 8))\n return '{}/{}{}'.format(instance_id_hash, filename_hash, ext)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\nasync def get_ltd_product_urls(session):\n product_url = 
'https://keeper.lsst.codes/products/'\n async with session.get(product_url) as response:\n data = await response.json()\n\n return data['products']", "response": "Get URLs for LSST the Docs (LTD) products from the LTD Keeper API."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngetting the product resource from the LSST the Docs API.", "response": "async def get_ltd_product(session, slug=None, url=None):\n \"\"\"Get the product resource (JSON document) from the LSST the Docs API.\n\n Parameters\n ----------\n session : `aiohttp.ClientSession`\n Your application's aiohttp client session.\n See http://aiohttp.readthedocs.io/en/stable/client.html.\n slug : `str`, optional\n Slug identifying the product. This is the same as the subdomain.\n For example, ``'ldm-151'`` is the slug for ``ldm-151.lsst.io``.\n A full product URL can be provided instead, see ``url``.\n url : `str`, optional\n The full LTD Keeper URL for the product resource. For example,\n ``'https://keeper.lsst.codes/products/ldm-151'``. The ``slug``\n can be provided instead.\n\n Returns\n -------\n product : `dict`\n Product dataset. See\n https://ltd-keeper.lsst.io/products.html#get--products-(slug)\n for fields.\n \"\"\"\n if url is None:\n url = 'https://keeper.lsst.codes/products/{}'.format(slug)\n\n async with session.get(url) as response:\n data = await response.json()\n\n return data"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\nasync def process_lander_page(session, github_api_token, ltd_product_data,\n mongo_collection=None):\n \"\"\"Extract, transform, and load metadata from Lander-based projects.\n\n Parameters\n ----------\n session : `aiohttp.ClientSession`\n Your application's aiohttp client session.\n See http://aiohttp.readthedocs.io/en/stable/client.html.\n github_api_token : `str`\n A GitHub personal API token. 
See the `GitHub personal access token\n guide`_.\n ltd_product_data : `dict`\n Data for this technote from the LTD Keeper API\n (``GET /products/``). Usually obtained via\n `lsstprojectmeta.ltd.get_ltd_product`.\n mongo_collection : `motor.motor_asyncio.AsyncIOMotorCollection`, optional\n MongoDB collection. This should be the common MongoDB collection for\n LSST projectmeta JSON-LD records. If provided, the JSON-LD is upserted\n into the MongoDB collection.\n\n Returns\n -------\n metadata : `dict`\n JSON-LD-formatted dictionary.\n\n Raises\n ------\n NotLanderPageError\n Raised when the LTD product cannot be interpreted as a Lander page\n because the ``/metadata.jsonld`` file is absent. This implies that\n the LTD product *could* be of a different format.\n\n .. _GitHub personal access token guide: https://ls.st/41d\n \"\"\"\n logger = logging.getLogger(__name__)\n\n # Try to download metadata.jsonld from the Landing page site.\n published_url = ltd_product_data['published_url']\n jsonld_url = urljoin(published_url, '/metadata.jsonld')\n try:\n async with session.get(jsonld_url) as response:\n logger.debug('%s response status %r', jsonld_url, response.status)\n response.raise_for_status()\n json_data = await response.text()\n except aiohttp.ClientResponseError as err:\n logger.debug('Tried to download %s, got status %d',\n jsonld_url, err.code)\n raise NotLanderPageError()\n # Use our own json parser to get datetimes\n metadata = decode_jsonld(json_data)\n\n if mongo_collection is not None:\n await _upload_to_mongodb(mongo_collection, metadata)\n\n return metadata", "response": "Extract and load metadata from Lander-based projects."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\nasync def _upload_to_mongodb(collection, jsonld):\n document = {\n 'data': jsonld\n }\n query = {\n 'data.reportNumber': jsonld['reportNumber']\n }\n await 
collection.update(query, document, upsert=True, multi=False)", "response": "Upsert the technote resource into the projectmeta MongoDB collection."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef json_doc_to_xml(json_obj, lang='en', custom_namespace=None):\n if 'meta' not in json_obj:\n raise Exception(\"This function requires a conforming Open511 JSON document with a 'meta' section.\")\n json_obj = dict(json_obj)\n meta = json_obj.pop('meta')\n elem = get_base_open511_element(lang=lang, version=meta.pop('version'))\n\n pagination = json_obj.pop('pagination', None)\n\n json_struct_to_xml(json_obj, elem, custom_namespace=custom_namespace)\n\n if pagination:\n elem.append(json_struct_to_xml(pagination, 'pagination', custom_namespace=custom_namespace))\n\n json_struct_to_xml(meta, elem)\n\n return elem", "response": "Converts a JSON document to XML."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef json_struct_to_xml(json_obj, root, custom_namespace=None):\n if isinstance(root, (str, unicode)):\n if root.startswith('!'):\n root = etree.Element('{%s}%s' % (NS_PROTECTED, root[1:]))\n elif root.startswith('+'):\n if not custom_namespace:\n raise Exception(\"JSON fields starts with +, but no custom namespace provided\")\n root = etree.Element('{%s}%s' % (custom_namespace, root[1:]))\n else:\n root = etree.Element(root)\n if root.tag in ('attachments', 'grouped_events', 'media_files'):\n for link in json_obj:\n root.append(json_link_to_xml(link))\n elif isinstance(json_obj, (str, unicode)):\n root.text = json_obj\n elif isinstance(json_obj, (int, float)):\n root.text = unicode(json_obj)\n elif isinstance(json_obj, dict):\n if frozenset(json_obj.keys()) == frozenset(('type', 'coordinates')):\n root.append(geojson_to_gml(json_obj))\n else:\n for key, val in json_obj.items():\n if key == 'url' or key.endswith('_url'):\n el = json_link_to_xml(val, 
json_link_key_to_xml_rel(key))\n else:\n el = json_struct_to_xml(val, key, custom_namespace=custom_namespace)\n if el is not None:\n root.append(el)\n elif isinstance(json_obj, list):\n tag_name = root.tag\n if tag_name.endswith('ies'):\n tag_name = tag_name[:-3] + 'y'\n elif tag_name.endswith('s'):\n tag_name = tag_name[:-1]\n for val in json_obj:\n el = json_struct_to_xml(val, tag_name, custom_namespace=custom_namespace)\n if el is not None:\n root.append(el)\n elif json_obj is None:\n return None\n else:\n raise NotImplementedError\n return root", "response": "Converts a dict deserialized from JSON into an XML."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef geojson_to_gml(gj, set_srs=True):\n tag = G(gj['type'])\n if set_srs:\n tag.set('srsName', 'urn:ogc:def:crs:EPSG::4326')\n\n if gj['type'] == 'Point':\n tag.append(G.pos(_reverse_geojson_coords(gj['coordinates'])))\n elif gj['type'] == 'LineString':\n tag.append(G.posList(' '.join(_reverse_geojson_coords(ll) for ll in gj['coordinates'])))\n elif gj['type'] == 'Polygon':\n rings = [\n G.LinearRing(\n G.posList(' '.join(_reverse_geojson_coords(ll) for ll in ring))\n ) for ring in gj['coordinates']\n ]\n tag.append(G.exterior(rings.pop(0)))\n for ring in rings:\n tag.append(G.interior(ring))\n elif gj['type'] in ('MultiPoint', 'MultiLineString', 'MultiPolygon'):\n single_type = gj['type'][5:]\n member_tag = single_type[0].lower() + single_type[1:] + 'Member'\n for coord in gj['coordinates']:\n tag.append(\n G(member_tag, geojson_to_gml({'type': single_type, 'coordinates': coord}, set_srs=False))\n )\n else:\n raise NotImplementedError\n\n return tag", "response": "Given a dict deserialized from a GeoJSON object returns an lxml Element\n of the corresponding GML geometry."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef geom_to_xml_element(geom):\n if geom.srs.srid != 4326:\n raise NotImplementedError(\"Only WGS 84 
lat/long geometries (SRID 4326) are supported.\")\n # GeoJSON output is far more standard than GML, so go through that\n return geojson_to_gml(json.loads(geom.geojson))", "response": "Transform a GEOS or OGR geometry object into an lxml Element\n for the GML geometry."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef remove_comments(tex_source):\n # Expression via http://stackoverflow.com/a/13365453\n return re.sub(r'(?<!\\\\)%.*$', '', tex_source, flags=re.M)", "response": "Delete latex comments from TeX source."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreplacing macros in the TeX source with their content.", "response": "def replace_macros(tex_source, macros):\n r\"\"\"Replace macros in the TeX source with their content.\n\n >>> macros = {r'\\handle': 'LDM-nnn'}\n >>> sample = r'This is document \\handle.'\n >>> replace_macros(sample, macros)\n 'This is document LDM-nnn.'\n\n Any trailing slash after the macro command is also replaced by this\n function.\n\n >>> macros = {r'\\product': 'Data Management'}\n >>> sample = r'\\title [Test Plan] { \\product\\ Test Plan}'\n >>> replace_macros(sample, macros)\n '\\\\title [Test Plan] { Data Management Test Plan}'\n \"\"\"\n for macro_name, macro_content in macros.items():\n # '\\\\?' suffix matches an optional trailing '\\' that might be used\n # for spacing.\n pattern = re.escape(macro_name) + r\"\\\\?\"\n # Wrap macro_content in lambda to avoid processing escapes\n tex_source = re.sub(pattern, lambda _: macro_content, tex_source)\n return tex_source"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef ensure_format(doc, format):\n assert format in ('xml', 'json')\n if getattr(doc, 'tag', None) == 'open511':\n if format == 'json':\n return xml_to_json(doc)\n elif isinstance(doc, dict) and 'meta' in doc:\n if format == 'xml':\n return json_doc_to_xml(doc)\n else:\n raise ValueError(\"Unrecognized input document\")\n return doc", "response": "Ensures that the provided document is of the given format."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef open511_convert(input_doc, output_format, serialize=True, **kwargs):\n\n try:\n output_format_info = FORMATS[output_format]\n 
except KeyError:\n raise ValueError(\"Unrecognized output format %s\" % output_format)\n\n input_doc = ensure_format(input_doc, output_format_info.input_format)\n\n result = output_format_info.func(input_doc, **kwargs)\n if serialize:\n result = output_format_info.serializer(result)\n return result", "response": "Convert an Open511 document between formats."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreading and parse the LaTeX source file and return a LsstLatexDoc instance.", "response": "def read(cls, root_tex_path):\n \"\"\"Construct an `LsstLatexDoc` instance by reading and parsing the\n LaTeX source.\n\n Parameters\n ----------\n root_tex_path : `str`\n Path to the LaTeX source on the filesystem. For multi-file LaTeX\n projects this should be the path to the root document.\n\n Notes\n -----\n This method implements the following pipeline:\n\n 1. `lsstprojectmeta.tex.normalizer.read_tex_file`\n 2. `lsstprojectmeta.tex.scraper.get_macros`\n 3. `lsstprojectmeta.tex.normalizer.replace_macros`\n\n Thus ``input`` and ``includes`` are resolved along with simple macros.\n \"\"\"\n # Read and normalize the TeX source, replacing macros with content\n root_dir = os.path.dirname(root_tex_path)\n tex_source = read_tex_file(root_tex_path)\n tex_macros = get_macros(tex_source)\n tex_source = replace_macros(tex_source, tex_macros)\n return cls(tex_source, root_dir=root_dir)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef html_short_title(self):\n return self.format_short_title(format='html5', deparagraph=True,\n mathjax=False, smart=True)", "response": "HTML5 - formatted document short title."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef html_authors(self):\n return self.format_authors(format='html5', deparagraph=True,\n mathjax=False, smart=True)", "response": "HTML5 - formatted authors of the user."} 
{"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef is_draft(self):\n if not hasattr(self, '_document_options'):\n self._parse_documentclass()\n\n if 'lsstdraft' in self._document_options:\n return True\n else:\n return False", "response": "Return True if the document is a draft."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nget the document content in the specified markup format.", "response": "def format_content(self, format='plain', mathjax=False,\n smart=True, extra_args=None):\n \"\"\"Get the document content in the specified markup format.\n\n Parameters\n ----------\n format : `str`, optional\n Output format (such as ``'html5'`` or ``'plain'``).\n mathjax : `bool`, optional\n Allow pandoc to use MathJax math markup.\n smart : `True`, optional\n Allow pandoc to create \"smart\" unicode punctuation.\n extra_args : `list`, optional\n Additional command line flags to pass to Pandoc. See\n `lsstprojectmeta.pandoc.convert.convert_text`.\n\n Returns\n -------\n output_text : `str`\n Converted content.\n \"\"\"\n output_text = convert_lsstdoc_tex(\n self._tex, format,\n mathjax=mathjax,\n smart=smart,\n extra_args=extra_args)\n return output_text"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning the title of the document in the specified markup format.", "response": "def format_title(self, format='html5', deparagraph=True, mathjax=False,\n smart=True, extra_args=None):\n \"\"\"Get the document title in the specified markup format.\n\n Parameters\n ----------\n format : `str`, optional\n Output format (such as ``'html5'`` or ``'plain'``).\n deparagraph : `bool`, optional\n Remove the paragraph tags from single paragraph content.\n mathjax : `bool`, optional\n Allow pandoc to use MathJax math markup.\n smart : `True`, optional\n Allow pandoc to create \"smart\" unicode punctuation.\n extra_args : `list`, optional\n Additional command line 
flags to pass to Pandoc. See\n `lsstprojectmeta.pandoc.convert.convert_text`.\n\n Returns\n -------\n output_text : `str`\n Converted content or `None` if the title is not available in\n the document.\n \"\"\"\n if self.title is None:\n return None\n\n output_text = convert_lsstdoc_tex(\n self.title, format,\n deparagraph=deparagraph,\n mathjax=mathjax,\n smart=smart,\n extra_args=extra_args)\n return output_text"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef format_short_title(self, format='html5', deparagraph=True,\n mathjax=False, smart=True, extra_args=None):\n \"\"\"Get the document short title in the specified markup format.\n\n Parameters\n ----------\n format : `str`, optional\n Output format (such as ``'html5'`` or ``'plain'``).\n deparagraph : `bool`, optional\n Remove the paragraph tags from single paragraph content.\n mathjax : `bool`, optional\n Allow pandoc to use MathJax math markup.\n smart : `True`, optional\n Allow pandoc to create \"smart\" unicode punctuation.\n extra_args : `list`, optional\n Additional command line flags to pass to Pandoc. 
See\n `lsstprojectmeta.pandoc.convert.convert_text`.\n\n Returns\n -------\n output_text : `str`\n Converted content or `None` if the short title is not available in\n the document.\n \"\"\"\n if self.short_title is None:\n return None\n\n output_text = convert_lsstdoc_tex(\n self.short_title, format,\n deparagraph=deparagraph,\n mathjax=mathjax,\n smart=smart,\n extra_args=extra_args)\n return output_text", "response": "Returns the document short title in the specified markup format."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nget the document abstract in the specified markup format.", "response": "def format_abstract(self, format='html5', deparagraph=False, mathjax=False,\n smart=True, extra_args=None):\n \"\"\"Get the document abstract in the specified markup format.\n\n Parameters\n ----------\n format : `str`, optional\n Output format (such as ``'html5'`` or ``'plain'``).\n deparagraph : `bool`, optional\n Remove the paragraph tags from single paragraph content.\n mathjax : `bool`, optional\n Allow pandoc to use MathJax math markup.\n smart : `True`, optional\n Allow pandoc to create \"smart\" unicode punctuation.\n extra_args : `list`, optional\n Additional command line flags to pass to Pandoc. 
See\n `lsstprojectmeta.pandoc.convert.convert_text`.\n\n Returns\n -------\n output_text : `str`\n Converted content or `None` if the title is not available in\n the document.\n \"\"\"\n if self.abstract is None:\n return None\n\n abstract_latex = self._prep_snippet_for_pandoc(self.abstract)\n\n output_text = convert_lsstdoc_tex(\n abstract_latex, format,\n deparagraph=deparagraph,\n mathjax=mathjax,\n smart=smart,\n extra_args=extra_args)\n return output_text"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef format_authors(self, format='html5', deparagraph=True, mathjax=False,\n smart=True, extra_args=None):\n \"\"\"Get the document authors in the specified markup format.\n\n Parameters\n ----------\n format : `str`, optional\n Output format (such as ``'html5'`` or ``'plain'``).\n deparagraph : `bool`, optional\n Remove the paragraph tags from single paragraph content.\n mathjax : `bool`, optional\n Allow pandoc to use MathJax math markup.\n smart : `True`, optional\n Allow pandoc to create \"smart\" unicode punctuation.\n extra_args : `list`, optional\n Additional command line flags to pass to Pandoc. See\n `lsstprojectmeta.pandoc.convert.convert_text`.\n\n Returns\n -------\n output_text : `list` of `str`\n Sequence of author names in the specified output markup format.\n \"\"\"\n formatted_authors = []\n for latex_author in self.authors:\n formatted_author = convert_lsstdoc_tex(\n latex_author, format,\n deparagraph=deparagraph,\n mathjax=mathjax,\n smart=smart,\n extra_args=extra_args)\n # removes Pandoc's terminal newlines\n formatted_author = formatted_author.strip()\n formatted_authors.append(formatted_author)\n return formatted_authors", "response": "Returns the document authors in the specified markup format."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nparses documentclass options. 
Sets the ``_document_options`` attribute.", "response": "def _parse_documentclass(self):\n \"\"\"Parse documentclass options.\n\n Sets the ``_document_options`` attribute.\n \"\"\"\n command = LatexCommand(\n 'documentclass',\n {'name': 'options', 'required': False, 'bracket': '['},\n {'name': 'class_name', 'required': True, 'bracket': '{'})\n try:\n parsed = next(command.parse(self._tex))\n except StopIteration:\n self._logger.warning('lsstdoc has no documentclass')\n self._document_options = []\n return\n\n try:\n content = parsed['options']\n self._document_options = [opt.strip()\n for opt in content.split(',')]\n except KeyError:\n self._logger.warning('lsstdoc has no documentclass options')\n self._document_options = []"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nparses the title from the TeX source.", "response": "def _parse_title(self):\n \"\"\"Parse the title from TeX source.\n\n Sets these attributes:\n\n - ``_title``\n - ``_short_title``\n \"\"\"\n command = LatexCommand(\n 'title',\n {'name': 'short_title', 'required': False, 'bracket': '['},\n {'name': 'long_title', 'required': True, 'bracket': '{'})\n try:\n parsed = next(command.parse(self._tex))\n except StopIteration:\n self._logger.warning('lsstdoc has no title')\n self._title = None\n self._short_title = None\n return\n\n self._title = parsed['long_title']\n\n try:\n self._short_title = parsed['short_title']\n except KeyError:\n self._logger.warning('lsstdoc has no short title')\n self._short_title = None"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nparsing the document handle and set the _handle series and serial attributes.", "response": "def _parse_doc_ref(self):\n \"\"\"Parse the document handle.\n\n Sets the ``_series``, ``_serial``, and ``_handle`` attributes.\n \"\"\"\n command = LatexCommand(\n 'setDocRef',\n {'name': 'handle', 'required': True, 'bracket': '{'})\n try:\n parsed = next(command.parse(self._tex))\n except 
StopIteration:\n self._logger.warning('lsstdoc has no setDocRef')\n self._handle = None\n self._series = None\n self._serial = None\n return\n\n self._handle = parsed['handle']\n try:\n self._series, self._serial = self._handle.split('-', 1)\n except ValueError:\n self._logger.warning('lsstdoc handle cannot be parsed into '\n 'series and serial: %r', self._handle)\n self._series = None\n self._serial = None"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _parse_author(self):\n command = LatexCommand(\n 'author',\n {'name': 'authors', 'required': True, 'bracket': '{'})\n try:\n parsed = next(command.parse(self._tex))\n except StopIteration:\n self._logger.warning('lsstdoc has no author')\n self._authors = []\n return\n\n try:\n content = parsed['authors']\n except KeyError:\n self._logger.warning('lsstdoc has no author')\n self._authors = []\n return\n\n # Clean content\n content = content.replace('\\n', ' ')\n content = content.replace('~', ' ')\n content = content.strip()\n\n # Split content into list of individual authors\n authors = []\n for part in content.split(','):\n part = part.strip()\n for split_part in part.split('and '):\n split_part = split_part.strip()\n if len(split_part) > 0:\n authors.append(split_part)\n self._authors = authors", "response": "Parse the author from the TeX source."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _parse_abstract(self):\n command = LatexCommand(\n 'setDocAbstract',\n {'name': 'abstract', 'required': True, 'bracket': '{'})\n try:\n parsed = next(command.parse(self._tex))\n except StopIteration:\n self._logger.warning('lsstdoc has no abstract')\n self._abstract = None\n return\n\n try:\n content = parsed['abstract']\n except KeyError:\n self._logger.warning('lsstdoc has no abstract')\n self._abstract = None\n return\n\n content = content.strip()\n self._abstract = content", "response": "Parse the abstract from 
the TeX source."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nprocess a LaTeX snippet of content for better transformation with pandoc.", "response": "def _prep_snippet_for_pandoc(self, latex_text):\n \"\"\"Process a LaTeX snippet of content for better transformation\n with pandoc.\n\n Currently runs the CitationLinker to convert BibTeX citations to\n href links.\n \"\"\"\n replace_cite = CitationLinker(self.bib_db)\n latex_text = replace_cite(latex_text)\n return latex_text"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nloads the BibTeX file for the current locale.", "response": "def _load_bib_db(self):\n r\"\"\"Load the BibTeX bibliography referenced by the document.\n\n This method triggered by the `bib_db` attribute and populates the\n `_bib_db` private attribute.\n\n The ``\\bibliography`` command is parsed to identify the bibliographies\n referenced by the document.\n \"\"\"\n # Get the names of custom bibtex files by parsing the\n # \\bibliography command and filtering out the default lsstdoc\n # bibliographies.\n command = LatexCommand(\n 'bibliography',\n {'name': 'bib_names', 'required': True, 'bracket': '{'})\n try:\n parsed = next(command.parse(self._tex))\n bib_names = [n.strip() for n in parsed['bib_names'].split(',')]\n except StopIteration:\n self._logger.warning('lsstdoc has no bibliography command')\n bib_names = []\n custom_bib_names = [n for n in bib_names\n if n not in KNOWN_LSSTTEXMF_BIB_NAMES]\n\n # Read custom bibliographies.\n custom_bibs = []\n for custom_bib_name in custom_bib_names:\n custom_bib_path = os.path.join(\n os.path.join(self._root_dir),\n custom_bib_name + '.bib'\n )\n if not os.path.exists(custom_bib_path):\n self._logger.warning('Could not find bibliography %r',\n custom_bib_path)\n continue\n with open(custom_bib_path, 'r') as file_handle:\n custom_bibs.append(file_handle.read())\n if len(custom_bibs) > 0:\n custom_bibtex = 
'\\n\\n'.join(custom_bibs)\n else:\n custom_bibtex = None\n\n # Get the combined pybtex bibliography\n db = get_bibliography(bibtex=custom_bibtex)\n\n self._bib_db = db"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _parse_revision_date(self):\n doc_datetime = None\n\n # First try to parse the \\date command in the latex.\n # \\date is ignored for draft documents.\n if not self.is_draft:\n date_command = LatexCommand(\n 'date',\n {'name': 'content', 'required': True, 'bracket': '{'})\n try:\n parsed = next(date_command.parse(self._tex))\n command_content = parsed['content'].strip()\n except StopIteration:\n command_content = None\n self._logger.warning('lsstdoc has no date command')\n\n # Try to parse a date from the \\date command\n if command_content is not None and command_content != r'\\today':\n try:\n doc_datetime = datetime.datetime.strptime(command_content,\n '%Y-%m-%d')\n # Assume LSST project time (Pacific)\n project_tz = timezone('US/Pacific')\n localized_datetime = project_tz.localize(doc_datetime)\n # Normalize to UTC\n doc_datetime = localized_datetime.astimezone(pytz.utc)\n\n self._revision_datetime_source = 'tex'\n except ValueError:\n self._logger.warning('Could not parse a datetime from '\n 'lsstdoc date command: %r',\n command_content)\n\n # Fallback to getting the datetime from Git\n if doc_datetime is None:\n content_extensions = ('tex', 'bib', 'pdf', 'png', 'jpg')\n try:\n doc_datetime = get_content_commit_date(\n content_extensions,\n root_dir=self._root_dir)\n self._revision_datetime_source = 'git'\n except RuntimeError:\n self._logger.warning('Could not get a datetime from the Git '\n 'repository at %r',\n self._root_dir)\n\n # Final fallback to the current datetime\n if doc_datetime is None:\n doc_datetime = pytz.utc.localize(datetime.datetime.now())\n self._revision_datetime_source = 'now'\n\n self._datetime = doc_datetime", "response": "Parse the \\ date command in the 
lsstdoc, falling back to the most recent Git commit date and finally the current datetime."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef build_jsonld(self, url=None, code_url=None, ci_url=None,\n readme_url=None, license_id=None):\n \"\"\"Create a JSON-LD representation of this LSST LaTeX document.\n\n Parameters\n ----------\n url : `str`, optional\n URL where this document is published to the web. Prefer\n the LSST the Docs URL if possible.\n Example: ``'https://ldm-151.lsst.io'``.\n code_url : `str`, optional\n Path to the document's repository, typically on GitHub.\n Example: ``'https://github.com/lsst/LDM-151'``.\n ci_url : `str`, optional\n Path to the continuous integration service dashboard for this\n document's repository.\n Example: ``'https://travis-ci.org/lsst/LDM-151'``.\n readme_url : `str`, optional\n URL to the document repository's README file. Example:\n ``https://raw.githubusercontent.com/lsst/LDM-151/master/README.rst``.\n license_id : `str`, optional\n License identifier, if known. The identifier should be from the\n listing at https://spdx.org/licenses/. Example: ``CC-BY-4.0``.\n\n Returns\n -------\n jsonld : `dict`\n JSON-LD-formatted dictionary.\n \"\"\"\n jsonld = {\n '@context': [\n \"https://raw.githubusercontent.com/codemeta/codemeta/2.0-rc/\"\n \"codemeta.jsonld\",\n \"http://schema.org\"],\n '@type': ['Report', 'SoftwareSourceCode'],\n 'language': 'TeX',\n 'reportNumber': self.handle,\n 'name': self.plain_title,\n 'description': self.plain_abstract,\n 'author': [{'@type': 'Person', 'name': author_name}\n for author_name in self.plain_authors],\n # This is a datetime.datetime; not a string. 
If writing to a file,\n # need to convert this to an ISO 8601 string.\n 'dateModified': self.revision_datetime\n }\n\n try:\n jsonld['articleBody'] = self.plain_content\n jsonld['fileFormat'] = 'text/plain' # MIME type of articleBody\n except RuntimeError:\n # raised by pypandoc when it can't convert the tex document\n self._logger.exception('Could not convert latex body to plain '\n 'text for articleBody.')\n self._logger.warning('Falling back to tex source for articleBody')\n jsonld['articleBody'] = self._tex\n jsonld['fileFormat'] = 'text/plain' # no mimetype for LaTeX?\n\n if url is not None:\n jsonld['@id'] = url\n jsonld['url'] = url\n else:\n # Fallback to using the document handle as the ID. This isn't\n # entirely ideal from a linked data perspective.\n jsonld['@id'] = self.handle\n\n if code_url is not None:\n jsonld['codeRepository'] = code_url\n\n if ci_url is not None:\n jsonld['contIntegration'] = ci_url\n\n if readme_url is not None:\n jsonld['readme'] = readme_url\n\n if license_id is not None:\n jsonld['license_id'] = license_id\n\n return jsonld", "response": "Build a JSON-LD representation of this LSST LaTeX document."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef rename(self, from_name, to_name):\n log.info('renaming database from %s to %s' % (from_name, to_name))\n self._run_stmt('alter database %s rename to %s' % (from_name, to_name))", "response": "Renames an existing database."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn a list of existing connections to the named database.", "response": "def connections(self, name):\n \"\"\"Returns a list of existing connections to the named database.\"\"\"\n stmt = \"\"\"\n select {fields} from pg_stat_activity\n where datname = {datname!r} and pid <> pg_backend_pid()\n \"\"\".format(fields=', '.join(CONNECTION_FIELDS), datname=name)\n return list(Connection(**x) for x in self._iter_results(stmt))"} 
{"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn True if database server is running False otherwise.", "response": "def available(self, timeout=5):\n \"\"\"Returns True if database server is running, False otherwise.\"\"\"\n host = self._connect_args['host']\n port = self._connect_args['port']\n try:\n sock = socket.create_connection((host, port), timeout=timeout)\n sock.close()\n return True\n except socket.error:\n pass\n return False"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ndumps the state of a database to a file.", "response": "def dump(self, name, filename):\n \"\"\"\n Saves the state of a database to a file.\n\n Parameters\n ----------\n name: str\n the database to be backed up.\n filename: str\n path to a file where database backup will be written.\n \"\"\"\n if not self.exists(name):\n raise DatabaseError('database %s does not exist!')\n log.info('dumping %s to %s' % (name, filename))\n self._run_cmd('pg_dump', '--verbose', '--blobs', '--format=custom',\n '--file=%s' % filename, name)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nrestoring a postgres database from a file.", "response": "def restore(self, name, filename):\n \"\"\"\n Loads state of a backup file to a database.\n\n Note\n ----\n If database name does not exist, it will be created.\n\n Parameters\n ----------\n name: str\n the database to which backup will be restored.\n filename: str\n path to a file contain a postgres database backup.\n \"\"\"\n if not self.exists(name):\n self.create(name)\n else:\n log.warn('overwriting contents of database %s' % name)\n log.info('restoring %s from %s' % (name, filename))\n self._run_cmd('pg_restore', '--verbose', '--dbname=%s' % name, filename)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef connection_dsn(self, name=None):\n return ' '.join(\"%s=%s\" % (param, 
value) for param, value in self._connect_options(name))", "response": "Returns a connection string for the current database."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef connection_url(self, name=None):\n return 'postgresql://{user}@{host}:{port}/{dbname}'.format(**{k: v for k, v in self._connect_options(name)})", "response": "Provides a connection string for the database."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nconnect the database client shell to the database.", "response": "def shell(self, expect=pexpect):\n \"\"\"\n Connects the database client shell to the database.\n\n Parameters\n ----------\n expect: module\n a pexpect-compatible module used to spawn and drive the psql client.\n \"\"\"\n dsn = self.connection_dsn()\n log.debug('connection string: %s' % dsn)\n child = expect.spawn('psql \"%s\"' % dsn)\n if self._connect_args['password'] is not None:\n child.expect('Password: ')\n child.sendline(self._connect_args['password'])\n child.interact()"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning a list of all settings from the server.", "response": "def settings(self):\n \"\"\"Returns settings from the server.\"\"\"\n stmt = \"select {fields} from pg_settings\".format(fields=', '.join(SETTINGS_FIELDS))\n settings = []\n for row in self._iter_results(stmt):\n row['setting'] = self._vartype_map[row['vartype']](row['setting'])\n settings.append(Settings(**row))\n return settings"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef twitter_bootstrap(element, args=\"\"):\n element_type = element.__class__.__name__.lower()\n\n args_list = [arg.strip() for arg in args.split(',')]\n\n layout = (len(args_list) and args_list[0]) or \"default\"\n size = (len(args_list) > 1 and args_list[1]) or \"sm\"\n label_cols = (len(args_list) > 2 and args_list[2]) or \"2\"\n input_cols = 
(len(args_list) > 3 and args_list[3]) or str(12 - int(label_cols))\n\n lbl_size_class = \"col-%s-%s\" % (size, label_cols)\n lbl_size_offset_class = \"col-%s-offset-%s\" % (size, label_cols)\n ipt_size_class = \"col-%s-%s\" % (size, input_cols)\n\n if layout not in [\"default\", \"search\", \"inline\", \"horizontal\"]:\n layout = \"default\"\n\n if element_type == 'boundfield':\n pass\n else:\n\n if layout == \"default\":\n field_template_file = \"field.html\"\n else:\n field_template_file = \"%s_field.html\" % layout\n\n template = get_template(\"twitter_bootstrap_form/form.html\")\n context = {\n 'form': element,\n 'layout': layout,\n 'lbl_size_class': lbl_size_class,\n 'lbl_size_offset_class': lbl_size_offset_class,\n 'ipt_size_class': ipt_size_class,\n 'required_suffix': settings.BOOTSTRAP_REQUIRED_SUFFIX,\n 'field_template': \"twitter_bootstrap_form/%s\" % field_template_file}\n\n return template.render(context)", "response": "Returns a twitter bootstrap form for the given element."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nsay something in the morning", "response": "def breakfast(self, message=\"Breakfast is ready\", shout: bool = False):\n \"\"\"Say something in the morning\"\"\"\n return self.helper.output(message, shout)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef lunch(self, message=\"Time for lunch\", shout: bool = False):\n return self.helper.output(message, shout)", "response": "Say something in the afternoon"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nsay something in the evening", "response": "def dinner(self, message=\"Dinner is served\", shout: bool = False):\n \"\"\"Say something in the evening\"\"\"\n return self.helper.output(message, shout)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef main():\n parser = argparse.ArgumentParser(\n 
description='Discover and ingest metadata from document sources, '\n 'including lsstdoc-based LaTeX documents and '\n 'reStructuredText-based technotes. Metadata can be '\n 'upserted into the LSST Projectmeta MongoDB.')\n parser.add_argument(\n '--ltd-product',\n dest='ltd_product_url',\n help='URL of an LSST the Docs product '\n '(https://keeper.lsst.codes/products/). If provided, '\n 'only this document will be ingested.')\n parser.add_argument(\n '--github-token',\n help='GitHub personal access token.')\n parser.add_argument(\n '--mongodb-uri',\n help='MongoDB connection URI. If provided, metadata will be loaded '\n 'into the Projectmeta database. Omit this argument to just '\n 'test the ingest pipeline.')\n parser.add_argument(\n '--mongodb-db',\n default='lsstprojectmeta',\n help='Name of MongoDB database')\n parser.add_argument(\n '--mongodb-collection',\n default='resources',\n help='Name of the MongoDB collection for projectmeta resources')\n args = parser.parse_args()\n\n # Configure the root logger\n stream_handler = logging.StreamHandler()\n stream_formatter = logging.Formatter(\n '%(asctime)s %(levelname)8s %(name)s | %(message)s')\n stream_handler.setFormatter(stream_formatter)\n root_logger = logging.getLogger()\n root_logger.addHandler(stream_handler)\n root_logger.setLevel(logging.WARNING)\n # Configure app logger\n app_logger = logging.getLogger('lsstprojectmeta')\n app_logger.setLevel(logging.DEBUG)\n\n if args.mongodb_uri is not None:\n mongo_client = AsyncIOMotorClient(args.mongodb_uri, ssl=True)\n collection = mongo_client[args.mongodb_db][args.mongodb_collection]\n else:\n collection = None\n\n loop = asyncio.get_event_loop()\n\n if args.ltd_product_url is not None:\n # Run single technote\n loop.run_until_complete(run_single_ltd_doc(args.ltd_product_url,\n args.github_token,\n collection))\n else:\n # Run bulk technote processing\n loop.run_until_complete(run_bulk_etl(args.github_token,\n collection))", "response": "This is the main function 
for the lsstprojectmeta command line entrypoint."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nextract, transform, and load metadata for a list of LSST the Docs products.", "response": "async def process_ltd_doc_products(session, product_urls, github_api_token,\n mongo_collection=None):\n \"\"\"Run a pipeline to extract, transform, and load metadata for\n multiple LSST the Docs-hosted projects\n\n Parameters\n ----------\n session : `aiohttp.ClientSession`\n Your application's aiohttp client session.\n See http://aiohttp.readthedocs.io/en/stable/client.html.\n product_urls : `list` of `str`\n List of LSST the Docs product URLs.\n github_api_token : `str`\n A GitHub personal API token. See the `GitHub personal access token\n guide`_.\n mongo_collection : `motor.motor_asyncio.AsyncIOMotorCollection`, optional\n MongoDB collection. This should be the common MongoDB collection for\n LSST projectmeta JSON-LD records.\n \"\"\"\n tasks = [asyncio.ensure_future(\n process_ltd_doc(session, github_api_token,\n product_url,\n mongo_collection=mongo_collection))\n for product_url in product_urls]\n await asyncio.gather(*tasks)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\nasync def process_ltd_doc(session, github_api_token, ltd_product_url,\n mongo_collection=None):\n \"\"\"Ingest any kind of LSST document hosted on LSST the Docs from its\n source.\n\n Parameters\n ----------\n session : `aiohttp.ClientSession`\n Your application's aiohttp client session.\n See http://aiohttp.readthedocs.io/en/stable/client.html.\n github_api_token : `str`\n A GitHub personal API token. See the `GitHub personal access token\n guide`_.\n ltd_product_url : `str`\n URL of the technote's product resource in the LTD Keeper API.\n mongo_collection : `motor.motor_asyncio.AsyncIOMotorCollection`, optional\n MongoDB collection. 
This should be the common MongoDB collection for\n LSST projectmeta JSON-LD records. If provided, this JSON-LD is upserted\n into the MongoDB collection.\n\n Returns\n -------\n metadata : `dict`\n JSON-LD-formatted dictionary.\n\n .. `GitHub personal access token guide`: https://ls.st/41d\n \"\"\"\n logger = logging.getLogger(__name__)\n\n ltd_product_data = await get_ltd_product(session, url=ltd_product_url)\n\n # Ensure the LTD product is a document\n product_name = ltd_product_data['slug']\n doc_handle_match = DOCUMENT_HANDLE_PATTERN.match(product_name)\n if doc_handle_match is None:\n logger.debug('%s is not a document repo', product_name)\n return\n\n # Figure out the format of the document by probing for metadata files.\n # reStructuredText-based Sphinx documents have a metadata.yaml file.\n try:\n return await process_sphinx_technote(session,\n github_api_token,\n ltd_product_data,\n mongo_collection=mongo_collection)\n except NotSphinxTechnoteError:\n # Catch error so we can try the next format\n logger.debug('%s is not a Sphinx-based technote.', product_name)\n except Exception:\n # Something bad happened trying to process the technote.\n # Log and just move on.\n logger.exception('Unexpected error trying to process %s', product_name)\n return\n\n # Try interpreting it as a Lander page with a /metadata.jsonld document\n try:\n return await process_lander_page(session,\n github_api_token,\n ltd_product_data,\n mongo_collection=mongo_collection)\n except NotLanderPageError:\n # Catch error so we can try the next format\n logger.debug('%s is not a Lander page with a metadata.jsonld file.',\n product_name)\n except Exception:\n # Something bad happened; log and move on\n logger.exception('Unexpected error trying to process %s', product_name)\n return", "response": "Ingest any kind of LSST document hosted on LSST the Docs from its LTD Keeper API."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nallowing a decorator to be called with or 
without keyword arguments.", "response": "def decorator(decorator_func):\n \"\"\"Allows a decorator to be called with or without keyword arguments.\"\"\"\n assert callable(decorator_func), type(decorator_func)\n\n def _decorator(func=None, **kwargs):\n assert func is None or callable(func), type(func)\n if func:\n return decorator_func(func, **kwargs)\n else:\n def _decorator_helper(func):\n return decorator_func(func, **kwargs)\n\n return _decorator_helper\n\n return _decorator"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ncreating a GitHub token for an integration installation.", "response": "def get_installation_token(installation_id, integration_jwt):\n \"\"\"Create a GitHub token for an integration installation.\n\n Parameters\n ----------\n installation_id : `int`\n Installation ID. This is available in the URL of the integration's\n **installation** ID.\n integration_jwt : `bytes`\n The integration's JSON Web Token (JWT). You can create this with\n `create_jwt`.\n\n Returns\n -------\n token_obj : `dict`\n GitHub token object. Includes the fields:\n\n - ``token``: the token string itself.\n - ``expires_at``: date time string when the token expires.\n\n Example\n -------\n The typical workflow for authenticating to an integration installation is:\n\n .. 
code-block:: python\n\n from dochubadapter.github import auth\n jwt = auth.create_jwt(integration_id, private_key_path)\n token_obj = auth.get_installation_token(installation_id, jwt)\n print(token_obj['token'])\n\n Notes\n -----\n See\n https://developer.github.com/early-access/integrations/authentication/#as-an-installation\n for more information\n \"\"\"\n api_root = 'https://api.github.com'\n url = '{root}/installations/{id_:d}/access_tokens'.format(\n api_root=api_root,\n id_=installation_id)\n\n headers = {\n 'Authorization': 'Bearer {0}'.format(integration_jwt.decode('utf-8')),\n 'Accept': 'application/vnd.github.machine-man-preview+json'\n }\n\n resp = requests.post(url, headers=headers)\n resp.raise_for_status()\n return resp.json()"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncreating a JSON Web Token to authenticate a GitHub Integration or installation.", "response": "def create_jwt(integration_id, private_key_path):\n \"\"\"Create a JSON Web Token to authenticate a GitHub Integration or\n installation.\n\n Parameters\n ----------\n integration_id : `int`\n Integration ID. This is available from the GitHub integration's\n homepage.\n private_key_path : `str`\n Path to the integration's private key (a ``.pem`` file).\n\n Returns\n -------\n jwt : `bytes`\n JSON Web Token that is good for 9 minutes.\n\n Notes\n -----\n The JWT is encoded with the RS256 algorithm. It includes a payload with\n fields:\n\n - ``'iat'``: The current time, as an `int` timestamp.\n - ``'exp'``: Expiration time, as an `int timestamp. 
The expiration\n time is set to 9 minutes in the future (maximum allowance is 10 minutes).\n - ``'iss'``: The integration ID (`int`).\n\n For more information, see\n https://developer.github.com/early-access/integrations/authentication/.\n \"\"\"\n integration_id = int(integration_id)\n\n with open(private_key_path, 'rb') as f:\n cert_bytes = f.read()\n\n now = datetime.datetime.now()\n expiration_time = now + datetime.timedelta(minutes=9)\n payload = {\n # Issued at time\n 'iat': int(now.timestamp()),\n # JWT expiration time (10 minute maximum)\n 'exp': int(expiration_time.timestamp()),\n # Integration's GitHub identifier\n 'iss': integration_id\n }\n\n return jwt.encode(payload, cert_bytes, algorithm='RS256')"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ntrying to load and return a module", "response": "def load(directory_name, module_name):\n \"\"\"Try to load and return a module\n\n Adds DIRECTORY_NAME to sys.path and tries to import MODULE_NAME.\n\n For example:\n load(\"~/.yaz\", \"yaz_extension\")\n \"\"\"\n directory_name = os.path.expanduser(directory_name)\n if os.path.isdir(directory_name) and directory_name not in sys.path:\n sys.path.append(directory_name)\n\n try:\n return importlib.import_module(module_name)\n except ImportError:\n pass"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef make_aware(value, timezone):\n if hasattr(timezone, 'localize') and value not in (datetime.datetime.min, datetime.datetime.max):\n # available for pytz time zones\n return timezone.localize(value, is_dst=None)\n else:\n # may be wrong around DST changes\n return value.replace(tzinfo=timezone)", "response": "Makes a naive datetime. 
datetime in a given time zone aware."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef make_naive(value, timezone):\n value = value.astimezone(timezone)\n if hasattr(timezone, 'normalize'):\n # available for pytz time zones\n value = timezone.normalize(value)\n return value.replace(tzinfo=None)", "response": "Makes an aware datetime.datetime naive in a given time zone."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning a Schedule object based on an lxml Element for the schedule tag. timezone is a tzinfo object ideally from pytz.", "response": "def from_element(root, timezone):\n \"\"\"Return a Schedule object based on an lxml Element for the schedule\n tag. timezone is a tzinfo object, ideally from pytz.\"\"\"\n assert root.tag == 'schedule'\n if root.xpath('intervals'):\n return _ScheduleIntervals(root, timezone)\n elif root.xpath('recurring_schedules'):\n return _ScheduleRecurring(root, timezone)\n raise NotImplementedError"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef to_timezone(self, dt):\n if timezone.is_aware(dt):\n return dt.astimezone(self.timezone)\n else:\n return timezone.make_aware(dt, self.timezone)", "response": "Converts a datetime to the timezone of this Schedule."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns a list of tuples of start and end datetimes for when the schedule is active during the provided range.", "response": "def intervals(self, range_start=datetime.datetime.min, range_end=datetime.datetime.max):\n \"\"\"Returns a list of tuples of start/end datetimes for when the schedule\n is active during the provided range.\"\"\"\n raise NotImplementedError"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef next_interval(self, after=None):\n if after is None:\n after = timezone.now()\n after = 
self.to_timezone(after)\n return next(self.intervals(range_start=after), None)", "response": "Returns the next Period during which this event is in effect, or None if the event has no remaining periods."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef includes(self, query):\n query = self.to_timezone(query)\n return any(self.intervals(range_start=query, range_end=query))", "response": "Does this schedule include the provided time?"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning a list of Period tuples for each period represented in an exception that falls between range_start and range_end.", "response": "def exception_periods(self, range_start=datetime.date.min, range_end=datetime.date.max):\n \"\"\"Returns a list of Period tuples for each period represented in an exception\n that falls between range_start and range_end.\"\"\"\n periods = []\n for exception_date, exception_times in self.exceptions.items():\n if exception_date >= range_start and exception_date <= range_end:\n for exception_time in exception_times:\n periods.append(\n Period(\n self.timezone.localize(datetime.datetime.combine(exception_date, exception_time.start)),\n self.timezone.localize(datetime.datetime.combine(exception_date, exception_time.end))\n )\n )\n\n periods.sort()\n return periods"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef includes(self, query):\n query = self.to_timezone(query)\n query_date = query.date()\n query_time = query.time()\n\n # Is the provided time an exception for this schedule?\n specific = self.exceptions.get(query_date)\n if specific is not None:\n if len(specific) == 0:\n # Not in effect on this day\n return False\n for period in specific:\n if query_time >= period.start and query_time <= period.end:\n return True\n return False\n\n # It's not an exception. 
Is it within a recurring schedule?\n return any(sched.includes(query_date, query_time) for sched in self._recurring_schedules)", "response": "Returns True if this schedule includes the provided time."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _daily_periods(self, range_start, range_end):\n specific = set(self.exceptions.keys())\n\n return heapq.merge(self.exception_periods(range_start, range_end), *[\n sched.daily_periods(range_start=range_start, range_end=range_end, exclude_dates=specific)\n for sched in self._recurring_schedules\n ])", "response": "Returns an iterator of Period tuples for every day this event is in effect between range_start and range_end."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef intervals(self, range_start=datetime.datetime.min, range_end=datetime.datetime.max):\n\n # At the moment the algorithm works on periods split by calendar day, one at a time,\n # merging them if they're continuous; to avoid looping infinitely for infinitely long\n # periods, it splits periods as soon as they reach 60 days.\n # This algorithm could likely be improved to get rid of this restriction and improve\n # efficiency, so code should not rely on this behaviour.\n\n current_period = None\n max_continuous_days = 60\n\n range_start = self.to_timezone(range_start)\n range_end = self.to_timezone(range_end)\n\n for period in self._daily_periods(range_start.date(), range_end.date()):\n if period.end < range_start or period.start > range_end:\n continue\n if current_period is None:\n current_period = period\n else:\n if ( ((period.start < current_period.end)\n or (period.start - current_period.end) <= datetime.timedelta(minutes=1))\n and (current_period.end - current_period.start) < datetime.timedelta(days=max_continuous_days)):\n # Merge\n current_period = Period(current_period.start, period.end)\n else:\n yield current_period\n 
current_period = period\n if current_period:\n yield current_period", "response": "Returns an iterator of Period tuples for continuous stretches of time during which the event is in effect between range_start and range_end."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nchecking whether this schedule includes the provided time.", "response": "def includes(self, query_date, query_time=None):\n \"\"\"Does this schedule include the provided time?\n query_date and query_time are date and time objects, interpreted\n in this schedule's timezone\"\"\"\n\n if self.start_date and query_date < self.start_date:\n return False\n if self.end_date and query_date > self.end_date:\n return False\n if query_date.weekday() not in self.weekdays:\n return False\n\n if not query_time:\n return True\n\n if query_time >= self.period.start and query_time <= self.period.end:\n return True\n\n return False"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef daily_periods(self, range_start=datetime.date.min, range_end=datetime.date.max, exclude_dates=tuple()):\n tz = self.timezone\n period = self.period\n weekdays = self.weekdays\n\n current_date = max(range_start, self.start_date)\n end_date = range_end\n if self.end_date:\n end_date = min(end_date, self.end_date)\n\n while current_date <= end_date:\n if current_date.weekday() in weekdays and current_date not in exclude_dates:\n yield Period(\n tz.localize(datetime.datetime.combine(current_date, period.start)),\n tz.localize(datetime.datetime.combine(current_date, period.end))\n )\n current_date += datetime.timedelta(days=1)", "response": "Returns an iterator of Period tuples for every day this schedule is in effect between range_start and range_end."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\nasync def _download_text(url, session):\n logger = logging.getLogger(__name__)\n async with session.get(url) as response:\n # aiohttp 
decodes the content to a Python string\n logger.info('Downloading %r', url)\n return await response.text()", "response": "Asynchronously request a URL and get the encoded text content of the\n response."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nget the content of lsst-texmf bibliographies.", "response": "def get_lsst_bibtex(bibtex_filenames=None):\n \"\"\"Get content of lsst-texmf bibliographies.\n\n BibTeX content is downloaded from GitHub (``master`` branch of\n https://github.com/lsst/lsst-texmf) or retrieved from an in-memory cache.\n\n Parameters\n ----------\n bibtex_filenames : sequence of `str`, optional\n List of lsst-texmf BibTeX files to retrieve. These can be the filenames\n of lsst-bibtex files (for example, ``['lsst.bib', 'lsst-dm.bib']``)\n or names without an extension (``['lsst', 'lsst-dm']``). The default\n (recommended) is to get *all* lsst-texmf bibliographies:\n\n .. code-block:: python\n\n ['lsst', 'lsst-dm', 'refs', 'books', 'refs_ads']\n\n Returns\n -------\n bibtex : `dict`\n Dictionary with keys that are bibtex file names (such as ``'lsst'``,\n ``'lsst-dm'``). 
Values are the corresponding bibtex file content\n (`str`).\n \"\"\"\n logger = logging.getLogger(__name__)\n\n if bibtex_filenames is None:\n # Default lsst-texmf bibliography files\n bibtex_names = KNOWN_LSSTTEXMF_BIB_NAMES\n else:\n # Sanitize filenames (remove extensions, path)\n bibtex_names = []\n for filename in bibtex_filenames:\n name = os.path.basename(os.path.splitext(filename)[0])\n if name not in KNOWN_LSSTTEXMF_BIB_NAMES:\n logger.warning('%r is not a known lsst-texmf bib file',\n name)\n continue\n bibtex_names.append(name)\n\n # names of bibtex files not in cache\n uncached_names = [name for name in bibtex_names\n if name not in _LSSTTEXMF_BIB_CACHE]\n if len(uncached_names) > 0:\n # Download bibtex and put into the cache\n loop = asyncio.get_event_loop()\n future = asyncio.ensure_future(_download_lsst_bibtex(uncached_names))\n loop.run_until_complete(future)\n for name, text in zip(bibtex_names, future.result()):\n _LSSTTEXMF_BIB_CACHE[name] = text\n\n return {name: _LSSTTEXMF_BIB_CACHE[name] for name in bibtex_names}"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_bibliography(lsst_bib_names=None, bibtex=None):\n bibtex_data = get_lsst_bibtex(bibtex_filenames=lsst_bib_names)\n\n # Parse with pybtex into BibliographyData instances\n pybtex_data = [pybtex.database.parse_string(_bibtex, 'bibtex')\n for _bibtex in bibtex_data.values()]\n\n # Also parse local bibtex content\n if bibtex is not None:\n pybtex_data.append(pybtex.database.parse_string(bibtex, 'bibtex'))\n\n # Merge BibliographyData\n bib = pybtex_data[0]\n if len(pybtex_data) > 1:\n for other_bib in pybtex_data[1:]:\n for key, entry in other_bib.entries.items():\n bib.add_entry(key, entry)\n\n return bib", "response": "Get a BibliographyData instance from standard lsst - texmf bibtex files and local bibtex content."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following 
Python 3 function\ndef get_url_from_entry(entry):\n if 'url' in entry.fields:\n return entry.fields['url']\n elif entry.type.lower() == 'docushare':\n return 'https://ls.st/' + entry.fields['handle']\n elif 'adsurl' in entry.fields:\n return entry.fields['adsurl']\n elif 'doi' in entry.fields:\n return 'https://doi.org/' + entry.fields['doi']\n else:\n raise NoEntryUrlError()", "response": "Get a usable URL from a pybtex entry."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngets and formats author-year text from a pybtex entry to emulate natbib citations.", "response": "def get_authoryear_from_entry(entry, paren=False):\n \"\"\"Get and format author-year text from a pybtex entry to emulate\n natbib citations.\n\n Parameters\n ----------\n entry : `pybtex.database.Entry`\n A pybtex bibliography entry.\n paren : `bool`, optional\n Whether to add parentheses around the year. Default is `False`.\n\n Returns\n -------\n authoryear : `str`\n The author-year citation text.\n \"\"\"\n def _format_last(person):\n \"\"\"Reformat a pybtex Person into a last name.\n\n Joins all parts of a last name and strips \"{}\" wrappers.\n \"\"\"\n return ' '.join([n.strip('{}') for n in person.last_names])\n\n if len(entry.persons['author']) > 0:\n # Grab author list\n persons = entry.persons['author']\n elif len(entry.persons['editor']) > 0:\n # Grab editor list\n persons = entry.persons['editor']\n else:\n raise AuthorYearError\n\n try:\n year = entry.fields['year']\n except KeyError:\n raise AuthorYearError\n\n if paren and len(persons) == 1:\n template = '{author} ({year})'\n return template.format(author=_format_last(persons[0]),\n year=year)\n elif not paren and len(persons) == 1:\n template = '{author} {year}'\n return template.format(author=_format_last(persons[0]),\n year=year)\n elif paren and len(persons) == 2:\n template = '{author1} and {author2} ({year})'\n return template.format(author1=_format_last(persons[0]),\n 
author2=_format_last(persons[1]),\n year=year)\n elif not paren and len(persons) == 2:\n template = '{author1} and {author2} {year}'\n return template.format(author1=_format_last(persons[0]),\n author2=_format_last(persons[1]),\n year=year)\n elif not paren and len(persons) > 2:\n template = '{author} et al {year}'\n return template.format(author=_format_last(persons[0]),\n year=year)\n elif paren and len(persons) > 2:\n template = '{author} et al ({year})'\n return template.format(author=_format_last(persons[0]),\n year=year)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nextracting, transforming, and loading Sphinx-based technote metadata.", "response": "async def process_sphinx_technote(session, github_api_token, ltd_product_data,\n mongo_collection=None):\n \"\"\"Extract, transform, and load Sphinx-based technote metadata.\n\n Parameters\n ----------\n session : `aiohttp.ClientSession`\n Your application's aiohttp client session.\n See http://aiohttp.readthedocs.io/en/stable/client.html.\n github_api_token : `str`\n A GitHub personal API token. See the `GitHub personal access token\n guide`_.\n ltd_product_data : `dict`\n Data for this technote from the LTD Keeper API\n (``GET /products/``). Usually obtained via\n `lsstprojectmeta.ltd.get_ltd_product`.\n mongo_collection : `motor.motor_asyncio.AsyncIOMotorCollection`, optional\n MongoDB collection. This should be the common MongoDB collection for\n LSST projectmeta JSON-LD records. If provided, this JSON-LD is upserted\n into the MongoDB collection.\n\n Returns\n -------\n metadata : `dict`\n JSON-LD-formatted dictionary.\n\n Raises\n ------\n NotSphinxTechnoteError\n Raised when the LTD product cannot be interpreted as a Sphinx-based\n technote project because it's missing a metadata.yaml file in its\n GitHub repository. This implies that the LTD product *could* be of a\n different format.\n\n .. 
_`GitHub personal access token guide`: https://ls.st/41d\n \"\"\"\n logger = logging.getLogger(__name__)\n\n github_url = ltd_product_data['doc_repo']\n github_url = normalize_repo_root_url(github_url)\n repo_slug = parse_repo_slug_from_url(github_url)\n\n try:\n metadata_yaml = await download_metadata_yaml(session, github_url)\n except aiohttp.ClientResponseError as err:\n # metadata.yaml not found; probably not a Sphinx technote\n logger.debug('Tried to download %s\\'s metadata.yaml, got status %d',\n ltd_product_data['slug'], err.code)\n raise NotSphinxTechnoteError()\n\n # Extract data from the GitHub API\n github_query = GitHubQuery.load('technote_repo')\n github_variables = {\n \"orgName\": repo_slug.owner,\n \"repoName\": repo_slug.repo\n }\n github_data = await github_request(session, github_api_token,\n query=github_query,\n variables=github_variables)\n\n try:\n jsonld = reduce_technote_metadata(\n github_url, metadata_yaml, github_data, ltd_product_data)\n except Exception:\n # logger.exception already records the traceback; only the\n # URL is needed for the single %s placeholder.\n message = \"Issue building JSON-LD for technote %s\"\n logger.exception(message, github_url)\n raise\n\n if mongo_collection is not None:\n await _upload_to_mongodb(mongo_collection, jsonld)\n\n logger.info('Ingested technote %s into MongoDB', github_url)\n\n return jsonld"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreduce a technote project's metadata from multiple sources into a single JSON-LD resource.", "response": "def reduce_technote_metadata(github_url, metadata, github_data,\n ltd_product_data):\n \"\"\"Reduce a technote project's metadata from multiple sources into a\n single JSON-LD resource.\n\n Parameters\n ----------\n github_url : `str`\n URL of the technote's GitHub repository.\n metadata : `dict`\n The parsed contents of ``metadata.yaml`` found in a technote's\n repository.\n github_data : `dict`\n The contents of the ``technote_repo`` GitHub GraphQL API query.\n ltd_product_data : `dict`\n JSON 
dataset for the technote corresponding to the\n ``/products/`` of LTD Keeper.\n\n Returns\n -------\n metadata : `dict`\n JSON-LD-formatted dictionary.\n\n .. `GitHub personal access token guide`: https://ls.st/41d\n \"\"\"\n repo_slug = parse_repo_slug_from_url(github_url)\n\n # Initialize a schema.org/Report and schema.org/SoftwareSourceCode\n # linked data resource\n jsonld = {\n '@context': [\n \"https://raw.githubusercontent.com/codemeta/codemeta/2.0-rc/\"\n \"codemeta.jsonld\",\n \"http://schema.org\"],\n '@type': ['Report', 'SoftwareSourceCode'],\n 'codeRepository': github_url\n }\n\n if 'url' in metadata:\n url = metadata['url']\n elif 'published_url' in ltd_product_data:\n url = ltd_product_data['published_url']\n else:\n raise RuntimeError('No identifying url could be found: '\n '{}'.format(github_url))\n jsonld['@id'] = url\n jsonld['url'] = url\n\n if 'series' in metadata and 'serial_number' in metadata:\n jsonld['reportNumber'] = '{series}-{serial_number}'.format(**metadata)\n else:\n raise RuntimeError('No reportNumber: {}'.format(github_url))\n\n if 'doc_title' in metadata:\n jsonld['name'] = metadata['doc_title']\n\n if 'description' in metadata:\n jsonld['description'] = metadata['description']\n\n if 'authors' in metadata:\n jsonld['author'] = [{'@type': 'Person', 'name': author_name}\n for author_name in metadata['authors']]\n\n if 'last_revised' in metadata:\n # Prefer getting the 'last_revised' date from metadata.yaml\n # since it's considered an override.\n jsonld['dateModified'] = datetime.datetime.strptime(\n metadata['last_revised'],\n '%Y-%m-%d')\n else:\n # Fallback to parsing the date of the last commit to the\n # default branch on GitHub (usually `master`).\n try:\n _repo_data = github_data['data']['repository']\n _master_data = _repo_data['defaultBranchRef']\n jsonld['dateModified'] = datetime.datetime.strptime(\n _master_data['target']['committedDate'],\n '%Y-%m-%dT%H:%M:%SZ')\n except KeyError:\n pass\n\n try:\n _license_data = 
github_data['data']['repository']['licenseInfo']\n _spdxId = _license_data['spdxId']\n if _spdxId is not None:\n _spdx_url = 'https://spdx.org/licenses/{}.html'.format(_spdxId)\n jsonld['license'] = _spdx_url\n except KeyError:\n pass\n\n try:\n # Find the README(|.md|.rst|*) file in the repo root\n _master_data = github_data['data']['repository']['defaultBranchRef']\n _files = _master_data['target']['tree']['entries']\n for _node in _files:\n filename = _node['name']\n normalized_filename = filename.lower()\n if normalized_filename.startswith('readme'):\n readme_url = make_raw_content_url(repo_slug, 'master',\n filename)\n jsonld['readme'] = readme_url\n break\n except KeyError:\n pass\n\n # Assume Travis is the CI service (always true at the moment)\n travis_url = 'https://travis-ci.org/{}'.format(repo_slug.full)\n jsonld['contIntegration'] = travis_url\n\n return jsonld"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ndownloading the metadata. yaml file from a technote s GitHub repository.", "response": "async def download_metadata_yaml(session, github_url):\n \"\"\"Download the metadata.yaml file from a technote's GitHub repository.\n \"\"\"\n metadata_yaml_url = _build_metadata_yaml_url(github_url)\n async with session.get(metadata_yaml_url) as response:\n response.raise_for_status()\n yaml_data = await response.text()\n return yaml.safe_load(yaml_data)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef parse_repo_slug_from_url(github_url):\n match = GITHUB_SLUG_PATTERN.match(github_url)\n if not match:\n message = 'Could not parse GitHub slug from {}'.format(github_url)\n raise RuntimeError(message)\n\n _full = '/'.join((match.group('org'),\n match.group('name')))\n return RepoSlug(_full, match.group('org'), match.group('name'))", "response": "Parse a GitHub repository slug from a URL."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, 
write the documentation\ndef make_raw_content_url(repo_slug, git_ref, file_path):\n if isinstance(repo_slug, RepoSlug):\n slug_str = repo_slug.full\n else:\n slug_str = repo_slug\n\n if file_path.startswith('/'):\n file_path = file_path.lstrip('/')\n\n template = 'https://raw.githubusercontent.com/{slug}/{git_ref}/{path}'\n return template.format(\n slug=slug_str,\n git_ref=git_ref,\n path=file_path)", "response": "Make a raw content URL to a file."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef tz(self):\n if not self._tz:\n self._tz = tzlocal.get_localzone().zone\n return self._tz", "response": "Return the timezone. If none is set use system timezone"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef add_tag(self, _tags):\n if isinstance(_tags, list):\n for t in _tags:\n self.tags.append(t)\n else:\n self.tags.append(_tags)", "response": "Add tag ( s ) to a DayOneEntry"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef time(self, t):\n _time = arrow.get(t).format('YYYY-MM-DDTHH:mm:ss')\n self._time = datetime.datetime.strptime(_time, '%Y-%m-%dT%H:%M:%S')", "response": "Convert any timestamp into a datetime and save as _time"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn a dict that represents the DayOneEntry", "response": "def as_dict(self):\n \"\"\"Return a dict that represents the DayOneEntry\"\"\"\n entry_dict = {}\n entry_dict['UUID'] = self.uuid\n entry_dict['Creation Date'] = self.time\n entry_dict['Time Zone'] = self.tz\n if self.tags:\n entry_dict['Tags'] = self.tags\n entry_dict['Entry Text'] = self.text\n entry_dict['Starred'] = self.starred\n entry_dict['Location'] = self.location\n return entry_dict"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef save(self, entry, with_location=True, 
debug=False):\n entry_dict = {}\n if isinstance(entry, DayOneEntry):\n # Get a dict of the DayOneEntry\n entry_dict = entry.as_dict()\n else:\n entry_dict = entry\n \n # Set the UUID\n entry_dict['UUID'] = uuid.uuid4().hex\n if with_location and not entry_dict['Location']:\n entry_dict['Location'] = self.get_location()\n\n\n # Do we have everything needed?\n if not all((entry_dict['UUID'], entry_dict['Time Zone'],\n entry_dict['Entry Text'])):\n print(\"You must provide: Time zone, UUID, Creation Date, Entry Text\")\n return False\n\n if debug is False:\n file_path = self._file_path(entry_dict['UUID'])\n plistlib.writePlist(entry_dict, file_path)\n else:\n plist = plistlib.writePlistToString(entry_dict)\n print(plist)\n\n return True", "response": "Saves a DayOneEntry as a plist"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncreating and returning the full file path for a DayOne entry", "response": "def _file_path(self, uid):\n \"\"\"Create and return full file path for DayOne entry\"\"\"\n file_name = '%s.doentry' % (uid)\n return os.path.join(self.dayone_journal_path, file_name)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncombines many files into a single file on disk.", "response": "def combine(self, members, output_file, dimension=None, start_index=None, stop_index=None, stride=None):\n \"\"\" Combine many files into a single file on disk. Defaults to using the 'time' dimension. \"\"\"\n nco = None\n try:\n nco = Nco()\n except BaseException:\n # This is not necessarily an import error (could be wrong PATH)\n raise ImportError(\"NCO not found. 
The NCO python bindings are required to use 'Collection.combine'.\")\n\n if len(members) > 0 and hasattr(members[0], 'path'):\n # A member DotDict was passed in; we only need the paths\n members = [ m.path for m in members ]\n\n options = ['-4'] # NetCDF4\n options += ['-L', '3'] # Level 3 compression\n options += ['-h'] # Don't append to the history global attribute\n if dimension is not None:\n if start_index is None:\n start_index = 0\n if stop_index is None:\n stop_index = ''\n if stride is None:\n stride = 1\n options += ['-d', '{0},{1},{2},{3}'.format(dimension, start_index, stop_index, stride)]\n nco.ncrcat(input=members, output=output_file, options=options)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef main(argv=None, white_list=None, load_yaz_extension=True):\n assert argv is None or isinstance(argv, list), type(argv)\n assert white_list is None or isinstance(white_list, list), type(white_list)\n assert isinstance(load_yaz_extension, bool), type(load_yaz_extension)\n\n argv = sys.argv if argv is None else argv\n assert len(argv) > 0, len(argv)\n\n if load_yaz_extension:\n load(\"~/.yaz\", \"yaz_extension\")\n\n parser = Parser(prog=argv[0])\n parser.add_task_tree(get_task_tree(white_list))\n\n task, kwargs = parser.parse_arguments(argv)\n\n if task:\n try:\n result = task(**kwargs)\n\n # when the result is a boolean, exit with 0 (success) or 1 (failure)\n if isinstance(result, bool):\n code = 0 if result else 1\n output = None\n\n # when the result is an integer, exit with that integer value\n elif isinstance(result, int):\n code = result % 256\n output = None\n\n # otherwise exit with 0 (success) and print the result\n else:\n code = 0\n output = result\n\n # when yaz.Error occurs, exit with the given return code and print the error message\n # when any other error occurs, let python handle the exception (i.e. 
exit(1) and print call stack)\n except Error as error:\n code = error.return_code\n output = error\n\n else:\n # when no task is found to execute, exit with 1 (failure) and print the help text\n code = 1\n output = parser.format_help().rstrip()\n\n if output is not None:\n print(output)\n\n sys.exit(code)", "response": "This function is called by the yaz script to run the yaz task."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning a tree of Task instances for all tasks in the task list.", "response": "def get_task_tree(white_list=None):\n \"\"\"Returns a tree of Task instances\n\n The tree is comprised of dictionaries containing strings for\n keys and either dictionaries or Task instances for values.\n\n When WHITE_LIST is given, only the tasks and plugins in this\n list will become part of the task tree. The WHITE_LIST may\n contain either strings, corresponding to the task or plugin\n __qualname__, or, preferably, the WHITE_LIST contains\n links to the task function or plugin class instead.\n \"\"\"\n assert white_list is None or isinstance(white_list, list), type(white_list)\n\n if white_list is not None:\n white_list = set(item if isinstance(item, str) else item.__qualname__ for item in white_list)\n\n tree = dict((task.qualified_name, task)\n for task\n in _task_list.values()\n if white_list is None or task.qualified_name in white_list)\n\n plugins = get_plugin_list()\n for plugin in [plugin for plugin in plugins.values() if white_list is None or plugin.__qualname__ in white_list]:\n tasks = [func\n for _, func\n in inspect.getmembers(plugin)\n if inspect.isfunction(func) and hasattr(func, \"yaz_task_config\")]\n if len(tasks) == 0:\n continue\n\n node = tree\n for name in plugin.__qualname__.split(\".\"):\n if not name in node:\n node[name] = {}\n node = node[name]\n\n for func in tasks:\n logger.debug(\"Found task %s\", func)\n node[func.__name__] = Task(plugin_class=plugin, func=func, config=func.yaz_task_config)\n\n 
return tree"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef task(func, **config):\n if func.__name__ == func.__qualname__:\n assert not func.__qualname__ in _task_list, \"Can not define the same task \\\"{}\\\" twice\".format(func.__qualname__)\n logger.debug(\"Found task %s\", func)\n _task_list[func.__qualname__] = Task(plugin_class=None, func=func, config=config)\n else:\n func.yaz_task_config = config\n\n return func", "response": "Declare a function or method to be a Yaz task"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning a list of parameters", "response": "def get_parameters(self):\n \"\"\"Returns a list of parameters\"\"\"\n if self.plugin_class is None:\n sig = inspect.signature(self.func)\n for index, parameter in enumerate(sig.parameters.values()):\n if not parameter.kind in [parameter.POSITIONAL_ONLY, parameter.KEYWORD_ONLY, parameter.POSITIONAL_OR_KEYWORD]:\n raise RuntimeError(\"Task {} contains an unsupported {} parameter\".format(parameter, parameter.kind))\n\n yield parameter\n\n else:\n var_keyword_seen = set()\n\n for cls in inspect.getmro(self.plugin_class):\n if issubclass(cls, BasePlugin) and hasattr(cls, self.func.__name__):\n func = getattr(cls, self.func.__name__)\n logger.debug(\"Found method %s from class %s\", func, cls)\n var_keyword_found = False\n sig = inspect.signature(func)\n for index, parameter in enumerate(sig.parameters.values()):\n if index == 0:\n # skip \"self\" parameter\n continue\n\n if parameter.kind == inspect.Parameter.VAR_KEYWORD:\n # found \"**kwargs\" parameter. we will continue to the next class in the mro\n # to add any keyword parameters we have not yet used (i.e. 
whose name\n # we have not yet seen)\n var_keyword_found = True\n continue\n\n if parameter.kind in [parameter.POSITIONAL_ONLY, parameter.VAR_POSITIONAL]:\n raise RuntimeError(\"Task {} contains an unsupported parameter \\\"{}\\\"\".format(func, parameter))\n\n if not parameter.name in var_keyword_seen:\n var_keyword_seen.add(parameter.name)\n\n logger.debug(\"Found parameter %s (%s)\", parameter, parameter.kind)\n yield parameter\n\n # we only need to look at the next class in the mro\n # when \"**kwargs\" is found\n if not var_keyword_found:\n break"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_configuration(self, key, default=None):\n if key in self.config:\n return self.config.get(key)\n else:\n return default", "response": "Returns the configuration for KEY"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_plugin_list():\n global _yaz_plugin_classes\n\n def get_recursively(cls, plugin_list):\n for plugin in cls.__subclasses__():\n if not (plugin.yaz_is_final() or plugin.__qualname__ in _yaz_plugin_classes):\n plugin_list[plugin.__qualname__].append(plugin)\n get_recursively(plugin, plugin_list)\n return plugin_list\n\n def include_class(candidate, classes):\n for cls in classes:\n if candidate is cls:\n continue\n\n if issubclass(cls, candidate):\n return False\n\n return True\n\n def get_plugin_type(qualname, plugins):\n classes = sorted(plugins, key=lambda plugin: plugin.yaz_get_ordinal())\n\n # exclude classes that are implicitly included as parent classes\n classes = [cls for cls in classes if include_class(cls, classes)]\n logger.debug(\"New plugin class \\\"%s\\\" extending %s\", qualname, [cls for cls in classes])\n\n return type(qualname, tuple(classes) + (Final,), {})\n\n logger.debug(\"Plugin list: %s\" % _yaz_plugin_classes)\n\n # find all Plugin classes recursively\n plugin_list = 
get_recursively(BasePlugin, collections.defaultdict(list))\n\n # combine all classes into their Plugin class (i.e. multiple inherited plugin)\n _yaz_plugin_classes.update((qualname, get_plugin_type(qualname, plugins))\n for qualname, plugins\n in plugin_list.items())\n\n assert isinstance(_yaz_plugin_classes, dict), type(_yaz_plugin_classes)\n assert all(isinstance(qualname, str) for qualname in _yaz_plugin_classes.keys()), \"Every key should be a string\"\n assert all(issubclass(plugin_class, Final) for plugin_class in _yaz_plugin_classes.values()), \"Every value should be a 'Final' plugin\"\n\n return _yaz_plugin_classes", "response": "Finds all yaz plugins and returns them in a __qualname__ dictionary"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_plugin_instance(plugin_class, *args, **kwargs):\n assert issubclass(plugin_class, BasePlugin), type(plugin_class)\n\n global _yaz_plugin_instance_cache\n\n qualname = plugin_class.__qualname__\n if not qualname in _yaz_plugin_instance_cache:\n plugin_class = get_plugin_list()[qualname]\n _yaz_plugin_instance_cache[qualname] = plugin = plugin_class(*args, **kwargs)\n\n # find any yaz.dependency decorators, and call them when necessary\n funcs = [func\n for _, func\n in inspect.getmembers(plugin)\n if inspect.ismethod(func) and hasattr(func, \"yaz_dependency_config\")]\n\n for func in funcs:\n signature = inspect.signature(func)\n assert all(parameter.kind is parameter.POSITIONAL_OR_KEYWORD and issubclass(parameter.annotation, BasePlugin) for parameter in signature.parameters.values()), \"All parameters for {} must type hint to a BasePlugin\".format(func)\n func(*[get_plugin_instance(parameter.annotation)\n for parameter\n in signature.parameters.values()])\n\n return _yaz_plugin_instance_cache[qualname]", "response": "Returns an instance of a fully initialized plugin class."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function 
in Python 3, explain what it does\ndef xml_to_json(root):\n j = {}\n\n if len(root) == 0: # Tag with no children, return str/int\n return _maybe_intify(root.text)\n\n if len(root) == 1 and root[0].tag.startswith('{' + NS_GML): # GML\n return gml_to_geojson(root[0])\n\n if root.tag == 'open511':\n j['meta'] = {'version': root.get('version')}\n\n for elem in root:\n name = elem.tag\n if name == 'link' and elem.get('rel'):\n name = elem.get('rel') + '_url'\n if name == 'self_url':\n name = 'url'\n if root.tag == 'open511':\n j['meta'][name] = elem.get('href')\n continue\n elif name.startswith('{' + NS_PROTECTED):\n name = '!' + name[name.index('}') + 1:] \n elif name[0] == '{':\n # Namespace!\n name = '+' + name[name.index('}') + 1:]\n\n if name in j:\n continue # duplicate\n elif elem.tag == 'link' and not elem.text:\n j[name] = elem.get('href')\n elif len(elem):\n if name == 'grouped_events':\n # An array of URLs\n j[name] = [xml_link_to_json(child, to_dict=False) for child in elem]\n elif name in ('attachments', 'media_files'):\n # An array of JSON objects\n j[name] = [xml_link_to_json(child, to_dict=True) for child in elem]\n elif all((name == pluralize(child.tag) for child in elem)):\n # serializes to a JSON array\n j[name] = [xml_to_json(child) for child in elem]\n else:\n j[name] = xml_to_json(elem)\n else:\n if root.tag == 'open511' and name.endswith('s') and not elem.text:\n # Special case: an empty e.g. 
container at the root level\n # should be serialized to [], not null\n j[name] = []\n else:\n j[name] = _maybe_intify(elem.text)\n\n return j", "response": "Convert an Open511 XML document or document fragment to JSON."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngiving an lxml Element of a GML geometry returns a dict in GeoJSON format.", "response": "def gml_to_geojson(el):\n \"\"\"Given an lxml Element of a GML geometry, returns a dict in GeoJSON format.\"\"\"\n if el.get('srsName') not in ('urn:ogc:def:crs:EPSG::4326', None):\n if el.get('srsName') == 'EPSG:4326':\n return _gmlv2_to_geojson(el)\n else:\n raise NotImplementedError(\"Unrecognized srsName %s\" % el.get('srsName'))\n tag = el.tag.replace('{%s}' % NS_GML, '')\n if tag == 'Point':\n coordinates = _reverse_gml_coords(el.findtext('{%s}pos' % NS_GML))[0]\n elif tag == 'LineString':\n coordinates = _reverse_gml_coords(el.findtext('{%s}posList' % NS_GML))\n elif tag == 'Polygon':\n coordinates = []\n for ring in el.xpath('gml:exterior/gml:LinearRing/gml:posList', namespaces=NSMAP) \\\n + el.xpath('gml:interior/gml:LinearRing/gml:posList', namespaces=NSMAP):\n coordinates.append(_reverse_gml_coords(ring.text))\n elif tag in ('MultiPoint', 'MultiLineString', 'MultiPolygon'):\n single_type = tag[5:]\n member_tag = single_type[0].lower() + single_type[1:] + 'Member'\n coordinates = [\n gml_to_geojson(member)['coordinates']\n for member in el.xpath('gml:%s/gml:%s' % (member_tag, single_type), namespaces=NSMAP)\n ]\n else:\n raise NotImplementedError\n\n return {\n 'type': tag,\n 'coordinates': coordinates\n }"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _gmlv2_to_geojson(el):\n tag = el.tag.replace('{%s}' % NS_GML, '')\n if tag == 'Point':\n coordinates = [float(c) for c in el.findtext('{%s}coordinates' % NS_GML).split(',')]\n elif tag == 'LineString':\n coordinates = [\n [float(x) for x in 
pair.split(',')]\n for pair in el.findtext('{%s}coordinates' % NS_GML).split(' ')\n ]\n elif tag == 'Polygon':\n coordinates = []\n for ring in el.xpath('gml:outerBoundaryIs/gml:LinearRing/gml:coordinates', namespaces=NSMAP) \\\n + el.xpath('gml:innerBoundaryIs/gml:LinearRing/gml:coordinates', namespaces=NSMAP):\n coordinates.append([\n [float(x) for x in pair.split(',')]\n for pair in ring.text.split(' ')\n ])\n elif tag in ('MultiPoint', 'MultiLineString', 'MultiPolygon', 'MultiCurve'):\n if tag == 'MultiCurve':\n single_type = 'LineString'\n member_tag = 'curveMember'\n else:\n single_type = tag[5:]\n member_tag = single_type[0].lower() + single_type[1:] + 'Member'\n coordinates = [\n gml_to_geojson(member)['coordinates']\n for member in el.xpath('gml:%s/gml:%s' % (member_tag, single_type), namespaces=NSMAP)\n ]\n else:\n raise NotImplementedError\n\n return {\n 'type': tag,\n 'coordinates': coordinates\n }", "response": "Translates a deprecated GML 2. 0 geometry to GeoJSON"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef deparagraph(element, doc):\n if isinstance(element, Para):\n # Check if siblings exist; don't process the paragraph in that case.\n if element.next is not None:\n return element\n elif element.prev is not None:\n return element\n\n # Remove the Para wrapper from the lone paragraph.\n # `Plain` is a container that isn't rendered as a paragraph.\n return Plain(*element.content)", "response": "Panflute filter function that converts content wrapped in a Para to a lone paragraph."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef all_subclasses(cls):\n for subclass in cls.__subclasses__():\n yield subclass\n for subc in all_subclasses(subclass):\n yield subc", "response": "Recursively generate all the subclasses of cls"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nlists unique 
elements preserving order. Remember only the element just seen.", "response": "def unique_justseen(iterable, key=None):\n \"List unique elements, preserving order. Remember only the element just seen.\"\n # unique_justseen('AAAABBBCCDAABBB') --> A B C D A B\n # unique_justseen('ABBCcAD', str.lower) --> A B C A D\n try:\n # PY2 support\n from itertools import imap as map\n except ImportError:\n from builtins import map\n\n return map(next, map(operator.itemgetter(1), itertools.groupby(iterable, key)))"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef normalize_array(var):\n if np.issubdtype(var.dtype, 'S1'):\n if var.dtype == str:\n # Python 2 on netCDF4 'string' variables needs this.\n # Python 3 returns false for np.issubdtype(var.dtype, 'S1')\n return var[:]\n\n def decoder(x):\n return str(x.decode('utf-8'))\n vfunc = np.vectorize(decoder)\n return vfunc(nc4.chartostring(var[:]))\n else:\n return var[:]", "response": "Normalizes a NetCDF4 variable."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef generic_masked(arr, attrs=None, minv=None, maxv=None, mask_nan=True):\n attrs = attrs or {}\n\n if 'valid_min' in attrs:\n minv = safe_attribute_typing(arr.dtype, attrs['valid_min'])\n if 'valid_max' in attrs:\n maxv = safe_attribute_typing(arr.dtype, attrs['valid_max'])\n if 'valid_range' in attrs:\n vr = attrs['valid_range']\n minv = safe_attribute_typing(arr.dtype, vr[0])\n maxv = safe_attribute_typing(arr.dtype, vr[1])\n\n # Get the min/max of values that the hardware supports\n try:\n info = np.iinfo(arr.dtype)\n except ValueError:\n info = np.finfo(arr.dtype)\n\n minv = minv if minv is not None else info.min\n maxv = maxv if maxv is not None else info.max\n\n if mask_nan is True:\n arr = np.ma.fix_invalid(arr)\n\n return np.ma.masked_outside(\n arr,\n minv,\n maxv\n )", "response": "Generic masked version of the array."} {"SOURCE": 
"codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef dictify_urn(urn, combine_interval=True):\n ioos_urn = IoosUrn.from_string(urn)\n\n if ioos_urn.valid() is False:\n return dict()\n\n if ioos_urn.asset_type != 'sensor':\n logger.error(\"This function only works on 'sensor' URNs.\")\n return dict()\n\n if '#' in ioos_urn.component:\n standard_name, extras = ioos_urn.component.split('#')\n else:\n standard_name = ioos_urn.component\n extras = ''\n\n d = dict(standard_name=standard_name)\n\n # Discriminant\n if '-' in ioos_urn.component:\n d['discriminant'] = standard_name.split('-')[-1]\n d['standard_name'] = standard_name.split('-')[0]\n\n intervals = []\n cell_methods = []\n if extras:\n for section in extras.split(';'):\n key, values = section.split('=')\n if key == 'interval':\n # special case, intervals should be appended to the cell_methods\n for v in values.split(','):\n intervals.append(v)\n else:\n if key == 'cell_methods':\n value = [ x.replace('_', ' ').replace(':', ': ') for x in values.split(',') ]\n cell_methods = value\n else:\n value = ' '.join([x.replace('_', ' ').replace(':', ': ') for x in values.split(',')])\n d[key] = value\n\n if combine_interval is True:\n if cell_methods and intervals:\n if len(cell_methods) == len(intervals):\n d['cell_methods'] = ' '.join([ '{} (interval: {})'.format(x[0], x[1].upper()) for x in zip(cell_methods, intervals) ])\n else:\n d['cell_methods'] = ' '.join(cell_methods)\n for i in intervals:\n d['cell_methods'] += ' (interval: {})'.format(i.upper())\n elif cell_methods:\n d['cell_methods'] = ' '.join(cell_methods)\n for i in intervals:\n d['cell_methods'] += ' (interval: {})'.format(i.upper())\n elif intervals:\n raise ValueError(\"An interval without a cell_method is not allowed! 
Not possible!\")\n else:\n d['cell_methods'] = ' '.join(cell_methods)\n d['interval'] = ','.join(intervals).upper()\n\n if 'vertical_datum' in d:\n d['vertical_datum'] = d['vertical_datum'].upper()\n\n return d", "response": "Returns a dictionary with the given URN."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef default(self, obj):\n if isinstance(obj, np.ndarray):\n return obj.tolist()\n elif isinstance(obj, np.generic):\n return np.asscalar(obj)\n # Let the base class default method raise the TypeError\n return json.JSONEncoder(self, obj)", "response": "Return the default encoder for the object."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef default(self, obj):\n if isinstance(obj, np.ndarray):\n if obj.flags['C_CONTIGUOUS']:\n obj_data = obj.data\n else:\n cont_obj = np.ascontiguousarray(obj)\n assert(cont_obj.flags['C_CONTIGUOUS'])\n obj_data = cont_obj.data\n data_b64 = base64.b64encode(obj_data)\n return dict(__ndarray__=data_b64,\n dtype=str(obj.dtype),\n shape=obj.shape)\n elif isinstance(obj, np.generic):\n return np.asscalar(obj)\n # Let the base class default method raise the TypeError\n return json.JSONEncoder(self, obj)", "response": "Return a dictionary representation of the object."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef mapfivo(ol,*args,**kwargs):\n '''\n #mapfivo f,i,v,o\u56db\u5143\u51b3\u5b9a fivo-4-tuple-engine\n #map_func diff_func(index,value,*diff_args)\n '''\n args = list(args)\n lngth = args.__len__()\n if(lngth==0):\n diff_funcs_arr = kwargs['map_funcs']\n diff_args_arr = kwargs['map_func_args_array']\n elif(lngth==1):\n if('map_func_args_array' in kwargs):\n diff_funcs_arr = args[0]\n diff_args_arr = kwargs['map_func_args_array']\n else:\n diff_funcs_arr = kwargs['map_funcs']\n diff_args_arr = args[0]\n else:\n diff_funcs_arr = 
args[0]\n        diff_args_arr = args[1]\n    lngth = ol.__len__()\n    rslt = []\n    for i in range(0,lngth):\n        index = i\n        value = ol[i]\n        func = diff_funcs_arr[i]\n        args = diff_args_arr[i]\n        ele = func(index,value,*args)\n        rslt.append(ele)\n    return(rslt)", "response": "mapfivo: the f,i,v,o 4-tuple engine; apply a per-index function with per-index extra args to each (index,value) pair."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef mapfiv(ol,map_func_args,**kwargs):\n    '''\n        #mapfiv \u5171\u4eab\u76f8\u540c\u7684o share common other_args\n        #map_func     diff_func(index,value,*common_args)\n    '''\n    lngth = ol.__len__()\n    diff_funcs_arr = kwargs['map_funcs']\n    common_args_arr = init(lngth,map_func_args)\n    rslt = mapfivo(ol,map_funcs=diff_funcs_arr,map_func_args_array=common_args_arr)\n    return(rslt)", "response": "mapfiv: apply a different function to each (index,value) pair, sharing common other_args."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef mapivo(ol,map_func,**kwargs):\n    '''\n        #mapivo \u5171\u4eab\u76f8\u540c\u7684f share common map_func\n        #map_func     common_func(index,value,*diff_args)\n    '''\n    lngth = ol.__len__()\n    common_funcs_arr = init(lngth,map_func)\n    diff_args_arr = kwargs['map_func_args_array']\n    rslt = mapfivo(ol,map_funcs=common_funcs_arr,map_func_args_array=diff_args_arr)\n    return(rslt)", "response": "mapivo: apply one common map_func to each (index,value) pair with per-index extra args."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef array_dualmap(ol,value_map_func,**kwargs):\n    '''\n        from elist.elist import *\n        ol = ['a','b','c','d']\n        def index_map_func(index,prefix,suffix):\n            s = prefix +str(index+97)+ suffix\n            return(s)\n        \n        def value_map_func(mapped_index,ele,prefix,suffix):\n            s = prefix+mapped_index+': ' + str(ele) + suffix\n            return(s)\n        \n        ####\n        rslt = array_dualmap(ol,index_map_func=index_map_func,index_map_func_args=[': ',' is '],value_map_func=value_map_func,value_map_func_args=['ord',' 
yes?'])\n        pobj(rslt)\n    '''\n    def get_self(obj):\n        return(obj)\n    if('index_map_func_args' in kwargs):\n        index_map_func_args = kwargs['index_map_func_args']\n    else:\n        index_map_func_args = []\n    if('value_map_func_args' in kwargs):\n        value_map_func_args = kwargs['value_map_func_args']\n    else:\n        value_map_func_args = []\n    if('index_map_func' in kwargs):\n        index_map_func = kwargs['index_map_func']\n    else:\n        index_map_func = get_self\n    length = ol.__len__()\n    il = list(range(0,length))\n    nil = list(map(lambda ele:index_map_func(ele,*index_map_func_args),il))\n    nvl = []\n    for i in range(0,length):\n        ele = ol[i]\n        v = value_map_func(nil[i],ele,*value_map_func_args)\n        nvl.append(v)\n    return(nvl)", "response": "Map each index through index_map_func, then map each element together with its mapped index through value_map_func, returning the new value list."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef array_dualmap2(*refls,**kwargs):\n    '''\n        from elist.elist import *\n        ol = [1,2,3,4]\n        refl1 = ['+','+','+','+']\n        refl2 = [7,7,7,7]\n        refl3 = ['=','=','=','=']\n        def index_map_func(index):\n            s =\"<\"+str(index)+\">\"\n            return(s)\n        \n        def value_map_func(mapped_index,ele,ref_ele1,ref_ele2,ref_ele3,prefix,suffix):\n            s = prefix+mapped_index+': ' + str(ele) + str(ref_ele1) + str(ref_ele2) + str(ref_ele3) + suffix\n            return(s)\n        \n        ####\n        rslt = array_dualmap2(ol,refl1,refl2,refl3,index_map_func=index_map_func,value_map_func=value_map_func,value_map_func_args=['Q','?'])\n        pobj(rslt)\n    '''\n    def get_self(obj,*args):\n        return(obj)\n    if('value_map_func_args' in kwargs):\n        value_map_func_args = kwargs['value_map_func_args']\n    else:\n        value_map_func_args = []\n    if('index_map_func' in kwargs):\n        index_map_func = kwargs['index_map_func']\n    else:\n        index_map_func = get_self\n    if('index_map_func_args' in kwargs):\n        index_map_func_args = kwargs['index_map_func_args']\n    else:\n        index_map_func_args = []\n    length = refls[0].__len__()\n    il = list(range(0,length))\n    nil = 
list(map(lambda ele:index_map_func(ele,*index_map_func_args),il))\n    refls = list(refls)\n    refls = prepend(refls,nil)\n    nvl = array_map2(*refls,map_func = value_map_func,map_func_args=value_map_func_args)\n    return(nvl)", "response": "Like array_dualmap but over several parallel reference lists: the mapped indexes are prepended to the reference lists and value_map_func is applied across them."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef mapfi(ol,map_func_args,**kwargs):\n    '''\n        #mapfi \u5171\u4eab\u76f8\u540c\u7684o,v\u4e0d\u4f5c\u4e3amap_func\u53c2\u6570\n        #     share common other_args,NOT take value as a param for map_func\n        #map_func     diff_func(index,*common_args)\n    '''\n    diff_funcs_arr = kwargs['map_funcs']\n    lngth = ol.__len__()\n    rslt = []\n    for i in range(0,lngth):\n        index = i\n        value = ol[i]\n        func = diff_funcs_arr[i]\n        args = map_func_args\n        ele = func(index,*args)\n        rslt.append(ele)\n    return(rslt)", "response": "mapfi: apply a different function to each index, sharing common other_args; the element value is not passed to the map functions."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef mapiv2(ol,map_func,*args,**kwargs):\n    '''\n        from elist.elist import *\n        ol = ['a','b','c','d']\n        #1\n        def map_func(index,value,*others):\n            return(value * index + others[0] +others[-1])\n        mapiv(ol,map_func,'tailA-','tailB')\n        #2\n        mapiv2(ol,lambda index,value,other:(value*index+other),['-'])\n        mapiv2(ol,lambda index,value,other:(value*index+other),'-')\n        mapiv2(ol,lambda index,value:(value*index))\n    '''\n    args = list(args)\n    if(args.__len__() > 0):\n        map_func_args = args\n    else:\n        if('map_func_args' in kwargs):\n            map_func_args = kwargs['map_func_args']\n        else:\n            map_func_args = []\n    lngth = ol.__len__()\n    rslt = []\n    for i in range(0,lngth):\n        ele = 
map_func(i,ol[i],*map_func_args)\n        rslt.append(ele)\n    return(rslt)", "response": "mapiv2: map map_func(index,value,*map_func_args) over the list; extra args may be given positionally or via the map_func_args keyword."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef mapi(ol,map_func,map_func_args=[]):\n    '''\n        #mapi v\u4e0d\u4f5c\u4e3amap_func\u53c2\u6570,\u5171\u4eab\u76f8\u540c\u7684f,\u5171\u4eab\u76f8\u540c\u7684o\n        # NOT take value as a param for map_func\n        # share common other_args\n        # share common map_func\n        # common_func(index,*common_args)\n    '''\n    lngth = ol.__len__()\n    rslt = []\n    for i in range(0,lngth):\n        index = i\n        value = ol[i]\n        func = map_func\n        args = map_func_args\n        ele = func(index,*args)\n        rslt.append(ele)\n    return(rslt)", "response": "mapi: apply one common map_func to each index with common other_args; the element value is not passed."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef mapv(ol,map_func,map_func_args=[]):\n    '''\n        #mapv i\u4e0d\u4f5c\u4e3amap_func\u53c2\u6570,\u5171\u4eab\u76f8\u540c\u7684f,\u5171\u4eab\u76f8\u540c\u7684o\n        # NOT take index as a param for map_func\n        # share common other_args\n        # share common map_func\n        # common_func(value,*common_args)\n\n    '''\n    rslt = list(map(lambda ele:map_func(ele,*map_func_args),ol))\n    return(rslt)", "response": "mapv: apply one common map_func to each value with common other_args; the index is not passed."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef array_map(ol,map_func,*args):\n    '''\n        obsolete, kept only for compatibility\n        from elist.elist import *\n        ol = [1,2,3,4]\n        def map_func(ele,mul,plus):\n            return(ele*mul+plus)\n\n        array_map(ol,map_func,2,100)\n    '''\n    rslt = list(map(lambda ele:map_func(ele,*args),ol))\n    return(rslt)", "response": "Obsolete compatibility wrapper: map map_func(ele,*args) over the list."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef mapo(ol,map_func,*params,**kwargs):\n    '''\n        #mapo 
i\u4e0d\u4f5c\u4e3amap_func\u53c2\u6570,v\u4e0d\u4f5c\u4e3amap_func\u53c2\u6570,\u5171\u4eab\u76f8\u540c\u7684f\n        # NOT take index as a param for map_func\n        # NOT take value as a param for map_func\n        # share common map_func\n        # common_func(*priv_args)\n    '''\n    params = list(params)\n    if(params.__len__()==0):\n        diff_args_arr = kwargs['map_func_args_array']\n    elif(isinstance(params[0],list)):\n        diff_args_arr = params[0]\n    else:\n        diff_args_arr = params\n    lngth = ol.__len__()\n    rslt = []\n    for i in range(0,lngth):\n        index = i\n        value = ol[i]\n        func = map_func\n        args = diff_args_arr[i]\n        ele = func(*args)\n        rslt.append(ele)\n    return(rslt)", "response": "mapo: apply one common map_func with per-index argument tuples; neither index nor value is passed to it."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef findfiv(ol,cond_func_args,**kwargs):\n    '''\n        #findfiv \u5171\u4eab\u76f8\u540c\u7684o share common other_args\n        #cond_func     diff_func(index,value,*common_args)\n    '''\n    lngth = ol.__len__()\n    diff_funcs_arr = kwargs['cond_funcs']\n    common_args_arr = init(lngth,cond_func_args)\n    rslt = findfivo(ol,cond_funcs=diff_funcs_arr,cond_func_args_array=common_args_arr)\n    return(rslt)", "response": "findfiv: apply a different condition function to each (index,value) pair, sharing common other_args."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nselecting the indexes of all elements in the list that satisfy a condition function", "response": "def cond_select_indexes_all(ol,**kwargs):\n    '''\n        from elist.elist import *\n        from elist.jprint import pobj\n        def test_func(ele,x):\n            cond = (ele > x)\n            return(cond)\n        \n        ol = [1,2,3,4,5,6,7]\n        rslt = cond_select_indexes_all(ol,cond_func = test_func, cond_func_args = [3])\n        pobj(rslt)\n    '''\n    cond_func = kwargs['cond_func']\n    if('cond_func_args' in kwargs):\n        cond_func_args = kwargs['cond_func_args']\n    else:\n        cond_func_args = []\n    ####\n    founded = find_all(ol,cond_func,*cond_func_args)\n    rslt = array_map(founded,lambda 
ele:ele['index'])\n    return(rslt)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef cond_select_indexes_all2(ol,**kwargs):\n    '''\n        from elist.elist import *\n        from xdict.jprint import pobj\n        def test_func(ele,index,x):\n            cond1 = (ele > x)\n            cond2 = (index %2 == 0)\n            cond =(cond1 & cond2)\n            return(cond)\n\n        ol = [1,2,3,4,5,6,7]\n        rslt = cond_select_indexes_all2(ol,cond_func = test_func,cond_func_args = [3])\n        pobj(rslt)\n    '''\n    cond_func = kwargs['cond_func']\n    if('cond_func_args' in kwargs):\n        cond_func_args = kwargs['cond_func_args']\n    else:\n        cond_func_args = []\n    ####\n    founded = find_all2(ol,cond_func,*cond_func_args)\n    rslt = array_map(founded,lambda ele:ele['index'])\n    return(rslt)", "response": "Select the indexes of all elements for which cond_func(ele,index,*cond_func_args) holds."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef select_seqs(ol,seqs):\n    '''\n        from elist.elist import *\n        ol = ['a','b','c','d']\n        select_seqs(ol,[1,2])\n    '''\n    rslt = copy.deepcopy(ol)\n    rslt = itemgetter(*seqs)(ol)\n    if(seqs.__len__()==0):\n        rslt = []\n    elif(seqs.__len__()==1):\n        rslt = [rslt]\n    else:\n        rslt = list(rslt)\n    return(rslt)", "response": "select_seqs: return the elements of ol at the given indexes."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nappends an element to the end of the list", "response": "def append(ol,ele,**kwargs):\n    '''\n        from elist.elist import *\n        ol = [1,2,3,4]\n        ele = 5\n        id(ol)\n        append(ol,ele,mode=\"original\")\n        ol\n        id(ol)\n        ####\n        ol = [1,2,3,4]\n        ele = 5\n        id(ol)\n        new = append(ol,ele)\n        new\n        id(new)\n    '''\n    if('mode' in kwargs):\n        mode = kwargs[\"mode\"]\n    else:\n        mode = \"new\"\n    if(mode == \"new\"):\n        new = copy.deepcopy(ol)\n        new.append(ele)\n        return(new)\n    else:\n        ol.append(ele)\n        return(ol)"} {"SOURCE": "codesearchnet", "instruction": 
"Create a Python 3 function to\nappend some elements to the end of the list.", "response": "def append_some(ol,*eles,**kwargs):\n '''\n from elist.elist import *\n ol = [1,2,3,4]\n id(ol)\n append_some(ol,5,6,7,8,mode=\"original\")\n ol\n id(ol)\n ####\n ol = [1,2,3,4]\n id(ol)\n new = append_some(ol,5,6,7,8)\n new\n id(new)\n '''\n if('mode' in kwargs):\n mode = kwargs[\"mode\"]\n else:\n mode = \"new\"\n return(extend(ol,list(eles),mode=mode))"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef prepend_some(ol,*eles,**kwargs):\n '''\n from elist.elist import *\n ol = [1,2,3,4]\n id(ol)\n prepend_some(ol,5,6,7,8,mode=\"original\")\n ol\n id(ol)\n ####\n ol = [1,2,3,4]\n id(ol)\n new = prepend_some(ol,5,6,7,8)\n new\n id(new)\n #####unshift is the same as prepend_some\n >>> unshift(ol,9,10,11,12)\n [9, 10, 11, 12, 1, 2, 3, 4]\n '''\n if('mode' in kwargs):\n mode = kwargs[\"mode\"]\n else:\n mode = \"new\"\n return(prextend(ol,list(eles),mode=mode))", "response": "prepends some elements to the beginning of the list"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nextend the list of items with the same list of items", "response": "def extend(ol,nl,**kwargs):\n '''\n from elist.elist import *\n ol = [1,2,3,4]\n nl = [5,6,7,8]\n id(ol)\n extend(ol,nl,mode=\"original\")\n ol\n id(ol)\n ####\n ol = [1,2,3,4]\n nl = [5,6,7,8]\n id(ol)\n new = extend(ol,nl)\n new\n id(new)\n '''\n if('mode' in kwargs):\n mode = kwargs[\"mode\"]\n else:\n mode = \"new\"\n if(mode == \"new\"):\n new = copy.deepcopy(ol)\n cpnl = copy.deepcopy(nl)\n new.extend(cpnl)\n return(new)\n else:\n ol.extend(nl)\n return(ol)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\npush a new or existing elist into the new list", "response": "def push(ol,*eles,**kwargs):\n '''\n from elist.elist import *\n ol=[1,2,3,4]\n id(ol)\n new = push(ol,5,6,7)\n new\n id(new)\n ####\n 
ol=[1,2,3,4]\n id(ol)\n rslt = push(ol,5,6,7,mode=\"original\")\n rslt\n id(rslt)\n '''\n if('mode' in kwargs):\n mode = kwargs['mode']\n else:\n mode = \"new\"\n eles = list(eles)\n return(extend(ol,eles,mode=mode))"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns a copy of the list with the same attributes as the original list.", "response": "def prextend(ol,nl,**kwargs):\n '''\n from elist.elist import *\n ol = [1,2,3,4]\n nl = [5,6,7,8]\n id(ol)\n id(nl)\n prextend(ol,nl,mode=\"original\")\n ol\n id(ol)\n ####\n ol = [1,2,3,4]\n nl = [5,6,7,8]\n id(ol)\n id(nl)\n new = prextend(ol,nl)\n new\n id(new)\n '''\n if('mode' in kwargs):\n mode = kwargs[\"mode\"]\n else:\n mode = \"new\"\n if(mode == \"new\"):\n new = copy.deepcopy(nl)\n cpol = copy.deepcopy(ol)\n new.extend(cpol)\n return(new)\n else:\n length = ol.__len__()\n nl_len = nl.__len__()\n for i in range(0,nl_len):\n ol.append(None)\n for i in range(length-1,-1,-1):\n ol[i+nl_len] = ol[i]\n for i in range(0,nl_len):\n ol[i] = nl[i]\n return(ol)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef concat(*arrays):\n '''\n from elist.elist import *\n l1 = [1,2,3]\n l2 = [\"a\",\"b\",\"c\"]\n l3 = [100,200]\n id(l1)\n id(l2)\n id(l3)\n arrays = [l1,l2,l3]\n new = concat(arrays)\n new\n id(new)\n '''\n new = []\n length = arrays.__len__()\n for i in range(0,length):\n array = copy.deepcopy(arrays[i])\n new.extend(array)\n return(new)", "response": "concat returns a new list of all the elements in the list"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngets the next set of entries in the list", "response": "def cdr(ol,**kwargs):\n '''\n from elist.elist import *\n ol=[1,2,3,4]\n id(ol)\n new = cdr(ol)\n new\n id(new)\n ####\n ol=[1,2,3,4]\n id(ol)\n rslt = cdr(ol,mode=\"original\")\n rslt\n id(rslt)\n '''\n if('mode' in kwargs):\n mode = kwargs['mode']\n 
else:\n        mode = \"new\"\n    if(mode == \"new\"):\n        cpol = copy.deepcopy(ol)\n        return(cpol[1:])\n    else:\n        ol.pop(0)\n        return(ol)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nprepending head_ele to the list l, Lisp-style cons", "response": "def cons(head_ele,l,**kwargs):\n    '''\n        from elist.elist import *\n        ol=[1,2,3,4]\n        id(ol)\n        new = cons(5,ol)\n        new\n        id(new)\n        ####\n        ol=[1,2,3,4]\n        id(ol)\n        rslt = cons(5,ol,mode=\"original\")\n        rslt\n        id(rslt)\n    '''\n    if('mode' in kwargs):\n        mode = kwargs['mode']\n    else:\n        mode = \"new\"\n    return(prepend(l,head_ele,mode=mode))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nclamping a possibly negative index into the valid range [0, length]", "response": "def uniform_index(index,length):\n    '''\n        uniform_index(0,3)\n        uniform_index(-1,3)\n        uniform_index(-4,3)\n        uniform_index(-3,3)\n        uniform_index(5,3)\n    '''\n    if(index<0):\n        rl = length+index\n        if(rl<0):\n            index = 0\n        else:\n            index = rl\n    elif(index>=length):\n        index = length\n    else:\n        index = index\n    return(index)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef insert(ol,start_index,ele,**kwargs):\n    '''\n        from elist.elist import *\n        ol = [1,2,3,4]\n        ele = 5\n        id(ol)\n        insert(ol,2,ele,mode=\"original\")\n        ol\n        id(ol)\n        ####\n        ol = [1,2,3,4]\n        ele = 5\n        id(ol)\n        new = insert(ol,2,ele)\n        new\n        id(new)\n    '''\n    if('mode' in kwargs):\n        mode = kwargs[\"mode\"]\n    else:\n        mode = \"new\"\n    if(mode == \"new\"):\n        length = ol.__len__()\n        cpol = copy.deepcopy(ol)\n        si = uniform_index(start_index,length)\n        new = copy.deepcopy(cpol[0:si])\n        new.append(ele)\n        new.extend(cpol[si:])\n        return(new)\n    else:\n        ol.insert(start_index,ele)\n        return(ol)", "response": "Insert an element at start_index, into a copy by default or in place with mode=\"original\"."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef insert_some(ol,*eles,**kwargs):\n    '''\n        from elist.elist import *\n        ol = 
[1,2,3,4]\n id(ol)\n insert_some(ol,5,6,7,8,index=2,mode=\"original\")\n ol\n id(ol)\n ####\n ol = [1,2,3,4]\n id(ol)\n new = insert_some(ol,5,6,7,8,index=2)\n new\n id(new)\n '''\n start_index = kwargs['index']\n if('mode' in kwargs):\n mode = kwargs[\"mode\"]\n else:\n mode = \"new\"\n length = ol.__len__()\n cpol = copy.deepcopy(ol)\n if(mode == \"new\"):\n si = uniform_index(start_index,length)\n new = copy.deepcopy(cpol[0:si])\n new.extend(list(eles))\n new.extend(cpol[si:])\n return(new)\n else:\n si = uniform_index(start_index,length)\n new = cpol[0:si]\n new.extend(list(eles))\n new.extend(cpol[si:])\n ol.clear()\n for i in range(0,new.__len__()):\n ol.append(new[i])\n return(ol)", "response": "Insert some elements into the list."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef insert_many(ol,eles,locs,**kwargs):\n '''\n from elist.elist import *\n ol = [1,2,3,4,5]\n eles = [7,77,777]\n locs = [0,2,4]\n id(ol)\n new = insert_many(ol,eles,locs)\n ol\n new\n id(new)\n ####\n ol = [1,2,3,4,5]\n eles = [7,77,777]\n locs = [0,2,4]\n id(ol)\n rslt = insert_many(ol,eles,locs,mode=\"original\")\n ol\n rslt\n id(rslt)\n '''\n if('mode' in kwargs):\n mode = kwargs[\"mode\"]\n else:\n mode = \"new\"\n eles = copy.deepcopy(eles)\n locs = copy.deepcopy(locs)\n new = []\n length = ol.__len__()\n cpol = copy.deepcopy(ol)\n for i in range(0,locs.__len__()):\n if(locs[i]>=length):\n pass\n else:\n locs[i] = uniform_index(locs[i],length)\n tmp = sorted_refer_to(eles,locs)\n eles = tmp['list']\n locs = tmp['referer']\n label = eles.__len__()\n si = 0\n ei = 0\n for i in range(0,locs.__len__()):\n if(locs[i]>=length):\n label = i\n break\n else:\n ei = locs[i]\n new.extend(cpol[si:ei])\n new.append(eles[i])\n si = ei\n for i in range(label,locs.__len__()):\n new.append(eles[i])\n new.extend(cpol[ei:])\n if(mode == \"new\"):\n return(new)\n else:\n ol.clear()\n ol.extend(new)\n return(ol)", "response": "Insert a new entry into the 
given list at the given locations."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef insert_sections_some(ol,*secs,**kwargs):\n    '''\n        ol = initRange(0,20,1)\n        ol\n        loc = 6\n        rslt = insert_sections_some(ol,['a','a','a'],['c','c','c','c'],index=loc)\n        rslt\n        ####\n    '''\n    if('mode' in kwargs):\n        mode = kwargs[\"mode\"]\n    else:\n        mode = \"new\"\n    loc = kwargs['index']\n    secs = list(secs)\n    secs = [concat(*secs)]\n    locs = [loc]\n    return(insert_sections_many(ol,secs,locs,mode=mode))", "response": "Concatenate the given sections and insert them into the list at one index."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef insert_section(ol,sec,loc,**kwargs):\n    '''\n        ol = initRange(0,20,1)\n        ol\n        loc = 6\n        sec = ['a','b','c','d']\n        rslt = insert_section(ol,sec,loc)\n        rslt\n        ####\n    '''\n    if('mode' in kwargs):\n        mode = kwargs[\"mode\"]\n    else:\n        mode = \"new\"\n    secs = [sec]\n    locs = [loc]\n    return(insert_sections_many(ol,secs,locs,mode=mode))", "response": "Insert a single section (sub-list) into the list at loc."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef insert_sections_many(ol,secs,locs,**kwargs):\n    '''\n        ol = initRange(0,20,1)\n        ol\n        locs = [1,6,14,9]\n        secs = [\n            ['a','a','a'],\n            ['b','b'],\n            ['c','c','c','c'],\n            ['d','d']\n        ]\n        rslt = insert_sections_many(ol,secs,locs)\n        rslt\n        ####\n        ol\n        locs = [0,3,6,9,12,15,16]\n        secs = [\n            ['a','a','a'],\n            ['b','b'],\n            ['c','c','c','c'],\n            ['d','d']\n        ]\n        rslt = insert_sections_many(ol,secs,locs)\n        rslt\n        ####\n        ol\n        locs = [1,6,14,9]\n        secs = [\n            ['a','a','a'],\n            ['b','b'],\n            ['c','c','c','c'],\n            ['d','d'],\n            ['e'],\n            ['f','f','f','f'],\n            [777,777,777,777]\n        ]\n        rslt = insert_sections_many(ol,secs,locs)\n        rslt\n    '''\n    if('mode' in kwargs):\n        mode = kwargs[\"mode\"]\n    else:\n        mode = \"new\"\n    secs = copy.deepcopy(secs)\n    locs = copy.deepcopy(locs)\n    brked = broken_seqs(ol,locs)\n    seclen = secs.__len__()\n    
brklen = brked.__len__()\n if(locs[0]==0):\n new = secs[0]\n length = seclen -1\n if(length < brklen):\n for i in range(0,length):\n new.extend(brked[i])\n new.extend(secs[i+1])\n for i in range(length,brklen):\n new.extend(brked[i])\n elif(length == brklen):\n for i in range(0,length):\n new.extend(brked[i])\n new.extend(secs[i+1])\n else:\n for i in range(0,brklen):\n new.extend(brked[i])\n new.extend(secs[i+1])\n for i in range(brklen,length):\n new.extend(secs[i])\n else:\n new = brked[0]\n length = brklen -1\n if(length < seclen):\n for i in range(0,length):\n new.extend(secs[i])\n new.extend(brked[i+1])\n for i in range(length,seclen):\n new.extend(secs[i])\n elif(length == seclen):\n for i in range(0,length):\n new.extend(secs[i])\n new.extend(brked[i+1])\n else:\n for i in range(0,seclen):\n new.extend(secs[i])\n new.extend(brked[i+1])\n for i in range(seclen,length):\n new.extend(brked[i])\n if(mode == \"new\"):\n return(new)\n else:\n ol.clear()\n ol.extend(new)\n return(ol)", "response": "Insert a set of sections into a single tree."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef reorder_sub(ol,sub):\n '''\n sub = ['query', 'params', 'fragment', 'path']\n ol = ['scheme', 'username', 'password', 'hostname', 'port', 'path', 'params', 'query', 'fragment']\n reorder_sub(ol,sub)\n '''\n def cond_func(ele,ol):\n index = ol.index(ele)\n return(index)\n indexes = array_map(sub,cond_func,ol)\n nsub = sorted_refer_to(sub,indexes)['list']\n return(nsub)", "response": "Reorder the list of entries in the list of sub entries in the list of objects in the current list."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nsort the elist. 
list", "response": "def sort(ol,**kwargs):\n    '''\n        from elist.elist import *\n        ol = [1,3,4,2]\n        id(ol)\n        new = sort(ol)\n        ol\n        new\n        id(ol)\n        id(new)\n        ####\n        ol = [1,3,4,2]\n        id(ol)\n        rslt = sort(ol,mode=\"original\")\n        ol\n        rslt\n        id(ol)\n        id(rslt)\n    '''\n    if('mode' in kwargs):\n        mode = kwargs[\"mode\"]\n    else:\n        mode = \"new\"\n    if(mode == \"new\"):\n        new = copy.deepcopy(ol)\n        new.sort()\n        return(new)\n    else:\n        ol.sort()\n        return(ol)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nsorting l by the values of the parallel list referer.", "response": "def sorted_refer_to(l,referer,reverse=False,**kwargs):\n    '''\n        from elist.elist import *\n        l = [\"a\",\"b\",\"c\"]\n        referer = [7,8,6]\n        sorted_refer_to(l,referer)\n        {'list': ['c', 'a', 'b'], 'referer': [6, 7, 8]}\n        l\n        referer\n        >>>\n    '''\n    if(\"mode\" in kwargs):\n        mode = kwargs[\"mode\"]\n    else:\n        mode = \"both\"\n    tl = []\n    length = l.__len__()\n    for i in range(0,length):\n        ele = (l[i],referer[i])\n        tl.append(ele)\n    tl = sorted(tl,key=itemgetter(1),reverse=reverse)\n    sorted_l = []\n    sorted_r = []\n    for i in range(0,length):\n        sorted_l.append(tl[i][0])\n        sorted_r.append(tl[i][1])\n    if(mode == \"only-list\"):\n        return(sorted_l)\n    elif(mode == \"only-referer\"):\n        return(sorted_r)\n    else:\n        return({\"list\":sorted_l,\"referer\":sorted_r})"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn the given lists reordered in parallel by sorting the shared referer list", "response": "def batsorted(referer,*lists,**kwargs):\n    '''\n        from elist.elist import *\n        referer = [4,2,3,1]\n        l1 = ['a','b','c','d']\n        l2 = [100,200,300,400]\n        l3 = ['A','B','A','B']\n        nl1,nl2,nl3 = batsorted(referer,l1,l2,l3)\n        nl1\n        nl2\n        nl3\n        nl1,nl2,nl3 = batsorted(referer,l1,l2,l3,reverse=True)\n        nl1\n        nl2\n        nl3\n        ####the batsorted will not modify the original lists\n        l1\n        l2\n        l3\n    '''\n    if('reverse' in kwargs):\n        reverse = kwargs['reverse']\n    else:\n        reverse = False\n    
length = referer.__len__()\n indexes = list(range(0,length))\n rslt = sorted_refer_to(indexes,referer,reverse=reverse)\n referer = rslt['referer']\n indexes = rslt['list']\n rslt = []\n lists = copy.deepcopy(list(lists))\n for i in range(0,lists.__len__()):\n l = lists[i]\n nl = []\n for j in range(0,length):\n loc = indexes[j]\n nl.append(l[loc])\n rslt.append(nl)\n return(tuple(rslt))"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nsorts a dictionary list by the key and the value.", "response": "def sortDictList(dictList,**kwargs):\n '''\n students = [\n {'name':'john','class':'A', 'year':15},\n {'name':'jane','class':'B', 'year':12},\n {'name':'dave','class':'B', 'year':10}\n ]\n \n rslt = sortDictList(students,cond_keys=['name','class','year'])\n pobj(rslt)\n rslt = sortDictList(students,cond_keys=['name','year','class'])\n pobj(rslt)\n rslt = sortDictList(students,cond_keys=['class','name','year'])\n pobj(rslt)\n rslt = sortDictList(students,cond_keys=['class','year','name'])\n pobj(rslt)\n rslt = sortDictList(students,cond_keys=['year','name','class'])\n pobj(rslt)\n rslt = sortDictList(students,cond_keys=['year','name','class'])\n pobj(rslt)\n '''\n def default_eq_func(value1,value2):\n cond = (value1 == value2)\n return(cond)\n def default_gt_func(value1,value2):\n cond = (value1 > value2)\n return(cond)\n def default_lt_func(value1,value2):\n cond = (value1 < value2)\n return(cond)\n if('eq_func' in kwargs):\n eq_func = kwargs['eq_func']\n else:\n eq_func = default_eq_func\n if('gt_func' in kwargs):\n gt_func = kwargs['gt_func']\n else:\n gt_func = default_gt_func\n if('lt_func' in kwargs):\n lt_func = kwargs['lt_func']\n else:\n lt_func = default_lt_func\n if('reverse' in kwargs):\n reverse = kwargs['reverse']\n else:\n reverse = False\n keys = kwargs['cond_keys']\n def cmp_dict(d1,d2):\n '''\n '''\n length = keys.__len__()\n for i in range(0,length):\n key = keys[i]\n cond = eq_func(d1[key],d2[key])\n 
if(cond):\n pass\n else:\n cond = gt_func(d1[key],d2[key])\n if(cond):\n return(1)\n else:\n return(-1)\n return(0)\n ndl = dictList\n ndl = sorted(ndl,key=functools.cmp_to_key(cmp_dict),reverse=reverse)\n return(ndl)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef index_firstnot(ol,value):\n '''\n from elist.elist import *\n ol = [1,'a',3,'a',4,'a',5]\n index_firstnot(ol,'a')\n ####index_firstnot, array_indexnot, indexOfnot are the same\n array_indexnot(ol,'a')\n indexOfnot(ol,'a')\n '''\n length = ol.__len__()\n for i in range(0,length):\n if(value == ol[i]):\n pass\n else:\n return(i)\n return(None)", "response": "Return the first non - empty entry in the list."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef index_last(ol,value):\n '''\n from elist.elist import *\n ol = [1,'a',3,'a',4,'a',5]\n index_last(ol,'a')\n ####lastIndexOf is the same as index_last\n lastIndexOf(ol,'a')\n '''\n length = ol.__len__()\n for i in range(length-1,-1,-1):\n if(value == ol[i]):\n return(i)\n else:\n pass\n return(None)", "response": "Returns the index of the last element in the list."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn the index of the last not value in the list.", "response": "def index_lastnot(ol,value):\n '''\n from elist.elist import *\n ol = [1,'a',3,'a',4,'a',5]\n index_lastnot(ol,'a')\n ####lastIndexOfnot is the same as index_lastnot\n lastIndexOfnot(ol,'a')\n '''\n length = ol.__len__()\n for i in range(length-1,-1,-1):\n if(value == ol[i]):\n pass\n else:\n return(i)\n return(None)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning the index of the first element in the list which is not the given value.", "response": "def index_whichnot(ol,value,which):\n '''\n from elist.elist import *\n ol = [1,'a',3,'a',4,'a',5]\n index_whichnot(ol,'a',0)\n 
index_whichnot(ol,'a',1)\n index_whichnot(ol,'a',2)\n '''\n length = ol.__len__()\n seq = -1\n for i in range(0,length):\n if(value == ol[i]):\n pass\n else:\n seq = seq + 1\n if(seq == which):\n return(i)\n else:\n pass\n return(None)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn a list of all the elements in the list that have the given value.", "response": "def indexes_all(ol,value):\n '''\n from elist.elist import *\n ol = [1,'a',3,'a',4,'a',5]\n indexes_all(ol,'a')\n '''\n length = ol.__len__()\n indexes =[]\n for i in range(0,length):\n if(value == ol[i]):\n indexes.append(i)\n else:\n pass\n return(indexes)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef indexes_some(ol,value,*seqs):\n '''\n from elist.elist import *\n ol = [1,'a',3,'a',4,'a',5]\n indexes_some(ol,'a',0,2)\n indexes_some(ol,'a',0,1)\n indexes_some(ol,'a',1,2)\n indexes_some(ol,'a',3,4)\n '''\n seqs = list(seqs)\n length = ol.__len__()\n indexes =[]\n seq = -1\n for i in range(0,length):\n if(value == ol[i]):\n seq = seq + 1\n if(seq in seqs):\n indexes.append(i)\n else:\n pass\n else:\n pass\n return(indexes)", "response": "Returns a list of the indices of the elements in the list that contain the given value."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef first_continuous_indexesnot_slice(ol,value):\n '''\n from elist.elist import *\n ol = [\"a\",0,1,\"a\",\"a\",2,3,\"a\",4,\"a\",\"a\",\"a\",5]\n first_continuous_indexesnot_slice(ol,\"a\")\n '''\n length = ol.__len__()\n begin = None\n slice = []\n for i in range(0,length):\n if(not(ol[i]==value)):\n begin = i\n break\n else:\n pass\n if(begin == None):\n return(None)\n else:\n slice.append(begin)\n for i in range(begin+1,length):\n if(not(ol[i]==value)):\n slice.append(i)\n else:\n break\n return(slice)", "response": "returns a list of the first continuous index that does 
not contain the given value"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef last_continuous_indexes_slice(ol,value):\n '''\n from elist.elist import *\n ol = [1,\"a\",\"a\",2,3,\"a\",4,\"a\",\"a\",\"a\",5]\n last_continuous_indexes_slice(ol,\"a\")\n '''\n length = ol.__len__()\n end = None\n slice = []\n for i in range(length-1,-1,-1):\n if(ol[i]==value):\n end = i\n break\n else:\n pass\n if(end == None):\n return(None)\n else:\n slice.append(end)\n for i in range(end-1,-1,-1):\n if(ol[i]==value):\n slice.append(i)\n else:\n break\n slice.reverse()\n return(slice)", "response": "returns a list of the last continuous indexes of the given value"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef which_continuous_indexes_slice(ol,value,which):\n '''\n from elist.elist import *\n ol = [1,\"a\",\"a\",2,3,\"a\",4,\"a\",\"a\",\"a\",5]\n which_continuous_indexes_slice(ol,\"a\",0)\n which_continuous_indexes_slice(ol,\"a\",1)\n which_continuous_indexes_slice(ol,\"a\",2)\n which_continuous_indexes_slice(ol,\"a\",3)\n which_continuous_indexes_slice(ol,\"b\",0)\n '''\n length = ol.__len__()\n seq = -1\n cursor = 0\n begin = None\n slice = []\n while(cursor < length):\n cond1 = (ol[cursor] == value)\n cond2 = (begin == None)\n if(cond1 & cond2):\n begin = cursor\n slice.append(cursor)\n cursor = cursor + 1\n elif(cond1 & (not(cond2))):\n slice.append(cursor)\n cursor = cursor + 1\n elif((not(cond1)) & (not(cond2))):\n seq = seq + 1\n if(seq == which):\n return(slice)\n else:\n cursor = cursor + 1\n begin = None\n slice = []\n else:\n cursor = cursor + 1\n if(slice):\n seq = seq + 1\n else:\n pass\n if(seq == which):\n return(slice)\n else:\n return([])", "response": "Returns a list of the entries which are equal to the given value."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef 
seqs_continuous_indexesnot_slices(ol,value,seqs):\n '''\n from elist.elist import *\n ol = [1,\"a\",\"a\",2,3,\"a\",4,\"a\",\"a\",\"a\",5]\n seqs_continuous_indexesnot_slices(ol,\"a\",{0,2})\n '''\n rslt = []\n length = ol.__len__()\n seq = -1\n cursor = 0\n begin = None\n slice = []\n while(cursor < length):\n cond1 = not(ol[cursor] == value)\n cond2 = (begin == None)\n if(cond1 & cond2):\n begin = cursor\n slice.append(cursor)\n elif(cond1 & (not(cond2))):\n slice.append(cursor)\n elif((not(cond1)) & (not(cond2))):\n seq = seq + 1\n if(seq in seqs):\n rslt.append(slice)\n else:\n pass\n begin = None\n slice = []\n else:\n pass\n cursor = cursor + 1\n if(slice):\n seq = seq + 1\n if(seq in seqs):\n rslt.append(slice)\n else:\n pass\n else:\n pass\n return(rslt)", "response": "Return the continuous index slices, selected by sequence number, of runs of elements not equal to the given value."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn all continuous index slices of runs of elements equal to the given value.", "response": "def all_continuous_indexes_slices(ol,value):\n '''\n from elist.elist import *\n ol = [1,\"a\",\"a\",2,3,\"a\",4,\"a\",\"a\",\"a\",5]\n all_continuous_indexes_slices(ol,\"a\")\n '''\n rslt = []\n length = ol.__len__()\n cursor = 0\n begin = None\n slice = []\n while(cursor < length):\n cond1 = (ol[cursor] == value)\n cond2 = (begin == None)\n if(cond1 & cond2):\n begin = cursor\n slice.append(cursor)\n elif(cond1 & (not(cond2))):\n slice.append(cursor)\n elif((not(cond1)) & (not(cond2))):\n rslt.append(slice)\n begin = None\n slice = []\n else:\n pass\n cursor = cursor + 1\n if(slice):\n rslt.append(slice)\n else:\n pass\n return(rslt)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef shift(ol,**kwargs):\n '''\n from elist.jprint import pobj\n from elist.elist import *\n ol = [1,2,3,4]\n id(ol)\n rslt = shift(ol)\n pobj(rslt)\n ol\n id(ol)\n 
id(rslt['list'])\n ####\n ol = [1,2,3,4]\n id(ol)\n rslt = shift(ol,mode=\"original\")\n rslt\n ol\n id(ol)\n '''\n if('mode' in kwargs):\n mode = kwargs['mode']\n else:\n mode = \"new\"\n length = ol.__len__()\n rslt = pop(ol,0,mode=mode)\n return(rslt)", "response": "shifts the first entry in the list"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\npops - pop a specific element from the list", "response": "def pop(ol,index,**kwargs):\n '''\n from elist.jprint import pobj\n from elist.elist import *\n ol = [1,2,3,4]\n id(ol)\n rslt = pop(ol,2)\n pobj(rslt)\n ol\n id(ol)\n id(rslt['list'])\n ####\n ol = [1,2,3,4]\n id(ol)\n rslt = pop(ol,2,mode=\"original\")\n rslt\n ol\n id(ol)\n '''\n index = uniform_index(index,ol.__len__())\n if('mode' in kwargs):\n mode = kwargs[\"mode\"]\n else:\n mode = \"new\"\n if(mode == \"new\"):\n new = copy.deepcopy(ol)\n popped = new.pop(index)\n return({'popped':popped,'list':new})\n else:\n popped = ol.pop(index)\n return(popped)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef cond_pop(ol,index,**kwargs):\n '''\n from elist.jprint import pobj\n from elist.elist import *\n ol = [{'data':0,'type':'number'},{'data':'x','type':'str'},{'data':'y','type':'str'},4]\n #cond_func_args is an array\n def cond_func(index,value,cond_func_args):\n \n '''\n cond_func = kwargs['cond_func']\n cond_func_args = kwargs['cond_func_args']\n index = uniform_index(index,ol.__len__())\n if('mode' in kwargs):\n mode = kwargs[\"mode\"]\n else:\n mode = \"new\"\n value = ol[index]\n cond = cond_func(index,value,*cond_func_args)\n if(mode == \"new\"):\n new = copy.deepcopy(ol)\n if(cond):\n popped = new.pop(index)\n else:\n popped = new\n return({'popped':popped,'list':new})\n else:\n if(cond):\n popped = ol.pop(index)\n else:\n popped = ol\n return(popped)", "response": "pop the element at index from the list only if cond_func returns True for it"} {"SOURCE": "codesearchnet", "instruction": 
"Write a Python 3 function for\nremoving first entry in the list", "response": "def remove_first(ol,value,**kwargs):\n '''\n from elist.jprint import pobj\n from elist.elist import *\n ol = [1,'a',3,'a',5,'a']\n id(ol)\n new = remove_first(ol,'a')\n ol\n new\n id(ol)\n id(new)\n ####\n ol = [1,'a',3,'a',5,'a']\n id(ol)\n rslt = remove_first(ol,'a',mode=\"original\")\n ol\n rslt\n id(ol)\n id(rslt)\n ####array_remove is the same as remove_first\n '''\n if('mode' in kwargs):\n mode = kwargs[\"mode\"]\n else:\n mode = \"new\"\n if(mode == \"new\"):\n new = copy.deepcopy(ol)\n new.remove(value)\n return(new)\n else:\n ol.remove(value)\n return(ol)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nremove the last value from the list.", "response": "def remove_last(ol,value,**kwargs):\n '''\n from elist.elist import *\n ol = [1,'a',3,'a',5,'a']\n id(ol)\n new = remove_last(ol,'a')\n ol\n new\n id(ol)\n id(new)\n ####\n ol = [1,'a',3,'a',5,'a']\n id(ol)\n rslt = remove_last(ol,'a',mode=\"original\")\n ol\n rslt\n id(ol)\n id(rslt)\n '''\n if('mode' in kwargs):\n mode = kwargs[\"mode\"]\n else:\n mode = \"new\"\n new = copy.deepcopy(ol)\n new.reverse()\n new.remove(value)\n new.reverse()\n if(mode == \"new\"):\n return(new)\n else:\n ol.clear()\n ol.extend(new)\n return(ol)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nremoves the elements which value is in the list", "response": "def remove_which(ol,value,which,**kwargs):\n '''\n from elist.elist import *\n ol = [1,'a',3,'a',5,'a']\n id(ol)\n new = remove_which(ol,'a',1)\n ol\n new\n id(ol)\n id(new)\n ####\n ol = [1,'a',3,'a',5,'a']\n id(ol)\n rslt = remove_which(ol,'a',1,mode=\"original\")\n ol\n rslt\n id(ol)\n id(rslt)\n '''\n if('mode' in kwargs):\n mode = kwargs[\"mode\"]\n else:\n mode = \"new\"\n new = copy.deepcopy(ol)\n length = ol.__len__()\n if(mode == \"new\"):\n l = new \n else:\n l = ol\n seq = -1\n for i in 
range(0,length):\n if(ol[i]==value):\n seq = seq + 1\n if(seq == which):\n l.pop(i)\n break\n else:\n pass\n else:\n pass\n return(l)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nremoving some element from elist. elist", "response": "def remove_some(ol,value,*seqs,**kwargs):\n '''\n from elist.elist import *\n ol = [1,'a',3,'a',5,'a',6,'a']\n id(ol)\n new = remove_some(ol,'a',1,3)\n ol\n new\n id(ol)\n id(new)\n ####\n ol = [1,'a',3,'a',5,'a',6,'a']\n id(ol)\n rslt = remove_some(ol,'a',1,3,mode=\"original\")\n ol\n rslt\n id(ol)\n id(rslt)\n '''\n if('mode' in kwargs):\n mode = kwargs[\"mode\"]\n else:\n mode = \"new\"\n seqs = list(seqs)\n new = []\n length = ol.__len__()\n seq = -1\n cpol = copy.deepcopy(ol)\n for i in range(0,length):\n if(cpol[i]==value):\n seq = seq + 1\n if(seq in seqs):\n pass\n else:\n new.append(cpol[i])\n else:\n new.append(cpol[i])\n if(mode == \"new\"):\n return(new) \n else:\n ol.clear()\n ol.extend(new)\n return(ol)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nremove all entries with the given value from the list.", "response": "def remove_all(ol,value,**kwargs):\n '''\n from elist.elist import *\n ol = [1,'a',3,'a',5,'a',6,'a']\n id(ol)\n new = remove_all(ol,'a')\n ol\n new\n id(ol)\n id(new)\n ####\n ol = [1,'a',3,'a',5,'a',6,'a']\n id(ol)\n rslt = remove_all(ol,'a',mode=\"original\")\n ol\n rslt\n id(ol)\n id(rslt)\n '''\n if('mode' in kwargs):\n mode = kwargs[\"mode\"]\n else:\n mode = \"new\"\n new = []\n length = ol.__len__()\n cpol = copy.deepcopy(ol)\n for i in range(0,length):\n if(cpol[i]==value):\n pass\n else:\n new.append(cpol[i])\n if(mode == \"new\"):\n return(new) \n else:\n ol.clear()\n ol.extend(new)\n return(ol)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef remove_many(ol,values,seqs,**kwargs):\n '''\n from elist.elist import *\n ol = 
[1,'a',3,'b',5,'a',6,'a',7,'b',8,'b',9]\n id(ol)\n new = remove_many(ol,['a','b'],[1,2])\n ol\n new\n id(ol)\n id(new)\n ####\n ol = [1,'a',3,'b',5,'a',6,'a',7,'b',8,'b',9]\n id(ol)\n rslt = remove_many(ol,['a','b'],[1,2],mode=\"original\")\n ol\n rslt\n id(ol)\n id(rslt)\n '''\n if('mode' in kwargs):\n mode = kwargs[\"mode\"]\n else:\n mode = \"new\"\n values = copy.deepcopy(values)\n seqs = copy.deepcopy(seqs)\n cursors = [-1] * values.__len__()\n new = []\n length = ol.__len__()\n cpol = copy.deepcopy(ol)\n for i in range(0,length):\n label = True\n for j in range(0,cursors.__len__()):\n which = seqs[j]\n value = values[j]\n if(cpol[i] == value):\n cursors[j] = cursors[j] + 1\n if(cursors[j] == which):\n label = False\n break\n else:\n pass\n else:\n pass\n if(label):\n new.append(cpol[i])\n else:\n pass\n if(mode == \"new\"):\n return(new) \n else:\n ol.clear()\n ol.extend(new)\n return(ol)", "response": "remove_many - Removes a set of items from an elist. list"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nremoves all entries in elist. 
elist", "response": "def cond_remove_all(ol,**kwargs):\n '''\n from elist.elist import *\n ol = [1,'X',3,'b',5,'c',6,'A',7,'b',8,'B',9]\n id(ol)\n def afterCH(ele,ch):\n cond = (ord(str(ele)) > ord(ch))\n return(cond)\n\n new = cond_remove_all(ol,cond_func=afterCH,cond_func_args=['B'])\n ol\n new\n id(ol)\n id(new)\n ####\n ol = [1,'X',3,'b',5,'c',6,'A',7,'b',8,'B',9]\n id(ol)\n rslt = cond_remove_all(ol,cond_func=afterCH,cond_func_args=['B'],mode='original')\n ol\n rslt\n id(ol)\n id(rslt)\n\n '''\n cond_func = kwargs['cond_func']\n if('cond_func_args' in kwargs):\n cond_func_args = kwargs['cond_func_args']\n else:\n cond_func_args = []\n if('mode' in kwargs):\n mode = kwargs[\"mode\"]\n else:\n mode = \"new\"\n new = copy.deepcopy(ol)\n selected = find_all(new,cond_func,*cond_func_args)\n selected_indexes = array_map(selected,lambda ele:ele['index'])\n new = pop_indexes(new,selected_indexes)['list']\n if(mode == \"new\"):\n return(new)\n else:\n ol.clear()\n ol.extend(new)\n return(ol)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef cond_remove_some(ol,*some,**kwargs):\n '''\n from elist.elist import *\n ol = [1,'X',3,'b',5,'c',6,'A',7,'b',8,'B',9]\n id(ol)\n \n def afterCH(ele,ch):\n cond = (ord(str(ele)) > ord(ch))\n return(cond)\n \n new = cond_remove_some(ol,0,2,cond_func=afterCH,cond_func_args=['B'])\n ol\n new\n id(ol)\n id(new)\n ####\n ol = [1,'X',3,'b',5,'c',6,'A',7,'b',8,'B',9]\n id(ol)\n rslt = cond_remove_some(ol,0,2,cond_func=afterCH,cond_func_args=['B'],mode='original')\n ol\n rslt\n id(ol)\n id(rslt)\n '''\n cond_func = kwargs['cond_func']\n if('cond_func_args' in kwargs):\n cond_func_args = kwargs['cond_func_args']\n else:\n cond_func_args = []\n if('mode' in kwargs):\n mode = kwargs[\"mode\"]\n else:\n mode = \"new\"\n seqs = list(some)\n rslt = cond_remove_seqs(ol,seqs,cond_func=cond_func,cond_func_args=cond_func_args)\n return(rslt)", "response": "remove some elements from the list"} {"SOURCE": 
"codesearchnet", "instruction": "Write a Python 3 function for\ninitializing the list of elements with the given length.", "response": "def init(len,default_element=None):\n '''\n from elist.elist import *\n init(5)\n init(5,\"x\")\n '''\n rslt = []\n for i in range(0,len):\n rslt.append(copy.deepcopy(default_element))\n return(rslt)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef array_from(obj,func,*args):\n '''\n from elist.elist import *\n array_from(\"abcd\",None)\n #####\n def map_func(ele,x,y):\n return(int(ele)+x+y)\n \n array_from(\"1234\",map_func,1000,100)\n \n def map_func(ele):\n return(int(ele)*2)\n \n array_from(\"1234\",map_func)\n \n array_from(\"1234\",None)\n '''\n if(func):\n l = list(obj)\n rslt = list(map(lambda ele:func(ele,*args),l))\n return(rslt)\n else:\n return(list(obj))", "response": "Build a list from an iterable, optionally mapping each element through func with extra args."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreverse the list of elements in order.", "response": "def reverse(ol,**kwargs):\n '''\n from elist.elist import *\n ol = [1,2,3,4]\n id(ol)\n new = reverse(ol)\n ol\n new\n id(ol)\n id(new)\n ####\n ol = [1,2,3,4]\n id(ol)\n rslt = reverse(ol,mode=\"original\")\n ol\n rslt\n id(ol)\n id(rslt)\n '''\n if('mode' in kwargs):\n mode = kwargs[\"mode\"]\n else:\n mode = \"new\"\n if(mode == \"new\"):\n new = copy.deepcopy(ol)\n new.reverse()\n return(new) \n else:\n ol.reverse()\n return(ol)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef comprise(list1,list2,**kwargs):\n '''\n from elist.elist import *\n comprise([1,2,3,4,5],[2,3,4],mode=\"loose\")\n comprise([1,2,3,4,5],[2,3,4])\n comprise([1,2,3,4,5],[2,3,4],mode=\"strict\")\n comprise([1,2,3,4,5],[1,2,3,4],mode=\"strict\")\n #not recursive ,only one level\n #please refer to ListTree.search for recursive 
support\n '''\n if('mode' in kwargs):\n mode = kwargs['mode']\n else:\n mode = \"loose\"\n len_1 = list1.__len__()\n len_2 = list2.__len__()\n if(len_2>len_1):\n return(False)\n else:\n if(mode==\"strict\"):\n if(list2 == list1[:len_2]):\n return(True)\n else:\n return(False)\n else:\n end = len_1 - len_2\n for i in range(0,end+1):\n if(list2 == list1[i:(i+len_2)]):\n return(True)\n else:\n pass\n return(False)", "response": "Returns True if list1 and list2 are comprise."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngetting all the entries in the elist", "response": "def entries(ol):\n '''\n from elist.elist import *\n ol = ['a','b','c']\n rslt = entries(ol)\n rslt\n '''\n rslt = []\n length = ol.__len__()\n for i in range(0,length):\n entry = [i,ol[i]]\n rslt.append(entry)\n return(rslt)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef splice(ol,start,deleteCount,*eles,**kwargs):\n '''\n from elist.elist import *\n ol = [\"angel\", \"clown\", \"mandarin\", \"surgeon\"]\n id(ol)\n new = splice(ol,2,0,\"drum\")\n new\n id(new)\n ####\n ol = [\"angel\", \"clown\", \"mandarin\", \"surgeon\"]\n id(ol)\n new = splice(ol,2,1,\"drum\",mode=\"original\")\n new\n id(new)\n ####\n ol = [1,2,3,4,5,6]\n id(ol)\n new = splice(ol,2,2,77,777)\n new\n id(new)\n '''\n if('mode' in kwargs):\n mode = kwargs['mode']\n else:\n mode = \"new\"\n length = ol.__len__()\n new = copy.deepcopy(ol)\n if(start >= length):\n eles = list(eles)\n new.extend(eles)\n else:\n start = uniform_index(start,length)\n end = start + deleteCount\n tmp = pop_range(new,start,end,mode=\"new\")['list']\n new = insert_some(tmp,*eles,index=start,mode=\"new\")\n if(mode == \"new\"):\n return(new)\n else:\n ol.clear()\n ol.extend(new)\n return(ol)", "response": "splice - Remove elements from the list and return the new list."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef 
slice(ol,start,end=None,**kwargs):\n '''\n from elist.elist import *\n ol = [1,2,3,4,5]\n id(ol)\n new = slice(ol,2,4)\n new\n id(new)\n ####\n id(ol)\n rslt = slice(ol,1,-2,mode=\"original\")\n rslt\n id(rslt)\n '''\n if('mode' in kwargs):\n mode = kwargs['mode']\n else:\n mode = \"new\"\n length = ol.__len__()\n new = copy.deepcopy(ol)\n if(end == None):\n end = length\n else:\n end = uniform_index(end,length)\n start = uniform_index(start,length)\n if(mode == \"new\"):\n return(new[start:end])\n else:\n ol.clear()\n ol.extend(new[start:end])\n return(ol)", "response": "Returns a new list of the elements from the list."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\njoining a list of elist entries into a single string", "response": "def join(ol,separator=\",\"):\n '''\n from elist.elist import *\n ol = [1,2,3,4]\n join(ol,separator=\"-\")\n '''\n if(ol.__len__() == 0):\n return(\"\")\n else:\n pass\n cond = (type(ol[0])==type(b''))\n if(cond):\n rslt = b''\n else:\n rslt =\"\"\n length = ol.__len__()\n for i in range(0,length-1):\n ele = ol[i]\n if(cond):\n pass\n else:\n ele = str(ele)\n rslt = rslt + ele + separator\n if(cond):\n rslt = rslt + ol[length - 1]\n else:\n rslt = rslt + str(ol[length - 1])\n return(rslt)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\njoining two lists of names and values.", "response": "def join2(ol,*sps):\n '''\n from elist.elist import *\n ol = [1,2,3,4]\n join2(ol,\"-\",\"+\",\"*\")\n '''\n rslt =\"\"\n length = ol.__len__()\n for i in range(0,length-1):\n rslt = rslt + str(ol[i]) + sps[i]\n rslt = rslt + str(ol[length - 1])\n return(rslt)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef htmljoin(ol,sp,**kwargs):\n '''\n ol = [1,2,3,4]\n htmljoin(ol,\"option\",outer=\"select\")\n \n '''\n if('outer' in kwargs):\n outer = kwargs['outer']\n else:\n outer = \"\"\n if(outer):\n head = \"<\" + outer 
+ \">\"\n tail = \"</\" + outer + \">\"\n else:\n head = \"\"\n tail = \"\"\n rslt = head\n length = ol.__len__()\n begin = \"<\" + sp + \">\"\n end = \"</\" + sp + \">\"\n for i in range(0,length):\n rslt = rslt + begin + str(ol[i]) + end\n rslt = rslt + tail\n return(rslt)", "response": "htmljoin - wrap each element of the list in the given tag, join them, and optionally enclose the result in an outer tag."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef uniqualize(l,**kwargs):\n '''\n from elist.elist import *\n l = [1, 2, 2]\n new = uniqualize(l)\n new\n id(l)\n id(new)\n ####\n l = [1, 2, 2]\n rslt = uniqualize(l,mode=\"original\")\n rslt\n id(l)\n id(rslt)\n '''\n if('mode' in kwargs):\n mode = kwargs['mode']\n else:\n mode = 'new'\n pt = copy.deepcopy(l)\n seqs =[]\n freq = {}\n for i in range(0,pt.__len__()):\n v = pt[i]\n if(v in freq):\n freq[v] = freq[v] + 1\n else:\n freq[v] = 0\n seqs.append(i)\n #####the section below is the speed-critical part, append is particularly time-consuming\n npt = select_seqs(pt,seqs)\n ########################\n pt = npt\n if(mode == 'new'):\n return(npt)\n else:\n l.clear()\n l.extend(npt)\n return(l)", "response": "remove duplicate values from a list, keeping the first occurrence of each"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef interleave(*arrays,**kwargs):\n '''\n arr1 = [1,2,3,4]\n arr2 = ['a','b','c','d']\n arr3 = ['@','#','%','*']\n interleave(arr1,arr2,arr3)\n '''\n anum = arrays.__len__()\n rslt = []\n length = arrays[0].__len__()\n for j in range(0,length):\n for i in range(0,anum):\n array = arrays[i]\n rslt.append(array[j])\n return(rslt)", "response": "Interleave the given arrays element by element."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef for_each(ol,test_func,*args):\n '''\n from elist.elist import *\n def show_func(ele):\n print(\"<{0}>\".format(ele))\n \n ol = [1,2,3,4]\n for_each(ol,show_func)\n \n ####forEach is the same as for_each\n ####forEach 
have no return value\n '''\n rslt = (True,None)\n length = ol.__len__()\n for i in range(0,length):\n test_func(ol[i],*args)", "response": "for_each test_func for each element in elist."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning true if k is in the list of loose_in", "response": "def loose_in(pl,k):\n '''\n pl = ['bcd','xabcxx','x','y']\n loose_in(pl,'abc')\n \n '''\n cond = some(pl,lambda ele:(k in ele))['cond']\n return(cond)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef select_strict_in(pl,k):\n '''\n pl = ['bcd','xabcxx','x','y']\n select_strict_in(pl,'abc')\n '''\n def cond_func(ele,index,k):\n if(type(ele) == type([])):\n cond = (k in ele)\n else:\n cond = (k == ele)\n return(cond)\n arr = cond_select_values_all2(pl,cond_func=cond_func, cond_func_args =[k])\n return(arr)", "response": "Select a single element from a list of strict names."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef regex_in(pl,regex):\n '''\n regex = re.compile(\"^[a-z]+$\")\n pl = ['b1c3d','xab15cxx','1x','y2']\n regex_in(pl,regex)\n \n regex = re.compile(\"^[0-9a-z]+$\")\n pl = ['b1c3d','xab15cxx','1x','y2']\n regex_in(pl,regex)\n \n '''\n def cond_func(ele,regex):\n m = regex.search(ele)\n if(m == None):\n return(False)\n else:\n return(True)\n cond = some(pl,cond_func,regex)['cond']\n return(cond)", "response": "Returns True if the regex is in the list of available log entries."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nselects all the entries that match a regular expression", "response": "def select_regex_in(pl,regex):\n '''\n regex = re.compile(\"^x.*x$\")\n pl = ['bcd','xabcxx','xx','y']\n select_regex_in(pl,'abc')\n '''\n def cond_func(ele,index,regex):\n if(type(ele)==type([])):\n cond = regex_in(ele,regex)\n else:\n m = regex.search(ele)\n if(m == 
None):\n cond = False\n else:\n cond = True\n return(cond)\n arr = cond_select_values_all2(pl,cond_func=cond_func, cond_func_args =[regex])\n return(arr)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nfills the list with the given value", "response": "def fill(ol,value,start=None, end=None,**kwargs):\n '''\n from elist.elist import *\n ol = [1, 2, 3,4,5]\n id(ol)\n rslt = fill(ol,4)\n rslt\n id(rslt)\n ####\n ol = [1, 2, 3,4,5]\n id(ol)\n rslt = fill(ol,4,1)\n rslt\n id(rslt)\n ####\n ol = [1, 2, 3,4,5]\n id(ol)\n rslt = fill(ol,6,1,3,mode=\"original\")\n rslt\n id(rslt)\n '''\n if('mode' in kwargs):\n mode = kwargs['mode']\n else:\n mode = \"new\"\n length = ol.__len__()\n if(start==None):\n start = 0\n else:\n pass\n if(end==None):\n end = length\n else:\n pass\n start = uniform_index(start,length)\n end = uniform_index(end,length)\n new = copy.deepcopy(ol)\n for i in range(start,end):\n new[i] = value\n if(mode == \"new\"):\n return(new)\n else:\n ol.clear()\n ol.extend(new)\n return(ol)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nfilters the list of elist items by test_func", "response": "def filter(ol,test_func,*args,**kwargs):\n '''\n from elist.elist import *\n def test_func(ele,x):\n cond = (ele > x)\n return(cond)\n \n ol = [1,2,3,4]\n id(ol)\n new = filter(ol,test_func,3)\n new\n id(new)\n #####\n ol = [10,20,30,40]\n id(ol)\n rslt = filter(ol,test_func,3,mode=\"original\")\n rslt\n id(rslt)\n '''\n if('mode' in kwargs):\n mode = kwargs['mode']\n else:\n mode = \"new\"\n length = ol.__len__()\n new = []\n cpol = copy.deepcopy(ol)\n for i in range(0,length):\n cond = test_func(cpol[i],*args)\n if(cond):\n new.append(cpol[i])\n else:\n pass\n if(mode == \"new\"):\n return(new)\n else:\n ol.clear()\n ol.extend(new)\n return(ol)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef 
find_last(ol,test_func,*args):\n '''\n from elist.elist import *\n def test_func(ele,x):\n cond = (ele > x)\n return(cond)\n \n ol = [1,2,3,4]\n last = find_last(ol,test_func,3)\n last\n #####\n ol = [10,20,30,40]\n last = find_last(ol,test_func,3)\n last\n '''\n length = ol.__len__()\n for i in range(length-1,-1,-1):\n cond = test_func(ol[i],*args)\n if(cond):\n return({'index':i,'value':ol[i]})\n else:\n pass\n return({'index':None,'value':None})", "response": "find the last entry in the list"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nfinds which in elist. list", "response": "def find_which(ol,which,test_func,*args):\n '''\n from elist.elist import *\n def test_func(ele,x):\n cond = (ele > x)\n return(cond)\n \n ol = [1,2,3,4,5,6,7]\n last = find_which(ol,0,test_func,3)\n last\n last = find_which(ol,2,test_func,3)\n last\n '''\n length = ol.__len__()\n seq = -1\n for i in range(0,length):\n cond = test_func(ol[i],*args)\n if(cond):\n seq = seq + 1\n if(seq == which):\n return({'index':i,'value':ol[i]})\n else:\n pass\n else:\n pass\n return({'index':None,'value':None})"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef find_seqs(ol,seqs,test_func,*args):\n '''\n from elist.elist import *\n def test_func(ele,x):\n cond = (ele > x)\n return(cond)\n \n ol = [1,2,3,4,5,6,7]\n some = find_seqs(ol,[0,3],test_func,3)\n some\n some = find_some(ol,[0,1,2],test_func,3)\n some\n #find_some is the same as find_seqs\n '''\n rslt =[]\n seqs = list(seqs)\n length = ol.__len__()\n seq = -1\n for i in range(0,length):\n cond = test_func(ol[i],*args)\n if(cond):\n seq = seq + 1\n if(seq in seqs):\n rslt.append({'index':i,'value':ol[i]})\n else:\n pass\n else:\n pass\n return(rslt)", "response": "find_seqs - Finds the elements in the list of entries in the order of the elements in the list"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following 
Python 3 function doing\ndef find_allnot(ol,test_func,*args):\n '''\n from elist.elist import *\n from elist.jprint import pobj\n def test_func(ele,x):\n cond = (ele > x)\n return(cond)\n \n ol = [1,2,3,4,5,6,7]\n rslt = find_allnot(ol,test_func,3)\n pobj(rslt)\n '''\n rslt =[]\n length = ol.__len__()\n for i in range(0,length):\n cond = test_func(ol[i],*args)\n if(cond):\n pass\n else:\n rslt.append({'index':i,'value':ol[i]})\n return(rslt)", "response": "find all not in the list"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef reduce_left(ol,callback,initialValue):\n '''\n from elist.elist import *\n def callback(accumulator,currentValue):\n accumulator.append(currentValue[0])\n accumulator.append(currentValue[1])\n return(accumulator)\n \n ol = [(1,2),(\"a\",\"b\"),(\"x\",\"y\")]\n reduce_left(ol,callback,[])\n #array_reduce, reduceLeft ,reduce_left are the same\n '''\n length = ol.__len__()\n accumulator = initialValue\n for i in range(0,length):\n accumulator = callback(accumulator,ol[i])\n return(accumulator)", "response": "reduce_left - reduce the elements of an elist object to the left"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef diff_indexes(l1,l2):\n '''\n from elist.elist import *\n l1 = [1,2,3,5]\n l2 = [0,2,3,4]\n diff_indexes(l1,l2)\n '''\n rslt = []\n for i in range(0,l1.__len__()):\n if(l1[i]!=l2[i]):\n rslt.append(i)\n return(rslt)", "response": "Returns a list of the index indices that differ between two lists of entries."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef diff_values(l1,l2):\n '''\n from elist.elist import *\n l1 = [1,2,3,5]\n l2 = [0,2,3,4]\n diff_values(l1,l2)\n '''\n rslt = []\n for i in range(0,l1.__len__()):\n if(l1[i]!=l2[i]):\n rslt.append(l1[i])\n return(rslt)", "response": "Returns a list of the differences between two lists 
of entries."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn a list of the same elements in the list l1 and l2.", "response": "def same_indexes(l1,l2):\n '''\n from elist.elist import *\n l1 = [1,2,3,5]\n l2 = [0,2,3,4]\n same_indexes(l1,l2)\n '''\n rslt = []\n for i in range(0,l1.__len__()):\n if(l1[i]==l2[i]):\n rslt.append(i)\n return(rslt)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef same_values(l1,l2):\n '''\n from elist.elist import *\n l1 = [1,2,3,5]\n l2 = [0,2,3,4]\n same_values(l1,l2)\n '''\n rslt = []\n for i in range(0,l1.__len__()):\n if(l1[i]==l2[i]):\n rslt.append(l1[i])\n return(rslt)", "response": "Returns a list of all the elements that are the same."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning a dictionary mapping from value to index.", "response": "def value_indexes_mapping(l):\n '''\n from elist.elist import *\n from elist.jprint import pobj\n l = ['a','b','b','a','c','b']\n desc = value_indexes_mapping(l)\n pobj(desc)\n '''\n pt = copy.deepcopy(l)\n desc = {}\n vset = set({})\n for v in pt:\n vset.add(v)\n for v in vset:\n desc[v] = []\n for i in range(0,l.__len__()):\n desc[l[i]].append(i)\n return(desc)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn a dict mapping from a list of elist items to a list of elist items where the key is the index of the item in the list.", "response": "def cond_value_indexes_mapping(l,**kwargs):\n '''\n from elist.elist import *\n l = [('BIGipServer', 'rd19'), ('TS013d8ed5', '0105b6b0'), ('BIGipServer', 'rd19'), ('TS013d8ed5', '0105b6b0'), ('SID', '1'), ('SID', '2')]\n \n def cond_func(ele,*args):\n cond = ele[0]\n return(cond)\n \n desc = cond_value_indexes_mapping(l,cond_func=cond_func)\n pobj(desc)\n \n '''\n cond_func = kwargs['cond_func']\n if('cond_func_args' in kwargs):\n cond_func_args = kwargs['cond_func_args']\n 
else:\n cond_func_args = []\n if('with_none' in kwargs):\n with_none = kwargs['with_none']\n else:\n with_none = False\n desc = {}\n for i in range(0,l.__len__()):\n ele = l[i]\n cond = cond_func(ele,*cond_func_args)\n if((cond == None)&(not(with_none))):\n pass\n else:\n if(cond in desc):\n desc[cond].append(i)\n else:\n desc[cond] = [i]\n return(desc)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef getitem_via_pathlist(ol,pathlist):\n '''\n from elist.elist import *\n y = ['a',['b',[\"bb\"]],'c']\n y[1][1]\n getitem_via_pathlist(y,[1,1])\n '''\n this = ol\n for i in range(0,pathlist.__len__()):\n key = pathlist[i]\n this = this.__getitem__(key)\n return(this)", "response": "getitem_via_pathlist returns the first item in the list"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef setitem_via_pathlist(ol,value,pathlist):\n '''\n from elist.elist import *\n y = ['a',['b',[\"bb\"]],'c']\n y[1][1]\n setitem_via_pathlist(y,\"500\",[1,1])\n y\n '''\n this = ol\n for i in range(0,pathlist.__len__()-1):\n key = pathlist[i]\n this = this.__getitem__(key)\n this.__setitem__(pathlist[-1],value)\n return(ol)", "response": "setitem_via_pathlist - set the value of the first item in the list"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef setitem_via_sibseqs(ol,value,*sibseqs):\n '''\n from elist.elist import *\n y = ['a',['b',[\"bb\"]],'c']\n y[1][1]\n setitem_via_sibseqs(y,\"500\",1,1)\n y\n '''\n this = ol\n pathlist = list(sibseqs)\n for i in range(0,pathlist.__len__()-1):\n key = pathlist[i]\n this = this.__getitem__(key)\n this.__setitem__(pathlist[-1],value)\n return(ol)", "response": "setitem_via_sibseqs - set the value of the first node in the list"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef 
delitem_via_pathlist(ol,pathlist):\n '''\n from elist.elist import *\n y = ['a',['b',[\"bb\"]],'c']\n y[1][1]\n delitem_via_pathlist(y,[1,1])\n y\n '''\n this = ol\n for i in range(0,pathlist.__len__()-1):\n key = pathlist[i]\n this = this.__getitem__(key)\n this.__delitem__(pathlist[-1])\n return(ol)", "response": "Delete the first item in the list of items in the list given by the pathlist."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ndelete an item from an elist. list by deleting the first entry in the list.", "response": "def delitem_via_sibseqs(ol,*sibseqs):\n '''\n from elist.elist import *\n y = ['a',['b',[\"bb\"]],'c']\n y[1][1]\n delitem_via_sibseqs(y,1,1)\n y\n '''\n pathlist = list(sibseqs)\n this = ol\n for i in range(0,pathlist.__len__()-1):\n key = pathlist[i]\n this = this.__getitem__(key)\n this.__delitem__(pathlist[-1])\n return(ol)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef replace_seqs(ol,value,indexes,**kwargs):\n '''\n from elist.elist import *\n ol = [1,'a',3,'a',5,'a',6,'a']\n id(ol)\n new = replace_seqs(ol,'AAA',[1,3,7])\n ol\n new\n id(ol)\n id(new)\n ####\n ol = [1,'a',3,'a',5,'a',6,'a']\n id(ol)\n rslt = replace_seqs(ol,'AAA',[1,3,7],mode=\"original\")\n ol\n rslt\n id(ol)\n id(rslt)\n #replace_indexes = replace_seqs\n '''\n if('mode' in kwargs):\n mode = kwargs[\"mode\"]\n else:\n mode = \"new\"\n indexes = list(indexes)\n new = []\n length = ol.__len__()\n cpol = copy.deepcopy(ol)\n for i in range(0,length):\n if(i in indexes):\n new.append(value)\n else:\n new.append(cpol[i])\n if(mode == \"new\"):\n return(new)\n else:\n ol.clear()\n ol.extend(new)\n return(ol)", "response": "replace_seqs - replace the elements in the list with the given value"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef replace_some(ol,value,*indexes,**kwargs):\n '''\n from elist.elist import 
*\n ol = [1,'a',3,'a',5,'a',6,'a']\n id(ol)\n new = replace_some(ol,'AAA',1,3,7)\n ol\n new\n id(ol)\n id(new)\n ####\n ol = [1,'a',3,'a',5,'a',6,'a']\n id(ol)\n rslt = replace_some(ol,'AAA',1,3,7,mode=\"original\")\n ol\n rslt\n id(ol)\n id(rslt)\n '''\n if('mode' in kwargs):\n mode = kwargs[\"mode\"]\n else:\n mode = \"new\"\n indexes = list(indexes)\n return(replace_seqs(ol,value,indexes,mode=mode))", "response": "Replace some value in the elist."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreplace value in the list with new value in src_value.", "response": "def replace_value_seqs(ol,src_value,dst_value,seqs,**kwargs):\n '''\n from elist.elist import *\n ol = [1,'a',3,'a',5,'a',6,'a']\n id(ol)\n new = replace_value_seqs(ol,'a','AAA',[0,1])\n ol\n new\n id(ol)\n id(new)\n ####\n ol = [1,'a',3,'a',5,'a',6,'a']\n id(ol)\n rslt = replace_value_seqs(ol,'a','AAA',[0,1],mode=\"original\")\n ol\n rslt\n id(ol)\n id(rslt)\n '''\n if('mode' in kwargs):\n mode = kwargs[\"mode\"]\n else:\n mode = \"new\"\n indexes = indexes_seqs(ol,src_value,seqs)\n return(replace_indexes(ol,dst_value,indexes,mode=mode))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreplace some value in the elist with another value.", "response": "def replace_value_some(ol,src_value,dst_value,*seqs,**kwargs):\n '''\n from elist.elist import *\n ol = [1,'a',3,'a',5,'a',6,'a']\n id(ol)\n new = replace_value_some(ol,'a','AAA',0,1)\n ol\n new\n id(ol)\n id(new)\n ####\n ol = [1,'a',3,'a',5,'a',6,'a']\n id(ol)\n rslt = replace_value_some(ol,'a','AAA',0,1,mode=\"original\")\n ol\n rslt\n id(ol)\n id(rslt)\n '''\n if('mode' in kwargs):\n mode = kwargs[\"mode\"]\n else:\n mode = \"new\"\n return(replace_value_seqs(ol,src_value,dst_value,list(seqs),mode=mode))"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef cond_replace_value_some(ol,dst_value,*some,**kwargs):\n '''\n from 
elist.elist import *\n ol = [1,'X',3,'b',5,'c',6,'A',7,'b',8,'B',9]\n id(ol)\n def afterCH(ele,ch):\n cond = (ord(str(ele)) > ord(ch))\n return(cond)\n \n new = cond_replace_value_some(ol,\"REPLACED\",0,2,cond_func=afterCH,cond_func_args=['B'])\n ol\n new\n id(ol)\n id(new)\n ####\n ol = [1,'X',3,'b',5,'c',6,'A',7,'b',8,'B',9]\n id(ol)\n rslt = cond_replace_value_some(ol,\"REPLACED\",0,2,cond_func=afterCH,cond_func_args=['B'],mode=\"original\")\n ol\n rslt\n id(ol)\n id(rslt)\n '''\n cond_func = kwargs['cond_func']\n if('cond_func_args' in kwargs):\n cond_func_args = kwargs['cond_func_args']\n else:\n cond_func_args = []\n if('mode' in kwargs):\n mode = kwargs[\"mode\"]\n else:\n mode = \"new\"\n seqs = list(some)\n new = copy.deepcopy(ol)\n selected = find_all(new,cond_func,*cond_func_args)\n selected_indexes = array_map(selected,lambda ele:ele['index'])\n selected_indexes = select_seqs(selected_indexes,seqs)\n new = replace_seqs(new,dst_value,selected_indexes)\n if(mode == \"new\"):\n return(new)\n else:\n ol.clear()\n ol.extend(new)\n return(ol)", "response": "replace_value_some - Finds and replaces some elements in the list of elists with dst_value"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nspan = [(0, 3), (4, 7), (8, 10), (11, 12), (13, 16), (17, 20)] rangize_supplement(spans,24)", "response": "def rangize_supplement(spans,lngth):\n '''\n spans = [(0, 3), (4, 7), (8, 10), (11, 12), (13, 16), (17, 20)]\n rangize_supplement(spans,24)\n \n '''\n rslt = []\n si = 0\n ei = spans[0][0]\n if(si == ei):\n pass\n else:\n rslt.append((si,ei))\n prev_ei = spans[0][1]\n for i in range(1,spans.__len__()):\n si = prev_ei\n ei = spans[i][0]\n rslt.append((si,ei))\n prev_ei = spans[i][1]\n if(prev_ei < lngth):\n rslt.append((prev_ei,lngth))\n else:\n rslt.append((prev_ei,lngth+1))\n return(rslt)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef range_compress(ol):\n '''\n 
#only support sorted-ints or sorted-ascii\n l = [1,5,6,7,8,13,14,18,30,31,32,33,34]\n range_compress(l)\n l = [1,5,6,7,8,13,14,18,30,31,32,33,34,40]\n range_compress(l)\n l = ['a','b','c','d','j','k','l','m','n','u','y','z']\n range_compress(l)\n '''\n T = (type(ol[0]) == type(0))\n if(T):\n l = ol\n else:\n l = array_map(ol,ord)\n length = l.__len__()\n secs = []\n si = 0\n ei = 0\n prev = l[0]\n for i in range(1,length):\n curr = l[i]\n cond = (curr == (prev+1))\n if(cond):\n ei = i\n prev = curr\n else:\n if(T):\n sec = (l[si],l[ei])\n else:\n sec = (ol[si],ol[ei])\n if(si == ei):\n sec = sec[0]\n else:\n pass\n secs.append(sec)\n si = i\n ei = i \n prev = curr\n if(T):\n sec = (l[si],l[ei])\n else:\n sec = (ol[si],ol[ei])\n if(si == ei):\n sec = sec[0]\n else:\n pass\n secs.append(sec)\n return(secs)", "response": "Compress a sorted list of ints or single ascii chars into a list of scalars and (start,end) pairs covering runs of consecutive values."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef range_decompress(cl):\n '''\n #only support sorted-ints or sorted-ascii\n cl = [1, (5, 8), (13, 14), 18, (30, 34)]\n range_decompress(cl)\n cl = [1, (5, 8), (13, 14), 18, (30, 34), 40]\n range_decompress(cl)\n cl = [('a', 'd'), ('j', 'n'), 'u', ('y', 'z')]\n range_decompress(cl)\n '''\n def cond_func(ele):\n length = ele.__len__()\n cond = (length == 1)\n if(cond):\n return(ord(ele))\n else:\n x = ord(ele[0])\n y = ord(ele[1])\n return((x,y))\n if(type(cl[0])==type(0)):\n T = True\n elif(cl[0].__len__() == 1):\n T = (type(cl[0]) == type(0))\n else:\n T = (type(cl[0][0]) == type(0))\n if(T):\n l = cl \n else:\n l = array_map(cl,cond_func)\n rslt = []\n for i in range(0,l.__len__()):\n ele = l[i]\n if(type(ele) == type(0)):\n arr = [ele]\n elif(ele.__len__() == 1):\n arr = [ele]\n else:\n sv = ele[0]\n ev = ele[1]\n arr = init_range(sv,ev+1,1)\n if(T):\n pass\n else:\n arr = array_map(arr,chr)\n rslt.extend(arr)\n return(rslt)", "response": "expand a compressed range list back into a 
flat list of its original values."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nbreaks a list into consecutive sections at the given break points.", "response": "def broken_seqs(ol,break_points):\n '''\n ol = initRange(0,20,1)\n ol\n break_points = [1,6,14,9]\n secs = broken_seqs(ol,break_points)\n forEach(secs,print)\n '''\n bps = list(break_points)\n length = ol.__len__()\n rgs = rangize(bps,length)\n rslt = []\n for i in range(0,rgs.__len__()):\n si,ei = rgs[i]\n sec = ol[si:ei]\n rslt.append(sec)\n return(rslt)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef brkl2kvlist(arr,interval,sub_pos=1,**kwargs):\n '''\n arr = [\"color1\",\"r1\",\"g1\",\"b1\",\"a1\",\"color2\",\"r2\",\"g2\",\"b2\",\"a2\"]\n brkl2kvlist(arr,5)\n (['color1', 'color2'], [['r1', 'g1', 'b1', 'a1'], ['r2', 'g2', 'b2', 'a2']])\n brkl2kvlist(arr,5,2)\n ([['color1', 'r1'], ['color2', 'r2']], [['g1', 'b1', 'a1'], ['g2', 'b2', 'a2']])\n '''\n lngth = arr.__len__()\n brkseqs1 = init_range(0,lngth,interval)\n brkseqs2 = init_range(sub_pos,lngth,interval)\n brkseqs = interleave(brkseqs1,brkseqs2)\n l = broken_seqs(arr,brkseqs)\n kl = select_evens(l)\n vl = select_odds(l)\n if(\"single_key\" in kwargs):\n single_key = kwargs['single_key']\n else:\n single_key = True\n if(sub_pos == 1):\n if(single_key):\n kl = mapv(kl,lambda ele:ele[0])\n else:\n pass\n else:\n pass\n return((kl,vl))", "response": "Break a flat record array into a (keys, values) pair of lists, treating every interval elements as one record split at sub_pos."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef divide(ol,interval):\n '''\n ol = elel.initRange(0,20,1)\n interval = 3\n rslt = elel.divide(ol,interval)\n rslt\n rslt = elel.divide(ol,4)\n rslt\n '''\n length = ol.__len__()\n seqs = initRange(0,length,interval)\n rslt = broken_seqs(ol,seqs)\n return(rslt)", "response": "Divide a list into consecutive sublists of length interval."} {"SOURCE": "codesearchnet", 
"instruction": "Explain what the following Python 3 code does\ndef split(ol,value,**kwargs):\n '''\n ol = ['a',1,'a',2,'a',3,'a',4,'a']\n split(ol,'a')\n split(ol,'a',whiches=[0])\n split(ol,'a',whiches=[1])\n split(ol,'a',whiches=[2])\n split(ol,'a',whiches=[0,2])\n ol = [1,'a',2,'a',3,'a',4]\n split(ol,'a')\n split('x=bcdsef=g','=',whiches=[0])\n \n '''\n if('whiches' in kwargs):\n whiches = kwargs['whiches'] \n else:\n whiches = None\n indexes = indexes_all(ol,value)\n if(whiches == None):\n pass\n else:\n indexes = select_indexes(indexes,whiches)\n rslt = []\n rslt.append(ol[:indexes[0]])\n si = indexes[0]+1\n for i in range(1,indexes.__len__()):\n ei = indexes[i]\n ele = ol[si:ei]\n rslt.append(ele)\n si = ei + 1\n ele = ol[si:ol.__len__()]\n rslt.append(ele)\n return(rslt)", "response": "Split a list (or string) into sublists at each occurrence of value, optionally only at the occurrences selected by whiches."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef where(ol,value):\n '''\n ol = [0, 4, 6, 8, 10, 14]\n where(ol,-1)\n where(ol,1)\n where(ol,2)\n where(ol,3)\n where(ol,4)\n where(ol,9)\n where(ol,14)\n where(ol,17)\n '''\n si = None\n ei = None\n for i in range(0,ol.__len__()):\n ele = ol[i]\n if(value >ele):\n si = i \n elif(value == ele):\n return((i,i))\n else:\n ei = i \n return((si,ei))\n return((si,ei))", "response": "Return the pair of indexes (si,ei) between which value falls in the sorted list ol, or (i,i) when value is present at index i."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef value_interval(ol,value):\n '''\n ol = [0, 4, 6, 8, 10, 14]\n value_interval(ol,-1)\n value_interval(ol,1)\n value_interval(ol,2)\n value_interval(ol,3)\n value_interval(ol,4)\n value_interval(ol,9)\n value_interval(ol,14)\n value_interval(ol,17)\n '''\n si,ei = where(ol,value)\n if(si == None):\n sv = None\n else:\n sv = ol[si]\n if(ei == None):\n ev = None\n else:\n ev = ol[ei]\n return((sv,ev))", "response": "Return the value of the
key in the order of the key in the order of the value"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef rand_sub(arr,*args,**kwargs):\n '''\n arr = ['scheme', 'username', 'password', 'hostname', 'port', 'path', 'params', 'query', 'fragment']\n rand_sub(arr,3)\n rand_sub(arr,3)\n rand_sub(arr,3)\n rand_sub(arr)\n rand_sub(arr)\n rand_sub(arr)\n '''\n arr = copy.deepcopy(arr)\n lngth = arr.__len__()\n args = list(args)\n if(args.__len__() == 0):\n n = random.randrange(0,lngth)\n else:\n n = args[0]\n if(n>lngth):\n n = lngth\n else:\n pass\n indexes = rand_some_indexes(0,lngth,n,**kwargs)\n narr = select_seqs_keep_order(arr,indexes)\n return(narr)", "response": "Returns a random subset of the array"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef is_leaf(obj):\n '''\n the below is for nested-list\n any type is not list will be treated as a leaf\n empty list will be treated as a leaf\n from elist.elist import *\n is_leaf(1)\n is_leaf([1,2,3])\n is_leaf([])\n '''\n if(is_list(obj)):\n length = obj.__len__()\n if(length == 0):\n return(True)\n else:\n return(False)\n else:\n return(True)", "response": "returns True if the object is a leaf"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef new_ele_description(**kwargs):\n '''\n from elist.elist import *\n from elist.jprint import pobj\n root_desc = new_ele_description(leaf=False,depth=0,breadth_path=[],path=[],parent_path=[],parent_breadth_path=[])\n pobj(root_desc)\n #None means not handled\n '''\n desc = {\n 'leaf':None,\n 'depth':None,\n 'breadth':None,\n 'breadth_path':None,\n 'sib_seq':None,\n 'path':None,\n 'parent_path':None,\n 'parent_breadth_path':None,\n 'lsib_path':None,\n 'rsib_path':None,\n 'lcin_path':None,\n 'rcin_path':None,\n 'sons_count':None,\n 'leaf_son_paths':None,\n 'non_leaf_son_paths':None,\n 
'leaf_descendant_paths':None,\n 'non_leaf_descendant_paths':None,\n 'flat_offset':None,\n 'flat_len':None\n }\n for key in kwargs:\n desc[key.lower()] = kwargs[key]\n return(desc)", "response": "Create a new elist. xml element description."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef init_desc_matrix(l):\n '''\n from elist.elist import *\n from elist.jprint import pobj\n l = [1,[4],2,[3,[5,6]]]\n desc_matrix = init_desc_matrix(l)\n pobj(desc_matrix)\n '''\n leaf = is_leaf(l)\n root_desc = new_ele_description(leaf=leaf,depth=0,breadth_path=[],path=[],parent_path=[],parent_breadth_path=[])\n if(leaf):\n root_desc['sons_count'] = 0\n else:\n pass\n desc_matrix = [\n [root_desc]\n ]\n return(desc_matrix)", "response": "init_desc_matrix returns a list of dicts"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ninitializes the unhandled elements in the list l.", "response": "def _init_unhandled(l,inited_matrix):\n '''\n from elist.elist import *\n from elist.jprint import pobj\n l = [1,[4],2,[3,[5,6]]]\n desc_matrix = init_desc_matrix(l)\n unhandled = _init_unhandled(l,desc_matrix)\n unhandled_data = unhandled['data']\n unhandled_desc = unhandled['desc']\n unhandled_data[0]\n unhandled_desc[0]\n unhandled_data[1]\n unhandled_desc[1]\n '''\n root_desc = inited_matrix[0][0]\n unhandled = {'data':[],'desc':[]}\n length = l.__len__()\n root_desc['sons_count'] = length\n root_desc['leaf_son_paths'] = []\n root_desc['non_leaf_son_paths'] = [] \n if(length == 0):\n pass\n else:\n inited_matrix.append([])\n level = inited_matrix[1]\n for i in range(0,length):\n child = l[i]\n desc = copy.deepcopy(root_desc)\n desc = reset_parent_desc_template(desc)\n desc['depth'] = 1\n desc['breadth'] = i\n desc['parent_breadth_path'] = copy.deepcopy(desc['breadth_path'])\n desc['breadth_path'].append(i)\n desc['sib_seq'] = i\n desc['parent_path'] = 
copy.deepcopy(desc['path'])\n desc['path'].append(i)\n if(i==0):\n pass\n else:\n desc['lsib_path'] = [i-1]\n if(i == (length - 1)):\n pass\n else:\n desc['rsib_path'] = [i+1]\n if(is_leaf(child)):\n desc['leaf'] = True\n desc['sons_count'] = 0\n root_desc['leaf_son_paths'].append(copy.deepcopy(desc['path']))\n else:\n desc['leaf'] = False\n root_desc['non_leaf_son_paths'].append(copy.deepcopy(desc['path']))\n unhandled['data'].append(child)\n unhandled['desc'].append(desc)\n level.append(desc)\n return(unhandled)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nupdates the lsib_path field in the given dict.", "response": "def update_desc_lsib_path(desc):\n '''\n leftSibling\n previousSibling\n leftSib\n prevSib\n lsib\n psib\n \n have the same parent,and on the left\n '''\n if(desc['sib_seq']>0):\n lsib_path = copy.deepcopy(desc['path'])\n lsib_path[-1] = desc['sib_seq']-1\n desc['lsib_path'] = lsib_path\n else:\n pass\n return(desc)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef update_desc_rsib_path(desc,sibs_len):\n '''\n rightSibling\n nextSibling\n rightSib\n nextSib\n rsib\n nsib\n \n have the same parent,and on the right\n '''\n if(desc['sib_seq']<(sibs_len-1)):\n rsib_path = copy.deepcopy(desc['path'])\n rsib_path[-1] = desc['sib_seq']+1\n desc['rsib_path'] = rsib_path\n else:\n pass\n return(desc)", "response": "Update the rsib_path field of the given descriptor."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nupdate lcin_path field in the given desc.", "response": "def update_desc_lcin_path(desc,pdesc_level):\n '''\n leftCousin\n previousCousin\n leftCin\n prevCin\n lcin\n pcin\n \n parents are neighbors,and on the left\n '''\n parent_breadth = desc['parent_breadth_path'][-1]\n if(desc['sib_seq']==0):\n if(parent_breadth==0):\n pass\n else:\n parent_lsib_breadth = parent_breadth - 1\n plsib_desc = 
pdesc_level[parent_lsib_breadth]\n if(plsib_desc['leaf']):\n pass\n else:\n lcin_path = copy.deepcopy(plsib_desc['path'])\n lcin_path.append(plsib_desc['sons_count'] - 1)\n desc['lcin_path'] = lcin_path\n else:\n pass\n return(desc)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef update_desc_rcin_path(desc,sibs_len,pdesc_level):\n '''\n rightCousin\n nextCousin\n rightCin\n nextCin\n rcin\n ncin\n \n parents are neighbors,and on the right\n '''\n psibs_len = pdesc_level.__len__()\n parent_breadth = desc['parent_breadth_path'][-1]\n if(desc['sib_seq']==(sibs_len - 1)):\n if(parent_breadth==(psibs_len -1)):\n pass\n else:\n parent_rsib_breadth = parent_breadth + 1\n prsib_desc = pdesc_level[parent_rsib_breadth]\n #because from left to right to handle each level\n #sons_count will only be updated in the next-round \n if(prsib_desc['leaf']):\n pass\n else:\n rcin_path = copy.deepcopy(prsib_desc['path'])\n rcin_path.append(0)\n desc['rcin_path'] = rcin_path\n else:\n pass\n return(desc)", "response": "Update desc with rcin_path based on the tree tree"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef scan(l,**kwargs):\n '''\n from elist.elist import *\n from elist.jprint import pobj\n l = [1,[4],2,[3,[5,6]]]\n desc = description(l)\n l = [1,2,[4],[3,[5,6]]]\n desc = description(l)\n '''\n if('itermode' in kwargs):\n itermode = True\n else:\n itermode = False\n ####level == 0\n desc_matrix = init_desc_matrix(l)\n if(desc_matrix[0][0]['leaf'] == True):\n return(desc_matrix)\n else:\n pass\n ####cache\n lcache=LevelCache(datas=l,descs=desc_matrix[0][0])\n scache=StateCache(desc_matrix)\n pcache = init_pcache_handler_inline(kwargs)\n ####level > 0\n while(lcache.data.__len__() > 0):\n #add next desc_level \n scache.update()\n for unhandled_seq in range(0,lcache.data.__len__()):\n #handle parent\n pcache.update_pdesc(lcache,unhandled_seq)\n for sib_seq in 
range(0,pcache.sibs_len):\n #handle child\n pcache.update_desc(lcache,scache,sib_seq)\n #update level lcache\n lcache.update()\n return(desc_matrix)", "response": "scan the list l for a single entry in the tree and return a list of dictionaries where each entry is a dictionary of the key - value pairs that are used to identify the entry in the tree."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef fullfill_descendants_info(desc_matrix):\n '''\n flat_offset\n '''\n pathloc_mapping = {}\n locpath_mapping = {}\n #def leaf_handler(desc,pdesc,offset):\n def leaf_handler(desc,pdesc):\n #desc['flat_offset'] = (offset,offset+1)\n desc['non_leaf_son_paths'] = []\n desc['leaf_son_paths'] = []\n desc['non_leaf_descendant_paths'] = []\n desc['leaf_descendant_paths'] = []\n desc['flat_len'] = 1\n if(pdesc['flat_len']):\n pdesc['flat_len'] = pdesc['flat_len'] + 1\n else:\n pdesc['flat_len'] = 1\n #def non_leaf_handler(desc,pdesc,offset):\n def non_leaf_handler(desc,pdesc):\n #desc['flat_offset'] = (offset,offset+desc['flat_len'])\n pdesc['non_leaf_descendant_paths'].extend(copy.deepcopy(desc['non_leaf_descendant_paths']))\n pdesc['leaf_descendant_paths'].extend(copy.deepcopy(desc['leaf_descendant_paths']))\n if(pdesc['flat_len']):\n pdesc['flat_len'] = pdesc['flat_len'] + desc['flat_len']\n else:\n pdesc['flat_len'] = desc['flat_len']\n def fill_path_mapping(desc):\n pmk = tuple(desc['path'])\n pmv = tuple(DescMatrix.loc(desc))\n pathloc_mapping[pmk] = pmv\n locpath_mapping[pmv] = pmk\n dm = DescMatrix(desc_matrix)\n depth = desc_matrix.__len__()\n desc_level = desc_matrix[depth - 1]\n length = desc_level.__len__()\n #the last level\n #offset = 0\n for j in range(length - 1,-1,-1):\n desc = desc_level[j]\n fill_path_mapping(desc)\n pdesc = dm.pdesc(desc)\n leaf_handler(desc,pdesc)\n #leaf_handler(desc,pdesc,offset)\n #offset = offset + 1\n for i in range(depth-2,0,-1):\n #offset = 0\n desc_level = desc_matrix[i]\n 
length = desc_level.__len__()\n for j in range(length-1,-1,-1):\n desc = desc_level[j]\n fill_path_mapping(desc)\n pdesc = dm.pdesc(desc)\n if(desc['leaf']):\n leaf_handler(desc,pdesc)\n #leaf_handler(desc,pdesc,offset)\n #offset = offset + 1\n else:\n non_leaf_handler(desc,pdesc)\n #non_leaf_handler(desc,pdesc,offset)\n #offset = offset + desc['flat_len']\n desc_matrix[0][0]['flat_offset'] = (0,desc_matrix[0][0]['flat_len'])\n for i in range(0,depth-1):\n pdesc_level = desc_matrix[i]\n length = pdesc_level.__len__()\n for j in range(0,length):\n pdesc = pdesc_level[j]\n si = pdesc['flat_offset'][0]\n for i in range(0,pdesc['sons_count']):\n spl = append(pdesc['path'],i,mode='new')\n pk = tuple(spl)\n locx,locy = pathloc_mapping[pk]\n son = desc_matrix[locx][locy]\n ei = si + son['flat_len']\n son['flat_offset'] = (si,ei)\n si = ei\n return(desc_matrix,pathloc_mapping,locpath_mapping)", "response": "fullfills the leaf and non - leaf descendant trees info"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning a string representation of a pathlist.", "response": "def pathlist_to_getStr(path_list):\n '''\n >>> pathlist_to_getStr([1, '1', 2])\n \"[1]['1'][2]\"\n >>>\n '''\n t1 = path_list.__repr__()\n t1 = t1.lstrip('[')\n t1 = t1.rstrip(']')\n t2 = t1.split(\", \")\n s = ''\n for i in range(0,t2.__len__()):\n s = ''.join((s,'[',t2[i],']'))\n return(s)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_block_op_pairs(pairs_str):\n '''\n # >>> get_block_op_pairs(\"{}[]\") \n # {1: ('{', '}'), 2: ('[', ']')}\n # >>> get_block_op_pairs(\"{}[]()\")\n # {1: ('{', '}'), 2: ('[', ']'), 3: ('(', ')')}\n # >>> get_block_op_pairs(\"{}[]()<>\")\n # {1: ('{', '}'), 2: ('[', ']'), 3: ('(', ')'), 4: ('<', '>')}\n '''\n pairs_str_len = pairs_str.__len__()\n pairs_len = pairs_str_len // 2\n pairs_dict = {}\n for i in range(1,pairs_len +1):\n pairs_dict[i] = pairs_str[i*2-2],pairs_str[i*2-1]\n 
return(pairs_dict)", "response": "Get the list of key - value pairs from a string."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef is_lop(ch,block_op_pairs_dict=get_block_op_pairs('{}[]()')):\n '''\n # is_lop('{',block_op_pairs_dict)\n # is_lop('[',block_op_pairs_dict)\n # is_lop('}',block_op_pairs_dict)\n # is_lop(']',block_op_pairs_dict)\n # is_lop('a',block_op_pairs_dict)\n '''\n for i in range(1,block_op_pairs_dict.__len__()+1):\n if(ch == block_op_pairs_dict[i][0]):\n return(True)\n else:\n pass\n return(False)", "response": "Check if a character is a LP."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_next_char_level_in_j_str(curr_lv,curr_seq,j_str,block_op_pairs_dict=get_block_op_pairs(\"{}[]()\")):\n ''' the first-char is level-1\n when current is non-op, next-char-level = curr-level\n when current is lop, non-paired-rop-next-char-level = lop-level+1;\n when current is lop, paired-rop-next-char-level = lop-level\n when current is rop, next-char-level = rop-level - 1\n # {\"key_4_UF0aJJ6v\": \"value_1\", \"key_2_Hd0t\": [\"value_16\", \"value_8\", \"value_8\", \"value_15\", \"value_14\", \"value_19\", {......\n # 122222222222222222222222222222222222222222222333333333333333333333333333333333333333333333333333333333333333333333334......\n # {\\n\"key_4_UF0aJJ6v\": \"value_1\", \\n\"key_2_Hd0t\": [\\n\"value_16\", \\n\"value_8\", \\n\"value_8\", \\n\"value_15\", \\n\"value_14\", \\n\"value_19\",...... \n # 1 222222222222222222222222222222 2222222222222222 3333333333333 333333333333 333333333333 3333333333333 3333333333333 3333333333333...... 
\n '''\n curr_ch = j_str[curr_seq]\n next_ch = j_str[curr_seq + 1]\n cond = 0\n for i in range(1,block_op_pairs_dict.__len__()+1):\n if(curr_ch == block_op_pairs_dict[i][0]):\n if(next_ch == block_op_pairs_dict[i][1]):\n next_lv = curr_lv \n else:\n next_lv = curr_lv + 1\n cond = 1\n break\n elif(curr_ch == block_op_pairs_dict[i][1]):\n if(is_rop(next_ch,block_op_pairs_dict)):\n next_lv = curr_lv - 1\n else:\n next_lv = curr_lv\n cond = 1\n break\n else:\n pass\n if(cond == 1):\n pass\n elif(is_rop(next_ch,block_op_pairs_dict)):\n next_lv = curr_lv - 1\n else: \n next_lv = curr_lv\n curr_lv = next_lv\n curr_seq = curr_seq + 1\n return(curr_lv,curr_lv,curr_seq)", "response": "get next level in J_Str"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn the width of the string s in a human readable format.", "response": "def str_display_width(s):\n '''\n from elist.utils import *\n str_display_width('a')\n str_display_width('\u53bb')\n '''\n s= str(s)\n width = 0\n len = s.__len__()\n for i in range(0,len):\n sublen = s[i].encode().__len__()\n sublen = int(sublen/2 + 1/2)\n width = width + sublen\n return(width)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets the WFS matrix for a list of objects.", "response": "def get_wfsmat(l):\n '''\n l = ['v_7', 'v_3', 'v_1', 'v_4', ['v_4', 'v_2'], 'v_5', 'v_6', 'v_1', 'v_6', 'v_7', 'v_5', ['v_4', ['v_1', 'v_8', 'v_3', 'v_4', 'v_2', 'v_7', [['v_3', 'v_2'], 'v_4', 'v_5', 'v_1', 'v_3', 'v_1', 'v_2', 'v_5', 'v_8', 'v_8', 'v_7'], 'v_5', 'v_8', 'v_7', 'v_1', 'v_5'], 'v_6'], 'v_4', 'v_5', 'v_8', 'v_5']\n get_wfs(l)\n '''\n ltree = ListTree(l)\n vdescmat = ltree.desc\n wfsmat = matrix_map(vdescmat,lambda v,ix,iy:v['path'])\n wfsmat.pop(0)\n return(wfsmat)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nconverts a list of WFS to a list of WFSs.", "response": "def wfs2mat(wfs):\n '''\n wfs = [[0], [1], [2], [3], [4], 
[5], [6], [7], [8], [9], [10], [11], [12], [13], [14], [15], [4, 0], [4, 1], [11, 0], [11, 1], [11, 2], [11, 1, 0], [11, 1, 1], [11, 1, 2], [11, 1, 3], [11, 1, 4], [11, 1, 5], [11, 1, 6], [11, 1, 7], [11, 1, 8], [11, 1, 9], [11, 1, 10], [11, 1, 11], [11, 1, 6, 0], [11, 1, 6, 1], [11, 1, 6, 2], [11, 1, 6, 3], [11, 1, 6, 4], [11, 1, 6, 5], [11, 1, 6, 6], [11, 1, 6, 7], [11, 1, 6, 8], [11, 1, 6, 9], [11, 1, 6, 10], [11, 1, 6, 0, 0], [11, 1, 6, 0, 1]]\n '''\n wfsmat = []\n depth = 0\n level = filter(wfs,lambda ele:ele.__len__()==1)\n while(level.__len__()>0):\n wfsmat.append([])\n wfsmat[depth] = level\n depth = depth+1\n level = filter(wfs,lambda ele:ele.__len__()==depth+1)\n return(wfsmat)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef dfs2wfsmat(dfs):\n '''\n dfs = [[0], [1], [2], [3], [4], [4, 0], [4, 1], [5], [6], [7], [8], [9], [10], [11], [11, 0], [11, 1], [11, 1, 0], [11, 1, 1], [11, 1, 2], [11, 1, 3], [11, 1, 4], [11, 1, 5], [11, 1, 6], [11, 1, 6, 0], [11, 1, 6, 0, 0], [11, 1, 6, 0, 1], [11, 1, 6, 1], [11, 1, 6, 2], [11, 1, 6, 3], [11, 1, 6, 4], [11, 1, 6, 5], [11, 1, 6, 6], [11, 1, 6, 7], [11, 1, 6, 8], [11, 1, 6, 9], [11, 1, 6, 10], [11, 1, 7], [11, 1, 8], [11, 1, 9], [11, 1, 10], [11, 1, 11], [11, 2], [12], [13], [14], [15]]\n \n dfs2wfs(dfs)\n '''\n wfsmat = []\n depth = 0\n level = filter(dfs,lambda ele:ele.__len__()==1)\n while(level.__len__()>0):\n wfsmat.append([])\n wfsmat[depth] = level\n depth = depth+1\n level = filter(dfs,lambda ele:ele.__len__()==depth+1)\n return(wfsmat)", "response": "Convert a DFS matrix into a WFSMatrix."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef non_leaf_handler(self,lcache):\n '''#nonleaf child handler'''\n desc = self.desc\n pdesc = self.pdesc\n desc['leaf'] = False\n pdesc['non_leaf_son_paths'].append(copy.deepcopy(desc['path']))\n 
pdesc['non_leaf_descendant_paths'].append(copy.deepcopy(desc['path']))\n lcache.ndata.append(self.data)\n lcache.ndesc.append(desc)", "response": "non - leaf child handler"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef child_end_handler(self,scache):\n '''\n _upgrade_breadth_info\n update breadth, breadth_path, and add desc to desc_level\n '''\n desc = self.desc\n desc_level = scache.desc_level\n breadth = desc_level.__len__()\n desc['breadth'] = breadth\n desc['breadth_path'].append(breadth)\n desc_level.append(desc)", "response": "Update breadth and breadth_path and add desc to desc_level"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef parse(self, source):\n command_regex = self._make_command_regex(self.name)\n for match in re.finditer(command_regex, source):\n self._logger.debug(match)\n start_index = match.start(0)\n yield self._parse_command(source, start_index)", "response": "Parse the LaTeX source and return a list of ParsedCommand instances."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nparsing a single command from the source string at the given index.", "response": "def _parse_command(self, source, start_index):\n \"\"\"Parse a single command.\n\n Parameters\n ----------\n source : `str`\n The full source of the tex document.\n start_index : `int`\n Character index in ``source`` where the command begins.\n\n Returns\n -------\n parsed_command : `ParsedCommand`\n The parsed command from the source at the given index.\n \"\"\"\n parsed_elements = []\n\n # Index of the parser in the source\n running_index = start_index\n\n for element in self.elements:\n opening_bracket = element['bracket']\n closing_bracket = self._brackets[opening_bracket]\n\n # Find the opening bracket.\n element_start = None\n element_end = None\n for i, c in enumerate(source[running_index:], start=running_index):\n if c == 
element['bracket']:\n element_start = i\n break\n elif c == '\\n':\n # No starting bracket on the line.\n if element['required'] is True:\n # Try to parse a single single-word token after the\n # command, like '\\input file'\n content = self._parse_whitespace_argument(\n source[running_index:],\n self.name)\n return ParsedCommand(\n self.name,\n [{'index': element['index'],\n 'name': element['name'],\n 'content': content.strip()}],\n start_index,\n source[start_index:i])\n else:\n # Give up on finding an optional element\n break\n\n # Handle cases when the opening bracket is never found.\n if element_start is None and element['required'] is False:\n # Optional element not found. Continue to next element,\n # not advancing the running_index of the parser.\n continue\n elif element_start is None and element['required'] is True:\n message = ('Parsing command {0} at index {1:d}, '\n 'did not detect element {2:d}'.format(\n self.name,\n start_index,\n element['index']))\n raise CommandParserError(message)\n\n # Find the closing bracket, keeping track of the number of times\n # the same type of bracket was opened and closed.\n balance = 1\n for i, c in enumerate(source[element_start + 1:],\n start=element_start + 1):\n if c == opening_bracket:\n balance += 1\n elif c == closing_bracket:\n balance -= 1\n\n if balance == 0:\n element_end = i\n break\n\n if balance > 0:\n message = ('Parsing command {0} at index {1:d}, '\n 'did not find closing bracket for required '\n 'command element {2:d}'.format(\n self.name,\n start_index,\n element['index']))\n raise CommandParserError(message)\n\n # Package the parsed element's content.\n element_content = source[element_start + 1:element_end]\n parsed_element = {\n 'index': element['index'],\n 'name': element['name'],\n 'content': element_content.strip()\n }\n parsed_elements.append(parsed_element)\n\n running_index = element_end + 1\n\n command_source = source[start_index:running_index]\n parsed_command = ParsedCommand(self.name, 
parsed_elements,\n start_index, command_source)\n return parsed_command"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _parse_whitespace_argument(source, name):\n # First match the command name itself so that we find the argument\n # *after* the command\n command_pattern = r'\\\\(' + name + r')(?:[\\s{[%])'\n command_match = re.search(command_pattern, source)\n if command_match is not None:\n # Trim `source` so we only look after the command\n source = source[command_match.end(1):]\n\n # Find the whitespace-delimited argument itself.\n pattern = r'(?P\\S+)(?:[ %\\t\\n]+)'\n match = re.search(pattern, source)\n if match is None:\n message = (\n 'When parsing {}, did not find whitespace-delimited command '\n 'argument'\n )\n raise CommandParserError(message.format(name))\n content = match.group('content')\n content.strip()\n return content", "response": "r Attempt to parse a single token on the first line of this source."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _tmdd_datetime_to_iso(dt, include_offset=True, include_seconds=True):\n datestring = dt.findtext('date')\n timestring = dt.findtext('time')\n assert len(datestring) == 8\n assert len(timestring) >= 6\n iso = datestring[0:4] + '-' + datestring[4:6] + '-' + datestring[6:8] + 'T' \\\n + timestring[0:2] + ':' + timestring[2:4]\n if include_seconds:\n iso += ':' + timestring[4:6]\n if include_offset:\n offset = dt.findtext('offset')\n if offset:\n assert len(offset) == 5\n iso += offset[0:3] + ':' + offset[3:5]\n else:\n raise Exception(\"TMDD date is not timezone-aware: %s\" % etree.tostring(dt))\n return iso", "response": "Convert a tmdd datetime to an ISO8601 string."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ngenerate automatic headline for an event.", "response": "def _generate_automatic_headline(c):\n \"\"\"The only field 
that maps closely to Open511's headline, a required field, is optional\n in TMDD. So we sometimes need to generate our own.\"\"\"\n # Start with the event type, e.g. \"Incident\"\n headline = c.data['event_type'].replace('_', ' ').title()\n if c.data['roads']:\n # Add the road name\n headline += ' on ' + c.data['roads'][0]['name']\n direction = c.data['roads'][0].get('direction')\n if direction and direction not in ('BOTH', 'NONE'):\n headline += ' ' + direction\n return headline"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngets severity from the event-indicator.", "response": "def _get_severity(c):\n \"\"\"\n 1. Collect all severity and impact values.\n 2. Convert impact-level of 1-3 to MINOR, 4-7 to MODERATE, 8-10 to MAJOR\n 3. Map severity -> none to MINOR, natural-disaster to MAJOR, other to UNKNOWN\n 4. Pick the highest severity.\n \"\"\"\n severities = c.feu.xpath('event-indicators/event-indicator/event-severity/text()|event-indicators/event-indicator/severity/text()')\n impacts = c.feu.xpath('event-indicators/event-indicator/event-impact/text()|event-indicators/event-indicator/impact/text()')\n\n severities = [convert_severity[s] for s in severities]\n impacts = [convert_impact[i] for i in impacts]\n\n return ['UNKNOWN', 'MINOR', 'MODERATE', 'MAJOR'][max(itertools.chain(severities, impacts))]"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef list_from_document(cls, doc):\n objs = []\n for feu in doc.xpath('//FEU'):\n detail_els = feu.xpath('event-element-details/event-element-detail')\n for idx, detail in enumerate(detail_els):\n objs.append(cls(feu, detail, id_suffix=idx, number_in_group=len(detail_els)))\n return objs", "response": "Returns a list of TMDDEventConverter elements. 
doc is an XML Element containing one or more events\n "} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef add_geo(self, geo_location):\n if not (geo_location.xpath('latitude') and geo_location.xpath('longitude')):\n raise Exception(\"Invalid geo-location %s\" % etree.tostring(geo_location))\n if _xpath_or_none(geo_location, 'horizontal-datum/text()') not in ('wgs84', None):\n logger.warning(\"Unsupported horizontal-datum in %s\" % etree.tostring(geo_location))\n return\n point = (\n float(_xpath_or_none(geo_location, 'longitude/text()')) / 1000000,\n float(_xpath_or_none(geo_location, 'latitude/text()')) / 1000000\n )\n self.points.add(point)", "response": "Adds a geo-location element to the Open511 national domain."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef clone(src, dst_path, skip_globals, skip_dimensions, skip_variables):\n\n if os.path.exists(dst_path):\n os.unlink(dst_path)\n dst = netCDF4.Dataset(dst_path, 'w')\n\n # Global attributes\n for attname in src.ncattrs():\n if attname not in skip_globals:\n setattr(dst, attname, getattr(src, attname))\n\n # Dimensions\n unlimdim = None\n unlimdimname = False\n for dimname, dim in src.dimensions.items():\n\n # Skip what we need to\n if dimname in skip_dimensions:\n continue\n\n if dim.isunlimited():\n unlimdim = dim\n unlimdimname = dimname\n dst.createDimension(dimname, None)\n else:\n dst.createDimension(dimname, len(dim))\n\n # Variables\n for varname, ncvar in src.variables.items():\n\n # Skip what we need to\n if varname in skip_variables:\n continue\n\n hasunlimdim = False\n if unlimdimname and unlimdimname in ncvar.dimensions:\n hasunlimdim = True\n\n filler = None\n if hasattr(ncvar, '_FillValue'):\n filler = ncvar._FillValue\n\n if ncvar.chunking() == \"contiguous\":\n var = dst.createVariable(varname, ncvar.dtype, ncvar.dimensions, fill_value=filler)\n else:\n var = 
dst.createVariable(varname, ncvar.dtype, ncvar.dimensions, fill_value=filler, chunksizes=ncvar.chunking())\n\n # Attributes\n for attname in ncvar.ncattrs():\n if attname == '_FillValue':\n continue\n else:\n setattr(var, attname, getattr(ncvar, attname))\n\n # Data\n nchunk = 1000\n if hasunlimdim:\n if nchunk:\n start = 0\n stop = len(unlimdim)\n step = nchunk\n if step < 1:\n step = 1\n for n in range(start, stop, step):\n nmax = n + nchunk\n if nmax > len(unlimdim):\n nmax = len(unlimdim)\n idata = ncvar[n:nmax]\n var[n:nmax] = idata\n else:\n idata = ncvar[:]\n var[0:len(unlimdim)] = idata\n else:\n idata = ncvar[:]\n var[:] = idata\n\n dst.sync()\n\n src.close()\n dst.close()", "response": "Clone a netCDF4.Dataset object."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns a Pandas DataFrame of the data.", "response": "def get_dataframe_from_variable(nc, data_var):\n \"\"\" Returns a Pandas DataFrame of the data.\n This always returns positive down depths\n \"\"\"\n time_var = nc.get_variables_by_attributes(standard_name='time')[0]\n\n depth_vars = nc.get_variables_by_attributes(axis=lambda v: v is not None and v.lower() == 'z')\n depth_vars += nc.get_variables_by_attributes(standard_name=lambda v: v in ['height', 'depth', 'surface_altitude'], positive=lambda x: x is not None)\n\n # Find the correct depth variable\n depth_var = None\n for d in depth_vars:\n try:\n if d._name in data_var.coordinates.split(\" \") or d._name in data_var.dimensions:\n depth_var = d\n break\n except AttributeError:\n continue\n\n times = netCDF4.num2date(time_var[:], units=time_var.units, calendar=getattr(time_var, 'calendar', 'standard'))\n original_times_size = times.size\n\n if depth_var is None and hasattr(data_var, 'sensor_depth'):\n depth_type = get_type(data_var.sensor_depth)\n depths = np.asarray([data_var.sensor_depth] * len(times)).flatten()\n values = data_var[:].flatten()\n elif depth_var is None:\n depths = 
np.asarray([np.nan] * len(times)).flatten()\n depth_type = get_type(depths)\n values = data_var[:].flatten()\n else:\n depths = depth_var[:]\n depth_type = get_type(depths)\n if len(data_var.shape) > 1:\n times = np.repeat(times, depths.size)\n depths = np.tile(depths, original_times_size)\n values = data_var[:, :].flatten()\n else:\n values = data_var[:].flatten()\n\n if getattr(depth_var, 'positive', 'down').lower() == 'up':\n logger.warning(\"Converting depths to positive down before returning the DataFrame\")\n depths = depths * -1\n\n # https://github.com/numpy/numpy/issues/4595\n # We can't call astype on a MaskedConstant\n if (\n isinstance(depths, np.ma.core.MaskedConstant) or\n (hasattr(depths, 'mask') and depths.mask.all())\n ):\n depths = np.asarray([np.nan] * len(times)).flatten()\n\n df = pd.DataFrame({ 'time': times,\n 'value': values.astype(data_var.dtype),\n 'unit': data_var.units if hasattr(data_var, 'units') else np.nan,\n 'depth': depths.astype(depth_type) })\n\n df.set_index([pd.DatetimeIndex(df['time']), pd.Float64Index(df['depth'])], inplace=True)\n return df"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nsending a request to the GitHub v4 GraphQL API.", "response": "async def github_request(session, api_token,\n query=None, mutation=None, variables=None):\n \"\"\"Send a request to the GitHub v4 (GraphQL) API.\n\n The request is asynchronous, with asyncio.\n\n Parameters\n ----------\n session : `aiohttp.ClientSession`\n Your application's aiohttp client session.\n api_token : `str`\n A GitHub personal API token. See the `GitHub personal access token\n guide`_.\n query : `str` or `GitHubQuery`\n GraphQL query string. If provided, then the ``mutation`` parameter\n should not be set. For examples, see the `GitHub guide to query and\n mutation operations`_.\n mutation : `str` or `GitHubQuery`\n GraphQL mutation string. If provided, then the ``query`` parameter\n should not be set. 
For examples, see the `GitHub guide to query and\n mutation operations`_.\n variables : `dict`\n GraphQL variables, as a JSON-compatible dictionary. This is only\n required if the ``query`` or ``mutation`` uses GraphQL variables.\n\n Returns\n -------\n data : `dict`\n Parsed JSON as a `dict` object.\n\n .. `GitHub personal access token guide`: https://ls.st/41d\n .. `GitHub guide to query and mutation operations`: https://ls.st/9s7\n \"\"\"\n payload = {}\n if query is not None:\n payload['query'] = str(query) # converts a GitHubQuery\n if mutation is not None:\n payload['mutation'] = str(mutation) # converts a GitHubQuery\n if variables is not None:\n payload['variables'] = variables\n\n headers = {'Authorization': 'token {}'.format(api_token)}\n\n url = 'https://api.github.com/graphql'\n async with session.post(url, json=payload, headers=headers) as response:\n data = await response.json()\n\n return data"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nloading a pre - made query.", "response": "def load(cls, query_name):\n \"\"\"Load a pre-made query.\n\n These queries are distributed with lsstprojectmeta. 
See\n :file:`lsstrojectmeta/data/githubv4/README.rst` inside the\n package repository for details on available queries.\n\n Parameters\n ----------\n query_name : `str`\n Name of the query, such as ``'technote_repo'``.\n\n Returns\n -------\n github_query : `GitHubQuery\n A GitHub query or mutation object that you can pass to\n `github_request` to execute the request itself.\n \"\"\"\n template_path = os.path.join(\n os.path.dirname(__file__),\n '../data/githubv4',\n query_name + '.graphql')\n\n with open(template_path) as f:\n query_data = f.read()\n\n return cls(query_data, name=query_name)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef read_git_commit_timestamp_for_file(filepath, repo_path=None, repo=None):\n logger = logging.getLogger(__name__)\n\n if repo is None:\n repo = git.repo.base.Repo(path=repo_path,\n search_parent_directories=True)\n repo_path = repo.working_tree_dir\n\n head_commit = repo.head.commit\n\n # filepath relative to the repo path\n logger.debug('Using Git repo at %r', repo_path)\n filepath = os.path.relpath(\n os.path.abspath(filepath),\n start=repo_path)\n logger.debug('Repo-relative filepath is %r', filepath)\n\n # Most recent commit datetime of the given file.\n # Don't use head_commit.iter_parents because then it skips the\n # commit of a file that's added but never modified.\n for commit in head_commit.iter_items(repo,\n head_commit,\n [filepath],\n skip=0):\n return commit.committed_datetime\n\n # Only get here if git could not find the file path in the history\n raise IOError('File {} not found'.format(filepath))", "response": "Obtain the timestamp for the most recent commit to a given file in a Git repository."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nget the datetime for the most recent commit to a project that affected certain types of content.", "response": "def get_content_commit_date(extensions, acceptance_callback=None,\n 
root_dir='.'):\n \"\"\"Get the datetime for the most recent commit to a project that\n affected certain types of content.\n\n Parameters\n ----------\n extensions : sequence of `str`\n Extensions of files to consider in getting the most recent commit\n date. For example, ``('rst', 'svg', 'png')`` are content extensions\n for a Sphinx project. **Extension comparison is case sensitive.** Add\n uppercase variants to match uppercase extensions.\n acceptance_callback : callable\n Callable function whose sole argument is a file path, and returns\n `True` or `False` depending on whether the file's commit date should\n be considered or not. This callback is only run on files that are\n included by ``extensions``. Thus this callback is a way to exclude\n specific files that would otherwise be included by their extension.\n root_dir : `str`, optional\n Only content contained within this root directory is considered.\n This directory must be, or be contained by, a Git repository. This is\n the current working directory by default.\n\n Returns\n -------\n commit_date : `datetime.datetime`\n Datetime of the most recent content commit.\n\n Raises\n ------\n RuntimeError\n Raised if no content files are found.\n \"\"\"\n logger = logging.getLogger(__name__)\n\n def _null_callback(_):\n return True\n\n if acceptance_callback is None:\n acceptance_callback = _null_callback\n\n # Cache the repo object for each query\n root_dir = os.path.abspath(root_dir)\n repo = git.repo.base.Repo(path=root_dir, search_parent_directories=True)\n\n # Iterate over all files with all file extensions, looking for the\n # newest commit datetime.\n newest_datetime = None\n iters = [_iter_filepaths_with_extension(ext, root_dir=root_dir)\n for ext in extensions]\n for content_path in itertools.chain(*iters):\n content_path = os.path.abspath(os.path.join(root_dir, content_path))\n\n if acceptance_callback(content_path):\n logger.debug('Found content path %r', content_path)\n try:\n commit_datetime = read_git_commit_timestamp_for_file(\n content_path, repo=repo)\n logger.debug('Commit timestamp of %r is %s',\n content_path, commit_datetime)\n except IOError:\n logger.warning(\n 'Could not get commit for %r, skipping',\n content_path)\n continue\n\n if not newest_datetime or commit_datetime > newest_datetime:\n # Seed initial newest_datetime\n # or set a newer newest_datetime\n newest_datetime = commit_datetime\n logger.debug('Newest commit timestamp is %s', newest_datetime)\n\n logger.debug('Final commit timestamp is %s', newest_datetime)\n\n if newest_datetime is None:\n raise RuntimeError('No content files found in {}'.format(root_dir))\n\n return newest_datetime"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\niterate over relative filepaths of files in a directory and sub-directories with the given extension.", "response": "def _iter_filepaths_with_extension(extname, root_dir='.'):\n \"\"\"Iterate over relative filepaths of files in a directory, and\n sub-directories, with the given extension.\n\n Parameters\n ----------\n extname : `str`\n Extension name (such as 'txt' or 'rst'). Extension comparison is\n case sensitive.\n root_dir : `str`, optional\n Root directory. Current working directory by default.\n\n Yields\n ------\n filepath : `str`\n File path, relative to ``root_dir``, with the given extension.\n \"\"\"\n # needed for comparison with os.path.splitext\n if not extname.startswith('.'):\n extname = '.' 
+ extname\n\n root_dir = os.path.abspath(root_dir)\n\n for dirname, sub_dirnames, filenames in os.walk(root_dir):\n for filename in filenames:\n if os.path.splitext(filename)[-1] == extname:\n full_filename = os.path.join(dirname, filename)\n rel_filepath = os.path.relpath(full_filename, start=root_dir)\n yield rel_filepath"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning a list of variables that match specific conditions.", "response": "def get_variables_by_attributes(self, **kwargs):\n \"\"\" Returns variables that match specific conditions.\n\n * Can pass in key=value parameters and variables are returned that\n contain all of the matches. For example,\n\n >>> # Get variables with x-axis attribute.\n >>> vs = nc.get_variables_by_attributes(axis='X')\n >>> # Get variables with matching \"standard_name\" attribute.\n >>> nc.get_variables_by_attributes(standard_name='northward_sea_water_velocity')\n\n * Can pass in key=callable parameter and variables are returned if the\n callable returns True. The callable should accept a single parameter,\n the attribute value. None is given as the attribute value when the\n attribute does not exist on the variable. 
For example,\n\n >>> # Get Axis variables.\n >>> vs = nc.get_variables_by_attributes(axis=lambda v: v in ['X', 'Y', 'Z', 'T'])\n >>> # Get variables that don't have an \"axis\" attribute.\n >>> vs = nc.get_variables_by_attributes(axis=lambda v: v is None)\n >>> # Get variables that have a \"grid_mapping\" attribute.\n >>> vs = nc.get_variables_by_attributes(grid_mapping=lambda v: v is not None)\n\n \"\"\"\n vs = []\n\n has_value_flag = False\n for vname in self.variables:\n var = self.variables[vname]\n for k, v in kwargs.items():\n if callable(v):\n has_value_flag = v(getattr(var, k, None))\n if has_value_flag is False:\n break\n elif hasattr(var, k) and getattr(var, k) == v:\n has_value_flag = True\n else:\n has_value_flag = False\n break\n\n if has_value_flag is True:\n vs.append(self.variables[vname])\n\n return vs"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns a dictionary of attributes to be used in the json format.", "response": "def json_attributes(self, vfuncs=None):\n \"\"\"\n vfuncs can be any callable that accepts a single argument, the\n Variable object, and returns a dictionary of new attributes to\n set. 
These will overwrite existing attributes\n \"\"\"\n\n vfuncs = vfuncs or []\n\n js = {'global': {}}\n\n for k in self.ncattrs():\n js['global'][k] = self.getncattr(k)\n\n for varname, var in self.variables.items():\n js[varname] = {}\n for k in var.ncattrs():\n z = var.getncattr(k)\n try:\n assert not np.isnan(z).all()\n js[varname][k] = z\n except AssertionError:\n js[varname][k] = None\n except TypeError:\n js[varname][k] = z\n\n for vf in vfuncs:\n try:\n js[varname].update(vf(var))\n except BaseException:\n logger.exception(\"Could not apply custom variable attribute function\")\n\n return json.loads(json.dumps(js, cls=BasicNumpyEncoder))"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ndecorate a function that uses pypandoc to ensure that pandoc is installed if necessary.", "response": "def ensure_pandoc(func):\n \"\"\"Decorate a function that uses pypandoc to ensure that pandoc is\n installed if necessary.\n \"\"\"\n logger = logging.getLogger(__name__)\n\n @functools.wraps(func)\n def _install_and_run(*args, **kwargs):\n try:\n # First try to run pypandoc function\n result = func(*args, **kwargs)\n except OSError:\n # Install pandoc and retry\n message = \"pandoc needed but not found. Now installing it for you.\"\n logger.warning(message)\n # This version of pandoc is known to be compatible with both\n # pypandoc.download_pandoc and the functionality that\n # lsstprojectmeta needs. 
Travis CI tests are useful for ensuring\n # download_pandoc works.\n pypandoc.download_pandoc(version='1.19.1')\n logger.debug(\"pandoc download complete\")\n\n result = func(*args, **kwargs)\n\n return result\n\n return _install_and_run"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef convert_text(content, from_fmt, to_fmt, deparagraph=False, mathjax=False,\n smart=True, extra_args=None):\n \"\"\"Convert text from one markup format to another using pandoc.\n\n This function is a thin wrapper around `pypandoc.convert_text`.\n\n Parameters\n ----------\n content : `str`\n Original content.\n\n from_fmt : `str`\n Format of the original ``content``. Format identifier must be one of\n those known by Pandoc. See https://pandoc.org/MANUAL.html for details.\n\n to_fmt : `str`\n Output format for the content.\n\n deparagraph : `bool`, optional\n If `True`, then the\n `lsstprojectmeta.pandoc.filters.deparagraph.deparagraph` filter is\n used to remove paragraph (``<p>``, for example) tags around a single\n paragraph of content. That filter does not affect content that\n consists of multiple blocks (several paragraphs, or lists, for\n example). Default is `False`.\n\n For example, **without** this filter Pandoc will convert\n the string ``\"Title text\"`` to ``\"<p>Title text</p>\"`` in HTML. The\n paragraph tags aren't useful if you intend to wrap the converted\n content in different tags, like ``<li>``, using your own templating\n system.\n\n **With** this filter, Pandoc will convert the string ``\"Title text\"``\n to ``\"Title text\"`` in HTML.\n\n mathjax : `bool`, optional\n If `True` then Pandoc will markup output content to work with MathJax.\n Default is False.\n\n smart : `bool`, optional\n If `True` (default) then ascii characters will be converted to unicode\n characters like smart quotes and em dashes.\n\n extra_args : `list`, optional\n Sequence of Pandoc command line arguments (such as\n ``'--normalize'``). The ``deparagraph``, ``mathjax``, and ``smart``\n arguments are convenience arguments that are equivalent to items\n in ``extra_args``.\n\n Returns\n -------\n output : `str`\n Content in the output (``to_fmt``) format.\n\n Notes\n -----\n This function will automatically install Pandoc if it is not available.\n See `ensure_pandoc`.\n \"\"\"\n logger = logging.getLogger(__name__)\n\n if extra_args is not None:\n extra_args = list(extra_args)\n else:\n extra_args = []\n\n if mathjax:\n extra_args.append('--mathjax')\n\n if smart:\n extra_args.append('--smart')\n\n if deparagraph:\n extra_args.append('--filter=lsstprojectmeta-deparagraph')\n\n extra_args.append('--wrap=none')\n\n # de-dupe extra args\n extra_args = set(extra_args)\n\n logger.debug('Running pandoc from %s to %s with extra_args %s',\n from_fmt, to_fmt, extra_args)\n\n output = pypandoc.convert_text(content, to_fmt, format=from_fmt,\n extra_args=extra_args)\n return output", "response": "Convert text from one markup format to another using pandoc. This function is a thin wrapper around pypandoc. convert_text."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef convert_lsstdoc_tex(\n content, to_fmt, deparagraph=False, mathjax=False,\n smart=True, extra_args=None):\n \"\"\"Convert lsstdoc-class LaTeX to another markup format.\n\n This function is a thin wrapper around `convert_text` that automatically\n includes common lsstdoc LaTeX macros.\n\n Parameters\n ----------\n content : `str`\n Original content.\n\n to_fmt : `str`\n Output format for the content (see https://pandoc.org/MANUAL.html).\n For example, 'html5'.\n\n deparagraph : `bool`, optional\n If `True`, then the\n `lsstprojectmeta.pandoc.filters.deparagraph.deparagraph` filter is\n used to remove paragraph (``<p>``, for example) tags around a single\n paragraph of content. That filter does not affect content that\n consists of multiple blocks (several paragraphs, or lists, for\n example). Default is `False`.\n\n For example, **without** this filter Pandoc will convert\n the string ``\"Title text\"`` to ``\"<p>Title text</p>\"`` in HTML. The\n paragraph tags aren't useful if you intend to wrap the converted\n content in different tags, like ``<li>``, using your own templating\n system.\n\n **With** this filter, Pandoc will convert the string ``\"Title text\"``\n to ``\"Title text\"`` in HTML.\n\n mathjax : `bool`, optional\n If `True` then Pandoc will markup output content to work with MathJax.\n Default is False.\n\n smart : `bool`, optional\n If `True` (default) then ascii characters will be converted to unicode\n characters like smart quotes and em dashes.\n\n extra_args : `list`, optional\n Sequence of Pandoc command line arguments (such as\n ``'--normalize'``). The ``deparagraph``, ``mathjax``, and ``smart``\n arguments are convenience arguments that are equivalent to items\n in ``extra_args``.\n\n Returns\n -------\n output : `str`\n Content in the output (``to_fmt``) format.\n\n Notes\n -----\n This function will automatically install Pandoc if it is not available.\n See `ensure_pandoc`.\n \"\"\"\n augmented_content = '\\n'.join((LSSTDOC_MACROS, content))\n return convert_text(\n augmented_content, 'latex', to_fmt,\n deparagraph=deparagraph, mathjax=mathjax,\n smart=smart, extra_args=extra_args)", "response": "Convert lsstdoc LaTeX to another markup format."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ndecode a JSON-LD dataset including decoding datetime strings into datetime. 
datetime objects.", "response": "def decode_jsonld(jsonld_text):\n \"\"\"Decode a JSON-LD dataset, including decoding datetime\n strings into `datetime.datetime` objects.\n\n Parameters\n ----------\n encoded_dataset : `str`\n The JSON-LD dataset encoded as a string.\n\n Returns\n -------\n jsonld_dataset : `dict`\n A JSON-LD dataset.\n\n Examples\n --------\n\n >>> doc = '{\"dt\": \"2018-01-01T12:00:00Z\"}'\n >>> decode_jsonld(doc)\n {'dt': datetime.datetime(2018, 1, 1, 12, 0, tzinfo=datetime.timezone.utc)}\n \"\"\"\n decoder = json.JSONDecoder(object_pairs_hook=_decode_object_pairs)\n return decoder.decode(jsonld_text)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nencode values as JSON strings.", "response": "def default(self, obj):\n \"\"\"Encode values as JSON strings.\n\n This method overrides the default implementation from\n `json.JSONEncoder`.\n \"\"\"\n if isinstance(obj, datetime.datetime):\n return self._encode_datetime(obj)\n\n # Fallback to the default encoding\n return json.JSONEncoder.default(self, obj)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nencoding a datetime in the format '%Y-%m-%dT%H:%M:%SZ.", "response": "def _encode_datetime(self, dt):\n \"\"\"Encode a datetime in the format '%Y-%m-%dT%H:%M:%SZ'.\n\n The datetime can be naieve (doesn't have timezone info) or aware\n (it does have a tzinfo attribute set). 
Regardless, the datetime\n is transformed into UTC.\n \"\"\"\n if dt.tzinfo is None:\n # Force it to be a UTC datetime\n dt = dt.replace(tzinfo=datetime.timezone.utc)\n\n # Convert to UTC (no matter what)\n dt = dt.astimezone(datetime.timezone.utc)\n\n return dt.strftime('%Y-%m-%dT%H:%M:%SZ')"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngetting all git repositories within this environment", "response": "def find_repos(self, depth=10):\n '''Get all git repositories within this environment'''\n\n repos = []\n\n for root, subdirs, files in walk_dn(self.root, depth=depth):\n if 'modules' in root:\n continue\n if '.git' in subdirs:\n repos.append(root)\n\n return repos"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef clone(self, repo_path, destination, branch=None):\n '''Clone a repository to a destination relative to envrionment root'''\n\n logger.debug('Installing ' + repo_path)\n if not destination.startswith(self.env_path):\n destination = unipath(self.env_path, destination)\n\n if branch:\n return shell.run('git', 'clone', repo_path, '--branch', branch,\n '--single-branch', '--recursive', destination)\n\n return shell.run('git', 'clone', '--recursive', repo_path, destination)", "response": "Clone a repository to a destination relative to envrionment root"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nclone a repository to a destination relative to envrionment root", "response": "def pull(self, repo_path, *args):\n '''Clone a repository to a destination relative to envrionment root'''\n\n logger.debug('Pulling ' + repo_path)\n if not repo_path.startswith(self.env_path):\n repo_path = unipath(self.env_path, repo_path)\n\n return shell.run('git', 'pull', *args, **{'cwd': repo_path})"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef install(self, package):\n '''Install a python 
package using pip'''\n\n logger.debug('Installing ' + package)\n shell.run(self.pip_path, 'install', package)", "response": "Install a python package using pip"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nupdating a python package using pip", "response": "def upgrade(self, package):\n '''Update a python package using pip'''\n\n logger.debug('Upgrading ' + package)\n shell.run(self.pip_path, 'install', '--upgrade', '--no-deps', package)\n shell.run(self.pip_path, 'install', package)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn the nb quantiles for datas in a dataframe", "response": "def df_quantile(df, nb=100):\n \"\"\"Returns the nb quantiles for datas in a dataframe\n \"\"\"\n quantiles = np.linspace(0, 1., nb)\n res = pd.DataFrame()\n for q in quantiles:\n res = res.append(df.quantile(q), ignore_index=True)\n return res"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncomputing the average along a 1D array like ma. mean but with a representativity coefficient", "response": "def mean(a, rep=0.75, **kwargs):\n \"\"\"Compute the average along a 1D array like ma.mean,\n but with a representativity coefficient : if ma.count(a)/ma.size(a)>=rep,\n then the result is a masked value\n \"\"\"\n return rfunc(a, ma.mean, rep, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncompute the max along a 1D array like ma. 
mean and a representativity coefficient.", "response": "def max(a, rep=0.75, **kwargs):\n \"\"\"Compute the max along a 1D array like ma.mean,\n but with a representativity coefficient : if ma.count(a)/ma.size(a)>=rep,\n then the result is a masked value\n \"\"\"\n return rfunc(a, ma.max, rep, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef min(a, rep=0.75, **kwargs):\n return rfunc(a, ma.min, rep, **kwargs)", "response": "Compute the minimum value along a 1D array like ma.mean but with a representativity coefficient."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef rfunc(a, rfunc=None, rep=0.75, **kwargs):\n if float(ma.count(a)) / ma.size(a) < rep:\n return ma.masked\n else:\n if rfunc is None:\n return a\n return rfunc(a, **kwargs)", "response": "Applies rfunc on a if a comes with a sufficient representativity coefficient, i.e. ma.count(a)/ma.size(a) >= rep; otherwise returns a masked value."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef rmse(a, b):\n return np.sqrt(np.square(a - b).mean())", "response": "Returns the root mean square error between a and b\n "} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef nmse(a, b):\n return np.square(a - b).mean() / (a.mean() * b.mean())", "response": "Returns the normalized mean square error of a and b"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef mfbe(a, b):\n return 2 * bias(a, b) / (a.mean() + b.mean())", "response": "Returns the mean fractionalized bias error between two data sets."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef fa(a, b, alpha=2):\n return np.sum((a > b / alpha) & (a < b * alpha), dtype=float) / len(a) * 100", "response": "Returns the factor of alpha between a and b"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns the factor of exceedance", "response": "def foex(a, b):\n \"\"\"Returns the factor of exceedance\n \"\"\"\n return (np.sum(a > b, dtype=float) / len(a) - 0.5) * 100"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef correlation(a, b):\n diff1 = a - a.mean()\n diff2 = b - b.mean()\n return (diff1 * diff2).mean() / (np.sqrt(np.square(diff1).mean() * np.square(diff2).mean()))", "response": "Computes the correlation between a and b, i.e. the Pearson correlation\n coefficient R\n "} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nfigure of merit in time", "response": "def fmt(a, b):\n \"\"\"Figure of merit in time\n \"\"\"\n return 100 * np.min([a, b], axis=0).sum() / np.max([a, b], axis=0).sum()"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is 
the following Python 3 function doing\ndef fullStats(a, b):\n\n stats = [\n ['bias', 'Bias', bias(a, b)],\n ['stderr', 'Standard Deviation Error', stderr(a, b)],\n ['mae', 'Mean Absolute Error', mae(a, b)],\n ['rmse', 'Root Mean Square Error', rmse(a, b)],\n ['nmse', 'Normalized Mean Square Error', nmse(a, b)],\n ['mfbe', 'Mean Fractionalized bias Error', mfbe(a, b)],\n ['fa2', 'Factor of Two', fa(a, b, 2)],\n ['foex', 'Factor of Exceedance', foex(a, b)],\n ['correlation', 'Correlation R', correlation(a, b)],\n ['determination', 'Coefficient of Determination r2', determination(a, b)],\n ['gmb', 'Geometric Mean Bias', gmb(a, b)],\n ['gmv', 'Geometric Mean Variance', gmv(a, b)],\n ['fmt', 'Figure of Merit in Time', fmt(a, b)]\n ]\n rec = np.rec.fromrecords(stats, names=('stat', 'description', 'result'))\n df = pd.DataFrame.from_records(rec, index='stat')\n return df", "response": "Performs several stats on a against b typically a is the predictions array and b is the observations array and a is the observations array and b is the observations array"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef site_path(self):\n '''Path to environments site-packages'''\n\n if platform == 'win':\n return unipath(self.path, 'Lib', 'site-packages')\n\n py_ver = 'python{0}'.format(sys.version[:3])\n return unipath(self.path, 'lib', py_ver, 'site-packages')", "response": "Path to site - packages"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nactivates the current python environment.", "response": "def _activate(self):\n '''\n Do some serious mangling to the current python environment...\n This is necessary to activate an environment via python.\n '''\n\n old_syspath = set(sys.path)\n site.addsitedir(self.site_path)\n site.addsitedir(self.bin_path)\n new_syspaths = set(sys.path) - old_syspath\n for path in new_syspaths:\n sys.path.remove(path)\n sys.path.insert(1, path)\n\n if 
not hasattr(sys, 'real_prefix'):\n sys.real_prefix = sys.prefix\n\n sys.prefix = self.path"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nremoves this environment and all of its children.", "response": "def remove(self):\n '''\n Remove this environment\n '''\n self.run_hook('preremove')\n utils.rmtree(self.path)\n self.run_hook('postremove')"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef command(self):\n '''Command used to launch this application module'''\n\n cmd = self.config.get('command', None)\n if cmd is None:\n return\n\n cmd = cmd[platform]\n return cmd['path'], cmd['args']", "response": "Command used to launch this application module"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncreate a virtual environment.", "response": "def create(name_or_path=None, config=None):\n '''Create a virtual environment. You can pass either the name of a new\n environment to create in your CPENV_HOME directory OR specify a full path\n to create an environment outisde your CPENV_HOME.\n\n Create an environment in CPENV_HOME::\n\n >>> cpenv.create('myenv')\n\n Create an environment elsewhere::\n\n >>> cpenv.create('~/custom_location/myenv')\n\n :param name_or_path: Name or full path of environment\n :param config: Environment configuration including dependencies etc...\n '''\n\n # Get the real path of the environment\n if utils.is_system_path(name_or_path):\n path = unipath(name_or_path)\n else:\n path = unipath(get_home_path(), name_or_path)\n\n if os.path.exists(path):\n raise OSError('{} already exists'.format(path))\n\n env = VirtualEnvironment(path)\n utils.ensure_path_exists(env.path)\n\n if config:\n if utils.is_git_repo(config):\n Git('').clone(config, env.path)\n else:\n shutil.copy2(config, env.config_path)\n else:\n with open(env.config_path, 'w') as f:\n f.write(defaults.environment_config)\n\n 
utils.ensure_path_exists(env.hook_path)\n utils.ensure_path_exists(env.modules_path)\n\n env.run_hook('precreate')\n\n virtualenv.create_environment(env.path)\n if not utils.is_home_environment(env.path):\n EnvironmentCache.add(env)\n EnvironmentCache.save()\n\n try:\n env.update()\n except:\n utils.rmtree(path)\n logger.debug('Failed to update, rolling back...')\n raise\n else:\n env.run_hook('postcreate')\n\n return env"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nremoving an environment or module", "response": "def remove(name_or_path):\n '''Remove an environment or module\n\n :param name_or_path: name or path to environment or module\n '''\n\n r = resolve(name_or_path)\n r.resolved[0].remove()\n\n EnvironmentCache.discard(r.resolved[0])\n EnvironmentCache.save()"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nactivating and launches a module", "response": "def launch(module_name, *args, **kwargs):\n '''Activates and launches a module\n\n :param module_name: name of module to launch\n '''\n\n r = resolve(module_name)\n r.activate()\n mod = r.resolved[0]\n mod.launch(*args, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ndeactivate an environment by restoring all env vars to a clean state stored prior to activating environments", "response": "def deactivate():\n '''Deactivates an environment by restoring all env vars to a clean state\n stored prior to activating environments\n '''\n\n if 'CPENV_ACTIVE' not in os.environ or 'CPENV_CLEAN_ENV' not in os.environ:\n raise EnvironmentError('Can not deactivate environment...')\n\n utils.restore_env_from_file(os.environ['CPENV_CLEAN_ENV'])"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning your home path... 
CPENV_HOME env var OR ~/.cpenv", "response": "def get_home_path():\n ''':returns: your home path...CPENV_HOME env var OR ~/.cpenv'''\n\n home = unipath(os.environ.get('CPENV_HOME', '~/.cpenv'))\n home_modules = unipath(home, 'modules')\n if not os.path.exists(home):\n os.makedirs(home)\n if not os.path.exists(home_modules):\n os.makedirs(home_modules)\n return home"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning a list of paths in the CPENV_MODULES env var and CPENV_HOME/modules", "response": "def get_module_paths():\n ''':returns: paths in CPENV_MODULES env var and CPENV_HOME/modules'''\n\n module_paths = []\n\n cpenv_modules_path = os.environ.get('CPENV_MODULES', None)\n if cpenv_modules_path:\n module_paths.extend(cpenv_modules_path.split(os.pathsep))\n\n module_paths.append(unipath(get_home_path(), 'modules'))\n\n return module_paths"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns a list of all known virtual environments as :class:`VirtualEnvironment` instances. This includes those in CPENV_HOME and any others that are cached.", "response": "def get_environments():\n '''Returns a list of all known virtual environments as\n :class:`VirtualEnvironment` instances.
This includes those in CPENV_HOME\n and any others that are cached (created by the current user or activated\n once by full path).\n '''\n\n environments = set()\n\n cwd = os.getcwd()\n for d in os.listdir(cwd):\n\n if d == 'environment.yml':\n environments.add(VirtualEnvironment(cwd))\n continue\n\n path = unipath(cwd, d)\n if utils.is_environment(path):\n environments.add(VirtualEnvironment(path))\n\n home = get_home_path()\n for d in os.listdir(home):\n\n path = unipath(home, d)\n if utils.is_environment(path):\n environments.add(VirtualEnvironment(path))\n\n for env in EnvironmentCache:\n environments.add(env)\n\n return sorted(list(environments), key=lambda x: x.name)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning a list of available modules.", "response": "def get_modules():\n '''Returns a list of available modules.'''\n\n modules = set()\n\n cwd = os.getcwd()\n for d in os.listdir(cwd):\n\n if d == 'module.yml':\n modules.add(Module(cwd))\n\n path = unipath(cwd, d)\n if utils.is_module(path):\n modules.add(Module(path))\n\n module_paths = get_module_paths()\n for module_path in module_paths:\n for d in os.listdir(module_path):\n\n path = unipath(module_path, d)\n if utils.is_module(path):\n modules.add(Module(path))\n\n return sorted(list(modules), key=lambda x: x.name)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns a list of active :class:`Module` instances or []", "response": "def get_active_modules():\n ''':returns: a list of active :class:`Module` s or []'''\n\n modules = os.environ.get('CPENV_ACTIVE_MODULES', None)\n if modules:\n modules = modules.split(os.pathsep)\n return [Module(module) for module in modules]\n\n return []"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nadd a module to CPENV_ACTIVE_MODULES environment variable", "response": "def add_active_module(module):\n '''Add a module to CPENV_ACTIVE_MODULES environment
variable'''\n\n modules = set(get_active_modules())\n modules.add(module)\n new_modules_path = os.pathsep.join([m.path for m in modules])\n os.environ['CPENV_ACTIVE_MODULES'] = str(new_modules_path)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nremoving a module from CPENV_ACTIVE_MODULES environment variable", "response": "def rem_active_module(module):\n '''Remove a module from CPENV_ACTIVE_MODULES environment variable'''\n\n modules = set(get_active_modules())\n modules.discard(module)\n new_modules_path = os.pathsep.join([m.path for m in modules])\n os.environ['CPENV_ACTIVE_MODULES'] = str(new_modules_path)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nformats a list of environments and modules for terminal output", "response": "def format_objects(objects, children=False, columns=None, header=True):\n '''Format a list of environments and modules for terminal output'''\n\n columns = columns or ('NAME', 'TYPE', 'PATH')\n objects = sorted(objects, key=_type_and_name)\n data = []\n for obj in objects:\n if isinstance(obj, cpenv.VirtualEnvironment):\n data.append(get_info(obj))\n modules = obj.get_modules()\n if children and modules:\n for mod in modules:\n data.append(get_info(mod, indent=2, root=obj.path))\n else:\n data.append(get_info(obj))\n\n maxes = [len(max(col, key=len)) for col in zip(*data)]\n tmpl = '{:%d} {:%d} {:%d}' % tuple(maxes)\n lines = []\n if header:\n lines.append('\\n' + bold_blue(tmpl.format(*columns)))\n\n for obj_data in data:\n lines.append(tmpl.format(*obj_data))\n\n return '\\n'.join(lines)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nlists available environments and modules", "response": "def list_():\n '''List available environments and modules'''\n\n environments = cpenv.get_environments()\n modules = cpenv.get_modules()\n click.echo(format_objects(environments + modules, children=True))"} {"SOURCE": "codesearchnet", 
"instruction": "Here you have a function in Python 3, explain what it does\ndef create(name_or_path, config):\n '''Create a new environment.'''\n\n if not name_or_path:\n ctx = click.get_current_context()\n click.echo(ctx.get_help())\n examples = (\n '\\nExamples:\\n'\n ' cpenv create my_env\\n'\n ' cpenv create ./relative/path/to/my_env\\n'\n ' cpenv create my_env --config ./relative/path/to/config\\n'\n ' cpenv create my_env --config git@github.com:user/config.git\\n'\n )\n click.echo(examples)\n return\n\n click.echo(\n blue('Creating a new virtual environment ' + name_or_path)\n )\n try:\n env = cpenv.create(name_or_path, config)\n except Exception as e:\n click.echo(bold_red('FAILED TO CREATE ENVIRONMENT!'))\n click.echo(e)\n else:\n click.echo(bold_green('Successfully created environment!'))\n click.echo(blue('Launching subshell'))\n\n cpenv.activate(env)\n shell.launch(env.name)", "response": "Create a new virtual environment."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nlisting available environments and modules", "response": "def list_():\n '''List available environments and modules'''\n\n click.echo('Cached Environments')\n\n environments = list(EnvironmentCache)\n click.echo(format_objects(environments, children=False))"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef add(path):\n '''Add an environment to the cache. Allows you to activate the environment\n by name instead of by full path'''\n\n click.echo('\\nAdding {} to cache......'.format(path), nl=False)\n try:\n r = cpenv.resolve(path)\n except Exception as e:\n click.echo(bold_red('FAILED'))\n click.echo(e)\n return\n\n if isinstance(r.resolved[0], cpenv.VirtualEnvironment):\n EnvironmentCache.add(r.resolved[0])\n EnvironmentCache.save()\n click.echo(bold_green('OK!'))", "response": "Add an environment to the cache. 
Allows you to activate the environment by name instead of by full path."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nremove a cached environment.", "response": "def remove(path):\n '''Remove a cached environment. Removed paths will no longer be able to\n be activated by name'''\n\n r = cpenv.resolve(path)\n if isinstance(r.resolved[0], cpenv.VirtualEnvironment):\n EnvironmentCache.discard(r.resolved[0])\n EnvironmentCache.save()"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef create(name_or_path, config):\n '''Create a new template module.\n\n You can also specify a filesystem path like \"./modules/new_module\"\n '''\n\n click.echo('Creating module {}...'.format(name_or_path), nl=False)\n try:\n module = cpenv.create_module(name_or_path, config)\n except Exception as e:\n click.echo(bold_red('FAILED'))\n raise\n else:\n click.echo(bold_green('OK!'))\n click.echo('Browse to your new module and make some changes.')\n click.echo(\"When you're ready add the module to an environment:\")\n click.echo(' cpenv module add my_module ./path/to/my_module')\n click.echo('Or track your module on git and add it directly from the repo:')\n click.echo(' cpenv module add my_module git@github.com:user/my_module.git')", "response": "Create a new template module."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nadds a module to an environment. PATH can be a git repository path or a filesystem path. BRANCH selects a git branch, and TYPE can be local or shared.", "response": "def add(name, path, branch, type):\n '''Add a module to an environment. PATH can be a git repository path or\n a filesystem path.
'''\n\n if not name and not path:\n ctx = click.get_current_context()\n click.echo(ctx.get_help())\n examples = (\n '\\nExamples:\\n'\n ' cpenv module add my_module ./path/to/my_module\\n'\n ' cpenv module add my_module git@github.com:user/my_module.git'\n ' cpenv module add my_module git@github.com:user/my_module.git --branch=master --type=shared'\n )\n click.echo(examples)\n return\n\n if not name:\n click.echo('Missing required argument: name')\n return\n\n if not path:\n click.echo('Missing required argument: path')\n\n env = cpenv.get_active_env()\n if type=='local':\n if not env:\n click.echo('\\nActivate an environment to add a local module.\\n')\n return\n\n if click.confirm('\\nAdd {} to active env {}?'.format(name, env.name)):\n click.echo('Adding module...', nl=False)\n try:\n env.add_module(name, path, branch)\n except:\n click.echo(bold_red('FAILED'))\n raise\n else:\n click.echo(bold_green('OK!'))\n\n return\n\n module_paths = cpenv.get_module_paths()\n click.echo('\\nAvailable module paths:\\n')\n for i, mod_path in enumerate(module_paths):\n click.echo(' {}. {}'.format(i, mod_path))\n choice = click.prompt(\n 'Where do you want to add your module?',\n type=int,\n default=0\n )\n module_root = module_paths[choice]\n module_path = utils.unipath(module_root, name)\n click.echo('Creating module {}...'.format(module_path), nl=False)\n try:\n cpenv.create_module(module_path, path, branch)\n except:\n click.echo(bold_red('FAILED'))\n raise\n else:\n click.echo(bold_green('OK!'))"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef remove(name, local):\n '''Remove a module named NAME. Will remove the first resolved module named NAME. You can also specify a full path to a module. 
Use the --local option\n to ensure removal of modules local to the currently active environment.'''\n\n click.echo()\n if not local: # Use resolver to find module\n try:\n r = cpenv.resolve(name)\n except cpenv.ResolveError as e:\n click.echo(e)\n return\n\n obj = r.resolved[0]\n else: # Try to find module in active environment\n env = cpenv.get_active_env()\n if not env:\n click.echo('You must activate an env to remove local modules')\n return\n\n mod = env.get_module(name)\n if not mod:\n click.echo('Failed to resolve module: ' + name)\n return\n\n obj = mod\n\n if isinstance(obj, cpenv.VirtualEnvironment):\n click.echo('{} is an environment. Use `cpenv remove` instead.')\n return\n\n click.echo(format_objects([obj]))\n click.echo()\n\n user_confirmed = click.confirm(\n red('Are you sure you want to remove this module?')\n )\n if user_confirmed:\n click.echo('Attempting to remove...', nl=False)\n\n try:\n obj.remove()\n except Exception as e:\n click.echo(bold_red('FAILED'))\n click.echo(e)\n else:\n click.echo(bold_green('OK!'))", "response": "Remove a module named NAME. 
Will remove the first resolved module named NAME."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncopies a global module to the active environment.", "response": "def localize(name):\n '''Copy a global module to the active environment.'''\n\n env = cpenv.get_active_env()\n if not env:\n click.echo('You need to activate an environment first.')\n return\n\n try:\n r = cpenv.resolve(name)\n except cpenv.ResolveError as e:\n click.echo('\\n' + str(e))\n\n module = r.resolved[0]\n if isinstance(module, cpenv.VirtualEnvironment):\n click.echo('\\nCan only localize a module not an environment')\n return\n\n active_modules = cpenv.get_active_modules()\n if module in active_modules:\n click.echo('\\nCan not localize an active module.')\n return\n\n if module in env.get_modules():\n click.echo('\\n{} is already local to {}'.format(module.name, env.name))\n return\n\n if click.confirm('\\nAdd {} to env {}?'.format(module.name, env.name)):\n click.echo('Adding module...', nl=False)\n try:\n module = env.add_module(module.name, module.path)\n except:\n click.echo(bold_red('FAILED'))\n raise\n else:\n click.echo(bold_green('OK!'))\n\n click.echo('\\nActivate the localize module:')\n click.echo(' cpenv activate {} {}'.format(env.name, module.name))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nresolving VirtualEnvironments with a relative or absolute path", "response": "def path_resolver(resolver, path):\n '''Resolves VirtualEnvironments with a relative or absolute path'''\n\n path = unipath(path)\n\n if is_environment(path):\n return VirtualEnvironment(path)\n\n raise ResolveError"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nresolves VirtualEnvironments in CPENV_HOME", "response": "def home_resolver(resolver, path):\n '''Resolves VirtualEnvironments in CPENV_HOME'''\n\n from .api import get_home_path\n\n path = unipath(get_home_path(), path)\n\n if 
is_environment(path):\n return VirtualEnvironment(path)\n\n raise ResolveError"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nresolve VirtualEnvironments in EnvironmentCache", "response": "def cache_resolver(resolver, path):\n '''Resolves VirtualEnvironments in EnvironmentCache'''\n\n env = resolver.cache.find(path)\n if env:\n return env\n\n raise ResolveError"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef module_resolver(resolver, path):\n '''Resolves module in previously resolved environment.'''\n\n if resolver.resolved:\n\n if isinstance(resolver.resolved[0], VirtualEnvironment):\n env = resolver.resolved[0]\n mod = env.get_module(path)\n\n if mod:\n return mod\n\n raise ResolveError", "response": "Resolves module in previously resolved environment."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nresolves modules in CPENV_MODULES path and CPENV_HOME / modules", "response": "def modules_path_resolver(resolver, path):\n '''Resolves modules in CPENV_MODULES path and CPENV_HOME/modules'''\n\n from .api import get_module_paths\n\n for module_dir in get_module_paths():\n mod_path = unipath(module_dir, path)\n\n if is_module(mod_path):\n return Module(mod_path)\n\n raise ResolveError"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nresolve modules in currently active environment.", "response": "def active_env_module_resolver(resolver, path):\n '''Resolves modules in currently active environment.'''\n\n from .api import get_active_env\n\n env = get_active_env()\n if not env:\n raise ResolveError\n\n mod = env.get_module(path)\n if not mod:\n raise ResolveError\n\n return mod"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef redirect_resolver(resolver, path):\n '''Resolves environment from .cpenv file...recursively 
walks up the tree\n in attempt to find a .cpenv file'''\n\n if not os.path.exists(path):\n raise ResolveError\n\n if os.path.isfile(path):\n path = os.path.dirname(path)\n\n for root, _, _ in walk_up(path):\n if is_redirecting(root):\n env_paths = redirect_to_env_paths(unipath(root, '.cpenv'))\n r = Resolver(*env_paths)\n return r.resolve()\n\n raise ResolveError", "response": "Resolves an environment from a .cpenv file, recursively walking up the tree in an attempt to find one."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _engine_affinity(obj):\n from distob import engine\n this_engine = engine.eid\n if isinstance(obj, numbers.Number) or obj is None:\n return (this_engine, 0)\n elif hasattr(obj, '__engine_affinity__'):\n # This case includes Remote subclasses and DistArray\n return obj.__engine_affinity__\n else:\n return (this_engine, _rough_size(obj))", "response": "Which engine or engines are preferred for processing this object."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nmove an input to a new location.", "response": "def _ufunc_move_input(obj, location, bshape):\n \"\"\"Copy ufunc input `obj` to new engine location(s) unless obj is scalar.\n\n If the input is requested to be distributed to multiple engines, this\n function will also take care of broadcasting along the distributed axis.\n\n If the input obj is a scalar, it will be passed through unchanged.\n\n Args:\n obj (array_like or scalar): one of the inputs to a ufunc\n location (integer or tuple): If an integer, this specifies a single\n engine id to which an array input should be moved.
If it is a tuple,\n location[0] is a list of engine ids for distributing the array input\n and location[1] an integer indicating which axis should be distributed.\n bshape (tuple): The shape to which the input will ultimately be broadcast\n\n Returns:\n array_like or RemoteArray or DistArray or scalar\n \"\"\"\n if (not hasattr(type(obj), '__array_interface__') and\n not isinstance(obj, Remote) and\n (isinstance(obj, string_types) or\n not isinstance(obj, Sequence))):\n # then treat it as a scalar\n return obj\n from distob import engine\n this_engine = engine.eid\n if location == this_engine:\n # move obj to the local host, if not already here\n if isinstance(obj, Remote) or isinstance(obj, DistArray):\n return gather(obj)\n else:\n return obj\n elif isinstance(location, numbers.Integral):\n # move obj to a single remote engine\n if isinstance(obj, Remote) and obj._ref.id.engine == location:\n #print('no data movement needed!')\n return obj\n obj = gather(obj)\n return _directed_scatter(obj, axis=None, destination=location)\n else:\n # location is a tuple (list of engine ids, distaxis) indicating that\n # obj should be distributed.\n engine_ids, distaxis = location\n if not isinstance(obj, DistArray):\n gather(obj)\n if isinstance(obj, Sequence):\n obj = np.array(obj)\n if obj.ndim < len(bshape):\n ix = (np.newaxis,)*(len(bshape)-obj.ndim) + (slice(None),)*obj.ndim\n obj = obj[ix]\n if (isinstance(obj, DistArray) and distaxis == obj._distaxis and\n engine_ids == [ra._ref.id.engine for ra in obj._subarrays]):\n #print('no data movement needed!')\n return obj\n obj = gather(obj)\n if obj.shape[distaxis] == 1:\n # broadcast this axis across engines\n subarrays = [_directed_scatter(obj, None, m) for m in engine_ids]\n return DistArray(subarrays, distaxis)\n else:\n return _directed_scatter(obj, distaxis, destination=engine_ids)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _ufunc_dispatch(ufunc, method, 
i, inputs, **kwargs):\n #__print_ufunc(ufunc, method, i, inputs, **kwargs)\n if 'out' in kwargs and kwargs['out'] is not None:\n raise Error('for distributed ufuncs `out=` is not yet implemented')\n nin = 2 if ufunc is np.dot else ufunc.nin\n if nin is 1 and method == '__call__':\n return vectorize(ufunc.__call__)(inputs[0], **kwargs)\n elif nin is 2 and method == '__call__':\n from distob import engine\n here = engine.eid\n # Choose best location for the computation, possibly distributed:\n locs, weights = zip(*[_engine_affinity(a) for a in inputs])\n # for DistArrays, adjust preferred distaxis to account for broadcasting\n bshape = _broadcast_shape(*inputs)\n locs = list(locs)\n for i, loc in enumerate(locs):\n if isinstance(loc, _TupleType):\n num_new_axes = len(bshape) - inputs[i].ndim\n if num_new_axes > 0:\n locs[i] = (locs[i][0], locs[i][1] + num_new_axes)\n if ufunc is np.dot:\n locs = [here if isinstance(m, _TupleType) else m for m in locs]\n if locs[0] == locs[1]:\n location = locs[0]\n else:\n # TODO: More accurately penalize the increased data movement if we\n # choose to distribute an axis that requires broadcasting.\n smallest = 0 if weights[0] <= weights[1] else 1\n largest = 1 - smallest\n if locs[0] is here or locs[1] is here:\n location = here if weights[0] == weights[1] else locs[largest]\n else:\n # Both inputs are on remote engines. With the current\n # implementation, data on one remote engine can only be moved\n # to another remote engine via the client. 
Cost accordingly:\n if weights[smallest]*2 < weights[largest] + weights[smallest]:\n location = locs[largest]\n else:\n location = here\n # Move both inputs to the chosen location:\n inputs = [_ufunc_move_input(a, location, bshape) for a in inputs]\n # Execute computation:\n if location is here:\n return ufunc.__call__(inputs[0], inputs[1], **kwargs)\n else:\n if isinstance(location, numbers.Integral):\n # location is a single remote engine\n return call(ufunc.__call__, inputs[0], inputs[1], **kwargs)\n else:\n # location is a tuple (list of engine ids, distaxis) implying\n # that the moved inputs are now distributed arrays (or scalar)\n engine_ids, distaxis = location\n n = len(engine_ids)\n is_dist = tuple(isinstance(a, DistArray) for a in inputs)\n assert(is_dist[0] or is_dist[1])\n for i in 0, 1:\n if is_dist[i]:\n ndim = inputs[i].ndim\n assert(inputs[i]._distaxis == distaxis)\n assert(inputs[i]._n == n)\n def _remote_ucall(inputs, **kwargs):\n \"\"\"(Executed on a remote or local engine) call the ufunc\"\"\"\n return ufunc.__call__(inputs[0], inputs[1], **kwargs)\n results = []\n kwargs = kwargs.copy()\n kwargs['block'] = False\n kwargs['prefer_local'] = False\n for j in range(n):\n subinputs = tuple(inputs[i]._subarrays[j] if \n is_dist[i] else inputs[i] for i in (0, 1))\n results.append(call(_remote_ucall, subinputs, **kwargs))\n results = [convert_result(ar) for ar in results]\n return DistArray(results, distaxis)\n elif ufunc.nin > 2:\n raise Error(u'Distributing ufuncs with >2 inputs is not yet supported')\n else:\n raise Error(u'Distributed ufunc.%s() is not yet implemented' % method)", "response": "Dispatches a ufunc to the local host or remote engine."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef transpose(a, axes=None):\n if isinstance(a, np.ndarray):\n return np.transpose(a, axes)\n elif isinstance(a, RemoteArray):\n return a.transpose(*axes)\n elif isinstance(a, Remote):\n 
return _remote_to_array(a).transpose(*axes)\n elif isinstance(a, DistArray):\n if axes is None:\n axes = range(a.ndim - 1, -1, -1)\n axes = list(axes)\n if len(set(axes)) < len(axes):\n raise ValueError(\"repeated axis in transpose\")\n if sorted(axes) != list(range(a.ndim)):\n raise ValueError(\"axes don't match array\")\n distaxis = a._distaxis\n new_distaxis = axes.index(distaxis)\n new_subarrays = [ra.transpose(*axes) for ra in a._subarrays]\n return DistArray(new_subarrays, new_distaxis)\n else:\n return np.transpose(a, axes)", "response": "Returns a view of the array with axes transposed."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef rollaxis(a, axis, start=0):\n if isinstance(a, np.ndarray):\n return np.rollaxis(a, axis, start)\n if axis not in range(a.ndim):\n raise ValueError(\n 'rollaxis: axis (%d) must be >=0 and < %d' % (axis, a.ndim))\n if start not in range(a.ndim + 1):\n raise ValueError(\n 'rollaxis: start (%d) must be >=0 and < %d' % (axis, a.ndim+1))\n axes = list(range(a.ndim))\n axes.remove(axis)\n axes.insert(start, axis)\n return transpose(a, axes)", "response": "Roll the specified axis backwards until it lies in a given position."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ninsert a new axis corresponding to a given position in the array shape", "response": "def expand_dims(a, axis):\n \"\"\"Insert a new axis, corresponding to a given position in the array shape\n\n Args:\n a (array_like): Input array.\n axis (int): Position (amongst axes) where new axis is to be inserted.\n \"\"\"\n if hasattr(a, 'expand_dims') and hasattr(type(a), '__array_interface__'):\n return a.expand_dims(axis)\n else:\n return np.expand_dims(a, axis)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nconcatenating a sequence of arrays together.", "response": "def concatenate(tup, axis=0):\n \"\"\"Join a sequence of arrays together. 
\n Will aim to join `ndarray`, `RemoteArray`, and `DistArray` without moving \n their data, if they happen to be on different engines.\n\n Args:\n tup (sequence of array_like): Arrays to be concatenated. They must have\n the same shape, except in the dimension corresponding to `axis`.\n axis (int, optional): The axis along which the arrays will be joined.\n\n Returns: \n res: `ndarray`, if inputs were all local\n `RemoteArray`, if inputs were all on the same remote engine\n `DistArray`, if inputs were already scattered on different engines\n \"\"\"\n from distob import engine\n if len(tup) is 0:\n raise ValueError('need at least one array to concatenate')\n first = tup[0]\n others = tup[1:]\n # allow subclasses to provide their own implementations of concatenate:\n if (hasattr(first, 'concatenate') and \n hasattr(type(first), '__array_interface__')):\n return first.concatenate(others, axis)\n # convert all arguments to arrays/RemoteArrays if they are not already:\n arrays = []\n for ar in tup:\n if isinstance(ar, DistArray):\n if axis == ar._distaxis:\n arrays.extend(ar._subarrays)\n else:\n # Since not yet implemented arrays distributed on more than\n # one axis, will fetch and re-scatter on the new axis:\n arrays.append(gather(ar))\n elif isinstance(ar, RemoteArray):\n arrays.append(ar)\n elif isinstance(ar, Remote):\n arrays.append(_remote_to_array(ar))\n elif hasattr(type(ar), '__array_interface__'):\n # then treat as a local ndarray\n arrays.append(ar)\n else:\n arrays.append(np.array(ar))\n if all(isinstance(ar, np.ndarray) for ar in arrays):\n return np.concatenate(arrays, axis)\n total_length = 0\n # validate dimensions are same, except for axis of concatenation:\n commonshape = list(arrays[0].shape)\n commonshape[axis] = None # ignore this axis for shape comparison\n for ar in arrays:\n total_length += ar.shape[axis]\n shp = list(ar.shape)\n shp[axis] = None\n if shp != commonshape:\n raise ValueError('incompatible shapes for concatenation')\n # set 
sensible target block size if splitting subarrays further:\n blocksize = ((total_length - 1) // engine.nengines) + 1\n rarrays = []\n for ar in arrays:\n if isinstance(ar, DistArray):\n rarrays.extend(ar._subarrays)\n elif isinstance(ar, RemoteArray):\n rarrays.append(ar)\n else:\n da = _scatter_ndarray(ar, axis, blocksize)\n for ra in da._subarrays:\n rarrays.append(ra)\n del da\n del arrays\n # At this point rarrays is a list of RemoteArray to be concatenated\n eid = rarrays[0]._id.engine\n if all(ra._id.engine == eid for ra in rarrays):\n # Arrays to be joined are all on the same engine\n if eid == engine.eid:\n # Arrays are all local\n return concatenate([gather(r) for r in rarrays], axis)\n else:\n return call(concatenate, rarrays, axis)\n else:\n # Arrays to be joined are on different engines.\n # TODO: consolidate any consecutive arrays already on same engine\n return DistArray(rarrays, axis)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nstack arrays in sequence vertically ( row wise ) handling RemoteArray and DistArray without moving data.", "response": "def vstack(tup):\n \"\"\"Stack arrays in sequence vertically (row wise), \n handling ``RemoteArray`` and ``DistArray`` without moving data.\n\n Args:\n tup (sequence of array_like)\n\n Returns: \n res: `ndarray`, if inputs were all local\n `RemoteArray`, if inputs were all on the same remote engine\n `DistArray`, if inputs were already scattered on different engines\n \"\"\"\n # Follow numpy.vstack behavior for 1D arrays:\n arrays = list(tup)\n for i in range(len(arrays)):\n if arrays[i].ndim is 1:\n arrays[i] = arrays[i][np.newaxis, :]\n return concatenate(tup, axis=0)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nstacking arrays in sequence horizontally ( column wise ) handling RemoteArray and DistArray without moving data.", "response": "def hstack(tup):\n \"\"\"Stack arrays in sequence horizontally (column wise), \n handling 
``RemoteArray`` and ``DistArray`` without moving data.\n\n Args:\n tup (sequence of array_like)\n\n Returns: \n res: `ndarray`, if inputs were all local\n `RemoteArray`, if inputs were all on the same remote engine\n `DistArray`, if inputs were already scattered on different engines\n \"\"\"\n # Follow numpy.hstack behavior for 1D arrays:\n if all(ar.ndim == 1 for ar in tup):\n return concatenate(tup, axis=0)\n else:\n return concatenate(tup, axis=1)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nstacks arrays in sequence depth wise.", "response": "def dstack(tup):\n \"\"\"Stack arrays in sequence depth wise (along third dimension), \n handling ``RemoteArray`` and ``DistArray`` without moving data.\n\n Args:\n tup (sequence of array_like)\n\n Returns: \n res: `ndarray`, if inputs were all local\n `RemoteArray`, if inputs were all on the same remote engine\n `DistArray`, if inputs were already scattered on different engines\n \"\"\"\n # Follow numpy.dstack behavior for 1D and 2D arrays:\n arrays = list(tup)\n for i in range(len(arrays)):\n if arrays[i].ndim == 1:\n arrays[i] = arrays[i][np.newaxis, :]\n if arrays[i].ndim == 2:\n arrays[i] = arrays[i][:, :, np.newaxis]\n return concatenate(arrays, axis=2)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _broadcast_shape(*args):\n #TODO: currently incorrect result if a Sequence is provided as an input\n shapes = [a.shape if hasattr(type(a), '__array_interface__')\n else () for a in args]\n ndim = max(len(sh) for sh in shapes) # new common ndim after broadcasting\n for i, sh in enumerate(shapes):\n if len(sh) < ndim:\n shapes[i] = (1,)*(ndim - len(sh)) + sh\n return tuple(max(sh[ax] for sh in shapes) for ax in range(ndim))", "response": "Return the shape that would result from broadcasting the inputs"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef mean(a, 
axis=None, dtype=None, out=None, keepdims=False):\n if (isinstance(a, np.ndarray) or\n isinstance(a, RemoteArray) or\n isinstance(a, DistArray)):\n return a.mean(axis=axis, dtype=dtype, out=out, keepdims=keepdims)\n else:\n return np.mean(a, axis=axis, dtype=dtype, out=out, keepdims=keepdims)", "response": "Compute the arithmetic mean along the specified axis."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nforce update of a local cached copy of the real object", "response": "def _fetch(self):\n \"\"\"forces update of a local cached copy of the real object\n (regardless of the preference setting self.cache)\"\"\"\n if not self._obcache_current:\n from distob import engine\n ax = self._distaxis\n self._obcache = concatenate([ra._ob for ra in self._subarrays], ax)\n # let subarray obcaches and main obcache be views on same memory:\n for i in range(self._n):\n ix = [slice(None)] * self.ndim\n ix[ax] = slice(self._si[i], self._si[i+1])\n self._subarrays[i]._obcache = self._obcache[tuple(ix)]\n self._obcache_current = True\n # now prefer local processing:\n self.__engine_affinity__ = (\n engine.eid, self.__engine_affinity__[1])"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _tosubslices(self, sl):\n N = self.shape[self._distaxis]\n start, stop, step = sl.start, sl.stop, sl.step\n if step is None:\n step = 1\n ss = []\n ms = []\n if step > 0:\n if start is None:\n start = 0\n if stop is None:\n stop = N\n subs = range(0, self._n)\n for s in subs:\n low = self._si[s]\n high = self._si[s + 1]\n first = low + ((low - start) % step)\n last = high + ((high - start) % step)\n if start < high and stop > low and first < high:\n ss.append(s)\n substart = max(first, start) - low\n substop = min(last, stop) - low\n ms.append(slice(substart, substop, step))\n elif step < 0:\n if start is None:\n start = N - 1\n if stop is None:\n stop = -1\n subs = range(self._n - 1, -1, -1)\n for 
s in subs:\n low = self._si[s]\n high = self._si[s + 1]\n first = high + step + ((high - start) % step)\n last = low + step + ((low - start) % step)\n if start >= low and stop < high and first >= low:\n ss.append(s)\n substart = min(first, start) - low\n substop = max(last + step, stop) - low\n if substop < 0:\n substop = None\n ms.append(slice(substart, substop, step))\n else:\n raise ValueError('slice step cannot be zero')\n return ss, ms", "response": "Maps a slice object for whole array to slice objects for subarrays."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nchecks if the given shapes are all the same when ignoring axis ax.", "response": "def _valid_distaxis(shapes, ax):\n \"\"\"`ax` is a valid candidate for a distributed axis if the given\n subarray shapes are all the same when ignoring axis `ax`\"\"\"\n compare_shapes = np.vstack(shapes)\n if ax < compare_shapes.shape[1]:\n compare_shapes[:, ax] = -1\n return np.count_nonzero(compare_shapes - compare_shapes[0]) == 0"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ninsert a new axis at a given position in the array shape, returning a new DistArray.", "response": "def expand_dims(self, axis):\n \"\"\"Insert a new axis, at a given position in the array shape\n Args:\n axis (int): Position (amongst axes) where new axis is to be inserted.\n \"\"\"\n if axis == -1:\n axis = self.ndim\n if axis <= self._distaxis:\n subaxis = axis\n new_distaxis = self._distaxis + 1\n else:\n subaxis = axis - 1\n new_distaxis = self._distaxis\n new_subarrays = [expand_dims(ra, subaxis) for ra in self._subarrays]\n return DistArray(new_subarrays, new_distaxis)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef mean(self, axis=None, dtype=None, out=None, keepdims=False):\n if axis == -1:\n axis = self.ndim\n if axis is None:\n results = vectorize(mean)(self, axis, dtype, keepdims=False)\n weights = 
self._sublengths\n res = np.average(results, axis=None, weights=weights)\n if keepdims:\n for i in range(self.ndim):\n res = expand_dims(res, res.ndim)\n elif axis == self._distaxis:\n results = vectorize(mean)(self, axis, dtype, keepdims=True)\n results = gather(results)\n # Average manually (np.average doesn't preserve ndarray subclasses)\n weights = (np.array(self._sublengths, dtype=np.float64) /\n sum(self._sublengths))\n ix = [slice(None)] * self.ndim\n ix[axis] = 0\n res = results[ix] * weights[0]\n for i in range(1, self._n):\n ix[axis] = i\n res = res + results[ix] * weights[i]\n if keepdims:\n res = expand_dims(res, axis)\n else:\n res = vectorize(mean)(self, axis, dtype, keepdims=False)\n if keepdims:\n res = expand_dims(res, axis)\n if out is not None:\n out[:] = res\n return res", "response": "Compute the arithmetic mean along the specified axis."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn True if successful False if failure", "response": "def run(*args, **kwargs):\n '''Returns True if successful, False if failure'''\n\n kwargs.setdefault('env', os.environ)\n kwargs.setdefault('shell', True)\n\n try:\n subprocess.check_call(' '.join(args), **kwargs)\n return True\n except subprocess.CalledProcessError:\n logger.debug('Error running: {}'.format(args))\n return False"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning a command to launch a subshell", "response": "def cmd():\n '''Return a command to launch a subshell'''\n\n if platform == 'win':\n return ['cmd.exe', '/K']\n\n elif platform == 'linux':\n ppid = os.getppid()\n ppid_cmdline_file = '/proc/{0}/cmdline'.format(ppid)\n try:\n with open(ppid_cmdline_file) as f:\n cmd = f.read()\n if cmd.endswith('\\x00'):\n cmd = cmd[:-1]\n cmd = cmd.split('\\x00')\n return cmd + [binpath('subshell.sh')]\n except:\n cmd = 'bash'\n\n else:\n cmd = 'bash'\n\n return [cmd, binpath('subshell.sh')]"} {"SOURCE": "codesearchnet", 
"instruction": "Implement a function in Python 3 to\ngenerate a prompt with a given prefix", "response": "def prompt(prefix=None, colored=True):\n '''Generate a prompt with a given prefix\n\n linux/osx: [prefix] user@host cwd $\n win: [prefix] cwd:\n '''\n\n if platform == 'win':\n return '[{0}] $P$G'.format(prefix)\n else:\n if colored:\n return (\n '[{0}] ' # White prefix\n '\\\\[\\\\033[01;32m\\\\]\\\\u@\\\\h\\\\[\\\\033[00m\\\\] ' # Green user@host\n '\\\\[\\\\033[01;34m\\\\]\\\\w $ \\\\[\\\\033[00m\\\\]' # Blue cwd $\n ).format(prefix)\n return '[{0}] \\\\u@\\\\h \\\\w $ '.format(prefix)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nadds a file to the file repository.", "response": "def add_file(self, file, **kwargs):\n \"\"\"Append a file to file repository.\n\n For file monitoring, monitor instance needs file.\n Please put the name of file to `file` argument.\n\n :param file: the name of file you want monitor.\n\n \"\"\"\n\n if os.access(file, os.F_OK):\n\n if file in self.f_repository:\n raise DuplicationError(\"file already added.\")\n\n self.f_repository.append(file)\n\n else:\n raise IOError(\"file not found.\")"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef add_files(self, filelist, **kwargs):\n\n # check filelist is list type\n if not isinstance(filelist, list):\n raise TypeError(\"request the list type.\")\n\n for file in filelist:\n self.add_file(file)", "response": "Append files to the repository using this."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef monitor(self, sleep=5):\n\n\n manager = FileModificationObjectManager()\n\n timestamps = {}\n filebodies = {}\n\n # register original timestamp and filebody to dict\n for file in self.f_repository:\n timestamps[file] = self._get_mtime(file)\n filebodies[file] = open(file).read()\n\n\n while True:\n\n for file in 
self.f_repository:\n\n mtime = timestamps[file]\n fbody = filebodies[file]\n\n modified = self._check_modify(file, mtime, fbody)\n\n # file not modify -> continue\n if not modified:\n continue\n\n # file modifies -> create the modification object\n\n new_mtime = self._get_mtime(file)\n new_fbody = open(file).read()\n\n obj = FileModificationObject(\n file,\n (mtime, new_mtime),\n (fbody, new_fbody) )\n\n # overwrite new timestamp and filebody\n timestamps[file] = new_mtime\n filebodies[file] = new_fbody\n\n\n # append file modification object to manager\n manager.add_object(obj)\n\n # return new modification object\n yield obj\n\n time.sleep(sleep)", "response": "Monitor file modification using timestamp and file body data and yield a FileModificationObject."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ninitializing a Flask application for use with this extension.", "response": "def init_app(self, app):\n \"\"\"Initialize a :class:`~flask.Flask` application for use with\n this extension.\n \"\"\"\n self._jobs = []\n\n if not hasattr(app, 'extensions'):\n app.extensions = {}\n\n app.extensions['restpoints'] = self\n app.restpoints_instance = self\n app.add_url_rule('/ping', 'ping', ping)\n app.add_url_rule('/time', 'time', time)\n app.add_url_rule('/status', 'status', status(self._jobs))"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef add_status_job(self, job_func, name=None, timeout=3):\n job_name = job_func.__name__ if name is None else name\n job = (job_name, timeout, job_func)\n self._jobs.append(job)", "response": "Adds a job to be included during status requests to the status endpoint."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _pipepager(text, cmd, color):\n import subprocess\n env = dict(os.environ)\n\n # If we're piping to less we might support colors under the\n # condition that\n 
cmd_detail = cmd.rsplit('/', 1)[-1].split()\n if color is None and cmd_detail[0] == 'less':\n less_flags = os.environ.get('LESS', '') + ' '.join(cmd_detail[1:])\n if not less_flags:\n env['LESS'] = '-R'\n color = True\n elif 'r' in less_flags or 'R' in less_flags:\n color = True\n\n if not color:\n text = strip_ansi(text)\n\n c = subprocess.Popen(cmd, shell=True, stdin=subprocess.PIPE,\n env=env)\n encoding = get_best_encoding(c.stdin)\n try:\n c.stdin.write(text.encode(encoding, 'replace'))\n c.stdin.close()\n except (IOError, KeyboardInterrupt):\n pass\n\n # Less doesn't respect ^C, but catches it for its own UI purposes (aborting\n # search or other commands inside less).\n #\n # That means when the user hits ^C, the parent process (click) terminates,\n # but less is still alive, paging the output and messing up the terminal.\n #\n # If the user wants to make the pager exit on ^C, they should set\n # `LESS='-K'`. It's not our decision to make.\n while True:\n try:\n c.wait()\n except KeyboardInterrupt:\n pass\n else:\n break", "response": "Page through text by feeding it to another program."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _get_funky(func):\n\n if isinstance(func, str):\n try:\n func = getattr(np, func)\n except:\n raise NameError(\"Nom de fonction non comprise\")\n return func", "response": "Return the numpy function whose name was passed as a string; if a callable was passed, return it unchanged."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef profil_journalier(df, func='mean'):\n\n func = _get_funky(func)\n res = df.groupby(lambda x: x.hour).aggregate(func)\n return res", "response": "Compute the daily profile of a time series: aggregate the data by hour of day using the given function."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following 
Python 3 code\ndef profil_hebdo(df, func='mean'):\n\n func = _get_funky(func)\n res = df.groupby(lambda x: x.weekday).aggregate(func)\n # On met des noms de jour \u00e0 la place des num\u00e9ros dans l'index\n res.index = [cal.day_name[i] for i in range(0,7)]\n return res", "response": "Compute the weekly profile of a time series: aggregate the data by day of week using the given function."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef profil_annuel(df, func='mean'):\n\n func = _get_funky(func)\n res = df.groupby(lambda x: x.month).aggregate(func)\n # On met des noms de mois \u00e0 la place des num\u00e9ros dans l'index\n res.index = [cal.month_name[i] for i in range(1,13)]\n return res", "response": "Compute the annual profile of a time series: aggregate the data by month using the given function."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nformats an input value for use in the liste_* and get_* functions.", "response": "def _format(noms):\n \"\"\"\n Formate une donn\u00e9e d'entr\u00e9e pour \u00eatre exploitable dans les fonctions liste_*\n et get_*.\n\n Param\u00e8tres:\n noms: cha\u00eene de caract\u00e8re, liste ou tuples de cha\u00eenes de caract\u00e8res ou\n pandas.Series de cha\u00eenes de caract\u00e8res.\n\n Retourne:\n Une cha\u00eenes de caract\u00e8res dont chaque \u00e9l\u00e9ment est s\u00e9par\u00e9 du suivant par les\n caract\u00e8res ',' (simples quotes comprises)\n\n \"\"\"\n if isinstance(noms, (list, tuple, pd.Series)):\n noms = ','.join(noms)\n noms = noms.replace(\",\", \"','\")\n return noms"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef date_range(debut, fin, freq):\n\n debut_dt = debut.replace(hour=0, minute=0, second=0, microsecond=0)\n fin_dt = fin.replace(hour=23, minute=59, second=0, microsecond=0)\n if freq in ('M', 'A'): # Calle la fr\u00e9quence sur le d\u00e9but de mois/ann\u00e9e\n freq += 'S'\n debut_dt = 
debut_dt.replace(day=1, minute=0, second=0, microsecond=0)\n fin_dt = fin_dt.replace(day=1, minute=0, second=0, microsecond=0)\n dates_completes = pd.date_range(start=debut_dt, end=fin_dt, freq=freq)\n return dates_completes", "response": "Return a complete pandas date range between debut and fin at the given frequency."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nconnecting to the XAIR server.", "response": "def _connect(self):\n \"\"\"\n Connexion \u00e0 la base XAIR\n \"\"\"\n\n try:\n # On passe par Oracle Instant Client avec le TNS ORA_FULL\n self.conn = cx_Oracle.connect(self._ORA_FULL)\n self.cursor = self.conn.cursor()\n print('XAIR: Connexion \u00e9tablie')\n except cx_Oracle.Error as e:\n print(\"Erreur: %s\" % (e))\n raise cx_Oracle.Error('Echec de connexion')"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef liste_parametres(self, parametre=None):\n condition = \"\"\n if parametre:\n condition = \"WHERE CCHIM='%s'\" % parametre\n _sql = \"\"\"SELECT CCHIM AS PARAMETRE,\n NCON AS LIBELLE,\n NOPOL AS CODE\n FROM NOM_MESURE %s ORDER BY CCHIM\"\"\" % condition\n return psql.read_sql(_sql, self.conn)", "response": "List the available measurement parameters, optionally filtered by parameter code."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef liste_campagnes(self, campagne=None):\n\n condition = \"\"\n if campagne:\n condition = \"WHERE NOM_COURT_CM='%s' \"\"\" % campagne\n _sql = \"\"\"SELECT\n NOM_COURT_CM AS CAMPAGNE,\n IDENTIFIANT AS STATION,\n LIBELLE AS LIBELLE_CM,\n DATEDEB AS DEBUT,\n DATEFIN AS FIN\n FROM CAMPMES\n INNER JOIN CAMPMES_STATION USING (NOM_COURT_CM)\n INNER JOIN STATION USING (NOM_COURT_SIT)\n %s ORDER BY DATEDEB DESC\"\"\" % condition\n return psql.read_sql(_sql, self.conn)", "response": "Liste des campagnes de mesure et des stations associ\u00e9es campagne. 
Si d\u00e9finie liste des stations associ\u00e9es campagne."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_mesures(self, mes, debut=None, fin=None, freq='H', format=None,\n dayfirst=False, brut=False):\n \"\"\"\n R\u00e9cup\u00e9ration des donn\u00e9es de mesure.\n\n Param\u00e8tres:\n mes: Un nom de mesure ou plusieurs s\u00e9par\u00e9es par des virgules, une liste\n (list, tuple, pandas.Series) de noms\n debut: Chaine de caract\u00e8re ou objet datetime d\u00e9crivant la date de d\u00e9but.\n D\u00e9faut=date du jour\n fin: Chaine de caract\u00e8re ou objet datetime d\u00e9crivant la date de fin.\n D\u00e9faut=date de d\u00e9but\n freq: fr\u00e9quence de temps. '15T' | 'H' | 'D' | 'M' | 'A' (15T pour quart-horaire)\n format: chaine de caract\u00e8re d\u00e9crivant le format des dates (ex:\"%Y-%m-%d\"\n pour debut/fin=\"2013-01-28\"). Appeler pyair.date.strtime_help() pour\n obtenir la liste des codes possibles.\n Defaut=\"%Y-%m-%d\"\n dayfirst: Si aucun format n'est fourni et que les dates sont des chaines\n de caract\u00e8res, aide le d\u00e9crypteur \u00e0 transformer la date en objet datetime\n en sp\u00e9cifiant que les dates commencent par le jour (ex:11/09/2012\n pourrait \u00eatre interpret\u00e9 comme le 09 novembre si dayfirst=False)\n brut: si oui ou non renvoyer le dataframe brut, non invalid\u00e9, et les\n codes d'\u00e9tat des mesures\n Defaut=False\n\n Retourne:\n Un dataframe contenant toutes les mesures demand\u00e9es.\n Si brut=True, renvoie le dataframe des mesures brutes non invalid\u00e9es et\n le dataframe des codes d'\u00e9tats.\n Le dataframe valide (net) peut \u00eatre alors recalcul\u00e9 en faisant:\n brut, etats = xr.get_mesure(..., brut=True)\n invalides = etats_to_invalid(etats)\n net = brut.mask(invalides)\n\n \"\"\"\n\n def create_index(index, freq):\n \"\"\"\n Nouvel index [id, date] avec date format\u00e9 suivant le pas de temps voulu\n index: index de 
l'ancien dataframe, tel que [date \u00e0 minuit, date \u00e0 ajouter]\n\n \"\"\"\n decalage = 1 # sert \u00e0 compenser l'aberration des temps qui veut qu'on marque sur la fin d'une p\u00e9riode (ex: \u00e0 24h, la pollution de 23 \u00e0 minuit)\n if freq == 'T' or freq == '15T':\n f = pd.tseries.offsets.Minute\n decalage = 15\n elif freq == 'H':\n f = pd.tseries.offsets.Hour\n elif freq == 'D':\n f = pd.tseries.offsets.Day\n elif freq == 'M':\n f = pd.tseries.offsets.MonthBegin\n elif freq == 'A':\n f = pd.tseries.offsets.YearBegin\n else:\n f = pd.tseries.offsets.Hour\n new_index = [date + f(int(delta) - decalage) for date, delta in index]\n return new_index\n\n # Reformatage du champ des noms de mesure\n mes = _format(mes)\n\n # Analyse des champs dates\n debut = to_date(debut, dayfirst, format)\n if not fin:\n fin = debut\n else:\n fin = to_date(fin, dayfirst, format)\n\n\n # La freq de temps Q n'existe pas, on passe d'abord par une fr\u00e9quence 15 minutes\n if freq in ('Q', 'T'):\n freq = '15T'\n\n # S\u00e9lection des champs et de la table en fonctions de la fr\u00e9quence de temps souhait\u00e9e\n if freq == '15T':\n diviseur = 96\n champ_val = ','.join(['Q_M%02i AS \"%i\"' % (x, x * 15) for x in range(1, diviseur + 1)])\n champ_code = 'Q_ETATV'\n table = 'JOURNALIER'\n elif freq == 'H':\n diviseur = 24\n champ_val = ','.join(['H_M%02i AS \"%i\"' % (x, x) for x in range(1, diviseur + 1)])\n champ_code = 'H_ETAT'\n table = 'JOURNALIER'\n elif freq == 'D':\n diviseur = 1\n champ_val = 'J_M01 AS \"1\"'\n champ_code = 'J_ETAT'\n table = 'JOURNALIER'\n elif freq == 'M':\n diviseur = 12\n champ_val = ','.join(['M_M%02i AS \"%i\"' % (x, x) for x in range(1, diviseur + 1)])\n champ_code = 'M_ETAT'\n table = 'MOIS'\n elif freq == 'A':\n diviseur = 1\n champ_val = 'A_M01 AS \"1\"'\n champ_code = 'A_ETAT'\n table = 'MOIS'\n else:\n raise ValueError(\"freq doit \u00eatre T, H, D, M ou A\")\n\n if table == 'JOURNALIER':\n champ_date = 'J_DATE'\n debut_db = debut\n fin_db = 
fin\n else:\n champ_date = 'M_DATE'\n # Pour les freq='M' et 'A', la table contient toutes les valeurs sur une\n # ann\u00e9e enti\u00e8re. Pour ne pas perturber la r\u00e9cup\u00e9ration si on passait des\n # dates en milieu d'ann\u00e9e, on transforme les dates pour \u00eatre cal\u00e9es en d\u00e9but\n # et en fin d'ann\u00e9e. Le recadrage se fera plus loin dans le code, lors du reindex\n debut_db = debut.replace(month=1, day=1, hour=0, minute=0)\n fin_db = fin.replace(month=12, day=31, hour=23, minute=0)\n\n debut_db = debut_db.strftime(\"%Y-%m-%d\")\n fin_db = fin_db.strftime(\"%Y-%m-%d\")\n\n # R\u00e9cup\u00e9ration des valeurs et codes d'\u00e9tats associ\u00e9s\n _sql = \"\"\"SELECT\n IDENTIFIANT as \"id\",\n {champ_date} as \"date\",\n {champ_code} as \"etat\",\n {champ_val}\n FROM {table}\n INNER JOIN MESURE USING (NOM_COURT_MES)\n WHERE IDENTIFIANT IN ('{mes}')\n AND {champ_date} BETWEEN TO_DATE('{debut}', 'YYYY-MM-DD') AND TO_DATE('{fin}', 'YYYY-MM-DD')\n ORDER BY IDENTIFIANT, {champ_date} ASC\"\"\".format(champ_date=champ_date,\n table=table,\n champ_code=champ_code,\n mes=mes,\n champ_val=champ_val,\n debut=debut_db,\n fin=fin_db)\n ## TODO : A essayer quand la base sera en version 11g\n # _sql = \"\"\"SELECT *\n # FROM ({selection})\n # UNPIVOT (IDENTIFIANT FOR VAL IN ({champ_as}))\"\"\".format(selection=_sql,\n # champ_date=champ_date,\n # champ_as=champ_as)\n\n # On recupere les valeurs depuis la freq dans une dataframe\n rep = psql.read_sql(_sql, self.conn)\n\n # On cr\u00e9ait un multiindex pour manipuler plus facilement le dataframe\n df = rep.set_index(['id', 'date'])\n\n # Stack le dataframe pour mettre les colonnes en lignes, en supprimant la colonne des \u00e9tats\n # puis on unstack suivant l'id pour avoir les polluants en colonnes\n etats = df['etat']\n df = df.drop('etat', axis=1)\n df_stack = df.stack(dropna=False)\n df = df_stack.unstack('id')\n\n # Calcul d'un nouvel index avec les bonnes dates. 
L'index du df est\n # form\u00e9 du champ date \u00e0 minuit, et des noms des champs de valeurs\n # qui sont aliass\u00e9s de 1 \u00e0 24 pour les heures, ... voir champ_val.\n # On aggr\u00e8ve alors ces 2 valeurs pour avoir des dates align\u00e9es qu'on utilise alors comme index final\n index = create_index(df.index, freq)\n df.reset_index(inplace=True, drop=True)\n df['date'] = index\n df = df.set_index(['date'])\n # Traitement des codes d'\u00e9tat\n # On concat\u00e8ne les codes d'\u00e9tat pour chaque polluant\n # etats = etats.sum(level=0)\n # etats = pd.DataFrame(zip(*etats.apply(list)))\n etats = etats.unstack('id')\n etats.fillna(value=MISSING_CODE * diviseur, inplace=True)\n etats = etats.sum(axis=0)\n etats = pd.DataFrame(list(zip(*etats.apply(list))))\n etats.index = df.index\n etats.columns = df.columns\n\n # Remplacement des valeurs aux dates manquantes par des NaN\n dates_completes = date_range(debut, fin, freq)\n df = df.reindex(dates_completes)\n etats = etats.reindex(dates_completes)\n\n # Invalidation par codes d'\u00e9tat\n # Pour chaque code d'\u00e9tat, regarde si oui ou non il est invalidant en le remplacant par un bool\u00e9en\n invalid = etats_to_invalid(etats)\n\n if not brut:\n # dans le dataframe, masque toute valeur invalide par NaN\n dfn = df.mask(invalid) # DataFrame net\n return dfn\n else:\n return df, etats", "response": "Retrieve measurement data for the requested measures as a time-indexed DataFrame; if brut=True, also return the raw (non-invalidated) values and the state codes."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nretrieve the manual (laboratory) measurements for a given site.", "response": "def get_manuelles(self, site, code_parametre, debut, fin, court=False):\n \"\"\"\n Recup\u00e9ration des mesures manuelles (labo) pour un site\n\n site: num\u00e9ro du site (voir fonction liste_sites_prelevement)\n code_parametre: code ISO du param\u00e8tre \u00e0 rechercher (C6H6=V4)\n debut: date de d\u00e9but du premier pr\u00e9l\u00e8vement\n fin: date de fin du dernier pr\u00e9l\u00e8vement\n court: 
Renvoie un tableau au format court ou long (colonnes)\n\n \"\"\"\n\n condition = \"WHERE MESLA.NOPOL='%s' \" % code_parametre\n condition += \"AND SITMETH.NSIT=%s \" % site\n condition += \"AND PRELEV.DATE_DEB>=TO_DATE('%s', 'YYYY-MM-DD') \" % debut\n condition += \"AND PRELEV.DATE_FIN<=TO_DATE('%s', 'YYYY-MM-DD') \" % fin\n if court == False:\n select = \"\"\"SELECT\n MESLA.LIBELLE AS MESURE,\n METH.LIBELLE AS METHODE,\n ANA.VALEUR AS VALEUR,\n MESLA.UNITE AS UNITE,\n ANA.CODE_QUALITE AS CODE_QUALITE,\n ANA.DATE_ANA AS DATE_ANALYSE,\n ANA.ID_LABO AS LABO,\n PRELEV.DATE_DEB AS DEBUT,\n PRELEV.DATE_FIN AS FIN,\n ANA.COMMENTAIRE AS COMMENTAIRE,\n SITE.LIBELLE AS SITE,\n SITE.AXE AS ADRESSE,\n COM.NOM_COMMUNE AS COMMUNE\"\"\"\n else:\n select = \"\"\"SELECT\n MESLA.LIBELLE AS MESURE,\n ANA.VALEUR AS VALEUR,\n MESLA.UNITE AS UNITE,\n ANA.CODE_QUALITE AS CODE_QUALITE,\n PRELEV.DATE_DEB AS DEBUT,\n PRELEV.DATE_FIN AS FIN,\n SITE.AXE AS ADRESSE,\n COM.NOM_COMMUNE AS COMMUNE\"\"\"\n _sql = \"\"\"%s\n FROM ANALYSE ANA\n INNER JOIN PRELEVEMENT PRELEV ON (ANA.CODE_PRELEV=PRELEV.CODE_PRELEV AND ANA.CODE_SMP=PRELEV.CODE_SMP)\n INNER JOIN MESURE_LABO MESLA ON (ANA.CODE_MES_LABO=MESLA.CODE_MES_LABO AND ANA.CODE_SMP=MESLA.CODE_SMP)\n INNER JOIN SITE_METH_PRELEV SITMETH ON (ANA.CODE_SMP=SITMETH.CODE_SMP)\n INNER JOIN METH_PRELEVEMENT METH ON (SITMETH.CODE_METH_P=METH.CODE_METH_P)\n INNER JOIN SITE_PRELEVEMENT SITE ON (SITE.NSIT=SITMETH.NSIT)\n INNER JOIN COMMUNE COM ON (COM.NINSEE=SITE.NINSEE)\n %s\n ORDER BY MESLA.NOPOL,MESLA.LIBELLE,PRELEV.DATE_DEB\"\"\" % (select, condition)\n return psql.read_sql(_sql, self.conn)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns a pandas. 
DataFrame with the indices of the entries in the result set.", "response": "def get_indices(self, res, debut, fin):\n \"\"\"\n R\u00e9cup\u00e9ration des indices ATMO pour un r\u00e9seau donn\u00e9.\n\n Param\u00e8tres:\n res : Nom du ou des r\u00e9seaux \u00e0 chercher (str, list, pandas.Series)\n debut: date de d\u00e9but, format YYYY-MM-JJ (str)\n fin: Date de fin, format YYYY-MM-JJ (str)\n\n \"\"\"\n res = _format(res)\n\n _sql = \"\"\"SELECT\n J_DATE AS \"date\",\n NOM_AGGLO AS \"reseau\",\n C_IND_CALCULE AS \"indice\"\n FROM RESULTAT_INDICE\n INNER JOIN GROUPE_ATMO USING (NOM_COURT_GRP)\n WHERE NOM_AGGLO IN ('%s')\n AND J_DATE BETWEEN TO_DATE('%s', 'YYYY-MM-DD') AND TO_DATE('%s', 'YYYY-MM-DD') \"\"\" % (res, debut, fin)\n rep = psql.read_sql(_sql, self.conn)\n df = rep.set_index(['reseau', 'date'])\n df = df['indice']\n df = df.unstack('reseau')\n dates_completes = date_range(to_date(debut), to_date(fin), freq='D')\n df = df.reindex(dates_completes)\n return df"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_indices_et_ssi(self, reseau, debut, fin, complet=True):\n if complet:\n i_str = \"c_ind_diffuse\"\n ssi_str = \"c_ss_indice\"\n else:\n i_str = \"p_ind_diffuse\"\n ssi_str = \"p_ss_indice\"\n\n _sql = \"\"\"SELECT\n g.nom_agglo as \"reseau\",\n i.j_date as \"date\",\n max(case when i.{0}>0 then i.{0} else 0 end) indice,\n max(case when n.cchim='NO2' then ssi.{1} else 0 end) no2,\n max(case when n.cchim='PM10' then ssi.{1} else 0 end) pm10,\n max(case when n.cchim='O3' then ssi.{1} else 0 end) o3,\n max(case when n.cchim='SO2' then ssi.{1} else 0 end) so2\n FROM resultat_indice i\n INNER JOIN resultat_ss_indice ssi ON (i.nom_court_grp=ssi.nom_court_grp AND i.j_date=ssi.j_date)\n INNER JOIN groupe_atmo g ON (i.nom_court_grp=g.nom_court_grp)\n INNER JOIN nom_mesure n ON (ssi.nopol=n.nopol)\n WHERE g.nom_agglo='{2}'\n AND i.j_date BETWEEN\n TO_DATE('{3}', 'YYYY-MM-DD') AND\n TO_DATE('{4}', 
'YYYY-MM-DD')\n GROUP BY\n g.nom_agglo,\n i.j_date\n ORDER BY i.j_date\"\"\".format(i_str, ssi_str, reseau, debut, fin)\n df = psql.read_sql(_sql, self.conn)\n df = df.set_index(['reseau', 'date'])\n return df", "response": "Renvoie l indice et les sous_indices\n "} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nattempts to run a global hook by name with args", "response": "def run_global_hook(hook_name, *args):\n '''Attempt to run a global hook by name with args'''\n\n hook_finder = HookFinder(get_global_hook_path())\n hook = hook_finder(hook_name)\n if hook:\n hook.run(*args)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef status(jobs):\n def status_handler():\n endpoints = []\n stats = {\"endpoints\": None}\n\n executor = concurrent.futures.ThreadPoolExecutor(max_workers=5)\n # This is basically calling the below within the executor:\n #\n # >>> timeit(job[2], number=1)\n #\n # job is a tuple of (name, timeout, func) so the above is really:\n # >>> timeit(func, number=1)\n #\n #gen = ((job, executor.submit(timeit, job[2], number=1)) for job in jobs)\n #for job, future in gen:\n for job, future in [(job, executor.submit(timeit, job[2], number=1)) for job in jobs]:\n name, timeout, _ = job\n endpoint = {\"endpoint\": name}\n try:\n data = future.result(timeout=timeout)\n endpoint[\"duration\"] = data\n except concurrent.futures.TimeoutError:\n endpoint[\"error\"] = \"timeout exceeded\"\n except Exception as ex:\n endpoint[\"error\"] = str(ex)\n endpoints.append(endpoint)\n\n if len(endpoints) > 0:\n stats[\"endpoints\"] = endpoints\n\n executor.shutdown(wait=False)\n return jsonify(**stats)\n # TODO: Look into potentially cleaning up jobs that have timed-out.\n #\n # This could be done by changing jobs from a func to an object\n # that implements `def interrupt(self):` which would be used\n # to interrupt/stop/close/cleanup any resources.\n return status_handler", 
"response": "Handler that calls each status job in a worker pool, enforcing a per-job timeout."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef moyennes_glissantes(df, sur=8, rep=0.75):\n return pd.rolling_mean(df, window=sur, min_periods=rep * sur)", "response": "Compute rolling means over sur periods, requiring at least rep*sur valid values in each window."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef nombre_depassement(df, valeur, freq=None):\n\n dep = depassement(df, valeur)\n if freq is not None:\n dep = dep.resample(freq, how='sum')\n return dep.count()", "response": "Count the number of exceedances of a threshold value, optionally after resampling at the given frequency."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef aot40_vegetation(df, nb_an):\n\n return _aot(df.tshift(1), nb_an=nb_an, limite=80, mois_debut=5, mois_fin=7,\n heure_debut=8, heure_fin=19)", "response": "Compute the AOT40 indicator for vegetation protection (May to July, 8h to 19h, 80 ug/m3 threshold) over nb_an years."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef no2(df):\n\n polluant = \"NO2\"\n\n # Le DataFrame doit \u00eatre en heure\n if not isinstance(df.index.freq, pdoffset.Hour):\n raise FreqException(\"df doit \u00eatre en heure.\")\n\n res = {\"Seuil de RI en moyenne H: 200u\": depassement(df, valeur=200),\n \"Seuil d'Alerte sur 3H cons\u00e9cutives: 400u\": consecutive(df, valeur=400, sur=3),\n \"Seuil d'Alerte sur 3J cons\u00e9cutifs: 200u\": consecutive(df.resample('D', how='max'), valeur=200, sur=3),\n \"Valeur limite pour la sant\u00e9 humaine 18H/A: 200u\": depassement(df, valeur=200),\n \"Valeur limite pour la sant\u00e9 humaine en moyenne A: 40u\": depassement(df.resample('A', how='mean'),\n 
valeur=40),\n \"Objectif de qualit\u00e9 en moyenne A: 40u\": depassement(df.resample('A', how='mean'), valeur=40),\n \"Protection de la v\u00e9g\u00e9tation en moyenne A: 30u\": depassement(df.resample('A', how='mean'), valeur=30),\n }\n\n return polluant, res", "response": "Regulatory calculations for NO2: hourly information and alert thresholds, limit values for human health and quality objectives, returned as a dict of exceedance series keyed by criterion."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nregulatory calculations for PM10 particulates (calculs r\u00e9glementaires pour les particules PM10)", "response": "def pm10(df):\n \"\"\"\n Calculs r\u00e9glementaires pour les particules PM10\n\n Param\u00e8tres:\n df: DataFrame contenant les mesures, avec un index temporel\n (voir xair.get_mesure)\n\n Retourne:\n Une s\u00e9rie de r\u00e9sultats dans un DataFrame :\n ******\n unit\u00e9 (u): \u00b5g/m3 (microgramme par m\u00e8tre cube)\n\n Seuil de RI en moyenne J: 50u\n Seuil d'Alerte en moyenne J: 80u\n Valeur limite pour la sant\u00e9 humaine 35J/A: 50u\n Valeur limite pour la sant\u00e9 humaine en moyenne A: 40u\n Objectif de qualit\u00e9 en moyenne A: 30u\n\n Les r\u00e9sultats sont donn\u00e9s en terme d'heure de d\u00e9passement\n\n \"\"\"\n\n polluant = 'PM10'\n # Le DataFrame doit \u00eatre en jour\n if not isinstance(df.index.freq, pdoffset.Day):\n raise FreqException(\"df doit \u00eatre en jour.\")\n\n res = {\"Seuil de RI en moyenne J: 50u\": depassement(df, valeur=50),\n \"Seuil d'Alerte en moyenne J: 80u\": depassement(df, valeur=80),\n \"Valeur limite pour la sant\u00e9 humaine 35J/A: 50u\": depassement(df, valeur=50),\n \"Valeur limite pour la sant\u00e9 humaine en moyenne A: 40u\": depassement(df.resample('A', how='mean'),\n valeur=40),\n \"Objectif de qualit\u00e9 en moyenne A: 30u\": depassement(df.resample('A', how='mean'), valeur=30),\n }\n\n return polluant, res"} {"SOURCE": "codesearchnet", "instruction":
"Given the following Python 3 function, write the documentation\ndef so2(df):\n\n polluant = 'SO2'\n\n # Le DataFrame doit \u00eatre en heure\n if not isinstance(df.index.freq, pdoffset.Hour):\n raise FreqException(\"df doit \u00eatre en heure.\")\n\n res = {\"Seuil de RI en moyenne H: 300u\": depassement(df, valeur=300),\n \"Seuil d'Alerte sur 3H cons\u00e9cutives: 500u\": depassement(df, valeur=500),\n \"Valeur limite pour la sant\u00e9 humaine 24H/A: 350u\": depassement(df, valeur=350),\n \"Valeur limite pour la sant\u00e9 humaine 3J/A: 125u\": depassement(df.resample('D', how='mean'), valeur=125),\n \"Objectif de qualit\u00e9 en moyenne A: 50u\": depassement(df.resample('A', how='mean'), valeur=50),\n \"Protection de la v\u00e9g\u00e9tation en moyenne A: 20u\": depassement(df.resample('A', how='mean'), valeur=20),\n \"Protection de la v\u00e9g\u00e9tation du 01/10 au 31/03: 20u\": depassement(\n df[(df.index.month <= 3) | (df.index.month >= 10)], valeur=20),\n }\n\n return polluant, res", "response": "A helper function for calculating the non - defaul sont donn\u00e9s in a single sont donn\u00e9s."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef c6h6(df):\n\n polluant = 'C6H6'\n\n # Le DataFrame doit \u00eatre en heure\n if not isinstance(df.index.freq, pdoffset.Hour):\n raise FreqException(\"df doit \u00eatre en heure.\")\n\n res = {\"Objectif de qualit\u00e9 en moyenne A: 2u\": depassement(df.resample('A', how='mean'), valeur=2),\n \"Valeur limite pour la sant\u00e9 humaine en moyenne A: 5u\": depassement(df.resample('A', how='mean'), valeur=5),\n }\n\n return polluant, res", "response": "C6H6 - > C6H6"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nprint out synthes de la fonction renvoyant les valeurs de la fonction de calcul\u00e9es.", "response": "def print_synthese(fct, df):\n \"\"\"\n Pr\u00e9sente une synth\u00e8se des calculs r\u00e9glementaires en 
fournissant les valeurs\n calcul\u00e9es suivant les r\u00e9glementations d\u00e9finies dans chaque fonction de calcul\n et un tableau de nombre de d\u00e9passement.\n\n Param\u00e8tres:\n fct: fonction renvoyant les \u00e9l\u00e9ments calcul\u00e9es\n df: DataFrame de valeurs d'entr\u00e9e \u00e0 fournir \u00e0 la fonction\n\n Retourne:\n Imprime sur l'\u00e9cran les valeurs synth\u00e9tis\u00e9es\n\n \"\"\"\n\n res_count = dict()\n\n polluant, res = fct(df)\n print(\"\\nPour le polluant: %s\" % polluant)\n\n print(\"\\nValeurs mesur\u00e9es suivant crit\u00e8res:\")\n for k, v in res.items():\n comp = compresse(v)\n if not comp.empty:\n comp.index.name = k\n print(comp.to_string(na_rep='', float_format=lambda x: \"%.0f\" % x))\n else:\n print(\"\\n%s: aucune valeur en d\u00e9passement\" % k)\n res_count[k] = v.count()\n\n res_count = pd.DataFrame(res_count).T\n print(\"Nombre de d\u00e9passements des crit\u00e8res:\\n\")\n print(res_count)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef html_synthese(fct, df):\n\n html = str()\n res_count = dict()\n buf = StringIO()\n polluant, res = fct(df)\n html += '<h2>Pour le polluant: {}</h2>'.format(polluant)\n\n # On enregistre tous les r\u00e9sultats dans le buffer et on calcule la somme de chaque\n for k, v in res.items():\n buf.write(\"<p>\")\n comp = compresse(v)\n if not comp.empty:\n comp.index.name = k\n comp.to_html(buf=buf,\n sparsify=True,\n na_rep=\"\")\n else:\n buf.write(\n '<p>{}: Aucun d\u00e9passement</p>'.format(\n k))\n buf.write(\"</p>\")\n res_count[k] = v.count()\n\n res_count = pd.DataFrame(res_count).T\n res_count.index.name = \"Nombre de d\u00e9passements des crit\u00e8res\"\n html += \"<p>\"\n html += res_count.to_html(sparsify=True)\n html += \"</p>\"\n\n html += buf.getvalue()\n\n return html", "response": "Return an HTML synthesis of the regulatory calculations: a table of measured exceedance values per criterion plus a summary table counting exceedances."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nshow, for each series of a DataFrame, the maximum value(s) and the date(s) at which they occur.", "response": "def show_max(df):\n \"\"\"Pour chaque serie (colonne) d'un DataFrame, va rechercher la (les) valeur(s)\n et la (les) date(s) du (des) max.\n\n Param\u00e8tres:\n df: DataFrame de valeurs \u00e0 calculer\n\n Retourne:\n Un DataFrame montrant pour chaque serie (colonne), les valeurs maxs aux dates\n d'apparition.\n \"\"\"\n df = df.astype(pd.np.float)\n res = list()\n for c in df.columns:\n serie = df[c]\n res.append(serie.where(cond=serie == serie.max(), other=pd.np.nan).dropna())\n return pd.DataFrame(res).T"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef taux_de_representativite(df):\n return (df.count().astype(pd.np.float) / df.shape[0] * 100).round(1)", "response": "Compute the data-capture rate (taux de repr\u00e9sentativit\u00e9) of each series, as a percentage rounded to one decimal."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nvalidates all the entries in the environment cache.", "response": "def validate(self):\n '''Validate all the entries in the environment cache.'''\n\n for env in list(self):\n if not env.exists:\n self.remove(env)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef load(self):\n '''Load the environment cache from disk.'''\n\n if not os.path.exists(self.path):\n return\n\n with open(self.path, 'r') as f:\n env_data = yaml.load(f.read())\n\n if env_data:\n for env in env_data:\n self.add(VirtualEnvironment(env['root']))", "response": "Load the environment cache from disk."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef save(self):\n '''Save the environment cache to disk.'''\n\n env_data = 
[dict(name=env.name, root=env.path) for env in self]\n encode = yaml.safe_dump(env_data, default_flow_style=False)\n\n with open(self.path, 'w') as f:\n f.write(encode)", "response": "Save the environment cache to disk."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef prompt(text, default=None, hide_input=False,\n confirmation_prompt=False, type=None,\n value_proc=None, prompt_suffix=': ',\n show_default=True, err=False):\n \"\"\"Prompts a user for input. This is a convenience function that can\n be used to prompt a user for input later.\n\n If the user aborts the input by sending a interrupt signal, this\n function will catch it and raise a :exc:`Abort` exception.\n\n .. versionadded:: 6.0\n Added unicode support for cmd.exe on Windows.\n\n .. versionadded:: 4.0\n Added the `err` parameter.\n\n :param text: the text to show for the prompt.\n :param default: the default value to use if no input happens. If this\n is not given it will prompt until it's aborted.\n :param hide_input: if this is set to true then the input value will\n be hidden.\n :param confirmation_prompt: asks for confirmation for the value.\n :param type: the type to use to check the value against.\n :param value_proc: if this parameter is provided it's a function that\n is invoked instead of the type conversion to\n convert a value.\n :param prompt_suffix: a suffix that should be added to the prompt.\n :param show_default: shows or hides the default value in the prompt.\n :param err: if set to true the file defaults to ``stderr`` instead of\n ``stdout``, the same as with echo.\n \"\"\"\n result = None\n\n def prompt_func(text):\n f = hide_input and hidden_prompt_func or visible_prompt_func\n try:\n # Write the prompt separately so that we get nice\n # coloring through colorama on Windows\n echo(text, nl=False, err=err)\n return f('')\n except (KeyboardInterrupt, EOFError):\n # getpass doesn't print a newline if the user aborts input 
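`taux_de_representativite` is just the share of non-missing values, in percent, rounded to one decimal (`df.count() / df.shape[0] * 100`). A plain-Python equivalent for a single series:

```python
def taux_de_representativite(series):
    """Percentage of non-missing values, rounded to one decimal place."""
    if not series:
        return 0.0
    present = sum(1 for v in series if v is not None)
    return round(present / len(series) * 100, 1)

rate = taux_de_representativite([1.2, None, 3.4, None, 5.6, 7.8])  # 4 of 6 present
```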
with ^C.\n # Allegedly this behavior is inherited from getpass(3).\n # A doc bug has been filed at https://bugs.python.org/issue24711\n if hide_input:\n echo(None, err=err)\n raise Abort()\n\n if value_proc is None:\n value_proc = convert_type(type, default)\n\n prompt = _build_prompt(text, prompt_suffix, show_default, default)\n\n while 1:\n while 1:\n value = prompt_func(prompt)\n if value:\n break\n # If a default is set and used, then the confirmation\n # prompt is always skipped because that's the only thing\n # that really makes sense.\n elif default is not None:\n return default\n try:\n result = value_proc(value)\n except UsageError as e:\n echo('Error: %s' % e.message, err=err)\n continue\n if not confirmation_prompt:\n return result\n while 1:\n value2 = prompt_func('Repeat for confirmation: ')\n if value2:\n break\n if value == value2:\n return result\n echo('Error: the two entered values do not match', err=err)", "response": "Prompts a user for input."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _remote_setup_engine(engine_id, nengines):\n if distob.engine is None:\n distob.engine = distob.ObjectEngine(engine_id, nengines)\n # TODO these imports should be unnecessary with improved deserialization\n import numpy as np\n from scipy import stats\n # TODO Using @ipyparallel.interactive still did not import to __main__\n # so will do it this way for now.\n import __main__\n __main__.__dict__['np'] = np\n __main__.__dict__['stats'] = stats", "response": "Create an objectEngine instance and set up the objectengine module."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nprepares all iPython engines for distributed object processing.", "response": "def setup_engines(client=None):\n \"\"\"Prepare all iPython engines for distributed object processing.\n\n Args:\n client (ipyparallel.Client, optional): If None, will create a client\n using the default ipyparallel 
profile.\n \"\"\"\n if not client:\n try:\n client = ipyparallel.Client()\n except:\n raise DistobClusterError(\n u\"\"\"Could not connect to an ipyparallel cluster. Make\n sure a cluster is started (e.g. to use the CPUs of a\n single computer, can type 'ipcluster start')\"\"\")\n eids = client.ids\n if not eids:\n raise DistobClusterError(\n u'No ipyparallel compute engines are available')\n nengines = len(eids)\n dv = client[eids]\n dv.use_dill()\n with dv.sync_imports(quiet=True):\n import distob\n # create global ObjectEngine distob.engine on each engine\n ars = []\n for i in eids:\n dv.targets = i\n ars.append(dv.apply_async(_remote_setup_engine, i, nengines))\n dv.wait(ars)\n for ar in ars:\n if not ar.successful():\n raise ar.r\n # create global ObjectHub distob.engine on the client host\n if distob.engine is None:\n distob.engine = ObjectHub(-1, client)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _process_args(args, kwargs, prefer_local=True, recurse=True):\n this_engine = distob.engine.eid\n local_args = []\n remote_args = []\n execloc = this_engine # the chosen engine id for execution of the call\n for a in args:\n id = None\n if isinstance(a, Remote):\n id = a._ref.id\n elif isinstance(a, Ref):\n id = a.id\n elif isinstance(a, Id):\n id = a\n if id is not None:\n if id.engine is this_engine:\n local_args.append(distob.engine[id])\n remote_args.append(distob.engine[id])\n else:\n if (prefer_local and isinstance(a, Remote) and \n a._obcache_current):\n local_args.append(a._obcache)\n remote_args.append(id)\n else:\n # will choose remote execution\n if execloc is not this_engine and id.engine is not execloc:\n raise DistobValueError(\n 'two remote arguments are from different engines')\n else:\n execloc = id.engine\n local_args.append(None)\n remote_args.append(id)\n elif (isinstance(a, collections.Sequence) and\n not isinstance(a, string_types) and recurse):\n eid, ls, _ = 
_process_args(a, {}, prefer_local, recurse=False)\n if eid is not this_engine:\n if execloc is not this_engine and eid is not execloc:\n raise DistobValueError(\n 'two remote arguments are from different engines')\n execloc = eid\n local_args.append(ls)\n remote_args.append(ls)\n else:\n # argument is an ordinary object\n local_args.append(a)\n remote_args.append(a)\n local_kwargs = dict()\n remote_kwargs = dict()\n for k, a in kwargs.items():\n id = None\n if isinstance(a, Remote):\n id = a._ref.id\n elif isinstance(a, Ref):\n id = a.id\n elif isinstance(a, Id):\n id = a\n if id is not None:\n if id.engine is this_engine:\n local_kwargs[k] = distob.engine[id]\n remote_kwargs[k] = distob.engine[id]\n else:\n if (prefer_local and isinstance(a, Remote) and\n a._obcache_current):\n local_kwargs[k] = a._obcache\n remote_kwargs[k] = id\n else:\n # will choose remote execution\n if execloc is not this_engine and id.engine is not execloc:\n raise DistobValueError(\n 'two remote arguments are from different engines')\n else:\n execloc = id.engine\n local_kwargs[k] = None\n remote_kwargs[k] = id\n elif (isinstance(a, collections.Sequence) and\n not isinstance(a, string_types) and recurse):\n eid, ls, _ = _process_args(a, {}, prefer_local, recurse=False)\n if eid is not this_engine:\n if execloc is not this_engine and eid is not execloc:\n raise DistobValueError(\n 'two remote arguments are from different engines')\n execloc = eid\n local_kwargs[k] = ls\n remote_kwargs[k] = ls\n else:\n # argument is an ordinary object \n local_kwargs[k] = a\n remote_kwargs[k] = a\n if execloc is this_engine:\n return execloc, tuple(local_args), local_kwargs\n else:\n return execloc, tuple(remote_args), remote_kwargs", "response": "Process the arguments and kwargs."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef call(f, *args, **kwargs):\n this_engine = distob.engine.eid\n prefer_local = kwargs.pop('prefer_local', True)\n 
block = kwargs.pop('block', True)\n execloc, args, kwargs = _process_args(args, kwargs, prefer_local)\n if execloc is this_engine:\n r = f(*args, **kwargs)\n else:\n if False and prefer_local:\n # result cache disabled until issue mattja/distob#1 is fixed\n try:\n kwtuple = tuple((k, kwargs[k]) for k in sorted(kwargs.keys()))\n key = (f, args, kwtuple)\n r = _call_cache[key]\n except TypeError as te:\n if te.args[0][:10] == 'unhashable':\n #print(\"unhashable. won't be able to cache\")\n r = _uncached_call(execloc, f, *args, **kwargs)\n else:\n raise\n except KeyError:\n r = _uncached_call(execloc, f, *args, **kwargs)\n if block:\n _call_cache[key] = r.r\n else:\n r = _uncached_call(execloc, f, *args, **kwargs)\n if block:\n return convert_result(r)\n else:\n return r", "response": "Execute a function on the arguments locally or remotely as appropriate."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef convert_result(r):\n if (isinstance(r, collections.Sequence) and\n not isinstance(r, string_types)):\n rs = []\n for subresult in r:\n rs.append(convert_result(subresult))\n return rs\n if isinstance(r, ipyparallel.AsyncResult):\n r = r.r\n if isinstance(r, Ref):\n RemoteClass = distob.engine.proxy_types[r.type]\n r = RemoteClass(r)\n return r", "response": "Waits for and converts any AsyncResults."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _remote_methodcall(id, method_name, *args, **kwargs):\n obj = distob.engine[id]\n nargs = []\n for a in args:\n if isinstance(a, Id):\n nargs.append(distob.engine[a])\n elif (isinstance(a, collections.Sequence) and\n not isinstance(a, string_types)):\n nargs.append(\n [distob.engine[b] if isinstance(b, Id) else b for b in a])\n else: nargs.append(a)\n for k, a in kwargs.items():\n if isinstance(a, Id):\n kwargs[k] = distob.engine[a]\n elif (isinstance(a, collections.Sequence) and\n not 
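`call` memoizes results under a `(f, args, kwargs)` key and falls back to an uncached call when the key is unhashable, detected by inspecting the `TypeError` message (the original checks `te.args[0][:10] == 'unhashable'`). That fallback in isolation, with a made-up `expensive` function standing in for the remote call:

```python
_cache = {}
calls = {"count": 0}

def expensive(x):
    calls["count"] += 1
    return sum(x) if isinstance(x, list) else x

def cached_call(f, *args, **kwargs):
    key = (f, args, tuple(sorted(kwargs.items())))
    try:
        return _cache[key]             # may raise TypeError for unhashable keys
    except TypeError as te:
        if str(te).startswith("unhashable"):
            return f(*args, **kwargs)  # cannot cache this key; just call
        raise
    except KeyError:
        result = f(*args, **kwargs)
        _cache[key] = result
        return result

cached_call(expensive, 3)
cached_call(expensive, 3)        # second call served from the cache
cached_call(expensive, [1, 2])
cached_call(expensive, [1, 2])   # list argument is unhashable: runs again
```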
isinstance(a, string_types)):\n kwargs[k] = [\n distob.engine[b] if isinstance(b, Id) else b for b in a]\n result = getattr(obj, method_name)(*nargs, **kwargs)\n if (isinstance(result, collections.Sequence) and\n not isinstance(result, string_types)):\n # We will return any sub-sequences by value, not recurse deeper\n results = []\n for subresult in result:\n if type(subresult) in distob.engine.proxy_types: \n results.append(Ref(subresult))\n else:\n results.append(subresult)\n return results\n elif type(result) in distob.engine.proxy_types:\n return Ref(result)\n else:\n return result", "response": "Internal method to call the remote method on the object with the given id."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncall a method of obj locally or remotely as appropriate.", "response": "def methodcall(obj, method_name, *args, **kwargs):\n \"\"\"Call a method of `obj`, either locally or remotely as appropriate.\n obj may be an ordinary object, or a Remote object (or Ref or object Id)\n If there are multiple remote arguments, they must be on the same engine.\n \n kwargs:\n prefer_local (bool, optional): Whether to return cached local results if\n available, in preference to returning Remote objects. Default is True.\n block (bool, optional): Whether remote calls should be synchronous.\n If False, returned results may be AsyncResults and should be converted\n by the caller using convert_result() before use. 
Default is True.\n \"\"\"\n this_engine = distob.engine.eid\n args = [obj] + list(args)\n prefer_local = kwargs.pop('prefer_local', None)\n if prefer_local is None:\n if isinstance(obj, Remote):\n prefer_local = obj.prefer_local\n else:\n prefer_local = True\n block = kwargs.pop('block', True)\n execloc, args, kwargs = _process_args(args, kwargs, prefer_local)\n if execloc is this_engine:\n r = getattr(args[0], method_name)(*args[1:], **kwargs)\n else:\n if False and prefer_local:\n # result cache disabled until issue mattja/distob#1 is fixed\n try:\n kwtuple = tuple((k, kwargs[k]) for k in sorted(kwargs.keys()))\n key = (args[0], method_name, args, kwtuple)\n r = _call_cache[key]\n except TypeError as te:\n if te.args[0][:10] == 'unhashable':\n #print(\"unhashable. won't be able to cache\")\n r = _uncached_methodcall(execloc, args[0], method_name,\n *args[1:], **kwargs)\n else:\n raise\n except KeyError:\n r = _uncached_methodcall(execloc, args[0], method_name,\n *args[1:], **kwargs)\n if block:\n _call_cache[key] = r.r\n else:\n r = _uncached_methodcall(execloc, args[0], method_name,\n *args[1:], **kwargs)\n if block:\n return convert_result(r)\n else:\n return r"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nscan an object and return a tuple of method and attribute info.", "response": "def _scan_instance(obj, include_underscore, exclude):\n \"\"\"(Executed on remote or local engine) Examines an object and returns info\n about any instance-specific methods or attributes.\n (For example, any attributes that were set by __init__() )\n\n By default, methods or attributes starting with an underscore are ignored.\n\n Args:\n obj (object): the object to scan. must be on this local engine.\n include_underscore (bool or sequence of str): Should methods or\n attributes that start with an underscore be proxied anyway? 
If a\n sequence of names is provided then methods or attributes starting with\n an underscore will only be proxied if their names are in the sequence.\n exclude (sequence of str): names of any methods or attributes that should\n not be reported.\n \"\"\"\n from sys import getsizeof\n always_exclude = ('__new__', '__init__', '__getattribute__', '__class__',\n '__reduce__', '__reduce_ex__')\n method_info = []\n attributes_info = []\n if hasattr(obj, '__dict__'):\n for name in obj.__dict__:\n if (name not in exclude and \n name not in always_exclude and\n (name[0] != '_' or \n include_underscore is True or\n name in include_underscore)):\n f = obj.__dict__[name]\n if hasattr(f, '__doc__'):\n doc = f.__doc__\n else:\n doc = None\n if callable(f) and not isinstance(f, type):\n method_info.append((name, doc))\n else:\n attributes_info.append((name, doc))\n return (method_info, attributes_info, getsizeof(obj))"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nclasses decorator. Modifies `Remote` subclasses to add proxy methods and attributes that mimic those defined in class `base`. Example: @proxy_methods(Tree) class RemoteTree(Remote, Tree) The decorator registers the new proxy class and specifies which methods and attributes of class `base` should be proxied via a remote call to a real object, and which methods/attributes should not be proxied but instead called directly on the instance of the proxy class. By default all methods and attributes of the class `base` will be proxied except those starting with an underscore. The MRO of the decorated class is respected: Any methods and attributes defined in the decorated class (or in other bases of the decorated class that do not come after `base` in its MRO) will override those added by this decorator, so that `base` is treated like a base class. Args: base (type): The class whose instances should be remotely controlled. 
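`_scan_instance` splits the names in `obj.__dict__` into callables (methods) and data attributes, honouring the underscore policy where `include_underscore` may be `True` or a sequence of allowed names. The filtering logic on its own (class `Thing` is a made-up example):

```python
def scan_instance(obj, include_underscore=(), exclude=()):
    """Return (method_names, attribute_names) from the instance __dict__."""
    methods, attributes = [], []
    for name, value in vars(obj).items():
        if name in exclude:
            continue
        # Underscore names pass only if the policy is True or names them.
        if name.startswith("_") and include_underscore is not True \
                and name not in include_underscore:
            continue
        if callable(value) and not isinstance(value, type):
            methods.append(name)
        else:
            attributes.append(name)
    return sorted(methods), sorted(attributes)

class Thing:
    def __init__(self):
        self.size = 3
        self._secret = "hidden"
        self.ping = lambda: "pong"  # an instance-level callable

methods, attributes = scan_instance(Thing(), include_underscore=("_secret",))
```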
include_underscore (bool or sequence of str): Should methods or attributes that start with an underscore be proxied anyway? If a sequence of names is provided then methods or attributes starting with an underscore will only be proxied if their names are in the sequence. exclude (sequence of str): Names of any methods or attributes that should not be proxied. supers (bool): Proxy methods and attributes defined in superclasses of ``base``, in addition to those defined directly in class ``base``", "response": "def proxy_methods(base, include_underscore=None, exclude=None, supers=True):\n \"\"\"class decorator. Modifies `Remote` subclasses to add proxy methods and\n attributes that mimic those defined in class `base`.\n\n Example:\n\n @proxy_methods(Tree)\n class RemoteTree(Remote, Tree)\n\n The decorator registers the new proxy class and specifies which methods\n and attributes of class `base` should be proxied via a remote call to\n a real object, and which methods/attributes should not be proxied but\n instead called directly on the instance of the proxy class.\n\n By default all methods and attributes of the class `base` will be\n proxied except those starting with an underscore.\n\n The MRO of the decorated class is respected:\n Any methods and attributes defined in the decorated class\n (or in other bases of the decorated class that do not come after `base`\n in its MRO) will override those added by this decorator,\n so that `base` is treated like a base class.\n\n Args:\n base (type): The class whose instances should be remotely controlled.\n include_underscore (bool or sequence of str): Should methods or\n attributes that start with an underscore be proxied anyway? 
If a\n sequence of names is provided then methods or attributes starting with\n an underscore will only be proxied if their names are in the sequence.\n exclude (sequence of str): Names of any methods or attributes that \n should not be proxied.\n supers (bool): Proxy methods and attributes defined in superclasses \n of ``base``, in addition to those defined directly in class ``base``\n \"\"\"\n always_exclude = ('__new__', '__init__', '__getattribute__', '__class__',\n '__reduce__', '__reduce_ex__')\n if isinstance(include_underscore, str):\n include_underscore = (include_underscore,)\n if isinstance(exclude, str):\n exclude = (exclude,)\n if not include_underscore:\n include_underscore = ()\n if not exclude:\n exclude = ()\n def rebuild_class(cls):\n # Identify any bases of cls that do not come after `base` in the list:\n bases_other = list(cls.__bases__)\n if bases_other[-1] is object:\n bases_other.pop()\n if base in bases_other:\n bases_other = bases_other[:bases_other.index(base)]\n if not issubclass(cls.__bases__[0], Remote):\n raise DistobTypeError('First base class must be subclass of Remote')\n if not issubclass(base, object):\n raise DistobTypeError('Only new-style classes currently supported')\n dct = cls.__dict__.copy()\n if cls.__doc__ is None or '\\n' not in cls.__doc__:\n base_doc = base.__doc__\n if base_doc is None:\n base_doc = ''\n dct['__doc__'] = \"\"\"Local object representing a remote %s\n It can be used just like a %s object, but behind the scenes \n all requests are passed to a real %s object on a remote host.\n\n \"\"\" % ((base.__name__,)*3) + base_doc\n newcls = type(cls.__name__, cls.__bases__, dct)\n newcls._include_underscore = include_underscore\n newcls._exclude = exclude\n if supers:\n proxied_classes = base.__mro__[:-1]\n else:\n proxied_classes = (base,)\n for c in proxied_classes:\n for name in c.__dict__:\n #respect MRO: proxy an attribute only if it is not overridden\n if (name not in newcls.__dict__ and\n all(name not in 
b.__dict__ \n for c in bases_other for b in c.mro()[:-1]) and\n name not in newcls._exclude and\n name not in always_exclude and\n (name[0] != '_' or \n newcls._include_underscore is True or\n name in newcls._include_underscore)):\n f = c.__dict__[name]\n if hasattr(f, '__doc__'):\n doc = f.__doc__\n else:\n doc = None\n if callable(f) and not isinstance(f, type):\n setattr(newcls, name, _make_proxy_method(name, doc))\n else:\n setattr(newcls, name, _make_proxy_property(name, doc))\n newcls.__module__ = '__main__' # cause dill to pickle it whole\n import __main__\n __main__.__dict__[newcls.__name__] = newcls # for dill..\n ObjectHub.register_proxy_type(base, newcls)\n return newcls\n return rebuild_class"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ndistributes an object or list to remote engines, returning an async result or (possibly nested) lists of async results, each of which is a Ref", "response": "def _async_scatter(obj, destination=None):\n \"\"\"Distribute an obj or list to remote engines. \n Return an async result or (possibly nested) lists of async results, \n each of which is a Ref\n \"\"\"\n #TODO Instead of special cases for strings and Remote, should have a\n # list of types that should not be proxied, inc. 
strings and Remote.\n if (isinstance(obj, Remote) or \n isinstance(obj, numbers.Number) or \n obj is None):\n return obj\n if (isinstance(obj, collections.Sequence) and \n not isinstance(obj, string_types)):\n ars = []\n if destination is not None:\n assert(len(destination) == len(obj))\n for i in range(len(obj)):\n ars.append(_async_scatter(obj[i], destination[i]))\n else:\n for i in range(len(obj)):\n ars.append(_async_scatter(obj[i], destination=None))\n return ars\n else:\n if distob.engine is None:\n setup_engines()\n client = distob.engine._client\n dv = distob.engine._dv\n def remote_put(obj):\n return Ref(obj)\n if destination is not None:\n assert(isinstance(destination, numbers.Integral))\n dv.targets = destination\n else:\n dv.targets = _async_scatter.next_engine\n _async_scatter.next_engine = (\n _async_scatter.next_engine + 1) % len(client)\n ar_ref = dv.apply_async(remote_put, obj)\n dv.targets = client.ids\n return ar_ref"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _ars_to_proxies(ars):\n if (isinstance(ars, Remote) or\n isinstance(ars, numbers.Number) or\n ars is None):\n return ars\n elif isinstance(ars, collections.Sequence):\n res = []\n for i in range(len(ars)):\n res.append(_ars_to_proxies(ars[i]))\n return res\n elif isinstance(ars, ipyparallel.AsyncResult):\n ref = ars.r\n ObClass = ref.type\n if ObClass in distob.engine.proxy_types:\n RemoteClass = distob.engine.proxy_types[ObClass]\n else:\n RemoteClass = type(\n 'Remote' + ObClass.__name__, (Remote, ObClass), dict())\n RemoteClass = proxy_methods(ObClass)(RemoteClass)\n proxy_obj = RemoteClass(ref)\n return proxy_obj\n else:\n raise DistobTypeError('Unpacking ars: unexpected type %s' % type(ars))", "response": "wait for AsyncResults and return Remote objects"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _scatter_ndarray(ar, axis=-1, destination=None, blocksize=None):\n from 
.arrays import DistArray, RemoteArray\n shape = ar.shape\n ndim = len(shape)\n if axis is None:\n return _directed_scatter([ar], destination=[destination],\n blocksize=blocksize)[0]\n if axis < -ndim or axis > ndim - 1:\n raise DistobValueError('axis out of range')\n if axis < 0:\n axis = ndim + axis\n n = shape[axis]\n if n == 1:\n return _directed_scatter([ar], destination=[destination])[0]\n if isinstance(destination, collections.Sequence):\n ne = len(destination) # number of engines to scatter array to\n else:\n if distob.engine is None:\n setup_engines()\n ne = distob.engine.nengines # by default scatter across all engines\n if blocksize is None:\n blocksize = ((n - 1) // ne) + 1\n if blocksize > n:\n blocksize = n\n if isinstance(ar, DistArray):\n if axis == ar._distaxis:\n return ar\n else:\n raise DistobError('Currently can only scatter one axis of array')\n # Currently, if requested to scatter an array that is already Remote and\n # large, first get whole array locally, then scatter. 
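`_scatter_ndarray` sizes its blocks with `((n - 1) // ne) + 1`, i.e. the ceiling of the axis length divided by the number of engines, then slices off consecutive blocks. The same arithmetic on a plain list:

```python
def split_blocks(seq, nengines, blocksize=None):
    """Split seq into consecutive blocks of ceil(len(seq)/nengines) items
    (or an explicit blocksize), mirroring the default scatter layout."""
    n = len(seq)
    if n == 0:
        return []
    if blocksize is None:
        blocksize = ((n - 1) // nengines) + 1  # ceiling division
    blocksize = min(blocksize, n)
    return [seq[low:low + blocksize] for low in range(0, n, blocksize)]

blocks = split_blocks(list(range(10)), nengines=4)  # ceil(10 / 4) = 3 per block
```

The final block is shorter whenever the axis length is not a multiple of the block size, matching the `n % blocksize != 0` branch in the original.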
Not really optimal.\n if isinstance(ar, RemoteArray) and n > blocksize:\n ar = ar._ob\n s = slice(None)\n subarrays = []\n low = 0\n for i in range(0, n // blocksize):\n high = low + blocksize\n index = (s,)*axis + (slice(low, high),) + (s,)*(ndim - axis - 1)\n subarrays.append(ar[index])\n low += blocksize\n if n % blocksize != 0:\n high = low + (n % blocksize)\n index = (s,)*axis + (slice(low, high),) + (s,)*(ndim - axis - 1)\n subarrays.append(ar[index])\n subarrays = _directed_scatter(subarrays, destination=destination)\n return DistArray(subarrays, axis)", "response": "Turn a numpy array into a DistArray or RemoteArray with optional destination and blocksize."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef scatter(obj, axis=-1, blocksize=None):\n if hasattr(obj, '__distob_scatter__'):\n return obj.__distob_scatter__(axis, None, blocksize)\n if distob._have_numpy and (isinstance(obj, np.ndarray) or\n hasattr(type(obj), '__array_interface__')):\n return _scatter_ndarray(obj, axis, blocksize)\n elif isinstance(obj, Remote):\n return obj\n ars = _async_scatter(obj)\n proxy_obj = _ars_to_proxies(ars)\n return proxy_obj", "response": "Distribute a python object or list of objects along a given axis."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef gather(obj):\n if hasattr(obj, '__distob_gather__'):\n return obj.__distob_gather__()\n elif (isinstance(obj, collections.Sequence) and \n not isinstance(obj, string_types)):\n return [gather(subobj) for subobj in obj]\n else:\n return obj", "response": "Retrieve objects that have been distributed making them local again"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef vectorize(f):\n def vf(obj, *args, **kwargs):\n # user classes can customize how to vectorize a function:\n if hasattr(obj, '__distob_vectorize__'):\n 
return obj.__distob_vectorize__(f)(obj, *args, **kwargs)\n if isinstance(obj, Remote):\n return call(f, obj, *args, **kwargs)\n elif distob._have_numpy and (isinstance(obj, np.ndarray) or\n hasattr(type(obj), '__array_interface__')):\n distarray = scatter(obj, axis=-1)\n return vf(distarray, *args, **kwargs)\n elif isinstance(obj, collections.Sequence):\n inputs = scatter(obj)\n dv = distob.engine._client[:]\n kwargs = kwargs.copy()\n kwargs['block'] = False\n results = []\n for obj in inputs:\n results.append(call(f, obj, *args, **kwargs))\n for i in range(len(results)):\n results[i] = convert_result(results[i])\n return results\n if hasattr(f, '__name__'):\n vf.__name__ = 'v' + f.__name__\n f_str = f.__name__ + '()'\n else:\n f_str = 'callable'\n doc = u\"\"\"Apply %s in parallel to a list or array\\n\n Args:\n obj (Sequence of objects or an array)\n other args are the same as for %s\n \"\"\" % (f_str, f_str)\n if hasattr(f, '__doc__') and f.__doc__ is not None:\n doc = doc.rstrip() + (' detailed below:\\n----------\\n' + f.__doc__)\n vf.__doc__ = doc\n return vf", "response": "Upgrade normal function f to act in parallel on distibuted lists or arrays of N + 1 dimensions."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef apply(f, obj, *args, **kwargs):\n return vectorize(f)(obj, *args, **kwargs)", "response": "Apply a function in parallel to each element of the input"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncalls a method on each element of a sequence in parallel.", "response": "def call_all(sequence, method_name, *args, **kwargs):\n \"\"\"Call a method on each element of a sequence, in parallel.\n Returns:\n list of results\n \"\"\"\n kwargs = kwargs.copy()\n kwargs['block'] = False\n results = []\n for obj in sequence:\n results.append(methodcall(obj, method_name, *args, **kwargs))\n for i in range(len(results)):\n results[i] = 
convert_result(results[i])\n return results"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef register_proxy_type(cls, real_type, proxy_type):\n if distob.engine is None:\n cls._initial_proxy_types[real_type] = proxy_type\n elif isinstance(distob.engine, ObjectHub):\n distob.engine._runtime_reg_proxy_type(real_type, proxy_type)\n else:\n # TODO: remove next line after issue #58 in dill is fixed.\n distob.engine._singleeng_reg_proxy_type(real_type, proxy_type)\n pass", "response": "Configure engines so that remote methods returning values of type real_type will instead return by proxy as type proxy_type"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _fetch(self):\n if not self.is_local and not self._obcache_current:\n #print('fetching data from %s' % self._ref.id)\n def _remote_fetch(id):\n return distob.engine[id]\n self._obcache = self._dv.apply_sync(_remote_fetch, self._id)\n self._obcache_current = True\n self.__engine_affinity__ = (distob.engine.eid, \n self.__engine_affinity__[1])", "response": "Forces an update of the local cached copy of the real object."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef extract_json(fileobj, keywords, comment_tags, options):\n data=fileobj.read()\n json_extractor=JsonExtractor(data)\n strings_data=json_extractor.get_lines_data()\n\n for item in strings_data:\n messages = [item['content']]\n if item.get('funcname') == 'ngettext':\n messages.append(item['alt_content'])\n yield item['line_number'],item.get('funcname','gettext'),tuple(messages),[]", "response": "Extract a list of translatable strings from a JSON file."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_lines_data(self):\n\n encoding = 'utf-8'\n\n for token in tokenize(self.data.decode(encoding)):\n if token.type == 'operator':\n if 
token.value == '{':\n self.start_object()\n elif token.value ==':':\n self.with_separator(token)\n elif token.value == '}':\n self.end_object()\n elif token.value == ',':\n self.end_pair()\n\n\n elif token.type=='string':\n if self.state=='key':\n self.current_key=unquote_string(token.value)\n if self.current_key==JSON_GETTEXT_KEYWORD:\n self.gettext_mode=True\n\n #==value not actually used, but if only key was met (like in list) it still will be used. The important part, that key wont be parsed as value, not reversal\n if self.gettext_mode:\n if self.current_key==JSON_GETTEXT_KEY_CONTENT:\n self.token_to_add=token\n elif self.current_key==JSON_GETTEXT_KEY_ALT_CONTENT:\n self.token_params['alt_token']=token\n elif self.current_key==JSON_GETTEXT_KEY_FUNCNAME:\n self.token_params['funcname']=token.value\n else:\n self.token_to_add=token\n\n return self.results", "response": "Tokenizes the JSON data and returns data about the translatable strings found, including their line numbers."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef is_git_repo(path):\n '''Returns True if path is a git repository.'''\n\n if path.startswith('git@') or path.startswith('https://'):\n return True\n\n if os.path.exists(unipath(path, '.git')):\n return True\n\n return False", "response": "Returns True if path is a git repository."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn True if path is in the home directory of the environment.", "response": "def is_home_environment(path):\n '''Returns True if path is in CPENV_HOME'''\n\n home = unipath(os.environ.get('CPENV_HOME', '~/.cpenv'))\n path = unipath(path)\n\n return path.startswith(home)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef is_redirecting(path):\n '''Returns True if path contains a .cpenv file'''\n\n candidate = unipath(path, '.cpenv')\n return os.path.exists(candidate) and os.path.isfile(candidate)", "response": "Returns True if 
path contains a .cpenv file"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef redirect_to_env_paths(path):\n '''Get environment path from redirect file'''\n\n with open(path, 'r') as f:\n redirected = f.read()\n\n return shlex.split(redirected)", "response": "Get environment paths from redirect file"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef expandpath(path):\n '''Returns an absolute expanded path'''\n\n return os.path.abspath(os.path.expandvars(os.path.expanduser(path)))", "response": "Returns an absolute expanded path"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that\nworks like os.path.join but also expands and normalizes path parts.", "response": "def unipath(*paths):\n '''Like os.path.join but also expands and normalizes path parts.'''\n\n return os.path.normpath(expandpath(os.path.join(*paths)))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef binpath(*paths):\n '''Like os.path.join but acts relative to this packages bin path.'''\n\n package_root = os.path.dirname(__file__)\n return os.path.normpath(os.path.join(package_root, 'bin', *paths))", "response": "Like os.path.join but acts relative to this package's bin path."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef ensure_path_exists(path, *args):\n '''Like os.makedirs but keeps quiet if path already exists'''\n if os.path.exists(path):\n return\n\n os.makedirs(path, *args)", "response": "Like os.makedirs but keeps quiet if path already exists"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nwalks down a directory tree. Same as os.walk but allows for a depth limit.", "response": "def walk_dn(start_dir, depth=10):\n '''\n Walk down a directory tree. 
Same as os.walk but allows for a depth limit\n via depth argument\n '''\n\n start_depth = len(os.path.split(start_dir))\n end_depth = start_depth + depth\n\n for root, subdirs, files in os.walk(start_dir):\n yield root, subdirs, files\n\n if len(os.path.split(root)) >= end_depth:\n break"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nwalking up a directory tree and yield a list of all the files and subdirs.", "response": "def walk_up(start_dir, depth=20):\n '''\n Walk up a directory tree\n '''\n root = start_dir\n\n for i in xrange(depth):\n contents = os.listdir(root)\n subdirs, files = [], []\n for f in contents:\n if os.path.isdir(os.path.join(root, f)):\n subdirs.append(f)\n else:\n files.append(f)\n\n yield root, subdirs, files\n\n parent = os.path.dirname(root)\n if parent and not parent == root:\n root = parent\n else:\n break"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nadd a sequence value to env dict d", "response": "def _join_seq(d, k, v):\n '''Add a sequence value to env dict'''\n\n if k not in d:\n d[k] = list(v)\n\n elif isinstance(d[k], list):\n for item in v:\n if item not in d[k]:\n d[k].insert(0, item)\n\n elif isinstance(d[k], string_types):\n v.append(d[k])\n d[k] = v"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\njoin a bunch of dicts", "response": "def join_dicts(*dicts):\n '''Join a bunch of dicts'''\n\n out_dict = {}\n\n for d in dicts:\n for k, v in d.iteritems():\n\n if not type(v) in JOINERS:\n raise KeyError('Invalid type in dict: {}'.format(type(v)))\n\n JOINERS[type(v)](out_dict, k, v)\n\n return out_dict"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef env_to_dict(env, pathsep=os.pathsep):\n '''\n Convert a dict containing environment variables into a standard dict.\n Variables containing multiple values will be split into a list based on\n the argument passed to 
pathsep.\n\n :param env: Environment dict like os.environ.data\n :param pathsep: Path separator used to split variables\n '''\n\n out_dict = {}\n\n for k, v in env.iteritems():\n if pathsep in v:\n out_dict[k] = v.split(pathsep)\n else:\n out_dict[k] = v\n\n return out_dict", "response": "Convert a dict containing environment variables into a standard dict."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nconvert a python dict to an environment dict containing valid environment variable values.", "response": "def dict_to_env(d, pathsep=os.pathsep):\n '''\n Convert a python dict to a dict containing valid environment variable\n values.\n\n :param d: Dict to convert to an env dict\n :param pathsep: Path separator used to join lists(default os.pathsep)\n '''\n\n out_env = {}\n\n for k, v in d.iteritems():\n if isinstance(v, list):\n out_env[k] = pathsep.join(v)\n elif isinstance(v, string_types):\n out_env[k] = v\n else:\n raise TypeError('{} not a valid env var type'.format(type(v)))\n\n return out_env"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef expand_envvars(env):\n '''\n Expand all environment variables in an environment dict\n\n :param env: Environment dict\n '''\n\n out_env = {}\n\n for k, v in env.iteritems():\n out_env[k] = Template(v).safe_substitute(env)\n\n # Expand twice to make sure we expand everything we possibly can\n for k, v in out_env.items():\n out_env[k] = Template(v).safe_substitute(out_env)\n\n return out_env", "response": "Expand all environment variables in an environment dict"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns an unused random filepath.", "response": "def get_store_env_tmp():\n '''Returns an unused random filepath.'''\n\n tempdir = tempfile.gettempdir()\n temp_name = 'envstore{0:0>3d}'\n temp_path = unipath(tempdir, temp_name.format(random.getrandbits(9)))\n if not 
os.path.exists(temp_path):\n return temp_path\n else:\n return get_store_env_tmp()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef store_env(path=None):\n '''Encode current environment as yaml and store in path or a temporary\n file. Return the path to the stored environment.\n '''\n\n path = path or get_store_env_tmp()\n\n env_dict = yaml.safe_dump(os.environ.data, default_flow_style=False)\n\n with open(path, 'w') as f:\n f.write(env_dict)\n\n return path", "response": "Encode current environment as yaml and store in path or a temporary\n file. Return the path to the stored environment."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef restore_env(env_dict):\n '''Set environment variables in the current python process from a dict\n containing envvars and values.'''\n\n if hasattr(sys, 'real_prefix'):\n sys.prefix = sys.real_prefix\n del(sys.real_prefix)\n\n replace_osenviron(expand_envvars(dict_to_env(env_dict)))", "response": "Set environment variables in the current python process from a dict\n containing envvars and values."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nrestore the current environment from an environment stored in a yaml yaml file.", "response": "def restore_env_from_file(env_file):\n '''Restore the current environment from an environment stored in a yaml\n yaml file.\n\n :param env_file: Path to environment yaml file.\n '''\n\n with open(env_file, 'r') as f:\n env_dict = yaml.load(f.read())\n\n restore_env(env_dict)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nset environment variables in the current python process from a dict containing envvars and values.", "response": "def set_env(*env_dicts):\n '''Set environment variables in the current python process from a dict\n containing envvars and values.'''\n\n old_env_dict = env_to_dict(os.environ.data)\n 
new_env_dict = join_dicts(old_env_dict, *env_dicts)\n new_env = dict_to_env(new_env_dict)\n replace_osenviron(expand_envvars(new_env))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nrestore the current environment from an environment stored in a yaml file.", "response": "def set_env_from_file(env_file):\n '''Restore the current environment from an environment stored in a yaml\n yaml file.\n\n :param env_file: Path to environment yaml file.\n '''\n\n with open(env_file, 'r') as f:\n env_dict = yaml.load(f.read())\n\n if 'environment' in env_dict:\n env_dict = env_dict['environment']\n\n set_env(env_dict)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn the URL to the upstream data source for the given URI based on configuration", "response": "def upstream_url(self, uri):\n \"Returns the URL to the upstream data source for the given URI based on configuration\"\n return self.application.options.upstream + self.request.uri"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn ( headers body ) from the cache or raise KeyError", "response": "def cache_get(self):\n \"Returns (headers, body) from the cache or raise KeyError\"\n key = self.cache_key()\n (headers, body, expires_ts) = self.application._cache[key]\n if expires_ts < time.now():\n # asset has expired, delete it\n del self.application._cache[key]\n raise KeyError(key)\n\n return (headers, body)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn request object for calling the upstream", "response": "def make_upstream_request(self):\n \"Return request object for calling the upstream\"\n url = self.upstream_url(self.request.uri)\n return tornado.httpclient.HTTPRequest(url,\n method=self.request.method,\n headers=self.request.headers,\n body=self.request.body if self.request.body else None)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning 
the time to live in seconds. 0 means no caching.", "response": "def ttl(self, response):\n \"\"\"Returns time to live in seconds. 0 means no caching.\n\n Criteria:\n - response code 200\n - read-only method (GET, HEAD, OPTIONS)\n Plus http headers:\n - cache-control: option1, option2, ...\n where options are:\n private | public\n no-cache\n no-store\n max-age: seconds\n s-maxage: seconds\n must-revalidate\n proxy-revalidate\n - expires: Thu, 01 Dec 1983 20:00:00 GMT\n - pragma: no-cache (=cache-control: no-cache)\n\n See http://www.mobify.com/blog/beginners-guide-to-http-cache-headers/\n\n TODO: tests\n\n \"\"\"\n if response.code != 200: return 0\n if not self.request.method in ['GET', 'HEAD', 'OPTIONS']: return 0\n\n try:\n pragma = self.request.headers['pragma']\n if pragma == 'no-cache':\n return 0\n except KeyError:\n pass\n\n try:\n cache_control = self.request.headers['cache-control']\n\n # no caching options\n for option in ['private', 'no-cache', 'no-store', 'must-revalidate', 'proxy-revalidate']:\n if cache_control.find(option): return 0\n\n # further parsing to get a ttl\n options = parse_cache_control(cache_control)\n try:\n return int(options['s-maxage'])\n except KeyError:\n pass\n try:\n return int(options['max-age'])\n except KeyError:\n pass\n\n if 's-maxage' in options:\n max_age = options['s-maxage']\n if max_age < ttl: ttl = max_age\n if 'max-age' in options:\n max_age = options['max-age']\n if max_age < ttl: ttl = max_age\n return ttl\n except KeyError:\n pass\n\n try:\n expires = self.request.headers['expires']\n return time.mktime(time.strptime(expires, '%a, %d %b %Y %H:%M:%S')) - time.time()\n except KeyError:\n pass"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef hist2d(x, y, bins=10, labels=None, aspect=\"auto\", plot=True, fig=None, ax=None, interpolation='none', cbar=True, **kwargs):\n h_range = kwargs.pop('range', None)\n h_normed = kwargs.pop('normed', None)\n h_weights = 
kwargs.pop('weights', None)\n\n h, xe, ye = _np.histogram2d(x, y, bins=bins, range=h_range, normed=h_normed, weights=h_weights)\n extent = [xe[0], xe[-1], ye[0], ye[-1]]\n # fig = plt.figure()\n if plot:\n if ax is None:\n if fig is None:\n fig = _figure('hist2d')\n ax = fig.gca()\n ax.clear()\n\n img = ax.imshow(h.transpose(), extent=extent, interpolation=interpolation, aspect=aspect, **kwargs)\n if cbar:\n _colorbar(ax=ax, im=img)\n\n if labels is not None:\n _addlabel(labels[0], labels[1], labels[2])\n\n # _showfig(fig, aspect)\n return h, extent", "response": "Create a 2-D histogram of data x, y."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncalculates the plasma frequency for a given plasma density.", "response": "def w_p(self):\n \"\"\"\n Plasma frequency :math:`\\\\omega_p` for given plasma density\n \"\"\"\n return _np.sqrt(self.n_p * _np.power(_spc.e, 2) / (_spc.m_e * _spc.epsilon_0))"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef k_ion(self, E):\n return self.n_p * _np.power(_spc.e, 2) / (2*_sltr.GeV2joule(E) * _spc.epsilon_0)", "response": "Geometric focusing force due to ion column for given plasma density"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nguarantees the existence of a basic MANIFEST. 
in.", "response": "def manifest():\n \"\"\"Guarantee the existence of a basic MANIFEST.in.\n\n manifest doc: http://docs.python.org/distutils/sourcedist.html#manifest\n\n `options.paved.dist.manifest.include`: set of files (or globs) to include with the `include` directive.\n\n `options.paved.dist.manifest.recursive_include`: set of files (or globs) to include with the `recursive-include` directive.\n\n `options.paved.dist.manifest.prune`: set of files (or globs) to exclude with the `prune` directive.\n\n `options.paved.dist.manifest.include_sphinx_docroot`: True -> sphinx docroot is added as `graft`\n\n `options.paved.dist.manifest.include_sphinx_docroot`: True -> sphinx builddir is added as `prune`\n \"\"\"\n prune = options.paved.dist.manifest.prune\n graft = set()\n\n\n if options.paved.dist.manifest.include_sphinx_docroot:\n docroot = options.get('docroot', 'docs')\n graft.update([docroot])\n\n if options.paved.dist.manifest.exclude_sphinx_builddir:\n builddir = docroot + '/' + options.get(\"builddir\", \".build\")\n prune.update([builddir])\n\n with open(options.paved.cwd / 'MANIFEST.in', 'w') as fo:\n for item in graft:\n fo.write('graft %s\\n' % item)\n for item in options.paved.dist.manifest.include:\n fo.write('include %s\\n' % item)\n for item in options.paved.dist.manifest.recursive_include:\n fo.write('recursive-include %s\\n' % item)\n for item in prune:\n fo.write('prune %s\\n' % item)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef create_threadpool_executed_func(original_func):\n def wrapped_func(*args, **kwargs):\n try:\n result = original_func(*args, **kwargs)\n return True, result\n except:\n return False, sys.exc_info()\n\n def new_func(*args, **kwargs):\n status, result = gevent.get_hub().threadpool.apply(wrapped_func, args, kwargs)\n if status:\n return result\n else:\n six.reraise(*result)\n new_func.__name__ = original_func.__name__\n new_func.__doc__ = \"(gevent-friendly)\" + (\" \" + 
original_func.__doc__ if original_func.__doc__ is not None else \"\")\n return new_func", "response": "Creates a function that defers function calls to execute inside gevent's threadpool."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef format_pathname(\n pathname,\n max_length):\n \"\"\"\n Format a pathname\n\n :param str pathname: Pathname to format\n :param int max_length: Maximum length of result pathname (> 3)\n :return: Formatted pathname\n :rtype: str\n :raises ValueError: If *max_length* is not larger than 3\n\n This function formats a pathname so it is not longer than *max_length*\n characters. The resulting pathname is returned. It does so by replacing\n characters at the start of the *pathname* with three dots, if necessary.\n The idea is that the end of the *pathname* is the most important part\n to be able to identify the file.\n \"\"\"\n if max_length <= 3:\n raise ValueError(\"max length must be larger than 3\")\n\n if len(pathname) > max_length:\n pathname = \"...{}\".format(pathname[-(max_length-3):])\n\n return pathname", "response": "Formats a pathname so that it is not longer than max_length characters."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef format_time_point(\n time_point_string):\n\n \"\"\"\n :param str time_point_string: String representation of a time point\n to format\n :return: Formatted time point\n :rtype: str\n :raises ValueError: If *time_point_string* is not formatted by\n dateutil.parser.parse\n\n See :py:meth:`datetime.datetime.isoformat` function for supported formats.\n \"\"\"\n time_point = dateutil.parser.parse(time_point_string)\n\n if not is_aware(time_point):\n time_point = make_aware(time_point)\n\n time_point = local_time_point(time_point)\n\n return time_point.strftime(\"%Y-%m-%dT%H:%M:%S\")", "response": "Parses a time point string and formats it as a local ISO 8601 timestamp."} {"SOURCE": 
"codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef format_uuid(\n uuid,\n max_length=10):\n \"\"\"\n Format a UUID string\n\n :param str uuid: UUID to format\n :param int max_length: Maximum length of result string (> 3)\n :return: Formatted UUID\n :rtype: str\n :raises ValueError: If *max_length* is not larger than 3\n\n This function formats a UUID so it is not longer than *max_length*\n characters. The resulting string is returned. It does so by replacing\n characters at the end of the *uuid* with three dots, if necessary.\n The idea is that the start of the *uuid* is the most important part\n to be able to identify the related entity.\n\n The default *max_length* is 10, which will result in a string\n containing the first 7 characters of the *uuid* passed in. Most of\n the time, such a string is still unique within a collection of UUIDs.\n \"\"\"\n if max_length <= 3:\n raise ValueError(\"max length must be larger than 3\")\n\n if len(uuid) > max_length:\n uuid = \"{}...\".format(uuid[0:max_length-3])\n\n return uuid", "response": "Formats a UUID string so that it is no longer than max_length characters."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nfinding the reduced chi square difference of observe and expect with a given error and ddof degrees of freedom.", "response": "def chisquare(observe, expect, error, ddof, verbose=True):\n \"\"\"\n Finds the reduced chi square difference of *observe* and *expect* with a given *error* and *ddof* degrees of freedom.\n\n *verbose* flag determines if the reduced chi square is printed to the terminal. 
\n \"\"\"\n chisq = 0\n error = error.flatten()\n observe = observe.flatten()\n expect = expect.flatten()\n for i, el in enumerate(observe):\n chisq = chisq + _np.power((el - expect[i]) / error[i], 2)\n\n red_chisq = chisq / (len(observe) - ddof)\n if verbose:\n # print 'Chi-Squared is {}.'.format(chisq)\n print('Reduced Chi-Squared is {}.'.format(red_chisq))\n\n return red_chisq"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn the mantissa and exponent of a number x.", "response": "def frexp10(x):\n \"\"\"\n Finds the mantissa and exponent of a number :math:`x` such that :math:`x = m 10^e`.\n\n Parameters\n ----------\n\n x : float\n Number :math:`x` such that :math:`x = m 10^e`.\n\n Returns\n -------\n\n mantissa : float\n Number :math:`m` such that :math:`x = m 10^e`.\n exponent : float\n Number :math:`e` such that :math:`x = m 10^e`.\n \"\"\"\n expon = _np.int(_np.floor(_np.log10(_np.abs(x))))\n mant = x/_np.power(10, expon)\n return (mant, expon)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_upcoming_events_count(days=14, featured=False):\n from happenings.models import Event\n start_period = today - datetime.timedelta(days=2)\n end_period = today + datetime.timedelta(days=days)\n if featured:\n return Event.objects.filter(\n featured=True,\n start_date__gte=start_period,\n start_date__lte=end_period\n ).count()\n return Event.objects.filter(start_date__gte=start_period, start_date__lte=end_period).count()", "response": "Returns the number of upcoming events for a given number of days"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngets upcoming events for a given number of days.", "response": "def get_upcoming_events(num, days, featured=False):\n \"\"\"\n Get upcoming events.\n Allows slicing to a given number,\n picking the number of days to hold them after they've started\n and whether they should be 
featured or not.\n Usage:\n {% get_upcoming_events 5 14 featured as events %}\n Would return no more than 5 Featured events,\n holding them for 14 days past their start date.\n\n \"\"\"\n from happenings.models import Event\n start_date = today - datetime.timedelta(days=days)\n events = Event.objects.filter(start_date__gt=start_date).order_by('start_date')\n if featured:\n events = events.filter(featured=True)\n events = events[:num]\n return events"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_events_by_date_range(days_out, days_hold, max_num=5, featured=False):\n from happenings.models import Event\n range_start = today - datetime.timedelta(days=days_hold)\n range_end = today + datetime.timedelta(days=days_out)\n\n events = Event.objects.filter(\n start_date__gte=range_start,\n start_date__lte=range_end\n ).order_by('start_date')\n if featured:\n events = events.filter(featured=True)\n events = events[:max_num]\n return events", "response": "Get upcoming events for a given number of days out"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn a dict of next and previous on updates", "response": "def paginate_update(update):\n \"\"\"\n attempts to get next and previous on updates\n \"\"\"\n from happenings.models import Update\n time = update.pub_time\n event = update.event\n try:\n next = Update.objects.filter(\n event=event,\n pub_time__gt=time\n ).order_by('pub_time').only('title')[0]\n except:\n next = None\n try:\n previous = Update.objects.filter(\n event=event,\n pub_time__lt=time\n ).order_by('-pub_time').only('title')[0]\n except:\n previous = None\n return {'next': next, 'previous': previous, 'event': event}"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef notify_client(\n notifier_uri,\n client_id,\n status_code,\n message=None):\n \"\"\"\n Notify the client of the result of handling 
a request\n\n The payload contains two elements:\n\n - client_id\n - result\n\n The *client_id* is the id of the client to notify. It is assumed\n that the notifier service is able to identify the client by this id\n and that it can pass the *result* to it.\n\n The *result* always contains a *status_code* element. In case the\n message passed in is not None, it will also contain a *message*\n element.\n\n In case the notifier service does not exist or returns an error,\n an error message will be logged to *stderr*.\n \"\"\"\n payload = {\n \"client_id\": client_id,\n \"result\": {\n \"response\": {\n \"status_code\": status_code\n }\n }\n }\n\n if message is not None:\n payload[\"result\"][\"response\"][\"message\"] = message\n\n response = requests.post(notifier_uri, json=payload)\n\n if response.status_code != 201:\n sys.stderr.write(\"failed to notify client: {}\\n\".format(payload))\n sys.stderr.flush()", "response": "Notify the client of the result of handling a request."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef consume_message_with_notify(\n notifier_uri_getter):\n \"\"\"\n Decorator for methods handling requests from RabbitMQ\n\n This decorator builds on the :py:func:`consume_message` decorator. 
It extends\n it by logic for notifying a client of the result of handling the\n request.\n\n The *notifier_uri_getter* argument must be a callable which accepts\n *self* and returns the uri of the notifier service.\n \"\"\"\n\n def consume_message_with_notify_decorator(\n method):\n\n @consume_message\n def wrapper(\n self,\n data):\n\n notifier_uri = notifier_uri_getter(self)\n client_id = data[\"client_id\"]\n\n # Forward the call to the method and notify the client of the\n # result\n try:\n method(self, data)\n notify_client(notifier_uri, client_id, 200)\n except Exception as exception:\n notify_client(notifier_uri, client_id, 400, str(exception))\n raise\n\n return wrapper\n\n return consume_message_with_notify_decorator", "response": "Decorator for methods handling requests from RabbitMQ that also notifies a client of the result of handling the message."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get(self, key):\n try:\n value = self._cache[key]\n self._order.push(key)\n self._hits += 1\n return value\n except KeyError as e:\n self._misses += 1\n raise", "response": "Get a key from the cache."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef put(self, key, value):\n self._cache[key] = value\n self._order.push(key)\n self._size += 1", "response": "Add a new entry to the cache."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ndeletes the entry with the given key.", "response": "def delete(self, key):\n \"\"\"\n >>> c = LRUCache()\n >>> c.put(1, 'one')\n >>> c.get(1)\n 'one'\n >>> c.delete(1)\n >>> c.get(1)\n Traceback (most recent call last):\n ...\n KeyError: 1\n >>> c.delete(1)\n Traceback (most recent call last):\n ...\n KeyError: 1\n \"\"\"\n del self._cache[key]\n self._order.delete(key)\n self._size -= 1"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what 
the following Python 3 function does\ndef put(self, key, value):\n mem = sys.getsizeof(value)\n if self._mem + mem > self._maxmem:\n self.delete(self.last())\n LRUCache.put(self, key, (value, mem))\n self._mem += mem", "response": "Put a key - value pair into the cache."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ndelete an entry from the cache.", "response": "def delete(self, key):\n \"\"\"\n >>> c = MemSizeLRUCache()\n >>> c.put(1, 1)\n >>> c.mem()\n 24\n >>> c.delete(1)\n >>> c.mem()\n 0\n \"\"\"\n (_value, mem) = LRUCache.get(self, key)\n self._mem -= mem\n LRUCache.delete(self, key)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef put(self, key, value):\n # check if we're maxed out first\n if self.size() == self._maxsize:\n # need to kick something out...\n self.delete(self.last())\n LRUCache.put(self, key, value)", "response": "Put a key - value pair into the cache."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nretrieve the value of a setting whose name is indicated by name_hyphen.", "response": "def setting(self, name_hyphen):\n \"\"\"\n Retrieves the setting value whose name is indicated by name_hyphen.\n\n Values starting with $ are assumed to reference environment variables,\n and the value stored in environment variables is retrieved. 
It's an\n error if the corresponding environment variable is not set.\n \"\"\"\n if name_hyphen in self._instance_settings:\n value = self._instance_settings[name_hyphen][1]\n else:\n msg = \"No setting named '%s'\" % name_hyphen\n raise UserFeedback(msg)\n\n if hasattr(value, 'startswith') and value.startswith(\"$\"):\n env_var = value.lstrip(\"$\")\n if env_var in os.environ:\n return os.getenv(env_var)\n else:\n msg = \"'%s' is not defined in your environment\" % env_var\n raise UserFeedback(msg)\n\n elif hasattr(value, 'startswith') and value.startswith(\"\\$\"):\n return value.replace(\"\\$\", \"$\")\n\n else:\n return value"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns a dict of all setting values.", "response": "def setting_values(self, skip=None):\n \"\"\"\n Returns dict of all setting values (removes the helpstrings).\n \"\"\"\n if not skip:\n skip = []\n\n return dict(\n (k, v[1])\n for k, v in six.iteritems(self._instance_settings)\n if not k in skip)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nupdate the internal settings of the cashew instance.", "response": "def _update_settings(self, new_settings, enforce_helpstring=True):\n \"\"\"\n This method does the work of updating settings. 
Can be passed with\n enforce_helpstring = False which you may want if allowing end users to\n add arbitrary metadata via the settings system.\n\n Preferable to use update_settings (without leading _) in code to do the\n right thing and always have docstrings.\n \"\"\"\n for raw_setting_name, value in six.iteritems(new_settings):\n setting_name = raw_setting_name.replace(\"_\", \"-\")\n\n setting_already_exists = setting_name in self._instance_settings\n value_is_list_len_2 = isinstance(value, list) and len(value) == 2\n treat_as_tuple = not setting_already_exists and value_is_list_len_2\n\n if isinstance(value, tuple) or treat_as_tuple:\n self._instance_settings[setting_name] = value\n\n else:\n if setting_name not in self._instance_settings:\n if enforce_helpstring:\n msg = \"You must specify param '%s' as a tuple of (helpstring, value)\"\n raise InternalCashewException(msg % setting_name)\n\n else:\n # Create entry with blank helpstring.\n self._instance_settings[setting_name] = ('', value,)\n\n else:\n # Save inherited helpstring, replace default value.\n orig = self._instance_settings[setting_name]\n self._instance_settings[setting_name] = (orig[0], value,)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns a combined dictionary of setting values and attribute values.", "response": "def settings_and_attributes(self):\n \"\"\"Return a combined dictionary of setting values and attribute values.\"\"\"\n attrs = self.setting_values()\n attrs.update(self.__dict__)\n skip = [\"_instance_settings\", \"aliases\"]\n for a in skip:\n del attrs[a]\n return attrs"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ndetecting if we get a class or a name convert a name to a class.", "response": "def get_reference_to_class(cls, class_or_class_name):\n \"\"\"\n Detect if we get a class or a name, convert a name to a class.\n \"\"\"\n if isinstance(class_or_class_name, type):\n return class_or_class_name\n\n 
elif isinstance(class_or_class_name, string_types):\n if \":\" in class_or_class_name:\n mod_name, class_name = class_or_class_name.split(\":\")\n\n if not mod_name in sys.modules:\n __import__(mod_name)\n\n mod = sys.modules[mod_name]\n return mod.__dict__[class_name]\n\n else:\n return cls.load_class_from_locals(class_or_class_name)\n\n else:\n msg = \"Unexpected Type '%s'\" % type(class_or_class_name)\n raise InternalCashewException(msg)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nchecks that the class has a docstring and returns it if successful.", "response": "def check_docstring(cls):\n \"\"\"\n Asserts that the class has a docstring, returning it if successful.\n \"\"\"\n docstring = inspect.getdoc(cls)\n if not docstring:\n breadcrumbs = \" -> \".join(t.__name__ for t in inspect.getmro(cls)[:-1][::-1])\n msg = \"docstring required for plugin '%s' (%s, defined in %s)\"\n args = (cls.__name__, breadcrumbs, cls.__module__)\n raise InternalCashewException(msg % args)\n\n max_line_length = cls._class_settings.get('max-docstring-length')\n if max_line_length:\n for i, line in enumerate(docstring.splitlines()):\n if len(line) > max_line_length:\n msg = \"docstring line %s of %s is %s chars too long\" \n args = (i, cls.__name__, len(line) - max_line_length)\n raise Exception(msg % args)\n\n return docstring"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef resourcePath(self, relative_path):\n from os import path\n import sys\n \n try:\n # PyInstaller creates a temp folder and stores path in _MEIPASS\n base_path = sys._MEIPASS\n except Exception:\n base_path = path.dirname(path.abspath(__file__))\n return path.join(base_path, relative_path)", "response": "Get absolute path to resource"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef addLogbook(self, physDef= \"LCLS\", mccDef=\"MCC\", 
initialInstance=False):\n '''Add new block of logbook selection windows. Only 5 allowed.'''\n if self.logMenuCount < 5:\n self.logMenus.append(LogSelectMenu(self.logui.multiLogLayout, initialInstance))\n self.logMenus[-1].addLogbooks(self.logTypeList[1], self.physics_programs, physDef)\n self.logMenus[-1].addLogbooks(self.logTypeList[0], self.mcc_programs, mccDef)\n self.logMenus[-1].show()\n self.logMenuCount += 1\n if initialInstance:\n # Initial logbook menu can add additional menus, all others can only remove themselves.\n QObject.connect(self.logMenus[-1].logButton, SIGNAL(\"clicked()\"), self.addLogbook)\n else:\n from functools import partial\n QObject.connect(self.logMenus[-1].logButton, SIGNAL(\"clicked()\"), partial(self.removeLogbook, self.logMenus[-1]))", "response": "Add new block of logbook selection windows. Only 5 allowed."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nremove logbook menu set.", "response": "def removeLogbook(self, menu=None):\n '''Remove logbook menu set.'''\n if self.logMenuCount > 1 and menu is not None:\n menu.removeMenu()\n self.logMenus.remove(menu)\n self.logMenuCount -= 1"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning selected log books by type.", "response": "def selectedLogs(self):\n '''Return selected log books by type.'''\n mcclogs = []\n physlogs = []\n for i in range(len(self.logMenus)):\n logType = self.logMenus[i].selectedType()\n log = self.logMenus[i].selectedProgram()\n if logType == \"MCC\":\n if log not in mcclogs:\n mcclogs.append(log)\n elif logType == \"Physics\":\n if log not in physlogs:\n physlogs.append(log)\n return mcclogs, physlogs"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef acceptedUser(self, logType):\n '''Verify entered user name is on accepted MCC logbook list.'''\n from urllib2 import urlopen, URLError, HTTPError\n import json\n \n isApproved = False\n 
\n userName = str(self.logui.userName.text())\n if userName == \"\":\n return False # Must have a user name to submit entry\n \n if logType == \"MCC\":\n networkFault = False\n data = []\n log_url = \"https://mccelog.slac.stanford.edu/elog/dev/mgibbs/dev_json_user_list.php/?username=\" + userName\n try:\n data = urlopen(log_url, None, 5).read()\n data = json.loads(data)\n except URLError as error:\n print(\"URLError: \" + str(error.reason))\n networkFault = True\n except HTTPError as error:\n print(\"HTTPError: \" + str(error.reason))\n networkFault = True\n \n # If network fails, ask user to verify\n if networkFault:\n msgBox = QMessageBox()\n msgBox.setText(\"Cannot connect to MCC Log Server!\")\n msgBox.setInformativeText(\"Use entered User name anyway?\")\n msgBox.setStandardButtons(QMessageBox.Ok | QMessageBox.Cancel)\n msgBox.setDefaultButton(QMessageBox.Ok)\n if msgBox.exec_() == QMessageBox.Ok:\n isApproved = True\n \n if data != [] and (data is not None):\n isApproved = True\n else:\n isApproved = True\n return isApproved", "response": "Verify entered user name is on accepted MCC logbook list."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef xmlSetup(self, logType, logList):\n \n from xml.etree.ElementTree import Element, SubElement, ElementTree\n from datetime import datetime\n \n curr_time = datetime.now()\n if logType == \"MCC\":\n # Set up xml tags\n log_entry = Element('log_entry')\n title = SubElement(log_entry, 'title')\n program = SubElement(log_entry, 'program')\n timestamp = SubElement(log_entry, 'timestamp')\n priority = SubElement(log_entry, 'priority')\n os_user = SubElement(log_entry, 'os_user')\n hostname = SubElement(log_entry, 'hostname')\n text = SubElement(log_entry, 'text')\n log_user = SubElement(log_entry, 'log_user')\n\n # Check for multiple logbooks and parse into separate tags\n logbook = []\n for i in range(len(logList)):\n logbook.append(SubElement(log_entry, 
'logbook'))\n logbook[i].text = logList[i].lower()\n \n # Take care of dummy, unchanging tags first\n log_entry.attrib['type'] = \"LOGENTRY\"\n program.text = \"152\"\n priority.text = \"NORMAL\"\n os_user.text = \"nobody\"\n hostname.text = \"mccelog\"\n text.attrib['type'] = \"text/plain\"\n \n # Handle attachment if image exists\n if not self.imagePixmap.isNull():\n attachment = SubElement(log_entry, 'attachment')\n attachment.attrib['name'] = \"Figure 1\"\n attachment.attrib['type'] = \"image/\" + self.imageType\n attachment.text = curr_time.strftime(\"%Y%m%d_%H%M%S_\") + str(curr_time.microsecond) + \".\" + self.imageType\n \n # Set timestamp format\n timestamp.text = curr_time.strftime(\"%Y/%m/%d %H:%M:%S\")\n \n fileName = \"/tmp/\" + curr_time.strftime(\"%Y%m%d_%H%M%S_\") + str(curr_time.microsecond) + \".xml\"\n \n else: # If using Physics logbook\n timeString = curr_time.strftime(\"%Y-%m-%dT%H:%M:%S\")\n \n # Set up xml tags\n log_entry = Element(None)\n severity = SubElement(log_entry, 'severity')\n location = SubElement(log_entry, 'location')\n keywords = SubElement(log_entry, 'keywords')\n time = SubElement(log_entry, 'time')\n isodate = SubElement(log_entry, 'isodate')\n log_user = SubElement(log_entry, 'author')\n category = SubElement(log_entry, 'category')\n title = SubElement(log_entry, 'title')\n metainfo = SubElement(log_entry, 'metainfo')\n \n # Handle attachment if image exists\n if not self.imagePixmap.isNull():\n imageFile = SubElement(log_entry, 'link')\n imageFile.text = timeString + \"-00.\" + self.imageType\n thumbnail = SubElement(log_entry, 'file')\n thumbnail.text = timeString + \"-00.png\"\n \n text = SubElement(log_entry, 'text') # Logbook expects Text tag to come last (for some strange reason)\n \n # Take care of dummy, unchanging tags first\n log_entry.attrib['type'] = \"LOGENTRY\"\n category.text = \"USERLOG\"\n location.text = \"not set\"\n severity.text = \"NONE\"\n keywords.text = \"none\"\n \n time.text = 
curr_time.strftime(\"%H:%M:%S\")\n isodate.text = curr_time.strftime(\"%Y-%m-%d\")\n \n metainfo.text = timeString + \"-00.xml\"\n fileName = \"/tmp/\" + metainfo.text\n \n # Fill in user inputs\n log_user.text = str(self.logui.userName.text())\n\n title.text = str(self.logui.titleEntry.text())\n if title.text == \"\":\n QMessageBox().warning(self, \"No Title entered\", \"Please enter a title for the entry...\")\n return None\n \n text.text = str(self.logui.textEntry.toPlainText())\n # If text field is truly empty, ElementTree leaves off tag entirely which causes logbook parser to fail\n if text.text == \"\":\n text.text = \" \"\n \n # Create xml file\n xmlFile = open(fileName, \"w\")\n if logType == \"MCC\":\n ElementTree(log_entry).write(xmlFile)\n else:\n xmlString = self.prettify(log_entry)\n xmlFile.write(xmlString)\n xmlFile.write(\"\\n\") # Close with newline so cron job parses correctly\n xmlFile.close()\n \n return fileName.rstrip(\".xml\")", "response": "Create xml file with fields from logbook form."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nparsing xml elements for pretty printing", "response": "def prettify(self, elem):\n \"\"\"Parse xml elements for pretty printing\"\"\"\n \n from xml.etree import ElementTree\n from re import sub\n \n rawString = ElementTree.tostring(elem, 'utf-8')\n parsedString = sub(r'(?=<[^/].*>)', '\\n', rawString) # Adds newline after each closing tag\n \n return parsedString[1:]"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nconvert supplied QPixmap object to image file.", "response": "def prepareImages(self, fileName, logType):\n \"\"\"Convert supplied QPixmap object to image file.\"\"\"\n import subprocess\n \n if self.imageType == \"png\":\n self.imagePixmap.save(fileName + \".png\", \"PNG\", -1)\n if logType == \"Physics\":\n makePostScript = \"convert \" + fileName + \".png \" + fileName + \".ps\"\n process = subprocess.Popen(makePostScript, 
shell=True)\n process.wait()\n thumbnailPixmap = self.imagePixmap.scaled(500, 450, Qt.KeepAspectRatio)\n thumbnailPixmap.save(fileName + \".png\", \"PNG\", -1)\n else:\n renameImage = \"cp \" + self.image + \" \" + fileName + \".gif\"\n process = subprocess.Popen(renameImage, shell=True)\n process.wait()\n if logType == \"Physics\":\n thumbnailPixmap = self.imagePixmap.scaled(500, 450, Qt.KeepAspectRatio)\n thumbnailPixmap.save(fileName + \".png\", \"PNG\", -1)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef submitEntry(self):\n \n # logType = self.logui.logType.currentText()\n mcclogs, physlogs = self.selectedLogs()\n success = True\n \n if mcclogs != []:\n if not self.acceptedUser(\"MCC\"):\n QMessageBox().warning(self, \"Invalid User\", \"Please enter a valid user name!\")\n return\n \n fileName = self.xmlSetup(\"MCC\", mcclogs)\n if fileName is None:\n return\n \n if not self.imagePixmap.isNull():\n self.prepareImages(fileName, \"MCC\")\n success = self.sendToLogbook(fileName, \"MCC\")\n \n if physlogs != []:\n for i in range(len(physlogs)):\n fileName = self.xmlSetup(\"Physics\", physlogs[i])\n if fileName is None:\n return\n \n if not self.imagePixmap.isNull():\n self.prepareImages(fileName, \"Physics\")\n success_phys = self.sendToLogbook(fileName, \"Physics\", physlogs[i])\n success = success and success_phys\n \n self.done(success)", "response": "Process user inputs and submit logbook entry when user clicks Submit button"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nprocessing log information and push to selected logbooks.", "response": "def sendToLogbook(self, fileName, logType, location=None):\n '''Process log information and push to selected logbooks.'''\n import subprocess\n \n success = True\n if logType == \"MCC\":\n fileString = \"\"\n if not self.imagePixmap.isNull():\n fileString = fileName + \".\" + self.imageType\n \n logcmd = \"xml2elog \" + 
fileName + \".xml \" + fileString\n process = subprocess.Popen(logcmd, shell=True)\n process.wait()\n if process.returncode != 0:\n success = False\n else:\n from shutil import copy\n\n path = \"/u1/\" + location.lower() + \"/physics/logbook/data/\" # Prod path\n # path = \"/home/softegr/alverson/log_test/\" # Dev path\n try:\n if not self.imagePixmap.isNull():\n copy(fileName + \".png\", path)\n if self.imageType == \"png\":\n copy(fileName + \".ps\", path)\n else:\n copy(fileName + \".\" + self.imageType, path)\n \n # Copy .xml file last to ensure images will be picked up by cron job\n # print(\"Copying file \" + fileName + \" to path \" + path)\n copy(fileName + \".xml\", path)\n except IOError as error:\n print(error)\n success = False\n \n return success"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nclears all form fields except author.", "response": "def clearForm(self):\n \"\"\"Clear all form fields (except author).\"\"\"\n \n self.logui.titleEntry.clear()\n self.logui.textEntry.clear()\n\n # Remove all log selection menus except the first\n while self.logMenuCount > 1:\n self.removeLogbook(self.logMenus[-1])"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef setupUI(self):\n '''Create graphical objects for menus.'''\n \n labelSizePolicy = QSizePolicy(QSizePolicy.Fixed, QSizePolicy.Fixed)\n labelSizePolicy.setHorizontalStretch(0)\n labelSizePolicy.setVerticalStretch(0)\n menuSizePolicy = QSizePolicy(QSizePolicy.Expanding, QSizePolicy.Fixed)\n menuSizePolicy.setHorizontalStretch(0)\n menuSizePolicy.setVerticalStretch(0)\n \n logTypeLayout = QHBoxLayout()\n logTypeLayout.setSpacing(0)\n \n typeLabel = QLabel(\"Log Type:\")\n typeLabel.setMinimumSize(QSize(65, 0))\n typeLabel.setMaximumSize(QSize(65, 16777215))\n typeLabel.setSizePolicy(labelSizePolicy)\n logTypeLayout.addWidget(typeLabel)\n self.logType = QComboBox(self)\n self.logType.setMinimumSize(QSize(100, 
0))\n self.logType.setMaximumSize(QSize(150, 16777215))\n menuSizePolicy.setHeightForWidth(self.logType.sizePolicy().hasHeightForWidth())\n self.logType.setSizePolicy(menuSizePolicy)\n logTypeLayout.addWidget(self.logType)\n logTypeLayout.setStretch(1, 6)\n \n programLayout = QHBoxLayout()\n programLayout.setSpacing(0)\n \n programLabel = QLabel(\"Program:\")\n programLabel.setMinimumSize(QSize(60, 0))\n programLabel.setMaximumSize(QSize(60, 16777215))\n programLabel.setSizePolicy(labelSizePolicy)\n programLayout.addWidget(programLabel)\n self.programName = QComboBox(self)\n self.programName.setMinimumSize(QSize(100, 0))\n self.programName.setMaximumSize(QSize(150, 16777215))\n menuSizePolicy.setHeightForWidth(self.programName.sizePolicy().hasHeightForWidth())\n self.programName.setSizePolicy(menuSizePolicy)\n programLayout.addWidget(self.programName)\n programLayout.setStretch(1, 6)\n \n # Initial instance allows adding additional menus, all following menus can only remove themselves.\n if self.initialInstance:\n self.logButton = QPushButton(\"+\", self)\n self.logButton.setToolTip(\"Add logbook\")\n else:\n self.logButton = QPushButton(\"-\")\n self.logButton.setToolTip(\"Remove logbook\")\n \n self.logButton.setMinimumSize(QSize(16, 16)) # 24x24\n self.logButton.setMaximumSize(QSize(16, 16)) # 24x24\n self.logButton.setObjectName(\"roundButton\")\n # self.logButton.setAutoFillBackground(True)\n # region = QRegion(QRect(self.logButton.x()+15, self.logButton.y()+14, 20, 20), QRegion.Ellipse)\n # self.logButton.setMask(region)\n \n self.logButton.setStyleSheet(\"QPushButton {border-radius: 8px;}\")\n \n self._logSelectLayout = QHBoxLayout()\n self._logSelectLayout.setSpacing(6)\n self._logSelectLayout.addLayout(logTypeLayout)\n self._logSelectLayout.addLayout(programLayout)\n self._logSelectLayout.addWidget(self.logButton)\n self._logSelectLayout.setStretch(0, 6)\n self._logSelectLayout.setStretch(1, 6)", "response": "Create graphical objects for menus."} 
{"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ndisplays menus and connect event signals.", "response": "def show(self):\n '''Display menus and connect event signals.'''\n self.parent.addLayout(self._logSelectLayout)\n self.menuCount += 1\n self._connectSlots()"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef addLogbooks(self, type=None, logs=[], default=\"\"):\n '''Add or change list of logbooks.'''\n if type is not None and len(logs) != 0:\n if type in self.logList:\n for logbook in logs:\n if logbook not in self.logList.get(type)[0]:\n # print(\"Adding log \" + \" to \" + type + \" log type.\")\n self.logList.get(type)[0].append(logbook)\n else:\n # print(\"Adding log type: \" + type)\n self.logList[type] = []\n self.logList[type].append(logs)\n \n # If default given, auto-select upon menu creation\n if len(self.logList[type]) > 1 and default != \"\":\n self.logList.get(type)[1] = default\n else:\n self.logList.get(type).append(default)\n \n self.logType.clear()\n self.logType.addItems(list(self.logList.keys()))\n self.changeLogType()", "response": "Add or change list of logbooks."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef removeLogbooks(self, type=None, logs=[]):\n '''Remove unwanted logbooks from list.'''\n if type is not None and type in self.logList:\n if len(logs) == 0 or logs == \"All\":\n del self.logList[type]\n else:\n for logbook in logs:\n if logbook in self.logList[type]:\n self.logList[type].remove(logbook)\n \n self.changeLogType()", "response": "Remove unwanted logbooks from list."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\npopulate log program list to correspond with log type selection.", "response": "def changeLogType(self):\n '''Populate log program list to correspond with log type selection.'''\n logType = self.selectedType()\n programs = self.logList.get(logType)[0]\n 
default = self.logList.get(logType)[1]\n if logType in self.logList:\n self.programName.clear()\n self.programName.addItems(programs)\n self.programName.setCurrentIndex(programs.index(default))"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef addMenu(self):\n '''Add menus to parent gui.'''\n self.parent.multiLogLayout.addLayout(self.logSelectLayout)\n self.getPrograms(logType, programName)", "response": "Add menus to parent gui."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef removeLayout(self, layout):\n '''Iteratively remove graphical objects from layout.'''\n for cnt in reversed(range(layout.count())):\n item = layout.takeAt(cnt)\n widget = item.widget()\n if widget is not None:\n widget.deleteLater()\n else:\n '''If sublayout encountered, iterate recursively.'''\n self.removeLayout(item.layout())", "response": "Remove graphical objects from layout."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef addlabel(ax=None, toplabel=None, xlabel=None, ylabel=None, zlabel=None, clabel=None, cb=None, windowlabel=None, fig=None, axes=None):\n\n if (axes is None) and (ax is not None):\n axes = ax\n\n if (windowlabel is not None) and (fig is not None):\n fig.canvas.set_window_title(windowlabel)\n\n if fig is None:\n fig = _plt.gcf()\n\n if fig is not None and axes is None:\n axes = fig.get_axes()\n if axes == []:\n logger.error('No axes found!')\n\n if axes is not None:\n if toplabel is not None:\n axes.set_title(toplabel)\n if xlabel is not None:\n axes.set_xlabel(xlabel)\n if ylabel is not None:\n axes.set_ylabel(ylabel)\n if zlabel is not None:\n axes.set_zlabel(zlabel)\n\n if (clabel is not None) or (cb is not None):\n if (clabel is not None) and (cb is not None):\n cb.set_label(clabel)\n else:\n if clabel is None:\n logger.error('Missing colorbar label')\n else:\n logger.error('Missing 
colorbar instance')", "response": "Adds labels to a plot."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ndetermining the URL corresponding to Python object", "response": "def linkcode_resolve(domain, info):\n \"\"\"\n Determine the URL corresponding to Python object\n \"\"\"\n if domain != 'py':\n return None\n\n modname = info['module']\n fullname = info['fullname']\n\n submod = sys.modules.get(modname)\n if submod is None:\n return None\n\n obj = submod\n for part in fullname.split('.'):\n try:\n obj = getattr(obj, part)\n except:\n return None\n\n try:\n fn = inspect.getsourcefile(obj)\n except:\n fn = None\n if not fn:\n return None\n\n try:\n source, lineno = inspect.getsourcelines(obj)\n except:\n lineno = None\n\n if lineno:\n linespec = \"#L%d-L%d\" % (lineno, lineno + len(source) - 1)\n else:\n linespec = \"\"\n\n fn = relpath(fn, start=dirname(scisalt.__file__))\n\n if 'dev' in scisalt.__version__:\n return \"http://github.com/joelfrederico/SciSalt/blob/master/scisalt/%s%s\" % (\n fn, linespec)\n else:\n return \"http://github.com/joelfrederico/SciSalt/blob/v%s/scisalt/%s%s\" % (\n scisalt.__version__, fn, linespec)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef syncdb(args):\n cmd = args and 'syncdb %s' % ' '.join(options.args) or 'syncdb --noinput'\n call_manage(cmd)\n for fixture in options.paved.django.syncdb.fixtures:\n call_manage(\"loaddata %s\" % fixture)", "response": "Update the database with model schema. Shorthand for paver manage syncdb."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef start(info):\n cmd = options.paved.django.runserver\n\n if cmd == 'runserver_plus':\n try:\n import django_extensions\n except ImportError:\n info(\"Could not import django_extensions. 
Using default runserver.\")\n cmd = 'runserver'\n\n port = options.paved.django.runserver_port\n if port:\n cmd = '%s %s' % (cmd, port)\n\n call_manage(cmd)", "response": "Start the dev server."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef schema(args):\n try:\n import south\n cmd = args and 'schemamigration %s' % ' '.join(options.args) or 'schemamigration'\n call_manage(cmd)\n except ImportError:\n error('Could not import south.')", "response": "Run South's schemamigration command."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef validate(cls, definition):\n '''\n This static method validates a BioMapMapper definition.\n It returns None on success and throws an exception otherwise.\n '''\n schema_path = os.path.join(os.path.dirname(__file__),\n '../../schema/mapper_definition_schema.json')\n with open(schema_path, 'r') as jsonfp:\n schema = json.load(jsonfp)\n # Validation of JSON schema\n jsonschema.validate(definition, schema)\n # Validation of JSON properties relations\n assert definition['main_key'] in definition['supported_keys'], \\\n '\\'main_key\\' must be contained in \\'supported_keys\\''\n assert set(definition.get('list_valued_keys', [])) <= set(definition['supported_keys']), \\\n '\\'list_valued_keys\\' must be a subset of \\'supported_keys\\''\n assert set(definition.get('disjoint', [])) <= set(definition.get('list_valued_keys', [])), \\\n '\\'disjoint\\' must be a subset of \\'list_valued_keys\\''\n assert set(definition.get('key_synonyms', {}).values()) <= set(definition['supported_keys']), \\\n '\\'The values of the \\'key_synonyms\\' mapping must be in \\'supported_keys\\''", "response": "This static method validates a BioMapMapper definition. 
It returns None on success and throws an exception if the definition is invalid."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning all data entries for a particular key. Default is the main key.", "response": "def get_all(self, key=None):\n '''\n Returns all data entries for a particular key. Default is the main key.\n\n Args:\n\n key (str): key whose values to return (default: main key)\n\n Returns:\n\n List of all data entries for the key\n '''\n key = self.definition.main_key if key is None else key\n key = self.definition.key_synonyms.get(key, key)\n entries = self._get_all(key)\n if key in self.definition.scalar_nonunique_keys:\n return set(entries)\n return entries"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngets all the requests in the database.", "response": "def get_requests(self, params={}):\n \"\"\"\n List requests\n\n http://dev.wheniwork.com/#listing-requests\n \"\"\"\n if \"status\" in params:\n params['status'] = ','.join(map(str, params['status']))\n\n requests = []\n users = {}\n messages = {}\n params['page'] = 0\n while True:\n param_list = [(k, params[k]) for k in sorted(params)]\n url = \"/2/requests/?%s\" % urlencode(param_list)\n\n data = self._get_resource(url)\n for entry in data[\"users\"]:\n user = Users.user_from_json(entry)\n users[user.user_id] = user\n for entry in data[\"requests\"]:\n request = self.request_from_json(entry)\n requests.append(request)\n for entry in data[\"messages\"]:\n message = Messages.message_from_json(entry)\n if message.request_id not in messages:\n messages[message.request_id] = []\n messages[message.request_id].append(message)\n\n if not data['more']:\n break\n\n params['page'] += 1\n\n for request in requests:\n request.user = users.get(request.user_id, None)\n request.messages = messages.get(request.request_id, [])\n\n return requests"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nmaking a guess of 
the hidden object to a set of digits.", "response": "def guess(self, *args):\n \"\"\"\n Make a guess, comparing the hidden object to a set of provided digits. The digits should\n be passed as a set of arguments, e.g:\n\n * for a normal game: 0, 1, 2, 3\n * for a hex game: 0xA, 0xB, 5, 4\n * alternate for hex game: 'A', 'b', 5, 4\n\n :param args: An iterable of digits (int or str)\n :return: A dictionary object detailing the analysis and results of the guess\n \"\"\"\n\n if self.game is None:\n raise ValueError(\"The Game is unexpectedly undefined!\")\n\n response_object = {\n \"bulls\": None,\n \"cows\": None,\n \"analysis\": None,\n \"status\": None\n }\n\n if self.game.status == self.GAME_WON:\n response_object[\"status\"] = \\\n self._start_again_message(\"You already won!\")\n elif self.game.status == self.GAME_LOST:\n response_object[\"status\"] = \\\n self._start_again_message(\"You already lost!\")\n elif self.game.guesses_remaining < 1:\n response_object[\"status\"] = \\\n self._start_again_message(\"You've made too many guesses\")\n else:\n guess_made = DigitWord(*args, wordtype=self.game.mode.digit_type)\n comparison = self.game.answer.compare(guess_made)\n\n self.game.guesses_made += 1\n response_object[\"bulls\"] = 0\n response_object[\"cows\"] = 0\n response_object[\"analysis\"] = []\n\n for comparison_object in comparison:\n if comparison_object.match:\n response_object[\"bulls\"] += 1\n elif comparison_object.in_word:\n response_object[\"cows\"] += 1\n response_object[\"analysis\"].append(comparison_object.get_object())\n\n if response_object[\"bulls\"] == self.game.mode.digits:\n self.game.status = self.GAME_WON\n self.game.guesses_made = self.game.mode.guesses_allowed\n response_object[\"status\"] = self._start_again_message(\n \"Congratulations, you win!\"\n )\n elif self.game.guesses_remaining < 1:\n self.game.status = self.GAME_LOST\n response_object[\"status\"] = self._start_again_message(\n \"Sorry, you lost!\"\n )\n\n return response_object"} 
{"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nload a game from a serialized JSON representation.", "response": "def load(self, game_json=None, mode=None):\n \"\"\"\n Load a game from a serialized JSON representation. The game expects a well defined\n structure as follows (Note JSON string format):\n\n '{\n \"guesses_made\": int,\n \"key\": \"str:a 4 word\",\n \"status\": \"str: one of playing, won, lost\",\n \"mode\": {\n \"digits\": int,\n \"digit_type\": DigitWord.DIGIT | DigitWord.HEXDIGIT,\n \"mode\": GameMode(),\n \"priority\": int,\n \"help_text\": str,\n \"instruction_text\": str,\n \"guesses_allowed\": int\n },\n \"ttl\": int,\n \"answer\": [int|str0, int|str1, ..., int|strN]\n }'\n\n * \"mode\" will be cast to a GameMode object\n * \"answer\" will be cast to a DigitWord object\n\n :param game_json: The source JSON - MUST be a string\n :param mode: A mode (str or GameMode) for the game being loaded\n :return: A game object\n \"\"\"\n\n if game_json is None: # New game_json\n if mode is not None:\n if isinstance(mode, str):\n _game_object = GameObject(mode=self._match_mode(mode=mode))\n elif isinstance(mode, GameMode):\n _game_object = GameObject(mode=mode)\n else:\n raise TypeError(\"Game mode must be a GameMode or string\")\n else:\n _game_object = GameObject(mode=self._game_modes[0])\n _game_object.status = self.GAME_PLAYING\n else:\n if not isinstance(game_json, str):\n raise TypeError(\"Game must be passed as a serialized JSON string.\")\n\n game_dict = json.loads(game_json)\n\n if not 'mode' in game_dict:\n raise ValueError(\"Mode is not provided in JSON; game_json cannot be loaded!\")\n\n _mode = GameMode(**game_dict[\"mode\"])\n _game_object = GameObject(mode=_mode, source_game=game_dict)\n\n self.game = copy.deepcopy(_game_object)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nload the game modes for the current locale.", "response": "def load_modes(self, input_modes=None):\n \"\"\"\n 
Loads modes (GameMode objects) to be supported by the game object. Four default\n modes are provided (normal, easy, hard, and hex) but others could be provided\n either by calling load_modes directly or passing a list of GameMode objects to\n the instantiation call.\n\n :param input_modes: A list of GameMode objects; nb: even if only one new GameMode\n object is provided, it MUST be passed as a list - for example, passing GameMode gm1\n would require passing [gm1] NOT gm1.\n\n :return: A list of GameMode objects (both defaults and any added).\n \"\"\"\n\n # Set default game modes\n _modes = [\n GameMode(\n mode=\"normal\", priority=2, digits=4, digit_type=DigitWord.DIGIT, guesses_allowed=10\n ),\n GameMode(\n mode=\"easy\", priority=1, digits=3, digit_type=DigitWord.DIGIT, guesses_allowed=6\n ),\n GameMode(\n mode=\"hard\", priority=3, digits=6, digit_type=DigitWord.DIGIT, guesses_allowed=6\n ),\n GameMode(\n mode=\"hex\", priority=4, digits=4, digit_type=DigitWord.HEXDIGIT, guesses_allowed=10\n )\n ]\n\n if input_modes is not None:\n if not isinstance(input_modes, list):\n raise TypeError(\"Expected list of input_modes\")\n\n for mode in input_modes:\n if not isinstance(mode, GameMode):\n raise TypeError(\"Expected list to contain only GameMode objects\")\n _modes.append(mode)\n\n self._game_modes = copy.deepcopy(_modes)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns list of strings split by input delimiter", "response": "def line(self, line):\n \"\"\"Returns list of strings split by input delimiter\n\n Argument:\n line - Input line to cut\n \"\"\"\n # Remove empty strings in case of multiple instances of delimiter\n return [x for x in re.split(self.delimiter, line.rstrip()) if x != '']"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_message(self, message_id):\n url = \"/2/messages/%s\" % message_id\n\n return 
self.message_from_json(self._get_resource(url)[\"message\"])", "response": "Get a specific message by its ID"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_messages(self, params={}):\n param_list = [(k, params[k]) for k in sorted(params)]\n url = \"/2/messages/?%s\" % urlencode(param_list)\n\n data = self._get_resource(url)\n messages = []\n for entry in data[\"messages\"]:\n messages.append(self.message_from_json(entry))\n\n return messages", "response": "Get all the messages from wheniwork."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef create_message(self, params={}):\n url = \"/2/messages/\"\n body = params\n\n data = self._post_resource(url, body)\n return self.message_from_json(data[\"message\"])", "response": "Creates a new message in the given params"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef update_message(self, message):\n url = \"/2/messages/%s\" % message.message_id\n\n data = self._put_resource(url, message.json_data())\n return self.message_from_json(data)", "response": "Modify an existing message."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef delete_messages(self, messages):\n url = \"/2/messages/?%s\" % urlencode([('ids', \",\".join(messages))])\n\n data = self._delete_resource(url)\n return data", "response": "Delete existing messages.\n\n http://dev.wheniwork.com/#delete-existing-message"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns site data. 
http://dev.wheniwork.com/#get-existing-site", "response": "def get_site(self, site_id):\n \"\"\"\n Returns site data.\n\n http://dev.wheniwork.com/#get-existing-site\n \"\"\"\n url = \"/2/sites/%s\" % site_id\n\n return self.site_from_json(self._get_resource(url)[\"site\"])"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_sites(self):\n url = \"/2/sites\"\n\n data = self._get_resource(url)\n sites = []\n for entry in data['sites']:\n sites.append(self.site_from_json(entry))\n\n return sites", "response": "Returns a list of sites."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncreating a site with the given parameters.", "response": "def create_site(self, params={}):\n \"\"\"\n Creates a site\n\n http://dev.wheniwork.com/#create-update-site\n \"\"\"\n url = \"/2/sites/\"\n body = params\n\n data = self._post_resource(url, body)\n return self.site_from_json(data[\"site\"])"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn a link to a view that moves the passed in object up in rank.", "response": "def admin_link_move_up(obj, link_text='up'):\n \"\"\"Returns a link to a view that moves the passed in object up in rank.\n\n :param obj:\n Object to move\n :param link_text:\n Text to display in the link. 
Defaults to \"up\"\n :returns:\n HTML link code to view for moving the object\n \"\"\"\n if obj.rank == 1:\n return ''\n\n content_type = ContentType.objects.get_for_model(obj)\n link = reverse('awl-rankedmodel-move', args=(content_type.id, obj.id, \n obj.rank - 1))\n\n return '<a href=\"%s\">%s</a>' % (link, link_text)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning a link to a view that moves the passed in object down in rank.", "response": "def admin_link_move_down(obj, link_text='down'):\n \"\"\"Returns a link to a view that moves the passed in object down in rank.\n\n :param obj:\n Object to move\n :param link_text:\n Text to display in the link. Defaults to \"down\"\n :returns:\n HTML link code to view for moving the object\n \"\"\"\n if obj.rank == obj.grouped_filter().count():\n return ''\n\n content_type = ContentType.objects.get_for_model(obj)\n link = reverse('awl-rankedmodel-move', args=(content_type.id, obj.id, \n obj.rank + 1))\n\n return '<a href=\"%s\">%s</a>' % (link, link_text)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nshow a figure with a typical orientation so that x and y axes are set up as expected.", "response": "def showfig(fig, aspect=\"auto\"):\n \"\"\"\n Shows a figure with a typical orientation so that x and y axes are set up as expected.\n \"\"\"\n\n ax = fig.gca()\n\n # Swap y axis if needed\n alim = list(ax.axis())\n if alim[3] < alim[2]:\n temp = alim[2]\n alim[2] = alim[3]\n alim[3] = temp\n ax.axis(alim)\n\n ax.set_aspect(aspect)\n fig.show()"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _setup_index(index):\n index = int(index)\n if index > 0:\n index -= 1\n elif index == 0:\n # Zero indices should not be allowed by default.\n raise ValueError\n return index", "response": "Converts a 1-based position into a 0-based index, disallowing zero."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef cut(self, line):\n
result = []\n line = self.line(line)\n\n for i, field in enumerate(self.positions):\n try:\n index = _setup_index(field)\n try:\n result += line[index]\n except IndexError:\n result.append(self.invalid_pos)\n except ValueError:\n result.append(str(field))\n except TypeError:\n result.extend(self._cut_range(line, int(field[0]), i))\n\n return ''.join(result)", "response": "Returns selected positions from cut input source in desired\n arrangement."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nprocess the list of positions and ranges to account for ranges", "response": "def _setup_positions(self, positions):\n \"\"\"Processes positions to account for ranges\n\n Arguments:\n positions - list of positions and/or ranges to process\n \"\"\"\n updated_positions = []\n\n for i, position in enumerate(positions):\n ranger = re.search(r'(?P<start>-?\\d*):(?P<end>\\d*)', position)\n\n if ranger:\n if i > 0:\n updated_positions.append(self.separator)\n start = group_val(ranger.group('start'))\n end = group_val(ranger.group('end'))\n\n if start and end:\n updated_positions.extend(self._extendrange(start, end + 1))\n # Since the number of positions on a line is unknown,\n # send input to cause exception that can be caught and call\n # _cut_range helper function\n elif ranger.group('start'):\n updated_positions.append([start])\n else:\n updated_positions.extend(self._extendrange(1, end + 1))\n else:\n updated_positions.append(positions[i])\n try:\n if int(position) and int(positions[i+1]):\n updated_positions.append(self.separator)\n except (ValueError, IndexError):\n pass\n\n return updated_positions"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nperform a cut for a range from start to end", "response": "def _cut_range(self, line, start, current_position):\n \"\"\"Performs cut for range from start position to end\n\n Arguments:\n line - input to cut\n start - start of range\n current_position - current position in main cut 
function\n \"\"\"\n result = []\n try:\n for j in range(start, len(line)):\n index = _setup_index(j)\n try:\n result.append(line[index])\n except IndexError:\n result.append(self.invalid_pos)\n finally:\n result.append(self.separator)\n result.append(line[-1])\n except IndexError:\n pass\n\n try:\n int(self.positions[current_position+1])\n result.append(self.separator)\n except (ValueError, IndexError):\n pass\n\n return result"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _extendrange(self, start, end):\n range_positions = []\n for i in range(start, end):\n if i != 0:\n range_positions.append(str(i))\n if i < end:\n range_positions.append(self.separator)\n return range_positions", "response": "Creates a list of values in a range with output delimiters."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef lock_file(filename):\n\n lockfile = \"%s.lock\"%filename\n if isfile(lockfile):\n return False\n else:\n with open(lockfile, \"w\"):\n pass\n return True", "response": "Locks the file by writing a .lock file. Returns True when the file was locked and False when it was already locked."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef unlock_file(filename):\n\n lockfile = \"%s.lock\"%filename\n if isfile(lockfile):\n os.remove(lockfile)\n return True\n else:\n return False", "response": "Unlocks the file by removing the .lock file. 
Returns True when the file was unlocked and False when it was not locked."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef copy_smart_previews(local_catalog, cloud_catalog, local2cloud=True):\n\n lcat_noext = local_catalog[0:local_catalog.rfind(\".lrcat\")]\n ccat_noext = cloud_catalog[0:cloud_catalog.rfind(\".lrcat\")]\n lsmart = join(dirname(local_catalog),\"%s Smart Previews.lrdata\"%basename(lcat_noext))\n csmart = join(dirname(cloud_catalog),\"%s Smart Previews.lrdata\"%basename(ccat_noext))\n if local2cloud and os.path.isdir(lsmart):\n logging.info(\"Copy Smart Previews - local to cloud: %s => %s\"%(lsmart, csmart))\n distutils.dir_util.copy_tree(lsmart,csmart, update=1)\n elif os.path.isdir(csmart):\n logging.info(\"Copy Smart Previews - cloud to local: %s => %s\"%(csmart, lsmart))\n distutils.dir_util.copy_tree(csmart,lsmart, update=1)", "response": "Copy Smart Previews from local to cloud or from cloud to local, depending on local2cloud."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn a hash of the file From ", "response": "def hashsum(filename):\n \"\"\"Return a hash of the file From \"\"\"\n\n with open(filename, mode='rb') as f:\n d = hashlib.sha1()\n for buf in iter(partial(f.read, 2**20), b''):\n d.update(buf)\n return d.hexdigest()"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ninitiates the local catalog and pushes it to the cloud", "response": "def cmd_init_push_to_cloud(args):\n \"\"\"Initiate the local catalog and push it to the cloud\"\"\"\n\n (lcat, ccat) = (args.local_catalog, args.cloud_catalog)\n logging.info(\"[init-push-to-cloud]: %s => %s\"%(lcat, ccat))\n\n if not isfile(lcat):\n args.error(\"[init-push-to-cloud] The local catalog does not exist: %s\"%lcat)\n if isfile(ccat):\n args.error(\"[init-push-to-cloud] The cloud catalog already exist: %s\"%ccat)\n\n (lmeta, cmeta) = 
(\"%s.lrcloud\"%lcat, \"%s.lrcloud\"%ccat)\n if isfile(lmeta):\n args.error(\"[init-push-to-cloud] The local meta-data already exist: %s\"%lmeta)\n if isfile(cmeta):\n args.error(\"[init-push-to-cloud] The cloud meta-data already exist: %s\"%cmeta)\n\n #Let's \"lock\" the local catalog\n logging.info(\"Locking local catalog: %s\"%(lcat))\n if not lock_file(lcat):\n raise RuntimeError(\"The catalog %s is locked!\"%lcat)\n\n #Copy catalog from local to cloud, which becomes the new \"base\" changeset\n util.copy(lcat, ccat)\n\n # Write meta-data both to local and cloud\n mfile = MetaFile(lmeta)\n utcnow = datetime.utcnow().strftime(DATETIME_FORMAT)[:-4]\n mfile['catalog']['hash'] = hashsum(lcat)\n mfile['catalog']['modification_utc'] = utcnow\n mfile['catalog']['filename'] = lcat\n mfile['last_push']['filename'] = ccat\n mfile['last_push']['hash'] = hashsum(lcat)\n mfile['last_push']['modification_utc'] = utcnow\n mfile.flush()\n mfile = MetaFile(cmeta)\n mfile['changeset']['is_base'] = True\n mfile['changeset']['hash'] = hashsum(lcat)\n mfile['changeset']['modification_utc'] = utcnow\n mfile['changeset']['filename'] = basename(ccat)\n mfile.flush()\n\n #Let's copy Smart Previews\n if not args.no_smart_previews:\n copy_smart_previews(lcat, ccat, local2cloud=True)\n\n #Finally,let's unlock the catalog files\n logging.info(\"Unlocking local catalog: %s\"%(lcat))\n unlock_file(lcat)\n\n logging.info(\"[init-push-to-cloud]: Success!\")"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef cmd_init_pull_from_cloud(args):\n\n (lcat, ccat) = (args.local_catalog, args.cloud_catalog)\n logging.info(\"[init-pull-from-cloud]: %s => %s\"%(ccat, lcat))\n\n if isfile(lcat):\n args.error(\"[init-pull-from-cloud] The local catalog already exist: %s\"%lcat)\n if not isfile(ccat):\n args.error(\"[init-pull-from-cloud] The cloud catalog does not exist: %s\"%ccat)\n\n (lmeta, cmeta) = (\"%s.lrcloud\"%lcat, \"%s.lrcloud\"%ccat)\n if 
isfile(lmeta):\n args.error(\"[init-pull-from-cloud] The local meta-data already exist: %s\"%lmeta)\n if not isfile(cmeta):\n args.error(\"[init-pull-from-cloud] The cloud meta-data does not exist: %s\"%cmeta)\n\n #Let's \"lock\" the local catalog\n logging.info(\"Locking local catalog: %s\"%(lcat))\n if not lock_file(lcat):\n raise RuntimeError(\"The catalog %s is locked!\"%lcat)\n\n #Copy base from cloud to local\n util.copy(ccat, lcat)\n\n #Apply changesets\n cloudDAG = ChangesetDAG(ccat)\n path = cloudDAG.path(cloudDAG.root.hash, cloudDAG.leafs[0].hash)\n util.apply_changesets(args, path, lcat)\n\n # Write meta-data both to local and cloud\n mfile = MetaFile(lmeta)\n utcnow = datetime.utcnow().strftime(DATETIME_FORMAT)[:-4]\n mfile['catalog']['hash'] = hashsum(lcat)\n mfile['catalog']['modification_utc'] = utcnow\n mfile['catalog']['filename'] = lcat\n mfile['last_push']['filename'] = cloudDAG.leafs[0].mfile['changeset']['filename']\n mfile['last_push']['hash'] = cloudDAG.leafs[0].mfile['changeset']['hash']\n mfile['last_push']['modification_utc'] = cloudDAG.leafs[0].mfile['changeset']['modification_utc']\n mfile.flush()\n\n #Let's copy Smart Previews\n if not args.no_smart_previews:\n copy_smart_previews(lcat, ccat, local2cloud=False)\n\n #Finally, let's unlock the catalog files\n logging.info(\"Unlocking local catalog: %s\"%(lcat))\n unlock_file(lcat)\n\n logging.info(\"[init-pull-from-cloud]: Success!\")", "response": "Initiate the local catalog by downloading the cloud catalog"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef parse_arguments(argv=None):\n\n def default_config_path():\n \"\"\"Returns the platform specific default location of the configure file\"\"\"\n\n if os.name == \"nt\":\n return join(os.getenv('APPDATA'), \"lrcloud.ini\")\n else:\n return join(os.path.expanduser(\"~\"), \".lrcloud.ini\")\n\n parser = argparse.ArgumentParser(\n description='Cloud extension to Lightroom',\n 
formatter_class=argparse.ArgumentDefaultsHelpFormatter)\n\n cmd_group = parser.add_mutually_exclusive_group()\n cmd_group.add_argument(\n '--init-push-to-cloud',\n help='Initiate the local catalog and push it to the cloud',\n action=\"store_true\"\n )\n cmd_group.add_argument(\n '--init-pull-from-cloud',\n help='Download the cloud catalog and initiate a corresponding local catalog',\n action=\"store_true\"\n )\n parser.add_argument(\n '--cloud-catalog',\n help='The cloud/shared catalog file e.g. located in Google Drive or Dropbox',\n type=lambda x: os.path.expanduser(x)\n )\n parser.add_argument(\n '--local-catalog',\n help='The local Lightroom catalog file',\n type=lambda x: os.path.expanduser(x)\n )\n lr_exec = parser.add_mutually_exclusive_group()\n lr_exec.add_argument(\n '--lightroom-exec',\n help='The Lightroom executable file',\n type=str\n )\n lr_exec.add_argument(\n '--lightroom-exec-debug',\n help='Instead of running Lightroom, append data to the end of the catalog file',\n type=str\n )\n parser.add_argument(\n '-v', '--verbose',\n help='Increase output verbosity',\n action=\"store_true\"\n )\n parser.add_argument(\n '--no-smart-previews',\n help=\"Don't Sync Smart Previews\",\n action=\"store_true\"\n )\n parser.add_argument(\n '--config-file',\n help=\"Path to the configure (.ini) file\",\n type=str,\n default=default_config_path()\n )\n parser.add_argument(\n '--diff-cmd',\n help=\"The command that given two files, $in1 and $in2, \"\n \"produces a diff file $out\",\n type=str,\n #default=\"./jdiff -f $in1 $in2 $out\"\n #default=\"bsdiff $in1 $in2 $out\"\n )\n parser.add_argument(\n '--patch-cmd',\n help=\"The command that given a file, $in1, and a path, \"\n \"$patch, produces a file $out\",\n type=str,\n #default=\"./jptch $in1 $patch $out\"\n #default=\"bspatch $in1 $out $patch\"\n )\n args = parser.parse_args(args=argv)\n args.error = parser.error\n\n if args.config_file in ['', 'none', 'None', \"''\", '\"\"']:\n args.config_file = None\n\n if 
args.verbose:\n logging.basicConfig(level=logging.INFO)\n\n config_parser.read(args)\n (lcat, ccat) = (args.local_catalog, args.cloud_catalog)\n\n if lcat is None:\n parser.error(\"No local catalog specified, use --local-catalog\")\n if ccat is None:\n parser.error(\"No cloud catalog specified, use --cloud-catalog\")\n\n return args", "response": "Parse command line arguments and return a list of arguments"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef path(self, a_hash, b_hash):\n\n def _path(a, b):\n if a is b:\n return [a]\n else:\n assert len(a.children) == 1\n return [a] + _path(a.children[0], b)\n\n a = self.nodes[a_hash]\n b = self.nodes[b_hash]\n return _path(a, b)[1:]", "response": "Return the path between a and b"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn the index of the last occurrence of x in the sequence.", "response": "def _rindex(mylist: Sequence[T], x: T) -> int:\n \"\"\"Index of the last occurrence of x in the sequence.\"\"\"\n return len(mylist) - mylist[::-1].index(x) - 1"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef raw_to_delimited(header: Header, raw_payload: RawPayload) -> DelimitedMsg:\n return tuple(header) + (b'',) + tuple(raw_payload)", "response": "\\ Returns a DelimitedMsg consisting of header frames delimiter frame and payload frames."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef to_delimited(header: Header, payload: Payload, side: CommSide) -> DelimitedMsg:\n return raw_to_delimited(header, [side.serialize(msg) for msg in payload])", "response": "\\ Returns a DelimitedMsg with the given header and payload."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef raw_from_delimited(msgs: DelimitedMsg) -> RawMsgs:\n delim = _rindex(msgs, b'')\n return 
tuple(msgs[:delim]), tuple(msgs[delim + 1:])", "response": "\\ Returns a tuple containing the header and payload frames of a message consisting of a delimiter frame and a message."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn a figure with the specified title.", "response": "def figure(title=None, **kwargs):\n \"\"\"\n Creates a figure with *\\*\\*kwargs* with a window title *title*.\n\n Returns class :class:`matplotlib.figure.Figure`.\n \"\"\"\n fig = _figure(**kwargs)\n if title is not None:\n fig.canvas.set_window_title(title)\n return fig"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncreate and save an admin user.", "response": "def create_admin(username='admin', email='admin@admin.com', password='admin'):\n \"\"\"Create and save an admin user.\n\n :param username:\n Admin account's username. Defaults to 'admin'\n :param email:\n Admin account's email address. Defaults to 'admin@admin.com'\n :param password:\n Admin account's password. Defaults to 'admin'\n :returns:\n Django user with staff and superuser privileges\n \"\"\"\n admin = User.objects.create_user(username, email, password)\n admin.is_staff = True\n admin.is_superuser = True\n admin.save()\n return admin"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns a list of the messages contained within the given response.", "response": "def messages_from_response(response):\n \"\"\"Returns a list of the messages from the django MessageMiddleware\n package contained within the given response. 
This is to be used during\n unit testing when trying to see if a message was set properly in a view.\n\n :param response: HttpResponse object, likely obtained through a\n test client.get() or client.post() call\n\n :returns: a list of tuples (message_string, message_level), one for each\n message in the response context\n \"\"\"\n messages = []\n if hasattr(response, 'context') and response.context and \\\n 'messages' in response.context:\n messages = response.context['messages']\n elif hasattr(response, 'cookies'):\n # no \"context\" set-up or no messages item, check for message info in\n # the cookies\n morsel = response.cookies.get('messages')\n if not morsel:\n return []\n\n # use the decoder in the CookieStore to process and get a list of\n # messages\n from django.contrib.messages.storage.cookie import CookieStorage\n store = CookieStorage(FakeRequest())\n messages = store._decode(morsel.value)\n else:\n return []\n\n return [(m.message, m.level) for m in messages]"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef initiate(self):\n self.site = admin.sites.AdminSite()\n self.admin_user = create_admin(self.USERNAME, self.EMAIL, self.PASSWORD)\n self.authed = False", "response": "Initializes the admin site and creates a user with the\n appropriate privileges."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nauthenticating the superuser account via the web login.", "response": "def authorize(self):\n \"\"\"Authenticates the superuser account via the web login.\"\"\"\n response = self.client.login(username=self.USERNAME, \n password=self.PASSWORD)\n self.assertTrue(response)\n self.authed = True"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ndo a django test client get against the given url after the admin first.", "response": "def authed_get(self, url, response_code=200, headers={}, follow=False):\n \"\"\"Does a django test client 
``get`` against the given url after\n logging in the admin first.\n\n :param url:\n URL to fetch\n :param response_code:\n Expected response code from the URL fetch. This value is\n asserted. Defaults to 200\n :param headers:\n Optional dictionary of headers to send in the request\n :param follow:\n When True, the get call will follow any redirect requests.\n Defaults to False.\n :returns:\n Django testing ``Response`` object\n \"\"\"\n if not self.authed:\n self.authorize()\n\n response = self.client.get(url, follow=follow, **headers)\n self.assertEqual(response_code, response.status_code)\n return response"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ndoing a django test client post against the given url with optional headers.", "response": "def authed_post(self, url, data, response_code=200, follow=False,\n headers={}):\n \"\"\"Does a django test client ``post`` against the given url after\n logging in the admin first.\n\n :param url:\n URL to fetch\n :param data:\n Dictionary to form contents to post\n :param response_code:\n Expected response code from the URL fetch. This value is\n asserted. 
Defaults to 200\n :param headers:\n Optional dictionary of headers to send in with the request\n :returns:\n Django testing ``Response`` object\n \"\"\"\n if not self.authed:\n self.authorize()\n\n response = self.client.post(url, data, follow=follow, **headers)\n self.assertEqual(response_code, response.status_code)\n return response"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn the value displayed in the column for a given instance.", "response": "def field_value(self, admin_model, instance, field_name):\n \"\"\"Returns the value displayed in the column on the web interface for\n a given instance.\n\n :param admin_model:\n Instance of a :class:`admin.ModelAdmin` object that is responsible\n for displaying the change list\n :param instance:\n Object instance that is the row in the admin change list\n :field_name:\n Name of the field/column to fetch\n \"\"\"\n _, _, value = lookup_field(field_name, instance, admin_model)\n return value"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef field_names(self, admin_model):\n request = FakeRequest(user=self.admin_user)\n return admin_model.get_list_display(request)", "response": "Returns the names of the fields and columns used by the given admin_model."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nopen a nested tunnel for the given user and server.", "response": "def githubtunnel(user1, server1, user2, server2, port, verbose, stanford=False):\n \"\"\"\n Opens a nested tunnel, first to *user1*@*server1*, then to *user2*@*server2*, for accessing on *port*.\n\n If *verbose* is true, prints various ssh commands.\n\n If *stanford* is true, shifts ports up by 1.\n\n Attempts to get *user1*, *user2* from environment variable ``USER_NAME`` if called from the command line.\n \"\"\"\n if stanford:\n port_shift = 1\n else:\n port_shift = 0\n\n # command1 = 'ssh -nNf -L 
{}:quickpicmac3.slac.stanford.edu:22 {}@{}'.format(port, user, server)\n command1 = 'ssh -nNf -L {}:{}:22 {}@{}'.format(port-1-port_shift, server2, user1, server1)\n command2 = 'ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -nNf -L {}:cardinal.stanford.edu:22 -p {} {}@localhost'.format(port-port_shift, port-port_shift-1, user2)\n command3 = 'ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -nNf -L {}:github.com:22 -p {} {}@localhost'.format(port, port-1, user2)\n if verbose:\n print(command1)\n if stanford:\n print(command2)\n print(command3)\n try:\n call(shlex.split(command1))\n if stanford:\n call(shlex.split(command2))\n call(shlex.split(command3))\n except:\n print('Failure!')\n pass"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef imgmax(self):\n if not hasattr(self, '_imgmax'):\n imgmax = _np.max(self.images[0])\n for img in self.images:\n imax = _np.max(img)\n if imax > imgmax:\n imgmax = imax\n\n self._imgmax = imgmax\n\n return self._imgmax", "response": "Returns the highest value of input image."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef spawn(func, *args, **kwargs):\n return gevent.spawn(wrap_uncaught_greenlet_exceptions(func), *args, **kwargs)", "response": "Spawns a greenlet that does not print exceptions to the screen."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns usage string with no trailing whitespace removed.", "response": "def _usage(prog_name=os.path.basename(sys.argv[0])):\n '''Returns usage string with no trailing whitespace.'''\n spacer = ' ' * len('usage: ')\n usage = prog_name + ' -b LIST [-S SEPARATOR] [file ...]\\n' \\\n + spacer + prog_name + ' -c LIST [-S SEPERATOR] [file ...]\\n' \\\n + spacer + prog_name \\\n + ' -f LIST [-d DELIM] [-e] [-S SEPERATOR] [-s] [file ...]'\n\n # Return usage message with trailing 
whitespace removed.\n return \"usage: \" + usage.rstrip()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _parse_args(args):\n\n # parser uses custom usage string, with 'usage: ' removed, as it is\n # added automatically via argparser.\n parser = argparse.ArgumentParser(description=\"Remove and/or rearrange \"\n + \"sections from each line of a file(s).\",\n usage=_usage()[len('usage: '):])\n parser.add_argument('-b', \"--bytes\", action='store', type=lst, default=[],\n help=\"Bytes to select\")\n parser.add_argument('-c', \"--chars\", action='store', type=lst, default=[],\n help=\"Character to select\")\n parser.add_argument('-f', \"--fields\", action='store', type=lst, default=[],\n help=\"Fields to select\")\n parser.add_argument('-d', \"--delimiter\", action='store', default=\"\\t\",\n help=\"Sets field delimiter(default is TAB)\")\n parser.add_argument('-e', \"--regex\", action='store_true',\n help='Enable regular expressions to be used as input '+\n 'delimiter')\n parser.add_argument('-s', '--skip', action='store_true',\n help=\"Skip lines that do not contain input delimiter.\")\n parser.add_argument('-S', \"--separator\", action='store', default=\"\\t\",\n help=\"Sets field separator for output.\")\n parser.add_argument('file', nargs='*', default=\"-\",\n help=\"File(s) to cut\")\n\n return parser.parse_args(args)", "response": "Parse command line arguments and generate help"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef main(args=sys.argv[1:]):\n '''Processes command line arguments and file i/o'''\n if not args:\n sys.stderr.write(_usage() + '\\n')\n sys.exit(4)\n else:\n parsed = _parse_args(args)\n\n # Set delim based on whether or not regex is desired by user\n delim = parsed.delimiter if parsed.regex else re.escape(parsed.delimiter)\n\n # Keep track of number of cutters used to allow error handling if\n # 
multiple options selected (only one at a time is accepted)\n num_cutters = 0\n\n # Read mode will be used as file read mode later. 'r' is default, changed\n # to 'rb' in the event that binary read mode is selected by user\n read_mode = 'r'\n\n if parsed.bytes:\n positions = parsed.bytes\n cutter = ByteCutter(positions)\n num_cutters += 1\n read_mode = 'rb'\n\n if parsed.chars:\n positions = parsed.chars\n cutter = CharCutter(positions)\n num_cutters += 1\n\n if parsed.fields:\n positions = parsed.fields\n cutter = FieldCutter(positions, delim, parsed.separator)\n num_cutters += 1\n\n # Make sure only one option of -b,-c, or -f is used\n if num_cutters > 1:\n sys.stderr.write('Only one option permitted of -b, -c, -f.\\n')\n sys.stderr.write(_usage() + '\\n')\n sys.exit(1)\n\n # Check for possible specification of zero index, which is not allowed.\n # Regular expression checks for zero by itself, or in range specification\n if [n for n in positions if re.search(\"0:?|0$\", n)]:\n sys.stderr.write('Zero is an invalid position.\\n')\n sys.stderr.write(_usage() + '\\n')\n sys.exit(2)\n\n\n try:\n for line in fileinput.input(parsed.file, mode=read_mode):\n if parsed.skip and not re.search(parsed.delimiter, line):\n pass\n else:\n # Using sys.stdout.write for consistency between Py 2 and 3,\n # keeping linter happy\n print(cutter.cut(line))\n except IOError:\n sys.stderr.write('File \\'' + fileinput.filename()\n + '\\' could not be found.\\n')\n sys.exit(3)\n\n fileinput.close()", "response": "Main entry point for the\nTaxonomy command line interface."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn axes x y for a given image img.", "response": "def NonUniformImage_axes(img):\n \"\"\"\n Returns axes *x, y* for a given image *img* to be used with :func:`scisalt.matplotlib.NonUniformImage`.\n\n Returns\n -------\n x, y : float, float\n \"\"\"\n xmin = 0\n xmax = img.shape[1]-1\n ymin = 0\n ymax = img.shape[0]-1\n x = 
_np.linspace(xmin, xmax, img.shape[1])\n y = _np.linspace(ymin, ymax, img.shape[0])\n return x, y"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nviews to be used in the admin for changing a ranked object.", "response": "def move(request, content_type_id, obj_id, rank):\n \"\"\"View to be used in the django admin for changing a :class:`RankedModel`\n object's rank. See :func:`admin_link_move_up` and\n :func:`admin_link_move_down` for helper functions to incoroprate in your\n admin models.\n\n Upon completion this view sends the caller back to the referring page.\n\n :param content_type_id:\n ``ContentType`` id of object being moved\n :param obj_id:\n ID of object being moved\n :param rank:\n New rank of the object\n \"\"\"\n content_type = ContentType.objects.get_for_id(content_type_id)\n obj = get_object_or_404(content_type.model_class(), id=obj_id)\n obj.rank = int(rank)\n obj.save()\n\n return HttpResponseRedirect(request.META['HTTP_REFERER'])"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef open_s3(bucket):\n conn = boto.connect_s3(options.paved.s3.access_id, options.paved.s3.secret)\n try:\n bucket = conn.get_bucket(bucket)\n except boto.exception.S3ResponseError:\n bucket = conn.create_bucket(bucket)\n return bucket", "response": "Opens connection to S3 returning bucket and key"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef upload_s3(file_path, bucket_name, file_key, force=False, acl='private'):\n file_path = path(file_path)\n bucket = open_s3(bucket_name)\n\n if file_path.isdir():\n # Upload the contents of the dir path.\n paths = file_path.listdir()\n paths_keys = list(zip(paths, ['%s/%s' % (file_key, p.name) for p in paths]))\n else:\n # Upload just the given file path.\n paths_keys = [(file_path, file_key)]\n\n for p, k in paths_keys:\n headers = {}\n s3_key = bucket.get_key(k)\n 
if not s3_key:\n from boto.s3.key import Key\n s3_key = Key(bucket, k)\n\n content_type = mimetypes.guess_type(p)[0]\n if content_type:\n headers['Content-Type'] = content_type\n file_size = p.stat().st_size\n file_data = p.bytes()\n file_md5, file_md5_64 = s3_key.get_md5_from_hexdigest(hashlib.md5(file_data).hexdigest())\n\n # Check the hash.\n if s3_key.etag:\n s3_md5 = s3_key.etag.replace('\"', '')\n if s3_md5 == file_md5:\n info('Hash is the same. Skipping %s' % file_path)\n continue\n elif not force:\n # Check if file on S3 is older than local file.\n s3_datetime = datetime.datetime(*time.strptime(\n s3_key.last_modified, '%a, %d %b %Y %H:%M:%S %Z')[0:6])\n local_datetime = datetime.datetime.utcfromtimestamp(p.stat().st_mtime)\n if local_datetime < s3_datetime:\n info(\"File %s hasn't been modified since last \" \\\n \"being uploaded\" % (file_key))\n continue\n # File is newer, let's process and upload\n info(\"Uploading %s...\" % (file_key))\n\n try:\n s3_key.set_contents_from_string(file_data, headers, policy=acl, replace=True, md5=(file_md5, file_md5_64))\n except Exception as e:\n error(\"Failed: %s\" % e)\n raise", "response": "Upload a local file to S3."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef download_s3(bucket_name, file_key, file_path, force=False):\n file_path = path(file_path)\n bucket = open_s3(bucket_name)\n\n file_dir = file_path.dirname()\n file_dir.makedirs()\n\n s3_key = bucket.get_key(file_key)\n if file_path.exists():\n file_data = file_path.bytes()\n file_md5, file_md5_64 = s3_key.get_md5_from_hexdigest(hashlib.md5(file_data).hexdigest())\n\n # Check the hash.\n try:\n s3_md5 = s3_key.etag.replace('\"', '')\n except KeyError:\n pass\n else:\n if s3_md5 == file_md5:\n info('Hash is the same. 
Skipping %s' % file_path)\n return\n\n elif not force:\n # Check if file on S3 is older than local file.\n s3_datetime = datetime.datetime(*time.strptime(\n s3_key.last_modified, '%a, %d %b %Y %H:%M:%S %Z')[0:6])\n local_datetime = datetime.datetime.utcfromtimestamp(file_path.stat().st_mtime)\n if s3_datetime < local_datetime:\n info(\"File at %s is less recent than the local version.\" % (file_key))\n return\n\n # If it is newer, let's process and upload\n info(\"Downloading %s...\" % (file_key))\n\n try:\n with open(file_path, 'w') as fo:\n s3_key.get_contents_to_file(fo)\n except Exception as e:\n error(\"Failed: %s\" % e)\n raise", "response": "Download a remote file from S3."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncreate an ical. iCalendar file for an event using python - card - me.", "response": "def create_ical(request, slug):\n \"\"\" Creates an ical .ics file for an event using python-card-me. \"\"\"\n event = get_object_or_404(Event, slug=slug)\n # convert dates to datetimes.\n # when we change code to datetimes, we won't have to do this.\n start = event.start_date\n start = datetime.datetime(start.year, start.month, start.day)\n\n if event.end_date:\n end = event.end_date\n end = datetime.datetime(end.year, end.month, end.day)\n else:\n end = start\n\n cal = card_me.iCalendar()\n cal.add('method').value = 'PUBLISH'\n vevent = cal.add('vevent')\n vevent.add('dtstart').value = start\n vevent.add('dtend').value = end\n vevent.add('dtstamp').value = datetime.datetime.now()\n vevent.add('summary').value = event.name\n response = HttpResponse(cal.serialize(), content_type='text/calendar')\n response['Filename'] = 'filename.ics'\n response['Content-Disposition'] = 'attachment; filename=filename.ics'\n return response"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning a list view of all comments for a given event.", "response": "def event_all_comments_list(request, slug):\n \"\"\"\n Returns a 
list view of all comments for a given event.\n Combines event comments and update comments in one list.\n \"\"\"\n event = get_object_or_404(Event, slug=slug)\n comments = event.all_comments\n page = int(request.GET.get('page', 99999)) # feed empty page by default to push to last page\n is_paginated = False\n if comments:\n paginator = Paginator(comments, 50) # Show 50 comments per page\n try:\n comments = paginator.page(page)\n except EmptyPage:\n # If page is out of range (e.g. 9999), deliver last page of results.\n comments = paginator.page(paginator.num_pages)\n is_paginated = comments.has_other_pages()\n\n return render(request, 'happenings/event_comments.html', {\n \"event\": event,\n \"comment_list\": comments,\n \"object_list\": comments,\n \"page_obj\": comments,\n \"page\": page,\n \"is_paginated\": is_paginated,\n \"key\": key\n })"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning a list view of updates for a given event.", "response": "def event_update_list(request, slug):\n \"\"\"\n Returns a list view of updates for a given event.\n If the event is over, it will be in chronological order.\n If the event is upcoming or still going,\n it will be in reverse chronological order.\n \"\"\"\n event = get_object_or_404(Event, slug=slug)\n updates = Update.objects.filter(event__slug=slug)\n if event.recently_ended():\n # if the event is over, use chronological order\n updates = updates.order_by('id')\n else:\n # if not, use reverse chronological\n updates = updates.order_by('-id')\n return render(request, 'happenings/updates/update_list.html', {\n 'event': event,\n 'object_list': updates,\n })"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef video_list(request, slug):\n event = get_object_or_404(Event, slug=slug)\n return render(request, 'video/video_list.html', {\n 'event': event,\n 'video_list': event.eventvideo_set.all()\n })", "response": "Displays list of 
videos for given event."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef add_event(request):\n form = AddEventForm(request.POST or None)\n if form.is_valid():\n instance = form.save(commit=False)\n instance.sites = settings.SITE_ID\n instance.submitted_by = request.user\n instance.approved = True\n instance.slug = slugify(instance.name)\n instance.save()\n messages.success(request, 'Your event has been added.')\n return HttpResponseRedirect(reverse('events_index'))\n return render(request, 'happenings/event_form.html', {\n 'form': form,\n 'form_title': 'Add an event'\n })", "response": "Public form to add an event."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nadd a memory to an event.", "response": "def add_memory(request, slug):\n \"\"\" Adds a memory to an event. \"\"\"\n event = get_object_or_404(Event, slug=slug)\n form = MemoryForm(request.POST or None, request.FILES or None)\n if form.is_valid():\n instance = form.save(commit=False)\n instance.user = request.user\n instance.event = event\n instance.save()\n msg = \"Your thoughts were added. 
\"\n\n if request.FILES:\n photo_list = request.FILES.getlist('photos')\n photo_count = len(photo_list)\n for upload_file in photo_list:\n process_upload(upload_file, instance, form, event, request)\n if photo_count > 1:\n msg += \"{} images were added and should appear soon.\".format(photo_count)\n else:\n msg += \"{} image was added and should appear soon.\".format(photo_count)\n messages.success(request, msg)\n return HttpResponseRedirect('../')\n return render(request, 'happenings/add_memories.html', {'form': form, 'event': event})"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nprocessing an image upload and save it to the database.", "response": "def process_upload(upload_file, instance, form, event, request):\n \"\"\"\n Helper function that actually processes and saves the upload(s).\n Segregated out for readability.\n \"\"\"\n caption = form.cleaned_data.get('caption')\n upload_name = upload_file.name.lower()\n if upload_name.endswith('.jpg') or upload_name.endswith('.jpeg'):\n try:\n upload = Image(\n event=event,\n image=upload_file,\n caption=caption,\n )\n upload.save()\n instance.photos.add(upload)\n except Exception as error:\n messages.error(request, 'Error saving image: {}.'.format(error))"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nfinding the parent python path that contains the __main__ s file directory and sys. 
path.", "response": "def _get_search_path(main_file_dir, sys_path):\n '''\n Find the parent python path that contains the __main__'s file directory\n\n :param main_file_dir: __main__'s file directory\n :param sys_path: paths list to match directory against (like sys.path)\n '''\n # List to gather candidate parent paths\n paths = []\n # look for paths containing the directory\n for pth in sys_path:\n # convert relative path to absolute\n pth = path.abspath(pth)\n # filter out __main__'s file directory, it will be in the sys.path\n # filter in parent paths containing the package\n if (pth != main_file_dir\n and pth == path.commonprefix((pth, main_file_dir))):\n paths.append(pth)\n # check if we have results\n if paths:\n # we found candidates, look for the largest(closest) parent search path\n paths.sort()\n return paths[-1]"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _try_search_paths(main_globals):\n '''\n Try different strategies to found the path containing the __main__'s file.\n Will try strategies, in the following order:\n 1. Building file's path with PWD env var.\n 2. Building file's path from absolute file's path.\n 3. 
Building file's path from real file's path.\n\n :param main_globals: globals dictionary in __main__\n '''\n # try with abspath\n fl = main_globals['__file__']\n search_path = None\n if not path.isabs(fl) and os.getenv('PWD'):\n # Build absolute path from PWD if possible\n cwd_fl = path.abspath(path.join(os.getenv('PWD'), fl))\n main_dir = path.dirname(cwd_fl)\n search_path = _get_search_path(main_dir, sys.path)\n\n if not search_path:\n # try absolute strategy (will fail on some symlinks configs)\n main_dir = path.dirname(path.abspath(fl))\n search_path = _get_search_path(main_dir, sys.path)\n\n if not search_path:\n # try real path strategy\n main_dir = path.dirname(path.realpath(fl))\n sys_path = [path.realpath(p) for p in sys.path]\n search_path = _get_search_path(main_dir, sys_path)\n\n return main_dir, search_path", "response": "Try different strategies to find the path containing __main__'s file."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nfind parent python path of __main__ and import it and set __package__ variable.", "response": "def _solve_pkg(main_globals):\n '''\n Find parent python path of __main__. 
From there solve the package\n containing __main__, import it and set __package__ variable.\n\n :param main_globals: globals dictionary in __main__\n '''\n # find __main__'s file directory and search path\n main_dir, search_path = _try_search_paths(main_globals)\n if not search_path:\n _log_debug('Could not solve parent python path for %r' % main_dir)\n # no candidates for search path, return\n return\n # solve package name from search path\n pkg_str = path.relpath(main_dir, search_path).replace(path.sep, '.')\n # Remove wrong starting string for site-packages\n site_pkgs = 'site-packages.'\n if pkg_str.startswith(site_pkgs):\n pkg_str = pkg_str[len(site_pkgs):]\n assert pkg_str\n _log_debug('pkg_str=%r' % pkg_str)\n # import the package in order to set __package__ value later\n try:\n if '__init__.py' in main_globals['__file__']:\n _log_debug('__init__ script. This module is its own package')\n # The __main__ is __init__.py => its own package\n # If we treat it as a normal module it would be imported twice\n # So we simply reuse it\n sys.modules[pkg_str] = sys.modules['__main__']\n # We need to set __path__ because its needed for\n # relative importing\n sys.modules[pkg_str].__path__ = [main_dir]\n # We need to import parent package, something that would\n # happen automatically in non-faked import\n parent_pkg_str = '.'.join(pkg_str.split('.')[:-1])\n if parent_pkg_str:\n importlib.import_module(parent_pkg_str)\n else:\n _log_debug('Importing package %r' % pkg_str)\n # we need to import the package to be available\n importlib.import_module(pkg_str)\n # finally enable relative import\n main_globals['__package__'] = pkg_str\n return pkg_str\n except ImportError as e:\n # In many situations we won't care if it fails, simply report error\n # main will fail anyway if finds an explicit relative import\n _print_exc(e)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nlogging a message at debug level.", "response": "def _log_debug(msg):\n 
'''\n Log at debug level\n :param msg: message to log\n '''\n if _log_level <= DEBUG:\n if _log_level == TRACE:\n traceback.print_stack()\n _log(msg)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef init(log_level=ERROR):\n '''\n Enables explicit relative import in sub-modules when ran as __main__\n :param log_level: module's inner logger level (equivalent to logging pkg)\n '''\n global _initialized\n if _initialized:\n return\n else:\n _initialized = True\n # find caller's frame\n frame = currentframe()\n # go 1 frame back to find who imported us\n frame = frame.f_back\n _init(frame, log_level)", "response": "Enables explicit relative import in sub - modules when ran as __main__\n "} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _init(frame, log_level=ERROR):\n '''\n Enables explicit relative import in sub-modules when ran as __main__\n :param log_level: module's inner logger level (equivalent to logging pkg)\n '''\n global _log_level\n _log_level = log_level\n # now we have access to the module globals\n main_globals = frame.f_globals\n\n # If __package__ set or it isn't the __main__, stop and return.\n # (in some cases relative_import could be called once from outside\n # __main__ if it was not called in __main__)\n # (also a reload of relative_import could trigger this function)\n pkg = main_globals.get('__package__')\n file_ = main_globals.get('__file__')\n if pkg or not file_:\n _log_debug('Package solved or init was called from interactive '\n 'console. 
__package__=%r, __file__=%r' % (pkg, file_))\n return\n try:\n _solve_pkg(main_globals)\n except Exception as e:\n _print_exc(e)", "response": "Enables explicit relative imports in sub-modules when run as __main__."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef curve_fit_unscaled(*args, **kwargs):\n # Extract verbosity\n verbose = kwargs.pop('verbose', False)\n\n # Do initial fit\n popt, pcov = _spopt.curve_fit(*args, **kwargs)\n\n # Expand positional arguments\n func = args[0]\n x = args[1]\n y = args[2]\n\n ddof = len(popt)\n\n # Try to use sigma to unscale pcov\n try:\n sigma = kwargs['sigma']\n if sigma is None:\n sigma = _np.ones(len(y))\n # Get reduced chi-square\n y_expect = func(x, *popt)\n chisq_red = _chisquare(y, y_expect, sigma, ddof, verbose=verbose)\n\n # Correct scaled covariance matrix\n pcov = pcov / chisq_red\n return popt, pcov, chisq_red\n except ValueError:\n print('hello')", "response": "Fit a function to data and return the optimal parameters, the unscaled covariance matrix, and the reduced chi-square."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nfit a gaussian to a slice of an image.", "response": "def fitimageslice(img, res_x, res_y, xslice, yslice, avg_e_func=None, h5file=None, plot=False):\n \"\"\"\n Fits a gaussian to a slice of an image *img* specified by *xslice* x-coordinates and *yslice* y-coordinates. *res_x* and *res_y* specify image resolution in x and y. *avg_e_func* is a function that returns the energy of the image as a function of x. 
It should have the form:\n\n *avg_e_func(x_1, x_2, h5file, res_y)*\n\n Fits a gaussian to a slice of an image.\n\n Parameters\n ----------\n\n img : array\n Image to be fitted.\n res_x : int\n Image resolution in :math:`x`.\n res_y : int\n Image resolution in :math:`y`.\n xslice : (int, int)\n Slice coordinates in :math:`x`\n yslice : (int, int)\n Slice coordinates in :math:`y`\n avg_e_func : function\n Of the form *avg_e_func(x_1, x_2, h5file, res_y)*, returns the energy of the image as a function of :math:`x`.\n h5file : h5file\n Instance from dataset.\n plot : boolean\n Whether to plot or not.\n \"\"\"\n # ======================================\n # Extract start and end values\n # (NOT necessarily indices!)\n # ======================================\n x_start = xslice[0]\n x_end = xslice[1]\n y_start = yslice[0]\n y_end = yslice[1]\n\n # ======================================\n # Round to pixel edges. Pixel edges in\n # px units are at 0.5, 1.5, 2.5, etc.\n # ======================================\n y_low = _np.round(y_start-0.5) + 0.5\n y_high = _np.round(y_end-0.5) + 0.5\n\n # ======================================\n # Take a strip between edges\n # ======================================\n y_px = linspacestep(1, img.shape[0])\n y_bool = _np.logical_and(y_low < y_px, y_px < y_high)\n strip = img[y_bool, x_start:x_end]\n\n # ======================================\n # Sum over the strip to get an average\n # ======================================\n histdata = _np.sum(strip, 0)\n xbins = len(histdata)\n x = _np.linspace(1, xbins, xbins)*res_x\n \n # ======================================\n # Fit with a Gaussian to find spot size\n # ======================================\n # plotbool=True\n # plotbool = False\n # varbool = False\n varbool = True\n gaussout = _sp.GaussResults(\n x,\n histdata,\n sigma_y = _np.ones(xbins),\n variance = varbool,\n background = True,\n p0 = [16000, 0.003, 1e-6, 0]\n )\n\n if avg_e_func is not None:\n # 
======================================\n # Get average energy of strip\n # ======================================\n # Sum to get relative counts of pixels\n relcounts = _np.sum(strip, 1) / _np.float(_np.sum(strip))\n # Integrate each pixel strip\n Eavg = 0\n for i, val in enumerate(linspacestep(y_low, y_high-1)):\n Eavg = Eavg + avg_e_func(val, val+1, h5file, res_y)*relcounts[i]\n return Eavg, gaussout\n else:\n return gaussout"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef encode(python_object, indent=None, large_object=False):\n # sorted keys is easier to read; however, Python-2.7.2 does not have this feature\n kwargs = dict(indent=indent)\n if can_dumps_sort_keys():\n kwargs.update(sort_keys=True)\n try:\n if large_object:\n out = GreenletFriendlyStringIO()\n json.dump(python_object, out, **kwargs)\n return out.getvalue()\n else:\n return json.dumps(python_object, **kwargs)\n except Exception:\n logger.exception()\n raise EncodeError('Cannot encode {!r}', python_object)", "response": "Encodes a Python object into a JSON - representation of the object."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef __register_library(self, module_name: str, attr: str, fallback: str = None):\n\n # Import the module Named in the string\n try:\n module = importlib.import_module(module_name)\n\n # If module is not found it checks if an alternative is is listed\n # If it is then it substitutes it, just so that the code can run\n except ImportError:\n if fallback is not None:\n module = importlib.import_module(fallback)\n self.__logger.warn(module_name + \" not available: Replaced with \" + fallback)\n else:\n self.__logger.warn(module_name + \" not available: No Replacement Specified\")\n\n # Cram the module into the __sketch in the form of module -> \"attr\"\n # AKA the same as `import module as attr`\n if not attr in dir(self.__sketch):\n 
setattr(self.__sketch, attr, module)\n else:\n self.__logger.warn(attr +\" could not be imported as its label is already used in the sketch\")", "response": "Inserts Interpreter Library of imports into the sketch in a very non-consensual way"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef set_moments(self, sx, sxp, sxxp):\n self._sx = sx\n self._sxp = sxp\n self._sxxp = sxxp\n emit = _np.sqrt(sx**2 * sxp**2 - sxxp**2)\n self._store_emit(emit=emit)", "response": "Sets the beam moments directly."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nsetting the beam moments indirectly using Courant-Snyder parameters.", "response": "def set_Courant_Snyder(self, beta, alpha, emit=None, emit_n=None):\n \"\"\"\n Sets the beam moments indirectly using Courant-Snyder parameters.\n\n Parameters\n ----------\n beta : float\n Courant-Snyder parameter :math:`\\\\beta`.\n alpha : float\n Courant-Snyder parameter :math:`\\\\alpha`.\n emit : float\n Beam emittance :math:`\\\\epsilon`.\n emit_n : float\n Normalized beam emittance :math:`\\\\gamma \\\\epsilon`.\n \"\"\"\n\n self._store_emit(emit=emit, emit_n=emit_n)\n \n self._sx = _np.sqrt(beta*self.emit)\n self._sxp = _np.sqrt((1+alpha**2)/beta*self.emit)\n self._sxxp = -alpha*self.emit"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef beta(self):\n beta = _np.sqrt(self.sx)/self.emit\n return beta", "response": "Return the Courant-Snyder beta parameter."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nnormalizes a slice object to a range of the n-grams.", "response": "def normalize_slice(slice_obj, length):\n \"\"\"\n Given a slice object, return appropriate values for use in the range function\n\n :param slice_obj: The slice object or integer provided in the `[]` notation\n :param length: For negative indexing we need to 
know the max length of the object.\n \"\"\"\n if isinstance(slice_obj, slice):\n start, stop, step = slice_obj.start, slice_obj.stop, slice_obj.step\n if start is None:\n start = 0\n\n if stop is None:\n stop = length\n\n if step is None:\n step = 1\n\n if start < 0:\n start += length\n\n if stop < 0:\n stop += length\n elif isinstance(slice_obj, int):\n start = slice_obj\n if start < 0:\n start += length\n stop = start + 1\n step = 1\n else:\n raise TypeError\n\n if (0 <= start <= length) and (0 <= stop <= length):\n return start, stop, step\n\n raise IndexError"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef error(self, error_code, value, **kwargs):\n code = self.error_code_map.get(error_code, error_code)\n\n try:\n message = Template(self.error_messages[code])\n except KeyError:\n message = Template(self.error_messages[error_code])\n\n placeholders = {\"value\": self.hidden_value if self.hidden else value}\n placeholders.update(kwargs)\n placeholders.update(self.message_values)\n\n self.messages[code] = message.safe_substitute(placeholders)", "response": "Helper method to add error to the internal error_messages field. 
It fills message_values with values from message_values map."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncopies a file from one directory to another.", "response": "def copy(src, dst):\n \"\"\"File copy that support compress and decompress of zip files\"\"\"\n\n (szip, dzip) = (src.endswith(\".zip\"), dst.endswith(\".zip\"))\n logging.info(\"Copy: %s => %s\"%(src, dst))\n\n if szip and dzip:#If both zipped, we can simply use copy\n shutil.copy2(src, dst)\n elif szip:\n with zipfile.ZipFile(src, mode='r') as z:\n tmpdir = tempfile.mkdtemp()\n try:\n z.extractall(tmpdir)\n if len(z.namelist()) != 1:\n raise RuntimeError(\"The zip file '%s' should only have one \"\\\n \"compressed file\"%src)\n tmpfile = join(tmpdir,z.namelist()[0])\n try:\n os.remove(dst)\n except OSError:\n pass\n shutil.move(tmpfile, dst)\n finally:\n shutil.rmtree(tmpdir, ignore_errors=True)\n elif dzip:\n with zipfile.ZipFile(dst, mode='w', compression=ZIP_DEFLATED) as z:\n z.write(src, arcname=basename(src))\n else:#None of them are zipped\n shutil.copy2(src, dst)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\napply the changesets in the metafile list changesets", "response": "def apply_changesets(args, changesets, catalog):\n \"\"\"Apply to the 'catalog' the changesets in the metafile list 'changesets'\"\"\"\n\n tmpdir = tempfile.mkdtemp()\n tmp_patch = join(tmpdir, \"tmp.patch\")\n tmp_lcat = join(tmpdir, \"tmp.lcat\")\n\n for node in changesets:\n remove(tmp_patch)\n copy(node.mfile['changeset']['filename'], tmp_patch)\n logging.info(\"mv %s %s\"%(catalog, tmp_lcat))\n shutil.move(catalog, tmp_lcat)\n\n cmd = args.patch_cmd.replace(\"$in1\", tmp_lcat)\\\n .replace(\"$patch\", tmp_patch)\\\n .replace(\"$out\", catalog)\n logging.info(\"Patch: %s\"%cmd)\n subprocess.check_call(cmd, shell=True)\n\n shutil.rmtree(tmpdir, ignore_errors=True)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 
where it\nvalidates that an event with this name on this date does not already exist.", "response": "def clean(self):\n \"\"\"\n Validate that an event with this name on this date does not exist.\n \"\"\"\n cleaned = super(EventForm, self).clean()\n if Event.objects.filter(name=cleaned['name'], start_date=cleaned['start_date']).count():\n raise forms.ValidationError(u'This event appears to be in the database already.')\n return cleaned"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _loop(self):\n while True:\n try:\n with uncaught_greenlet_exception_context():\n self._loop_callback()\n except gevent.GreenletExit:\n break\n if self._stop_event.wait(self._interval):\n break\n self._clear()", "response": "Main loop - used internally."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef start(self):\n assert not self.has_started(), \"called start() on an active GeventLoop\"\n self._stop_event = Event()\n # note that we don't use safe_greenlets.spawn because we take care of it in _loop by ourselves\n self._greenlet = gevent.spawn(self._loop)", "response": "Starts the loop. Calling a running loop is an error."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nstops a running loop and waits for it to end.", "response": "def stop(self, timeout=None):\n \"\"\"\n Stops a running loop and waits for it to end if timeout is set. Calling a non-running loop is an error.\n :param timeout: time (in seconds) to wait for the loop to end after signalling it. 
``None`` is to block till it\n ends.\n :return: True if the loop stopped, False if still stopping.\n \"\"\"\n assert self.has_started(), \"called stop() on a non-active GeventLoop\"\n greenlet = self._greenlet if gevent.getcurrent() != self._greenlet else None\n self._stop_event.set()\n if greenlet is not None and timeout is not None:\n greenlet.join(timeout)\n return greenlet.ready\n else:\n return True"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef kill(self):\n assert self.has_started(), \"called kill() on a non-active GeventLoop\"\n self._stop_event.set()\n self._greenlet.kill()\n self._clear()", "response": "Kills the running loop and waits till it gets killed."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nplotting a non - uniform image.", "response": "def NonUniformImage(x, y, z, ax=None, fig=None, cmap=None, alpha=None, scalex=True, scaley=True, add_cbar=True, **kwargs):\n \"\"\"\n Used to plot a set of coordinates.\n\n\n Parameters\n ----------\n x, y : :class:`numpy.ndarray`\n 1-D ndarrays of lengths N and M, respectively, specifying pixel centers\n z : :class:`numpy.ndarray`\n An (M, N) ndarray or masked array of values to be colormapped, or a (M, N, 3) RGB array, or a (M, N, 4) RGBA array.\n ax : :class:`matplotlib.axes.Axes`, optional\n The axis to plot to.\n fig : :class:`matplotlib.figure.Figure`, optional\n The figure to plot to.\n cmap : :class:`matplotlib.colors.Colormap`, optional\n The colormap to use.\n alpha : float, optional\n The transparency to use.\n scalex : bool, optional\n To set the x limits to available data\n scaley : bool, optional\n To set the y limits to available data\n add_cbar : bool, optional\n Whether ot add a colorbar or not.\n\n Returns\n -------\n img : :class:`matplotlib.image.NonUniformImage`\n Object representing the :class:`matplotlib.image.NonUniformImage`.\n \"\"\"\n if ax is None and fig is None:\n fig, ax = 
_setup_axes()\n elif ax is None:\n ax = fig.gca()\n elif fig is None:\n fig = ax.get_figure()\n\n norm = kwargs.get('norm', None)\n\n im = _mplim.NonUniformImage(ax, **kwargs)\n\n vmin = kwargs.pop('vmin', _np.min(z))\n vmax = kwargs.pop('vmax', _np.max(z))\n # im.set_clim(vmin=vmin, vmax=vmax)\n\n if cmap is not None:\n im.set_cmap(cmap)\n\n m = _cm.ScalarMappable(cmap=im.get_cmap(), norm=norm)\n m.set_array(z)\n\n if add_cbar:\n cax, cb = _cb(ax=ax, im=m, fig=fig)\n\n if alpha is not None:\n im.set_alpha(alpha)\n\n im.set_data(x, y, z)\n ax.images.append(im)\n\n if scalex:\n xmin = min(x)\n xmax = max(x)\n ax.set_xlim(xmin, xmax)\n\n if scaley:\n ymin = min(y)\n ymax = max(y)\n ax.set_ylim(ymin, ymax)\n\n return _SI(im=im, cb=cb, cax=cax)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nfix common spacing errors caused by LaTeX s habit of using an inter - sentence space after any full stop.", "response": "def _sentence_to_interstitial_spacing(self):\n \"\"\"Fix common spacing errors caused by LaTeX's habit\n of using an inter-sentence space after any full stop.\"\"\"\n\n not_sentence_end_chars = [' ']\n abbreviations = ['i.e.', 'e.g.', ' v.',\n ' w.', ' wh.']\n titles = ['Prof.', 'Mr.', 'Mrs.', 'Messrs.',\n 'Mmes.', 'Msgr.', 'Ms.', 'Fr.', 'Rev.',\n 'St.', 'Dr.', 'Lieut.', 'Lt.', 'Capt.',\n 'Cptn.', 'Sgt.', 'Sjt.', 'Gen.', 'Hon.',\n 'Cpl.', 'L-Cpl.', 'Pvt.', 'Dvr.', 'Gnr.',\n 'Spr.', 'Col.', 'Lt-Col', 'Lt-Gen.', 'Mx.']\n\n for abbrev in abbreviations:\n for x in not_sentence_end_chars:\n self._str_replacement(abbrev + x, abbrev + '\\ ')\n\n for title in titles:\n for x in not_sentence_end_chars:\n self._str_replacement(title + x, title + '~')"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _hyphens_to_dashes(self):\n\n problematic_hyphens = [(r'-([.,!)])', r'---\\1'),\n (r'(?<=\\d)-(?=\\d)', '--'),\n (r'(?<=\\s)-(?=\\s)', '---')]\n\n for problem_case in 
problematic_hyphens:\n self._regex_replacement(*problem_case)", "response": "Transform hyphens to various kinds of dashes"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _str_replacement(self, target, replacement):\n self.data = self.data.replace(target, replacement)", "response": "Replace target with replacement"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef sphinx_make(*targets):\n sh('make %s' % ' '.join(targets), cwd=options.paved.docs.path)", "response": "Call the Sphinx Makefile with the specified targets."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nupload the docs to a remote location via rsync.", "response": "def rsync_docs():\n \"\"\"Upload the docs to a remote location via rsync.\n\n `options.paved.docs.rsync_location`: the target location to rsync files to.\n\n `options.paved.docs.path`: the path to the Sphinx folder (where the Makefile resides).\n\n `options.paved.docs.build_rel`: the path of the documentation\n build folder, relative to `options.paved.docs.path`.\n \"\"\"\n assert options.paved.docs.rsync_location, \"Please specify an rsync location in options.paved.docs.rsync_location.\"\n sh('rsync -ravz %s/ %s/' % (path(options.paved.docs.path) / options.paved.docs.build_rel,\n options.paved.docs.rsync_location))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\npushing the Sphinx docs to github_ gh - pages branch.", "response": "def ghpages():\n '''Push Sphinx docs to github_ gh-pages branch.\n\n 1. Create file .nojekyll\n 2. Push the branch to origin/gh-pages\n after committing using ghp-import_\n\n Requirements:\n - easy_install ghp-import\n\n Options:\n - `options.paved.docs.*` is not used\n - `options.sphinx.docroot` is used (default=docs)\n - `options.sphinx.builddir` is used (default=.build)\n\n .. 
warning::\n This will DESTROY your gh-pages branch.\n If you love it, you'll want to take backups\n before playing with this. This script assumes\n that gh-pages is 100% derivative. You should\n never edit files in your gh-pages branch by hand\n if you're using this script because you will\n lose your work.\n\n .. _github: https://github.com\n .. _ghp-import: https://github.com/davisp/ghp-import\n '''\n\n # copy from paver\n opts = options\n docroot = path(opts.get('docroot', 'docs'))\n if not docroot.exists():\n raise BuildFailure(\"Sphinx documentation root (%s) does not exist.\"\n % docroot)\n builddir = docroot / opts.get(\"builddir\", \".build\")\n # end of copy\n\n builddir=builddir / 'html'\n if not builddir.exists():\n raise BuildFailure(\"Sphinx build directory (%s) does not exist.\"\n % builddir)\n\n nojekyll = path(builddir) / '.nojekyll'\n nojekyll.touch()\n\n sh('ghp-import -p %s' % (builddir))"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nopen your web browser and display the generated html documentation.", "response": "def showhtml():\n \"\"\"Open your web browser and display the generated html documentation.\n \"\"\"\n import webbrowser\n\n # copy from paver\n opts = options\n docroot = path(opts.get('docroot', 'docs'))\n if not docroot.exists():\n raise BuildFailure(\"Sphinx documentation root (%s) does not exist.\"\n % docroot)\n builddir = docroot / opts.get(\"builddir\", \".build\")\n # end of copy\n\n builddir=builddir / 'html'\n if not builddir.exists():\n raise BuildFailure(\"Sphinx build directory (%s) does not exist.\"\n % builddir)\n\n webbrowser.open(builddir / 'index.html')"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nsets up a figure with a number of rows (*rows*) and columns (*cols*), *\\*\\*kwargs* passes through to :class:`matplotlib.figure.Figure`. .. versionchanged:: 1.3 Supports *\\*\\*kwargs* pass-through to :class:`matplotlib.figure.Figure`. .. 
versionchanged:: 1.2 Changed *gridspec_x* to *rows*, *gridspec_y* to *cols*, added *figsize* control. Parameters ---------- rows : int Number of rows to create. cols : int Number of columns to create. Returns ------- fig : :class:`matplotlib.figure.Figure` The figure. gs : :class:`matplotlib.gridspec.GridSpec` Instance with *gridspec_x* rows and *gridspec_y* columns", "response": "def setup_figure(rows=1, cols=1, **kwargs):\n \"\"\"\n Sets up a figure with a number of rows (*rows*) and columns (*cols*), *\\*\\*kwargs* passes through to :class:`matplotlib.figure.Figure`.\n\n .. versionchanged:: 1.3\n Supports *\\*\\*kwargs* pass-through to :class:`matplotlib.figure.Figure`.\n \n .. versionchanged:: 1.2\n Changed *gridspec_x* to *rows*, *gridspec_y* to *cols*, added *figsize* control.\n\n Parameters\n ----------\n\n rows : int\n Number of rows to create.\n cols : int\n Number of columns to create.\n\n Returns\n -------\n \n fig : :class:`matplotlib.figure.Figure`\n The figure.\n gs : :class:`matplotlib.gridspec.GridSpec`\n Instance with *gridspec_x* rows and *gridspec_y* columns\n \"\"\"\n fig = _plt.figure(**kwargs)\n gs = _gridspec.GridSpec(rows, cols)\n\n return fig, gs"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef guess(self, *args):\n self._validate()\n \"\"\"\n guess() allows a guess to be made. Before the guess is made, the method\n checks to see if the game has been won, lost, or there are no tries\n remaining. It then creates a return object stating the number of bulls\n (direct matches), cows (indirect matches), an analysis of the guess (a\n list of analysis objects), and a status.\n\n :param args: any number of integers (or string representations of integers)\n to the number of Digits in the answer; i.e. 
in normal mode, there would be\n a DigitWord to guess of 4 digits, so guess would expect guess(1, 2, 3, 4)\n and a shorter (guess(1, 2)) or longer (guess(1, 2, 3, 4, 5)) sequence will\n raise an exception.\n\n :return: a JSON object containing the analysis of the guess:\n\n {\n \"cows\": {\"type\": \"integer\"},\n \"bulls\": {\"type\": \"integer\"},\n \"analysis\": {\"type\": \"array of DigitWordAnalysis\"},\n \"status\": {\"type\": \"string\"}\n }\n\n \"\"\"\n logging.debug(\"guess called.\")\n logging.debug(\"Validating game object\")\n self._validate(op=\"guess\")\n\n logging.debug(\"Building return object\")\n _return_results = {\n \"cows\": None,\n \"bulls\": None,\n \"analysis\": [],\n \"status\": \"\"\n }\n\n logging.debug(\"Check if game already won, lost, or too many tries.\")\n if self._game.status == GameObject.GAME_WON:\n _return_results[\"message\"] = self._start_again(\"You already won!\")\n elif self._game.status == GameObject.GAME_LOST:\n _return_results[\"message\"] = self._start_again(\"You have made too many guesses, you lost!\")\n elif self._game.guesses_remaining < 1:\n _return_results[\"message\"] = self._start_again(\"You have run out of tries, sorry!\")\n else:\n logging.debug(\"Creating a DigitWord for the guess.\")\n\n _mode = self._load_mode(self._game.mode)\n guess = DigitWord(*args, wordtype=_mode.digit_type)\n\n logging.debug(\"Validating guess.\")\n self._game.guesses_remaining -= 1\n self._game.guesses_made += 1\n\n logging.debug(\"Initializing return object.\")\n _return_results[\"analysis\"] = []\n _return_results[\"cows\"] = 0\n _return_results[\"bulls\"] = 0\n\n logging.debug(\"Asking the underlying GameObject to compare itself to the guess.\")\n for i in self._game.answer.compare(guess):\n logging.debug(\"Iteration of guesses. Processing guess {}\".format(i.index))\n\n if i.match is True:\n logging.debug(\"Bull found. +1\")\n _return_results[\"bulls\"] += 1\n elif i.in_word is True:\n logging.debug(\"Cow found. 
+1\")\n _return_results[\"cows\"] += 1\n\n logging.debug(\"Add analysis to return object\")\n _return_results[\"analysis\"].append(i.get_object())\n\n logging.debug(\"Checking if game won or lost.\")\n if _return_results[\"bulls\"] == len(self._game.answer.word):\n logging.debug(\"Game was won.\")\n self._game.status = GameObject.GAME_WON\n self._game.guesses_remaining = 0\n _return_results[\"message\"] = \"Well done! You won the game with your \" \\\n \"answers {}\".format(self._game.answer_str)\n elif self._game.guesses_remaining < 1:\n logging.debug(\"Game was lost.\")\n self._game.status = GameObject.GAME_LOST\n _return_results[\"message\"] = \"Sorry, you lost! The correct answer was \" \\\n \"{}\".format(self._game.answer_str)\n _return_results[\"status\"] = self._game.status\n\n logging.debug(\"Returning results.\")\n return _return_results", "response": "This method is used to create a JSON object containing the number of cows bulls and analysis of the guess."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef attachments(value, obj, width = WIDTH):\n \n match = ATTACHMENT_REGEX.search(value)\n safe = isinstance(value, (SafeString, SafeUnicode))\n \n while not match is None and match.end() <= len(value):\n start = match.start()\n end = match.end()\n groups = match.groups()\n \n if len(groups) > 0:\n index = groups[0]\n options = None\n \n if len(groups) > 1:\n options = groups[1]\n if options:\n options = options.strip()\n if options:\n try:\n options = split(smart_str(options))\n except:\n options = None\n \n args = []\n kwargs = {\n 'width': width\n }\n \n if options:\n for option in options:\n key, equals, val = option.partition('=')\n if equals != '=':\n if key and not val:\n args.append(key)\n continue\n \n kwargs[key] = val\n \n try:\n if isinstance(obj, dict):\n inner = Attachment(\n **obj['attachments__attachment'][int(index) - 1]\n ).render(*args, **kwargs)\n else:\n inner = 
obj.attachments.all()[int(index) - 1].render(*args, **kwargs)\n except:\n inner = ''\n else:\n inner = ''\n \n value = value[:start] + inner + value[end:]\n match = ATTACHMENT_REGEX.search(value, start + len(inner))\n \n if safe:\n return mark_safe(value)\n else:\n return value", "response": "Parses the value, replacing attachment markers with rendered attachments."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef minify(self, css):\n css = css.replace(\"\\r\\n\", \"\\n\") # get rid of Windows line endings, if they exist\n for rule in _REPLACERS[self.level]:\n css = re.compile(rule[0], re.MULTILINE|re.UNICODE|re.DOTALL).sub(rule[1], css)\n return css", "response": "Tries to minimize the length of the CSS code passed as a parameter. Returns a string."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncreating a colorbar for the current axes.", "response": "def colorbar(ax, im, fig=None, loc=\"right\", size=\"5%\", pad=\"3%\"):\n \"\"\"\n Adds a polite colorbar that steals space so :func:`matplotlib.pyplot.tight_layout` works nicely.\n\n .. 
versionadded:: 1.3\n\n Parameters\n ----------\n\n ax : :class:`matplotlib.axis.Axis`\n The axis to plot to.\n im : :class:`matplotlib.image.AxesImage`\n The plotted image to use for the colorbar.\n fig : :class:`matplotlib.figure.Figure`, optional\n The figure to plot to.\n loc : str, optional\n The location to place the axes.\n size : str, optional\n The size to allocate for the colorbar.\n pad : str, optional\n The amount to pad the colorbar.\n\n \"\"\"\n if fig is None:\n fig = ax.get_figure()\n\n # _pdb.set_trace()\n if loc == \"left\" or loc == \"right\":\n width = fig.get_figwidth()\n new = width * (1 + _pc2f(size) + _pc2f(pad))\n _logger.debug('Setting new figure width: {}'.format(new))\n # fig.set_size_inches(new, fig.get_figheight(), forward=True)\n elif loc == \"top\" or loc == \"bottom\":\n height = fig.get_figheight()\n new = height * (1 + _pc2f(size) + _pc2f(pad))\n _logger.debug('Setting new figure height: {}'.format(new))\n # fig.set_figheight(fig.get_figwidth(), new, forward=True)\n\n divider = _ag1.make_axes_locatable(ax)\n cax = divider.append_axes(loc, size=size, pad=pad)\n return cax, _plt.colorbar(im, cax=cax)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nconverting a bdes to a geometric focusing strength K.", "response": "def BDES2K(bdes, quad_length, energy):\n \"\"\"\n Converts a quadrupole :math:`B_des` into a geometric focusing strength :math:`K`.\n\n Parameters\n ----------\n\n bdes : float\n The magnet value of :math:`B_des`.\n quad_length : float\n The length of the quadrupole in meters.\n energy : float\n The design energy of the beam in GeV.\n\n Returns\n -------\n K : float\n The geometric focusing strength :math:`K`.\n \"\"\"\n # Make sure everything is float\n bdes = _np.float_(bdes)\n quad_length = _np.float_(quad_length)\n energy = _np.float_(energy)\n\n Brho = energy/_np.float_(0.029979)\n K = bdes/(Brho*quad_length)\n logger.log(level=loggerlevel, msg='Converted BDES: {bdes}, quad length: 
{quad_length}, energy: {energy} to K: {K}'.format(\n bdes = bdes ,\n quad_length = quad_length ,\n energy = energy ,\n K = K\n )\n )\n\n return K"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns an open file - object to the index file", "response": "def get_or_create_index(self, index_ratio, index_width):\n \"\"\"Return an open file-object to the index file\"\"\"\n if not self.index_path.exists() or not self.filepath.stat().st_mtime == self.index_path.stat().st_mtime:\n create_index(self.filepath, self.index_path, index_ratio=index_ratio, index_width=index_width)\n return IndexFile(str(self.index_path))"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncreates the tasks on the server", "response": "def create(self, server):\n \"\"\"Create the tasks on the server\"\"\"\n for chunk in self.__cut_to_size():\n server.post(\n 'tasks_admin',\n chunk.as_payload(),\n replacements={\n 'slug': chunk.challenge.slug})"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nupdating existing tasks on the server", "response": "def update(self, server):\n \"\"\"Update existing tasks on the server\"\"\"\n for chunk in self.__cut_to_size():\n server.put(\n 'tasks_admin',\n chunk.as_payload(),\n replacements={\n 'slug': chunk.challenge.slug})"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef reconcile(self, server):\n if not self.challenge.exists(server):\n raise Exception('Challenge does not exist on server')\n\n existing = MapRouletteTaskCollection.from_server(server, self.challenge)\n\n same = []\n new = []\n changed = []\n deleted = []\n\n # reconcile the new tasks with the existing tasks:\n for task in self.tasks:\n # if the task exists on the server...\n if task.identifier in [existing_task.identifier for existing_task in existing.tasks]:\n # and they are equal...\n if task == 
existing.get_by_identifier(task.identifier):\n # add to 'same' list\n same.append(task)\n # if they are not equal, add to 'changed' list\n else:\n changed.append(task)\n # if the task does not exist on the server, add to 'new' list\n else:\n new.append(task)\n\n # next, check for tasks on the server that don't exist in the new collection...\n for task in existing.tasks:\n if task.identifier not in [task.identifier for task in self.tasks]:\n # ... and add those to the 'deleted' list.\n deleted.append(task)\n\n # update the server with new, changed, and deleted tasks\n if new:\n newCollection = MapRouletteTaskCollection(self.challenge, tasks=new)\n newCollection.create(server)\n if changed:\n changedCollection = MapRouletteTaskCollection(self.challenge, tasks=changed)\n changedCollection.update(server)\n if deleted:\n deletedCollection = MapRouletteTaskCollection(self.challenge, tasks=deleted)\n for task in deletedCollection.tasks:\n task.status = 'deleted'\n deletedCollection.update(server)\n # return same, new, changed and deleted tasks\n return {'same': same, 'new': new, 'changed': changed, 'deleted': deleted}", "response": "Reconcile this collection with the server."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef yn_prompt(msg, default=True):\n ret = custom_prompt(msg, [\"y\", \"n\"], \"y\" if default else \"n\")\n if ret == \"y\":\n return True\n return False", "response": "Prompts the user for yes or no."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef custom_prompt(msg, options, default):\n formatted_options = [\n x.upper() if x == default else x.lower() for x in options\n ]\n sure = input(\"{0} [{1}]: \".format(msg, \"/\".join(formatted_options)))\n if len(sure) == 0:\n return default\n for option in options:\n if sure.upper() == option.upper():\n return option\n return default", "response": "Prompts the user with custom options."} {"SOURCE": 
"codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreads the configure file and adds non - existing attributes to args", "response": "def read(args):\n \"\"\"Reading the configure file and adds non-existing attributes to 'args'\"\"\"\n\n if args.config_file is None or not isfile(args.config_file):\n return\n\n logging.info(\"Reading configure file: %s\"%args.config_file)\n\n config = cparser.ConfigParser()\n config.read(args.config_file)\n if not config.has_section('lrcloud'):\n raise RuntimeError(\"Configure file has no [lrcloud] section!\")\n\n for (name, value) in config.items('lrcloud'):\n if value == \"True\":\n value = True\n elif value == \"False\":\n value = False\n if getattr(args, name) is None:\n setattr(args, name, value)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nwriting the configure file with the attributes in args", "response": "def write(args):\n \"\"\"Writing the configure file with the attributes in 'args'\"\"\"\n\n logging.info(\"Writing configure file: %s\"%args.config_file)\n if args.config_file is None:\n return\n\n #Let's add each attribute of 'args' to the configure file\n config = cparser.ConfigParser()\n config.add_section(\"lrcloud\")\n for p in [x for x in dir(args) if not x.startswith(\"_\")]:\n if p in IGNORE_ARGS:\n continue#We ignore some attributes\n value = getattr(args, p)\n if value is not None:\n config.set('lrcloud', p, str(value))\n\n with open(args.config_file, 'w') as f:\n config.write(f)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _spawn_memcached(sock):\n\tp = subprocess.Popen('memcached -s ' + sock, shell=True)\n\ttime.sleep(0.2) # memcached needs to initialize\n\tassert p.poll() is None # make sure it's running\n\treturn p", "response": "Helper function for tests. Spawns a memcached process attached to sock. 
Returns Popen instance."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ndumps a Python object of the object as detailed above.", "response": "def dump(self):\n \"\"\"\n Dump (return) a dict representation of the GameObject. This is a Python\n dict and is NOT serialized. NB: the answer (a DigitWord object) and the\n mode (a GameMode object) are converted to python objects of a list and\n dict respectively.\n\n :return: python of the GameObject as detailed above.\n \"\"\"\n\n return {\n \"key\": self._key,\n \"status\": self._status,\n \"ttl\": self._ttl,\n \"answer\": self._answer.word,\n \"mode\": self._mode.dump(),\n \"guesses_made\": self._guesses_made\n }"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef load(self, source=None):\n if not source:\n raise ValueError(\"A valid dictionary must be passed as the source_dict\")\n if not isinstance(source, dict):\n raise TypeError(\"A valid dictionary must be passed as the source_dict. {} given.\".format(type(source)))\n\n required_keys = (\n \"key\",\n \"status\",\n \"ttl\",\n \"answer\",\n \"mode\",\n \"guesses_made\")\n if not all(key in source for key in required_keys):\n raise ValueError(\"The dictionary passed is malformed: {}\".format(source))\n\n _mode = GameMode(**source[\"mode\"])\n self._key = source[\"key\"]\n self._status = source[\"status\"]\n self._ttl = source[\"ttl\"]\n self._answer = DigitWord(*source[\"answer\"], wordtype=_mode.digit_type)\n self._mode = _mode\n self._guesses_made = source[\"guesses_made\"]", "response": "Load the representation of a GameObject from a Python dictionary representing\n the game object."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncreate a new instance of a new object of type GameMode.", "response": "def new(self, mode):\n \"\"\"\n Create a new instance of a game. 
Note, a mode MUST be provided and MUST be of\n type GameMode.\n\n :param mode: \n\n \"\"\"\n dw = DigitWord(wordtype=mode.digit_type)\n dw.random(mode.digits)\n\n self._key = str(uuid.uuid4())\n self._status = \"\"\n self._ttl = 3600\n self._answer = dw\n self._mode = mode\n self._guesses_remaining = mode.guesses_allowed\n self._guesses_made = 0"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nbumping the Version given a target", "response": "def bump(self, target):\n \"\"\"\n Bumps the Version given a target\n\n The target can be either MAJOR, MINOR or PATCH\n \"\"\"\n if target == 'patch':\n return Version(self.major, self.minor, self.patch + 1)\n if target == 'minor':\n return Version(self.major, self.minor + 1, 0)\n if target == 'major':\n return Version(self.major + 1, 0, 0)\n return self.clone()"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning a copy of this object", "response": "def clone(self):\n \"\"\"\n Returns a copy of this object\n \"\"\"\n t = Tag(self.version.major, self.version.minor, self.version.patch)\n if self.revision is not None:\n t.revision = self.revision.clone()\n return t"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn a new Tag with a given revision", "response": "def with_revision(self, label, number):\n \"\"\"\n Returns a Tag with a given revision\n \"\"\"\n t = self.clone()\n t.revision = Revision(label, number)\n return t"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef parse(s):\n try:\n m = _regex.match(s)\n t = Tag(int(m.group('major')),\n int(m.group('minor')),\n int(m.group('patch')))\n return t \\\n if m.group('label') is None \\\n else t.with_revision(m.group('label'), int(m.group('number')))\n except AttributeError:\n return None", "response": "Parses a string into a Tag object."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code 
does\ndef yield_tag(self, target=None, label=None):\n if target is None and label is None:\n raise ValueError('`target` and/or `label` must be specified')\n if label is None:\n return self._yield_from_target(target)\n if target is None:\n return self._yield_from_label(label)\n return self._yield_from_target_and_label(target, label)", "response": "Yields a Tag containing the target and label."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngenerates a new array with numbers interpolated between the numbers of the input array and the exterior border.", "response": "def linspaceborders(array):\n \"\"\"\n Generate a new array with numbers interpolated between the numbers of the input array. Extrapolates elements to the left and right sides to get the exterior border as well.\n \n Parameters\n ----------\n\n array : :class:`numpy.ndarray`\n The original array.\n\n Returns\n -------\n\n array : :class:`numpy.ndarray`\n The interpolated/extrapolated array.\n \"\"\"\n # Get and set left side\n dela = array[1] - array[0]\n new_arr = _np.array([array[0] - dela / 2])\n\n # Add a point to the right side so the for loop works\n delb = array[-1] - array[-2]\n array = _np.append(array, array[-1] + delb)\n for i, val in enumerate(array):\n try:\n avg = (array[i] + array[i + 1]) / 2\n new_arr = _np.append(new_arr, avg)\n except:\n pass\n # start = array[0] - dela / 2\n # end = array[-1] + dela / 2\n # return _linspacestep(start, end, dela)\n return new_arr"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning the number of beam points in the first axis.", "response": "def nb0(self):\n \"\"\"\n On-axis beam density :math:`n_{b,0}`.\n \"\"\"\n return self.N_e / (4*_np.sqrt(3) * _np.pi * self.sig_x * self.sig_y * self.sig_xi)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef k(self):\n try:\n return self._k\n except 
AttributeError:\n self._k = _np.sqrt(_np.pi/8) * e**2 * self.nb0 * self.sig_y / ( e0 * self.m * c**2)\n return self._k", "response": "Gets the K term of the current object."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef k_small(self):\n return self.k * _np.sqrt(2/_np.pi) / self.sig_y", "response": "Small - angle driving force term"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncreating a new game. Docs TBC.", "response": "def new_game(self, mode=None):\n \"\"\"\n new_game() creates a new game. Docs TBC.\n\n :return: JSON String containing the game object.\n\n \"\"\"\n\n # Create a placeholder Game object\n self._g = GameObject()\n\n # Validate game mode\n _mode = mode or \"normal\"\n logging.debug(\"new_game: requesting mode {}\".format(_mode))\n\n mode_info = self._g.get_game_type(gametype=_mode)\n logging.debug(\"mode_info: {} (Type: {})\".format(mode_info, type(mode_info)))\n\n if not mode_info:\n self._g = None\n raise ValueError('The mode passed ({}) is not supported.'.format(_mode))\n\n logging.debug(\"Creating a DigitWord (type {})\".format(mode_info.digitType))\n dw = DigitWord(wordtype=mode_info.digitType)\n dw.random(mode_info.digits)\n\n logging.debug(\"Randomized DigitWord. Value is {}.\".format(dw.word))\n\n _game = {\n \"key\": str(uuid.uuid4()),\n \"status\": \"playing\",\n \"ttl\": int(time()) + 3600,\n \"answer\": dw.word,\n \"mode\": _mode,\n \"guesses_remaining\": mode_info.guesses_allowed,\n \"guesses_made\": 0\n }\n logging.debug(\"Game being created: {}\".format(_game))\n\n self._g.from_json(jsonstr=json.dumps(_game))\n return self._g.to_json()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nloading a game object from a JSON string.", "response": "def load_game(self, jsonstr):\n \"\"\"\n load_game() takes a JSON string representing a game object and calls the underlying\n game object (_g) to load the JSON. 
The underlying object will handle schema validation\n and transformation.\n\n :param jsonstr: A valid JSON string representing a GameObject (see above)\n\n :return: None\n\n \"\"\"\n logging.debug(\"load_game called.\")\n logging.debug(\"Creating empty GameObject.\")\n self._g = GameObject()\n\n logging.debug(\"Calling from_json with {}.\".format(jsonstr))\n self._g.from_json(jsonstr=jsonstr)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef save_game(self):\n logging.debug(\"save_game called.\")\n logging.debug(\"Validating game object\")\n self._validate_game_object(op=\"save_game\")\n\n logging.debug(\"Dumping JSON from GameObject\")\n return self._g.to_json()", "response": "Save the contents of the underlying game object as JSON and return the JSON representation of the game object."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef guess(self, *args):\n logging.debug(\"guess called.\")\n logging.debug(\"Validating game object\")\n self._validate_game_object(op=\"guess\")\n\n logging.debug(\"Building return object\")\n _return_results = {\n \"cows\": None,\n \"bulls\": None,\n \"analysis\": [],\n \"status\": \"\"\n }\n\n logging.debug(\"Check if game already won, lost, or too many tries.\")\n if self._g.status.lower() == \"won\":\n _return_results[\"message\"] = self._start_again(\"You already won!\")\n elif self._g.status.lower() == \"lost\":\n _return_results[\"message\"] = self._start_again(\"You have made too many guesses_allowed, you lost!\")\n elif self._g.guesses_remaining < 1:\n _return_results[\"message\"] = self._start_again(\"You have run out of tries, sorry!\")\n elif self._g.ttl < time():\n _return_results[\"message\"] = self._start_again(\"Sorry, you ran out of time to complete the puzzle!\")\n else:\n logging.debug(\"Creating a DigitWord for the guess.\")\n\n _wordtype = DigitWord.HEXDIGIT if self._g.mode.lower() == 
'hex' else DigitWord.DIGIT\n guess = DigitWord(*args, wordtype=_wordtype)\n\n logging.debug(\"Validating guess.\")\n self._g.guesses_remaining -= 1\n self._g.guesses_made += 1\n\n logging.debug(\"Initializing return object.\")\n _return_results[\"analysis\"] = []\n _return_results[\"cows\"] = 0\n _return_results[\"bulls\"] = 0\n\n logging.debug(\"Asking the underlying GameObject to compare itself to the guess.\")\n for i in self._g.answer.compare(guess):\n logging.debug(\"Iteration of guesses_allowed. Processing guess {}\".format(i.index))\n\n if i.match is True:\n logging.debug(\"Bull found. +1\")\n _return_results[\"bulls\"] += 1\n elif i.in_word is True:\n logging.debug(\"Cow found. +1\")\n _return_results[\"cows\"] += 1\n\n logging.debug(\"Add analysis to return object\")\n _return_results[\"analysis\"].append(i.get_object())\n\n logging.debug(\"Checking if game won or lost.\")\n if _return_results[\"bulls\"] == len(self._g.answer.word):\n logging.debug(\"Game was won.\")\n self._g.status = \"won\"\n self._g.guesses_remaining = 0\n _return_results[\"message\"] = \"Well done! You won the game with your \" \\\n \"answers {}\".format(self._get_text_answer())\n elif self._g.guesses_remaining < 1:\n logging.debug(\"Game was lost.\")\n self._g.status = \"lost\"\n _return_results[\"message\"] = \"Sorry, you lost! The correct answer was \" \\\n \"{}\".format(self._get_text_answer())\n _return_results[\"status\"] = self._g.status\n\n logging.debug(\"Returning results.\")\n return _return_results", "response": "Make a guess against the current game. It checks whether the game has already been won or lost, or whether too many tries have been made."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nvalidates the game object.", "response": "def _validate_game_object(self, op=\"unknown\"):\n \"\"\"\n A helper method to provide validation of the game object (_g). 
If the\n game object does not exist or if (for any reason) the object is not a GameObject,\n then an exception will be raised.\n\n :param op: A string describing the operation (e.g. guess, save, etc.) taking place\n :return: Nothing\n \"\"\"\n if self._g is None:\n raise ValueError(\n \"Game must be instantiated properly before using - call new_game() \"\n \"or load_game(jsonstr='{...}')\"\n )\n if not isinstance(self._g, GameObject):\n raise TypeError(\n \"Unexpected error during {0}! GameObject (_g) is not a GameObject!\".format(op)\n )"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef extra_context(request):\n host = os.environ.get('DJANGO_LIVE_TEST_SERVER_ADDRESS', None) \\\n or request.get_host()\n d = {\n 'request':request,\n 'HOST':host,\n 'IN_ADMIN':request.path.startswith('/admin/'),\n }\n\n return d", "response": "Adds useful global items to the context for use in templates."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef fft(values, freq=None, timestamps=None, fill_missing=False):\n # ======================================\n # Get frequency\n # ======================================\n if freq is None:\n from .. 
import qt\n freq = qt.getDouble(title='Fourier Analysis', text='Frequency samples taken at:', min=0, decimals=2, value=1.0)\n freq = freq.input\n \n if fill_missing:\n (t_x, x_filled) = fill_missing_timestamps(timestamps, values)\n else:\n x_filled = values\n \n num_samples = _np.size(x_filled)\n xfft = _sp.fftpack.rfft(x_filled)\n \n factor = freq/num_samples\n num_fft = _np.size(xfft)\n f = factor * _np.linspace(1, num_fft, num_fft)\n \n xpow = _np.abs(xfft*_np.conj(xfft))\n\n # ======================================\n # No DC term\n # ======================================\n xpow = xpow[1:]\n f = f[1:]\n\n return (f, xpow)", "response": "Compute the FFT of the given values."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef create(self, server):\n\n return server.post(\n 'challenge_admin',\n self.as_payload(),\n replacements={'slug': self.slug})", "response": "Create the challenge on the server"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nupdate the challenge on the server", "response": "def update(self, server):\n \"\"\"Update existing challenge on the server\"\"\"\n\n return server.put(\n 'challenge_admin',\n self.as_payload(),\n replacements={'slug': self.slug})"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nchecking if a challenge exists on the server", "response": "def exists(self, server):\n \"\"\"Check if a challenge exists on the server\"\"\"\n\n try:\n server.get(\n 'challenge',\n replacements={'slug': self.slug})\n except Exception:\n return False\n return True"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nretrieve a challenge from the MapRoulette server.", "response": "def from_server(cls, server, slug):\n \"\"\"Retrieve a challenge from the MapRoulette server\n :type server\n \"\"\"\n\n challenge = server.get(\n 'challenge',\n replacements={'slug': slug})\n return cls(\n **challenge)"} {"SOURCE": 
"codesearchnet", "instruction": "Can you create a Python 3 function that\nextends logthing.utils.default_logging_dict with django specific values.", "response": "def django_logging_dict(log_dir, handlers=['file'], filename='debug.log'):\n \"\"\"Extends :func:`logthing.utils.default_logging_dict` with django\n specific values.\n \"\"\"\n d = default_logging_dict(log_dir, handlers, filename)\n d['handlers'].update({\n 'mail_admins':{\n 'level':'ERROR',\n 'class':'django.utils.log.AdminEmailHandler',\n }\n })\n d['loggers'].update({\n 'django.db.backends': { # stop SQL debug from going to main logger\n 'handlers': ['file', 'mail_admins'],\n 'level': 'ERROR',\n 'propagate': False,\n },\n 'django.request': {\n 'handlers': ['file', 'mail_admins'],\n 'level': 'ERROR',\n 'propagate': False,\n },\n })\n return d"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_position(self, position_id):\n url = \"/2/positions/%s\" % position_id\n\n return self.position_from_json(self._get_resource(url)[\"position\"])", "response": "Returns position data.\n\n http://dev.wheniwork.com/#get-existing-position"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning a list of positions.", "response": "def get_positions(self):\n \"\"\"\n Returns a list of positions.\n\n http://dev.wheniwork.com/#listing-positions\n \"\"\"\n url = \"/2/positions\"\n\n data = self._get_resource(url)\n positions = []\n for entry in data['positions']:\n positions.append(self.position_from_json(entry))\n\n return positions"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef create_position(self, params={}):\n url = \"/2/positions/\"\n body = params\n\n data = self._post_resource(url, body)\n return self.position_from_json(data[\"position\"])", "response": "Creates a new position."} {"SOURCE": "codesearchnet", "instruction": "Explain what the 
following Python 3 code does\ndef print2elog(author='', title='', text='', link=None, file=None, now=None):\n # ============================\n # Get current time\n # ============================\n if now is None:\n now = _dt.datetime.now()\n fulltime = now.strftime('%Y-%m-%dT%H:%M:%S-00')\n\n # ============================\n # Copy files\n # ============================\n if not ((link is None) ^ (file is None)):\n link_copied = _copy_file(link, fulltime)\n file_copied = _copy_file(file, fulltime)\n else:\n raise ValueError('Need both file and its thumbnail!')\n\n # ============================\n # Jinja templating\n # ============================\n loader = _jj.PackageLoader('pytools.facettools', 'resources/templates')\n env = _jj.Environment(loader=loader, trim_blocks=True)\n template = env.get_template('facetelog.xml')\n stream = template.stream(author=author, title=title, text=text, link=link_copied, file=file_copied, now=now)\n\n # ============================\n # Write xml\n # ============================\n with _tempfile.TemporaryDirectory() as dirname:\n filename = '{}.xml'.format(fulltime)\n filepath = _os.path.join(dirname, filename)\n\n with open(filepath, 'w+') as fid:\n # stream.disable_buffering()\n stream.dump(fid)\n\n finalpath = _os.path.join(basedir, filename)\n # _shutil.copyfile(filepath, 'new.xml')\n _shutil.copyfile(filepath, finalpath)", "response": "Print to the elog."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nprinting Source Lines of Code and export to file.", "response": "def sloccount():\n '''Print \"Source Lines of Code\" and export to file.\n\n Export is hudson_ plugin_ compatible: sloccount.sc\n\n requirements:\n - sloccount_ should be installed.\n - tee and pipes are used\n\n options.paved.pycheck.sloccount.param\n\n .. _sloccount: http://www.dwheeler.com/sloccount/\n .. _hudson: http://hudson-ci.org/\n .. 
_plugin: http://wiki.hudson-ci.org/display/HUDSON/SLOCCount+Plugin\n '''\n\n # filter out subpackages\n setup = options.get('setup')\n packages = options.get('packages') if setup else None\n\n if packages:\n dirs = [x for x in packages if '.' not in x]\n else:\n dirs = ['.']\n\n # sloccount has strange behaviour with directories,\n # can cause exception in hudson sloccount plugin.\n # Better to call it with file list\n ls=[]\n for d in dirs:\n ls += list(path(d).walkfiles())\n #ls=list(set(ls))\n files=' '.join(ls)\n param=options.paved.pycheck.sloccount.param\n sh('sloccount {param} {files} | tee sloccount.sc'.format(param=param, files=files))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef gaussian(x, mu, sigma):\n return _np.exp(-(x-mu)**2/(2*sigma**2)) / (_np.sqrt(2*_np.pi) * sigma)", "response": "Returns the Gaussian probability density with mean mu and standard deviation sigma."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef http_exception_error_handler(\n exception):\n \"\"\"\n Handle HTTP exception\n\n :param werkzeug.exceptions.HTTPException exception: Raised exception\n\n A response is returned, as formatted by the :py:func:`response` function.\n \"\"\"\n\n assert issubclass(type(exception), HTTPException), type(exception)\n assert hasattr(exception, \"code\")\n assert hasattr(exception, \"description\")\n\n return response(exception.code, exception.description)", "response": "Handle HTTP exception\n\n :param werkzeug.exceptions.HTTPException exception: Raised exception\n\n A response is returned, as formatted by the :py:func:`response` function."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn True if the value given is a valid CSS colour, i.e. 
matches one of the regular expressions in the module or is in the list of predefined values by the browser.", "response": "def is_colour(value):\n \"\"\"Returns True if the value given is a valid CSS colour, i.e. matches one\n of the regular expressions in the module or is in the list of\n predefined values by the browser.\n \"\"\"\n global PREDEFINED, HEX_MATCH, RGB_MATCH, RGBA_MATCH, HSL_MATCH, HSLA_MATCH\n value = value.strip()\n\n # hex match\n if HEX_MATCH.match(value) or RGB_MATCH.match(value) or \\\n RGBA_MATCH.match(value) or HSL_MATCH.match(value) or \\\n HSLA_MATCH.match(value) or value in PREDEFINED:\n return True\n\n return False"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef format_time(seconds):\n\n timedelta = datetime.timedelta(seconds=int(seconds))\n\n mm, ss = divmod(timedelta.seconds, 60)\n if mm < 60:\n return \"%02d:%02d\" % (mm, ss)\n hh, mm = divmod(mm, 60)\n if hh < 24:\n return \"%02d:%02d:%02d\" % (hh, mm, ss)\n dd, hh = divmod(mm, 24)\n return \"%d days %02d:%02d:%02d\" % (dd, hh, mm, ss)", "response": "Formats time as the string HH:MM:SS."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning the frictional resistance coefficient of the vehicle", "response": "def frictional_resistance_coef(length, speed, **kwargs):\n \"\"\"\n Flat plate frictional resistance of the ship according to ITTC formula.\n ref: https://ittc.info/media/2021/75-02-02-02.pdf\n\n :param length: metres length of the vehicle\n :param speed: m/s speed of the vehicle\n :param kwargs: optional could take in temperature to take account change of water property\n :return: Frictional resistance coefficient of the vehicle\n \"\"\"\n Cf = 0.075 / (np.log10(reynolds_number(length, speed, **kwargs)) - 2) ** 2\n return Cf"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef reynolds_number(length, speed, temperature=25):\n 
kinematic_viscosity = interpolate.interp1d([0, 10, 20, 25, 30, 40],\n np.array([18.54, 13.60, 10.50, 9.37, 8.42, 6.95]) / 10 ** 7)\n # Data from http://web.mit.edu/seawater/2017_MIT_Seawater_Property_Tables_r2.pdf\n Re = length * speed / kinematic_viscosity(temperature)\n return Re", "response": "This function returns the reynolds number for a vehicle at specific length and speed."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef dimension(self, length, draught, beam, speed,\n slenderness_coefficient, prismatic_coefficient):\n \"\"\"\n Assign values for the main dimension of a ship.\n\n :param length: metres length of the vehicle\n :param draught: metres draught of the vehicle\n :param beam: metres beam of the vehicle\n :param speed: m/s speed of the vehicle\n :param slenderness_coefficient: Slenderness coefficient dimensionless :math:`L/(\u2207^{1/3})` where L is length of ship,\n \u2207 is displacement\n :param prismatic_coefficient: Prismatic coefficient dimensionless :math:`\u2207/(L\\cdot A_m)` where L is length of ship,\n \u2207 is displacement Am is midsection area of the ship\n \"\"\"\n self.length = length\n self.draught = draught\n self.beam = beam\n self.speed = speed\n self.slenderness_coefficient = slenderness_coefficient\n self.prismatic_coefficient = prismatic_coefficient\n self.displacement = (self.length / self.slenderness_coefficient) ** 3\n self.surface_area = 1.025 * (1.7 * self.length * self.draught +\n self.displacement / self.draught)", "response": "Assign values for the main dimension of a vehicle."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef resistance(self):\n self.total_resistance_coef = frictional_resistance_coef(self.length, self.speed) + \\\n residual_resistance_coef(self.slenderness_coefficient,\n self.prismatic_coefficient,\n froude_number(self.speed, self.length))\n RT = 1 / 2 * self.total_resistance_coef * 1025 * 
self.surface_area * self.speed ** 2\n return RT", "response": "Return the total resistance of the vehicle in newtons."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef maximum_deck_area(self, water_plane_coef=0.88):\n AD = self.beam * self.length * water_plane_coef\n return AD", "response": "Returns the maximum deck area of the ship\n "} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef prop_power(self, propulsion_eff=0.7, sea_margin=0.2):\n PP = (1 + sea_margin) * self.resistance() * self.speed/propulsion_eff\n return PP", "response": "Calculates the required propulsion power of the vessel."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef axesfontsize(ax, fontsize):\n items = ([ax.title, ax.xaxis.label, ax.yaxis.label] + ax.get_xticklabels() + ax.get_yticklabels())\n for item in items:\n item.set_fontsize(fontsize)", "response": "Change the font size for axes ax to fontsize."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nscaling the number of tick labels in x and y by x_fraction and y_fraction respectively.", "response": "def less_labels(ax, x_fraction=0.5, y_fraction=0.5):\n \"\"\"\n Scale the number of tick labels in x and y by *x_fraction* and *y_fraction* respectively.\n \"\"\"\n nbins = _np.size(ax.get_xticklabels())\n ax.locator_params(nbins=_np.floor(nbins*x_fraction), axis='x')\n\n nbins = _np.size(ax.get_yticklabels())\n ax.locator_params(nbins=_np.floor(nbins*y_fraction), axis='y')"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nconfigure the API to use given url and token or to get them from the Config.", "response": "def configure(self, url=None, token=None, test=False):\n \"\"\"\n Configure the api to use given url and token or to get them from the\n Config.\n \"\"\"\n\n if url is None:\n url = Config.get_value(\"url\")\n if token is None:\n 
token = Config.get_value(\"token\")\n\n self.server_url = url\n self.auth_header = {\"Authorization\": \"Basic {0}\".format(token)}\n self.configured = True\n\n if test:\n self.test_connection()\n\n Config.set(\"url\", url)\n Config.set(\"token\", token)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef send_zip(self, exercise, file, params):\n\n resp = self.post(\n exercise.return_url,\n params=params,\n files={\n \"submission[file]\": ('submission.zip', file)\n },\n data={\n \"commit\": \"Submit\"\n }\n )\n return self._to_json(resp)", "response": "Send a zip file to the TMC for given exercise"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _make_url(self, slug):\n if slug.startswith(\"http\"):\n return slug\n return \"{0}{1}\".format(self.server_url, slug)", "response": "Make the URL for the given slug."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _do_request(self, method, slug, **kwargs):\n # ensure we are configured\n if not self.configured:\n self.configure()\n\n url = self._make_url(slug)\n\n # 'defaults' are values associated with every request.\n # following will make values in kwargs override them.\n defaults = {\"headers\": self.auth_header, \"params\": self.params}\n for item in defaults.keys():\n # override default's value with kwargs's one if existing.\n kwargs[item] = dict(defaults[item], **(kwargs.get(item, {})))\n\n # request() can raise connectivity related exceptions.\n # raise_for_status raises an exception ONLY if the response\n # status_code is \"not-OK\" i.e 4XX, 5XX..\n #\n # All of these inherit from RequestException\n # which is \"translated\" into an APIError.\n try:\n resp = request(method, url, **kwargs)\n resp.raise_for_status()\n except RequestException as e:\n reason = \"HTTP {0} request to {1} failed: {2}\"\n raise 
APIError(reason.format(method, url, repr(e)))\n return resp", "response": "Performs the HTTP request and validates the response."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nextracting json from a response.", "response": "def _to_json(self, resp):\n \"\"\"\n Extract json from a response.\n Assumes response is valid otherwise.\n Internal use only.\n \"\"\"\n try:\n json = resp.json()\n except ValueError as e:\n reason = \"TMC Server did not send valid JSON: {0}\"\n raise APIError(reason.format(repr(e)))\n\n return json"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _normalize_instancemethod(instance_method):\n if not hasattr(instance_method, 'im_self'):\n return instance_method\n\n def _func(*args, **kwargs):\n return instance_method(*args, **kwargs)\n\n _func.__name__ = repr(instance_method)\n return _func", "response": "Wraps a bound instance method in a plain function so it can be passed around like an ordinary function."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef safe_joinall(greenlets, timeout=None, raise_error=False):\n greenlets = list(greenlets)\n try:\n gevent.joinall(greenlets, timeout=timeout, raise_error=raise_error)\n except gevent.GreenletExit:\n [greenlet.kill() for greenlet in greenlets if not greenlet.ready()]\n raise\n return greenlets", "response": "Wrapper for gevent.
joinall that kills all the greenlets that are not ready."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef imshow_batch(images, cbar=True, show=True, pdf=None, figsize=(16, 12), rows=2, columns=2, cmap=None, **kwargs):\n # ======================================\n # Set up grid\n # ======================================\n images = _np.array(images)\n gs = _gridspec.GridSpec(rows, columns)\n\n num_imgs = images.shape[0]\n max_ind = num_imgs-1\n\n # ======================================\n # Split into pages\n # ======================================\n per_page = rows*columns\n num_pages = _np.int(_np.ceil(num_imgs/per_page))\n fig_array = _np.empty(shape=num_pages, dtype=object)\n\n if num_pages > 1:\n logger.info('Multiple pages necessary')\n\n if pdf is not None:\n f = _PdfPages(pdf)\n\n for p in range(num_pages):\n # ======================================\n # Make figure\n # ======================================\n fig_array[p] = _plt.figure(figsize=figsize)\n\n # ======================================\n # Get number of rows on page\n # ======================================\n pg_max_ind = _np.min( [(p+1) * per_page - 1, max_ind] )\n num_rows = _np.int(_np.ceil((pg_max_ind+1 - p * per_page) / columns))\n\n for i in range(num_rows):\n # ======================================\n # Get images for column\n # ======================================\n i_min_ind = p * per_page + i * columns\n col_max_ind = _np.min([i_min_ind + columns - 1, max_ind])\n \n for j, image in enumerate(images[i_min_ind:col_max_ind+1]):\n ax = fig_array[p].add_subplot(gs[i, j])\n\n try:\n if _np.issubdtype(image.dtype, _np.integer):\n image = _np.array(image, dtype=float)\n except:\n pass\n\n plot = ax.imshow(image, **kwargs)\n if cmap is not None:\n plot.set_cmap(cmap)\n\n if cbar:\n fig_array[p].colorbar(plot)\n\n fig_array[p].tight_layout()\n if pdf is not None:\n f.savefig(fig_array[p])\n\n if not show:\n 
_plt.close(fig_array[p])\n\n if pdf is not None:\n f.close()\n\n return fig_array", "response": "Plots a batch of images in a grid, splitting them across multiple pages when necessary."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncreating an error from the given code and args and kwargs.", "response": "def error(code: int, *args, **kwargs) -> HedgehogCommandError:\n \"\"\"\n Creates an error from the given code, and args and kwargs.\n\n :param code: The acknowledgement code\n :param args: Exception args\n :param kwargs: Exception kwargs\n :return: the error for the given acknowledgement code\n \"\"\"\n # TODO add proper error code\n if code == FAILED_COMMAND and len(args) >= 1 and args[0] == \"Emergency Shutdown activated\":\n return EmergencyShutdown(*args, **kwargs)\n return _errors[code](*args, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncreates an error Acknowledgement message from this exception.", "response": "def to_message(self):\n \"\"\"\n Creates an error Acknowledgement message.\n The message's code and message are taken from this exception.\n\n :return: the message representing this exception\n \"\"\"\n from .messages import ack\n return ack.Acknowledgement(self.code, self.args[0] if len(self.args) > 0 else '')"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncleans up extra files littering the source tree.", "response": "def clean(options, info):\n \"\"\"Clean up extra files littering the source tree.\n\n options.paved.clean.dirs: directories to search recursively\n options.paved.clean.patterns: patterns to search for and remove\n \"\"\"\n info(\"Cleaning patterns %s\", options.paved.clean.patterns)\n for wd in options.paved.clean.dirs:\n info(\"Cleaning in %s\", wd)\n for p in options.paved.clean.patterns:\n for f in wd.walkfiles(p):\n f.remove()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef 
printoptions():\n '''print paver options.\n\n Prettified by json.\n `long_description` is removed\n '''\n x = json.dumps(environment.options,\n indent=4,\n sort_keys=True,\n skipkeys=True,\n cls=MyEncoder)\n print(x)", "response": "print paver options.\n Prettified by json.\n long_description is removed\n "} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef parse(self, data: RawMessage) -> Message:\n try:\n return self.receiver.parse(data)\n except KeyError as err:\n raise UnknownCommandError from err\n except DecodeError as err:\n raise UnknownCommandError(f\"{err}\") from err", "response": "Parses a binary protobuf message into a Message object."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef setup_axes(rows=1, cols=1, figsize=(8, 6), expand=True, tight_layout=None, **kwargs):\n\n if expand:\n figsize = (figsize[0]*cols, figsize[1]*rows)\n\n figargs = {}\n if isinstance(tight_layout, dict):\n figargs[\"tight_layout\"] = tight_layout\n elif tight_layout == \"pdf\":\n figargs[\"tight_layout\"] = {\"rect\": (0, 0, 1, 0.95)}\n\n dpi = kwargs.pop('dpi', None)\n\n fig, gs = _setup_figure(rows=rows, cols=cols, figsize=figsize, dpi=dpi, **figargs)\n\n axes = _np.empty(shape=(rows, cols), dtype=object)\n\n for i in range(rows):\n for j in range(cols):\n axes[i, j] = fig.add_subplot(gs[i, j], **kwargs)\n\n if axes.shape == (1, 1):\n return fig, axes[0, 0]\n else:\n return fig, axes", "response": "Sets up a figure and a grid of axes."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncreates a DCCGraph, a root Node, and the node's associated data class instance.", "response": "def factory(cls, data_class, **kwargs):\n \"\"\"Creates a ``DCCGraph``, a root :class:`Node`: and the node's\n associated data class instance. 
This factory is used to get around\n the chicken-and-egg problem of the ``DCCGraph`` and ``Nodes`` within\n it having pointers to each other.\n\n :param data_class: \n django model class that extends :class:`BaseNodeData` and is used\n to associate information with the nodes in this graph\n :param kwargs: \n arguments to pass to constructor of the data class instance that\n is to be created for the root node\n :returns: \n instance of the newly created ``DCCGraph``\n \"\"\"\n if not issubclass(data_class, BaseNodeData):\n raise AttributeError('data_class must be a BaseNodeData extender')\n\n content_type = ContentType.objects.get_for_model(data_class)\n graph = DCCGraph.objects.create(data_content_type=content_type)\n node = Node.objects.create(graph=graph)\n data_class.objects.create(node=node, **kwargs)\n graph.root = node\n graph.save()\n return graph"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncreates a DCCGraph from a django model class and a list of children.", "response": "def factory_from_graph(cls, data_class, root_args, children):\n \"\"\"Creates a ``DCCGraph`` and corresponding nodes. The root_args parm\n is a dictionary specifying the parameters for creating a :class:`Node`\n and its corresponding :class:`BaseNodeData` subclass from the\n data_class specified. 
The children parm is an iterable containing\n pairs of dictionaries and iterables, where the dictionaries specify\n the parameters for a :class:`BaseNodeData` subclass and the iterable\n the list of children.\n\n Example::\n\n DCCGraph.factory_from_graph(Label, \n {'name':'A'}, [\n ({'name':'B', []),\n ({'name':'C', [])\n ])\n\n creates the graph::\n\n A\n / \\\n B C\n\n :param data_class: \n django model class that extends :class:`BaseNodeData` and is used\n to associate information with the Nodes in this graph\n :param root_args: \n dictionary of arguments to pass to constructor of the data class\n instance that is to be created for the root node\n :param children: \n iterable with a list of dictionary and iterable pairs\n :returns: \n instance of the newly created ``DCCGraph``\n \"\"\"\n graph = cls.factory(data_class, **root_args)\n for child in children:\n cls._depth_create(graph.root, child[0], child[1])\n\n return graph"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nsearching the data nodes that are associated with this graph using the key word arguments as a filter and returns the attached Node objects.", "response": "def find_nodes(self, **kwargs):\n \"\"\"Searches the data nodes that are associated with this graph using\n the key word arguments as a filter and returns a\n :class:`django.db.models.query.QuerySet`` of the attached\n :class:`Node` objects.\n\n :param kwargs: \n filter arguments applied to searching the :class:`BaseNodeData`\n subclass associated with this graph.\n :returns: \n ``QuerySet`` of :class:`Node` objects \n \"\"\"\n filter_args = {}\n classname = self.data_content_type.model_class().__name__.lower()\n for key, value in kwargs.items():\n filter_args['%s__%s' % (classname, key)] = value\n\n return Node.objects.filter(**filter_args)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncreates a new Node based on the extending class and adds it as a child to this Node.", "response": "def 
add_child(self, **kwargs):\n \"\"\"Creates a new ``Node`` based on the extending class and adds it as\n a child to this ``Node``.\n\n :param kwargs: \n arguments for constructing the data object associated with this\n ``Node``\n :returns: \n extender of the ``Node`` class\n \"\"\"\n data_class = self.graph.data_content_type.model_class()\n node = Node.objects.create(graph=self.graph)\n data_class.objects.create(node=node, **kwargs)\n node.parents.add(self)\n self.children.add(node)\n return node"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef connect_child(self, node):\n if node.graph != self.graph:\n raise AttributeError('cannot connect nodes from different graphs')\n\n node.parents.add(self)\n self.children.add(node)", "response": "Adds the given node as a child of this one."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef ancestors(self):\n ancestors = set([])\n self._depth_ascend(self, ancestors)\n try:\n ancestors.remove(self)\n except KeyError:\n # we weren't ancestor of ourself, that's ok\n pass\n\n return list(ancestors)", "response": "Returns a list of the ancestors of this node."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning a list of the ancestors of this node but does not pass the root node even if the root has parents due to cycles.", "response": "def ancestors_root(self):\n \"\"\"Returns a list of the ancestors of this node but does not pass the\n root node, even if the root has parents due to cycles.\"\"\"\n if self.is_root():\n return []\n\n ancestors = set([])\n self._depth_ascend(self, ancestors, True)\n try:\n ancestors.remove(self)\n except KeyError:\n # we weren't ancestor of ourself, that's ok\n pass\n\n return list(ancestors)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning a list of descendents of this node.", "response": "def descendents(self):\n 
\"\"\"Returns a list of descendents of this node.\"\"\"\n visited = set([])\n self._depth_descend(self, visited)\n try:\n visited.remove(self)\n except KeyError:\n # we weren't descendent of ourself, that's ok\n pass\n\n return list(visited)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn a list of descendents of this node", "response": "def descendents_root(self):\n \"\"\"Returns a list of descendents of this node, if the root node is in\n the list (due to a cycle) it will be included but will not pass\n through it. \n \"\"\"\n visited = set([])\n self._depth_descend(self, visited, True)\n try:\n visited.remove(self)\n except KeyError:\n # we weren't descendent of ourself, that's ok\n pass\n\n return list(visited)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn True if the node can be removed and still leave the graph as a single connected entity.", "response": "def can_remove(self):\n \"\"\"Returns True if it is legal to remove this node and still leave the\n graph as a single connected entity, not splitting it into a forest.\n Only nodes with no children or those who cause a cycle can be deleted.\n \"\"\"\n if self.children.count() == 0:\n return True\n\n ancestors = set(self.ancestors_root())\n children = set(self.children.all())\n return children.issubset(ancestors)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nremove the node from the graph.", "response": "def remove(self):\n \"\"\"Removes the node from the graph. Note this does not remove the\n associated data object. 
See :func:`Node.can_remove` for limitations\n on what can be deleted.\n\n :returns: \n :class:`BaseNodeData` subclass associated with the deleted Node\n\n :raises AttributeError:\n if called on a ``Node`` that cannot be deleted\n \"\"\"\n if not self.can_remove():\n raise AttributeError('this node cannot be deleted')\n\n data = self.data\n self.parents.remove(self)\n self.delete()\n return data"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef prune(self):\n targets = self.descendents_root()\n try:\n targets.remove(self.graph.root)\n except ValueError:\n # root wasn't in the target list, no problem\n pass\n\n results = [n.data for n in targets]\n results.append(self.data)\n for node in targets:\n node.delete()\n\n for parent in self.parents.all():\n parent.children.remove(self)\n\n self.delete()\n return results", "response": "Removes the node and all descendents without looping back past the\n root."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef prune_list(self):\n targets = self.descendents_root()\n try:\n targets.remove(self.graph.root)\n except ValueError:\n # root wasn't in the target list, no problem\n pass\n\n targets.append(self)\n return targets", "response": "Returns a list of nodes that would be removed from this element."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncall to verify that the given rule can become a child of the current node. The current node is checked to see if the given rule is allowed.", "response": "def _child_allowed(self, child_rule):\n \"\"\"Called to verify that the given rule can become a child of the\n current node. 
\n\n :raises AttributeError: \n if the child is not allowed\n \"\"\"\n num_kids = self.node.children.count()\n num_kids_allowed = len(self.rule.children)\n if not self.rule.multiple_paths:\n num_kids_allowed = 1\n\n if num_kids >= num_kids_allowed:\n raise AttributeError('Rule %s only allows %s children' % (\n self.rule_name, num_kids_allowed))\n\n # verify not a duplicate\n for node in self.node.children.all():\n if node.data.rule_label == child_rule.class_label:\n raise AttributeError('Child rule already exists')\n\n # check if the given rule is allowed as a child\n if child_rule not in self.rule.children:\n raise AttributeError('Rule %s is not a valid child of Rule %s' % (\n child_rule.__name__, self.rule_name))"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef add_child_rule(self, child_rule):\n self._child_allowed(child_rule)\n child_node = self.node.add_child(rule_label=child_rule.class_label)\n return child_node.data", "response": "Adds a child rule to the flow."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nadding a connection to a given child node in the flow.", "response": "def connect_child(self, child_node):\n \"\"\"Adds a connection to an existing rule in the :class:`Flow` graph.\n The given :class:`Rule` subclass must be allowed to be connected at\n this stage of the flow according to the hierarchy of rules.\n\n :param child_node: \n ``FlowNodeData`` to attach as a child\n \"\"\"\n self._child_allowed(child_node.rule)\n self.node.connect_child(child_node.node)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn True if there is a State object that uses this Flow.", "response": "def in_use(self):\n \"\"\"Returns True if there is a :class:`State` object that uses this\n ``Flow``\"\"\"\n state = State.objects.filter(flow=self).first()\n return bool(state)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a
function in Python 3 that\ncreates a new state based on a flow.", "response": "def start(self, flow):\n \"\"\"Factory method for a running state based on a flow. Creates and\n returns a ``State`` object and calls the associated\n :func:`Rule.on_enter` method.\n\n :param flow: \n :class:`Flow` which defines this state machine\n :returns: \n newly created instance\n \"\"\"\n state = State.objects.create(flow=flow, \n current_node=flow.state_graph.root)\n flow.state_graph.root.data.rule.on_enter(state)\n return state"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef next_state(self, rule=None):\n num_kids = self.current_node.children.count()\n next_node = None\n if num_kids == 0:\n raise AttributeError('No next state in this Flow id=%s' % (\n self.flow.id))\n elif num_kids == 1:\n next_node = self.current_node.children.first()\n else:\n if not rule:\n raise AttributeError(('Current Rule %s is multipath but no '\n 'choice was passed in') % self.current_node.data.rule_name)\n\n for node in self.current_node.children.all():\n if node.data.rule_label == rule.class_label:\n next_node = node\n break\n\n if not next_node:\n raise AttributeError(('Current Rule %s is multipath and the '\n 'Rule choice passed in was not in the Flow') % (\n self.current_node.data.rule_name))\n\n self.current_node.data.rule.on_leave(self)\n next_node.data.rule.on_enter(self)\n self.current_node = next_node\n self.save()", "response": "Proceeds to the next step in the flow."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns location data. 
http://dev.wheniwork.com/#get-existing-location", "response": "def get_location(self, location_id):\n \"\"\"\n Returns location data.\n\n http://dev.wheniwork.com/#get-existing-location\n \"\"\"\n url = \"/2/locations/%s\" % location_id\n\n return self.location_from_json(self._get_resource(url)[\"location\"])"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns a list of locations.", "response": "def get_locations(self):\n \"\"\"\n Returns a list of locations.\n\n http://dev.wheniwork.com/#listing-locations\n \"\"\"\n url = \"/2/locations\"\n\n data = self._get_resource(url)\n locations = []\n for entry in data['locations']:\n locations.append(self.location_from_json(entry))\n\n return locations"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef y(self):\n if self._y is None:\n self._y = self.y_unweighted/self.y_error\n return self._y", "response": "The error-weighted y-values used in the linear least squares fit."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning the result beta of the linear least squares fit.", "response": "def beta(self):\n \"\"\"\n The result :math:`\\\\beta` of the linear least squares\n \"\"\"\n if self._beta is None:\n # This is the linear least squares matrix formalism\n self._beta = _np.dot(_np.linalg.pinv(self.X), self.y)\n return self._beta"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning the covariance matrix for the result.", "response": "def covar(self):\n \"\"\"\n The covariance matrix for the result :math:`\\\\beta`\n \"\"\"\n if self._covar is None:\n self._covar = _np.linalg.inv(_np.dot(_np.transpose(self.X), self.X))\n return self._covar"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn the reduced chi-square of the linear least squares", "response": "def chisq_red(self):\n \"\"\"\n The reduced chi-square of the linear least squares\n \"\"\"\n if
self._chisq_red is None:\n self._chisq_red = chisquare(self.y_unweighted.transpose(), _np.dot(self.X_unweighted, self.beta), self.y_error, ddof=3, verbose=False)\n return self._chisq_red"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncreating the task on the server", "response": "def create(self, server):\n \"\"\"Create the task on the server\"\"\"\n if len(self.geometries) == 0:\n raise Exception('no geometries')\n return server.post(\n 'task_admin',\n self.as_payload(),\n replacements={\n 'slug': self.__challenge__.slug,\n 'identifier': self.identifier})"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nupdating the task on the server", "response": "def update(self, server):\n \"\"\"Update existing task on the server\"\"\"\n return server.put(\n 'task_admin',\n self.as_payload(),\n replacements={\n 'slug': self.__challenge__.slug,\n 'identifier': self.identifier})"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef from_server(cls, server, slug, identifier):\n task = server.get(\n 'task',\n replacements={\n 'slug': slug,\n 'identifier': identifier})\n return cls(**task)", "response": "Retrieve a task from the server"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef formatter(color, s):\n if no_coloring:\n return s\n return \"{begin}{s}{reset}\".format(begin=color, s=s, reset=Colors.RESET)", "response": "Formats a string with color"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef reload(self, *fields, **kwargs):\n max_depth = 1\n if fields and isinstance(fields[0], int):\n max_depth = fields[0]\n fields = fields[1:]\n elif \"max_depth\" in kwargs:\n max_depth = kwargs[\"max_depth\"]\n\n if not self.pk:\n raise self.DoesNotExist(\"Document does not exist\")\n obj = self._qs.read_preference(ReadPreference.PRIMARY).filter(\n 
**self._object_key).only(*fields).limit(1\n ).select_related(max_depth=max_depth)\n\n if obj:\n obj = obj[0]\n else:\n raise self.DoesNotExist(\"Document does not exist\")\n\n for field in self._fields_ordered:\n if not fields or field in fields:\n try:\n setattr(self, field, self._reload(field, obj[field]))\n except KeyError:\n # If field is removed from the database while the object\n # is in memory, a reload would cause a KeyError\n # i.e. obj.update(unset__field=1) followed by obj.reload()\n delattr(self, field)\n\n # BUG FIX BY US HERE:\n if not fields:\n self._changed_fields = obj._changed_fields\n else:\n for field in fields:\n field = self._db_field_map.get(field, field)\n if field in self._changed_fields:\n self._changed_fields.remove(field)\n self._created = False\n return self", "response": "Reloads all attributes of the object from the database."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef pdf2png(file_in, file_out):\n command = 'convert -display 37.5 {} -resize 600 -append {}'.format(file_in, file_out)\n _subprocess.call(_shlex.split(command))", "response": "Convert a pdf file to a png file."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_user(self, user_id):\n url = \"/2/users/%s\" % user_id\n\n return self.user_from_json(self._get_resource(url)[\"user\"])", "response": "Returns user profile data."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_users(self, params={}):\n param_list = [(k, params[k]) for k in sorted(params)]\n url = \"/2/users/?%s\" % urlencode(param_list)\n\n data = self._get_resource(url)\n users = []\n for entry in data[\"users\"]:\n users.append(self.user_from_json(entry))\n\n return users", "response": "Returns a list of users."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef 
_setVirtualEnv():\n try:\n activate = options.virtualenv.activate_cmd\n except AttributeError:\n activate = None\n\n if activate is None:\n virtualenv = path(os.environ.get('VIRTUAL_ENV', ''))\n if not virtualenv:\n virtualenv = options.paved.cwd\n else:\n virtualenv = path(virtualenv)\n\n activate = virtualenv / 'bin' / 'activate'\n\n if activate.exists():\n info('Using default virtualenv at %s' % activate)\n options.setdotted('virtualenv.activate_cmd', 'source %s' % activate)", "response": "Attempt to set the virtualenv activate command if it hasn't been specified."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nrunning the given command inside the virtual environment", "response": "def shv(command, capture=False, ignore_error=False, cwd=None):\n \"\"\"Run the given command inside the virtual environment, if available:\n \"\"\"\n _setVirtualEnv()\n try:\n command = \"%s; %s\" % (options.virtualenv.activate_cmd, command)\n except AttributeError:\n pass\n return bash(command, capture=capture, ignore_error=ignore_error, cwd=cwd)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef update(dst, src):\n stack = [(dst, src)]\n\n def isdict(o):\n return hasattr(o, 'keys')\n\n while stack:\n current_dst, current_src = stack.pop()\n for key in current_src:\n if key not in current_dst:\n current_dst[key] = current_src[key]\n else:\n if isdict(current_src[key]) and isdict(current_dst[key]):\n stack.append((current_dst[key], current_src[key]))\n else:\n current_dst[key] = current_src[key]\n return dst", "response": "Recursively updates the destination dict-like object with the source dict-like object."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef pip_install(*args):\n download_cache = ('--download-cache=%s ' % options.paved.pip.download_cache) if options.paved.pip.download_cache else ''\n shv('pip install %s%s' %
(download_cache, ' '.join(args)))", "response": "Send the given arguments to pip install."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _get_resource(self, url, data_key=None):\n headers = {\"Accept\": \"application/json\"}\n if self.token:\n headers[\"W-Token\"] = \"%s\" % self.token\n response = WhenIWork_DAO().getURL(url, headers)\n\n if response.status != 200:\n raise DataFailureException(url, response.status, response.data)\n\n return json.loads(response.data)", "response": "When I Work GET method. Return representation of the requested\n resource."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _put_resource(self, url, body):\n headers = {\"Content-Type\": \"application/json\",\n \"Accept\": \"application/json\"}\n if self.token:\n headers[\"W-Token\"] = \"%s\" % self.token\n response = WhenIWork_DAO().putURL(url, headers, json.dumps(body))\n\n if not (response.status == 200 or response.status == 201 or\n response.status == 204):\n raise DataFailureException(url, response.status, response.data)\n\n return json.loads(response.data)", "response": "When I Work PUT method."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nget all the shifts in the system.", "response": "def get_shifts(self, params={}):\n \"\"\"\n List shifts\n\n http://dev.wheniwork.com/#listing-shifts\n \"\"\"\n param_list = [(k, params[k]) for k in sorted(params)]\n url = \"/2/shifts/?%s\" % urlencode(param_list)\n\n data = self._get_resource(url)\n shifts = []\n locations = {}\n sites = {}\n positions = {}\n users = {}\n for entry in data.get(\"locations\", []):\n location = Locations.location_from_json(entry)\n locations[location.location_id] = location\n for entry in data.get(\"sites\", []):\n site = Sites.site_from_json(entry)\n sites[site.site_id] = site\n for entry in data.get(\"positions\", []):\n position = 
Positions.position_from_json(entry)\n positions[position.position_id] = position\n for entry in data.get(\"users\", []):\n user = Users.user_from_json(entry)\n users[user.user_id] = user\n for entry in data[\"shifts\"]:\n shift = self.shift_from_json(entry)\n shifts.append(shift)\n\n for shift in shifts:\n shift.location = locations.get(shift.location_id, None)\n shift.site = sites.get(shift.site_id, None)\n shift.position = positions.get(shift.position_id, None)\n shift.user = users.get(shift.user_id, None)\n\n return shifts"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef create_shift(self, params={}):\n url = \"/2/shifts/\"\n body = params\n\n data = self._post_resource(url, body)\n shift = self.shift_from_json(data[\"shift\"])\n\n return shift", "response": "Creates a shift with the given params"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef delete_shifts(self, shifts):\n url = \"/2/shifts/?%s\" % urlencode(\n {'ids': \",\".join(str(s) for s in shifts)})\n\n data = self._delete_resource(url)\n\n return data", "response": "Delete existing shifts.\n\n http://dev.wheniwork.com/#delete-shift"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef delete_past_events(self):\n lapsed = datetime.datetime.now() - datetime.timedelta(days=90)\n for event in self.filter(start_date__lte=lapsed, featured=0, recap=''):\n event.delete()", "response": "Deletes events that started more than 90 days ago, are not featured, and have no recap."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ndetermining if the event ended recently.", "response": "def recently_ended(self):\n \"\"\"\n Determines if event ended recently (within 5 days).\n Useful for attending list.\n \"\"\"\n if self.ended():\n end_date = self.end_date if self.end_date else
self.start_date\n if end_date >= offset.date():\n return True"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning a list of event and update comments.", "response": "def all_comments(self):\n \"\"\"\n Returns combined list of event and update comments.\n \"\"\"\n ctype = ContentType.objects.get(app_label__exact=\"happenings\", model__exact='event')\n update_ctype = ContentType.objects.get(app_label__exact=\"happenings\", model__exact='update')\n update_ids = self.update_set.values_list('id', flat=True)\n\n return Comment.objects.filter(\n Q(content_type=ctype.id, object_pk=self.id) |\n Q(content_type=update_ctype.id, object_pk__in=update_ids)\n )"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns chained list of event and update images.", "response": "def get_all_images(self):\n \"\"\"\n Returns chained list of event and update images.\n \"\"\"\n self_imgs = self.image_set.all()\n update_ids = self.update_set.values_list('id', flat=True)\n u_images = UpdateImage.objects.filter(update__id__in=update_ids)\n\n return list(chain(self_imgs, u_images))"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_all_images_count(self):\n self_imgs = self.image_set.count()\n update_ids = self.update_set.values_list('id', flat=True)\n u_images = UpdateImage.objects.filter(update__id__in=update_ids).count()\n count = self_imgs + u_images\n\n return count", "response": "Gets count of all images from both event and updates."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_top_assets(self):\n images = self.get_all_images()[0:14]\n video = []\n if supports_video:\n video = self.eventvideo_set.all()[0:10]\n\n return list(chain(images, video))[0:15]", "response": "Gets the top assets for this event."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function 
for\nwrapping matplotlib.pyplot.plot().", "response": "def plot_featured(*args, **kwargs):\n \"\"\"\n Wrapper for matplotlib.pyplot.plot() / errorbar().\n\n Takes options:\n\n * 'error': if true, use :func:`matplotlib.pyplot.errorbar` instead of :func:`matplotlib.pyplot.plot`. *\\*args* and *\\*\\*kwargs* passed through here.\n * 'fig': figure to use.\n * 'figlabel': figure label.\n * 'legend': legend location.\n * 'toplabel': top label of plot.\n * 'xlabel': x-label of plot.\n * 'ylabel': y-label of plot.\n\n \"\"\"\n # Strip off options specific to plot_featured\n toplabel = kwargs.pop('toplabel', None)\n xlabel = kwargs.pop('xlabel', None)\n ylabel = kwargs.pop('ylabel', None)\n legend = kwargs.pop('legend', None)\n error = kwargs.pop('error', None)\n # save = kwargs.pop('save', False)\n figlabel = kwargs.pop('figlabel', None)\n fig = kwargs.pop('fig', None)\n\n if figlabel is not None:\n fig = _figure(figlabel)\n elif fig is None:\n try:\n fig = _plt.gcf()\n except:\n fig = _plt.figure()\n\n # Pass everything else to plot\n if error is None:\n _plt.plot(*args, **kwargs)\n else:\n _plt.errorbar(*args, **kwargs)\n\n # Format plot as desired\n _addlabel(toplabel, xlabel, ylabel, fig=fig)\n if legend is not None:\n _plt.legend(legend)\n\n return fig"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef decorate(msg=\"\", waitmsg=\"Please wait\"):\n def decorator(func):\n @functools.wraps(func)\n def wrapper(*args, **kwargs):\n spin = Spinner(msg=msg, waitmsg=waitmsg)\n spin.start()\n a = None\n try:\n a = func(*args, **kwargs)\n except Exception as e:\n spin.msg = \"Something went wrong: \"\n spin.stop_spinning()\n spin.join()\n raise e\n spin.stop_spinning()\n spin.join()\n return a\n\n return wrapper\n\n return decorator", "response": "Decorator that displays a spinner while the wrapped function runs."} {"SOURCE": "codesearchnet", "instruction": "Here you have a
function in Python 3, explain what it does\ndef launch(title, items, selected=None):\n resp = {"code": -1, "done": False}\n curses.wrapper(Menu, title, items, selected, resp)\n return resp", "response": "Launches a new menu."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef save(self, *args, **kwargs):\n rerank = kwargs.pop('rerank', True)\n if rerank:\n if not self.id:\n self._process_new_rank_obj()\n elif self.rank == self._rank_at_load:\n # nothing changed\n pass\n else:\n self._process_moved_rank_obj()\n\n super(RankedModel, self).save(*args, **kwargs)", "response": "Overridden method that handles re-ranking of objects and the rank field of the related object."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nremove any blank ranks in the order.", "response": "def repack(self):\n \"\"\"Removes any blank ranks in the order.\"\"\"\n items = self.grouped_filter().order_by('rank').select_for_update()\n for count, item in enumerate(items):\n item.rank = count + 1\n item.save(rerank=False)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nqueries the database for the same object that is passed in, refetching its contents and locking its row until the next commit.", "response": "def refetch_for_update(obj):\n \"\"\"Queries the database for the same object that is passed in, refetching\n its contents and runs ``select_for_update()`` to lock the corresponding\n row until the next commit.\n\n :param obj:\n Object to refetch\n :returns:\n Refreshed version of the object\n \"\"\"\n return obj.__class__.objects.select_for_update().get(id=obj.id)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_field_names(obj, ignore_auto=True, ignore_relations=True, \n exclude=[]):\n \"\"\"Returns the field names of a Django model object.\n\n :param obj: the
Django model class or object instance to get the fields\n from\n :param ignore_auto: ignore any fields of type AutoField. Defaults to True\n :param ignore_relations: ignore any fields that involve relations such as\n the ForeignKey or ManyToManyField\n :param exclude: exclude anything in this list from the results\n\n :returns: generator of found field names\n \"\"\"\n\n from django.db.models import (AutoField, ForeignKey, ManyToManyField, \n ManyToOneRel, OneToOneField, OneToOneRel)\n\n for field in obj._meta.get_fields():\n if ignore_auto and isinstance(field, AutoField):\n continue\n\n if ignore_relations and (isinstance(field, ForeignKey) or\n isinstance(field, ManyToManyField) or\n isinstance(field, ManyToOneRel) or\n isinstance(field, OneToOneRel) or\n isinstance(field, OneToOneField)):\n # optimization is killing coverage measure, have to put no-op that\n # does something\n a = 1; a\n continue\n\n if field.name in exclude:\n continue\n\n yield field.name", "response": "Returns the field names of a Django model class or object instance."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngetting the object s attribute with respect to the base tree of the object.", "response": "def get_obj_attr(obj, attr):\n \"\"\"Works like getattr() but supports django's double underscore object\n dereference notation.\n\n Example usage:\n\n .. code-block:: python\n\n >>> get_obj_attr(book, 'writer__age')\n 42\n >>> get_obj_attr(book, 'publisher__address')\n
\n\n :param obj: \n Object to start the dereference from\n\n :param attr:\n String name of attribute to return\n\n :returns:\n Dereferenced object \n\n :raises:\n AttributeError if the attribute in question does not exist\n \"\"\"\n # handle '__' referencing like in QuerySets\n fields = attr.split('__')\n field_obj = getattr(obj, fields[0])\n\n for field in fields[1:]:\n # keep going down the reference tree\n field_obj = getattr(field_obj, field)\n\n return field_obj"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef as_list(self):\n result = []\n for child in self.children:\n self._depth_traversal(child, result)\n\n return result", "response": "Returns a list of strings describing the full paths and patterns\n along with the name of the urls."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nregister error handlers for all handled HTTP error codes on the app.", "response": "def register(\n app):\n \"\"\"\n Register all HTTP error code error handlers\n\n Currently, errors are handled by the JSON error handler.\n \"\"\"\n\n # Pick a handler based on the requested format.
Currently we assume the\n # caller wants JSON.\n error_handler = json.http_exception_error_handler\n\n\n @app.errorhandler(400)\n def handle_bad_request(\n exception):\n return error_handler(exception)\n\n\n @app.errorhandler(404)\n def handle_not_found(\n exception):\n return error_handler(exception)\n\n\n @app.errorhandler(405)\n def handle_method_not_allowed(\n exception):\n return error_handler(exception)\n\n\n @app.errorhandler(422)\n def handle_unprocessable_entity(\n exception):\n return error_handler(exception)\n\n\n @app.errorhandler(500)\n def handle_internal_server_error(\n exception):\n return error_handler(exception)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nplots the base on the base axes.", "response": "def plot(*args, ax=None, **kwargs):\n \"\"\"\n Plots but automatically resizes x axis.\n\n .. versionadded:: 1.4\n\n Parameters\n ----------\n args\n Passed on to :meth:`matplotlib.axis.Axis.plot`.\n ax : :class:`matplotlib.axis.Axis`, optional\n The axis to plot to.\n kwargs\n Passed on to :meth:`matplotlib.axis.Axis.plot`.\n\n \"\"\"\n if ax is None:\n fig, ax = _setup_axes()\n\n pl = ax.plot(*args, **kwargs)\n\n if _np.shape(args)[0] > 1:\n if type(args[1]) is not str:\n min_x = min(args[0])\n max_x = max(args[0])\n ax.set_xlim((min_x, max_x))\n\n return pl"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncreates a vector of values over an interval with a specified step size.", "response": "def linspacestep(start, stop, step=1):\n \"\"\"\n Create a vector of values over an interval with a specified step size.\n\n Parameters\n ----------\n\n start : float\n The beginning of the interval.\n stop : float\n The end of the interval.\n step : float\n The step size.\n\n Returns\n -------\n vector : :class:`numpy.ndarray`\n The vector of values.\n \"\"\"\n # Find an integer number of steps\n numsteps = _np.int((stop-start)/step)\n\n # Do a linspace over the 
new range\n # that has the correct endpoint\n return _np.linspace(start, start+step*numsteps, numsteps+1)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncreate a new logger with the given name filename and indent_offset.", "response": "def mylogger(name=None, filename=None, indent_offset=7, level=_logging.DEBUG, stream_level=_logging.WARN, file_level=_logging.INFO):\n \"\"\"\n Sets up logging to *filename*.debug.log, *filename*.log, and the terminal. *indent_offset* attempts to line up the lowest indent level to 0. Custom levels:\n\n * *level*: Parent logging level.\n * *stream_level*: Logging level for console stream.\n * *file_level*: Logging level for general file log.\n \"\"\"\n if name is not None:\n logger = _logging.getLogger(name)\n else:\n logger = _logging.getLogger()\n\n logger.setLevel(level)\n\n fmtr = IndentFormatter(indent_offset=indent_offset)\n fmtr_msgonly = IndentFormatter('%(funcName)s:%(lineno)d: %(message)s')\n\n ch = _logging.StreamHandler()\n ch.setLevel(stream_level)\n ch.setFormatter(fmtr_msgonly)\n logger.addHandler(ch)\n\n if filename is not None:\n debugh = _logging.FileHandler(filename='{}_debug.log'.format(filename), mode='w')\n debugh.setLevel(_logging.DEBUG)\n debugh.setFormatter(fmtr_msgonly)\n logger.addHandler(debugh)\n\n fh = _logging.FileHandler(filename='{}.log'.format(filename), mode='w')\n fh.setLevel(file_level)\n fh.setFormatter(fmtr)\n logger.addHandler(fh)\n\n return logger"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef selected_course(func):\n @wraps(func)\n def inner(*args, **kwargs):\n course = Course.get_selected()\n return func(course, *args, **kwargs)\n return inner", "response": "Decorator to wrap a function to return the selected course"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef selected_exercise(func):\n @wraps(func)\n def inner(*args, 
**kwargs):\n exercise = Exercise.get_selected()\n return func(exercise, *args, **kwargs)\n return inner", "response": "Decorator for functions that take a single exercise and passes the selected exercise as the first argument to func."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef false_exit(func):\n @wraps(func)\n def inner(*args, **kwargs):\n ret = func(*args, **kwargs)\n if ret is False:\n if \"TMC_TESTING\" in os.environ:\n raise TMCExit()\n else:\n sys.exit(-1)\n return ret\n return inner", "response": "Decorator to exit the program immediately if the function returns False."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef configure(server=None, username=None, password=None, tid=None, auto=False):\n if not server and not username and not password and not tid:\n if Config.has():\n if not yn_prompt(\"Override old configuration\", False):\n return False\n reset_db()\n if not server:\n while True:\n server = input(\"Server url [https://tmc.mooc.fi/mooc/]: \").strip()\n if len(server) == 0:\n server = \"https://tmc.mooc.fi/mooc/\"\n if not server.endswith('/'):\n server += '/'\n if not (server.startswith(\"http://\")\n or server.startswith(\"https://\")):\n ret = custom_prompt(\n \"Server should start with http:// or https://\\n\" +\n \"R: Retry, H: Assume http://, S: Assume https://\",\n [\"r\", \"h\", \"s\"], \"r\")\n if ret == \"r\":\n continue\n # Strip previous schema\n if \"://\" in server:\n server = server.split(\"://\")[1]\n if ret == \"h\":\n server = \"http://\" + server\n elif ret == \"s\":\n server = \"https://\" + server\n break\n\n print(\"Using URL: '{0}'\".format(server))\n while True:\n if not username:\n username = input(\"Username: \")\n if not password:\n password = getpass(\"Password: \")\n # wow, such security\n token = b64encode(\n bytes(\"{0}:{1}\".format(username, password), encoding='utf-8')\n 
).decode(\"utf-8\")\n\n try:\n api.configure(url=server, token=token, test=True)\n except APIError as e:\n print(e)\n if auto is False and yn_prompt(\"Retry authentication\"):\n username = password = None\n continue\n return False\n break\n if tid:\n select(course=True, tid=tid, auto=auto)\n else:\n select(course=True)", "response": "Configure tmc. py to use your account."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef download(course, tid=None, dl_all=False, force=False, upgradejava=False,\n update=False):\n \"\"\"\n Download the exercises from the server.\n \"\"\"\n\n def dl(id):\n download_exercise(Exercise.get(Exercise.tid == id),\n force=force,\n update_java=upgradejava,\n update=update)\n\n if dl_all:\n for exercise in list(course.exercises):\n dl(exercise.tid)\n elif tid is not None:\n dl(int(tid))\n else:\n for exercise in list(course.exercises):\n if not exercise.is_completed:\n dl(exercise.tid)\n else:\n exercise.update_downloaded()", "response": "Download the exercises from the server."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ngo to the next exercise.", "response": "def skip(course, num=1):\n \"\"\"\n Go to the next exercise.\n \"\"\"\n sel = None\n try:\n sel = Exercise.get_selected()\n if sel.course.tid != course.tid:\n sel = None\n except NoExerciseSelected:\n pass\n\n if sel is None:\n sel = course.exercises.first()\n else:\n try:\n sel = Exercise.get(Exercise.id == sel.id + num)\n except peewee.DoesNotExist:\n print(\"There are no more exercises in this course.\")\n return False\n\n sel.set_select()\n list_all(single=sel)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef run(exercise, command):\n Popen(['nohup', command, exercise.path()], stdout=DEVNULL, stderr=DEVNULL)", "response": "Spawns a process with command path - of - exercise"} {"SOURCE": "codesearchnet", 
"instruction": "Implement a function in Python 3 to\nselect a course or an exercise.", "response": "def select(course=False, tid=None, auto=False):\n \"\"\"\n Select a course or an exercise.\n \"\"\"\n if course:\n update(course=True)\n course = None\n try:\n course = Course.get_selected()\n except NoCourseSelected:\n pass\n\n ret = {}\n if not tid:\n ret = Menu.launch(\"Select a course\",\n Course.select().execute(),\n course)\n else:\n ret[\"item\"] = Course.get(Course.tid == tid)\n if \"item\" in ret:\n ret[\"item\"].set_select()\n update()\n if ret[\"item\"].path == \"\":\n select_a_path(auto=auto)\n # Selects the first exercise in this course\n skip()\n return\n else:\n print(\"You can select the course with `tmc select --course`\")\n return\n else:\n selected = None\n try:\n selected = Exercise.get_selected()\n except NoExerciseSelected:\n pass\n\n ret = {}\n if not tid:\n ret = Menu.launch(\"Select an exercise\",\n Course.get_selected().exercises,\n selected)\n else:\n ret[\"item\"] = Exercise.byid(tid)\n if \"item\" in ret:\n ret[\"item\"].set_select()\n print(\"Selected {}\".format(ret[\"item\"]))"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nsubmits the selected exercise to the server.", "response": "def submit(course, tid=None, pastebin=False, review=False):\n \"\"\"\n Submit the selected exercise to the server.\n \"\"\"\n if tid is not None:\n return submit_exercise(Exercise.byid(tid),\n pastebin=pastebin,\n request_review=review)\n else:\n sel = Exercise.get_selected()\n if not sel:\n raise NoExerciseSelected()\n return submit_exercise(sel, pastebin=pastebin, request_review=review)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nsend the selected exercise to the TMC pastebin.", "response": "def paste(tid=None, review=False):\n \"\"\"\n Sends the selected exercise to the TMC pastebin.\n \"\"\"\n submit(pastebin=True, tid=tid, review=False)"} {"SOURCE": 
"codesearchnet", "instruction": "Write a Python 3 function for\nlisting all of the exercises in the current course.", "response": "def list_all(course, single=None):\n \"\"\"\n Lists all of the exercises in the current course.\n \"\"\"\n\n def bs(val):\n return \"\u25cf\" if val else \" \"\n\n def bc(val):\n return as_success(\"\u2714\") if val else as_error(\"\u2718\")\n\n def format_line(exercise):\n return \"{0} \u2502 {1} \u2502 {2} \u2502 {3} \u2502 {4}\".format(exercise.tid,\n bs(exercise.is_selected),\n bc(exercise.is_downloaded),\n bc(exercise.is_completed),\n exercise.menuname())\n\n print(\"ID{0}\u2502 S \u2502 D \u2502 C \u2502 Name\".format(\n (len(str(course.exercises[0].tid)) - 1) * \" \"\n ))\n if single:\n print(format_line(single))\n return\n for exercise in course.exercises:\n # ToDo: use a pager\n print(format_line(exercise))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef update(course=False):\n if course:\n with Spinner.context(msg=\"Updated course metadata.\",\n waitmsg=\"Updating course metadata.\"):\n for course in api.get_courses():\n old = None\n try:\n old = Course.get(Course.tid == course[\"id\"])\n except peewee.DoesNotExist:\n old = None\n if old:\n old.details_url = course[\"details_url\"]\n old.save()\n continue\n Course.create(tid=course[\"id\"], name=course[\"name\"],\n details_url=course[\"details_url\"])\n else:\n selected = Course.get_selected()\n\n # with Spinner.context(msg=\"Updated exercise metadata.\",\n # waitmsg=\"Updating exercise metadata.\"):\n print(\"Updating exercise data.\")\n for exercise in api.get_exercises(selected):\n old = None\n try:\n old = Exercise.byid(exercise[\"id\"])\n except peewee.DoesNotExist:\n old = None\n if old is not None:\n old.name = exercise[\"name\"]\n old.course = selected.id\n old.is_attempted = exercise[\"attempted\"]\n old.is_completed = exercise[\"completed\"]\n old.deadline = exercise.get(\"deadline\")\n 
old.is_downloaded = os.path.isdir(old.path())\n old.return_url = exercise[\"return_url\"]\n old.zip_url = exercise[\"zip_url\"]\n old.submissions_url = exercise[\"exercise_submissions_url\"]\n old.save()\n download_exercise(old, update=True)\n else:\n ex = Exercise.create(tid=exercise[\"id\"],\n name=exercise[\"name\"],\n course=selected.id,\n is_attempted=exercise[\"attempted\"],\n is_completed=exercise[\"completed\"],\n deadline=exercise.get(\"deadline\"),\n return_url=exercise[\"return_url\"],\n zip_url=exercise[\"zip_url\"],\n submissions_url=exercise[(\"exercise_\"\n \"submissions_\"\n \"url\")])\n ex.is_downloaded = os.path.isdir(ex.path())\n ex.save()", "response": "Update the data of courses and exercises."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ndetermining the type of x", "response": "def determine_type(x):\n \"\"\"Determine the type of x\"\"\"\n types = (int, float, str)\n _type = next(filter(lambda a: is_type(a, x), types))\n return _type(x)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef dmap(fn, record):\n values = (fn(v) for k, v in record.items())\n return dict(zip(record, values))", "response": "map for a dictionary"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef apply_types(use_types, guess_type, line):\n new_line = {}\n for k, v in line.items():\n if k in use_types:\n new_line[k] = force_type(use_types[k], v)\n elif guess_type:\n new_line[k] = determine_type(v)\n else:\n new_line[k] = v\n return new_line", "response": "Apply the types on the elements of the line"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreads a CSV file containing a set of items", "response": "def read_csv(filename, delimiter=\",\", skip=0, guess_type=True, has_header=True, use_types={}):\n \"\"\"Read a CSV file\n \n Usage\n -----\n >>> data = read_csv(filename, 
delimiter=delimiter, skip=skip,\n guess_type=guess_type, has_header=True, use_types={}) \n\n # Use specific types\n >>> types = {\"sepal.length\": int, \"petal.width\": float}\n >>> data = read_csv(filename, guess_type=guess_type, use_types=types) \n\n keywords\n :has_header:\n Determine whether the file has a header or not\n\n \"\"\"\n with open(filename, 'r') as f:\n # Skip the n first lines\n if has_header:\n header = f.readline().strip().split(delimiter)\n else:\n header = None\n\n for i in range(skip):\n f.readline()\n\n for line in csv.DictReader(f, delimiter=delimiter, fieldnames=header):\n if use_types:\n yield apply_types(use_types, guess_type, line)\n elif guess_type:\n yield dmap(determine_type, line)\n else:\n yield line"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef write_csv(filename, header, data=None, rows=None, mode=\"w\"):\n if data == rows == None:\n msg = \"You must specify either data or rows\"\n raise ValueError(msg)\n\n elif data != None and rows != None:\n msg = \"You must specify either data or rows. 
Not both\"\n raise ValueError(msg)\n\n data_header = dict((x, x) for x in header)\n\n with open(filename, mode) as f:\n if data:\n writer = csv.DictWriter(f, fieldnames=header)\n if mode == \"w\":\n writer.writerow(data_header)\n writer.writerows(data)\n elif rows:\n writer = csv.writer(f)\n if mode == \"w\":\n writer.writerow(header)\n writer.writerows(rows)\n\n print(\"Saved %s.\" % filename)", "response": "Writes the data to the specified filename."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef format_to_csv(filename, skiprows=0, delimiter=\"\"):\n if not delimiter:\n delimiter = \"\\t\"\n\n input_file = open(filename, \"r\")\n\n if skiprows:\n [input_file.readline() for _ in range(skiprows)]\n \n new_filename = os.path.splitext(filename)[0] + \".csv\"\n output_file = open(new_filename, \"w\")\n\n header = input_file.readline().split()\n reader = csv.DictReader(input_file, fieldnames=header, delimiter=delimiter)\n writer = csv.DictWriter(output_file, fieldnames=header, delimiter=\",\")\n \n # Write header\n writer.writerow(dict((x, x) for x in header))\n \n # Write rows\n for line in reader:\n if None in line: del line[None]\n writer.writerow(line)\n \n input_file.close()\n output_file.close()\n print(\"Saved %s.\" % new_filename)", "response": "Convert a file to a .
csv file"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nsave the figure to a file in a folder.", "response": "def savefig(filename, path=\"figs\", fig=None, ext='eps', verbose=False, **kwargs):\n \"\"\"\n Save the figure *fig* (optional, if not specified, latest figure in focus) to *filename* in the path *path* with extension *ext*.\n\n *\\*\\*kwargs* is passed to :meth:`matplotlib.figure.Figure.savefig`.\n \"\"\"\n filename = os.path.join(path, filename)\n final_filename = '{}.{}'.format(filename, ext).replace(\" \", \"\").replace(\"\\n\", \"\")\n final_filename = os.path.abspath(final_filename)\n\n final_path = os.path.dirname(final_filename)\n if not os.path.exists(final_path):\n os.makedirs(final_path)\n\n if verbose:\n print('Saving file: {}'.format(final_filename))\n\n if fig is not None:\n fig.savefig(final_filename, bbox_inches='tight', **kwargs)\n else:\n plt.savefig(final_filename, bbox_inches='tight', **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef admin_obj_link(obj, display=''):\n # get the url for the change list for this object\n url = reverse('admin:%s_%s_changelist' % (obj._meta.app_label,\n obj._meta.model_name))\n url += '?id__exact=%s' % obj.id\n\n text = str(obj)\n if display:\n text = display\n\n return format_html('{}', url, text)", "response": "Returns a link to the django admin change list with a filter set to\n only the object given."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _obj_display(obj, display=''):\n result = ''\n if not display:\n result = str(obj)\n else:\n template = Template(display)\n context = Context({'obj':obj})\n result = template.render(context)\n\n return result", "response": "Returns the string representation of an object based on the display template passed in."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in 
Python 3, explain what it does\ndef make_admin_obj_mixin(name):\n @classmethod\n def add_obj_link(cls, funcname, attr, title='', display=''):\n if not title:\n title = attr.capitalize()\n\n # python scoping is a bit weird with default values, if it isn't\n # referenced the inner function won't see it, so assign it for use\n _display = display\n\n def _link(self, obj):\n field_obj = admin_obj_attr(obj, attr)\n if not field_obj:\n return ''\n\n text = _obj_display(field_obj, _display)\n return admin_obj_link(field_obj, text)\n _link.short_description = title\n _link.allow_tags = True\n _link.admin_order_field = attr\n\n setattr(cls, funcname, _link)\n\n @classmethod\n def add_obj_ref(cls, funcname, attr, title='', display=''):\n if not title:\n title = attr.capitalize()\n\n # python scoping is a bit weird with default values, if it isn't\n # referenced the inner function won't see it, so assign it for use\n _display = display\n\n def _ref(self, obj):\n field_obj = admin_obj_attr(obj, attr)\n if not field_obj:\n return ''\n\n return _obj_display(field_obj, _display)\n _ref.short_description = title\n _ref.allow_tags = True\n _ref.admin_order_field = attr\n\n setattr(cls, funcname, _ref)\n\n klass = type(name, (), {})\n klass.add_obj_link = add_obj_link\n klass.add_obj_ref = add_obj_ref\n return klass", "response": "This method creates a mixin that can be used to create admin objects."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning a new copy of a FancyModelAdmin class that can be used to manage the list_display attribute of the admin class.", "response": "def fancy_modeladmin(*args):\n \"\"\"Returns a new copy of a :class:`FancyModelAdmin` class (a class, not\n an instance!). This can then be inherited from when declaring a model\n admin class. 
The :class:`FancyModelAdmin` class has additional methods\n for managing the ``list_display`` attribute.\n\n :param ``*args``: [optional] any arguments given will be added to the\n ``list_display`` property using regular django ``list_display``\n functionality.\n\n This function is meant as a replacement for :func:`make_admin_obj_mixin`,\n it does everything the old one does with fewer bookkeeping needs for the\n user as well as adding functionality.\n\n Example usage:\n\n .. code-block:: python\n\n # ---- models.py file ----\n class Author(models.Model):\n name = models.CharField(max_length=100)\n\n\n class Book(models.Model):\n title = models.CharField(max_length=100)\n author = models.ForeignKey(Author, on_delete=models.CASCADE)\n\n\n .. code-block:: python\n\n # ---- admin.py file ----\n @admin.register(Author)\n class Author(admin.ModelAdmin):\n list_display = ('name', )\n\n\n base = fany_list_display_modeladmin()\n base.add_displays('id', 'name')\n base.add_obj_link('author', 'Our Authors',\n '{{obj.name}} (id={{obj.id}})')\n\n @admin.register(Book)\n class BookAdmin(base):\n list_display = ('name', 'show_author')\n\n\n A sample django admin page for \"Book\" would have the table:\n\n +----+---------------------------------+------------------------+\n | ID | Name | Our Authors |\n +====+=================================+========================+\n | 1 | Hitchhikers Guide To The Galaxy | *Douglas Adams (id=1)* |\n +----+---------------------------------+------------------------+\n | 2 | War and Peace | *Tolstoy (id=2)* |\n +----+---------------------------------+------------------------+\n | 3 | Dirk Gently | *Douglas Adams (id=1)* |\n +----+---------------------------------+------------------------+\n\n\n See :class:`FancyModelAdmin` for a full list of functionality\n provided by the returned base class.\n \"\"\"\n global klass_count\n\n klass_count += 1\n name = 'DynamicAdminClass%d' % klass_count\n\n # clone the admin class\n klass = type(name, 
(FancyModelAdmin,), {})\n klass.list_display = []\n if len(args) > 0:\n klass.add_displays(*args)\n\n return klass"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nadd a display property to the list_display property of the class.", "response": "def add_display(cls, attr, title=''):\n \"\"\"Adds a ``list_display`` property without any extra wrappers,\n similar to :func:`add_displays`, but can also change the title.\n\n :param attr:\n Name of the attribute to add to the display\n\n :param title:\n Title for the column of the django admin table. If not given it\n defaults to a capitalized version of ``attr``\n \"\"\"\n global klass_count\n klass_count += 1\n fn_name = 'dyn_fn_%d' % klass_count\n cls.list_display.append(fn_name)\n\n if not title:\n title = attr.capitalize()\n\n def _ref(self, obj):\n # use the django mechanism for field value lookup\n _, _, value = lookup_field(attr, obj, cls)\n return value\n _ref.short_description = title\n _ref.allow_tags = True\n _ref.admin_order_field = attr\n\n setattr(cls, fn_name, _ref)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef add_link(cls, attr, title='', display=''):\n global klass_count\n klass_count += 1\n fn_name = 'dyn_fn_%d' % klass_count\n cls.list_display.append(fn_name)\n\n if not title:\n title = attr.capitalize()\n\n # python scoping is a bit weird with default values, if it isn't\n # referenced the inner function won't see it, so assign it for use\n _display = display\n\n def _link(self, obj):\n field_obj = admin_obj_attr(obj, attr)\n if not field_obj:\n return ''\n\n text = _obj_display(field_obj, _display)\n return admin_obj_link(field_obj, text)\n _link.short_description = title\n _link.allow_tags = True\n _link.admin_order_field = attr\n\n setattr(cls, fn_name, _link)", "response": "Adds a link to the django admin change page for the type of object being shown."} {"SOURCE": "codesearchnet", 
"instruction": "Can you generate the documentation for the following Python 3 function\ndef add_object(cls, attr, title='', display=''):\n global klass_count\n klass_count += 1\n fn_name = 'dyn_fn_%d' % klass_count\n cls.list_display.append(fn_name)\n\n if not title:\n title = attr.capitalize()\n\n # python scoping is a bit weird with default values, if it isn't\n # referenced the inner function won't see it, so assign it for use\n _display = display\n\n def _ref(self, obj):\n field_obj = admin_obj_attr(obj, attr)\n if not field_obj:\n return ''\n\n return _obj_display(field_obj, _display)\n _ref.short_description = title\n _ref.allow_tags = True\n _ref.admin_order_field = attr\n\n setattr(cls, fn_name, _ref)", "response": "Adds a list_display attribute showing an object."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nadding a list_display attribute showing a field in the object using a python %formatted string.", "response": "def add_formatted_field(cls, field, format_string, title=''):\n \"\"\"Adds a ``list_display`` attribute showing a field in the object\n using a python %formatted string.\n\n :param field:\n Name of the field in the object.\n\n :param format_string:\n A old-style (to remain python 2.x compatible) % string formatter\n with a single variable reference. The named ``field`` attribute\n will be passed to the formatter using the \"%\" operator. \n\n :param title:\n Title for the column of the django admin table. 
If not given it\n defaults to a capitalized version of ``field``\n \"\"\"\n global klass_count\n klass_count += 1\n fn_name = 'dyn_fn_%d' % klass_count\n cls.list_display.append(fn_name)\n\n if not title:\n title = field.capitalize()\n\n # python scoping is a bit weird with default values, if it isn't\n # referenced the inner function won't see it, so assign it for use\n _format_string = format_string\n\n def _ref(self, obj):\n return _format_string % getattr(obj, field)\n _ref.short_description = title\n _ref.allow_tags = True\n _ref.admin_order_field = field\n\n setattr(cls, fn_name, _ref)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef post_required(method_or_options=[]):\n def decorator(method):\n # handle wrapping or wrapping with arguments; if no arguments (and no\n # calling parenthesis) then method_or_options will be a list,\n # otherwise it will be the wrapped function\n expected_fields = []\n if not callable(method_or_options):\n # not callable means wrapping with arguments\n expected_fields = method_or_options\n\n @wraps(method)\n def wrapper(*args, **kwargs):\n request = args[0]\n if request.method != 'POST':\n logger.error('POST required for this url')\n raise Http404('only POST allowed for this url')\n\n missing = []\n for field in expected_fields:\n if field not in request.POST:\n missing.append(field)\n\n if missing:\n s = 'Expected fields missing in POST: %s' % missing\n logger.error(s)\n raise Http404(s)\n\n # everything verified, run the view\n return method(*args, **kwargs)\n return wrapper\n\n if callable(method_or_options):\n # callable means decorated method without options, call our decorator \n return decorator(method_or_options)\n return decorator", "response": "Decorator that enforces that the method was called using POST."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef 
json_post_required(*decorator_args):\n def decorator(method):\n @wraps(method)\n def wrapper(*args, **kwargs):\n field = decorator_args[0]\n if len(decorator_args) == 2:\n request_name = decorator_args[1]\n else:\n request_name = field\n\n request = args[0]\n if request.method != 'POST':\n logger.error('POST required for this url')\n raise Http404('only POST allowed for this url')\n\n if field not in request.POST:\n s = 'Expected field named %s in POST' % field\n logger.error(s)\n raise Http404(s)\n\n # deserialize the JSON and put it in the request\n setattr(request, request_name, json.loads(request.POST[field]))\n\n # everything verified, run the view\n return method(*args, **kwargs)\n return wrapper\n return decorator", "response": "Decorator that enforces that the method was called using POST and the JSON dictionary of the request."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncreates a histogram of data x with a given number of bins and labels.", "response": "def hist(x, bins=10, labels=None, aspect=\"auto\", plot=True, ax=None, range=None):\n \"\"\"\n Creates a histogram of data *x* with a *bins*, *labels* = :code:`[title, xlabel, ylabel]`.\n \"\"\"\n\n h, edge = _np.histogram(x, bins=bins, range=range)\n \n mids = edge + (edge[1]-edge[0])/2\n mids = mids[:-1]\n\n if plot:\n if ax is None:\n _plt.hist(x, bins=bins, range=range)\n else:\n ax.hist(x, bins=bins, range=range)\n\n if labels is not None:\n _addlabel(labels[0], labels[1], labels[2])\n\n return h, mids"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nspotting size of matched beam", "response": "def sigma(self):\n \"\"\"\n Spot size of matched beam :math:`\\\\left( \\\\frac{2 E \\\\varepsilon_0 }{ n_p e^2 } \\\\right)^{1/4} \\\\sqrt{\\\\epsilon}`\n \"\"\"\n return _np.power(2*_sltr.GeV2joule(self.E)*_spc.epsilon_0 / (self.plasma.n_p * _np.power(_spc.elementary_charge, 2)) , 0.25) * _np.sqrt(self.emit)"} {"SOURCE": 
"codesearchnet", "instruction": "Write a Python 3 script for\nreturning the sigma of the beam.", "response": "def sigma_prime(self):\n \"\"\"\n Divergence of matched beam\n \"\"\"\n return _np.sqrt(self.emit/self.beta(self.E))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef n_p(self):\n return 2*_sltr.GeV2joule(self.E)*_spc.epsilon_0 / (self.beta*_spc.elementary_charge)**2", "response": "The plasma density in SI units."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nget a tag from the upstream", "response": "def main(target, label):\n \"\"\"\n Semver tag triggered deployment helper\n \"\"\"\n check_environment(target, label)\n\n click.secho('Fetching tags from the upstream ...')\n handler = TagHandler(git.list_tags())\n\n print_information(handler, label)\n\n tag = handler.yield_tag(target, label)\n confirm(tag)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef check_environment(target, label):\n if not git.exists():\n click.secho('You must have git installed to use yld.', fg='red')\n sys.exit(1)\n\n if not os.path.isdir('.git'):\n click.secho('You must cd into a git repository to use yld.', fg='red')\n sys.exit(1)\n\n if not git.is_committed():\n click.secho('You must commit or stash your work before proceeding.',\n fg='red')\n sys.exit(1)\n\n if target is None and label is None:\n click.secho('You must specify either a target or a label.', fg='red')\n sys.exit(1)", "response": "Checks that the environment is correct."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nprinting information about the latest tag.", "response": "def print_information(handler, label):\n \"\"\"\n Prints latest tag's information\n \"\"\"\n click.echo('=> Latest stable: {tag}'.format(\n tag=click.style(str(handler.latest_stable or 'N/A'), fg='yellow' if\n handler.latest_stable else 
'magenta')\n ))\n\n if label is not None:\n latest_revision = handler.latest_revision(label)\n click.echo('=> Latest relative revision ({label}): {tag}'.format(\n label=click.style(label, fg='blue'),\n tag=click.style(str(latest_revision or 'N/A'),\n fg='yellow' if latest_revision else 'magenta')\n ))"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef confirm(tag):\n click.echo()\n if click.confirm('Do you want to create the tag {tag}?'.format(\n tag=click.style(str(tag), fg='yellow')),\n default=True, abort=True):\n git.create_tag(tag)\n\n if click.confirm(\n 'Do you want to push the tag {tag} into the upstream?'.format(\n tag=click.style(str(tag), fg='yellow')),\n default=True):\n git.push_tag(tag)\n click.echo('Done!')\n else:\n git.delete_tag(tag)\n click.echo('Aborted!')", "response": "Prompts user before proceeding\n "} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nplots an array X in a colorbar.", "response": "def imshow(X, ax=None, add_cbar=True, rescale_fig=True, **kwargs):\n \"\"\"\n Plots an array *X* such that the first coordinate is the *x* coordinate and the second coordinate is the *y* coordinate, with the origin at the bottom left corner.\n\n Optional argument *ax* allows an existing axes to be used.\n\n *\\*\\*kwargs* are passed on to :meth:`matplotlib.axes.Axes.imshow`.\n\n .. 
versionadded:: 1.3\n\n Returns\n -------\n fig, ax, im :\n if axes aren't specified.\n im :\n if axes are specified.\n \"\"\"\n return _plot_array(X, plottype=_IMSHOW, ax=ax, add_cbar=add_cbar, rescale_fig=rescale_fig, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef scaled_figsize(X, figsize=None, h_pad=None, v_pad=None):\n if figsize is None:\n figsize = _mpl.rcParams['figure.figsize']\n\n # ======================================\n # Find the height and width\n # ======================================\n width, height = _np.shape(X)\n \n ratio = width / height\n \n # ======================================\n # Find how to rescale the figure\n # ======================================\n if ratio > figsize[0]/figsize[1]:\n figsize[1] = figsize[0] / ratio\n else:\n figsize[0] = figsize[1] * ratio\n \n return figsize", "response": "Determines the good size for a figure given an array X."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get(f, key, default=None):\n\n if key in f.keys():\n val = f[key].value\n\n if default is None:\n return val\n else:\n if _np.shape(val) == _np.shape(default):\n return val\n\n return default", "response": "Gets an array from datasets.\n .. versionadded:: 1. 
4\n "} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_state(self):\n return [os.path.join(dp, f)\n for dp, _, fn in os.walk(self.dir)\n for f in fn]", "response": "Get the current directory state"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nadd one tick to progress bar", "response": "def tick(self):\n \"\"\"Add one tick to progress bar\"\"\"\n self.current += 1\n if self.current == self.factor:\n sys.stdout.write('+')\n sys.stdout.flush()\n self.current = 0"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\npushing a new item to the top of the list.", "response": "def push(self, k):\n \"\"\"Push k to the top of the list\n\n >>> l = DLL()\n >>> l.push(1)\n >>> l\n [1]\n >>> l.push(2)\n >>> l\n [2, 1]\n >>> l.push(3)\n >>> l\n [3, 2, 1]\n \"\"\"\n if not self._first:\n # first item\n self._first = self._last = node = DLL.Node(k)\n elif self._first.value == k:\n # it's already at the top\n return\n else:\n try:\n self.delete(k) # in case we have it already\n except KeyError:\n pass\n self._first = node = self._first.insert_before(k)\n self._index[k] = node\n self._size += 1"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef deletenode(self, node):\n if self._last == node:\n self._last = node.previous\n if self._first == node:\n self._first = node.next\n node.pop()\n del self._index[node.value]\n self._size -= 1", "response": "Delete a node from the set."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngetting the last key from the set", "response": "def pop(self):\n \"\"\"\n >>> l = DLL()\n >>> l.push(1)\n >>> l.pop()\n 1\n \"\"\"\n k = self._last.value\n self.deletenode(self._last)\n return k"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef increment(cls, name):\n with 
transaction.atomic():\n counter = Counter.objects.select_for_update().get(name=name)\n counter.value += 1\n counter.save()\n\n return counter.value", "response": "Atomically increment the named counter on the database and return its new value."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns x, y for array to be used with matplotlib.pyplot.pcolor.", "response": "def pcolor_axes(array, px_to_units=px_to_units):\n \"\"\"\n Return axes :code:`x, y` for *array* to be used with :func:`matplotlib.pyplot.pcolor`.\n\n *px_to_units* is a function to convert pixels to units. By default, returns pixels.\n \"\"\"\n # ======================================\n # Coords need to be +1 larger than array\n # ======================================\n x_size = array.shape[0]+1\n y_size = array.shape[1]+1\n\n x = _np.empty((x_size, y_size))\n y = _np.empty((x_size, y_size))\n\n for i in range(x_size):\n for j in range(y_size):\n x[i, j], y[i, j] = px_to_units(i-0.5, j-0.5)\n\n return x, y"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef nb0(self):\n return self.N_e / ( (2*_np.pi)**(3/2) * self.sig_r**2 * self.sig_xi)", "response": "Return the peak density of the beam."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef lambda_large(self, r0):\n return 2*_np.sqrt(2*_np.pi/self.k)*r0", "response": "The oscillation wavelength for large oscillations."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef r_small(self, x, r0):\n return r0*_np.cos(_np.sqrt(self.k_small) * x)", "response": "Approximate trajectory function for small oscillations."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef r_large(self, x, r0):\n return r0*_np.cos(x*self.omega_big(r0))", "response": "Approximate trajectory function for 
large oscillations."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngets the K term of the current object.", "response": "def k(self):\n \"\"\"\n Driving force term: :math:`r'' = -k \\\\left( \\\\frac{1-e^{-r^2/2{\\\\sigma_r}^2}}{r} \\\\right)`\n \"\"\"\n try:\n return self._k\n except AttributeError:\n self._k = e**2 * self.N_e / ( (2*_np.pi)**(5/2) * e0 * self.m * c**2 * self.sig_xi)\n return self._k"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nprint loading message on screen", "response": "def print_loading(self, wait, message):\n \"\"\"\n print loading message on screen\n\n .. note::\n\n loading message only write to `sys.stdout`\n\n\n :param int wait: seconds to wait\n :param str message: message to print\n :return: None\n \"\"\"\n tags = ['\\\\', '|', '/', '-']\n\n for i in range(wait):\n time.sleep(0.25)\n sys.stdout.write(\"%(message)s... %(tag)s\\r\" % {\n 'message': message,\n 'tag': tags[i % 4]\n })\n\n sys.stdout.flush()\n pass\n\n sys.stdout.write(\"%s... Done...\\n\" % message)\n sys.stdout.flush()\n pass"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nprints a warning message to file handle fh or sys. 
stdout", "response": "def warn_message(self, message, fh=None, prefix=\"[warn]:\", suffix=\"...\"):\n \"\"\"\n print warn type message,\n if file handle is `sys.stdout`, print color message\n\n\n :param str message: message to print\n :param file fh: file handle,default is `sys.stdout`\n :param str prefix: message prefix,default is `[warn]`\n :param str suffix: message suffix ,default is `...`\n :return: None\n \"\"\"\n\n msg = prefix + message + suffix\n fh = fh or sys.stdout\n\n if fh is sys.stdout:\n termcolor.cprint(msg, color=\"yellow\")\n else:\n fh.write(msg)\n\n pass"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nwraps os.system to make dry-run easier.", "response": "def system(self, cmd, fake_code=False):\n \"\"\"\n a built-in wrapper to make dry-run easier.\n you should use this instead of `os.system`\n\n .. note::\n\n to use it, you need to add the '--dry-run' option in\n your argparser options\n\n\n :param str cmd: command to execute\n :param bool fake_code: only display command\n when is True,default is False\n :return:\n \"\"\"\n try:\n if self.options.dry_run:\n def fake_system(cmd):\n self.print_message(cmd)\n return fake_code\n\n return fake_system(cmd)\n except AttributeError:\n self.logger.warning(\"fake mode enabled, \"\n \"but you didn't set the '--dry-run' option \"\n \"in your argparser options\")\n pass\n\n return os.system(cmd)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nloads the resource file which is located in the specified path.", "response": "def load_resource(path, root=''):\n \"\"\"\n .. warning::\n\n Experiment feature.\n\n BE CAREFUL! WE MAY REMOVE THIS FEATURE!\n\n\n load a resource file which is in a package.\n this method is used to load files more easily in different environments.\n\n e.g:\n\n assume we have a file named `resource.io` in package `cliez.conf`,\n and we want to load it.\n the easiest way may like this:\n\n .. 
code-block:: python\n\n open('../conf/resource.io').read()\n\n\n An obvious problem is `..` is relative path.\n it will cause an error.\n\n `load_resource` is designed for solve this problem.\n\n\n The following code are equivalent:\n\n .. code-block:: python\n\n a = Component()\n a.load_resource('resource.io', root='cliez/base')\n a.load_resource('base/resource.io', root='cliez')\n a.load_resource('/base/resource.io', root='cliez')\n a.load_resource('cliez/base/resource.io')\n a.load_resource(__file__.rsplit('/', 2)[0] +\n '/cliez/base/resource.io')\n\n\n .. note::\n\n The document charset *MUST BE* utf-8\n\n\n :param str path: file path\n :param str root: root path\n\n :return: str\n \"\"\"\n\n if root:\n full_path = root + '/' + path.strip('/')\n else:\n full_path = path\n\n buf = ''\n\n try:\n buf = open(full_path).read()\n except IOError:\n pkg, path = full_path.split('/', 1)\n try:\n import pkg_resources\n buf = pkg_resources.resource_string(pkg, path)\n # compatible python3 and only support utf-8\n if type(buf) != str:\n buf = buf.decode('utf-8')\n pass\n except AttributeError:\n # load resource feature not work in python2\n pass\n\n return buf"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nloading description of a resource file.", "response": "def load_description(name, root=''):\n \"\"\"\n .. warning::\n\n Experiment feature.\n\n BE CAREFUL! WE MAY REMOVE THIS FEATURE!\n\n\n Load resource file as description,\n if resource file not exist,will return empty string.\n\n :param str path: name resource path\n :param str root: same as `load_resource` root\n :return: `str`\n \"\"\"\n desc = ''\n\n try:\n desc = Component.load_resource(name, root=root)\n except (IOError, ImportError):\n pass\n\n return desc"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn the name of the Firebase instance.", "response": "def name(self):\n '''\n Returns the name of the Firebase. 
If a Firebase instance points to\n 'https://my_firebase.firebaseio.com/users' its name would be 'users'\n '''\n i = self.__url.rfind('/')\n if self.__url[:i] == 'https:/':\n return \"/\"\n return self.__url[i+1:]"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef url_correct(self, point, auth=None, export=None):\n '''\n Returns a Corrected URL to be used for a Request\n as per the REST API.\n '''\n newUrl = self.__url + point + '.json'\n if auth or export:\n newUrl += \"?\"\n if auth:\n newUrl += (\"auth=\" + auth)\n if export:\n if not newUrl.endswith('?'):\n newUrl += \"&\"\n newUrl += \"format=export\"\n return newUrl", "response": "Returns a Corrected URL to be used for a Request\n ."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nread a File with contents in correct JSON format. Returns the data as Python objects.", "response": "def __read(path):\n '''\n Reads a File with contents in correct JSON format.\n Returns the data as Python objects.\n path - (string) path to the file\n '''\n try:\n with open(path, 'r') as data_file:\n data = data_file.read()\n data = json.loads(data)\n return data\n except IOError as err:\n pass\n except Exception as err:\n # Invalid JSON formatted files\n pass"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef __write(path, data, mode=\"w\"):\n '''\n Writes to a File. Returns the data written.\n path - (string) path to the file to write to.\n data - (json) data from a request.\n mode - (string) mode to open the file in. Default to 'w'. Overwrites.\n '''\n with open(path, mode) as data_file:\n data = json.dumps(data, indent=4)\n data_file.write(data)\n return data", "response": "Writes to a File. 
Returns the data written."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef amust(self, args, argv):\n '''\n Requires the User to provide a certain parameter\n for the method to function properly.\n Else, an Exception is raised.\n args - (tuple) arguments you are looking for.\n argv - (dict) arguments you have received and want to inspect.\n '''\n for arg in args:\n if str(arg) not in argv:\n raise KeyError(\"ArgMissing: \" + str(arg) + \" not passed\")", "response": "This method is called by the user to check if the user has provided a certain parameter\n for the method to function properly."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncatch errors from a response and raises an exception if they are not.", "response": "def catch_error(response):\n '''\n Checks for Errors in a Response.\n 401 or 403 - Security Rules Violation.\n 404 or 417 - Firebase NOT Found.\n response - (Request.Response) - response from a request.\n '''\n status = response.status_code\n if status == 401 or status == 403:\n raise EnvironmentError(\"Forbidden\")\n elif status == 417 or status == 404:\n raise EnvironmentError(\"NotFound\")"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nget : gets data from Firebase.", "response": "def get_sync(self, **kwargs):\n '''\n GET: gets data from the Firebase.\n Requires the 'point' parameter as a keyworded argument.\n '''\n self.amust((\"point\",), kwargs)\n response = requests.get(self.url_correct(kwargs[\"point\"],\n kwargs.get(\"auth\", self.__auth)))\n self.catch_error(response)\n return response.content"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef put_sync(self, **kwargs):\n '''\n PUT: puts data into the Firebase.\n Requires the 'point' parameter as a keyworded argument.\n '''\n self.amust((\"point\", \"data\"), kwargs)\n response = 
requests.put(self.url_correct(kwargs[\"point\"],\n kwargs.get(\"auth\", self.__auth)),\n data=json.dumps(kwargs[\"data\"]))\n self.catch_error(response)\n return response.content", "response": "PUT - Sync put data into Firebase."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nposting data to a Firebase.", "response": "def post_sync(self, **kwargs):\n '''\n POST: post data to a Firebase.\n Requires the 'point' parameter as a keyworded argument.\n Note: Firebase will give it a randomized \"key\"\n and the data will be the \"value\". Thus a key-value pair\n '''\n self.amust((\"point\", \"data\"), kwargs)\n response = requests.post(self.url_correct(\n kwargs[\"point\"],\n kwargs.get(\"auth\", self.__auth)),\n data=json.dumps(kwargs[\"data\"]))\n self.catch_error(response)\n return response.content"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef update_sync(self, **kwargs):\n '''\n UPDATE: updates data in the Firebase.\n Requires the 'point' parameter as a keyworded argument.\n '''\n self.amust((\"point\", \"data\"), kwargs)\n # Sending the 'PATCH' request\n response = requests.patch(self.url_correct(\n kwargs[\"point\"],\n kwargs.get(\"auth\", self.__auth)),\n data=json.dumps(kwargs[\"data\"]))\n self.catch_error(response)\n return response.content", "response": "Updates data in Firebase."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ndelete data in Firebase.", "response": "def delete_sync(self, **kwargs):\n '''\n DELETE: delete data in the Firebase.\n Requires the 'point' parameter as a keyworded argument.\n '''\n self.amust((\"point\",), kwargs)\n response = requests.delete(self.url_correct(\n kwargs[\"point\"],\n kwargs.get(\"auth\", self.__auth)))\n self.catch_error(response)\n return response.content"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nexport data from Firebase into a File if given.", 
"response": "def export_sync(self, **kwargs):\n '''\n EXPORT: exports data from the Firebase into a File, if given.\n Requires the 'point' parameter as a keyworded argument.\n '''\n self.amust((\"point\",), kwargs)\n response = requests.get(self.url_correct(kwargs[\"point\"],\n kwargs.get(\"auth\", self.__auth), True))\n self.catch_error(response)\n path = kwargs.get(\"path\", None)\n if path:\n self.__write(path, response.content, kwargs.get(\"mode\", \"w\"))\n return response.content"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef superable(cls) :\n '''Provide .__super in python 2.x classes without having to specify the current \n class name each time super is used (DRY principle).'''\n name = cls.__name__\n super_name = '_%s__super' % (name,)\n setattr(cls,super_name,super(cls))\n return cls", "response": "A class decorator that provides a superable version of the given class."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef main():\n args = get_arguments()\n setup_logging(args)\n\n version_path = os.path.abspath(os.path.join(\n os.path.dirname(__file__),\n '..',\n '..',\n '.VERSION'\n ))\n\n try:\n version_text = open(version_path).read().strip()\n except Exception:\n print('Could not open or read the .VERSION file')\n sys.exit(1)\n\n try:\n semver.parse(version_text)\n except ValueError:\n print(('The .VERSION file contains an invalid '\n 'version: \"{}\"').format(version_text))\n sys.exit(1)\n\n new_version = version_text\n if args.version:\n try:\n if semver.parse(args.version):\n new_version = args.version\n except Exception:\n print('Could not parse \"{}\" as a version'.format(args.version))\n sys.exit(1)\n elif args.bump_major:\n new_version = semver.bump_major(version_text)\n elif args.bump_minor:\n new_version = semver.bump_minor(version_text)\n elif args.bump_patch:\n new_version = semver.bump_patch(version_text)\n\n try:\n with 
open(version_path, 'w') as version_file:\n version_file.write(new_version)\n except Exception:\n print('Could not write the .VERSION file')\n sys.exit(1)\n print(new_version)", "response": "Entry point of the script: reads and validates the .VERSION file, applies the requested version bump, and writes the new version back."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nvalidate a Firebase URL.", "response": "def valid_url(name):\n '''\n Validates a Firebase URL. A valid URL must exhibit:\n - Protocol must be HTTPS\n - Domain Must be firebaseio.com\n - firebasename can only contain:\n - hyphen, small and capital letters\n - firebasename can not have leading and trailing hyphens\n - childnames can not contain:\n - period, dollar sign, square brackets, pound\n - ASCII Control Chars from \\x00 - \\x1F and \\x7F\n '''\n\n try:\n pattern = r'^https://([a-zA-Z0-9][a-zA-Z0-9\\-]+[a-zA-Z0-9]).firebaseio.com(/[^\\#\\$\\[\\]\\.\\x00-\\x1F\\x7F]*)'\n result = re.search(pattern, name + '/').group()\n result = result[:len(result) - 1]\n if result == name:\n return name\n else:\n raise ValueError(\"InvalidURL\")\n except:\n raise ValueError(\"InvalidURL\")"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ndisplaying the contact form and sends the email", "response": "def contact(request):\n \"\"\"Displays the contact form and sends the email\"\"\"\n form = ContactForm(request.POST or None)\n if form.is_valid():\n subject = form.cleaned_data['subject']\n message = form.cleaned_data['message']\n sender = form.cleaned_data['sender']\n cc_myself = form.cleaned_data['cc_myself']\n\n recipients = settings.CONTACTFORM_RECIPIENTS\n if cc_myself:\n recipients.append(sender)\n\n send_mail(getattr(settings, \"CONTACTFORM_SUBJECT_PREFIX\", '') + subject, message, sender, recipients)\n\n return render(request, 'contactform/thanks.html')\n\n return render( request, 'contactform/contact.html', {'form': form})"} {"SOURCE": 
"codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef load_gitconfig(self):\n gitconfig_path = os.path.expanduser('~/.gitconfig')\n\n if os.path.exists(gitconfig_path):\n parser = Parser()\n parser.read(gitconfig_path)\n parser.sections()\n return parser\n\n pass", "response": "try use gitconfig info. gitconfig info. gitconfig author email etc."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef render(self, match_string, new_string):\n\n current_dir = self.options.dir\n\n # safe check,we don't allow handle system root and user root.\n if os.path.expanduser(current_dir) in ['/', os.path.expanduser(\"~\")]:\n self.error(\"invalid directory\", -1)\n pass\n\n def match_directory(path):\n \"\"\"\n exclude indeed directory.\n\n .. note::\n\n this function will ignore in all depth.\n\n\n :param path:\n :return:\n \"\"\"\n skip = False\n for include_dir in ['/%s/' % s for s in\n self.exclude_directories]:\n if path.find(include_dir) > -1:\n skip = True\n break\n pass\n return skip\n\n # handle files detail first\n for v in os.walk(current_dir):\n # skip exclude directories in depth 1\n if os.path.basename(v[0]) in self.exclude_directories:\n continue\n if match_directory(v[0]):\n continue\n\n for base_name in v[2]:\n file_name = os.path.join(v[0], base_name)\n\n try:\n with open(file_name, 'r') as fh:\n buffer = fh.read()\n buffer = buffer.replace(match_string, new_string)\n pass\n\n with open(file_name, 'w') as fh:\n fh.write(buffer)\n pass\n except UnicodeDecodeError:\n # ignore binary files\n continue\n\n pass\n pass\n\n # handle directory\n redo_directories = []\n redo_files = []\n\n for v in os.walk(current_dir):\n if os.path.basename(v[0]) in self.exclude_directories:\n continue\n if match_directory(v[0]):\n continue\n\n for sub_dir in v[1]:\n if match_string in sub_dir:\n redo_directories.append(os.path.join(v[0], sub_dir))\n pass\n\n for f in 
v[2]:\n if match_string in f:\n redo_files.append(os.path.join(v[0], f))\n pass\n pass\n\n redo_directories.reverse()\n redo_files.reverse()\n\n # redo files first\n for v in redo_files:\n dir_name = os.path.dirname(v)\n file_name = os.path.basename(v)\n shutil.move(v, os.path.join(\n dir_name,\n file_name.replace(match_string, new_string)))\n pass\n\n for v in redo_directories:\n shutil.move(v, v.replace(match_string, new_string))\n pass\n\n pass", "response": "render template string to user string"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nadding command line arguments for the command line.", "response": "def add_arguments(cls):\n \"\"\"\n Init project.\n \"\"\"\n return [\n (('--yes',), dict(action='store_true', help='clean .git repo')),\n (('--variable', '-s'),\n dict(nargs='+', help='set extra variable,format is name:value')),\n (('--skip-builtin',),\n dict(action='store_true', help='skip replace builtin variable')),\n ]"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef check_exclusive_mode(self):\n\n if self.options.exclusive_mode:\n import psutil\n current_pid = os.getpid()\n current = psutil.Process(current_pid).cmdline()\n\n for pid in psutil.pids():\n p = psutil.Process(pid)\n try:\n if current_pid != pid and current == p.cmdline():\n self.error_message(\n \"process exist. pid:{}\".format(p.pid))\n sys.exit(-1)\n pass\n\n except psutil.ZombieProcess:\n # \u50f5\u5c38\u8fdb\u7a0b,\u65e0\u89c6\n pass\n except psutil.AccessDenied:\n # \u4e0d\u5177\u5907\u7528\u6237\u6743\u9650\n pass\n pass\n pass\n\n pass", "response": "check if exclusive mode is on"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn a pandas. 
DataFrame containing the COM ports that are part of a specific USB vendor ID and product ID for a specific TLE.", "response": "def serial_ports():\n '''\n Returns\n -------\n pandas.DataFrame\n Table of serial ports that match the USB vendor ID and product ID for\n the `Teensy 3.2`_ board.\n\n .. Teensy 3.2: https://www.pjrc.com/store/teensy32.html\n '''\n df_comports = sd.comports()\n # Match COM ports with USB vendor ID and product IDs for [Teensy 3.2\n # device][1].\n #\n # [1]: https://www.pjrc.com/store/teensy32.html\n df_teensy_comports = df_comports.loc[df_comports.hardware_id.str\n .contains('VID:PID=16c0:0483',\n case=False)]\n return df_teensy_comports"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncombining a bunch of files into a single file name.", "response": "def combine_filenames(filenames, max_length=40):\n \"\"\"Return a new filename to use as the combined file name for a\n bunch of files, based on the SHA of their contents.\n A precondition is that they all have the same file extension\n\n Given that the list of files can have different paths, we aim to use the\n most common path.\n\n Example:\n /somewhere/else/foo.js\n /somewhere/bar.js\n /somewhere/different/too/foobar.js\n The result will be\n /somewhere/148713695b4a4b9083e506086f061f9c.js\n\n Another thing to note, if the filenames have timestamps in them, combine\n them all and use the highest timestamp.\n\n \"\"\"\n # Get the SHA for each file, then sha all the shas.\n\n path = None\n names = []\n extension = None\n timestamps = []\n shas = []\n filenames.sort()\n concat_names = \"_\".join(filenames)\n if concat_names in COMBINED_FILENAMES_GENERATED:\n return COMBINED_FILENAMES_GENERATED[concat_names]\n\n for filename in filenames:\n name = os.path.basename(filename)\n if not extension:\n extension = os.path.splitext(name)[1]\n elif os.path.splitext(name)[1] != extension:\n raise ValueError(\"Can't combine multiple file extensions\")\n\n for base in 
MEDIA_ROOTS:\n try:\n shas.append(md5(os.path.join(base, filename)))\n break\n except IOError:\n pass\n\n\n if path is None:\n path = os.path.dirname(filename)\n else:\n if len(os.path.dirname(filename)) < len(path):\n path = os.path.dirname(filename)\n\n m = hashlib.md5()\n m.update(\",\".join(shas))\n\n new_filename = \"%s-inkmd\" % m.hexdigest()\n\n new_filename = new_filename[:max_length]\n new_filename += extension\n COMBINED_FILENAMES_GENERATED[concat_names] = new_filename\n\n return os.path.join(path, new_filename)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef apply_orientation(im):\n\n try:\n kOrientationEXIFTag = 0x0112\n if hasattr(im, '_getexif'): # only present in JPEGs\n e = im._getexif() # returns None if no EXIF data\n if e is not None:\n #log.info('EXIF data found: %r', e)\n orientation = e[kOrientationEXIFTag]\n f = orientation_funcs[orientation]\n return f(im)\n except:\n # We'd be here with an invalid orientation value or some random error?\n pass # log.exception(\"Error applying EXIF Orientation tag\")\n return im", "response": "Returns the image instance that is in - place with the specified orientation."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nstarting a new piece", "response": "def write():\n \"\"\"Start a new piece\"\"\"\n click.echo(\"Fantastic. Let's get started. \")\n title = click.prompt(\"What's the title?\")\n\n # Make sure that title doesn't exist.\n url = slugify(title)\n url = click.prompt(\"What's the URL?\", default=url)\n\n # Make sure that title doesn't exist.\n click.echo(\"Got it. Creating %s...\" % url)\n scaffold_piece(title, url)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nstarts a new site.", "response": "def scaffold():\n \"\"\"Start a new site.\"\"\"\n click.echo(\"A whole new site? 
Awesome.\")\n title = click.prompt(\"What's the title?\")\n url = click.prompt(\"Great. What's url? http://\")\n\n # Make sure that title doesn't exist.\n click.echo(\"Got it. Creating %s...\" % url)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\npromotes all social media posts to buffer.", "response": "def promote():\n \"\"\"Schedule all the social media posts.\"\"\"\n\n if \"BUFFER_ACCESS_TOKEN\" not in os.environ:\n warn(\"Missing BUFFER_ACCESS_TOKEN.\")\n echo(\"To publish to social medial, you'll need an access token for buffer.\")\n echo(\"The simplest way to get one is to create a new app here: https://buffer.com/developers/apps\")\n echo(\"The token you want is the 'Access Token'\")\n echo(\"Once you have it, make it available to ink by putting it in the environment.\")\n\n # GET https://api.bufferapp.com/1/profiles.json\n echo(\"Verifying available profiles on buffer\")\n profiles = buffer_get(\"/1/profiles.json\")\n for p in profiles:\n supported_profile = False\n if p[\"formatted_service\"].lower() == \"facebook\" or p[\"formatted_service\"].lower() == \"facebook page\":\n facebook_profiles.append(p)\n supported_profile = True\n elif p[\"formatted_service\"].lower() == \"twitter\":\n twitter_profiles.append(p)\n supported_profile = True\n\n if supported_profile:\n click.secho(u\"\u2713 %s: %s\" % (p[\"formatted_service\"], p[\"formatted_username\"]), fg=\"green\")\n\n echo(\"Checking publication status...\")\n site_json_filename = os.path.join(ROOT_DIR, BUILD_DIR, \"static\", \"private.json\")\n with open(site_json_filename, \"r\") as site_json:\n site = load(site_json)\n\n echo('Reviewing social posts...')\n\n posts = {}\n unpublished_posts = []\n\n for dirpath, dirnames, filenames in os.walk(os.path.join(ROOT_DIR, \"posts\"), topdown=False):\n for filename in filenames:\n if \"piece.md\" in filename:\n if exists(dirpath, \"social.yml\") and exists(dirpath, \"meta.yml\"):\n with open(os.path.join(dirpath, 
\"social.yml\")) as f:\n social = load(f)\n\n with open(os.path.join(dirpath, \"meta.yml\")) as f:\n meta = load(f)\n\n if \"url\" in meta:\n site_json_entry = None\n for sp in site[\"posts\"]:\n if meta[\"url\"] == sp[\"url\"]:\n site_json_entry = sp\n break\n\n posts[meta[\"url\"]] = {\n \"meta\": meta,\n \"social\": social,\n \"dirpath\": dirpath,\n \"site\": site_json_entry,\n }\n if \"published\" not in social or social[\"published\"] is not True:\n unpublished_posts.append(meta[\"url\"])\n else:\n warn(\"No url found for %s\" % dirpath.replace(ROOT_DIR))\n\n automark_set = False\n automark = None\n for u in unpublished_posts:\n post = posts[u]\n\n if \"posts\" in post[\"social\"] and post[\"social\"][\"posts\"] and len(post[\"social\"][\"posts\"]) > 0:\n facebook_posts = []\n twitter_posts = []\n mark_as_published = False\n\n has_valid_post = False\n for p in post[\"social\"][\"posts\"]:\n try:\n if len(p.keys()) != 1:\n error(\"Something's formatted wrong in %s's social.yml\" % u)\n break\n if p.keys()[0] == \"facebook\":\n facebook_posts.append(p[\"facebook\"])\n if post_in_future(p[\"facebook\"], post):\n has_valid_post = True\n elif p.keys()[0] == \"twitter\":\n if post_in_future(p[\"twitter\"], post):\n has_valid_post = True\n twitter_posts.append(p[\"twitter\"])\n else:\n warn(\"Unknown post type: %s. Skipping.\" % p.keys()[0])\n except:\n error(\"Error parsing social.yml for \\\"%s\\\"\" % post[\"meta\"][\"title\"])\n import traceback\n traceback.print_exc()\n\n if not has_valid_post:\n if automark:\n mark_as_published = True\n else:\n warn('\"%s\" hasn\\'t been published, but all posts are in the past.' % post[\"meta\"][\"title\"])\n if click.confirm(\"Mark as published?\"):\n mark_as_published = True\n if not automark_set:\n if click.confirm(\"Mark all other similar posts as published?\"):\n automark = True\n automark_set = True\n else:\n echo('\\n\"%s\" hasn\\'t been published to social media.' 
% post[\"meta\"][\"title\"])\n if len(facebook_posts) > 0:\n echo(\" Facebook:\")\n for p in facebook_posts:\n if (len(p[\"content\"]) > 40):\n truncated_content = \"%s...\" % p[\"content\"][:40]\n else:\n truncated_content = p[\"content\"]\n if post_in_future(p, post):\n echo(\" - %s: \\\"%s\\\"\" % (\n publish_datetime(p, post).strftime(\"%c\"),\n truncated_content,\n ))\n else:\n warn(\" - %s: \\\"%s\\\" skipping (past)\" % (\n publish_datetime(p, post).strftime(\"%c\"),\n truncated_content,\n ))\n echo(\" Twitter:\")\n if len(twitter_posts) > 0:\n for p in twitter_posts:\n if (len(p[\"content\"]) > 40):\n truncated_content = \"%s...\" % p[\"content\"][:40]\n else:\n truncated_content = p[\"content\"]\n if post_in_future(p, post):\n echo(\" - %s: \\\"%s\\\"\" % (\n publish_datetime(p, post).strftime(\"%c\"),\n truncated_content,\n ))\n else:\n warn(\" - %s: \\\"%s\\\" skipping (past)\" % (\n publish_datetime(p, post).strftime(\"%c\"),\n truncated_content,\n ))\n\n if click.confirm(click.style(\" Publish now?\", fg=\"green\")):\n mark_as_published = True\n echo(\" Publishing...\")\n for p in facebook_posts:\n if post_in_future(p, post):\n publish_facebook(p, post)\n if (len(p[\"content\"]) > 40):\n truncated_content = \"%s...\" % p[\"content\"][:40]\n else:\n truncated_content = p[\"content\"]\n click.secho(u\" \u2713 Facebook %s: \\\"%s\\\"\" % (\n publish_datetime(p, post).strftime(\"%c\"),\n truncated_content,\n ), fg=\"green\")\n for p in twitter_posts:\n if post_in_future(p, post):\n publish_twitter(p, post)\n if (len(p[\"content\"]) > 40):\n truncated_content = \"%s...\" % p[\"content\"][:40]\n else:\n truncated_content = p[\"content\"]\n click.secho(u\" \u2713 Twitter %s: \\\"%s\\\"\" % (\n publish_datetime(p, post).strftime(\"%c\"),\n truncated_content,\n ), fg=\"green\")\n\n echo(\" Published.\")\n\n # Save as published.\n if mark_as_published or automark:\n post[\"social\"][\"published\"] = True\n with open(os.path.join(post[\"dirpath\"], 
\"social.yml\"), \"w\") as f:\n dump(post[\"social\"], f, default_flow_style=False, width=1000)\n\n if click.confirm(\"Publish your entire backlog to buffer?\"):\n print (\"dope\")"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns a list of the branches", "response": "def get_branches(self):\n \"\"\"Returns a list of the branches\"\"\"\n return [self._sanitize(branch)\n for branch in self._git.branch(color=\"never\").splitlines()]"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning the currently active branch", "response": "def get_current_branch(self):\n \"\"\"Returns the currently active branch\"\"\"\n return next((self._sanitize(branch)\n for branch in self._git.branch(color=\"never\").splitlines()\n if branch.startswith('*')),\n None)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncreates a patch between tags", "response": "def create_patch(self, from_tag, to_tag):\n \"\"\"Create a patch between tags\"\"\"\n return str(self._git.diff('{}..{}'.format(from_tag, to_tag), _tty_out=False))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef one(func, n=0):\n def _one(result):\n if _isSequenceTypeNotText(result) and len(result) > n:\n return func(result[n])\n return None\n return maybe(_one)", "response": "Create a callable that applies func to each element of a sequence."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef many(func):\n def _many(result):\n if _isSequenceTypeNotText(result):\n return map(func, result)\n return []\n return maybe(_many, default=[])", "response": "Create a callable that applies func to every value in a sequence."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef Text(value, encoding=None):\n if encoding is None:\n encoding = 
'utf-8'\n if isinstance(value, bytes):\n return value.decode(encoding)\n elif isinstance(value, unicode):\n return value\n return None", "response": "Parses a value as text."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef Integer(value, base=10, encoding=None):\n try:\n return int(Text(value, encoding), base)\n except (TypeError, ValueError):\n return None", "response": "Parses a value as an integer."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nparses a value as a boolean.", "response": "def Boolean(value, true=(u'yes', u'1', u'true'), false=(u'no', u'0', u'false'),\n encoding=None):\n \"\"\"\n Parse a value as a boolean.\n\n :type value: `unicode` or `bytes`\n :param value: Text value to parse.\n\n :type true: `tuple` of `unicode`\n :param true: Values to compare, ignoring case, for ``True`` values.\n\n :type false: `tuple` of `unicode`\n :param false: Values to compare, ignoring case, for ``False`` values.\n\n :type encoding: `bytes`\n :param encoding: Encoding to treat `bytes` values as, defaults to\n ``utf-8``.\n\n :rtype: `bool`\n :return: Parsed boolean or ``None`` if ``value`` did not match ``true`` or\n ``false`` values.\n \"\"\"\n value = Text(value, encoding)\n if value is not None:\n value = value.lower().strip()\n if value in true:\n return True\n elif value in false:\n return False\n return None"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nparses a value as a delimited list.", "response": "def Delimited(value, parser=Text, delimiter=u',', encoding=None):\n \"\"\"\n Parse a value as a delimited list.\n\n :type value: `unicode` or `bytes`\n :param value: Text value to parse.\n\n :type parser: `callable` taking a `unicode` parameter\n :param parser: Callable to map over the delimited text values.\n\n :type delimiter: `unicode`\n :param delimiter: Delimiter text.\n\n :type encoding: `bytes`\n 
:param encoding: Encoding to treat `bytes` values as, defaults to\n ``utf-8``.\n\n :rtype: `list`\n :return: List of parsed values.\n \"\"\"\n value = Text(value, encoding)\n if value is None or value == u'':\n return []\n return map(parser, value.split(delimiter))"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nparsing a value as a POSIX timestamp in seconds.", "response": "def Timestamp(value, _divisor=1., tz=UTC, encoding=None):\n \"\"\"\n Parse a value as a POSIX timestamp in seconds.\n\n :type value: `unicode` or `bytes`\n :param value: Text value to parse, which should be the number of seconds\n since the epoch.\n\n :type _divisor: `float`\n :param _divisor: Number to divide the value by.\n\n :type tz: `tzinfo`\n :param tz: Timezone, defaults to UTC.\n\n :type encoding: `bytes`\n :param encoding: Encoding to treat `bytes` values as, defaults to\n ``utf-8``.\n\n :rtype: `datetime.datetime`\n :return: Parsed datetime or ``None`` if ``value`` could not be parsed.\n \"\"\"\n value = Float(value, encoding)\n if value is not None:\n value = value / _divisor\n return datetime.fromtimestamp(value, tz)\n return None"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nparses the query parameters.", "response": "def parse(expected, query):\n \"\"\"\n Parse query parameters.\n\n :type expected: `dict` mapping `bytes` to `callable`\n :param expected: Mapping of query argument names to argument parsing\n callables.\n\n :type query: `dict` mapping `bytes` to `list` of `bytes`\n :param query: Mapping of query argument names to lists of argument values,\n this is the form that Twisted Web's `IRequest.args\n ` value takes.\n\n :rtype: `dict` mapping `bytes` to `object`\n :return: Mapping of query argument names to parsed argument values.\n \"\"\"\n return dict(\n (key, parser(query.get(key, [])))\n for key, parser in expected.items())"} {"SOURCE": "codesearchnet", "instruction": "How would you explain 
what the following Python 3 function does\ndef put(self, metrics):\n if type(metrics) == list:\n for metric in metrics:\n self.c.put_metric_data(**metric)\n else:\n self.c.put_metric_data(**metrics)", "response": "Put metrics to CloudWatch."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nrender a given resource.", "response": "def _renderResource(resource, request):\n \"\"\"\n Render a given resource.\n\n See `IResource.render `.\n \"\"\"\n meth = getattr(resource, 'render_' + nativeString(request.method), None)\n if meth is None:\n try:\n allowedMethods = resource.allowedMethods\n except AttributeError:\n allowedMethods = _computeAllowedMethods(resource)\n raise UnsupportedMethod(allowedMethods)\n return meth(request)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nadapts a result to IResource.", "response": "def _adaptToResource(self, result):\n \"\"\"\n Adapt a result to `IResource`.\n\n Several adaptions are tried they are, in order: ``None``,\n `IRenderable `, `IResource\n `, and `URLPath\n `. Anything else is returned as\n is.\n\n A `URLPath ` is treated as\n a redirect.\n \"\"\"\n if result is None:\n return NotFound()\n\n spinneretResource = ISpinneretResource(result, None)\n if spinneretResource is not None:\n return SpinneretResource(spinneretResource)\n\n renderable = IRenderable(result, None)\n if renderable is not None:\n return _RenderableResource(renderable)\n\n resource = IResource(result, None)\n if resource is not None:\n return resource\n\n if isinstance(result, URLPath):\n return Redirect(str(result))\n\n return result"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nhandle the result from IResource. 
render.", "response": "def _handleRenderResult(self, request, result):\n \"\"\"\n Handle the result from `IResource.render`.\n\n If the result is a `Deferred` then return `NOT_DONE_YET` and add\n a callback to write the result to the request when it arrives.\n \"\"\"\n def _requestFinished(result, cancel):\n cancel()\n return result\n\n if not isinstance(result, Deferred):\n result = succeed(result)\n\n def _whenDone(result):\n render = getattr(result, 'render', lambda request: result)\n renderResult = render(request)\n if renderResult != NOT_DONE_YET:\n request.write(renderResult)\n request.finish()\n return result\n request.notifyFinish().addBoth(_requestFinished, result.cancel)\n result.addCallback(self._adaptToResource)\n result.addCallback(_whenDone)\n result.addErrback(request.processingFailed)\n return NOT_DONE_YET"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _negotiateHandler(self, request):\n accept = _parseAccept(request.requestHeaders.getRawHeaders('Accept'))\n for contentType in accept.keys():\n handler = self._acceptHandlers.get(contentType.lower())\n if handler is not None:\n return handler, handler.contentType\n\n if self._fallback:\n handler = self._handlers[0]\n return handler, handler.contentType\n return NotAcceptable(), None", "response": "Negotiate a handler based on the content types acceptable to the\n client."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _parseAccept(headers):\n def sort(value):\n return float(value[1].get('q', 1))\n return OrderedDict(sorted(_splitHeaders(headers), key=sort, reverse=True))", "response": "Parse and sort an Accept header."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nsplit an HTTP header into two - tuple - like objects.", "response": "def _splitHeaders(headers):\n \"\"\"\n Split an HTTP header whose components are separated with commas.\n\n Each 
component is then split on semicolons and the component arguments\n    converted into a `dict`.\n\n    @rtype: `list` of 2-`tuple` of `bytes`, `dict`\n    @return: List of header arguments and mapping of component argument names\n        to values.\n    \"\"\"\n    return [cgi.parse_header(value)\n            for value in chain.from_iterable(\n                s.split(',') for s in headers\n                if s)]"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef contentEncoding(requestHeaders, encoding=None):\n    if encoding is None:\n        encoding = b'utf-8'\n    headers = _splitHeaders(\n        requestHeaders.getRawHeaders(b'Content-Type', []))\n    if headers:\n        return headers[0][1].get(b'charset', encoding)\n    return encoding", "response": "Extract an encoding from a Content-Type header."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef maybe(f, default=None):\n    @wraps(f)\n    def _maybe(x, *a, **kw):\n        if x is None:\n            return default\n        return f(x, *a, **kw)\n    return _maybe", "response": "Create a nil-safe callable decorator that returns a new object."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef settings(path=None, with_path=None):\n\n    if path:\n        Settings.bind(path, with_path=with_path)\n\n    return Settings._wrapped", "response": "Get or set the settings object"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nbinds user variable to settings", "response": "def bind(mod_path, with_path=None):\n    \"\"\"\n    bind user variables to `_wrapped`\n\n    .. note::\n\n        you don't need to call this method yourself.\n\n        program will call it in `cliez.parser.parse`\n\n\n    .. 
exception::\n\n        if path is not correct, it will cause an `ImportError`\n\n\n    :param str mod_path: module path, *use dot style, 'mod.mod1'*\n    :param str with_path: add path to `sys.path`,\n        if path is a file, use its parent.\n    :return: An instance of `Settings`\n    \"\"\"\n\n    if with_path:\n        if os.path.isdir(with_path):\n            sys.path.insert(0, with_path)\n        else:\n            sys.path.insert(0, with_path.rsplit('/', 2)[0])\n        pass\n\n    # raises `ImportError` if mod_path does not exist\n    mod = importlib.import_module(mod_path)\n\n    settings = Settings()\n\n    for v in dir(mod):\n        if v[0] == '_' or type(getattr(mod, v)).__name__ == 'module':\n            continue\n        setattr(settings, v, getattr(mod, v))\n        pass\n\n    Settings._path = mod_path\n    Settings._wrapped = settings\n\n    return settings"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_version():\n    version_module_path = os.path.join(\n        os.path.dirname(__file__), \"txspinneret\", \"_version.py\")\n    # The version module contains a variable called __version__\n    with open(version_module_path) as version_module:\n        exec(version_module.read())\n    return locals()[\"__version__\"]", "response": "Get the version from the version module without importing more than\n    necessary."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nsending a transaction immediately.", "response": "def send(self, use_open_peers=True, queue=True, **kw):\n        \"\"\"\n        send a transaction immediately. 
Failed transactions are picked up by the TxBroadcaster\n\n :param ip: specific peer IP to send tx to\n :param port: port of specific peer\n :param use_open_peers: use Arky's broadcast method\n \"\"\"\n\n if not use_open_peers:\n ip = kw.get('ip')\n port = kw.get('port')\n peer = 'http://{}:{}'.format(ip, port)\n res = arky.rest.POST.peer.transactions(peer=peer, transactions=[self.tx.tx])\n\n else:\n res = arky.core.sendPayload(self.tx.tx)\n\n if self.tx.success != '0.0%':\n self.tx.error = None\n self.tx.success = True\n else:\n self.tx.error = res['messages']\n self.tx.success = False\n\n self.tx.tries += 1\n self.tx.res = res\n\n if queue:\n self.tx.send = True\n\n self.__save()\n return res"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef check_confirmations_or_resend(self, use_open_peers=False, **kw):\n if self.confirmations() == 0:\n self.send(use_open_peers, **kw)", "response": "check if a tx is confirmed else resend it."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget list of all available command names.", "response": "def command_list():\n \"\"\"\n Get sub-command list\n\n .. 
note::\n\n Don't use logger handle this function errors.\n\n Because the error should be a code error,not runtime error.\n\n\n :return: `list` matched sub-parser\n \"\"\"\n from cliez.conf import COMPONENT_ROOT\n\n root = COMPONENT_ROOT\n\n if root is None:\n sys.stderr.write(\"cliez.conf.COMPONENT_ROOT not set.\\n\")\n sys.exit(2)\n pass\n\n if not os.path.exists(root):\n sys.stderr.write(\n \"please set a valid path for `cliez.conf.COMPONENT_ROOT`\\n\")\n sys.exit(2)\n pass\n\n try:\n path = os.listdir(os.path.join(root, 'components'))\n return [f[:-3] for f in path if\n f.endswith('.py') and f != '__init__.py']\n except FileNotFoundError:\n return []"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nadding class options to argparser options.", "response": "def append_arguments(klass, sub_parsers, default_epilog, general_arguments):\n \"\"\"\n Add class options to argparser options.\n\n :param cliez.component.Component klass: subclass of Component\n :param Namespace sub_parsers:\n :param str default_epilog: default_epilog\n :param list general_arguments: global options, defined by user\n :return: Namespace subparser\n \"\"\"\n\n entry_name = hump_to_underscore(klass.__name__).replace(\n '_component',\n '')\n\n # set sub command document\n epilog = default_epilog if default_epilog \\\n else 'This tool generate by `cliez` ' \\\n 'https://www.github.com/wangwenpei/cliez'\n\n sub_parser = sub_parsers.add_parser(entry_name, help=klass.__doc__,\n epilog=epilog)\n sub_parser.description = klass.add_arguments.__doc__\n\n # add slot arguments\n if hasattr(klass, 'add_slot_args'):\n slot_args = klass.add_slot_args() or []\n for v in slot_args:\n sub_parser.add_argument(*v[0], **v[1])\n sub_parser.description = klass.add_slot_args.__doc__\n pass\n\n user_arguments = klass.add_arguments() or []\n\n for v in user_arguments:\n sub_parser.add_argument(*v[0], **v[1])\n\n if not klass.exclude_global_option:\n for v in general_arguments:\n 
sub_parser.add_argument(*v[0], **v[1])\n\n return sub_parser"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef parse(parser, argv=None, settings_key='settings', no_args_func=None):\n\n argv = argv or sys.argv\n commands = command_list()\n\n if type(argv) not in [list, tuple]:\n raise TypeError(\"argv only can be list or tuple\")\n\n # match sub-parser\n if len(argv) >= 2 and argv[1] in commands:\n sub_parsers = parser.add_subparsers()\n class_name = argv[1].capitalize() + 'Component'\n\n from cliez.conf import (COMPONENT_ROOT,\n LOGGING_CONFIG,\n EPILOG,\n GENERAL_ARGUMENTS)\n\n sys.path.insert(0, os.path.dirname(COMPONENT_ROOT))\n mod = importlib.import_module(\n '{}.components.{}'.format(os.path.basename(COMPONENT_ROOT),\n argv[1]))\n\n # dynamic load component\n klass = getattr(mod, class_name)\n sub_parser = append_arguments(klass, sub_parsers, EPILOG,\n GENERAL_ARGUMENTS)\n options = parser.parse_args(argv[1:])\n\n settings = Settings.bind(\n getattr(options, settings_key)\n ) if settings_key and hasattr(options, settings_key) else None\n\n obj = klass(parser, sub_parser, options, settings)\n\n # init logger\n logger_level = logging.CRITICAL\n if hasattr(options, 'verbose'):\n if options.verbose == 1:\n logger_level = logging.ERROR\n elif options.verbose == 2:\n logger_level = logging.WARNING\n elif options.verbose == 3:\n logger_level = logging.INFO\n obj.logger.setLevel(logging.INFO)\n pass\n\n if hasattr(options, 'debug') and options.debug:\n logger_level = logging.DEBUG\n # http lib use a strange way to logging\n try:\n import http.client as http_client\n http_client.HTTPConnection.debuglevel = 1\n except Exception:\n # do nothing\n pass\n pass\n\n loggers = LOGGING_CONFIG['loggers']\n for k, v in loggers.items():\n v.setdefault('level', logger_level)\n if logger_level in [logging.INFO, logging.DEBUG]:\n v['handlers'] = ['stdout']\n pass\n\n logging_config.dictConfig(LOGGING_CONFIG)\n # this may 
not necessary\n # obj.logger.setLevel(logger_level)\n\n obj.run(options)\n\n # return object to make unit test easy\n return obj\n\n # print all sub commands when user set.\n if not parser.description and len(commands):\n sub_parsers = parser.add_subparsers()\n [sub_parsers.add_parser(v) for v in commands]\n pass\n pass\n\n options = parser.parse_args(argv[1:])\n if no_args_func and callable(no_args_func):\n return no_args_func(options)\n else:\n parser._print_message(\"nothing to do...\\n\")\n pass", "response": "parse an argument parser and return an instance of cliez. component. Component or a subclass of cliez. component. Component."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef include_file(filename, global_vars=None, local_vars=None):\n if global_vars is None:\n global_vars = sys._getframe(1).f_globals\n if local_vars is None:\n local_vars = sys._getframe(1).f_locals\n\n with open(filename, 'r') as f:\n code = compile(f.read(), os.path.basename(filename), 'exec')\n exec(code, global_vars, local_vars)\n pass", "response": "Include a file into the current context."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef hump_to_underscore(name):\n new_name = ''\n\n pos = 0\n for c in name:\n if pos == 0:\n new_name = c.lower()\n elif 65 <= ord(c) <= 90:\n new_name += '_' + c.lower()\n pass\n else:\n new_name += c\n pos += 1\n pass\n return new_name", "response": "Convert Hump style to underscore"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_fuel_prices(self) -> GetFuelPricesResponse:\n response = requests.get(\n '{}/prices'.format(API_URL_BASE),\n headers=self._get_headers(),\n timeout=self._timeout,\n )\n\n if not response.ok:\n raise FuelCheckError.create(response)\n\n return GetFuelPricesResponse.deserialize(response.json())", "response": "Fetches fuel prices 
for all stations."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget the fuel prices for a specific fuel station.", "response": "def get_fuel_prices_for_station(\n self,\n station: int\n ) -> List[Price]:\n \"\"\"Gets the fuel prices for a specific fuel station.\"\"\"\n response = requests.get(\n '{}/prices/station/{}'.format(API_URL_BASE, station),\n headers=self._get_headers(),\n timeout=self._timeout,\n )\n\n if not response.ok:\n raise FuelCheckError.create(response)\n\n data = response.json()\n return [Price.deserialize(data) for data in data['prices']]"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_fuel_prices_within_radius(\n self, latitude: float, longitude: float, radius: int,\n fuel_type: str, brands: Optional[List[str]] = None\n ) -> List[StationPrice]:\n \"\"\"Gets all the fuel prices within the specified radius.\"\"\"\n\n if brands is None:\n brands = []\n response = requests.post(\n '{}/prices/nearby'.format(API_URL_BASE),\n json={\n 'fueltype': fuel_type,\n 'latitude': latitude,\n 'longitude': longitude,\n 'radius': radius,\n 'brand': brands,\n },\n headers=self._get_headers(),\n timeout=self._timeout,\n )\n\n if not response.ok:\n raise FuelCheckError.create(response)\n\n data = response.json()\n stations = {\n station['code']: Station.deserialize(station)\n for station in data['stations']\n }\n station_prices = [] # type: List[StationPrice]\n for serialized_price in data['prices']:\n price = Price.deserialize(serialized_price)\n station_prices.append(StationPrice(\n price=price,\n station=stations[price.station_code]\n ))\n\n return station_prices", "response": "Gets all the fuel prices within the specified radius."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget the fuel price trends for the given location and fuel types.", "response": "def get_fuel_price_trends(self, latitude: float, longitude: float,\n fuel_types: 
List[str]) -> PriceTrends:\n \"\"\"Gets the fuel price trends for the given location and fuel types.\"\"\"\n response = requests.post(\n '{}/prices/trends/'.format(API_URL_BASE),\n json={\n 'location': {\n 'latitude': latitude,\n 'longitude': longitude,\n },\n 'fueltypes': [{'code': type} for type in fuel_types],\n },\n headers=self._get_headers(),\n timeout=self._timeout,\n )\n\n if not response.ok:\n raise FuelCheckError.create(response)\n\n data = response.json()\n return PriceTrends(\n variances=[\n Variance.deserialize(variance)\n for variance in data['Variances']\n ],\n average_prices=[\n AveragePrice.deserialize(avg_price)\n for avg_price in data['AveragePrices']\n ]\n )"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nfetch API reference data.", "response": "def get_reference_data(\n self,\n modified_since: Optional[datetime.datetime] = None\n ) -> GetReferenceDataResponse:\n \"\"\"\n Fetches API reference data.\n\n :param modified_since: The response will be empty if no\n changes have been made to the reference data since this\n timestamp, otherwise all reference data will be returned.\n \"\"\"\n\n if modified_since is None:\n modified_since = datetime.datetime(year=2010, month=1, day=1)\n\n response = requests.get(\n '{}/lovs'.format(API_URL_BASE),\n headers={\n 'if-modified-since': self._format_dt(modified_since),\n **self._get_headers(),\n },\n timeout=self._timeout,\n )\n\n if not response.ok:\n raise FuelCheckError.create(response)\n\n # return response.text\n return GetReferenceDataResponse.deserialize(response.json())"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncheck and get require config :param dict env: user config node :param list keys: check keys .. note:: option and key name must be same. :param dict defaults: default value for keys :return: dict.env with verified. .. 
exception:: will raise `ValueError` when some key missed.", "response": "def parse_require(self, env, keys, defaults={}):\n \"\"\"\n check and get require config\n :param dict env: user config node\n :param list keys: check keys\n .. note::\n\n option and key name must be same.\n\n :param dict defaults: default value for keys\n :return: dict.env with verified.\n\n\n .. exception::\n\n will raise `ValueError` when some key missed.\n\n \"\"\"\n\n for k in keys:\n env[k] = getattr(self.options, k) or env.get(k, None)\n\n if env[k] is None:\n self.error(\"config syntax error,\"\n \"please set `%s` in your env: %s\" % (k, env))\n\n pass\n\n for k, v in defaults.items():\n env.setdefault(k, v)\n\n return env"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef pre(self, command, output_dir, vars):\n # import pdb;pdb.set_trace()\n vars['license_name'] = 'Apache'\n vars['year'] = time.strftime('%Y', time.localtime())", "response": "Called before template is applied."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef Text(name, encoding=None):\n def _match(request, value):\n return name, query.Text(\n value,\n encoding=contentEncoding(request.requestHeaders, encoding))\n return _match", "response": "Match a route parameter."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nmatching an integer route parameter.", "response": "def Integer(name, base=10, encoding=None):\n \"\"\"\n Match an integer route parameter.\n\n :type name: `bytes`\n :param name: Route parameter name.\n\n :type base: `int`\n :param base: Base to interpret the value in.\n\n :type encoding: `bytes`\n :param encoding: Default encoding to assume if the ``Content-Type``\n header is lacking one.\n\n :return: ``callable`` suitable for use with `route` or `subroute`.\n \"\"\"\n def _match(request, value):\n return name, query.Integer(\n value,\n base=base,\n 
encoding=contentEncoding(request.requestHeaders, encoding))\n    return _match"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nmatching a request path against the resource hierarchy.", "response": "def _matchRoute(components, request, segments, partialMatching):\n    \"\"\"\n    Match a request path against our path components.\n\n    The path components are always matched relative to their parent in the\n    resource hierarchy; in other words, it is only possible to match URIs nested\n    more deeply than the parent resource.\n\n    :type  components: ``iterable`` of `bytes` or `callable`\n    :param components: Iterable of path components, to match against the\n        request, either static strings or dynamic parameters. As a convenience,\n        a single `bytes` component containing ``/`` may be given instead of\n        manually separating the components. If no components are given the null\n        route is matched, this is the case where ``segments`` is empty.\n\n    :type  segments: ``sequence`` of `bytes`\n    :param segments: Sequence of path segments, from the request, to match\n        against.\n\n    :type  partialMatching: `bool`\n    :param partialMatching: Allow partial matching against the request path?\n\n    :rtype: 2-`tuple` of `dict` keyed on `bytes` and `list` of `bytes`\n    :return: Pair of parameter results, mapping parameter names to processed\n        values, and a list of the remaining request path segments. 
If there is\n no route match the result will be ``None`` and the original request path\n segments.\n \"\"\"\n if len(components) == 1 and isinstance(components[0], bytes):\n components = components[0]\n if components[:1] == '/':\n components = components[1:]\n components = components.split('/')\n\n results = OrderedDict()\n NO_MATCH = None, segments\n remaining = list(segments)\n\n # Handle the null route.\n if len(segments) == len(components) == 0:\n return results, remaining\n\n for us, them in izip_longest(components, segments):\n if us is None:\n if partialMatching:\n # We've matched all of our components, there might be more\n # segments for something else to process.\n break\n else:\n return NO_MATCH\n elif them is None:\n # We've run out of path segments to match, so this route can't be\n # the matching one.\n return NO_MATCH\n\n if callable(us):\n name, match = us(request, them)\n if match is None:\n return NO_MATCH\n results[name] = match\n elif us != them:\n return NO_MATCH\n remaining.pop(0)\n\n return results, remaining"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ndecorating a router-producing callable to instead produce a resource. This simply produces a new callable that invokes the original callable, and calls ``resource`` on the ``routerAttribute``. If the router producer has multiple routers the attribute can be altered to choose the appropriate one, for example: .. code-block:: python class _ComplexRouter(object): router = Router() privateRouter = Router() @router.route('/') def publicRoot(self, request, params): return SomethingPublic(...) @privateRouter.route('/') def privateRoot(self, request, params): return SomethingPrivate(...) PublicResource = routedResource(_ComplexRouter) PrivateResource = routedResource(_ComplexRouter, 'privateRouter') :type f: ``callable`` :param f: Callable producing an object with a `Router` attribute, for example, a type. 
:type routerAttribute: `str` :param routerAttribute: Name of the `Router` attribute on the result of calling ``f``. :rtype: `callable` :return: Callable producing an `IResource`.", "response": "def routedResource(f, routerAttribute='router'):\n \"\"\"\n Decorate a router-producing callable to instead produce a resource.\n\n This simply produces a new callable that invokes the original callable, and\n calls ``resource`` on the ``routerAttribute``.\n\n If the router producer has multiple routers the attribute can be altered to\n choose the appropriate one, for example:\n\n .. code-block:: python\n\n class _ComplexRouter(object):\n router = Router()\n privateRouter = Router()\n\n @router.route('/')\n def publicRoot(self, request, params):\n return SomethingPublic(...)\n\n @privateRouter.route('/')\n def privateRoot(self, request, params):\n return SomethingPrivate(...)\n\n PublicResource = routedResource(_ComplexRouter)\n PrivateResource = routedResource(_ComplexRouter, 'privateRouter')\n\n :type f: ``callable``\n :param f: Callable producing an object with a `Router` attribute, for\n example, a type.\n\n :type routerAttribute: `str`\n :param routerAttribute: Name of the `Router` attribute on the result of\n calling ``f``.\n\n :rtype: `callable`\n :return: Callable producing an `IResource`.\n \"\"\"\n return wraps(f)(\n lambda *a, **kw: getattr(f(*a, **kw), routerAttribute).resource())"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _forObject(self, obj):\n router = type(self)()\n router._routes = list(self._routes)\n router._self = obj\n return router", "response": "Create a new Router instance for the given object."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _addRoute(self, f, matcher):\n self._routes.append((f.func_name, f, matcher))", "response": "Add a route handler and matcher to the collection of possible routes."} {"SOURCE": 
"codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef route(self, *components):\n        def _factory(f):\n            self._addRoute(f, route(*components))\n            return f\n        return _factory", "response": "Decorator for route methods."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nsee `txspinneret.route.subroute`. This decorator can be stacked with itself to specify multiple routes with a single handler.", "response": "def subroute(self, *components):\n        \"\"\"\n        See `txspinneret.route.subroute`.\n\n        This decorator can be stacked with itself to specify multiple routes\n        with a single handler.\n        \"\"\"\n        def _factory(f):\n            self._addRoute(f, subroute(*components))\n            return f\n        return _factory"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _tempfile(filename):\n    return tempfile.NamedTemporaryFile(mode='w',\n                                       dir=os.path.dirname(filename),\n                                       prefix=os.path.basename(filename),\n                                       suffix=os.fsencode('.tmp'),\n                                       delete=False)", "response": "Create a NamedTemporaryFile instance for the file filename."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nopens a NamedTemporaryFile handle in a context manager", "response": "def atomic_write(filename):\n    \"\"\"\n    Open a NamedTemporaryFile handle in a context manager\n    \"\"\"\n    f = _tempfile(os.fsencode(filename))\n\n    try:\n        yield f\n\n    finally:\n        f.close()\n        # replace the original file with the new temp file (atomic on success)\n        os.replace(f.name, filename)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_item(filename, uuid):\n    with open(os.fsencode(str(filename)), \"r\") as f:\n        data = json.load(f)\n    results = [i for i in data if i[\"uuid\"] == str(uuid)]\n    if results:\n        return results\n    return None", "response": "Read entry from JSON file and return the object"} {"SOURCE": "codesearchnet", 
"instruction": "Write a Python 3 script for\nsaving entry to JSON file", "response": "def set_item(filename, item):\n \"\"\"\n Save entry to JSON file\n \"\"\"\n with atomic_write(os.fsencode(str(filename))) as temp_file:\n with open(os.fsencode(str(filename))) as products_file:\n # load the JSON data into memory\n products_data = json.load(products_file)\n # check if UUID already exists\n uuid_list = [i for i in filter(\n lambda z: z[\"uuid\"] == str(item[\"uuid\"]), products_data)]\n if len(uuid_list) == 0:\n # add the new item to the JSON file\n products_data.append(item)\n # save the new JSON to the temp file\n json.dump(products_data, temp_file)\n return True\n return None"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef update_item(filename, item, uuid):\n with atomic_write(os.fsencode(str(filename))) as temp_file:\n with open(os.fsencode(str(filename))) as products_file:\n # load the JSON data into memory\n products_data = json.load(products_file)\n # apply modifications to the JSON data wrt UUID\n # TODO: handle this in a neat way\n if 'products' in products_data[-1]:\n # handle orders object\n [products_data[i][\"products\"][0].update(item) for (\n i, j) in enumerate(products_data) if j[\"uuid\"] == str(uuid)]\n else:\n # handle products object\n [products_data[i].update(item) for (i, j) in enumerate(\n products_data) if j[\"uuid\"] == str(uuid)]\n # save the modified JSON data into the temp file\n json.dump(products_data, temp_file)\n return True", "response": "Update the item in the JSON file by UUID in the JSON file\n "} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncheck the database connection", "response": "def run(self, options):\n \"\"\"\n .. 
todo::\n\n check network connection\n\n :param Namespace options: parse result from argparse\n :return:\n \"\"\"\n self.logger.debug(\"debug enabled...\")\n\n depends = ['git']\n nil_tools = []\n\n self.logger.info(\"depends list: %s\", depends)\n\n for v in depends:\n real_path = shutil.which(v)\n if real_path:\n self.print_message(\"Found {}:{}...\"\n \" {}\".format(v,\n real_path,\n termcolor.colored(\n '[OK]',\n color='blue')))\n else:\n nil_tools.append(v)\n self.error_message(\n 'Missing tool:`{}`... {}'.format(v, '[ERR]'), prefix='',\n suffix='')\n\n pass\n\n if nil_tools:\n self.print_message('')\n self.error(\"please install missing tools...\")\n else:\n self.print_message(\"\\nNo error found,\"\n \"you can use cliez in right way.\")\n self.logger.debug(\"check finished...\")\n pass\n\n pass"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef experiment_data(self, commit=None, must_contain_results=False):\n results = {}\n for tag in self.__repository.tags:\n if not tag.name.startswith(self.__tag_prefix):\n continue\n data = json.loads(tag.tag.message)\n if \"results\" not in data and must_contain_results:\n continue\n if commit is not None and tag.tag.object.hexsha != name_to_object(self.__repository, commit).hexsha:\n continue\n results[tag.name] = data\n return results", "response": "returns all the experiment data"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ndelete an experiment by removing the associated tag.", "response": "def delete(self, experiment_name):\n \"\"\"\n Delete an experiment by removing the associated tag.\n :param experiment_name: the name of the experiment to be deleted\n :type experiment_name: str\n :rtype bool\n :return if deleting succeeded\n \"\"\"\n if not experiment_name.startswith(self.__tag_prefix):\n target_tag = self.__tag_prefix + experiment_name\n else:\n target_tag = experiment_name\n if target_tag not in [t.name for 
t in self.__repository.tags]:\n            return False\n        self.__repository.delete_tag(target_tag)\n        return target_tag not in [t.name for t in self.__repository.tags]"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef main():\n    plugin = Register()\n    if plugin.args.option == 'command':\n        plugin.command_handle()\n    else:\n        plugin.unknown(\"Unknown actions.\")", "response": "Register your own mode and handle method here."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngets the number of the shell command.", "response": "def command_handle(self):\n        \"\"\"Get the number of the shell command.\"\"\"\n        self.__results = self.execute(self.args.command)\n        self.close()\n\n        self.logger.debug(\"results: {}\".format(self.__results))\n        if not self.__results:\n            self.unknown(\"{} return nothing.\".format(self.args.command))\n        if len(self.__results) != 1:\n            self.unknown(\n                \"{} return more than one number.\".format(\n                    self.args.command))\n        self.__result = int(self.__results[0])\n        self.logger.debug(\"result: {}\".format(self.__result))\n        if not isinstance(self.__result, (int, long)):\n            self.unknown(\n                \"{} didn't return a single number.\".format(\n                    self.args.command))\n\n        status = self.ok\n        # Compare the value.\n        if self.__result > self.args.warning:\n            status = self.warning\n        if self.__result > self.args.critical:\n            status = self.critical\n\n        # Output\n        self.shortoutput = \"{0} return {1}.\".format(\n            self.args.command, self.__result)\n        [self.longoutput.append(line)\n         for line in self.__results if self.__results]\n        self.perfdata.append(\"{command}={result};{warn};{crit};0;\".format(\n            crit=self.args.critical,\n            warn=self.args.warning,\n            result=self.__result,\n            command=self.args.command))\n\n        # Return status with message to Nagios.\n        status(self.output(long_output_limit=None))\n        self.logger.debug(\"Return status and exit to Nagios.\")"} {"SOURCE": "codesearchnet", "instruction": "Write a 
Python 3 script for\nexecuting a shell command.", "response": "def execute(self, command, timeout=None):\n        \"\"\"Execute a shell command.\"\"\"\n        try:\n            self.channel = self.ssh.get_transport().open_session()\n        except paramiko.SSHException as e:\n            self.unknown(\"Create channel error: %s\" % e)\n        try:\n            self.channel.settimeout(self.args.timeout if not timeout else timeout)\n        except socket.timeout as e:\n            self.unknown(\"Settimeout for channel error: %s\" % e)\n        try:\n            self.logger.debug(\"command: {}\".format(command))\n            self.channel.exec_command(command)\n        except paramiko.SSHException as e:\n            self.unknown(\"Execute command error: %s\" % e)\n        try:\n            self.stdin = self.channel.makefile('wb', -1)\n            self.stderr = map(string.strip, self.channel.makefile_stderr('rb', -1).readlines())\n            self.stdout = map(string.strip, self.channel.makefile('rb', -1).readlines())\n        except Exception as e:\n            self.unknown(\"Get result error: %s\" % e)\n        try:\n            self.status = self.channel.recv_exit_status()\n        except paramiko.SSHException as e:\n            self.unknown(\"Get return code error: %s\" % e)\n        else:\n            if self.status != 0:\n                self.unknown(\"Return code: %d , stderr: %s\" % (self.status, self.errors))\n            else:\n                return self.stdout\n        finally:\n            self.logger.debug(\"Execute command finish.\")"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef close(self):\n        try:\n            self.ssh.close()\n            self.logger.debug(\"close connect succeed.\")\n        except paramiko.SSHException as e:\n            self.unknown(\"close connect error: %s\" % e)", "response": "Close and exit the connection."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef slinky(filename, seconds_available, bucket_name, aws_key, aws_secret):\n    if not (os.environ.get('AWS_ACCESS_KEY_ID') and os.environ.get('AWS_SECRET_ACCESS_KEY')):\n    \tprint('Need to set environment variables for AWS access and create a slinky bucket.')\n    \texit()\n    \n    print(create_temp_s3_link(filename, seconds_available, bucket_name))", "response": "Create a temp S3 link."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nchecks if the current process is readable.", "response": "def check_readable(self, timeout):\n        \"\"\"\n        Poll ``self.stdout`` and return True if it is readable.\n\n        :param float timeout: seconds to wait for I/O\n        :return: True if readable, else False\n        :rtype: boolean\n        \"\"\"\n        rlist, wlist, xlist = select.select([self._stdout], [], [], timeout)\n        return bool(len(rlist))"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nretrieving a list of characters and escape codes where each escape code uses only one index.", "response": "def get_indices_list(s: Any) -> List[str]:\n    \"\"\" Retrieve a list of characters and escape codes where each escape\n        code uses only one index. The indexes will not match up with the\n        indexes in the original string.\n    \"\"\"\n    indices = get_indices(s)\n    return [\n        indices[i] for i in sorted(indices, key=int)\n    ]"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nstrips all color codes from a string.", "response": "def strip_codes(s: Any) -> str:\n    \"\"\" Strip all color codes from a string.\n        Returns empty string for \"falsey\" inputs.\n    \"\"\"\n    return codepat.sub('', str(s) if (s or (s == 0)) else '')"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef init_build(self, asset, builder):\n    if not self.abs_path:\n        rel_path = utils.prepare_path(self.rel_bundle_path)\n        self.abs_bundle_path = utils.prepare_path([builder.config.input_dir, rel_path])\n        self.abs_path = True\n    self.input_dir = builder.config.input_dir", "response": "Called when builder group collects files\n    Resolves absolute url if relative"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nadding a single file or list of files to bundle", "response": "def 
add_file(self, *args):\n \"\"\"\n Add single file or list of files to bundle\n\n :type: file_path: str|unicode\n \"\"\"\n for file_path in args:\n self.files.append(FilePath(file_path, self))"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nadds a directory or directories list to the bundle.", "response": "def add_directory(self, *args, **kwargs):\n \"\"\"\n Add directory or directories list to bundle\n\n :param exclusions: List of excluded paths\n\n :type path: str|unicode\n :type exclusions: list\n \"\"\"\n exc = kwargs.get('exclusions', None)\n for path in args:\n self.files.append(DirectoryPath(path, self, exclusions=exc))"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef add_path_object(self, *args):\n for obj in args:\n obj.bundle = self\n self.files.append(obj)", "response": "Add custom path objects to the list of files."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef add_prepare_handler(self, prepare_handlers):\n if not isinstance(prepare_handlers, static_bundle.BUNDLE_ITERABLE_TYPES):\n prepare_handlers = [prepare_handlers]\n if self.prepare_handlers_chain is None:\n self.prepare_handlers_chain = []\n for handler in prepare_handlers:\n self.prepare_handlers_chain.append(handler)", "response": "Adds prepare handler to the prepare_handlers_chain."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef prepare(self):\n result_files = self.collect_files()\n chain = self.prepare_handlers_chain\n if chain is None:\n # default handlers\n chain = [\n LessCompilerPrepareHandler()\n ]\n for prepare_handler in chain:\n result_files = prepare_handler.prepare(result_files, self)\n return result_files", "response": "Called when builder run collect files in group\n "} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nregister your 
own mode and handle method here.", "response": "def main():\n \"\"\"Register your own mode and handle method here.\"\"\"\n plugin = Register()\n if plugin.args.option == 'filenumber':\n plugin.filenumber_handle()\n else:\n plugin.unknown(\"Unknown actions.\")"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef filenumber_handle(self):\n self.__results = []\n self.__dirs = []\n self.__files = []\n self.__ftp = self.connect()\n self.__ftp.dir(self.args.path, self.__results.append)\n self.logger.debug(\"dir results: {}\".format(self.__results))\n self.quit()\n\n status = self.ok\n\n for data in self.__results:\n if \"<DIR>\" in data:\n self.__dirs.append(str(data.split()[3]))\n else:\n self.__files.append(str(data.split()[2]))\n\n self.__result = len(self.__files)\n self.logger.debug(\"result: {}\".format(self.__result))\n\n # Compare the value.\n if self.__result > self.args.warning:\n status = self.warning\n if self.__result > self.args.critical:\n status = self.critical\n\n # Output\n self.shortoutput = \"Found {0} files in {1}.\".format(self.__result,\n self.args.path)\n [self.longoutput.append(line)\n for line in self.__results if self.__results]\n self.perfdata.append(\"{path}={result};{warn};{crit};0;\".format(\n crit=self.args.critical,\n warn=self.args.warning,\n result=self.__result,\n path=self.args.path))\n\n self.logger.debug(\"Return status and output.\")\n status(self.output())", "response": "Handle the filenumber subcommand."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nregister the contents of the JSON string as the data dictionary.", "response": "def register_json(self, data):\n \"\"\"\n Register the contents as JSON\n \"\"\"\n j = json.loads(data)\n self.last_data_timestamp = \\\n datetime.datetime.utcnow().replace(microsecond=0).isoformat()\n try:\n for v in j:\n # prepare the sensor entry container\n self.data[v[self.id_key]] = {}\n # add the mandatory 
entries\n self.data[v[self.id_key]][self.id_key] = \\\n v[self.id_key]\n self.data[v[self.id_key]][self.value_key] = \\\n v[self.value_key]\n # add the optional well known entries if provided\n if self.unit_key in v:\n self.data[v[self.id_key]][self.unit_key] = \\\n v[self.unit_key]\n if self.threshold_key in v:\n self.data[v[self.id_key]][self.threshold_key] = \\\n v[self.threshold_key]\n # add any further entries found\n for k in self.other_keys:\n if k in v:\n self.data[v[self.id_key]][k] = v[k]\n # add the custom sensor time\n if self.sensor_time_key in v:\n self.data[v[self.id_key]][self.sensor_time_key] = \\\n v[self.sensor_time_key]\n # last: add the time the data was received (overwriting any\n # not properly defined timestamp that was already there)\n self.data[v[self.id_key]][self.time_key] = \\\n self.last_data_timestamp\n except KeyError as e:\n print(\"The main key was not found on the serial input line: \" + \\\n str(e))\n except ValueError as e:\n print(\"No valid JSON string received. Waiting for the next turn.\")\n print(\"The error was: \" + str(e))"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_text(self):\n t = \"==== \" + str(self.last_data_timestamp) + \" ====\\n\"\n for k in self.data:\n t += k + \" \" + str(self.data[k][self.value_key])\n u = \"\"\n if self.unit_key in self.data[k]:\n u = self.data[k][self.unit_key]\n t += u\n if self.threshold_key in self.data[k]:\n if (self.data[k][self.threshold_key] < \\\n self.data[k][self.value_key]):\n t += \" !Warning: Value is over threshold: \" + \\\n str(self.data[k][self.threshold_key]) + \"!\"\n else:\n t += \" (\" + str(self.data[k][self.threshold_key]) + u + \")\"\n for l in self.other_keys:\n if l in self.data[k]:\n t += \" \" + self.data[k][l]\n t += \"\\n\"\n return t", "response": "Get the data in text form, i.e. 
human readable"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ntranslates the data with the translation table", "response": "def get_translated_data(self):\n \"\"\"\n Translate the data with the translation table\n \"\"\"\n j = {}\n for k in self.data:\n d = {}\n for l in self.data[k]:\n d[self.translation_keys[l]] = self.data[k][l]\n j[k] = d\n return j"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget the data in JSON form", "response": "def get_json(self, prettyprint=False, translate=True):\n \"\"\"\n Get the data in JSON form\n \"\"\"\n j = []\n if translate:\n d = self.get_translated_data()\n else:\n d = self.data\n for k in d:\n j.append(d[k])\n if prettyprint:\n j = json.dumps(j, indent=2, separators=(',',': '))\n else:\n j = json.dumps(j)\n return j"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_json_tuples(self, prettyprint=False, translate=True):\n j = self.get_json(prettyprint, translate)\n if len(j) > 2:\n if prettyprint:\n j = j[1:-2] + \",\\n\"\n else:\n j = j[1:-1] + \",\"\n else:\n j = \"\"\n return j", "response": "Get the data as JSON tuples"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nissue a GET request against the API and returns the JSON response", "response": "def get(self, url, params={}):\n \"\"\"\n Issues a GET request against the API, properly formatting the params\n\n :param url: a string, the url you are requesting\n :param params: a dict, the key-value of all the paramaters needed\n in the request\n :returns: a dict parsed of the JSON response\n \"\"\"\n\n params.update({'api_key': self.api_key})\n try:\n response = requests.get(self.host + url, params=params)\n except RequestException as e:\n response = e.args\n\n return self.json_parse(response.content)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nissues a 
POST request against the API and returns the JSON response", "response": "def post(self, url, params={}, files=None):\n \"\"\"\n Issues a POST request against the API, allows for multipart data uploads\n\n :param url: a string, the url you are requesting\n :param params: a dict, the key-value of all the parameters needed\n in the request\n :param files: a list, the list of tuples of files\n\n :returns: a dict parsed of the JSON response\n \"\"\"\n params.update({'api_key': self.api_key})\n try:\n response = requests.post(self.host + url, data=params, files=files)\n return self.json_parse(response.content)\n except RequestException as e:\n return self.json_parse(e.args)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef json_parse(self, content):\n try:\n data = json.loads(content)\n except ValueError as e:\n return {'meta': { 'status': 500, 'msg': 'Server Error'}, 'response': {\"error\": \"Malformed JSON or HTML was returned.\"}}\n \n #We only really care about the response if we succeed\n #and the error if we fail\n if 'error' in data:\n return {'meta': { 'status': 400, 'msg': 'Bad Request'}, 'response': {\"error\": data['error']}}\n elif 'result' in data:\n return data['result']\n else:\n return {}", "response": "Wraps and abstracts content validation and JSON parsing to make sure the user gets the correct response."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nload the values from the environment variable map and apply any filters.", "response": "def load_values(self):\n \"\"\"\n Go through the env var map, transferring the values to this object\n as attributes.\n\n :raises: RuntimeError if a required env var isn't defined.\n \"\"\"\n\n for config_name, evar in self.evar_defs.items():\n if evar.is_required and evar.name not in os.environ:\n raise RuntimeError((\n \"Missing required environment variable: {evar_name}\\n\"\n \"{help_txt}\"\n 
).format(evar_name=evar.name, help_txt=evar.help_txt))\n # Env var is present. Transfer its value over.\n if evar.name in os.environ:\n self[config_name] = os.environ.get(evar.name)\n else:\n self[config_name] = evar.default_val\n # Perform any validations or transformations.\n for filter in evar.filters:\n current_val = self.get(config_name)\n new_val = filter(current_val, evar)\n self[config_name] = new_val\n # This is the top-level filter that is often useful for checking\n # the values of related env vars (instead of individual validation).\n self._filter_all()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef embed_data(request):\n result = _EmbedDataFixture(request)\n result.delete_data_dir()\n result.create_data_dir()\n yield result\n result.delete_data_dir()", "response": "Embed data into a temporary directory with input data for the test module."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_filename(self, *parts):\n '''\n Returns an absolute filename in the data-directory (standardized by StandardizePath).\n\n @params parts: list(unicode)\n Path parts. Each part is joined to form a path.\n\n :rtype: unicode\n :returns:\n The full path prefixed with the data-directory.\n\n @remarks:\n This method triggers the data-directory creation.\n '''\n from zerotk.easyfs import StandardizePath\n\n result = [self._data_dir] + list(parts)\n result = '/'.join(result)\n return StandardizePath(result)", "response": "Returns an absolute filename in the data - directory."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef assert_equal_files(self, obtained_fn, expected_fn, fix_callback=lambda x:x, binary=False, encoding=None):\n '''\n Compare two files contents. 
If the files differ, show the diff and write a nice HTML\n diff file into the data directory.\n\n Searches for the filenames both inside and outside the data directory (in that order).\n\n :param unicode obtained_fn: basename to obtained file into the data directory, or full path.\n\n :param unicode expected_fn: basename to expected file into the data directory, or full path.\n\n :param bool binary:\n Thread both files as binary files.\n\n :param unicode encoding:\n File's encoding. If not None, contents obtained from file will be decoded using this\n `encoding`.\n\n :param callable fix_callback:\n A callback to \"fix\" the contents of the obtained (first) file.\n This callback receives a list of strings (lines) and must also return a list of lines,\n changed as needed.\n The resulting lines will be used to compare with the contents of expected_fn.\n\n :param bool binary:\n .. seealso:: zerotk.easyfs.GetFileContents\n '''\n import os\n from zerotk.easyfs import GetFileContents, GetFileLines\n\n __tracebackhide__ = True\n import io\n\n def FindFile(filename):\n # See if this path exists in the data dir\n data_filename = self.get_filename(filename)\n if os.path.isfile(data_filename):\n return data_filename\n\n # If not, we might have already received a full path\n if os.path.isfile(filename):\n return filename\n\n # If we didn't find anything, raise an error\n from ._exceptions import MultipleFilesNotFound\n raise MultipleFilesNotFound([filename, data_filename])\n\n obtained_fn = FindFile(obtained_fn)\n expected_fn = FindFile(expected_fn)\n\n if binary:\n obtained_lines = GetFileContents(obtained_fn, binary=True)\n expected_lines = GetFileContents(expected_fn, binary=True)\n assert obtained_lines == expected_lines\n else:\n obtained_lines = fix_callback(GetFileLines(obtained_fn, encoding=encoding))\n expected_lines = GetFileLines(expected_fn, encoding=encoding)\n\n if obtained_lines != expected_lines:\n html_fn = os.path.splitext(obtained_fn)[0] + '.diff.html'\n 
html_diff = self._generate_html_diff(\n expected_fn, expected_lines, obtained_fn, obtained_lines)\n with io.open(html_fn, 'w') as f:\n f.write(html_diff)\n\n import difflib\n diff = ['FILES DIFFER:', obtained_fn, expected_fn]\n diff += ['HTML DIFF: %s' % html_fn]\n diff += difflib.context_diff(obtained_lines, expected_lines)\n raise AssertionError('\\n'.join(diff) + '\\n')", "response": "Compare two files' contents."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _generate_html_diff(self, expected_fn, expected_lines, obtained_fn, obtained_lines):\n import difflib\n differ = difflib.HtmlDiff()\n return differ.make_file(\n fromlines=expected_lines,\n fromdesc=expected_fn,\n tolines=obtained_lines,\n todesc=obtained_fn,\n )", "response": "Generate a nice side-by-side diff of the given files."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nadd a peer or multiple peers to the PEERS variable, takes a single string or a list of strings.", "response": "def add_peer(self, peer):\n \"\"\"\n Add a peer or multiple peers to the PEERS variable, takes a single string or a list.\n\n :param peer(list or string)\n \"\"\"\n if type(peer) == list:\n for i in peer:\n check_url(i)\n self.PEERS.extend(peer)\n elif type(peer) == str:\n check_url(peer)\n self.PEERS.append(peer)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nremoving one or multiple peers from PEERS variable", "response": "def remove_peer(self, peer):\n \"\"\"\n remove one or multiple peers from PEERS variable\n\n :param peer(list or string):\n \"\"\"\n if type(peer) == list:\n for x in peer:\n check_url(x)\n for i in self.PEERS:\n if x in i:\n self.PEERS.remove(i)\n elif type(peer) == str:\n check_url(peer)\n for i in self.PEERS:\n if peer == i:\n self.PEERS.remove(i)\n else:\n raise ValueError('peer parameter did not pass url validation')"} {"SOURCE": "codesearchnet", 
"instruction": "Implement a function in Python 3 to\ncheck the status of the network and the peers", "response": "def status(self):\n \"\"\"\n check the status of the network and the peers\n\n :return: network_height, peer_status\n \"\"\"\n peer = random.choice(self.PEERS)\n formatted_peer = 'http://{}:4001'.format(peer)\n peerdata = requests.get(url=formatted_peer + '/api/peers/').json()['peers']\n peers_status = {}\n\n networkheight = max([x['height'] for x in peerdata])\n\n for i in peerdata:\n if 'http://{}:4001'.format(i['ip']) in self.PEERS:\n peers_status.update({i['ip']: {\n 'height': i['height'],\n 'status': i['status'],\n 'version': i['version'],\n 'delay': i['delay'],\n }})\n\n return {\n 'network_height': networkheight,\n 'peer_status': peers_status\n }"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef broadcast_tx(self, address, amount, secret, secondsecret=None, vendorfield=''):\n\n peer = random.choice(self.PEERS)\n park = Park(\n peer,\n 4001,\n constants.ARK_NETHASH,\n '1.1.1'\n )\n\n return park.transactions().create(address, str(amount), vendorfield, secret, secondsecret)", "response": "broadcasts a transaction to the peerslist using ark - js library"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef register(self, service, name=''):\n try:\n is_model = issubclass(service, orb.Model)\n except StandardError:\n is_model = False\n\n # expose an ORB table dynamically as a service\n if is_model:\n self.services[service.schema().dbname()] = (ModelService, service)\n\n else:\n super(OrbApiFactory, self).register(service, name=name)", "response": "Registers a given service to this API."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nregistering your own mode and handle method here.", "response": "def main():\n \"\"\"Register your own mode and handle method here.\"\"\"\n plugin = Register()\n if 
plugin.args.option == 'sql':\n plugin.sql_handle()\n else:\n plugin.unknown(\"Unknown actions.\")"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef prepare(self, input_files, bundle):\n out = []\n for input_file in input_files:\n if input_file.extension == \"less\" and os.path.isfile(input_file.abs_path):\n output_file = self.get_compile_file(input_file, bundle)\n self.compile(input_file, output_file)\n out.append(output_file)\n else:\n out.append(input_file)\n return out", "response": "Prepares the list of files for the current locale."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef debug(*args, **kwargs):\n if not (DEBUG and args):\n return None\n\n # Include parent class name when given.\n parent = kwargs.get('parent', None)\n with suppress(KeyError):\n kwargs.pop('parent')\n\n # Go back more than once when given.\n backlevel = kwargs.get('back', 1)\n with suppress(KeyError):\n kwargs.pop('back')\n\n frame = inspect.currentframe()\n # Go back a number of frames (usually 1).\n while backlevel > 0:\n frame = frame.f_back\n backlevel -= 1\n fname = os.path.split(frame.f_code.co_filename)[-1]\n lineno = frame.f_lineno\n if parent:\n func = '{}.{}'.format(parent.__class__.__name__, frame.f_code.co_name)\n else:\n func = frame.f_code.co_name\n\n lineinfo = '{}:{} {}: '.format(\n C(fname, 'yellow'),\n C(str(lineno).ljust(4), 'blue'),\n C().join(C(func, 'magenta'), '()').ljust(20)\n )\n # Patch args to stay compatible with print().\n pargs = list(C(a, 'green').str() for a in args)\n pargs[0] = ''.join((lineinfo, pargs[0]))\n print_err(*pargs, **kwargs)", "response": "Print a message only if DEBUG is truthy."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nparses a string as an integer.", "response": "def parse_int(s):\n \"\"\" Parse a string as an integer.\n Exit with a message on failure.\n \"\"\"\n try:\n val = 
int(s)\n except ValueError:\n print_err('\\nInvalid integer: {}'.format(s))\n sys.exit(1)\n return val"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ntrying to read a file and return it s content.", "response": "def try_read_file(s):\n \"\"\" If `s` is a file name, read the file and return it's content.\n Otherwise, return the original string.\n Returns None if the file was opened, but errored during reading.\n \"\"\"\n try:\n with open(s, 'r') as f:\n data = f.read()\n except FileNotFoundError:\n # Not a file name.\n return s\n except EnvironmentError as ex:\n print_err('\\nFailed to read file: {}\\n {}'.format(s, ex))\n return None\n return data"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef close(self):\n self._tempfile.close()\n self._process.terminate()\n if self._process.is_alive():\n self._process.kill()", "response": "Disconnect and close the temporary file and process."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef send_keys(self, keys, wait=True):\n self._process.stdin.write(bytearray(keys, self._encoding))\n self._process.stdin.flush()\n if wait:\n self.wait()", "response": "Send a raw key sequence to the Vim server."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nwaiting for response until timeout is reached.", "response": "def wait(self, timeout=None):\n \"\"\"\n Wait for response until timeout.\n If timeout is specified to None, ``self.timeout`` is used.\n\n :param float timeout: seconds to wait I/O\n \"\"\"\n if timeout is None:\n timeout = self._timeout\n while self._process.check_readable(timeout):\n self._flush()"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ninstalls the Vim plugin.", "response": "def install_plugin(self, dir, entry_script=None):\n \"\"\"\n Install *Vim* plugin.\n\n :param string dir: the root 
directory contains *Vim* script\n :param string entry_script: path to the initializing script\n \"\"\"\n self.runtimepath.append(dir)\n if entry_script is not None:\n self.command('runtime! {0}'.format(entry_script), False)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef command(self, command, capture=True):\n if capture:\n self.command('redir! >> {0}'.format(self._tempfile.name), False)\n self.set_mode('command')\n self.send_keys('{0}\\n'.format(command))\n if capture:\n self.command('redir END', False)\n return self._tempfile.read().strip('\\n')", "response": "Execute a command on the Vim."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nsetting the mode of the current locale.", "response": "def set_mode(self, mode):\n \"\"\"\n Set *Vim* mode to ``mode``.\n Supported modes:\n\n * ``normal``\n * ``insert``\n * ``command``\n * ``visual``\n * ``visual-block``\n\n\n This method behave as setter-only property.\n\n Example:\n\n >>> import headlessvim\n >>> with headlessvim.open() as vim:\n ... vim.set_mode('insert')\n ... 
vim.mode = 'normal' # also accessible as property\n ...\n\n :param string mode: *Vim* mode to set\n :raises ValueError: if ``mode`` is not supported\n \"\"\"\n keys = '\\033\\033'\n if mode == 'normal':\n pass\n elif mode == 'insert':\n keys += 'i'\n elif mode == 'command':\n keys += ':'\n elif mode == 'visual':\n keys += 'v'\n elif mode == 'visual-block':\n keys += 'V'\n else:\n raise ValueError('mode {0} is not supported'.format(mode))\n self.send_keys(keys)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef screen_size(self, size):\n if self.screen_size != size:\n self._screen.resize(*self._swap(size))", "response": "Sets the screen size of the current instance."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef runtimepath(self):\n if self._runtimepath is None:\n self._runtimepath = runtimepath.RuntimePath(self)\n return self._runtimepath", "response": "Returns the runtime path of the assessment class."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning a file - like object that is seekable.", "response": "def make_seekable(fileobj):\n \"\"\"\n If the file-object is not seekable, return ArchiveTemp of the fileobject,\n otherwise return the file-object itself\n \"\"\"\n if sys.version_info < (3, 0) and isinstance(fileobj, file):\n filename = fileobj.name\n fileobj = io.FileIO(fileobj.fileno(), closefd=False)\n fileobj.name = filename\n assert isinstance(fileobj, io.IOBase), \\\n \"fileobj must be an instance of io.IOBase or a file, got %s\" \\\n % type(fileobj)\n return fileobj if fileobj.seekable() \\\n else ArchiveTemp(fileobj)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _before(self):\n # Don't trace excluded routes.\n if request.path in self.excluded_routes:\n request._tracy_exclude = True\n return\n\n request._tracy_start_time = 
monotonic()\n client = request.headers.get(trace_header_client, None)\n require_client = current_app.config.get(\"TRACY_REQUIRE_CLIENT\", False)\n if client is None and require_client:\n abort(400, \"Missing %s header\" % trace_header_client)\n\n request._tracy_client = client\n request._tracy_id = request.headers.get(trace_header_id, new_id())", "response": "Record the start time of this reqeust."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _after(self, response):\n # Ignore excluded routes.\n if getattr(request, '_tracy_exclude', False):\n return response\n\n duration = None\n if getattr(request, '_tracy_start_time', None):\n duration = monotonic() - request._tracy_start_time\n\n # Add Trace_ID header.\n trace_id = None\n if getattr(request, '_tracy_id', None):\n trace_id = request._tracy_id\n response.headers[trace_header_id] = trace_id\n\n # Get the invoking client.\n trace_client = None\n if getattr(request, '_tracy_client', None):\n trace_client = request._tracy_client\n\n # Extra log kwargs.\n d = {'status_code': response.status_code,\n 'url': request.base_url,\n 'client_ip': request.remote_addr,\n 'trace_name': trace_client,\n 'trace_id': trace_id,\n 'trace_duration': duration}\n logger.info(None, extra=d)\n return response", "response": "Calculates the duration and adds a transaction\n ID to the response."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncloses the http / https connect.", "response": "def close(self):\n \"\"\"Close the http/https connect.\"\"\"\n try:\n self.response.close()\n self.logger.debug(\"close connect succeed.\")\n except Exception as e:\n self.unknown(\"close connect error: %s\" % e)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn a Stat object for the given path.", "response": "def _lstat(self, path):\n '''IMPORTANT: expects `path`'s parent to already be 
deref()'erenced.'''\n if path not in self.entries:\n return OverlayStat(*self.originals['os:lstat'](path)[:10], st_overlay=0)\n return self.entries[path].stat"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nchecking if a file or directory exists.", "response": "def _exists(self, path):\n '''IMPORTANT: expects `path` to already be deref()'erenced.'''\n try:\n return bool(self._stat(path))\n except os.error:\n return False"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns True if path exists and False otherwise.", "response": "def _lexists(self, path):\n '''IMPORTANT: expects `path` to already be deref()'erenced.'''\n try:\n return bool(self._lstat(path))\n except os.error:\n return False"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef fso_exists(self, path):\n 'overlays os.path.exists()'\n try:\n return self._exists(self.deref(path))\n except os.error:\n return False", "response": "overlays os. path. exists()'"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\noverlays os. path. lexists()'", "response": "def fso_lexists(self, path):\n 'overlays os.path.lexists()'\n try:\n return self._lexists(self.deref(path, to_parent=True))\n except os.error:\n return False"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef fso_islink(self, path):\n 'overlays os.path.islink()'\n try:\n return stat.S_ISLNK(self.fso_lstat(path).st_mode)\n except OSError:\n return False", "response": "overlays os. path. 
islink()'"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nwrapping for the MWAPI API function.", "response": "def MWAPIWrapper(func):\n \"\"\"\n MWAPIWrapper \u63a7\u5236API\u8bf7\u6c42\u5f02\u5e38\u7684\u88c5\u9970\u5668\n \u6839\u636erequests\u5e93\u5b9a\u4e49\u7684\u5f02\u5e38\u6765\u63a7\u5236\u8bf7\u6c42\u8fd4\u56de\u7684\u610f\u5916\u60c5\u51b5\n \"\"\"\n @wraps(func)\n def wrapper(*args, **kwargs):\n self = args[0]\n try:\n result = func(*args, **kwargs)\n return result\n except ConnectionError:\n err_title = '\u8fde\u63a5\u9519\u8bef'\n err_message = '[{name}] \u8fde\u63a5\u9519\u8bef\uff0c\u7f51\u7edc\u72b6\u51b5\u5f02\u5e38'.format(name=func.__name__, host=self.host)\n except HTTPError as e:\n err_title = 'HTTP\u54cd\u5e94\u9519\u8bef'\n err_message = '[{name}] \u76ee\u6807\u670d\u52a1\u5668\"{host}\" HTTP\u54cd\u5e94\u9519\u8bef({detail})'.format(name=func.__name__,\n host=self.host, detail=e.message)\n except Timeout:\n err_title = '\u8bf7\u6c42\u8d85\u65f6'\n err_message = '[{name}] \u76ee\u6807\u670d\u52a1\u5668\"{host}\" \u8bf7\u6c42\u8d85\u65f6'.format(name=func.__name__, host=self.host)\n except TooManyRedirects:\n err_title = '\u8fc7\u591a\u91cd\u5b9a\u5411'\n err_message = '[{name}] \u76ee\u6807\u670d\u52a1\u5668\"{host}\" \u8fc7\u591a\u91cd\u5b9a\u5411'.format(name=func.__name__, host=self.host)\n except ValueError as e:\n if e.message.find('JSON') >= 0:\n err_title = 'API JSON\u8fd4\u56de\u503c\u5f02\u5e38'\n err_message = '[{name}] \u76ee\u6807\u670d\u52a1\u5668\"{host}\" API JSON\u8fd4\u56de\u503c\u5f02\u5e38'.format(name=func.__name__, host=self.host)\n else:\n err_title = '\u503c\u9519\u8bef'\n err_message = '[{name}] \u5b58\u5728ValueError\uff1a{msg}'.format(name=func.__name__, msg=e.message)\n self.log.error(e, exc_info=True)\n except KeyError as e:\n err_title = '\u952e\u9519\u8bef'\n err_message = '[{name}] \u5b58\u5728KeyError\uff0c\u9519\u8bef\u952e\u4e3a{key}'.format(name=func.__name__, key=e.message)\n 
self.log.error(e, exc_info=True)\n\n        except MWAPIException as e:\n            err_title = 'Mediawiki API exception'\n            err_message = str(e)\n\n        self.log.error('%s:%s', err_title, err_message)\n        return {'success': False, 'errtitle': err_title, 'errmsg': err_message}\n    return wrapper"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nexpanding a line of text to a given width by inserting spaces between words.", "response": "def expand_words(self, line, width=60):\n    \"\"\" Insert spaces between words until it is wide enough for `width`.\n    \"\"\"\n    if not line.strip():\n        return line\n    # Word index, which word to insert on (cycles between 1->len(words))\n    wordi = 1\n    while len(strip_codes(line)) < width:\n        wordendi = self.find_word_end(line, wordi)\n        if wordendi < 0:\n            # Reached the end?, try starting at the front again.\n            wordi = 1\n            wordendi = self.find_word_end(line, wordi)\n            if wordendi < 0:\n                # There are no spaces to expand, just prepend one.\n                line = ''.join((' ', line))\n        else:\n            line = ' '.join((line[:wordendi], line[wordendi:]))\n            wordi += 1\n\n    # Don't push a single word all the way to the right.\n    if ' ' not in strip_codes(line).strip():\n        return line.replace(' ', '')\n    return line"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef find_word_end(text, count=1):\n    if not text:\n        return -1\n    elif ' ' not in text:\n        return 0\n    elif not text.strip():\n        return -1\n    count = count or 1\n    found = 0\n    foundindex = -1\n    inword = False\n    indices = get_indices(str(text))\n    sortedindices = sorted(indices)\n    for i in sortedindices:\n        c = indices[i]\n        if inword and c.isspace():\n            # Found space.\n            inword = False\n            foundindex = i\n            found += 1\n            # Was there an escape code before this space?\n            testindex = i\n            while testindex > 0:\n                testindex -= 1\n                s = indices.get(testindex, None)\n                if s is None:\n                    # Must be in the middle of an escape code.\n                    continue\n                if len(s) == 1:\n                    # Test index was a char.\n                    foundindex = testindex + 1\n                    
break\n\n            if found == count:\n                return foundindex\n        elif not c.isspace():\n            inword = True\n    # We ended in a word/escape-code, or there were no words.\n    lastindex = sortedindices[-1]\n    if len(indices[lastindex]) > 1:\n        # Last word included an escape code. Rewind a bit.\n        while lastindex > 0:\n            lastindex -= 1\n            s = indices.get(lastindex, None)\n            if s is None:\n                # Must be in the middle of an escape code.\n                continue\n            if len(s) == 1:\n                # Found last char.\n                return lastindex + 1\n\n    return -1 if inword else foundindex", "response": "Finds the character index of the end of the `count`-th word in the text."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nformat a long string into a block of newline separated text.", "response": "def format(\n        self, text=None,\n        width=60, chars=False, fill=False, newlines=False,\n        prepend=None, append=None, strip_first=False, strip_last=False,\n        lstrip=False):\n    \"\"\" Format a long string into a block of newline separated text.\n        Arguments:\n            See iter_format_block().\n    \"\"\"\n    # Basic usage of iter_format_block(), for convenience.\n    return '\\n'.join(\n        self.iter_format_block(\n            (self.text if text is None else text) or '',\n            prepend=prepend,\n            append=append,\n            strip_first=strip_first,\n            strip_last=strip_last,\n            width=width,\n            chars=chars,\n            fill=fill,\n            newlines=newlines,\n            lstrip=lstrip\n        )\n    )"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef iter_add_text(self, lines, prepend=None, append=None):\n    if (prepend is None) and (append is None):\n        yield from lines\n    else:\n        # Build up a format string, with optional {prepend}/{append}\n        fmtpcs = ['{prepend}'] if prepend else []\n        fmtpcs.append('{line}')\n        if append:\n            fmtpcs.append('{append}')\n        fmtstr = ''.join(fmtpcs)\n        yield from (\n            fmtstr.format(prepend=prepend, line=line, append=append)\n            for line in lines\n        )", "response": "Yields each line of text with the optional prepend and append strings added."} {"SOURCE": "codesearchnet", "instruction": "Here 
you have a function in Python 3, explain what it does\ndef iter_char_block(self, text=None, width=60, fmtfunc=str):\n if width < 1:\n width = 1\n text = (self.text if text is None else text) or ''\n text = ' '.join(text.split('\\n'))\n escapecodes = get_codes(text)\n if not escapecodes:\n # No escape codes, use simple method.\n yield from (\n fmtfunc(text[i:i + width])\n for i in range(0, len(text), width)\n )\n else:\n # Ignore escape codes when counting.\n blockwidth = 0\n block = []\n for i, s in enumerate(get_indices_list(text)):\n block.append(s)\n if len(s) == 1:\n # Normal char.\n blockwidth += 1\n if blockwidth == width:\n yield ''.join(block)\n block = []\n blockwidth = 0\n if block:\n yield ''.join(block)", "response": "Yields the text in a format block."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\niterating over lines in a formatted block of text.", "response": "def iter_format_block(\n self, text=None,\n width=60, chars=False, fill=False, newlines=False,\n append=None, prepend=None, strip_first=False, strip_last=False,\n lstrip=False):\n \"\"\" Iterate over lines in a formatted block of text.\n This iterator allows you to prepend to each line.\n For basic blocks see iter_block().\n\n\n Arguments:\n text : String to format.\n\n width : Maximum width for each line. The prepend string\n is not included in this calculation.\n Default: 60\n\n chars : Whether to wrap on characters instead of spaces.\n Default: False\n fill : Insert spaces between words so that each line is\n the same width. 
This overrides `chars`.\n Default: False\n\n newlines : Whether to preserve newlines in the original\n string.\n Default: False\n\n append : String to append after each line.\n\n prepend : String to prepend before each line.\n\n strip_first : Whether to omit the prepend string for the first\n line.\n Default: False\n\n Example (when using prepend='$'):\n Without strip_first -> '$this', '$that'\n With strip_first -> 'this', '$that'\n\n strip_last : Whether to omit the append string for the last\n line (like strip_first does for prepend).\n Default: False\n\n lstrip : Whether to remove leading spaces from each line.\n This doesn't include any spaces in `prepend`.\n Default: False\n \"\"\"\n if fill:\n chars = False\n\n iterlines = self.iter_block(\n (self.text if text is None else text) or '',\n width=width,\n chars=chars,\n newlines=newlines,\n lstrip=lstrip,\n )\n\n if not (prepend or append):\n # Shortcut some of the logic below when not prepending/appending.\n if fill:\n yield from (\n self.expand_words(l, width=width) for l in iterlines\n )\n else:\n yield from iterlines\n else:\n # Prepend, append, or both prepend/append to each line.\n if prepend:\n prependlen = len(prepend)\n else:\n # No prepend, stripping not necessary and shouldn't be tried.\n strip_first = False\n prependlen = 0\n if append:\n # Unfortunately appending mean exhausting the generator.\n # I don't know where the last line is if I don't.\n lines = list(iterlines)\n lasti = len(lines) - 1\n iterlines = (l for l in lines)\n appendlen = len(append)\n else:\n # No append, stripping not necessary and shouldn't be tried.\n strip_last = False\n appendlen = 0\n lasti = -1\n for i, l in enumerate(self.iter_add_text(\n iterlines,\n prepend=prepend,\n append=append)):\n if strip_first and (i == 0):\n # Strip the prepend that iter_add_text() added.\n l = l[prependlen:]\n elif strip_last and (i == lasti):\n # Strip the append that iter_add_text() added.\n l = l[:-appendlen]\n if fill:\n yield 
self.expand_words(l, width=width)\n            else:\n                yield l"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef iter_space_block(self, text=None, width=60, fmtfunc=str):\n    if width < 1:\n        width = 1\n    curline = ''\n    text = (self.text if text is None else text) or ''\n    for word in text.split():\n        possibleline = ' '.join((curline, word)) if curline else word\n        # Ignore escape codes.\n        codelen = sum(len(s) for s in get_codes(possibleline))\n        reallen = len(possibleline) - codelen\n        if reallen > width:\n            # This word would exceed the limit, start a new line with\n            # it.\n            yield fmtfunc(curline)\n            curline = word\n        else:\n            curline = possibleline\n    # yield the last line.\n    if curline:\n        yield fmtfunc(curline)", "response": "Yields lines of text wrapped on spaces, so that each line is at most `width` characters wide."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef squeeze_words(line, width=60):\n    # Start removing spaces to \"squeeze\" the text, leaving at least one.\n    while ('  ' in line) and (len(line) > width):\n        # Remove two spaces from the end, replace with one.\n        head, _, tail = line.rpartition('  ')\n        line = ' '.join((head, tail))\n    return line", "response": "Removes spaces from the line until it fits within the specified width, leaving at least one space between words."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncheck the IP through the httpBL API and return the httpBL results, or None if any error occurs.", "response": "def check_ip(self, ip):\n    \"\"\"\n    Check IP through the httpBL API\n\n    :param ip: ipv4 ip address\n    :return: httpBL results or None if any error occurred\n    \"\"\"\n\n    self._last_result = None\n\n    if is_valid_ipv4(ip):\n        key = None\n        if self._use_cache:\n            key = self._make_cache_key(ip)\n            self._last_result = self._cache.get(key, version=self._cache_version)\n\n        if self._last_result is None:\n            # request httpBL API\n            error, age, threat, type = self._request_httpbl(ip)\n            if error == 127 or error == 0:\n                
self._last_result = {\n 'error': error,\n 'age': age,\n 'threat': threat,\n 'type': type\n }\n if self._use_cache:\n self._cache.set(key, self._last_result, timeout=self._api_timeout, version=self._cache_version)\n if self._last_result is not None and settings.CACHED_HTTPBL_USE_LOGGING:\n logger.info(\n 'httpBL check ip: {0}; '\n 'httpBL result: error: {1}, age: {2}, threat: {3}, type: {4}'.format(ip,\n self._last_result['error'],\n self._last_result['age'],\n self._last_result['threat'],\n self._last_result['type']\n )\n )\n\n return self._last_result"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef is_threat(self, result=None, harmless_age=None, threat_score=None, threat_type=None):\n\n harmless_age = harmless_age if harmless_age is not None else settings.CACHED_HTTPBL_HARMLESS_AGE\n threat_score = threat_score if threat_score is not None else settings.CACHED_HTTPBL_THREAT_SCORE\n threat_type = threat_type if threat_type is not None else -1\n result = result if result is not None else self._last_result\n threat = False\n if result is not None:\n if result['age'] < harmless_age and result['threat'] > threat_score:\n threat = True\n if threat_type > -1:\n if result['type'] & threat_type:\n threat = True\n else:\n threat = False\n return threat", "response": "Check if the IP is a threat."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef is_suspicious(self, result=None):\n\n result = result if result is not None else self._last_result\n suspicious = False\n if result is not None:\n suspicious = True if result['type'] > 0 else False\n return suspicious", "response": "Check if IP is suspicious."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef invalidate_ip(self, ip):\n\n if self._use_cache:\n key = self._make_cache_key(ip)\n self._cache.delete(key, version=self._cache_version)", 
"response": "Invalidates httpBL cache for the given IP address."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef invalidate_cache(self):\n\n if self._use_cache:\n self._cache_version += 1\n self._cache.increment('cached_httpbl_{0}_version'.format(self._api_key))", "response": "Invalidate the cache for this instance."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nuploads the next batch of items return whether successful.", "response": "def upload(self):\n \"\"\"Upload the next batch of items, return whether successful.\"\"\"\n success = False\n batch = self.next()\n if len(batch) == 0:\n return False\n\n try:\n self.request(batch)\n success = True\n except Exception as e:\n self.log.error('error uploading: %s', e)\n success = False\n if self.on_error:\n self.on_error(e, batch)\n finally:\n # cleanup\n for item in batch:\n self.queue.task_done()\n\n return success"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef next(self):\n queue = self.queue\n items = []\n item = self.next_item()\n if item is None:\n return items\n\n items.append(item)\n while len(items) < self.upload_size and not queue.empty():\n item = self.next_item()\n if item:\n items.append(item)\n\n return items", "response": "Return the next batch of items to upload."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngets a single item from the queue.", "response": "def next_item(self):\n \"\"\"Get a single item from the queue.\"\"\"\n queue = self.queue\n try:\n item = queue.get(block=True, timeout=5)\n return item\n except Exception:\n return None"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nattempts to upload the batch and retry before raising an error", "response": "def request(self, batch, attempt=0):\n \"\"\"Attempt to upload the batch and retry before raising 
an error \"\"\"\n try:\n q = self.api.new_queue()\n for msg in batch:\n q.add(msg['event'], msg['value'], source=msg['source'])\n q.submit()\n except:\n if attempt > self.retries:\n raise\n self.request(batch, attempt+1)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _camelcase_to_underscore(url):\n def upper2underscore(text):\n for char in text:\n if char.islower():\n yield char\n else:\n yield '_'\n if char.isalpha():\n yield char.lower()\n return ''.join(upper2underscore(url))", "response": "Translate camelCase into underscore format."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef create_tree(endpoints):\n tree = {}\n\n for method, url, doc in endpoints:\n path = [p for p in url.strip('/').split('/')]\n here = tree\n\n # First element (API Version).\n version = path[0]\n here.setdefault(version, {})\n here = here[version]\n\n # The rest of elements of the URL.\n for p in path[1:]:\n part = _camelcase_to_underscore(p)\n here.setdefault(part, {})\n here = here[part]\n\n # Allowed HTTP methods.\n if not 'METHODS' in here:\n here['METHODS'] = [[method, doc]]\n else:\n if not method in here['METHODS']:\n here['METHODS'].append([method, doc])\n\n return tree", "response": "Creates the Trello endpoint tree."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef main():\n ep = requests.get(TRELLO_API_DOC).content\n root = html.fromstring(ep)\n\n links = root.xpath('//a[contains(@class, \"reference internal\")]/@href')\n pages = [requests.get(TRELLO_API_DOC + u)\n for u in links if u.endswith('index.html')]\n\n endpoints = []\n for page in pages:\n root = html.fromstring(page.content)\n sections = root.xpath('//div[@class=\"section\"]/h2/..')\n for sec in sections:\n ep_html = etree.tostring(sec).decode('utf-8')\n ep_text = html2text(ep_html).splitlines()\n match = 
EP_DESC_REGEX.match(ep_text[0])\n            if not match:\n                continue\n            ep_method, ep_url = match.groups()\n            ep_text[0] = ' '.join([ep_method, ep_url])\n            ep_doc = b64encode(gzip.compress('\\n'.join(ep_text).encode('utf-8')))\n            endpoints.append((ep_method, ep_url, ep_doc))\n\n    print(yaml.dump(create_tree(endpoints)))", "response": "Print the complete YAML."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\noverrides the default exit in the parser.", "response": "def _parser_exit(parser: argparse.ArgumentParser, proc: \"DirectoryListProcessor\", _=0,\n                 message: Optional[str]=None) -> None:\n    \"\"\"\n    Override the default exit in the parser.\n    :param parser:\n    :param _: exit code. Unused because we don't exit\n    :param message: Optional message\n    \"\"\"\n    if message:\n        parser._print_message(message, sys.stderr)\n    proc.successful_parse = False"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef decode_file_args(self, argv: List[str]) -> List[str]:\n    for arg in [arg for arg in argv if arg[0] in self.fromfile_prefix_chars]:\n        argv.remove(arg)\n        with open(arg[1:]) as config_file:\n            argv += shlex.split(config_file.read())\n        return self.decode_file_args(argv)\n    return argv", "response": "Recursively decodes any argument that begins with the fromfile prefix character, replacing it with the arguments read from the referenced file."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreporting an error to stderr.", "response": "def _proc_error(ifn: str, e: Exception) -> None:\n    \"\"\" Report an error\n    :param ifn: Input file name\n    :param e: Exception to report\n    \"\"\"\n    type_, value_, traceback_ = sys.exc_info()\n    traceback.print_tb(traceback_, file=sys.stderr)\n    print(file=sys.stderr)\n    print(\"***** ERROR: %s\" % ifn, file=sys.stderr)\n    print(str(e), file=sys.stderr)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in 
Python 3 where it\ncalls the processor and intercepts anything that goes wrong.", "response": "def _call_proc(self,\n               proc: Callable[[Optional[str], Optional[str], argparse.Namespace], bool],\n               ifn: Optional[str],\n               ofn: Optional[str]) -> bool:\n    \"\"\" Call the actual processor and intercept anything that goes wrong\n    :param proc: Process to call\n    :param ifn: Input file name to process. If absent, typical use is stdin\n    :param ofn: Output file name. If absent, typical use is stdout\n    :return: true means process was successful\n    \"\"\"\n    rslt = False\n    try:\n        rslt = proc(ifn, ofn, self.opts)\n    except Exception as e:\n        self._proc_error(ifn, e)\n    return True if rslt or rslt is None else False"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nrun the directory list processor calling a function per file.", "response": "def run(self,\n        proc: Callable[[Optional[str], Optional[str], argparse.Namespace], Optional[bool]],\n        file_filter: Optional[Callable[[str], bool]]=None,\n        file_filter_2: Optional[Callable[[Optional[str], str, argparse.Namespace], bool]]=None) \\\n        -> Tuple[int, int]:\n    \"\"\" Run the directory list processor calling a function per file.\n    :param proc: Process to invoke. Args: input_file_name, output_file_name, argparse options. 
Return pass or fail.\n No return also means pass\n :param file_filter: Additional filter for testing file names, types, etc.\n :param file_filter_2: File filter that includes directory, filename and opts\n (separate for backwards compatibility)\n :return: tuple - (number of files passed to proc: int, number of files that passed proc)\n \"\"\"\n nfiles = 0\n nsuccess = 0\n\n # List of one or more input and output files\n if self.opts.infile:\n for file_idx in range(len(self.opts.infile)):\n in_f = self.opts.infile[file_idx]\n if self._check_filter(in_f, self.opts.indir, file_filter, file_filter_2):\n fn = os.path.join(self.opts.indir, in_f) if self.opts.indir else in_f\n nfiles += 1\n if self._call_proc(proc, fn, self._outfile_name('', fn, outfile_idx=file_idx)):\n nsuccess += 1\n elif self.opts.stoponerror:\n return nfiles, nsuccess\n\n # Single input from the command line\n elif not self.opts.indir:\n if self._check_filter(None, None, file_filter, file_filter_2):\n nfiles += 1\n if self._call_proc(proc, None, self._outfile_name('', '')):\n nsuccess += 1\n\n # Input directory that needs to be navigated\n else:\n for dirpath, _, filenames in os.walk(self.opts.indir):\n for fn in filenames:\n if self._check_filter(fn, dirpath, file_filter, file_filter_2):\n nfiles += 1\n if self._call_proc(proc, os.path.join(dirpath, fn), self._outfile_name(dirpath, fn)):\n nsuccess += 1\n elif self.opts.stoponerror:\n return nfiles, nsuccess\n\n return nfiles, nsuccess"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nconstructing the output file name from the input file and the output file list.", "response": "def _outfile_name(self, dirpath: str, infile: str, outfile_idx: int=0) -> Optional[str]:\n \"\"\" Construct the output file name from the input file. 
If a single output file was named and there isn't a\n    directory, return the output file.\n    :param dirpath: Directory path to infile\n    :param infile: Name of input file\n    :param outfile_idx: Index into output file list (for multiple input/output files)\n    :return: Full name of output file or None if output is not otherwise supplied\n    \"\"\"\n    if not self.opts.outfile and not self.opts.outdir:\n        # Up to the process itself to decide what to do with it\n        return None\n\n    if self.opts.outfile:\n        # Output file specified - either one aggregate file or a 1 to 1 list\n        outfile_element = self.opts.outfile[0] if len(self.opts.outfile) == 1 else self.opts.outfile[outfile_idx]\n\n    elif self.opts.infile:\n        # Input file name(s) have been supplied\n        if '://' in infile:\n            # Input file is a URL -- generate an output file of the form \"_url[n]\"\n            outfile_element = \"_url{}\".format(outfile_idx + 1)\n        else:\n            outfile_element = os.path.basename(infile).rsplit('.', 1)[0]\n\n    else:\n        # Doing an input directory to an output directory\n        relpath = dirpath[len(self.opts.indir) + 1:] if not self.opts.flatten and self.opts.indir else ''\n        outfile_element = os.path.join(relpath, os.path.split(infile)[1][:-len(self.infile_suffix)])\n    return (os.path.join(self.opts.outdir, outfile_element) if self.opts.outdir else outfile_element) + \\\n           (self.outfile_suffix if not self.opts.outfile and self.outfile_suffix else '')"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncloses and exits the connection.", "response": "def quit(self):\n    \"\"\"Close and exit the connection.\"\"\"\n    try:\n        self.ftp.quit()\n        self.logger.debug(\"quit connect succeed.\")\n    except ftplib.Error as e:\n        self.unknown(\"quit connect error: %s\" % e)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef query(self, wql):\n    try:\n        self.__wql = ['wmic', '-U',\n                      self.args.domain + '\\\\' + self.args.user + '%' + self.args.password,\n                      '//' + 
self.args.host,\n                      '--namespace', self.args.namespace,\n                      '--delimiter', self.args.delimiter,\n                      wql]\n        self.logger.debug(\"wql: {}\".format(self.__wql))\n        self.__output = subprocess.check_output(self.__wql)\n        self.logger.debug(\"output: {}\".format(self.__output))\n        self.logger.debug(\"wmi connect succeed.\")\n        self.__wmi_output = self.__output.splitlines()[1:]\n        self.logger.debug(\"wmi_output: {}\".format(self.__wmi_output))\n        self.__csv_header = csv.DictReader(self.__wmi_output, delimiter='|')\n        self.logger.debug(\"csv_header: {}\".format(self.__csv_header))\n        return list(self.__csv_header)\n    except subprocess.CalledProcessError as e:\n        self.unknown(\"Connect by wmi and run wql error: %s\" % e)", "response": "Connect by wmi and run wql."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef set_value(self, value, timeout):\r\n    self.value = value\r\n    self.expiration = time.perf_counter() * 1000 + timeout", "response": "Sets the value of the object and updates the expiration time."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nwrap the other log methods, deciding which one to use based on the URL parameter.", "response": "def log(self, url=None, credentials=None, do_verify_certificate=True):\n    \"\"\"\n    Wrapper for the other log methods, decide which one based on the\n    URL parameter.\n    \"\"\"\n    if url is None:\n        url = self.url\n    if re.match(\"file://\", url):\n        self.log_file(url)\n    elif re.match(\"https://\", url) or re.match(\"http://\", url):\n        self.log_post(url, credentials, do_verify_certificate)\n    else:\n        self.log_stdout()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef log_file(self, url=None):\n    if url is None:\n        url = self.url\n    f = re.sub(\"file://\", \"\", url)\n    try:\n        with open(f, \"a\") as of:\n            of.write(str(self.store.get_json_tuples(True)))\n    except IOError as e:\n        print(e)\n        print(\"Could not write the content to the 
file..\")", "response": "Write to a local log file"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef log_post(self, url=None, credentials=None, do_verify_certificate=True):\n if url is None:\n url = self.url\n if credentials is None:\n credentials = self.credentials\n if do_verify_certificate is None:\n do_verify_certificate = self.do_verify_certificate\n if credentials and \"base64\" in credentials:\n headers = {\"Content-Type\": \"application/json\", \\\n 'Authorization': 'Basic %s' % credentials[\"base64\"]}\n else:\n headers = {\"Content-Type\": \"application/json\"}\n try:\n request = requests.post(url, headers=headers, \\\n data=self.store.get_json(), verify=do_verify_certificate)\n except httplib.IncompleteRead as e:\n request = e.partial", "response": "Write to a remote host via HTTP POST"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nregistering credentials for the current user and password.", "response": "def register_credentials(self, credentials=None, user=None, user_file=None, password=None, password_file=None):\n \"\"\"\n Helper method to store username and password\n \"\"\"\n # lets store all kind of credential data into this dict\n if credentials is not None:\n self.credentials = credentials\n else:\n self.credentials = {}\n # set the user from CLI or file\n if user:\n self.credentials[\"user\"] = user\n elif user_file:\n with open(user_file, \"r\") as of:\n # what would the file entry look like?\n pattern = re.compile(\"^user: \")\n for l in of:\n if re.match(pattern, l):\n # strip away the newline\n l = l[0:-1]\n self.credentials[\"user\"] = re.sub(pattern, \"\", l)\n # remove any surrounding quotes\n if self.credentials[\"user\"][0:1] == '\"' and \\\n self.credentials[\"user\"][-1:] == '\"':\n self.credentials[\"user\"] = self.credentials[\"user\"][1:-1]\n # set the password from CLI or file\n if password:\n self.credentials[\"password\"] = password\n elif 
password_file:\n with open(password_file, \"r\") as of:\n # what would the file entry look like?\n pattern = re.compile(\"^password: \")\n for l in of:\n if re.match(pattern, l):\n # strip away the newline\n l = l[0:-1]\n self.credentials[\"password\"] = \\\n re.sub(pattern, \"\", l)\n # remove any surrounding quotes\n if self.credentials[\"password\"][0:1] == '\"' and \\\n self.credentials[\"password\"][-1:] == '\"':\n self.credentials[\"password\"] = \\\n self.credentials[\"password\"][1:-1]\n\n # if both user and password is set,\n # 1. encode to base 64 for basic auth\n if \"user\" in self.credentials and \"password\" in self.credentials:\n c = self.credentials[\"user\"] + \":\" + self.credentials[\"password\"]\n self.credentials[\"base64\"] = b64encode(c.encode()).decode(\"ascii\")"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef generator_to_list(function):\n \n def wrapper(*args, **kwargs):\n return list(function(*args, **kwargs))\n wrapper.__name__ = function.__name__\n wrapper.__doc__ = function.__doc__\n return wrapper", "response": "Wrap a generator function so that it returns a list when called."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns the next available filename for a particular filename prefix.", "response": "def logrotate(filename):\n \n \"\"\"\n Return the next available filename for a particular filename prefix.\n \n For example:\n \n >>> import os\n # Make three (empty) files in a directory\n >>> fp0 = open('file.0', 'w')\n >>> fp1 = open('file.1', 'w')\n >>> fp2 = open('file.2', 'w')\n >>> fp0.close(), fp1.close(), fp2.close()\n (None, None, None)\n # Use logrotate to get the next available filename.\n >>> logrotate('file')\n 'file.3'\n >>> logrotate('file.2')\n 'file.3'\n >>> logrotate('file.1')\n 'file.3'\n \n This can be used to get the next available filename for logging, allowing\n you to rotate log files, without using 
Python's ``logging`` module.\n \"\"\"\n\n match = re.match(r'(.*)' + re.escape(os.path.extsep) + r'(\\d+)', filename)\n if os.path.exists(filename):\n if match:\n prefix, number = match.groups()\n number = int(number)\n while os.path.exists(os.path.extsep.join((prefix, str(number)))):\n number += 1\n return os.path.extsep.join((prefix, str(number)))\n elif match:\n return filename\n return logrotate(os.path.extsep.join((filename, '0')))"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef set_connection(host=None, database=None, user=None, password=None):\n c.CONNECTION['HOST'] = host\n c.CONNECTION['DATABASE'] = database\n c.CONNECTION['USER'] = user\n c.CONNECTION['PASSWORD'] = password", "response": "Set connection parameters. Call set_connection with no arguments to clear."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef set_delegate(address=None, pubkey=None, secret=None):\n c.DELEGATE['ADDRESS'] = address\n c.DELEGATE['PUBKEY'] = pubkey\n c.DELEGATE['PASSPHRASE'] = secret", "response": "Set the DELEGATE parameters. 
Call set_delegate with no arguments to clear."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_transactionlist(delegate_pubkey):\n\n res = DbCursor().execute_and_fetchall(\"\"\"\n SELECT transactions.\"id\", transactions.\"amount\",\n blocks.\"timestamp\", transactions.\"recipientId\",\n transactions.\"senderId\", transactions.\"rawasset\",\n transactions.\"type\", transactions.\"fee\", transactions.\"blockId\"\n FROM transactions \n INNER JOIN blocks\n ON transactions.\"blockId\" = blocks.id\n WHERE transactions.\"senderId\" IN\n (SELECT transactions.\"recipientId\"\n FROM transactions, votes\n WHERE transactions.\"id\" = votes.\"transactionId\"\n AND votes.\"votes\" = '+{0}')\n OR transactions.\"recipientId\" IN\n (SELECT transactions.\"recipientId\"\n FROM transactions, votes\n WHERE transactions.\"id\" = votes.\"transactionId\"\n AND votes.\"votes\" = '+{0}')\n ORDER BY blocks.\"timestamp\" ASC;\"\"\".format(delegate_pubkey))\n\n Transaction = namedtuple(\n 'transaction',\n 'id amount timestamp recipientId senderId rawasset type fee')\n named_transactions = []\n\n for i in res:\n tx_id = Transaction(\n id=i[0],\n amount=i[1],\n timestamp=i[2],\n recipientId=i[3],\n senderId=i[4],\n rawasset=i[5],\n type=i[6],\n fee=i[7],\n )\n\n named_transactions.append(tx_id)\n return named_transactions", "response": "returns a list of named tuples of all transactions relevant to a specific delegates voters"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_events(delegate_pubkey):\n\n res = DbCursor().execute_and_fetchall(\"\"\"\n SELECT *\n FROM(\n SELECT transactions.\"id\",\n transactions.\"amount\",\n transactions.\"fee\",\n blocks.\"timestamp\",\n transactions.\"recipientId\",\n transactions.\"senderId\",\n transactions.\"type\",\n transactions.\"rawasset\"\n FROM transactions\n INNER JOIN blocks\n ON transactions.\"blockId\" = 
blocks.id\n WHERE transactions.\"senderId\" IN\n (SELECT transactions.\"recipientId\"\n FROM transactions, votes\n WHERE transactions.\"id\" = votes.\"transactionId\"\n AND votes.\"votes\" = '+{0}')\n OR transactions.\"recipientId\" IN\n (SELECT transactions.\"recipientId\"\n FROM transactions, votes\n WHERE transactions.\"id\" = votes.\"transactionId\"\n AND votes.\"votes\" = '+{0}')\n UNION\n SELECT blocks.\"id\",\n blocks.\"reward\",\n blocks.\"totalFee\",\n blocks.\"timestamp\",\n mem_accounts.\"address\",\n NULL,\n 100,\n blocks.\"rawtxs\"\n FROM blocks\n INNER JOIN mem_accounts\n ON mem_accounts.\"publicKey\" = blocks.\"generatorPublicKey\"\n WHERE mem_accounts.\"address\" IN\n (SELECT transactions.\"recipientId\"\n FROM transactions, votes\n WHERE transactions.\"id\" = votes.\"transactionId\"\n AND votes.\"votes\" = '+{0}')) AS events\n ORDER BY events.\"timestamp\";\"\"\".format(delegate_pubkey))\n\n Event = namedtuple(\n 'Event',\n 'id amount fee timestamp recipientId senderId type raw')\n named_events = []\n\n for i in res:\n tx_id = Event(\n id=i[0],\n amount=i[1],\n fee=i[2],\n timestamp=i[3],\n recipientId=i[4],\n senderId=i[5],\n type=i[6],\n raw=i[7]\n )\n named_events.append(tx_id)\n return named_events", "response": "returns a list of named tuples of all transactions relevant to a specific delegates voters"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef payout(address):\n qry = DbCursor().execute_and_fetchall(\"\"\"\n SELECT DISTINCT transactions.\"id\", transactions.\"amount\",\n transactions.\"timestamp\", transactions.\"recipientId\",\n transactions.\"senderId\", transactions.\"rawasset\",\n transactions.\"type\", transactions.\"fee\"\n FROM transactions, delegates\n WHERE transactions.\"senderId\" IN (\n SELECT transactions.\"senderId\" \n FROM transactions, delegates \n WHERE transactions.\"id\" = delegates.\"transactionId\"\n )\n AND transactions.\"recipientId\" = '{}'\n ORDER BY 
transactions.\"timestamp\" ASC\"\"\".format(address))\n\n Transaction = namedtuple(\n 'transaction',\n 'id amount timestamp recipientId senderId rawasset type fee')\n named_transactions = []\n\n for i in qry:\n tx_id = Transaction(\n id=i[0],\n amount=i[1],\n timestamp=i[2],\n recipientId=i[3],\n senderId=i[4],\n rawasset=i[5],\n type=i[6],\n fee=i[7],\n )\n\n named_transactions.append(tx_id)\n return named_transactions", "response": "returns all received transactions between the address and registered delegate accounts\n ORDER BY timestamp ASC."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn a list of namedtuples all votes made by an address.", "response": "def votes(address):\n \"\"\"Returns a list of namedtuples all votes made by an address, {(+/-)pubkeydelegate:timestamp}, timestamp DESC\"\"\"\n qry = DbCursor().execute_and_fetchall(\"\"\"\n SELECT votes.\"votes\", transactions.\"timestamp\"\n FROM votes, transactions\n WHERE transactions.\"id\" = votes.\"transactionId\"\n AND transactions.\"senderId\" = '{}'\n ORDER BY transactions.\"timestamp\" DESC\n \"\"\".format(address))\n\n Vote = namedtuple(\n 'vote',\n 'direction delegate timestamp')\n res = []\n for i in qry:\n if i[0][0] == '+':\n direction = True\n elif i[0][0] == '-':\n direction = False\n else:\n logger.fatal('failed to interpret direction for: {}'.format(i))\n raise ParseError('failed to interpret direction of vote for: {}'.format(i))\n vote = Vote(\n direction=direction,\n delegate=i[0][1:],\n timestamp=i[1],\n )\n res.append(vote)\n return res"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ntaking a single address and returns the current balance.", "response": "def balance(address):\n \"\"\"\n Takes a single address and returns the current balance.\n \"\"\"\n txhistory = Address.transactions(address)\n balance = 0\n for i in txhistory:\n if i.recipientId == address:\n balance += i.amount\n if i.senderId == address:\n 
balance -= (i.amount + i.fee)\n\n delegates = Delegate.delegates()\n for i in delegates:\n if address == i.address:\n forged_blocks = Delegate.blocks(i.pubkey)\n for block in forged_blocks:\n balance += (block.reward + block.totalFee)\n\n if balance < 0:\n height = Node.height()\n logger.fatal('Negative balance for address {0}, Nodeheight: {1}'.format(address, height))\n raise NegativeBalanceError('Negative balance for address {0}, Nodeheight: {1}'.format(address, height))\n return balance"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef delegates():\n qry = DbCursor().execute_and_fetchall(\"\"\"\n SELECT delegates.\"username\", delegates.\"transactionId\", transactions.\"timestamp\", transactions.\"senderId\", \n transactions.\"senderPublicKey\" \n FROM transactions\n JOIN delegates ON transactions.\"id\" = delegates.\"transactionId\"\n \"\"\")\n\n Delegate = namedtuple(\n 'delegate',\n 'username pubkey timestamp address transactionId')\n res = []\n for i in qry:\n registration = Delegate(\n username=i[0],\n pubkey=binascii.hexlify(i[4]).decode(\"utf-8\"),\n timestamp=i[2],\n address=i[3],\n transactionId=i[1]\n )\n res.append(registration)\n return res", "response": "returns a list of named tuples of all delegates."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef lastpayout(delegate_address, blacklist=None):\n '''\n Assumes that all send transactions from a delegate are payouts.\n Use blacklist to remove rewardwallet and other transactions if the\n address is not a voter. 
blacklist can contain both addresses and transactionIds'''\n if blacklist and len(blacklist) > 1:\n command_blacklist = 'NOT IN ' + str(tuple(blacklist))\n elif blacklist and len(blacklist) == 1:\n command_blacklist = '!= ' + \"'\" + blacklist[0] + \"'\"\n else:\n command_blacklist = \"!= 'nothing'\"\n qry = DbCursor().execute_and_fetchall(\"\"\"\n SELECT ts.\"recipientId\", ts.\"id\", ts.\"timestamp\"\n FROM transactions ts,\n (SELECT MAX(transactions.\"timestamp\") AS max_timestamp, transactions.\"recipientId\"\n FROM transactions\n WHERE transactions.\"senderId\" = '{0}'\n AND transactions.\"id\" {1}\n GROUP BY transactions.\"recipientId\") maxresults\n WHERE ts.\"recipientId\" = maxresults.\"recipientId\"\n AND ts.\"recipientId\" {1}\n AND ts.\"timestamp\"= maxresults.max_timestamp;\n\n \"\"\".format(delegate_address, command_blacklist))\n result = []\n\n Payout = namedtuple(\n 'payout',\n 'address id timestamp')\n\n for i in qry:\n payout = Payout(\n address=i[0],\n id=i[1],\n timestamp=i[2]\n )\n result.append(payout)\n return result", "response": "Returns a list of namedtuples containing the last payouts from a delegate address."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef votes(delegate_pubkey):\n qry = DbCursor().execute_and_fetchall(\"\"\"\n SELECT transactions.\"recipientId\", transactions.\"timestamp\"\n FROM transactions, votes\n WHERE transactions.\"id\" = votes.\"transactionId\"\n AND votes.\"votes\" = '+{}'\n ORDER BY transactions.\"timestamp\" ASC;\n \"\"\".format(delegate_pubkey))\n\n Voter = namedtuple(\n 'voter',\n 'address timestamp')\n voters = []\n for i in qry:\n voter = Voter(\n address=i[0],\n timestamp=i[1]\n )\n voters.append(voter)\n return voters", "response": "returns all addresses that have voted for a given delegate pubkey"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef blocks(delegate_pubkey=None, 
max_timestamp=None):\n\n if not delegate_pubkey:\n delegate_pubkey = c.DELEGATE['PUBKEY']\n if max_timestamp:\n max_timestamp_sql = \"\"\" blocks.\"timestamp\" <= {} AND\"\"\".format(max_timestamp)\n else:\n max_timestamp_sql = ''\n\n qry = DbCursor().execute_and_fetchall(\"\"\"\n SELECT blocks.\"timestamp\", blocks.\"height\", blocks.\"id\", blocks.\"totalFee\", blocks.\"reward\"\n FROM blocks\n WHERE {0} blocks.\"generatorPublicKey\" = '\\\\x{1}'\n ORDER BY blocks.\"timestamp\" \n ASC\"\"\".format(\n max_timestamp_sql,\n delegate_pubkey))\n\n Block = namedtuple('block',\n 'timestamp height id totalFee reward')\n block_list = []\n for block in qry:\n block_value = Block(timestamp=block[0],\n height=block[1],\n id=block[2],\n totalFee=block[3],\n reward=block[4])\n block_list.append(block_value)\n\n return block_list", "response": "returns a list of named tuples of all blocks forged by a given delegate pubkey."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef share(passphrase=None, last_payout=None, start_block=0, del_pubkey=None, del_address=None):\n logger.info('starting share calculation using settings: {0} {1}'.format(c.DELEGATE, c.CALCULATION_SETTINGS))\n\n\n delegate_pubkey = c.DELEGATE['PUBKEY']\n delegate_address = c.DELEGATE['ADDRESS']\n\n if del_pubkey and del_address:\n delegate_address = del_address\n delegate_pubkey = del_pubkey\n\n logger.info('Starting share calculation, using address:{0}, pubkey:{1}'.format(delegate_address, delegate_pubkey))\n\n max_timestamp = Node.max_timestamp()\n logger.info('Share calculation max_timestamp = {}'.format(max_timestamp))\n\n # utils function\n transactions = get_transactionlist(\n delegate_pubkey=delegate_pubkey\n )\n\n votes = Delegate.votes(delegate_pubkey)\n\n # create a map of voters\n voter_dict = {}\n for voter in votes:\n voter_dict.update({voter.address: {\n 'balance': 0.0,\n 'status': False,\n 'last_payout': voter.timestamp,\n 'share': 0.0,\n 
'vote_timestamp': voter.timestamp,\n 'blocks_forged': []}\n })\n\n # check if a voter is/used to be a forging delegate\n delegates = Delegate.delegates()\n for i in delegates:\n if i.address in voter_dict:\n logger.info('A registered delegate is a voter: {0}, {1}, {2}'.format(i.username, i.address, i.pubkey))\n try:\n blocks_by_voter = Delegate.blocks(i.pubkey)\n voter_dict[i.address]['blocks_forged'].extend(Delegate.blocks(i.pubkey))\n logger.info('delegate {0} has forged {1} blocks'.format(i.username, len(blocks_by_voter)))\n except Exception:\n logger.info('delegate {} has not forged any blocks'.format(i))\n pass\n try:\n for i in c.CALCULATION_SETTINGS['BLACKLIST']:\n voter_dict.pop(i)\n logger.debug('popped {} from calculations'.format(i))\n except Exception:\n pass\n\n if not last_payout:\n last_payout = Delegate.lastpayout(delegate_address)\n for payout in last_payout:\n try:\n voter_dict[payout.address]['last_payout'] = payout.timestamp\n except Exception:\n pass\n elif type(last_payout) is int:\n for address in voter_dict:\n if address['vote_timestamp'] < last_payout:\n address['last_payout'] = last_payout\n elif type(last_payout) is dict:\n for payout in last_payout:\n try:\n voter_dict[payout.address]['last_payout'] = payout.timestamp\n except Exception:\n pass\n else:\n logger.fatal('last_payout object not recognised: {}'.format(type(last_payout)))\n raise InputError('last_payout object not recognised: {}'.format(type(last_payout)))\n\n # get all forged blocks of delegate:\n blocks = Delegate.blocks(max_timestamp=max_timestamp,\n delegate_pubkey=delegate_pubkey)\n\n block_nr = start_block\n chunk_dict = {}\n reuse = False\n try:\n for tx in transactions:\n while tx.timestamp > blocks[block_nr].timestamp:\n if reuse:\n block_nr += 1\n for x in chunk_dict:\n voter_dict[x]['share'] += chunk_dict[x]\n continue\n block_nr += 1\n poolbalance = 0\n chunk_dict = {}\n for i in voter_dict:\n balance = voter_dict[i]['balance']\n\n try:\n if 
voter_dict[i]['balance'] > c.CALCULATION_SETTINGS['MAX']:\n balance = c.CALCULATION_SETTINGS['MAX']\n except Exception:\n pass\n\n try:\n if balance > c.CALCULATION_SETTINGS['EXCEPTIONS'][i]['REPLACE']:\n balance = c.CALCULATION_SETTINGS['EXCEPTIONS'][i]['REPLACE']\n except Exception:\n pass\n\n try:\n for x in voter_dict[i]['blocks_forged']:\n if x.timestamp < blocks[block_nr].timestamp:\n voter_dict[i]['balance'] += (x.reward + x.totalFee)\n voter_dict[i]['blocks_forged'].remove(x)\n balance = voter_dict[i]['balance']\n except Exception:\n pass\n\n if voter_dict[i]['status']:\n if not voter_dict[i]['balance'] < -20 * c.ARK:\n poolbalance += balance\n else:\n logger.fatal('balance lower than zero for: {0}'.format(i))\n raise NegativeBalanceError('balance lower than zero for: {0}'.format(i))\n\n for i in voter_dict:\n balance = voter_dict[i]['balance']\n\n if voter_dict[i]['balance'] > c.CALCULATION_SETTINGS['MAX']:\n balance = c.CALCULATION_SETTINGS['MAX']\n\n try:\n if balance > c.CALCULATION_SETTINGS['EXCEPTIONS'][i]['REPLACE']:\n balance = c.CALCULATION_SETTINGS['EXCEPTIONS'][i]['REPLACE']\n except Exception:\n pass\n\n if voter_dict[i]['status'] and voter_dict[i]['last_payout'] < blocks[block_nr].timestamp:\n if c.CALCULATION_SETTINGS['SHARE_FEES']:\n share = (balance/poolbalance) * (blocks[block_nr].reward +\n blocks[block_nr].totalFee)\n else:\n share = (balance/poolbalance) * blocks[block_nr].reward\n voter_dict[i]['share'] += share\n chunk_dict.update({i: share})\n reuse = True\n\n\n # parsing a transaction\n minvote = '{{\"votes\":[\"-{0}\"]}}'.format(delegate_pubkey)\n plusvote = '{{\"votes\":[\"+{0}\"]}}'.format(delegate_pubkey)\n\n reuse = False\n\n if tx.recipientId in voter_dict:\n voter_dict[tx.recipientId]['balance'] += tx.amount\n if tx.senderId in voter_dict:\n voter_dict[tx.senderId]['balance'] -= (tx.amount + tx.fee)\n if tx.senderId in voter_dict and tx.type == 3 and plusvote in tx.rawasset:\n voter_dict[tx.senderId]['status'] = True\n if 
tx.senderId in voter_dict and tx.type == 3 and minvote in tx.rawasset:\n voter_dict[tx.senderId]['status'] = False\n\n remaining_blocks = len(blocks) - block_nr - 1\n for i in range(remaining_blocks):\n for x in chunk_dict:\n voter_dict[x]['share'] += chunk_dict[x]\n\n # an IndexError occurs if max(transactions.timestamp) > max(blocks.timestamp) This means we parsed every block\n except IndexError:\n pass\n\n for i in voter_dict:\n logger.info(\"{0} {1} {2} {3} {4}\".format(i,\n voter_dict[i]['share'],\n voter_dict[i]['status'],\n voter_dict[i]['last_payout'],\n voter_dict[i]['vote_timestamp']))\n return voter_dict, max_timestamp", "response": "Calculate the true blockweight payout share for a given delegate."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef trueshare(start_block=0, del_pubkey=None, del_address=None, blacklist=None, share_fees=False,\n max_weight=float('inf')):\n '''\n Legacy TBW script (still pretty performant, but has some quirky behavior when forging delegates are amongst\n your voters)\n\n :param int start_block: block from which we start adding to the share (we calculate balances from block 0 anyways)\n :param str del_pubkey: delegate public key as is presented in the ark wallet\n :param str del_address: delegate address\n :param list blacklist: blacklist for addresses to be removed BEFORE calculation. 
Their share is removed from the pool balance\n :param bool share_fees: if tx fees should be shared as well.\n :param float max_weight: max_balance of a voter\n\n\n '''\n\n delegate_pubkey = c.DELEGATE['PUBKEY']\n delegate_address = c.DELEGATE['ADDRESS']\n\n if del_pubkey and del_address:\n delegate_address = del_address\n delegate_pubkey = del_pubkey\n\n max_timestamp = Node.max_timestamp()\n\n # utils function\n events = get_events(\n delegate_pubkey=delegate_pubkey\n )\n\n votes = Delegate.votes(delegate_pubkey)\n\n # create a map of voters\n voter_dict = {}\n for voter in votes:\n voter_dict.update({voter.address: {\n 'balance': 0.0,\n 'status': False,\n 'last_payout': voter.timestamp,\n 'share': 0.0,\n 'vote_timestamp': voter.timestamp,\n 'blocks_forged': []}\n })\n\n try:\n for i in blacklist:\n voter_dict.pop(i)\n except Exception:\n pass\n\n last_payout = Delegate.lastpayout(delegate_address)\n for payout in last_payout:\n try:\n voter_dict[payout.address]['last_payout'] = payout.timestamp\n except Exception:\n pass\n\n blocks = Delegate.blocks(delegate_pubkey)\n block_nr = start_block\n chunk_dict = {}\n reuse = False\n try:\n for e in events:\n while e.timestamp > blocks[block_nr].timestamp:\n if reuse:\n block_nr += 1\n for x in chunk_dict:\n voter_dict[x]['share'] += chunk_dict[x]\n continue\n\n block_nr += 1\n poolbalance = 0\n chunk_dict = {}\n for i in voter_dict:\n balance = voter_dict[i]['balance']\n if balance > max_weight:\n balance = max_weight\n\n if voter_dict[i]['status']:\n if voter_dict[i]['balance'] >= 0:\n poolbalance += balance\n else:\n print(voter_dict[i])\n raise NegativeBalanceError('balance lower than zero for: {0}'.format(i))\n\n for i in voter_dict:\n balance = voter_dict[i]['balance']\n\n if voter_dict[i]['status'] and voter_dict[i]['last_payout'] < blocks[block_nr].timestamp:\n if share_fees:\n share = (balance / poolbalance) * (blocks[block_nr].reward +\n blocks[block_nr].totalFee)\n else:\n share = (balance / poolbalance) * 
blocks[block_nr].reward\n voter_dict[i]['share'] += share\n chunk_dict.update({i: share})\n reuse = True\n\n # parsing a transaction\n minvote = '{{\"votes\":[\"-{0}\"]}}'.format(delegate_pubkey)\n plusvote = '{{\"votes\":[\"+{0}\"]}}'.format(delegate_pubkey)\n\n reuse = False\n # type 100 is a forged block\n if e.type != 100:\n if e.recipientId in voter_dict:\n voter_dict[e.recipientId]['balance'] += e.amount\n if e.senderId in voter_dict:\n voter_dict[e.senderId]['balance'] -= (e.amount + e.fee)\n if e.senderId in voter_dict and e.type == 3 and plusvote in e.raw:\n voter_dict[e.senderId]['status'] = True\n if e.senderId in voter_dict and e.type == 3 and minvote in e.raw:\n voter_dict[e.senderId]['status'] = False\n elif e.type == 100:\n if e.recipientId in voter_dict:\n voter_dict[e.recipientId]['balance'] += e.amount + e.fee\n\n remaining_blocks = len(blocks) - block_nr - 1\n for i in range(remaining_blocks):\n for x in chunk_dict:\n voter_dict[x]['share'] += chunk_dict[x]\n except IndexError:\n pass\n\n for i in voter_dict:\n logger.info(\"{0} {1} {2} {3} {4}\".format(\n i,\n voter_dict[i]['share'],\n voter_dict[i]['status'],\n voter_dict[i]['last_payout'],\n voter_dict[i]['vote_timestamp']))\n return voter_dict, max_timestamp", "response": "Legacy true blockweight calculation: returns each voter's share of the blocks forged by a given delegate."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef add_source(self, source_class, *constructor_args):\n if not IIPSource.implementedBy(source_class):\n raise TypeError(\"source_class {} must implement IIPSource\".format(source_class))\n else:\n self._sources.add((source_class, constructor_args))", "response": "Adds a source to the factory provided its type and constructor arguments."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_sources(self, limit=sys.maxsize, types_list=None):\n if types_list and not isinstance(types_list, 
(tuple, list)):\n types_list = [types_list]\n\n sources = list(self._sources)\n random.shuffle(sources)\n\n for source in sources:\n if not types_list or source[0] in types_list:\n limit -= 1\n yield source[0](*source[1])\n\n if limit <= 0:\n break", "response": "Generates instantiated sources from the factory."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef value_to_bool(config_val, evar):\n if not config_val:\n return False\n if config_val.strip().lower() == 'true':\n return True\n else:\n return False", "response": "Converts a string value to its boolean equivalent."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nvalidate that the value for the environment variable is not None.", "response": "def validate_is_not_none(config_val, evar):\n \"\"\"\n If the value is ``None``, fail validation.\n\n :param str config_val: The env var value.\n :param EnvironmentVariable evar: The EVar object we are validating\n a value for.\n :raises: ValueError if the config value is None.\n \"\"\"\n if config_val is None:\n raise ValueError(\n \"Value for environment variable '{evar_name}' can't \"\n \"be empty.\".format(evar_name=evar.name))\n return config_val"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef validate_is_boolean_true(config_val, evar):\n if config_val is None:\n raise ValueError(\n \"Value for environment variable '{evar_name}' can't \"\n \"be empty.\".format(evar_name=evar.name))\n return config_val", "response": "Validate that the value evaluates to boolean True."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nconvert an evar value into a Python logging level constant.", "response": "def value_to_python_log_level(config_val, evar):\n \"\"\"\n Convert an evar value into a Python logging level constant.\n\n :param str config_val: The env var value.\n :param 
EnvironmentVariable evar: The EVar object we are validating\n a value for.\n :return: A validated string.\n :raises: ValueError if the log level is invalid.\n \"\"\"\n if not config_val:\n config_val = evar.default_val\n config_val = config_val.upper()\n # noinspection PyProtectedMember\n return logging._checkLevel(config_val)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nregister a new range type as a PostgreSQL range.", "response": "def register_range_type(pgrange, pyrange, conn):\n \"\"\"\n Register a new range type as a PostgreSQL range.\n\n >>> register_range_type(\"int4range\", intrange, conn)\n\n The above will make sure intrange is regarded as an int4range for queries\n and that int4ranges will be cast into intrange when fetching rows.\n\n pgrange should be the full name including schema for the custom range type.\n\n Note that adaption is global, meaning if a range type is passed to a regular\n psycopg2 connection it will adapt it to its proper range type. 
Parsing of\n rows from the database however is not global and just set on a per connection\n basis.\n \"\"\"\n\n register_adapter(pyrange, partial(adapt_range, pgrange))\n register_range_caster(\n pgrange, pyrange, *query_range_oids(pgrange, conn), scope=conn)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nacquire the correct error for a given response.", "response": "def get_api_error(response):\n \"\"\"Acquires the correct error for a given response.\n\n :param requests.Response response: HTTP error response\n :returns: the appropriate error for a given response\n :rtype: APIError\n\n \"\"\"\n error_class = _status_code_to_class.get(response.status_code, APIError)\n return error_class(response)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_param_values(request, model=None):\n if type(request) == dict:\n return request\n\n params = get_payload(request)\n\n # support in-place editing formatted request\n try:\n del params['pk']\n params[params.pop('name')] = params.pop('value')\n except KeyError:\n pass\n\n return {\n k.rstrip('[]'): safe_eval(v) if not type(v) == list else [safe_eval(sv) for sv in v]\n for k, v in params.items()\n }", "response": "Converts the request parameters to Python."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nextracts ORB context information from the request.", "response": "def get_context(request, model=None):\n \"\"\"\n Extracts ORB context information from the request.\n\n :param request: \n :param model: || None\n\n :return: { key: value} values, \n \"\"\"\n # convert request parameters to python\n param_values = get_param_values(request, model=model)\n\n # extract the full orb context if provided\n context = param_values.pop('orb_context', {})\n if isinstance(context, (unicode, str)):\n context = projex.rest.unjsonify(context)\n\n # otherwise, extract the limit 
information\n has_limit = 'limit' in context or 'limit' in param_values\n\n # create the new orb context\n orb_context = orb.Context(**context)\n\n # build up context information from the request params\n used = set()\n query_context = {}\n for key in orb.Context.Defaults:\n if key in param_values:\n used.add(key)\n query_context[key] = param_values.get(key)\n\n # generate a simple query object\n schema_values = {}\n if model:\n # extract match dict items\n for key, value in request.matchdict.items():\n if model.schema().column(key, raise_=False):\n schema_values[key] = value\n\n # extract payload items\n for key, value in param_values.items():\n root_key = key.split('.')[0]\n schema_object = model.schema().column(root_key, raise_=False) or model.schema().collector(root_key)\n if schema_object:\n value = param_values.pop(key)\n if isinstance(schema_object, orb.Collector) and type(value) not in (tuple, list):\n value = [value]\n schema_values[key] = value\n\n # generate the base context information\n query_context['scope'] = {\n 'request': request\n }\n\n # include any request specific scoping or information from the request\n # first, look for default ORB context for all requests\n try:\n default_context = request.orb_default_context\n\n # then, look for scope specific information for all requests\n except AttributeError:\n try:\n query_context['scope'].update(request.orb_scope)\n except AttributeError:\n pass\n\n # if request specific context defaults exist, then\n # merge them with the rest of the query context\n else:\n if 'scope' in default_context:\n query_context['scope'].update(default_context.pop('scope'))\n\n # setup defaults based on the request\n for k, v in default_context.items():\n query_context.setdefault(k, v)\n\n orb_context.update(query_context)\n return schema_values, orb_context"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _real_time_thread(self):\n while 
self.ws_client.connected():\n if self.die:\n break\n \n if self.pause:\n sleep(5)\n continue\n\n message = self.ws_client.receive()\n\n if message is None:\n break\n\n message_type = message['type']\n\n if message_type == 'error':\n continue\n if message['sequence'] <= self.sequence:\n continue\n\n if message_type == 'open':\n self._handle_open(message)\n elif message_type == 'match':\n self._handle_match(message)\n elif message_type == 'done':\n self._handle_done(message)\n elif message_type == 'change':\n self._handle_change(message)\n else:\n continue\n\n self.ws_client.disconnect()", "response": "Handles real-time updates to the order book."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nuses exclusively as a thread which keeps the WebSocket alive.", "response": "def _keep_alive_thread(self):\n \"\"\"Used exclusively as a thread which keeps the WebSocket alive.\"\"\"\n while True:\n with self._lock:\n if self.connected():\n self._ws.ping()\n else:\n self.disconnect()\n self._thread = None\n return\n sleep(30)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nconnects and subscribes to the WebSocket Feed.", "response": "def connect(self):\n \"\"\"Connects and subscribes to the WebSocket Feed.\"\"\"\n if not self.connected():\n self._ws = create_connection(self.WS_URI)\n message = {\n 'type':self.WS_TYPE,\n 'product_id':self.WS_PRODUCT_ID\n }\n self._ws.send(dumps(message))\n\n # There will be only one keep alive thread per client instance\n with self._lock:\n if not self._thread:\n thread = Thread(target=self._keep_alive_thread, args=[])\n thread.start()"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nmark a view function as being exempt from the cached httpbl view protection.", "response": "def cached_httpbl_exempt(view_func):\n \"\"\"\n Marks a view function as being exempt from the cached httpbl view protection.\n \"\"\"\n # We 
could just do view_func.cached_httpbl_exempt = True, but decorators\n are nicer if they don't have side-effects, so we return a new\n function.\n def wrapped_view(*args, **kwargs):\n return view_func(*args, **kwargs)\n wrapped_view.cached_httpbl_exempt = True\n return wraps(view_func, assigned=available_attrs(view_func))(wrapped_view)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nget a connection to AWS for the CounterPool.", "response": "def get_conn(self, aws_access_key=None, aws_secret_key=None):\n '''\n Hook point for overriding how the CounterPool gets its connection to\n AWS.\n '''\n return boto.connect_dynamodb(\n aws_access_key_id=aws_access_key,\n aws_secret_access_key=aws_secret_key,\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns the schema for the current table.", "response": "def get_schema(self):\n '''\n Hook point for overriding how the CounterPool determines the schema\n to be used when creating a missing table.\n '''\n if not self.schema:\n raise NotImplementedError(\n 'You must provide a schema value or override the get_schema method'\n )\n\n return self.conn.create_schema(**self.schema)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef create_table(self):\n '''\n Hook point for overriding how the CounterPool creates a new table\n in DynamoDB\n '''\n table = self.conn.create_table(\n name=self.get_table_name(),\n schema=self.get_schema(),\n read_units=self.get_read_units(),\n write_units=self.get_write_units(),\n )\n\n if table.status != 'ACTIVE':\n table.refresh(wait_for_active=True, retry_seconds=1)\n\n return table", "response": "Hook point for overriding how the CounterPool creates a new table\n in DynamoDB\n "} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_table(self):\n '''\n Hook point for overriding how the CounterPool 
transforms table_name\n into a boto DynamoDB Table object.\n '''\n if hasattr(self, '_table'):\n table = self._table\n else:\n try:\n table = self.conn.get_table(self.get_table_name())\n except boto.exception.DynamoDBResponseError:\n if self.auto_create_table:\n table = self.create_table()\n else:\n raise\n\n self._table = table\n\n return table", "response": "Returns a boto DynamoDB Table object for the given table_name."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nhooks point for overriding how the CounterPool creates a DynamoDB item for a given counter when an existing item can't be found.", "response": "def create_item(self, hash_key, start=0, extra_attrs=None):\n '''\n Hook point for overriding how the CounterPool creates a DynamoDB item\n for a given counter when an existing item can't be found.\n '''\n table = self.get_table()\n now = datetime.utcnow().replace(microsecond=0).isoformat()\n attrs = {\n 'created_on': now,\n 'modified_on': now,\n 'count': start,\n }\n\n if extra_attrs:\n attrs.update(extra_attrs)\n\n item = table.new_item(\n hash_key=hash_key,\n attrs=attrs,\n )\n\n return item"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_item(self, hash_key, start=0, extra_attrs=None):\n '''\n Hook point for overriding how the CounterPool fetches a DynamoDB item\n for a given counter.\n '''\n table = self.get_table()\n\n try:\n item = table.get_item(hash_key=hash_key)\n except DynamoDBKeyNotFoundError:\n item = None\n\n if item is None:\n item = self.create_item(\n hash_key=hash_key,\n start=start,\n extra_attrs=extra_attrs,\n )\n\n return item", "response": "Hook point for overriding how the CounterPool fetches a DynamoDB item\n for a given counter."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_counter(self, name, start=0):\n '''\n Gets the DynamoDB item behind a counter and ties it 
to a Counter\n instance.\n '''\n item = self.get_item(hash_key=name, start=start)\n counter = Counter(dynamo_item=item, pool=self)\n\n return counter", "response": "Gets the DynamoDB item behind a Counter and ties it to a Counter\n instance."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn True if this instance matches another instance.", "response": "def exact_match(self, descriptor):\r\n \"\"\"\r\n Matches this descriptor to another descriptor exactly.\r\n \r\n Args:\r\n descriptor: another descriptor to match this one.\r\n\r\n Returns: True if descriptors match or False otherwise. \r\n \"\"\"\r\n return self._exact_match_field(self._group, descriptor.get_group()) \\\r\n and self._exact_match_field(self._type, descriptor.get_type()) \\\r\n and self._exact_match_field(self._kind, descriptor.get_kind()) \\\r\n and self._exact_match_field(self._name, descriptor.get_name()) \\\r\n and self._exact_match_field(self._version, descriptor.get_version())"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef many_to_one(clsname, **kw):\n @declared_attr\n def m2o(cls):\n cls._references((cls.__name__, clsname))\n return relationship(clsname, **kw)\n return m2o", "response": "Use an event to build a many-to-one relationship on a class."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef one_to_many(clsname, **kw):\n @declared_attr\n def o2m(cls):\n cls._references((clsname, cls.__name__))\n return relationship(clsname, **kw)\n return o2m", "response": "Use an event to build a one-to-many relationship on a class."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ndjeffifies data between tags.", "response": "def handle_data(self, data):\n \"\"\"\n Djeffify data between tags\n \"\"\"\n if data.strip():\n data = djeffify_string(data)\n self.djhtml += 
data"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _reference_table(cls, ref_table):\n # create pairs of (Foreign key column, primary key column)\n cols = [(sa.Column(), refcol) for refcol in ref_table.primary_key]\n\n # set \"tablename_colname = Foreign key Column\" on the local class\n for col, refcol in cols:\n setattr(cls, \"%s_%s\" % (ref_table.name, refcol.name), col)\n\n # add a ForeignKeyConstraint([local columns], [remote columns])\n cls.__table__.append_constraint(sa.ForeignKeyConstraint(*zip(*cols)))", "response": "Create a foreign key reference from the local class to the given remote class."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ntry to run __json__ on the given object.", "response": "def __try_to_json(self, request, attr):\n \"\"\"\n Try to run __json__ on the given object.\n Raise TypeError if __json__ is missing\n\n :param request: Pyramid Request object\n :type request: \n :param obj: Object to JSONify\n :type obj: any object that has __json__ method\n :exception: TypeError\n \"\"\"\n\n # check for __json__ method and try to JSONify\n if hasattr(attr, '__json__'):\n return attr.__json__(request)\n\n # raise error otherwise\n raise TypeError('__json__ method missing on %s' % str(attr))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nprepare path for use in the base class", "response": "def prepare_path(path):\n \"\"\"\n Path join helper method\n Join paths if list passed\n\n :type path: str|unicode|list\n :rtype: str|unicode\n \"\"\"\n if type(path) == list:\n return os.path.join(*path)\n return path"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef read_from_file(file_path, encoding=\"utf-8\"):\n with codecs.open(file_path, \"r\", encoding) as f:\n return f.read()", "response": "Read the contents of a file with the given encoding."} {"SOURCE": "codesearchnet", 
"instruction": "How would you implement a function in Python 3 that\nwrites contents to file.", "response": "def write_to_file(file_path, contents, encoding=\"utf-8\"):\n \"\"\"\n Write helper method\n\n :type file_path: str|unicode\n :type contents: str|unicode\n :type encoding: str|unicode\n \"\"\"\n with codecs.open(file_path, \"w\", encoding) as f:\n f.write(contents)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef copy_file(src, dest):\n dir_path = os.path.dirname(dest)\n if not os.path.exists(dir_path):\n os.makedirs(dir_path)\n shutil.copy2(src, dest)", "response": "Copy file src to dest"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets the extension of a file name and return it.", "response": "def get_path_extension(path):\n \"\"\"\n Split file name and extension\n\n :type path: str|unicode\n :rtype: one str|unicode\n \"\"\"\n file_path, file_ext = os.path.splitext(path)\n return file_ext.lstrip('.')"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef split_path(path):\n result_parts = []\n #todo: check loops\n while path != \"/\":\n parts = os.path.split(path)\n if parts[1] == path:\n result_parts.insert(0, parts[1])\n break\n elif parts[0] == path:\n result_parts.insert(0, parts[0])\n break\n else:\n path = parts[0]\n result_parts.insert(0, parts[1])\n return result_parts", "response": "Helper method for absolute and relative paths resolution\n "} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncreate fully qualified endpoint URI.", "response": "def _create_api_uri(self, *parts):\n \"\"\"Creates fully qualified endpoint URIs.\n\n :param parts: the string parts that form the request URI\n\n \"\"\"\n return urljoin(self.API_URI, '/'.join(map(quote, parts)))"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nmaking sure we 
have proper ISO 8601 time.", "response": "def _format_iso_time(self, time):\n \"\"\"Makes sure we have proper ISO 8601 time.\n\n :param time: either already ISO 8601 a string or datetime.datetime\n :returns: ISO 8601 time\n :rtype: str\n\n \"\"\"\n if isinstance(time, str):\n return time\n elif isinstance(time, datetime):\n return time.strftime('%Y-%m-%dT%H:%M:%S.%fZ')\n else:\n return None"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning the given response or raises an APIError for non - 2xx responses.", "response": "def _handle_response(self, response):\n \"\"\"Returns the given response or raises an APIError for non-2xx responses.\n\n :param requests.Response response: HTTP response\n :returns: requested data\n :rtype: requests.Response\n :raises APIError: for non-2xx responses\n\n \"\"\"\n if not str(response.status_code).startswith('2'):\n raise get_api_error(response)\n return response"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _check_next(self):\n if self.is_initial:\n return True\n if self.before:\n if self.before_cursor:\n return True\n else:\n return False\n else:\n if self.after_cursor:\n return True\n else:\n return False", "response": "Checks if a next message is possible."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _wrap_color(self, code, text, format=None, style=None):\n color = None\n if code[:3] == self.bg.PREFIX:\n color = self.bg.COLORS.get(code, None)\n if not color:\n color = self.fg.COLORS.get(code, None)\n\n if not color:\n raise Exception('Color code not found')\n\n if format and format not in self.formats:\n raise Exception('Color format not found')\n\n fmt = \"0;\"\n if format == 'bold':\n fmt = \"1;\"\n elif format == 'underline':\n fmt = \"4;\"\n\n # Manage the format\n parts = color.split('[')\n color = '{0}[{1}{2}'.format(parts[0], fmt, parts[1])\n\n if self.has_colors and 
self.colors_enabled:\n # Set brightness\n st = ''\n if style:\n st = self.st.COLORS.get(style, '')\n return \"{0}{1}{2}{3}\".format(st, color, text, self.st.COLORS['reset_all'])\n else:\n return text", "response": "Wrap text with color code and given format"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef RegisterMessage(self, message):\n\n desc = message.DESCRIPTOR\n self._symbols[desc.full_name] = message\n if desc.file.name not in self._symbols_by_file:\n self._symbols_by_file[desc.file.name] = {}\n self._symbols_by_file[desc.file.name][desc.full_name] = message\n self.pool.AddDescriptor(desc)\n return message", "response": "Registers the given message type in the local database."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngetting all the messages from a specified file.", "response": "def GetMessages(self, files):\n \"\"\"Gets all the messages from a specified file.\n\n This will find and resolve dependencies, failing if they are not registered\n in the symbol database.\n\n\n Args:\n files: The file names to extract messages from.\n\n Returns:\n A dictionary mapping proto names to the message classes. 
This will include\n any dependent messages as well as any messages defined in the same file as\n a specified message.\n\n Raises:\n KeyError: if a file could not be found.\n \"\"\"\n\n result = {}\n for f in files:\n result.update(self._symbols_by_file[f])\n return result"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ninserting object before index.", "response": "def insert(self, index, value):\n \"\"\"\n Insert object before index.\n\n :param int index: index to insert in\n :param string value: path to insert\n \"\"\"\n self._list.insert(index, value)\n self._sync()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nparsing the runtime path representation to list.", "response": "def parse(self, string):\n \"\"\"\n Parse runtime path representation to list.\n\n :param string string: runtime path string\n :return: list of runtime paths\n :rtype: list of string\n \"\"\"\n var, eq, values = string.strip().partition('=')\n assert var == 'runtimepath'\n assert eq == '='\n return values.split(',')"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nadd some bundle to build group", "response": "def add_bundle(self, *args):\n \"\"\"\n Add some bundle to build group\n\n :type bundle: static_bundle.bundles.AbstractBundle\n @rtype: BuildGroup\n \"\"\"\n for bundle in args:\n if not self.multitype and self.has_bundles():\n first_bundle = self.get_first_bundle()\n if first_bundle.get_type() != bundle.get_type():\n raise Exception(\n 'Different bundle types for one Asset: %s[%s -> %s]'\n 'check types or set multitype parameter to True'\n % (self.name, first_bundle.get_type(), bundle.get_type())\n )\n self.bundles.append(bundle)\n return self"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncollect files from all bundles and return them.", "response": "def collect_files(self):\n \"\"\"\n Return collected files links\n\n :rtype: 
list[static_bundle.files.StaticFileResult]\n \"\"\"\n self.files = []\n for bundle in self.bundles:\n bundle.init_build(self, self.builder)\n bundle_files = bundle.prepare()\n self.files.extend(bundle_files)\n return self"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning the minifier for the asset.", "response": "def get_minifier(self):\n \"\"\"\n Asset minifier\n Uses default minifier in bundle if it's not defined\n\n :rtype: static_bundle.minifiers.DefaultMinifier|None\n \"\"\"\n if self.minifier is None:\n if not self.has_bundles():\n raise Exception(\"Unable to get default minifier, no bundles in build group\")\n minifier = self.get_first_bundle().get_default_minifier()\n else:\n minifier = self.minifier\n if minifier:\n minifier.init_asset(self)\n return minifier"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncreates an asset object and store it in the self. assets dict.", "response": "def create_asset(self, name, **kwargs):\n \"\"\"\n Create asset\n\n :type name: unicode|str\n :rtype: Asset\n \"\"\"\n asset = Asset(self, name, **kwargs)\n self.assets[name] = asset\n return asset"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nrender all includes in asset by names", "response": "def render_asset(self, name):\n \"\"\"\n Render all includes in asset by names\n\n :type name: str|unicode\n :rtype: str|unicode\n \"\"\"\n result = \"\"\n if self.has_asset(name):\n asset = self.get_asset(name)\n if asset.files:\n for f in asset.files:\n result += f.render_include() + \"\\r\\n\"\n return result"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef collect_links(self, env=None):\n for asset in self.assets.values():\n if asset.has_bundles():\n asset.collect_files()\n if env is None:\n env = self.config.env\n if env == static_bundle.ENV_PRODUCTION:\n self._minify(emulate=True)\n self._add_url_prefix()", 
"response": "Collect links without build files"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef make_build(self):\n for asset in self.assets.values():\n if asset.has_bundles():\n asset.collect_files()\n if not os.path.exists(self.config.output_dir):\n os.makedirs(self.config.output_dir)\n if self.config.copy_only_bundles:\n for asset in self.assets.values():\n if not asset.minify and asset.files:\n for f in asset.files:\n copy_file(f.abs_path, self._get_output_path(f.abs_path))\n else:\n copy_excludes = {}\n for asset in self.assets.values():\n if asset.minify and asset.files:\n for f in asset.files:\n copy_excludes[f.abs_path] = f\n for root, dirs, files in os.walk(self.config.input_dir):\n for fpath in files:\n current_file_path = os.path.join(root, fpath)\n if current_file_path not in copy_excludes:\n copy_file(current_file_path, self._get_output_path(current_file_path))\n self._minify()", "response": "Move files and make static build"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nclear build output directory", "response": "def clear(self, exclude=None):\n \"\"\"\n Clear build output dir\n :type exclude: list|None\n \"\"\"\n exclude = exclude or []\n for root, dirs, files in os.walk(self.config.output_dir):\n for f in files:\n if f not in exclude:\n os.unlink(os.path.join(root, f))\n for d in dirs:\n if d not in exclude:\n shutil.rmtree(os.path.join(root, d))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _default_json_default(obj):\n if isinstance(obj, (datetime.datetime, datetime.date, datetime.time)):\n return obj.strftime(default_date_fmt)\n else:\n return str(obj)", "response": "Coerce everything to strings."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ninitializing the zlogger. 
Sets up a rotating file handler to the specified path and file with the given size and backup count limits, sets the default application_name, server_hostname, and default/whitelist fields. :param path: path to write the log file :param target: name of the log file :param logger_name: name of the logger (defaults to root) :param level: log level for this logger (defaults to logging.DEBUG) :param maxBytes: size of the file before rotation (default 1MB) :param application_name: app name to add to each log entry :param server_hostname: hostname to add to each log entry :param fields: default/whitelist fields. :type path: string :type target: string :type logger_name: string :type level: int :type maxBytes: int :type backupCount: int :type application_name: string :type server_hostname: string :type fields: dict", "response": "def init_logs(path=None,\n target=None,\n logger_name='root',\n level=logging.DEBUG,\n maxBytes=1*1024*1024,\n backupCount=5,\n application_name='default',\n server_hostname=None,\n fields=None):\n \"\"\"Initialize the zlogger.\n\n Sets up a rotating file handler to the specified path and file with\n the given size and backup count limits, sets the default\n application_name, server_hostname, and default/whitelist fields.\n\n :param path: path to write the log file\n :param target: name of the log file\n :param logger_name: name of the logger (defaults to root)\n :param level: log level for this logger (defaults to logging.DEBUG)\n :param maxBytes: size of the file before rotation (default 1MB)\n :param application_name: app name to add to each log entry\n :param server_hostname: hostname to add to each log entry\n :param fields: default/whitelist fields.\n :type path: string\n :type target: string\n :type logger_name: string\n :type level: int\n :type maxBytes: int\n :type backupCount: int\n :type application_name: string\n :type server_hostname: string\n :type fields: dict\n \"\"\"\n log_file = os.path.abspath(\n os.path.join(path, target))\n 
logger = logging.getLogger(logger_name)\n logger.setLevel(level)\n\n handler = logging.handlers.RotatingFileHandler(\n log_file, maxBytes=maxBytes, backupCount=backupCount)\n handler.setLevel(level)\n\n handler.setFormatter(\n JsonFormatter(\n application_name=application_name,\n server_hostname=server_hostname,\n fields=fields))\n\n logger.addHandler(handler)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef format(self, record):\n\n record_fields = record.__dict__.copy()\n self._set_exc_info(record_fields)\n\n event_name = 'default'\n if record_fields.get('event_name'):\n event_name = record_fields.pop('event_name')\n\n log_level = 'INFO'\n if record_fields.get('log_level'):\n log_level = record_fields.pop('log_level')\n\n [record_fields.pop(k) for k in record_fields.keys()\n if k not in self.fields]\n\n defaults = self.defaults.copy()\n fields = self.fields.copy()\n fields.update(record_fields)\n filtered_fields = {}\n for k, v in fields.iteritems():\n if v is not None:\n filtered_fields[k] = v\n\n defaults.update({\n 'event_timestamp': self._get_now(),\n 'event_name': event_name,\n 'log_level': log_level,\n 'fields': filtered_fields})\n\n return json.dumps(defaults, default=self.json_default)", "response": "Formats a logging.Record into a standard JSON log entry"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef includeme(config):\n settings = config.get_settings()\n should_create = asbool(settings.get('baka_model.should_create_all', False))\n should_drop = asbool(settings.get('baka_model.should_drop_all', False))\n\n # Configure the transaction manager to support retrying retryable\n # exceptions.
We also register the session factory with the thread-local\n # transaction manager, so that all sessions it creates are registered.\n # \"tm.attempts\": 3,\n config.add_settings({\n \"retry.attempts\": 3,\n \"tm.activate_hook\": tm_activate_hook,\n \"tm.annotate_user\": False,\n })\n\n # use pyramid_retry couse pyramid_tm disabled it\n config.include('pyramid_retry')\n # use pyramid_tm to hook the transaction lifecycle to the request\n config.include('pyramid_tm')\n\n engine = get_engine(settings)\n session_factory = get_session_factory(engine)\n\n config.registry['db_session_factory'] = session_factory\n\n # make request.db available for use in Pyramid\n config.add_request_method(\n # r.tm is the transaction manager used by pyramid_tm\n lambda r: get_tm_session(session_factory, r.tm),\n 'db',\n reify=True\n )\n\n # service model factory\n config.include('.service')\n\n # Register a deferred action to bind the engine when the configuration is\n # committed. Deferring the action means that this module can be included\n # before model modules without ill effect.\n config.action(None, bind_engine, (engine,), {\n 'should_create': should_create,\n 'should_drop': should_drop\n }, order=10)", "response": "Include the Pyramid model modules."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef store( self, collection, **kwargs ):\n '''\n validate the passed values in kwargs based on the collection,\n store them in the mongodb collection\n '''\n key = validate( collection, **kwargs )\n if self.fetch( collection, **{ key : kwargs[key] } ):\n raise Proauth2Error( 'duplicate_key' )\n self.db[collection].insert( kwargs )", "response": "store the passed values in kwargs in the mongodb collection"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_abs_and_rel_paths(self, root_path, file_name, input_dir):\n # todo: change relative path resolving [bug on 
duplicate dir names in path]\n relative_dir = root_path.replace(input_dir, '')\n return os.path.join(root_path, file_name), relative_dir + '/' + file_name", "response": "Return absolute and relative paths for file\n "} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_files(self):\n assert self.bundle, 'Cannot fetch file name with empty bundle'\n abs_path, rel_path = self.get_abs_and_rel_paths(self.bundle.path, self.file_path, self.bundle.input_dir)\n file_cls = self.bundle.get_file_cls()\n return [file_cls(rel_path, abs_path)]", "response": "Returns a list of all files in the bundle"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nget the files in the bundle.", "response": "def get_files(self):\n \"\"\"\n :inheritdoc\n \"\"\"\n assert self.bundle, 'Cannot fetch directory name with empty bundle'\n result_files = []\n bundle_ext = self.bundle.get_extension()\n ext = \".\" + bundle_ext if bundle_ext else None\n if self.directory_path == \"\":\n root_path = self.bundle.path\n else:\n root_path = os.path.join(self.bundle.path, self.directory_path)\n for root, dirs, files in os.walk(root_path):\n for fpath in files:\n if (not ext or fpath.endswith(ext)) and (not self.exclusions or all(fpath != n for n in self.exclusions)):\n abs_path, rel_path = self.get_abs_and_rel_paths(root, fpath, self.bundle.input_dir)\n file_cls = self.bundle.get_file_cls()\n result_files.append(file_cls(rel_path, abs_path))\n return result_files"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef replicate_existing(source_db, target_db):\n # Get the server from which to manage the replication.\n server = shortcuts.get_server()\n logger = logging.getLogger('relax.couchdb.replicate')\n logger.debug('POST ' + urlparse.urljoin(server.resource.uri, '/_replicate'))\n source, target = specifier_to_db(source_db), 
specifier_to_db(target_db)\n logger.debug('Source DB: %s' % (source,))\n logger.debug('Target DB: %s' % (target,))\n try:\n resp_headers, resp_body = server.resource.post(path='/_replicate',\n content=json.dumps({'source': source, 'target': target}))\n except couchdb.client.ServerError, exc:\n logger.error('Replication failed.')\n raise ReplicationError(exc.args)\n result = resp_body['history'][0]\n if resp_body['ok']:\n logger.info('Replication %s... successful!' % (\n resp_body['session_id'][:6],))\n logger.info('Replication started: ' + result['start_time'])\n logger.info('Replication finished: ' + result['end_time'])\n result['start_time'] = datetime.datetime.strptime(result['start_time'],\n '%a, %d %b %Y %H:%M:%S GMT')\n result['end_time'] = datetime.datetime.strptime(result['end_time'],\n '%a, %d %b %Y %H:%M:%S GMT')\n timedelta = result['end_time'] - result['start_time']\n if timedelta.days:\n logger.info('Replication took %d days and %.2f seconds.' % (\n timedelta.days,\n timedelta.seconds + (timedelta.microseconds * (1e-6))))\n else:\n logger.info('Replication took %.2f seconds.' % (\n timedelta.seconds + (timedelta.microseconds * (1e-6))))\n # Prepare the 'result' dictionary.\n result['ok'] = resp_body['ok']\n result['session_id'] = resp_body['session_id']\n result['source_last_seq'] = resp_body['source_last_seq']\n # Info-log the number of docs read/written and checked/found.\n if result['docs_read'] == 1:\n docs_read = '1 document read'\n else:\n docs_read = '%d documents read' % (result['docs_read'],)\n if result['docs_written'] == 1:\n docs_written = '1 document written'\n else:\n docs_written = '%d documents written' % (result['docs_written'],)\n if result['missing_checked'] == 1:\n missing_checked = 'Checked for 1 missing document, found %d.' % (\n result['missing_found'],)\n else:\n missing_checked = 'Checked for %d missing documents, found %d.' 
% (\n result['missing_checked'], result['missing_found'],)\n logging.info('%s, %s' % (docs_read, docs_written))\n logging.info(missing_checked)\n return result\n else:\n logger.error('Replication %s... failed.' % (\n resp_body['session_id'][:6],))\n result['ok'] = resp_body['ok']\n result['session_id'] = resp_body['session_id']\n result['source_last_seq'] = resp_body['source_last_seq']\n raise ReplicationFailure(resp_headers, result)", "response": "Replicate an existing database to another existing database."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nadvance the chain by nsteps steps.", "response": "def mcmc_advance(start, stdevs, logp, nsteps = 1e300, adapt=True, callback=None):\n\t\"\"\"\n\tGeneric Metropolis MCMC. Advances the chain by nsteps.\n\tCalled by :func:`mcmc`\n\t\n\t:param adapt: enables adaptive stepwidth alteration (converges).\n\t\"\"\"\n\timport scipy\n\tfrom numpy import log\n\timport progressbar\n\t\n\tprob = logp(start)\n\tchain = [start]\n\taccepts = [True]\n\tprobs = [prob]\n\tassert not numpy.isnan(start).any()\n\tassert not numpy.isnan(stdevs).any()\n\t\n\ti = 0\n\twidgets=['AR', progressbar.Percentage(), progressbar.Counter('%5d'),\n\t\tprogressbar.Bar(), progressbar.ETA()]\n\tpbar = progressbar.ProgressBar(widgets=widgets,\n\t\tmaxval=nsteps).start()\n\n\tprev = start\n\tprev_prob = prob\n\tprint 'MCMC: start at prob', prob\n\tstepchange = 0.1\n\twhile len(chain) < nsteps:\n\t\ti = i + 1\n\t\tnext = scipy.random.normal(prev, stdevs)\n\t\tnext[next > 1] = 1\n\t\tnext[next < 0] = 0\n\t\tnext_prob = logp(next)\n\t\tassert not numpy.isnan(next).any()\n\t\tassert not numpy.isnan(next_prob).any()\n\t\tdelta = next_prob - prev_prob\n\t\tdice = log(scipy.random.uniform(0, 1))\n\t\taccept = delta > dice\n\t\tif accept:\n\t\t\tprev = next\n\t\t\tprev_prob = next_prob\n\t\t\tif adapt: stdevs *= (1 + stepchange)\n\t\telse:\n\t\t\tif adapt: stdevs *= (1 + stepchange)**(-0.4) # aiming for 40% 
acceptance\n\t\tif callback: callback(prev_prob, prev, accept)\n\t\tchain.append(prev)\n\t\taccepts.append(accept)\n\t\tprobs.append(prev_prob)\n\t\tif adapt: stepchange = min(0.1, 10. / i)\n\t\t#print 'STDEV', stdevs[:5], stepchange\n\t\t\n\t\t# compute stats\n\t\twidgets[0] = 'AR: %.03f' % numpy.mean(numpy.array(accepts[len(accepts)/3:])+0)\n\t\tpbar.update(pbar.currval + 1)\n\tpbar.finish()\n\t\n\treturn chain, probs, accepts, stdevs"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef mcmc(transform, loglikelihood, parameter_names, nsteps=40000, nburn=400, \n\t\tstdevs=0.1, start = 0.5, **problem):\n\t\"\"\"\n \t**Metropolis Hastings MCMC**\n\n\twith automatic step width adaption. \n\tBurnin period is also used to guess steps.\n\t\n\t:param nburn: number of burnin steps\n\t:param stdevs: step widths to start with\n\t\"\"\"\n\t\n\tif 'seed' in problem:\n\t\tnumpy.random.seed(problem['seed'])\n\tn_params = len(parameter_names)\n\t\n\tdef like(cube):\n\t\tcube = numpy.array(cube)\n\t\tif (cube <= 1e-10).any() or (cube >= 1-1e-10).any():\n\t\t\treturn -1e100\n\t\tparams = transform(cube)\n\t\treturn loglikelihood(params)\n\t\n\tstart = start + numpy.zeros(n_params)\n\tstdevs = stdevs + numpy.zeros(n_params)\n\n\tdef compute_stepwidths(chain):\n\t\treturn numpy.std(chain, axis=0) / 3\n\n\timport matplotlib.pyplot as plt\n\tplt.figure(figsize=(7, 7))\n\tsteps = numpy.array([0.1]*(n_params))\n\tprint 'burn-in (1/2)...'\n\tchain, prob, _, steps_ = mcmc_advance(start, steps, like, nsteps=nburn / 2, adapt=True)\n\tsteps = compute_stepwidths(chain)\n\tprint 'burn-in (2/2)...'\n\tchain, prob, _, steps_ = mcmc_advance(chain[-1], steps, like, nsteps=nburn / 2, adapt=True)\n\tsteps = compute_stepwidths(chain)\n\tprint 'recording chain ...'\n\tchain, prob, _, steps_ = mcmc_advance(chain[-1], steps, like, nsteps=nsteps)\n\tchain = numpy.array(chain)\n\n\ti = numpy.argmax(prob)\n\tfinal = chain[-1]\n\tprint 
'postprocessing...'\n\tchain = numpy.array([transform(params) for params in chain])\n\t\n\treturn dict(start=chain[-1], maximum=chain[i], seeds=[final, chain[i]], chain=chain, method='Metropolis MCMC')", "response": "This function is used to guess the MCMC step width for a given set of parameters."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef ensemble(transform, loglikelihood, parameter_names, nsteps=40000, nburn=400, \n\t\tstart=0.5, **problem):\n\t\"\"\"\n\t**Ensemble MCMC**\n\t\n\tvia `emcee `_\n\t\"\"\"\n\timport emcee\n\timport progressbar\n\tif 'seed' in problem:\n\t\tnumpy.random.seed(problem['seed'])\n\tn_params = len(parameter_names)\n\tnwalkers = 50 + n_params * 2\n\tif nwalkers > 200:\n\t\tnwalkers = 200\n\tp0 = [numpy.random.rand(n_params) for i in xrange(nwalkers)]\n\tstart = start + numpy.zeros(n_params)\n\tp0[0] = start\n\n\tdef like(cube):\n\t\tcube = numpy.array(cube)\n\t\tif (cube <= 1e-10).any() or (cube >= 1-1e-10).any():\n\t\t\treturn -1e100\n\t\tparams = transform(cube)\n\t\treturn loglikelihood(params)\n\t\n\tsampler = emcee.EnsembleSampler(nwalkers, n_params, like,\n\t\tlive_dangerously=True)\n\n\tprint 'burn-in...'\n\tpos, prob, state = sampler.run_mcmc(p0, nburn / nwalkers)\n\n\t# Reset the chain to remove the burn-in samples.\n\tsampler.reset()\n\n\tprint 'running ...'\n\t# Starting from the final position in the burn-in chain, sample\n\tpbar = progressbar.ProgressBar(\n\t\twidgets=[progressbar.Percentage(), progressbar.Counter('%5d'),\n\t\tprogressbar.Bar(), progressbar.ETA()],\n\t\tmaxval=nsteps).start()\n\tfor results in sampler.sample(pos, iterations=nsteps / nwalkers, rstate0=state):\n\t\tpbar.update(pbar.currval + 1)\n\tpbar.finish()\n\n\tprint \"Mean acceptance fraction:\", numpy.mean(sampler.acceptance_fraction)\n\n\tchain = sampler.flatchain\n\t\n\tfinal = chain[-1]\n\tprint 'postprocessing...'\n\tchain_post = numpy.array([transform(params) for params in 
chain])\n\tchain_prob = sampler.flatlnprobability\n\t\n\treturn dict(start=final, chain=chain_post, chain_prior=chain,\n\t\tchain_prob=chain_prob,\n\t\tmethod='Ensemble MCMC')", "response": "Ensemble MCMC with a given transform and loglikelihood."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nsearching monits or works classes.", "response": "def _get_classes(package_name, base_class):\n \"\"\"\n search monits or works classes. Class must have 'name' attribute\n :param package_name: 'monits' or 'works'\n :param base_class: Monit or Work\n :return: tuple of tuples monit/work-name and class\n \"\"\"\n classes = {}\n\n base_dir = os.getcwd()\n root_module_name = base_dir.split('/')[-1]\n package_dir = base_dir + '/%s' % package_name\n if os.path.isdir(package_dir):\n for module_path in os.listdir(package_dir):\n if not module_path.endswith('.py'):\n continue\n\n module_name = os.path.splitext(module_path)[0]\n module_full_name = '%s.%s.%s' % (root_module_name, package_name, module_name)\n __import__(module_full_name)\n work_module = sys.modules[module_full_name]\n for module_item in work_module.__dict__.values():\n if type(module_item) is type \\\n and issubclass(module_item, base_class) \\\n and module_item is not base_class\\\n and hasattr(module_item, 'name') and module_item.name:\n classes.setdefault(module_item.name, []).append(module_item)\n\n # check no duplicated names\n for work_name, work_modules in classes.items():\n if len(work_modules) > 1:\n raise DuplicatedNameException('Modules %s have same name \"%s\"' % (\n ' and '.join(map(str, work_modules)),\n work_name\n ))\n\n # create immutable list of modules\n return tuple([(work_name, work_modules[0]) for work_name, work_modules in classes.items()])"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nadd an EnumDescriptor to the pool.", "response": "def AddEnumDescriptor(self, enum_desc):\n \"\"\"Adds an EnumDescriptor to the pool.\n\n This method 
also registers the FileDescriptor associated with the message.\n\n Args:\n enum_desc: An EnumDescriptor.\n \"\"\"\n\n if not isinstance(enum_desc, descriptor.EnumDescriptor):\n raise TypeError('Expected instance of descriptor.EnumDescriptor.')\n\n self._enum_descriptors[enum_desc.full_name] = enum_desc\n self.AddFileDescriptor(enum_desc.file)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef FindFileContainingSymbol(self, symbol):\n\n symbol = _NormalizeFullyQualifiedName(symbol)\n try:\n return self._descriptors[symbol].file\n except KeyError:\n pass\n\n try:\n return self._enum_descriptors[symbol].file\n except KeyError:\n pass\n\n try:\n file_proto = self._internal_db.FindFileContainingSymbol(symbol)\n except KeyError as error:\n if self._descriptor_db:\n file_proto = self._descriptor_db.FindFileContainingSymbol(symbol)\n else:\n raise error\n if not file_proto:\n raise KeyError('Cannot find a file containing %s' % symbol)\n return self._ConvertFileProtoToFileDescriptor(file_proto)", "response": "Searches the file descriptor for the file containing the specified symbol."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nload the named descriptor from the pool.", "response": "def FindMessageTypeByName(self, full_name):\n \"\"\"Loads the named descriptor from the pool.\n\n Args:\n full_name: The full name of the descriptor to load.\n\n Returns:\n The descriptor for the named type.\n \"\"\"\n\n full_name = _NormalizeFullyQualifiedName(full_name)\n if full_name not in self._descriptors:\n self.FindFileContainingSymbol(full_name)\n return self._descriptors[full_name]"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nload the named enum descriptor from the pool.", "response": "def FindEnumTypeByName(self, full_name):\n \"\"\"Loads the named enum descriptor from the pool.\n\n Args:\n full_name: The full name of the enum 
descriptor to load.\n\n Returns:\n The enum descriptor for the named type.\n \"\"\"\n\n full_name = _NormalizeFullyQualifiedName(full_name)\n if full_name not in self._enum_descriptors:\n self.FindFileContainingSymbol(full_name)\n return self._enum_descriptors[full_name]"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nloading the named extension descriptor from the pool.", "response": "def FindExtensionByName(self, full_name):\n \"\"\"Loads the named extension descriptor from the pool.\n\n Args:\n full_name: The full name of the extension descriptor to load.\n\n Returns:\n A FieldDescriptor, describing the named extension.\n \"\"\"\n full_name = _NormalizeFullyQualifiedName(full_name)\n message_name, _, extension_name = full_name.rpartition('.')\n try:\n # Most extensions are nested inside a message.\n scope = self.FindMessageTypeByName(message_name)\n except KeyError:\n # Some extensions are defined at file scope.\n scope = self.FindFileContainingSymbol(full_name)\n return scope.extensions_by_name[extension_name]"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _ConvertEnumDescriptor(self, enum_proto, package=None, file_desc=None,\n containing_type=None, scope=None):\n \"\"\"Make a protobuf EnumDescriptor given an EnumDescriptorProto protobuf.\n\n Args:\n enum_proto: The descriptor_pb2.EnumDescriptorProto protobuf message.\n package: Optional package name for the new message EnumDescriptor.\n file_desc: The file containing the enum descriptor.\n containing_type: The type containing this enum.\n scope: Scope containing available types.\n\n Returns:\n The added descriptor\n \"\"\"\n\n if package:\n enum_name = '.'.join((package, enum_proto.name))\n else:\n enum_name = enum_proto.name\n\n if file_desc is None:\n file_name = None\n else:\n file_name = file_desc.name\n\n values = [self._MakeEnumValueDescriptor(value, index)\n for index, value in enumerate(enum_proto.value)]\n desc = 
descriptor.EnumDescriptor(name=enum_proto.name,\n full_name=enum_name,\n filename=file_name,\n file=file_desc,\n values=values,\n containing_type=containing_type,\n options=enum_proto.options)\n scope['.%s' % enum_name] = desc\n self._enum_descriptors[enum_name] = desc\n return desc", "response": "Converts an EnumDescriptorProto to a EnumDescriptor."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncreates a FieldDescriptor object from a FieldDescriptorProto.", "response": "def _MakeFieldDescriptor(self, field_proto, message_name, index,\n is_extension=False):\n \"\"\"Creates a field descriptor from a FieldDescriptorProto.\n\n For message and enum type fields, this method will do a look up\n in the pool for the appropriate descriptor for that type. If it\n is unavailable, it will fall back to the _source function to\n create it. If this type is still unavailable, construction will\n fail.\n\n Args:\n field_proto: The proto describing the field.\n message_name: The name of the containing message.\n index: Index of the field\n is_extension: Indication that this field is for an extension.\n\n Returns:\n An initialized FieldDescriptor object\n \"\"\"\n\n if message_name:\n full_name = '.'.join((message_name, field_proto.name))\n else:\n full_name = field_proto.name\n\n return descriptor.FieldDescriptor(\n name=field_proto.name,\n full_name=full_name,\n index=index,\n number=field_proto.number,\n type=field_proto.type,\n cpp_type=None,\n message_type=None,\n enum_type=None,\n containing_type=None,\n label=field_proto.label,\n has_default_value=False,\n default_value=None,\n is_extension=is_extension,\n extension_scope=None,\n options=field_proto.options)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef validate( table, **data ):\n '''\n theoretically, any data store can be implemented to work with this package,\n which means basic data validation 
must be done in-package, so that weird\n stuff can't be stored in the data store.\n this function raises an exception if an invalid table name is passed, not\n all of the required fields are in the data kwargs, or if a field that was\n passed is not expected.\n it also returns the key field name, for ensuring uniqueness (again, that may\n not be built into whatever data store is implemented.)\n '''\n if table not in good.keys():\n raise Proauth2Error( 'invalid_request', 'invalid name: %s' % table )\n for req in good[table]['required']:\n if not data.get( req, None ):\n raise Proauth2Error( 'invalid_request',\n 'missing required field: %s' % req )\n for key in data.keys():\n if key not in good[table]['required'] and \\\n key not in good[table]['optional']:\n raise Proauth2Error( 'invalid_request', 'invalid field: %s' % key )\n return good[table]['key']", "response": "Validate that the data passed in is valid for the given table."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nrecording the results of this experiment by updating the tag.", "response": "def record_results(self, results):\n \"\"\"\n Record the results of this experiment, by updating the tag.\n :param results: A dictionary containing the results of the experiment.\n :type results: dict\n \"\"\"\n repository = Repo(self.__repository_directory, search_parent_directories=True)\n for tag in repository.tags:\n if tag.name == self.__tag_name:\n tag_object = tag\n break\n else:\n raise Exception(\"Experiment tag has been deleted since experiment started\")\n data = json.loads(tag_object.tag.message)\n data[\"results\"] = results\n TagReference.create(repository, self.__tag_name, message=json.dumps(data),\n ref=tag_object.tag.object, force=True)\n self.__results_recorded = True"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef __tag_repo(self, data, repository):\n assert self.__tag_name not in [t.name for t in 
repository.tags]\n return TagReference.create(repository, self.__tag_name, message=json.dumps(data))", "response": "Tag the current repository with the given data."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns the files that have been modified and can be added to the repository.", "response": "def __get_files_to_be_added(self, repository):\n \"\"\"\n :return: the files that have been modified and can be added\n \"\"\"\n for root, dirs, files in os.walk(repository.working_dir):\n for f in files:\n relative_path = os.path.join(root, f)[len(repository.working_dir) + 1:]\n try:\n repository.head.commit.tree[relative_path] # will fail if not tracked\n yield relative_path\n except:\n pass"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef __start_experiment(self, parameters):\n repository = Repo(self.__repository_directory, search_parent_directories=True)\n if len(repository.untracked_files) > 0:\n logging.warning(\"Untracked files will not be recorded: %s\", repository.untracked_files)\n current_commit = repository.head.commit\n started_state_is_dirty = repository.is_dirty()\n\n if started_state_is_dirty:\n repository.index.add([p for p in self.__get_files_to_be_added(repository)])\n commit_obj = repository.index.commit(\"Temporary commit for experiment \" + self.__experiment_name)\n sha = commit_obj.hexsha\n else:\n sha = repository.head.object.hexsha\n\n data = {\"parameters\": parameters, \"started\": time.time(), \"description\": self.__description,\n \"commit_sha\": sha}\n tag_object = self.__tag_repo(data, repository)\n\n if started_state_is_dirty:\n repository.head.reset(current_commit, working_tree=False, index=True)\n\n return tag_object", "response": "Start an experiment by capturing the state of the code\n and then processing the tags."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn a sqlalchemy. orm. 
Session instance backed by a transaction.", "response": "def get_tm_session(session_factory, transaction_manager):\n \"\"\"\n Get a ``sqlalchemy.orm.Session`` instance backed by a transaction.\n\n This function will hook the session to the transaction manager which\n will take care of committing any changes.\n\n - When using pyramid_tm it will automatically be committed or aborted\n depending on whether an exception is raised.\n\n - When using scripts you should wrap the session in a manager yourself.\n For example::\n\n import transaction\n\n engine = get_engine(settings)\n session_factory = get_session_factory(engine)\n with transaction.manager:\n dbsession = get_tm_session(session_factory, transaction.manager)\n\n \"\"\"\n dbsession = session_factory()\n zope.sqlalchemy.register(\n dbsession, transaction_manager=transaction_manager)\n return dbsession"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef run(self):\n global existed_urls_list\n config = config_creator()\n debug = config.debug\n main_thread_sleep = config.main_thread_sleep\n branch_thread_num = config.branch_thread_num\n while 1:\n url = self.main_queue.get()\n if debug:\n print('main thread-{} start'.format(url))\n main_spider = self.main_spider(url)\n sleep(random.randrange(*main_thread_sleep))\n links = main_spider.request_urls()\n\n try:\n assert type(links) in VALIDATE_URLS\n except AssertionError:\n error_message('except to return a list or tuple which contains url')\n links = list()\n\n branch_queue = queue.Queue(branch_thread_num)\n\n for i in range(branch_thread_num):\n branch_thread = BranchThread(branch_queue=branch_queue,\n branch_spider=self.branch_spider)\n branch_thread.daemon = True\n branch_thread.start()\n\n for link in links:\n if link not in existed_urls_list:\n existed_urls_list.append(link)\n branch_queue.put(link)\n\n branch_queue.join()\n if debug:\n print('main thread-{}\\'s child threads is all finish'.format(url))\n 
self.main_queue.task_done()", "response": "run your main spider and get a list or tuple of url as result\n then make the instance of branch thread"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef pypi_render(source):\n ALLOWED_SCHEMES = '''file ftp gopher hdl http https imap mailto mms news\n nntp prospero rsync rtsp rtspu sftp shttp sip sips snews svn svn+ssh\n telnet wais irc'''.split()\n\n settings_overrides = {\n \"raw_enabled\": 0, # no raw HTML code\n \"file_insertion_enabled\": 0, # no file/URL access\n \"halt_level\": 2, # at warnings or errors, raise an exception\n \"report_level\": 5, # never report problems with the reST code\n }\n\n # capture publishing errors, they go to stderr\n old_stderr = sys.stderr\n sys.stderr = s = StringIO.StringIO()\n parts = None\n\n try:\n # Convert reStructuredText to HTML using Docutils.\n document = publish_doctree(source=source,\n settings_overrides=settings_overrides)\n\n for node in document.traverse():\n if node.tagname == '#text':\n continue\n if node.hasattr('refuri'):\n uri = node['refuri']\n elif node.hasattr('uri'):\n uri = node['uri']\n else:\n continue\n o = urlparse.urlparse(uri)\n if o.scheme not in ALLOWED_SCHEMES:\n raise TransformError('link scheme not allowed')\n\n # now turn the transformed document into HTML\n reader = readers.doctree.Reader(parser_name='null')\n pub = Publisher(reader, source=io.DocTreeInput(document),\n destination_class=io.StringOutput)\n pub.set_writer('html')\n pub.process_programmatic_settings(None, settings_overrides, None)\n pub.set_destination(None, None)\n pub.publish()\n parts = pub.writer.parts\n\n except:\n pass\n\n sys.stderr = old_stderr\n\n # original text if publishing errors occur\n if parts is None or len(s.getvalue()) > 0:\n return None\n else:\n return parts['body']", "response": "Render the source into a single page of HTML."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function 
in Python 3 to\ngenerate a random string of the specified length.", "response": "def generate(length=DEFAULT_LENGTH):\n \"\"\"\n Generate a random string of the specified length.\n\n The returned string is composed of an alphabet that shouldn't include any\n characters that are easily mistakeable for one another (I, 1, O, 0), and\n hopefully won't accidentally contain any English-language curse words.\n \"\"\"\n return ''.join(random.SystemRandom().choice(ALPHABET)\n for _ in range(length))"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nassert that the named field has the right data_type", "response": "def require(name, field, data_type):\n \"\"\"Require that the named `field` has the right `data_type`\"\"\"\n if not isinstance(field, data_type):\n msg = '{0} must have {1}, got: {2}'.format(name, data_type, field)\n raise AssertionError(msg)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\npush a new msg onto the queue and return a tuple of success and msg", "response": "def _enqueue(self, msg):\n \"\"\"Push a new `msg` onto the queue, return `(success, msg)`\"\"\"\n self.log.debug('queueing: %s', msg)\n\n if self.queue.full():\n self.log.warn('librato_bg queue is full')\n return False, msg\n\n self.queue.put(msg)\n self.log.debug('enqueued %s.', msg)\n return True, msg"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef flush(self):\n queue = self.queue\n size = queue.qsize()\n queue.join()\n self.log.debug('successfully flushed %s items.', size)", "response": "Forces a flush from the internal queue to the server"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef open(name=None, fileobj=None, closefd=True):\n return Guesser().open(name=name, fileobj=fileobj, closefd=closefd)", "response": "Open a file - like object."} {"SOURCE": "codesearchnet", "instruction": "How would you 
explain what the following Python 3 function does\ndef marv(ctx, config, loglevel, logfilter, verbosity):\n if config is None:\n cwd = os.path.abspath(os.path.curdir)\n while cwd != os.path.sep:\n config = os.path.join(cwd, 'marv.conf')\n if os.path.exists(config):\n break\n cwd = os.path.dirname(cwd)\n else:\n config = '/etc/marv/marv.conf'\n if not os.path.exists(config):\n config = None\n ctx.obj = config\n setup_logging(loglevel, verbosity, logfilter)", "response": "Manage a Marv site"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nlikes _VarintDecoder but decodes signed values.", "response": "def _SignedVarintDecoder(mask, result_type):\n \"\"\"Like _VarintDecoder() but decodes signed values.\"\"\"\n\n def DecodeVarint(buffer, pos):\n result = 0\n shift = 0\n while 1:\n b = six.indexbytes(buffer, pos)\n result |= ((b & 0x7f) << shift)\n pos += 1\n if not (b & 0x80):\n if result > 0x7fffffffffffffff:\n result -= (1 << 64)\n result |= ~mask\n else:\n result &= mask\n result = result_type(result)\n return (result, pos)\n shift += 7\n if shift >= 64:\n raise _DecodeError('Too many bytes when decoding varint.')\n return DecodeVarint"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef MessageSetItemDecoder(extensions_by_number):\n\n type_id_tag_bytes = encoder.TagBytes(2, wire_format.WIRETYPE_VARINT)\n message_tag_bytes = encoder.TagBytes(3, wire_format.WIRETYPE_LENGTH_DELIMITED)\n item_end_tag_bytes = encoder.TagBytes(1, wire_format.WIRETYPE_END_GROUP)\n\n local_ReadTag = ReadTag\n local_DecodeVarint = _DecodeVarint\n local_SkipField = SkipField\n\n def DecodeItem(buffer, pos, end, message, field_dict):\n message_set_item_start = pos\n type_id = -1\n message_start = -1\n message_end = -1\n\n # Technically, type_id and message can appear in any order, so we need\n # a little loop here.\n while 1:\n (tag_bytes, pos) = local_ReadTag(buffer, pos)\n if tag_bytes == 
type_id_tag_bytes:\n (type_id, pos) = local_DecodeVarint(buffer, pos)\n elif tag_bytes == message_tag_bytes:\n (size, message_start) = local_DecodeVarint(buffer, pos)\n pos = message_end = message_start + size\n elif tag_bytes == item_end_tag_bytes:\n break\n else:\n pos = SkipField(buffer, pos, end, tag_bytes)\n if pos == -1:\n raise _DecodeError('Missing group end tag.')\n\n if pos > end:\n raise _DecodeError('Truncated message.')\n\n if type_id == -1:\n raise _DecodeError('MessageSet item missing type_id.')\n if message_start == -1:\n raise _DecodeError('MessageSet item missing message.')\n\n extension = extensions_by_number.get(type_id)\n if extension is not None:\n value = field_dict.get(extension)\n if value is None:\n value = field_dict.setdefault(\n extension, extension.message_type._concrete_class())\n if value._InternalParse(buffer, message_start,message_end) != message_end:\n # The only reason _InternalParse would return early is if it encountered\n # an end-group tag.\n raise _DecodeError('Unexpected end-group tag.')\n else:\n if not message._unknown_fields:\n message._unknown_fields = []\n message._unknown_fields.append((MESSAGE_SET_ITEM_TAG,\n buffer[message_set_item_start:pos]))\n\n return pos\n\n return DecodeItem", "response": "Returns a decoder for a MessageSet item."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_app_name():\n fn = getattr(sys.modules['__main__'], '__file__', None)\n if fn is None:\n return '__main__'\n return os.path.splitext(os.path.basename(fn))[0]", "response": "Get the app name via\n "} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ngive a Python function name return the function it refers to.", "response": "def get_function(function_name):\n \"\"\"\n Given a Python function name, return the function it refers to.\n \"\"\"\n module, basename = str(function_name).rsplit('.', 1)\n try:\n return getattr(__import__(module, fromlist=[basename]), 
basename)\n except (ImportError, AttributeError):\n raise FunctionNotFound(function_name)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef handle_add_fun(self, function_name):\n function_name = function_name.strip()\n try:\n function = get_function(function_name)\n except Exception, exc:\n self.wfile.write(js_error(exc) + NEWLINE)\n return\n # This tests to see if the function has been decorated with the view\n # server synchronisation decorator (``decorate_view``).\n if not getattr(function, 'view_decorated', None):\n self.functions[function_name] = (self.function_counter, function)\n # The decorator gets called with the logger function.\n else:\n self.functions[function_name] = (self.function_counter,\n function(self.log))\n self.function_counter += 1\n return True", "response": "Add a function to the function list in order."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns the mapping of a document according to the function list.", "response": "def handle_map_doc(self, document):\n \"\"\"Return the mapping of a document according to the function list.\"\"\"\n # This uses the stored set of functions, sorted by order of addition.\n for function in sorted(self.functions.values(), key=lambda x: x[0]):\n try:\n # It has to be run through ``list``, because it may be a\n # generator function.\n yield [list(function(document))]\n except Exception, exc:\n # Otherwise, return an empty list and log the event.\n yield []\n self.log(repr(exc))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef handle_reduce(self, reduce_function_names, mapped_docs):\n reduce_functions = []\n # This gets a large list of reduction functions, given their names.\n for reduce_function_name in reduce_function_names:\n try:\n reduce_function = get_function(reduce_function_name)\n if getattr(reduce_function, 
'view_decorated', None):\n reduce_function = reduce_function(self.log)\n reduce_functions.append(reduce_function)\n except Exception, exc:\n self.log(repr(exc))\n reduce_functions.append(lambda *args, **kwargs: None)\n # Transform lots of (key, value) pairs into one (keys, values) pair.\n keys, values = zip(\n (key, value) for ((key, doc_id), value) in mapped_docs)\n # This gets the list of results from the reduction functions.\n results = []\n for reduce_function in reduce_functions:\n try:\n results.append(reduce_function(keys, values, rereduce=False))\n except Exception, exc:\n self.log(repr(exc))\n results.append(None)\n return [True, results]", "response": "This function handles the reduce operation."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef handle_rereduce(self, reduce_function_names, values):\n # This gets a large list of reduction functions, given their names.\n reduce_functions = []\n for reduce_function_name in reduce_function_names:\n try:\n reduce_function = get_function(reduce_function_name)\n if getattr(reduce_function, 'view_decorated', None):\n reduce_function = reduce_function(self.log)\n reduce_functions.append(reduce_function)\n except Exception, exc:\n self.log(repr(exc))\n reduce_functions.append(lambda *args, **kwargs: None)\n # This gets the list of results from those functions.\n results = []\n for reduce_function in reduce_functions:\n try:\n results.append(reduce_function(None, values, rereduce=True))\n except Exception, exc:\n self.log(repr(exc))\n results.append(None)\n return [True, results]", "response": "Re - reduce a set of values with a list of rereduction functions."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef handle_validate(self, function_name, new_doc, old_doc, user_ctx):\n try:\n function = get_function(function_name)\n except Exception, exc:\n self.log(repr(exc))\n return False\n try:\n return 
function(new_doc, old_doc, user_ctx)\n except Exception, exc:\n self.log(repr(exc))\n return repr(exc)", "response": "Validate... this function is undocumented but still in CouchDB."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nlogs an event on the CouchDB server.", "response": "def log(self, string):\n \"\"\"Log an event on the CouchDB server.\"\"\"\n self.wfile.write(json.dumps({'log': string}) + NEWLINE)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngenerates a universally unique ID.", "response": "def guid(*args):\n \"\"\"\n Generates a universally unique ID.\n Any arguments only create more randomness.\n \"\"\"\n t = float(time.time() * 1000)\n r = float(random.random()*10000000000000)\n\n a = random.random() * 10000000000000\n data = str(t) + ' ' + str(r) + ' ' + str(a) + ' ' + str(args)\n data = hashlib.md5(data.encode()).hexdigest()[:10]\n\n return data"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_pages(self, limit=5, order_by=('position', '-modified_at')):\n return to_eg_objects(self.workspace.S(Page).filter(\n language=self.locale).order_by(*order_by)[:limit])", "response": "Return a list of pages in GitModel knows about."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning a list of featured pages in the GitModel.", "response": "def get_featured_pages(\n self, limit=5, order_by=('position', '-modified_at')):\n \"\"\"\n Return featured pages the GitModel knows about.\n :param str locale:\n The locale string, like `eng_UK`.\n :param int limit:\n The number of pages to return, defaults to 5.\n :param tuple order_by:\n The attributes to order on,\n defaults to ('position', '-modified_at').\n \"\"\"\n return self._get_featured_pages(self.locale, limit, order_by)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what 
it does\ndef register_app(self, name, redirect_uri, callback):\n '''\n register_app takes an application name and redirect_uri\n It generates client_id (client_key) and client_secret,\n then stores all of the above in the data_store,\n and returns a dictionary containing the client_id and client_secret.\n '''\n client_id = self._generate_token()\n client_secret = self._generate_token(64)\n yield Task(self.data_store.store, 'applications', client_id=client_id,\n client_secret=client_secret, name=name,\n redirect_uri=redirect_uri)\n callback({'client_id':client_id, 'client_secret':client_secret})", "response": "This is a generator function that creates a new application with the given name and redirect_uri and returns a dictionary containing the client_id and client_secret."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef request_access_token(self, client_id, key, code, grant_type,\n redirect_uri=None, method='direct_auth',\n callback=None):\n '''\n request_access_token validates the client_id and client_secret, using the\n provided method, then generates an access_token, stores it with the user_id\n from the nonce, and returns a dictionary containing an access_token and\n bearer token.\n ---\n from the spec, it looks like there are different types of\n tokens, but i don't understand the distinctions, so someone else can fix\n this if need be.\n regarding the method: it appears that it is intended for there to be\n multiple ways to verify the client_id. my assumption is that you use the\n secret as the salt and pass the hash of the client_id or something, and\n then compare hashes on the server end. 
currently the only implemented method\n is direct comparison of the client_ids and client_secrets.\n additional methods can be added to proauth2.auth_methods\n '''\n if grant_type != 'authorization_code':\n raise Proauth2Error('invalid_request',\n 'grant_type must be \"authorization_code\"')\n\n yield Task(self._auth, client_id, key, method)\n user_id = yield Task(self._validate_request_code, code, client_id)\n access_token = self._generate_token(64)\n yield Task(self.data_store.store, 'tokens', token=access_token,\n user_id=user_id, client_id=client_id)\n\n callback({'access_token':access_token, 'token_type':'bearer'})", "response": "Request an access token for the client_id and client_secret."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nauthenticate the passed token and returns the user_id associated with the user.", "response": "def authenticate_token(self, token, callback):\n '''\n authenticate_token checks the passed token and returns the user_id it is\n associated with. it is assumed that this method won't be directly exposed to\n the oauth client, but some kind of framework or wrapper. 
this allows the\n framework to have the user_id without doing additional DB calls.\n '''\n token_data = yield Task(self.data_store.fetch, 'tokens', token=token)\n if not token_data:\n raise Proauth2Error('access_denied',\n 'token does not exist or has been revoked')\n callback(token_data['user_id'])"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nrevoking the specified token from the data_store", "response": "def revoke_token(self, token, callback):\n '''\n revoke_token removes the access token from the data_store\n '''\n yield Task(self.data_store.remove, 'tokens', token=token)\n callback()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nauthenticate with the client_id and client_secret and calls the callback with the result.", "response": "def _auth(self, client_id, key, method, callback):\n '''\n _auth - internal method to ensure the client_id and client_secret passed with\n the nonce match\n '''\n available = auth_methods.keys()\n if method not in available:\n raise Proauth2Error('invalid_request',\n 'unsupported authentication method: %s'\n 'available methods: %s' % \\\n (method, '\\n'.join(available)))\n client = yield Task(self.data_store.fetch, 'applications',\n client_id=client_id)\n if not client: raise Proauth2Error('access_denied')\n if not auth_methods[method](key, client['client_secret']):\n raise Proauth2Error('access_denied')\n callback()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _generate_token(self, length=32):\n '''\n _generate_token - internal function for generating randomized alphanumeric\n strings of a given length\n '''\n return ''.join(choice(ascii_letters + digits) for x in range(length))", "response": "Internal function for generating randomized alphanumeric\n strings of a given length"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nmerges multiple ordered so 
that within-ordered order is preserved", "response": "def merge_ordered(ordereds: typing.Iterable[typing.Any]) -> typing.Iterable[typing.Any]:\n \"\"\"Merge multiple ordered so that within-ordered order is preserved\n \"\"\"\n seen_set = set()\n add_seen = seen_set.add\n return reversed(tuple(map(\n lambda obj: add_seen(obj) or obj,\n filterfalse(\n seen_set.__contains__,\n chain.from_iterable(map(reversed, reversed(ordereds))),\n ),\n )))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nhelping us validate the parameters for the current node", "response": "def validate_params(required, optional, params):\n \"\"\"\n Helps us validate the parameters for the request\n\n :param valid_options: a list of strings of valid options for the\n api request\n :param params: a dict, the key-value store which we really only care about\n the key which tells us what the user is using for the\n API request\n\n :returns: None or throws an exception if the validation fails\n \"\"\"\n\n missing_fields = [x for x in required if x not in params]\n if missing_fields:\n field_strings = \", \".join(missing_fields)\n raise Exception(\"Missing fields: %s\" % field_strings)\n\n disallowed_fields = [x for x in params if x not in optional and x not in required]\n if disallowed_fields:\n field_strings = \", \".join(disallowed_fields)\n raise Exception(\"Disallowed fields: %s\" % field_strings)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nregister your own mode and handle method here.", "response": "def main():\n \"\"\"Register your own mode and handle method here.\"\"\"\n plugin = Register()\n if plugin.args.option == 'filenumber':\n plugin.filenumber_handle()\n elif plugin.args.option == 'fileage':\n plugin.fileage_handle()\n elif plugin.args.option == 'sqlserverlocks':\n plugin.sqlserverlocks_handle()\n else:\n plugin.unknown(\"Unknown actions.\")"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the 
following Python 3 code\ndef filenumber_handle(self):\n self.file_list = []\n self.count = 0\n status = self.ok\n\n if self.args.recursion:\n self.__result, self.__file_list = self.__get_folder(self.args.path)\n else:\n self.__result, self.__file_list = self.__get_file(self.args.path)\n\n # Compare the value.\n if self.__result > self.args.critical:\n status = self.critical\n elif self.__result > self.args.warning:\n status = self.warning\n else:\n status = self.ok\n\n # Output\n self.shortoutput = \"Found {0} files in {1}.\".format(self.__result,\n self.args.path)\n self.logger.debug(\"file_list: {}\".format(self.__file_list))\n [self.longoutput.append(file_data.get('Name'))\n for file_data in self.__file_list]\n self.perfdata.append(\"{path}={result};{warn};{crit};0;\".format(\n crit=self.args.critical,\n warn=self.args.warning,\n result=self.__result,\n path=self.args.path))\n\n # Return status with message to Nagios.\n status(self.output(long_output_limit=None))\n self.logger.debug(\"Return status and exit to Nagios.\")", "response": "Get the number of files in the folder."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef __get_current_datetime(self):\n self.wql_time = \"SELECT LocalDateTime FROM Win32_OperatingSystem\"\n self.current_time = self.query(self.wql_time)\n # [{'LocalDateTime': '20160824161431.977000+480'}]'\n self.current_time_string = str(\n self.current_time[0].get('LocalDateTime').split('.')[0])\n # '20160824161431'\n self.current_time_format = datetime.datetime.strptime(\n self.current_time_string, '%Y%m%d%H%M%S')\n # param: datetime.datetime(2016, 8, 24, 16, 14, 31) -> type:\n # datetime.datetime\n return self.current_time_format", "response": "Get current datetime for every file."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef fileage_handle(self):\n self.file_list = []\n self.ok_file = []\n self.warn_file = []\n 
self.crit_file = []\n status = self.ok\n\n if self.args.recursion:\n self.__file_list = self.__get_folder(self.args.path)\n else:\n self.__file_list = self.__get_file(self.args.path)\n self.logger.debug(\"file_list: {}\".format(self.__file_list))\n # [{'LastModified': '20160824142017.737101+480', 'Name': 'd:\\\\test\\\\1.txt'},\n # {'LastModified': '20160824142021.392101+480', 'Name': 'd:\\\\test\\\\2.txt'},\n # {'LastModified': '20160824142106.460101+480', 'Name': 'd:\\\\test\\\\test1\\\\21.txt'}]\n\n for file_dict in self.__file_list:\n self.filename = file_dict.get('Name')\n if self.filename and self.filename != 'Name':\n self.logger.debug(\n \"===== start to compare {} =====\".format(\n self.filename))\n\n self.file_datetime_string = file_dict.get(\n 'LastModified').split('.')[0]\n self.file_datetime = datetime.datetime.strptime(\n self.file_datetime_string, '%Y%m%d%H%M%S')\n self.logger.debug(\n \"file_datetime: {}\".format(\n self.file_datetime))\n\n self.current_datetime = self.__get_current_datetime()\n self.logger.debug(\n \"current_datetime: {}\".format(\n self.current_datetime))\n\n self.__delta_datetime = self.current_datetime - self.file_datetime\n self.logger.debug(\n \"delta_datetime: {}\".format(\n self.__delta_datetime))\n self.logger.debug(\n \"warn_datetime: {}\".format(\n datetime.timedelta(\n minutes=self.args.warning)))\n self.logger.debug(\n \"crit_datetime: {}\".format(\n datetime.timedelta(\n minutes=self.args.critical)))\n if self.__delta_datetime > datetime.timedelta(\n minutes=self.args.critical):\n self.crit_file.append(self.filename)\n elif self.__delta_datetime > datetime.timedelta(minutes=self.args.warning):\n self.warn_file.append(self.filename)\n else:\n self.ok_file.append(self.filename)\n\n # Compare the value.\n if self.crit_file:\n status = self.critical\n elif self.warn_file:\n status = self.warning\n else:\n status = self.ok\n\n # Output\n self.shortoutput = \"Found {0} files out of date.\".format(\n len(self.crit_file))\n if 
self.crit_file:\n self.longoutput.append(\"===== Critical File out of date ====\")\n [self.longoutput.append(filename)\n for filename in self.crit_file if self.crit_file]\n if self.warn_file:\n self.longoutput.append(\"===== Warning File out of date ====\")\n [self.longoutput.append(filename)\n for filename in self.warn_file if self.warn_file]\n if self.ok_file:\n self.longoutput.append(\"===== OK File out of date ====\")\n [self.longoutput.append(filename)\n for filename in self.ok_file if self.ok_file]\n self.perfdata.append(\"{path}={result};{warn};{crit};0;\".format(\n crit=self.args.critical,\n warn=self.args.warning,\n result=len(self.crit_file),\n path=self.args.drive + self.args.path))\n\n # Return status with message to Nagios.\n status(self.output(long_output_limit=None))\n self.logger.debug(\"Return status and exit to Nagios.\")", "response": "Get the number of files in the folder."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef run(self):\n config = config_creator()\n debug = config.debug\n branch_thread_sleep = config.branch_thread_sleep\n while 1:\n url = self.branch_queue.get()\n if debug:\n print('branch thread-{} start'.format(url))\n branch_spider = self.branch_spider(url)\n sleep(random.randrange(*branch_thread_sleep))\n branch_spider.request_page()\n if debug:\n print('branch thread-{} end'.format(url))\n self.branch_queue.task_done()", "response": "run your main spider here\n as for branch spider result data"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreads version info from a file without importing it", "response": "def get_version(relpath):\n \"\"\"Read version info from a file without importing it\"\"\"\n from os.path import dirname, join\n\n if '__file__' not in globals():\n # Allow to use function interactively\n root = '.'\n else:\n root = dirname(__file__)\n\n # The code below reads text file with unknown encoding 
in\n # Python2/3 compatible way. Reading this text file\n # without specifying encoding will fail in Python 3 on some\n # systems (see http://goo.gl/5XmOH). Specifying encoding as\n # open() parameter is incompatible with Python 2\n\n # cp437 is the encoding without missing points, safe against:\n # UnicodeDecodeError: 'charmap' codec can't decode byte...\n\n for line in open(join(root, relpath), 'rb'):\n line = line.decode('cp437')\n if '__version__' in line:\n if '\"' in line:\n # __version__ = \"0.9\"\n return line.split('\"')[1]\n elif \"'\" in line:\n return line.split(\"'\")[1]"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef MakeDescriptor(desc_proto, package='', build_file_if_cpp=True,\n syntax=None):\n \"\"\"Make a protobuf Descriptor given a DescriptorProto protobuf.\n\n Handles nested descriptors. Note that this is limited to the scope of defining\n a message inside of another message. Composite fields can currently only be\n resolved if the message is defined in the same scope as the field.\n\n Args:\n desc_proto: The descriptor_pb2.DescriptorProto protobuf message.\n package: Optional package name for the new message Descriptor (string).\n build_file_if_cpp: Update the C++ descriptor pool if api matches.\n Set to False on recursion, so no duplicates are created.\n syntax: The syntax/semantics that should be used. Set to \"proto3\" to get\n proto3 field presence semantics.\n Returns:\n A Descriptor for protobuf messages.\n \"\"\"\n if api_implementation.Type() == 'cpp' and build_file_if_cpp:\n # The C++ implementation requires all descriptors to be backed by the same\n # definition in the C++ descriptor pool. 
To do this, we build a\n # FileDescriptorProto with the same definition as this descriptor and build\n # it into the pool.\n from typy.google.protobuf import descriptor_pb2\n file_descriptor_proto = descriptor_pb2.FileDescriptorProto()\n file_descriptor_proto.message_type.add().MergeFrom(desc_proto)\n\n # Generate a random name for this proto file to prevent conflicts with any\n # imported ones. We need to specify a file name so the descriptor pool\n # accepts our FileDescriptorProto, but it is not important what that file\n # name is actually set to.\n proto_name = str(uuid.uuid4())\n\n if package:\n file_descriptor_proto.name = os.path.join(package.replace('.', '/'),\n proto_name + '.proto')\n file_descriptor_proto.package = package\n else:\n file_descriptor_proto.name = proto_name + '.proto'\n\n _message.default_pool.Add(file_descriptor_proto)\n result = _message.default_pool.FindFileByName(file_descriptor_proto.name)\n\n if _USE_C_DESCRIPTORS:\n return result.message_types_by_name[desc_proto.name]\n\n full_message_name = [desc_proto.name]\n if package: full_message_name.insert(0, package)\n\n # Create Descriptors for enum types\n enum_types = {}\n for enum_proto in desc_proto.enum_type:\n full_name = '.'.join(full_message_name + [enum_proto.name])\n enum_desc = EnumDescriptor(\n enum_proto.name, full_name, None, [\n EnumValueDescriptor(enum_val.name, ii, enum_val.number)\n for ii, enum_val in enumerate(enum_proto.value)])\n enum_types[full_name] = enum_desc\n\n # Create Descriptors for nested types\n nested_types = {}\n for nested_proto in desc_proto.nested_type:\n full_name = '.'.join(full_message_name + [nested_proto.name])\n # Nested types are just those defined inside of the message, not all types\n # used by fields in the message, so no loops are possible here.\n nested_desc = MakeDescriptor(nested_proto,\n package='.'.join(full_message_name),\n build_file_if_cpp=False,\n syntax=syntax)\n nested_types[full_name] = nested_desc\n\n fields = []\n for 
field_proto in desc_proto.field:\n full_name = '.'.join(full_message_name + [field_proto.name])\n enum_desc = None\n nested_desc = None\n if field_proto.HasField('type_name'):\n type_name = field_proto.type_name\n full_type_name = '.'.join(full_message_name +\n [type_name[type_name.rfind('.')+1:]])\n if full_type_name in nested_types:\n nested_desc = nested_types[full_type_name]\n elif full_type_name in enum_types:\n enum_desc = enum_types[full_type_name]\n # Else type_name references a non-local type, which isn't implemented\n field = FieldDescriptor(\n field_proto.name, full_name, field_proto.number - 1,\n field_proto.number, field_proto.type,\n FieldDescriptor.ProtoTypeToCppProtoType(field_proto.type),\n field_proto.label, None, nested_desc, enum_desc, None, False, None,\n options=field_proto.options, has_default_value=False)\n fields.append(field)\n\n desc_name = '.'.join(full_message_name)\n return Descriptor(desc_proto.name, desc_name, None, None, fields,\n list(nested_types.values()), list(enum_types.values()), [],\n options=desc_proto.options)", "response": "Makes a protobuf Descriptor given a DescriptorProto."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef GetTopLevelContainingType(self):\n desc = self\n while desc.containing_type is not None:\n desc = desc.containing_type\n return desc", "response": "Returns the root if this is a nested type or itself."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nsearches for the specified method and returns its descriptor.", "response": "def FindMethodByName(self, name):\n \"\"\"Searches for the specified method, and returns its descriptor.\"\"\"\n for method in self.methods:\n if name == method.name:\n return method\n return None"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef main():\n plugin = Register()\n if plugin.args.option == 
'sqlserverlocks':\n plugin.sqlserverlocks_handle()\n else:\n plugin.unknown(\"Unknown actions.\")", "response": "Register your own mode and handle method here."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef MessageToJson(message, including_default_value_fields=False):\n js = _MessageToJsonObject(message, including_default_value_fields)\n return json.dumps(js, indent=2)", "response": "Converts a protobuf message to JSON format."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nconverting a Message to a JSON object according to Proto3 JSON Specification.", "response": "def _MessageToJsonObject(message, including_default_value_fields):\n \"\"\"Converts message to an object according to Proto3 JSON Specification.\"\"\"\n message_descriptor = message.DESCRIPTOR\n full_name = message_descriptor.full_name\n if _IsWrapperMessage(message_descriptor):\n return _WrapperMessageToJsonObject(message)\n if full_name in _WKTJSONMETHODS:\n return _WKTJSONMETHODS[full_name][0](\n message, including_default_value_fields)\n js = {}\n return _RegularMessageToJsonObject(\n message, js, including_default_value_fields)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _StructMessageToJsonObject(message, unused_including_default=False):\n fields = message.fields\n ret = {}\n for key in fields:\n ret[key] = _ValueMessageToJsonObject(fields[key])\n return ret", "response": "Converts a Struct message according to Proto3 JSON Specification."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef Parse(text, message):\n if not isinstance(text, six.text_type): text = text.decode('utf-8')\n try:\n if sys.version_info < (2, 7):\n # object_pair_hook is not supported before python2.7\n js = json.loads(text)\n else:\n js = json.loads(text, object_pairs_hook=_DuplicateChecker)\n except 
ValueError as e:\n raise ParseError('Failed to load JSON: {0}.'.format(str(e)))\n _ConvertMessage(js, message)\n return message", "response": "Parses a JSON representation of a protocol buffer message into a message."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _ConvertFieldValuePair(js, message):\n names = []\n message_descriptor = message.DESCRIPTOR\n for name in js:\n try:\n field = message_descriptor.fields_by_camelcase_name.get(name, None)\n if not field:\n raise ParseError(\n 'Message type \"{0}\" has no field named \"{1}\".'.format(\n message_descriptor.full_name, name))\n if name in names:\n raise ParseError(\n 'Message type \"{0}\" should not have multiple \"{1}\" fields.'.format(\n message.DESCRIPTOR.full_name, name))\n names.append(name)\n # Check no other oneof field is parsed.\n if field.containing_oneof is not None:\n oneof_name = field.containing_oneof.name\n if oneof_name in names:\n raise ParseError('Message type \"{0}\" should not have multiple \"{1}\" '\n 'oneof fields.'.format(\n message.DESCRIPTOR.full_name, oneof_name))\n names.append(oneof_name)\n\n value = js[name]\n if value is None:\n message.ClearField(field.name)\n continue\n\n # Parse field value.\n if _IsMapEntry(field):\n message.ClearField(field.name)\n _ConvertMapFieldValue(value, message, field)\n elif field.label == descriptor.FieldDescriptor.LABEL_REPEATED:\n message.ClearField(field.name)\n if not isinstance(value, list):\n raise ParseError('repeated field {0} must be in [] which is '\n '{1}.'.format(name, value))\n if field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_MESSAGE:\n # Repeated message field.\n for item in value:\n sub_message = getattr(message, field.name).add()\n # None is a null_value in Value.\n if (item is None and\n sub_message.DESCRIPTOR.full_name != 'google.protobuf.Value'):\n raise ParseError('null is not allowed to be used as an element'\n ' in a repeated field.')\n _ConvertMessage(item, sub_message)\n 
else:\n # Repeated scalar field.\n for item in value:\n if item is None:\n raise ParseError('null is not allowed to be used as an element'\n ' in a repeated field.')\n getattr(message, field.name).append(\n _ConvertScalarFieldValue(item, field))\n elif field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_MESSAGE:\n sub_message = getattr(message, field.name)\n _ConvertMessage(value, sub_message)\n else:\n setattr(message, field.name, _ConvertScalarFieldValue(value, field))\n except ParseError as e:\n if field and field.containing_oneof is None:\n raise ParseError('Failed to parse {0} field: {1}'.format(name, e))\n else:\n raise ParseError(str(e))\n except ValueError as e:\n raise ParseError('Failed to parse {0} field: {1}.'.format(name, e))\n except TypeError as e:\n raise ParseError('Failed to parse {0} field: {1}.'.format(name, e))", "response": "Convert field value pairs into a regular protocol message."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _ConvertMessage(value, message):\n message_descriptor = message.DESCRIPTOR\n full_name = message_descriptor.full_name\n if _IsWrapperMessage(message_descriptor):\n _ConvertWrapperMessage(value, message)\n elif full_name in _WKTJSONMETHODS:\n _WKTJSONMETHODS[full_name][1](value, message)\n else:\n _ConvertFieldValuePair(value, message)", "response": "Convert a JSON object into a message."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _ConvertValueMessage(value, message):\n if isinstance(value, dict):\n _ConvertStructMessage(value, message.struct_value)\n elif isinstance(value, list):\n _ConvertListValueMessage(value, message.list_value)\n elif value is None:\n message.null_value = 0\n elif isinstance(value, bool):\n message.bool_value = value\n elif isinstance(value, six.string_types):\n message.string_value = value\n elif isinstance(value, _INT_OR_FLOAT):\n message.number_value = value\n else:\n raise 
ParseError('Unexpected type for Value message.')", "response": "Convert a JSON representation into a Value message."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nconverting a JSON representation into ListValue message.", "response": "def _ConvertListValueMessage(value, message):\n \"\"\"Convert a JSON representation into ListValue message.\"\"\"\n if not isinstance(value, list):\n raise ParseError(\n 'ListValue must be in [] which is {0}.'.format(value))\n message.ClearField('values')\n for item in value:\n _ConvertValueMessage(item, message.values.add())"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nconverts a JSON representation into Struct message.", "response": "def _ConvertStructMessage(value, message):\n \"\"\"Convert a JSON representation into Struct message.\"\"\"\n if not isinstance(value, dict):\n raise ParseError(\n 'Struct must be in a dict which is {0}.'.format(value))\n for key in value:\n _ConvertValueMessage(value[key], message.fields[key])\n return"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef update_config(new_config):\n flask_app.base_config.update(new_config)\n\n # Check for changed working directory.\n if new_config.has_key('working_directory'):\n wd = os.path.abspath(new_config['working_directory'])\n if nbmanager.notebook_dir != wd:\n if not os.path.exists(wd):\n raise IOError('Path not found: %s' % wd)\n nbmanager.notebook_dir = wd", "response": "Update config options with the provided dictionary of options."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nresets config options to the provided dictionary of options and then update the config with the provided dictionary of options.", "response": "def set_config(new_config={}):\n \"\"\" Reset config options to defaults, and then update (optionally)\n with the provided dictionary of options. 
\"\"\"\n # The default base configuration.\n flask_app.base_config = dict(working_directory='.',\n template='collapse-input',\n debug=False,\n port=None)\n update_config(new_config)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nexecute the command with the given arguments as an input.", "response": "def execute(self, correlation_id, args):\r\n \"\"\"\r\n Executes the command given specific arguments as an input.\r\n \r\n Args:\r\n correlation_id: a unique correlation/transaction id\r\n args: command arguments\r\n \r\n Returns: an execution result.\r\n \r\n Raises:\r\n ApplicationException: when execution fails for whatever reason.\r\n \"\"\"\r\n # Validate arguments\r\n if self._schema != None:\r\n self.validate_and_throw_exception(correlation_id, args)\r\n \r\n # Call the function\r\n try:\r\n return self._function(correlation_id, args)\r\n # Intercept unhandled errors\r\n except Exception as ex:\r\n raise InvocationException(\r\n correlation_id,\r\n \"EXEC_FAILED\",\r\n \"Execution \" + self._name + \" failed: \" + str(ex)\r\n ).with_details(\"command\", self._name).wrap(ex)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef cache2errors(function, cache, disp=0, ftol=0.05):\n\t\n\tvals = numpy.array(sorted(cache, key=lambda x: x[0]))\n\tif disp > 0: print ' --- cache2errors --- ', vals\n\tvi = vals[:,1].min()\n\tdef renormedfunc(x):\n\t\ty = function(x)\n\t\tcache.append([x, y])\n\t\tif disp > 1: print ' renormed:', x, y, y - (vi + 1)\n\t\treturn y - (vi + 1)\n\tvals[:,1] -= vi + 1\n\tlowmask = vals[:,1] < 0\n\thighmask = vals[:,1] > 0\n\tindices = numpy.arange(len(vals))\n\tb, vb = vals[indices[lowmask][ 0],:]\n\tc, vc = vals[indices[lowmask][-1],:]\n\tif any(vals[:,0][highmask] < b):\n\t\tif disp > 0: print 'already have bracket'\n\t\ta, va = vals[indices[highmask][vals[:,0][highmask] < b][-1],:]\n\telse:\n\t\ta = b\n\t\tva = vb\n\t\twhile b > -50:\n\t\t\ta = b - 
max(vals[-1,0] - vals[0,0], 1)\n\t\t\tva = renormedfunc(a)\n\t\t\tif disp > 0: print 'going further left: %.1f [%.1f] --> %.1f [%.1f]' % (b, vb, a, va)\n\t\t\tif va > 0:\n\t\t\t\tif disp > 0: print 'found outer part'\n\t\t\t\tbreak\n\t\t\telse:\n\t\t\t\t# need to go further\n\t\t\t\tb = a\n\t\t\t\tvb = va\n\t\n\tif disp > 0: print 'left bracket', a, b, va, vb\n\tif va > 0 and vb < 0:\n\t\tleftroot = scipy.optimize.brentq(renormedfunc, a, b, rtol=ftol)\n\telse:\n\t\tif disp > 0: print 'WARNING: border problem found.'\n\t\tleftroot = a\n\tif disp > 0: print 'left root', leftroot\n\t\n\tif any(vals[:,0][highmask] > c):\n\t\tif disp > 0: print 'already have bracket'\n\t\td, vd = vals[indices[highmask][vals[:,0][highmask] > c][ 0],:]\n\telse:\n\t\td = c\n\t\tvd = vc\n\t\twhile c < 50:\n\t\t\td = c + max(vals[-1,0] - vals[0,0], 1)\n\t\t\tvd = renormedfunc(d)\n\t\t\tif disp > 0: print 'going further right: %.1f [%.1f] --> %.1f [%.1f]' % (c, vc, d, vd)\n\t\t\tif vd > 0:\n\t\t\t\tif disp > 0: print 'found outer part'\n\t\t\t\tbreak\n\t\t\telse:\n\t\t\t\t# need to go further\n\t\t\t\tc = d\n\t\t\t\tvc = vd\n\tif disp > 0: print 'right bracket', c, d, vc, vd\n\tif vd > 0 and vc < 0:\n\t\trightroot = scipy.optimize.brentq(renormedfunc, c, d, rtol=ftol)\n\telse:\n\t\tif disp > 0: print 'WARNING: border problem found.'\n\t\trightroot = d\n\tif disp > 0: print 'right root', rightroot\n\t\n\tassert leftroot < rightroot\n\n\tif disp > 2:\n\t\tfullvals = numpy.array(sorted(cache, key=lambda x: x[0]))\n\t\tfullvals[:,1] -= vi + 1\n\t\tplt.figure()\n\t\tplt.plot(fullvals[:,0], fullvals[:,1], 's')\n\t\tplt.plot(vals[:,0], vals[:,1], 'o')\n\t\tplt.xlim(a, d)\n\t\tplt.ylim(min(va, vb, vc, vd), max(va, vb, vc, vd))\n\t\tymin, ymax = plt.ylim()\n\t\tplt.vlines([leftroot, rightroot], ymin, ymax, linestyles='dotted')\n\t\tplt.savefig('cache_brent.pdf')\n\n\treturn leftroot, rightroot", "response": "This function will attempt to identify 1 - sigma errors in a list of values."} {"SOURCE": 
"codesearchnet", "instruction": "How would you implement a function in Python 3 that\nends measuring time interval and updates counter.", "response": "def end_timing(self):\r\n \"\"\"\r\n Completes measuring time interval and updates counter.\r\n \"\"\"\r\n\r\n if self._callback != None:\r\n elapsed = time.clock() * 1000 - self._start\r\n self._callback.end_timing(self._counter, elapsed)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nconverting a Duration to a string format.", "response": "def ToJsonString(self):\n \"\"\"Converts Duration to string format.\n\n Returns:\n A string converted from self. The string format will contain\n 3, 6, or 9 fractional digits depending on the precision required to\n represent the exact Duration value. For example: \"1s\", \"1.010s\",\n \"1.000000100s\", \"-3.100s\"\n \"\"\"\n if self.seconds < 0 or self.nanos < 0:\n result = '-'\n seconds = - self.seconds + int((0 - self.nanos) // 1e9)\n nanos = (0 - self.nanos) % 1e9\n else:\n result = ''\n seconds = self.seconds + int(self.nanos // 1e9)\n nanos = self.nanos % 1e9\n result += '%d' % seconds\n if (nanos % 1e9) == 0:\n # If there are 0 fractional digits, the fractional\n # point '.' should be omitted when serializing.\n return result + 's'\n if (nanos % 1e6) == 0:\n # Serialize 3 fractional digits.\n return result + '.%03ds' % (nanos / 1e6)\n if (nanos % 1e3) == 0:\n # Serialize 6 fractional digits.\n return result + '.%06ds' % (nanos / 1e3)\n # Serialize 9 fractional digits.\n return result + '.%09ds' % nanos"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nconvert a string to a Duration object.", "response": "def FromJsonString(self, value):\n \"\"\"Converts a string to Duration.\n\n Args:\n value: A string to be converted. The string must end with 's'. Any\n fractional digits (or none) are accepted as long as they fit into\n precision. 
For example: \"1s\", \"1.01s\", \"1.0000001s\", \"-3.100s\"\n\n Raises:\n ParseError: On parsing problems.\n \"\"\"\n if len(value) < 1 or value[-1] != 's':\n raise ParseError(\n 'Duration must end with letter \"s\": {0}.'.format(value))\n try:\n pos = value.find('.')\n if pos == -1:\n self.seconds = int(value[:-1])\n self.nanos = 0\n else:\n self.seconds = int(value[:pos])\n if value[0] == '-':\n self.nanos = int(round(float('-0{0}'.format(value[pos: -1])) *1e9))\n else:\n self.nanos = int(round(float('0{0}'.format(value[pos: -1])) *1e9))\n except ValueError:\n raise ParseError(\n 'Couldn\\'t parse duration: {0}.'.format(value))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef FromJsonString(self, value):\n self.Clear()\n for path in value.split(','):\n self.paths.append(path)", "response": "Converts string to FieldMask according to proto3 JSON spec."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns a CouchDB document given its ID, revision and database name.", "response": "def get_doc(doc_id, db_name, server_url='http://127.0.0.1:5984/', rev=None):\n \"\"\"Return a CouchDB document, given its ID, revision and database name.\"\"\"\n db = get_server(server_url)[db_name]\n if rev:\n headers, response = db.resource.get(doc_id, rev=rev)\n return couchdb.client.Document(response)\n return db[doc_id]"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning an existing CouchDB database instance.", "response": "def get_or_create_db(db_name, server_url='http://127.0.0.1:5984/'):\n \"\"\"Return an (optionally existing) CouchDB database instance.\"\"\"\n server = get_server(server_url)\n if db_name in server:\n return server[db_name]\n return server.create(db_name)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ngive reST format README for pypi.", "response": "def read(readme):\n \"\"\"Give reST 
format README for pypi.\"\"\"\n extend = os.path.splitext(readme)[1]\n if (extend == '.rst'):\n import codecs\n return codecs.open(readme, 'r', 'utf-8').read()\n elif (extend == '.md'):\n import pypandoc\n return pypandoc.convert(readme, 'rst')"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef main():\n plugin = Register()\n if plugin.args.option == 'sql':\n plugin.sql_handle()\n elif plugin.args.option == 'database-used':\n plugin.database_used_handle()\n elif plugin.args.option == 'databaselog-used':\n plugin.database_log_used_handle()\n else:\n plugin.unknown(\"Unknown actions.\")", "response": "Register your own mode and handle method here."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef parse(self, args):\n if args is None:\n args = self._default_args\n if isinstance(args, six.string_types):\n args = shlex.split(args)\n return args", "response": "parse the arguments for a set of keys"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nsend an HTTP request to the REST API and receives the requested data.", "response": "def _request(self, method, *relative_path_parts, **kwargs):\n \"\"\"Sends an HTTP request to the REST API and receives the requested data.\n\n :param str method: HTTP method name\n :param relative_path_parts: the relative paths for the request URI\n :param kwargs: argument keywords\n :returns: requested data\n :raises APIError: for non-2xx responses\n\n \"\"\"\n uri = self._create_api_uri(*relative_path_parts)\n if method == 'get':\n response = get(uri, auth=self.auth, params=kwargs.get('params', None))\n elif method == 'post':\n response = post(uri, auth=self.auth, json=kwargs.get('data', None))\n else:\n response = delete(uri, auth=self.auth, json=kwargs.get('data', None))\n return self._handle_response(response).json()"} {"SOURCE": "codesearchnet", "instruction": "Can you 
generate the documentation for the following Python 3 function\ndef _place_order(self,\n side,\n product_id='BTC-USD',\n client_oid=None,\n type=None,\n stp=None,\n price=None,\n size=None,\n funds=None,\n time_in_force=None,\n cancel_after=None,\n post_only=None):\n \"\"\"``_\"\"\"\n data = {\n 'side':side,\n 'product_id':product_id,\n 'client_oid':client_oid,\n 'type':type,\n 'stp':stp,\n 'price':price,\n 'size':size,\n 'funds':funds,\n 'time_in_force':time_in_force,\n 'cancel_after':cancel_after,\n 'post_only':post_only\n }\n return self._post('orders', data=data)", "response": "Place order in order list."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef place_market_order(self,\n side,\n product_id='BTC-USD',\n size=None,\n funds=None,\n client_oid=None,\n stp=None):\n \"\"\"``_\"\"\"\n return self._place_order(type='market',\n side=size,\n product_id=product_id,\n size=size,\n funds=funds,\n client_oid=client_oid,\n stp=stp)", "response": "Place a market order."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _deposit_withdraw(self, type, amount, coinbase_account_id):\n data = {\n 'type':type,\n 'amount':amount,\n 'coinbase_account_id':coinbase_account_id\n }\n return self._post('transfers', data=data)", "response": "Deposit withdraw a payment."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncreates a new report.", "response": "def _new_report(self,\n type,\n start_date,\n end_date,\n product_id='BTC-USD',\n account_id=None,\n format=None,\n email=None):\n \"\"\"``_\"\"\"\n data = {\n 'type':type,\n 'start_date':self._format_iso_time(start_date),\n 'end_date':self._format_iso_time(end_date),\n 'product_id':product_id,\n 'account_id':account_id,\n 'format':format,\n 'email':email\n }\n return self._post('reports', data=data)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncreating a 
new fills report.", "response": "def new_fills_report(self,\n start_date,\n end_date,\n account_id=None,\n product_id='BTC-USD',\n format=None,\n email=None):\n \"\"\"``_\"\"\"\n return self._new_report(start_date,\n 'fills',\n end_date,\n account_id,\n product_id,\n format,\n email)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _request(self, method, *relative_path_parts, **kwargs):\n uri = self._create_api_uri(*relative_path_parts)\n if method == 'get':\n response = get(uri, auth=self.auth, params=kwargs.get('params', None))\n elif method == 'post':\n response = post(uri, auth=self.auth, json=kwargs.get('data', None))\n else:\n response = delete(uri, auth=self.auth, json=kwargs.get('data', None))\n self.is_initial = False\n self.before_cursor = response.headers.get('cb-before', None)\n self.after_cursor = response.headers.get('cb-after', None)\n return self._handle_response(response).json()", "response": "Sends an HTTP request to the REST API and receives the requested data."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef fetch(self, collection, **kwargs):\n '''\n return one record from the collection whose parameters match kwargs\n ---\n kwargs should be a dictionary whose keys match column names (in\n traditional SQL / fields in NoSQL) and whose values are the values of\n those fields.\n e.g. 
kwargs={name='my application name',client_id=12345}\n '''\n callback = kwargs.pop('callback')\n data = yield Op(self.db[collection].find_one, kwargs)\n callback(data)", "response": "fetch one record from the collection"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nremoving records from collection with optional parameters", "response": "def remove(self, collection, **kwargs):\n '''\n remove records from collection whose parameters match kwargs\n '''\n callback = kwargs.pop('callback')\n yield Op(self.db[collection].remove, kwargs)\n callback()"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nstores the current object in the mongodb collection.", "response": "def store(self, collection, **kwargs):\n '''\n validate the passed values in kwargs based on the collection,\n store them in the mongodb collection\n '''\n callback = kwargs.pop('callback')\n key = validate(collection, **kwargs)\n data = yield Task(self.fetch, collection, **{key: kwargs[key]})\n if data is not None:\n raise Proauth2Error('duplicate_key')\n yield Op(self.db[collection].insert, kwargs)\n callback()"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef generate_api(version):\n def get_partial_api(key, token=None):\n return TrelloAPI(ENDPOINTS[version], version, key, token=token)\n\n get_partial_api.__doc__ = \\\n \"\"\"Interfaz REST con Trello. 
Versi\u00f3n {}\"\"\".format(version)\n\n return get_partial_api", "response": "Generates a factory function to instantiate the API with the given version."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nresolves the URL to this point.", "response": "def _url(self):\n \"\"\"\n Resolve the URL to this point.\n\n >>> trello = TrelloAPIV1('APIKEY')\n >>> trello.batch._url\n '1/batch'\n >>> trello.boards(board_id='BOARD_ID')._url\n '1/boards/BOARD_ID'\n >>> trello.boards(board_id='BOARD_ID')(field='FIELD')._url\n '1/boards/BOARD_ID/FIELD'\n >>> trello.boards(board_id='BOARD_ID').cards(filter='FILTER')._url\n '1/boards/BOARD_ID/cards/FILTER'\n\n \"\"\"\n if self._api_arg:\n mypart = str(self._api_arg)\n else:\n mypart = self._name\n\n if self._parent:\n return '/'.join(filter(None, [self._parent._url, mypart]))\n else:\n return mypart"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _api_call(self, method_name, *args, **kwargs):\n params = kwargs.setdefault('params', {})\n params.update({'key': self._apikey})\n if self._token is not None:\n params.update({'token': self._token})\n\n http_method = getattr(requests, method_name)\n return http_method(TRELLO_URL + self._url, *args, **kwargs)", "response": "Makes the HTTP request."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nparses a text representation of a protocol buffer message into a protocol buffer message.", "response": "def Merge(text, message, allow_unknown_extension=False,\n allow_field_number=False):\n \"\"\"Parses a text representation of a protocol message into a message.\n\n Like Parse(), but allows repeated values for a non-repeated field, and uses\n the last one.\n\n Args:\n text: Message text representation.\n message: A protocol buffer message to merge into.\n allow_unknown_extension: if True, skip over missing extensions and keep\n 
parsing\n    allow_field_number: if True, both field number and field name are allowed.\n\n  Returns:\n    The same message passed as argument.\n\n  Raises:\n    ParseError: On text parsing problems.\n  \"\"\"\n  return MergeLines(text.split('\\n'), message, allow_unknown_extension,\n                    allow_field_number)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nparsing a text representation of a protocol buffer message into a protocol buffer message.", "response": "def ParseLines(lines, message, allow_unknown_extension=False,\n               allow_field_number=False):\n  \"\"\"Parses a text representation of a protocol message into a message.\n\n  Args:\n    lines: An iterable of lines of a message's text representation.\n    message: A protocol buffer message to merge into.\n    allow_unknown_extension: if True, skip over missing extensions and keep\n      parsing\n    allow_field_number: if True, both field number and field name are allowed.\n\n  Returns:\n    The same message passed as argument.\n\n  Raises:\n    ParseError: On text parsing problems.\n  \"\"\"\n  parser = _Parser(allow_unknown_extension, allow_field_number)\n  return parser.ParseLines(lines, message)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nskips over a field value.", "response": "def _SkipFieldValue(tokenizer):\n  \"\"\"Skips over a field value.\n\n  Args:\n    tokenizer: A tokenizer to parse the field name and values.\n\n  Raises:\n    ParseError: In case an invalid field value is found.\n  \"\"\"\n  # String/bytes tokens can come in multiple adjacent string literals.\n  # If we can consume one, consume as many as we can.\n  if tokenizer.TryConsumeByteString():\n    while tokenizer.TryConsumeByteString():\n      pass\n    return\n\n  if (not tokenizer.TryConsumeIdentifier() and\n      not tokenizer.TryConsumeInt64() and\n      not tokenizer.TryConsumeUint64() and\n      not tokenizer.TryConsumeFloat()):\n    raise ParseError('Invalid field value: ' + tokenizer.token)"} {"SOURCE": "codesearchnet", "instruction": "Can you 
create a Python 3 function that\nparses an integer. Args: text: The text to parse. is_signed: True if a signed integer must be parsed. is_long: True if a long integer must be parsed. Returns: The integer value. Raises: ValueError: Thrown iff the text is not a valid integer.", "response": "def ParseInteger(text, is_signed=False, is_long=False):\n  \"\"\"Parses an integer.\n\n  Args:\n    text: The text to parse.\n    is_signed: True if a signed integer must be parsed.\n    is_long: True if a long integer must be parsed.\n\n  Returns:\n    The integer value.\n\n  Raises:\n    ValueError: Thrown iff the text is not a valid integer.\n  \"\"\"\n  # Do the actual parsing. Exception handling is propagated to caller.\n  try:\n    # We force 32-bit values to int and 64-bit values to long to make\n    # alternate implementations where the distinction is more significant\n    # (e.g. the C++ implementation) simpler.\n    if is_long:\n      result = long(text, 0)\n    else:\n      result = int(text, 0)\n  except ValueError:\n    raise ValueError('Couldn\\'t parse integer: %s' % text)\n\n  # Check if the integer is sane. Exceptions handled by callers.\n  checker = _INTEGER_CHECKERS[2 * int(is_long) + int(is_signed)]\n  checker.CheckValue(result)\n  return result"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef PrintMessage(self, message):\n    fields = message.ListFields()\n    if self.use_index_order:\n      fields.sort(key=lambda x: x[0].index)\n    for field, value in fields:\n      if _IsMapEntry(field):\n        for key in sorted(value):\n          # This is slow for maps with submessage entries because it copies the\n          # entire tree. 
Unfortunately this would take significant refactoring\n # of this file to work around.\n #\n # TODO(haberman): refactor and optimize if this becomes an issue.\n entry_submsg = field.message_type._concrete_class(\n key=key, value=value[key])\n self.PrintField(field, entry_submsg)\n elif field.label == descriptor.FieldDescriptor.LABEL_REPEATED:\n for element in value:\n self.PrintField(field, element)\n else:\n self.PrintField(field, value)", "response": "Convert protobuf message to text format."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nprint a single value for a single field.", "response": "def PrintFieldValue(self, field, value):\n \"\"\"Print a single field value (not including name).\n\n For repeated fields, the value should be a single element.\n\n Args:\n field: The descriptor of the field to be printed.\n value: The value of the field.\n \"\"\"\n out = self.out\n if self.pointy_brackets:\n openb = '<'\n closeb = '>'\n else:\n openb = '{'\n closeb = '}'\n\n if field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_MESSAGE:\n if self.as_one_line:\n out.write(' %s ' % openb)\n self.PrintMessage(value)\n out.write(closeb)\n else:\n out.write(' %s\\n' % openb)\n self.indent += 2\n self.PrintMessage(value)\n self.indent -= 2\n out.write(' ' * self.indent + closeb)\n elif field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_ENUM:\n enum_value = field.enum_type.values_by_number.get(value, None)\n if enum_value is not None:\n out.write(enum_value.name)\n else:\n out.write(str(value))\n elif field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_STRING:\n out.write('\\\"')\n if isinstance(value, six.text_type):\n out_value = value.encode('utf-8')\n else:\n out_value = value\n if field.type == descriptor.FieldDescriptor.TYPE_BYTES:\n # We need to escape non-UTF8 chars in TYPE_BYTES field.\n out_as_utf8 = False\n else:\n out_as_utf8 = self.as_utf8\n out.write(text_encoding.CEscape(out_value, out_as_utf8))\n out.write('\\\"')\n elif 
field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_BOOL:\n      if value:\n        out.write('true')\n      else:\n        out.write('false')\n    elif field.cpp_type in _FLOAT_TYPES and self.float_format is not None:\n      out.write('{1:{0}}'.format(self.float_format, value))\n    else:\n      out.write(str(value))"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nconverts a text representation of a protocol buffer message into a protocol buffer message.", "response": "def _ParseOrMerge(self, lines, message):\n    \"\"\"Converts a text representation of a protocol message into a message.\n\n    Args:\n      lines: Lines of a message's text representation.\n      message: A protocol buffer message to merge into.\n\n    Raises:\n      ParseError: On text parsing problems.\n    \"\"\"\n    tokenizer = _Tokenizer(lines)\n    while not tokenizer.AtEnd():\n      self._MergeField(tokenizer, message)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _MergeMessageField(self, tokenizer, message, field):\n    is_map_entry = _IsMapEntry(field)\n\n    if tokenizer.TryConsume('<'):\n      end_token = '>'\n    else:\n      tokenizer.Consume('{')\n      end_token = '}'\n\n    if field.label == descriptor.FieldDescriptor.LABEL_REPEATED:\n      if field.is_extension:\n        sub_message = message.Extensions[field].add()\n      elif is_map_entry:\n        # pylint: disable=protected-access\n        sub_message = field.message_type._concrete_class()\n      else:\n        sub_message = getattr(message, field.name).add()\n    else:\n      if field.is_extension:\n        sub_message = message.Extensions[field]\n      else:\n        sub_message = getattr(message, field.name)\n      sub_message.SetInParent()\n\n    while not tokenizer.TryConsume(end_token):\n      if tokenizer.AtEnd():\n        raise tokenizer.ParseErrorPreviousToken('Expected \"%s\".' 
% (end_token,))\n self._MergeField(tokenizer, sub_message)\n\n if is_map_entry:\n value_cpptype = field.message_type.fields_by_name['value'].cpp_type\n if value_cpptype == descriptor.FieldDescriptor.CPPTYPE_MESSAGE:\n value = getattr(message, field.name)[sub_message.key]\n value.MergeFrom(sub_message.value)\n else:\n getattr(message, field.name)[sub_message.key] = sub_message.value", "response": "Merges a single scalar field into a message."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef ConsumeIdentifier(self):\n result = self.token\n if not self._IDENTIFIER.match(result):\n raise self._ParseError('Expected identifier.')\n self.NextToken()\n return result", "response": "Consumes protocol message field identifier."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef ConsumeInt32(self):\n try:\n result = ParseInteger(self.token, is_signed=True, is_long=False)\n except ValueError as e:\n raise self._ParseError(str(e))\n self.NextToken()\n return result", "response": "Consumes a signed 32bit integer number."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef ConsumeFloat(self):\n try:\n result = ParseFloat(self.token)\n except ValueError as e:\n raise self._ParseError(str(e))\n self.NextToken()\n return result", "response": "Consumes a floating point number."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef ConsumeBool(self):\n try:\n result = ParseBool(self.token)\n except ValueError as e:\n raise self._ParseError(str(e))\n self.NextToken()\n return result", "response": "Consumes a boolean value."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nconsuming one token of a string literal.", "response": "def _ConsumeSingleByteString(self):\n \"\"\"Consume one token of a string literal.\n\n String literals (whether bytes 
or text) can come in multiple adjacent\n tokens which are automatically concatenated, like in C or Python. This\n method only consumes one token.\n\n Returns:\n The token parsed.\n Raises:\n ParseError: When the wrong format data is found.\n \"\"\"\n text = self.token\n if len(text) < 1 or text[0] not in _QUOTES:\n raise self._ParseError('Expected string but found: %r' % (text,))\n\n if len(text) < 2 or text[-1] != text[0]:\n raise self._ParseError('String missing ending quote: %r' % (text,))\n\n try:\n result = text_encoding.CUnescape(text[1:-1])\n except ValueError as e:\n raise self._ParseError(str(e))\n self.NextToken()\n return result"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef timestamp(t = None, forfilename=False):\n\n datetimesep = ' '\n timesep = ':'\n if forfilename:\n datetimesep = '-'\n timesep = '-'\n\n return time.strftime('%Y-%m-%d' + datetimesep +\n '%H' + timesep + '%M' + timesep + '%S',\n time.localtime(t))", "response": "Returns a human - readable timestamp given a Unix timestamp t."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning a human - readable timestamp given an Ark timestamp arct.", "response": "def arktimestamp(arkt, forfilename=False):\n \"\"\"Returns a human-readable timestamp given an Ark timestamp 'arct'.\n An Ark timestamp is the number of seconds since Genesis block,\n 2017:03:21 15:55:44.\"\"\"\n\n t = arkt + time.mktime((2017, 3, 21, 15, 55, 44, 0, 0, 0))\n return '%d %s' % (arkt, timestamp(t))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nconvert ark timestamp to unix timestamp", "response": "def arkt_to_unixt(ark_timestamp):\n \"\"\" convert ark timestamp to unix timestamp\"\"\"\n res = datetime.datetime(2017, 3, 21, 15, 55, 44) + datetime.timedelta(seconds=ark_timestamp)\n return res.timestamp()"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef 
get_version():\n with open(VERSION_FILE, encoding='utf-8') as fp:\n content = fp.read()\n match = re.search(r'^__version__ = [\\'\"]([^\\'\"]*)[\\'\"]', content, re.M)\n if match:\n return match.group(1)\n raise RuntimeError(\"Could not extract package __version__\")", "response": "Extract version from VERSION_FILE"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef de(output_basename, parameter_names, transform, loglikelihood, prior, nsteps=40000, vizfunc=None, printfunc=None, **problem):\n\timport json\n\timport inspyred\n\timport random\n\tprng = random.Random()\n\tif 'seed' in problem:\n\t\tprng.seed(problem['seed'])\n\t\n\tn_params = len(parameter_names)\n\tseeds = problem.get('seeds', [])\n\tif 'start' in problem:\n\t\tseeds.append(problem['start'])\n\tprefix = output_basename\n\t\n\tdef viz(candidate, args):\n\t\tif vizfunc is not None:\n\t\t\tvizfunc(candidate)\n\tdef print_candidate(candidate, l, args):\n\t\tif printfunc is not None:\n\t\t\tprintfunc(cube=candidate, loglikelihood=l)\n\t\telse:\n\t\t\tprint l, candidate\n\tdef eval_candidate(candidate):\n\t\tparams = transform(candidate)\n\t\tl = loglikelihood(params)\n\t\tp = prior(params)\n\t\tif numpy.isinf(p) and p < 0:\n\t\t\tprint ' prior rejection'\n\t\t\treturn -1e300\n\t\tif numpy.isnan(l):\n\t\t\treturn -1e300\n\t\treturn l, p\n\t@inspyred.ec.utilities.memoize\n\t@inspyred.ec.evaluators.evaluator\n\tdef fitness(candidate, args):\n\t\tl, p = eval_candidate(candidate)\n\t\t#print_candidate(candidate, (l + p), args)\n\t\treturn (l + p)\n\n\tcutoff_store = 10\n\tdef solution_archiver(random, population, archive, args):\n\t\tpsize = len(population)\n\t\tpopulation.sort(reverse=True)\n\t\tbest = population[0].fitness\n\t\t#print 'BEST: ', best, \n\t\tall_candidates = sorted(population + archive, reverse=True)\n\t\tall_fitness = numpy.array([c.fitness for c in all_candidates])\n\t\tmask = best - all_fitness > cutoff_store / 3\n\t\tif mask.sum() < 
20:\n\t\t\tmask = best - all_fitness > cutoff_store\n\t\tnewarchive = [c for i, c in enumerate(all_candidates) if i == 0 or all_fitness[i - 1] != c.fitness]\n\t\tprint 'ARCHIVE: ', len(archive), len(newarchive)\n\t\tjson.dump([{'candidate': [float(f) for f in c.candidate], 'fitness':c.fitness} for c in newarchive], \n\t\t\topen(prefix + '_values.json', 'w'), indent=4)\n\t\treturn newarchive\n\t\n\tdef observer(population, num_generations, num_evaluations, args):\n\t\tpopulation.sort(reverse=True)\n\t\tcandidate = population[0]\n\t\tprint ('{0} evaluations'.format(num_evaluations)), ' best:', \n\t\tprint_candidate(candidate.candidate, candidate.fitness, args)\n\t\tif num_evaluations % len(population) == 0 or num_evaluations < len(population) or args.get('force_viz', False):\n\t\t\t# for each turnaround of a full generation\n\t\t\tviz(candidate.candidate, args)\n\t\n\tdef generator(random, args): \n\t\tu = [random.uniform(0, 1) for _ in range(n_params)]\n\t\tu = [random.gauss(0.5, 0.1) for _ in range(n_params)]\n\t\treturn bounder(u, args)\n\t\n\tea = inspyred.ec.DEA(prng)\n\tea.terminator = inspyred.ec.terminators.evaluation_termination\n\tea.archiver = solution_archiver\n\tbounder = inspyred.ec.Bounder(lower_bound=1e-10, upper_bound=1-1e-10)\n\t#bounder = inspyred.ec.Bounder(lower_bound=-20, upper_bound=20)\n\timport copy\n\tfrom math import log\n\t@inspyred.ec.variators.mutator\n\tdef double_exponential_mutation(random, candidate, args):\n\t\tmut_rate = args.setdefault('mutation_rate', 0.1)\n\t\tmean = args.setdefault('gaussian_mean', 0.0)\n\t\tstdev = args.setdefault('gaussian_stdev', 1.0)\n\t\tscale = log(0.5) / - (stdev)\n\t\tbounder = args['_ec'].bounder\n\t\tmutant = copy.copy(candidate)\n\t\tfor i, m in enumerate(mutant):\n\t\t\tdice = random.random()\n\t\t\tif dice < mut_rate:\n\t\t\t\tsign = (dice < mut_rate / 2) * 2 - 1\n\t\t\t\tdelta = -log(random.random()) / scale\n\t\t\t\tmutant[i] += delta * sign\n\t\t\tmutant = bounder(mutant, args)\n\t\treturn 
mutant\n\t\n\tdef minute_gaussian_mutation(random, candidates, args):\n\t\targs = dict(args)\n\t\targs['mutation_rate'] = 1\n\t\targs['gaussian_stdev'] = 1e-6\n\t\treturn inspyred.ec.variators.gaussian_mutation(random, candidates, args)\n\tea.variator = [inspyred.ec.variators.n_point_crossover, inspyred.ec.variators.gaussian_mutation, minute_gaussian_mutation]\n\t#ea.variator = [inspyred.ec.variators.n_point_crossover, double_exponential_mutation]\n\t\n\tea.replacer = inspyred.ec.replacers.steady_state_replacement\n\tea.observer = observer\n\t\n\tpop_size = 20\n\t\n\tfinal_pop = ea.evolve(pop_size=pop_size, \n\t\tmax_evaluations=nsteps, maximize=True, seeds=seeds, \n\t\tgaussian_stdev=0.01, #mutation_rate=0.3,\n\t\tbounder=bounder, generator=generator, evaluator=fitness,\n\t\t)\n\t\n\tbest = max(final_pop)\n\tseeds = [c.candidate for c in ea.archive]\n\tprint 'final candidate:', best\n\t\n\t\n\treturn {'start': best.candidate, 'value': best.fitness,\n\t\t'seeds': seeds, 'method': 'DE'}", "response": "de - function to do differential evolution of a current best solution."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreplace macros with content defined in the config.", "response": "def process_macros(self, content: str) -> str:\n '''Replace macros with content defined in the config.\n\n :param content: Markdown content\n\n :returns: Markdown content without macros\n '''\n\n def _sub(macro):\n name = macro.group('body')\n params = self.get_options(macro.group('options'))\n\n return self.options['macros'].get(name, '').format_map(params)\n\n return self.pattern.sub(_sub, content)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _request(self, method, *relative_path_parts, **kwargs):\n uri = self._create_api_uri(*relative_path_parts)\n response = get(uri, params=kwargs.get('params', None))\n return self._handle_response(response).json()", "response": "Sends an HTTP request to the REST API and 
receives the requested data."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_historic_trades(self, start, end, granularity, product_id='BTC-USD'):\n params = {\n 'start':self._format_iso_time(start),\n 'end':self._format_iso_time(end),\n 'granularity':granularity\n }\n return self._get('products', product_id, 'candles', params=params)", "response": "Get historical trades from a given product."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _request(self, method, *relative_path_parts, **kwargs):\n uri = self._create_api_uri(*relative_path_parts)\n response = get(uri, params=self._get_params(**kwargs))\n self.is_initial = False\n self.before_cursor = response.headers.get('cb-before', None)\n self.after_cursor = response.headers.get('cb-after', None)\n return self._handle_response(response).json()", "response": "Sends an HTTP request to the REST API and receives the requested data."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_unique_pathname(path, root=''):\r\n\tpath = os.path.join(root, path)\r\n\t# consider the path supplied, then the paths with numbers appended\r\n\tpotentialPaths = itertools.chain((path,), __get_numbered_paths(path))\r\n\tpotentialPaths = six.moves.filterfalse(os.path.exists, potentialPaths)\r\n\treturn next(potentialPaths)", "response": "Return a unique pathname possibly with a number appended to it so that it is unique in the directory."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning a list of the numbered paths in sequential order to the filename or folder name", "response": "def __get_numbered_paths(filepath):\r\n\t\"\"\"Append numbers in sequential order to the filename or folder name\r\n\tNumbers should be appended before the extension on a filename.\"\"\"\r\n\tformat = '%s (%%d)%s' % 
splitext_files_only(filepath)\r\n\treturn map(lambda n: format % n, itertools.count(1))"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef set_time(filename, mod_time):\r\n\tlog.debug('Setting modified time to %s', mod_time)\r\n\tmtime = calendar.timegm(mod_time.utctimetuple())\r\n\t# utctimetuple discards microseconds, so restore it (for consistency)\r\n\tmtime += mod_time.microsecond / 1000000\r\n\tatime = os.stat(filename).st_atime\r\n\tos.utime(filename, (atime, mtime))", "response": "Set the modified time of a file."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_time(filename):\r\n\tts = os.stat(filename).st_mtime\r\n\treturn datetime.datetime.utcfromtimestamp(ts)", "response": "Get the modified time for a file as a datetime instance"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef insert_before_extension(filename, content):\r\n\tparts = list(os.path.splitext(filename))\r\n\tparts[1:1] = [content]\r\n\treturn ''.join(parts)", "response": "Insert the content just before the extension of the file."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef recursive_glob(root, spec):\r\n\tspecs = (\r\n\t\tos.path.join(dirpath, dirname, spec)\r\n\t\tfor dirpath, dirnames, filenames in os.walk(root)\r\n\t\tfor dirname in dirnames\r\n\t)\r\n\r\n\treturn itertools.chain.from_iterable(\r\n\t\tglob.iglob(spec)\r\n\t\tfor spec in specs\r\n\t)", "response": "Like iglob, but recurse directories\r\n\r\n\t>>> any('path.py' in result for result in recursive_glob('.', '*.py'))\r\n\tTrue\r\n\r\n\t>>> all(result.startswith('.') for result in recursive_glob('.', '*.py'))\r\n\tTrue\r\n\r\n\t>>> len(list(recursive_glob('.', '*.foo')))\r\n\t0"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the 
documentation\ndef encode(name, system='NTFS'):\r\n\tassert system == 'NTFS', 'unsupported filesystem'\r\n\tspecial_characters = r'<>:\"/\\|?*' + ''.join(map(chr, range(32)))\r\n\tpattern = '|'.join(map(re.escape, special_characters))\r\n\tpattern = re.compile(pattern)\r\n\treturn pattern.sub('_', name)", "response": "Encode a name for a suitable name in the given filesystem"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nwrapping a function that returns a dir making sure it exists", "response": "def ensure_dir_exists(func):\r\n\t\"wrap a function that returns a dir, making sure it exists\"\r\n\t@functools.wraps(func)\r\n\tdef make_if_not_present():\r\n\t\tdir = func()\r\n\t\tif not os.path.isdir(dir):\r\n\t\t\tos.makedirs(dir)\r\n\t\treturn dir\r\n\treturn make_if_not_present"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreading file in chunks of size chunk_size.", "response": "def read_chunks(file, chunk_size=2048, update_func=lambda x: None):\r\n\t\"\"\"\r\n\tRead file in chunks of size chunk_size (or smaller).\r\n\tIf update_func is specified, call it on every chunk with the amount\r\n\tread.\r\n\t\"\"\"\r\n\twhile(True):\r\n\t\tres = file.read(chunk_size)\r\n\t\tif not res:\r\n\t\t\tbreak\r\n\t\tupdate_func(len(res))\r\n\t\tyield res"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef is_hidden(path):\r\n\tfull_path = os.path.abspath(path)\r\n\tname = os.path.basename(full_path)\r\n\r\n\tdef no(path):\r\n\t\treturn False\r\n\tplatform_hidden = globals().get('is_hidden_' + platform.system(), no)\r\n\treturn name.startswith('.') or platform_hidden(full_path)", "response": "Check if a file is presumed hidden"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngets the age of the current entry.", "response": "def age(self):\n \"\"\"\n Get closer to your EOL\n \"\"\"\n # 0 means this 
composer will never decompose\n if self.rounds == 1:\n self.do_run = False\n elif self.rounds > 1:\n self.rounds -= 1"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nopen a connection to the serial line and receive data lines", "response": "def run(self):\n \"\"\"\n Open a connection over the serial line and receive data lines\n \"\"\"\n if not self.device:\n return\n try:\n data = \"\"\n while (self.do_run):\n try:\n if (self.device.inWaiting() > 1):\n l = self.device.readline()[:-2]\n l = l.decode(\"UTF-8\")\n\n if (l == \"[\"):\n # start recording\n data = \"[\"\n elif (l == \"]\") and (len(data) > 4) and (data[0] == \"[\"):\n # now parse the input\n data = data + \"]\"\n self.store.register_json(data)\n self.age()\n elif (l[0:3] == \" {\"):\n # this is a data line\n data = data + \" \" + l\n else:\n # this is a slow interface - give it some time\n sleep(1)\n # then count down..\n self.age()\n except (UnicodeDecodeError, ValueError):\n # only accepting unicode: throw away the whole bunch\n data = \"\"\n # and count down the exit condition\n self.age()\n\n except serial.serialutil.SerialException:\n print(\"Could not connect to the serial line at \" + self.device_name)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef append_main_thread(self):\n thread = MainThread(main_queue=self.main_queue,\n main_spider=self.main_spider,\n branch_spider=self.branch_spider)\n thread.daemon = True\n thread.start()", "response": "create & start main thread"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngetting the text of a node.", "response": "def getTextFromNode(node):\n \"\"\"\n Scans through all children of node and gathers the\n text. 
If node has non-text child-nodes then\n NotTextNodeError is raised.\n \"\"\"\n t = \"\"\n for n in node.childNodes:\n if n.nodeType == n.TEXT_NODE:\n t += n.nodeValue\n else:\n raise NotTextNodeError\n return t"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef getbalance(self, url='http://services.ambientmobile.co.za/credits'):\n postXMLList = []\n postXMLList.append(\"%s\" % self.api_key)\n postXMLList.append(\"%s\" % self.password)\n postXML = '%s' % \"\".join(postXMLList)\n result = self.curl(url, postXML)\n\n if result.get(\"credits\", None):\n return result[\"credits\"]\n else:\n raise AmbientSMSError(result[\"status\"])", "response": "Get the number of credits remaining at AmbientSMS"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef sendmsg(self,\n message,\n recipient_mobiles=[],\n url='http://services.ambientmobile.co.za/sms',\n concatenate_message=True,\n message_id=str(time()).replace(\".\", \"\"),\n reply_path=None,\n allow_duplicates=True,\n allow_invalid_numbers=True,\n ):\n\n \"\"\"\n Send a mesage via the AmbientSMS API server\n \"\"\"\n if not recipient_mobiles or not(isinstance(recipient_mobiles, list) \\\n or isinstance(recipient_mobiles, tuple)):\n raise AmbientSMSError(\"Missing recipients\")\n\n if not message or not len(message):\n raise AmbientSMSError(\"Missing message\")\n\n postXMLList = []\n postXMLList.append(\"%s\" % self.api_key)\n postXMLList.append(\"%s\" % self.password)\n postXMLList.append(\"%s\" % \\\n \"\".join([\"%s\" % \\\n m for m in recipient_mobiles]))\n postXMLList.append(\"%s\" % message)\n postXMLList.append(\"%s\" % \\\n (1 if concatenate_message else 0))\n postXMLList.append(\"%s\" % message_id)\n postXMLList.append(\"%s\" % \\\n (1 if allow_duplicates else 0))\n postXMLList.append(\n \"%s\" % \\\n (1 if allow_invalid_numbers else 0)\n )\n if reply_path:\n postXMLList.append(\"%s\" % 
reply_path)\n\n        postXML = '%s' % \"\".join(postXMLList)\n        result = self.curl(url, postXML)\n\n        status = result.get(\"status\", None)\n        if status and int(status) in [0, 1, 2]:\n            return result\n        else:\n            raise AmbientSMSError(int(status))", "response": "Send a message via the AmbientSMS API."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nsend a curl request to the AmbientSMS API", "response": "def curl(self, url, post):\n        \"\"\"\n        Interface for sending web requests to the AmbientSMS API Server\n        \"\"\"\n        try:\n            req = urllib2.Request(url)\n            req.add_header(\"Content-type\", \"application/xml\")\n            data = urllib2.urlopen(req, post.encode('utf-8')).read()\n        except urllib2.URLError, v:\n            raise AmbientSMSError(v)\n        return dictFromXml(data)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef execute(self, correlation_id, args):\r\n        return self._intercepter.execute(_next, correlation_id, args)", "response": "Executes the command given specific arguments as an input."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef contents(self, f, text):\n        text += self._read(f.abs_path) + \"\\r\\n\"\n        return text", "response": "Returns the contents of the file f"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef is_date_type(cls):\n    if not isinstance(cls, type):\n        return False\n    return issubclass(cls, date) and not issubclass(cls, datetime)", "response": "Return True if the class is a date type."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef to_datetime(when):\n    if when is None or is_datetime(when):\n        return when\n    if is_time(when):\n        return datetime.combine(epoch.date(), when)\n    if is_date(when):\n        return datetime.combine(when, time(0))\n    raise TypeError(\"unable to convert {} to 
datetime\".format(when.__class__.__name__))", "response": "Convert a date or time to a datetime."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef totz(when, tz=None):\n if when is None:\n return None\n when = to_datetime(when)\n if when.tzinfo is None:\n when = when.replace(tzinfo=localtz)\n return when.astimezone(tz or utc)", "response": "Return a date time or datetime converted to a date or time object in the given timezone."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn a datetime so much time ago. Takes the same arguments as timedelta.", "response": "def timeago(tz=None, *args, **kwargs):\n \"\"\"Return a datetime so much time ago. Takes the same arguments as timedelta().\"\"\"\n return totz(datetime.now(), tz) - timedelta(*args, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef ts(when, tz=None):\n if not when:\n return None\n when = totz(when, tz)\n return calendar.timegm(when.timetuple())", "response": "Returns a Unix timestamp in seconds for the provided datetime."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning a Unix timestamp in milliseconds for the provided datetime.", "response": "def tsms(when, tz=None):\n \"\"\"\n Return a Unix timestamp in milliseconds for the provided datetime. The `totz` function is\n called on the datetime to convert it to the provided timezone. 
It will be converted to UTC if\n no timezone is provided.\n \"\"\"\n if not when:\n return None\n when = totz(when, tz)\n return calendar.timegm(when.timetuple()) * 1000 + int(round(when.microsecond / 1000.0))"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef fromts(ts, tzin=None, tzout=None):\n if ts is None:\n return None\n when = datetime.utcfromtimestamp(ts).replace(tzinfo=tzin or utc)\n return totz(when, tzout)", "response": "Returns a datetime representation of the provided Unix timestamp."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef fromtsms(ts, tzin=None, tzout=None):\n if ts is None:\n return None\n when = datetime.utcfromtimestamp(ts / 1000).replace(microsecond=ts % 1000 * 1000)\n when = when.replace(tzinfo=tzin or utc)\n return totz(when, tzout)", "response": "Returns the Unix timestamp in milliseconds as a datetime object."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef truncate(when, unit, week_start=mon):\n if is_datetime(when):\n if unit == millisecond:\n return when.replace(microsecond=int(round(when.microsecond / 1000.0)) * 1000)\n elif unit == second:\n return when.replace(microsecond=0)\n elif unit == minute:\n return when.replace(second=0, microsecond=0)\n elif unit == hour:\n return when.replace(minute=0, second=0, microsecond=0)\n elif unit == day:\n return when.replace(hour=0, minute=0, second=0, microsecond=0)\n elif unit == week:\n weekday = prevweekday(when, week_start)\n return when.replace(year=weekday.year, month=weekday.month, day=weekday.day,\n hour=0, minute=0, second=0, microsecond=0)\n elif unit == month:\n return when.replace(day=1, hour=0, minute=0, second=0, microsecond=0)\n elif unit == year:\n return when.replace(month=1, day=1, hour=0, minute=0, second=0, microsecond=0)\n elif is_date(when):\n if unit == week:\n return prevweekday(when, week_start)\n 
elif unit == month:\n return when.replace(day=1)\n elif unit == year:\n return when.replace(month=1, day=1)\n elif is_time(when):\n if unit == millisecond:\n return when.replace(microsecond=int(when.microsecond / 1000.0) * 1000)\n elif unit == second:\n return when.replace(microsecond=0)\n elif unit == minute:\n return when.replace(second=0, microsecond=0)\n return when", "response": "Returns the datetime truncated to the precision of the provided unit."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns the date for the day of this week.", "response": "def weekday(when, weekday, start=mon):\n \"\"\"Return the date for the day of this week.\"\"\"\n if isinstance(when, datetime):\n when = when.date()\n\n today = when.weekday()\n delta = weekday - today\n if weekday < start and today >= start:\n delta += 7\n elif weekday >= start and today < start:\n delta -= 7\n return when + timedelta(days=delta)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef prevweekday(when, weekday, inclusive=True):\n if isinstance(when, datetime):\n when = when.date()\n delta = weekday - when.weekday()\n if (inclusive and delta > 0) or (not inclusive and delta >= 0):\n delta -= 7\n return when + timedelta(days=delta)", "response": "Return the date for the most recent weekday."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nfunction to optimize the parameters in a 1D grid", "response": "def opt_grid(params, func, limits, ftol=0.01, disp=0, compute_errors=True):\n\t\"\"\"\n\tsee :func:`optimize1d.optimize`, considers each parameter in order\n\t\n\t:param ftol: \n\t\tdifference in values at which the function can be considered flat\n\t:param compute_errors:\n\t \tcompute standard deviation of gaussian around optimum\n\t\"\"\"\n\tcaches = [[] for p in params]\n\tnewparams = numpy.copy(params)\n\terrors = [[] for p in params]\n\tfor i, p in 
enumerate(params):\n\t\tcache = []\n\t\tdef func1(x0):\n\t\t\tnewparams[i] = x0\n\t\t\tv = func(newparams)\n\t\t\tcache.append([x0, v])\n\t\t\treturn v\n\t\tlo, hi = limits[i]\n\t\tbestval = optimize(func1, x0=p,\n\t\t\tcons=[lambda x: x - lo, lambda x: hi - x], \n\t\t\tftol=ftol, disp=disp - 1)\n\t\tbeststat = func1(bestval)\n\t\tif compute_errors:\n\t\t\terrors[i] = cache2errors(func1, cache, disp=disp - 1)\n\t\t\n\t\tnewparams[i] = bestval\n\t\tcaches[i] = cache\n\t\tif disp > 0:\n\t\t\tif compute_errors:\n\t\t\t\tprint('\\tnew value of %d: %e [%e .. %e] yielded %e' % (i, bestval, errors[i][0], errors[i][1], beststat))\n\t\t\telse:\n\t\t\t\tprint('\\tnew value of %d: %e yielded %e' % (i, bestval, beststat))\n\tbeststat = func(newparams)\n\tif disp > 0:\n\t\tprint('optimization done, reached %.3f' % beststat)\n\t\n\tif compute_errors:\n\t\treturn newparams, errors\n\telse:\n\t\treturn newparams"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef Cwd(directory):\n '''\n Context manager for current directory (uses with_statement)\n\n e.g.:\n # working on some directory\n with Cwd('/home/new_dir'):\n # working on new_dir\n\n # working on some directory again\n\n :param unicode directory:\n Target directory to enter\n '''\n old_directory = six.moves.getcwd()\n if directory is not None:\n os.chdir(directory)\n try:\n yield directory\n finally:\n os.chdir(old_directory)", "response": "Context manager for working on current directory."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nnormalizes a path maintaining the final slashes.", "response": "def NormalizePath(path):\n '''\n Normalizes a path maintaining the final slashes.\n\n Some environment variables need the final slash in order to work.\n\n Ex. 
The SOURCES_DIR set by subversion must end with a slash because of the way it is used\n in the Visual Studio projects.\n\n :param unicode path:\n The path to normalize.\n\n :rtype: unicode\n :returns:\n Normalized path\n '''\n if path.endswith('/') or path.endswith('\\\\'):\n slash = os.path.sep\n else:\n slash = ''\n return os.path.normpath(path) + slash"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns a version of a path that is unique.", "response": "def CanonicalPath(path):\n '''\n Returns a version of a path that is unique.\n\n Given two paths path1 and path2:\n CanonicalPath(path1) == CanonicalPath(path2) if and only if they represent the same file on\n the host OS. Takes account of case, slashes and relative paths.\n\n :param unicode path:\n The original path.\n\n :rtype: unicode\n :returns:\n The unique path.\n '''\n path = os.path.normpath(path)\n path = os.path.abspath(path)\n path = os.path.normcase(path)\n\n return path"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreplaces all slashes and backslashes with the target separator StandardPath: We are defining that the standard-path is the one with only forward slashes in it, either on Windows or any other platform. 
:param bool strip: If True, removes additional slashes from the end of the path.", "response": "def StandardizePath(path, strip=False):\n '''\n Replaces all slashes and backslashes with the target separator\n\n StandardPath:\n We are defining that the standard-path is the one with only forward slashes in it, either\n on Windows or any other platform.\n\n :param bool strip:\n If True, removes additional slashes from the end of the path.\n '''\n path = path.replace(SEPARATOR_WINDOWS, SEPARATOR_UNIX)\n if strip:\n path = path.rstrip(SEPARATOR_UNIX)\n return path"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef NormStandardPath(path):\n '''\n Normalizes a standard path (posixpath.normpath) maintaining any slashes at the end of the path.\n\n Normalize:\n Removes any local references in the path \"/../\"\n\n StandardPath:\n We are defining that the standard-path is the one with only forward slashes in it, either\n on Windows or any other platform.\n '''\n import posixpath\n if path.endswith('/'):\n slash = '/'\n else:\n slash = ''\n return posixpath.normpath(path) + slash", "response": "Normalizes a standard path."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef CreateMD5(source_filename, target_filename=None):\n '''\n Creates a md5 file from a source file (contents are the md5 hash of source file)\n\n :param unicode source_filename:\n Path to source file\n\n :type target_filename: unicode or None\n :param target_filename:\n Name of the target file with the md5 contents\n\n If None, defaults to source_filename + '.md5'\n '''\n if target_filename is None:\n target_filename = source_filename + '.md5'\n\n from six.moves.urllib.parse import urlparse\n source_url = urlparse(source_filename)\n\n # Obtain MD5 hex\n if _UrlIsLocal(source_url):\n # If using a local file, we can give Md5Hex the filename\n md5_contents = Md5Hex(filename=source_filename)\n else:\n 
# Md5Hex can't handle remote files, we open it and pray we won't run out of memory.\n md5_contents = Md5Hex(contents=GetFileContents(source_filename, binary=True))\n\n # Write MD5 hash to a file\n CreateFile(target_filename, md5_contents)", "response": "Creates a md5 file from a source file."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncopies a file from source to target.", "response": "def CopyFile(source_filename, target_filename, override=True, md5_check=False, copy_symlink=True):\n '''\n Copy a file from source to target.\n\n :param source_filename:\n @see _DoCopyFile\n\n :param target_filename:\n @see _DoCopyFile\n\n :param bool md5_check:\n If True, checks md5 files (of both source and target files), if they match, skip this copy\n and return MD5_SKIP\n\n Md5 files are assumed to be {source, target} + '.md5'\n\n If any file is missing (source, target or md5), the copy will always be made.\n\n :param copy_symlink:\n @see _DoCopyFile\n\n :raises FileAlreadyExistsError:\n If target_filename already exists, and override is False\n\n :raises NotImplementedProtocol:\n If file protocol is not accepted\n\n Protocols allowed are:\n source_filename: local, ftp, http\n target_filename: local, ftp\n\n :rtype: None | MD5_SKIP\n :returns:\n MD5_SKIP if the file was not copied because there was a matching .md5 file\n\n .. 
seealso:: FTP LIMITATIONS at this module's doc for performance issues information\n '''\n from ._exceptions import FileNotFoundError\n\n # Check override\n if not override and Exists(target_filename):\n from ._exceptions import FileAlreadyExistsError\n raise FileAlreadyExistsError(target_filename)\n\n # Don't do md5 check for md5 files themselves.\n md5_check = md5_check and not target_filename.endswith('.md5')\n\n # If we enabled md5 checks, ignore copy of files that haven't changed their md5 contents.\n if md5_check:\n source_md5_filename = source_filename + '.md5'\n target_md5_filename = target_filename + '.md5'\n try:\n source_md5_contents = GetFileContents(source_md5_filename)\n except FileNotFoundError:\n source_md5_contents = None\n\n try:\n target_md5_contents = GetFileContents(target_md5_filename)\n except FileNotFoundError:\n target_md5_contents = None\n\n if source_md5_contents is not None and \\\n source_md5_contents == target_md5_contents and \\\n Exists(target_filename):\n return MD5_SKIP\n\n # Copy source file\n _DoCopyFile(source_filename, target_filename, copy_symlink=copy_symlink)\n\n # If we have a source_md5, but no target_md5, create the target_md5 file\n if md5_check and source_md5_contents is not None and source_md5_contents != target_md5_contents:\n CreateFile(target_md5_filename, source_md5_contents)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _DoCopyFile(source_filename, target_filename, copy_symlink=True):\n '''\n :param unicode source_filename:\n The source filename.\n Schemas: local, ftp, http\n\n :param unicode target_filename:\n Target filename.\n Schemas: local, ftp\n\n :param copy_symlink:\n @see _CopyFileLocal\n\n :raises FileNotFoundError:\n If source_filename does not exist\n '''\n from six.moves.urllib.parse import urlparse\n\n source_url = urlparse(source_filename)\n target_url = urlparse(target_filename)\n\n if _UrlIsLocal(source_url):\n if not 
Exists(source_filename):\n from ._exceptions import FileNotFoundError\n raise FileNotFoundError(source_filename)\n\n if _UrlIsLocal(target_url):\n # local to local\n _CopyFileLocal(source_filename, target_filename, copy_symlink=copy_symlink)\n elif target_url.scheme in ['ftp']:\n from ._exceptions import NotImplementedProtocol\n raise NotImplementedProtocol(target_url.scheme)\n else:\n from ._exceptions import NotImplementedProtocol\n raise NotImplementedProtocol(target_url.scheme)\n\n elif source_url.scheme in ['http', 'https', 'ftp']:\n if _UrlIsLocal(target_url):\n # HTTP/FTP to local\n from ._exceptions import NotImplementedProtocol\n raise NotImplementedProtocol(target_url.scheme)\n else:\n # HTTP/FTP to other ==> NotImplemented\n from ._exceptions import NotImplementedProtocol\n raise NotImplementedProtocol(target_url.scheme)\n else:\n from ._exceptions import NotImplementedProtocol # @Reimport\n raise NotImplementedProtocol(source_url.scheme)", "response": "Copy a file from source_filename to target_filename."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncopy a file locally to a directory.", "response": "def _CopyFileLocal(source_filename, target_filename, copy_symlink=True):\n '''\n Copy a file locally to a directory.\n\n :param unicode source_filename:\n The filename to copy from.\n\n :param unicode target_filename:\n The filename to copy to.\n\n :param bool copy_symlink:\n If True and source_filename is a symlink, target_filename will also be created as\n a symlink.\n\n If False, the file being linked will be copied instead.\n '''\n import shutil\n try:\n # >>> Create the target_filename directory if necessary\n dir_name = os.path.dirname(target_filename)\n if dir_name and not os.path.isdir(dir_name):\n os.makedirs(dir_name)\n\n if copy_symlink and IsLink(source_filename):\n # >>> Delete the target_filename if it already exists\n if os.path.isfile(target_filename) or IsLink(target_filename):\n 
DeleteFile(target_filename)\n\n # >>> Obtain the relative path from link to source_filename (linkto)\n source_filename = ReadLink(source_filename)\n CreateLink(source_filename, target_filename)\n else:\n # shutil can't copy links in Windows, so we must find the real file manually\n if sys.platform == 'win32':\n while IsLink(source_filename):\n link = ReadLink(source_filename)\n if os.path.isabs(link):\n source_filename = link\n else:\n source_filename = os.path.join(os.path.dirname(source_filename), link)\n\n shutil.copyfile(source_filename, target_filename)\n shutil.copymode(source_filename, target_filename)\n except Exception as e:\n reraise(e, 'While executing _filesystem._CopyFileLocal(%s, %s)' % (source_filename, target_filename))"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncopies files from source to target.", "response": "def CopyFiles(source_dir, target_dir, create_target_dir=False, md5_check=False):\n '''\n Copy files from the given source to the target.\n\n :param unicode source_dir:\n A filename, URL or a file mask.\n Ex.\n x:\\coilib50\n x:\\coilib50\\*\n http://server/directory/file\n ftp://server/directory/file\n\n\n :param unicode target_dir:\n A directory or an URL\n Ex.\n d:\\Temp\n ftp://server/directory\n\n :param bool create_target_dir:\n If True, creates the target path if it doesn't exist.\n\n :param bool md5_check:\n .. seealso:: CopyFile\n\n :raises DirectoryNotFoundError:\n If target_dir does not exist, and create_target_dir is False\n\n .. seealso:: CopyFile for documentation on accepted protocols\n\n .. 
seealso:: FTP LIMITATIONS at this module's doc for performance issues information\n '''\n import fnmatch\n\n # Check if we were given a directory or a directory with mask\n if IsDir(source_dir):\n # Yes, it's a directory, copy everything from it\n source_mask = '*'\n else:\n # Split directory and mask\n source_dir, source_mask = os.path.split(source_dir)\n\n # Create directory if necessary\n if not IsDir(target_dir):\n if create_target_dir:\n CreateDirectory(target_dir)\n else:\n from ._exceptions import DirectoryNotFoundError\n raise DirectoryNotFoundError(target_dir)\n\n # List and match files\n filenames = ListFiles(source_dir)\n\n # Check if we have a source directory\n if filenames is None:\n return\n\n # Copy files\n for i_filename in filenames:\n if md5_check and i_filename.endswith('.md5'):\n continue # md5 files will be copied by CopyFile when copying their associated files\n\n if fnmatch.fnmatch(i_filename, source_mask):\n source_path = source_dir + '/' + i_filename\n target_path = target_dir + '/' + i_filename\n\n if IsDir(source_path):\n # If we found a directory, copy it recursively\n CopyFiles(source_path, target_path, create_target_dir=True, md5_check=md5_check)\n else:\n CopyFile(source_path, target_path, md5_check=md5_check)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncopying files into directories according to a file mapping.", "response": "def CopyFilesX(file_mapping):\n '''\n Copies files into directories, according to a file mapping\n\n :param list(tuple(unicode,unicode)) file_mapping:\n A list of mappings between the directory in the target and the source.\n For syntax, @see: ExtendedPathMask\n\n :rtype: list(tuple(unicode,unicode))\n :returns:\n List of files copied. (source_filename, target_filename)\n\n .. 
seealso:: FTP LIMITATIONS at this module's doc for performance issues information\n '''\n # List files that match the mapping\n files = []\n for i_target_path, i_source_path_mask in file_mapping:\n tree_recurse, flat_recurse, dirname, in_filters, out_filters = ExtendedPathMask.Split(i_source_path_mask)\n\n _AssertIsLocal(dirname)\n\n filenames = FindFiles(dirname, in_filters, out_filters, tree_recurse)\n for i_source_filename in filenames:\n if os.path.isdir(i_source_filename):\n continue # Do not copy dirs\n\n i_target_filename = i_source_filename[len(dirname) + 1:]\n if flat_recurse:\n i_target_filename = os.path.basename(i_target_filename)\n i_target_filename = os.path.join(i_target_path, i_target_filename)\n\n files.append((\n StandardizePath(i_source_filename),\n StandardizePath(i_target_filename)\n ))\n\n # Copy files\n for i_source_filename, i_target_filename in files:\n # Create target dir if necessary\n target_dir = os.path.dirname(i_target_filename)\n CreateDirectory(target_dir)\n\n CopyFile(i_source_filename, i_target_filename)\n\n return files"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef IsFile(path):\n '''\n :param unicode path:\n Path to a file (local or ftp)\n\n :raises NotImplementedProtocol:\n If checking for a non-local, non-ftp file\n\n :rtype: bool\n :returns:\n True if the file exists\n\n .. 
seealso:: FTP LIMITATIONS at this module's doc for performance issues information\n '''\n from six.moves.urllib.parse import urlparse\n url = urlparse(path)\n\n if _UrlIsLocal(url):\n if IsLink(path):\n return IsFile(ReadLink(path))\n return os.path.isfile(path)\n\n elif url.scheme == 'ftp':\n from ._exceptions import NotImplementedProtocol\n raise NotImplementedProtocol(url.scheme)\n else:\n from ._exceptions import NotImplementedProtocol\n raise NotImplementedProtocol(url.scheme)", "response": "Checks if a file exists (accepts local paths and ftp URLs)."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef GetDriveType(path):\n '''\n Determine the type of drive, which can be one of the following values:\n DRIVE_UNKNOWN = 0\n The drive type cannot be determined.\n\n DRIVE_NO_ROOT_DIR = 1\n The root path is invalid; for example, there is no volume mounted at the specified path.\n\n DRIVE_REMOVABLE = 2\n The drive has removable media; for example, a floppy drive, thumb drive, or flash card reader.\n\n DRIVE_FIXED = 3\n The drive has fixed media; for example, a hard disk drive or flash drive.\n\n DRIVE_REMOTE = 4\n The drive is a remote (network) drive.\n\n DRIVE_CDROM = 5\n The drive is a CD-ROM drive.\n\n DRIVE_RAMDISK = 6\n The drive is a RAM disk\n\n :note:\n The implementation is valid only for Windows OS\n Linux will always return DRIVE_UNKNOWN\n\n :param path:\n Path to a file or directory\n '''\n if sys.platform == 'win32':\n import ctypes\n kdll = ctypes.windll.LoadLibrary(\"kernel32.dll\")\n\n if IsFile(path):\n path = os.path.dirname(path)\n\n # A trailing backslash is required.\n return kdll.GetDriveType(path + '\\\\')\n\n else:\n return DRIVE_UNKNOWN", "response": "Returns the type of drive in the specified path."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef 
IsDir(directory):\n '''\n :param unicode directory:\n A path\n\n :rtype: bool\n :returns:\n Returns whether the given path points to an existent directory.\n\n :raises NotImplementedProtocol:\n If the path protocol is not local or ftp\n\n .. seealso:: FTP LIMITATIONS at this module's doc for performance issues information\n '''\n from six.moves.urllib.parse import urlparse\n directory_url = urlparse(directory)\n\n if _UrlIsLocal(directory_url):\n return os.path.isdir(directory)\n elif directory_url.scheme == 'ftp':\n from ._exceptions import NotImplementedProtocol\n raise NotImplementedProtocol(directory_url.scheme)\n else:\n from ._exceptions import NotImplementedProtocol\n raise NotImplementedProtocol(directory_url.scheme)", "response": "Returns True if the given path points to an existent directory."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef Exists(path):\n '''\n :rtype: bool\n :returns:\n True if the path already exists (either a file or a directory)\n\n .. 
seealso:: FTP LIMITATIONS at this module's doc for performance issues information\n '''\n from six.moves.urllib.parse import urlparse\n path_url = urlparse(path)\n\n # Handle local\n if _UrlIsLocal(path_url):\n return IsFile(path) or IsDir(path) or IsLink(path)\n return IsFile(path) or IsDir(path)", "response": "Checks if a path exists (either a file or a directory)."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef CopyDirectory(source_dir, target_dir, override=False):\n '''\n Recursively copy a directory tree.\n\n :param unicode source_dir:\n Where files will come from\n\n :param unicode target_dir:\n Where files will go to\n\n :param bool override:\n If True and target_dir already exists, it will be deleted before copying.\n\n :raises NotImplementedForRemotePathError:\n If trying to copy to/from remote directories\n '''\n _AssertIsLocal(source_dir)\n _AssertIsLocal(target_dir)\n\n if override and IsDir(target_dir):\n DeleteDirectory(target_dir, skip_on_error=False)\n\n import shutil\n shutil.copytree(source_dir, target_dir)", "response": "Recursively copy a directory tree."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ndelete a file from the specified local filename.", "response": "def DeleteFile(target_filename):\n '''\n Deletes the given local filename.\n\n .. 
note:: If file doesn't exist this method has no effect.\n\n :param unicode target_filename:\n A local filename\n\n :raises NotImplementedForRemotePathError:\n If trying to delete a non-local path\n\n :raises FileOnlyActionError:\n Raised when filename refers to a directory.\n '''\n _AssertIsLocal(target_filename)\n\n try:\n if IsLink(target_filename):\n DeleteLink(target_filename)\n elif IsFile(target_filename):\n os.remove(target_filename)\n elif IsDir(target_filename):\n from ._exceptions import FileOnlyActionError\n raise FileOnlyActionError(target_filename)\n except Exception as e:\n reraise(e, 'While executing filesystem.DeleteFile(%s)' % (target_filename))"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef AppendToFile(filename, contents, eol_style=EOL_STYLE_NATIVE, encoding=None, binary=False):\n '''\n Appends content to a local file.\n\n :param unicode filename:\n\n :param unicode contents:\n\n :type eol_style: EOL_STYLE_XXX constant\n :param eol_style:\n Replaces the EOL by the appropriate EOL depending on the eol_style value.\n Considers that all content is using only \"\\n\" as EOL.\n\n :param unicode encoding:\n Target file's content encoding.\n Defaults to sys.getfilesystemencoding()\n\n :param bool binary:\n If True, content is appended in binary mode. In this case, `contents` must be `bytes` and not\n `unicode`\n\n :raises NotImplementedForRemotePathError:\n If trying to modify a non-local path\n\n :raises ValueError:\n If trying to mix unicode `contents` without `encoding`, or `encoding` without\n unicode `contents`\n '''\n _AssertIsLocal(filename)\n\n assert isinstance(contents, six.text_type) ^ binary, 'Must always receive unicode contents, unless binary=True'\n\n if not binary:\n # Replaces eol on each line by the given eol_style.\n contents = _HandleContentsEol(contents, eol_style)\n\n # Handle encoding here, and always write in binary mode. 
We can't use io.open because it\n # tries to do its own line ending handling.\n contents = contents.encode(encoding or sys.getfilesystemencoding())\n\n oss = open(filename, 'ab')\n try:\n oss.write(contents)\n finally:\n oss.close()", "response": "Append content to a local file."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nmoving a file from source_filename to target_filename.", "response": "def MoveFile(source_filename, target_filename):\n '''\n Moves a file.\n\n :param unicode source_filename:\n\n :param unicode target_filename:\n\n :raises NotImplementedForRemotePathError:\n If trying to operate with non-local files.\n '''\n _AssertIsLocal(source_filename)\n _AssertIsLocal(target_filename)\n\n import shutil\n shutil.move(source_filename, target_filename)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nmoving a directory. :param unicode source_dir: :param unicode target_dir: :raises NotImplementedError: If trying to move anything other than: Local dir -> local dir FTP dir -> FTP dir (same host)", "response": "def MoveDirectory(source_dir, target_dir):\n '''\n Moves a directory.\n\n :param unicode source_dir:\n\n :param unicode target_dir:\n\n :raises NotImplementedError:\n If trying to move anything other than:\n Local dir -> local dir\n FTP dir -> FTP dir (same host)\n '''\n if not IsDir(source_dir):\n from ._exceptions import DirectoryNotFoundError\n raise DirectoryNotFoundError(source_dir)\n\n if Exists(target_dir):\n from ._exceptions import DirectoryAlreadyExistsError\n raise DirectoryAlreadyExistsError(target_dir)\n\n from six.moves.urllib.parse import urlparse\n source_url = urlparse(source_dir)\n target_url = urlparse(target_dir)\n\n # Local to local\n if _UrlIsLocal(source_url) and _UrlIsLocal(target_url):\n import shutil\n shutil.move(source_dir, target_dir)\n\n # FTP to FTP\n elif source_url.scheme == 'ftp' and target_url.scheme == 'ftp':\n from ._exceptions import NotImplementedProtocol\n 
raise NotImplementedProtocol(target_url.scheme)\n else:\n raise NotImplementedError('Can only move directories local->local or ftp->ftp')"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef GetFileContents(filename, binary=False, encoding=None, newline=None):\n '''\n Reads a file and returns its contents. Works for both local and remote files.\n\n :param unicode filename:\n\n :param bool binary:\n If True returns the file as is, ignore any EOL conversion.\n\n :param unicode encoding:\n File's encoding. If not None, contents obtained from file will be decoded using this\n `encoding`.\n\n :param None|''|'\\n'|'\\r'|'\\r\\n' newline:\n Controls universal newlines.\n See 'io.open' newline parameter documentation for more details.\n\n :returns str|unicode:\n The file's contents.\n Returns unicode string when `encoding` is not None.\n\n .. seealso:: FTP LIMITATIONS at this module's doc for performance issues information\n '''\n source_file = OpenFile(filename, binary=binary, encoding=encoding, newline=newline)\n try:\n contents = source_file.read()\n finally:\n source_file.close()\n\n return contents", "response": "Reads a file and returns its contents."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nread a file and returns its contents as a list of lines.", "response": "def GetFileLines(filename, newline=None, encoding=None):\n '''\n Reads a file and returns its contents as a list of lines. Works for both local and remote files.\n\n :param unicode filename:\n\n :param None|''|'\\n'|'\\r'|'\\r\\n' newline:\n Controls universal newlines.\n See 'io.open' newline parameter documentation for more details.\n\n :param unicode encoding:\n File's encoding. If not None, contents obtained from file will be decoded using this\n `encoding`.\n\n :returns list(unicode):\n The file's lines\n\n .. 
seealso:: FTP LIMITATIONS at this module's doc for performance issues information\n '''\n return GetFileContents(\n filename,\n binary=False,\n encoding=encoding,\n newline=newline,\n ).split('\\n')"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nopening a file and returns it.", "response": "def OpenFile(filename, binary=False, newline=None, encoding=None):\n '''\n Open a file and returns it.\n Consider the possibility of a remote file (HTTP, HTTPS, FTP)\n\n :param unicode filename:\n Local or remote filename.\n\n :param bool binary:\n If True returns the file as is, ignore any EOL conversion.\n If set ignores univeral_newlines parameter.\n\n :param None|''|'\\n'|'\\r'|'\\r\\n' newline:\n Controls universal newlines.\n See 'io.open' newline parameter documentation for more details.\n\n :param unicode encoding:\n File's encoding. If not None, contents obtained from file will be decoded using this\n `encoding`.\n\n :returns file:\n The open file, it must be closed by the caller\n\n @raise: FileNotFoundError\n When the given filename cannot be found\n\n .. 
seealso:: FTP LIMITATIONS at this module's doc for performance issues information\n '''\n from six.moves.urllib.parse import urlparse\n filename_url = urlparse(filename)\n\n # Check if file is local\n if _UrlIsLocal(filename_url):\n if not os.path.isfile(filename):\n from ._exceptions import FileNotFoundError\n raise FileNotFoundError(filename)\n\n mode = 'rb' if binary else 'r'\n return io.open(filename, mode, encoding=encoding, newline=newline)\n\n # Not local\n from ._exceptions import NotImplementedProtocol\n raise NotImplementedProtocol(filename_url.scheme)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nlists the files in the given directory.", "response": "def ListFiles(directory):\n '''\n Lists the files in the given directory\n\n :type directory: unicode | unicode\n :param directory:\n A directory or URL\n\n :rtype: list(unicode) | list(unicode)\n :returns:\n List of filenames/directories found in the given directory.\n Returns None if the given directory does not exist.\n\n If `directory` is a unicode string, all files returned will also be unicode\n\n :raises NotImplementedProtocol:\n If file protocol is not local or FTP\n\n .. 
seealso:: FTP LIMITATIONS at this module's doc for performance issues information\n '''\n from six.moves.urllib.parse import urlparse\n directory_url = urlparse(directory)\n\n # Handle local\n if _UrlIsLocal(directory_url):\n if not os.path.isdir(directory):\n return None\n return os.listdir(directory)\n\n # Handle FTP\n elif directory_url.scheme == 'ftp':\n from ._exceptions import NotImplementedProtocol\n raise NotImplementedProtocol(directory_url.scheme)\n else:\n from ._exceptions import NotImplementedProtocol\n raise NotImplementedProtocol(directory_url.scheme)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef CreateFile(filename, contents, eol_style=EOL_STYLE_NATIVE, create_dir=True, encoding=None, binary=False):\n '''\n Create a file with the given contents.\n\n :param unicode filename:\n Filename and path to be created.\n\n :param unicode contents:\n The file contents as a string.\n\n :type eol_style: EOL_STYLE_XXX constant\n :param eol_style:\n Replaces the EOL by the appropriate EOL depending on the eol_style value.\n Considers that all content is using only \"\\n\" as EOL.\n\n :param bool create_dir:\n If True, also creates directories needed in filename's path\n\n :param unicode encoding:\n Target file's content encoding. Defaults to sys.getfilesystemencoding()\n Ignored if `binary` = True\n\n :param bool binary:\n If True, file is created in binary mode. In this case, `contents` must be `bytes` and not\n `unicode`\n\n :return unicode:\n Returns the name of the file created.\n\n :raises NotImplementedProtocol:\n If file protocol is not local or FTP\n\n :raises ValueError:\n If trying to mix unicode `contents` without `encoding`, or `encoding` without\n unicode `contents`\n\n .. 
seealso:: FTP LIMITATIONS at this module's doc for performance issues information\n '''\n # Lots of checks when writing binary files\n if binary:\n if isinstance(contents, six.text_type):\n raise TypeError('contents must be str (bytes) when binary=True')\n else:\n if not isinstance(contents, six.text_type):\n raise TypeError('contents must be unicode when binary=False')\n\n # Replaces eol on each line by the given eol_style.\n contents = _HandleContentsEol(contents, eol_style)\n\n # Encode string and pretend we are using binary to prevent 'open' from automatically\n # changing Eols\n encoding = encoding or sys.getfilesystemencoding()\n contents = contents.encode(encoding)\n binary = True\n\n # If asked, creates directory containing file\n if create_dir:\n dirname = os.path.dirname(filename)\n if dirname:\n CreateDirectory(dirname)\n\n from six.moves.urllib.parse import urlparse\n filename_url = urlparse(filename)\n\n # Handle local\n if _UrlIsLocal(filename_url):\n # Always writing as binary (see handling above)\n with open(filename, 'wb') as oss:\n oss.write(contents)\n\n # Handle FTP\n elif filename_url.scheme == 'ftp':\n # Always writing as binary (see handling above)\n from ._exceptions import NotImplementedProtocol\n raise NotImplementedProtocol(filename_url.scheme)\n else:\n from ._exceptions import NotImplementedProtocol\n raise NotImplementedProtocol(filename_url.scheme)\n\n return filename", "response": "Creates a file with the given contents."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreplacing all occurrences of old by new in the given file.", "response": "def ReplaceInFile(filename, old, new, encoding=None):\n '''\n Replaces all occurrences of \"old\" by \"new\" in the given file.\n\n :param unicode filename:\n The name of the file.\n\n :param unicode old:\n The string to search for.\n\n :param unicode new:\n Replacement string.\n\n :return unicode:\n The new contents of the file.\n '''\n contents = 
GetFileContents(filename, encoding=encoding)\n contents = contents.replace(old, new)\n CreateFile(filename, contents, encoding=encoding)\n return contents"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncreating a directory including any missing intermediate directories.", "response": "def CreateDirectory(directory):\n '''\n Create directory including any missing intermediate directory.\n\n :param unicode directory:\n\n :return unicode|urlparse.ParseResult:\n Returns the created directory or url (see urlparse).\n\n :raises NotImplementedProtocol:\n If protocol is not local or FTP.\n\n .. seealso:: FTP LIMITATIONS at this module's doc for performance issues information\n '''\n from six.moves.urllib.parse import urlparse\n directory_url = urlparse(directory)\n\n # Handle local\n if _UrlIsLocal(directory_url):\n if not os.path.exists(directory):\n os.makedirs(directory)\n return directory\n\n # Handle FTP\n elif directory_url.scheme == 'ftp':\n from ._exceptions import NotImplementedProtocol\n raise NotImplementedProtocol(directory_url.scheme)\n else:\n from ._exceptions import NotImplementedProtocol\n raise NotImplementedProtocol(directory_url.scheme)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef DeleteDirectory(directory, skip_on_error=False):\n '''\n Deletes a directory.\n\n :param unicode directory:\n\n :param bool skip_on_error:\n If True, ignore any errors when trying to delete directory (for example, directory not\n found)\n\n :raises NotImplementedForRemotePathError:\n If trying to delete a remote directory.\n '''\n _AssertIsLocal(directory)\n\n import shutil\n def OnError(fn, path, excinfo):\n '''\n Remove the read-only flag and try to remove again.\n On Windows, rmtree fails when trying to remove a read-only file. This fix it!\n Another case: Read-only directories return True in os.access test. 
It seems that read-only\n directories have their own flag (looking at the property windows on Explorer).\n '''\n if IsLink(path):\n return\n\n if fn is os.remove and os.access(path, os.W_OK):\n raise\n\n # Make the file WRITEABLE and execute the original delete function (osfunc)\n import stat\n os.chmod(path, stat.S_IWRITE)\n fn(path)\n\n try:\n if not os.path.isdir(directory):\n if skip_on_error:\n return\n from ._exceptions import DirectoryNotFoundError\n raise DirectoryNotFoundError(directory)\n shutil.rmtree(directory, onerror=OnError)\n except:\n if not skip_on_error:\n raise", "response": "Deletes a directory.\n\n :param unicode directory:\n\n :param bool skip_on_error:\n If True, ignore any errors when trying to delete directory (for example, directory not\n found)\n\n :raises NotImplementedForRemotePathError:\n If trying to delete a remote directory."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef GetMTime(path):\n '''\n :param unicode path:\n Path to file or directory\n\n :rtype: float\n :returns:\n Modification time for path.\n\n If this is a directory, the highest mtime from files inside it will be returned.\n\n @note:\n In some Linux distros (such as CentOs, or anything with ext3), mtime will not return a value\n with resolutions higher than a second.\n\n http://stackoverflow.com/questions/2428556/os-path-getmtime-doesnt-return-fraction-of-a-second\n '''\n _AssertIsLocal(path)\n\n if os.path.isdir(path):\n files = FindFiles(path)\n\n if len(files) > 0:\n return max(map(os.path.getmtime, files))\n\n return os.path.getmtime(path)", "response": "Get the modification time of a file or directory."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef ListMappedNetworkDrives():\n '''\n On Windows, returns a list of mapped network drives\n\n :return: tuple(string, string, bool)\n For each mapped network drive, return 3 values tuple:\n - the local drive\n - 
the remote path\n - True if the mapping is enabled (warning: not reliable)\n '''\n if sys.platform != 'win32':\n raise NotImplementedError\n drives_list = []\n netuse = _CallWindowsNetCommand(['use'])\n for line in netuse.split(EOL_STYLE_WINDOWS):\n match = re.match(\"(\\w*)\\s+(\\w:)\\s+(.+)\", line.rstrip())\n if match:\n drives_list.append((match.group(2), match.group(3), match.group(1) == 'OK'))\n return drives_list", "response": "Returns a list of mapped network drives on Windows."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncreates a symbolic link at link_path pointing to target_path.", "response": "def CreateLink(target_path, link_path, override=True):\n '''\n Create a symbolic link at `link_path` pointing to `target_path`.\n\n :param unicode target_path:\n Link target\n\n :param unicode link_path:\n Fullpath to link name\n\n :param bool override:\n If True and `link_path` already exists as a link, that link is overridden.\n '''\n _AssertIsLocal(target_path)\n _AssertIsLocal(link_path)\n\n if override and IsLink(link_path):\n DeleteLink(link_path)\n\n # Create directories leading up to link\n dirname = os.path.dirname(link_path)\n if dirname:\n CreateDirectory(dirname)\n\n if sys.platform != 'win32':\n return os.symlink(target_path, link_path) # @UndefinedVariable\n else:\n #import ntfsutils.junction\n #return ntfsutils.junction.create(target_path, link_path)\n\n import jaraco.windows.filesystem\n return jaraco.windows.filesystem.symlink(target_path, link_path)\n\n from ._easyfs_win32 import CreateSymbolicLink\n try:\n dw_flags = 0\n if target_path and os.path.isdir(target_path):\n dw_flags = 1\n return CreateSymbolicLink(target_path, link_path, dw_flags)\n except Exception as e:\n reraise(e, 'Creating link \"%(link_path)s\" pointing to \"%(target_path)s\"' % locals())"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef IsLink(path):\n '''\n 
:param unicode path:\n Path being tested\n\n :returns bool:\n True if `path` is a link\n '''\n _AssertIsLocal(path)\n\n if sys.platform != 'win32':\n return os.path.islink(path)\n\n import jaraco.windows.filesystem\n return jaraco.windows.filesystem.islink(path)", "response": "Returns True if path is a link."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nread the target of the symbolic link at path.", "response": "def ReadLink(path):\n '''\n Read the target of the symbolic link at `path`.\n\n :param unicode path:\n Path to a symbolic link\n\n :returns unicode:\n Target of a symbolic link\n '''\n _AssertIsLocal(path)\n\n if sys.platform != 'win32':\n return os.readlink(path) # @UndefinedVariable\n\n if not IsLink(path):\n from ._exceptions import FileNotFoundError\n raise FileNotFoundError(path)\n\n import jaraco.windows.filesystem\n result = jaraco.windows.filesystem.readlink(path)\n if '\\\\??\\\\' in result:\n result = result.split('\\\\??\\\\')[1]\n return result"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _AssertIsLocal(path):\n '''\n Checks if a given path is local, raise an exception if not.\n\n This is used in filesystem functions that do not support remote operations yet.\n\n :param unicode path:\n\n :raises NotImplementedForRemotePathError:\n If the given path is not local\n '''\n from six.moves.urllib.parse import urlparse\n if not _UrlIsLocal(urlparse(path)):\n from ._exceptions import NotImplementedForRemotePathError\n raise NotImplementedForRemotePathError", "response": "Checks if a given path is local raises an exception if not."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _HandleContentsEol(contents, eol_style):\n '''\n Replaces eol on each line by the given eol_style.\n\n :param unicode contents:\n :type eol_style: EOL_STYLE_XXX constant\n :param 
eol_style:\n '''\n if eol_style == EOL_STYLE_NONE:\n return contents\n\n if eol_style == EOL_STYLE_UNIX:\n return contents.replace('\\r\\n', eol_style).replace('\\r', eol_style)\n\n if eol_style == EOL_STYLE_MAC:\n return contents.replace('\\r\\n', eol_style).replace('\\n', eol_style)\n\n if eol_style == EOL_STYLE_WINDOWS:\n return contents.replace('\\r\\n', '\\n').replace('\\r', '\\n').replace('\\n', EOL_STYLE_WINDOWS)\n\n raise ValueError('Unexpected eol style: %r' % (eol_style,))", "response": "Handles the contents of a single line."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncall Windows NET command.", "response": "def _CallWindowsNetCommand(parameters):\n '''\n Call Windows NET command, used to acquire/configure network services settings.\n\n :param parameters: list of command line parameters\n\n :return: command output\n '''\n import subprocess\n popen = subprocess.Popen([\"net\"] + parameters, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)\n stdoutdata, stderrdata = popen.communicate()\n if stderrdata:\n raise OSError(\"Failed on call net.exe: %s\" % stderrdata)\n return stdoutdata"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nchecks if the given source filename should be re - generated because the target filename has changed.", "response": "def CheckForUpdate(source, target):\n '''\n Checks if the given target filename should be re-generated because the source has changed.\n :param source: the source filename.\n :param target: the target filename.\n :return bool:\n True if the target is out-dated, False otherwise.\n '''\n return \\\n not os.path.isfile(target) or \\\n os.path.getmtime(source) > os.path.getmtime(target)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef MatchMasks(filename, masks):\n '''\n Verifies if a filename match with given patterns.\n\n :param str filename: The filename to match.\n :param list(str) masks: The patterns 
to search in the filename.\n :return bool:\n True if the filename has matched with one pattern, False otherwise.\n '''\n import fnmatch\n\n if not isinstance(masks, (list, tuple)):\n masks = [masks]\n\n for i_mask in masks:\n if fnmatch.fnmatch(filename, i_mask):\n return True\n return False", "response": "Tests if a filename matches with a list of masks."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef FindFiles(dir_, in_filters=None, out_filters=None, recursive=True, include_root_dir=True, standard_paths=False):\n '''\n Searches for files in a given directory that match with the given patterns.\n\n :param str dir_: the directory root, to search the files.\n :param list(str) in_filters: a list with patterns to match (default = all). E.g.: ['*.py']\n :param list(str) out_filters: a list with patterns to ignore (default = none). E.g.: ['*.py']\n :param bool recursive: if True search in subdirectories, otherwise, just in the root.\n :param bool include_root_dir: if True, includes the directory being searched in the returned paths\n :param bool standard_paths: if True, always uses unix path separators \"/\"\n :return list(str):\n A list of strings with the files that matched (with the full path in the filesystem).\n '''\n # all files\n if in_filters is None:\n in_filters = ['*']\n\n if out_filters is None:\n out_filters = []\n\n result = []\n\n # maintain just files that don't have a pattern that match with out_filters\n # walk through all directories based on dir\n for dir_root, directories, filenames in os.walk(dir_):\n\n for i_directory in directories[:]:\n if MatchMasks(i_directory, out_filters):\n directories.remove(i_directory)\n\n for filename in directories + filenames:\n if MatchMasks(filename, in_filters) and not MatchMasks(filename, out_filters):\n result.append(os.path.join(dir_root, filename))\n\n if not recursive:\n break\n\n if not include_root_dir:\n # Remove root dir from all paths\n 
dir_prefix = len(dir_) + 1\n result = [file[dir_prefix:] for file in result]\n\n if standard_paths:\n result = list(map(StandardizePath, result))\n\n return result", "response": "Searches for files in a given directory that match with the given patterns."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef DumpDirHashToStringIO(directory, stringio, base='', exclude=None, include=None):\n '''\n Helper to iterate over the files in a directory putting those in the passed StringIO in ini\n format.\n\n :param unicode directory:\n The directory for which the hash should be done.\n\n :param StringIO stringio:\n The string to which the dump should be put.\n\n :param unicode base:\n If provided should be added (along with a '/') before the name=hash of file.\n\n :param unicode exclude:\n Pattern to match files to exclude from the hashing. E.g.: *.gz\n\n :param unicode include:\n Pattern to match files to include in the hashing. E.g.: *.zip\n '''\n import fnmatch\n import os\n\n files = [(os.path.join(directory, i), i) for i in os.listdir(directory)]\n files = [i for i in files if os.path.isfile(i[0])]\n for fullname, filename in files:\n if include is not None:\n if not fnmatch.fnmatch(fullname, include):\n continue\n\n if exclude is not None:\n if fnmatch.fnmatch(fullname, exclude):\n continue\n\n md5 = Md5Hex(fullname)\n if base:\n stringio.write('%s/%s=%s\\n' % (base, filename, md5))\n else:\n stringio.write('%s=%s\\n' % (filename, md5))", "response": "Dump the contents of a directory into a stringio."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncalculates the md5 of the contents of the file.", "response": "def Md5Hex(filename=None, contents=None):\n '''\n :param unicode filename:\n The file from which the md5 should be calculated. 
If the filename is given, the contents\n should NOT be given.\n\n :param unicode contents:\n The contents for which the md5 should be calculated. If the contents are given, the filename\n should NOT be given.\n\n :rtype: unicode\n :returns:\n Returns a string with the hex digest of the stream.\n '''\n import io\n import hashlib\n md5 = hashlib.md5()\n\n if filename:\n stream = io.open(filename, 'rb')\n try:\n while True:\n data = stream.read(md5.block_size * 128)\n if not data:\n break\n md5.update(data)\n finally:\n stream.close()\n\n else:\n md5.update(contents)\n\n return six.text_type(md5.hexdigest())"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn an iterator that yields random hexadecimal hashes.", "response": "def IterHashes(iterator_size, hash_length=7):\n '''\n Iterator for random hexadecimal hashes\n\n :param iterator_size:\n Amount of hashes return before this iterator stops.\n Goes on forever if `iterator_size` is negative.\n\n :param int hash_length:\n Size of each hash returned.\n\n :return generator(unicode):\n '''\n if not isinstance(iterator_size, int):\n raise TypeError('iterator_size must be integer.')\n\n count = 0\n while count != iterator_size:\n count += 1\n yield GetRandomHash(hash_length)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef PushPopItem(obj, key, value):\n '''\n A context manager to replace and restore a value using a getter and setter.\n\n :param object obj: The object to replace/restore.\n :param object key: The key to replace/restore in the object.\n :param object value: The value to replace.\n\n Example::\n\n with PushPop2(sys.modules, 'alpha', None):\n pytest.raises(ImportError):\n import alpha\n '''\n if key in obj:\n old_value = obj[key]\n obj[key] = value\n yield value\n obj[key] = old_value\n\n else:\n obj[key] = value\n yield value\n del obj[key]", "response": "A context manager that replaces and restore a 
value using a getter and setter."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef Split(cls, extended_path_mask):\n '''\n Splits the given path into their components: recursive, dirname, in_filters and out_filters\n\n :param str: extended_path_mask:\n The \"extended path mask\" to split\n\n :rtype: tuple(bool,bool,str,list(str),list(str))\n :returns:\n Returns the extended path 5 components:\n - The tree-recurse flag\n - The flat-recurse flag\n - The actual path\n - A list of masks to include\n - A list of masks to exclude\n '''\n import os.path\n r_tree_recurse = extended_path_mask[0] in '+-'\n r_flat_recurse = extended_path_mask[0] in '-'\n\n r_dirname, r_filters = os.path.split(extended_path_mask)\n if r_tree_recurse:\n r_dirname = r_dirname[1:]\n\n filters = r_filters.split(';')\n r_in_filters = [i for i in filters if not i.startswith('!')]\n r_out_filters = [i[1:] for i in filters if i.startswith('!')]\n\n return r_tree_recurse, r_flat_recurse, r_dirname, r_in_filters, r_out_filters", "response": "Splits the given path into its components recursive dirname in_filters and out_filters."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef GetMessages(self, files):\n result = {}\n for file_name in files:\n file_desc = self.pool.FindFileByName(file_name)\n for name, msg in file_desc.message_types_by_name.items():\n if file_desc.package:\n full_name = '.'.join([file_desc.package, name])\n else:\n full_name = msg.name\n result[full_name] = self.GetPrototype(\n self.pool.FindMessageTypeByName(full_name))\n\n # While the extension FieldDescriptors are created by the descriptor pool,\n # the python classes created in the factory need them to be registered\n # explicitly, which is done below.\n #\n # The call to RegisterExtension will specifically check if the\n # extension was already registered on the object and either\n # ignore the registration if the 
original was the same, or raise\n # an error if they were different.\n\n for name, extension in file_desc.extensions_by_name.items():\n if extension.containing_type.full_name not in self._classes:\n self.GetPrototype(extension.containing_type)\n extended_class = self._classes[extension.containing_type.full_name]\n extended_class.RegisterExtension(extension)\n return result", "response": "Gets all the messages from a specified file."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef specifier_to_db(db_spec):\n local_match = LOCAL_RE.match(db_spec)\n remote_match = REMOTE_RE.match(db_spec)\n plain_match = PLAIN_RE.match(db_spec)\n # If this looks like a local specifier:\n if local_match:\n return local_match.groupdict()['database']\n # If this looks like a remote specifier:\n elif remote_match:\n # A fancy 'unpacking'...\n hostname, portnum, database = map(remote_match.groupdict().get,\n ('hostname', 'portnum', 'database'))\n local_url = settings._('COUCHDB_SERVER', 'http://127.0.0.1:5984/')\n localhost, localport = urlparse.urlparse(local_url)[1].split(':')\n # If it's local, return a local DB string.\n if (localhost == hostname) and (localport == portnum):\n return database\n # Otherwise, get a remote URL.\n return 'http://%s:%s/%s' % (hostname, portnum, database)\n # If this looks like a plain database name, return it.\n elif plain_match:\n return plain_match.groupdict()['database']\n # Throw a wobbly.\n raise ValueError('Invalid database spec: %r' % (db_spec,))", "response": "Converts a database specifier into a database string."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nconvert a database string to a database specifier.", "response": "def db_to_specifier(db_string):\n \"\"\"\n Return the database specifier for a database string.\n \n This accepts a database name or URL, and returns a database specifier in the\n format accepted by ``specifier_to_db``. 
It is recommended that you consult\n the documentation for that function for an explanation of the format.\n \"\"\"\n local_match = PLAIN_RE.match(db_string)\n remote_match = URL_RE.match(db_string)\n # If this looks like a local specifier:\n if local_match:\n return 'local:' + local_match.groupdict()['database']\n # If this looks like a remote specifier:\n elif remote_match:\n # Just a fancy way of getting 3 variables in 2 lines...\n hostname, portnum, database = map(remote_match.groupdict().get,\n ('hostname', 'portnum', 'database'))\n local_url = settings._('COUCHDB_SERVER', 'http://127.0.0.1:5984/')\n localhost, localport = urlparse.urlparse(local_url)[1].split(':')\n # If it's the local server, then return a local specifier.\n if (localhost == hostname) and (localport == portnum):\n return 'local:' + database\n # Otherwise, prepare and return the remote specifier.\n return 'remote:%s:%s:%s' % (hostname, portnum, database)\n # Throw a wobbly.\n raise ValueError('Invalid database string: %r' % (db_string,))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning a CouchDB database instance from a database string.", "response": "def get_db_from_db(db_string):\n \"\"\"Return a CouchDB database instance from a database string.\"\"\"\n server = get_server_from_db(db_string)\n local_match = PLAIN_RE.match(db_string)\n remote_match = URL_RE.match(db_string)\n # If this looks like a local specifier:\n if local_match:\n return server[local_match.groupdict()['database']]\n elif remote_match:\n return server[remote_match.groupdict()['database']]\n raise ValueError('Invalid database string: %r' % (db_string,))"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nmakes sure a DB specifier exists creating it if necessary.", "response": "def ensure_specifier_exists(db_spec):\n \"\"\"Make sure a DB specifier exists, creating it if necessary.\"\"\"\n local_match = LOCAL_RE.match(db_spec)\n remote_match = 
REMOTE_RE.match(db_spec)\n plain_match = PLAIN_RE.match(db_spec)\n if local_match:\n db_name = local_match.groupdict().get('database')\n server = shortcuts.get_server()\n if db_name not in server:\n server.create(db_name)\n return True\n elif remote_match:\n hostname, portnum, database = map(remote_match.groupdict().get,\n ('hostname', 'portnum', 'database'))\n server = shortcuts.get_server(\n server_url=('http://%s:%s' % (hostname, portnum)))\n if database not in server:\n server.create(database)\n return True\n elif plain_match:\n db_name = plain_match.groupdict().get('database')\n server = shortcuts.get_server()\n if db_name not in server:\n server.create(db_name)\n return True\n return False"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef coerce(value1, value2, default=None):\n if value1 is not NoSet:\n return value1\n elif value2 is not NoSet:\n return value2\n else:\n return default", "response": "Coerce two values into a base64 encoded object."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_events_vote_cluster(self, delegate_address):\n ''' Returns all transactions and forged blocks by voters clustered around a single delegate_address'''\n\n delegate_pubkey = self.account_details(address=delegate_address)['public_key']\n\n plusvote = '+{delegate_pubkey}'.format(delegate_pubkey=delegate_pubkey)\n\n resultset = self._cursor.execute_and_fetchall(\"\"\"\n SELECT *\n FROM (\n SELECT \n trs.\"{transactions[id]}\" AS a,\n 'transaction' AS b, \n trs.\"{transactions[amount]}\" AS c,\n trs.\"{transactions[timestamp]}\" AS d, \n trs.\"{transactions[recipient_id]}\" AS e,\n trs.\"{transactions[sender_id]}\" AS f, \n trs.\"{transactions[rawasset]}\" AS g,\n trs.\"{transactions[type]}\" AS h, \n trs.\"{transactions[fee]}\" AS i, \n trs.\"{transactions[block_id]}\" AS j,\n blocks.\"{blocks[height]}\" AS k\n FROM {transactions[table]} AS trs\n INNER JOIN 
{blocks[table]} AS blocks\n ON (blocks.\"{blocks[id]}\" = trs.\"{transactions[block_id]}\")\n WHERE trs.\"{transactions[sender_id]}\" IN\n (SELECT trs.\"{transactions[sender_id]}\"\n FROM {transactions[table]} AS trs, {votes[table]} AS votes\n WHERE trs.\"{transactions[id]}\" = votes.\"{votes[transaction_id]}\"\n AND votes.\"{votes[votes]}\" = '{plusvote}') \n OR trs.\"{transactions[recipient_id]}\" IN\n (SELECT trs.\"{transactions[sender_id]}\"\n FROM {transactions[table]} AS trs, {votes[table]} AS votes\n WHERE trs.\"{transactions[id]}\" = votes.\"{votes[transaction_id]}\"\n AND votes.\"{votes[votes]}\" = '{plusvote}') \n UNION\n SELECT \n blocks.\"{blocks[id]}\" AS a, \n 'block' AS b, \n blocks.\"{blocks[reward]}\"as c, \n blocks.\"{blocks[total_fee]}\" AS d,\n ENCODE(mem.\"{mem_accounts[public_key]}\"::BYTEA, 'hex') AS e,\n mem.\"{mem_accounts[address]}\" AS f,\n mem.\"{mem_accounts[username]}\" AS g,\n NULL AS h,\n blocks.\"{blocks[timestamp]}\" AS i,\n NULL AS j,\n blocks.\"{blocks[height]}\" AS k\n FROM blocks\n INNER JOIN {mem_accounts[table]} AS mem\n ON (mem.\"{mem_accounts[public_key]}\" = blocks.\"{blocks[generator_public_key]}\") \n WHERE\n blocks.\"{blocks[generator_public_key]}\" IN (\n SELECT mem2.\"{mem_accounts[public_key]}\"\n FROM {mem_accounts[table]} mem2\n WHERE mem2.\"{mem_accounts[address]}\" IN \n (SELECT trs.\"{transactions[sender_id]}\"\n FROM {transactions[table]} AS trs, {votes[table]} AS votes\n WHERE trs.\"{transactions[id]}\" = votes.\"{votes[transaction_id]}\"\n AND votes.\"{votes[votes]}\" = '{plusvote}') \n )) total\n \n ORDER BY total.k ASC;\"\"\".format(\n address=delegate_address,\n transactions=self.scheme['transactions'],\n blocks=self.scheme['blocks'],\n mem_accounts=self.scheme['mem_accounts'],\n mem_accounts2delegates=self.scheme['mem_accounts2delegates'],\n votes=self.scheme['votes'],\n plusvote=plusvote))\n res = {}\n for i in resultset:\n\n if i[1] == 'transaction':\n res.update({i[0]: {\n 'tx_id': i[0],\n 
'event_type': i[1],\n 'amount': i[2],\n 'timestamp': i[3],\n 'recipient_id': i[4],\n 'sender_id': i[5],\n 'rawasset': i[6],\n 'type': i[7],\n 'fee': i[8],\n 'block_id': i[9],\n 'height': i[10]\n }})\n\n elif i[1] == 'block':\n res.update({i[0]: {\n 'block_id': i[0],\n 'event_type': i[1],\n 'reward': i[2],\n 'total_fee': i[3],\n 'timestamp': i[8],\n 'address': i[5],\n 'username': i[6],\n 'public_key': i[4],\n 'height': i[10]\n }})\n\n return res", "response": "Returns all transactions and forged blocks by voters clustered around a single delegate_address"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef classical(transform, loglikelihood, parameter_names, prior, \n\tstart = 0.5, ftol=0.1, disp=0, nsteps=40000,\n\tmethod='neldermead', **args):\n\t\"\"\"\n\t**Classic optimization methods**\n\n\t:param start: start position vector (before transform)\n\t:param ftol: accuracy required to stop at optimum\n\t:param disp: verbosity\n\t:param nsteps: number of steps\n\t:param method: string\n\t\tneldermead, cobyla (via `scipy.optimize `_)\n\t\tbobyqa, ralg, algencan, ipopt, mma, auglag and many others from the OpenOpt framework (via `openopt.NLP `_)\n\t\tminuit (via `PyMinuit `_)\n\t\"\"\"\n\timport scipy.optimize\n\t\n\tn_params = len(parameter_names)\n\t\n\tdef minfunc(params):\n\t\tl = loglikelihood(params)\n\t\tp = prior(params)\n\t\tif numpy.isinf(p) and p < 0:\n\t\t\tprint(' prior rejection')\n\t\t\treturn -1e300\n\t\tif numpy.isnan(l):\n\t\t\treturn -1e300\n\t\treturn -l - p\n\tdef minfunc_cube(cube):\n\t\tcube = numpy.array(cube)\n\t\tif (cube <= 1e-10).any() or (cube >= 1-1e-10).any():\n\t\t\treturn 1e100\n\t\tparams = transform(cube)\n\t\tl = loglikelihood(params)\n\t\tp = prior(params)\n\t\tif numpy.isinf(p) and p < 0:\n\t\t\tprint(' prior rejection')\n\t\t\treturn -1e300\n\t\tif numpy.isnan(l):\n\t\t\treturn -1e300\n\t\treturn -l - p\n\t\n\tstart = start + numpy.zeros(n_params)\n\tret = {}\n\tif method == 
'neldermead':\n\t\tfinal, value, _niter, neval, warnflag = scipy.optimize.fmin(minfunc_cube, start, ftol=ftol, disp=disp, maxfun=nsteps, full_output=True)\n\telif method == 'cobyla':\n\t\tcons = [lambda params: params[i] for i in range(n_params)]\n\t\tcons += [lambda params: 1 - params[i] for i in range(n_params)]\n\t\tfinal = scipy.optimize.fmin_cobyla(minfunc_cube, start, cons, rhoend=ftol / 10, disp=disp, maxfun=nsteps)\n\t\tneval = nsteps\n\telif method == 'minuit' or method == 'hesse':\n\t\t\"\"\"\n\t\tWe use eval here, and it is a dangerous thing to do.\n\t\tBut Minuit probes the objective function for parameter names,\n\t\tand there is no way to create the objective function \n\t\tdynamically with an unknown number of parameters other than\n\t\tthrough eval.\n\t\t\"\"\"\n\t\ts = ', '.join(parameter_names)\n\t\ts = \"\"\"lambda %s: minfunc([%s])\"\"\" % (s, s)\n\t\tif method == 'hesse':\n\t\t\tf = eval(s, dict(minfunc=minfunc, numpy=numpy))\n\t\t\tstart = transform(start)\n\t\telse:\n\t\t\tf = eval(s, dict(minfunc=minfunc_cube, numpy=numpy))\n\t\timport minuit\n\t\tm = minuit.Minuit(f)\n\t\tfor i, p in enumerate(parameter_names):\n\t\t\tm.values[p] = start[i]\n\t\t\tif method == 'minuit':\n\t\t\t\tm.limits[p] = (1e-10, 1 - 1e-10)\n\t\tm.up = 0.5\n\t\tm.tol = ftol * 100\n\t\tm.printMode = disp\n\t\tif method == 'minuit':\n\t\t\tm.migrad()\n\t\telif method == 'hesse':\n\t\t\tm.hesse()\n\t\tfinal = [m.values[p] for p in parameter_names]\n\t\tneval = m.ncalls\n\t\terrors = [m.errors[p] for p in parameter_names]\n\n\t\tif method == 'minuit':\n\t\t\tc0 = final\n\t\t\tp0 = transform(c0)\n\t\t\tstdev = numpy.zeros(n_params)\n\t\t\tlower = numpy.zeros(n_params)\n\t\t\tupper = numpy.zeros(n_params)\n\t\t\tfor i, w in enumerate(errors):\n\t\t\t\tc1 = numpy.copy(c0)\n\t\t\t\tc1[i] -= w\n\t\t\t\tc2 = numpy.copy(c0)\n\t\t\t\tc2[i] += w\n\t\t\t\tp1 = transform(c1)\n\t\t\t\tp2 = transform(c2)\n\t\t\t\tstdev[i] = numpy.abs(p2[i] - p1[i]) / 2\n\t\t\t\tlower[i] = min(p2[i], 
p1[i])\n\t\t\t\tupper[i] = max(p2[i], p1[i])\n\t\t\tret['stdev'] = stdev\n\t\t\tret['upper'] = upper\n\t\t\tret['lower'] = lower\n\t\telif method == 'hesse':\n\t\t\tret['stdev'] = errors\n\t\t\tret['cov'] = numpy.matrix([[m.covariance[(a, b)] for b in parameter_names] for a in parameter_names])\n\t\t\t\n\telse:\n\t\tfrom openopt import NLP\n\t\tlo = [1e-10] * n_params\n\t\thi = [1-1e-10] * n_params\n\t\tiprint = 0 if disp == 0 else 10 if disp == 1 else 1\n\t\tp = NLP(f=minfunc_cube, x0=start, lb=lo, ub=hi,\n\t\t\tmaxFunEvals=nsteps, ftol=ftol, iprint=iprint)\n\t\tr = p.solve(method)\n\t\tfinal = r.xf\n\t\tneval = r.evals['f']\n\t\n\tret.update(dict(start=final, maximum=transform(final), method=method, neval=neval))\n\treturn ret", "response": "Classical optimization for the current state of the model."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nparsing a hub key into a dictionary of component parts", "response": "def parse_hub_key(key):\n \"\"\"Parse a hub key into a dictionary of component parts\n\n :param key: str, a hub key\n :returns: dict, hub key split into parts\n :raises: ValueError\n \"\"\"\n if key is None:\n raise ValueError('Not a valid key')\n\n match = re.match(PATTERN, key)\n if not match:\n match = re.match(PATTERN_S0, key)\n if not match:\n raise ValueError('Not a valid key')\n\n return dict(map(normalise_part, zip([p for p in PARTS_S0.keys()], match.groups())))\n\n return dict(zip(PARTS.keys(), match.groups()))"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef match_part(string, part):\n if not string or not re.match('^(' + PARTS[part] + ')$', string):\n raise ValueError('{} should match {}'.format(part, PARTS[part]))", "response": "Raise an exception if string doesn t match a part s regex\n "} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef generate_hub_key(resolver_id, hub_id, repository_id, entity_type, 
entity_id=None):\n parsed = urlparse(resolver_id)\n if not parsed.scheme:\n parsed = parsed._replace(scheme=PROTOCOL, netloc=idna_encode(parsed.path.lower()), path=u'')\n else:\n parsed = parsed._replace(netloc=idna_encode(parsed.netloc.lower()))\n\n resolver_id = urlunparse(parsed)\n\n hub_id = url_quote(hub_id.lower())\n\n if not entity_id:\n entity_id = str(uuid.uuid4()).replace('-', '')\n else:\n match_part(entity_id, 'entity_id')\n\n # If any of these checks fail a ValueError exception is raised\n match_part(resolver_id, 'resolver_id')\n match_part(hub_id, 'hub_id')\n match_part(repository_id, 'repository_id')\n match_part(entity_type, 'entity_type')\n\n hub_key = SEPARATOR.join(\n [resolver_id, SCHEMA, hub_id, repository_id, entity_type, entity_id])\n return hub_key", "response": "Generate and return a hub key"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncompacts a CouchDB database with optional synchronicity.", "response": "def compact(db_spec, poll_interval=0):\n \n \"\"\"\n Compact a CouchDB database with optional synchronicity.\n \n The ``compact`` function will compact a CouchDB database stored on an\n running CouchDB server. By default, this process occurs *asynchronously*,\n meaning that the compaction will occur in the background. Often, you'll want\n to know when the process has completed; for this reason, ``compact`` will\n return a function which, when called, will return the state of the\n compaction. If it has completed, ``True`` will be returned; otherwise,\n ``False``. This may be called multiple times.\n \n Alternatively, you may opt to run ``compact`` in synchronous mode, for\n debugging or profiling purposes. If this is the case, an optional keyword\n argument ``poll_interval`` is accepted, which should be a number (in\n seconds) representing the time to take between polls. 
A sensible default\n may be around 0.5 (seconds).\n \n Because this function operates on database specifiers, you can choose to\n operate on the local server or any remote server.\n \"\"\"\n \n server = get_server_from_specifier(db_spec)\n db = get_db_from_specifier(db_spec)\n # Get logger\n logger = logging.getLogger('relax.couchdb.compact')\n logger.info('Pre-compact size of %r: %s' % (db_spec,\n repr_bytes(db.info()['disk_size']),))\n logger.debug('POST ' + urlparse.urljoin(db.resource.uri + '/', '_compact'))\n # Start compaction process by issuing a POST to '//_compact'.\n resp_headers, resp_body = db.resource.post('/_compact')\n # Asynchronous compaction\n if not poll_interval:\n if not (resp_body.get('ok', False) and\n resp_headers['status'] == '202'):\n err = CompactionError('Compaction of %r failed.' % (db_spec,))\n # Give the exception some useful information.\n err.response = (resp_headers, resp_body)\n raise err\n # Return a function which, when called, will return whether or not the\n # compaction process is still running.\n def check_completed():\n logger.debug(\n 'Polling database to check if compaction has completed')\n logger.debug('GET ' + db.resource.uri + '/')\n db_info = db.info()\n completed = not db_info.get('compact_running', False)\n if completed and db_info.get('disk_size', None):\n logger.info('Post-compact size of %r: %s' % (db_spec,\n repr_bytes(db_info['disk_size'])))\n return completed\n return check_completed\n # Synchronous compaction\n elif poll_interval > 0:\n logger.debug(\n 'Polling database to check if compaction has completed')\n logger.debug('GET ' + db.resource.uri + '/')\n # Shows whether compaction is running or not.\n running = db.info().get('compact_running', False)\n # Poll the running state of the compaction.\n while running:\n time.sleep(poll_interval)\n logger.debug(\n 'Polling database to check if compaction has completed')\n logger.debug('GET ' + db.resource.uri + '/')\n running = db.info().get('compact_running', False)\n 
size_after = db.info().get('disk_size', None)\n if size_after:\n logger.info('Post-compact size of %r: %s' % (db_spec,\n repr_bytes(size_after)))\n return True\n else:\n raise ValueError('Poll interval must be greater than zero.')"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\napplying default settings to commands", "response": "def apply_defaults(self, commands):\n \"\"\" apply default settings to commands\n not static, shadow \"self\" in eval\n \"\"\"\n for command in commands:\n if 'action' in command and \"()\" in command['action']:\n command['action'] = eval(\"self.{}\".format(command['action']))\n if command['keys'][0].startswith('-'):\n if 'required' not in command:\n command['required'] = False"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef create_commands(self, commands, parser):\n self.apply_defaults(commands)\n def create_single_command(command):\n keys = command['keys']\n del command['keys']\n kwargs = {}\n for item in command:\n kwargs[item] = command[item]\n parser.add_argument(*keys, **kwargs)\n\n if len(commands) > 1:\n for command in commands:\n create_single_command(command)\n else:\n create_single_command(commands[0])", "response": "add commands to parser"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef create_subparsers(self, parser):\n subparsers = parser.add_subparsers()\n for name in self.config['subparsers']:\n subparser = subparsers.add_parser(name)\n self.create_commands(self.config['subparsers'][name], subparser)", "response": "get config for subparser and create commands"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning the consumer and oauth tokens with three - legged OAuth process and save in a yaml file in the user home directory.", "response": "def new_user(yaml_path):\n '''\n Return the consumer and oauth tokens with three-legged OAuth process and\n save in a yaml 
file in the user's home directory.\n '''\n\n print('Retrieve API Key from https://www.shirts.io/accounts/api_console/')\n api_key = input('Shirts.io API Key: ')\n\n tokens = {\n 'api_key': api_key,\n }\n\n yaml_file = open(yaml_path, 'w+')\n yaml.dump(tokens, yaml_file, indent=2)\n yaml_file.close()\n\n return tokens"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _AddPropertiesForExtensions(descriptor, cls):\n extension_dict = descriptor.extensions_by_name\n for extension_name, extension_field in extension_dict.items():\n constant_name = extension_name.upper() + \"_FIELD_NUMBER\"\n setattr(cls, constant_name, extension_field.number)", "response": "Adds class constants for the field numbers of all extensions in this protocol message type."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nunpacks Any message and returns the unpacked message.", "response": "def _InternalUnpackAny(msg):\n \"\"\"Unpacks Any message and returns the unpacked message.\n\n This internal method is different from the public Any Unpack method, which takes\n the target message as an argument. _InternalUnpackAny does not have the\n target message type and needs to find the message type in the descriptor pool.\n\n Args:\n msg: An Any message to be unpacked.\n\n Returns:\n The unpacked message.\n \"\"\"\n type_url = msg.type_url\n db = symbol_database.Default()\n\n if not type_url:\n return None\n\n # TODO(haberman): For now we just strip the hostname. Better logic will be\n # required.\n type_name = type_url.split(\"/\")[-1]\n descriptor = db.pool.FindMessageTypeByName(type_name)\n\n if descriptor is None:\n return None\n\n message_class = db.GetPrototype(descriptor)\n message = message_class()\n\n message.ParseFromString(msg.value)\n return message"}